0002_APPLICATE_727862.md
# EXECUTIVE SUMMARY
This plan is based on the H2020 FAIR Data Management Plan (DMP) template,
designed to be applicable to any H2020 project that produces, collects or
processes research data. It is the same template that OpenAIRE refers to in
its guidance material. The purpose of the Data Management Plan is to
describe the data that will be created and how, as well as the plans for
sharing and preservation of the data generated. This plan is a living document
that will be updated during the project.
APPLICATE follows a metadata-driven approach in which a set of physically
distributed data centres are integrated using standardised discovery metadata
and interoperability interfaces for metadata and data. The APPLICATE Data
Portal, providing a unified search interface to all APPLICATE data, will also
be able to host data. APPLICATE promotes free and open access to data in line
with the European Open Research Data Pilot (OpenAIRE).
Within this plan an overview of the production chains for model simulations is
provided as well as an initial outline of dissemination. This version of the
plan is an update of the first version submitted in June 2017. A second update
to the plan is scheduled for October 2019.
# Introduction
## Background and motivation
The purpose of the data management plan is to document how the data generated
by the project is handled during and after the project. It describes the basic
principles for data management within the project. This includes standards,
the generation of discovery and use metadata, data sharing, preservation and
life-cycle management.
This document is a living document that will be updated during the project in
step with the periodic reports (project months 18, 36 and 48).
APPLICATE is following the principles outlined by the Open Research Data Pilot
and The FAIR Guiding Principles for scientific data management and stewardship
(Wilkinson et al. 2016).
## Organisation of the plan
This plan is based on the H2020 FAIR Data Management Plan (DMP) template,
designed to be applicable to any H2020 project that produces, collects or
processes research data. It is the same template that OpenAIRE refers to in
its guidance material.
# Administration details
<table>
<tr>
<th>
Project Name
</th>
<th>
APPLICATE
</th> </tr>
<tr>
<td>
Funding
</td>
<td>
EU HORIZON 2020 Research and Innovation Programme
</td> </tr>
<tr>
<td>
Partners
</td>
<td>
Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI)
- Bremerhaven, Germany
Barcelona Supercomputing Center - Barcelona, Spain
European Centre for Medium-Range Weather Forecasts (ECMWF) - Reading, United
Kingdom
University of Bergen (UiB) - Bergen, Norway
Uni Research AS - Bergen, Norway
Norwegian Meteorological Institute (MET Norway) - Oslo, Norway
Met Office - Exeter, United Kingdom
Catholic University of Louvain (UCL) - Louvain-la-Neuve, Belgium
The University of Reading (UREAD) - Reading, United Kingdom
Stockholm University (SU) - Stockholm, Sweden
National Center for Scientific Research (CNRS-GAME) - Paris, France (with
contributions from Météo France)
European Centre for Research and Advanced Training in Scientific Calculation
(CERFACS) - Toulouse, France
Arctic Portal - Akureyri, Iceland
University of Tromsø (UiT) - Tromsø, Norway
P.P. Shirshov Institute of Oceanology, Russian Academy of Sciences (IORAS) -
Moscow, Russia
Federal State Budgetary Institution Voeikov Main Geophysical Observatory (MGO)
- St. Petersburg, Russia
</td> </tr> </table>
# Data summary
The overarching mission of APPLICATE is
_To develop enhanced predictive capacity for weather and climate in the
Arctic and beyond, and to determine the influence of Arctic climate change on
Northern Hemisphere midlatitudes, for the benefit of policy makers, businesses
and society_.
Therefore, APPLICATE is primarily a project in which numerical models (for
weather and climate prediction) are used. As such it depends on observations
(e.g. for model evaluation and initialization), but the data generated by the
project is primarily gridded output from the numerical simulations.
The APPLICATE data management system will be used to collect information on
relevant third-party datasets that the APPLICATE community could benefit from,
and to share and preserve the datasets APPLICATE is generating, both
internally and externally.
A full overview of the datasets to be generated is not yet available, but
there is an overview of the production chains. This was prepared in the
proposal and is provided in Tables 1–3 below.
## Data overview
### Types and formats of data generated/collected
APPLICATE will primarily generate gridded output resulting from numerical
simulations, together with metrics based on these core datasets. The models
used produce a variety of output formats that are not yet known in detail, but
specific requirements apply to data sharing and preservation (see below).
Self-describing file formats (e.g. NetCDF, HDF/HDF5) combined with semantic
and structural standards like the Climate and Forecast (CF) Convention will be
used. The default format for APPLICATE datasets is NetCDF following the CF
Convention (feature types grid, timeseries, profiles and trajectories where
applicable). This includes the Coupled Model Intercomparison Project (CMIP)
requirements. The NetCDF files must be created using the NetCDF Classic Model
(i.e. compression is allowed, but not groups and compound data types). The
ESGF CMOR library is recommended for conversion of model output.
Some datasets may be made available as WMO GRIB or BUFR. Where no clear
standard is identified initially, dedicated work will be devoted to
identifying a common approach for those data.
APPLICATE will exploit existing data in the region. In particular, operational
meteorological data made available through the WMO Global Telecommunication
System (GTS) will be important for the model experiments. No full overview of
the third-party data that will be used is currently available, but since the
start of the project SYNOP data from the WMO GTS have been available to the
APPLICATE community. Work is in progress to make more data from the GTS
available. If required by the scientific community in APPLICATE, metadata
describing relevant third-party observations will be harvested and ingested in
the data management system, thereby simplifying the data discovery process for
APPLICATE scientists. There is, however, no initial plan to harvest the data
themselves.
Furthermore, model data produced in the context of CMIP5 and CMIP6 will be
used as a baseline against which model improvements will be tested.
### Origin of the data
Data will be generated by a suite of numerical models, including operational
weather prediction and climate models. A preliminary list was provided in the
proposal and is included below.
A summary of the numerical models to be used is provided in Tables 1-3.
Table 1: List of climate models.
<table>
<tr> <th>Model</th> <th>AWI-CM</th> <th>EC-Earth</th> <th>CNRM-CM</th> <th>NorESM</th> <th>HadGEM</th> </tr>
<tr> <td>Partner</td> <td>AWI</td> <td>BSC, UCL, SU</td> <td>CNRS-GAME, CERFACS</td> <td>UiB, UR, Met.no</td> <td>MO, UREAD</td> </tr>
<tr> <td>Atmosphere</td> <td>ECHAM6 T127 L95</td> <td>IFS T255/T511 L91</td> <td>ARPEGE-Climat T127/T359 L91</td> <td>CAM-OSLO 1°×1° L32/L46</td> <td>MetUM N216/N96 L85</td> </tr>
<tr> <td>Ocean</td> <td>FESOM unstructured mesh, 15-100 km / 4.5-80 km, L41</td> <td>NEMO 1°, 0.25° L75</td> <td>NEMO 1°, 0.25° L75</td> <td>NorESM-O (extended MICOM) 1°, 0.25° L75</td> <td>NEMO 1°×1° L75, 0.25°×0.25° L75</td> </tr>
<tr> <td>Sea ice</td> <td>FESIM</td> <td>LIM3</td> <td>GELATO</td> <td>CICE</td> <td>CICE</td> </tr>
<tr> <td>Surface</td> <td>JSBACH</td> <td>HTESSEL</td> <td>SURFEX</td> <td>SURFEX</td> <td>JULES</td> </tr>
<tr> <td>CMIP6</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr>
</table>
Table 2: List of subseasonal to seasonal prediction systems.
<table>
<tr>
<th>
Model
</th>
<th>
EC-Earth
</th>
<th>
CNRM-CM
</th>
<th>
IFS
</th>
<th>
HadGEM/GloSea
</th> </tr>
<tr>
<td>
Partner
</td>
<td>
BSC, UCL, AWI
</td>
<td>
CNRS-GAME
</td>
<td>
ECMWF
</td>
<td>
MO, UREAD
</td> </tr>
<tr>
<td>
Atmosphere
</td>
<td>
IFS
T255/T511 L91
</td>
<td>
ARPEGE Climat
T255/T359 L91
</td>
<td>
IFS
T511/T319 L91
</td>
<td>
MetUM
N216 L85
</td> </tr>
<tr>
<td>
Ocean
</td>
<td>
NEMO
1°/0.25° L75
</td>
<td>
NEMO
1°/0.25°, L75
</td>
<td>
NEMO
1°, L75
</td>
<td>
NEMO
0.25°×0.25° L75
</td> </tr>
<tr>
<td>
Sea ice
</td>
<td>
LIM3
</td>
<td>
GELATO
</td>
<td>
LIM2/3
</td>
<td>
CICE
</td> </tr>
<tr>
<td>
Land
</td>
<td>
HTESSEL
</td>
<td>
SURFEX
</td>
<td>
HTESSEL
</td>
<td>
JULES
</td> </tr>
<tr>
<td>
Data assimilation
</td>
<td>
Ensemble Kalman
filter
</td>
<td>
Extended Kalman Filter
SAM2
</td>
<td>
4D-Var
</td>
<td>
4D-Var, NEMOVAR
3D-Var FGAT
</td> </tr> </table>
Table 3: Numerical weather prediction systems.
<table>
<tr>
<th>
Model
</th>
<th>
ARPEGE
</th>
<th>
AROME
</th>
<th>
IFS
</th>
<th>
AROME-Arctic
</th> </tr>
<tr>
<td>
Partner
</td>
<td>
CNRS-GAME
</td>
<td>
CNRS-GAME
</td>
<td>
ECMWF
</td>
<td>
Met.no
</td> </tr>
<tr>
<td>
Atmosphere
</td>
<td>
ARPEGE
T1198, stretched HR (7.5 km on grid pole), L105
</td>
<td>
AROME
1.3 km / 500 m, 90 vertical levels
</td>
<td>
IFS
T1279 L137
</td>
<td>
AROME
2.5 km L65
</td> </tr>
<tr>
<td>
Ocean
</td>
<td>
N/A
</td>
<td>
N/A
</td>
<td>
N/A
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Sea ice
</td>
<td>
GELATO
</td>
<td>
GELATO
</td>
<td>
N/A
</td>
<td>
SICE
</td> </tr>
<tr>
<td>
Land
</td>
<td>
SURFEX
</td>
<td>
SURFEX
</td>
<td>
HTESSEL
</td>
<td>
SURFEX
</td> </tr>
<tr>
<td>
Data assimilation
</td>
<td>
4D-Var
</td>
<td>
dynamical adaptation
</td>
<td>
4D-Var
</td>
<td>
3D-Var
</td> </tr> </table>
In the original version of this data management plan, the total amount of data
was not known. This is still not known in detail, but some information on the
expected volumes for publication is known (this is a consequence of the
“partial dissemination” term used in Table 4). The ECMWF YOPP dataset is
excluded from this overview currently.
The major volumes to be disseminated through the data management system are
the ECMWF YOPP dataset, and seasonal forecasts and potentially climate
forecasts from WP5. Preliminary estimates (maximum values) of the volumes (Tb)
planned for dissemination are currently:
* ECMWF YOPP dataset
◦ Analysis and forecast dataset, including process tendencies, amounting to a
total volume of 300 Tb.
* WP 5 Seasonal forecasts
◦ Three different models, each producing in total approximately 20 Tb
throughout the project. In total approximately 60 Tb.
* WP 5 Climate change simulations
◦ One model in standard resolution approximately 20 Tb.
◦ One model in high resolution, approximately 685 Tb, would also be
available for dissemination. However, in practice only a subset of the data
will be useful to the wider community, and hence significant data volume
reduction is being considered for dissemination.
### ECMWF YOPP data
Within APPLICATE, ECMWF has begun to generate an extended two-year global
dataset to support the World Meteorological Organization’s Year of Polar
Prediction (YOPP). The start of production was timed to coincide with the
official launch of YOPP in Geneva, Switzerland, on 15 May. The dataset is
intended to support YOPP’s goal of boosting polar forecasting capacity. In
addition to the usual forecast data stored at ECMWF, it will include
additional parameters for research purposes. These include ‘tendencies’ in
physical processes modelled in ECMWF’s Integrated Forecasting System (IFS).
More information on the ECMWF YOPP dataset is available from ECMWF. The actual
data are available through the ECMWF YOPP Data Portal and are discoverable
through the APPLICATE Data Portal as well as the YOPP Data Portal.
## Making data findable, including provisions for metadata [fair data]
APPLICATE is following a metadata-driven approach, utilizing internationally
accepted standards and protocols for documentation and exchange of discovery
and use metadata. This ensures interoperability at the discovery level with
international systems and frameworks, including WMO Information System (WIS),
Year of Polar Prediction (YOPP), and many national and international Arctic
and marine data centers (e.g. Svalbard Integrated Arctic Earth Observing
System).
APPLICATE data management is distributed in nature, relying on a number of
data centres with a long-term mandate. This ensures preservation of the
scientific legacy. The approach chosen is in line with lessons learned from
the International Polar Year, and the ongoing efforts by the combined
SAON/IASC Arctic Data Committee to establish an Arctic data ecosystem.
APPLICATE promotes the implementation of Persistent Identifiers at each
contributing data centre. Some have this in place, while others are in the
process of establishing this. Although application of globally resolvable
Persistent Identifiers (e.g. Digital Object Identifiers) is not required, it
is promoted by the APPLICATE data management system. However, each
contributing data centre has to support locally unique and persistent
identifiers if Digital Object Identifiers or similar are not supported.
Concerning naming conventions, APPLICATE requires that controlled vocabularies
are used both at the discovery level and the data level to describe the
content. Discovery level metadata must identify the convention used and the
convention has to be available in machine readable form (preferably through
Simple Knowledge Organisation System). The fallback solution for controlled
vocabularies is the Global Change Master Directory vocabularies.
The search model of the data management system is based on GCMD Science
Keywords for parameter identification through discovery metadata. At the data
level the Climate and Forecast Convention is used for all NetCDF files. For
data encoded using WMO standards, GRIB and BUFR, the standard approach at the
host institute is followed. All discovery metadata records are required to
include GCMD Science Keywords. Furthermore, CMOR standards will be employed
for some of the climate model simulations, especially those contributing to
CMIP6.
Versioning of data is required for the data published in the data management
system. The detailed requirements for defining a new version of a dataset
are still to be agreed, but the general principles include that a new version of a
model dataset is defined if the physical basis for the model has changed (e.g.
modification of spatial and temporal resolution, number of vertical levels and
internal dynamics or physics). Integration of datasets (e.g. to create a long
time series) is encouraged, but these datasets must be clearly documented.
The APPLICATE data management system can consume and expose discovery metadata
provided in GCMD DIF and ISO19115. If ISO19115 is used, GCMD keywords must be
used to describe physical and dynamical parameters. Support for more formats
is being considered. More specifications will be identified early in the
project. As ISO19115 is a container that can be used in many contexts,
APPLICATE promotes the application of the WMO Profile for discovery metadata.
This is based on ISO19115. APPLICATE will be more pragmatic than WMO,
accepting records that do not fully qualify in all aspects. The dialogue on
what is required will be aligned with the ongoing efforts of the combined
SAON/IASC Arctic Data Committee to ensure integration with relevant scientific
communities.
APPLICATE will integrate with the YOPP Data Portal to make sure that APPLICATE
datasets are discoverable through it. This will be implemented by letting the
YOPP Data Portal harvest the relevant discovery metadata from the APPLICATE
data catalogue.
## Making data openly accessible [fair data]
All discovery metadata will be available through a web-based search interface
available through the central project website (applicate.met.no). Some data
may have temporal access restrictions (embargo periods); these will be handled
accordingly.
Valid reasons for an embargo period on data are primarily educational,
allowing Ph.D. students to prepare and publish their work. Even while data are
constrained by an embargo period, they will be shared internally in the
project. Any disagreements on access to data or misuse of data internally are
to be settled by the APPLICATE Executive Board.
Data in the central repository will be made available through a THREDDS Data
Server, activating OPeNDAP support for all datasets and OGC Web Map Service
for visualisation of gridded datasets. Standardisation of data access
interfaces and linkage to the Common Data Model through OPeNDAP is promoted
for all data centres contributing to APPLICATE. This enables direct access to
data within analysis tools like Matlab, Excel and R. Activation of these
interfaces is recommended for the other contributing data centres as well.
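As a sketch of what this enables, the snippet below opens a dataset directly
over OPeNDAP with the netCDF4 Python library; the THREDDS endpoint URL and
variable name are hypothetical.

```python
# Minimal sketch of direct OPeNDAP access; the endpoint URL and variable
# name are hypothetical.
from netCDF4 import Dataset

# netCDF4-python opens an OPeNDAP URL like a local file; only the
# requested slices are transferred over the network.
url = "https://thredds.example.org/thredds/dodsC/applicate/example_tas.nc"
ds = Dataset(url)

tas = ds.variables["tas"]      # lazy handle: no data read yet
arctic = tas[0, 150:180, :]    # only this subset is fetched
print(tas.standard_name, arctic.shape)
ds.close()
```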
Metadata and data for the datasets are maintained by the responsible data
centres (including the central data repository). Metadata supporting unified
search is harvested and ingested in the central node (through
applicate.met.no) where it will be made available through human (web
interface) and machine interfaces (OAI-PMH; support for OpenSearch is being
considered).
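A minimal sketch of how a client could harvest these records over OAI-PMH is
shown below; the endpoint URL and metadata prefix are assumptions, since
OAI-PMH itself only defines the verbs and the paging via resumption tokens.

```python
# Minimal sketch of harvesting discovery metadata over OAI-PMH with plain
# HTTP; the endpoint URL and metadata prefix are assumptions.
import xml.etree.ElementTree as ET
import requests

OAI = "{http://www.openarchives.org/OAI/2.0/}"
endpoint = "https://applicate.met.no/oai"          # hypothetical endpoint
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

while True:
    root = ET.fromstring(requests.get(endpoint, params=params).content)
    for record in root.iter(OAI + "record"):
        header = record.find(OAI + "header")
        print(header.findtext(OAI + "identifier"))
    # OAI-PMH pages results via resumption tokens
    token = root.find(f"{OAI}ListRecords/{OAI}resumptionToken")
    if token is None or not (token.text or "").strip():
        break
    params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}
```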
Datasets with restrictions are initially handled by the responsible data
centre. Generally, the metadata will be searchable and contain information on
how to request access to the dataset. An example of a dataset with access
restrictions is the ECMWF YOPP dataset, for which user registration is
required. Access to information about the dataset does, however, not require
registration.
## Making data interoperable [fair data]
In order to be able to reuse data, standardisation is important. This implies
standardisation both of the encoding/documentation and of the interfaces to
the data. Earlier in this document, reference is made to documentation
standards widely used by the modelling communities. This includes encoding
model output as NetCDF files following the Climate and Forecast (CF)
convention, or as WMO GRIB. The WMO formats are table-driven formats where the
tables identify the content and make it interoperable. NetCDF files following
the CF convention are self-describing and interoperable. Application of the CF
conventions implies requirements on the structure and semantic annotation of
data (e.g. identification of variables/parameters through CF standard names).
Furthermore, it requires encoding of missing values etc.
To simplify the process of accessing data, APPLICATE recommends all data
centres to support OPeNDAP. OPeNDAP allows streaming of data and access
without downloading the data as physical files. If OPeNDAP is not supported,
straightforward HTTP access must be supported.
In order to ensure consistency between discovery level and use level metadata,
a system for translation of discovery metadata keywords (i.e. GCMD Science
keywords) to CF Standard names is under development. This implies that e.g.
controlled vocabularies used in the documentation of data may be mapped on the
fly to vocabularies used by other communities. This is in line with current
activities in the SAON/IASC Arctic Data Committee.
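A sketch of such a translation is shown below; the mapping entries are
illustrative examples, not the project's authoritative table.

```python
# Illustrative sketch of on-the-fly translation between discovery-level
# keywords (GCMD Science Keywords) and use-level names (CF standard names).
# The mapping entries are examples, not an authoritative table.
GCMD_TO_CF = {
    "EARTH SCIENCE > ATMOSPHERE > ATMOSPHERIC TEMPERATURE > SURFACE TEMPERATURE":
        "air_temperature",
    "EARTH SCIENCE > CRYOSPHERE > SEA ICE > SEA ICE CONCENTRATION":
        "sea_ice_area_fraction",
}

def gcmd_to_cf(keyword):
    """Return the CF standard name for a GCMD keyword, if known."""
    return GCMD_TO_CF.get(keyword.strip().upper())

print(gcmd_to_cf(
    "Earth Science > Cryosphere > Sea Ice > Sea Ice Concentration"))
# -> sea_ice_area_fraction
```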
## Increase data re-use (through clarifying licenses) [fair data]
APPLICATE promotes free and open data sharing in line with the Open Research
Data Pilot. Each dataset needs a license attached. The recommendation in
APPLICATE is to use the Creative Commons Attribution license for data. See
https://creativecommons.org/licenses/by/3.0/ for details.
APPLICATE data should be delivered in a timely manner, meaning without undue
delay. In any case, the delay shall not be longer than one year after the
dataset is finished. Discovery metadata shall be delivered immediately.
APPLICATE is promoting free and open access to data. Some data may have
constraints (e.g. on access or dissemination) and may be available to members
only initially. Furthermore, some of the data will be used for modelling
development purposes and are thus of limited interest to the broader
community; these data will not be made publicly available. A draft
dissemination plan was outlined in the proposal and is provided in Table 4.
This will be updated as the project progresses.
Table 4: Draft dissemination plan.
<table>
<tr>
<th>
Purpose
</th>
<th>
Model systems
</th>
<th>
Experimental design
</th>
<th>
Data
</th> </tr>
<tr>
<td>
Determine the impact of model enhancements on process representation and
systematic model error
(WP2)
</td>
<td>
* AWI-CM
* EC-Earth
* CNRM-CM
* NorESM
* HadGEM
</td>
<td>
Baseline data: CMIP6 DECK experiments.
Implement the model changes suggested in WP2 in coupled models:
• 200-yr pre-industrial control experiments
• CMIP6 historical experiments
• 1% CO2 increase experiments
</td>
<td>
Partial Dissemination
</td> </tr>
<tr>
<td>
Determine Arctic- lower latitude linkages in atmosphere and ocean
(WP3)
</td>
<td>
Coupled models
* AWI-CM
* EC-Earth
* CNRM-CM
* NorESM
* HadGEM
</td>
<td>
Large ensembles (50-100 members) of 12-month experiments starting June 1st
with sea ice constrained to observed and projected sea ice fields.
Multi-decadal experiments with and without artificially reduced Arctic sea ice
(enhanced downwelling LW radiation over sea ice); use of tracers for the
ocean.
Repeat with enhanced models.
</td>
<td>
Full Dissemination
</td> </tr>
<tr>
<td>
Atmospheric models
* ECHAM6
* IFS
* ARPEGE- Climat
* CAM-OSLO
* MetUM
</td>
<td>
Large ensembles (50-100 members) of 12-month experiments starting June 1st
with sea ice constrained to observed and projected sea ice fields.
Various corresponding sensitivity experiments to explore the role of the
background flow and the prescribed sea ice pattern.
Repeat with enhanced models.
</td>
<td>
Full Dissemination
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Seasonal prediction systems • EC-Earth
• CNRM-CM
</th>
<th>
Seasonal prediction experiments with and without relaxation of the Arctic
atmosphere towards ERA-Interim reanalysis data: 9-member ensemble forecasts
with members initialized on Nov 1st, Feb 1st, May 1st and Aug 1st for the
years 1979-2016 and 1993-2016 for EC-Earth and CNRM-CM, respectively.
</th>
<th>
Full Dissemination
</th> </tr>
<tr>
<td>
Arctic observing system development (WP4)
</td>
<td>
Atmospheric model
• IFS
</td>
<td>
Data denial experiments with the IFS for key observations (snow, surface
pressure, wind, moisture) and different seasons.
</td>
<td>
Partial dissemination
</td> </tr>
<tr>
<td>
Seasonal prediction
* EC-Earth
* HadGEM
* GloSea
</td>
<td>
- Perfect model experiments to characterize basic sensitivity of forecasts to
initial conditions.
- Different configurations of initial conditions using reanalyses, new
observations, and ocean reruns forced by atmospheric reanalyses.
- Experiments focused on sea-ice thickness, snow and spatial data sampling.
</td>
<td>
Partial dissemination
</td> </tr>
<tr>
<td>
Determine the impact of APPLICATE model enhancements on weather and climate
prediction
(WP5)
</td>
<td>
Atmospheric model
* ARPEGE
* AROME
* IFS
* AROME-Arctic
</td>
<td>
Test recommendations for model enhancements made in WP2 in pre-operational
configurations. Explore the impact of nesting, driving model and resolution.
</td>
<td>
Partial dissemination
</td> </tr>
<tr>
<td>
Seasonal prediction
* EC-Earth
* CNRM-CM
* HadGEM
</td>
<td>
Test recommendations for model enhancements made in WP2 in pre-operational
configurations.
</td>
<td>
Partial dissemination
</td> </tr>
<tr>
<td>
</td>
<td>
Climate change
* AWI-CM
* EC-Earth
* NorESM
</td>
<td>
Establish the impact of model enhancements developed in WP2 on climate
sensitivity by carrying out experiments using the same initial conditions and
time period (1950-2050) employed in HighResMIP.
</td>
<td>
Partial dissemination
</td> </tr> </table>
The quality of each dataset is the responsibility of the Principal
Investigator. The Data Management System will ensure the quality of the
discovery metadata and that datasets are delivered according to the format
specifications.
Numerical simulations and analysed products will be preserved for at least 10
years after publication.
# Allocation of resources
In the current situation, it is not possible to estimate the cost for making
APPLICATE data FAIR. Part of the reason is that this work is relying on
existing functionality at the contributing data centres and that this
functionality has been developed over years. The cost of preparing the data in
accordance with the specifications and initial sharing is covered by the
project. Maintenance of this over time is covered by the business models of
the data centres.
A preliminary list of data centres involved is given in Table 5.
Table 5: As of autumn 2018, the following data centres are contributing to the
APPLICATE project.
<table>
<tr>
<th>
Data centre
</th>
<th>
URL
</th>
<th>
Contact
</th>
<th>
Comment
</th> </tr>
<tr>
<td>
BSC
</td>
<td>
https://www.bsc.es/
</td>
<td>
Pierre-Antoine Bretonnière
</td>
<td>
</td> </tr>
<tr>
<td>
ECMWF
</td>
<td>
https://www.ecmwf.int
</td>
<td>
Manuel Fuentes
</td>
<td>
</td> </tr>
<tr>
<td>
DKRZ
</td>
<td>
http://www.dkrz.de
</td>
<td>
Thomas Jung
</td>
<td>
</td> </tr>
<tr>
<td>
Norwegian Meteorological Institute / Arctic Data Centre
</td>
<td>
https://applicate.met.no/
</td>
<td>
Øystein Godøy
</td>
<td>
This subsystem will provide a unified search interface to all the data
APPLICATE is generating. It will also host data not being hosted by other data
centres contributing to APPLICATE. Metadata interfaces are available, data
interoperability supported using OGC WMS and OPeNDAP. Will integrate relevant
data from WMO GTS.
</td> </tr> </table>
Each data centre is responsible for accepting, managing, sharing and
preserving the relevant datasets. Concerning interoperability, the following
interfaces are required:
1. Metadata
   1. OAI-PMH serving either GCMD DIF or the ISO19115 minimum profile with GCMD Science Keywords. Dedicated sets should be available to identify APPLICATE data in large data collections.
2. Data (the system will also accept whatever is available and deliver it in its original form; for such data, no synthesis products are possible without an extensive effort)
   1. OGC WMS (visual representation only, not data; see the example request below)
   2. OPeNDAP
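As an illustration of the required data interfaces, the sketch below issues
WMS requests against a hypothetical THREDDS endpoint; the service URL and
layer name are assumptions.

```python
# Minimal sketch of exercising the required OGC WMS interface; the service
# URL and layer name are hypothetical.
import requests

wms = "https://thredds.example.org/thredds/wms/applicate/example_tas.nc"

# GetCapabilities lists the layers, styles and CRSs a server offers.
caps = requests.get(wms, params={
    "service": "WMS", "version": "1.3.0", "request": "GetCapabilities"})
print(caps.status_code, caps.headers.get("Content-Type"))

# GetMap returns a rendered image (visual representation, not the data).
png = requests.get(wms, params={
    "service": "WMS", "version": "1.3.0", "request": "GetMap",
    "layers": "tas", "styles": "", "crs": "CRS:84",
    "bbox": "-180,60,180,90", "width": "512", "height": "256",
    "format": "image/png"})
with open("arctic_tas.png", "wb") as f:
    f.write(png.content)
```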
In the current situation, long-term preservation of 50 Tb for 10 years is
covered. Volumes to be preserved are still somewhat uncertain and the storage
costs for some of the data produced in the project are covered by other
projects/activities, e.g. the CMIP6 data and operational models. For some of
these data only preservation of minor datasets is required by APPLICATE.
All data that will contribute to CMIP6 will be stored in data centres
contributing to the Earth System Grid Federation (ESGF). The APPLICATE data
centres contributing to ESGF are indicated in the table above. For APPLICATE,
the experiments contributing to the Polar Amplification Model Intercomparison
Project (PAMIP) will be managed in an ESGF data centre.
# Data security
Data security relies on the existing mechanisms of the contributing data
centres. APPLICATE recommends securing the communication between data centres
and users with secure HTTP (HTTPS). Concerning the internal security of the
data centres, APPLICATE recommends the best practices from OAIS. The technical
solution will vary between data centres, but most data centres have solutions
using automated checksums and replication.
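A minimal sketch of such an automated checksum check is given below; the file
name and reference digest are placeholders.

```python
# Minimal sketch of the kind of automated checksum verification mentioned
# above; the file name and reference digest are placeholders.
import hashlib

def sha256sum(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, streaming in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum recorded when the dataset was archived.
recorded = "0" * 64                       # placeholder reference digest
print(sha256sum("example_tas.nc") == recorded)
```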
The central node relies on secure HTTP, but not all contributing data centres
support this yet.
# Ethical aspects
APPLICATE is not concerned with ethically sensitive data and follows the
guidance of the IASC Statement of Principles and Practices for Arctic Data
Management.
# Other
APPLICATE is linked to WMO’s Year of Polar Prediction activity. In this
context APPLICATE is relating to the WMO principles for data management
identified through the WMO Information System.
Source: https://phaidra.univie.ac.at/o:1140797 (Horizon 2020)

0005_Made4You_780298.md
# 1 Introduction
The Made4You project is committed to high quality output and responsible
research and innovation. Thus, this document defines a set of procedures that
the consortium is committed to adhere to and to improve in the course of the
project.
Openness and transparency are two of the guiding principles that the reader
will see reflected in the different processes and methods described. At the
same time there is a strong awareness within the consortium related to privacy
and data protection of individual citizens. These core principles underlying
the research work in Made4You correspond with the practices related to
Responsible Research and Innovation (RRI).
Section 2 below describes the management structures, including the nominees
for the various boards. Section 3 is dedicated to specific quality management
procedures, including communication structures and tools, the peer reviewing
process for high quality deliverables as well as risk management, SWOT and
other quality assurance means. In Section 4 the technical infrastructure for
communication and collaboration is presented. Section 5 presents the RRI
policies and identifies the most relevant aspects for Made4You. It includes
the ethical approach and guidelines that the project is following (together
with deliverables D8.1 and D8.2). In Section 6 the consortium’s strategy
towards openness is described and relates to open source in terms of software
as well as open access in terms of publications and other project results.
Finally, Section 7 draws the conclusions that are relevant for a high quality
implementation of the project.
The appendix includes examples of templates mentioned throughout the document.
# 2 Management structure
Both the Grant Agreement (GA) and the Consortium Agreement (CA) specify a
number of bodies for the management of the project. Though the GA and CA,
being legal documents, take precedence over this handbook, the following
sections specify the operational view of these bodies.
Made4You is a large-scale innovation action aiming at a wide community on a
global scale. Therefore, the management structure and procedures work in a
**flexible manner** in order to:
* Achieve integration of all consortium members and to mobilise their expertise, knowledge and networks in every stage of the project
* Efficiently coordinate the processing of the work plan in a collaborative environment
* Continuously involve contextual expertise and knowledge of relevant stakeholders (patients, families, healthcare professionals, makers) and their networks
Our approach is a combination of integration and decentralisation strategies.
_Integration_ is achieved through the composition of a consortium with
complementary skills and knowledge, the development of a joint framework, the
agreement on common guidelines for co-design activities, the joint work on the
platform and community development, and project workshops and meetings. The
resources of all partners will be mobilised by _decentralisation of
responsibilities_ through the assignment of leadership for work packages and
defined work package tasks with a clear task sharing based on the different
competence fields of the partners.
**Figure 1: Made4You – Management Structure: responsible roles in management**
The management structure defines the basic roles and responsibilities. The
Coordinator (Dr. Barbara Kieslinger, ZSI) is responsible for the overall line
of actions and the day-to-day management carried out by the project.
Additional ZSI staff is providing financial and administrative support to the
coordinator.
The Project Coordinator is supported by the WP leaders in the strategic
coordination of the innovation action. In addition, the Community Manager, who
is also coordinating the dissemination and exploitation WP, is responsible for
the coordination of the Made4You extended network. In close cooperation with
the project manager the community manager will take care of the broad
visibility of the project, amongst specific stakeholder groups and will have a
special interest in the exploitation and transferability of the project
results.
## 2.1 Work Package (WP)
The work package (WP) is the building block of the project. The WP leader
* organizes the WP and coordinates the different tasks,
* prepares and chairs WP meetings,
* organizes the production of the results of the WP,
* represents the WP in the Project Management Board.
A Work Package Leader has been appointed for each work package; this person is
responsible for the progress within the work package and is supported by task
leaders and other members of the consortium involved in the WP. Clear
responsibilities (based on the competences of each partner) are described in
the Work Package Description. Current WP leaders are shown in Table 1.
<table>
<tr>
<th>
**Workpackage**
</th>
<th>
**Lead partner**
</th>
<th>
**Name**
</th> </tr>
<tr>
<td>
WP1 Engagement & Community Growth
</td>
<td>
WAAG
</td>
<td>
Jurre Ongering
</td> </tr>
<tr>
<td>
WP2 Pilot Open Solutions
</td>
<td>
MAKEA
</td>
<td>
Daniel Heltzel
</td> </tr>
<tr>
<td>
WP3 Platform & Tooling
</td>
<td>
OPEN
</td>
<td>
Enrico Bassi
</td> </tr>
<tr>
<td>
WP4 Evaluation & Impact Assessment
</td>
<td>
ZSI
</td>
<td>
Teresa Schäfer
</td> </tr>
<tr>
<td>
WP5 Dissemination & Outreach
</td>
<td>
GIG
</td>
<td>
Sandra Mamitzsch
</td> </tr>
<tr>
<td>
WP6 Legal & Ethical Aspects
</td>
<td>
KUL
</td>
<td>
Erik Kamenjasevic
</td> </tr>
<tr>
<td>
WP7 Project Management
</td>
<td>
ZSI
</td>
<td>
Barbara Kieslinger
</td> </tr>
<tr>
<td>
WP8 Ethical requirements
</td>
<td>
ZSI
</td>
<td>
Barbara Kieslinger
</td> </tr> </table>
**Table 1 Current WP leaders**
## 2.2 Project Management Board (PMB)
The project is managed through the Project Management Board (PMB). It provides
the overall direction for the project, both strategic and operational. The PMB
maintains the project directions and obtains advice from the Work Package
Leaders, to ensure that the project meets its stated and implied goals. The
PMB ultimately supervises all project management processes, including
initiation, planning, execution, control, and closure of project phases.
Within this framework, the Work Package Leaders coordinate the detailed
planning, execution and control of the technical tasks to meet the project’s
scientific and technical objectives relevant to their work packages.
The Project Management Board is responsible for the proper execution and
implementation of the decisions of the General Assembly and makes suggestions
to the General Assembly on pending decisions such as:
* Accept or reject changes to the work plan, changes in the Grant Agreement and amendments to the Consortium Agreement
* Make changes in the Project Management structure
The PMB is chaired by the Project Coordinator and composed of the Work Package
Leaders plus a representative from partners not leading a work package. The
PMB is currently composed of the persons listed in **Table 2** below.
<table>
<tr>
<th>
Partner
</th>
<th>
Partner manager
</th> </tr>
<tr>
<td>
ZSI
</td>
<td>
Barbara Kieslinger
</td> </tr>
<tr>
<td>
WAAG
</td>
<td>
Jurre Ongering
</td> </tr>
<tr>
<td>
OPEN
</td>
<td>
Enrico Bassi
</td> </tr>
<tr>
<td>
GIG
</td>
<td>
Sandra Mamitzsch
</td> </tr>
<tr>
<td>
MAKEA
</td>
<td>
Daniel Heltzel
</td> </tr>
<tr>
<td>
WEV
</td>
<td>
Richard Hulskes
</td> </tr>
<tr>
<td>
KUL
</td>
<td>
Erik Kamenjasevic
</td> </tr>
<tr>
<td>
TOG
</td>
<td>
Chiara Nizzola
</td> </tr> </table>
**Table 2: Partner managers**
## 2.3 General Assembly (GA)
The General Assembly is the ultimate decision-making body of the consortium
and functions as the highest authority and last resort for all relevant
project decisions. The body consists of one representative per partner.
A face-to-face General Assembly comprising all project consortium partners
will take place at least once a year to coordinate overall project work.
Additional extraordinary meetings can be held at any time upon request of the
PMB or one third of the members of the GA.
Decisions taken by the General Assembly concern project content, e.g. changes
in the Description of Action (DoA), finances and intellectual property rights.
This body also has the right to decide on the evolution of the partnership
(e.g. entry of a new partner) and of the project as such (e.g. termination of
the project).
## 2.4 Made4You Advisory Board
The Made4You Advisory Board (MAB) is a group of persons from outside the
project. The MAB will be consulted for important decisions that affect the
direction of research and/or are related to adoption of the results from the
Made4You project.
The MAB members are listed in Table 3.
<table>
<tr>
<th>
**MAB member**
</th>
<th>
**Affiliation**
</th> </tr>
<tr>
<td>
Sherry Lassiter
</td>
<td>
President of Fab Foundation, MIT
</td> </tr>
<tr>
<td>
John Schull
</td>
<td>
Co-founder eNABLE
</td> </tr>
<tr>
<td>
David Ott
</td>
<td>
Global Humanitarian Lab
</td> </tr>
<tr>
<td>
Raul Krauthausen
</td>
<td>
Sozialhelden
</td> </tr> </table>
**Table 3: MAB members**
During the kick-off meeting it was decided that this group of external experts
can still be expanded by 2-3 persons for strategic purposes.
## 2.5 Consortium Agreement (CA)
Before the start of the project, a consortium agreement was signed by all
partners. It defines the specific operational procedures for the different
project bodies described above. This includes, amongst other aspects, the
responsibilities of the parties and their liabilities towards each other as
well as the governance structure, financial provision and Intellectual
Property Rights (IPR) issues. The consortium agreement also describes the
decision making structures and defines the General Assembly as the ultimate
decision making body.
# 3 Quality procedures and Code of Conduct
Quality assurance is of high priority in collaborative research, such as
Made4You, and the consortium is committed to a set of quality procedures to
guarantee high quality project output. Measures to ensure good quality include
e.g. the definition of internal communication structures, regular internal
reflections on risks and a proper SWOT analysis (Strengths, Weaknesses,
Opportunities and Threats analysis) as well as a defined peer review process
for any project deliverable. These procedures are described in detail in the
following sections.
## 3.1 Internal communication structures & procedures
Internal communication is first and foremost based on the concept of openness
and transparency. An active communication strategy is implemented to establish
a strong project identity in order to obtain maximum transparency for all
partners involved, and to increase synergy in cooperation.
Daily communication among the WPs, the partners, etc. is established mainly
through
* e-mails and a central mailing list including all project partners,
* a Slack group for quick communication across teams and partners,
* a project space hosted at the ZSI (Nextcloud) for internal exchange and online storage of all documents as well as offline communication: https://wolke1.zsi.at/,
* web-conferencing (Skype) for regular online meetings,
* face-to-face communication (during physical project meetings).
The consortium partners meet approximately every three to four months face-to-
face (at synchronisation points) to coordinate the progress. Each month, at
least one virtual consortium meeting takes place via video conferencing,
currently Skype. These meetings ensure the internal communication among
partners, allow the WP Leaders/thematic leaders to coordinate the various
tasks, and report the progress of work to the team members.
During the meetings, “live minutes” are produced and made accessible to all
partners for later reference. Each team reports its latest updates in a shared
document before a meeting, all participants are invited to read them before
the meeting starts, and the most relevant issues are then discussed during the
meeting. The minutes are available on the Nextcloud.
In addition to these virtual consortium meetings, thematic groups (similar to
WPs, but overlapping in some cases) have started to emerge during the kick-off
meeting and virtual meetings are organised by these working groups. Similar to
the consortium meetings notes and recordings are available on the Nextcloud
and each member of the consortium is invited to attend any of these meetings.
## 3.2 External communication structures & procedures
The communication strategy also aims to effectively communicate with parties
outside the consortium, especially since Made4You is an innovation action that
aims at reaching and engaging a broad audience to create impact.
Stakeholders will be addressed via the community engagement and communications
strategy, which is coordinated in a collaborative effort by WP1 and WP5. The
“ComCom” working group is elaborating the details for the external
communication in terms of procedures and material. Basically different
communication options will be elaborated for the different target groups.
Most importantly, it should be mentioned that Made4You decided to promote the
name of the platform, **Careables (http://www.careables.org)** , and to use
the project name Made4You mainly for administrative purposes.
## 3.3 Quality of (non-)deliverables and peer review
A **peer-review process** for the Made4You project is set up in order to
obtain and guarantee the quality of the deliverables (documentation, reports,
prototypes, etc.) that will be produced during the course of the project and
delivered to the European Commission, offered to the Made4You stakeholders and
more globally to the general public. This section describes standards for the
Made4You deliverables and presents the peer-review procedure. A checklist for
the deliverables and a template for peer-review reports are given in
Appendices to this document.
### 3.3.1 Deliverables
Made4You deliverables serve different purposes. They are a communication means
within the consortium and communication with other people outside the
consortium. They are aimed at transferring the know-how, to exploit the
results and knowledge generated by the project. Deliverables should be written
with their target readers in mind. They should be concise and easy to follow.
The readability of a document is a vital ingredient for its success. The
following general structure should be followed and is as such provided in the
deliverable template of the project:
* Cover page
* Amendment History
* List of Authors/Contributors
* Table of Contents
* Abbreviations/Acronyms
* Executive summary
* Introductory part
* Core part
* References
* Annexes (optional)
Annex I includes a checklist that should serve as a guideline when preparing a
deliverable. A Made4You deliverable may comprise one or more volumes
and may consist of the following parts:
* The _Main part_ is the part that summarises the results for high-level executives, technical managers and experts with decision-making competence. It is typically one document and may contain Appendices
* _Annexes_ are optional and have detailed technical information for experts and implementers. They are added to the main part at the end of the document
Project deliverables may be classified according to different confidentiality
levels, such as public (PU), restricted (RE) or confidential (CO). Following
an open access strategy, which the project partners are committed to, all
Made4You deliverables have been classified as PU regarding their dissemination
level in the DoA. PU means completely public access and thus, all deliverables
will be made available on the project website and/or specific open
repositories (see the data management plan further below). In case consortium
members want to change the level of confidentiality of any of the
deliverables, this requires a decision by the General Assembly and needs to be
convincingly
argued.
In the following, steps to be taken for publishing a deliverable are listed:
1. The following parts form the basis for the deliverable:
* Title and description of the project deliverables
* The name(s) of the deliverables editor(s)
* The deliverable history including names(s) of contributors and internal reviewer(s) in charge of the peer review for the deliverable
2. The people appointed to generate parts of the Deliverable – the authors – provide their contribution to the editor.
3. The editor(s) prepare draft 0.1 of the Deliverable by assembling and integrating all contributions. This draft is discussed with all authors. It is recommended to involve the internal reviewers already at this stage.
4. When the editors and the authors are satisfied with the results achieved, the editor issues draft 1.0 and puts it on the Made4You Nextcloud and sends a note to the consortium.
5. The editor informs the internal reviewers and asks for a quality check, opinions and constructive comments within a defined deadline (normally one week).
6. The editor deals with all the comments and problems raised, if necessary with the help of the authors. This is a critical phase due to the many interactions involved. It may be necessary to have a meeting (physical, audio- or video conference) in order to speed up the process for reaching a consensus on the amendments.
7. The editor prepares draft 2.0, puts it on the Made4You Nextcloud and informs the project manager (Dr. Barbara Kieslinger) and the whole consortium that the deliverable has reached final status and can be submitted to the EC and the reviewers.
8. The deliverable is sent to the PO and the EC reviewers only by the project manager.
### 3.3.2 Peer review process
One of the feasible means to enhance the quality of the project deliverables
is an internal peer review system. Made4You deliverables shall be evaluated by
2-3 reviewers so as to gather diversified and balanced viewpoints.
Deliverables can be reviewed by members of the core project team or colleagues
from the partner institutions, as well as invited external experts, for
example Advisory Board members.
Peer reviewers should be nominated by the editor(s) at least 3 weeks before
the due date of the deliverable and communicated to the consortium. Nominated
peer reviewers can turn down the invitation with clear justification (e.g.
lack of expertise) and would thus be requested to nominate another candidate.
Consented peer reviewers are required to produce a peer review feedback within
7-10 days after receiving the deliverable from the editor. In case of any
expected delay, peer reviewers should notify the editor and the project
manager immediately. During the review process, peer reviewers are encouraged
to discuss the problems identified in the deliverable with the main
author/editor. Peer reviewers are advised to pay particular attention to the
following points:
* Is the deliverable aligned with the objectives of the project and relevant work packages?
* Does the deliverable make a significant contribution to the project or not?
* Is the content of the deliverable focused on the intended purpose? Is the content of the deliverable presented in a precise and to-the-point manner?
* Is the length of the deliverable justified? Are there superfluous or irrelevant parts that should be deleted? Are there overlong parts that should be shortened? Are there any parts that are written in flowery language and/or that are unspecific or redundant?
* Are there many grammatical errors and/or typographical errors and/or incomprehensible sentences? Specifically, clear annotations indicating errors and suggested corrections are very helpful for the authors of the deliverable. The annotated deliverable may be sent back to the editor/authors via email together with the peer review report.
* Does the deliverable require substantial revision or rewriting? If yes, it will facilitate the revision process if some concrete suggestions on how to improve the deliverable are given.
Peer review results are described in a peer review report/e-mail (see Annex
III), which contains the following information:
* Basic information about the deliverable, author and peer reviewer
* Comments on the length and content of the deliverable
* Major strengths and weaknesses of the deliverable
* Review summary
If minor or substantial revisions are necessary, authors of the deliverable
should make changes and produce the final version of the deliverable before
due submission date. The final responsibility for the content of the
deliverable remains with the editor and authors and it is thus their final
decision about how to address and integrate the feedback from the peer
reviewers. The review reports will be made available internally for the
consortium only.
**Figure 2: Peer review process**
### 3.3.3 Non-deliverables
For non-deliverables, such as publications and dissemination material, the
procedure for deliverables will be used where applicable and with a timeline
that fits the material.
Since there are many types of material, this handbook cannot provide details
for all cases. We distinguish the following broad categories of material.
* Dissemination material (flyers, website, leaflets, popular science publications, etc.): the default reviewer is the communication manager, supported by the project manager.
* Scientific publications or conference presentations: reviewed by one or more team members according to focus and contributions.
## 3.4 Internal surveys
Made4You is committed to a **continuous improvement process** on the project
management level. In addition to open and transparent communication and
decision-making, the project management uses anonymous surveys for specific
input on process management, risks and critical issues. These surveys are kept
brief to ensure broad participation by each project member. The survey is
distributed according to needs (no pre-defined schedule), but at least once a
year (ideally before a GA meeting) to cover the following:
* _Project management._ In this section, participants are asked to share their positive and negative observations about the project management processes.
* _Current topics._ The second section focuses on topics that are currently important within the project. This can range from collaboration infrastructure, to satisfaction about certain results, or specific WP-level topics. A recurring topic will be questions regarding Responsible Research and Innovation (RRI) in order to sensitise project partners for the most relevant aspects of RRI for Made4you.
* _Expectations and perceived risks_ . The third section focuses on the future and asks participants to share their perception about risks and expectations.
An essential element of this survey process is that the results are discussed
and reflected upon in the consortium, preferably during a face-to-face
meeting. This allows for reacting to arising issues quickly and addressing
them collaboratively, e.g., by adapting the agenda.
## 3.5 Risk management
As stated above, internal surveys and discussions are used to check perceived
concerns and risks by all consortium partners. In addition, the quarterly
reports that each partner submits online (on Nextcloud) also include a section
on possible risks, deviations or corrective actions to be reported to the
project management.
The basic risk management methodology to be followed in the project consists
of four subsequent steps:
* Risk identification – areas of potential risk are identified and classified.
* Risk quantification – the probability of events is determined and the consequences associated with their occurrence are examined.
* Risk response – methods are produced to reduce or control the risk, e.g. switch to alternative technologies.
* Risk control and report – lessons learnt are documented.
Risks with medium or high probability and severe impact are handled with
particular caution during the project. At this point, it is expected that the
project safely achieves its expected results. This is also supported by the
preliminary risk analysis. Normal project risks are managed via “good-practice”
project management and rely on the experience from the successful
research projects that the partners have been performing. The close
supervision and tight control both by the project management and by the
various Boards ensure that results are available in time and with adequate
quality.
At the kick-off meeting a first risk analysis was performed for each of the
work packages. Before the kick-off, all partners were asked to reflect on
“dreams” and “fears” that they would associate with the work packages. The
following two images summarise on the one hand the dreams and expectations and
on the other hand the fears and risks associated with each of the work
packages. Work package leaders will follow up on these aspects and reflect on
contingencies should any of the identified risks, or emerging risks, start
having an influence on the activities progress.
In the course of the project, management is responsible for close monitoring
of the overall progress and risk identification. Risk identification is
however also collaboratively encouraged as part of reflective sessions during
the project meetings. Early communication of risks is encouraged as well as
discussions, in order to achieve a profound understanding of risks. The
project management promotes an open communication culture to openly discuss
any issues arising.
## 3.6 SWOT
A mid-term analysis of strengths, weaknesses, opportunities and threats (SWOT)
will be performed on the team and the project. This will be done during a
plenary meeting and will be used to refocus the project, if needed, in its
second half.
The SWOT analysis is a structured planning method to evaluate the Strengths,
Weaknesses, Opportunities and Threats of a particular undertaking, be it for a
policy or programme, a project or product or for an organization or
individual. It is generally considered to be a simple and useful tool for
analysing project objectives by identifying the internal and external factors
that are favourable and unfavourable to achieving that objective. Strengths
and weaknesses are regarded as internal to the project, while opportunities
and threats generally relate to external factors.
Strengths can be seen as characteristics of the project that give it an
advantage over others while weaknesses are regarded as characteristics that
place the team at a disadvantage relative to others. Opportunities comprise
elements that the project could exploit to its advantage whilst threats
include elements in the environment that could cause trouble for the project.
Questions to be answered during the SWOT analysis include:
_Strengths (S):_
* What do we do well? What are our assets?
* What advantages does the project have? What do we do better than anyone else? What unique resources can we draw upon that others can't?
* What are our core competencies? What is the Unique Selling Proposition (USP)?
* What do other people see as our strengths?
_Weaknesses (W)_:
* What could we improve? What can we do better?
* What should we avoid?
* Where do we lack resources?
* Which factors minimise the outcome?
* What are external people likely to see as weaknesses?
_Opportunities (O)_ :
* Which good opportunities can we spot? What are the emerging political and social opportunities?
* What interesting trends are we aware of? What are the economic trends that benefit us?
* What new needs of PES and other future users could we meet?
_Threats (T):_
* What obstacles do we face?
* Where are we vulnerable?
* Could any of our weaknesses seriously threaten our results? What are the negative political and social trends?
To develop strategies that take into account the SWOT profile, a matrix can be
constructed. The SWOT matrix (see below) includes strategies that make best
use of strengths and opportunities and minimise weaknesses and threats. SO-
Strategies pursue opportunities that are a good fit with the project's
strengths. WO-Strategies overcome weaknesses to pursue opportunities. ST-
Strategies identify ways in which the project can use its strengths to reduce
its vulnerability to external threats. WT-Strategies establish a defensive
plan to prevent the project's weaknesses from making it highly susceptible to
external threats.
<table>
<tr>
<th>
**SWOT Matrix**
</th>
<th>
**Strengths**
</th>
<th>
**Weaknesses**
</th> </tr>
<tr>
<td>
**Opportunities**
</td>
<td>
SO-Strategies
</td>
<td>
WO-Strategies
</td> </tr>
<tr>
<td>
**Threats**
</td>
<td>
ST-Strategies
</td>
<td>
WT-Strategies
</td> </tr> </table>
**Figure 5: SWOT Matrix**
After the first matrix has been drawn from the answers by the consortium, the
following questions should be answered during the discussion and establishment
of the project strategy:
* How to make best use of strengths and opportunities?
* How to best minimise weaknesses by making best use of opportunities?
* How to make best use of strengths by reducing risk of threats?
* How to best minimise weaknesses even with the expected threats?
While SWOT can be a good complementary tool for analysing the project and
redefining strategy, it also has several blind spots: for instance, SWOT is a
linear analysis and reflects a single expert's or group's perspective. In the
case of the Made4You project, an external view, e.g. from the Advisory Board,
would give an important complementary interpretation of the project's
development. Overall, SWOT is an easy-to-use tool that provides quick access
to the positive and negative aspects of a project and its environment, and
seems appropriate to be performed mid-term in the Made4You project.
## 3.7 Project templates
Made4You intends to use a consistent ‘project style’. This is implemented by
providing templates for deliverables and reports, presentations, posters and
other dissemination and communication material. More project style templates
can be produced by the communication and outreach team when needed.
At the kick-off meeting the consortium decided to name the central platform of
the project “Careables”. Thus, the main message in any promotional material
will focus on advertising Careables (http://www.careables.org).
All available project style templates are available on the shared workspace on
Nextcloud.
# 4 Tools and collaboration infrastructure
While the previous section was concerned with the processes of communication
and collaboration there is also a technical side to this and a number of
technical tools are used to provide the Made4You collaboration infrastructure.
It consists of several pieces:
* **Made4You mailing list** is used for project-wide asynchronous communication. The address of the mailing list is: [email protected]
* **Slack** for ad-hoc communication to the whole team as well as to different subgroups and individual team members;
* **Skype** is used for regular web conferencing (monthly meetings)
* **Nextcloud** (https://wolke1.zsi.at/) is used for sharing files and for real-time co-creation of documents
* **e-mail and telephone** are used for bilateral communication
* **Careables Website** (http://www.careables.org) is the main portal for sharing open healthcare solutions and also used for presenting our work to the public
The choice of this collaboration infrastructure has been made taking into
consideration practical aspects as well as privacy and data protection issues
related to the EU General Data Protection Regulation (GDPR).
**Figure 6: Nextcloud Workspace for Made4You**
# 5 Responsible research and innovation (RRI)
## 5.1 What is RRI?
Responsible Research and Innovation (RRI) has been formulated and widely
promoted as a guiding principle and policy concept by the European Commission
to better align science with society and to meet the so-called grand
challenges 1 . It has been promoted as a cross-cutting issue within the H2020
research programme. A widely accepted definition describes RRI as “a
transparent, interactive process by which societal actors and innovators
become mutually responsive to each other with a view on the (ethical)
acceptability, sustainability and societal desirability of the innovation
process and its marketable products” (von Schomberg, 2013). Other definitions
of RRI (cf. Jacob et al., 2013; Owen et al., 2013) might differ slightly from
von Schomberg’s, but as described by Wickson & Carew (2014) the overall common
accordance is that responsible research and innovation should (1) address
significant socio-ecological needs and challenges, (2) actively engage
different stakeholders, (3) anticipate potential problems, assess available
alternatives and reflect on underlying values and beliefs, and (4) adapt
according to these ideas. Generally speaking, RRI is doing science and
innovation with and for society by re-imagining the science-society
relationship.
According to the European Commission (Jacob et al., 2013), RRI comprises the
following key dimensions 2 :
1. **Governance** : Responsibility of policymakers to prevent harmful or unethical developments in research and innovation
2. **Open Access** : Open access to research results and publications to boost innovation and increase the use of scientific results
3. **Ethics** : Research must respect ethical standards and fundamental rights to respond to societal challenges
4. **Gender** : Gender equality and in a wider sense diversity
5. **Public Engagement** : Engagement of all societal actors (researchers, industry, policy makers, civil society) in a reflective research process
6. **Science education** : Enhancement of current education processes to better equip future researchers and society as a whole with the necessary competences to participate in research processes
In addition to these key dimensions, which are reflected in the European
policy agendas, RRI can also be defined with regards to its process
requirements which include **openness and transparency, anticipation and
reflection, responsiveness and adaptive change and diversity and inclusion.**
Figure 7, which stems from the RRI-Tools project 3 , where the ZSI has been a
core partner, shows an integrative view on these two perspectives, which
complement each other.
2 A different operationalisation is described by Wickson and Carew (2014), who
describe RRI from a process perspective with the following quality criteria:
1. Socially relevant and Solution oriented; 2. Sustainability centered and
Future scanning; 3. Diverse and Deliberative; 4. Reflexive and Responsive;
5. Rigorous and Robust; 6. Creative and Elegant; and 7. Honest and Accountable.
3 http://www.rri-tools.eu
**Figure 7: Overview of key dimensions and process requirements of RRI
according to RRI-Tools project**
In the following, we briefly describe the six key dimensions and how they
relate to Made4You.
## 5.2 Governance
Among the six key dimensions of RRI, governance has a slightly different
function compared to the others, as it is rather an organising and steering
principle that determines the success of all other RRI dimensions. In other
words, RRI relies on good governing structures for its promotion.
Governance methods range from foresight techniques (scenario studies, value
sensitive design, etc.), assessment (ethical committees, needs assessment,
technology assessment, etc.), agenda setting (consultation, co-creation, etc.)
to regulation (code of conduct, policies, funding guidelines, etc.).
Currently, governance of RRI is rarely seen on a project level; it is rather
applied on funding level or within organisations, e.g. to call for
organisation-wide RRI guidelines and policies. The **Made4You project** can be
perceived as an attempt to tackle RRI on a project level. However,
comprehensive RRI guidelines for projects are still missing and thus this
handbook together with the deliverables D8.1 and D8.2 will aim at meeting this
need. Also, it has to be acknowledged that governance structures need to be at
least at institutional level in order to be sustainable. On a project level,
however, it makes sense to break down what RRI means in the specific context
and how it can be adapted to the project's particularities.
## 5.3 Open access
In the narrower sense, open access is about enabling or giving access to
research results and publications to the public. It addresses only the final
stage of research activity, the publication and dissemination phase. With the
launch of Horizon 2020 it has become mandatory to follow open access
publication strategies (European Commission, 2012).
Open access, in the narrow sense, is different from open science, open
innovation and open data, although there are obvious overlaps. For instance,
in contrast to open access, open science implies opening up the whole science
process in real time to the public, from choosing areas to investigate in,
formulating the research questions to choosing the methods, collecting data
and finally discussing the results. Open science means democratising science
and research, usually through ICT.
When talking about open access in the context of Made4You we refer to open
access in the narrower sense. Our project will basically follow an open access
publication strategy, but will also make data available to the public at an
earlier stage where suitable (cf. chapter 6).
## 5.4 Ethics
The European Commission defines ethics as key dimension of RRI as follows:
_“European society is based on shared values. In order to adequately respond
to societal challenges, research and innovation must respect fundamental
rights and the highest ethical standards. Beyond the mandatory legal aspects,
this aims to ensure increased societal relevance and acceptability of research
and innovation outcomes. Ethics should not be perceived as a constraint to
research and innovation, but rather as a way of ensuring high quality
results.” (p.4)_ 2
Ethics thereby shall not be perceived as a constraint but rather as a guiding
principle to help ensure high quality outcomes and to justify decisions. This
is also the case for Made4You. A specific work package (WP6) is dedicated to
legal and ethical aspects. We will deal with the three main aspects of ethics
as defined by the European Commission (2015), namely 1) Research integrity and
good research practice, 2) Research ethics for the protection of research
objects, and 3) Societal relevance and ethical acceptability of research and
innovation outcomes.
Ethics further implies social justice and inclusion aspects: The widest range
of societal actors and civil society shall benefit from research and
innovation outcomes. In other words, products and services as a result of
Research & Innovation (R&I) activities shall be acceptable and affordable for
different social groups, which is also a special goal of Made4You.
Ethics is an integral part of responsible research, from the conceptual phase
to the publication of research results. The consortium of Made4You is clearly
committed to recognising potential ethical issues that may arise during the
course of the project and has therefore defined a set of procedures on how to
deal with ethics in a responsible way.
The main aspects the project is dealing with in regard to ethics are the
protection of identity, privacy, obtaining informed consent and communicating
benefits and risks to the involved target groups. The activities performed in
Made4You may include data collection from individuals and organisations,
remotely as well as on site. In the following, we outline the basic processes
of ethical compliance of the project with a general view on the scientific
data collection. In addition, Deliverable D8.2 describes in more detail how
the patient data collection, processing and storage on the Made4You platform,
called “Careables”, comply with the GDPR.
### Data protection and privacy
During any data collection process, data protection issues with regards to
handling personal data will be addressed by the following strategies:
Participants who volunteer to be enrolled in our activities will be
exhaustively informed so that they are able to decide autonomously whether
they consent to participate or not. The purposes of the research, the
procedures as well as the handling of their data (protection, storage) will be
explained. For online interviews these explanations will be part of the
initial briefing of interviewees. For face-to-face interventions, the informed
consent (provided in D8.1) shall be agreed and signed by both the study
participants and the respective research partner.
The data exploitation will be in line with the respective national data
protection acts. Since data privacy is under threat when data can be traced
back to individuals – they may become identifiable and the data may be abused
– we will anonymise all data to mitigate this risk.
Data gathered through questionnaires, interviews, observational studies, focus
groups, workshops and other possible data gathering methods during this
research will be anonymised, so that the data cannot be traced back to
individuals. Data will be stored only in anonymous form, so the identities of
the participants will be known only by the research partners involved. Raw
data like interview protocols and audio files will be shared among consortium
partners only after they have signed the confidentiality agreement (see
Annex I). Reports based on interviews, focus groups and other data gathering
methods will be based on aggregated information and will use anonymised
quotations.
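To make the anonymisation step concrete, the following is a minimal sketch in Python, assuming interview records are held as simple dictionaries; the field names, the salt handling and the participant code scheme are illustrative assumptions, not the project's actual tooling.

```python
# Minimal sketch of the anonymisation step described above.
# Field names and the code scheme are illustrative assumptions.
import hashlib

SECRET_SALT = "replace-with-a-project-secret"  # kept out of shared data


def pseudonym(name: str) -> str:
    """Map a participant name to a stable, non-reversible code."""
    digest = hashlib.sha256((SECRET_SALT + name).encode("utf-8")).hexdigest()
    return "P-" + digest[:8]


def anonymise(record: dict) -> dict:
    """Drop direct identifiers and replace the name with a code."""
    cleaned = {k: v for k, v in record.items()
               if k not in ("name", "address", "email", "phone")}
    cleaned["participant_id"] = pseudonym(record["name"])
    return cleaned


interview = {"name": "Jane Doe", "email": "[email protected]",
             "quote": "The co-design session helped me a lot."}
print(anonymise(interview))  # name and email are gone, only a code remains
```

With the salt kept secret by the responsible partner, the code cannot be reversed, yet repeated interviews with the same participant still map to the same identifier.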
The collected data will be stored on password-protected servers at the partner
institution responsible for data collection and analysis. The data will be
used only within the project and will not be made accessible for any third
party, unless anonymised. Sensitive or personal data will not be stored after
the end of the project (incl. the time for final publications) unless required
by specific national legislation.
The stored data do not contain the names or addresses of participants and will
be edited for full anonymity before being processed (e.g. in project reports).
### Communication strategy
Study participants will be made aware of the potential benefits and identified
risks of participating in the project at all times.
The main means of communicating benefits and risks to the individual is the
informed consent (see Deliverable D8.1). Prior to consent, each individual
participant in any of the Made4You studies will be clearly informed of its
goals, its possible adverse events, and the possibility to refuse to enter or
to withdraw at any time with no consequences. This will be done through a
project information sheet or the informed consent form and it will be
reinforced verbally.
In order to make sure that participants are able to recall what they agree
upon when signing, the informed consent forms will be provided in the native
language of the participants. In addition, the consortium partners will make
sure that the informed consent is written in a language suitable for the
target group(s). Different informed consents will be made available, e.g.
consent of adult participants, parental consent, informed assent for
children/minors.
For media material (e.g. photos, videos) produced during any of the Made4You
events, a **media waiver** will be distributed to participants to make sure
that they are aware of such recordings and can agree or disagree to the
production and use of such material by the project partners. A template for
the waiver is provided in Annex IV of this document.
### Informed consent/informed assent
As stated above, informed consent/assent will be collected from all
participants involved in Made4You studies. The declaration of consent forms is
provided in deliverable D8.1.
### Relevant regulations and scientific standards
The consortium is following European regulations and scientific standards to
perform ethical research. The following lists some of the basic regulations
and guidelines.
The Made4You project will fully respect the citizens’ rights as reported by
the EGE and as proclaimed in the Charter of Fundamental Rights of the European
Union (2000/C 364/01), having as its main goal to enhance and foster the
participation of European citizens in education, regardless of cultural,
linguistic or social backgrounds. Regarding the personal data collected during
the research, the project will make every effort to heed the rules for the
protection of personal data as described in Directive 95/46/EC 3 .
In addition, the consortium is following the following European Regulations
and Guidelines:
* The Charter of Fundamental Rights of the European Union:
* European Convention on Human Rights http://www.echr.coe.int/Documents/Convention_ENG.pdf
* Horizon 2020 ethics self-assessment http://ec.europa.eu/research/participants/portal/doc/call/h2020/h2020-msca-itn-2015/1620147-h2020_-_guidance_ethics_self_assess_en.pdf
* EU Code of Ethics:
* European Textbook on Ethics in Research https://ec.europa.eu/research/sciencesociety/document_library/pdf_06/textbook-on-ethics-report_en.pdf
* European data protection legislation
* RESPECT Code of Practice for Socio-Economic Research
* Code of Ethics of the International Sociological Association (ISA)
### National and Local Regulations and Standards
In addition to the more general and EU-wide guidelines, project partners have
to adhere to, and respect, national regulations and laws as well as research
organisational ethical approval as requested by their own institutions.
All partners are aware of their responsibilities in that respect and will
follow the respective guidelines.
## 5.5 Gender
Gender equality generally means equal rights, opportunities and
responsibilities for both genders, so that individuals can exploit and realise
their full potential independently of their sex.
Gender equality as key dimension of RRI comprises two main aspects (European
Commission, 2015), namely to strive for gender balanced teams in research and
innovation (at operational as well as at decision making level) and the
inclusion and integration of gender perspectives in research and innovation
content and process. Gender analysis and gender monitoring throughout the
project shall aim at looking at both aspects of gender equality, at the human
capital dimension (where possible, apart from institutional conditions) and
the research aspect of gender (Föger et al., 2016).
In Made4You gender is mostly relevant when it comes to internal processes,
such as the composition of project teams, of work package leaders and of the
advisory group, the use of gender-sensitive language and the awareness of
producing gender-sensitive content. We are aware of the current imbalance in
the advisory board and we will consider gender specifically in any new
allocations.
In line with the Toolkit on Gender in EU-funded research (European Commission,
2009), Made4You will strive to perform gender-sensitive innovation. Gender has
to be addressed and taken into account particularly in the following project
steps:
* Project design and methodology: we will make sure that for any of our approaches in co-design and other engagement activities, we will aim at representative data in the sense that different gender perspectives will be described, where relevant.
* Project implementation: Data-collection tools such as questionnaire, interview guidelines, etc. need to be gender sensitive and use gender-neutral language and have to allow for differentiation between gender perspectives. In the evaluation data analysis we will particularly pay attention whether there are differences between males and females, for instance, in terms of artefacts that are produced, in terms of communicating and sharing, etc.
* Dissemination phase – reporting of data: We will use gender-neutral language in our publications. Furthermore, we will sensitively decide which visual materials to use. In addition, we will aim at publishing gender specific results.
## 5.6 Science education
Science education under the RRI umbrella is meant to meet several objectives
(European Commission, 2015; Föger et al., 2016):
1. To empower society to reflect critically and to improve its skills to be able to challenge research, thus making it “science-literate” (in this sense, there is a great overlap with the key dimension of public engagement)
2. To enable future researchers and other societal actors to become good RRI actors
3. To make science attractive to children and teenagers with the purpose to promote science careers, especially in STEM (Science, Technology, Engineering, and Mathematics)
4. To close the gap between science and education. There is still a significant distance between the two areas.
According to the RRI-Tools project, co-design is regarded as a possible
empowering tool for science education, as it enables participants to shape the
development of certain technologies or services. In Made4You we plan to include
children in the co-design process for certain cases. In addition, educational
activities and student engagement are part of the WP1 activities. They are
targeted at students in the field of medicine, paramedical professions, design
& arts, biomedical engineering and Fab Academy and aim to familiarise them
with co-design processes. Also, the maker spaces involved in the Made4You
project regularly offer educational activities to young people and schools as
the maker movement has started to get attention from schools and educational
authorities.
## 5.7 Public Engagement
In recent years, science communication has moved from a one-way communication
approach that merely informs the general public towards public engagement,
which means more elaborate and active involvement of citizens, leading to
collaboration and empowerment.
There is a vast range of tools and methods with different levels of
participation available, e.g. public consultations, public deliberations for
decision making, public participation in R&I processes, Citizen Science, etc.
The goal of opening up research and innovation processes to the public is to
better meet the values, needs and expectations of society, and thus to improve
R&I and to find solutions to the so-called grand challenges that society is
facing (Cagnin, Amanatidou, & Keenan, 2012).
Thus, realising this key dimension of RRI is an important goal in Made4You,
and two work packages are working together to reach high-quality public
engagement. WP1 and WP5 are collaborating closely on a joint strategy and
created a working team at the kick-off meeting to jointly define and execute
the engagement and communication strategy of the project.
## 5.8 RRI management in Made4You
The notion of Responsible Research and Innovation does not offer a checklist
or one universal guideline on how to do RRI. It is also not in the spirit of
RRI to have such a set of measures, as RRI is rather perceived as a process
that requires continuous questioning and reflection. Thus, mechanisms have to
be installed and embedded in the project by work packages 6 and 7 to stimulate
reflection within the consortium and to keep it alive throughout the lifetime
of the project.
We would like to point out that not all key dimensions are equally relevant
for Made4You, as can be inferred from the discussion above. In the following
we therefore concentrate on those key dimensions, which will be dealt with in
more detail. However, the remaining dimensions shall also remain in our mind-
sets, as we would like to continuously stimulate reflection and discussion on
RRI.
In order to stimulate reflection and deliberation on Responsible research and
innovation and to keep these alive we have foreseen several instruments:
* **Ethical and legal questionnaire** : a questionnaire addressing specific ethical and legal aspects has been distributed to all project partners at the beginning of the project. Questions range from the data that is being stored at the Careables platform to the data being collected to the compliance of the platform with the GDPR as well as data subjects’ rights. This questionnaire especially informs the deliverables D8.1 and D8.2 and partially also this handbook.
* **RRI Self-Reflection-Tool** : The RRI-Tools project has developed the so-called “RRI Self-Reflection-Tool”. It is an online tool for different stakeholder groups and for people with different levels of knowledge on RRI. The tool is meant to provide food for thought, to sensitise for RRI and to stimulate reflection on RRI key dimensions and process requirements. Participants can choose which questions they would like to reflect upon (since not all of them will be relevant) and receive suggestions at the end on how to further improve in terms of RRI. Further resources such as best practice examples, tools or literature will be recommended. In Made4You we will invite the project partners to regularly make use of the Self-Reflection-Tool.
* **Legal and ethics workshop** : At selected consortium meetings WP6 is running a legal and ethics workshop to discuss relevant topics based on the results of the questionnaire and the experiences made by the consortium.
To summarise, the main instruments for implementing RRI are the following:
* ethical guidelines, including forms for informed consent and confidentiality agreement
* open data management plan
* RRI self-assessment tool
* RRI-related legal and ethics workshops
# 6 Open access and open research data
The project firmly believes that openness is a major factor for innovation,
and this was also one of the main motivations for Made4You, which promotes
openness in healthcare. Openness has many facets. The most important ones for
the Made4You consortium, following Carlos Moedas’s (European Commissioner for
Research, Science and Innovation) strategy of the 3 Os, Open Science, Open
Innovation and Open Data 4 , are:
* **Open project collaboration.** All partners are committed to developing (working) relationships with external partners for mutual benefit. Making contacts with similar projects and establishing collaboration is considered beneficial for all. Open collaboration in Made4You is understood in a trans-disciplinary way, opening innovation processes to the wider public and allowing new form of collaboration as intended in the co-design activities of the project.
* **Open source technology.** From a technology perspective, the project fosters the sharing of open healthcare solutions on the Careables platform. A main aim is to share the co-designed technological artefacts with the community. Business models and exploitation strategies are not based on locking down access to project results, but on providing added value through services.
* **Open access to scientific results.** From a scientific perspective, the consortium clearly favours open access to its scientific output, which is supported by several project members’ internal policies of supporting open access in general.
* **Open access to research data.** Made4You is part of a pilot action on open access to research data and is thus committed to providing access not only to project results and processes, but also to data collected during that process. Although Made4You is an innovation action according to the work programme definition and not a research action, some research-related data will be collected, mainly from an evaluation perspective. Although the general policy of the Made4You project is to apply “open by default” to its research data, we have to handle privacy issues with special care. Legal rules on anonymity, as described above (chapter 5), are thus highly relevant and need to be agreed with each of the participants. In case of doubt, the data privacy of our participants always prevails over the open data policy.
Made4You is part of the H2020 Open Research Data Pilot (ORDP), a pilot action
on open access to research data, which requires projects to define and execute
a data management plan. This deliverable includes the open data management
plan for Made4You. The open access strategy will be detailed in the following
sections.
## 6.1 Open access strategy for publications
In line with the EC policy initiative on open access 5 , which refers to the
practice of granting free Internet access to research articles, the project is
committed to following a publication strategy that considers a mix of both
'Green open access' (immediate or delayed open access provided through self-
archiving) and 'Gold open access' (immediate open access provided by a
publisher) as far as possible.
All deliverables labelled as “public” will be made accessible via the Made4You
website (careables.org). The publications stemming from the project work will
also be made available on the website, as far as this does not infringe the
publishers’ rights, as well as on the OpenAIRE platform.
All outcomes of the project labelled as “public” will be distributed under a
specific free/open license, where the authors retain their rights but users
can redistribute the content freely. The following are a few relevant sources
for deciding on the specific license for each outcome:
* Data:
◦ A definition of Open Data:
◦ Licenses:
* Software:
◦ Free Software
▪ The definition:
▪ Licenses:
◦ Open Source Software:
▪ The definition:
▪ Licenses:
* Reports, publications, media:
◦ Creative Commons
▪ Explanation:
▪ Licenses:
▪ Choose a license:
◦ Sharing publications on the project website and via OpenAIRE
## 6.2 Data management plan (DMP)
This is a first version of the DMP for Made4You, which provides an analysis of
the main aspects to be followed by the project’s data management policy. The
DMP evolves in the course of the project and will be updated accordingly as
data is collected. However, we would like to stress once more that Made4You is
an innovation action and a large collection of research data is thus not the
focus of the project.
This data management plan refers mainly to the data collected for the
achievement of the project objectives, namely co-designing a platform for
sharing open healthcare. Complementary to this data management plan,
Deliverable D8.2 (POPD) refers to the handling of personal (sensitive)
patient data. Please note that this is not addressed here.
At the time of writing it is expected that the project will produce the
following data:
* WP1: secondary data from stakeholders. e.g. other open healthcare initiatives, associations, healthcare providers, etc.
* WP2: secondary and primary data from pilot participants, e.g. demographic data
* WP3: platform usage data from Careables.org
* WP4: feedback data from participants in activities of other WPs, interview and questionnaire data, log data from the Careables platform, social media data and observational analysis
* WP5: data from other open healthcare projects regarding Dissemination, Exploitation, Communication of the Made4You project
* WP6: feedback data from participants in activities of other WPs, interview and questionnaire data
This initial list includes primary (empirical) and secondary (desk-top,
aggregated) data. For the currently identifiable primary research data sets
that the project will produce, we follow the requested template description as
defined by the European Commission 6 :
<table>
<tr>
<th>
**Data set reference & name **
</th>
<th>
**Data set description**
</th>
<th>
**Standards & metadata **
</th>
<th>
**Data sharing**
</th>
<th>
**Archiving & preservation **
</th> </tr>
<tr>
<td>
DOI_1
Made4You_Co-design_X
</td>
<td>
Feedback documented directly during co-design sessions regarding the co-design
process itself as well as documentation standards
</td>
<td>
As indexed on the sharing platform, e.g. Zenodo, it will have publication
date, Digital Object Identifier (DOI), keywords, collections, license and
uploader
</td>
<td>
Shared on Zenodo, an open digital repository; the license will most probably
be Creative Commons Attribution Share-Alike
</td>
<td>
Zenodo is developed by CERN under the EU FP7 project OpenAIREplus (grant
agreement no. 283595); the service is free of charge for those without ready
access to an organised data centre; if this policy changes, Made4You will
provide the data accessible via its website for at least 5 years after
project end.
</td> </tr>
<tr>
<td>
DOI_2
Made4You_Survey_X
</td>
<td>
Survey data being collected across the pilot participants and external
stakeholders; the data will be anonymised and will refer to aspects of
evaluation (e.g. usability and usefulness, process feed-back, etc.) and
sustainability (e.g. interest in sharing open healthcare, etc.)
</td>
<td>
As indexed on the sharing platform, e.g. Zenodo, it will have publication
date, DOI, keywords, collections, license and uploader
</td>
<td>
Shared on Zenodo, an open digital repository; the license will most probably
be Creative Commons Attribution Share-Alike
</td>
<td>
Zenodo is developed by CERN under the EU FP7 project OpenAIREplus (grant
agreement no. 283595); the service is free of charge for those without ready
access to an organised data centre; if this policy changes, Made4You will
provide the data accessible via its website for at least 5 years after
project end.
</td> </tr>
<tr>
<td>
DOI_3
Made4You_Interview_X
</td>
<td>
Interviews conducted with individuals associated with any of the pilots need
to be stored anonymously; sometimes only in aggregated form, if too many
details would allow a specific person to be deduced.
The data may be in the following formats (depending on the interviews and the
specific cases):
* audio files
* transcripts
* aggregated files
* interview guidelines
</td>
<td>
As indexed on the sharing platform, e.g. Zenodo, it will have publication
date, DOI, keywords, collections, license and uploader
</td>
<td>
Shared on Zenodo, an open digital repository; the license will most probably
be Creative Commons Attribution Share-Alike
</td>
<td>
Zenodo is developed by CERN under the EU FP7 project OpenAIREplus (grant
agreement no. 283595); the service is free of charge for those without ready
access to an organised data centre; if this policy changes, Made4You will
provide the data accessible via its website for at least 5 years after
project end.
</td> </tr>
<tr>
<td>
DOI_5
Made4You_PlatformUsage_X
</td>
<td>
Platform usage data from Careables.org (anonymous data); the data includes
communication patterns, usage patterns, uploads, downloads, etc.
</td>
<td>
As indexed on the sharing platform, e.g. Zenodo, it will have publication
date, DOI, keywords, collections, license and uploader
</td>
<td>
Shared on Zenodo, an open digital repository; the license will most probably
be Creative Commons Attribution Share-Alike
</td>
<td>
Zenodo is developed by CERN under the EU FP7 project OpenAIREplus (grant
agreement no. 283595); the service is free of charge for those without ready
access to an organised data centre; if this policy changes, Made4You will
provide the data accessible via its website for at least 5 years after
project end.
</td> </tr> </table>
To summarise, the main open access points for Made4You data, publications, and
innovation are:
* The project website:
* Zenodo: (a deposit sketch follows this list)
* OpenAIRE for depositing publications and research data
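As a concrete illustration of the Zenodo sharing foreseen in the data set table above, the sketch below walks through a deposit using Zenodo's public REST API (https://developers.zenodo.org). The access token, file name and metadata values are placeholders, and the license identifier is our assumption of Zenodo's id for Creative Commons Attribution Share-Alike; the current API documentation should be checked before relying on this.

```python
# Hedged sketch of a Zenodo deposit via its REST API; all concrete
# values (token, file name, metadata) are placeholders.
import requests

ZENODO = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "YOUR-ZENODO-TOKEN"}  # placeholder

# 1. Create an empty deposition.
dep = requests.post(ZENODO, params=TOKEN, json={}).json()
dep_id = dep["id"]

# 2. Attach the (already anonymised) data file.
with open("made4you_survey_x.csv", "rb") as fh:  # illustrative file name
    requests.post(f"{ZENODO}/{dep_id}/files", params=TOKEN,
                  data={"name": "made4you_survey_x.csv"},
                  files={"file": fh})

# 3. Describe the data set; the license id is assumed to be Zenodo's
#    identifier for CC BY-SA, as foreseen in the table above.
metadata = {"metadata": {
    "title": "Made4You_Survey_X",
    "upload_type": "dataset",
    "description": "Anonymised survey data from the Made4You pilots.",
    "creators": [{"name": "Made4You consortium"}],
    "license": "cc-by-sa-4.0",
    "keywords": ["Made4You", "open healthcare"],
}}
requests.put(f"{ZENODO}/{dep_id}", params=TOKEN, json=metadata)

# 4. Publish only after the approvals shown in Figure 8 have been given.
requests.post(f"{ZENODO}/{dep_id}/actions/publish", params=TOKEN)
```

Zenodo mints the DOI at publication time, which is then recorded in the data set table above.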
## 6.3 Open access and open data handling process
The internal procedures to grant open access to any publication, research data
or other innovation stemming from the Made4You project (e.g. technology), are
following a lightweight structure, while respecting ethical issues at all
time.
The main workflow starts at the WP level, where each team is responsible for
respecting ethical procedures at all times during the data gathering and
processing steps. The WP/working team members are also responsible for any
data anonymization, if applicable. Agreement has to be reached within the team
for making any outcome openly available; the final approval is done by the
Project Management Board (see Figure 8):
**Figure 8: Open Access workflow** (stages: collecting data under ethical
guidelines and informed consent; anonymisation; preparing the publication and
developing standards; approval within the team; approval by the Project
Management Board; publication via the open repository and www.careables.org)
Due to the nature of the project, the Data Management Plan may have to be
revised during the course of project activities. As the co-design approach is
a rather dynamic methodology it is not possible to clearly specify all data
sources and collected outcomes from the beginning.
# 7 Conclusions
This handbook describes the main procedures of the Made4You project to operate
successfully and effectively in order to achieve high quality project results
following a responsible research and innovation (RRI) approach. Open access,
ethics, and engagement of all societal actors are amongst the key elements of
the European RRI framework (European Union, 2012). Made4You is clearly
committed to responding to societal challenges in a responsible way, both
through its main objective of open healthcare and through the way the actions
in the project are conducted.
While this handbook is provided in the form of a report and deliverable, it is
a living document in the sense of being updated and challenged by the
consortium in the course of the project. The processes described here are
implemented in the daily work of the consortium and most of the elements (e.g.
the forms for informed consent, data management plan, etc.) are separately
available on the collaboration infrastructure such as Nextcloud.
The management reports will include updates on any crucial changes in the
handbook as well as on the results of specific measures such as the SWOT
analysis or any additional elements added to the project structure related to
high quality responsible research.
---
**0008_GREEN-WIN_642018.md**
# Background and purpose of a Data Management Plan
The GREEN-WIN Project participates in the **Open Research Data Pilot** , which
aims to improve and maximize access to research data generated by the project.
The project consortium has decided to participate in this pilot on a voluntary
basis, as stated in the article 29.3 of the Grant Agreement (p. 48). The
GREEN-WIN Project will therefore be monitored and receive specific support as
a participant in the pilot.
As stated in the “ _Guidelines on Open Access to scientific publication and
research data in_ _Horizon 2020_ ”, the Open Research Data Pilot applies to
two types of data:
1. the data needed to validate the results presented in scientific publications;
2. other data as specified within the **data management plan (DMP)**.
A DMP is a document outlining _how the research data collected or generated
will be handled_ during a research project, and after the project is
completed. The DMP describes _what data will be collected / generated_ and
following what _methodology and standards_ , whether and _how this data will
be shared and/or made open_ , and _how it will be curated and preserved_ .
The “ _Guidelines on Data Management in Horizon 2020_ ” states that a first
version of the DMP must be provided within six months of the project, but that
the DMP is not a fixed document; it evolves and gains more precision and
substance during the lifespan of the project. The DMP needs to be updated at
least by the mid-term and final review to fine-tune it to the data generated
and the uses identified by the consortium, since not all data or potential
uses are clear from the start.
# Data set description
## Type of data generated
According to the Grant Agreement Annex 1 (Description of Action) Part B (p.
33), the following data will be collected and generated during the GREEN-WIN
project:
* Qualitative data on relevant policies, policy processes and institutions, including description of actors, interests and actor-networks at national (WP2) and regional to local levels (WP4, WP5-7).
* Data on business models described through a standard protocol template associated with estimated capital needs (WP4, WP5-7).
* Quantitative data on financial flows, international trade and flows of goods.
* Quantitative data on key macroeconomic indicators, environmental and social impacts (mitigation pathways of WP3).
Public output papers of Green-Win project will include:
* Project deliverables and milestones
* Scientific publications
* Conference and workshop presentations
* Policy briefs
* Newsletters
* Posters/Flyers
* Blogs
## Methods of data collection
Both primary and secondary data will be collected.
* Primary data on win-win solutions, green business models and enabling environments will be collected via interviews, workshops and participatory observation in field trips and business collaborations/partnerships, e.g. in the case studies of WP5, 6 & WP7. This includes both quantitative and qualitative data.
* Primary data will also be generated by the macroeconomic modelling of WP3.
* Secondary data will be obtained via databases (Bloomberg, Thompson Reuters, etc.) and the review of policy documents and scientific literature.
Furthermore, both quantitative and qualitative data will be collected on win-
win solutions and green business models via tailor-made templates (WP4).
All work-package leaders are responsible for data management within their
work-package. The overall process is overseen by the project coordinator
(Jochen Hinkel, [email protected]) and the project
administrator (Daria Korsun, [email protected]).
## Formats
Qualitative data from interviews and workshops may be collected through audio
recordings or concurrent note-taking and will be stored as:
* Text documents - .doc, .docx, .pdf or .odt files
* Images - .png, .jpg and .tif files
* Audio - .mp3, .flac and .wav files
* Video – .mp4 files
Quantitative data will be stored as:
* Tabular data - .csv, .xls or .xlsx files.
# Standards and metadata
## Metadata used and metadata standard
Data will be documented following the common standards provided by Horizon
2020 guidance.
* The terms “European Union (EU)” and “Horizon 2020 (H2020)”
* The name of the action, acronym and the grant number
* The publication date, and the length of the embargo period if applicable
* The authors
* A persistent identifier
The metadata also includes a description of the document’s content (summary or
blog post) and key words (tags) for search.
Scientific publications deposited in a repository will include the
bibliographic metadata required by the publisher and use a standard
identification mechanism such as Digital Object Identifiers (DOI). Internally,
all Green-Win partners use a version number format as follows: V0, V1, V2,
etc. for submission of the project papers.
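To illustrate, a minimal machine-readable record carrying the fields listed above might look as follows; the key names and the values are our own illustrative convention, not a formal metadata standard.

```python
# Illustrative metadata record for a GREEN-WIN output; key names and
# values are placeholders following the field list above.
record = {
    "funder": "European Union (EU), Horizon 2020 (H2020)",
    "action": "GREEN-WIN, grant agreement no. 642018",
    "publication_date": "2017-06-30",         # placeholder date
    "embargo_months": 0,                      # 0 if no embargo applies
    "authors": ["Author One", "Author Two"],  # placeholders
    "identifier": "10.5281/zenodo.0000000",   # placeholder DOI
    "summary": "Short description of the document's content.",
    "tags": ["green growth", "climate finance"],
    "version": "V1",                          # internal V0, V1, V2 scheme
}
```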
# Data sharing
## Ownership
The ownership policy is described in the Consortium Agreement section 8 (p.
15) and in the Grant Agreement Article 26 (pp. 43-45). In particular:
* “Results are owned by the beneficiary who generates them” (art. 26.1).
* Detailed procedure for joint-ownership is stipulated in article 26.2.
* Article 26.3 describes the procedure to follow when results obtained by a beneficiary are generated by a third party (transfer of rights, licenses…).
* Beneficiaries have the obligation to protect results generated during the project that could be commercially exploited, and when protection is possible, reasonable and justified (Article 27.1). In the case that the beneficiary intends not to protect the results, the Agency may take over ownership under specific conditions described in article 26.4.
The ownership of specific results might be protected using Creative Commons
Attribution Licenses (CC-BY or ODC-By), as stated in the Grant Agreement Annex
1 (Description of Action) Part B, p. 33.
## Access to data generated/collected
Datasets will be made available either attached to a published article or
published in existing data repositories (cf. table in section 5). Internally,
the data is stored on OwnCloud platform accessible to all consortium partners.
Externally, the data is accessible on the Green-Win website (green-win-
project.eu/deliverables and green-win-project.eu/publications) and on GGKP
platform (greengrowthknowledge.org/resources). Data concerning topics of
climate change mitigation is also stored on Climate Change Mitigation Platform
(climatechangemitigation.eu/about/related-eu-projects/green-win/). The data is
available in one of the formats specified in 2.3 Format section. No special
software tools are needed.
We will thereby follow the requirements of publishers concerning the
accessibility of datasets underlying a research article.
Data collected/generated but not yet published will remain inaccessible to the
public.
Furthermore, certain types of data will remain unavailable to the public,
including:
* Data originating from proprietary databases or under license,
* Confidential, private or personal data (following section 4.3).
## Specifics regarding anonymity
For data collected in interviews and workshops, data handling may need to
adhere to practices that ensure the anonymity of research participants is
maintained. Whether anonymity needs to be maintained is determined by choice
of the research participant and recorded in the GREEN-WIN Informed Consent
Forms, which must be completed by all research participants prior to
participating in the project (See D9.1 and D9.2). Maintaining the anonymity of
participants, when this has been requested, will take precedence over the
requirement to make data publicly available. The procedures for ensuring
anonymity of those research participants who have elected to remain anonymous
have been described in D9.5.
# Archiving and preservation
## Data storage
Copies of datasets are stored:
* On the internal website,
* On local computers (of the data producer and of the PMO).
The internal website is backed up on a server and synchronised to local
computers every day. We use an OwnCloud server to store and archive the data,
which is password-protected and encrypted (HTTPS).
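As an illustration of this setup, the sketch below uploads a dataset copy to an OwnCloud server over its standard WebDAV endpoint (remote.php/webdav) using HTTPS basic authentication; the server URL, credentials and paths are placeholders, not the project's actual configuration.

```python
# Hedged sketch of archiving a data set copy on the project's OwnCloud
# server over WebDAV (HTTPS); URL, credentials and paths are placeholders.
import requests

BASE = "https://owncloud.example.org/remote.php/webdav"  # placeholder
AUTH = ("project-bot", "app-password")                   # placeholder


def upload(local_path: str, remote_path: str) -> None:
    """PUT one file into the password-protected, encrypted store."""
    with open(local_path, "rb") as fh:
        response = requests.put(f"{BASE}/{remote_path}", data=fh, auth=AUTH)
    response.raise_for_status()


upload("interviews_wp5_anonymised.csv",
       "GREEN-WIN/WP5/interviews_wp5_anonymised.csv")
```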
## Data preservation
The Green-Win project website will be accessible a year after the end of the
project (December 2018). At the end of the project all the output papers will
be stored on the GGKP platform as well. The OwnCloud server storing the data
will be kept up and running for one year after the end of the project.
Afterwards, all data and files on the server will be archived on the GCF file
server for 5 years. Some partners will also store the datasets on their own
servers, which will also be publicly available (Table 1).
# Summary of data management plan
<table>
<tr>
<th>
**WP**
</th>
<th>
**Type of data produced**
</th>
<th>
**Qualitative/** **Quantitative**
</th>
<th>
**Anonymity measures to be applied (§3.3)**
</th>
<th>
**Dissemination**
</th>
<th>
**Data storage**
</th>
<th>
**Publicly available**
</th> </tr>
<tr>
<td>
**WP1**
</td>
<td>
Narratives from dialogue workshops
</td>
<td>
Qualitative
</td>
<td>
No
</td>
<td>
Peer-reviewed publications; solutions potentially featured on:
http://climatechangemitigation.eu
</td>
<td>
GCF (personal computers and/or internal servers), internal website:
http://green-win-project.eu
</td>
<td>
Yes, documented in a report of the second dialogue workshop; will be presented
at the final Green-Win conference in Barcelona in March 2018
</td> </tr>
<tr>
<td>
**WP2**
</td>
<td>
Interview transcripts, financial data to populate models
</td>
<td>
Qualitative & quantitative
</td>
<td>
Yes
</td>
<td>
Peer-review publication
</td>
<td>
UCL (personal computers and/or internal servers),
</td>
<td>
Not publicly available (team members)
</td> </tr>
<tr>
<td>
**WP3**
</td>
<td>
Model results
</td>
<td>
Quantitative
</td>
<td>
No
</td>
<td>
Report, peer-reviewed publications
</td>
<td>
E3M (personal computers and/or internal servers), internal website:
http://green-win-project.eu
</td>
<td>
Yes, available on project website
</td> </tr>
<tr>
<td>
**WP4**
</td>
<td>
Socioeconomic, technical and organisational information of Win-Win strategies
and GBMs (green business models)
</td>
<td>
Qualitative & Quantitative
</td>
<td>
Yes
</td>
<td>
Ground_Up
Association (Nonprofit) internal database
</td>
<td>
http://survey.groundupproject.org/
</td>
<td>
Not public. Only GREEN-WIN project partners can access it upon approval of
Ground_Up.
</td> </tr>
<tr>
<td>
**WP4**
</td>
<td>
Interview transcripts & minutes from workshops with investors
</td>
<td>
Qualitative
</td>
<td>
Yes
</td>
<td>
Peer-review publication
</td>
<td>
IASS (personal computers and/or internal servers)
</td>
<td>
Not publicly available (team members)
</td> </tr>
<tr>
<td>
**WP5**
</td>
<td>
Interview transcripts
</td>
<td>
Qualitative
</td>
<td>
Yes
</td>
<td>
Peer-review publication
</td>
<td>
Deltares (personal computers and/or internal servers)
</td>
<td>
Not publicly available (team members)
</td> </tr>
<tr>
<td>
**WP6**
</td>
<td>
Interview transcripts & questionnaire responses
</td>
<td>
Qualitative
</td>
<td>
Yes
</td>
<td>
Peer-review publication
</td>
<td>
Available upon request to the authors
</td>
<td>
Not publicly available (team members)
</td> </tr>
<tr>
<td>
**WP7**
</td>
<td>
Interview recordings & questionnaire responses
</td>
<td>
Qualitative
</td>
<td>
Yes
</td>
<td>
Peer-review publication
</td>
<td>
Available upon request to the authors
</td>
<td>
Not publicly available (team members)
</td> </tr>
<tr>
<td>
**WP8**
</td>
<td>
Socioeconomic, technical and organisational information of GBMs. Data is
introduced and shared on the platform directly by the entrepreneurs/GBM
leaders upon registration.
</td>
<td>
Qualitative & Quantitative
</td>
<td>
Yes
</td>
<td>
Ground_Up Project
(Company) platform
7
</td>
<td>
http://groundupproject.net/
</td>
<td>
Only basic company information and contact is public and only for those who
register on the platform (investors, entrepreneurs, service providers). GBM
description is confidential to all users and becomes available to investors
upon request of entrepreneurs.
</td> </tr> </table>
---
**0009_SAFEWAY_769255.md**
# Executive Summary
This document describes the Data Management Plan (DMP) for the SAFEWAY
project. The DMP provides an analysis of the main elements of the data
management policy that will be used throughout the SAFEWAY project by the
project partners, with regard to all the datasets that will be generated by
the project. The documentation of this plan is a precursor to the WP1
Management. The format of the plan follows the Horizon 2020 template
“Guidelines on Data Management in Horizon 2020” 1 .
# Glossary of Terms
<table>
<tr>
<th>
DMP
</th>
<th>
Data Management Plan
</th> </tr>
<tr>
<td>
E&BP
</td>
<td>
Exploitation and Business Plan
</td> </tr>
<tr>
<td>
GDPR
</td>
<td>
General Data Protection Regulation
</td> </tr>
<tr>
<td>
GFS
</td>
<td>
Global Forecast System
</td> </tr>
<tr>
<td>
GIS
</td>
<td>
Geographic Information System
</td> </tr>
<tr>
<td>
IMS
</td>
<td>
Information Management System
</td> </tr>
<tr>
<td>
INEA
</td>
<td>
Innovation and Networks Executive Agency
</td> </tr>
<tr>
<td>
IPMA
</td>
<td>
Instituto Português do Mar e da Atmosfera
</td> </tr>
<tr>
<td>
IPR
</td>
<td>
Intellectual Property Rights
</td> </tr>
<tr>
<td>
MMS
</td>
<td>
Mobile Mapping System
</td> </tr>
<tr>
<td>
WP
</td>
<td>
Work Package
</td> </tr> </table>
# Introduction
The elaboration of the Data Management Plan (DMP) will allow SAFEWAY partners
to address all issues related to data management.
Due to the importance of research data in supporting publications, it is
necessary to define a data management policy. This document introduces the
first version of the project Data Management Plan, in which the different
datasets that will be produced within the SAFEWAY project are identified. The
document also includes the main exploitation perspectives for each of those
datasets and the major management principles the project will implement to
handle them.
Although the DMP is a deliverable to be submitted in Month 6 (D1.3), it is
also a living document throughout the life of the project. This initial
version will evolve during the project according to the progress of project
activities.
**Table 1:** Planned calendar for submission of the DMP and its updates
<table>
<tr>
<th>
**Deliverable**
**Number**
</th>
<th>
**Deliverable Title**
</th>
<th>
**Due date**
</th> </tr>
<tr>
<td>
D1.3
</td>
<td>
Data Management Plan (DMP) V1
</td>
<td>
M6
</td> </tr>
<tr>
<td>
D1.5
</td>
<td>
Data Management Plan (DMP) V2
</td>
<td>
M18
</td> </tr>
<tr>
<td>
D1.7
</td>
<td>
Data Management Plan (DMP) V3
</td>
<td>
M30
</td> </tr>
<tr>
<td>
D1.9
</td>
<td>
Data Management Plan (DMP) V4
</td>
<td>
M42
</td> </tr> </table>
The DMP will cover the complete data life cycle, as shown in Figure 1.
**Figure 1.** Data life cycle
# General Principles
## Pilot on Open Research Data
The SAFEWAY project is fully aware of the open access to scientific
publications article (Article 29.2 of the H2020 Grant Agreement), as well as
of the open access to research data article (Article 29.3 of the H2020 Grant
Agreement). However, the project partners have opted out of the Open Research
Data Pilot due to a possible conflict with protecting results; SAFEWAY results
will be close to market, and the disclosure of results should be handled with
care, always considering exploitation/commercialisation possibilities.
## IPR management and security
The SAFEWAY project strategy for knowledge management and protection considers
a complete range of elements leading to the optimal visibility of the project
and its results, increasing the likelihood of market uptake of the provided
solution and ensuring a smooth handling of the individual intellectual
property rights of the involved partners, in view of paving the way to
knowledge transfer:
IPR protection and IPR strategy activities will be managed by Laura TORDERA
from FERROVIAL (leader of WP10) as Innovation and Exploitation Manager with
the support of the H2020 IPR Helpdesk. The overall IPR strategy of the project
is to ensure that partners are free to benefit from their complementarities
and to fully exploit their market position. Hence, the project has a policy of
patenting where possible. An IPR Plan will be included in the Exploitation &
Business Plans (D10.4).
Regarding Background IP (tangible and intangible input held by each partner
prior to the project and needed for the execution of the project and/or for
exploiting the results), it will be detailed in the Consortium Agreement,
defining any royalty payments necessary for access to this IP. Regarding
Foreground IP (results generated under the project), it will belong to the
partner who has generated it. Each partner will take appropriate measures to
properly manage ownership issues. When several beneficiaries have jointly
generated results and their respective shares of the work cannot be
ascertained, they will have joint ownership of such results. They will
establish an agreement regarding the terms of exercising the joint ownership,
including the definition of the conditions for granting licenses to third
parties.
## Allocation of resources
The Project Technical Committee (PTC) will be responsible for collecting the
knowledge generated and for defining the protection strategy and the necessary
access rights for results exploitation, as well as for proposing fair
solutions to any possible conflict related to IPR. Complementarily, the PTC,
through the Exploitation & Innovation Manager (E&IM), will keep a permanent
surveillance activity on blocking IP or new IP generated elsewhere in the EU
landscape to ensure SAFEWAY's freedom to operate. The output of this activity
will be included in the Exploitation and Business Plan (E&BP), which will be
updated during the project time frame.
## Personal data protection
For some of the activities to be carried out by the project, it may be
necessary to collect basic personal data (e.g. full name, contact details,
background), even though the project will avoid collecting such data unless
deemed necessary.
Such data will be protected in compliance with the EU's General Data
Protection Regulation, Regulation (EU) 2016/679. National legislations
applicable to the project will also be strictly followed.
All data collection by the project will be done after giving data subjects full
details on the experiments to be conducted, and after obtaining signed
informed consent forms. Such forms, provided in the previous deliverable D11.2
POPD – Requirement No 2, are included in Appendix 1 of this document.
Additionally, the overall information about procedures for data collection,
processing, storage, retention and destruction were also provided in D11.2,
which are annexed to the present DMP in Appendix 2.
## Data security
SAFEWAY shall take the following technical and organizational security
measures to protect personal data:
1. Organizational management and dedicated staff responsible for the development, implementation, and maintenance of SAFEWAY’s information security program.
2. Audit and risk assessment procedures for the purposes of periodic review, monitoring and maintaining compliance with SAFEWAY policies and procedures, and reporting the condition of its information security and compliance to senior internal management.
3. Maintain information security policies and ensure that policies and measures are regularly reviewed and, where necessary, improved.
4. Password controls designed to manage and control password strength and usage, including prohibiting users from sharing passwords.
5. Security and communication protocols for Big Data analytics will be developed as required. SAFEWAY solutions will anticipate security not only technically, but also with regard to the changes in the data protection regime introduced by Regulation (EU) 2016/679 as of May 2018.
6. SAFEWAY solutions will not centralise all the native data in a common database, but will instead retrieve data of value for the platform functionalities on demand. The services layer of the platform includes communication applications that control information disclosure.
7. Operational procedures and controls to provide for configuration, monitoring, and maintenance of technology and information systems according to prescribed internal and adopted industry standards, including secure disposal of systems and media to render all information or data contained therein as undecipherable or unrecoverable prior to final disposal.
8. Change management procedures and tracking mechanisms designed to test, approve and monitor all changes to SAFEWAY technology and information assets.
9. Incident management procedures designed to investigate, respond to, mitigate and notify of events related to SAFEWAY technology and information assets.
10. Vulnerability assessment, patch management and threat protection technologies, together with scheduled monitoring procedures, designed to identify, assess, mitigate and protect against identified security threats, viruses and other malicious code.
11. Wherever possible, data will be processed in anonymised or pseudonymised form (a minimal pseudonymisation sketch is given after this list).
12. Data will be processed ONLY if it is adequate, relevant and limited to what is necessary for the research (‘data minimisation principle’):
1. Personal data will be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed.
2. The minimum amount of personal data necessary to fulfil the purpose of SAFEWAY will be identified.
3. No more personal data than necessary for the purpose of SAFEWAY will be acquired and stored.
4. Whenever it is necessary to process certain particular information about certain individuals, it will be collected only for those individuals.
5. Personal data will not be collected merely because it could be useful in the future.
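As an illustration of measure 11 above, the following minimal sketch shows one common pseudonymisation technique, keyed hashing of direct identifiers. It is an assumption for illustration only, not the project's mandated implementation; the key and record fields are hypothetical.

```python
# Illustrative pseudonymisation sketch (not SAFEWAY's actual implementation).
# A keyed hash (HMAC) replaces a direct identifier, so records stay linkable
# within the project while the secret key is stored separately from the data.
import hmac
import hashlib

SECRET_KEY = b"hypothetical-key-kept-outside-the-dataset"

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for a direct identifier such as a full name."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"name": "Jane Doe", "answer": "agree"}
record["name"] = pseudonymise(record["name"])
print(record)  # the name field now holds a pseudonym, not the original name
```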
These guidelines will apply especially to INNOVACTORY and TØI CENTRE, the two
project partners that make the most intensive use of personal data. The exact
treatment of the data by these two entities is annexed in Deliverable
D11.1 - Ethics Requirements.
## Ethical aspects
An ethical approach will be adopted and maintained throughout the fieldwork
process. The Ethics Mentor will assure that the EU standards regarding ethics
and Data Management are fulfilled. Each partner will proceed with the survey
according to the provisions of the national legislation that are adjusted
according to the respective EU Directives for Data Management and ethics.
The consortium will ensure the participants’ right to privacy and the
confidentiality of data in the surveys by providing survey participants with
the Informed Consent Procedures:
- for those participating in the surveys being carried out within Task 4.3,
by the Institute of Transport Economics - Norwegian Center for Transport
Research.
These documents will be sent electronically and will provide information about
how the answers will be used and what the purpose of the survey is.
Participants will be assured that their answers, or personal data, will be
used only for the purposes of the specific survey. The voluntary character of
participation will be stated explicitly in the Consent Form.
As it is established in Deliverable D11.3, an Ethics Mentor is appointed to
advise the project participants on ethics issues relevant to protection of
personal data.
The Ethics Mentor will advise and supervise the following aspects of the
Project:
* _Data protection by design and default_. The Project will require data controllers to implement appropriate technical and organisational measures to give effect to the GDPR’s core data-protection principles.
* _Informed consent to data processing_. Whenever any personal data is collected directly from research participants, their informed consent will be sought by means of a procedure that meets the standards of the GDPR.
* _Use of previously collected data (‘secondary use’)_. If personal data is processed in the Project without the express consent of the data subjects, it will be explained how those data were obtained, and their use in the Project will be justified.
* _Data protection impact assessments (DPIA)_. If the Project involves operations likely to result in a high risk to the rights and freedoms of natural persons, such an assessment will be conducted.
* _Profiling, tracking, surveillance, automated decision-making and big data_. If the Project involves these techniques, a detailed analysis of the ethics issues raised by this methodology will be provided. It will comprise an overview of all planned data collection and processing operations; identification and analysis of the ethics issues that these raise; and an explanation of how these issues will be addressed to mitigate them in practice.
* _Data security_. Both ethical and legal measures will be taken to ensure that participants’ information is properly protected. These may include the pseudonymisation and encryption of personal data, as well as policies and procedures to ensure the confidentiality, integrity, availability and resilience of processing systems.
* _Deletion and archiving of data_. Finally, the collected personal data will be kept only as long as necessary for the purposes for which they were collected, or in accordance with the established auditing, archiving or retention provisions for the Project. These must be explained to research participants in accordance with informed consent procedures.
# Data Set Description
SAFEWAY is committed to adopting, whenever possible, the FAIR principles for
research data; that is, data should be findable, accessible, interoperable and
re-usable.
SAFEWAY partners have identified the datasets that will be produced during the
different phases of the project. The list is provided below, while the nature
and details of each dataset are given in Section 4.
This list is indicative and provides an estimate of the data that SAFEWAY will
produce; it may be adapted (addition/removal of datasets) in the next
versions of the DMP to take into consideration the project developments.
**Table 2:** SAFEWAY Dataset overview
<table>
<tr> <th> **No** </th> <th> **Dataset name** </th> <th> **Responsible partner** </th> <th> **Related Task** </th> </tr>
<tr> <td> 1 </td> <td> Mobile Mapping System (MMS) data </td> <td> UVIGO </td> <td> T3.2 </td> </tr>
<tr> <td> 2 </td> <td> Historic weather dataset </td> <td> UVIGO </td> <td> T3.1 & T3.3 </td> </tr>
<tr> <td> 3 </td> <td> Global Forecast System (GFS) data </td> <td> UVIGO </td> <td> T3.1 & T3.3 </td> </tr>
<tr> <td> 4 </td> <td> Satellite data </td> <td> PNK </td> <td> T3.2 </td> </tr>
<tr> <td> 5 </td> <td> Experts interviews </td> <td> TØI </td> <td> T4.3 </td> </tr>
<tr> <td> 6 </td> <td> Data on risk tolerance </td> <td> TØI </td> <td> T4.3 </td> </tr>
<tr> <td> 7 </td> <td> Sociotechnical system analysis </td> <td> TØI </td> <td> T4.3 </td> </tr>
<tr> <td> 8 </td> <td> Infrastructure assets data </td> <td> UMINHO </td> <td> T5.1 </td> </tr>
<tr> <td> 9 </td> <td> Information on the value system </td> <td> IMC </td> <td> T6.1 </td> </tr>
<tr> <td> 10 </td> <td> Stakeholder contacts collection </td> <td> UVIGO </td> <td> WP10 </td> </tr>
<tr> <td> 11 </td> <td> Workshops data </td> <td> FERROVIAL </td> <td> T10.3 </td> </tr> </table>
**Table 3:** Datasets description and purpose
<table>
<tr> <th> **No** </th> <th> **Dataset name** </th> <th> **Description** </th> <th> **Purpose** </th> </tr>
<tr> <td> 1 </td> <td> MMS data </td> <td> Data from the different sensors equipped in the Mobile Mapping System (MMS) employed for the monitoring of the infrastructures, including data from some or all of the following sources: LiDAR sensors, RGB cameras, thermographic cameras, and Ground Penetrating Radar. </td> <td> Inspection of the infrastructure critical assets to quantify their condition. From this data, the input information for predictive models (WP5) and the SAFEWAY IMS (WP7) will be extracted. </td> </tr>
<tr> <td> 2 </td> <td> Historic weather dataset </td> <td> Observational quantitative meteorological data measured with hourly (or shorter) temporal frequency over the Instituto Português do Mar e da Atmosfera (IPMA) weather stations network. Relevant variables are air temperature, atmospheric pressure, wind speed and direction, maximum wind gust speed and direction, relative air humidity, instant rain and solar radiation. </td> <td> Main source of observational information for meteorological data interpolation and short-term prediction systems. Base dataset for meteorological activities in WP3. </td> </tr>
<tr> <td> 3 </td> <td> Global Forecast System (GFS) data </td> <td> Predictive quantitative meteorological data calculated with hourly temporal frequency over a planet-wide grid with ~11 km horizontal spatial resolution by the National Oceanic and Atmospheric Administration Global Forecast System (GFS) numerical model. Relevant variables are those most analogous to the Historic weather dataset ones. </td> <td> Complementary source of observational information for meteorological data interpolation and short-term prediction systems. Used in the same way as the Historic weather dataset. </td> </tr>
<tr> <td> 4 </td> <td> Satellite data </td> <td> Sentinel-1 satellite imagery from the Copernicus Open Access Hub, to optimize the Rheticus® displacement service based on MTInSAR algorithms. </td> <td> Geospatial information acquired from satellites is key to detecting and quantifying terrain displacement and deformation (e.g. landslides, subsidence, etc.). </td> </tr>
<tr> <td> 5 </td> <td> Experts interviews </td> <td> The data contain transcriptions and notes from expert interviews with researchers and policy makers. Interviews will be conducted in person, by phone (or Skype), or in written form. The dataset also includes findings from completed/ongoing EU projects. </td> <td> The aim is to identify and collect sources of knowledge on how the different users think/act in extreme situations, as well as their level of preparedness and risk tolerance, and to identify case studies for the analysis of risk tolerance. </td> </tr>
<tr> <td> 6 </td> <td> Data on risk tolerance </td> <td> This includes the evaluation of risk tolerance of different actors and scheduling for use in focus groups, and follow-up surveys with different user representatives. </td> <td> To make findings on varying levels of risk tolerance and preparedness for a range of short- and long-term extreme events among the user groups. </td> </tr>
<tr> <td> 7 </td> <td> Sociotechnical system analysis </td> <td> Selected cases will be documented to represent a range of event types occurring in Europe. Interviews and template analysis will be conducted with people both managing and caught up in the extreme events studied. </td> <td> These analyses, along with established sociotechnical system principles, will inform on optimal social and technical arrangements for the IMS. </td> </tr>
<tr> <td> 8 </td> <td> Infrastructure assets data </td> <td> Database of infrastructures with identification, conservation state, inspections and structural detailing. </td> <td> Database needed to define the input data for the development of predictive models. </td> </tr>
<tr> <td> 9 </td> <td> Information on the value system </td> <td> The information on the value systems, decision-making processes and key performance indicators that transportation infrastructure agencies and stakeholders within the project use in the management of their assets. </td> <td> The monetized direct and indirect consequences of inadequate infrastructure performance are needed as input to develop the value system that will allow prioritizing the interventions of stakeholders related to transport infrastructure. </td> </tr>
<tr> <td> 10 </td> <td> Stakeholder contacts collection </td> <td> The data contain information on the main stakeholders of SAFEWAY along the major stakeholder groups. They include infrastructure managers, operators, public administrations, researchers, practitioners and policy makers. The contact information that is collected includes the name, institutional affiliation, position, email address, phone number and office address. </td> <td> The collection will be used for contacting the respondents for the validation of the project outcomes. It also provides the basis for the dissemination of the project and for promoting the SAFEWAY IT solutions. </td> </tr>
<tr> <td> 11 </td> <td> Workshops data </td> <td> The data contain protocols, written notes and summaries produced at the three workshops, which are organized in different countries. The workshops are aimed at developers and providers of technical solutions. This dataset also includes the collection of contact information of attendees: name, institutional affiliation, position, email address, phone number and office address. </td> <td> The information gathered at the workshops will support the development of the SAFEWAY methodologies and tools. </td> </tr> </table>
# SAFEWAY Datasets
## Dataset No 1: MMS data
<table>
<tr>
<th>
**Mobile Mapping System (MMS) data**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset comprises all the data collected by the mapping technologies
proposed by UVIGO in WP3. Therefore, it contains data from the different
sensors equipped in the Mobile Mapping System (MMS) employed for the
monitoring of the infrastructures, including data from some or all the
following sources: LiDAR sensors,
RGB cameras, thermographic cameras, and Ground Penetrating Radar. Data from
different LiDAR sensors (Terrestrial or Aerial) that may be employed for the
fulfilment of the different monitoring tasks will be comprised in this dataset
as well.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Sensor data gathered from the Mobile
Mapping System (MMS) owned by UVIGO.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
UVIGO; N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3:
-Task 3.1 (Data acquisition).
-Task 3.2 (Data pre-processing).
-Task 3.3 (Data processing and automation of monitoring)
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Point cloud data from LiDAR sensors will be produced in real time when the
monitoring of the infrastructures is carried out. The metadata of this
information, stored in ‘.las’ format, is documented at
_http://www.asprs.org/wp-content/uploads/2019/03/LAS_1_4_r14.pdf_
Imagery will be produced together with the point cloud data, and its metadata
will follow the specifications of the corresponding image file format.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Data recorded from the different sensors of the MMS will be stored in
standard formats (a minimal reading sketch is given after this table):
* Point cloud data obtained from the LiDAR sensors will be stored either in a standard binarized format (.las) or (less likely) as plain text (.txt).
* Imagery will be stored in standard image file formats (.jpg, .tiff…)
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The recorded data will be used for the monitoring of the infrastructures
within the case studies of the project. The raw data acquired by the set of
sensors equipped in the monitoring system will be processed to extract
meaningful information about the infrastructure that can feed different
attributes of the Infrastructure Information Model that is being developed in
Task 3.3, and also for three-dimensional visualization of the monitored
infrastructure.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Only the partner in charge of the data collection will have access to the raw
data of the dataset. The results of the data processing tasks (mainly
attribute fields required by the Infrastructure Information Model) will be
shared with other members as they will be integrated into the SAFEWAY
database. Any relevant three-dimensional visualization of the data could be
made public for presenting final results.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Data sharing and re-use at the end of the project will be subject to the
permission of the infrastructure owners. Nevertheless, data will be available
for research purposes (development of future data processing algorithms)
provided that datasets are fully anonymized in such a way that they cannot be
associated with real structures.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Data collected in this dataset will not intentionally include any personal
data. In the event of an identifiable individual within the imagery part of
the dataset, these data will be pre-processed to ensure that they are
anonymised or pseudonymised.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy with administrative and financial matters, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data will be retained for five years after the project ends, and its
destruction will always comply with EU and national legislation.
</td> </tr> </table>
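The standards row of the table above notes that point clouds are stored in the ‘.las’ format. As a minimal sketch, assuming the open-source laspy library (not named in the DMP itself) and a hypothetical file name, such a file and its header metadata could be inspected as follows:

```python
# Minimal sketch: inspecting a .las point cloud with the laspy library
# (an assumed third-party dependency: pip install laspy).
import laspy

las = laspy.read("mms_scan.las")  # hypothetical file from the MMS LiDAR sensor

# Header metadata of the kind described in the ASPRS LAS 1.4 specification
print("LAS version:", las.header.version)
print("Point count:", las.header.point_count)

# Coordinates are exposed as scaled arrays
print("First point (x, y, z):", las.x[0], las.y[0], las.z[0])
```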
## Dataset No 2: Historic weather dataset
<table>
<tr>
<th>
**Historic weather dataset**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
IPMA’s Portugal Weather Dataset.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Instituto Português do Mar e da Atmosfera.
Web: _http://www.ipma.pt/pt/index.html_
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
IPMA.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
IP.
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO.
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO.
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3, tasks 3.1, 3.3.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Observational weather data are continuously generated by the automated
meteorological stations belonging to IPMA’s network with a 1-hour (or
10-minute) frequency. IPMA will provide a subset of such data, limited to the
requested variables, for the considered stations and timespan.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
JSON, XML or SQL formats for storing meteorological data. Hourly numeric
values for each of the 9 required meteorological variables (air temperature,
atmospheric pressure, wind speed and direction, maximum wind gust speed and
direction, relative air humidity, instant rain and solar radiation), for each
of the provided observation weather stations (between 30 and 100), during the
Portuguese meteorological case study time span. A minimal JSON record layout
is sketched after this table.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Input for interpolation and short-term prediction algorithms used in WP3.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Collected data will potentially be used in future scientific research papers.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No personal data.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in UVIGO computer facilities throughout the duration of
the SAFEWAY project.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data will be stored indefinitely, with no planned destruction.
</td> </tr> </table>
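As referenced in the standards row of the table above, one possible JSON layout for a single hourly observation is sketched below; the field names and units are hypothetical, chosen only to cover the nine listed variables:

```python
# Minimal sketch of a possible JSON layout for one hourly IPMA observation.
# Field names, units and values are hypothetical illustrations.
import json

observation = {
    "station_id": "station-001",
    "timestamp": "2019-08-01T12:00:00Z",
    "air_temperature_c": 27.4,
    "atmospheric_pressure_hpa": 1014.2,
    "wind_speed_ms": 3.1,
    "wind_direction_deg": 250,
    "max_gust_speed_ms": 7.8,
    "max_gust_direction_deg": 245,
    "relative_humidity_pct": 61,
    "instant_rain_mm": 0.0,
    "solar_radiation_wm2": 820.5,
}
print(json.dumps(observation, indent=2))
```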
## Dataset No 3: GFS data
<table>
<tr>
<th>
**Global Forecast System (GFS) data**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
GFS Portugal Weather Dataset.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
National Oceanic and Atmospheric Administration’s Global Forecast System
weather forecast model.
Web: _https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/global-forcast-system-gfs_
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
NOAA.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
UVIGO.
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO.
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO.
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3, tasks 3.1, 3.3.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Forecast weather data are generated during the 4 daily execution cycles of the
GFS model, with an hourly temporal resolution, on a global grid with ~11 km
horizontal spatial resolution. UVIGO will gather a subset of such data,
limited to the requested variables, for the considered geographic area and
timespan.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
SQL formats for storing meteorological data. Hourly numeric values for each of
the 9 required meteorological variables (air temperature, atmospheric
pressure, wind speed and direction, maximum wind gust speed and direction,
relative air humidity, instant rain and solar radiation), for each of the
considered grid points (between 1000 and 2000), during the Portuguese
meteorological case study time span. A possible schema is sketched after this
table.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Input for interpolation and short-term prediction algorithms used in WP3.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Collected data will potentially be used in future scientific research papers.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No personal data.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in UVIGO computer facilities throughout the duration of
the SAFEWAY project.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data will be stored indefinitely, with no planned destruction.
</td> </tr> </table>
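Since the standards row of the table above mentions SQL formats for the gridded values, the following minimal sketch shows one possible schema for hourly values per grid point. SQLite is used purely for illustration, and the table and column names are hypothetical:

```python
# Minimal sketch of a possible SQL schema for hourly GFS values per grid point.
# SQLite is used purely for illustration; names and units are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE gfs_forecast (
        grid_point_id            INTEGER,  -- one of the ~1000-2000 grid points
        valid_time               TEXT,     -- forecast hour, ISO 8601
        air_temperature_c        REAL,
        atmospheric_pressure_hpa REAL,
        wind_speed_ms            REAL,
        wind_direction_deg       REAL,
        max_gust_speed_ms        REAL,
        max_gust_direction_deg   REAL,
        relative_humidity_pct    REAL,
        instant_rain_mm          REAL,
        solar_radiation_wm2      REAL,
        PRIMARY KEY (grid_point_id, valid_time)
    )
""")
conn.execute(
    "INSERT INTO gfs_forecast VALUES "
    "(1, '2019-08-01T12:00:00Z', 27.4, 1014.2, 3.1, 250, 7.8, 245, 61, 0.0, 820.5)"
)
print(conn.execute("SELECT COUNT(*) FROM gfs_forecast").fetchone()[0])  # -> 1
```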
## Dataset No 4: Satellite data
<table>
<tr>
<th>
**Satellite data**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Sentinel-1 images
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Copernicus Open Access Hub
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Any **Sentinel data** available through the Sentinel Data Hub will be governed
by the Legal Notice on the use of Copernicus
Sentinel Data and Service Information.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
Planetek Italia
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
Planetek Italia
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
Planetek Italia
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3 – Displacement monitoring of infrastructures (roads and railways)
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
The metadata information is stored within a product.xml file.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
OGC standard formats. Volume: on the order of terabytes.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The Sentinel-1 images will be exploited using the Multi-Temporal
Interferometry (MTInSAR) algorithm through the Rheticus® platform.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Access through the Rheticus® platform, protected by username and password.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No personal data
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
The data will be stored within the Rheticus® cloud service platform, owned by
Planetek Italia, for the entire duration of the project.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
The data will be deleted from the Rheticus® cloud platform five years after
the end of the project.
</td> </tr> </table>
## Dataset No 5: Experts interviews
<table>
<tr>
<th>
**EXPERTS INTERVIEWS**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The data contain transcriptions and notes from expert interviews with
researchers and policy makers. Interviews will be conducted in person, by
phone (or Skype), or in written form. The dataset also includes findings from
completed/ongoing EU projects.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Interviews with experts
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4 and 6
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Production August 2019, anonymised data stored on secure server
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Word documents
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Gather state-of-the-art knowledge on risk tolerance, aspects of psychology and
behaviour of different user groups.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Scientific articles
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Written informed consent will be obtained from data subjects.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy with administrative and financial matters, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will always comply with EU and national legislation.
</td> </tr> </table>
## Dataset No 6: Data on risk tolerance
<table>
<tr>
<th>
**DATA ON RISK TOLERANCE**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This includes the evaluation of risk tolerance of different actors and
scheduling for use in focus groups, and follow-up surveys with different user
representatives.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Focus groups and surveys
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, 6
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Production circa Jan 2020, anonymised data stored on secure server
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Word documents
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Gather knowledge on risk tolerance, aspects of psychology and behaviour of
different user groups.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Scientific articles
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Written informed consent will be obtained from data subjects.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy with administrative and financial matters, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will always comply with EU and national legislation.
</td> </tr> </table>
## Dataset No 7: Sociotechnical system analysis
<table>
<tr>
<th>
**SOCIOTECHNICAL SYSTEM ANALYSIS**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Selected cases will be documented to represent a range of event types
occurring in Europe. Interviews and template analysis will be conducted with
people both managing and caught up in the extreme events studied.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Document analyses
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4 and 6
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Production circa June 2020, anonymised data stored on secure server
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Word documents
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Gather knowledge on risk tolerance, aspects of psychology and behaviour of
different user groups.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Scientific articles, report
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy with administrative and financial matters, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will always comply with EU and national legislation.
</td> </tr> </table>
## Dataset No 8: Infrastructure assets data
<table>
<tr>
<th>
**INFRASTRUCTURE ASSETS DATA**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Database of infrastructures with identification, conservation state,
inspections and structural detailing
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Infraestruturas de Portugal; Ferrovial; Network Rail
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Infraestruturas de Portugal; Ferrovial; Network Rail
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
University of Minho; University of Cambridge; Infrastructure Management
Consultants GmbH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
University of Minho; University of Cambridge; Infrastructure Management
Consultants GmbH
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
University of Minho
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5 – Task 5.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Development of predictive models for projecting risks of future infrastructure
damage, shutdown and deterioration. Based on the database, and analytical and
stochastic/probabilistic approaches, the most suitable models for risk and
impact projections will be selected.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
The database is to be used by members of the Consortium, and the derived
results are to be reviewed by the partner owning the data prior to
publication.
<tr>
<td>
Embargo periods (if any)
</td>
<td>
Not applicable.
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
There is no personal data
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in a physical external disk for storage during the
duration of the project. A copy will also be accessible on a restricted online
server for the partners involved in Task 5.1.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data will be retained for five years after the project ends and will always
be destroyed in compliance with EU and national legislation.
</td> </tr> </table>
## Dataset No 9: Information on the value systems
<table>
<tr>
<th>
**INFORMATION ON THE VALUE SYSTEM**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The information on the value systems, decision making processes and key
performance indicators that transportation infrastructure agencies and
stakeholders within the project use in management of their assets.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
On-line survey developed on a freeware software platform.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
IMC
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
IMC
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
IMC
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
IMC
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP6, Task 6.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
None.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
.xls (MS Excel format).
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used in WP6 for the development of a robust decision support
framework for short- and medium- to long-term maintenance planning.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Currently confidential. Perhaps public after the project completion.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
See under data access policy.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
See under data access policy.
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Yes, there are. It is planned to include a related consent request as part of
the survey, so that subjects can give their informed consent.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy with administrative and financial matters, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will always comply with EU and national legislation.
</td> </tr> </table>
## Dataset No 10: Stakeholders contact collection
<table>
<tr>
<th>
**STAKEHOLDERS CONTACT COLLECTION**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The data contain information on the main stakeholders of SAFEWAY along the
major stakeholder groups. They include infrastructure managers, operators,
public administrations, researchers, practitioners, policy makers. The contact
information that is collected includes the name, institutional affiliation,
position, email address, phone number and office address.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Archives of SAFEWAY partners.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
UVIGO; N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP10:
-Task 10.1 (Dissemination, communication and IP management).
-Task 10.2 (Standardization activities)
-Task 10.3 (Technology transfer activities)
-Task 10.4 (Collaboration and clustering)
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to, a CSV, TXT or Excel file
(a minimal CSV export sketch is given after this table).
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
This dataset is only used to disseminate the results obtained through SAFEWAY
project.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
As this dataset can contain personal data, only the partner in charge of the
data collection will have access to the raw data. Data that are publicly
available will be shared among consortium partners.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
This dataset can include some personal data. Before collecting any personal
data that are not publicly available, informed consent will be obtained from
the data subjects.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy with administrative and financial matters, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will always comply with EU and national legislation.
</td> </tr> </table>
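As a minimal sketch of the CSV export mentioned in the standards row above (the file name, field names and sample record are hypothetical, and only consented or publicly available data would be handled this way):

```python
# Minimal sketch: exporting the stakeholder contact list to CSV.
# File name, field names and the sample record are hypothetical.
import csv

contacts = [
    {"name": "Jane Doe", "affiliation": "Example Agency",
     "position": "Engineer", "email": "jane.doe@example.org"},
]

with open("stakeholder_contacts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "affiliation", "position", "email"])
    writer.writeheader()
    writer.writerows(contacts)
```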
## Dataset No 11: Workshop data
<table>
<tr>
<th>
**WORKSHOPS DATA**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The data contain contact information of
SAFEWAY workshops attendees, provided during their registration in the event.
The contact information that is collected includes the name, institutional
affiliation, position, email address, phone number and office address.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Archives of SAFEWAY partners.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
UVIGO; N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP10:
-Task 10.3 (Technology transfer activities)
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
This dataset is only used to disseminate the results obtained through SAFEWAY
project.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
As this dataset can contain personal data, only the partner in charge of the
data collection will have access to the raw data. Data that are publicly
available will be shared among consortium partners.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
This dataset can include some personal data. Before collecting any personal
data that are not publicly available, informed consent will be obtained from
the data subjects.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy with administrative and financial matters, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will always comply with EU and national legislation.
</td> </tr> </table>
# Outlook Towards Next DMP
As stated in Table 1 of the Introduction, the next iteration of the DMP will
be prepared in month 18 of the project, just after WP2 finishes. By then,
every work package and its tasks (with the exception of WP9 – demonstrative
pilots) will be underway. Several questions that remain unanswered in this DMP
will be addressed in future stages of the project as its different activities
are carried out. Therefore, the upcoming DMP will provide updates regarding
the following topics:
**Table 4:** Planned updates in upcoming DMP versions
<table>
<tr>
<th>
**Category**
</th>
<th>
**Updates in upcoming DMP**
</th> </tr>
<tr>
<td>
Data interoperability
</td>
<td>
* Information regarding data exchange between researchers and organizations.
* Standards employed for allowing data exchange.
</td> </tr>
<tr>
<td>
Data re-use
</td>
<td>
* Data licensing to permit re-use.
* Data availability. Will the data be available for re-use? Will there be an embargo to give time to publish or seek patents?
* Can the data be used by third parties at the end of the project? Will there be any restrictions?
* How long will the data be re-usable?
* Have data quality assurance processes been (or will they be) described?
</td> </tr>
<tr>
<td>
Data allocation
</td>
<td>
* Where (and how) are existing data being stored? What are their cost and potential value?
* Where (and how) will data not yet acquired be stored?
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
- What procedures have been put in place regarding data security (data
recovery, data storage and transfer)?
</td> </tr>
<tr>
<td>
Other aspects
</td>
<td>
- Any other procedures regarding data management which have not been listed.
</td> </tr> </table>
# Update of the Ethical Aspects
At this stage of the project, there are two main ethical aspects to review:
first, the outcome of the continuous monitoring process on ethical aspects,
in particular regarding vehicle data crowdsourcing and the interviews or
surveys carried out during the development of WP4; and second, the report of
the Ethics Mentor.
## Ongoing monitoring
The ongoing monitoring of SAFEWAY's ethical aspects has focused, in the first
place, on identifying those tasks with relevance for data protection within
the different activities of the project. It was concluded that the data
protection risk posed by SAFEWAY is fairly limited, as the only task that
might involve personal data collection is related to dissemination activities
in workshops and meetings, and an explicit and verifiable consent will be
obtained prior to any data collection, as required by the GDPR. Procedures for
the collection, processing, storage, retention and destruction of data have
been defined to ensure their compliance with the legislative framework.
Furthermore, for those activities that require it (interviews and surveys), an
informed consent form together with an information sheet about the research
study were defined (see Appendices).
## Report of the Ethics Mentor
Throughout the duration of the project, the Ethics Mentor will organize the
internal monitoring of the implementation of the ethical protocol by the
consortium. This section of the Data Management Plan will include a report
from the Ethics Mentor to be updated, according to the Grant Agreement-Annex
1b-section 5.1.2, in M18, M30, M42.
# Acknowledgements
This deliverable was carried out in the framework of the GIS-Based
Infrastructure Management System for Optimized Response to Extreme Events of
Terrestrial Transport Networks (SAFEWAY) project, which has received funding
from the European Union’s Horizon 2020 research and innovation programme under
grant agreement No 769255.
**SAFEWAY**
GIS-BASED INFRASTRUCTURE MANAGEMENT SYSTEM
FOR OPTIMIZED RESPONSE TO EXTREME EVENTS OF TERRESTRIAL TRANSPORT NETWORKS
**Grant Agreement No. 769255**
**Data Management Plan (DMP) V1** - **Appendices**
WP 1 Overall project coordination
<table>
<tr>
<th>
**Deliverable ID**
</th>
<th>
**D1.3**
</th> </tr>
<tr>
<td>
**Deliverable name**
</td>
<td>
**Data Management Plan (DMP) V1**
</td> </tr>
<tr>
<td>
Lead partner
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Contributors
</td>
<td>
DEMO, PNK, UMINHO, IMC
</td> </tr> </table>
**PUBLIC**
PROPRIETARY RIGHTS STATEMENT
This document contains information, which is proprietary to the SAFEWAY
Consortium.
Neither this document nor the information contained herein shall be used,
duplicated or communicated by any means to any third party, in whole or in
parts, except with prior written consent of the SAFEWAY Consortium.
**Appendices Contents**
# Appendix 1: Informed Consent Form
# Appendix 2: Protection of Personal Data within SAFEWAY
LEGAL NOTICE
The sole responsibility for the content of this publication lies with the
authors. It does not necessarily reflect the opinion of the European Union.
Neither the Innovation and
Networks Executive Agency (INEA) nor the European Commission are responsible
for any use that may be made of the information contained therein.
<table>
<tr>
<th>
**Appendix 1.**
</th>
<th>
**Informed Consent Form**
</th> </tr> </table>
GIS-Based Infrastructure Management System for Optimized
Response to Extreme Events of Terrestrial Transport Networks
### INFORMED CONSENT FORM
<table>
<tr>
<th>
Project acronym
</th>
<th>
SAFEWAY
</th> </tr>
<tr>
<td>
Project name
</td>
<td>
GIS-BASED INFRASTRUCTURE MANAGEMENT SYSTEM
FOR OPTIMIZED RESPONSE TO EXTREME EVENTS OF
TERRESTRIAL TRANSPORT NETWORKS
</td> </tr>
<tr>
<td>
Grant Agreement no.
</td>
<td>
769255
</td> </tr>
<tr>
<td>
Project type
</td>
<td>
Research and Innovation Action
</td> </tr>
<tr>
<td>
Start date of the project
</td>
<td>
01/09/2018
</td> </tr>
<tr>
<td>
Duration in months
</td>
<td>
42
</td> </tr>
<tr>
<td>
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under Grant Agreement No 769255.
</td> </tr>
<tr>
<td>
Disclaimer: This document reflects only the views of the author(s). Neither
the Innovation and Networks Executive Agency (INEA) nor the European
Commission is in any way responsible for any use that may be made of the
information it contains.
</td> </tr> </table>
**SAFEWAY event:**
**Date: Location:**
**General Data Protection Regulation (GDPR) Compliance**
Data that is collected and processed for the purposes of facilitating and
administering SAFEWAY workshops and events is subject to the EU General Data
Protection Regulation (GDPR), which became applicable on 25 May 2018. Please
see the document “POPD SAFEWAY.pdf” for further guidance on our data
management policies. To process your application, we require your consent to
the following (please check each box as appropriate).
<table>
<tr>
<th>
**Please circle as necessary**
</th> </tr>
<tr>
<td>
I give my consent for all personal information provided by registering to the
SAFEWAY ( _workshop/event name_ ) to be stored and processed by relevant
SAFEWAY project partners for Data Management Purposes.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for all personal information provided by registering to the
SAFEWAY ( _workshop/event name_ ) to be stored and processed by SAFEWAY
partners for the purpose of administering the SAFEWAY ( _workshop/event name_
).
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for all personal information provided by registering to the
SAFEWAY ( _workshop/event name_ ) to be processed by the SAFEWAY (
_workshop/event name_ ) organizers to evaluate and decide on my application
where workshop places are limited.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for all personal information provided by registering to the
SAFEWAY ( _workshop/event name_ ) to be stored and processed by UVIGO for the
purpose of overall coordination of the SAFEWAY project.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for all personal information provided by registering to the
SAFEWAY ( _workshop/event name_ ) to be passed to UVIGO and FERROVIAL for
storage and processing for the purposes of supporting exploitation and
dissemination of workshop related information.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for the following personal information to be passed on to
the European Commission in case my workshop application is approved: name,
surname, title, organization, position, email address, phone number.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for the following personal information to be published on
the Internet and elsewhere for the purposes of project transparency: name,
surname and organisation affiliation.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for my e-mail address to be published on the Internet or
elsewhere to assist others to contact me (optional).
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr> </table>
**_PARTICIPANT CERTIFICATION_ **
I have read the _PROTECTION OF PERSONAL DATA WITHIN SAFEWAY_ and have answered
all the questions in the table above. I have had the opportunity to ask, and I
have received answers to, any questions I had regarding the protection of my
personal data. By my signature I affirm that I am at least 18 years old and
that I have received a copy of this Consent and Authorization form.
…………………………………………………………………………………………………
Name and surname of participant
…………………………………………………………………………………………………
Place, date and signature of participant
**NB: Attach this completed form to your SAFEWAY _(workshop/event name)_
application. **
Further information: for any additional information or clarification please
contact SAFEWAY coordinators at UVIGO ( [email protected]_ ). This consent
form does not remove any of your rights under GDPR but provides us with the
necessary permissions to process your application and manage SAFEWAY workshops
and events.
<table>
<tr>
<th>
**Appendix 2.**
</th>
<th>
**Protection of Personal Data Within SAFEWAY**
</th> </tr> </table>
GIS-Based Infrastructure Management System for Optimized
Response to Extreme Events of Terrestrial Transport Networks
### PROTECTION OF PERSONAL DATA WITHIN SAFEWAY
<table>
<tr>
<th>
Project acronym
</th>
<th>
SAFEWAY
</th> </tr>
<tr>
<td>
Project name
</td>
<td>
GIS-BASED INFRASTRUCTURE MANAGEMENT SYSTEM
FOR OPTIMIZED RESPONSE TO EXTREME EVENTS OF
TERRESTRIAL TRANSPORT NETWORKS
</td> </tr>
<tr>
<td>
Grant Agreement no.
</td>
<td>
769255
</td> </tr>
<tr>
<td>
Project type
</td>
<td>
Research and Innovation Action
</td> </tr>
<tr>
<td>
Start date of the project
</td>
<td>
01/09/2018
</td> </tr>
<tr>
<td>
Duration in months
</td>
<td>
42
</td> </tr>
<tr>
<td>
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under Grant Agreement No 769255.
</td> </tr>
<tr>
<td>
Disclaimer: This document reflects only the views of the author(s). Neither
the Innovation and Networks Executive Agency (INEA) nor the European
Commission is in any way responsible for any use that may be made of the
information it contains.
</td> </tr> </table>
### PROTECTION OF PERSONAL DATA WITHIN SAFEWAY
**_INTRODUCTION_ **
The SAFEWAY project assumes the responsibility of complying with current
legislation on data protection, guaranteeing the protection of personal
information in a lawful and transparent manner in accordance with Regulation
(EU) 2016/679 of the European Parliament and of the Council of 27 April 2016
on the protection of natural persons with regard to the processing of personal
data and on the free movement of such data (GDPR), and with the national
regulations on the protection of personal data.
This document describes in detail the circumstances and conditions of the
processing of personal data and the rights of the persons concerned.
As coordinator of the action, the University of Vigo is the data controller
for all personal data collected for workshops and other communication
and dissemination events. The University of Vigo has appointed Pintos &
Salgado Abogados S.C.P. as Data Protection Officer, with address at: Avda.
de Arteixo, 10, 1.o izq., 15004 A Coruña ( [email protected]_ ).
**_PURPOSE:_ **
SAFEWAY partners will only collect the personal data strictly necessary in
relation to the purposes for which they are processed, in accordance with the
principles set out in Article 5 of the GDPR. The information necessary to
guarantee fair and transparent processing will be provided to the persons
concerned at the moment of collection, in accordance with the provisions of
Articles 13 and 14 of the GDPR.
The data collected by SAFEWAY for dissemination activities aim to reach
the widest audience, to disseminate SAFEWAY project outcomes and to communicate
the knowledge gained by its partners during the project.
The workshops and meetings with stakeholders are intended to present and discuss
all project results, not only among project partners but also with
stakeholders and other target groups. The events will be targeted at
technology innovators in infrastructure management, including end-users,
materials and technology suppliers, the research community, regulatory agencies,
standardisation bodies and all potential players interested in fields
associated with the resilience of transport infrastructure, with a special
focus on applications in railways and roads.
**_PROCESSING OF PERSONAL DATA:_ **
Your Personal Data are provided freely. Where specified in the
registration form, the provision of Personal Data is necessary to provide you
with the services expected from the dissemination event and access to
SAFEWAY project results. If you refuse to communicate these Data, it may be
impossible for the Data Controller to fulfil your request. By contrast,
with reference to Personal Data not marked as mandatory, you can refuse to
communicate them, and this refusal shall not have any consequence for your
participation in and attendance at SAFEWAY dissemination activities.
The provision of your Personal Data for the publication of your contact details
on the Internet or elsewhere for networking implemented by the Data Controller
is optional; consequently, you can freely decide whether or not to give your
consent, or withdraw it at any time. If you decide not to give your consent,
the SAFEWAY dissemination managers will not be able to carry out the
aforementioned activities.
SAFEWAY will never collect any special categories of Personal Data (personal
data revealing racial or ethnic origin, political opinions, religious or
philosophical beliefs, or trade union membership, genetic data, biometric
data, data concerning health or data concerning a natural person’s sex life or
sexual orientation – Art. 9 of the GDPR). SAFEWAY expressly asks you to avoid
providing these categories of Data. In the event that you voluntarily
choose to give us these Data, SAFEWAY may decide not to process them, or to
process them only with your specific consent or, in any event, in compliance
with the applicable law.
In the event that third-party Personal Data are accidentally communicated to
SAFEWAY, you become an autonomous Data Controller and assume
all the related obligations and responsibilities provided by law. In this
regard, SAFEWAY is exempt from any liability arising from claims or
requests made by third parties whose Data have been processed by us as a result
of your spontaneous communication of them to us, in violation of the law on
the protection of Personal Data. In any event, if you provide or process third-
party Personal Data, you must guarantee, assuming any related
responsibility, that this particular processing is based on a
legal basis pursuant to Art. 6 of the GDPR.
**_DATA STORAGE AND RETENTION:_ **
The personal data provided will be kept for the time necessary to fulfil the
purpose for which they are requested and to determine any liabilities
that could derive from that purpose, in addition to the periods
established in the regulations on files and documentation. Unless otherwise
stated, the data will be retained for a period of five years after the end of
the project, as they can support the reporting of some of the implemented
activities.
During this period, the data will be stored in a secured area accessible to a
limited number of researchers. SAFEWAY data managers will apply appropriate
technical and organizational measures to guarantee a level of security
appropriate to the risk, in accordance with the provisions of Article 32 of
the GDPR. The system also allows the use of the data to be tracked. Five years
after the end of the project, the data will be destroyed under the supervision
of the Data Protection Officer at the University of Vigo, as coordinating
organization of SAFEWAY.
**_RIGHTS OF THE DATA SUBJECT:_ **
Any person, as the holder of personal data, has the following rights
recognized in the terms and under the conditions indicated in articles 15-22
of the GDPR:
* Right of Access: obtain from the controller confirmation as to whether or not personal data concerning you are being processed, more information on the processing and copy of the personal data processed.
* Right to Rectification: obtain from the controller, without undue delay, the rectification of inaccurate personal data concerning you, and have incomplete personal data completed.
* Right to Erasure: obtain from the controller, without undue delay, the erasure of personal data concerning you.
* Right to Restriction of Processing: obtain the restriction of the processing in the event you assume that your data are incorrect, the processing is illegal or if these data are necessary for the establishment of legal claims.
* Right to Data Portability: receive the personal data concerning you, which you have provided to a controller, in a structured, commonly used and machine-readable format, in order to transfer these data to another Controller.
* Right to Object: object, on grounds relating to your particular situation, to the processing of personal data concerning you, unless the controller demonstrates compelling legitimate grounds for the processing. You can also object to the processing of your data where they are processed for direct marketing purposes.
* Right to withdraw the Consent: withdraw the consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal.
Data subjects may exercise their rights free of charge and have the right
to receive a response, within the deadlines established by current data
protection legislation, by contacting the SAFEWAY project coordinators at:
[email protected]_ , or by contacting the Data Protection Officer at:
[email protected]_ .
**_CONTACT PERSON_ **
For any additional information or clarification please contact SAFEWAY
coordinators at UVIGO ( [email protected]_ ). This consent form does not
remove any of your rights under GDPR but provides us with the necessary
permissions to process your application and manage SAFEWAY workshops and
events.
1. Introduction
This deliverable outlines the strategy for data management to be followed
throughout the course of the SlideWiki project by formulating a Data
Management Plan for the datasets used within the context of the project.
Moreover, this plan includes descriptions of the dataset lifecycle,
stakeholder roles, best practices and guidelines for data management, and the
templates used for data management in the SlideWiki project. This plan will be
updated at every milestone cycle.
Based on the Guidelines for FAIR Data Management in H2020 1 , Data Management
in H2020 2 and the Linked Data Life Cycle (LDLC) 3 , we present the data
management guidelines for SlideWiki as follows:
1. Data Reference Name – a naming policy for datasets.
2. Dataset Content, Provenance and Value – general descriptions of a dataset, indicating whether it is aggregated or transformed from existing datasets, or original datasets from data publishers.
3. Standards and Metadata – descriptions about the format and underlying standards, under which the metadata shall be provided to enable machine-processable descriptions of dataset (supporting data transformation of Any2RDF and RDF2Any).
4. Data Access and Sharing – it is envisaged that all datasets are freely accessed under the Open Data Commons Open Database License (ODbL). Exceptions shall be stated clearly.
5. Archiving, Maintenance and Preservation – locations of physical repository of datasets shall be listed for each dataset.
This deliverable will be updated at the completion of every milestone cycle,
in case significant changes have been made, aiming thus to take into account
any additional decisions or newly identified best practices.
Briefly stated, the Data Management Plan (DMP) outlines the datasets that will
be generated or collected during the project's lifetime, highlighting the
following information:
1. How datasets will be exploited/shared/licensed. For those that cannot be shared, the reasons why are explained.
2. Which standards are followed for publishing datasets.
3. Which strategies are used for curation and archiving of datasets.
The first version of this deliverable outlines a strategy for data
management to be followed throughout the course of the project, in terms of
data management guidelines and a template that can be instantiated for all
datasets corresponding to project outputs.
This deliverable will be periodically updated to take account of additional
decisions or best practices adopted during the project lifetime. At the end of
the project, it will include individual Data Management Plans for the ensuing
datasets (or groups of related datasets). The plan addresses a number of
questions related to hosting the data (persistence), appropriately describing
the data (data value, relevant audience for re-use, discoverability), access
and sharing (rights, privacy, limitations) and information about the human and
physical resources expected to carry out the plan.
1.1 Purpose and Scope
A Data Management Plan (DMP) is a formal document that specifies ways of
managing data throughout a project, as well as after the project is completed.
The purpose of DMP is to support the life cycle of data management, for all
data that is/will be collected, processed or generated by the project. A DMP
is not a fixed document, but evolves during the lifecycle of the project.
SlideWiki aims to increase the efficiency, effectiveness and quality of
education in Europe by enabling the creation, dissemination and use of widely
available, accessible, multilingual, timely, engaging and high-quality
educational material (i.e., OpenCourseWare). More specifically, the open-
source SlideWiki platform (available at SlideWiki.org) will enable the
creation, translation and evolution of highly-structured, remixable OCW that
can be widely shared (i.e. crowdsourced). Similarly to Wikipedia for
encyclopaedic content, SlideWiki allows
1. to collaboratively create comprehensive OCW (curricula, slide presentations, selfassessment tests, illustrations etc.) online in a crowdsourcing manner,
2. to semi-automatically translate this content into more than 50 different languages and to improve the translations in a collaborative manner and
3. to support engagement and social networking of educators and learners around that content. SlideWiki is already used by hundreds of educators and thousands of learners. Several hundred comprehensive course materials are available in SlideWiki in dozens of languages.
The major obstacle to these aims is the heterogeneous nature of the data
formats used by various educational institutions, which vary extensively.
Examples of the most popular formats include CSV, XLS, XML, PDF and RDB, as
well as presentation formats such as PowerPoint (ppt), OpenDocument
Presentation (odp), PDF and ePub.
By applying the DCAT-AP standard for dataset descriptions and making them
publicly available, the SlideWiki DMP covers the five key aspects (dataset
reference name; dataset description; standards and metadata; access, sharing
and re-use; archiving and preservation), following the H2020 guidelines on
Data Management 4 .
While the collaborative authoring of engaging, inclusive, standards-compliant
and multilingual OpenCourseWare content is deemed a crucial and still
neglected component of advancing educational technology, there is a plethora
of systems covering other stages of educational value chains, such as Learning
Management (e.g. ILIAS, Moodle, OLAT), Learning Content, Learning Delivery
(e.g. OpenCast, FutureLearn), Learning Analytics systems and social networks.
Instead of trying to incorporate as much functionality as possible in a single
system, SlideWiki will facilitate the exchange of learning content and
educational data between different systems in order to establish sustainable
educational value chains and learning eco-systems. Open (learning metadata)
standards such as SCORM are first steps in this direction, but truly engaging,
inclusive and multi-lingual value chains can only be realized if we take
content structuring to the next level and employ techniques such as
interactive HTML5 (which can meanwhile be rendered on almost all devices) and
fine-grained semantic structuring and annotation of OpenCourseWare. In order
to implement this concept, a relational multi-versioning data structure will
be employed, which is dynamically mapped to an ontology, following the MVC
paradigm and exposing Linked Data.
1.2 Structure of the Deliverable
The rest of this deliverable is structured as follows: Section 2 presents the
data life-cycle of SlideWiki, the related stakeholders and 13 best practices
for data management. Section 3 describes basic information required for
datasets of the Slide Wiki project, and relevant guidelines. Section 4
presents DMP templates for data management. Each dataset has a unique
reference name. Each data source and each of the transformed form will be
described with metadata, which includes technical descriptions about
procedures and tools used for the transformation, and common-sense
descriptions for external users to better understand the published data. The
Open Data Commons Open Database License (ODBL) is taken as the default data
access, sharing, and re-use policies of the datasets used within the context
of SlideWiki. Physical location of datasets shall be provided.
2. Data Lifecycle
The SlideWiki platform is a Linked Data platform, whose data ingestion and
management follow the Linked Data Life Cycle (LDLC). The LDLC describes the
technical process required to create datasets and manage their quality. To
ease the process, best practices are described to guide dataset contributors
in the SlideWiki platform.
Formerly, data management was executed by a single person or working group
that also took responsibility for the data. With the popularity of the Web and
widely distributed data sources, data management has shifted to a service
provided by a large stakeholder ecosystem.
2.1 Stakeholders
For the SlideWiki platform, the stakeholders who influence the data management
belong to the following categories:
1. **Data Source Publisher/Owner:** This category refers to organisations providing datasets to the SlideWiki platform. The communication between SlideWiki and DSPO is limited to two cases: SlideWiki downloads data from DSPO, and DSPO uploads data to SlideWiki.
2. **Data End-User:** This category refers to persons and organisations who use the SlideWiki platform in order to access, view and share OpenCourseWare (OCW).
3. **Data Wrangler:** This category refers to persons who integrate heterogeneous datasets into the SlideWiki platform. They are able to understand both the terminology used in the datasets and the SlideWiki data model, and their role is to ensure that the data integration is semantically correct.
4. **Data Analyser:** This category refers to persons who provide query results to end-users of SlideWiki. They may need to use data mining software.
5. **System Administrator and Platform Developer:** This category refers to persons responsible for developing and maintaining the SlideWiki platform.
#### 2.2 The Generic SlideWiki Data Value Chain
Within the context of SlideWiki, we structure the generic data value chain as
follows:
1. _Discover_. An end-user query can require data to be collected from many datasets located within different entities and potentially also distributed in different countries. Datasets hence need to be located and evaluated. For SlideWiki, the evaluation of datasets results in dataset metadata, which is one of the main best practices in the Linked Data community. DCAT-AP is used as the metadata vocabulary (a minimal sketch of such a record follows this list).
2. _Ingest and make the data machine processable_. In order to realise the value-creation stage (integrate, analyse and enrich), datasets in different formats are transformed into a machine-processable format; in the case of SlideWiki, this is the RDF format. The conversion pipeline from heterogeneous datasets into an RDF dataset is fundamental, and a Data Wrangler is responsible for the conversion process. For CSV datasets, additional contextual information is required to make the semantics of the dataset explicit.
3. _Persist_. Persistence of datasets happens throughout the whole data management process. When a new dataset comes into the SlideWiki platform, the first persistence step is to back up this dataset and the result of its ingestion. Later data persistence is largely determined by the data analysis process. Two strategies used in data persistence are (a) keeping a local copy – copying the dataset from the DSPO to the SlideWiki platform; and (b) caching, which enhances data locality and increases the efficiency of data management.
4. _Integrate, analyse, enrich_. One of the data management tasks is to combine a variety of datasets and derive new insights. Data integration needs both domain knowledge and technical know-how. This is achieved by using a Linked Data approach enriched with a shared ontology. The Linked Data ETL process starts with the extraction of RDF triples from heterogeneous datasets and stores the extracted RDF data in a store that is available for SPARQL querying. The RDF store can be manually updated. Then interlinking and data fusion are carried out, which use ontologies from several public Linked Data sources and connect to the Web of Data. In contrast to a relational data warehouse, the Web of Data is a distributed knowledge graph. Based on Linked Data technologies, new RDF triples can be derived, and new enrichment is possible. Evaluation is necessary to control the quality of new knowledge, which in turn may result in searching for more data sources and performing further data extraction.
5. _Expose_ . The result of data analysis will be exposed to end-users in a clear, salient, and simple way. The SlideWiki platform is a Linked Data platform, whose outcomes include (a) metadata description about the results; (b) a SPARQL endpoint for the metadata; (c) a SPARQL endpoint for the resulting datasets; (d) a user-friendly interface for the above results.
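As an illustration of the discovery step, the following minimal sketch shows what a DCAT-AP record for a SlideWiki dataset could look like, built with Python's rdflib (version 6 or later is assumed); the dataset URI and all property values are hypothetical placeholders, not actual SlideWiki identifiers:

```python
# A minimal sketch of a DCAT-AP discovery-metadata record, using rdflib.
# All URIs and property values below are hypothetical placeholders.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

ds = URIRef("http://slidewiki.org/datasets/course-usage")  # hypothetical URI
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("Course usage statistics", lang="en")))
g.add((ds, DCTERMS.description,
       Literal("Aggregated usage statistics of SlideWiki courses.", lang="en")))
g.add((ds, DCAT.keyword, Literal("OpenCourseWare")))
g.add((ds, DCTERMS.license, URIRef("http://opendatacommons.org/licenses/odbl/")))

# rdflib 6+ returns the serialisation as a string.
print(g.serialize(format="turtle"))
```

Such records can then be harvested into a dataset catalogue and queried alongside the data itself.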
#### 2.3 Best Practices
The SlideWiki platform is a Linked Data platform. Considering the best
practices for publishing Linked Data, the following 13 stages are recommended
in order to publish a standalone dataset; 6 of them are vital (marked as
"must").
1. _Provide descriptive metadata with locale parameters:_ Metadata must be provided for both human users and computer applications. Metadata provides the DEU with information to better understand the meaning of the data. Providing metadata is a fundamental requirement when publishing data on the Web, because the DSPO and the DEU may be unknown to each other; it is therefore essential to provide information that helps the DEU – both human users and software systems – to understand the data, as well as other aspects of the dataset. Metadata should include the following overall features of a dataset: the title and a description of the dataset; the keywords describing the dataset; the date of publication of the dataset; the entity responsible (publisher) for making the dataset available; the contact point of the dataset; the spatial coverage of the dataset; the temporal period that the dataset covers; the themes/categories covered by the dataset. Locale-parameters metadata should include the following information: the language of the dataset; the formats used for numeric values, dates and time.
2. _Provide structural metadata:_ Information about the internal structure of a distribution must be described as metadata, for this information is necessary for understanding the meaning of the data and for querying the dataset.
3. _Provide data license information:_ License information is essential for DEU to assess data. Data re-use is more likely to happen, if the dataset has a clear open data license.
4. _Provide data provenance information_ : Data provenance describes data origin and history. Provenance becomes particularly important when data is shared between collaborators who might not have direct contact with one another.
5. _Provide data quality information_ : Data quality is commonly defined as “fitness for use” for a specific application or use case. The machine readable version of the dataset quality metadata may be provided according to the vocabulary that is being developed by the DWBP working group, i.e., the Data Quality and Granularity vocabulary.
6. _Provide versioning information_ : Version information makes a dataset uniquely identifiable. The uniqueness enables data consumers to determine how data has changed over time and to identify specifically which version of a dataset they are working with.
7. _Use persistent URIs as identifiers_ : Datasets must be identified by a persistent URI. Adopting a common identification system enables basic data identification and comparison processes by any stakeholder in a reliable way. They are an essential precondition for proper data management and re-use.
8. _Use machine-readable standardised data formats_ : Data must be available in a machine-readable standardised data format that is adequate for its intended or potential use.
9. _Data Vocabulary_: Standardised terms should be used to provide metadata. Vocabularies should be clearly documented, shared in an open way, and include versioning information. Existing reference vocabularies should be re-used where possible.
10. _Data Access:_ Providing easy access to data on the Web enables both humans and machines to take advantage of the benefits of sharing data using the Web infrastructure. Data should be available for bulk download. APIs for accessing data should follow REST (REpresentational State Transfer) architectural approaches. When data is produced in real-time, it should be available on the Web in real-time. Data must be available in an up-to-date manner and the update frequency made explicit. If data is made available through an API, the API itself should be versioned separately from the data. Old versions should continue to be available.
11. _Data Preservation:_ Data depositors willing to send a data dump for long term preservation must use a well-established serialisation. Preserved datasets should be linked with their "live" counterparts.
12. _Feedback_ : Data publishers should provide a means for consumers to offer feedback.
13. _Data Enrichment_ : Data should be enriched whenever possible, generating richer metadata to represent and describe it.
3. Data Management Plan Guidelines
In this section, we describe guidelines of the DMP of SlideWiki. In order to
enable the export of SlideWiki content on Data Web, as a proof -of-concept the
RDB2RDF mapping tool Triplify 5 is employed in order to map SlideWiki
content to RDF and publish the resulting data on the Data Web. The SlideWiki
Triplify Linked Data interface will soon be available. With regard to Social
Networking, at the current stage, SlideWiki supports limited social networking
activities. In the future, it is envisaged that SlideWiki users will be able
follow other users, slides and decks, they can discuss and comment on slides
and decks, login/register to system using their Facebook account and share
slides/decks on popular social networks (e.g. Facebook, LinkedIn, G+,
Twitter).
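To make the RDB2RDF idea concrete, the following minimal sketch mimics what a Triplify-style mapping does: rows returned by a SQL query become RDF triples whose predicates are derived from the column names. The table layout, vocabulary terms and base URI are illustrative assumptions, not SlideWiki's actual schema or Triplify's configuration syntax:

```python
# Illustrative sketch of the RDB2RDF idea behind Triplify: relational rows
# become triples. Table layout and base URI are hypothetical.
import sqlite3

BASE = "http://slidewiki.org/deck/"  # hypothetical base URI for deck resources
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deck (id INTEGER, title TEXT, language TEXT)")
conn.execute("INSERT INTO deck VALUES (1, 'Semantic Web', 'en')")

for deck_id, title, language in conn.execute("SELECT id, title, language FROM deck"):
    subject = f"<{BASE}{deck_id}>"
    # Column names map to (here, Dublin Core) vocabulary terms, as in Triplify configs.
    print(f'{subject} <http://purl.org/dc/terms/title> "{title}" .')
    print(f'{subject} <http://purl.org/dc/terms/language> "{language}" .')
```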
3.1 Privacy and Security
It is a fact that educational data mining needs to cope with large
unstructured (live) data which needs to be handled, transferred and translated
into interpretable structured datasets 6 . Analog to other data sensitive
domains there is the critical question of privacy and (learning) data
protection. Also the irresolution of which data is important from a
pedagogical/technical point of view is still a complex intent and open
question, taking the complex and individual learning process into account.
Collecting such data requires that the learner involved uses campus tools and
platforms that support the tracking of learning actions. These analytics
remain an immature field that has yet to be implemented broadly across a range
of institutional types, student populations and learning technologies.
So-called Learning Record Stores are next-generation tracking and reporting
repositories that support ideas like the Tin Can protocol and its successor
xAPI. Open analytics solutions, such as those provided by the Open Academic
Analytics Initiative 6 (OAAI), are already fostering the collection and
meaningful interpretation of data across learning institutions.
Given that the central aim of this consortium is to provide benefit to the
European community, the project will prefer open data and free, open tools and
provide the resources developed in the project under the Creative Commons
Attribution 4.0 License (CC-BY) 7 . This license allows the learning
material to be shared and adapted for any purpose, even commercially. The only
restriction is attribution: linking to the source and indicating the changes
made. Released in November 2013, CC-BY 4.0 improves its predecessor CC-BY 3.0,
as it is an international license and includes databases.
In order to prevent data loss and to ensure SlideWiki users' privacy, a
sophisticated backup and archiving strategy guaranteeing data security will be
designed and implemented within the context of WP1. Within the context of
T1.3, Privacy, Data Security, Backup and Archiving, all OpenCourseWare content
stored in SlideWiki (be it slides, presentations, questionnaires, diagrams,
images, user data, etc.) is regularly backed up and archived. In SlideWiki,
all content (including versioning histories and prior revisions) will be made
available via Linked Data, SPARQL interfaces, APIs and data dumps. A minimal
example of such a query is sketched below. Incremental updates will be
published, so that interested parties (e.g. large universities, school
authorities) can host their own synchronized SlideWiki mirrors (similar to
services such as arXiv.org or DBLP), while ensuring that all privacy and
data security regulations are enforced.
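As an illustration of such programmatic access, the sketch below queries a SPARQL endpoint with the SPARQLWrapper library; the endpoint URL and the vocabulary used in the query are assumptions for illustration, since the actual SlideWiki interface may differ:

```python
# A minimal sketch of querying a (hypothetical) SlideWiki SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://slidewiki.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?deck ?title WHERE {
        ?deck dct:title ?title .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["deck"]["value"], row["title"]["value"])
```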
D1.8, the privacy and data security report (M28), outlines how SlideWiki
implements all relevant privacy and data security regulations and best
practices.
Moreover, the Statement of Data Protection Conditions can be accessed at the
SlideWiki website 8 ; it provides the following information with regard to
personal data:
“The Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
(Fraunhofer-Gesellschaft) takes the protection of your personal data very
seriously. When we process the personal data that is collected during your
visits to our Web site, we always observe the rules laid down in the
applicable data protection laws. Your data will not be disclosed publicly by
us, nor transferred to any third parties without your consent.”
In the following sections, we explain what types of data we record when you
visit our Web site, and precisely how they are used:
**3.1.1 Recording and processing of data in connection with access over the
Internet**
When you visit our Web site, our Web server makes a temporary record of each
access and stores it in a log file. The following data are recorded, and
stored until an automatic deletion date:
1. IP address of the requesting computer
2. Date and time of access
3. Name and URL of the downloaded file
4. Volume of data transmitted
5. Indication whether download was successful
6. Data identifying the browser software and operating system
7. Web site from which our site was accessed
8. Name of your Internet service provider
The purpose of recording these data is to allow use of the Web site
(connection setup), for system security, for technical administration of the
network infrastructure and in order to optimize our Internet service. The IP
address is only evaluated in the event of fraudulent access to the network
infrastructure of the Fraunhofer-Gesellschaft.
Apart from the special cases cited above, we do not process personal data
without first obtaining your explicit consent to do so. Pseudonymous user
profiles can be created as stated under web analysis (see below).
#### 3.1.2 Orders
If you order information material or other goods via our website, we will use
the address data provided only for the purpose of processing your order.
**3.1.3 Use and transfer of personal data**
All use of your personal data is confined to the purposes stated above, and is
only undertaken to the extent necessary for these purposes. Your data is not
disclosed to third parties. Personal data will not be transferred to
government bodies or public authorities except in order to comply with
mandatory national legislation or if the transfer of such data should be
necessary in order to take legal action in cases of fraudulent access to our
network infrastructure. Personal data will not be transferred for any other
purpose.
**3.1.4 Consent to use data in other contexts**
The use of certain services on our website, such as newsletters or discussion
forums, may require prior registration and involves a more substantial
processing of personal data, such as longer-term storage of email addresses,
user IDs and passwords. We use such data only insofar as it has been sent to
us by you in person and you have given us your express prior consent for this
use. For example, we request your consent separately in the following cases:
**3.1.4.1 Newsletters and press distribution**
In order to register for a newsletter service provided by the Fraunhofer-
Gesellschaft, we need at least your e-mail address so that we know where to
send the newsletter. All other information you supply is on a voluntary basis,
and will be only if you give your consent, for example to contact you directly
or clear up questions concerning your e-mail address. If you request delivery
by post, we need your postal address. If you ask to be included on a press
distribution list, we need to know which publication you work for, to allow us
to check whether specific publications are actually receiving our press
material.
As a general rule, we employ the double opt-in method for the registration. In
other words, after you have registered for the service and informed us of your
e-mail address, you will receive an e-mail in return from us, containing a
link that you must use to confirm your registration. Your registration and
confirmation will be recorded. The newsletter will not be sent until this has
been done. This procedure is used to ensure that only you yourself can
register with the newsletter service under the specified e-mail address. You
must confirm your registration as soon as possible after receiving our e-mail,
otherwise your registration and email address will be erased from our
database. Until we receive your confirmation, our newsletter service will
refuse to accept any other registration requests using this e-mail address.
You can cancel subscriptions to our newsletters at any time. To do so, either
send us an email or follow the link at the end of the newsletter.
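The following minimal sketch illustrates the double opt-in flow described above: registration stores a random confirmation token, and the subscription only becomes active once the link carrying that token is visited. The in-memory store, confirmation URL and e-mail address are illustrative assumptions; mailing, expiry of unconfirmed registrations and persistent storage are deliberately stubbed out:

```python
# A minimal sketch of a double opt-in registration flow (not the actual
# Fraunhofer implementation). Storage and mailing are stubbed out.
import secrets

pending = {}       # token -> e-mail address, awaiting confirmation
confirmed = set()  # active newsletter subscriptions

def register(email: str) -> str:
    token = secrets.token_urlsafe(32)  # unguessable confirmation token
    pending[token] = email
    # A real system would e-mail this link and expire unconfirmed entries.
    return f"https://example.org/newsletter/confirm?token={token}"

def confirm(token: str) -> bool:
    email = pending.pop(token, None)
    if email is None:
        return False  # unknown or already-used token
    confirmed.add(email)
    return True

link = register("[email protected]")
print(confirm(link.split("token=")[1]))  # True: subscription activated
```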
##### 3.1.4.2 Visitors’ books and forums
If you wish to sign up for an Internet forum run by the Fraunhofer-
Gesellschaft, we need at least a user ID, a password, and your e-mail address.
For your own protection, the registration procedure for this type of service,
like that for the newsletters, involves you confirming your request using the
link contained in the e-mail we send you and you giving your consent to the
use of further personal data where this is necessary to use the forum.
You can cancel your registration for this type of service at any time, by
sending us an e-mail via the Web page offering the service.
As a general rule, the content of visitors’ books and forums is not subject to
any form of monitoring by the Fraunhofer-Gesellschaft. Nevertheless, we
reserve the right to delete posted contributions and to prohibit users from
further use of the service at our own discretion, especially in cases where
posted content contravenes the law or is deemed incompatible with the
objectives of the Fraunhofer-Gesellschaft.
**3.1.5 Cookies**
We do not normally use cookies on our Web site, but in certain exceptional
cases we may use cookies which place technical session-control data in your
browser’s memory. These data are automatically erased, at the latest, when you
close your browser. If, exceptionally, one of our applications requires the
storage of personal data in a cookie, for instance a user ID, we will point
this out to you.
Of course, it is perfectly possible to consult our Web site without the use of
cookies. Please note, however, that most browsers are programmed to accept
cookies in their default configuration. You can prevent this by changing the
appropriate setting in the browser options. If you set the browser to refuse
all cookies, this may restrict your use of certain functions on our Web site.
**3.1.6 Security**
The Fraunhofer-Gesellschaft implements technical and organizational security
measures to safeguard stored personal data against inadvertent or deliberate
manipulation, loss or destruction and against access by unauthorized persons.
Our security measures are continuously improved in line with technological
progress.
**3.1.7 Links to Web sites operated by other providers**
Our Web pages may contain links to other providers’ Web pages. We would like
to point out that this statement of data protection conditions applies
exclusively to the Web pages managed by the Fraunhofer-Gesellschaft. We have
no way of influencing the practices of other providers with respect to data
protection, nor do we carry out any checks to ensure that they conform to the
relevant legislation.
#### 3.1.8 Right to information and contact data
You have a legal right to inspect any stored data concerning your person, and
also the right to demand their correction or deletion, and to withdraw your
consent for their further use.
In some cases, if you are a registered user of certain services provided by
the Fraunhofer-Gesellschaft, we offer you the possibility of inspecting these
data online, and even of deleting or modifying the data yourself, via a user
account.
**3.1.9 Acceptance, validity and modification of data protection conditions**
By using our Web site, you implicitly agree to accept the use of your personal
data as specified above. This statement of data protection conditions came
into effect on October 1st, 2013. As our Web site evolves and new technologies
come into use, it may become necessary to amend the statement of data
protection conditions. The Fraunhofer-Gesellschaft reserves the right to
modify its data protection conditions at any time, with effect as of a future
date. We recommend that you re-read the latest version from time to time.
3.2 Dataset Content, Provenance and Value
**3.2.1 What dataset will be collected or created?**
i. Used Datasets:
1. Continuously generated Web server logs and (Google) analytics of project website access;
2. Continuously generated social media engagement data.
ii. Produced Datasets:
1. Aggregated analytics of the courses developed within the framework;
2. Aggregated statistics of networking and engagement data produced as part of D10.4 and D10.5 reporting, and usage statistics of the framework.
**3.2.2 What is its value for others?**
These datasets will help ensure flexibility, adaptability and an improved user
experience (UX) of the platform.
3.3 Standards and Metadata
**3.3.1 Which data standards will the data conform to?**
SlideWiki aims to be a long-lasting, open-standards-based incubator for the
collaborative creation of OpenCourseWare in Europe. The evaluation of the
quality of use, based on standards like ISO 25010, will also produce
recommendations to improve SlideWiki user interfaces, novel interaction
paradigms, information architecture components, etc.
Furthermore, the distribution of the learning material to the following
educational platforms
* Massive Open Online Courses (MOOCs)
* Learning Management Systems (LMSs)
* Interactive eBooks
* Social Networks
will be facilitated by SlideWiki’s standards-compliant HTML5, RDF/Linked Data
and SCORM-compatible content model, within the context of WP5. In the context
of WP6, Secondary Education Trial, gold standards for the reconciliation of
different open data sources of the city and of external organisations, such as
the Spanish National Library or DBpedia, will be generated. This activity is
also being transferred to other cities, such as Madrid, and an expansion
similar to that of the CoderDojo activities is expected.
**3.3.2 What documentation and metadata will accompany the data?**
Following the best practices for data on the web, the technical and user
documentation of the platform will be constantly updated (MS3). In T1.4,
Semantic search, an intuitive search facility for content, structure,
metadata, provenance and revision history of the educational material will be
designed and implemented.
Within the context of the Semantic representation of SlideWiki (D2.1),
existing ontologies and vocabularies for semantic representation of
OpenCourseWare material and enhancement of these for capturing SlideWiki
representations will be reviewed and the resulting vocabulary will support
representation of content, structure, metadata, provenance, and revision
history.
For D4.2, SlideWiki SEO plans and appropriate strategies such as integrating
embedded and structured metadata into SlideWiki pages as well as using smart
URLs will be implemented in order to increase the visibility of SlideWiki
content among popular search engines.
With regard to T4.4, Search engine optimization, embedded and structured
metadata will be integrated into SlideWiki pages following vocabularies
recognized by the main search engines, such as Schema.org, so that the
visibility of SlideWiki in search engines and results pages is improved.
Another strategy is providing mechanisms like an RDF2HTML converter for SEO in
RDF-aware search engines. Smart URLs will be implemented as more user-friendly
URLs (e.g., using _http://slidewiki.org/semantic-web/_ to refer to a deck
about the Semantic Web); a minimal sketch of such slug generation is shown
below. SlideWiki uses Ajax for client-side interactions, and one problem we
are dealing with is how to facilitate the indexing of dynamic Ajax pages by
search engines. To resolve this issue we will define suitable URL patterns and
SEO strategies for making dynamically loaded content fragments more visible to
search engines.
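A minimal sketch of how such smart URLs could be generated follows; the slugification rules and the URL scheme are illustrative assumptions rather than the actual SlideWiki implementation:

```python
# A minimal sketch of generating a "smart URL" slug from a deck title.
import re
import unicodedata

def slugify(title: str) -> str:
    # Strip accents, lower-case, and collapse non-alphanumerics into hyphens.
    ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", "-", ascii_title.lower()).strip("-")

print("http://slidewiki.org/" + slugify("Semantic Web"))
# -> http://slidewiki.org/semantic-web
```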
3.4 Data Access and Sharing
**3.4.1 Which data is open, re-usable and what licenses are applicable?**
The SlideWiki project aims at creating widely available, accessible,
multilingual, timely, engaging and high-quality educational material (i.e.,
OpenCourseWare).
In particular, applying the Open Data Commons Open Database License (ODbL) to
open datasets is adopted as a project best practice. Suitable applicable
licenses (such as the ODbL), anonymization of personal data, the possibility
and suitability of reuse, and the long-term management of the data resources
in compliance with the LOD lifecycle and best practices will be implemented
where applicable.
Overall, only 28 out of the 100 courses have a truly open license; the vast
majority (57 out of 100) restrict reuse to non-commercial scenarios (i.e.
CC-BY-NC-SA), which is not open licensing according to the Open Definition.
For example, if courses are offered for a fee or the training organization is
a for-profit organization, the non-commercial restriction prevents reuse. With
regard to content acquisition, an inventory of existing material (e.g.
PowerPoint presentations, PDFs, images, etc.) that can be used for the
creation of new OCW will be created. Particular attention will be given to
license clearance, so that the content can be published under the least
restrictive conditions.
Furthermore, new opportunities are emerging in online education (technology-
enhanced learning), largely driven by the availability of high quality online
learning materials, also known as Open Educational Resources (OERs). OERs can
be described as teaching, learning and research resources that reside in the
public domain or have been released under an intellectual property license
that permits their free use or repurposing by others depending on which
Creative Commons license is used 9 .
SlideWiki aims to provide solutions for the very limited OCW availability, the
fragmented educational content, restrictive licenses (e.g. non-commercial) and
the lack of inclusiveness and accessibility of educational content. This will
be achieved by establishing an Open Educational Content ecosystem with a focus
on accessibility, further supported by multilingualism. The created content
itself will be published in an open manner, without usage restrictions or
license costs. However, the content shall keep records with regard to
authorship, modifications and possibly also its usage.
As stated in Section 3.1, the SlideWiki consortium aims to benefit the
European community and will therefore prefer open data and free, open tools,
providing the learning material under the Creative Commons Attribution 4.0
License (CC-BY) 10 , which allows the material to be shared and adapted for
any purpose, even commercially, with attribution (linking to the source and
indicating the changes made) as the only restriction.
With regard to open data, where possible the project will make use of existing
open-source libraries and make its efforts highly visible and open to external
input, aiming to attract collaboration rather than competition. During the
trials, innovative approaches will be implemented, such as the use of
crowdsourcing techniques among the participants and collaboration with key
stakeholders, such as university researchers, in generating gold standards for
the reconciliation of different open data sources. The course material will be
completely open to the community represented by the organisation, with the
intention of incorporating additional materials from potential users who are
not members of the organisation, and with the objective of making the course
materials a reference for the domain.
**3.4.2 How will open data be accessible and how will such access be
maintained?**
Data should be available for bulk download. APIs for accessing data should
follow REST architectural approaches, and real-time data should be available
on the Web in real time. Data must be kept up to date, with the update
frequency made explicit. For data available through an API, the API itself
should be versioned separately from the data, and old versions should continue
to be available; a minimal sketch of these recommendations follows.
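The sketch below illustrates these recommendations with a small Flask application: two API versions coexist, and a bulk-download route exposes the full dataset. The framework choice, route names and payloads are illustrative assumptions, not the actual SlideWiki API:

```python
# A minimal sketch of a versioned REST API with a bulk-download route.
from flask import Flask, jsonify

app = Flask(__name__)

DATASETS_V1 = [{"id": 1, "title": "Semantic Web"}]

@app.route("/api/v1/datasets")       # old version stays available
def datasets_v1():
    return jsonify(DATASETS_V1)

@app.route("/api/v2/datasets")       # new version adds a license field
def datasets_v2():
    return jsonify([{**d, "license": "ODbL"} for d in DATASETS_V1])

@app.route("/api/v2/datasets/dump")  # bulk download of the full dataset
def bulk_dump():
    return jsonify({"format": "json", "records": DATASETS_V1})

if __name__ == "__main__":
    app.run()
```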
3.5 Data Archiving, Maintenance and Preservation
**3.5.1 Where will each dataset be physically stored?**
Datasets will initially be stored in a repository hosted on the SlideWiki
server or by one of the participating consortium partners. Depending on its
nature, a dataset may be moved to an external repository, e.g. the European
Open Data Portal or the LOD2 project's PublicData.eu.
**3.5.2 Where will the data be processed?**
Datasets will be processed locally at the project partners. Later, datasets
will be processed on the SlideWiki server, using cloud services.
### 3.5.3 What physical resources are required to carry out the plan?
Hosting, persistence, and access will be managed by the SlideWiki project
partners. They will identify virtual machines and cloud services for long-term
maintenance of the datasets, as well as data processing clusters.
### 3.5.4 What are the physical security protection features?
For openly accessible datasets, security measures will be taken to ensure that
the datasets are protected from unwanted tampering, guaranteeing their
validity.
### 3.5.5 How will each dataset be preserved to ensure long-term value?
Since the SlideWiki datasets will follow Linked Data principles, the
consortium will follow the best practices for supporting the life cycle of
Linked Data, as defined in the EU-FP7 LOD2 project. This includes curation,
reparation, and evolution.
### 3.5.6 Who is responsible for the delivery of the plan?
Members of each WP should enrich this plan from their own aspect.
# Data Management Plan Template
The following template will be used to establish plans for each dataset
aggregated or produced during the project.
## Data Reference Name
A data reference name is an identifier for the dataset to be produced [1].
<table>
<tr>
<th>
**Description**
</th>
<th>
A dataset should have a standard name within SlideWiki, which can reveal its
content, provenance, format, related stakeholders, etc.
</th> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
Interpretation guidelines and software tools shall be provided or indicated
for generating and interpreting data reference names.
</td> </tr> </table>
**Table 1 - Template for Data Reference Name**
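As one possible instantiation of the naming policy in the template above, the sketch below builds a reference name that encodes provenance, content, format, version and creation date; the fields and separator are assumptions for illustration, not a mandated SlideWiki convention:

```python
# A minimal sketch of a dataset reference-name builder (hypothetical policy).
from datetime import date

def reference_name(provenance: str, content: str, fmt: str, version: int) -> str:
    # e.g. "slidewiki_weblogs_csv_v2_<today's date>"
    return f"{provenance}_{content}_{fmt}_v{version}_{date.today().isoformat()}"

print(reference_name("slidewiki", "weblogs", "csv", 2))
```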
## Dataset Content, Provenance and Value
When completing this section, please refer to the questions and answers in
Section 3.2 (3.2.1-3.2.2).
<table>
<tr>
<th>
**Description**
</th>
<th>
A general description of the dataset, indicating whether it has been:
* aggregated from existing source(s)
* created from scratch
* transformed from existing data in other formats
* generated via (a series of) other operations on existing datasets
The description should include reasons leading to the dataset, information
about its nature and size, and links to scientific reports or publications
that refer to the dataset.
</th> </tr>
<tr>
<td>
**Provenance**
</td>
<td>
Links and credits to original data sources
</td> </tr>
<tr>
<td>
**Operations performed**
</td>
<td>
If the dataset is a result of transformation or other operations (including
queries, inference, etc.) over existing datasets, this information will be
retained.
</td> </tr>
<tr>
<td>
**Value in Reuse**
</td>
<td>
Information about the perceived value and potential candidates for exploiting
and reusing the dataset. Including references to datasets that can be
integrated for added value.
</td> </tr> </table>
**Table 2 - Template for Dataset Content, Provenance and Value**
## Standards and Metadata
When completing this section, please refer to the questions and answers in
Section 3.3 (3.3.1-3.3.2).
<table>
<tr>
<th>
**Format**
</th>
<th>
Identification of the format used and underlying standards. In case the DMP
refers to a collection of related datasets, indicate all of them.
</th> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
Specify what metadata has been provided to enable machine-processable
descriptions of dataset. Include a link if a DCAT-AP representation for the
dataset has been published.
</td> </tr> </table>
**Table 3 - Template for Standards and Metadata**
## Data Access and Sharing
When completing this section, please refer to the questions and answers in
Section 3.4 (3.4.1-3.4.2).
<table>
<tr>
<th>
**Data Access and Sharing**
**Policy**
</th>
<th>
It is envisaged that all datasets in the SlideWiki project should be freely
accessible, in particular under the Open Data Commons Open Database License
(ODbL).
Where access is restricted, justifications will be cited (ethical, personal
data, intellectual property, commercial, privacy-related, security-related).
</th> </tr>
<tr>
<td>
**Copyright and IPR**
</td>
<td>
Where relevant, specific information regarding copyrights and intellectual
property should be provided.
</td> </tr>
<tr>
<td>
**Access**
**Procedures**
</td>
<td>
To specify how and in which manner the data can be accessed, retrieved,
queried, visualised, etc.
</td> </tr>
<tr>
<td>
**Dissemination and reuse Procedures**
</td>
<td>
To outline technical mechanisms for dissemination and re-use, including
special software, services, APIs, or other tools.
</td> </tr> </table>
**Table 4 - Template for Data Access and Sharing**
## Archiving, Maintenance and Preservation
When completing this section, please refer to the questions and answers in
Section 3.5 (3.5.1-3.5.6).
<table>
<tr>
<th>
**Storage**
</th>
<th>
Physical repository where data will be stored and made available for access
(if relevant) and indication of type:
* SlideWiki partner owned
* societal challenge domain repository
* open repository
* other
</th> </tr>
<tr>
<td>
**Preservation**
</td>
<td>
Procedures for guaranteed long-term data preservation and backup. Target
length of preservation.
</td> </tr>
<tr>
<td>
**Physical Resources**
</td>
<td>
Resources and infrastructures required to carry out the plan, especially
regarding long-term access and persistence. Information about access mechanism
including physical security features.
</td> </tr>
<tr>
<td>
**Expected Costs**
</td>
<td>
Approximate hosting, access, maintenance costs for the expected end volume,
and a strategy to cover them.
</td> </tr>
<tr>
<td>
**Responsibilities**
</td>
<td>
Individuals and/or entities responsible for ensuring that the DMP is adhered
to for the data resource.
</td> </tr> </table>
**Table 5 - Template for Archiving, Maintenance and Preservation**
# Storage of the Datasets
All project-related datasets are stored on either GitHub or our SlideWiki
servers for public access. The so-called Learning Record Stores are the next
generation of tracking and reporting repositories that support ideas like the
Tin Can protocol and its successor, xAPI. Open analytics solutions as provided
by the Open Academic Analytics Initiative (OAAI) are already fostering the
collection and meaningful interpretation of data across learning institutes.
The Learning Locker data repository stores learning activity data generated by
xAPI-compliant (Tin Can) learning activities. A Learning Activity Database and
the mechanism for logging and collecting all activity data in the platform
will be developed. It will seamlessly track user actions, associate them with
semantically rich events of the activity data model and store them in the
Learning Activity Database. The mechanism will be implemented on top of state-
of-the-art open source big data logging and ingestion tools (such as Apache
Flume and Apache Kafka) such that it can exploit the dynamic scale-out
infrastructure of WP1 and achieve efficient data ingestion for large volumes
and rates of user events.
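As an illustration only, the following minimal sketch shows how a single user action, expressed as an xAPI (Tin Can) statement, could be pushed into such an ingestion pipeline; the broker address, topic name and statement fields are hypothetical, and the kafka-python client is assumed to be available:

```python
import json

from kafka import KafkaProducer  # kafka-python client, assumed installed

# Hypothetical broker and topic; a real deployment would use the WP1
# scale-out infrastructure described above.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# A minimal xAPI statement: who (actor) did what (verb) to which object.
statement = {
    "actor": {"mbox": "mailto:learner@example.org"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {"id": "https://slidewiki.org/deck/1234", "objectType": "Activity"},
}

# The statement is serialized to JSON and appended to the activity topic,
# from where it can be ingested into the Learning Activity Database.
producer.send("learning-activity-events", statement)
producer.flush()
```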
The Consortium will make sure that all OpenCourseWare content stored in
SlideWiki, be it slides, presentations, questionnaires, diagrams, images, user
data etc., is regularly backed up and archived.
# FAIR Data Management Principles
The SlideWiki consortium monitors the application of the FAIR Data management
principles, also listed here below.
## Making Data Findable, Including Provisions for Metadata
Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?
What naming conventions do you follow?
Will search keywords be provided that optimize possibilities for re-use?
Do you provide clear version numbers?
What metadata will be created? In case metadata standards do not exist in your
discipline, please outline what type of metadata will be created and how.
## Making Data Openly Accessible
Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions. Note that in multi-beneficiary projects
it is also possible for specific beneficiaries to keep their data closed if
relevant provisions are made in the consortium agreement and are in line with
the reasons for opting out.
How will the data be made accessible (e.g. by deposition in a repository)?
What methods or software tools are needed to access the data?
Is documentation about the software needed to access the data included?
Is it possible to include the relevant software (e.g. in open source code)?
Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.
Have you explored appropriate arrangements with the identified repository?
If there are restrictions on use, how will access be provided?
Is there a need for a data access committee?
Are there well described conditions for access (i.e. a machine readable
license)? How will the identity of the person accessing the data be
ascertained?
## Making Data Interoperable
Are the data produced in the project interoperable, that is allowing data
exchange and reuse between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?
What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?
Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?
In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?
## Increase Data Re-Use (Through Clarifying Licences)
How will the data be licensed to permit the widest re-use possible?
When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.
Are the data produced and/or used in the project useable by third parties, in
particular after the end of the project? If the re-use of some data is
restricted, explain why.
How long is it intended that the data remains re-usable? Are data quality
assurance processes described?
## Allocation of Resources
What are the costs for making data FAIR in your project?
How will these be covered? Note that costs related to open access to research
data are eligible as part of the Horizon 2020 grant (if compliant with the
Grant Agreement conditions).
Who will be responsible for data management in your project?
Are the resources for long-term preservation discussed (costs and potential
value; who decides, and how, what data will be kept and for how long)?
## Data Security
What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?
Is the data safely stored in certified repositories for long term preservation
and curation?
## Ethical Aspects
Are there any ethical or legal issues that can have an impact on data sharing?
These can also be discussed in the context of the ethics review. If relevant,
include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).
Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?
## Other Issues
Do you make use of other national/funder/sectorial/departmental procedures for
data management? If yes, which ones?
# Conclusion
This deliverable outlines the guidelines and strategies for data management in
the context of the SlideWiki project and will be fine-tuned and extended
throughout the course of the project. Following the guidelines on FAIR Data
Management in H2020 11 and Data Management in H2020 12 , we described the
purpose and scope of the SlideWiki datasets and specified how the datasets of
the SlideWiki project are managed. Five kinds of stakeholders related to the
SlideWiki data management plan are identified and described: original data
producer, data wrangler, data analyser, system administrator/developer, and
data end-user. The generic data flow chain of SlideWiki is listed and
explained: data discovery, data ingestion, data persistence, data analysis,
and data exposure. Following the best practices of Linked Data publishing, we
specified the 13 steps of best practices for SlideWiki dataset management.
Based on the above, we presented DMP guidelines for SlideWiki and DMP
templates for the data management process during the lifetime of the SlideWiki
project.
0013_COROMA_723853.md
# 1\. DATA SUMMARY
## 1.1 OBJECTIVE
This document constitutes the first version of the Data Management Plan (D1.4)
of COROMA project, elaborated under Task 1.4 Data management plan for Open
Research Data Pilot of Work Package 1 Requirements definition.
The objective of this deliverable is to give an overview of the data that will
be collected during the runtime of the COROMA project. This document will
define the processes by which data will be stored, published, and distributed
amongst the consortium partners. COROMA is a large project with many partners,
which makes a good data management strategy for sharing and archiving
documents indispensable.
Project partners will collect data in order to ensure that all conducted
experiments are reproducible and to share knowledge between partners within
the project and with other researchers.
A key objective of the COROMA project is to develop the robot into an
autonomous system. Autonomous systems rely more and more on experience and
collected data; this data can therefore be used to learn and improve various
cognitive components of the robot, as well as for validation of approaches and
reproduction of results.
## 1.2 TYPES AND FORMATS
Several types of data will be produced during the project runtime. These are
for example:
* Experimental data (e.g. sensor data)
* Equipment data and environment data (e.g. CAD files)
* Configuration files for software and hardware
* Illustrations and videos
* Documents, e.g. dissemination and communication material (publications, posters, presentations, etc.), deliverables
* Software
It is important that data is provided in standard formats that are easily
accessible with standard software, which should ideally be free of charge, to
simplify sharing the data among partners or with the public. Examples of
preferable formats are:
* Documents (e.g. publications, presentations): PDF
* Illustrations: PDF, SVG
* Images: high quality, e.g. TIFF, BMP, JPEG 2000
* CAD: ISO-10303-21 (STEP file)
* Dummy part data / process related data: SciLab, GNU Octave, CSV
Process-related data such as force measurements, distance measurements, etc.
must be provided in SI units; a minimal CSV sketch follows below.
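As a minimal illustration of this convention, the sketch below writes a hypothetical force-measurement log as CSV with the SI units encoded in the column headers; the file name, headers and values are purely illustrative:

```python
import csv

# Hypothetical force log: time in seconds, force in newtons (SI units).
rows = [(0.000, 12.4), (0.001, 12.7), (0.002, 13.1)]

with open("force_measurements.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time_s", "force_N"])  # headers document the SI units
    writer.writerows(rows)
```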
## 1.3 OVERVIEW
This Data Management Plan will briefly describe the general categories of data
that will be generated during the project, and in some cases this section
already defines specific details about the data that will be produced. However,
everything is subject to change. This part of the data management plan will be
updated as soon as more details are known, and it will form the basis of
deliverable 8.5 “Data Compilation Open Research Report”.
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Data**
</th>
<th>
**Details in Section**
</th>
<th>
**Type**
</th>
<th>
**Shared with**
</th> </tr>
<tr>
<td>
ITR
</td>
<td>
Models of parts
</td>
<td>
1.3.1
</td>
<td>
CAD models
</td>
<td>
Consortium
</td> </tr>
<tr>
<td>
ITR
</td>
<td>
Semantic dataset
</td>
<td>
1.3.2
</td>
<td>
Sensor measurements
</td>
<td>
Public
</td> </tr>
<tr>
<td>
SRC
</td>
<td>
CORO-hand
</td>
<td>
1.3.3
</td>
<td>
CAD / simulation
models and metadata
</td>
<td>
Consortium
</td> </tr>
<tr>
<td>
SRC
</td>
<td>
Experimental Data (SRC)
</td>
<td>
1.3.4
</td>
<td>
Sensor measurements
</td>
<td>
Specific partners
</td> </tr>
<tr>
<td>
IDK
</td>
<td>
Experimental Data (IDK)
</td>
<td>
1.3.5
</td>
<td>
Sensor measurements
</td>
<td>
Public
</td> </tr>
<tr>
<td>
BEN
</td>
<td>
Demonstration data (BEN)
</td>
<td>
1.3.6
</td>
<td>
Sensor measurements and metadata
</td>
<td>
Public
</td> </tr>
<tr>
<td>
ACI
</td>
<td>
Demonstration data (ACI)
</td>
<td>
1.3.7
</td>
<td>
Sensor measurements and metadata
</td>
<td>
Consortium and public (after
approval)
</td> </tr>
<tr>
<td>
ENSA
</td>
<td>
Demonstration data (ENSA)
</td>
<td>
1.3.8
</td>
<td>
Sensor measurements and metadata
</td>
<td>
Consortium or public (after
approval)
</td> </tr>
<tr>
<td>
UNA
</td>
<td>
Experimental data (UNA)
</td>
<td>
1.3.9
</td>
<td>
Sensor measurements and metadata
</td>
<td>
Specific partners
</td> </tr>
<tr>
<td>
All
</td>
<td>
Publications
</td>
<td>
1.3.10
</td>
<td>
Documents
</td>
<td>
Public
</td> </tr>
<tr>
<td>
All
</td>
<td>
Internal documents
</td>
<td>
1.3.11
</td>
<td>
Documents
</td>
<td>
Consortium
</td> </tr>
<tr>
<td>
All
</td>
<td>
Deliverables
</td>
<td>
1.3.12
</td>
<td>
Documents
</td>
<td>
Public or
consortium
</td> </tr>
<tr>
<td>
DFKI
</td>
<td>
Machine learning datasets
</td>
<td>
1.3.13
</td>
<td>
Datasets
</td>
<td>
Unknown
</td> </tr> </table>
### 1.3.1 PART MODELS
**Responsible Partner:** ITR
**Description:** CAD models of products
**Purpose:** Evaluation of visual scanning methods or procedures
**Type / Format:** CAD, STL (Stereo Lithography Interface format)
**Metadata:** \-
**Required Software:** STL files can be opened with e.g. FreeCAD or Blender
**Data Collection:** CAD models will be provided by partners that are involved
in the demonstration
**Linked to Publications:** \-
**License / Access:** Shared with the consortium
### 1.3.2 SEMANTIC DATASET
**Responsible Partner:** ITR
**Description:** Dataset for Semantic Segmentation
**Purpose:** Training set for new algorithms, can be used for benchmarking
**Type / Format:** Dataset for machine learning, ROS Bags (ROS logfiles), PCD
(point cloud library document), XML
**Metadata:** \-
**Required Software:** ROS Bags can be opened with the robot operating system
(rosbag, rqt_bag), PCD can be opened with pcd_viewer
**Data Collection:** Dataset will be acquired using 3D sensors in experimental
setups
**Linked to Publications:** \-
**License / Access:** Shared publicly
### 1.3.3 CORO-HAND (SRC)
**Responsible Partner:** SRC
**Description:** (1) CAD and simulation data for mechanical parts, schematics
and CAD for electronics, (2) videos, images
**Purpose:** (1) used to prototype, design and verify the design of the CORO-
hand, (2) dissemination and communication activities
**Type / Format:** CAD data: STEP (ISO 10303), simulation models: URDF 1
**Metadata:** \-
**Required Software:** URDF files can be used and visualized with the robot
operating system (ROS), ROS supports multiple operating systems but Ubuntu is
preferred and has several tools to visualize data (e.g. rviz), STEP files can
be opened by any 3D CAD package (e.g. Solidworks, AutoCAD, Blender)
**Data Collection:** SRC designs and provides the CORO-hand models
**Linked to Publications:** \-
**License / Access:** Design files shall not be made publicly available. Any
CAD and simulation models of the CORO-hand that are made available to the
consortium shall be regarded as confidential and may be simplified models
which have been de-featured, e.g. external geometries only.
### 1.3.4 EXPERIMENTAL DATA (SRC)
**Responsible Partner:** SRC
**Description:** (1) Sensor data (joint angles, joint torques, motor
temperature/voltage/current, tactile sensors if they are used), (2) control
data (controller status/commands, driver status/commands)
**Purpose:** Data is used to monitor the health of the CORO-hand, debugging
and performance characterization
**Type / Format:** ROS Bags (ROS logfiles)
**Metadata:** Data shall be associated to which objects are being grasped
**Required Software:** ROS Bags can be used and visualized with the robot
operating system (ROS), ROS supports multiple operating systems but Ubuntu is
preferred and has several tools to visualize data
(e.g. rviz)
**Data Collection:** (1,2) shall be recorded either live using the CORO-hand
or during simulation
**Linked to Publications:** \-
**License / Access:** Sensor data coupled with simulation models might be
useful to consortium members, but shall remain private unless specifically
requested.
### 1.3.5 EXPERIMENTAL DATA (IDK)
**Responsible Partner:** IDK
**Description:** Robot dynamics dataset
**Purpose:** Monitoring of the dynamics of the robot during the machining
process it is involved (as a fixture)
**Type / Format:** Accelerometer signal, time signals and/or FFT signals,
force signal, time signal, Matlab MAT (*.mat) file 2
**Metadata:**
* Estimated acceleration measurement frequency: 5000 Hz
* Acceleration unit: m/s²
* Force unit: Newton
**Required Software:** Matlab files can be opened with Matlab, SciLab, GNU
Octave, etc.
**Data Collection:**
* Ingesys IC3 real time control system
* Industrial or laboratory accelerometer
* Force signal: force sensing plate, robot tool tip 1 or 6 axis force sensor
**Linked to Publications:** Linked to two peer reviewed scientific papers, one
about using the robot as a mobile fixturing system, one about robotic drilling
**License / Access:** Shared publicly
### 1.3.6 DEMONSTRATION DATA (BEN)
**Responsible Partner:** BEN
**Description:** Technical data recorded during the demonstration at the
facilities of BEN and corresponding metadata
**Purpose:** Technical documentation and analysis, reproduction of results in
a paper
**Type / Format:** Video, images (JPEG 2000), logfiles
**Metadata:** \-
**Required Software:** \-
**Data Collection:** Data will be logged and recorded during the demonstration
**Linked to Publications:** Linked to a publication about the demonstration
**License / Access:** Shared publicly
### 1.3.7 DEMONSTRATION DATA (ACI)
**Responsible Partner:** ACI
**Description:** Technical data recorded during the demonstration at the
facilities of ACI and corresponding metadata
**Purpose:** Technical documentation and analysis, reproduction of results in
a paper
**Type / Format:** Video, images (JPEG 2000), logfiles
**Metadata:** \-
**Required Software:** \-
**Data Collection:** Data will be logged and recorded during the demonstration
**Linked to Publications:** \-
**License / Access:** Data related to the process itself, images, or videos
without produced parts can be shared with the consortium and publicly after
approval. Images and videos of the produced parts are property of the
customers and cannot be shared.
### 1.3.8 DEMONSTRATION DATA (ENSA)
**Responsible Partner:** ENSA
**Description:** Technical data recorded during the demonstration at the
facilities of ENSA and corresponding metadata
**Purpose:** Technical documentation and analysis, reproduction of results in
a paper
**Type / Format:** Video, images (JPEG 2000), logfiles
**Metadata:** \-
**Required Software:** \-
**Data Collection:** Data will be logged and recorded during the demonstration
**Linked to Publications:** \-
**License / Access:** Videos or pictures must be recorded by ENSA personnel.
Logged and recorded data must be approved by ENSA and its customers before
being released to the public. Customer information must not be published at all.
### 1.3.9 EXPERIMENTAL DATA (UNA)
**Responsible Partner:** UNA
**Description:** Experimental data (process monitoring, environment
monitoring), equipment data, and configuration files
**Purpose:** Benchmarking, evaluation of a method or procedure, reproduction
of results in papers
**Type / Format:** Equipment data (part file, path file equipment model file,
environment model file), configuration file (robot position file, process
conditions), videos, images (high quality, e.g. PNG, TIFF, BMP, JPEG 2000),
CAD (STEP file, CATPART, IGS, STL, IFC, …), process related data (SciLab, GNU
Octave, Matlab, Excel, PsiConsole (AGV)), force measurements (CSV)
**Metadata:** \-
**Required Software:**
* Required licenses: CATIA V5, Matlab, Autodesk / Powermill, MS Word
* Dependencies: Matlab robotic tool box, Windows 7
**Data Collection:** \-
**Linked to Publications:** Possibly
**License / Access:** Data will be shared according to WP distribution and
limited to collaboration
### 1.3.10 PUBLICATIONS
**Responsible Partner:** UNA, IDK, BEN, …
**Description:** Scientific or industrial publications
**Purpose:** Dissemination of project results, making results reproducible
**Type / Format:** Publication, MS Word, LaTeX, PDF
**Metadata:** \-
**Required Software:** MS Word, PDF viewer
**Data Collection:** \-
**Linked to Publications:** \-
**License / Access:** Scientific publications must be open access
### 1.3.11 INTERNAL DOCUMENTS
**Responsible Partner:** all
**Description:** Internal documents, confidential documents or temporary
documents
**Purpose:** Documents that are required to communicate among the partners to
achieve the project goals
**Type / Format:** Documents, PDF, DOCX, XLSX, PPTX
**Metadata:** \-
**Required Software:** MS Office, Adobe Acrobat Reader
**Data Collection:** \-
**Linked to Publications:** \-
**License / Access:** Shared within the consortium
### 1.3.12 DELIVERABLES
**Responsible Partner:** all
**Description:** Deliverables
**Purpose:** Documentation and planning of the project COROMA
**Type / Format:** PDF
**Metadata:** \-
**Required Software:** PDF viewer
**Data Collection:** \-
**Linked to Publications:** \-
**License / Access:** Public or confidential, as defined in the GA (annex 1,
part A, pp 6-10)
### 1.3.13 MACHINE LEARNING DATASETS
**Responsible Partner:** DFKI
**Description:** Datasets that will be preprocessed and used for machine
learning
**Purpose:** Benchmarking, reproduction of results in papers
**Type / Format:** \-
**Metadata:** \-
**Required Software:** \-
**Data Collection:** Data will be collected by partners in the project COROMA
**Linked to Publications:** Probably
**License / Access:** Depends on the owner of the original data
# 2\. FAIR DATA
In COROMA, three different kinds of data will be aggregated and shared within
the project:
1. Public data and documents
2. Restricted data
3. Internal documents
Task leader DFKI will rely on the platform Zenodo 3 for data that has to be
made publicly available (1) and DFKI will maintain an overview of publicly
shared data and documents at the COROMA website 4 . Note that as COROMA is
an H2020 project, all publications must be open access. For data that will be
shared with a selected audience (e.g. the project consortium), DFKI
_recommends_ using Zenodo as a platform, which supports detailed access
control (2). It enables interested researchers to request access to restricted
data directly from the owner of the data and it allows the owner of the data
to define and revoke access rights per request. For internal documents (3)
such as minutes, confidential deliverables, internal presentations, etc. the
internal project website 5 will be used to share these documents among the
project consortium.
## 2.1 MAKING DATA FINDABLE
Publicly shared data produced in the project COROMA must be findable. Zenodo
provides search functionality which will guarantee that COROMA datasets can be
found. The search functionality will for example allow filtering by keywords
and name of the data repository. Each responsible partner must provide
appropriate keywords during the data publication process. These keywords
should assist people that search for the data at Zenodo. Each upload receives
a digital object identifier (DOI) which will make it easily findable even
though the URL to the dataset might have changed. In addition, we will provide
an overview of publicly available datasets and documents at our project
website.
All partners are encouraged to upload restricted data to Zenodo. Zenodo allows
restricted access for confidential data. Note that such data is still findable
even though its content is not visible while access is restricted.
The internal COROMA website provides a section to store documents that have
been produced within the project. It will be used mostly to share confidential
data between partners of the project. Documents will be findable through the
hierarchical directory structure (e.g. ordered by categories like
deliverables, meetings, general information, work packages, etc.). The project
website will be available at least as long as the project COROMA is running.
## 2.2 MAKING DATA OPENLY ACCESSIBLE
Due to the industrial character of the COROMA project and the participation of
industrial partners, not all data generated within COROMA can be made public
according to the Consortium Agreement. However, important scientific data that
can be used to reproduce and validate results must be made openly accessible
and publications must be published open access. The data will be published in
forms that are specified at the beginning of Section 2. The data publication
process is described in Section 2.5.
Only free and standard software _should_ be required to use the data. Specific
formats are described in Section 1 of this document. Some proprietary tools
produce data in formats that can only be used by these tools. These tools must
either be standard tools that are used by experts that are interested in the
data or there must be a way to extract relevant information with free
software. If neither the first nor the second case applies, there is no
reason to release the data. All partners must be aware of these issues and
there must be a good reason to violate these guidelines (e.g. unreasonable
effort to convert a proprietary format).
Access to restricted data will be controlled by the responsible partners.
Zenodo allows searching for restricted datasets and it allows requesting
access to these datasets over the platform. The responsible partner has to
decide on each individual case whether the access will be granted.
## 2.3 MAKING DATA INTEROPERABLE
Data must be interoperable. Each dataset that is released will have sufficient
documentation included that describes the process of reading and using the
data. Sufficient means that professionals will easily be able to make use of
the data. For example, in the case of a publication in PDF format, no further
documentation would be required. The documentation should be plain text or PDF
that describes the required tools and procedures to make use of the data. It
might be required to provide a short example script in a free programming
language that loads and visualizes the data, or metadata such as information
about the columns of a CSV (e.g. measured quantity, units); a sketch of such a
script is given below. The format won’t be
specified any further because the produced datasets and the systems used in
the project are so diverse that further constraints could make the cost of
releasing datasets unreasonably high.
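A purely illustrative sketch of such an example script, assuming a hypothetical CSV with documented columns (time in s, force in N); it only demonstrates the level of documentation meant above, not any actual COROMA dataset:

```python
import csv

import matplotlib.pyplot as plt

# Load the hypothetical dataset; the column meanings (time_s, force_N) would
# be stated in the documentation shipped with the dataset.
with open("force_measurements.csv", newline="") as f:
    reader = csv.DictReader(f)
    samples = [(float(r["time_s"]), float(r["force_N"])) for r in reader]

t, force = zip(*samples)
plt.plot(t, force)
plt.xlabel("time [s]")
plt.ylabel("force [N]")
plt.savefig("force_measurements.png")  # visual sanity check of the data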
A good example of an interoperable public dataset is the MNIST dataset 6
which has been used in the machine learning community as a benchmark dataset
for more than 15 years. Although the format of the data is very unusual, the
website contains a short and sufficient description of the format, which makes
it readable in almost any programming language. The website explains the
purpose of the dataset, how it is related to other datasets, and it includes a
comparison of various methods that have been evaluated with the dataset.
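For instance, the IDX format description given on the MNIST website can be turned into a loader of a few lines in any free language; a minimal sketch in Python (files assumed to be already decompressed):

```python
import struct

import numpy as np

def read_idx(path):
    """Read an IDX file as documented on the MNIST website."""
    with open(path, "rb") as f:
        # Header: two zero bytes, a type code (0x08 = unsigned byte),
        # the number of dimensions, then one big-endian uint32 per dimension.
        zeros, dtype, ndim = struct.unpack(">HBB", f.read(4))
        assert zeros == 0 and dtype == 0x08
        shape = struct.unpack(">" + "I" * ndim, f.read(4 * ndim))
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)

images = read_idx("train-images-idx3-ubyte")  # shape (60000, 28, 28)
labels = read_idx("train-labels-idx1-ubyte")  # shape (60000,)
```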
## 2.4 INCREASE DATA RE-USE
Data and documents have to be published with a corresponding license. The
responsible partner must make several decisions that affect the choice of an
appropriate license, for example:
* Is commercial use permitted?
* Is modification permitted? Is distribution permitted? Is private use granted?
* Is redistribution mandatory?
Possible licenses for published data and documents are
* Creative Commons Attribution 4.0, CC BY 4.0, _https://creativecommons.org/licenses/by/4.0/_ :
allows copying and redistribution, allows adaptation for any purpose, cannot be
revoked by the author, users must give appropriate credit to the author
* Creative Commons Attribution-ShareAlike 4.0, CC BY-SA 4.0, _https://creativecommons.org/licenses/by-sa/4.0/_ : similar to CC BY 4.0 with the additional obligation to distribute adaptations under the same license
* Creative Commons Attribution-NonCommercial 4.0, CC BY-NC 4.0, _https://creativecommons.org/licenses/by-nc/4.0/_ : similar to CC BY 4.0 with the restriction that commercial use is not possible
* Creative Commons Attribution-NoDerivatives 4.0, CC BY-ND 4.0, _https://creativecommons.org/licenses/by-nd/4.0/_ : similar to CC BY 4.0 with the restriction that adaptations must not be shared
* Any other Creative Commons license: _https://creativecommons.org/licenses/_
Other types of licenses are available for software. The choice of license
depends on the specific requirements of the copyright owner. In case software
should be released as open source partners can generally choose between
licenses that allow commercial use (e.g. BSD) and those that do not (e.g. GNU
General Public License, GPL). There are various platforms that help to select
an appropriate license. 7
Public data will be made available as soon as it has been produced and
documented, the corresponding publication has been published, or at the latest
at the end of M36.
DFKI is responsible for checking whether published data is findable,
interoperable, and reusable. The publication process is described in Section
2.5. By relying mostly on standard formats and requiring documentation for
each published dataset, the COROMA consortium will keep the data reusable for
as long as possible. No further guarantees are given.
## 2.5 DATA PUBLICATION PROCESS
We will briefly describe the process to publish public and restricted data and
documents at the platform Zenodo. The responsible partner will prepare the
data for publication. The preparation includes:
* Definition of appropriate keywords that make the dataset findable for potential users of the dataset or document
* Definition of access rights (open access, embargoed access and embargo date, restricted access)
* Selection of license (see Section 2.4 for details)
* Description of dataset, must be sufficient for experts in the field to make use of the data and for non-experts to understand for which purpose the data can be used
* Optional: implementation of an example that loads the data and demonstrates how it can be used, e.g. by visualization
After the preparation, the responsible partner uploads the data and generated
metadata to the COROMA community at the Zenodo platform. 8 DFKI is
responsible for checking whether all of the criteria mentioned above have been
fulfilled and will either contact the responsible partner to complete the
process or directly accept the dataset for the community. After the dataset
has been accepted for the Zenodo community, DFKI will contact IDK to add the
dataset to the COROMA website.
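A minimal sketch of the technical side of this process using Zenodo's REST deposition API and the requests library; the token, file name, metadata values and community identifier are hypothetical, and publication is deliberately left to the review step described above:

```python
import requests

TOKEN = {"access_token": "YOUR-ZENODO-TOKEN"}  # hypothetical personal token
API = "https://zenodo.org/api/deposit/depositions"

# 1. Create an empty deposition.
dep = requests.post(API, params=TOKEN, json={}).json()

# 2. Upload the prepared data file into the deposition's file bucket.
with open("dataset.zip", "rb") as fp:
    requests.put(f"{dep['links']['bucket']}/dataset.zip", params=TOKEN, data=fp)

# 3. Attach the metadata prepared by the responsible partner (see the
#    preparation checklist above); all values here are illustrative.
metadata = {"metadata": {
    "title": "COROMA example dataset",
    "upload_type": "dataset",
    "description": "Sensor measurements recorded during a demonstration.",
    "creators": [{"name": "Doe, Jane", "affiliation": "DFKI"}],
    "keywords": ["COROMA", "robotics", "machining"],
    "access_right": "open",
    "license": "cc-by-4.0",
    "communities": [{"identifier": "coroma"}],  # hypothetical community id
}}
requests.put(f"{API}/{dep['id']}", params=TOKEN, json=metadata)
# DFKI then reviews the deposition before it is published.
```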
# 3\. ALLOCATION OF RESOURCES
Collected data will be prepared for publication in T8.3. Datasets that require
documentation and quality assurance are prepared and released by DFKI, IDK,
UOS, UNA, and ITR. Other documents that will be used for communication or
dissemination will be released in T8.1 and T8.2 respectively and usually do
not require much further preparation for release apart from the work that is
required to generate these documents (e.g. articles, posters).
Each partner is responsible for its datasets as described in Section 1\. Each
responsible partner must guarantee that the data is in a format that can be
published, documented so that an expert is able to use the data, and can be
read with free or standard software. The release process is supervised by
DFKI. Details are described in Section 2.5. Each responsible partner must
communicate about the release of public data or dissemination documents with
DFKI. The release of communication documents must be communicated to IDK.
There are no costs for long-term preservation of public data. The platform
Zenodo is free and we cannot give any guarantees beyond the lifetime of Zenodo
and its successors.
# 4\. DATA SECURITY
Publicly shared data is stored at the platform Zenodo. Zenodo guarantees to
retain the data for the lifetime of the platform which is at least 20 years.
Zenodo further guarantees that data is backed up nightly and stored in
multiple online replicas. We rely on the platform’s security in terms of data
recovery, secure storage of restricted data and transfer of sensitive data.
# 5\. SUMMARY
COROMA project partners have completed the data management plan. After
analysing the planned data generating activities and the requirements of each
partner, a plan has been produced to give an overview of these datasets and
give guidelines for data management. In particular DFKI will focus on the
process to make data available to the public and how the datasets have to be
prepared. This data management plan is intended to be a living document that
will be extended during the project and will finally result in the deliverable
8.5 “Data Compilation Open Research Report”.
0014_CALIPSOplus_730872.md
The implementation of a Data Policy can start only with the availability of
metadata catalogue software. Such software manages the metadata of raw and
derived data taken at experiment facilities (i.e. partners of CALIPSOplus). In
a metadata catalogue, different types of metadata are saved: 1) _administrative
metadata:_ data management lifecycle, ownership, file catalogue; and 2)
_scientific metadata:_ describing the sample, the beamline and the experiment,
as well as parameters relevant for data analysis.

Data catalogue software enables management of the data lifecycle, i.e. from
data acquisition to data analysis and eventual deletion of the data. The data
can be linked to proposals and samples, to publications (DOI, PID), and can be
migrated to and from long-term storage on tape.

A metadata catalogue helps keep track of the data provenance (i.e. the steps
leading to the final results) and allows scientific integrity to be checked
(checksums of data). It allows data to be found based on the metadata (i.e. a
user's own data) and handles open access to data. In the long term, metadata
catalogues will help to automate standardised analysis workflows and support
the standardisation of data formats.
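To make the distinction between the two metadata types concrete, the sketch below expresses one catalogue entry as a plain record; all field names and values are hypothetical and do not follow the schema of any particular catalogue such as ICAT:

```python
# Illustrative catalogue entry combining the two metadata types named above.
record = {
    "administrative": {           # data management lifecycle, ownership, files
        "owner": "experimental team of proposal 2017-0042",
        "embargo_until": "2020-10-01",    # after the embargo: open access
        "files": ["raw/scan_0001.h5", "derived/scan_0001_reduced.h5"],
        "storage": "tape archive (long-term)",
    },
    "scientific": {               # sample, beamline, experiment, analysis
        "sample": "lysozyme crystal",
        "beamline": "BL-01",
        "experiment": {"technique": "diffraction", "energy_keV": 12.4},
        "analysis": {"software": "facility pipeline", "checksum": "sha256:..."},
    },
}
```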
_Has your facility some e-infrastructures like metadata catalogue software in
place?_
<table>
<tr>
<th>
</th>
<th>
**CALIPSOplus partners**
</th>
<th>
**Country**
</th>
<th>
**Has your facility some e-infrastructures like metadata catalogue software in
place?**
</th> </tr>
<tr>
<td>
1
</td>
<td>
HELMHOLTZ-ZENTRUM DRESDEN-ROSSENDORF EV
</td>
<td>
Germany
</td>
<td>
Not planned yet
</td> </tr>
<tr>
<td>
2
</td>
<td>
ANKARA UNIVERSITESI
</td>
<td>
Turkey
</td>
<td>
Not planned yet
</td> </tr>
<tr>
<td>
3
</td>
<td>
AARHUS UNIVERSITET
</td>
<td>
Denmark
</td>
<td>
Not planned yet
</td> </tr>
<tr>
<td>
4
</td>
<td>
ALBA - CONSORCIO PARA LA
CONSTRUCCION EQUIPAMIENTO Y
EXPLOTACION DEL LABORATORIO DE
LUZ DE SINCROTRON
</td>
<td>
Spain
</td>
<td>
Alba offers as well a portal for remote access to data from the experimental
team authenticated with the proposal ID. Alba is working on the implementation
of ICAT metadata catalogues for the beamlines. This will be progressively
implemented as well as the whole data policy on the beamlines. The plan is to
have the first prototype working on a beamline by 2018. In parallel ALBA is
also working in other specific macromolecular metadata laboratory information
management systems (ISPyB)
</td> </tr>
<tr>
<td>
5
</td>
<td>
CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE
</td>
<td>
France
</td>
<td>
Not planned yet
</td> </tr>
<tr>
<td>
6
</td>
<td>
STIFTUNG DEUTSCHES ELEKTRONENSYNCHROTRON DESY
</td>
<td>
Germany
</td>
<td>
ICAT metadata catalogue
</td> </tr>
<tr>
<td>
7
</td>
<td>
DIAMOND LIGHT SOURCE LIMITED
</td>
<td>
United Kingdom
</td>
<td>
ICAT metadata catalogue
</td> </tr>
<tr>
<td>
8
</td>
<td>
ELETTRA - SINCROTRONE TRIESTE SCPA
</td>
<td>
Italy
</td>
<td>
ICAT metadata catalogue planned
* We collect different type of data. Each beamline has its own data acquisition system.
* Most of the beamline acquisition systems are implemented using the Tango control system. Data acquired are saved in the storage system in an area called scratch.
* Tango, Labview
</td> </tr>
<tr>
<td>
9
</td>
<td>
EUXFEL - EUROPEAN X-RAY FREE-
ELECTRON LASER FACILITY GMBH
</td>
<td>
Germany
</td>
<td>
Planned
</td> </tr>
<tr>
<td>
10
</td>
<td>
HELMHOLTZ-ZENTRUM BERLIN FUR MATERIALIEN UND ENERGIE GMBH
</td>
<td>
Germany
</td>
<td>
We use a hierarchical storage management system for the central storage and
long time archival of the data. The bulk of the data will be on tape. We use
ICAT as the metadata catalogue and user portal for the access to the data.
</td> </tr>
<tr>
<td>
11
</td>
<td>
ISTITUTO NAZIONALE DI FISICA NUCLEARE
</td>
<td>
Italy
</td>
<td>
Not yet in place
</td> </tr>
<tr>
<td>
12
</td>
<td>
ESRF - INSTALLATION EUROPEENNE DE RAYONNEMENT SYNCHROTRON
</td>
<td>
France
</td>
<td>
ICAT metadata catalogue
</td> </tr>
<tr>
<td>
13
</td>
<td>
KIT - KARLSRUHER INSTITUT FUER TECHNOLOGIE
</td>
<td>
Germany
</td>
<td>
Not planned yet
</td> </tr>
<tr>
<td>
14
</td>
<td>
LUNDS UNIVERSITET
</td>
<td>
Sweden
</td>
<td>
MELANI
</td> </tr>
<tr>
<td>
15
</td>
<td>
PSI - PAUL SCHERRER INSTITUT
</td>
<td>
Switzerland
</td>
<td>
MELANI
</td> </tr>
<tr>
<td>
16
</td>
<td>
FELIX - STICHTING KATHOLIEKE UNIVERSITEIT
</td>
<td>
Netherlands
</td>
<td>
Not yet in place
</td> </tr>
<tr>
<td>
17
</td>
<td>
SESAME - SYNCHROTRON-LIGHT FOR
EXPERIMENTAL SCIENCE AND
APPLICATIONS IN THE MIDDLE EAST
</td>
<td>
Jordan
</td>
<td>
Not in user operation yet
</td> </tr>
<tr>
<td>
18
</td>
<td>
Société Civile Synchrotron SOLEIL
</td>
<td>
France
</td>
<td>
Not yet in place
</td> </tr>
<tr>
<td>
19
</td>
<td>
SOLARIS - UNIWERSYTET JAGIELLONSKI
</td>
<td>
Poland
</td>
<td>
Not yet in place
</td> </tr> </table>
Table 3. Implementation of metadata catalogue software
_How is data curation regulated at your facility?_
<table>
<tr>
<th>
</th>
<th>
**CALIPSOplus partners**
</th>
<th>
**Country**
</th>
<th>
**How is data curation regulated at your facility?**
</th> </tr>
<tr>
<td>
1
</td>
<td>
HELMHOLTZ-ZENTRUM DRESDEN-ROSSENDORF EV
</td>
<td>
Germany
</td>
<td>
\-
</td> </tr> </table>
<table>
<tr>
<th>
2
</th>
<th>
ANKARA UNIVERSITESI
</th>
<th>
Turkey
</th>
<th>
\-
</th> </tr>
<tr>
<td>
3
</td>
<td>
AARHUS UNIVERSITET
</td>
<td>
Denmark
</td>
<td>
The curation of data is carried out by the beam line scientists. All of the
original data files are kept here at the facility, but users are given copies
of all their data for analysis and publication.
</td> </tr>
<tr>
<td>
4
</td>
<td>
ALBA - CONSORCIO PARA LA
CONSTRUCCION EQUIPAMIENTO Y
EXPLOTACION DEL LABORATORIO DE
LUZ DE SINCROTRON
</td>
<td>
Spain
</td>
<td>
Access to raw data and the associated metadata obtained from a public access
experiment is restricted to the experimental team for 3 years. After this
embargo period, the data can be made publicly available. These data are
remotely accessible by the research group. Data preprocessing and analysis is
depending on the Beamline partially done at the Alba premises and completed by
the researchers at their home institutes. This means that the data repository
always contains raw data and in some cases processed and curated data.
</td> </tr>
<tr>
<td>
5
</td>
<td>
CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE
</td>
<td>
France
</td>
<td>
\-
</td> </tr>
<tr>
<td>
6
</td>
<td>
STIFTUNG DEUTSCHES ELEKTRONENSYNCHROTRON DESY
</td>
<td>
Germany
</td>
<td>
Long term archiving in tape library (up to 10 years)
</td> </tr>
<tr>
<td>
7
</td>
<td>
DIAMOND LIGHT SOURCE LIMITED
</td>
<td>
United Kingdom
</td>
<td>
\-
</td> </tr>
<tr>
<td>
8
</td>
<td>
ELETTRA - SINCROTRONE TRIESTE SCPA
</td>
<td>
Italy
</td>
<td>
* The users decide if the data acquired are of good quality and, if so, transform them into datasets. Datasets are saved in an area of the storage system called online, where they can be processed and accessed. In the long term the idea is to move the data from the online area to an offline area that in principle can be remote.
* Preferred data format is HDF5.
</td> </tr>
<tr>
<td>
9
</td>
<td>
EUXFEL - EUROPEAN X-RAY FREE-
ELECTRON LASER FACILITY GMBH
</td>
<td>
Germany
</td>
<td>
* Long term archiving in tape library (up to 10 years)
* Preferred data format is HDF5.
</td> </tr>
<tr>
<td>
10
</td>
<td>
HELMHOLTZ-ZENTRUM BERLIN FUR MATERIALIEN UND ENERGIE GMBH
</td>
<td>
Germany
</td>
<td>
We are currently implementing the Data Policy. Data curation is still work in
progress.
</td> </tr>
<tr>
<td>
11
</td>
<td>
ISTITUTO NAZIONALE DI FISICA NUCLEARE
</td>
<td>
Italy
</td>
<td>
We backup data on dedicated external Hard Drives
</td> </tr>
<tr>
<td>
12
</td>
<td>
ESRF - INSTALLATION EUROPEENNE DE RAYONNEMENT SYNCHROTRON
</td>
<td>
France
</td>
<td>
The deadline for implementing the data policy on all ESRF beamlines is in
2020. At the moment 11 beamlines are connected to the metadata catalogue (6
are in progress); data archiving is work in progress at 17 beamlines. This is
long-term archiving in a tape library (data are archived for 10 years in the
tape archive).
</td> </tr>
<tr>
<td>
13
</td>
<td>
KIT - KARLSRUHER INSTITUT FUER TECHNOLOGIE
</td>
<td>
Germany
</td>
<td>
\-
</td> </tr>
<tr>
<td>
14
</td>
<td>
LUNDS UNIVERSITET
</td>
<td>
Sweden
</td>
<td>
Not yet in place
</td> </tr>
<tr>
<td>
15
</td>
<td>
PSI - PAUL SCHERRER INSTITUT
</td>
<td>
Switzerland
</td>
<td>
* Data Analysis as a Service (DaaS) project: focuses on offline data analysis and large offline disk storage. Finishes end of October 2017
* Petabyte Archive: focuses on enabling the data flows to and from a long-term data storage at HPC CSCS/Lugano. Finishes end of 2017 (data storage up to 10 years)
* Data Curation Project, collaboration with ESS, focuses on the data catalogue and data analysis automation. Started 2017 and will last until end of 2019. This enabled adding dedicated manpower for data curation tasks.
</td> </tr>
<tr>
<td>
16
</td>
<td>
FELIX - STICHTING KATHOLIEKE UNIVERSITEIT
</td>
<td>
Netherlands
</td>
<td>
The experimental data are stored on a facility server; maintenance and
back-up is organized centrally by the Radboud University IT department. We are
currently exploring and evaluating the possibilities to use local (Radboud)
and national (e.g. DANS) repositories.
</td> </tr>
<tr>
<td>
17
</td>
<td>
SESAME - SYNCHROTRON-LIGHT FOR
EXPERIMENTAL SCIENCE AND
APPLICATIONS IN THE MIDDLE EAST
</td>
<td>
Jordan
</td>
<td>
Not in user operation yet
</td> </tr>
<tr>
<td>
18
</td>
<td>
Société Civile Synchrotron SOLEIL
</td>
<td>
France
</td>
<td>
Not yet in place
</td> </tr>
<tr>
<td>
19
</td>
<td>
SOLARIS - UNIWERSYTET JAGIELLONSKI
</td>
<td>
Poland
</td>
<td>
Not yet in place
</td> </tr> </table>
Table 4. Data curation at different facilities.
**Implementation of Data Management Plan for CALIPSOplus**
As is clear from the survey, 11 of 19 partners will have a data policy in
place by the end of 2018, based on the PaNdata data policy (Deliverable D2.1
of the PaNdata Europe FP7 project in 2011). The PaNdata data policy framework
defines (long-term) goals concerning data storage, life cycle management, data
access and ownership. Implementation of such a data policy needs a metadata
catalogue. This implies that the implementation of the data policies can only
start with the availability of metadata catalogue software. Some facilities
use the ICAT software (see Table 3) that was developed within the PaNdata ODI
FP7 project; others are currently developing a new metadata catalogue software
called MELANI (PSI, ESS, MAXIV) within the Data Analysis as a Service (DaaS)
project of PSI. Implementation of the Data Policies is done step by step (i.e.
rolled out from beamline to beamline) at all facilities. This stepwise process
implies that a DMP will only be complete once these processes at the different
facilities have been finished.
**FAIR Data management at a glance: DMP components to be covered**
1. **Data summary:** Within this project **data** are **collected** during the experiments at the facilities. The data are collected by users that received transnational access funding from the CALIPSOplus project. Most data are collected in the HDF5 or Nexus **format** (see the sketch below). Data are open access after an embargo period of 3-5 years and can be reused by third parties. Data originate from the experimental stations of the CALIPSOplus partner large scale facilities.
2. The two types of metadata catalogues planned to be used by the CALIPSOplus partners (ICAT and MELANI; see Table 3) make data findable, accessible, interoperable and reusable **(FAIR data)**, with standards for metadata creation.
2.2 The timeline in which **data** become **accessible** is defined by the embargo period (3-5 years) in the specific facility data policy (see example of data policy in ANNEX 1).
2.3 The **interoperability of the data** is guaranteed by the metadata catalogue software in place.
3. The implementation of the data policies at the different facilities, and with it the putting in place of metadata catalogue software, is a recently started and ongoing process. Therefore the **allocation of resources**, i.e. the cost of making our data FAIR and the costs for long-term preservation of data, will be described in detail in the next update of our DMP, which is due in month 19 of the project.
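Purely as an illustration of the HDF5 format mentioned in the data summary, the sketch below writes a small scan file with h5py; the file layout, names and metadata values are hypothetical and do not follow any particular facility's schema or the full NeXus conventions:

```python
import h5py
import numpy as np

# Hypothetical scan file; intermediate groups are created automatically.
with h5py.File("scan_0001.h5", "w") as f:
    counts = f.create_dataset(
        "entry/detector/counts",
        data=np.zeros((512, 512), dtype=np.uint32),
    )
    counts.attrs["units"] = "counts"
    # Scientific metadata stored as attributes alongside the data.
    f["entry"].attrs["beamline"] = "BL-01"
    f["entry"].attrs["proposal_id"] = "2017-0042"
```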
**Conclusion**
The stepwise implementation of the Data Policies (i.e. roll-out from beamline
to beamline) at all facilities implies that a DMP will only be complete once
these processes at the different facilities have been finished. The present
version of the DMP reflects the status as it is now for the different partners
of CALIPSOplus. In the next update of the DMP in month 19, clear progress in
the roll-out of this process will be visible.
0016_MURAB_688188.md
# General information
## MRI and ultrasound Robotic Assisted Biopsy (MURAB)
The MURAB project has the ambition to revolutionise the way cancer screening
and muscle diseases are researched for patients and has the potential to save
lives through early detection and treatment. The project intends to create a
new paradigm in which the precision of medical imaging modalities like MRI and
ultrasound is combined with the precision of robotics in order to target the
right place in the body. This will be achieved by identifying a target using
Magnetic Resonance Imaging (MRI) and then using a robot with an ultrasound
(US) probe to match the images and navigate to the right location. The project
has received funding from the European Union’s Horizon 2020 research and
innovation programme under grant agreement No 688188.

Partner organisations include the University of Twente (UT), Zorggroep Twente
(ZGT), RadboudUMC, University of Verona (UV), Medical University of Vienna
(MUW), KUKA, and Siemens. The project will be performed over four years,
starting on January 1st, 2016. All institutions are involved in data
management, but the main responsibilities for data management are supervised
by the consortium leader from the UT, as shown in Table 1.
Table 1: Institutions and their main responsibilities for data management
<table>
<tr>
<th>
**Institution**
</th>
<th>
**Responsibility**
</th> </tr>
<tr>
<td>
**UT**
</td>
<td>
Vincent Groenhuis, Françoise Siepel – DMP document authors
Different members of MURAB team of Robotics and Mechatronics (RaM) – DMP
implementation, server setup.
</td> </tr>
<tr>
<td>
**Verona university**
</td>
<td>
Research and Dissemination
Marta Capiluppi – Dissemination coordinator
MURAB team of Verona – Monitoring DMP implementation.
</td> </tr>
<tr>
<td>
**ZGT, RadboudUMC,**
**Medical University of Vienna**
</td>
<td>
Medical investigations; responsible for anonymization of data collected at
hospitals and transferring medical data to researchers according to this DMP.
</td> </tr>
<tr>
<td>
**SIEMENS**
</td>
<td>
Support and business advisor during the scope of this project.
</td> </tr>
<tr>
<td>
**KUKA**
</td>
<td>
Technical and business support during the scope of the project.
</td> </tr>
<tr>
<td>
**All partners**
</td>
<td>
make use of DMP plan, sharing data through server etc.
</td> </tr> </table>
## Laws, policies, contracts and agreements to comply with
The MURAB data management Plan needs to comply with the following laws,
policies, contracts and agreements:
EU:
* Horizon2020 project, participating in Open Access Data pilot.
Guidelines:
http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hioa-
data-mgt_en.pdf
* Grant Agreement number 688188, available in the Horizon2020 Participant Portal under ‘Document library’.
* Consortium agreement related to this Grant Agreement, available at each individual partner’s document library.
Dutch:
* Law protection personal data (Wet bescherming persoonsgegevens):
https://autoriteitpersoonsgegevens.nl/nl/over-privacy/wetten/wet-bescherming-persoonsgegevens
* FMWV Code of conduct for medical research of the Dutch biomedical research community (Gedragscode gezondheidsonderzoek van de Nederlandse biomedische onderzoeksgemeenschap):
http://www.giantt.nl/gedragscode%20gezondheidsonderzoek.pdf
* UT data policy and research data management:
https://www.utwente.nl/ub/dienstverlening/MAIN/onderzoeksdatabeheer/
# Data Collection
## Data description
There are several types of data that will be collected, as shown in Table 2.
At baseline all research data, as described in the table below, will form a
single dataset. The name of this dataset is “MURAB research data”.
Table 2: MURAB research data
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Format**
</th>
<th>
**Software**
</th>
<th>
**Data size/growth**
</th>
<th>
**Specific character**
</th> </tr>
<tr>
<td>
**New MRI scans taken at various locations (UT and hospitals), from dummies,
phantoms, and patients.**
</td>
<td>
DICOM (as produced by MRI equipment).
</td>
<td>
The MRI computer is used to generate the DICOM data.
DICOM software
(available on WWW) can read and display the datasets, for example 3DSlicer or
3DimViewer.
</td>
<td>
One slice with a size of
1MB on average
(range 0.1-50MB); one scan contains around
100 slices (100MB average); about 10 scans are taken per session (one session
consists of scanning
one patient multiple times on one daypart), resulting in 1GB of data per
session on average.
Scanning 100 sessions : 100GB of DICOM data on average, with maximum size of
5TB.
</td>
<td>
Phantom models (exvivo) and possibly patients (in-vivo).
Target: breast cancer diagnosis, muscle diseases.
Patients: personal data is collected in hospitals (see below for anonymization
procedure).
</td> </tr>
<tr>
<td>
**Existing MRI and/or ultrasound datasets of several patients,**
**retrieved from hospitals and/or databases.**
</td>
<td>
MRI: DICOM
Ultrasound: depends on dataset, could be DICOM or a generic image format like
jpg/ png.
</td>
<td>
DICOM: See above. Ultrasound: depends on format; 3DSlicer can handle many
formats.
</td>
<td>
About 1GB per session. 10 to 100 sessions, so 10-100 GB
total
</td>
<td>
Anonymized breast scans of patients, all ages, retrieved from hospital’s
databases and/or other data sources.
</td> </tr>
<tr>
<td>
**Ultrasound scans with probe position data, and additional sensory data of
phantoms and patients.**
</td>
<td>
Ultrasound: raw data is always processed first. The processed data format could
be
DICOM or other format depending on hardware and processing software used in
the different project stages.
Coordinate and sensor data: coupled to scans, either embedded as metadata in
DICOM or in separate data file(s) linked together. Coordinate data stored in
comma-separated values (CSV) format.
</td>
<td>
Ultrasound data recorded by own software (in case of research hardware), or
processed by an off-the-shelf system.
Self-written software to record and process sensory data.
</td>
<td>
Continuous ultrasound scanning results in multiple frames per second, each
frame has multiple KB’s of data so one session may end up in file sizes in the
order of 10 GB’s per case.
Single image ultrasound scanning: order of 10 MB per session.
</td>
<td>
Ultrasound scans of breast phantoms and patients, linked to probe position and
orientation data from the robot arm. Personal data is collected in hospitals
(see below for anonymization procedure).
</td> </tr>
<tr>
<td>
**Segmentation data of MRI/Ultrasound scans.**
</td>
<td>
List of features in a text file, exact format depends on process.
TBD
</td>
<td>
MATLAB, Mevislab, 3D slicer, 20-sim, Modelica, own software and/or other
software packages (TBD)
</td>
<td>
In the order of 1-10 MB per session.
</td>
<td>
Segmentation data mainly consists of landmark positions that can be used to
combine MRI and US together.
</td> </tr>
<tr>
<td>
**3D volume modeling data with elastography information, generated from MRI
and tracked ultrasound datasets.**
</td>
<td>
TBD
</td>
<td>
See above
</td>
<td>
Around 0.1GB per case, depending on
level of detail
</td>
<td>
Coupled with
MRI/US/Elastography data of the same phantom or patient. Elastography
generated from combinations of tracked ultrasound scans and pressure sensor.
</td> </tr>
<tr>
<td>
**Biopsy needle insertion accuracy/deformation measurements.**
</td>
<td>
TBD
</td>
<td>
See above
</td>
<td>
1-100 MB per case
</td>
<td>
Coupled to a 3D volume model, with preoperative path planning, this processed
data shows the effect of needle insertion in
the phantom or patient, in terms of qualitative tissue deformations
</td> </tr> </table>
A special subgroup of data consists of design files, software code,
organizational documents, pictures and videos for informative purposes. These
are not research data in the sense of observational, experimental, simulation,
derived or compiled data (Table 3). Still, such data has to be stored and
backed up to be useful during the project, so this is described below. The
name of this dataset is “MURAB non-research data”.
Table 3: MURAB non research data
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Format**
</th>
<th>
**Software**
</th>
<th>
**Data size/growth**
</th>
<th>
**Specific character**
</th> </tr>
<tr>
<td>
**Design files, program code**
</td>
<td>
Depends on software, e.g. C++ for program code, Solidworks / FreeCAD for 3D
design files
</td>
<td>
Development and
CAD software
</td>
<td>
1GB total
</td>
<td>
</td> </tr>
<tr>
<td>
**Organizational files**
</td>
<td>
MS Office, Google docs
</td>
<td>
</td>
<td>
1GB total
</td>
<td>
Documents, spreadsheets
Some partners cannot access Google docs
</td> </tr>
<tr>
<td>
**Multimedia**
</td>
<td>
Common video and audio formats, such as JPG, PNG, MP4 etc.
</td>
<td>
Generic audio/video editing software, like Windows photo viewer
</td>
<td>
1-10GB per video, 0.11GB per photo album. With 10 videos and 20 albums: up to
140GB needed.
</td>
<td>
Video and photo collections about various parts of MURAB research.
</td> </tr> </table>
## Procedures for anonymization of patient data at hospitals
All data files are coded by using random numbers as subject codes. Linkage
information connecting names to codes, will be stored separately from the
coded data in a secured cabinet and will be destroyed ten years after
completion of the study. Pseudonymization protocols will be applied, and all
data will be stored at a secured server.
The UT anonymization procedures are already covered by pseudonymization:
* deleting or masking _personal identifiers_, such as _name_ and _social security number_,
* suppressing or generalizing _quasi-identifiers_, such as _date of birth_ and _zip code_.
The location of privacy-sensitive data will be restricted to the hospital and
its access is managed by existing policies of the respective hospitals in The
Netherlands (ZGT, RadboudUMC) and Austria (MUW). All privacy-sensitive data is
anonymized according to existing procedures before being sent to researchers,
after which the data is essentially safe for publication and patient identity
will not be traceable.
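Purely as an illustration of the two measures above (masking identifiers, generalizing quasi-identifiers), the sketch below pseudonymizes one hypothetical record; the field names, code format and record itself are invented, and in practice the linkage table is kept in a secured cabinet as described:

```python
import secrets

# Linkage table connecting names to subject codes; in practice stored
# separately and securely, and destroyed ten years after the study.
code_book = {}

def pseudonymize(record):
    """Replace identifiers by a random subject code; generalize quasi-identifiers."""
    if record["name"] not in code_book:
        code_book[record["name"]] = f"S{secrets.randbelow(10**6):06d}"
    return {
        "subject": code_book[record["name"]],       # masks name / SSN
        "birth_year": record["date_of_birth"][:4],  # generalizes date of birth
        "region": record["zip_code"][:2],           # generalizes zip code
        "scan_id": record["scan_id"],
    }

print(pseudonymize({"name": "J. Doe", "date_of_birth": "1970-05-14",
                    "zip_code": "7522NB", "scan_id": "MRI-0001"}))
```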
## Anonymization protocols for each hospital
Specific hospital guidelines are described below.
### RadboudUMC data protocol
All data is collected and processed into Castor EDC by the investigator.
Sources of data are the electronic patient system for patient, demographic,
laboratory, pathological, and follow-up data. Data regarding the biopsy
procedure will be collected during the procedure on paper CRFs. The paper CRFs
will be dated, signed and scanned; they will be kept as PDF files in the data
folder. This data will be entered in the Castor database. Statistical
procedures will be performed in SPSS. The data used will be imported from
Castor EDC and locked by storing it on CD-ROM.
The statistical analyses and lists of codes will be locked after completion by
storing them on CD-ROM.
Data is securely saved on the Castor server. The scanned paper CRFs together
with the statistical analysis files are kept in the data folder on the server
of the Radboud UMC, on the account of the investigator. The paper CRFs are
stored in the locked cabinet of the investigator.
The code lists are kept on the server of the Radboud UMC, on the account of
the investigator.
Upon archiving, the paper CRFs will be archived in the secured archive
facilities of Radboud UMC. All CD-ROMs will be kept in the secured cabinet in
the PI's office. All data will be removed from the folder in the
investigator's drive and from the Castor EDC database.
The data is coded: the first letter of the first name combined with the first
two letters of the last name forms the code used for study patients. All study
patients will also be assigned a study number.
Access to the code list folder is limited to the creator of the folder (data
manager), the PI and the investigator.
### ZGT data protocol
The regulations of the “FMWV Gedragcode Gezondheidsonderzoek,” as authorized
by the College Bescherming Persoonsgegevens, will be adhered to.
In particular: medical data will be anonymized by ZGT before transferring them
to the researchers. Researchers working with this data, also have to follow
the “FMWV Gedragcode Gezondheidsonderzoek”. Researchers are not allowed to
perform operations on data that may reveal personal identities.
### MUW data protocol
All data are stored in a PACS. Externals have no access.
If data is needed for research purposes, then it will be anonymized first
before being sent to the researchers. Also, a data transfer agreement will be
set up, which has to be signed by the partners.
# Data Storage and Back-up
## Storage and back-up
In this section, the data storage methods and back-up systems are described.
This applies to both “MURAB research data” and “MURAB non-research data” (see Table 4).
Table 4: Type of data and storage
<table>
<tr>
<th>
**Data**
</th>
<th>
**Storage medium and location**
</th>
<th>
**Backup location, frequency, RPO and RTO**
</th> </tr>
<tr>
<td>
**Raw data generated by different research groups within the consortium in
experiments on phantoms**
</td>
<td>
Data is initially collected on a particular research computer within the
research group. Useful data is then copied to a local server according to
existing research group policies, and/or copied to the consortium’s main
server located in the RaM lab of the University of Twente, Enschede, The
Netherlands. Only authorized persons will have access to the RaM main server.
After copying, the data on the local computer will be deleted.
</td>
<td>
Research groups follow existing policies for data backup.
The consortium’s main server will have three copies of the data: the server itself (original), an on-site backup drive (backed up weekly) and an off-line, off-site drive (backed up monthly). All drives have a size in the order of one TB, with the intention that each can store all data (original and processed) within the project. If at some point in the project the allocated storage size appears to be insufficient, the storage size will be increased.
Threats come not only from hardware damage and theft, which can destroy local data only, but also from ransomware (cryptolocker) infections, which can encrypt all data accessible through the infected computer’s filesystem, including network shares and cloud storage. Therefore, at least one backup drive will be logically disconnected from the server.
</td> </tr>
<tr>
<td>
**Raw data at hospital**
</td>
<td>
After anonymization, the data will be transferred to researchers. One option
is via secured USB devices as this is the default way of sharing DICOM.
</td>
<td>
When raw medical data (e.g. DICOM data) is copied from secured USB device to a
researcher’s computer (and possibly network drive or cloud), the original
secured USB device serves as back-up.
When the raw data are stored on a network drive and/or in the cloud, backup can follow the same procedure as for raw data at the research groups (see above).
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
During development, data are processed on researchers’ computers and initially also stored there. Useful datasets (collections of processed MRI, ultrasound coupled with probe position, segmentation data, elastography, 3D volume model, and simulation data of the same phantom) are put on the group’s project server (network drive etc.) and later moved to the consortium’s main server. Once the data are no longer needed on the researcher’s computer, they will be deleted.
</td>
<td>
Processed datasets belong to the partner that generated/processed them; data that are relevant for other partners are shared with them via the consortium’s server.
At the end of the project, each partner decides which part(s) of their data are selected for publication under Open Access. These parts will be collected, filtered, documented and uploaded to a permanent storage location that is accessible to everyone.
</td> </tr>
<tr>
<td>
**“MURAB non-research data”: files that are not research data, such as design
files, organizational files etc.**
</td>
<td>
Design files and software source code are not research data and are thus
technically outside the scope of this document; they are included here only
because these files need back-up and version management as well.
Design files and source code are developed on researchers’ computers. When
multiple people are working on the same software, a version control system
such as SVN, Git or Mercurial, served by a self-hosted platform like GitLab
operating from a group’s server, will be used. Otherwise, models and code are
shared over the network drive, and versioning is managed by file naming
conventions.
</td>
<td>
Organizational files that are relevant for all partners are also put on the
consortium’s main server.
</td> </tr>
<tr>
<td>
**Informed consents (include names and signatures of informed participants).**
</td>
<td>
Patient-related research will include IRB approval and informed consent. Existing hospital procedures will be used, and the data stay at the hospitals. Consent forms will be kept in a locked cabinet in the office of the project manager, who is the only person with a key to the cabinet.
Privacy-sensitive data will never leave the hospital’s boundary.
</td>
<td>
Hospitals already have existing infrastructure (and associated policies) for
backup and storage.
</td> </tr> </table>
## Consortium’s main server
The University of Twente will provide the consortium’s main server on which
master copies of all (anonymized) data can be stored and accessed by all
partners. For the permanent server solution two options were considered:
1. Use of a virtual server provided by ICTS, a service by the University of Twente. This service includes scalable storage size and backups (both on-site and off-site).
2. Use a physical computer, server or network-attached storage (NAS) located in the Robotics and Mechatronics (RaM) lab. The management and maintenance part will be performed by technicians.
Option 2 has been chosen and will be implemented by the summer of 2016.
The services (data storage and project management) are protected by
authentication via Lightweight Directory Access Protocol (LDAP). All
researchers within the consortium will receive credentials to make use of the
server functionality. No people outside the consortium will gain access to
research data on the server.
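As a sketch only of such LDAP-backed authentication, using the third-party `ldap3` library; the server URL and directory layout below are placeholders, not the consortium's actual configuration:

```python
from ldap3 import Server, Connection, ALL

def authenticate(username, password):
    """Bind to the consortium's LDAP server with the user's credentials;
    a successful bind means the credentials are valid. The server URL
    and DN template are hypothetical examples."""
    server = Server("ldaps://ldap.example.utwente.nl", get_info=ALL)
    user_dn = "uid={},ou=murab,dc=example,dc=nl".format(username)
    conn = Connection(server, user=user_dn, password=password)
    return conn.bind()
```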
## Length of time regarding use of data, storage and destruction of data
The anonymized data will be kept indefinitely, but the linking data will be destroyed after completion of the research. The consent forms will be destroyed after ten years.
# Data Documentation
## Description documents
In each dataset, a master document describes all the data contained in that
dataset. It describes the overall folder hierarchy, and links to other
documents and spreadsheets that describe the different parts of the data.
Publication restrictions are described here as well (open access or
protected).
For phantom and anonymized patient examinations, data is grouped into
sessions. A session is a sequence of examinations and procedures on the same
phantom or patient. The session log at least contains the date, investigator,
types of examinations, analytical and procedural information and all links to
the associated examination and/or processed data, such as folder locations
containing the examination files. An examination can be a MRI scan, an
ultrasound scanning sequence, a partial or full MURAB procedure with all
associated data, or an entirely different research associated with the
project. Processed data could be a 3D elastography model, pre-operative path plan, in-biopsy deformation, post-biopsy path analysis etc. All session logs are documented, either in a single document or spreadsheet, or distributed over a collection of documents/spreadsheets linked from the master document.
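As a hedged illustration of such a session log, one entry could be captured as a row of a spreadsheet exported to CSV; the column names below are assumptions derived from the fields listed above, not a prescribed schema:

```python
import csv

# Illustrative columns, assembled from the session-log fields named above.
FIELDS = ["date", "investigator", "examination_types",
          "procedural_notes", "data_folders"]

with open("session_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow({
        "date": "2017-03-14",                        # fictitious session
        "investigator": "J. Doe",
        "examination_types": "MRI;ultrasound",
        "procedural_notes": "phantom session, full MURAB procedure",
        "data_folders": "phantomA/MRI;phantomA/US",  # links to the data
    })
```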
Each type of examination or processed data is also documented separately. For MRI data, this documentation at least states that the format is
DICOM and that it can be viewed in most DICOM viewers such as 3D Slicer or
3DimViewer. For ultrasound, the document exactly describes the data format(s)
and how it can be accessed. Likewise, data types related to other examinations
and MURAB procedure stages are documented in a similar fashion. For primary
researchers (researchers within the consortium), all data is documented and
made available by default. For secondary researchers, the open-access part of
the data and its documentation (specific for the open-access dataset) is
uploaded to a databank.
## Folder policy and data linking
When data is stored on the network drive / cloud storage, there is a general
folder policy that has to be adhered to whenever possible:
* In every folder there is a readme.txt file describing the contents of the folder; in case of research data it describes which kind of data is found (e.g. MRI, ultrasound, camera), who created the data, on which date etc.
* Folders and subfolders must be organized following one of these logical structures:
* **Option 1)** Different types of data of the same session (e.g. MRI and ultrasound data from the same breast phantom, and its derived data) are put in different subfolders of the same folder.
* **Option 2)** Put all MRI data in one folder hierarchy, all ultrasound data in another folder hierarchy, and other type of data in yet another hierarchy. In this case, a spreadsheet will be used to link the data together. The spreadsheets contain the relative folder locations for the different types of data belonging to the same patient.
Which option is best remains to be seen; in any case, the data should be organized logically so that datasets of the same type (or the same case) can easily be found in the folder hierarchy. Therefore, we have to work consistently and keep all data in the same hierarchy, avoiding fragmentation. A minimal conformance check for the readme policy is sketched below.
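A minimal sketch of such a check, assuming the policy above; the root path is a placeholder:

```python
from pathlib import Path

def folders_missing_readme(root):
    """Yield every (sub)folder under `root` that lacks the readme.txt
    required by the folder policy described above."""
    for d in Path(root).rglob("*"):
        if d.is_dir() and not (d / "readme.txt").exists():
            yield d

for folder in folders_missing_readme("/data/murab"):  # placeholder path
    print("missing readme.txt:", folder)
```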
## File naming convention
Default naming convention of files (folders): YYYYMMDD_Name_v1.0_AuthorI.ext.
If files in a folder do not follow this convention, then at least one parent
folder in the hierarchy should do. For example, a folder with DICOM data
consists of an IMAGES directory with files named like IM1234. These names are
part of the DICOM specification and the names cannot be changed, so the parent
folder of the IMAGES directory should follow the default naming convention.
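A hedged sketch of a check against this convention; the exact pattern (version digits, author initials, allowed name characters) is inferred from the example above and may need adjusting:

```python
import re

# YYYYMMDD_Name_vX.Y_AuthorInitials.ext, inferred from the example
# "YYYYMMDD_Name_v1.0_AuthorI.ext"; the character classes are assumptions.
NAME_RE = re.compile(
    r"^(?P<date>\d{8})_(?P<name>[A-Za-z0-9-]+)"
    r"_v(?P<version>\d+\.\d+)_(?P<author>[A-Za-z]+)\.\w+$"
)

def follows_convention(filename):
    """True if the filename matches the default naming convention."""
    return NAME_RE.match(filename) is not None

print(follows_convention("20170314_PhantomScan_v1.0_JD.dcm"))  # True
print(follows_convention("IM1234"))  # False: DICOM names are exempt
```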
## Identifiers
Each dataset will be given a unique and persistent identifier. These will be
defined when the dataset itself is being generated and/or published. When a
dataset is uploaded to a databank such as 3TU.Datacentrum, then a DOI is
automatically assigned to the dataset.
# Data Access
Legal issues considering IP (intellectual property) of anything generated
within the project, including research data, will be covered in the Grant
Agreement, supplemented by the Consortium Agreement. In particular, research
data is owned by the partner(s) that generated it.
## Access rights for consortium members
During the project, all consortium members have the rights to use the data
generated by any partner in the project, if this is needed for executing the
tasks as defined in the Grant Agreement. The primary method for accessing the
data is via the consortium server. All data on the server are made available to all partners by default; in principle, it is never necessary to grant access to specific partners only. All medical data on the server are anonymized, so a basic level of security, such as single-factor authentication, suffices.
## Access rights for external users
During the project, external users have no access to the research data.
# Data Sharing and Reuse
The MURAB project participates in the Horizon 2020 Pilot on Open Research Data and is therefore committed to making public the research data that is not covered by IP. MURAB will take all suitable measures to enable third parties to access, mine, exploit, reproduce, and disseminate (free of charge for any user) its research data.
## Research data being used in scientific publications
After publication of a paper, the original data files are made available under open access. Other researchers may want to validate the results from the project’s papers, which requires access to the data after publication of the paper.
## Non-IP data after end of project
At the end of the project, the MURAB research data that are not part of IP will also be made open access. Which subset of the data is non-IP will be discussed in a consortium meeting around July 2019 (six months before the end of the project); the embargo period will also be decided there.
Several types of data might be useful for other researchers that also work on
registration of different imaging modalities where deformable tissue is
involved. In MURAB’s case, the researchers record raw MRI and tracked ultrasound data (coupled with the position of the robotic arm) on which other groups can try their algorithms for registration and 3D volume reconstruction. The processed data (reconstructed 3D volume with elastography information, needle insertion path planning etc.) might serve as a reference to quantitatively evaluate the performance of their algorithms. Raw data from one specific modality (e.g. MRI) from patients could also be useful for researchers working on a different kind of study who need MRI scans of different patients.
MURAB will publish datasets in publicly available databases alongside publications and at the end of the project, for example in 3TU.Datacentrum or in a subject-based/thematic repository, coupled with a Digital Object Identifier (DOI). A Creative Commons licence will be applied to the MURAB project research data. The MURAB website will probably also refer to these datasets.
## IP data after end of project
IP data stay with their respective owner(s). If a partner wants to commercialize project results and needs IP of other partners, it has to obtain access to these rights under normal market conditions.
# Data Preservation and Archiving
After the end of the project, linkage information connecting patient names to
codes shall be destroyed.
Part of the data is placed in an Open Access dataset. Only data that is deemed
useful for other researchers and does not belong to intellectual property of
any partner of the consortium, is included in this dataset. Data that have no scientific value (testing data, incomplete cases, failed experiments etc.) or data containing IP (especially models and code) are omitted from the dataset. This ensures a high quality of the Open Access research dataset without disclosing IP.
After the end of the project, the University of Twente is no longer obliged to
maintain the consortium server. All partners can make a copy of all data for
themselves, taking IP into account. What will happen with the data on the consortium server will be discussed near the end of the project. If the RaM
group decides to keep the server (e.g. if there is a continuation of the
project), the server with all data can be transferred to the RaM group, taking
IP rights into account. Another option is to move all data into long-term, protected storage in the 3TU.Datacentrum or another digital repository.
# 1 Introduction
This document outlines the principles and processes for data collection, annotation, analysis and distribution, as well as the storage, security and final destruction of data within the Breaking the Nonuniqueness Barrier in Electromagnetic Neuroimaging (BREAKBEN) project. The procedures will be
adopted by all project partners and third parties throughout the project in
order to ensure that all project-related data are well-managed according to
contractual obligations as well as applicable legislation both during and
after the project.
BREAKBEN shall participate in the Open Data Pilot according to the Grant
Agreement and Research Commitment. This document details the practices and
solutions regarding the storage and re-usability of the research data, which
will be made accessible for other researchers and the public for further use
and analyses.
One of the most important aspects of this document is to ensure that the data
are not opened if this would violate privacy, safety, security, terms of
project agreements or legitimate concerns of private partners.
The Grant Agreement of the BREAKBEN project as an Open Data Pilot participant
obligates the project to:
1. _deposit in a research data repository and take measures to make it possible for third parties to_ access, mine, exploit, reproduce and disseminate — free of charge for any user — the following: _(i) the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible; (ii) other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan', i.e. this document._
2. _provide information — via the repository — about tools and instruments at the disposal of the_ beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves).
However, the obligation to deposit research data in a databank does not change
the obligation to protect results, take care of the confidentiality and the
security obligations or the obligations to protect personal data. The Data
Management Plan addresses these topics and details how the seemingly
contradictory commitments to share and to protect are implemented within the
project.
The Grant Agreement also contains an option to waive the obligation to deposit part of the research data in cases where the achievement of the action's main objective, described in Annex 1 of the Grant Agreement, would be jeopardized. In such a case, the Data Management Plan must contain the reasons for not giving access.
The Data Management Plan has, on the other hand, also served as a tool for agreeing on data processing within the BREAKBEN consortium. Producing the Data Management Plan has helped the consortium to identify situations where practices were thought to be agreed upon and a common understanding was assumed, but where such agreement did not in fact exist.
This Data Management Plan is a living document that will be submitted at the
end of June 2016 (M6) and will be updated and complemented as the project
evolves.
When creating the BREAKBEN Open Data policy the following principles should be
considered:
## 1\. Discoverability
The location of research data and the necessary software to access the data
are known, using a standard international identification mechanism.
## 2\. Accessibility
Research data and the necessary software to access the data shall be easily accessible; an embargo period can be agreed upon in order to secure a strategic advantage for the creator. The licenses that may be considered are Creative Commons Attribution 4.0 International (CC BY 4.0) for research data, CC0 1.0 Universal (CC0 1.0) Public Domain Dedication for metadata, and the MIT License for software.
## 3\. Assessability and intelligibility
The research data and the necessary software to access the data shall be
assessable for and intelligible to third parties for scientific scrutiny and
peer review. They shall be published together with related scientific
publications.
## 4\. Usability
The research data and the necessary software to access the data shall also be
usable for purposes beyond the original project. The research data chosen for
long-term preservation shall be safely stored and cured.
## 5\. Interoperability
Allows data exchange between researcher groups, higher education institutions
and research institutions in different countries. Interoperability will also
allow for the re-combination of different datasets from different origins.
# 2 Data Types
For each partner of the BREAKBEN consortium, there is a different profile of
data produced within the project. The main outcome of the project, however, is
not data, as the aims are focused towards developing methods, techniques and
instrumentation for research and clinical use. Therefore, a large share of the
results are in the documents and publications that describe the technological
advances, as well as in the prototype instrumentation. Nevertheless, some
phases of the project generate experimental data that may be able to produce
additional value under further analysis or be relevant to the reproducibility
of the science. From the scientific reproducibility point of view,
computational results are typically best reproduced by reapplying the
described theory and algorithm and by analyzing the validity of the model
used.
Data and documents in project BREAKBEN are described in terms of six basic
types:
1. **Reports and publications**; e.g., deliverables, presentations, articles
2. **Project data**; e.g., agreements, financial statements
3. **Design data**; e.g., MRI–MEG equipment structures
4. **Research data**; e.g., time series of physical quantities
5. **Analyzed research data**; e.g., results of statistical analysis of research data
6. **Software**; e.g., computer-programmed data-analysis pipelines
**Reports and publications** include deliverables, presentations and, for example, journal articles. This data type also refers to the contents of the BREAKBEN project website.
For deliverables, a simple filename metadata scheme is used within the
consortium. Drafts and final deliverable documents will be named according to
the format (X = work package, Y = deliverable number):
BREAKBEN-DX.Y_Deliverable-Name_More-info_And-more.docx
It is also advised within the consortium that any email communication
regarding deliverables will have BREAKBEN-DX.Y in the subject line to help
manage the email traffic related to deliverables. When dates are added to file
names, they should follow the international standard for date format (ISO
8601) with the form YYYY-MM-DD. When this date format is inserted at the
beginning of a file name, or at a fixed position (such as after “Deliverable-
Name” for deliverables), automatic file name sorting will sort the files in
chronological order. It is further recommended to include the project name
BREAKBEN in the Subject line of email conversations, when the communication is
specific to the BREAKBEN project.
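As an illustration of this scheme (a sketch only; the deliverable title used here is just an example), a small helper that builds such filenames, plus an ISO 8601 date stamp:

```python
import datetime

def deliverable_name(wp, number, *parts):
    """Build a filename per the scheme BREAKBEN-DX.Y_Deliverable-Name_
    More-info_And-more.docx (X = work package, Y = deliverable number)."""
    return "BREAKBEN-D{}.{}_".format(wp, number) + "_".join(parts) + ".docx"

print(deliverable_name(4, 1, "Software-architecture", "Draft"))
# -> BREAKBEN-D4.1_Software-architecture_Draft.docx

# ISO 8601 date stamp (YYYY-MM-DD); when placed at the beginning of a
# name, automatic file name sorting yields chronological order.
stamp = datetime.date.today().isoformat()
print(stamp + "_BREAKBEN-D4.1_Software-architecture_Draft.docx")
```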
In order to ease the version handling of project deliverables and to diminish
the need of sending them via email, the project workspace Eduuni will be used.
Eduuni is a password-protected workspace, administrated by Aalto University.
There, the deliverables will be uploaded under the appropriate Work Package.
Eduuni allows the simultaneous editing of the document.
**Project data** include administrative and financial project data, including
contracts, partner information and reports, as well as accumulated data on
project meetings, teleconferences and other internal materials. These data are
confidential to the project consortium. Project data include mainly MS Office
documents, in English, which ensures ease of access and efficiency for project
management and reporting. Project data are stored in the Eduuni workspace, administered by Aalto University.
There are differences in produced data types between different partners, as
described below:
## AALTO
The main role of Aalto University in the project is building expertise and
developing ULF MRI and MEG methods and instrumentation. However, data are also produced as part of that development.
**Design data** : descriptions of designs, written documents and/or schematic
drawings; data types including LaTeX-generated pdf, Scalable Vector Graphics
and CAD formats; metadata written inline, in format-specific metadata fields
or in the file name.
**Research data** : stored neuroimaging or phantom data that are recorded
within the project; FIFF data format; metadata in format-specific fields or
additionally in separate text files using JSON or other suitable structuring
where applicable.
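For instance (a sketch only; the field names and values are illustrative assumptions, not a schema defined by the project), a separate JSON side-car file for a FIFF recording could look like this:

```python
import json

# Hypothetical side-car metadata for a FIFF recording; all field names
# and values are illustrative, not project-mandated.
metadata = {
    "file": "phantom_scan_01.fif",
    "recorded": "2016-05-20",
    "subject_type": "phantom",          # phantom / anonymized human
    "device": "ULF MRI-MEG prototype",
    "notes": "gradient calibration test sequence",
}

with open("phantom_scan_01.json", "w") as f:
    json.dump(metadata, f, indent=2)
```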
**Analyzed research data** : results from data-analysis pipelines; various
data formats, depending on the kind of analysis, sometimes in the form of
figure or table.
**Software** : software written to produce results or automate procedures,
including data-analysis pipelines;
e.g., written in Python 3.5 (file type .py or .ipynb, using the Jupyter
Notebook for describing the recipe for reproducing the results, including
documentation).
## ELEKTA
Elekta is a medical technology company with a quality system that complies
with ISO 9001 (2015), ISO 13485 (2015), and FDA QSR.
The role of Elekta in the BREAKBEN project is not to collect research data,
but to provide knowledge of commercial exploitation of medical technology and
regulatory requirements, and to provide technical expertise in writing
software and building instrumentation. Table 1 lists and describes the data
types generated by Elekta in the project.
<table>
<tr>
<th>
**Description**
</th>
<th>
**Data type**
</th>
<th>
**Tools**
</th>
<th>
**Volume**
</th>
<th>
**Metadata**
</th> </tr>
<tr>
<td>
Project data
</td>
<td>
Text files
</td>
<td>
MS Office or equivalent
</td>
<td>
TBD
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Reports and communication data
</td>
<td>
Text files
</td>
<td>
MS Office or equivalent
</td>
<td>
TBD
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Design data
</td>
<td>
Text files, CAD files
</td>
<td>
MS Office or equivalent, CAD programs
</td>
<td>
TBD
</td>
<td>
As required by the Elekta document management system
</td> </tr>
<tr>
<td>
Software
</td>
<td>
Source code, binaries
</td>
<td>
SW development tools
</td>
<td>
TBD
</td>
<td>
As required by Elekta software development tools
</td> </tr> </table>
**Table 1.**
In the BREAKBEN project, there are four basic types of data generated by
Elekta: project data, reports and communication data, design data, and
software. No research data are collected by Elekta in the project.
**Project data** and **Reports and communication data** are described above.
**Design data** includes all data created in the development process of the
software and hardware modules developed in the project. **Software** includes
source codes and binaries of the software generated in the project.
Metadata concerning design data and software will be handled as required by the Elekta document management system.
## HUS
## Recorded data
**Data types:** MEG, MRI and MEG-MRI signals of 5 healthy subjects and 2 patients with epilepsy; MRI images of 4 additional healthy subjects and 3 tumor patients.
**MEG**: approximately 5 files of spontaneous MEG per subject/patient
**MRI**: one MRI image stack from each subject and patient
**MEG-MRI**: approximately 5 files of spontaneous MEG per subject/patient
**Analyzed research data:** The sources of somatosensory evoked oscillatory
activity will be analyzed with standard commercial MEG methods. The occurrence
of fast oscillatory activity in MEG of patients with epilepsy will be screened
in a manner routine for clinical evaluation. MRIs of the healthy subjects will
be analyzed by a neuroradiologist. The MRI images of the patients will be
analyzed in connection with their clinical workup.
## PTB
PTB will design, construct and operate ULF MR hardware for NCI and DCI. In
addition to “Reports and Publication” and “Project data”, further data types
will be generated.
**Design data:** CAD files generated by Autodesk Inventor, Text files.
**Research data:** These primary data include digitised voltages and personal
information on subjects. For subject data, personal data will be saved and
protected from unauthorized access by a third party.
**Analysed research data:** These constitute the measurement data and records
which are part of a measurement, are the result of the execution of a
measurement or can be derived from a measurement. These include post-processed
and/or analysed research data obtained on phantoms and subjects within the
project.
## UDA
The main role of UDA is to quantitatively assess the impact on the large scale
connectivity results induced by the new device built in the project. This
assessment will be carried out at the in-silico, phantom, and in-vivo levels.
This will result in the production of a small set of real data and
connectivity maps. Additionally, scripts to generate surrogate data and to
evaluate the impact on the large scale connectivity will be produced.
**Research data:** Time series containing MEG data will be stored in a proprietary format recognized by several freely available software tools (e.g., FieldTrip, Brainstorm). MEG data will be collected using the MEG device operating at ITAB-UDA during rest and the auditory protocols that will be specified during the project. The average dataset size is 1 GB. Surrogate data generated by simulating real data acquisition will also be produced; in the latter case, the data will not be stored as individual files, but the generating algorithm will be provided instead.
**Analyzed research data** : Results from real data-analysis pipelines (e.g.,
ERF, source activity, connectivity) and surrogate data results (e.g.,
connectivity values) will be saved using different data formats, depending on
the kind of analysis, including figures (e.g., png).
**Software:** software written to produce results from real and simulated data
from the data-analysis pipelines; e.g., written in Matlab and/or Python.
Documentation will be provided in the native editors of the programming
environments.
## VTT
The role of VTT is the design and manufacture of superconducting sensors and their front-end interface electronics. Sensors are fabricated in the VTT Micronova
clean room, whose operations are ISO9001:2008 certified.
**Design data** : The raw design data will be in various CAD and engineering
formats, primarily (i) GDS for the detector layout, (ii) PADS for electronics
design, and (iii) AWR APLAC for behavioural models. Because raw designs will
combine BREAKBEN-funded results with other results, both internally funded and from earlier externally funded projects, the raw engineering files will not be made available. The publishable files will be edited to remove details that may infringe company secrets or other agreements, or may reveal other proprietary information, and will be converted into MS Office-compatible formats, SVG, and GDS.
**Research data:** Detector and electronics performance data will be collected
by (i) commercial measurement instruments taken from VTT instrument inventory,
and (ii) home-built data acquisition setups. Commercial instruments have
proprietary data formats, whose variety is too wide to be listed here. Data
acquisition setups have not been designed or constructed yet, hence their data
formats cannot be described yet. This information will be updated later in the
document. To make the data accessible, they will be converted into the CSV format or MS Office-compatible formats.
**Analyzed research data:** Will be made available in MS Office compatible
formats.
# 3 Collecting the data
Different data collection principles apply in different institutions, and not
all partners collect research data. Since the project is developing new
technologies, many kinds of data collection do not have any applicable
standards. Therefore, the data are often collected using the most applicable
methods available or most convenient to implement. On the other hand, it is
among the aims of the project to move towards more standardized data handling
(e.g., Deliverable 4.1: Software architecture). Details are given regarding
each partner institution below. Good scientific practices will be applied
according to the guidelines used by each institution.
## AALTO
Data will be collected using in-house techniques fully or partially automated
using Python scripts and Jupyter Notebooks. Data will be collected from
phantoms, human subjects and non-imaging studies. Low-quality data are
rejected in-house by automatic or semiautomatic procedures.
## HUS
The MEG data are collected in the BioMag Laboratory in a standardized commercial MEG recording setup. The MRI data of epilepsy and tumor patients are collected in
HUS in conjunction with their standard clinical examinations.
The quality control standards are those used for clinical MEG and MRI
recordings in HUS. No pre-existing data are used for this project.
## PTB
Personal data will be obtained via a questionnaire. Standards and protection
are described in section 4.2.
Raw research data will be obtained with data acquisition cards, DAQs, and
other electronic equipment and stored in a proprietary format. PTB follows its
in-house quality management system with procedural requirements for data
handling and an additional directive for data storage. For testing purposes,
surrogate data will be generated, simulating real data acquisition.
## TUIL
Phantom data collection will be obtained according to: Tenner,U., Haueisen,J.,
Nowak,H., Leder,U., Brauer,H.: Source Localization in an Inhomogeneous
Physical Thorax Phantom. Physics in Medicine and Biology, 44, 1969 - 1981,
1999.
No existing data will be used. Quality assurance processes will be applied
according to the DFG guidelines for good scientific practice.
## UDA
Raw data on human subjects will be collected as time series using in-house
techniques and stored in a proprietary format. Personal metadata, consisting of consent forms and questionnaires, will be stored together with the raw data. To test the impact of the project achievements on connectivity, surrogate data will be
generated simulating real data acquisition. Quality assurance processes will
be assessed according to national ethical committee guidelines.
## VTT
Due to the innovative and ground-breaking nature of the BREAKBEN project,
steps in the detector development depend strongly on the results obtained in
the previous step and on the difficulties encountered along the way. Data will
be collected whenever a new generation of detectors or electronics is finished
and its performance needs to be assessed. The data collection will take place in an ad hoc manner, following the general principles of scientific inquiry
and the craftsmanship of long-time practitioners of the art.
A large part of the data from the measurement instruments or data acquisition systems will be part of the debugging work on the system. Only data obtained with devices in proper working condition are counted as publishable here.
# 4 Levels of Confidentiality and Flow of the data
## 4.1 Confidentiality
Overall, there are three basic levels of confidentiality, namely Public,
Confidential to consortium (including Commission Services), and Confidential
to the Partner / Subcontractor.
Each data type is treated differently with regard to the level of confidentiality: e.g., untreated research data, such as personal data of patients and research subjects, are kept confidential, whereas most project deliverables are actively disseminated. Some of the data fall under EU and national laws on data protection, and for this reason the project is obliged to seek the necessary authorizations and to fulfil notification requirements.
The project will assume the principle of using commonly used data formats for
the sake of compatibility, efficiency and access. The preferred data types are
in MS Office compatible formats, where applicable.
**Figure 1. Data types displayed in three levels of confidentiality, as
applicable in most cases. There are exceptions to this confidentiality
classification. Software and design data are not displayed, because their
confidentiality may be in any of the three levels. See section 4 for more
information.**
Figure 1 displays how the previously mentioned data types are positioned in
the level of confidentiality context. Only one data type – (untreated or raw)
research data – is totally situated in one level of confidentiality, which
means that it solely remains with the partner or third party responsible for
collecting it. Three types (project data, analyzed data and reports and
communication) contain data of two different confidentiality levels. The
remaining two types, software and design data, may belong to any of the three
levels depending on the contents of the data and on potential restrictions due
to contracts and other policies. In general, there are exceptions to the given
confidentiality levels. For instance, there may be research data for which
public distribution for scientific purposes is possible and desirable. More
information is found later in this section.
The data flow in most cases starts at the bottom of the confidentiality levels
(Figure 1) and as the data moves up in levels, it is ensured that no data are
included that cannot be placed in the next level. This may involve reduction
of the data or anonymization, as necessary. However, anonymization is not
always possible, which may restrict the possibilities of changing
confidentiality level. Similarly, other privacy requirements may prevent
publishing data, including analyzed data (see below).
## AALTO/HUS
Confidentiality: The data from the control subjects and patients are available
in anonymized form for the members of the research consortium. The averaged
data and anonymized MRI images can be used in scientific publications provided
that the individuals taking part in the research cannot be identified from
them.
## ELEKTA
Again, there are three basic levels of confidentiality (Figure 2), namely
public, confidential to consortium, and confidential to the Partner (Elekta).
**Figure 2. Data types at Elekta displayed in three levels of
confidentiality**
**Project data** include agreements and financial data, and, therefore, are
confidential either to the project consortium or to the partner, depending on
the scope and content of the data.
**Reports and other communication data** are either public or confidential to
the project consortium, depending on the scope and content of the data.
**Design data** include information on Elekta’s development process and
background IP, and, therefore, are confidential either to the project
consortium or to the partner depending on the scope and content of the data.
**Software** includes information on Elekta’s development process and
background IP, and, therefore, are confidential either to the project
consortium or to the partner depending on the scope and content of the data.
**Figure 3. Elekta data flows within and between the three levels of
confidentiality**
## PTB
Research data and analyzed research data obtained by performing experiments on humans cannot be opened to the public; they remain at the consortium level of confidentiality. The subjects are recruited within PTB, Institute Berlin. PTB intends to use at most 50 subjects out of the roughly 400-strong workforce. There is a high risk of violating data privacy laws, as the use of metadata is very likely to allow identification of the subject.
Research data obtained on phantoms are open at the consortium level of confidentiality. Design data are confidential at the partner level only, and software is open at the consortium level of confidentiality.
## UDA
Raw MEG and MRI recordings are available in anonymized form for the members of
the research consortium only upon request. Processed data are publicly
available after scientific publication. Reports are either public or
confidential to the project consortium, depending on the specific report as
indicated in the Grant Agreement.
## VTT
Unedited design data will be kept confidential. The edited design data which
contains the findings obtained within the BREAKBEN project will be made
Consortium confidential or Public, depending on the scope or content of the
data. Editing involves hiding or removing those design items which are the results of internally funded development work or which may infringe earlier confidentiality agreements.
Research data and analyzed research data will be made public.
## 4.2 Data Storage and Protection
Each partner has its own policies in storing and protecting data. As general
project guidelines, it is recommended that git is used for version control
where applicable and that the data are pushed into a repository with regular
backups. One such repository is the GitLab service maintained by Aalto
University IT Services, to which also personnel of non-Aalto partners can be
given password-protected access. It is also possible to turn a GitLab data
repository into a public repository when appropriate according to the data
flow. For documents that do not support proper version control, such as
Microsoft Office documents, the Eduuni workspace, is often the proper space
for storage and sharing. For deliverables, and other consortiumwide written
documents, Eduuni is always used.
When date information is included in file and folder names, it is recommended that the date is positioned at the beginning of the name according to the ISO 8601 standard, YYYY-MM-DD, followed by an underscore (_) if other information is appended before the filename extension. However, policies at individual partner organizations may override these general guidelines.
## AALTO
Data will be stored in regularly backed up storage maintained by the Aalto
University IT Services. In addition, the department GitLab service is used for
versioned data, which is also maintained by the local IT services. Data will
be anonymized, when applicable, before publishing.
Most data will be either versioned with git or saved under folder and/or file
names beginning with an ISO 8601 date stamp, or both.
## ELEKTA
All the data will be generated according to standard Elekta procedures, and
stored to Elekta’s archives. The archives are backed up regularly. The
retention time and disposal of the data follow the guidelines of Elekta’s
procedure _EOYbms00015 Management of quality records._
The data will be copied to consortium members or made public when applicable
(see 4.1 Confidentiality above).
## HUS
The MEG data are stored in the data storage system of the BioMag Laboratory.
The data are coded in recording phase and are protected by a user name and
password. The code key is kept in locked space separate from the data.
The MRI data of healthy subjects are stored in the Science PACS of HUS Medical
Imaging Center. The MRIs of the patients are stored in the Clinical PACS
system of HUS.
The data storage capacity for MEG and MRI images are adequate. No additional
services are needed.
Automatic tape backup for MEG data is used. The MRI data are backed up in the
usual hospital procedure. No new hardware or software is required. No charges
are required for data storage. The MEG-MRI data are stored by Aalto
University.
## PTB
For human experiments, the names of the subjects will be pseudonymised using an algorithm, and the data will be saved only under pseudonyms. Contact data will be stored separately and securely. All data will be saved on the PTB internal server.
Such data will be protected in compliance with the EU's Data Protection
Directive 95/46/EC aiming at protecting personal data. Compliance with privacy
rules as stated by the local ethics committee and national laws will be
ensured.
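The pseudonymisation algorithm itself is not specified in this plan. As a hedged illustration of the general technique, a keyed hash such as HMAC-SHA256 can derive stable pseudonyms, with the key stored separately from the data:

```python
import hmac
import hashlib

def pseudonym(full_name, secret_key):
    """Derive a stable pseudonym from a subject's name via HMAC-SHA256.
    Illustrative only -- not the algorithm PTB actually uses; the secret
    key must be stored separately from the pseudonymised data."""
    digest = hmac.new(secret_key, full_name.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]  # truncated for readable file names

print(pseudonym("Jane Doe", b"keep-this-key-elsewhere"))  # fictitious subject
```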
Analyzed research data that will be accessible at the consortium and public levels will be stored in the BREAKBEN repository in Aalto University’s GitLab data repository.
PTB has a quality management system with procedural requirements for data
handling (QM-VA-22) and an additional directive for data storage. Both
directives clarify the responsibility of divisions for data storage and
central data backup at PTB. These directives are based on the rules specified
by Bundesamt für Sicherheit in der Informationstechnik (BSI). PTB data
handling is governed by the DFG rules for good scientific practice and in
accordance with the in-house quality management system. In particular, primary
data will be archived for 10 years after acquisition. Archiving will be
password protected and anonymous.
## TUIL
Local TUIL repositories are used with no anonymization. The data will be
preserved on local computers. Useful conventions will be chosen for folder
structure and file naming. Versioning needs are minor and dealt with by
labeling data according to the date of the recording. The central service of
the computing center of TUIL takes care of backups. The TUIL staff is at
expert level. No hardware or software is required which is additional or
exceptional to existing institutional provision, nor will charges be applied
by data repositories.
## UDA
**Anonymization of data:** For some of the activities to be carried out by the
project, it may be necessary to collect basic personal data (e.g., full name,
contact details, background), even though the project will avoid collecting
such data unless deemed necessary. Such data will be protected in compliance
with the EU's Data Protection Directive 95/46/EC aiming at protecting personal
data. Compliance with privacy rules as stated by the local Ethics Committee
and National laws will be ensured.
**Preserving and archiving data:** Data will be stored in two different
repositories, according to level of confidentiality. Raw data that are open
only at Partner Level will be stored in a local data storage server using
RAID-6 technology. Analyzed research data that are accessible at consortium
and at the public level will be stored on a different local data server (using the ISO 8601 standard) and in the Aalto University GitLab service.
## VTT
VTT will obtain an account on the commercial GitHub service for the duration of the project. Longer-term persistence of the data cannot be guaranteed at present, as it is not yet clear which mechanisms could be used after the BREAKBEN funding ends. Technical details of the data storage are those provided by the GitHub service.
# 5 Opening the Data
As stated in previous sections, the project aims at providing open data where
applicable, although the nature of the data or other requirements may prevent publication.
All parties have signed or acceded to the project Grant Agreement and Consortium Agreement, which detail the parties’ rights and obligations, including – but
not limited to – obligations regarding data security and the protection of
privacy. These obligations and the underlying legislation will guide all of
the data sharing actions of the project consortium.
The BREAKBEN project is participating in the Open Research Data Pilot, which
is an expression of the larger Open Access initiative of the European
Commission. Participation in the pilot is manifested on two levels: a)
depositing research data in an open access research database or repository and
b) choosing to provide open access to scientific publications which are
derived from the project research. At the same time, the consortium is
obligated to protect personal data and results.
The instructions to download and open the research data, together with a description of the contents of the files containing the data, will be uploaded to the same directory as the data themselves. A readme file describing the data will be placed in the same repository as the data and will identify the data at the partner and consortium level. In most cases, publicly available
software and a desktop computer will be sufficient for validating the results.
To help potential users find the published data, the availability of the data
will be posted on the project website and the distribution of data will be
mentioned/cited in publications. The data will either be available to anyone
or only on a case-by-case basis with applicable conditions. Privacy
requirements may sometimes prohibit publication altogether, and a data sharing
agreement may be required in special cases. Data storage updates and backups
are managed by the system administrators or maintainers in order to prolong
the lifetime of the data.
Below are the policies designed for each participating organization.
## AALTO/ VTT
Regarding the time of data publication, the aim is for article-related data to be public at the time of publication of the article. The data types used are selected to have specifications available, so that the data can still be opened in the future even if the original software is no longer available. However, for some data, pieces of software may even be added as metadata along with the data itself to help others handle the data. Sometimes, it may be sufficient to refer to the software or specifications provided elsewhere.
Where applicable, data will be shared in a public repository accessible over
the internet. When (analyzed) research data are published for further
scientific analysis, a DOI may be assigned using a service such as Zenodo.
Licensing and conditions for such data will be determined based on the purpose
of the data. The data are released under a Creative Commons Attribution
license, unless the purpose or nature of the data requires otherwise. If the
situation is unclear regarding risks related to data security or privacy, the
partner will not publish the data. To keep private data secure, they are accessed over a secured SSL connection when accessed from outside networks.
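As a sketch of such a DOI-assigning deposit (based on Zenodo's public REST deposit API at the time of writing; the token, filename and metadata values are placeholders):

```python
import requests

TOKEN = "YOUR-ZENODO-TOKEN"  # placeholder; created in the Zenodo account
BASE = "https://zenodo.org/api/deposit/depositions"

# 1) Create an empty deposition; a DOI is assigned when it is published.
dep = requests.post(BASE, params={"access_token": TOKEN}, json={}).json()

# 2) Attach a data file to the deposition (placeholder filename).
with open("analyzed_data.csv", "rb") as fp:
    requests.post("{}/{}/files".format(BASE, dep["id"]),
                  params={"access_token": TOKEN},
                  data={"name": "analyzed_data.csv"},
                  files={"file": fp})

# 3) Describe and publish; the metadata values are illustrative only.
meta = {"metadata": {
    "title": "Example analyzed research dataset",
    "upload_type": "dataset",
    "description": "Deposition sketch, not a real dataset.",
    "creators": [{"name": "Doe, Jane"}],  # fictitious
}}
requests.put("{}/{}".format(BASE, dep["id"]),
             params={"access_token": TOKEN}, json=meta)
requests.post("{}/{}/actions/publish".format(BASE, dep["id"]),
              params={"access_token": TOKEN})
```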
## ELEKTA
**The project data** will be stored in Elekta’s document archives and, when
applicable, copied to the project password protected Eduuni workspace
(confidential to the project consortium). When shared, the data will be made
available either according to an external schedule or when requested.
**Reports and other communication data** include mainly MS Office documents,
in English. Public data will be published, for example, via the BREAKBEN
website. **Design data** include MS Office documents and CAD files and
**Software data** include the source code and the binaries of the software.
These data will be stored in Elekta’s document management system and when
applicable, shared, for example, via password protected Druva. When shared,
the data will be made available either according to an external schedule or when requested.
## HUS
The present Finnish legislation enables data banks of patient samples (such as
blood or tissue samples) but forbids the use of signals such as MEG or MRI
data from individuals by parties not defined in the ethical permission of the
project. The averaged data and anonymized MRI images can be used in scientific
publications provided that the individuals taking part in the research
cannot be identified from them.
The data collected in HUS will be owned by HUS. The collected data can be used
by the members of the consortium as specified in the data management plan and
in the ethical permission.
## TUIL
Data will be made available at the end of the publication process in formats that enable sharing and long-term access to the data. A Typo3 web page will be used for sharing the data, also beyond the original purpose of the data. Open
data will be shared with no conditions. Documentation will provide the
information needed for reading and interpreting the data in the future. In
addition, other imaging phantoms may be needed to validate the results.
Regarding managing risks to data security, no unacceptable risk is expected,
although normal backup policies are applied. Potential users (anyone) will
find out about data via a link in publications, and no data sharing agreement
or equivalent will be required.
## UDA/PTB
Data sharing and long-term access is then regulated as follows:
* Partner level: research data format: compatible with in-house or freeware tools made available to all the partner participants; shared through standard commercial programs (e.g. Office, Matlab). An internal repository available within the first deliverable will enable the sharing of the data.
* Consortium level: analyzed research data format will enable sharing through standard commercial software or software tools (e.g. scripts) made available to the Consortium members. A free of charge cloud system will be used to share the data.
* Public level: research result formats will enable sharing through standard commercial software or software tools (e.g. scripts) placed in the same repository as the results accessible via a free of charge repository.
Beyond the original purpose the data can be shared as follows:
* Partner level: Research data will be shared between researchers involved in specific tasks to ensure fulfillment of the project tasks and to internally discuss intermediate results;
* Consortium level: Analyzed research data will be shared between partners as required by specific project tasks, involving this and other partners.
Conditions on shared data and minimizing restrictions are:
* Third-party level: Access to research and analyzed data is restricted to users belonging to research entities. A formal request must be submitted by these users’ PIs. Specifically, re-use, re-distribution, and the creation and publication of derivatives of the data by a third party can be granted within a cooperation, after personal data protection/data privacy is guaranteed within the cooperation agreement.
# 6 Data ownership
At Aalto and VTT, the ownership of data will be handled according to applicable policies depending on the situation and/or the guidelines presented in section 3 of the grant agreement. Attribution will mainly be received via citations of the work. While no research data are collected or produced by Elekta, the data types presented in section 2 will be handled according to the guidelines presented in section 3 of the grant agreement. Regarding PTB, data are owned by the research institution collecting the data (PTB).
At TUIL, the owner of collected data will be TUIL. Re-use of the data will be
permitted with no conditions. Redistribution of the data as well as creation
and publication of derivatives from the data are permitted either with or
without conditions. Others will also be permitted to use the data to develop
commercial products or in ways that produce a financial benefit for
themselves, either with or without conditions. The people who generated the
data sets will receive attribution for their work via citations.
At UDA, research data will be owned by UDA. Re-use of the data, re-
distribution of the data as well as creation and publication of derivatives
from the data will be permitted either with or without conditions. The people
who generated the data sets will receive attribution for their work via
citations.
# 1\. Data Management Principles and Guidelines
## 1.1 Introduction
The Rif Field Station (RFS: Annex A) has a responsibility to promote and
ensure the proper management of data and information resulting from activities
conducted at RFS. Effective data stewardship is essential to ensure that
valuable data resources are accessible now, and in the future, to advance our
knowledge and understanding, promote public awareness, and support informed
decision making. In addition, accurate and retrievable data are an essential
component of research and are necessary to verify and defend, when required,
the process and outcomes of research.
This Document describes the principles and guidelines for management of data
and information generated through monitoring and research conducted at RFS.
These principles and guidelines support the long-term preservation of and
timely access to important Arctic datasets and information. The Document has
been developed through a collaboration between Polar Knowledge Canada;
Greenland Ecosystem Monitoring and Zackenberg Research Station; The
Conservation of Arctic Flora and Fauna and RFS.
This Document is informed by:
* The Management Principles and Guidelines for Polar Research and Monitoring in Canada (POLAR 2017)
* Zackenberg Research Station Data Management Plan (2018)
* The International Arctic Science Committee’s (IASC) Statement of Principles and Practices for Arctic Data Management (IASC 2013);
* Management planning for arctic and northern alpine research stations – Examples of good practices (INTERACT 2014);
* The Circumpolar Terrestrial Biodiversity Monitoring Plan (Christensen et al 2013);
* The Circumpolar Freshwater Biodiversity Monitoring Plan (Culp et al 2013); and
* Circumpolar Biodiversity Monitoring Program (CBMP) Strategic Plan 2018-2021 (CAFF 2018).
This Data Management Plan (DMP) will be reviewed after one year to determine whether it needs to be revised based upon lessons learned and feedback from users.
## 1.2 Goals and Objectives
The goal of this DMP is to ensure that there is a comprehensive inventory of the projects conducted at RFS and the data they produce. Metadata records are
intended to provide a comprehensive searchable and publicly accessible
inventory of these projects and datasets.
This Document serves as a guide to assist RFS and those conducting research at
RFS in applying consistent approaches to data management, and to clarify roles
and responsibilities of researchers and collaborators.
## 1.3 Principles of Data Management
RFS seeks to ensure long-term preservation of and access to data through
application of the following principles:
* Data are preserved by collecting, storing, and retaining data using formats that preserve the data beyond the duration of the original research project;
* Data are discoverable by applying commonly accepted standards and reporting protocols in the use of metadata;
* Data are accessible by supporting full, free, and open access with minimal delay, using a secure and curated repository or other platforms; and
* Data are ethically managed by respecting legal and ethical obligations, including consent, privacy, and confidentiality; secondary use of data; and data linkage.
This Document will be reviewed periodically by RFS to ensure the principles
and guidelines herein remain relevant.
## 1.4 Types of Data and Definition
### 1.4.1 Data and Metadata Considerations
RFS, in collaboration with the international Arctic data management community,
seeks to promote the highest standards in the stewardship of data and metadata
resources resulting from Arctic research and monitoring activities.
### 1.4.2 Definition of Data
These principles and guidelines take a very broad approach to the concept of
data, recognizing that it may take many forms, and, depending on the field of
research or monitoring, can mean different things. This includes but is not
limited to: survey results, written observations, software, interview
transcripts, photographs, automatic measurements, hand-drawn maps, stories and
video footage (FAO 2018). Thus, this Document’s definition of data
incorporates Western/academic and local knowledge.
There are five primary categories or sources of data:
* Institutional Data: Data systematically collected or produced as part of baseline monitoring conducted at RFS.
* Funded Data: Data collected or produced by funded projects at RFS.
* External Data: Data from external repositories or data providers, including existing operational data streams and historical sources, industry, international institutions, or others, as relevant.
* Rescued Data: Data retrieved from unpublished sources, e.g., field notebooks, records on outdated storage media, or photographic records, which are often at risk of loss.
* Local Knowledge (LK): the knowledge that people in a given community have developed over time and continue to develop.
### 1.4.3 Definition of Metadata
Metadata provides the information about a dataset, specifically the _what,
where, how, when,_ and _by whom_ it was collected, its current location, and any
access information. Metadata facilitates the understanding, use, and
management of data and is a means for networking and collaboration.
Standardized metadata records consist of a defined set of information fields
that must be completed to allow automatic sharing of records via
interoperability between metadata management facilities and data portals.
Metadata submitted to RFS should conform to the Darwin Core standard (TDWG 2009).
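To make this concrete, the following is a minimal sketch of what a Darwin
Core-aligned record might look like. The keys are standard Darwin Core terms;
the identifier, coordinates, observer, and date are purely hypothetical, and
the RFS metadata form remains the authoritative submission route.

```python
import csv
import sys

# A hypothetical Darwin Core occurrence record. Keys are standard Darwin
# Core terms (TDWG); all values are illustrative only.
record = {
    "occurrenceID": "urn:rfs:occurrence:2021-0001",  # hypothetical identifier
    "basisOfRecord": "HumanObservation",
    "scientificName": "Somateria mollissima",
    "eventDate": "2021-06-15",
    "decimalLatitude": 66.45,
    "decimalLongitude": -15.95,
    "recordedBy": "J. Example",                      # hypothetical observer
    "country": "Iceland",
    "locality": "RFS extensive monitoring area",
}

# Darwin Core records are commonly exchanged as delimited text, e.g. the
# occurrence file inside a Darwin Core Archive.
writer = csv.DictWriter(sys.stdout, fieldnames=record.keys())
writer.writeheader()
writer.writerow(record)
```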
### 1.4.4 Physical Samples as Research Data
The products of research and monitoring activities _may_ also include physical
samples, preserved and living biological specimens including microbiological
cultures, and other non-digital material. Researchers are responsible for the
preservation, documentation, and ethical use of these physical samples
according to existing standards relevant to the type of sample collected.
Researchers are expected to allow scientific sharing and investigation in
accordance with relevant standards and other guidance from a museum, research,
or other applicable community. Such non-digital holdings should be described
in a metadata record submitted to RFS.
### 1.4.5 Ethically _Open_ Access
In order to support open access practices to maximize the benefit of the
efforts put into proper stewardship of data, the RFS, through this Document,
requires data contributors to make research and monitoring data available
fully, freely, and openly, with minimal delay.
The only exceptions to the requirement of full, free, open, and permanent
access are:
* Where human subjects are involved or in situations where small sample sizes may compromise anonymity, confidentiality shall be protected as appropriate and guided by the principles of informed consent and the legal rights of affected individuals;
* Where LK is concerned, rights of the knowledge holders shall not be compromised;
* Where data release may cause harm or compromise security or safety, specific aspects of the data may need to be protected (for example, locations of nests of endangered birds);
* Where pre-existing data are subject to access restrictions, access to data or information using this pre-existing data may be partially or completely restricted; and
* Where disclosure of information would be in conflict with the mandate of the organization in question, or in other unforeseen circumstances which require action from the RFS.
## 1.5 Roles and Responsibilities
### 1.5.1 General Responsibilities
The RFS metadata repository (Arctic Biodiversity Data Service (ABDS, Annex
B)), data contributors, project sponsors, and external collaborators (Annex C)
will work in partnership to implement good practices and meet relevant
requirements.
### 1.5.2 Responsibilities of RFS
* RFS in partnership with ABDS will provide advice to facilitate efficient and accurate metadata and data entry.
* Archiving and access requirements of all metadata records, datasets, or other research products involving LK will be considered on a case-by-case basis.
### 1.5.3 Responsibilities of Those Conducting Research and Monitoring Activities at RFS
Compliance with the requirements in this Document includes:
* Providing metadata that is accurate, complete and reliable;
* Submitting metadata detailing the data products they produce to RFS, as early in the project as possible, typically within the year of collection, and ensuring any revisions needed are made so that it accurately describes the final state of the data;
* Ensuring that their data are accessible to the general public, consistent with appropriate ethical, data sharing, and open access principles;
* Providing a persistent locator for data collected at RFS, if possible in the form of a unique digital object identifier (DOI). This recognizes the intellectual work required to create a useful dataset and allows the dataset to be recognized and cited through formal publication activities, including formal publication of the data itself;
* Acknowledging RFS, where appropriate, in relevant presentations and publications; and
* Those conducting research and monitoring activities at RFS requiring a repository to archive their data, can contact the ABDS for advice.
# 2 Data Handling and Data Products
This document details how metadata and data from research conducted at RFS are
to be managed and submitted throughout a research project lifecycle (Fig 1).
## 2.1 Research Project Lifecycle
In order to properly frame the responsibilities of researchers operating in
the RFS’s extensive and intensive monitoring areas (Fig 2), a common research
project lifecycle is presented, and specific responsibilities at each stage
are noted.
### 2.1.1 Research Project Inception
When an individual or organization wishes to conduct research at RFS, an
application is made to the RFS. At this time, the applicant is responsible for
providing the following:
* Information about the Principal Investigator (PI)
* A description of the project
* An inventory of expected project outcomes and deliverables
* Logistical requirements
* An overview of potential impacts, and plans for impact mitigation
* The number of participants expected to travel to RFS
These data are stored as a metadata record in the metadata catalogue, and the
application is passed to RFS for consideration.
### 2.1.2 Research Project Consideration, Review, and Subsequent Approval or Rejection
When an application has been made to the RFS, it undergoes review and is
either accepted or rejected:
* If a project is rejected, the applicant will receive an explanation of the justifications resulting in rejection. At this point, the applicant may revise their application and resubmit, however there is no obligation to do so.
* There are no data or metadata responsibilities for either the applicant or RFS at this stage.
* If a project is accepted, the applicant proceeds to the project initiation phase.
Fig 1: RFS Project lifecycle and data overview
Fig 2: Extensive Monitoring Area and Intensive Monitoring Areas
### 2.1.3 Project Initiation
When an application is accepted, the RFS will provide the following:
* The RFS Monitoring Plan
* The RFS Data Management Plan
* Field Safety Guides
* Informative background data pertinent to the RFS
* Logistical assistance
* A preliminary metadata record to provide an overview of the project
* The RFS project metadata excel tool
At the same time, the applicant is responsible for understanding and complying
with the regulations and conditions described in the DMP and the RFS Monitoring
Plan.
### 2.1.4 Annual Project Update
At the end of each field season of active research, the applicant is
responsible for:
* Updating the project metadata to properly represent data in production, or published by the research project;
* Visiting the published, revised (public facing) project metadata to validate the revision; and
* Pushing versioned, intermediate data to a repository if the preliminary data is deemed of immediate value to the scientific community, or if desired by the principal investigator.
### 2.1.5 Project Closure
At the conclusion of a research project, the applicant is responsible for:
* Updating the project metadata record to properly represent the final data product(s) of the research project;
* Visiting the published, revised (public facing) project metadata to validate the revision; and
* Pushing final quality-assured data to a public repository. If a researcher requires a repository to archive datasets, this can be accommodated in the ABDS; and
* Ensuring a separate detailed metadata entry is present for each dataset.
## 2.2 Metadata and Data Standards, Requirements, and Best Practices
### 2.2.1 Detailed Metadata
Metadata is essential for users to understand how the data can be used and to
determine the accuracy and validity of the initiative. To ensure accuracy and
accessibility of both project data and metadata, researchers are responsible
for ensuring the guidelines below are met when submitting metadata and
publishing data:
* Metadata collected should be consistent with metadata requirements as stated in this DMP (See Annex C).
* The metadata must clearly describe the datasets, their contents and all relevant information about the monitoring conducted including methods used, monitoring location and date, monitors and their skill level, etc.
* Best practices in the documentation of data collection procedures should be followed. Methodologies used must be included in the metadata, along with any discrepancies in applied methodologies.
To ensure metadata is accurate, accessible, and ingestible by the Arctic
Biodiversity Data Service (ABDS), the metadata is to be provided by completing
the RFS detailed metadata form, which is provided by RFS as an Excel document.
See Annex D for a list of the minimum mandatory elements required for
metadata.
### 2.2.2 Data standards and requirements
Data recording and data quality standards are the responsibility of the
researcher. RFS encourages data generators at RFS to comply with the IPY Data
Policy on the delivery of free biodiversity data to the public and with equivalent
legislation in the European Union for spatial information, such as the INSPIRE
Directive. Data formats should adhere to the Darwin Core. Acknowledgement is
mandatory when publications utilize data collected at RFS.
## 2.3 Contact Information
Questions arising from this document can be addressed to: Rif Field Station
(RFS), +354 856 9500, [email protected]
---

**0027_FISSAC_642154.md** (Horizon 2020)
# 1 Introduction
This document constitutes the first issue of the Data Management Plan (DMP) in
the EU framework of the project FISSAC under Grant Agreement No 642154. The
objective of the DMP is to establish the measures for promoting the findings
during the project’s life. The DMP enhances and ensures the transferability of
relevant project information and takes into account the restrictions
established by the Consortium Agreement. In this framework, the DMP sets the
basis for both the Dissemination Plan and the Exploitation Plan. The first
version of the DMP is delivered at M6; thereafter the DMP will be monitored and
updated in parallel with the different versions of the Dissemination and
Exploitation Plans (the progress of the implementation of the DMP will be
included in the Project Progress Reports, at M18 and M36). It is acknowledged
that not all data types will be available at the start of the project. However,
if any important changes occur to the FISSAC project due to the inclusion of
new data sets, changes in consortium policies or external factors, the DMP will
be updated as well in order to fine-tune it to the actual data generated and
the user requirements as identified by the FISSAC consortium participants.
FISSAC project comprises seven technical work packages (WP) as follows:
* WP1 - FROM CURRENT MODELS OF INDUSTRIAL SYMBIOSIS TO A NEW MODEL
* WP2 - CLOSED LOOP RECYCLING PROCESSES TO TRANSFORM WASTE INTO SECONDARY RAW MATERIALS
* WP3 - PRODUCT ECO-DESIGN AND CERTIFICATION
* WP4 - PRE-INDUSTRIAL SCALE DEMONSTRATION OF THE RECYCLING PROCESSES AND ECOINNOVATIVE PRODUCTS
* WP5 - INDUSTRIAL PRODUCTION & REAL SCALE DEMONSTRATION
* WP6 - FISSAC MODEL FOR INDUSTRIAL SYMBIOSIS
* WP7 - INDUSTRIAL SYMBIOSIS REPLICABILITY AND SOCIAL ISSUES
To facilitate the technical work there are three transversal work packages to
provide, structure, coordination, integration and communications across all
the work packages.
* WP8 - EXPLOITATION AND BUSINESS MODELS FOR INDUSTRIAL SYMBIOSIS
* WP9 - DISSEMINATION
* WP10 - MANAGEMENT
This document has been prepared to describe the data management life cycle for
all data sets that will be collected, processed or generated by FISSAC
project. It is a document outlining how research data will be handled during
FISSAC project, and after the project is completed. It describes what data
will be collected, processed or generated and what methodologies and standards
are to be applied. It also defines if and how this data will be shared and/or
made open, and how it will be curated and preserved.
# 2 Open Access and Open Research Data Pilot
Open access can be defined as the practice of providing on-line access to
scientific information that is free of charge to the reader and that is
reusable. In the context of research and innovation, 'scientific information'
can refer to:
(i) peer-reviewed scientific research articles (published in scholarly
journals) or (ii) research data (data underlying publications, curated data
and/or raw data).
The EC capitalises on open access and open science as they lower barriers to
accessing publicly-funded research. This increases research impact and the free
flow of ideas, and facilitates (innovation in) a knowledge-driven society, at
the same time underpinning the EU Digital Agenda (OpenAIRE Guide for Research
Administrators - EC funded projects). The open access policy of the European
Commission is not a goal in itself, but an element in the promotion of
affordable and easily accessible scientific information for the scientific
community itself, but also for innovative small businesses.
## 2.1. Dissemination, Communication and Open Access
For the implementation of the FISSAC project, a complete set of dissemination
and communication activities is scheduled, with the objective of raising
awareness among non-expert citizens as well as potential next users of the
FISSAC knowledge and solutions. For instance, e-newsletters, e-brochures,
posters or events are foreseen for the dissemination of FISSAC to key groups
potentially related to the exploitation of the project results.
Likewise, the FISSAC website, webinars, press releases and short videos, for
instance, will be developed for communication to a wider audience. Details
about all these dissemination and communication elements are provided in
Deliverable D9.1 “Dissemination Plan”.
Open Access (OA) to scientific information is a complementary element to
dissemination and communication, and how this issue is specifically tackled by
FISSAC project is described in the present document.
## 2.2. Open Access to peer-reviewed scientific publications
Open access to scientific peer-reviewed publications has been anchored as an
underlying principle in the Horizon 2020 Regulation and the Rules of
Participation and is consequently implemented through the relevant provisions
in the grant agreement.
More specifically, Article 29: “Dissemination of results, Open Access,
Visibility of EU Funding” section 2 of FISSAC Grant Agreement (FISSAC,
Research & Innovation action, 2014) establishes the obligation to ensure open
access to all peer-reviewed articles produced by FISSAC.
29.2 Open access to scientific publications
Each beneficiary must ensure open access (free of charge online access for any
user) to all peer reviewed scientific publications relating to its results.
In particular, it must:
1. as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications;
Moreover, the beneficiary must aim to deposit at the same time the research
data needed to validate the results presented in the deposited scientific
publications.
2. ensure open access to the deposited publication — via the repository — at the latest:
1. on publication, if an electronic version is available for free via the publisher, or
2. within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.
(c) ensure open access — via the repository — to the bibliographic metadata
that identify the deposited publication.
The bibliographic metadata must be in a standard format and must include all
of the following:
* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable; and
* a persistent identifier.
## 2.3. Open Access to research data
Research data is the second type of scientific information that OA is planned
for, besides the publications. 'Research data' refers to information, in
particular facts or numbers, collected to be examined and considered and as a
basis for reasoning, discussion, or calculation. In a research context,
examples of data include statistics, results of experiments, measurements,
observations resulting from fieldwork, survey results, interview recordings
and images. The focus is on research data that is available in digital form.
The Open Research Data Pilot is a novelty in Horizon 2020 aiming to improve and
maximise access to and re-use of research data generated by projects (European
Commission, 9 December 2013). FISSAC in particular is participating in this
Open Research Data Pilot programme, as set out in Article 29.3:
### 29.3 Open access to research data
Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:
1. deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:
1. the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;
2. other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan' (see Annex 1);
2. provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves).
This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.
The beneficiaries do not have to ensure open access to specific parts of their
research data if the achievement of the action's main objective, as described
in Annex 1, would be jeopardised by making those specific parts of the
research data openly accessible. In this case, the data management plan must
contain the reasons for not giving access to third parties.
### Consortium Agreement - Access Rights
The Parties have identified and agreed on the Background for the Project and
have also, where relevant, informed each other that Access to specific
Background is subject to legal restrictions or limitations. Anything which has
not been identified in the Consortium Agreement shall not be the object of
Access Right obligations regarding Background. Any Party can propose to the
General Assembly to modify its Background in the Consortium Agreement.
Each Party shall implement its tasks in accordance with the Consortium Plan
and shall bear sole responsibility for ensuring that its acts within the
Project do not knowingly infringe third party property rights.
* Any Access Rights granted expressly will exclude any rights to sublicense unless expressly stated otherwise.
* Access Rights shall be free of any administrative transfer costs.
* Access Rights are granted on a non-exclusive basis.
* Results and Background shall be used only for the purposes for which Access Rights to it have been granted.
* All requests for Access Rights shall be made in writing.
* The granting of Access Rights may be made conditional on the acceptance of specific conditions aimed at ensuring that these rights will be used only for the intended purpose and that appropriate confidentiality obligations are in place.
* The requesting Party must show that the Access Rights are needed.
Access Rights to Results and Background Needed for the performance of a Party's
own work under the Project shall be granted on a royalty-free basis, unless
otherwise agreed for Background in the Consortium Agreement.
Access Rights to Results if needed for Exploitation of a Party's own Results
shall be granted on Fair and Reasonable conditions to be agreed in writing
among the Parties concerned.
Access rights to Results for internal non-commercial research activities shall
be granted on a royalty-free basis.
Access Rights to Background if Needed for Exploitation of a Party's own
Results, including for research on behalf of a third party listed in
Attachment 3, shall be granted on Fair and Reasonable conditions.
A request for Access Rights may be made up to twelve months after the end of
the Project or after the termination of the requesting Party’s participation
in the Project.
Affiliated Entities have Access Rights under the conditions of the Grant
Agreement if they are identified in attachment “Identified Affiliated
Entities” to this Consortium Agreement. Such Access Rights must be requested
by the Affiliated Entity from the Party that holds the Background or Results.
Alternatively, the Party granting the Access Rights may individually agree
with the requesting Party to have the Access Rights include the right to
sublicense to the latter's Affiliated Entities. Access Rights to Affiliated
Entities shall be granted on Fair and Reasonable conditions and upon written
bilateral agreement.
Affiliated Entities which obtain Access Rights must, in return, fulfil all
confidentiality and other obligations accepted by the Parties under the Grant
Agreement or this Consortium Agreement, as if such Affiliated Entities were
Parties.
Access Rights may be refused to Affiliated Entities if such granting is
contrary to the legitimate interests of the Party which owns the Background or
the Results.
Access Rights granted to any Affiliated Entity are subject to the continuation
of the Access Rights of the Party to which it is affiliated, and shall
automatically terminate upon termination of the Access Rights granted to such
Party.
Upon termination of the status as an Affiliated Entity, any Access Rights
granted to such former Affiliated Entity shall lapse.
Further arrangements with Affiliated Entities may be negotiated in separate
agreements.
# 3 DMP Objective
The purpose of FISSAC Data Management Plan (DMP) is to provide a management
assurance framework and processes that fulfil the data management policy that
will be used by the FISSAC project participants with regard to all the dataset
types that will be generated by the FISSAC project.
The aim of the DMP is to control and ensure quality of project activities, and
to effectively/efficiently manage the material/data generated within the
FISSAC project. It also describes how data will be collected, processed,
stored and managed holistically from the perspective of external accessibility
and long term archiving.
All aspects of procedures associated with the quality control of data
management internal to the project are the subject of a separate deliverable,
D10.2 “Quality Assurance Plan”.
The content of the DMP is complementary to other official documents that
define obligations under the Grant Agreement (GA) and associated annexes, and
shall be considered a living document and as such will be the subject of
periodic updating as necessary throughout the lifespan of the project.
Figure 1 Data Management Plan overview
# 4 Information Management and Policy
The information available to different stakeholders will be managed and stored
in a Content Management System (CMS) taking advantage of existing information
management open sources that could be adaptable to project data dissemination
needs. CMS offers different levels of accessibility depending on the degree of
confidentiality of the information. It includes both, Publications and
Repository of other research data. Open access to research data refers to
right to access and re-use digital research data under the terms and
conditions set out in the Grant Agreement.
**Content Management System**
A content management system is a computer application that allows publishing,
editing, modifying, organizing, deleting, and maintaining content from a
central interface. Such systems of content management provide procedures to
manage workflow in a collaborative environment. These procedures can be manual
steps or an automated cascade. CMSs have been available since the late 1990s.
The function of CMS is to store and organize files, and provide version-
controlled access to their data. CMS features vary widely. Simple systems
showcase a handful of features, while other releases, notably enterprise
systems, offer more complex and powerful functions. Most CMSs include Web-
based publishing, format management, (version control), indexing, search, and
retrieval. The CMS increases the version number when new updates are added to
an already-existing file. Some content management systems also support the
separation of content and presentation. A CMS may serve as a digital asset
management system containing documents, movies, pictures, phone numbers,
scientific data. CMSs can be used for storing, controlling, revising,
semantically enriching and publishing documentation. A CMS distinguishes
between the basic concepts of user and content, and has two elements:
* **Content Management Application** (CMA) is the front-end user interface that allows a user, even with limited expertise, to add, modify and remove content from a Web site without the intervention of a Webmaster.
* **Content Delivery Application** (CDA) compiles that information and updates the Web site.
**Information Management**
Information Management (IM) is the collection and management of information
from one or more sources and the distribution of that information to one or
more audiences. This sometimes involves those who have a stake in, or a right
to that information. Management means the organization of and control over the
structure, processing and delivery of information.
Information includes both electronic and physical information. The
organizational structure must be capable of managing this information
throughout the information lifecycle regardless of source or format (data,
paper documents, electronic documents, audio, social business, video, etc.)
for delivery through multiple channels that may include cell phones and web
interfaces. The focus of IM is the ability of organizations to capture,
manage, preserve, store and deliver the right information to the right people
at the right time.
Information management environments comprise legacy information
resident in line of business applications, Enterprise Content Management
(ECM), Electronic Records Management (ERM), Business Process Management (BPM),
Taxonomy and Metadata, Knowledge Management (KM), Web Content Management
(WCM), Document Management (DM) and Social Media Governance technology
solutions and best practices.
_Figure 2: Information Management_
**FISSAC project website**
The project website will be used for storing both public and private documents
related to the project and its dissemination. The website is meant to remain
live for the whole project duration and a minimum of 2 years after the project
ends.
* Public section of the project website: public deliverables, brochure, poster, presentations, scientific papers, videos, etc.
* Private section of the project website: confidential deliverables, work packages related documentation, etc.
The website _www.fissacproject.eu_ was launched on 15 January 2016.
The website was designed by a subcontractor and will be managed by ACR+. It
will be dynamic and interactive in order to ensure a clear communication and
wide dissemination of project news, activities and results. The website is of
primary importance due to the expected impact on the target audiences. It was
designed to give quick, simple and neat information. The website will be
regularly updated with news and articles. It will also provide access to the
FISSAC platform and FISSAC model, once they are online. All partners are
responsible for feeding the project website with news and relevant
information. The website will remain online for at least two years after the
end of the project (February 2020). The website will be available in English and in the
languages of the project partners (Czech, French, German, Hungarian, Italian,
Spanish, Swedish and Turkish). However, the information will be selectively
translated where needed in the various languages of the partnership,
specifically for hosting regional workshops, webinars and for disseminating
local news.
_Figure 3: FISSAC website_
# 5 DMP Implementation
The organizational structure of the FISSAC project was created in order to
address an effective project direction and management through the
communication flow and methods for reporting, monitoring, management of
intellectual properties, and background and foreground generated within the
project. Moreover, according to the Project Quality Assurance Plan to be developed
(see WP 10 management), communication aspects and information generated in the
project will be monitored taking also into consideration management of gender
equality and risks analysis regarding financial, legal, administrative and
technical co-ordination and mitigation actions aspects. If new risks appear
along the project, new mitigation actions will be launched.
The FISSAC project is partly coordinated by the Scientific and Technical
Committee and Innovation Management Committee. The project has a structured
governance and management framework that controls and directs decisions during
the project. This is organised as shown in Figure 4 below. The DMP is issued
as project deliverable D10.3 under work package 10 and will be
administered by the Technical Coordination, as shown in Figure 4 below.
_Table 1: FISSAC project partners and their roles_
<table>
<tr>
<th>
**Partner short name**
</th>
<th>
**Partner legal name**
</th>
<th>
**Partner role in FISSAC project**
</th> </tr>
<tr>
<td>
**1\. ACC**
</td>
<td>
ACCIONA INFRAESTRUCTURAS S.A.
</td>
<td>
Project coordinator, participating in the development and demonstration of
FISSAC implemented technologies and FISSAC model.
</td> </tr>
<tr>
<td>
**2\. ACR+**
</td>
<td>
ASSOCIATION DES CITES ET DES REGIONS POUR LE
RECYCLAGE ET LA GESTION DURABLE DES RESSOURCES
</td>
<td>
Dissemination leader, Stakeholders network, analysis of IS model and social
aspects.
</td> </tr>
<tr>
<td>
**3\. AEN**
</td>
<td>
ASOCIACION ESPAÑOLA DE NORMALIZACION Y CERTIFICACION
</td>
<td>
Standardization tasks
</td> </tr>
<tr>
<td>
**4\. CSIC**
</td>
<td>
AGENCIA ESTATAL CONSEJO SUPERIOR DE INVESTIGACIONES CIENTIFICAS
</td>
<td>
Re-formulation of ceramic tiles composition and determination of measurable
reduction of raw materials consumption by introducing waste in the ceramic
tiles composition formula, participation in the design of new materials able
to provide practical demonstration of FISSAC implemented technologies and
FISSAC model.
</td> </tr>
<tr>
<td>
**5\. AKG**
</td>
<td>
AKG GAZBETON ISLETMELERI SANAYI VETICARETCARET AS
</td>
<td>
Participation in the development of new products based on secondary raw
materials and demonstration of FISSAC implemented technologies and products.
</td> </tr>
<tr>
<td>
**6\. BEF**
</td>
<td>
BEFESA SALZCHALACKE GMBH
</td>
<td>
Active industrial partner as secondary raw material supplier.
</td> </tr>
<tr>
<td>
**7\. BGM**
</td>
<td>
BRITISH GLASS MANUFACTURERS CONFEDERATION LIMITED
</td>
<td>
Contribution to IS replicability activities and social issues.
</td> </tr>
<tr>
<td>
**8\. CBI**
</td>
<td>
CBI Betonginstitutet AB
</td>
<td>
Contribution in pre-industrial demonstration and real scale demonstration.
</td> </tr>
<tr>
<td>
**9\. CSM**
</td>
<td>
CENTRO SVILUPPO MATERIALI SPA
</td>
<td>
Contribution in eco-design and certification activities.
</td> </tr>
<tr>
<td>
**10\. DAP**
</td>
<td>
D'APPOLONIA SPA
</td>
<td>
Participation in development of the software platform, FISSAC methodology and
business model for IS, and will lead demonstration of the replication of
FISSAC model.
</td> </tr>
<tr>
<td>
**11\. EKO**
</td>
<td>
EKODENGE MUHENDISLIK MIMARLIK DANISMANLIK TICARET ANONIM SIRKETI
</td>
<td>
Development of the software platform tool.
</td> </tr>
<tr>
<td>
**12\. FAB**
</td>
<td>
FUNDACION AGUSTIN DE BETANCOURT
</td>
<td>
Participation in the development and demonstration of FISSAC implemented
technologies and products.
</td> </tr>
<tr>
<td>
**13\. FEN**
</td>
<td>
FENIX TNT SRO
</td>
<td>
Exploitation leader, business modelling, IPR management, Data Management.
</td> </tr>
<tr>
<td>
**14\. FER**
</td>
<td>
FERALPI SIDERURGICA S.p.A.
</td>
<td>
Active industrial partner as secondary raw material supplier.
</td> </tr>
<tr>
<td>
**15\. GEO**
</td>
<td>
GEONARDO ENVIRONMENTAL TECHNOLOGIES LTD
</td>
<td>
Participation in developing the software platform tool.
</td> </tr>
<tr>
<td>
**16\. GTS**
</td>
<td>
GLASS TECHNOLOGY SERVICES LIMITED
</td>
<td>
Active R&D partner as secondary raw material supplier.
</td> </tr>
<tr>
<td>
**17\. TRI**
</td>
<td>
INGENIEURBUERO TRINIUS GMBH
</td>
<td>
Eco-design and certification activities.
</td> </tr>
<tr>
<td>
**18\. HIF**
</td>
<td>
HIFAB AB
</td>
<td>
Contribution in the demonstration of the replication of FISSAC model,
exploitation & business model for IS.
</td> </tr>
<tr>
<td>
**19\. KER**
</td>
<td>
KERABEN GRUPO SA
</td>
<td>
Participation in the development of new products based on secondary raw
materials and demonstration of FISSAC implemented technologies and products.
</td> </tr>
<tr>
<td>
**20\. OVA**
</td>
<td>
OPENBARE VLAAMSE AFVALSTOFFENMAATSCHAPPIJ
</td>
<td>
Member of ACR+. As a competent (regional) government body with experience in
the development and follow-up of policies, business models, partnerships
offers insight and steering during the research process.
</td> </tr>
<tr>
<td>
**21\. RIN**
</td>
<td>
RINA SERVICES SPA
</td>
<td>
Contribute in Environmental Technology Verification tasks.
</td> </tr>
<tr>
<td>
**22\. SP**
</td>
<td>
SP SVERIGES TEKNISKA FORSKNINGSINSTITUT AB
</td>
<td>
Eco-design and certification activities leader, LCA and LCC methods,
responsible for ecological and economic evaluation of the developed processes.
Evaluation of non-technical opportunities and obstacles for different business
models in order to create better instruments and development towards greater
sustainability. Contribution with the analysis of circular business models.
</td> </tr>
<tr>
<td>
**23\. SYM**
</td>
<td>
SIMBIOSY SIMBIOSI INDUSTRIAL SL
</td>
<td>
Demonstration of the replication of FISSAC model, exploitation & business
model for IS, IS model trends.
</td> </tr>
<tr>
<td>
**24\. TCM**
</td>
<td>
TURKIYE CIMENTO MUSTAHSILLERI BIRLIGI
</td>
<td>
Participation in the development of new products based on secondary raw
materials and demonstration of FISSAC implemented technologies and products.
</td> </tr>
<tr>
<td>
**25\. TEC**
</td>
<td>
FUNDACION TECNALIA RESEARCH & INNOVATION
</td>
<td>
Active R&D partner participating in setting the basis for the IS concerning
innovative solutions for the use of by-products of steel and ceramic
industries in environmental-friendly products and efficient applications for
the construction sector. Validation at preindustrial scale to demonstrate the
efficiency of the solutions and products.
</td> </tr>
<tr>
<td>
**26\. VAN**
</td>
<td>
VANNPLASTIC LTD
</td>
<td>
Participation in the development of new products based on secondary raw
materials and demonstration of FISSAC implemented technologies and products.
</td> </tr> </table>
# 6 Research data
'Research data' refers to information, in particular facts or numbers,
collected to be examined and considered as a basis for reasoning, discussion,
or calculation. In a research context, examples of data include statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings and images. The focus is on research data
that is available in digital form.
## 6.1. Characteristics for datasets produced in the project
As indicated in the Guidelines on Data Management in Horizon 2020 (European
Commission, Research & Innovation, October 2015), scientific research data
should be easily:
1. DISCOVERABLE
The data and associated software produced and/or used in the project should be
discoverable (and readily located), identifiable by means of a standard
identification mechanism (e.g. Digital Object Identifier).
2. ACCESSIBLE
Information about the modalities, scope, licenses (e.g. licencing framework
for research and education, embargo periods, commercial exploitation, etc.) in
which the data and associated software produced and/or used in the project is
accessible should be provided.
3. ASSESSABLE and INTELLIGIBLE
The data and associated software produced and/or used in the project should be
easily assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. the minimal datasets are handled
together with scientific papers for the purpose of peer review, data is
provided in a way that judgments can be made about their reliability and the
competence of those who created them).
4. USEABLE beyond the original purpose for which it was collected
The data and associated software produced and/or used in the project should be
useable by third parties even a long time after the collection of the data
(e.g. the data is safely stored in certified repositories for long term
preservation and curation; it is stored together with the minimum software,
metadata and documentation to make it useful; the data is useful for the wider
public needs and usable for the likely purposes of non-specialists).
5. INTEROPERABLE to specific quality standards
The data and associated software produced and/or used in the project should be
interoperable allowing data exchange between researchers, institutions,
organisations, countries, etc.
# 7 FISSAC Data Sets
## 7.1 Collection and Management of FISSAC Data Sets
#### Types of data
The types of data to be included within the scope of the FISSAC Data
Management Plan shall, as a minimum, cover the types of data that are considered
complementary to material already contained within declared project
deliverables.
#### Data Collection & Definition
The responsibility to define and describe all non-generic data sets specific
to an individual work package shall be with the WP leader. The WP leader shall
formally review and update the data sets related to his WP on a six-monthly
basis. All modifications/ additions to the data sets shall be provided to the
FISSAC Coordinator (ACCIONA) for inclusion in the DMP, and shall be prepared
in accordance with the metadata capture table template contained in Appendix
2.
##### Table 2: Forecast of FISSAC datasets related to each WP
<table>
<tr>
<th>
**WP num.**
</th>
<th>
**WP name**
</th>
<th>
**WP leader**
</th>
<th>
**Dataset reference**
</th>
<th>
**Dataset name**
</th> </tr>
<tr>
<td>
WP1
</td>
<td>
FROM CURRENT MODELS OF INDUSTRIAL SYMBIOSIS TO A NEW MODEL
</td>
<td>
ACC
</td>
<td>
FISSAC_WP1
</td>
<td>
INDUSTRIAL SYMBIOSIS
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
CLOSED LOOP RECYCLING PROCESSES TO
TRANSFORM WASTE INTO SECONDARY
RAW MATERIALS
</td>
<td>
ACC
</td>
<td>
FISSAC_WP2
</td>
<td>
RECYCLING PROCESSES
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
PRODUCT ECO-DESIGN AND CERTIFICATION
</td>
<td>
SP
</td>
<td>
FISSAC_WP3
</td>
<td>
ECO-DESIGN
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
PRE-INDUSTRIAL SCALE
DEMONSTRATION OF THE RECYCLING
</td>
<td>
TEC
</td>
<td>
FISSAC_WP4
</td>
<td>
PREINDUSTRIAL DEMO
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
INDUSTRIAL PRODUCTION & REAL SCALE DEMONSTRATION
</td>
<td>
ACC
</td>
<td>
FISSAC_WP5
</td>
<td>
REAL SCALE DEMO
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
FISSAC MODEL FOR INDUSTRIAL SYMBIOSIS
</td>
<td>
EKO
</td>
<td>
FISSAC_WP6
</td>
<td>
FISSAC MODEL
</td> </tr>
<tr>
<td>
WP7
</td>
<td>
INDUSTRIAL SYMBIOSIS REPLICABILITY AND SOCIAL ISSUES
</td>
<td>
DAPP
</td>
<td>
FISSAC_WP7
</td>
<td>
REPLICABILITY
</td> </tr>
<tr>
<td>
WP8
</td>
<td>
EXPLOITATION AND BUSINESS MODELS FOR INDUSTRIAL SYMBIOSIS
</td>
<td>
FEN
</td>
<td>
FISSAC_WP8
</td>
<td>
EXPLOITATION
</td> </tr>
<tr>
<td>
WP9
</td>
<td>
DISSEMINATION
</td>
<td>
ACR+
</td>
<td>
FISSAC_WP9
</td>
<td>
DISSEMINATION
</td> </tr>
<tr>
<td>
WP10
</td>
<td>
MANAGEMENT
</td>
<td>
ACC
</td>
<td>
FISSAC_WP10
</td>
<td>
MANAGEMENT
</td> </tr> </table>
**Data set reference and name**
All data sets within this DMP have been given a unique field identifier and
are listed in the table contained in Appendix 1.
#### Data Set Description
A data set is defined as a structured collection of data in a declared format.
Most commonly a data set corresponds to the contents of a single database
table, or a single statistical data matrix, where every column of the table
represents a particular variable, and each row corresponds to a given member
of the data set in question. The data set may comprise data for one or more
fields. For the purposes of this DMP data sets have been defined by generic
data types that are considered applicable to the FISSAC project. For each data
set, the characteristics of the data set have been captured in a tabular
format as enclosed in Appendix 1.
#### Standards & Metadata
Metadata is defined as “data about data”. It is “structured information that
describes, explains, locates, or otherwise makes it easier to retrieve, use, or
manage an information resource”. This is especially relevant in the distributed
data network environment that exists within FISSAC. Metadata shall be
considered the formal means by which data is defined and by which the meaning
of information is established. All data sets generated within the project shall
be defined such that “data about data” is specified.
Metadata can be categorised in three types:
* Descriptive metadata describes an information resource for identification and retrieval through elements such as title, author, and abstract.
* Structural metadata documents relationships within and among objects through elements such as links to other components (e.g., how pages are put together to form chapters).
* Administrative metadata manages information resources through elements such as version number, archiving date, and other technical information for the purposes of file management, rights management and preservation.
There are a large number of metadata standards which address the needs of
particular user communities. More details about these standards can be found
in Annex 3.
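As an illustration of these three categories, the sketch below groups the
metadata of a single hypothetical FISSAC dataset accordingly. The field names
and values are illustrative only, not a prescribed FISSAC schema; the naming
follows the conventions of Appendix 1 and Section 9.

```python
# Illustrative only: metadata for one hypothetical dataset, grouped into
# the three categories described above.
dataset_metadata = {
    "descriptive": {                     # identification and retrieval
        "title": "FISSAC_WP2 recycling process measurements",
        "author": "surname_firstname_secondname",
        "abstract": "Composition of secondary raw material samples ...",
    },
    "structural": {                      # relationships among objects
        "part_of": "FISSAC_WP2",
        "related_files": ["FISSAC_WP02_01_raw measurements_V1_CO1_Y.csv"],
    },
    "administrative": {                  # management and preservation
        "version": "V1",
        "archiving_date": "2017-03-01",
        "dissemination_class": "CO1",
    },
}
```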
#### Data Sharing
During the period when the project is live, the sharing of data shall be
governed by the configuration rules defined in the access profiles for the
project participants as described in the FISSAC Quality Assurance Plan
(D10.2). Each individual project data set item shall be allocated a 3
character “dissemination classification” for the purposes of defining the data
sharing restrictions. The classification shall be an expansion of the system
of confidentiality applied to deliverables reports provided under the FISSAC
Grant Agreement.
PU: Public (data can be shared outside the consortium without restriction)
CO: Confidential, only for members of the consortium (including the Commission
Services)
CI: Classified, as referred to in Commission Decision 2001/844/EC
The three above levels are linked to the “Dissemination Level” specified for
all FISSAC deliverables. All material designated with a PU dissemination level
shall be deemed uncontrolled.
Data will be shared when the related deliverable or paper has been made
available at an open access repository. The normal expectation is that data
related to a publication will be openly shared. However, to allow the
exploitation of any opportunities arising from the raw data and tools, data
sharing will proceed only if all co-authors of the related publication agree.
The Lead author is responsible for getting approvals and then sharing the data
and metadata on Zenodo ( _www.zenodo.org_ ), a popular repository for research
data. The Lead Author will also create an entry on OpenAIRE (
_www.openaire.eu_ ) in order to link the publication to the data.
OpenAIRE is a service that implements the Horizon 2020 Open Access mandate for
publications and its Open Research Data Pilot and may be used to reference
both the publication and the data. A link to the OpenAIRE entry will then be
submitted to the FISSAC Website Administrator (ACR+) by the Lead Author.
_Figure 5: OpenAIRE website_
_Figure 6: ZENODO repository_
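To make the Zenodo step concrete, the following is a minimal sketch (not the
project's prescribed tooling) of creating a deposit through Zenodo's public
REST API. The access token, file name and metadata values are placeholders;
see https://developers.zenodo.org for the authoritative API reference.

```python
import requests

API = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "..."}  # personal token from the Zenodo account settings

# 1. Create an empty deposition.
dep = requests.post(API, params=TOKEN, json={}).json()

# 2. Upload the data file into the deposition's file bucket.
with open("fissac_wp2_dataset.csv", "rb") as fp:        # hypothetical file
    requests.put(f"{dep['links']['bucket']}/fissac_wp2_dataset.csv",
                 data=fp, params=TOKEN)

# 3. Attach discovery metadata; the grant entry lets OpenAIRE link the
#    deposit to the H2020 project (format per the Zenodo documentation).
requests.put(f"{API}/{dep['id']}", params=TOKEN, json={"metadata": {
    "title": "FISSAC WP2 dataset (example)",
    "upload_type": "dataset",
    "description": "Data underlying publication X.",
    "creators": [{"name": "Surname, Firstname"}],
    "grants": [{"id": "642154"}],
}})

# 4. Publishing mints the DOI; do this only once all co-authors agree.
requests.post(f"{API}/{dep['id']}/actions/publish", params=TOKEN)
```

The Lead Author would then reference the minted DOI when creating the OpenAIRE
entry described above.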
#### Data archiving and preservation
Both Zenodo and OpenAIRE are purpose-built services that aim to provide
archiving and preservation of long-tail research data. In addition, the FISSAC
website, linking back to OpenAIRE, is expected to be available for at least 2
years after the end of the project. At the formal project closure all the data
material that has been collated or generated within the project and classified
for archiving shall be copied and transferred to a digital archive.
The document structure and type definition will be preserved as defined in the
document breakdown structure and work package groupings specified. At the time
of document creation the document will be designated as a candidate data item
for future archiving. This process is performed by the use of codification
within the file naming convention (see Section 9). The process of archiving
will be based on a data extract performed within 12 weeks of the formal
closure of the FISSAC project.
The archiving process shall create unique file identifiers by the
concatenation of “metadata” parameters for each data type. The metadata index
structure shall be formatted in the metadata order as listed in Appendix 1.
This index file shall be used as an inventory record of the extracted files,
and shall be validated by the associated WP leader.
# 8 Data Sets Technical Requirements
## 8.1 General requirements
The applicable data sets are restricted to the following data types for the
purposes of archiving. The technical characteristics of each data set are
described in the following sections. The copyright with respect to all data
types shall be subject to the IPR clauses in the GA, but shall be considered to
be royalty free.
## 8.2 Prohibited file types
The use of file compression utilities, such as “WinZip”, is prohibited. No data
files shall be encrypted.
## 8.3 Static Graphical Images
Graphical images shall be defined as any digital image irrespective of the
capture source or subject matter. Images should be composed so as to contain
only objects that are directly related to FISSAC activity and do not breach
IPR of any third parties.
#### Image file formats
Image file formats are the standardised means of organising and storing
digital images. Image files are composed of digital data and can be of two
primary formats, “raster” or “vector”. It is necessary to represent data in the
rasterised state for use on computer displays or for printing. Once
rasterised, an image becomes a grid of pixels, each of which has a number of
bits to designate its colour equal to the colour depth of the device
displaying it. The FISSAC project shall only use raster based image files of
one of the two formats described below and shall be selected based on the
technical needs and the format characteristics described below. The two
allowable static image file formats are JPEG and PNG (detailed description in
Annex 4).
#### Image file sizes & file compression
There is normally a direct positive correlation between image file size and
the number of pixels in the image and the colour depth (bits per pixel) used in
the image. Compression algorithms can create an approximate representation of
the original image in a smaller number of bytes that can be expanded back to
its uncompressed form with a corresponding decompression algorithm.
Considering different compressions, it is common for two images of the same
number of pixels and colour depth to have a very different compressed file
size. With some compression formats, images that are less complex may result
in smaller compressed file sizes. This characteristic sometimes results in a
smaller file size for some lossless formats than lossy formats. Compression
tools shall not be used unless absolutely necessary. A digitally
stored image has no inherent physical dimensions. Some digital file formats
record a DPI value, or more commonly a PPI (pixels per inch) value, which is
to be used when printing the image. This number provides information to
establish the printed image size, or in the case of scanned images, the size
of the original scanned object.
Resolution refers to the number of pixels in an image. Resolution can be
expressed by the width and height of the image as well as the number of pixels
in the image. For example, an image that is 2048 pixels wide and 1536 pixels
high (2048X1536) contains 3,145,728 pixels. As the megapixels in the pickup
device increases so does the possible maximum size image that can be produced.
File size is determined by the number of pixels. The image default sizes and
resolution shall be as shown in Table 3. The image default size shall be A4.
##### Table 3: Image default sizes and resolution
<table>
<tr>
<th>
PPI
</th>
<th>
Pixels
</th>
<th>
mm
</th>
<th>
Paper size
</th>
<th>
Size (Greyscale)
</th>
<th>
Size (RGB)
</th> </tr>
<tr>
<td>
300
</td>
<td>
11114x14008
</td>
<td>
840x1186
</td>
<td>
A0
</td>
<td>
155.7MB
</td>
<td>
467MB
</td> </tr>
<tr>
<td>
300
</td>
<td>
7016x11114
</td>
<td>
594x840
</td>
<td>
A1
</td>
<td>
78MB
</td>
<td>
234MB
</td> </tr>
<tr>
<td>
300
</td>
<td>
4961x7016
</td>
<td>
420x594
</td>
<td>
A2
</td>
<td>
34.8M
</td>
<td>
104.4MB
</td> </tr>
<tr>
<td>
300
</td>
<td>
3508x4961
</td>
<td>
297x420
</td>
<td>
A3
</td>
<td>
17.4MB
</td>
<td>
52.2MB
</td> </tr>
<tr>
<td>
300
</td>
<td>
2480x3508
</td>
<td>
210x297
</td>
<td>
A4
</td>
<td>
8.7MB
</td>
<td>
26.1MB
</td> </tr>
<tr>
<td>
300
</td>
<td>
1748x2480
</td>
<td>
148x210
</td>
<td>
A5
</td>
<td>
4.3MB
</td>
<td>
13MB
</td> </tr>
<tr>
<td>
300
</td>
<td>
1240x1748
</td>
<td>
105x148
</td>
<td>
A6
</td>
<td>
2.2MB
</td>
<td>
6.5MB
</td> </tr>
<tr>
<td>
300
</td>
<td>
874x1240
</td>
<td>
74x105
</td>
<td>
A7
</td>
<td>
1.08MB
</td>
<td>
3.25MB
</td> </tr>
<tr>
<td>
300
</td>
<td>
614x874
</td>
<td>
52x74
</td>
<td>
A8
</td>
<td>
0.54MB
</td>
<td>
1.6MB
</td> </tr> </table>
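The uncompressed sizes in Table 3 follow directly from the pixel count
multiplied by the bytes per pixel (1 for 8-bit greyscale, 3 for 24-bit RGB,
with 1 MB taken as 10^6 bytes). The following sketch reproduces the A4 row:

```python
# Quick check of the A4 row of Table 3: uncompressed size is simply
# pixel count x bytes per pixel.
PPI = 300
width_mm, height_mm = 210, 297                    # A4 paper
MM_PER_INCH = 25.4

width_px = round(width_mm / MM_PER_INCH * PPI)    # -> 2480
height_px = round(height_mm / MM_PER_INCH * PPI)  # -> 3508
pixels = width_px * height_px                     # -> 8,699,840

print(f"{width_px}x{height_px} pixels")
print(f"greyscale (1 byte/px): {pixels * 1 / 1e6:.1f} MB")  # ~8.7 MB
print(f"RGB (3 bytes/px):      {pixels * 3 / 1e6:.1f} MB")  # ~26.1 MB
```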
## 8.4 Animated graphical image
Graphic animation is a variation of stop motion, possibly more conceptually
associated with traditional flat cel animation and paper drawing animation, but
still technically qualifying as stop motion: it consists of the animation of
photographs (in whole or in parts) and other non-drawn flat visual graphic
material. The two allowable animated graphical image file formats are AVI and
MPEG (detailed description in Annex 4). The WP leader shall determine the most
suitable choice of format based on equipment availability and any other
factors.
## 8.5 Audio data
An audio file format is a file format for storing digital audio data on a
computer system. The bit layout of the audio data (excluding metadata) is
called the audio coding format and can be uncompressed, or compressed to
reduce the file size, often using lossy compression. The data can be a raw
bitstream in an audio coding format, but it is usually embedded in a container
format or an audio data format with a defined storage layer. A detailed
description of audio data types is given in Annex 4.
## 8.6 Textual data
A text file is structured as a sequence of lines of electronic text. These text
files shall not contain any control characters, including an end-of-file
marker. In principle, the least complicated form of textual file format shall
be used as the first choice. A detailed description of textual data types is
given in Annex 4.
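The following is a minimal sketch of how the control-character rule above could
be checked, assuming UTF-8 encoded files; the whitelist of ordinary line
endings (and a tab, if desired) would be adjusted to the project's actual
needs.

```python
# Reject a text file if it contains control characters. Ordinary line
# endings and tabs are whitelisted here; adjust to the project's needs.
ALLOWED = {"\n", "\r", "\t"}

def has_control_chars(path: str) -> bool:
    with open(path, encoding="utf-8") as fp:
        text = fp.read()
    return any((ord(ch) < 32 or ord(ch) == 127) and ch not in ALLOWED
               for ch in text)
```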
## 8.7 Numeric data
Numerical data is information that often represents a measured physical
parameter. It shall always be captured in number form. Other types of data can
appear to be in number form (e.g. a telephone number); however, these should
not be confused with true numerical data, which can be processed using
mathematical operators.
## 8.8 Process and test data
Standard Test Data Format (STDF) is a proprietary file format originating
within the semiconductor industry for test information, but it is now a
standard widely used throughout many industries. It is a commonly used format
produced for/by automatic test equipment (ATE). STDF is a binary format, but
can be converted either to an ASCII format known as ATDF or to a tab delimited
text file. Software tools exist for processing STDF generated files and
performing statistical analysis on a population of tested devices. FISSAC
innovation development shall make use of this file type for system testing.
## 8.9 Microsoft Office Application Suite
FISSAC participants shall use the currently supported Microsoft operating
system and convert from any previous obsolete releases.
#### Microsoft Office Application Data files
The types of specific applications available within the current Microsoft
Windows operating system shall be used to support all project activities in
preference to any other software solutions. The data file types associated
with these applications shall be saved in the default format and be in
accordance with the file naming convention as specified in Section 9.
#### Microsoft Office Configuration
At the Microsoft Office Application level the “file properties” shall be
configured using the “document properties” feature. This is accessed via the
“Info” dropdown within the “File” menu. The “properties” and “advanced
properties” present a data entry box under the “Summary” as shown in Figure 4.
<table>
<tr>
<th>
**Field**
</th>
<th>
**Content**
</th> </tr>
<tr>
<td>
Title:
</td>
<td>
Duplication of the name used for the data file name
</td> </tr>
<tr>
<td>
Subject:
</td>
<td>
Identifier for FISSAC work package discrimination and shall be of the
following format FISSAC_WPxx where xx is the work package number in the range
01 to 10.
</td> </tr>
<tr>
<td>
Author:
</td>
<td>
Name of the person creating the document and be formatted to have the surname
stated first as follows: surname_firstname_secondname
</td> </tr>
<tr>
<td>
Manager:
</td>
<td>
Name of the author’s immediate line manager and be formatted to have the
surname stated first as follows: surname_firstname_secondname
</td> </tr>
<tr>
<td>
Company:
</td>
<td>
Company name of the author to be stated as follows: companyname_FISSAC
participant number
</td> </tr>
<tr>
<td>
Keywords:
</td>
<td>
Free format text and should contain key words that would be relevant and
useful to future data searches. The keywords should all be in lower case and
separated with commas
</td> </tr> </table>
Comments: Description of file contents in free format text.

Hyperlink base: Blank.

The tickbox indicating “Save Thumbnails for All Word Documents” shall be left
unticked.
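Where these properties need to be set on many files, they can also be written
programmatically. The sketch below uses the third-party python-docx package
(not mandated by this plan) and a hypothetical file name; note that python-docx
exposes only the core document properties, so the Manager and Company fields,
which live in the extended application properties, would need separate
handling.

```python
# A sketch of filling in the properties above for a .docx file using
# python-docx. File name and values are hypothetical.
from docx import Document

PATH = "FISSAC_WP10_03_data management plan_V1_CO1_Y.docx"  # hypothetical
doc = Document(PATH)
props = doc.core_properties
props.title = "FISSAC_WP10_03_data management plan_V1_CO1_Y"
props.subject = "FISSAC_WP10"
props.author = "surname_firstname_secondname"
props.keywords = "data management,metadata,archiving"
props.comments = "First release of the FISSAC Data Management Plan."
doc.save(PATH)
```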
# 9 Naming Convention
All files irrespective of the data type shall be named in accordance with the
following document file naming convention:
FISSAC_Dx.x_Deliverable short title_Px_yyyymmdd_Status
“Dx.x”: Deliverable number according to the DoA
“Px”: Lead beneficiary number
“yyyymmdd”: Year/month/day
“Status”: Short name of the last reviewer (beneficiary short name)
_Example: FISSAC_D10.2_Quality Plan_P1_20150511_Acc_
Appendix files will be referred to the main document according to the
following rule:
FISSAC_Dx.x_Deliverable short title_Appx_Px_yyyymmdd_Status
Where “Appx” is the Appendix letter
_Example: FISSAC_D10.2_Quality Plan_AppA_P1_20150511_Acc_
When the document has been approved by the EC, the status in the file name
will be changed to “Final”, and a copy of the file in PDF format will be
uploaded to the webpage.
More generally, the file naming convention for data items consists of the following seven sections:
[PROJECT]_[WORKPACKAGE]_[TASK]_[TITLE]_[VERSION]_[DISSEMINATIONCLASS]_[ARCHIVE]
Where:
* [PROJECT] is FISSAC for all document types;
* [WORKPACKAGE] is the FISSAC project work package number, with WP as a prefix;
* [TASK] is the FISSAC project task number; this is two digits, where numbers less than 10 have a leading zero;
* [TITLE] represents the description of the data item contents excluding capitalisation and punctuation characters;
* [VERSION] is the version number consisting of integer numbers only without leading zeros, prefixed with V;
* [DISSEMINATIONCLASS] is the dissemination classification allocated to a document type that define the data access post archiving, consists of the characters CO and a suffix of a single number in the range 1 to 3;
* [ARCHIVE] is a single character defining the allocation of the data item for future archiving, represented by Y or N.
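For illustration, the convention can be checked mechanically. The following Python sketch is illustrative only; in particular, the [TITLE] pattern (lower-case words without punctuation) is an interpretation of “excluding capitalisation and punctuation characters”, and the example names are made up:

```python
# A minimal sketch of a validator for the seven-section naming convention.
import re

NAME_RE = re.compile(
    r"^FISSAC"            # [PROJECT]
    r"_WP\d{2}"           # [WORKPACKAGE], e.g. WP03
    r"_\d{2}"             # [TASK], two digits with leading zero
    r"_[a-z0-9 ]+"        # [TITLE], no capitals or punctuation (assumption)
    r"_V[1-9]\d*"         # [VERSION], integer without leading zeros
    r"_CO[1-3]"           # [DISSEMINATIONCLASS], CO plus a digit 1-3
    r"_[YN]$"             # [ARCHIVE], Y or N
)

def is_valid_item_name(name: str) -> bool:
    """Return True if `name` follows the seven-section convention."""
    return NAME_RE.match(name) is not None

assert is_valid_item_name("FISSAC_WP03_02_material flows register_V2_CO1_Y")
assert not is_valid_item_name("FISSAC_WP3_02_Material Flows_V02_CO4_X")
```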
# 10 Conclusions
This report contains the first release of the Data Management Plan (DMP) and
represents the status of the mandatory quality requirements at the time of
deliverable D10.3.
This first version of the DMP establishes the measures for promoting the
findings during the project's life. The DMP enhances and ensures the
transferability of relevant project information and takes into account the
restrictions established by the Consortium Agreement. In this framework, the
DMP sets the basis for both the Dissemination Plan and the Exploitation Plan.
The first version of the DMP is delivered at M6; afterwards, the DMP will be
monitored and updated in parallel with the successive versions of the
Dissemination and Exploitation Plans (the progress of the implementation of
the DMP will be included in the Project Progress Reports at M18 and M36).
This report should be read in association with all the referenced documents
and appendix material, including the EC Grant Agreement/Consortium Agreement,
annexes and guidelines.
0031_Residue2Heat_654650.md
# Introduction
## Scope
The _Residue2Heat_ Data Management Plan (DMP) constitutes one of the outputs
of the work package on dissemination, communication and exploitation,
dedicated to raising awareness of and promoting the project and its related
results and achievements. The present deliverable is prepared at an early
project stage (Month 12) in order to put a data management strategy in place
from the project onset. It is also envisaged that the Data Management Plan
will be implemented during the entire project lifetime and updated on a yearly
basis.
The main focus of the _Residue2Heat_ data management framework is to ensure
that the project’s generated and gathered data can be preserved, exploited and
shared for verification or reuse in a consistent manner. The main purpose of
the Data Management Plan (DMP) is to describe _Research Data_ with the
metadata attached to make them _discoverable_ , _accessible_ , _assessable_ ,
_usable beyond the original purpose_ and _exchangeable_ between researchers.
Research data is defined in the “Guidelines on Open Access to Scientific
Publication and Research Data in Horizon 2020” (2015) as:
“ _Research data_ refers to information, in particular facts or numbers,
collected to be examined and considered and as a basis for reasoning,
discussion, or calculation. In a research context, examples of data include
statistics, results of experiments, measurements, observations resulting from
fieldwork, survey results, interview recordings and images. The focus is on
research data that is available in digital form."
According to the EC documentation 1 on data management in H2020, aspects such
as research data access, sharing and security should also be addressed in the
DMP. This document has been produced following these guidelines and aims to
provide a policy for the project partners to follow.
## Objectives
The generated and gathered research data need to be preserved in line with the
EC requirements. They play a crucial role in the exploitation and verification
of the research results and should be effectively managed. This Data
Management Plan (DMP) aims at providing timely insight into the facilities and
expertise necessary for data management both during and after the project, to
be used by all _Residue2Heat_ partners.
The most important reasons for setting up this DMP are:
* Embedding the _Residue2Heat_ project in the EU policy on data management. The rationale is that the Horizon 2020 grant consists of public money and therefore the data should be accessible to other researchers;
* Enabling verification of the research results of the _Residue2Heat_ project;
* Stimulating the reuse of _Residue2Heat_ data by other researchers;
* Enabling the sustainable and secure storage of _Residue2Heat_ data in repositories;
This second version of the Data Management Plan is submitted to the EU in
December 2016. It is important to note, however, that the document will evolve
and further develop during the project’s life cycle. It can be identified by a
version number and a date. Updated versions will be uploaded by project
partner OWI, which is primarily responsible for data management.
# Findable, accessible interoperable and reusable (FAIR) data
This document takes into account the latest “Guidelines on FAIR Data
Management in Horizon 2020”. The _Residue2Heat_ project partners should make
their research data **findable, accessible, interoperable and reusable** (
**FAIR** ) and ensure that it is soundly managed. Good research data management
is not a goal in itself, but rather the key conduit leading to knowledge
discovery and innovation, and to subsequent data and knowledge integration and
reuse 1 .
## Data Management Plan
Data Management Plans (DMPs) are a key element of good data management. A DMP
describes the data management life cycle for the data to be collected,
processed and/or generated by a Horizon 2020 project. As part of making
research data findable, accessible, interoperable and re-usable (FAIR), a DMP
should include information on 2 :
* the handling of research data during and after the end of the project;
* what data will be collected, processed and/or generated;
* which methodology and standards will be applied;
* whether data will be shared/made open access;
* how data will be curated and preserved.
# Residue2Heat implementation of FAIR Data
## Data Summary
It is a well-known phenomenon that the amount of data is increasing while the
use and re-use of data to derive new scientific findings is more or less
stable. This does not imply that the data currently unused are useless; they
can be of great value in the future. The prerequisite for meaningful use,
re-use or recombination of data is that they are well documented according to
accepted and trusted standards. Those standards form a key pillar of science
because they enable the recognition of suitable data. To ensure this,
agreements on standards, quality levels and sharing practices have to be
defined. Strategies have to be established to preserve and store the data over
a defined period of time in order to ensure their availability and
re-usability after the end of the _Residue2Heat_ project.
Data considered for open access would include items such as fuel properties,
energy flows and balances, modelling calculations, etc. For example, the
consortium expects that the following data will be obtained and made
available:
* Physico-chemical characterization of FPBO from different biomass resources (WP3);
* Data underpinning the mass- and energy balances for fast pyrolysis (WP6);
* Emission data and actual measurements obtained during combustion of FPBO (WP5);
* Data on combustion reaction mechanism modelling (WP4);
* Data on spray modelling (WP4);
* Background data on screening LCA calculations (WP6).
The data will be documented in 4 types of datasets:
1. **Core datasets** – datasets related to the main project activities.
2. **Produced datasets** – datasets resulting from _Residue2Heat_ applications, e.g. sensor data.
3. **Project related datasets** – datasets resulting from the documentation of the progress of the _Residue2Heat_ project. They are a collection of deliverables, dissemination material, training material and scientific publications.
4. **Software related datasets** – datasets resulting from the development of the combustion reaction mechanisms. These can be used for various purposes in the combustion area including research tasks or the development of new appliances.
Generally, the datasets will be stored in file formats which have a high
chance of remaining usable far into the future (see Annex 1). In particular,
the datasets which will be made available for open access will be stored in
these selected file formats. In principle, the OpenAIRE 3 platform is selected
to ensure open access to the datasets, persistent identifiers, data discovery
and long-term preservation of the data. The open access data is useful for
different stakeholder groups, from the scientific community and industry to
socioeconomic actors. For example:
* **Industry and potential end users of the residential heating systems.** To implement FPBO residential heaters in society, the potential end users need to be aware of their options. The end users will have certain demands, such as cost and comfort levels, which the industry needs to accommodate. This will be addressed by the datasets generated in WP6 and WP7.
* **Social and Environmental impacts of the _Residue2Heat_ value chain on the population.** The proposed value chain has the potential to influence the daily life of many EU residents, not only in heating their homes, but also in terms of environmental impact, social aspects such as job security, and the economic development of rural communities. The positive (and, if present, negative) effects will be documented in WP6.
* **Social and Environmental impacts of the _Residue2Heat_ value chain on the Regulatory Framework.** To allow commercial use of FPBO in residential heating systems, both the fuel and the heating systems need to comply with numerous regulations. Examples are CE certification of the heating system (EU), an EN standard for FPBO, emission limits for both FPBO production and the heating system (National), and local development plans that need to accommodate the construction and operation of the FPBO production plant (Regional). In WP6 the regulatory framework on the different levels will be documented.
## FAIR Data
### Making data findable, including provisions for metadata
In order to support the discoverability of data, the OpenAIRE platform has
been selected. This platform supports multiple unique identifier schemes (DOI,
arXiv, ISBN, ISSN, etc.) which remain persistent for a long time. It is
currently being tested how this platform can best support the Residue2Heat
project. This requires additional documentation of best practices with respect
to the items below; a minimal illustration of such a metadata record follows
the list:
* the discoverability of data (metadata provision);
* the identifiability of data, referring to standard identification mechanisms such as persistent and unique identifiers (e.g. Digital Object Identifiers);
* the naming conventions used;
* the approach towards search keywords;
* the approach for clear versioning;
* the specification of standards for metadata creation (if any); if no relevant standards are available, the type of metadata to be created and how it will be created will be documented.
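The sketch below gives a minimal, hypothetical example of such a discovery-metadata record in Python. The field names and values are illustrative only, not the project's actual schema; a real record would follow a standard such as DataCite or Dublin Core:

```python
# A minimal, hypothetical discovery-metadata record covering the bullet
# points above: a persistent identifier, naming, keywords and versioning.
# All field names and values are illustrative, not the project's schema.
import json

record = {
    "identifier": {"scheme": "DOI", "value": "10.0000/example-placeholder"},
    "title": "Physico-chemical characterization of FPBO (WP3)",
    "creator": "Residue2Heat consortium",
    "keywords": ["fast pyrolysis", "bio-oil", "fuel properties"],
    "version": "1.0",
    "publicationYear": 2016,
}
print(json.dumps(record, indent=2))
```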
### Making data openly accessible
The consortium will store the research data in a format which is suited for
long-time preservation and accessibility. To prevent file format obsolescence,
some precautions have been taken. One such measure is to select file formats
which have a high chance of remaining usable in the far future (see Annex 1).
Furthermore, in a future update of this deliverable the following issues will
be addressed (a simple format check is sketched after the list):
* Specification of the data which will be made openly available; if some data is kept closed, a rationale for doing so will be given;
* Specification of where the data and associated metadata, documentation and code are deposited;
* Specification of how access will be provided in case there are any restrictions.
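The following Python sketch illustrates such a format check. The whitelist shown is purely illustrative and does not reproduce the actual Annex 1 list, and the folder name is hypothetical:

```python
# A minimal sketch: flag files whose format is not on a preferred-format
# whitelist before deposit. The extensions below are assumed examples of
# long-term-usable formats, not the actual Annex 1 list.
from pathlib import Path

PREFERRED = {".csv", ".txt", ".pdf", ".xml", ".json"}

def files_needing_conversion(folder):
    """Return all files under `folder` whose extension is not preferred."""
    return [p for p in Path(folder).rglob("*")
            if p.is_file() and p.suffix.lower() not in PREFERRED]

for path in files_needing_conversion("dataset_upload"):   # hypothetical folder
    print("convert before deposit:", path)
```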
### Making data interoperable
In order to support the interoperability of the Residue2Heat project data, a
list of standards and metadata vocabularies needs to be defined. Additionally,
it will be checked whether the data types present in our dataset allow
inter-disciplinary interoperability. If necessary, mappings to more common
ontologies will be made available. The present version of the Data Management
Plan does not include the actual metadata about the data being produced in the
_Residue2Heat_ project. Access to this project-related metadata will be
provided in an updated version of the DMP.
### Increase data re-use
In order to support data re-use, the data will be released under a proper
licence to permit the widest re-use possible. Most likely the best licences to
publish under will be the Creative Commons licences 4 . Other items which have
to be addressed are:
* when data will become available for re-use. If applicable, it is mentioned whether a data embargo is necessary;
* the data produced and/or used in the project which is useable by third parties, in particular after the end of the project is listed. If the re-use of some data is restricted, it is explained why this is necessary;
* data quality assurance processes;
* the length of time for which the data will remain re-usable.
## Allocation of resources
The lead for this data management task lies with OWI, with RWTH as co-lead,
though all partners are involved in complying with the DMP. The partners
deliver datasets and metadata produced or collected in Residue2Heat according
to the rules described in Annex 1. The project coordinator and in particular
the Technical Coordinator are central players in the implementation of the DMP
and will track compliance with the rules as documented in this DMP. The
Residue2Heat project partners have covered the costs for making data FAIR in
their budget estimations. The long-term preservation of datasets has been
secured via our internal communication platform EMDESK for up to eight years
after the project is finished.
## Data security
In this project various types of experimental and numerical data will be
generated. The raw data will be stored by each partner according to their own
standard procedures for a minimum of ten years after the end of the project.
The processed data will become available in the form of project reports and
open access publications. This data will be further exploited in webinars,
articles in professional journals, and conference presentations. The OpenAIRE
platform 5 has been selected for secure long-term storage of and access to
these datasets. The research data used for communication, dissemination and
exploitation will also be stored on the internal communication platform EMDESK
( _http://www.emdesk.com_ ) for up to 8 years after the project is finished.
This internal platform is only accessible to the project partners. Access to
research data which is not marked as confidential will be granted via a
repository.
### Rights to access and re-use research data
Open access to research data refers to the right to access and re-use digital
research data under the terms and conditions set out in the Grant Agreement.
Openly accessible research data can typically be accessed, mined, exploited,
reproduced and disseminated free of charge for the user.
The present data management plan builds on the proposed Consortium Agreement
of the _Residue2Heat_ partnership. The Consortium Agreement describes general
rules for how data will be shared and/or made open, how it will be curated and
preserved, and the proper licences to publish under, e.g. a Creative Commons
licence. In an updated version of this DMP the rights to access and re-use
research data will be documented in detail.
## Ethical aspects and Legal Compliance
The legal compliance related to copyright, intellectual property rights and
exploitation has been agreed on in the Consortium Agreement, which is also
applicable to access to research data. It is unlikely that the _Residue2Heat_
project will produce research which is sensitive to personal and ethical
concerns.
# Conclusions
This Data Management Plan (DMP) is focussed on the support of use and re-use
of research data to validate or derive new scientific findings. The
prerequisite for meaningful use, re-use or recombination of data is that they
are well documented according to accepted and trusted standards. Those
standards form a key pillar of science because they enable the recognition of
suitable data. To ensure this, agreements on standards, accessibility and
sharing practices have been defined. Strategies have to be fixed to preserve
and store the data over a defined period of time in order to ensure their
availability and re-usability after the end of _Residue2Heat_ . In particular,
the metadata vocabularies and the licences to permit the widest reuse possible
need to be addressed in more detail in a future update of this deliverable.
0032_MARISA_740698.md
# 1. Introduction
This document is developed as part of MARISA (MARitime Integrated Surveillance
Awareness) project, which has received funding from the European Union’s
Horizon 2020 Research and Innovation program, under the Grant Agreement number
740698.
The Project Management, Quality and Risk Plan corresponds to Deliverable 1.1
of Work Package 1 (WP1) – Project Management & Coordination. WP1 will ensure
an optimal coordination and management of the MARISA Project, guaranteeing
effective implementation of the project activities in respect of technical
progress, finance, contracts and administration. The specific objectives of
WP1 include:
* create the effective management structure on the basis of the principles included in the Grant and Consortium Agreements, for the whole duration of the project (committees, quality plans, procedures, risk register, project management tools, etc.);
* ensure that the procedures are followed and changed if required during the project lifetime in order to ensure successful completion of tasks, deliverables and achievement of the milestones;
* oversee the progress of work packages, monitor deliverables, and resolve any risks and unforeseen issues;
* maintain contacts with the European Commission;
* chair the Executive Board, Technical Board and the Advisory Board;
* manage the innovation in MARISA ensuring the coherence between technology and exploitation dimensions.
## 1.1. Purpose of the document
The D1.1 Project Management, Quality and Risk Plan provides an organized and
harmonized set of practical guidelines, procedures and support documents that
shall be used for optimizing the project implementation. It will be kept up to
date as needed throughout the project lifecycle.
This document is to be used by all partners to efficiently develop their
individual and collective activities and contribute to the global objective of
the project.
## 1.2. Reference documents
[GA] Grant Agreement-740698-MARISA.pdf
The Grant Agreement is the contract concluded between the EC (representing the
EU) and the beneficiaries under which the parties receive the rights and
obligations (e.g. the right of the Union's financial contribution and the
obligation to carry out the research and development work). The Grant
Agreement consists of the basic text and annexes, including Annex 1–
Description of the action (DoA) - part A and part B.
The DoA (Annex 1 part A) is also a key document to be taken into account given
that it compiles a specific description of the tasks that will be carried out
along the project and the expected results, deliverables and milestones to be
obtained.
This D1.1 Project Management, Quality and Risk Plan is a supporting document
to the [GA] and the DoA (Annex 1 part A) and contains extracts from both the
documents where appropriate to ensure a full definition of the management
processes and procedures to ensure that the project is delivered successfully.
[CA] Consortium Agreement
The Consortium Agreement is the internal agreement signed between the members
of the consortium establishing their rights and obligations with respect to
the implementation of the action in compliance with the grant agreement.
This D1.1 Project Management, Quality and Risk Plan is a supporting document
to the [CA] intended to provide more detailed processes and procedures to
ensure that the project is delivered successfully.
## 1.3. Applicability
The D1.1 Project Management, Quality and Risk Plan is a reference document in
the MARISA project.
From the start of the project to its end, it is applicable to all partners,
and is expected to remain stable. However, any changes will be agreed by the
Executive Board (EB) and included in a revised version.
In the unlikely event of a conflict between the D1.1 Project Management,
Quality and Risk Plan and other documents such as the Description of Work or
the Grant Agreement, they will prevail in the following order:
1. [GA] Grant Agreement including all Annexes;
2. [CA] Consortium Agreement;
3. [D1.1] D1.1 Project Management, Quality and Risk Plan (this Document).
The latter documents will have to be modified to remain consistent with the
former. This is especially mandatory for issues regulated by either the [GA]
or [CA] documents.
## 1.4. Definitions
The following table reports some overall definitions, in order to clarify the
concepts tied to certain terms used in this document.
<table>
<tr>
<th>
</th>
<th>
**DEFINITIONS**
</th> </tr>
<tr>
<td>
EUCISE2020
</td>
<td>
EUropean test bed for the maritime Common Information Sharing Environment in
the 2020 perspective. EUCISE2020 is a Security Research project of the
European Seventh Framework Program; it aims at achieving the pre-operational
Information Sharing between the maritime authorities of the European States.
</td> </tr>
<tr>
<td>
CISE
</td>
<td>
CISE is the Common Information Sharing Environment for the Maritime Domain. It
will integrate existing surveillance systems and networks and give to all the
relevant authorities (EU and national authorities responsible for different
aspects of surveillance) concerned access to the information they need for
their missions at sea. The CISE will make different systems interoperable so
that data and other information can be exchanged easily through the use of
modern technologies.
</td> </tr>
<tr>
<td>
MARISA
Toolkit
</td>
<td>
In order to foster faster detection of new events, better informed decision
making and the achievement of a joint understanding of a situation across
borders, the MARISA toolkit will provide a suite of services to correlate and
fuse various heterogeneous and homogeneous data and information from different
sources, including the Internet and social networks.
</td> </tr>
<tr>
<td>
Data Fusion
</td>
<td>
The process of integrating multiple data sources to produce more consistent,
accurate, and useful information than that provided by any individual data
source. It is analogous to the ongoing cognitive process used by humans, who
continually integrate data from their senses to make inferences about the
external world.
</td> </tr>
<tr>
<td>
Legacy Systems
</td>
<td>
The previously existing Maritime Surveillance systems in the National/Regional
Coordination Centers or Coastal Stations with which the MARISA Toolkit must
establish some form of communication.
</td> </tr>
<tr>
<td>
Maritime
Surveillance
</td>
<td>
The set of activities aimed at understanding, preventing wherever applicable
and managing in a comprehensive way all the events and actions relative to the
maritime domain that could impact the areas of maritime safety and security,
law enforcement, defense, border control, protection of the maritime
environment, fisheries control, trade and economic interest of the EU.
</td> </tr> </table>
Table 1: Definitions
## 1.5. Structure of the document
The document is structured as follows:
* Chapter 1 **Introduction** : it gives an introduction to the overall document, providing also useful information about the applicability of the present document.
* Chapter 2 **System Overview** : this chapter gives an overview of the MARISA system that will be implemented as the objective of the project.
* Chapter 3 **Project Organization** : this chapter presents the project management structure and the associated roles.
* Chapter 4 **Project Control and Monitoring** : in this chapter the processes set up for scheduling and schedule control, the management of reviews and meetings, and the collaboration tools are described.
* Chapter 5 **Configuration Management** : this chapter presents the project rules for configuration and data management.
* Chapter 6 **Quality Plan** : it presents the quality control aspects.
* Chapter 7 **Risk Management Plan** : this chapter presents the risk and opportunity management rules.
# 2. System Overview
One of the main MARISA objectives is to foster faster detection of new events,
better informed decision making and the achievement of a joint understanding
of a situation across borders, allowing seamless cooperation between operating
authorities and on-site / at-sea / in-air intervention forces.
The proposed solution is a toolkit that provides a suite of services to
correlate and fuse various heterogeneous and homogeneous data and information
from different sources, including the Internet and social networks. MARISA
also aims to build on the huge opportunity that comes from open access to
“big data” for maritime surveillance: large to very large amounts of data are
available from various sources, ranging from sensors and satellites to open
and internal sources, and extracting value-added information from them through
advanced correlation improves knowledge.
The MARISA toolkit provides new means for exploiting the data held in bulky
information silos, leveraging the fusion of heterogeneous sector data and
benefiting from seamless semantic interoperability with the existing legacy
solutions available across Europe.
In this regard the CISE data model and the EUCISE2020 services will be
exploited, combined with the expertise of consortium members in creating
security-intelligence knowledge from a wide variety of sources. A minimal
illustration of the core data fusion idea is sketched below.
# 3. Project Organization
## 3.1. Consortium Members
<table>
<tr>
<th>
**No.**
</th>
<th>
**Participant Organization Name**
</th>
<th>
**Country**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Leonardo S.p.A. (LDO)
</td>
<td>
IT
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Engineering Ingegneria Informatica S.p.A. (ENG)
</td>
<td>
IT
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
GMV Aerospace & Defence S.A.U. (GMV)
</td>
<td>
ES
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Airbus Defence and Space SAS (ADS)
</td>
<td>
FR
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
e-GEOS S.p.A (EG)
</td>
<td>
IT
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
PLATH GmbH (PLA)
</td>
<td>
DE
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
SATWAYS - Proionta Kai Ypiresies Tilematikis Diktyakon Kai Tilepikinoniakon
Efarmogon Etairia Periorismenis Efthinis Epe (STW)
</td>
<td>
GR
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
Inovaworks II Command and Control (IW)
</td>
<td>
PT
</td> </tr>
<tr>
<td>
**9**
</td>
<td>
Aster S.p.A. (AST)
</td>
<td>
IT
</td> </tr>
<tr>
<td>
**10**
</td>
<td>
Luciad NV (LUC)
</td>
<td>
BE
</td> </tr>
<tr>
<td>
**11**
</td>
<td>
INOV Inesc Inovação (INOV)
</td>
<td>
PT
</td> </tr>
<tr>
<td>
**12**
</td>
<td>
Nederlandse Organisatie voor toegepast-natuurwetenschappelijk onderzoek (TNO)
</td>
<td>
NL
</td> </tr>
<tr>
<td>
**13**
</td>
<td>
Fraunhofer Gesellschaft zur Förderung der angewandten Forschung e.V. (IOSB)
</td>
<td>
DE
</td> </tr>
<tr>
<td>
**14**
</td>
<td>
NATO STO – Centre for Maritime Research and Experimentation (CMRE)
</td>
<td>
BE
</td> </tr>
<tr>
<td>
**15**
</td>
<td>
Toulon Var Technologies (PMM)
</td>
<td>
FR
</td> </tr>
<tr>
<td>
**16**
</td>
<td>
Laurea University of Applied Sciences (LAU)
</td>
<td>
FI
</td> </tr>
<tr>
<td>
**17**
</td>
<td>
Alma Mater Studiorum University of Bologna (UNIBO)
</td>
<td>
IT
</td> </tr>
<tr>
<td>
**18**
</td>
<td>
Ministry of National Defence Greece (HMOD)
</td>
<td>
GR
</td> </tr>
<tr>
<td>
**19**
</td>
<td>
Netherlands Coastguard (NCG)
</td>
<td>
NL
</td> </tr>
<tr>
<td>
**20**
</td>
<td>
Guardia Civil (GUCI)
</td>
<td>
ES
</td> </tr>
<tr>
<td>
**21**
</td>
<td>
Italian Navy (ITN)
</td>
<td>
IT
</td> </tr>
<tr>
<td>
**22**
</td>
<td>
Portuguese Navy (PN)
</td>
<td>
PT
</td> </tr> </table>
Table 3: MARISA Consortium
## 3.2. Management Structure
Although MARISA is a project with a large number of partners, we have opted
for a lean and mean organization, since the majority of the partners have
already worked closely together in other successful EU-funded projects and
joint research activities. An overview of the proposed organisational
structure is shown in Figure 1.
Figure 1: MARISA Organization Chart
## 3.3. Project Bodies and Main Roles
### 3.3.1. Project Management and Coordination
The Project Coordinator (PC) [Francesco Cazzato, LDO] is the operative
coordinator of the project and the intermediary between the consortium and the
European Commission; the PC coordinates the Executive Board (EB), the
Technical Board (TB) and the Advisory Board (AB). The PC is responsible for
the overall contractual, ethical, financial and administrative management of
the project. This also includes the supervision of the technical activities of
the work package leaders (WPLs). The PC is the sole contact person for the
project with the EC and will ensure the punctual delivery of reports and
deliverables, the liaison with the End Users/Advisory Board and with the
partners, and the chairing of the EB and the TB. He promotes the participation
of female researchers in the project and approves all publications and
deliverables.
The Project Manager (PM) [Valeria Fontana, LDO], with related experience in
managing research projects, supports the PC and EB with the following tasks:
* prepares the meeting agendas, organizes locations and schedules, and handles all activities related to helping the PC with the organization of the project;
* monitors deadlines, time and budget, and issues early warnings in case certain limits are reached;
* monitors all the deliverables and reports on quality aspects;
* provides appropriate reporting, addressing questions, problems and issues to be solved inside the EB meetings;
* manages the risk plan.
The Financial Officer (FO) [Alessandro Ambrosetti, LDO] supports the PC and
EB with the following tasks:
* distributes the payments received from the Commission to the Partners, according to the rules defined in the [GA] and [CA];
* receives the Partners’ reports on the person efforts and expenditures related to each six-month period;
* supports the Project Coordinator with any amendment that may occur to the [GA] and [CA];
* produces and monitors the Annual Financial Statements.
### 3.3.2. Executive Board
The Executive Board (EB) is the decision-making body and has the highest level of
authority in the project. The EB is chaired by the Project Coordinator. The EB
can authorize the PC to defer to the Commission for specific decisions in case
a specific question cannot be resolved internally. Decisions within the EB
will be taken ideally by consensus, but if necessary, by a two-thirds majority
vote. In general, the EB meets in person at least once every six months. In
addition to face-to-face meetings, the EB may hold teleconference meetings to
discuss project progress and to make decisions and take action where
appropriate. Specific tasks of the EB are:
* Being responsible for all the aspects of the project, including to review progress against the defined deliverables and timetables and propose corrective actions whenever necessary;
* Deciding upon the eventual reallocation of the projects’ budget by WPs and reviewing and proposing to the contractors budget transfers to include in the project plan.
* Making proposals to the partners for the review and/or amendment of the terms of the [GA] and/or the [CA].
* Deciding upon major changes in work, particularly termination, creation, or reallocation of activities.
* Deciding to suspend all or part of the project or to terminate all or part of the partners, or to request the Commission to terminate the participation of one or more partners.
* Deciding upon the entering into the [GA] and the [CA] of new partners.
* Agreeing procedures and policies in accordance with the Commission contractual rules, for the knowledge and Intellectual Property Rights (IPR) management.
### 3.3.3. Technical Board
The Technical Board (TB) is the technical body which has technical authority
in the project; it is chaired by the Project Coordinator (PC) and is composed
of the Innovation Manager (IM) and the WP Leaders (WPLs). The TB supports the
PC and the Executive Board (EB) in the following tasks:
* guarantees that the engineering tasks are carried out on schedule and in accordance with the tasks and functions determined by the [GA];
* understands the plan and scope of the project by developing the technical breakdown structure, schedule and workflow diagrams, resource requirements, and constraints;
* defines interfaces, for instance by identifying potential incompatibilities across project components;
* controls the configuration and helps in milestone review and assessment;
* understands biases, assumptions and other technical details that affect the results;
* places any data regarding the user’s expectations, requirements, design, etc. under version control.
### 3.3.4. User Community
The MARISA development methodology relies on the User Community, which will
include “end user practitioners”, partners, associates and maritime
surveillance experts, to explore and exploit the human capital in Member
States and their institutions, identifying operational needs, operational
scenarios, existing gaps, acceptability issues and societal impacts that the
proposed solutions may entail. The design of MARISA therefore integrates the
users’ experience, design-development-related R&D, trust building and
co-creativity in a collective, authentic manner.
The User Community will also provide guidance to the partners and enable
interactions in the consortium for the implementation of the data fusion
technologies. The current composition of the MARISA User Community comprises
all the partners, including the “end user practitioners” that have joined
MARISA as full partners: the Italian Navy, Guardia Civil, Netherlands
Coastguard, Hellenic MoD and Portuguese Navy.
Moreover, the User Community also includes the associated partners that have
already expressed interest in and support for the objectives of the project.
However, the consortium is always open to accepting other members during the
execution of the project.
### 3.3.5. Advisory Board
The MARISA Consortium has also foreseen an Advisory Board (AB). This
additional body meets three times throughout the project (M10, M20 and M30)
and consists of selected European and International organisations not directly
involved in the project as full partners. The AB supports and advises project
partners with experience and know-how throughout the project duration. Their
valuable feedback to the strategic and technical process of the project brings
many benefits for the MARISA project. Members of the AB will provide an
external view. The AB will advise on strategic directions of the project in
terms of detailed technical goals and impact, standardisation, ethical and
societal issues. To achieve high quality results within the MARISA project, a
strong cooperation with the AB members will actively be pursued and
facilitated by frequent interaction.
The current composition of the Advisory Board is reported in Table 4:
<table>
<tr>
<th>
**Nation**
</th>
<th>
**Org. Name/Dep.**
</th>
<th>
**Member of the Advisory Board**
</th> </tr>
<tr>
<td>
**FI**
</td>
<td>
Finnish Border Guard
</td>
<td>
_Commander Ari Laaksonen_
</td> </tr>
<tr>
<td>
**IT**
</td>
<td>
Italian Ministry of Interior - Dipartimento della Pubblica Sicurezza –
Direzione Centrale dell’Immigrazione e della Polizia delle Frontiere
</td>
<td>
_Dott.ssa Rosa Maria Preteroti_
</td> </tr>
<tr>
<td>
**SP**
</td>
<td>
Qi Europe - Security and Defence
</td>
<td>
_Javier Warleta Alcina_
</td> </tr>
<tr>
<td>
**MA-USA**
</td>
<td>
Massachusetts Institute of Technology
</td>
<td>
_Pierre F.J. Lermusiaux_
</td> </tr>
<tr>
<td>
**IT**
</td>
<td>
Italian Space Agency - Maritime Surveillance Projects
</td>
<td>
_Dott.ssa Carolina Matarazzi_
</td> </tr>
<tr>
<td>
**NO**
</td>
<td>
Peace Research Institute Oslo
</td>
<td>
_Maria Gabrielsen Jumbert_
</td> </tr>
<tr>
<td>
**FR**
</td>
<td>
État-Major de la Marine - ICT systems in
CEPN
</td>
<td>
_Captain of Frigate Benoit Stephan_
</td> </tr>
<tr>
<td>
**IT**
</td>
<td>
Leonardo S.p.A
</td>
<td>
_Andrea Biraghi_
</td> </tr> </table>
Table 4: MARISA Advisory Board
### 3.3.6. Security Board
The Security Advisory Board (SAB) will review all potentially EU-classified
information throughout the project life, coordinated by the MARISA Project
Security Officer, and will report at the Executive Board meetings.
The Security Advisory Board consists of representatives from the Consortium as
reported in the Table 5:
<table>
<tr>
<th>
**Nation**
</th>
<th>
**Org. Name**
</th>
<th>
**Project Security Officer**
</th>
<th>
**E-Mail**
</th> </tr>
<tr>
<td>
**IT**
</td>
<td>
LDO
</td>
<td>
_Francesco Moliterni_
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**IT**
</td>
<td>
ENG
</td>
<td>
_Fabio Barba_
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**ES**
</td>
<td>
GMV
</td>
<td>
_Oscar Pablo Tejedor Zorita_
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**FR**
</td>
<td>
ADS
</td>
<td>
_Philippe Chrobocinski_
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**GR**
</td>
<td>
STW
</td>
<td>
_Antonis Kostaridis_
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**BE**
</td>
<td>
CMRE
</td>
<td>
_Amleto Gabellone_
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**ES**
</td>
<td>
GUCI
</td>
<td>
_Luis Antonio Santiago Marín_
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**ES**
</td>
<td>
GUCI
</td>
<td>
_Carlos Díaz Martín_
</td>
<td>
[email protected]
</td> </tr> </table>
Table 5: MARISA Security Advisory Board
All the SAB representatives have wide experience in handling security issues.
As detailed in the summary CVs (reported in [GA]), many of them have acted as
Project Security Officer in a number of H2020 projects or are responsible for
Security Management in their respective companies.
### 3.3.7. Other Project Roles
The **Innovation Manager** (IM) [Véronique Pevtschin, ENG] ensures the
coordination of all activities connected to the alignment of technology to
user needs, technology transfer, and the management of innovation, including
IPR and the detection of market opportunities and channels. The IM supports
the PC and the EB in aligning innovation to user needs, in fostering the
exploitation of innovation and in connecting with other suppliers to promote
the openness of the approach. Section 4.4 provides a more detailed explanation
of how Innovation Management will be carried out in MARISA.
The **Ethics Manager** (EM) [Sari Sarlio-Siintola, LAU], with extensive
expertise in ethics work, a) identifies and analyses the ethical and societal
framework; b) monitors the ethical concerns in the project; c) ensures that
the project will comply with Research Ethics, taking all relevant
international ethical aspects into consideration prior to the execution of the
operational trials; d) translates and implements the ethical requirements in
the various deliverables of the project; e) provides advice on ethics
assessments in the ethical and societal reports; f) facilitates collaboration
with all the project actors.
The **Communication Manager** (CM) [Eric den Breejen, TNO] ensures the
quality and execution of the communication (and dissemination) plan of the
project. The CM supports the PC and EB with the following tasks: a)
preparation of the communication and dissemination plan for activities in
relation to the general media, in conjunction with the Innovation Manager for
the specifics of industrial dissemination; b) monitoring of media and other
means of dissemination for the duration of the project; c) definition and
review of dissemination material.
The **User Community Leader** (UCL), [Rauno Pirinen, LAU] with vast experience
in applying user-driven methodology in the definition of user needs, design
and validation of information systems, leads and animates the User Community.
# 4. Project Control and Monitoring
## 4.1. Work Breakdown Structure
The project’s organisation regarding the overall spread and allocation of work
is based on a systematic approach: everything produced throughout and by the
project corresponds to a Work Package task. The division of work will
therefore be articulated on a two-level basis:
be articulated on a two level basis:
* Level 1: Work Packages that gather group of single tasks, all with the same assigned objective. Each Work Package carries out its tasks, has autonomous control over internal issues and delivers research and development results in accordance with the Project Work Programme and within the allocated budget.
* Level 2: Single Tasks, embedded within the Work Packages, which are linked to a sole and defined action, like the production of a Deliverable.
<table>
<tr>
<th>
**#**
</th>
<th>
**TITLE**
</th>
<th>
**BRIEF DESCRIPTION**
</th> </tr>
<tr>
<td>
**WP1**
</td>
<td>
**Project Management & Coordination **
</td>
<td>
Overall management and coordination on the project in respect of technical
progress, finance, contracts and administration.
</td> </tr>
<tr>
<td>
T1.1
</td>
<td>
Project Coordination
</td>
<td>
Coordination of the overall administration and finances of the project.
</td> </tr>
<tr>
<td>
T1.2
</td>
<td>
Project Management, Quality Control and Risk Management
</td>
<td>
Day-to-day planning and work supporting the Project Coordination during the
entire lifecycle of the project.
</td> </tr>
<tr>
<td>
T1.3
</td>
<td>
Innovation Management
</td>
<td>
Innovation management in MARISA reflects the strategy of MARISA to deliver
technology aligned to market needs, European strategies and operational
contexts.
</td> </tr>
<tr>
<td>
T1.4
</td>
<td>
Ethics Management
</td>
<td>
Guidance and steering on legal, ethical and societal issues of the proposed
MARISA solution.
</td> </tr>
<tr>
<td>
**WP2**
</td>
<td>
**User Needs and Operational Scenarios**
</td>
<td>
Animation of the MARISA User Community to foster pro-active involvement of
stakeholders following a user-centric approach in the MARISA design and
validation.
</td> </tr>
<tr>
<td>
T2.1
</td>
<td>
User community animation
</td>
<td>
Animation of the MARISA User Community focused on the goal of “innovation”,
delivering the benefits of data fusion to maritime surveillance through the
MARISA toolkit of services.
</td> </tr>
<tr>
<td>
T2.2
</td>
<td>
Addressing user needs through MARISA services
</td>
<td>
This task will build on results of previous R&D and cooperative projects and
the input of the users’ community to elaborate the relevant requirements of
data fusion functionalities, the organisation of these functionalities into
actionable services.
</td> </tr>
<tr>
<td>
T2.3
</td>
<td>
Adoption model for MARISA
</td>
<td>
The adoption path will be applied to drive the incremental alignment of the
innovative services to the user needs and evaluate the implementations with
respect to the needs.
</td> </tr>
<tr>
<td>
T2.4
</td>
<td>
Interaction with Existing/Legacy
Systems and the CISE environment
</td>
<td>
This task focuses on identifying the legacy systems which MARISA will interact
with, during the project trials but also beyond, to address their capabilities
to exchange data and required agreements.
</td> </tr>
<tr>
<td>
T2.5
</td>
<td>
Innovative use of additional data sources
</td>
<td>
This task focuses on designing a MARISA toolkit that can easily integrate new
data sources and support new services beyond the MARISA project community and
duration.
</td> </tr>
<tr>
<td>
T2.6
</td>
<td>
Legal and Ethical Context Analysis
</td>
<td>
This task starts at the very beginning of the project by identifying and
analysing legal and ethical regulation framework in which MARISA operates
linked to the MARISA functionalities for data fusion.
</td> </tr>
<tr>
<td>
T2.7
</td>
<td>
Operational Scenarios & Trials Definition
</td>
<td>
In this task, the implementation and trial results will be traced back to
initial requirements, according to the two-phase approach of the work plan.
</td> </tr>
<tr>
<td>
**WP3**
</td>
<td>
**MARISA Toolkit Design**
</td>
<td>
Delivery of the high level architecture of the MARISA Toolkit and the
associated services based on requirements and scenarios previously defined.
</td> </tr>
<tr>
<td>
T3.1
</td>
<td>
Overall Toolkit Design
</td>
<td>
Definition of the MARISA Toolkit main building blocks and interfaces among
them; identification of services producers and consumers and main system
components (internal and external, including legacy systems).
</td> </tr>
<tr>
<td>
T3.2
</td>
<td>
Data Fusion Module and resulting services
</td>
<td>
Definition of the data fusion module architecture including the identification
of the resulting services which shall be classified.
</td> </tr>
<tr>
<td>
T3.3
</td>
<td>
External and internal Interfaces and gateways design
</td>
<td>
Based on the services behaviour, the required capabilities and the flow of
information previously defined, the external and internal interfaces shall be
identified.
</td> </tr>
<tr>
<td>
T3.4
</td>
<td>
Data models definition
</td>
<td>
Elaboration of a common data model for data exchange.
</td> </tr>
<tr>
<td>
T3.5
</td>
<td>
User interactions (HCI)
</td>
<td>
Development of human-computer-interface modules.
</td> </tr>
<tr>
<td>
**WP4**
</td>
<td>
**Data analysis and fusion**
</td>
<td>
Identification of all the Data Fusion computing capabilities of Marisa Toolkit
providing methods and algorithms to extract value added information from the
available external data sources.
</td> </tr>
<tr>
<td>
T4.1
</td>
<td>
Data Fusion Level 1 products
</td>
<td>
This task addresses the aspect of “Observation of elements in the environment”
to provide a more accurate awareness of objects in the maritime environment.
</td> </tr>
<tr>
<td>
T4.2
</td>
<td>
Data Fusion Level 2 products
</td>
<td>
This task addresses the aspect of “Comprehension of the current situation” to
provide useful information among the relationships of level 1 objects in the
maritime environment.
</td> </tr>
<tr>
<td>
T4.3
</td>
<td>
Data Fusion Level 3 products
</td>
<td>
This task addresses the aspect of “Projection of future states”, to predict
the evolution of a maritime situation, in support of rapid decision making and
action.
</td> </tr>
<tr>
<td>
**WP5**
</td>
<td>
**Supporting capabilities and infrastructure**
</td>
<td>
Definition and development of the supporting infrastructure for the MARISA
toolkit.
</td> </tr>
<tr>
<td>
T5.1
</td>
<td>
Big Data Infrastructure set-up
</td>
<td>
Definition, implementation and set-up of a scalable Big Data management and
analysis infrastructure with the aim to support and integrate the analysis,
fusion, and analytics modules.
</td> </tr>
<tr>
<td>
T5.2
</td>
<td>
Interfaces to external data sources (include gateways)
</td>
<td>
Implementation of the adapter modules and gateways for the integration of the
required external data sources and legacy Systems.
</td> </tr>
<tr>
<td>
T5.3
</td>
<td>
User interactions
</td>
<td>
Implementation of the front-end components of the toolkit.
</td> </tr>
<tr>
<td>
T5.4
</td>
<td>
Data fusion distribution services
</td>
<td>
A set of system-to-system interfaces will be made available to be used by the
end user operational systems or other external systems.
</td> </tr>
<tr>
<td>
T5.5
</td>
<td>
Information and Assurance services
</td>
<td>
Implementation of the identity and access management services providing MARISA
toolkit the capability to identify (authentication) and grant and access
privileges (authorization) to all users, systems and devices connecting to
MARISA.
</td> </tr>
<tr>
<td>
**WP6**
</td>
<td>
**MARISA Toolkit integration and validation**
</td>
<td>
Integration of the various components of the system and test and validation of
the configurations to provide WP7 with qualified systems to support the
trials.
</td> </tr>
<tr>
<td>
T6.1
</td>
<td>
General principles of test architecture and data integration
</td>
<td>
Definition of the generic principles of the test architecture based on
specifications provided by users or developers of the tools and components
that will be used.
</td> </tr>
<tr>
<td>
T6.2
</td>
<td>
Definition and implementation of integration platforms
</td>
<td>
Definition and implementation of integration platforms (at PMM premises), able
to test all components in different configurations and interfaces to local
legacy systems.
</td> </tr>
<tr>
<td>
T6.3
</td>
<td>
Definition of system configurations for each trial
</td>
<td>
The outcome of the task will be the definition files of each campaign
configuration, the definition of the data sets needed for each campaign and
the test files that shall be used to validate each configuration.
</td> </tr>
<tr>
<td>
T6.4
</td>
<td>
Factory integration of campaign configurations
</td>
<td>
The task will integrate all components developed in WP4 and WP5 into the whole
chain, physically in a first step and functionally in a second step. The
system configurations will be qualified through the test files and validated
on the basis of the specifications prepared in WP2 and WP3.
</td> </tr>
<tr>
<td>
T6.5
</td>
<td>
Validation of campaign configurations with simulation/emulation tools
</td>
<td>
Factory scenarios will be defined to test and validate the MARISA toolkit in
the different trial configurations, with the aim to provide validated
configurations for the trials execution.
</td> </tr>
<tr>
<td>
**WP7**
</td>
<td>
**Verification in Operational Trials**
</td>
<td>
Definition of the overall approach to be applied to the operational trials,
execution of the operational trials, collection and analysis of trial results.
</td> </tr>
<tr>
<td>
T7.1
</td>
<td>
Operational Trials Approach
</td>
<td>
This task will define the approach and plans to operational trials to be
conducted in Phase 1 and Phase 2 considering the preparatory work performed by
the User Community in WP2.
</td> </tr>
<tr>
<td>
T7.2
</td>
<td>
Iberian Trial
</td>
<td>
This task includes activities for the detailed organization, logistics,
setting-up and execution of the Iberian Trial.
</td> </tr>
<tr>
<td>
T7.3
</td>
<td>
Northern Sea Trial
</td>
<td>
This task includes activities for the detailed organization, logistics,
setting-up and execution of the North Sea Trial.
</td> </tr>
<tr>
<td>
T7.4
</td>
<td>
Ionian Trial
</td>
<td>
This task includes activities for the detailed organization, logistics,
setting-up and execution of the Ionian Sea trial.
</td> </tr>
<tr>
<td>
T7.5
</td>
<td>
Aegean Trial
</td>
<td>
This task includes activities for the detailed organization, logistics,
setting-up and execution of the Aegean Sea Trial.
</td> </tr>
<tr>
<td>
T7.6
</td>
<td>
Strait of Bonifacio Trial
</td>
<td>
This task includes activities for the detailed organization, logistics,
setting-up and execution of the Strait of Bonifacio Trial.
</td> </tr>
<tr>
<td>
T7.7
</td>
<td>
Verification Results Consolidation
</td>
<td>
This task will a) assess the results of each trial against the objective, b)
verify the KPIs defined in Section 1.2, c) provide recommendations and define
a roadmap for the subsequent project phases.
</td> </tr>
<tr>
<td>
**WP8**
</td>
<td>
**Dissemination and Exploitation**
</td>
<td>
Dissemination, exploitation and standardization activities of the MARISA
project.
</td> </tr>
<tr>
<td>
T8.1
</td>
<td>
Communication and Dissemination Strategy and Plan
</td>
<td>
This task defines the complete communication and dissemination strategy for
the MARISA project ensuring that the appropriate MARISA results are conveyed
to the right audience in the right time.
</td> </tr>
<tr>
<td>
T8.2
</td>
<td>
MARISA Web Site and dissemination materials
</td>
<td>
The MARISA web site will be set-up and maintained. Official MARISA project
leaflets, brochures, posters, videos and workshops material will be prepared
in this task.
</td> </tr>
<tr>
<td>
T8.3
</td>
<td>
MARISA Workshops
Coordination
</td>
<td>
This task will be dedicated to the preparation, organisation, management and
coordination of 3 workshops to present and promote the MARISA results to the
MARISA Consortium partners, end-users, user community and other relevant
stakeholders.
</td> </tr>
<tr>
<td>
T8.4
</td>
<td>
MARISA Training toolkit support
</td>
<td>
This task delivers the training material in a format adapted to users to
support them through trials in phase 1 and in phase 2.
</td> </tr>
<tr>
<td>
T8.5
</td>
<td>
IPR Management and Exploitation plan
</td>
<td>
This task focuses on creating and refining business models, supported by
adoption paths and including proposed IPR models, oriented both to existing
partners and external organisations interested in delivering new data fusion
services through the MARISA toolkit.
</td> </tr>
<tr>
<td>
T8.6
</td>
<td>
Standardization
</td>
<td>
This task will address standardizations of MARISA approach, methods and
results.
</td> </tr>
<tr>
<td>
**WP9**
</td>
<td>
**Ethics Requirements**
</td>
<td>
This work package sets out the 'ethics requirements' that the project must
comply with.
</td> </tr> </table>
Table 6: MARISA WBS
The **Work Package Leaders** (WPLs) direct the day-to-day technical planning
and execution of the work and escalate issues to the EB as required. The WPLs
are responsible for monitoring progress in their respective work packages and
for coordinating the activities and compiling the responses. They collaborate
with partners on the tasks of each work package in order to assure the quality
of the work and to present the results in reports in accordance with the
project description.
Specific activities of the Work Package Leader are:
* planning of the Work Package’s activities;
* coordination of the Task Leaders within the Work Package;
* liaison with the Project Coordinator (technical follow-up and information on IPR issues in connection with the Work Package);
* deadline management, and implementation of the Project Work Programme at the Work Package level, in particular the Work Package Leader has to inform the Coordinator and the other Work Package Leaders whenever a timeline might not be achieved so that the necessary contingency plans can be implemented;
* quality control and performance assessment of the Tasks associated to the Work Package;
* in case of conflict between contributors, the Work Package Leader tries to find a solution (corrective action) and if needed will inform the Coordinator and the EB;
* responsible for security issues of the deliverables in their WP.
The Work Package Leader is responsible for the respect of the stipulated
deadlines, and if necessary the execution of the relevant part of the
contingency plan.
The **Task Leaders** (TLs) are responsible for all aspects of a Task’s
execution. A Task consists of a clearly identified, simple objective
(developing a specified tool or providing a deliverable).
Specific activities of the Task Leaders are:
* contribute to the elaboration of the Work Package’s planning;
* coordination and management of the Task team and the Contributors;
* liaison with the Work Package Leader (technical follow-up and information on IPR issues in connection with the Work Package);
* deliver milestones and deliverables in accordance with the Project Work Programme;
* inform the Work Package Leader on all relevant events and activities related to the Task;
* propose and implement corrective actions in case of malfunctions;
* provide cost statements, information and data (financial and other) necessary for the mid-term and final review.
The Task Leader is responsible for the respect of the stipulated deadlines,
and if necessary the execution of the relevant part of the contingency plan.
<table>
<tr>
<th>
**#**
</th>
<th>
**TITLE**
</th>
<th>
**LEADER**
</th> </tr>
<tr>
<td>
**WP1**
</td>
<td>
**Project Management & Coordination**
</td>
<td>
**LDO**
</td> </tr>
<tr>
<td>
T1.1
</td>
<td>
Project Coordination
</td>
<td>
LDO
</td> </tr>
<tr>
<td>
T1.2
</td>
<td>
Project Management, Quality Control and Risk Management
</td>
<td>
LDO
</td> </tr>
<tr>
<td>
T1.3
</td>
<td>
Innovation Management
</td>
<td>
ENG
</td> </tr>
<tr>
<td>
T1.4
</td>
<td>
Ethics Management
</td>
<td>
LAU
</td> </tr>
<tr>
<td>
**WP2**
</td>
<td>
**User Needs and Operational Scenarios**
</td>
<td>
**LAU**
</td> </tr>
<tr>
<td>
T2.1
</td>
<td>
User community animation
</td>
<td>
LAU
</td> </tr>
<tr>
<td>
T2.2
</td>
<td>
Addressing user needs through MARISA services
</td>
<td>
LDO
</td> </tr>
<tr>
<td>
T2.3
</td>
<td>
Adoption model for MARISA
</td>
<td>
ENG
</td> </tr>
<tr>
<td>
T2.4
</td>
<td>
Interaction with Existing/Legacy Systems and the CISE environment
</td>
<td>
GMV
</td> </tr>
<tr>
<td>
T2.5
</td>
<td>
Innovative use of additional data sources
</td>
<td>
IOSB
</td> </tr>
<tr>
<td>
T2.6
</td>
<td>
Legal and Ethical Context Analysis
</td>
<td>
LAU
</td> </tr>
<tr>
<td>
T2.7
</td>
<td>
Operational Scenarios & Trials Definition
</td>
<td>
AST
</td> </tr>
<tr>
<td>
**WP3**
</td>
<td>
**MARISA Toolkit Design**
</td>
<td>
**GMV**
</td> </tr>
<tr>
<td>
T3.1
</td>
<td>
Overall Toolkit Design
</td>
<td>
GMV
</td> </tr>
<tr>
<td>
T3.2
</td>
<td>
Data Fusion Module and resulting services
</td>
<td>
LDO
</td> </tr>
<tr>
<td>
T3.3
</td>
<td>
External and internal Interfaces and gateways design
</td>
<td>
STW
</td> </tr>
<tr>
<td>
T3.4
</td>
<td>
Data models definition
</td>
<td>
ENG
</td> </tr>
<tr>
<td>
T3.5
</td>
<td>
User interactions (HCI)
</td>
<td>
LUC
</td> </tr>
<tr>
<td>
**WP4**
</td>
<td>
**Data analysis and fusion**
</td>
<td>
**LDO**
</td> </tr>
<tr>
<td>
T4.1
</td>
<td>
Data Fusion Level 1 products
</td>
<td>
LDO
</td> </tr>
<tr>
<td>
T4.2
</td>
<td>
Data Fusion Level 2 products
</td>
<td>
IOSB
</td> </tr>
<tr>
<td>
T4.3
</td>
<td>
Data Fusion Level 3 products
</td>
<td>
TNO
</td> </tr>
<tr>
<td>
**WP5**
</td>
<td>
**Supporting capabilities and infrastructure**
</td>
<td>
**ENG**
</td> </tr>
<tr>
<td>
T5.1
</td>
<td>
Big Data Infrastructure set-up
</td>
<td>
ENG
</td> </tr>
<tr>
<td>
T5.2
</td>
<td>
Interfaces to external data sources (include gateways)
</td>
<td>
GMV
</td> </tr>
<tr>
<td>
T5.3
</td>
<td>
User interactions
</td>
<td>
LUC
</td> </tr>
<tr>
<td>
T5.4
</td>
<td>
Data fusion distribution services
</td>
<td>
LDO
</td> </tr>
<tr>
<td>
T5.5
</td>
<td>
Information and Assurance services
</td>
<td>
ADS
</td> </tr>
<tr>
<td>
**WP6**
</td>
<td>
**MARISA Toolkit integration and validation**
</td>
<td>
**ADS**
</td> </tr>
<tr>
<td>
T6.1
</td>
<td>
General principles of test architecture and data integration
</td>
<td>
ADS
</td> </tr>
<tr>
<td>
T6.2
</td>
<td>
Definition and implementation of integration platforms
</td>
<td>
PMM
</td> </tr>
<tr>
<td>
T6.3
</td>
<td>
Definition of system configurations for each trial
</td>
<td>
ADS
</td> </tr>
<tr>
<td>
T6.4
</td>
<td>
Factory integration of campaign configurations
</td>
<td>
ADS
</td> </tr>
<tr>
<td>
T6.5
</td>
<td>
Validation of campaign configurations with simulation/emulation tools
</td>
<td>
ADS
</td> </tr>
<tr>
<td>
**WP7**
</td>
<td>
**Verification in Operational Trials**
</td>
<td>
**STW**
</td> </tr>
<tr>
<td>
T7.1
</td>
<td>
Operational Trials Approach
</td>
<td>
STW
</td> </tr>
<tr>
<td>
T7.2
</td>
<td>
Iberian Trial
</td>
<td>
GMV
</td> </tr>
<tr>
<td>
T7.3
</td>
<td>
Northern Sea Trial
</td>
<td>
TNO
</td> </tr>
<tr>
<td>
T7.4
</td>
<td>
Ionian Trial
</td>
<td>
LDO
</td> </tr>
<tr>
<td>
T7.5
</td>
<td>
Aegean Trial
</td>
<td>
STW
</td> </tr>
<tr>
<td>
T7.6
</td>
<td>
Strait of Bonifacio Trial
</td>
<td>
ADS
</td> </tr>
<tr>
<td>
T7.7
</td>
<td>
Verification Results Consolidation
</td>
<td>
STW
</td> </tr>
<tr>
<td>
**WP8**
</td>
<td>
**Dissemination and Exploitation**
</td>
<td>
**TNO**
</td> </tr>
<tr>
<td>
T8.1
</td>
<td>
Communication and Dissemination Strategy and Plan
</td>
<td>
TNO
</td> </tr>
<tr>
<td>
T8.2
</td>
<td>
MARISA Web Site and dissemination materials
</td>
<td>
AST
</td> </tr>
<tr>
<td>
T8.3
</td>
<td>
MARISA Workshops Coordination
</td>
<td>
TNO
</td> </tr>
<tr>
<td>
T8.4
</td>
<td>
MARISA Training toolkit support
</td>
<td>
AST
</td> </tr>
<tr>
<td>
T8.5
</td>
<td>
IPR Management and Exploitation plan
</td>
<td>
ENG
</td> </tr>
<tr>
<td>
T8.6
</td>
<td>
Standardization
</td>
<td>
LDO
</td> </tr>
<tr>
<td>
**WP9**
</td>
<td>
**Ethics Requirements**
</td>
<td>
**LDO**
</td> </tr> </table>
Table 7: MARISA WPs and Tasks Responsibilities
## 4.2. Development Methodology
The MARISA overall development methodology is depicted in Figure 2. The
following principles have been followed.
**Strong involvement of the user community** to capture the relevant
operational needs and validate the results. The MARISA development methodology
relies on the User Community, which will include end-user practitioners,
partners, associates and maritime surveillance experts, drawing on the human
capital in Member States and their institutions to identify operational
needs, operational scenarios, existing gaps, acceptability issues and societal
impacts that the proposed solutions may entail.
**Compliance with the European Maritime Security Strategy and the CISE Data Model** to
maximize interoperability with other MSA communities of interest already
existing and operating. The MARISA toolkit will be designed to streamline
integration with current and future MSA operational systems, to allow
different configurations and levels of exploitation, and to ensure full
compatibility with the CISE data model and with the overall European policy
that facilitates interagency interoperability and cooperation and allows
each Member State to decide how, when and whether additional data sources are
of relevance to its operations.
**Attention to reusing capabilities and results coming from other European
programmes**, while ready to introduce state-of-the-art technologies when
appropriate. Great attention will be devoted to previous European and
national projects, since it is the MARISA Consortium’s intention to build on
their results. Since all the MARISA participants have been involved in
EU-funded R&D and cooperative projects, their deep knowledge of those projects
will allow the consortium to start from the current achievements and introduce
improvements and further innovation.
**Protection of Data Fusion Products based on the “need-to-share” approach**,
to guarantee access to and distribution of data fusion results among relevant
stakeholders. MARISA will, on the one hand, process a great amount of raw data
of different types and, on the other hand, produce a significant number of data
fusion products.
**Adoption of the Agile Software Development Methodology**, supported by a
continuous integration environment, to assure an evolutionary development of
MARISA services and to ease collaboration and shared ownership among
geographically distributed partners.
**Validation of MARISA in specific operational trials**. A large part of the
effort will be dedicated to verifying and validating the MARISA toolkit in a
variety of operational trials. Indeed, MARISA will be a trial-based project:
trials will be real-life exercises, not a guided tour of pre-packed demos.
Each trial is characterized by a geographic area in which MARISA services
will be tested; by the use cases, and hence the system functionalities, to be
verified; and by the project partners and practitioners involved, who bring
their own assets and/or output data from their own systems as needed for the
trial execution.
**Two-Phase Approach**. The project is organized in two main phases; each
phase includes a complete MARISA life cycle iteration, from the statement and
finalization of user needs, through MARISA toolkit design, development,
integration and setting-up, to trial sessions that validate MARISA through
selected scenarios. This two-phase approach allows the consortium to initially
construct the concepts and methods of the MARISA toolkit and to deliver and
operationally validate a subset of initial MARISA services. Subsequently,
based on the feedback from the first phase, the MARISA concepts and methods
will be revised and enhanced, additional services will be included, and the
complete MARISA toolkit will be validated again in operational scenarios.
Figure 2: MARISA Development Methodology
## 4.3. Work Logic
MARISA is structured in 9 work packages over a duration of 30 months. The
work logic is shown in Figure 3.
Figure 3: MARISA Work Logic
MARISA end-user practitioners are involved from the start in WP2,
contributing to the analysis of the context and the definition of the trials.
Users also play a key role in refining the operational scenarios that drive
the trials definition and in feeding the detailed analysis of how the legacy
systems are exercised during the trials.
This involvement is consolidated into user needs, which are then prioritised
to derive the functional, interface, operational and security requirements as
input to the MARISA Toolkit design phase performed in WP3. The MARISA software
development, integration and test activities (performed in WP4, WP5 and WP6)
will exploit Agile software development processes supported by a continuous
integration environment that eases the management of multiple contributions
provided by the project partners in a geographically distributed production
setting. A relevant integration and validation environment will be provided
in WP6 to minimize the risk of toolkit deployment: operational scenarios will
be tested in advance in WP6 using simulated and real data. The MARISA Toolkit
will then be operationally validated in WP7 according to selected scenarios.
The validation will also include an assessment of the metrics and KPIs.
## 4.4. Innovation Management
Figure 4 shows the approach to Innovation Management in the MARISA
project.
Figure 4: MARISA Innovation Management
The **User Needs analysis** will focus on the organization of “brainstorming
meetings” and on the identification and analysis of the assets through the
value proposition canvas methodology (who the customers are, brainstorming on
customers’ needs). Outputs will be the value proposition canvas, preliminary
input to business models, and input to WP3 and WP8.
The **Competitors and market analysis** will involve the Innovation Team as
well as the WP8 Leader in order to produce a competitor-feature matrix and
an analysis (or re-analysis) of the main competitors (value proposition,
features offered, etc.).
The **Technical improvements** analysis will be carried out together with the
technical partners. It will involve active participation in and animation of
the technical meetings, planning of technical improvements, including roadmaps
for the MARISA solution, and provision of new versions of the assets.
**Checkpoint meetings** are moments in the project lifecycle, involving a
team of technical and non-technical people with heterogeneous skills, devoted
to demos of the improved assets, discussion of the results of previous
activities, and refinement of the value proposition and unique selling points.
The **Innovation Manager** (IM), [Véronique Pevtschin, ENG] ensures the
coordination of all Innovation Management activities.
Main foreseen tasks of the Innovation Manager are:
* Links and refines user needs in order to monitor and ensure the alignment of technology innovation with user needs.
* Prepares the exploitation plan taking into account the inputs from the AB and the EB.
* Manages the execution of the overall exploitation plan of the project and supports the partners in setting up their individual business plans, in order to exploit the results.
* Manages the knowledge produced during the project lifecycle and assesses the opportunity for applying for patents or declaring copyrights, through maintaining all innovations descriptions, screening and ownership.
* Supports the individual partners’ legal departments in their drafting of the legal and contractual agreements with respect to the IPR of the project either internally or externally (the IM does not provide the legal consultancy, as this knowledge is part of the internal legal departments, but acts as a support to fully explain the features of the MARISA approach and their requirements on issues such as data exchange, interconnection of solutions with different IPRs and owners etc).
MARISA Tasks and Deliverables affected by the Innovation Management activities
are reported in Figure 5.
Figure 5: Innovation Managements in the Project Tasks and Deliverables
## 4.5. Schedule and Milestones
Figure 6 shows the master schedule of the different work packages. The
whole project duration is 30 months. The project is organized in two main
phases, each ending with trial sessions that validate MARISA through
selected operational scenarios.
* Phase-1 focuses on the initial construction of the concepts and methods needed for the MARISA Toolkit and delivers and validates a subset of services.
* Phase-2 evolves the concepts and methods based on the feedback gained from the first phase, and delivers and validates the complete MARISA toolkit.
Phase-1 will be completed in 20 months (M1-M20), subsequent Phase-2 will be
completed in 10 months (M21-M30).
Figure 6: MARISA Gantt Chart
The following table includes the Project Milestones, with a short description
of their scope and verification means.
<table>
<tr>
<th>
**ID**
</th>
<th>
**Milestone name**
</th>
<th>
**Related work package(s)**
</th>
<th>
**Estimated date**
</th>
<th>
**Means of verification**
</th> </tr>
<tr>
<td>
**MS1**
</td>
<td>
Project Start
</td>
<td>
All
</td>
<td>
M01
</td>
<td>
1. Kick-Off meeting.
</td> </tr>
<tr>
<td>
**MS2**
</td>
<td>
Initial Definition of User community needs, Operational trial scenarios and strategy
</td>
<td>
WP2, WP6, WP7, WP8
</td>
<td>
M05
</td>
<td>
1. User Community established.
2. Initial user needs and operational trial scenarios strategy and definition achieved (initial version of D2.2 to D2.7 submitted).
3. Availability of project management, risk and quality plans (D1.1).
4. Establishment of communication and dissemination strategy (D8.1).
</td> </tr>
<tr>
<td>
**MS3**
</td>
<td>
MARISA Toolkit Initial Design
</td>
<td>
WP3, WP6
</td>
<td>
M10
</td>
<td>
1. Initial definition of MARISA toolkit design achieved (initial version of D3.1 to D3.5 submitted).
2. Definition of integration test platform and integration plans achieved (D6.1, D6.2, D6.4 submitted).
3. Detailed plans for operational scenarios achieved (D7.1).
4. Web site established and populated with dissemination material (D8.2, D8.3).
5. Initial Project Report produced (D1.2 submitted).
6. Initial Societal Ethical Report available (D1.5).
</td> </tr>
<tr>
<td>
**MS4**
</td>
<td>
MARISA Toolkit completed for Phase-1 Operational Trials
</td>
<td>
WP4, WP5, WP6, WP7
</td>
<td>
M15
</td>
<td>
1. MARISA Toolkit completed for Phase-1 (initial version of D4.1 to D4.3 and D5.1 to D5.5 produced) and validated in factory for the Phase-1 Operational Trials (initial version of D6.3, D6.5, D6.6 submitted).
2. Training Kit available (initial version of D8.5).
</td> </tr>
<tr>
<td>
**MS5**
</td>
<td>
Mid-Term Review
</td>
<td>
WP1, WP2, WP7, WP8
</td>
<td>
M20
</td>
<td>
1. Phase-1 Operational Trials completed and results available (initial version of D7.2 submitted).
2. Analysis of MARISA Phase-1 achievements and assessment from the User Community available (initial version of D2.1 produced).
3. Updated web site and dissemination material available (D8.2, D8.3 final).
4. First workshop achieved (D8.4); Exploitation Plan achieved (D8.6).
5. First standardization report available (D8.8).
6. Final user and operational needs for Phase-2 achieved (final version of D2.2 to D2.7 submitted).
7. Intermediate Project and Societal Ethical Reports available (D1.3, D1.6).
</td> </tr>
<tr>
<td>
**MS6**
</td>
<td>
MARISA Toolkit Completed for Phase-2 Operational Trials
</td>
<td>
WP4, WP5, WP6, WP7
</td>
<td>
M26
</td>
<td>
1. Final MARISA toolkit implementation achieved (final version of D3.1 to D3.5, D4.1 to D4.3, D5.1 to D5.4, D6.3 to D6.6 submitted).
2. Detailed plans for Phase-2 operational scenarios achieved (final D7.1 produced).
3. Final version of Training Kit available (final D8.5 produced).
</td> </tr>
<tr>
<td>
**MS7**
</td>
<td>
MARISA Project Completion
</td>
<td>
WP1, WP2, WP7, WP8
</td>
<td>
M30
</td>
<td>
1. Phase-2 Operational Trials completed and results available (final version of D7.2).
2. Assessments of MARISA Phase-2 achievements available (final version of D2.1 produced).
3. Exploitation uptake actions established (D8.7).
4. Standardization reports available (final D8.8).
5. Final Project and Societal Ethical Reports available (D1.3, D1.7).
6. All action deliverables completed and submitted.
</td> </tr> </table>
Table 8: MARISA Milestones
### 4.5.1. Activity Network
Figure 7 shows the inter-WP dependencies among the nine work packages. WP2
provides the user requirements to WP3 and WP7 for the design of the MARISA
toolkit and the operational trials. WP7 provides feedback to WP3-WP6 after the
first project phase is finished, which will be used to fine-tune the outcome
of the corresponding WPs during the second phase of the project.
Figure 7: MARISA Pert-like diagram showing deliverables and inter-WP links
### 4.5.2. Project Review Plan
The MARISA Project is split into two reporting periods:
* P1: Month 1 – Month 18 (M1 – M18);
* P2: Month 19 – Month 30 (M19 – M30).
At the end of each period a Project Review Meeting will be held with the
participation of the EC Project Officer and independent reviewers appointed by
the EC.
<table>
<tr>
<th>
**Review number**
</th>
<th>
**Tentative timing**
</th>
<th>
**Planned Venue of Review**
</th> </tr>
<tr>
<td>
RV1
</td>
<td>
18
</td>
<td>
Rome
</td> </tr>
<tr>
<td>
RV2
</td>
<td>
30
</td>
<td>
Rome
</td> </tr> </table>
Table 9: Project Reviews
### 4.5.3. List of Deliverables
The following table provides the list of the deliverables throughout the
project lifecycle.
<table>
<tr>
<th>
**Del.**
**Num.**
</th>
<th>
**Deliverable name**
</th>
<th>
**WP**
</th>
<th>
**Lead**
</th>
<th>
**Type**
</th>
<th>
**Dissem. level**
</th>
<th>
**Deliv. date**
</th> </tr>
<tr>
<td>
**D1.1**
</td>
<td>
Project Management, Quality and Risk Plan
</td>
<td>
WP1
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M5
</td> </tr>
<tr>
<td>
**D1.2**
</td>
<td>
MARISA Project Initial Report
</td>
<td>
WP1
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D1.3**
</td>
<td>
MARISA Project Intermediate Report
</td>
<td>
WP1
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D1.4**
</td>
<td>
MARISA Project Final Report
</td>
<td>
WP1
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D1.5**
</td>
<td>
MARISA Societal Ethical Initial Report
</td>
<td>
WP1
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D1.6**
</td>
<td>
MARISA Societal Ethical Intermediate Report
</td>
<td>
WP1
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D1.7**
</td>
<td>
MARISA Societal Ethical Final Report
</td>
<td>
WP1
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D1.8**
</td>
<td>
MARISA Data Management Plan
</td>
<td>
WP1
</td>
<td>
LDO
</td>
<td>
ORDP
</td>
<td>
CO
</td>
<td>
M6
</td> </tr>
<tr>
<td>
**D1.9**
</td>
<td>
MARISA Data Management Plan (final)
</td>
<td>
WP1
</td>
<td>
LDO
</td>
<td>
ORDP
</td>
<td>
CO
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**D2.1**
</td>
<td>
MARISA User Community Report
</td>
<td>
WP2
</td>
<td>
LAU
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D2.2**
</td>
<td>
MARISA User Requirements
</td>
<td>
WP2
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M5
</td> </tr>
<tr>
<td>
**D2.3**
</td>
<td>
MARISA Adoption Models
</td>
<td>
WP2
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M5
</td> </tr>
<tr>
<td>
**D2.4**
</td>
<td>
MARISA Interaction with existing/legacy systems
</td>
<td>
WP2
</td>
<td>
GMV
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M5
</td> </tr>
<tr>
<td>
**D2.5**
</td>
<td>
MARISA Usage of Additional Data Sources
</td>
<td>
WP2
</td>
<td>
IOSB
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M5
</td> </tr>
<tr>
<td>
**D2.6**
</td>
<td>
Legal, ethical and societal aspects of
MARISA
</td>
<td>
WP2
</td>
<td>
LAU
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M5
</td> </tr>
<tr>
<td>
**D2.7**
</td>
<td>
MARISA Operational Scenarios and Trials
</td>
<td>
WP2
</td>
<td>
AST
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M5
</td> </tr>
<tr>
<td>
**D2.8**
</td>
<td>
MARISA User Community Report (final)
</td>
<td>
WP2
</td>
<td>
LAU
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D2.9**
</td>
<td>
MARISA User Requirements (final)
</td>
<td>
WP2
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D2.10**
</td>
<td>
MARISA Adoption Models (final)
</td>
<td>
WP2
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D2.11**
</td>
<td>
MARISA Interaction with existing/legacy systems (final)
</td>
<td>
WP2
</td>
<td>
GMV
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D2.12**
</td>
<td>
MARISA Usage of Additional Data
Sources (final)
</td>
<td>
WP2
</td>
<td>
IOSB
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D2.13**
</td>
<td>
Legal, ethical and societal aspects of
MARISA (final)
</td>
<td>
WP2
</td>
<td>
LAU
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D2.14**
</td>
<td>
MARISA Operational Scenarios and Trials
(final)
</td>
<td>
WP2
</td>
<td>
AST
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D3.1**
</td>
<td>
MARISA Toolkit Design
</td>
<td>
WP3
</td>
<td>
GMV
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D3.2**
</td>
<td>
MARISA Services Description Document
</td>
<td>
WP3
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D3.3**
</td>
<td>
MARISA Interfaces description Document
</td>
<td>
WP3
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D3.4**
</td>
<td>
MARISA data model description
</td>
<td>
WP3
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D3.5**
</td>
<td>
MARISA Human machine best practices and design document
</td>
<td>
WP3
</td>
<td>
LUC
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D3.6**
</td>
<td>
MARISA Toolkit Design (final)
</td>
<td>
WP3
</td>
<td>
GMV
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M22
</td> </tr>
<tr>
<td>
**D3.7**
</td>
<td>
MARISA Services Description Document
(final)
</td>
<td>
WP3
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M22
</td> </tr>
<tr>
<td>
**D3.8**
</td>
<td>
MARISA Interfaces description Document
(final)
</td>
<td>
WP3
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M22
</td> </tr>
<tr>
<td>
**D3.9**
</td>
<td>
MARISA data model description (final)
</td>
<td>
WP3
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M22
</td> </tr>
<tr>
<td>
**D3.10**
</td>
<td>
MARISA Human machine best practices and design document (final)
</td>
<td>
WP3
</td>
<td>
LUC
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M22
</td> </tr>
<tr>
<td>
**D4.1**
</td>
<td>
MARISA Level 1 Data Fusion Services Description
</td>
<td>
WP4
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**D4.2**
</td>
<td>
MARISA Level 2 Data Fusion Services Description
</td>
<td>
WP4
</td>
<td>
IOSB
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**D4.3**
</td>
<td>
MARISA Level 3 Data Fusion Services Description
</td>
<td>
WP4
</td>
<td>
TNO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**D4.4**
</td>
<td>
MARISA Level 1 Data Fusion Services
Description (final)
</td>
<td>
WP4
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**D4.5**
</td>
<td>
MARISA Level 2 Data Fusion Services
Description (final)
</td>
<td>
WP4
</td>
<td>
IOSB
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**D4.6**
</td>
<td>
MARISA Level 3 Data Fusion Services
Description (final)
</td>
<td>
WP4
</td>
<td>
TNO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**D5.1**
</td>
<td>
MARISA Big data Infrastructure
</td>
<td>
WP5
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**D5.2**
</td>
<td>
MARISA Interfaces to External data sources
</td>
<td>
WP5
</td>
<td>
GMV
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**D5.3**
</td>
<td>
MARISA User Interfaces
</td>
<td>
WP5
</td>
<td>
LUC
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**D5.4**
</td>
<td>
MARISA Data Fusion Distribution Services
</td>
<td>
WP5
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**D5.5**
</td>
<td>
MARISA Access Control Services
</td>
<td>
WP5
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**D5.6**
</td>
<td>
MARISA Big data Infrastructure (final)
</td>
<td>
WP5
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**D5.7**
</td>
<td>
MARISA Interfaces to External data sources (final)
</td>
<td>
WP5
</td>
<td>
GMV
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**D5.8**
</td>
<td>
MARISA User Interfaces (final)
</td>
<td>
WP5
</td>
<td>
LUC
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**D5.9**
</td>
<td>
MARISA Data Fusion Distribution
Services (final)
</td>
<td>
WP5
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**D5.10**
</td>
<td>
MARISA Access Control Services (final)
</td>
<td>
WP5
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**D6.1**
</td>
<td>
General principles of MARISA test architecture and data integration
</td>
<td>
WP6
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D6.2**
</td>
<td>
Definition of MARISA integration platforms
</td>
<td>
WP6
</td>
<td>
PMM
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D6.3**
</td>
<td>
Definition of MARISA Trial Configurations
</td>
<td>
WP6
</td>
<td>
ADS
</td>
<td>
R/O
</td>
<td>
CO
</td>
<td>
M15
</td> </tr>
<tr>
<td>
**D6.4**
</td>
<td>
MARISA Toolkit Integration and validation Test Plan
</td>
<td>
WP6
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D6.5**
</td>
<td>
MARISA Toolkit Integration and validation test report
</td>
<td>
WP6
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M15
</td> </tr>
<tr>
<td>
**D6.6**
</td>
<td>
MARISA Toolkit
</td>
<td>
WP6
</td>
<td>
ADS
</td>
<td>
DEM
</td>
<td>
PU
</td>
<td>
M15
</td> </tr>
<tr>
<td>
**D6.7**
</td>
<td>
Definition of MARISA Trial Configurations (final)
</td>
<td>
WP6
</td>
<td>
ADS
</td>
<td>
R/O
</td>
<td>
CO
</td>
<td>
M26
</td> </tr>
<tr>
<td>
**D6.8**
</td>
<td>
MARISA Toolkit Integration and validation Test Plan (final)
</td>
<td>
WP6
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M22
</td> </tr>
<tr>
<td>
**D6.9**
</td>
<td>
MARISA Toolkit Integration and validation test report (final)
</td>
<td>
WP6
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M26
</td> </tr>
<tr>
<td>
**D6.10**
</td>
<td>
MARISA Toolkit (final)
</td>
<td>
WP6
</td>
<td>
ADS
</td>
<td>
DEM
</td>
<td>
PU
</td>
<td>
M26
</td> </tr>
<tr>
<td>
**D7.1**
</td>
<td>
MARISA Validation in operational trial approach and plan
</td>
<td>
WP7
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D7.2**
</td>
<td>
MARISA Validation in operational trial approach and plan - Appendix
</td>
<td>
WP7
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
RE
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D7.3**
</td>
<td>
MARISA Operational trials results Report and Lesson Learnt
</td>
<td>
WP7
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D7.4**
</td>
<td>
MARISA Operational trials results Report and Lesson Learnt - Appendix
</td>
<td>
WP7
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
RE
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D7.5**
</td>
<td>
MARISA Validation in operational trial approach and plan (final)
</td>
<td>
WP7
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D7.6**
</td>
<td>
MARISA Validation in operational trial approach and plan (final) - Appendix
</td>
<td>
WP7
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
RE
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D7.7**
</td>
<td>
MARISA Operational trials results Report and Lesson Learnt (final)
</td>
<td>
WP7
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D7.8**
</td>
<td>
MARISA Operational trials results Report and Lesson Learnt (final) – Appendix
</td>
<td>
WP7
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
RE
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D8.1**
</td>
<td>
MARISA Communication and Dissemination Strategy and Plan
</td>
<td>
WP8
</td>
<td>
TNO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M5
</td> </tr>
<tr>
<td>
**D8.2**
</td>
<td>
MARISA Dissemination Material
</td>
<td>
WP8
</td>
<td>
AST
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D8.3**
</td>
<td>
MARISA WEB Site
</td>
<td>
WP8
</td>
<td>
AST
</td>
<td>
DEC
</td>
<td>
PU
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D8.4**
</td>
<td>
MARISA Workshop Organization and
Results
</td>
<td>
WP8
</td>
<td>
TNO
</td>
<td>
R/O
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D8.5**
</td>
<td>
MARISA Training Kit
</td>
<td>
WP8
</td>
<td>
AST
</td>
<td>
O
</td>
<td>
PU
</td>
<td>
M15
</td> </tr>
<tr>
<td>
**D8.6**
</td>
<td>
MARISA Exploitation Plan
</td>
<td>
WP8
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D8.7**
</td>
<td>
MARISA Exploitation and Uptake mechanisms
</td>
<td>
WP8
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D8.8**
</td>
<td>
MARISA Services Standardization Report
</td>
<td>
WP8
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D8.9**
</td>
<td>
MARISA Dissemination Material (final)
</td>
<td>
WP8
</td>
<td>
AST
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D8.10**
</td>
<td>
MARISA WEB Site (intermediate)
</td>
<td>
WP8
</td>
<td>
AST
</td>
<td>
DEC
</td>
<td>
PU
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D8.11**
</td>
<td>
MARISA Workshop Organization and
Results (final)
</td>
<td>
WP8
</td>
<td>
TNO
</td>
<td>
R/O
</td>
<td>
PU
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D8.12**
</td>
<td>
MARISA Training Kit (final)
</td>
<td>
WP8
</td>
<td>
AST
</td>
<td>
O
</td>
<td>
PU
</td>
<td>
M26
</td> </tr>
<tr>
<td>
**D8.13**
</td>
<td>
MARISA Services Standardization Report
(final)
</td>
<td>
WP8
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D8.14**
</td>
<td>
MARISA WEB Site (final)
</td>
<td>
WP8
</td>
<td>
AST
</td>
<td>
DEC
</td>
<td>
PU
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D9.1**
</td>
<td>
H – Requirement No.4
</td>
<td>
WP9
</td>
<td>
LDO
</td>
<td>
ETH
</td>
<td>
CO
</td>
<td>
M9
</td> </tr>
<tr>
<td>
**D9.2**
</td>
<td>
POPD – Requirement No.6
</td>
<td>
WP9
</td>
<td>
LDO
</td>
<td>
ETH
</td>
<td>
CO
</td>
<td>
M9
</td> </tr>
<tr>
<td>
**D9.3**
</td>
<td>
POPD – Requirement No.8
</td>
<td>
WP9
</td>
<td>
LDO
</td>
<td>
ETH
</td>
<td>
CO
</td>
<td>
M9
</td> </tr>
<tr>
<td>
**D9.4**
</td>
<td>
POPD – Requirement No.9
</td>
<td>
WP9
</td>
<td>
LDO
</td>
<td>
ETH
</td>
<td>
CO
</td>
<td>
M9
</td> </tr> </table>
Table 10: MARISA Deliverables List
## 4.6. Collaboration among partners
Due to the nature of the project, efficient and effective communication and
knowledge flow among the partners is very important. A deliverable describing
the complete communication and dissemination strategy for the MARISA project
will be delivered in M5. Regarding collaboration among partners, the
following arrangements will be made to ensure optimum sharing of knowledge
within the consortium.
### 4.6.1. Collaboration Tools
#### _4.6.1.1. Web Site_
A fully accessible project portal will be developed, with customized user
views and both public and consortium-internal areas. The project web site
will support communication among the project members, as well as the
maintenance of a common repository of documentation.
#### _4.6.1.2. Slack_
Slack is a cloud-based set of team collaboration tools and services. Slack
teams allow communities, groups, or teams to join through a specific URL or
invitation sent by a team admin or owner.
A Slack environment has already been opened in the context of WP2 activities,
with a specific focus on the MARISA user requirements gathering phase, to
support dynamic interaction between participants (end users and
practitioners). To this purpose, one Slack channel has been created per trial,
involving end users and providers and encouraging discussion as well as the
exchange of documentation among participants.
### 4.6.2. Project Meetings
* **Internal project meetings** will serve as the main forum for interactions between all groups and for review preparation. Internal project meetings will mostly be telephone conferences rather than physical meetings.
* **Telephone conferences** will be held at least every month and physical meetings at least every 3 months. WP-internal and cross-WP meetings will be held on request, by telephone or in person if needed. Video conferencing may be used, although some partners may not have the facility, and it may not be suitable when more than two partners need to be involved.
* **Integration meetings** are also foreseen at the end of integration activities to assess the readiness of the MARISA Toolkit for operational trials.
* We plan three **Advisory Board meetings**, timed to coincide with the achievement of the main project milestones: MS3 (M10) toolkit design, MS5 (M20) Mid-Term Review, MS7 (M30) Final Review.
* **Innovation meetings** are planned every 3 months in the first year of the project (4 in total).
* **Technical Board meetings** are planned about every 6 months.
* **Executive Board meetings** are planned at least every 6 months or on request.
##### 4.6.2.1. Meetings Plan
The following figure provides an overall view of the project meetings plan.
This plan does not include the formal project reviews, which are described in
paragraph 4.5.2.
Figure 8: Project Meetings Master Schedule
## 4.7. Involvement of end-users external to the Consortium
Community mechanisms will be created (as part of the MARISA Web Site
implemented in WP8) to foster interactions, leading to knowledge co-created
through social interactions, competence sharing and collective service
development. The User Community will be set up involving user partners as well
as external end users invited to join the initiative. In this context, MARISA
aims to be as inclusive as possible, building also on the links of individual
consortium partners to past and ongoing initiatives in the domain (CLOSEYE,
EUCISE2020, SeaBILLA, PERSEUS, CoopP etc.) to speed up the process.
The User Community includes partners, practitioners involved as full partners,
all the associated partners (the organisations that have already expressed
interest and support the objectives of the project), maritime domain experts
and external end users. However, the consortium is always open to accepting
other members during the execution of the project. The User Community
addresses user needs, operational requirements, and potential ethical and
societal issues. It is coordinated and animated by the User Community Leader.
# 5\. Configuration Management
Configuration Management deals with the overall project consistency,
identification and tracking of changes related to all project results
including the deliverables, documents, testing procedures and any other
related activity.
## 5.1. Document Configuration Management
Document configuration management will be ensured through the tracking of the
versions and the history of changes of the various project documents
(deliverables, meeting minutes, reviewed documents, etc.).
Document history will be tracked in each deliverable in a separate table
describing the different versions of the document and the reasons for
changes/updates. The main author of each deliverable will be responsible for
keeping this table up to date.
## 5.2. Software Configuration Management
Software component versions will be monitored using a version control tool
(e.g. CVS or SVN) installed on a central server. This will ensure that all
necessary components of the MARISA Toolkit are available to the distributed
development teams.
## 5.3. E-Mail Conventions
E-mail will be an important means of exchanging information in the MARISA
project.
All e-mail subject headings must start with the text “[MARISA]”. Additional
tags can be added to specify relevant work packages, tasks and deliverables
where appropriate, and if deemed useful. The tags should never contain spaces
within the square brackets.
Some examples of e-mail subject headings are:
* [MARISA] [WP8] Title
* [MARISA] [WP1] [Task1.2] [D1.4] Title… document
* [MARISA] [WP2] Title
* [MARISA] [WP4] [Task4.3] Title
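Since the convention is purely syntactic, it can be checked automatically. The following is a minimal sketch in Python of such a check, assuming a simple regular expression; the pattern and the function name are illustrative, not an existing project tool.

```python
import re

# A subject must start with "[MARISA]", may carry further space-free
# tags such as "[WP8]" or "[Task4.3]", and must end with a title that
# does not itself start with "[".
SUBJECT_RE = re.compile(r"^\[MARISA\](?:\s*\[[^\s\[\]]+\])*\s+(?!\[)\S.*$")

def is_valid_subject(subject: str) -> bool:
    """Check a subject line against the MARISA tagging convention (sketch)."""
    return SUBJECT_RE.match(subject) is not None

assert is_valid_subject("[MARISA] [WP8] Agenda for next telco")
assert is_valid_subject("[MARISA] [WP4] [Task4.3] Fusion service update")
assert not is_valid_subject("MARISA WP8 Agenda")       # missing [MARISA] prefix
assert not is_valid_subject("[MARISA] [WP 8] Agenda")  # space inside a tag
```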
# 6\. Quality Management
## 6.1. Quality Assurance Process
Quality control is the responsibility of everybody involved in each project
activity.
The quality control task performed by the Coordinator at project level will
not substitute for internal quality control used in the various partner
organisations for their internal work. All partner organisations should ensure
that their existing internal quality control procedures are applied to MARISA
project tasks.
However, as part of their role, the Project Coordinator, the Project Manager,
the Innovation Manager and the Technical Board will act as Project Quality
Assurance Team.
Objectives of the Project Quality Assurance Team are:
* to ensure appropriate application of the procedures in MARISA;
* to control the main outputs (mainly documents) of the Project/Work Packages and to organise reviews.
With reference to **Project Deliverables**: each project deliverable is
assigned to one leading responsible partner. This partner takes
responsibility for the deliverable being of high quality and delivered on
time. The responsible partner assures that the content of the deliverable is
consistent with the team work on the deliverable and that the particular
objectives related to the goals of the project are met. Any issues related to
deliverables that endanger the success of the work package or the project have
to be reported by the WP leader immediately to the Project Management and
discussed within the Coordination team.
### 6.1.1. Reviews for Documentation/Deliverables
A review process involving each partner and selected reviewers is adopted in
the Consortium to ensure the quality of deliverables and of any other external
publication with regard to the technical content and the objectives of the
project, and to adhere to the formal requirements established in the Grant and
Consortium Agreements. The review process also ensures that publications and
deliverables comply with the IPR of the partners. For external publications as
well as for project deliverables, the review process will involve all
Consortium partners and requires the approval of the Project Quality Assurance
Team.
Project documentation will be reviewed against the following criteria
regarding form as well as content of the document:
* Format of the document according to the document templates.
* Identification and correction of typing mistakes, etc.
* Check of consistency:
* with the overall scope of the document (e.g. it contains the right information, avoiding unnecessary information, etc.);
* with previous relevant documentation (e.g. technical specifications vs requirements definition, no redundancy with other documents, etc.).
* Technical aspects of the documentation will also be reviewed by the Project Quality Assurance Team in order to ensure that the document meets the technical goals of the project and that all technical information advances the current state of the art.
The procedures and timeline for the review of project documentation are
described hereafter; a sketch of the resulting timeline in code is given after
this list.
* The partner responsible for preparing the deliverable drafts a Table of Contents (ToC), assigns tasks to all involved partners and sets the respective deadlines (also considering the time needed for quality review).
* Involved partners provide their feedback within the deadlines and the responsible partner prepares the first draft of the document.
* This draft is sent to the entire consortium for comments and improvements/additions. The feedback period for project partners should last at least five working days. Feedback is sent directly to the responsible partner, who revises the document and prepares the semi-final version.
* The Quality Control Process begins based on the semi-final version of the deliverable. **This version has to be ready no later than 20 working days before the final deadline.** At least two Internal Reviewers are assigned in advance (refer to the reviewers table).
* The Internal Reviewers send their comments (within five working days) to the Project Quality Assurance Team, which consolidates and checks the reports and sends them to the responsible partner.
* The partner responsible for preparing the deliverable then improves the document based on the received comments. In case the comments/suggestions cannot be implemented, the reasons for this must be documented. If necessary (i.e. if there are too many comments in the first round), another round of comments from the Internal Reviewers takes place.
* The responsible partner addresses the comments appropriately and prepares the final version of the document, which is sent to the Project Coordinator (at least five days before the final deadline).
* The Project Coordinator then submits the document to the EC.
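As an illustration, the following minimal Python sketch computes the checkpoint dates of this timeline for a given final deadline. It assumes a Monday-to-Friday working week; the T-15 checkpoint for reviewer comments is derived from the semi-final version being ready at T-20 plus the five working days allowed for review, and all names and the sample date are illustrative.

```python
from datetime import date, timedelta

def working_days_before(deadline: date, n: int) -> date:
    """Step back n working days (Mon-Fri) from the given deadline."""
    d = deadline
    while n > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # 0..4 = Monday..Friday
            n -= 1
    return d

def review_schedule(deadline: date) -> dict:
    """Checkpoints of the deliverable quality review timeline (sketch)."""
    return {
        "semi-final version ready (T-20)": working_days_before(deadline, 20),
        "internal review comments due (T-15)": working_days_before(deadline, 15),
        "final version to Coordinator (T-5)": working_days_before(deadline, 5),
        "submission to the EC": deadline,
    }

for step, day in review_schedule(date(2018, 6, 29)).items():
    print(f"{step}: {day.isoformat()}")
```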
Figure 9: MARISA Deliverable Preparation and Quality Review Process - Flow
Figure 10: MARISA Deliverable Preparation and Quality Review Process - Timeline
## 6.2. Deliverables Item List and Internal Reviewers
<table>
<tr>
<th>
**Del.**
**Num.**
</th>
<th>
**Deliverable name**
</th>
<th>
**Lead**
</th>
<th>
**Type**
</th>
<th>
**Int.**
**Rev/er**
**#1**
</th>
<th>
**Int.**
**Rev/er**
**#2**
</th>
<th>
**Delivery date**
</th> </tr>
<tr>
<td>
**D1.1**
</td>
<td>
Project Management, Quality and Risk Plan
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
ENG
</td>
<td>
GMV
</td>
<td>
M5
</td> </tr>
<tr>
<td>
**D1.2**
</td>
<td>
MARISA Project Initial Report
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
STW
</td>
<td>
ENG
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D1.3**
</td>
<td>
MARISA Project Intermediate Report
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
GMV
</td>
<td>
ADS
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D1.4**
</td>
<td>
MARISA Project Final Report
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
ADS
</td>
<td>
STW
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D1.5**
</td>
<td>
MARISA Societal Ethical Initial Report
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
LAU
</td>
<td>
ENG
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D1.6**
</td>
<td>
MARISA Societal Ethical Intermediate Report
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
LAU
</td>
<td>
ENG
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D1.7**
</td>
<td>
MARISA Societal Ethical Final Report
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
LAU
</td>
<td>
ENG
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D1.8, D1.9**
</td>
<td>
MARISA Data Management Plan
(initial, final)
</td>
<td>
LDO
</td>
<td>
ORDP
</td>
<td>
ENG
</td>
<td>
GMV
</td>
<td>
M6, M24
</td> </tr>
<tr>
<td>
**D2.1, D2.8**
</td>
<td>
MARISA User Community Report
(initial, final)
</td>
<td>
LAU
</td>
<td>
R
</td>
<td>
ADS
</td>
<td>
INOV
</td>
<td>
M20, M30
</td> </tr>
<tr>
<td>
**D2.2, D2.9**
</td>
<td>
MARISA User Requirements (initial, final)
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
CMRE
</td>
<td>
IW
</td>
<td>
M5, M20
</td> </tr>
<tr>
<td>
**D2.3, D2.10**
</td>
<td>
MARISA Adoption Models (initial, final)
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
LUC
</td>
<td>
TNO
</td>
<td>
M5, M20
</td> </tr>
<tr>
<td>
**D2.4, D2.11**
</td>
<td>
MARISA Interaction with
existing/legacy systems (initial, final)
</td>
<td>
GMV
</td>
<td>
R
</td>
<td>
EG
</td>
<td>
ADS
</td>
<td>
M5, M20
</td> </tr>
<tr>
<td>
**D2.5, D2.12**
</td>
<td>
MARISA Usage of Additional Data
Sources (initial, final)
</td>
<td>
IOSB
</td>
<td>
R
</td>
<td>
TNO
</td>
<td>
STW
</td>
<td>
M5, M20
</td> </tr>
<tr>
<td>
**D2.6, D2.13**
</td>
<td>
Legal, ethical and societal aspects of MARISA (initial, final)
</td>
<td>
LAU
</td>
<td>
R
</td>
<td>
IW
</td>
<td>
PLA
</td>
<td>
M5, M20
</td> </tr>
<tr>
<td>
**D2.7, D2.14**
</td>
<td>
MARISA Operational Scenarios and
Trials (initial, final)
</td>
<td>
AST
</td>
<td>
R
</td>
<td>
STW
</td>
<td>
PMM
</td>
<td>
M5, M20
</td> </tr>
<tr>
<td>
**D3.1, D3.6**
</td>
<td>
MARISA Toolkit Design (initial, final)
</td>
<td>
GMV
</td>
<td>
R
</td>
<td>
TNO
</td>
<td>
INOV
</td>
<td>
M10, M22
</td> </tr>
<tr>
<td>
**D3.2, D3.7**
</td>
<td>
MARISA Services Description
Document (initial, final)
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
ADS
</td>
<td>
TNO
</td>
<td>
M10, M22
</td> </tr>
<tr>
<td>
**D3.3, D3.8**
</td>
<td>
MARISA Interfaces description Document (initial, final)
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
IOSB
</td>
<td>
LAU
</td>
<td>
M10, M22
</td> </tr>
<tr>
<td>
**D3.4, D3.9**
</td>
<td>
MARISA data model description
(initial, final)
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
IW
</td>
<td>
ADS
</td>
<td>
M10, M22
</td> </tr>
<tr>
<td>
**D3.5, D3.10**
</td>
<td>
MARISA Human machine best
practices and design document (initial, final)
</td>
<td>
LUC
</td>
<td>
R
</td>
<td>
LAU
</td>
<td>
IW
</td>
<td>
M10, M22
</td> </tr>
<tr>
<td>
**D4.1, D4.4**
</td>
<td>
MARISA Level 1 Data Fusion Services
Description (initial, final)
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
AST
</td>
<td>
EG
</td>
<td>
M12, M24
</td> </tr>
<tr>
<td>
**D4.2, D4.5**
</td>
<td>
MARISA Level 2 Data Fusion Services
Description (initial, final)
</td>
<td>
IOSB
</td>
<td>
R
</td>
<td>
ENG
</td>
<td>
UNIBO
</td>
<td>
M12, M24
</td> </tr>
<tr>
<td>
**D4.3, D4.6**
</td>
<td>
MARISA Level 3 Data Fusion Services
Description (initial, final)
</td>
<td>
TNO
</td>
<td>
R
</td>
<td>
ADS
</td>
<td>
CMRE
</td>
<td>
M12, M24
</td> </tr>
<tr>
<td>
**D5.1, D5.6**
</td>
<td>
MARISA Big data Infrastructure
(initial, final)
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
STW
</td>
<td>
EG
</td>
<td>
M12, M24
</td> </tr>
<tr>
<td>
**D5.2, D5.7**
</td>
<td>
MARISA Interfaces to External data sources (initial, final)
</td>
<td>
GMV
</td>
<td>
R
</td>
<td>
TNO
</td>
<td>
INOV
</td>
<td>
M12, M24
</td> </tr>
<tr>
<td>
**D5.3, D5.8**
</td>
<td>
MARISA User Interfaces (initial, final)
</td>
<td>
LUC
</td>
<td>
R
</td>
<td>
IOSB
</td>
<td>
LAU
</td>
<td>
M12, M24
</td> </tr>
<tr>
<td>
**D5.4, D5.9**
</td>
<td>
MARISA Data Fusion Distribution
Services (initial, final)
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
PMM
</td>
<td>
STW
</td>
<td>
M12, M24
</td> </tr>
<tr>
<td>
**D5.5, D5.10**
</td>
<td>
MARISA Access Control Services
(initial, final)
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
EG
</td>
<td>
TNO
</td>
<td>
M12, M24
</td> </tr>
<tr>
<td>
**D6.1**
</td>
<td>
General principles of MARISA test architecture and data integration
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
LDO
</td>
<td>
GMV
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D6.2**
</td>
<td>
Definition of MARISA integration platforms
</td>
<td>
PMM
</td>
<td>
R
</td>
<td>
ENG
</td>
<td>
STW
</td>
<td>
M10
</td> </tr>
<tr>
<td>
**D6.3, D6.7**
</td>
<td>
Definition of Trial Configurations (initial, final)
</td>
<td>
ADS
</td>
<td>
R/O
</td>
<td>
IOSB
</td>
<td>
IW
</td>
<td>
M15, M26
</td> </tr>
<tr>
<td>
**D6.4, D6.8**
</td>
<td>
MARISA Toolkit Integration and
validation Test Plan ( initial, final)
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
GMV
</td>
<td>
ENG
</td>
<td>
M10, M22
</td> </tr>
<tr>
<td>
**D6.5, D6.9**
</td>
<td>
MARISA Toolkit Integration and
validation test report (initial, final)
</td>
<td>
ADS
</td>
<td>
R
</td>
<td>
STW
</td>
<td>
LDO
</td>
<td>
M15, M26
</td> </tr>
<tr>
<td>
**D6.6, D6.10**
</td>
<td>
MARISA Toolkit
</td>
<td>
ADS
</td>
<td>
DEM
</td>
<td>
IW
</td>
<td>
LUC
</td>
<td>
M15, M26
</td> </tr>
<tr>
<td>
**D7.1, D7.5**
</td>
<td>
MARISA Validation in operational trial approach and plan (initial, final)
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
LDO
</td>
<td>
GMV
</td>
<td>
M10, M20
</td> </tr>
<tr>
<td>
**D7.2, D7.6**
</td>
<td>
MARISA Validation in operational trial approach and plan – Appendix (initial,
final)
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
LDO
</td>
<td>
GMV
</td>
<td>
M10, M20
</td> </tr>
<tr>
<td>
**D7.3, D7.7**
</td>
<td>
MARISA Operational trials results
Report and Lesson Learnt (initial, final)
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
ENG
</td>
<td>
ADS
</td>
<td>
M20, M30
</td> </tr>
<tr>
<td>
**D7.4, D7.8**
</td>
<td>
MARISA Operational trials results
Report and Lesson Learnt – Appendix
(initial, final)
</td>
<td>
STW
</td>
<td>
R
</td>
<td>
ENG
</td>
<td>
ADS
</td>
<td>
M20, M30
</td> </tr>
<tr>
<td>
**D8.1**
</td>
<td>
MARISA Communication and Dissemination Strategy and Plan
</td>
<td>
TNO
</td>
<td>
R
</td>
<td>
GMV
</td>
<td>
PLA
</td>
<td>
M5
</td> </tr>
<tr>
<td>
**D8.2, D8.9**
</td>
<td>
MARISA Dissemination Material
(initial, final)
</td>
<td>
AST
</td>
<td>
R
</td>
<td>
EG
</td>
<td>
INOV
</td>
<td>
M10, M20
</td> </tr>
<tr>
<td>
**D8.3,**
**D8.10,**
**D8.14**
</td>
<td>
MARISA WEB Site (initial, intermediate, final)
</td>
<td>
AST
</td>
<td>
DEC
</td>
<td>
LUC
</td>
<td>
UNIBO
</td>
<td>
M10,
M20,
M30
</td> </tr>
<tr>
<td>
**D8.4, D8.11**
</td>
<td>
MARISA Workshop Organization and
Results
</td>
<td>
TNO
</td>
<td>
R/O
</td>
<td>
CMRE
</td>
<td>
EG
</td>
<td>
M20, M30
</td> </tr>
<tr>
<td>
**D8.5, D8.12**
</td>
<td>
MARISA Training Kit (initial, final)
</td>
<td>
AST
</td>
<td>
O
</td>
<td>
INOV
</td>
<td>
LAU
</td>
<td>
M15, M26
</td> </tr>
<tr>
<td>
**D8.6**
</td>
<td>
MARISA Exploitation Plan
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
STW
</td>
<td>
GMV
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**D8.7**
</td>
<td>
MARISA Exploitation and Uptake mechanisms
</td>
<td>
ENG
</td>
<td>
R
</td>
<td>
PLA
</td>
<td>
CMRE
</td>
<td>
M30
</td> </tr>
<tr>
<td>
**D8.8, D8.13**
</td>
<td>
MARISA Services Standardization Report
</td>
<td>
LDO
</td>
<td>
R
</td>
<td>
UNIBO
</td>
<td>
STW
</td>
<td>
M20, M30
</td> </tr>
<tr>
<td>
**D9.1**
</td>
<td>
H – Requirement No.4
</td>
<td>
LDO
</td>
<td>
ETH
</td>
<td>
LAU
</td>
<td>
N/A
</td>
<td>
M9
</td> </tr>
<tr>
<td>
**D9.2**
</td>
<td>
POPD – Requirement No.6
</td>
<td>
LDO
</td>
<td>
ETH
</td>
<td>
LAU
</td>
<td>
N/A
</td>
<td>
M9
</td> </tr>
<tr>
<td>
**D9.3**
</td>
<td>
POPD – Requirement No.8
</td>
<td>
LDO
</td>
<td>
ETH
</td>
<td>
LAU
</td>
<td>
N/A
</td>
<td>
M9
</td> </tr>
<tr>
<td>
**D9.4**
</td>
<td>
POPD – Requirement No.9
</td>
<td>
LDO
</td>
<td>
ETH
</td>
<td>
LAU
</td>
<td>
N/A
</td>
<td>
M9
</td> </tr> </table>
Table 11: Deliverables Item Reviewers
# 7\. Risk Management
## 7.1. Risk Management Process
The Risk Management Process started during the proposal preparation and was
assessed at the Project Kick-Off.
The following steps are foreseen:
* initial brainstorming and preliminary risk identification;
* preparation of the Project Risk Register. Every entry of the Risk Register provides an evaluation (High, Medium, Low) of the likelihood of the risk (“L” column) and of the impact of its consequences (“C” column); a minimal sketch of such an entry is given after this list;
* for each Risk Item in the Risk Register, a mitigation action will be identified;
* every three months an assessment of the Risk Register will be performed, which will result in an update of the Risk Register;
* every six months the Risk Analysis will be performed during the Executive Board;
* if, during the Risk Register assessment (i.e. every three months), a given Risk Item presents a high likelihood or a high consequence, and the mitigation action did not produce any results, a specific meeting will be called to discuss the risk item and update the mitigation action as needed.
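The register itself is a simple tabular structure. Below is a minimal Python sketch of how a register entry and a likelihood/consequence ranking could look; the numeric exposure score and all field names are illustrative assumptions, not part of the plan.

```python
from dataclasses import dataclass

LEVELS = {"L": 1, "M": 2, "H": 3}  # Low / Medium / High

@dataclass
class RiskItem:
    task_wp: str
    description: str
    mitigation: str
    likelihood: str   # "L" column: L, M or H
    consequence: str  # "C" column: L, M or H

    @property
    def exposure(self) -> int:
        # Illustrative likelihood x consequence score, used only for ranking.
        return LEVELS[self.likelihood] * LEVELS[self.consequence]

register = [
    RiskItem("WP1, Management", "Insufficient resources committed to the project",
             "Escalate to partner management; EB proposes solutions", "L", "H"),
    RiskItem("WP3, Interfaces", "Security restrictions prevent seamless integration",
             "Early engagement with final users; flexible contingency plan", "H", "M"),
]

# Items with the highest exposure are candidates for a dedicated meeting.
for item in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"[{item.likelihood}/{item.consequence}] {item.task_wp}: {item.description}")
```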
Several risks have already been identified. The following table presents the
current risk identification for the various work packages. For each item, it
includes the description of the risk, the mitigation action, the likelihood
(L) and the consequence (C).
<table>
<tr>
<th>
**Task/WP**
</th>
<th>
**Description of risk**
</th>
<th>
**Mitigation action**
</th>
<th>
**L**
</th>
<th>
**C**
</th> </tr>
<tr>
<td>
**WP1,**
**Management**
</td>
<td>
General project management risk: insufficient resources and personnel
committed to the project; a partner being in difficulties (company
reorganization); partner withdrawal.
</td>
<td>
Raise the issue urgently with higher-level management in the partner
organization and ask the EB to propose solutions; in case of withdrawal,
replace the partner.
</td>
<td>
**L**
</td>
<td>
**H**
</td> </tr>
<tr>
<td>
**WP2, WP3 Requirement definition and standards**
</td>
<td>
If the requirements are not precise, there is risk that the implementation of
the components suffers a general slack that would compromise effectiveness of
the toolkit.
</td>
<td>
This risk is mitigated by the SE approach adopted for the MARISA design and
development, which makes use of Model-Based Systems Engineering and of an
“Architectural Framework”, as well as by the adoption of Agile development
with the practitioners involved as full partners.
</td>
<td>
**M**
</td>
<td>
**M**
</td> </tr>
<tr>
<td>
**WP3,**
**Interfaces**
</td>
<td>
Interaction with operational border surveillance systems could pose two
difficulties: a) security restrictions can prevent seamless integration with
the network envisaged in MARISA; b) daily operations (planned or unplanned) of
existing civil and military systems could delay the tests envisaged in MARISA
</td>
<td>
The mitigation action is twofold: a) early engagement with the final users
responsible for the current border surveillance operational systems to assess
the security restrictions; it is highlighted that these practitioners are
already involved in MARISA as full partners; b) a flexible contingency plan:
in case one of these systems becomes unavailable (for operational reasons)
during a certain period of the demonstration, an alternative time slot is
envisaged
</td>
<td>
**H**
</td>
<td>
**M**
</td> </tr> </table>
<table>
<tr>
<th>
**Task/WP**
</th>
<th>
**Description of risk**
</th>
<th>
**Mitigation action**
</th>
<th>
**L**
</th>
<th>
**C**
</th> </tr>
<tr>
<td>
**WP3, WP4 and WP5, Technical solution**
</td>
<td>
There is a technical risk that issues in the design of the MARISA solution are
only detected at verification/validation time.
</td>
<td>
Trade-off studies are planned during the development phase (e.g. prototypes,
mock-ups, simulations); these help mitigate the risk by analyzing potential
technical problems early. Moreover, the iterative approach will further
mitigate this risk.
</td>
<td>
**M**
</td>
<td>
**M**
</td> </tr>
<tr>
<td>
**WP5, risk in software development**
</td>
<td>
Delay in testing and validation of the software
</td>
<td>
MARISA relies on software components that already exist and are largely
tested, integrated with new software development; the complexity of the
software to be developed is therefore reduced. Delays will affect individual
components and not the system as a whole.
</td>
<td>
**L**
</td>
<td>
**H**
</td> </tr>
<tr>
<td>
**WP6,**
**integration**
</td>
<td>
The major risk at the integration phase is clearly that a number of components
do not match the agreed specification, functionalities or simply interfaces.
</td>
<td>
Thanks to SE methods, the complexity of the development is kept at a
manageable level. The possibility to test the MARISA toolkit in advance in a
synthetic environment reduces the risk of trouble during the physical trials.
</td>
<td>
**M**
</td>
<td>
**M**
</td> </tr>
<tr>
<td>
**WP6,**
**integration**
</td>
<td>
Delays for components provided by WP4 and 5
</td>
<td>
The integration is made incrementally, starting with an initial version to
test the interfaces. Whenever a version of a component is stabilized, it is
tested in the integration platform to avoid bottlenecks and identify problems
early.
</td>
<td>
**M**
</td>
<td>
**L**
</td> </tr>
<tr>
<td>
**WP6,**
**integration**
</td>
<td>
Components not stabilized or not mature for integration
</td>
<td>
All development WPs will test and qualify the components prior to delivery to
WP6.
</td>
<td>
**L**
</td>
<td>
**M**
</td> </tr>
<tr>
<td>
**WP7, WP2**
</td>
<td>
Risk in the availability of real, sensitive data sets to sustain the trials'
impact
</td>
<td>
Joint definition of the trials during WP2 reduces the risk of misalignment
between trial objectives and practitioners' data availability
</td>
<td>
**M**
</td>
<td>
**M**
</td> </tr>
<tr>
<td>
**WP7, WP2**
</td>
<td>
Demonstrated solution not fully in line with end user constraints
</td>
<td>
End users are involved in all project phases. The high number of end users
with dedicated budget ensures a high average level of consultation in due
time. WP7 will be performed in two phases: 1) with the toolkit providing a
subset of services, 2) with the toolkit in its final configuration. The
feedback from the first phase will be analyzed and incorporated in the final
MARISA toolkit.
</td>
<td>
**L**
</td>
<td>
**H**
</td> </tr>
<tr>
<td>
**WP7**
</td>
<td>
System failure during trials
</td>
<td>
The components will be tested in laboratories and in a controlled environment
by technical partners to ensure minimal operation of the system at the start
of each trial phase. Moreover, performing the trials in two phases permits
possible malfunctions to be identified at an early stage.
</td>
<td>
**M**
</td>
<td>
**M**
</td> </tr>
<tr>
<td>
**WP8, dissemination**
</td>
<td>
Delay in dissemination and exploitation of the results due to a disagreement
about IPR
</td>
<td>
MARISA has nominated an Innovation Manager in the management structure to
advise on all IP issues and propose fair solutions
</td>
<td>
**M**
</td>
<td>
**M**
</td> </tr> </table>
Table 12: Risk Identification
## 7.2. Risk Areas
The following Risk Areas will be taken into account as reference during the
Risk Analysis.
<table>
<tr>
<th>
**Risk Areas**
</th>
<th>
**Low**
</th>
<th>
**Medium**
</th>
<th>
**High**
</th> </tr>
<tr>
<th>
**1**
</th>
<th>
**2**
</th>
<th>
**3**
</th> </tr>
<tr>
<td>
_**Technology** _
</td>
<td>
Developed & used in other projects in maritime or other sectors
</td>
<td>
Technology qualified but not yet in use.
</td>
<td>
Only investigation work, new technology
</td> </tr>
<tr>
<td>
_**Standards** _
</td>
<td>
Developed & used in other projects in maritime or other sectors
</td>
<td>
Experience in use of applicable Standards not consolidated
</td>
<td>
Only investigation work, no application, new Standard
</td> </tr>
<tr>
<td>
_**Requirements** _
</td>
<td>
Well defined, no modification risks or user uncertainty. Some minor areas
need requirements definition
</td>
<td>
For some critical areas the requirements are not
complete and aligned
</td>
<td>
Critical requirements imprecise or unreachable
</td> </tr>
<tr>
<td>
_**External Interface Definition** _
</td>
<td>
Well-established External Interfaces definition, with only minor areas to be
defined
</td>
<td>
The External Interfaces definition is not fully consolidated, with many areas
to be defined
</td>
<td>
The External Interfaces definition needs to be largely established
during the project
</td> </tr>
<tr>
<td>
_**Complexity of Integration and** _
_**validation** _
</td>
<td>
Early definition and baseline of Requirements and
External Interface.
Low coupling between S/W components.
Availability of Integration/ validation Platform
</td>
<td>
Requirements and External Interface baseline with some areas to be defined.
Medium coupling between S/W components.
Availability of Integration/ validation Platform
</td>
<td>
No Requirements and External ICD baseline.
High coupling between S/W components.
Insufficient availability of
Integration/Validation Platform
</td> </tr>
<tr>
<td>
_**Parallel development** _
</td>
<td>
Build approach with no parallel development between builds and adequate time
allocated to
development
</td>
<td>
Build approach with low parallel development between builds and
adequate time allocated
</td>
<td>
Build approach with high parallel development between builds, with no margin
on
</td> </tr>
<tr>
<td>
_**People Motivation & Commitment ** _
</td>
<td>
All resources required for the project are motivated and committed to the
project's success
</td>
<td>
Limited Motivation and commitment
</td>
<td>
Team completely unmotivated and uncommitted
</td> </tr>
<tr>
<td>
_**Schedule** _
_**Constraints** _
</td>
<td>
Schedule with a known critical path; no particular schedule shift forecast
</td>
<td>
Schedule under control with precise critical areas, some criticality may
create schedule impacts
</td>
<td>
Schedule very critical and/or unfeasible based on S/W size and
complexity
</td> </tr>
<tr>
<td>
_**Supplier** _
</td>
<td>
Supplier already successfully involved in previous project
</td>
<td>
Supplier not involved in previous project
</td>
<td>
Supplier not involved in a previous project, or one that in a previous project
had major criticality in schedule and/or in performance
</td> </tr>
<tr>
<td>
_**Project Staffing** _
</td>
<td>
Project understaffed in number or in expertise
</td>
<td>
Project staffing very insufficient in expertise or in number
</td>
<td>
Project staffing very insufficient both in number and in expertise
</td> </tr>
<tr>
<td>
_**SW/HW** _
_**Availability** _
</td>
<td>
Insufficient number of SW licenses or workstations
</td>
<td>
Medium sharing of SW licenses or workstations
</td>
<td>
Total lack of SW licenses or workstations
</td> </tr>
<tr>
<td>
_**Communication** _
</td>
<td>
Low lack of communication inside the project team
</td>
<td>
Medium lack of communication inside the project team
</td>
<td>
Total lack of communication inside the project team
</td> </tr> </table>
Table 13: Risk Areas
# 8\. Data Management
The MARISA Data Management Plan will describe the data management life cycle
for the data to be collected, processed and/or generated by the MARISA
project.
Deliverable D1.8 Data Management Plan, due in M6, will describe the handling
of research data during and after the end of the project: what data will be
collected, processed and/or generated; which methodology and standards will be
applied; whether data will be shared or made open access; and how data will be
curated and preserved (including after the end of the project).
0035_INTERACT_730938.md
# Introduction
## Background and motivation
INTERACT research stations generate data as a result of long-term
environmental monitoring programmes and shorter term research projects.
Currently more than 75 research stations located throughout arctic and
northern alpine areas are part of the INTERACT network (Figure 1). Among the
scientific disciplines practiced in the network are climatology, geoscience,
biology, ecology, cryospheric science, and to some extent anthropology. These
activities can be organised by the station itself or by external scientists.
In addition, research stations often archive relevant data from external
sources (usually meteorological observations, photos, reports, maps). Such
heterogeneous data-generating activities, combined with a lack of structured
data management practices at the stations, result in data archived at multiple
locations for individual stations. Current INTERACT data repositories include
research stations’ archives, local archives (e.g. municipal authorities),
national archives (e.g. meteorological institutes), archives of international,
single discipline networks (e.g. CALM), EU repositories (e.g. SIOS Knowledge
Centre), pan-Arctic/regional repositories (e.g. SAON), and global repositories
(e.g. Pangaea, GTN-P). Far too often, research project data stays with the
research project leader and is not shared according to SAON/IASC, EU, OECD,
WMO and GEOSS recommendations. Furthermore, most stations lack the
interoperability interfaces necessary to actively engage in national and
international data exchange and management activities coordinated through
international programmes (e.g. EU, WMO, ICSU, GEO etc.). Also, no unified
interface for INTERACT datasets currently exists that could help INTERACT
achieve a domain data repository role. The consequence is underutilisation of
existing and future monitoring capabilities, as well as of INTERACT's
contribution to the scientific toolbox.
accepted documentation and exchange standards, INTERACT can become a valuable
asset in network gap analysis performed in various communities (e.g. WMO
Observation Systems Capability Analysis and Review tool (OSCAR) surface
supporting GAW and GCW). The improvements to data management practices would
make INTERACT data **FAIR** : findable, accessible, interoperable, and re-
useable.
The data management work package of INTERACT aims to **increase the data
interoperability** among stations and towards external data consumers by
defining needs for generating **common standards** and data dissemination
strategies. The benefit of such a process is increased visibility and
potentially impact for INTERACT stations.
The purpose of the data management plan is to describe the basic principles
for how the data generated by the project are handled during and after the project.
This includes standards and generation of discovery and use metadata, data
sharing and preservation, and life cycle management, following the
principles outlined by the _Open Research Data Pilot_ and _The FAIR Guiding
Principles for scientific data management and_ _stewardship_ (Wilkinson et al.
2016). However, INTERACT is a heterogeneous community and full implementation
of data management at stations is not in the budget. Thus the primary
objective of this Data Management Plan is to initiate a process that at some
time will lead to a unified view of the INTERACT data. This is achieved
through dialogue with station managers, description of best practices and
linking stations and data centres where stations do not want to manage data
themselves.
This document is a **living document** that will be updated during the
project.
## Organisation of the plan
This plan is based on the _template_ provided by the UK Digital Curation
Centre (DMP Online). This approach is recommended by _OpenAIRE_ _guidelines_
.
# Admin details
<table>
<tr>
<th>
**Project Name**
</th>
<th>
INTERACT
</th> </tr>
<tr>
<td>
**Funding**
</td>
<td>
EU HORIZON 2020 Research and Innovation Programme
</td> </tr>
<tr>
<td>
**Partners**
</td>
<td>
* Lund University (LU) SE
* University of Sheffield (USFD) UK
* University of Copenhagen (UCPH) DK
* University of Oulu (UOULU) FI
* Aarhus University (AU) DK
* CLU srl (CLU) IT
* Alfred Wegener Institute for Polar and Marine Research (AWI) DE
* Norwegian Polar Institute (NPI) NO
* Natural Environment Research Council (NERC) UK
* Tomsk State University (TSU) RU
* University of South Bohemia in Ceske Budejovice (USB) CZ
* Swedish Polar Research Secretariat (SPRS) SE
* Norwegian Institute for Agricultural and Environ. Research (NIBIO) NO
* Stockholm University (SU) SE University of Helsinki (UH) FI
* Greenland Institute of Natural Resources (GINR) GL
* Polish Academy of Sciences - Geophysics dept. IGF-PAS PL
* University of Turku (UTU) FI
* University of Oslo (UiO) NO
* Natural Resources Institute Finland (LUKE) FI
* Russian Academy of Sciences - Siberian Branch (IBPC) RU
* M V Lomonosov Moscow State University (MSU) RU Swedish University of Agricultural Sciences (SLU) SE
* Zentralanstalt für Meteorologie und Geodynamik (ZAMG) AT
* University of Innsbruck (LFU) AT
* Yugra State University (YSU) RU
* Faroe Islands Nature Investigation (JF) FO
* Northeast Iceland Nature Center (RFS) IS
* Centre for Northern Studies (CEN) CA
* Polish Academy of Sciences - Geography Dept. (IGSO-PAS) PL
* Consiglio Nazionale delle Ricerche (CNR) IT
* University of Alaska Fairbanks (UAF) US
* Sudurnes Science and Learning Center (SSLC) IS
* Finnish Meteorological Institute (FMI) FI
* CAFF International Secretariat (CAFF) IS
* APECS - University of Tromsoe (UiT) NO
* Aurora College - The Western Arctic Research Centre (AC) CA
* Arctic Institute of North America (AINA) CA
* Umbilical Design (UD-AB) SE
* ÅF Technology AB (AF) SE
* Norwegian Meteorological Institute (METNO) NO
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Agricultural University of Iceland (AUI) IS
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
University of Groningen (UoG-AC) NL
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
International Polar Foundation (IPF) BE
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Mapillary (MAP) SE
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
University Centre in Svalbard (UNIS) NO
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
The International Centre for Reindeer Husbandry (ICR) NO
</td> </tr> </table>
# Data summary
The INTERACT Data Management Plan addresses data describing >75 research
stations (Figure 1) in cold regions of the Northern Hemisphere. A listing of
the stations involved is provided in the proposal Section 4 and on the
project's website ( _http://www.eu-interact.org_ ). These stations obtain
baseline and monitoring data across a multitude of scientific disciplines
practiced within the network. By integrating the independent research
stations' data through a unified approach, a comprehensive, coordinated view
of the Arctic is achieved.
scientists, modellers, government agencies, educators, and to some extent
private citizens have a vested interest in accessing the various kinds of data
collected at the stations that can provide historical records, serve in model
validation, and provide critical indicators across the disciplines covered
within the network.
The main objective of INTERACT is **to build capacity for identifying,
understanding, predicting and responding to diverse environmental changes
throughout the wide environmental and land-use envelopes of the Arctic** .
A prerequisite to achieve this is to coordinate the data collected at INTERACT
stations and to make them available. Thus, INTERACT data management aims to
integrate datasets in a unified system, simplifying discovery, access and
utilisation of data for various stakeholders in the scientific community, as
well as in operational communities (e.g. scientists, national and local
decision makers, etc.).
INTERACT is truly interdisciplinary. With this perspective and as this
activity on coordinated data management has just begun, no full overview of
data types exists yet.
Concerning the encoding of data, self-explaining file formats (e.g. NetCDF,
HDF/HDF5) combined with semantic and structural standards like the Climate and
Forecast Convention are required to ensure interoperability at the data level.
Implementation of this is however a time consuming process and will be done
gradually.
Eventually, data can be integrated from different data centres with this
approach.
The default format for INTERACT datasets is NetCDF following the Climate and
Forecast Convention (feature types grid, timeseries, profiles and trajectories
if applicable). However, not all data handled at INTERACT stations are covered
by the Climate and Forecast Convention for standard names. INTERACT is
currently in a process of analysing the data collected and potential ways for
handling these data. This work must be based on external activities within the
disciplines and in Arctic data management in general.
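As a concrete illustration of this default encoding, the following Python sketch writes a small CF-style time series with the netCDF4 library; the file name, variable and attribute values are invented for the example and do not represent any specific INTERACT dataset.

```python
# Sketch of a CF-convention NetCDF time series (invented example values).
import numpy as np
from netCDF4 import Dataset

nc = Dataset("station_timeseries.nc", "w", format="NETCDF4")
nc.Conventions = "CF-1.6"
nc.title = "Example air temperature time series from a research station"
nc.featureType = "timeSeries"  # one of the CF feature types named above

nc.createDimension("time", None)  # unlimited time dimension
time = nc.createVariable("time", "f8", ("time",))
time.standard_name = "time"
time.units = "seconds since 1970-01-01 00:00:00"

temp = nc.createVariable("air_temperature", "f4", ("time",))
temp.standard_name = "air_temperature"  # CF standard name
temp.units = "K"

time[:] = np.arange(24) * 3600.0             # one day of hourly steps
temp[:] = 273.15 + 5.0 * np.random.rand(24)  # placeholder observations
nc.close()
```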
INTERACT has a huge legacy of data. Within this phase of INTERACT, an effort
to identify legacy datasets and plan future handling of these will be
initiated.
Data are generated by permanent instrumentation (monitoring) and field work
(projects) at the INTERACT research stations.
The total amount of data is not yet known in detail. As the project
progresses, a better understanding of the full capacity of INTERACT will be
achieved.
_Figure 1: More than 75 research stations are participating in INTERACT_
INTERACT data are useful for all users of INTERACT research stations, as well
as projects, programmes and individual scientists undertaking scientific or
monitoring work in the Arctic. Establishing a unified view on the data
produced by INTERACT stations will improve the impact of INTERACT and the
individual stations through promotion of their capacity to various data
consumers, ranging from individual scientists to regional or global monitoring
programmes (e.g. _AMAP_ , _GCW_ and _GAW_ ) .
## Making data findable, provisions for metadata [FAIR data]
Improving the ability of internal and external data consumers to find and
understand the data INTERACT stations are producing is essential to increase
the impact of INTERACT, individual stations and researchers. Through exposure
of the data produced by INTERACT in relevant discipline specific, regional and
global catalogues, the knowledge and interest in INTERACT is increased. This
can be done both individually by each station or by the INTERACT community.
INTERACT is following a metadata-driven approach. This means that by utilizing
internationally accepted standards and protocols for documentation and
exchange of discovery and use metadata, interoperability with international
systems and frameworks, including WMO’s systems, _Year of Polar Prediction_
(YOPP), _WMO Global Cryosphere Watch_ (GCW) and many national and
international Arctic and marine data centers (e.g. _Svalbard Integrated
Arctic Earth Observing System_ ) is ensured.
INTERACT data management is distributed in nature, relying on a number of data
centres with a long term mandate. This ensures preservation of the scientific
legacy. While defining the approach of INTERACT data management, INTERACT is
aligning efforts with _SAON/IASC Arctic Data_ _Committee_ . This implies
documenting all datasets with standardised discovery metadata using either the
_Global Change Master_ _Directory_ _Directory Interchange Format_ or
_ISO19115_ standards.
INTERACT promotes and encourages the implementation of globally resolvable
Persistent Identifiers (e.g. Digital Object Identifiers - DOI) at each
contributing data centre. Some have this in place, while others are in the
process of establishing this. If DOIs are not supported, a local persistent
identifier must be supported.
Concerning naming conventions, INTERACT requires that controlled vocabularies
are used both at the discovery level and the data level to describe the
content. Discovery level metadata must identify the convention used and the
convention has to be available in machine readable form (preferably through
Simple Knowledge Organisation System). The fallback solution for controlled
vocabularies is the _Global Change_ _Master Directory vocabularies_ .
The search model of the data management system is based on _GCMD Science
Keywords_ for parameter identification through discovery metadata.
Versioning of data is required for the data published in the data management
system. Details on requirements for how to define a new version of a dataset
are to be agreed upon by the Data Forum.
The central node can consume and expose discovery metadata as GCMD DIF and
ISO19115 records (using GCMD keywords for physical/dynamical parameters).
Support for more formats is considered. For use metadata the Climate and
Forecast convention is promoted. More specifications will be identified early
in the project.
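The sketch below illustrates, in Python, what a minimal GCMD DIF-style discovery record could look like; the element names follow public DIF examples, but the exact profile and record content are assumptions, not an INTERACT specification.

```python
# Hedged sketch of a minimal DIF-like discovery metadata record.
import xml.etree.ElementTree as ET

dif = ET.Element("DIF")
ET.SubElement(dif, "Entry_ID").text = "interact-example-airtemp-v1"  # invented ID
ET.SubElement(dif, "Entry_Title").text = "Air temperature at an INTERACT station"
params = ET.SubElement(dif, "Parameters")  # GCMD Science Keywords hierarchy
ET.SubElement(params, "Category").text = "EARTH SCIENCE"
ET.SubElement(params, "Topic").text = "ATMOSPHERE"
ET.SubElement(params, "Term").text = "ATMOSPHERIC TEMPERATURE"
ET.SubElement(dif, "Summary").text = "Hourly 2 m air temperature observations."

print(ET.tostring(dif, encoding="unicode"))
```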
## Making data openly accessible [FAIR data]
Being able to find relevant data is only the first step. Most data consumers
are interested in the actual data. The requirements of data consumers vary.
While ad hoc consumers (usually scientists) frequently consume whatever is
found from a network of stations, consumers concerned with monitoring, or
calibration and validation of numerical models, or remote sensing products
will usually require harmonisation of the data to a common form before they
invest in integration. In order to address this, standardisation of file
formats (encoding) and data access mechanisms is required.
The discovery metadata that can be collected will be made available through a
web-based search interface at _https://interact.met.no_ . Some data may have
temporal access restrictions (embargo period). An embargo period on data may
be requested for different reasons, e.g. allowing Ph.D. students to prepare
their work, or while data is used in the preparation of a publication. Even if
data are constrained in the embargo period, data will be shared internally in
the project. Any disagreements on access to data or misuse of data internally
are to be settled by the INTERACT Steering Board.
A central data repository supporting the demonstrator will be made available.
Within this demonstrator, data are made accessible through interoperability
protocols using a THREDDS Data Server. This will support OPeNDAP, OGC Web Map
Service for visualisation of gridded datasets, and direct HTTP download of
full
files. Standardisation of data access interfaces and linkage to the Common
Data Model through OPeNDAP 1 is promoted for all data centres contributing
to INTERACT. This enables direct access of data within analysis tools like
Matlab, Excel 2 and R. The purpose of this demonstrator is to show how data
may be shared in a standard manner using Open Source Software. Most of the
INTERACT data will however be managed by the stations or data centres the
stations make agreements with. The purpose of the demonstrator is to increase
the knowledge among stations on metadata and data interoperability and to
encourage stations not sharing data today to start exploring possibilities.
Metadata and data for the datasets are maintained by the stations and the
responsible data centres; metadata supporting unified search is harvested and
ingested into the demonstrator hosted by the central node.
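To show the kind of direct access OPeNDAP enables from analysis tools, here is a small Python sketch; the server URL and variable name are hypothetical placeholders, not an actual INTERACT endpoint.

```python
# Sketch: stream a slice of a remote dataset over OPeNDAP (hypothetical URL).
from netCDF4 import Dataset

url = "https://interact.met.no/thredds/dodsC/example/station_timeseries.nc"
ds = Dataset(url)  # only the requested slices are transferred, not the file
temperature = ds.variables["air_temperature"][:24]  # first day of hourly data
print(temperature.mean())
ds.close()
```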
## Making data interoperable [FAIR data]
Interoperability at the data level will be facilitated by following best
practices within international data management and relevant standardisation
efforts. This includes application of self-explaining file formats utilising
discipline-specific controlled vocabularies for data annotation. Data will be
made available through _OPeNDAP_ with use metadata following the _Climate
and Forecast conventions_ e.g. for geophysical data.
However, exceptions will occur due to the diversity of INTERACT data. Some of
the disciplines covered by INTERACT, e.g. meteorology, are advanced in the
context of use metadata, while others lack a unified, discipline-specific
approach. INTERACT must rely on discipline-specific activities and
larger network activities (e.g. GTN-P, GAW, GCW) to avoid duplication of
efforts and reuse the solutions developed. In order to address this aspect,
the Data Forum is established to promote the understanding of emerging data
management requirements. Implementation within INTERACT will be a long and
stepwise process.
Initially, _GCMD Science keywords_ will be used; mapping between GCMD Science
keywords and _CF_ _standard names_ is supported (but needs to be updated).
Other vocabularies are included (e.g. _GBIF_ ) as they become available and
are considered mature. In the current situation, interaction with the stations
is needed to get a full overview of the relevant standards and controlled
vocabularies.
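As a tiny illustration of such a mapping, the Python sketch below pairs one GCMD Science Keyword path with a CF standard name; the single entry is an invented example, not the maintained mapping.

```python
# Sketch: one invented entry of a GCMD-to-CF vocabulary mapping.
from typing import Optional

GCMD_TO_CF = {
    "EARTH SCIENCE > ATMOSPHERE > ATMOSPHERIC TEMPERATURE > AIR TEMPERATURE":
        "air_temperature",  # real CF standard name; keyword path illustrative
}

def cf_name(gcmd_keyword: str) -> Optional[str]:
    """Return the CF standard name for a GCMD keyword, if mapped."""
    return GCMD_TO_CF.get(gcmd_keyword)

print(cf_name(
    "EARTH SCIENCE > ATMOSPHERE > ATMOSPHERIC TEMPERATURE > AIR TEMPERATURE"))
```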
## Increase data re-use (through clarifying licenses) [FAIR data]
The INTERACT data policy is not written yet, but INTERACT promotes free and
open data sharing in line with the _Open Research Data Pilot_ . Each dataset
requires a license attached. The recommendation in
INTERACT is to use _Creative Commons_ _attribution license_ for data (see
_https://creativecommons.org/licenses/by/3.0/_ for details). However, INTERACT
is spanning many nations and a more careful examination of the business models
for various stations and funding regimes is required.
INTERACT data should be delivered in a timely manner, meaning without undue
delay. Any delay, due or undue, shall not be longer than one year after the
dataset is finished. Discovery metadata shall be delivered immediately.
INTERACT is promoting free and open access to data. Some data may have access
constraints. Details will be evaluated during the project.
The quality of each dataset is the responsibility of the Principal
Investigator. The Data Management System will ensure that information on the
quality of the data is available in the discovery metadata.
INTERACT is primarily concerned with observational data. These data cannot be
reproduced and must be reusable in the undefined future.
# Allocation of resources
In the current situation it is not possible to estimate the cost for making
INTERACT data FAIR. Part of the reason is that this work is relying on
existing functionality at the contributing data centres and that this
functionality has been developed over years. The project is also still in the
process of establishing an overview of the current situation among the 79
research stations involved.
Within the first period of INTERACT, a questionnaire has been circulated to
stations asking for details on existing data management. This is still being
analysed, but preliminary results indicate challenges establishing a
preliminary data management system as a demonstrator for INTERACT. Over 50 %
of the stations surveyed indicated established data management routines. Thus,
instead of starting with stations, INTERACT will start with selected data
centres that host data for INTERACT stations. Most of these data centres are
active in relevant data management activities.
The following data centres are so far identified:
Not all contact points identified above are directly involved in INTERACT, but
their institutions are and the data centres are handling INTERACT data. For
some archives, contact points are to be identified. This table will be further
developed and is only to be considered as a preliminary version.
4. Will be available August 2017.
5. _Data available through the Polar Data Catalogue, which has interoperability interfaces for metadata._
Once INTERACT data management is fully established, each data centre is
responsible for accepting, managing, sharing and preserving relevant datasets.
Concerning interoperability interfaces the following interfaces are required
for the first version of the system:
1. Metadata
   1. _OAI-PMH_ serving either _GCMD DIF_ or the _ISO19115_ minimum profile with _GCMD Science_ _Keywords_ .
2. Data (will also use whatever is available and deliver this in original form; for those data, no synthesis products are possible without an extensive effort)
   1. OGC WMS (actual visual representation, not data)
   2. _OPeNDAP_ for data streaming/download, including format conversion
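As an illustration of the metadata interface, the following Python sketch harvests record identifiers from an OAI-PMH endpoint using only standard protocol verbs; the endpoint URL and metadata prefix are assumptions for the example.

```python
# Sketch of OAI-PMH harvesting (hypothetical endpoint and metadata prefix).
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"  # OAI-PMH XML namespace
base_url = "https://example-datacentre.org/oai"  # hypothetical endpoint

resp = requests.get(base_url,
                    params={"verb": "ListRecords", "metadataPrefix": "dif"})
root = ET.fromstring(resp.content)
for record in root.iter(f"{OAI}record"):
    header = record.find(f"{OAI}header")
    print(header.findtext(f"{OAI}identifier"))
```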
However, it should be understood that this is on a best-effort basis, at least
initially, to show the benefit for the INTERACT community. Thus, the
activities are aligned with the efforts of the _SAON/IASC Arctic Data_
_Committee_ .
In the current situation there is no overview of the costs of long term
preservation of data as this is the responsibility of the contributing data
centres and the business model for these differs. This information will be
updated.
# Data security
Data security relies on the existing mechanisms of the contributing data
centres. INTERACT recommends securing the communication between data centres
and users with secure HTTP. Concerning the internal security of the data
centres, INTERACT recommends the best practices from _OAIS_ .
The central node relies on secure HTTP, but not all contributing data centres
support this yet. As this effort is for demonstration initially, this section
will be addressed following discussions in the Data Forum.
# Ethical aspects
INTERACT is handling a wide variety of data. Some data may be ethically
sensitive. In the _IASC context_ this is especially related to humans and
resources (e.g. fisheries, birds and mammals). As the INTERACT Data Policy
still is under development, this will be addressed in later versions of the
document.
# Other
This is not applicable in the current situation, but other considerations
(e.g. funder, institutional, departmental or group policies on data
management, data sharing and data security) may become applicable in later
versions of the plan.
0039_DataBio_732064.md
# Introduction
## 1.1 Project Summary
The data-intensive target sector on which the DataBio project focuses is the
**Data-Driven Bioeconomy** . DataBio focuses on utilizing Big Data to
contribute to the production of the best possible raw materials from
agriculture, forestry and fishery (aquaculture) for the bioeconomy industry,
as well as their further processing into food, energy and biomaterials, while
taking into account various accountability and sustainability issues.
DataBio will deploy state-of-the-art big data technologies and existing
partners’ infrastructure and solutions, linked together through the **DataBio
Platform** . These will aggregate Big Data from the three identified sectors (
**agriculture, forestry and fishery** ), intelligently process them and allow
the three sectors to selectively utilize numerous platform components,
according to their requirements. The execution will be through continuous
cooperation of end user and technology provider companies, bioeconomy and
technology research institutes, and stakeholders from the big data value PPP
programme.
DataBio is driven by the development, use and evaluation of a large number of
**pilots** in the three identified sectors, where associated partners and
additional stakeholders are also involved. The selected pilot concepts will be
transformed to pilot implementations utilizing co-innovative methods and
tools. The pilots select and utilize the best suitable market-ready or almost
market-ready ICT, Big Data and Earth Observation methods, technologies, tools
and services to be integrated to the common DataBio Platform.
Based on the pilot results and the new DataBio Platform, new solutions and new
business opportunities are expected to emerge. DataBio will organize a series
of trainings and hackathons to support its uptake and to enable developers
outside the consortium to design and develop new tools, services and
applications based on and for the DataBio Platform.
The DataBio consortium is listed in Table 1. For more information about the
project see [REF01].
#### _Table 1: The DataBio consortium partners_
<table>
<tr>
<th>
**Number**
</th>
<th>
**Name**
</th>
<th>
**Short name**
</th>
<th>
**Country**
</th> </tr>
<tr>
<td>
1 (CO)
</td>
<td>
INTRASOFT INTERNATIONAL SA
</td>
<td>
**INTRASOFT**
</td>
<td>
Belgium
</td> </tr> </table>
<table>
<tr>
<th>
2
</th>
<th>
LESPROJEKT SLUZBY SRO
</th>
<th>
**LESPRO**
</th>
<th>
Czech Republic
</th> </tr>
<tr>
<td>
3
</td>
<td>
ZAPADOCESKA UNIVERZITA V PLZNI
</td>
<td>
**UWB**
</td>
<td>
Czech Republic
</td> </tr>
<tr>
<td>
4
</td>
<td>
FRAUNHOFER GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
</td>
<td>
**Fraunhofer**
</td>
<td>
Germany
</td> </tr>
<tr>
<td>
5
</td>
<td>
ATOS SPAIN SA
</td>
<td>
**ATOS**
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
6
</td>
<td>
STIFTELSEN SINTEF
</td>
<td>
**SINTEF ICT**
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
7
</td>
<td>
SPACEBEL SA
</td>
<td>
**SPACEBEL**
</td>
<td>
Belgium
</td> </tr>
<tr>
<td>
8
</td>
<td>
VLAAMSE INSTELLING VOOR TECHNOLOGISCH ONDERZOEK N.V.
</td>
<td>
**VITO**
</td>
<td>
Belgium
</td> </tr>
<tr>
<td>
9
</td>
<td>
INSTYTUT CHEMII BIOORGANICZNEJ POLSKIEJ
AKADEMII NAUK
</td>
<td>
**PSNC**
</td>
<td>
Poland
</td> </tr>
<tr>
<td>
10
</td>
<td>
CIAOTECH Srl
</td>
<td>
**CiaoT**
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
11
</td>
<td>
EMPRESA DE TRANSFORMACION AGRARIA SA
</td>
<td>
**TRAGSA**
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
12
</td>
<td>
INSTITUT FUR ANGEWANDTE INFORMATIK (INFAI) EV
</td>
<td>
**INFAI**
</td>
<td>
Germany
</td> </tr>
<tr>
<td>
13
</td>
<td>
NEUROPUBLIC AE PLIROFORIKIS & EPIKOINONION
</td>
<td>
**NP**
</td>
<td>
Greece
</td> </tr>
<tr>
<td>
14
</td>
<td>
Ústav pro hospodářskou úpravu lesů Brandýs nad
Labem
</td>
<td>
**UHUL FMI**
</td>
<td>
Czech Republic
</td> </tr>
<tr>
<td>
15
</td>
<td>
INNOVATION ENGINEERING SRL
</td>
<td>
**InnoE**
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
16
</td>
<td>
Teknologian tutkimuskeskus VTT Oy
</td>
<td>
**VTT**
</td>
<td>
Finland
</td> </tr>
<tr>
<td>
17
</td>
<td>
SINTEF FISKERI OG HAVBRUK AS
</td>
<td>
**SINTEF**
**Fishery**
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
18
</td>
<td>
SUOMEN METSAKESKUS-FINLANDS SKOGSCENTRAL
</td>
<td>
**METSAK**
</td>
<td>
Finland
</td> </tr>
<tr>
<td>
19
</td>
<td>
IBM ISRAEL - SCIENCE AND TECHNOLOGY LTD
</td>
<td>
**IBM**
</td>
<td>
Israel
</td> </tr>
<tr>
<td>
20
</td>
<td>
MHG SYSTEMS OY - MHGS
</td>
<td>
**MHGS**
</td>
<td>
Finland
</td> </tr>
<tr>
<td>
21
</td>
<td>
NB ADVIES BV
</td>
<td>
**NB Advies**
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
22
</td>
<td>
CONSIGLIO PER LA RICERCA IN AGRICOLTURA E
L'ANALISI DELL'ECONOMIA AGRARIA
</td>
<td>
**CREA**
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
23
</td>
<td>
FUNDACION AZTI - AZTI FUNDAZIOA
</td>
<td>
**AZTI**
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
24
</td>
<td>
KINGS BAY AS
</td>
<td>
**KingsBay**
</td>
<td>
Norway
</td> </tr> </table>
<table>
<tr>
<th>
25
</th>
<th>
EROS AS
</th>
<th>
**Eros**
</th>
<th>
Norway
</th> </tr>
<tr>
<td>
26
</td>
<td>
ERVIK & SAEVIK AS
</td>
<td>
**ESAS**
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
27
</td>
<td>
LIEGRUPPEN FISKERI AS
</td>
<td>
**LiegFi**
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
28
</td>
<td>
E-GEOS SPA
</td>
<td>
**e-geos**
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
29
</td>
<td>
DANMARKS TEKNISKE UNIVERSITET
</td>
<td>
**DTU**
</td>
<td>
Denmark
</td> </tr>
<tr>
<td>
30
</td>
<td>
FEDERUNACOMA SRL UNIPERSONALE
</td>
<td>
**Federu**
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
31
</td>
<td>
CSEM CENTRE SUISSE D'ELECTRONIQUE ET DE
MICROTECHNIQUE SA - RECHERCHE ET
DEVELOPPEMENT
</td>
<td>
**CSEM**
</td>
<td>
Switzerland
</td> </tr>
<tr>
<td>
32
</td>
<td>
UNIVERSITAET ST. GALLEN
</td>
<td>
**UStG**
</td>
<td>
Switzerland
</td> </tr>
<tr>
<td>
33
</td>
<td>
NORGES SILDESALGSLAG SA
</td>
<td>
**Sildes**
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
34
</td>
<td>
EXUS SOFTWARE LTD
</td>
<td>
**EXUS**
</td>
<td>
United
Kingdom
</td> </tr>
<tr>
<td>
35
</td>
<td>
CYBERNETICA AS
</td>
<td>
**CYBER**
</td>
<td>
Estonia
</td> </tr>
<tr>
<td>
36
</td>
<td>
GAIA EPICHEIREIN ANONYMI ETAIREIA PSIFIAKON
YPIRESION
</td>
<td>
**GAIA**
</td>
<td>
Greece
</td> </tr>
<tr>
<td>
37
</td>
<td>
SOFTEAM
</td>
<td>
**Softeam**
</td>
<td>
France
</td> </tr>
<tr>
<td>
38
</td>
<td>
FUNDACION CITOLIVA, CENTRO DE INNOVACION Y
TECNOLOGIA DEL OLIVAR Y DEL ACEITE
</td>
<td>
**CITOLIVA**
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
39
</td>
<td>
TERRASIGNA SRL
</td>
<td>
**TerraS**
</td>
<td>
Romania
</td> </tr>
<tr>
<td>
40
</td>
<td>
ETHNIKO KENTRO EREVNAS KAI TECHNOLOGIKIS
ANAPTYXIS
</td>
<td>
**CERTH**
</td>
<td>
Greece
</td> </tr>
<tr>
<td>
41
</td>
<td>
METEOROLOGICAL AND ENVIRONMENTAL EARTH
OBSERVATION SRL
</td>
<td>
**MEEO**
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
42
</td>
<td>
ECHEBASTAR FLEET SOCIEDAD LIMITADA
</td>
<td>
**ECHEBF**
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
43
</td>
<td>
NOVAMONT SPA
</td>
<td>
**Novam**
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
44
</td>
<td>
SENOP OY
</td>
<td>
**Senop**
</td>
<td>
Finland
</td> </tr>
<tr>
<td>
45
</td>
<td>
UNIVERSIDAD DEL PAIS VASCO/ EUSKAL HERRIKO
UNIBERTSITATEA
</td>
<td>
**EHU/UPV**
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
46
</td>
<td>
OPEN GEOSPATIAL CONSORTIUM (EUROPE) LIMITED LBG
</td>
<td>
**OGCE**
</td>
<td>
United
Kingdom
</td> </tr>
<tr>
<td>
47
</td>
<td>
ZETOR TRACTORS AS
</td>
<td>
**ZETOR**
</td>
<td>
Czech Republic
</td> </tr>
<tr>
<td>
48
</td>
<td>
COOPERATIVA AGRICOLA CESENATE SOCIETA
COOPERATIVA AGRICOLA
</td>
<td>
**CAC**
</td>
<td>
Italy
</td> </tr> </table>
## 1.2 Document Scope
This document outlines DataBio’s data management plan (DMP), formally
documenting how data will be handled both during the implementation and upon
natural termination of the project. Many DMP aspects will be considered
including metadata generation, data preservation, data security and ethics,
accounting for the FAIR (Findable, Accessible, Interoperable, Re-usable) data
principle. DataBio, the Data-Driven Bioeconomy project, is a big-data-intensive
innovation action involving a public-private partnership to promote
productivity in EU companies in three of the major bioeconomy sectors, namely
agriculture, forestry and fishery. Experience from the US shows that the
bioeconomy can get a significant boost from Big Data. In Europe, this sector
has until now attracted few large ICT vendors. A central goal of DataBio is to
increase the participation of the European ICT industry in the development of
Big Data systems for boosting the lagging bioeconomy productivity. As a good
case in point, European agriculture, forestry and fishery can benefit greatly
from the European Copernicus space program, which has now launched its third
Sentinel satellite, as well as from telemetry IoT, UAVs, etc.
Farm and forestry machinery and fishing vessels in use today collect large
quantities of data at an unprecedented rate. Remote and proximal sensors and
imagery, and many other technologies, are all working together to give details
about crop and soil properties, the marine environment, weeds and pests,
sunlight and shade, and many other variables relevant to primary production.
Deploying big data analytics on these data can help farmers, foresters and
fishers to adjust and improve the productivity of their business operations.
On the other hand, large data sets such as those coming from the Copernicus
earth monitoring infrastructure are increasingly available at different levels
of granularity, but they are heterogeneous, at times also unstructured, hard
to analyze and distributed across various sectors and different providers. It
is here that the data management plan comes in. It is anticipated that DataBio
will provide a solution which assumes that datasets will be distributed among
different infrastructures and that their accessibility could be complex,
requiring mechanisms that facilitate data retrieval, processing, manipulation
and visualization as seamlessly as possible. The infrastructure will open new
possibilities for the ICT sector, including SMEs, to develop new Bioeconomy
4.0 solutions, and will also open new possibilities for companies from the
Earth Observation sector.
This DMP will be updated over the course of the DataBio project whenever
significant changes arise. Updates of this document will provide increasing
depth on DataBio's DMP strategies, with particular attention to the
findability, accessibility, interoperability and reusability of the Big Data
the project produces. At least two updates will be prepared, in Month 18 and
Month 36 of the project.
## 1.3 Document Structure
This document is comprised of the following chapters:
**Chapter 1** presents an introduction to the project and the document.
**Chapter 2** presents the data summary including the purpose of data
collection, data size, type and format, historical data reuse and data
beneficiaries.
**Chapter 3** outlines DataBio’s FAIR data strategies.
**Chapter 4** describes data management support.
**Chapter 5** describes data security.
**Chapter 6** describes ethical issues.
**Chapter 7** presents the concluding remarks.
**Appendix A** presents the managed data sets.
# Data Summary
## 2.1 Purpose of data collection
During the lifecycle of the DataBio project, big data will be collected, that
is, very large data sets (multi-terabyte or larger) consisting of a wide range
of data types (relational, text, multi-structured data, etc.) from numerous
sources, including relatively new-generation big data (machines, sensors,
genomics, etc.). The ultimate purpose of data collection is to use the data as
a source of information for a variety of big data analytics algorithms,
services and applications that DataBio will deploy to create value, new
business facts and insights, with a particular focus on the bioeconomy
industry. The big datasets are among the building blocks of DataBio's big data
technology platform (Figure 1), which was designed to help European companies
increase productivity. Big Data experts provide common analytic technology
support for the main common and typical bioeconomy applications/analytics that
are now emerging through the pilots in the project. Data from the past will be
managed and analyzed across many different kinds of data sources: descriptive
analytics and classical query/reporting, which require variety management,
i.e. handling and analysis of all the data from the past, including
performance, transactional, attitudinal, descriptive, behavioural,
location-related and interactional data from many different sources. Big data
from the present will be harnessed for monitoring and real-time analytics
pilot services, which require velocity processing, i.e. handling of real-time
data from the present, triggering alarms, actuators etc.
Harnessing big data for the future includes forecasting, prediction and
recommendation analytics pilot services, which require volume processing, i.e.
processing of large amounts of data combining knowledge from the past and
present, and from models, to provide insight for the future.
_Figure 1: DataBio’s analytics and big data value approach_
Specifically:
* Forestry: Big Data methods are expected to bring the possibility to both increase the value of the forests as well as to decrease the costs within sustainability limits set by natural growth and ecological aspects. The key technology is to gather more and more accurate information about the trees from a host of sensors including new generation of satellites, UAV images, laser scanning, mobile devices through crowdsourcing and machines operating in the forests.
* Agriculture: Big Data in Agriculture is currently a hot topic. The DataBio intention is to build a European vision of Big Data for agriculture. This vision is to offer solutions which will increase the role of Big Data in agri-food chains in Europe: a perspective which will prepare recommendations for future big data development in Europe.
* Fisheries: the ambition is to herald and promote the use of Big Data analytical tools within fisheries applications by initiating several pilots which will demonstrate the benefits of using Big Data in an analytical way for the fisheries, such as improved analysis of operational data, tools for planning and operational choices, and crowdsourcing methods for fish stock estimation.
* The use of Big data analytics will bring about innovation. It will generate significant economic value, extend the relevant market sectors, and herald novel business/organizational models. The cross-cutting character of the geo-spatial Big Data solutions allows the straightforward extension of the scope of applications beyond the bio-economy sectors. Such extensions of the market for the Big Data technologies are foreseen in economic sectors, such as: Urban planning, Water quality, Public safety (incl. technological and natural hazards), Protection of critical infrastructures, Waste management. On the other hand, the Big Data technologies revolutionize the business approach in the geospatial market and foster the emergence of innovative business/organizational models; indeed, to achieve the cost effectiveness of the services to the customers, it is necessary to organize the offer to the market on a territorial/local basis, as the users share the same geospatial sources of data and are best served by local players (service providers). This can be illustrated by a network of European services providers, developing proximity relationships with their customers and sharing their knowledge through the network.
## 2.2 Data types and formats
The DataBio specific data types, formats and sources are listed in detail in
Appendix A; below are described key features of the data used in the project.
### 2.2.1 Structured data
Structured data refers to any data that resides in a fixed field within a
record or file. This includes data contained in relational databases,
spreadsheets, and data in forms of events such as sensor data. Structured data
first depends on creating a data model – a model of the types of business data
that will be recorded and how they will be stored, processed and accessed.
This includes defining what fields of data will be stored and how that data
will be stored: data type (numeric, currency, alphabetic, name, date, address)
and any restrictions on the data input (number of characters; restricted to
certain terms such as Mr., Ms. or Dr.; M or F).
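A small Python sketch of this "data model first" idea follows; the record schema and its restrictions are invented for illustration and are not a DataBio data model.

```python
# Sketch: declaring fields, types and input restrictions up front.
from dataclasses import dataclass
from datetime import date

ALLOWED_TITLES = {"Mr.", "Ms.", "Dr."}  # restricted-term field

@dataclass
class PersonRecord:
    title: str          # restricted to certain terms
    name: str           # alphabetic, bounded number of characters
    registered: date    # date-typed field
    balance_eur: float  # currency stored as a numeric field

    def __post_init__(self):
        if self.title not in ALLOWED_TITLES:
            raise ValueError(f"title must be one of {ALLOWED_TITLES}")
        if len(self.name) > 80:
            raise ValueError("name exceeds the 80-character field limit")

record = PersonRecord("Dr.", "Ada Lovelace", date(2017, 1, 1), 0.0)
```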
### 2.2.2 Semi-structured data
Semi-structured data is a cross between structured and unstructured data. It
is a type of structured data, but lacks the strict data model structure. With
semi-structured data, tags or other types of markers are used to identify
certain elements within the data, but the data doesn't have a rigid structure.
For example, word processing software now can include metadata showing the
author's name and the date created, with the bulk of the document just being
unstructured text. Emails have the sender, recipient, date, time and other
fixed fields added to the unstructured data of the email message content and
any attachments. Photos or other graphics can be tagged with keywords such as
the creator, date, location and keywords, making it possible to organize and
locate graphics. XML and other markup languages are often used to manage semi-
structured data. Semi-structured data is therefore a form of structured data
that does not conform with the formal structure of data models associated with
relational databases or other forms of data tables, but nonetheless contains
tags or other markers to separate semantic elements and enforce hierarchies of
records and fields within the data. Therefore, it is also known as a
self-describing structure. In semi-structured data, the entities belonging to the
same class may have different attributes even though they are grouped
together, and the attributes' order is not important. Semi-structured data are
increasingly occurring since the advent of the Internet where full-text
documents and databases are not the only forms of data anymore, and different
applications need a medium for exchanging information. In object-oriented
databases, one often finds semistructured data.
XML and other markup languages, email, and EDI are all forms of semi-
structured data. OEM (Object Exchange Model) was created prior to XML as a
means of self-describing a data structure. XML has been popularized by web
services that are developed utilizing SOAP principles. Some types of data
described here as "semi-structured", especially XML, suffer from the
impression that they are incapable of structural rigor at the same functional
level as Relational Tables and Rows. Indeed, the view of XML as inherently
semi-structured (previously, it was referred to as "unstructured") has
handicapped its use for a widening range of data-centric applications. Even
documents, normally thought of as the epitome of semistructure, can be
designed with virtually the same rigor as database schema, enforced by the XML
schema and processed by both commercial and custom software programs without
reducing their usability by human readers.
In view of this fact, XML might be referred to as having "flexible structure"
capable of human-centric flow and hierarchy as well as highly rigorous element
structure and data typing. The concept of XML as "human-readable", however,
can only be taken so far. Some implementations/dialects of XML, such as the
XML representation of the contents of a Microsoft Word document, as
implemented in Office 2007 and later versions, utilize dozens or even hundreds
of different kinds of tags that reflect a particular problem domain - in
Word's case, formatting at the character and paragraph and document level,
definitions of styles, inclusion of citations, etc. - which are nested within
each other in complex ways. Understanding even a portion of such an XML
document by reading it, let alone catching errors in its structure, is
impossible without a very deep prior understanding of the specific XML
implementation, along with assistance by software that understands the XML
schema that has been employed. Such text is not "human-understandable" any
more than a book written in Swahili (which uses the Latin alphabet) would be
to an American or Western European who does not know a word of that language:
the tags are symbols that are meaningless to a person unfamiliar with the
domain.
JSON or JavaScript Object Notation, is an open standard format that uses
human-readable text to transmit data objects consisting of attribute–value
pairs. It is used primarily to transmit data between a server and web
application, as an alternative to XML. JSON has been popularized by web
services developed utilizing REST principles. There is a new breed of
databases such as MongoDB and Couchbase that store data natively in JSON
format, leveraging the pros of semi-structured data architecture.
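The following short Python sketch shows such attribute-value pairs in practice; the record content is invented, and the point is only that each value carries its own field name and optional attributes may vary between records.

```python
# Sketch: a semi-structured JSON record round-tripped with the json module.
import json

record = {
    "sender": "station-42@example.org",
    "date": "2017-06-01T10:00:00Z",
    "tags": ["soil", "moisture"],      # optional attribute; may be absent
    "body": "Free text with no fixed internal structure.",
}
serialized = json.dumps(record)
restored = json.loads(serialized)
assert restored["tags"] == ["soil", "moisture"]
```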
### 2.2.3 Unstructured data
Unstructured data (or unstructured information) refers to information that
either does not have a pre-defined data model or is not organized in a pre-
defined manner. This results in irregularities and ambiguities that make it
difficult to understand using traditional programs as compared to data stored
in “field” form in databases or annotated (semantically tagged) in documents.
Unstructured data can't be so readily classified and fit into a neat box:
photos and graphic images, videos, streaming instrument data, webpages, PDF
files, PowerPoint presentations, emails, blog entries, wikis and word
processing documents.
In 1998, Merrill Lynch cited a rule of thumb that somewhere around 80-90% of
all potentially usable business information may originate in unstructured
form. This rule of thumb is not based on primary or any quantitative research,
but nonetheless is accepted by some. IDC and EMC project that data will grow
to 40 zettabytes by 2020, resulting in a 50-fold growth from the beginning of
2010. Computer World states that unstructured information might account for
more than 70%–80% of all data in organizations.
Software that creates machine-processable structure can utilize the
linguistic, auditory, and visual structure that exist in all forms of human
communication. Algorithms can infer this inherent structure from text, for
instance, by examining word morphology, sentence syntax, and other small- and
large-scale patterns. Unstructured information can then be enriched and tagged
to address ambiguities and relevancy-based techniques then used to facilitate
search and discovery. Examples of "unstructured data" may include books,
journals, documents, metadata, health records, audio, video, analog data,
images, files, and unstructured text such as the body of an e-mail message,
Web page, or word-processor document. While the main content being conveyed
does not have a defined structure, it generally comes packaged in objects
(e.g. in files or documents, …) that themselves have structure and are thus a
mix of structured and unstructured data, but collectively this is still
referred to as "unstructured data".
### 2.2.4 New generation big data
The new generation big data is in particular focusing on semi-structured and
unstructured data, often in combination with structured data.
In the BDVA reference model for big data technologies, a distinction is made
between six different big data types.
##### 2.2.4.1 Sensor data
Within the DataBio pilots, several key parameters will be monitored through
sensorial platforms, and sensor data will be collected along the way to
support the project activities. Three types of sensor data have already been
identified, namely: a) IoT data from in-situ sensors and telemetric stations,
b) imagery data from unmanned aerial sensing platforms (drones), and c)
imagery from hand-held or mounted optical sensors.
###### 2.2.4.1.1 Internet of Things data
The IoT data are a major subgroup of sensor data involved in multiple pilot
activities in the DataBio project. IoT data are sent via the TCP/UDP protocol
in various formats (e.g. text files with time-series data, JSON strings) and
can be further divided into the following categories (a small ingestion sketch
follows the table and list below):
• Agro-climatic/Field telemetry stations which contribute with raw data
(numerical values) related to several parameters. As different pilots focus on
different application scenarios, the following table summarizes several IoT-
based monitoring approaches to be followed.
##### Table 2: Sensor data tools, resolution and spatial density
<table>
<tr>
<th>
**Pilot**
</th>
<th>
**Mission, instrument**
</th>
<th>
**Data resolution and spatial density**
</th> </tr>
<tr>
<td>
**A1.1,**
**B1.2,**
**C1.1,**
**C2.2**
</td>
<td>
NP’s GAIAtrons, which are telemetry IoT stations with modular/expandable
design will be used to monitor ambient temperature, humidity, solar radiation,
leaf wetness, rainfall volume, wind speed and direction, barometric pressure
(GAIAtron atmo), soil temperature and humidity
(multi-depth) (GAIAtron soil)
</td>
<td>
Time step for data collection every 10 minutes. One station per microclimate
zone (300ha - 1100 ha for atmo, 300ha -
3300ha for soil)
</td> </tr>
<tr>
<td>
**A1.2,**
**B1.3**
</td>
<td>
Field-bound sensors will be used to monitor air temperature, air moisture,
solar radiation, leaf wetness, rainfall, wind speed and direction, soil
moisture, soil temperature, soil EC/salinity, PAR, and barometric pressure.
These sensors consist of a technology platform of retriever-and-pups wireless
sensor networks and SpecConnect, a cloud-based crop data management solution.
</td>
<td>
Time step for data collection is customizable from 1 to 60 minutes. Field
sensors will be used to monitor 5 tandemly located sites at the following
densities: a) air temperature, air moisture, rainfall, wind data and solar
radiation: one block of sensors per 5 ha; b) leaf wetness: two sensors per ha;
c) soil moisture, soil temperature and soil EC/salinity: one combined
sensor per ha
</td> </tr>
<tr>
<td>
**A2.1**
</td>
<td>
Environmental indoor: air temperature, air relative humidity, solar radiation,
crop leaf temperature (remotely and in contact), soil/substrate water content.
Environmental outdoor: wind speed and direction, evaporation, rain, UVA, UVB
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
**B1.1**
</td>
<td>
Agro-climatic IoT stations monitoring temperature, relative and absolute
humidity, wind parameters
</td>
<td>
To be determined
</td> </tr> </table>
* Control data in the parcels/fields measuring sprinklers, drippers, metering devices, valves, alarm settings, heating, pumping state, pressure switches, etc.
* Contact sensing data that pinpoint problems with great precision, speeding up the application of techniques that help to solve them
* Vessel and buoy-based stations which contribute with raw data (numerical values), typically hydro acoustic and machinery data
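To make the ingestion step concrete, the following minimal sketch parses one such JSON sensor message into a flat record ready for time-series storage. The payload layout, field names and station identifier are illustrative assumptions, not an actual GAIAtron schema.

```python
import json
from datetime import datetime

def parse_reading(payload: str) -> dict:
    """Parse one JSON sensor message into a flat record for time-series storage."""
    msg = json.loads(payload)
    return {
        "station_id": msg["station"],                      # telemetry station identifier
        "observed_at": datetime.fromisoformat(msg["ts"]),  # ISO 8601 timestamp
        "air_temp_c": float(msg["t_air"]),                 # ambient temperature [deg C]
        "soil_vwc": float(msg["soil_vwc"]),                # volumetric soil water content
    }

# Example payload as it might arrive over TCP/UDP every 10 minutes:
raw = '{"station": "atmo-042", "ts": "2017-06-01T10:20:00+00:00", "t_air": 21.4, "soil_vwc": 0.31}'
print(parse_reading(raw))
```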
###### 2.2.4.1.2 Drone data
A specific subset of sensor data generated and processed within the DataBio
project is imagery produced by cameras on board drones or RPAS (Remotely
Piloted Aircraft Systems). In particular, some DataBio pilots will use optical
(RGB), thermal or multispectral images and 3D point-clouds acquired from RPAS.
The information generated by drone-airborne cameras is usually Image Data
(JPEG or JPEG2000). A general description of the workflow is provided below.
_Data acquired by the RGB sensor_
The RGB sensor acquires individual pictures in **.JPG** format, together with
their ‘geotag’ files, which are downloaded from the RPAS and processed into:
* **.LAS** files: 3D point clouds (x, y, z), which are then processed to produce Digital Models (Terrain- DTM, Surface-DSM, Elevation-DEM, Vegetation-DVM)
* **.TIF** files: which are then processed into an orthorectified mosaic. In order to obtain smaller files, mosaics are usually exported to compressed **.ECW** format.
_Data acquired by the thermal sensor_
The Thermal sensor acquires a video file which is downloaded from the RPAS
and:
* split into frames in **.TIF** format (pixels contain Digital Numbers: 0-255)
* 1 of every 10 frames is selected (keeping an overlap of about 80%, so as not to process an excessive amount of information; see the sketch at the end of this subsection)
_Data acquired by the multispectral sensor_
The multispectral sensor acquires individual pictures from the 6 spectral
channels in **.RAW** format, which are downloaded from the RPAS and processed
into:
* **.TIF** files (16 bits), which are then processed to produce a 6-bands .TIF mosaic (pixels contain Digital Numbers: 0-255)
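As an illustration of the thermal workflow above (keeping 1 of every 10 frames), the hedged sketch below splits a video into .TIF frames. The use of OpenCV, the file layout and the function name are assumptions for illustration only, not a prescribed DataBio component.

```python
import cv2  # OpenCV, assumed available for video decoding

def extract_thermal_frames(video_path: str, out_dir: str, step: int = 10) -> int:
    """Split a thermal video into .TIF frames, keeping 1 of every `step` frames."""
    cap = cv2.VideoCapture(video_path)
    kept = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:                  # end of video
            break
        if index % step == 0:       # keep 1 of every 10 frames
            cv2.imwrite(f"{out_dir}/frame_{index:06d}.tif", frame)
            kept += 1
        index += 1
    cap.release()
    return kept
```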
###### 2.2.4.1.3 Data from hand-held or mounted optical sensors
Images from hand-held or mounted cameras will be collected using a
truck-mounted or hand-held full-range/high-resolution UV-VIS-NIR-SWIR
spectroradiometer.
##### 2.2.4.2 Machine-generated data
Machine-generated data in the DataBio project are data produced by ships,
boats and machinery used in agriculture and in forestry (such as tractors).
These data will serve for further analysis and optimisation of processes in
the bio-economy sector.
For illustration purposes, examples of data collected by tractors in
agriculture are described below. Tractors are equipped with the following units:
* Control units for data control, data collection and analyses including dashboards, transmission control unit, hydrostatic or hydrodynamic system control unit, engine control unit.
* Global Positioning System (GPS) units or Global System for Mobile Communications (GSM) units for tractor tracking.
* Unit for displaying characteristics of field/soil characteristics including area, quality, boundaries and yields.
These units generate the following data (a brief modelling sketch follows the list):
* Identification of tractor + identification of driver by code or by RFID module.
* Identification of the current operation status.
* Time identification by the date and the current time.
* Precise tractor location tracking (daily route, starts, stops, speed).
* Tractor hours - monitoring working hours in time and place.
* Information from tachometer [Σ km] and [Σ working hrs and min].
* Identification of the current maintenance status.
* Tractor diagnostic: failure modes or failure codes
* Information about the date of the last calibration of each tractor systems + information about setting, information about SW version, last update, etc.
* The amount of fuel in the fuel tank [L].
* Online information about sudden loss of fuel in the fuel tank.
* Fuel consumption per trip / per time period / per kilometer (monitoring of fuel consumption in various dependencies e.g. motor load).
* Total fuel consumption per day [L/day].
* Engine speed [rev/min].
* Possibility to set up engine speed limits online [rev/min, from–to], with signalling when the limits are exceeded.
* Current position of the accelerator pedal [% on a scale of 0-100 %].
* Charging level of the main battery [V].
* Current temperature of the cooling water [°C or °F].
* Current temperature of the motor oil [°C or °F].
* Current temperature of the after-treatment system [°C or °F].
* Current temperature of the transmission oil [°C or °F].
* Gear shift diagnosis [grades backward and forward].
* Current engine load [% on a scale of 0-100 %].
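A hedged sketch of how one such telemetry sample might be modelled in code is given below; the record layout, field names and the low-fuel threshold are illustrative assumptions rather than an actual tractor bus specification.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class TractorRecord:
    """One hypothetical telemetry sample combining the units listed above."""
    tractor_id: str         # tractor identification (code or RFID)
    driver_id: str          # driver identification
    observed_at: datetime   # date and current time
    lat: float              # GPS latitude
    lon: float              # GPS longitude
    engine_rpm: int         # engine speed [rev/min]
    engine_load_pct: float  # current engine load [0-100 %]
    fuel_level_l: float     # amount of fuel in the tank [L]
    coolant_temp_c: float   # cooling water temperature [deg C]
    fault_codes: List[str] = field(default_factory=list)  # diagnostic failure codes

def low_fuel_alert(rec: TractorRecord, threshold_l: float = 15.0) -> bool:
    """Flag a low-fuel condition for online sudden-loss monitoring."""
    return rec.fuel_level_l < threshold_l
```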
##### 2.2.4.3 Geospatial data
The DataBio pilots will collect earth observation (EO) data from a number of
sources which will be refined during the project. Currently, it is confirmed
that the following EO data will be collected and used as input data:
##### Table 3: Geospatial data tools, format and origin
<table>
<tr>
<th>
**Mission, instrument**
</th>
<th>
**Format**
</th>
<th>
**Origin**
</th> </tr>
<tr>
<td>
Sentinel-1, C-SAR
</td>
<td>
SLC, GRD
</td>
<td>
Copernicus Open Access Hub
(https://scihub.copernicus.eu/)
</td> </tr>
<tr>
<td>
Sentinel-2, MSI
</td>
<td>
L1C
</td>
<td>
Copernicus Open Access Hub
(https://scihub.copernicus.eu/)
</td> </tr> </table>
Information about the expected sizes will be added when the information
becomes available.
In addition to EO data, DataBio will utilise other geospatial data from EU,
national, local, private and open repositories, including Land Parcel
Identification System data, cadastral data, the Open Land Use map
(_http://sdi4apps.eu/open_land_use/_), Urban Atlas and Corine Land Cover,
and Proba-V data (_www.vito-eodata.be_).
The meteo-data will be collected mainly from EO-based systems and from
European data sources such as COPERNICUS products and EUMETSAT H-SAF products,
but other EO data sources such as VIIRS, MODIS and ASTER will also be
considered. As complementary data sources, weather forecast model output
(ECMWF) and regional weather service output, usually based on ground weather
stations, can be considered according to the specific target areas of the
pilots.
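As an illustration of how the Sentinel data listed in Table 3 can be discovered programmatically, the minimal sketch below queries the OpenSearch interface of the Copernicus Open Access Hub. The query string, placeholder credentials and response handling are an assumption-laden example, not a prescribed DataBio component.

```python
import requests

URL = "https://scihub.copernicus.eu/dhus/search"   # OpenSearch endpoint of the hub
AUTH = ("username", "password")                    # placeholder hub credentials

# Query Sentinel-2 L1C products whose footprint intersects a point of interest.
params = {
    "q": 'platformname:Sentinel-2 AND producttype:S2MSI1C '
         'AND footprint:"Intersects(49.75, 13.38)"',
    "rows": 10,
    "format": "json",
}
resp = requests.get(URL, params=params, auth=AUTH, timeout=60)
resp.raise_for_status()
for product in resp.json()["feed"].get("entry", []):
    print(product["title"])   # product identifiers ready for download
```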
##### 2.2.4.4 Genomics data
Within the DataBio Pilot 1.1.2, different data will be collected and produced.
Three categories of data have already been identified for the pilot, namely:
a) in-situ sensor (including image capture) and farm data, b) genomic data
from plant breeding efforts in greenhouses produced using Next Generation
Sequencers (NGS), and c) biochemical data of tomato fruits produced by
chromatographs (LC/MS/MS, GC/MS, HPLC).
In-situ sensors/Environmental outdoor: wind speed and direction, evaporation,
rain, light intensity, UVA, UVB.
In-situ sensors/Environmental indoor: air temperature, air relative humidity,
crop leaf temperature (remotely and in contact), soil/substrate water content,
crop type, etc.
Farm Data:
* In-situ measurements: soil nutritional status.
* Farm logs (work calendar, technical practices at farm level, irrigation information).
* Farm profile (static farm information, such as size).
##### Table 4: Genomic, biochemical and metabolomic data tools, description
and acquisition
<table>
<tr>
<th>
**Pilot A1.1.2**
</th>
<th>
**Mission, Instrument**
</th>
<th>
**Data description and acquisition**
</th> </tr>
<tr>
<td>
Genomic data
</td>
<td>
To characterize the genetic diversity of local tomato varieties used for
breeding. To use the genetic- genomic information to guide the breeding
efforts (as a selection tool for higher performance) and develop a model to
predict the final breeding result in order to achieve rapidly and with less
financial burden varieties of higher performance. Data will be produced using
two Illumina NGS machines.
</td>
<td>
Data produced from the Illumina machines are stored in compressed text files
(FASTQ). Data will be produced from plant biological samples (leaf and fruit).
Collection will be done at 2 different plant stages (plantlets and mature
plants). Genomic data will be produced using standard and customized protocols
at CERTH. Genomic data, although plain text in format, are big-volume data and
pose challenges in their storage, handling and processing. Preliminary
analysis will be performed using the local HPC computational facility.
</td> </tr>
<tr>
<td>
Biochemical, metabolomic data
</td>
<td>
To characterize the biochemical profile of fruits from tomato varieties used
for breeding. Data will be produced from different chromatographs and mass
spectrometers
</td>
<td>
Data will be mainly proprietary binary based archives converted to XML or
other open formats. Data will be acquired from biological samples of tomato
fruits.
</td> </tr> </table>
While genomic data are stored in raw format as files, environmental data,
which are generated using a network of sensors, will be stored in a database
along with the time information and will be processed as time series data.
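Since FASTQ files store each read as four text lines (identifier, sequence, separator, quality string), they can be streamed rather than loaded whole, which matters for big-volume NGS output. A minimal sketch follows; the file name in the usage comment is hypothetical.

```python
import gzip
from typing import Iterator, Tuple

def read_fastq(path: str) -> Iterator[Tuple[str, str, str]]:
    """Stream (identifier, sequence, quality) records from a gzipped FASTQ file."""
    with gzip.open(path, "rt") as handle:
        while True:
            header = handle.readline().rstrip()
            if not header:              # end of file
                break
            seq = handle.readline().rstrip()
            handle.readline()           # '+' separator line, ignored
            qual = handle.readline().rstrip()
            yield header[1:], seq, qual

# e.g. count the reads in a (hypothetical) Illumina output file:
# n_reads = sum(1 for _ in read_fastq("sample_R1.fastq.gz"))
```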
## 2.3 Historical data
In the context of doing machine learning and predictive and prescriptive
analytics it is important to be able to use historical data for training and
validation purposes. Machine learning algorithms will use existing historical
data as training data both for supervised and unsupervised learning.
Information about datasets and the time periods concerned with historical
datasets to be used for DataBio can be found in Appendix A. Historical data
can also serve to train and test complex event processing (CEP) applications.
In this case, historical data is injected as if it were “happening in real
time”, so that the event-driven application at hand can be tested before
running it in a real environment, as sketched below.
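A minimal, assumption-laden sketch of such a replay driver is shown here; the event source, the sink and the speed-up factor are placeholders for whatever CEP engine a pilot actually uses.

```python
import time
from typing import Callable, Iterable, Tuple

def replay(events: Iterable[Tuple[float, dict]],
           sink: Callable[[dict], None],
           speedup: float = 1.0) -> None:
    """Re-inject historical events as if they were happening in real time.

    `events` yields (epoch_timestamp, payload) pairs in chronological order;
    `speedup` > 1 compresses history so test runs finish faster.
    """
    previous = None
    for ts, payload in events:
        if previous is not None:
            time.sleep(max(0.0, (ts - previous) / speedup))  # reproduce inter-event gaps
        sink(payload)   # feed the event-processing application under test
        previous = ts
```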
## 2.4 Expected data size and velocity
The big data “V” characteristics of Volume and Velocity are described for
each of the identified datasets in the DataBio project - typically with
measurements of total historical volumes and new/additional data per time
unit. The DataBio-specific data volumes and velocities (or injection rates)
can be found in Appendix A.
## 2.5 Data beneficiaries
In this section, this document analyses the key data beneficiaries who will
benefit from the use of big data in fields such as analytics, data sets,
business value, sales or marketing. This section considers both tangible
and intangible concepts.
In examining the value of big data, it is necessary to evaluate who is
affected by them and their usage. In some cases, the individual whose data is
processed directly receives a benefit. Nevertheless, regarding the Data-Driven
Bio-Economy, the benefit to the individual can be considered indirect. In
other cases, the relevant individual receives no attributable benefit, with
the big data value reaped by business, government, or society at large.
Concerning General Community, the collection and use of an individual’s data
benefits not only that individual, but also members of a proximate class, such
as users of a similar product or residents of a geographical area. In the case
of organizations, Big Data analysis often benefits those organizations that
collect and harness the data. Data-driven profits may be viewed as enhancing
allocative efficiency by facilitating the free economy. The emergence,
expansion, and widespread use of innovative products and services at
decreasing marginal costs have revolutionized global economies and societal
structures, facilitating access to technology and knowledge and fomenting
social change. With more data, businesses can optimize distribution methods,
efficiently allocate credit, and robustly combat fraud, benefitting consumers
as a whole.
On the other hand, big data analysis can provide a direct benefit to those
individuals whose information is being used. However, the DataBio project is
not directly involved in those specific cases (see Chapter 6 about ethical
issues).
Regarding general benefits, big data is creating enormous value for the global
economy, driving innovation, productivity, efficiency, and growth. Data has
become the driving force behind almost every interaction between individuals,
businesses, and governments. The uses of big data can be transformative and
are sometimes difficult to anticipate at the time of initial collection.
This section does not provide a comprehensive taxonomy of big data benefits;
it would be pretentious to do so by ranking the relative importance of weighty
social goals. Rather, it posits that such benefits must be accounted for by
rigorous analysis considering the priorities of a nation, society, or economy.
Only then can benefits be assessed within an economic framework.
Besides those general concepts on Big Data Beneficiaries, it is possible to
analyse the impact of DataBio project results regarding the final users of the
different technologies, tools and services to be developed. Using this
approach, and taking into account that more detailed information is available
at Deliverables D1.1, D2.1 and D3.1 regarding Agricultural, Forestry and
Fishery pilots definition, the main beneficiaries of big data are described in
the following sections.
### 2.5.1 Agricultural Sector
One of the proposed agricultural pilots concerns the use of tractor units able
to send information about current operations online to the driver or farmer.
The prototypes will be equipped with units for tracking and tracing (GPS -
Global Positioning System or GSM - Global System for Mobile Communications)
and a unit for displaying soil characteristics. The proposed solution will
meet farmers' requests for cost reduction and improved productivity in order
to increase their economic benefits while also following sustainable
agriculture practices.
In another case, smart farming services such as irrigation, provided through
flexible mechanisms and UIs (web, mobile, tablet compatible), will promote the
adoption of technological tools (IoT, data analytics) and collaboration with
certified professionals to optimize farm productivity. Farming cooperatives
will therefore obtain, again, cost reduction and improved productivity,
migrating from standard to sustainable smart-agriculture practices. In
summary, the main beneficiaries of DataBio will be farming cooperatives,
farmers and land owners.
### 2.5.2 Forestry Sector
Data sharing and a collaborative environment enable improved tools for
sustainable forest management decisions and operations. Forest management
services make data accessible for forest owners, and other end users, and
integrate this data for e-contracting, online purchase and sales of timber and
biomass. Higher data volumes and better data accessibility increase the
probability that the data will be updated and maintained.
DataBio WP2 will develop and pilot standardized procedures for collecting and
transferring big data, based on the DataBio WP4 platform, from silvicultural
activities executed in the forest. In summary, the big data beneficiaries
related to the WP2 Forestry pilot activities will be:
* Forest owners (private, public, timberland investors)
* Forest authority experts
* Forest companies
* Contractors and service providers
### 2.5.3 Fishery Sector
Regarding WP3 – Fisheries Pilot, in Pilot A2: Small pelagic fisheries
immediate operational choices, the main users and beneficiaries of this pilot
will be the ship owners and masters on board small pelagic vessels. Modern
pelagic vessels are equipped with increasingly complex machinery systems for
propulsion, manoeuvring and power generation. Due to that, the vessel is
always in an operational state, but the configuration of the vessel systems
imposes constraints on operation. The captain is tasked with safe operation of
the vessel, while the efficiency of the vessel systems may be increased if the
captain is informed about the actual operational state, potential for
improvement and expected results of available actions.
The goal of the pilot B2: Oceanic tuna fisheries planning is to create tools
that aid in trip planning by presenting historical catch data as well as
attempting to forecast where the fish might be in the near future. The
forecast model will be constructed from historical catch data together with
the data available to the skippers at that moment (oceanographic data, buoy
data, etc.). In that case, the main beneficiaries of the DataBio development
will be tuna fishery companies. Therefore, in summary, DataBio WP3
beneficiaries will be the broad range of fisheries stakeholders, from
companies to captains and vessel owners.
### 2.5.4 Technical Staff
Adoption rates aside, the potential benefits of utilising big data and related
technologies are significant both in scale and scope and include, for example:
better/more targeted marketing activities, improved business decision making,
cost reduction and generation of operational efficiencies, enhanced planning
and strategic decision making and increased business agility, fraud detection,
waste reduction and customer retention to name but a few. Obviously, the
ability of firms to realize business benefits will be dependent on company
characteristics such as size, data dependency and nature of business activity.
A core concern voiced by many of those participating in big data focused
studies is the ability of employers to find and attract the talent needed for
both a) the successful implementation of big data solutions and b) the
subsequent realisation of associated business benefits.
Although ‘Data Scientist’ may currently be the most requested profile in big
data, the recruitment of Data Scientists (in volume terms at least) appears
relatively low down the wish list of recruiters. Instead, the openings most
commonly arising in the big data field (as is the case for IT recruitment) are
development positions.
### 2.5.5 ICT sector
##### 2.5.5.1 Developers
The generic title of developer is normally employed together with a detailed
description of the specific technical related skills required for the post and
it is this description that defines the specific type of development activity
undertaken. The technical skills most often cited by recruiters in adverts for
big data Developers are: NoSQL (MongoDB in particular), Java, SQL, JavaScript,
MySQL, Linux, Oracle, Hadoop (especially Cassandra), HTML and Spring.
##### 2.5.5.2 Architects
Applicants for these positions are required to hold skills in a range of
technical disciplines including Oracle (in particular, BI EE), Java, SQL,
Hadoop and SQL Server, whilst the main generic areas of technical knowledge
and competence required are Data Modelling, ETL, Enterprise Architecture,
Open Source and Analytics.
##### 2.5.5.3 Analysts
Particular process/methodological skills required from applicants for analyst
positions were primarily in respect of: Data Modelling, ETL, Analytics and
Data.
##### 2.5.5.4 Administrators
In general, the technical skills most often requested by employers from big
data Administrators at that time were: Linux, MySQL and Puppet, Hadoop and
Oracle, whilst the process and methodological competences most often requested
were in the areas of Configuration Management, Disaster Recovery, Clustering
and ETL.
##### 2.5.5.5 Project Managers
The specific types of Project Manager most often required by big data
recruiters are Oracle Project Managers, Technical Project Managers and
Business Intelligence Project Managers.
Aside from Oracle (and in particular BI EE, EBS and EBS R12), which was
specified in over two-thirds of all adverts for big data related Project
Management posts, other technical skills often needed by applicants for this
type of position were Netezza, Business Objects and Hyperion. Process and
methodological skills commonly required included ETL and Agile Software
Development, together with a range of more ‘business focused’ skills, e.g.
PRINCE2 and Stakeholder Management.
##### 2.5.5.6 Data Designers
The technical skills most commonly requested in association with these posts
have been found to be Oracle (particularly BI EE) and SQL, followed by
Netezza, SQL Server, MySQL and UNIX. Common process and methodological skills
needed were: ETL,
Data Modelling, Analytics, CSS, Unit Testing, Data Integration and Data
Mining, whilst more general knowledge requirements related to the need for
experience and understanding of Business Intelligence, Data Warehouse, Big
Data, Migration and Middleware.
##### 2.5.5.7 Data Scientists
The core technical skills needed to secure a position as a Data Scientist are
found to be Hadoop, Java, NoSQL and C++. As was the case for other big data
positions,
adverts for Data Scientists often made reference to a need for various process
and methodological skills and competences. Interestingly however, in this
case, such references were found to be much more commonplace and (perhaps as
would be expected) most often focused upon data and/or statistical themes,
i.e. Statistics, Analytics and Mathematics.
### 2.5.6 Research and education
Researchers, scientists and academics are one of the largest groups for data
reuse. DataBio data published as open data will be used for further research
and for educational purposes (e.g. thesis).
### 2.5.7 Policy making bodies
The DataBio data and results will serve as a basis for decision making bodies,
especially for policy evaluation and feedback on policy implementation. This
includes mainly the European Commission, national and regional public
authorities.
# FAIR Data
The FAIR principles ensure that data can be discovered through catalogues or
search engines, is accessible through open interfaces, is compliant with
standards for interoperable processing, and can therefore easily be reused.
## 3.1 Data findability
### 3.1.1 Data discoverability and metadata provision
Metadata is, as its name implies, data about data. It describes the properties
of a dataset. Metadata can cover various types of information. Descriptive
metadata includes elements such as the title, abstract, author and keywords,
and is mostly used to discover and identify a dataset. Another type is
administrative metadata with elements such as the license, intellectual
property rights, when and how the dataset was created, who has access to it,
etc. The datasets on the DataBio infrastructure are either added locally by a
user, harvested from existing data portals, or fetched from operational
systems or IoT ecosystems. In DataBio, the definition of a set of metadata
elements is necessary in order to allow identification of the vast amount of
information resources managed for which metadata is created, their
classification, and identification of their geographic location and temporal
reference, quality and validity, conformity with implementing rules on the
interoperability of spatial data sets and services, constraints related to
access and use, and the organization responsible for the resource. In
addition, metadata elements related to the metadata record itself are
necessary to monitor that the metadata created are kept up to date, and to
identify the organization responsible for the creation and maintenance of the
metadata.
Such a minimum set of metadata elements is also necessary to comply with
Directive 2007/2/EC and does not preclude the possibility for organizations to
document the information resources more extensively with additional elements
derived from international standards or working practices in their community
of interest.
Metadata referring to datasets and dataset series (particularly relevant for
DataBio are the EO products derived from satellite imagery) should adhere to
the profile originating from the INSPIRE Metadata Regulation, with added
theme-specific metadata elements for the agriculture, forestry and fishery
domains where necessary. This approach will ensure that metadata created for
the datasets, dataset series and services will be compliant with the INSPIRE
requirements as well as with the international standards ISO EN 19115
(Geographic Information
– Metadata; with special emphasis in ISO 19115-2:2009 Geographic information
-- Metadata -- Part 2: Extensions for imagery and gridded data), ISO EN 19119
(Geographic Information – Services), ISO EN 19139 (Geographic Information –
Metadata – Metadata XML Schema) and ISO EN ISO 19156 (Earth Observation
Metadata profile of Observations & Measurements). Besides, INSPIRE conformant
metadata may be expressed also through the DCAT Application Profile 1 ,
which defines a minimum set of metadata elements to ensure cross-domain and
cross-border interoperability between metadata schemas used in European data
portals. If adopted by DataBio, such a mapping could support the inclusion of
INSPIRE metadata in the Pan-European Open Data Portal for wider discovery
across sectors beyond the geospatial domain.
A Distribution represents a way in which the data is made available. DCAT is a
rather small vocabulary, but deliberately leaves many details open. It
welcomes “application profiles”: more specific specifications built on top of
DCAT, respectively GeoDCAT-AP as its geospatial extension. For sensors, we
will focus on SensorML, which can be used to describe a wide range of sensors,
including both dynamic and stationary platforms and both in-situ and remote
sensors. Another possibility is the Semantic Sensor Network (SSN) Ontology,
which describes sensors, observations and related concepts. It does not
describe domain concepts, time, locations, etc.; these are intended to be
included from other ontologies via OWL imports. This ontology is developed by
the W3C Semantic Sensor Networks Incubator Group (SSN-XG).
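To illustrate how a dataset and one of its distributions can be described with DCAT, the following hedged sketch builds a small record with the rdflib library. The dataset URI, title and keyword are hypothetical; a real DataBio record would carry the full INSPIRE/GeoDCAT-AP element set discussed above.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")
DCT = Namespace("http://purl.org/dc/terms/")

g = Graph()
ds = URIRef("https://example.org/databio/dataset/s2-mosaic")        # hypothetical dataset URI
dist = URIRef("https://example.org/databio/dataset/s2-mosaic/wms")  # one of its distributions

g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCT.title, Literal("Sentinel-2 cloud-free mosaic", lang="en")))
g.add((ds, DCAT.keyword, Literal("earth observation")))
g.add((ds, DCAT.distribution, dist))        # a Distribution: how the data is made available
g.add((dist, RDF.type, DCAT.Distribution))
g.add((dist, DCT.format, Literal("OGC WMS")))

print(g.serialize(format="turtle"))
```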
In DataBio, there is a need for metadata harmonization of the spatial and non-
spatial datasets and services. GeoDCAT-AP was an obvious choice due to its
strong focus on geographic datasets. The main advantage is that it enables
users to query all datasets in a uniform way. GeoDCAT-AP is still very new,
and the implementation of the new standard within DataBio can provide feedback
to OGC, W3C and JRC from both a technical and an end-user point of view.
Several software components with varying support for GeoDCAT-AP are available
in the DataBio architecture, namely Micka 2 , CKAN 3 and GeoNetwork 4
. For the DataBio purposes we will also need to integrate the Semantic Sensor
Network Ontology and SensorML.
To enable compatibility with COPERNICUS, INSPIRE and GEOSS, the DataBio
project will make three extensions: i) a module for extended harvesting of
INSPIRE metadata into DCAT, based on XSLT and easy configuration; ii) a module
for user-friendly visualisation of INSPIRE metadata in CKAN; and iii) a module
to output metadata in GeoDCAT-AP, respectively SensorDCAT. We plan to use the
Micka and CKAN systems.
MICKA is a complex system for metadata management used for building Spatial
Data Infrastructure (SDI) and geo portal solutions. It contains tools for
editing and the management of spatial data and services metadata, and other
sources (documents, websites, etc.). CKAN supports DCAT to import or export
its datasets. CKAN enables harvesting data from OGC:CSW catalogues, but not
all mandatory INSPIRE metadata elements are supported. Unfortunately, the DCAT
output does not fulfil all INSPIRE requirements, nor is GeoDCAT-AP fully
supported.
An ongoing programme of spatial data infrastructure projects, undertaken with
academic and commercial partners, enables DataBio to contribute to the
creation of standard data specifications and policies. This ensures the
databases involved remain of high quality and compatible, and can interact
with one another to deliver data which provides practical and tangible
benefits for European society. The underlying mission is to provide and
disseminate information that is objective, independent and of high quality,
and available to everybody: politicians, authorities, businesses and citizens.
### 3.1.2 Data identification, naming mechanisms and search keyword
approaches
For data identification, naming and search keywords we will use INSPIRE data
registry. The INSPIRE infrastructure involves a number of items, which require
clear descriptions and the possibility to be referenced through unique
identifiers. Examples for such items include INSPIRE themes, code lists,
application schemas or discovery services. Registers provide a means to assign
identifiers to items and their labels, definitions and descriptions (in
different languages). The INSPIRE Registry is a service giving access to
INSPIRE semantic assets (e.g. application schemas, meta/data codelists,
themes), and assigning to each of them a persistent URI. As such, this service
can be considered also as a metadata directory/catalogue for INSPIRE, as well
as a registry for the INSPIRE "terminology". Starting from June 2013, when the
INSPIRE Registry was first published, a number of versions have been released,
implementing new features based on the community's feedback. Recently, a
new version of the INSPIRE Registry has been published which, among other
features, makes its content available also in RDF/XML:
_http://inspire.ec.europa.eu/registry/_ 5
The INSPIRE registry provides a central access point to a number of centrally
managed INSPIRE registers 6 . The INSPIRE registry includes:
* _INSPIRE application schema register_
* _INSPIRE code list register_
* _INSPIRE enumeration register_
* _INSPIRE feature concept dictionary_
* _INSPIRE glossary_
* _INSPIRE layer register_
* _INSPIRE media-types register_
* _INSPIRE metadata code list register_
* _INSPIRE reference document register_
* _INSPIRE theme register_
Most relevant for naming in metadata is INSPIRE metadata code list register,
which contains the code lists and their values, as defined in the INSPIRE
implementing rules on metadata. 7
### 3.1.3 Data lineage
Data lineage refers to the sources of information, such as entities and
processes, involved in producing or delivering an artifact. Data lineage
records the derivation history of a data product. The history could include
the algorithms used, the process steps taken, the computing environment run,
data sources input to the processes, the organization/person responsible for
the product, etc. Provenance provides important information to data users for
them to determine the usability and reliability of the product. In the science
domain, the data provenance is especially important since scientists need to
use the information to determine the scientific validity of a data product and
to decide if such a product can be used as the basis for further scientific
analysis. The provenance of information is crucial to making determinations
about whether information is trusted, how to integrate diverse information
sources, and how to give credit to originators when reusing information
[REF-02]. In an open and inclusive environment such as the Web, users find
information that is often contradictory or questionable. Reasoners in the
Semantic Web will need explicit representations of provenance information in
order to make trust judgments about the information they use. With the arrival
of massive amounts of Semantic Web data (e.g., via the Linked Open Data
community), information about the origin of that data, i.e., provenance,
becomes an important factor in developing new Semantic Web applications.
Therefore, a crucial enabler of Semantic Web deployment is the explicit
representation of provenance information that is accessible to machines, not
just to humans. Data provenance is the information about how data was derived;
together with lineage, it is critical to the ability to interpret a particular
data item. Provenance is often conflated with metadata and trust. Metadata is
used to represent properties of objects; many of those properties have to do
with provenance, so the two are often equated. Trust is derived from
provenance information and is typically a subjective judgment that depends on
context and use [REF-03].
W3C PROV Family of Documents defines a model, corresponding serializations and
other supporting definitions to enable the interoperable interchange of
provenance information in heterogeneous environments such as the Web [REF-04].
Current standards include [REF-05]:
**PROV-DM: The PROV Data Model** [REF-06] - PROV-DM is a core data model for
provenance for building representations of the entities, people and processes
involved in producing a piece of data or thing in the world. PROV-DM is
domain-agnostic, but with well-defined extensibility points allowing further
domain-specific and application-specific extensions to be defined. It is
accompanied by PROV-ASN, a technology-independent abstract syntax notation,
which allows serializations of PROV-DM instances to be created for human
consumption, which facilitates its mapping to concrete syntax, and which is
used as the basis for a formal semantics.
**PROV-O: The PROV Ontology** [REF-07] - This specification defines the PROV
Ontology as the normative representation of the PROV Data Model using the Web
Ontology Language (OWL2). This document is part of a set of specifications
being created to address the issue of provenance interchange in Web
applications.
**Constraints of the PROV Data Model** [REF-08] - PROV-DM, the PROV data
model, is a data model for provenance that describes the entities, people and
activities involved in producing a piece of data or thing. PROV-DM is
structured in six components, dealing with: (1) entities and activities, and
the time at which they were created, used, or ended; (2) agents bearing
responsibility for entities that were generated and activities that happened;
(3) derivations of entities from entities; (4) properties to link entities
that refer to a same thing; (5) collections forming a logical structure for
its members; (6) a simple annotation mechanism.
**PROV-N: The Provenance Notation** [REF-09] - PROV-N is a notation designed
to write down instances of the PROV data model in a compact, human-readable
textual form. It serializes the same six components summarized above for
PROV-DM and serves as a basis for the model's formal semantics.
Figure 2 [REF-10] shows a generic data lifecycle in the context of a data
processing environment, where data are first discovered by the user with the
help of metadata and provenance catalogues.
During the data processing phase, data replica information may be entered in
replica catalogues (which contain metadata about the data location), data may
be transferred between storage and execution sites, and software components
may be staged to the execution sites as well. While data are being processed,
provenance information can be automatically captured and then stored in a
provenance store. The resulting derived data products (both intermediate and
final) can also be stored in an archive, with metadata about them stored in a
metadata catalogue and location information stored in a replica catalogue.
Data Provenance is also addressed in W3C DCAT Metadata model [REF-11].
dcat:CatalogRecord describes a dataset entry in the catalog. It is used to
capture provenance information about dataset entries in a catalog. This class
is optional and not all catalogs will use it. It exists for catalogs where a
distinction is made between metadata about a dataset and metadata about the
dataset's entry in the catalog. For example, the publication date property of
the dataset reflects the date when the information was originally made
available by the publishing agency, while the publication date of the catalog
record is the date when the dataset was added to the catalog. In cases where
both dates differ, or where only the latter is known, the publication date
should only be specified for the catalog record. The W3C PROV Ontology
[prov-o] allows describing further provenance information, such as the details
of the process and the agent involved in a particular change to a dataset. A
detailed specification of data provenance is also among the additional
requirements for the DCAT-AP
specification effort [REF-12].
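As a concrete illustration of the PROV model, the hedged sketch below uses rdflib to express that a derived product was generated by a processing run, used certain input scenes, and is attributed to an organization. All URIs are hypothetical placeholders, not actual DataBio identifiers.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("https://example.org/databio/")   # hypothetical project namespace

g = Graph()
mosaic = EX["s2-mosaic"]        # derived data product (entity)
scenes = EX["s2-l1c-scenes"]    # input data (entity)
run = EX["mosaicking-run-42"]   # processing step (activity)
org = EX["processing-centre"]   # responsible organization (agent)

g.add((mosaic, RDF.type, PROV.Entity))
g.add((scenes, RDF.type, PROV.Entity))
g.add((run, RDF.type, PROV.Activity))
g.add((org, RDF.type, PROV.Agent))
g.add((mosaic, PROV.wasGeneratedBy, run))     # which process produced the product
g.add((run, PROV.used, scenes))               # which inputs the process consumed
g.add((mosaic, PROV.wasDerivedFrom, scenes))  # derivation link between entities
g.add((mosaic, PROV.wasAttributedTo, org))    # who is responsible for the product

print(g.serialize(format="turtle"))
```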
## 3.2 Data accessibility
Through DataBio experiments with a large number of tools and technologies
identified in WP4 and WP5, a common data access pattern shall be developed.
Ideally, this pattern is based on internationally adopted standards, such as
OGC WFS for feature data, OGC WCS for coverage data, OGC WMS for maps, or OGC
SOS for sensor data.
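As an example of such a standardized access pattern, the sketch below issues a plain WFS 2.0 GetFeature request over HTTP. The endpoint, feature type name and GeoJSON output format are assumptions for illustration; actual DataBio services may differ.

```python
import requests

ENDPOINT = "https://example.org/geoserver/wfs"   # hypothetical WFS endpoint

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "databio:land_parcels",   # hypothetical feature type name
    "outputFormat": "application/json",    # GeoJSON, where the server supports it
    "count": 100,                          # limit the response size
}
resp = requests.get(ENDPOINT, params=params, timeout=60)
resp.raise_for_status()
for feature in resp.json()["features"]:
    print(feature["id"], feature["properties"])
```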
### 3.2.1 Open data and closed data
Everyone from citizens to civil servants, researchers and entrepreneurs can
benefit from open data. In this respect, the aim is to make effective use of
Open Data. This data is already available in public domains and is not within
the control of the DataBio project.
All data rests on a scale between closed and open because there are variances
in how information is shared between the two points in the continuum. Closed
data might be shared with specific individuals within a corporate setting.
Open data may require attribution to the contributing source, but still be
completely available to the end user.
Generally, open data differs from closed data in three key ways 8 :
1. Open data is accessible, usually via a data warehouse on the internet.
2. It is available in a readable format.
3. It’s licensed as open source, which allows anyone to use the data or share it for non-commercial or commercial gain.
Closed data restricts access to the information in several potential ways:
1. It is only available to certain individuals within an organization.
2. The data is patented or proprietary.
3. The data is semi-restricted to certain groups.
4. The data is open to the public only through a licence fee or other prerequisite.
5. The data is difficult to access, such as paper records that haven’t been digitized.
Typical examples of closed data include information that requires a security
clearance; health-related information collected by a hospital or insurance
carrier; or, on a smaller scale, one's own personal tax returns.
There are also other datasets used for the pilots, like e.g. cartography, 3D
or land use data but those are stored in databases which are not available
through the Open Data portals. Once the use case specification and
requirements have been completed these data may also be needed for the
processing and visualisation within the DataBio applications. However, this
data – in its raw format – may not be made available to external stakeholders
for further use due to licensing and/or privacy issues. Therefore, at this
stage, the data management plan will not cover these datasets.
### 3.2.2 Data access mechanisms, software and tools
Data access is the process of entering a database to store or retrieve data.
Data access tools are end-user oriented tools that allow users to build
structured query language (SQL) queries by pointing and clicking on a list of
tables and fields in the data warehouse.
Throughout computing history, different methods and languages have been used
for data access, and these varied depending on the type of data warehouse. A
data warehouse contains a rich repository of data pertaining to organizational
business rules, policies, events and histories. Warehouses store data in
different and incompatible formats, so several data access tools have been
developed to overcome the resulting data incompatibilities.
Recent advancements in information technology have brought about new and
innovative software applications that have more standardized languages,
formats, and methods to serve as interfaces among different data formats. Some
of the more popular standards include SQL, ODBC, ADO.NET, JDBC, XML, XPath,
XQuery and Web Services.
### 3.2.3 Big data warehouse architectures and database management systems
Depending on the project needs, there are different possibilities to store
data:
##### 3.2.3.1 Relational Database
This is a digital database whose organization is based on the relational model
of data. The various software systems used to maintain relational databases
are known as a relational database management system (RDBMS). Virtually all
relational database systems use SQL (Structured Query Language) as the
language for querying and maintaining the database. A relational database has
the important advantage of being easy to extend. After the original database
creation, a new data category can be added without requiring that all existing
applications be modified.
This model organizes data into one or more tables (or "relations") of columns
and rows, with a unique key identifying each row. Rows are also called records
or tuples. Generally, each table/relation represents one "entity type" (such
as customer or product). The rows represent instances of that type of entity
and the columns representing values attributed to that instance.
The definition of a relational database results in a table of metadata or
formal descriptions of the tables, columns, domains, and constraints.
When creating a relational database, the domain of possible values can be
defined for a data column, and further constraints that may apply to that data
value can be described. For example, a domain of possible customers could
allow up to ten customer names, while one table might be constrained to allow
only three of these customer names to be specified.
An example of a relational database management system is the Microsoft SQL
Server, developed by Microsoft. As a database server, it is a software product
with the primary function of storing and retrieving data as requested by other
software applications—which may run either on the same computer or on another
computer across a network (including the Internet). Microsoft makes SQL Server
available in multiple editions, with different feature sets and targeting
different users.
_PostgreSQL – for specific domains_ : PostgreSQL, often simply Postgres, is an
object-relational database management system (ORDBMS) with an emphasis on
extensibility and standards compliance. As a database server, its primary
functions are to store data securely and return that data in response to
requests from other software applications. It can handle workloads ranging
from small single-machine applications to large Internet-facing applications
(or for data warehousing) with many concurrent users; on macOS Server,
PostgreSQL is the default database. It is also available for Microsoft Windows
and Linux.
PostgreSQL is developed by the PostgreSQL Global Development Group, a diverse
group of many companies and individual contributors. It is free and open-
source, released under the terms of the PostgreSQL License, a permissive
software license. Furthermore, it is ACID-compliant and transactional.
PostgreSQL has updatable views and materialized views, triggers and foreign
keys; it supports functions and stored procedures, and offers other
extensibility features.
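To illustrate the relational approach for sensor time series, the following hedged sketch creates a table with a unique key per row and inserts one reading using the psycopg2 driver. The connection parameters, table and column names are placeholders, not DataBio configuration.

```python
import psycopg2  # PostgreSQL driver; connection details are placeholders

conn = psycopg2.connect(host="localhost", dbname="databio",
                        user="databio", password="secret")
with conn, conn.cursor() as cur:
    # One table per entity type; a unique key identifies each row.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sensor_reading (
            id          BIGSERIAL PRIMARY KEY,
            station_id  TEXT        NOT NULL,
            observed_at TIMESTAMPTZ NOT NULL,
            air_temp_c  REAL,
            UNIQUE (station_id, observed_at)
        )
    """)
    cur.execute(
        "INSERT INTO sensor_reading (station_id, observed_at, air_temp_c) "
        "VALUES (%s, %s, %s)",
        ("atmo-042", "2017-06-01T10:20:00+00:00", 21.4),
    )
conn.close()
```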
##### 3.2.3.2 Big Data storage solutions
A NoSQL (originally referring to "non-SQL", "non-relational" or "not only
SQL") database provides a mechanism for storage and retrieval of data which is
modeled in means other than the tabular relations used in relational
databases. Such databases have existed since the late 1960s, but did not
obtain the "NoSQL" moniker until a surge of popularity in the early
twenty-first century, triggered by the needs of Web 2.0 companies such as
Facebook, Google, and Amazon.com. NoSQL databases are increasingly used in big
data and real-time web applications. NoSQL systems are also sometimes called
"Not only SQL" to emphasize that they may support SQL-like query languages.
Motivations for this approach include: simplicity of design, simpler
"horizontal" scaling to clusters of machines (which is a problem for
relational databases), and finer control over availability. The data
structures used by NoSQL databases (e.g. key-value, wide column, graph, or
document) are different from those used by default in relational databases,
making some operations faster in NoSQL. The particular suitability of a given
NoSQL database depends on the problem it must solve. Sometimes the data
structures used by NoSQL databases are also viewed as "more flexible" than
relational database tables.
_MongoDB_ : MongoDB (from “humongous”) is a free and open-source
cross-platform document-oriented database program. Classified as a NoSQL
database program, MongoDB uses JSON-like documents with schemas. MongoDB is
developed by MongoDB Inc. and is published under a combination of the GNU
Affero General Public License and the Apache License.
MongoDB supports field queries, range queries and regular-expression searches.
Queries can return specific fields of documents and can also include
user-defined JavaScript functions. Queries can also be configured to return a random
sample of results of a given size. MongoDB can be used as a file system with
load balancing and data replication features over multiple machines for
storing files. This function, called Grid File System, is included with
MongoDB drivers. MongoDB exposes functions for file manipulation and content
to developers. GridFS is used in plugins for NGINX and lighttpd. GridFS
divides a file into parts, or chunks, and stores each of those chunks as a
separate document.
Based on MongoDB (but not restricted to it) is _GeoRocket_ , developed by
Fraunhofer IGD. It provides high-performance data storage and is
schema-agnostic and format-preserving. For more information please refer to
D4.1, which describes the components applied in the DataBio project.
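The sketch below illustrates the document model and GridFS chunked file storage with the pymongo driver; the connection string, collection name and file are placeholders, not DataBio configuration.

```python
import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
db = client["databio"]

# Schema-less, JSON-like document with a user-defined structure:
db.readings.insert_one({"station": "atmo-042", "t_air": 21.4, "tags": ["pilot-A1.1"]})
print(db.readings.find_one({"station": "atmo-042"}))

# GridFS splits a large file (e.g. a drone ortho-mosaic) into chunks,
# each stored as a separate document:
fs = gridfs.GridFS(db)
with open("ortho_mosaic.tif", "rb") as f:   # hypothetical mosaic file
    file_id = fs.put(f, filename="ortho_mosaic.tif")
print(fs.get(file_id).filename)
```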
## 3.3 Data interoperability
Data can be made available in many different formats implementing different
information models. The heterogeneity of these models reduces the level of
interoperability that can be achieved. In principle, the combination of a
standardized data access interface, a standardized transport protocol, and a
standardized data model ensure seamless integration of data across platforms,
tools, domains, or communities.
When the amount of data grows, mechanisms have to be explored to ensure
interoperability while handling large volumes of data. Currently, the amount
of data can still be handled using OGC models and data exchange services. We
will need to review this element during the course of the project. For now,
data interoperability is envisioned to be ensured through compliance with
internationally adopted standards.
Eventually, interoperability takes different forms when applied in various
“disciplinary” settings. The following figure illustrates that concept
(source: Wyborn 2017).
_Figure 3: The disciplinary data integration platform: where do you sit?
(source: Wyborn)_
The intra-disciplinary type remains within a single discipline. The level of
standardization needs to cover the discipline needs, but little attention is
usually paid to cross-discipline standards. The multi-disciplinary situation
has many people from different domains working together, but eventually they
all remain within their silos and data exchange is limited to the bare
minimum.
The cross-disciplinary setting is what we are experiencing at the beginning of
DataBio. All disciplines are interfacing and reformatting their data to make
it fit. The model works as long as data exchange is minor, but does not scale,
as it requires bilateral agreements between various parties. The
interdisciplinary approach is targeted in DataBio. The goal here is to adhere
to a minimum set of standards. Ideally, the specific characteristics are
standardized between all partners upfront. This model adds minimum overhead to
all parties, as a single mapping needs to be implemented per party (or, even
better, the new model is used natively from now on). The transdisciplinary
approach starts with data already provided as linked data with links across
the various disciplines, well-defined vocabularies, and a set of mapping rules
to ensure usability of data generated in arbitrary disciplines.
### 3.3.1 Interoperability mechanisms
Key to interoperable data exchange are standardized interfaces. Currently, the
number of data processing and exchange tools is extremely large. We expect a
consolidation of the number of tools during the first 15 months of the
project. We will revise the requirements set by the various pilots and the
data sets made available regularly to ensure that proper recommendations can
be given at any time.
### 3.3.2 Inter-discipline interoperability and ontologies
A key element for interoperability within and across disciplines is shared
semantics, but the Semantic Web is still in its infancy and it is not clear to
what extent it will become widely accepted within data-intensive communities
in the near future. It requires graph structures for data and/or metadata and
well-defined vocabularies and ontologies, and it still lacks the tools
necessary to get DataBio data operational within a reasonable amount of time.
Therefore, at this stage it is mainly recommended to observe the topic of
vocabularies and ontologies, but to concentrate on initial base vocabularies
and their governance to ensure that at least base parameters are well defined.
## 3.4 Promoting data reuse
The reuse of data is a key component of FAIR. It ensures that data can be
reused for purposes other than those it was initially created for. This reuse
improves the cost balance of the initial data production and allows cross-
fertilization across communities. DataBio will advertise all the data produced
to ensure that they are known to a wider audience. In combination with
standardized models and interfaces as described above, and complemented with
metadata and a catalogue system that allows proper discovery, DataBio data can
serve as valuable input outside of the project.
At this stage, it is not clear what licensing models need to be applied to
the various data products produced in DataBio. Generally, the focus shall be
on public-domain attribution and open licenses that maximize reusability in
other contexts. All data products produced by DataBio will be reviewed against
the FAIR principles once a year by the data-producing organization. On the
other hand, DataBio is open to any third-party data and process provisioning. Data
data cannot be integrated in external processes, as the level of uncertainty
of the remote processes becomes undefined. DataBio will review its data
products for quality information provided as part of the metadata. Currently,
ISO quality flags are envisioned to be used.
# Data management support
## 4.1 FAIR data costs
The DataBio consortium will handle both the open data and data with restricted
access. These data will be used by the project and the project pilots to
demonstrate the power of big data. These data will be published through the
DataBio infrastructure.
The current list of datasets and their details are described in Appendix A.
All data are either open data or data with restricted access provided for free
to the consortium partners for project purposes. DataBio does not foresee to
purchase any data.
The consortium has the knowledge and tools to make data FAIR, i.e. findable,
accessible, interoperable and reusable. To make data FAIR is one of the
project objectives and appropriate resources were allocated by each partner to
cover costs for data harmonisation, integration and publication.
The DataBio project has allocated appropriate resources to the sustainability
of the project results. This includes the sustainability of FAIR data that are
in the scope of the project.
To satisfy the dataset reusability requirement, DataBio anticipates several
strategies for data storage and preservation. The dataset storage and
preservation plan will include, but not be limited to, disk drives,
solid-state drives, in-memory storage and off-premises storage. Insofar as
security concerns are not an issue, DataBio partners will be encouraged to
store data in publicly available certified data repositories.
## 4.2 Big data managers
Managing big data also requires a specific structure or role system, i.e. the
types of people who manage or use big data in specific ways. The following
sections describe the team structures for big data management in DataBio.
DataBio will employ a two-layer approach for the management of the data used.
On the first layer, the management of data provided in any of the
participating institutions is done locally. On the second layer, data used in
the context of DataBio and needed in the context of data exchange or
integration across organizations will be subject to the methodologies
described within this document. These are enforced by the roles described
below.
### 4.2.1 Project manager
DataBio includes a diverse group of talented professionals who have to be
led. Besides the complex pilot-driven management structure, Intrasoft can be
called the main project manager.
### 4.2.2 Business Analysts
Business analysts are business-oriented domain experts who are comfortable
with data handling. They have deep insights into business requirements and
logic and make sure that big data applications and platforms can support them.
Business analysts are the connection between “non-technical” business users
and technical developers. This includes techno-economic analysis as well
as advanced visualisation services. DataBio has five business analysts from
five different organizations: Lesprojekt, ATOS, CIAOTECH, IBM and CREA.
### 4.2.3 Data Scientists
Data scientists represent the data and analysis experts within the DataBio
consortium. They are able to turn raw data into purified insights and value
with data science methods, techniques and tools. They have strong programming
skills and can handle big data as well as linked data (incl. metadata).
Furthermore, they are able to identify datasets for different requirements and
develop solutions with regard to common standards. They are also able to
visualise the results and findings eloquently. Within the DataBio consortium,
the following partners are data scientists: Lesprojekt, UWB, Fraunhofer IGD,
SINTEF, InfAI, INNOVATION ENGINEERING SRL, OGC, VITO.
##### 4.2.3.1 Data Scientists: Machine Learning Experts
One of the most important parts of DataBio is making sense and value of data
in different bioeconomic sectors. In order to do so, machine learning methods,
techniques and tools are necessary to handle the huge amounts of data. The
DataBio project has several partners which are capable machine learning
experts with different specialisations. These are: PSNC, InfAI, INNOVATION
ENGINEERING SRL, VTT, IBM, CREA, DTU, CSEM, EXUS, Terrasigna and CERTH.
### 4.2.4 Data Engineer / Architect
Data engineers or architects are data professionals who prepare the big data
to be ready for analysis. This includes data discovery, data integration, data
processing (and pre-processing), extraction and exchange, as well as quality
control. Furthermore, they focus on design and architecture. DataBio has
thirteen partners who fulfil this important role: UWB, ATOS, SpaceBel, VITO,
IBM, InfAI, MHG, CREA, e-GEOS, DTU, Cybernetica, CERTH and Rikola.
### 4.2.5 Platform architects
The data platform and its architecture are among the most important parts of
DataBio. In order to ensure a valid platform design, systems integration and
platform development, highly experienced platform architects are needed. This
role will be taken by Intrasoft, ATOS, Fraunhofer IGD, SINTEF and VTT.
### 4.2.6 IT/Operation manager
Some of the pilots will be very processing-intensive, which requires a very good infrastructure. To provide and manage this infrastructure, dedicated operation managers are needed. This function is fulfilled by PSNC and Softeam.
### 4.2.7 Consultant
Big data consultants are responsible for support, guidance and help throughout all design and implementation phases. This requires deep knowledge of and practice in designing big data solutions, as well as developing data pipelines that leverage structured and unstructured data from multiple sources. The DataBio consortium has several partners who fulfil this role, including SpaceBel, CIAOTECH, InfAI, FMI, Federunacoma, University of St. Gallen, CITOLIVA and OGC.
### 4.2.8 Business User
Business users are the direct (business) beneficiaries of the developed DataBio solutions. Furthermore, they play an important role in specifying detailed domain requirements and implementing the solutions. These partners are TRAGSA, Neuropublic, Finnish Forest Centre, MHG, LIMETRI, Kings Bay, Eros, Ervik & Saevik, Liegruppen Fiskeri, Norges Sildesalgslag SA, GAIA, MEEO, Echebastar, Novamont, Rikola, UPV/EHU, ZETOR and CAC.
### 4.2.9 Pilot experts
Domain experts are needed to specify and prioritize requirements, manage the different pilots, find synergies and connect the various experts within each pilot. These are Lesprojekt, FMI, VTT, SINTEF, Finnish Forest Centre and AZTI.
_Figure 4: DataBio’s data managers_
# Data security
## 5.1 Introduction
In order to address data security properly, one has to identify the various phases of the data lifecycle, from creation through use, sharing and archiving to deletion. Handling project data securely throughout their lifecycle lays the foundation of a sensitive-data protection strategy. In this context, the project consortium will determine specific security controls to apply in each phase, evaluating its level of compliance during the course of the project. The data lifecycle phases are summarized as follows:
1. Phase 1: Create
This first phase includes the creation of structured or unstructured (raw)
data. For the needs of the DataBio project, those sensitive data are
classified in the following categories: a) **Enterprise Data** (commercially
sensitive data), b) **Personal Data** (personal sensitive data) and c) **other
data** that do not fall into either of the previous categories. For enterprise data in particular, security classification already occurs at the creation phase, based on an enterprise data security policy.
2. Phase 2: Store
Once data is created and included in a file, it is stored somewhere. It must be ensured that the stored data is protected and that the necessary data security controls have been implemented, so as to minimize the risk of information leaks and ensure effective data privacy. More information about this phase can be found in section 5.2 on **data recovery** and section 5.3 on **privacy and sensitive data management**.
3. Phase 3: Use
During this phase, when data is viewed, processed, modified and saved, security controls are applied directly to the data, with a focus on monitoring user activity and preventing data leaks.
4. Phase 4: Share
Data is constantly being shared between employees, customers and partners,
necessitating a strategy that continuously monitors **data stores** and users.
Data move among a variety of public and private storage locations,
applications and operating environments, and are accessed by various data
owners from different devices and platforms. That can happen at any stage of
the data security lifecycle, which is why it’s important to apply the right
security controls at the right time.
5. Phase 5: Archive
Data that leaves active use but still needs to be available should be securely archived in appropriate storage, normally of low cost and performance, sometimes offline. This may also cover version control, where older versions of original (raw) data files and data-processing programs are maintained in archive storage as needed. These archives can be brought back online within a reasonable timeframe, ensuring that there is no detrimental effect from data being lost or corrupted.
6. Phase 6: Destroy
Data that is no longer needed should be deleted securely so as to avoid any data leakage.
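To make the phase-by-phase approach concrete, the following minimal Python sketch (illustrative only; the phase names and control lists are assumptions rather than project-mandated definitions) represents the mapping from lifecycle phase to planned controls as a simple data structure that compliance checks could iterate over:

```python
# Illustrative only: phase names and control lists are assumptions,
# not project-mandated definitions.
LIFECYCLE_CONTROLS = {
    "create":  ["classify data (enterprise / personal / other)"],
    "store":   ["encryption at rest", "access control", "backups"],
    "use":     ["user-activity monitoring", "data-leak prevention"],
    "share":   ["monitoring of data stores and users", "transport encryption"],
    "archive": ["low-cost secure storage", "version control of raw data"],
    "destroy": ["secure deletion to prevent leakage"],
}


def controls_for(phase: str) -> list[str]:
    """Return the security controls planned for a given lifecycle phase."""
    return LIFECYCLE_CONTROLS.get(phase.lower(), [])
```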
## 5.2 Data recovery
A data recovery strategy (also called a disaster recovery plan) is not only a plan but also an ongoing process of minimizing the risk of data loss that can result from various unforeseen events.
Since DataBio is a project dealing with Big Data scenarios, data recovery here focuses mostly on the management procedures of data centers that store and process significant amounts of data. The disasters that can occur fall into two categories:
* Natural disasters (floods, hurricanes, tornadoes or earthquakes): because they cannot be avoided, it is only possible to minimize their effects on IT infrastructure (e.g. through distributed backups)
* Man-made disasters (infrastructure failures, software bugs, hacker attacks): besides minimizing their effects, it is possible to prevent them in different ways (regular software updates, good active protection mechanisms, regular testing procedures)

The most important elements of a data recovery plan are:
* Backup management: well-designed automatic procedures for regularly storing copies of datasets on separate machines or even in geographically distributed places
* Replication of data to an off-site location, which overcomes the need to restore the data (only the systems then need to be restored or synchronized), often making use of storage area network (SAN) technology
* Private Cloud solutions that replicate the management data (VMs, Templates and disks) into the storage domains that are part of the private cloud setup.
* Hybrid Cloud solutions that replicate both on-site and to off-site data centers. These solutions provide the ability to instantly fail-over to local on-site hardware, but in the event of a physical disaster, servers can be brought up in the cloud data centers as well.
* The use of high availability systems which keep both the data and system replicated off-site, enabling continuous access to systems and data, even after a disaster (often associated with cloud storage)
Several partners in the project are infrastructure providers. They ensure high
quality in terms of reliability and scalability.
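As a concrete illustration of the backup-management element listed above, the following is a minimal Python sketch that copies a dataset into a timestamped backup directory (e.g. an off-site mount) and verifies the copy; the paths, timestamp layout and use of SHA-256 verification are assumptions, and production setups would rely on dedicated tooling such as rsync or SAN-level replication:

```python
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path


def _sha256(path: Path) -> str:
    """Compute a SHA-256 checksum of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def backup_dataset(src: Path, backup_root: Path) -> Path:
    """Copy a dataset into a timestamped backup directory and verify the copy."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = backup_root / stamp / src.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)  # copy2 preserves file metadata
    if _sha256(src) != _sha256(dest):
        raise IOError(f"Checksum mismatch for backup of {src}")
    return dest
```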
## 5.3 Privacy and sensitive data management
### 5.3.1 Introduction
With regard to privacy and sensitive data management, it is confirmed that these activities will be rigorously implemented in compliance with the privacy and data collection rules and regulations as applied nationally and in the EU, as well as with the H2020 rules. The next sections provide more specific information regarding those activities, rules and measures, based on the classification of data made in the introduction of this section (5.1).
### 5.3.2 Enterprise Data (commercial sensitive data)
This category includes the (raw) data coming from specific sensor nodes and similar data management systems and sources of the various project partners in each pilot case. It also includes data about technologies and other assets protected by IPR, which are considered highly commercially sensitive and belong to the partner that provides them for the various research and pilot activities within the DataBio project. Therefore, access to those data will be controlled, and exchanges normally take place between the specific end users and partners involved in their use and management within each pilot case for DataBio-related activities.
In line with the project GA and CA, each partner who provides or otherwise makes shared information available to any other project partner represents that: (i)
it has the authority to disclose this shared information, (ii) where legally
required and relevant, it has obtained appropriate informed consents from all
individuals involved, or from any other applicable institution, all in
compliance with applicable regulations; and (iii) there is no restriction in
place that would prevent any such other project partner from using this shared
information for the purpose of DataBio project and the exploitation thereof.
The above-mentioned rules also apply to any new data stemming from project activities. These data will also be anonymised and protected, and only under the above rules will our partners be able to make data available to external industry stakeholders for their own purposes. Related publications will be released and disseminated through the project dissemination and exploitation channels to make these parties aware of the project and of the appropriate means of access to any data (see Appendix A for DataBio-specific data).
On a technical level, data protected by IPR are often accessed as a service, with specific access rights granted under specific terms. Alternatively, they are shared in encrypted or similarly protected form, with the keys provided under specific terms.
### 5.3.3 Personal Data
According to the Grant Agreement, it has been agreed by all partners that any
Background, Results, Confidential Information and/or any and all data and/or
information that is provided, disclosed or otherwise made available between
the Parties **shall not include personal data** . Accordingly, each Party
agreed that it will take all necessary steps to ensure that all Personal Data
is removed from the Shared Information, made illegible, or otherwise made
inaccessible (i.e. de-identify) to the other Parties prior to providing the
Shared Information.
Therefore, no personal sensitive data are included in data exchanged between partners within DataBio. Where data created within project activities, e.g. some pilot activities, could initially involve personal and/or sensitive data from human participants (such as location and ID), DataBio will apply specific security measures for informed consent and data protection, in line with the legislation and regulations in force in the countries where the research will be carried out; the rules most relevant to the project are the following:
* The Charter of Fundamental Rights of the EU, specifically the article concerning the protection of personal data
* Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.
Regarding the procedure that is required in order to be able to participate in
any DataBio activities, we foresee that all potential participants will have
to read and sign an informed consent form before starting the participation.
This form aims to fully inform the participants about the study procedure and goals, guaranteeing that they have the basic information needed to decide whether or not to participate in the project activity. It
shall include a summary and schedule of the study, the objectives and
descriptions of the DataBio system and its components. All participants have
the right to receive a copy of the documents of this form. Participants will
receive a generic user ID to identify them in the system and to anonymise
their identities. No full names will be stored anywhere electronically. All
gathered personal data shall be password protected and encrypted. Users’
personal data will be safeguarded from other people not involved in the
project. No adults unable to give informed consent will be involved.
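As an illustration of the measures above, the following minimal Python sketch assigns each participant a generic, non-identifying user ID and encrypts the name-to-ID mapping so that no full names are stored in clear text; it is an assumption-laden illustration (the third-party `cryptography` package, the field layout and the key handling shown are not project specifications):

```python
import json
import uuid

from cryptography.fernet import Fernet  # third-party package (assumed available)


def assign_generic_ids(names: list[str]) -> dict[str, str]:
    """Assign each participant a random, non-identifying user ID."""
    return {name: f"user-{uuid.uuid4().hex[:8]}" for name in names}


def encrypt_mapping(mapping: dict[str, str], key: bytes) -> bytes:
    """Encrypt the name-to-ID mapping so full names are never stored in clear."""
    return Fernet(key).encrypt(json.dumps(mapping).encode("utf-8"))


# Usage sketch: the key would be held only by the responsible pilot partner.
key = Fernet.generate_key()
ids = assign_generic_ids(["Alice Example", "Bob Example"])
encrypted_mapping = encrypt_mapping(ids, key)
```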
It should be stated that the protection of the privacy of participants is a
responsibility of all persons involved in research with human participants.
Privacy means that the participant can control access to personal information and decide who has access to the collected data in the future. Due to the principle of autonomy, the participants will be asked for
their agreement before private and personal information is collected. It will
be ensured that all persons involved in the project activities understand and
respect the requirement for confidentiality. The participants will be informed
about the confidentiality policy that is used in this research project.
## 5.4 General privacy concerns
Other privacy concerns will be addressed as follows:
* External experts: Any external experts that will be involved in the project shall be required to sign an appropriate non-disclosure agreement prior to participating in any project related meeting, decision or activity.
* Publications: Hints to or identifiable personal information of any participant in (scientific) publications should be omitted. It is avoided to reveal the identity of participants in research deliberately or inadvertently, without the expressed permission of the participants.
* Dissemination: Dissemination of data between partners. This relates to access to data, data formats, and methods of archiving (electronic and paper), including data handling, data analyses, and research communications. Access to private information will be granted only to DataBio partners for purposes of evaluation of the system and only in an anonymised form, i.e. any personally identifiable information such as name, phone number, location, address, etc. will be omitted.
* Protection: The lead project partner of every pilot case is responsible for the protection of the participants’ privacy throughout the whole project, including procedures such as communications, data exchange, presentation of findings, etc.
* Control: The responsible project partners are not allowed to circulate information without anonymisation. This means that only relevant attributes, i.e. gender, age, etc. are retained.
* Information: As already mentioned above, the protection of the confidentiality implies informing the participants about what may be done with their data (i.e. data sharing). Individuals that participate in any study must have the right to request and obtain free of charge information on his/her personal data subjected to processing, on the origin of such data and on their communication or intended communication.
# Ethical issues
In line with the Consortium’s commitment in the DATABIO proposal, the ethics and responsibility work in the project is guided by the principles of responsible research and innovation in the information society ( _http://renevonschomberg.wordpress.com/implementing-responsible-research-and-innovation/_ ) and by the guidelines of the European Group on Ethics ( _http://ec.europa.eu/bepa/european-group-ethics_ ).
Since the research activities do not include any human trial, animal
intervention or acquisition of tissues thereof, there are no ethical concerns.
Remote sensing of fields, forests or fish stocks does not cause any ethical
concerns.
The Partners agreed that any Background, Results, Confidential Information
and/or any and all data and/or information that is provided, disclosed or
otherwise made available between the Partners during the implementation of the
Action and/or for any Exploitation activities (“Shared Information”), shall
not include personal data as defined by Article 2, Section (a) of the Data
Protection Directive (95/46/EEC) (hereinafter referred to as “ **Personal
Data** ”). Accordingly each Partner agrees that it will take all necessary
steps to ensure that all **Personal Data** is removed from the Shared
Information, made illegible, or otherwise made inaccessible (i.e. de-identify)
to any other Party prior to providing the Shared Information to such other
Party.
Each Partner who provides or otherwise makes available to any other Partner
Shared Information (“Contributor”) represents that: (i) it has the authority
to disclose the Shared Information, if any, which it provides to the Partner;
(ii) where legally required and relevant, it has obtained appropriate informed
consents from all the individuals involved, or from any other applicable
institution, all in compliance with applicable regulations; and (iii) there is
no restriction in place that would prevent any such other Partner from using
the Shared Information for the purpose of the DATABIO Action and the
exploitation thereof.
Any Advisory Board member or external expert shall be required to sign an
appropriate nondisclosure agreement prior to participating in any project
related meeting, decision or activity.
# Conclusions
The DataBio project is an EU lighthouse project with eighteen pilots running at hundreds of piloting sites across Europe in the three main bioeconomy sectors: agriculture, forestry, and fishery. During the lifecycle of the
DataBio project, big data will be collected consisting of very large data sets
including a wide range of data types from numerous sources. Most data will
come from farm and forestry machinery, fishing vessels, remote and proximal
sensors and imagery, and many other technologies. In this document, DataBio’s
D6.2 deliverable “Data Management Plan” was presented as the key element of
good data management. As DataBio participates in the European Commission H2020
Program’s extended ORD pilot, a DMP is required and as a consequence, DataBio
project’s datasets will be as open as possible and as closed as necessary, focusing on sound big data management in the interest of best research practice, in order to create value and to foster knowledge and technology from big datasets for the common good.
The data management life cycle for the data to be collected, processed and/or
generated by DataBio project was described, accounting also for the necessity
to make research data findable, accessible, interoperable and re-usable,
without compromising the security and ethics requirements. As a part of the
project implementation, DataBio’s partners will be encouraged to adhere to
sound data management to ensure that data are well-managed, archived and
preserved. This is the first version of the DataBio DMP; it will be updated over the course of the project as warranted by significant changes arising during project implementation or within the project consortium. The scheduled advanced
releases of this document will particularly include information on the
repositories where the data will be preserved, the security measures, and
several other FAIR aspects.
0040_COMPACT_740712.md
# 1\. Introduction
This deliverable will set out the first version of the data management plan
(DMP) for the COMPACT project. A DMP is a key element of good data management,
which is especially important in the COMPACT context, as all Horizon
2020-funded projects from 2017 onward are required to contain a DMP. 1
This DMP is based on the European Commission’s Guidelines on FAIR Data
Management in Horizon 2020 2 and the COMPACT Grant Agreement. 3
It reflects the consortium’s comprehensive approach towards data management.
It is a living document, which will be updated in months 18 and 24 (DMP
version 2, and the final version, respectively), due to the possible
significant changes, including but not limited to:
* Use of new data,
* Changes in consortium policies (e.g. new innovation potential, decision to file for a patent, etc.),
* Changes in consortium composition and external factors (e.g. new consortium members joining or existing members leaving).
This deliverable will contribute towards legal and ethical compliance
regarding data protection, alongside the Deliverable 2.5 ‘S.E.L.P. Framework’.
While the latter focuses specifically on legal and ethical aspects of
principles and minimum requirements of procedures, necessary for proper data
collection, this document will serve as a project management tool,
implementing those requirements in terms of data management.
In order to implement the open data principle, the DMP sets out the following
information:
* The handling of research data during and after the end of the project,
* What data will be collected, processed and/or generated,
* Which methodology and standards will be applied,
* Whether data will be shared/made open access and
* How data will be curated and preserved (including after the end of the project).
Sections 2 to 7 of this document will cover the different DMP components,
based on the outline suggested in the Guidelines. They are based on input from
the following partners: AIT, CINI, INOV and KUL, as indicated in the relevant
sections.
# 2\. Data summary
## 2.1. AIT
**What is the purpose of the data collection/generation and its relation to
the objectives of the project?**
For AIT the purpose of collecting user data is to understand user behaviour on
an analytical basis.
**What types and formats of data will the project generate/collect?**
AIT will record audio and video of test participants. In addition we will save
log-files within the prototypes. We will also collect data via online surveys.
**Will you re-use any existing data and how?**
AIT will not re-use any existing data.
**What is the origin of the data?**
AIT will collect data by observing users during technology interaction and
asking them (either in real time or via online surveys).
**What is the expected size of the data?**
1 TB (which will mainly be video recordings of end-user interaction behaviour)
**To whom might it be useful ('data utility')?**
Recordings of end-users (besides being the basis for end-user-studies in the
project) are – due to their heavy context dependence – not useful to third
parties. It would also create a privacy problem for end-users if the
recordings would be public. Hence they are closed.
## 2.2. CINI
**What is the purpose of the data collection/generation and its relation to
the objectives of the project?**
Within COMPACT Project, CINI is in charge of developing an advanced Security
Information and Event Management (SIEM) system endowing LPAs’ organisation
with real-time monitoring capabilities. SIEM services receive log files, which
represent records of the events occurring within an organization’s systems and
networks when a user attempts to authenticate into the system or a system
event occurs (such as starting a service or shutting down the system, etc.).
The security-relevant records in these log files are then analysed to investigate malicious activities. A specific alarm or event is generated for each detected attack.
**What types and formats of data will the project generate/collect?**
SIEM systems come with a number of adapters for receiving data/events from a
wide variety of sources, such as Operating System (OS) log files (in
proprietary or open formats) or Commercial Off The Shelf (COTS) products for
logical and physical security monitoring, including: Wireshark, Nessus, Nikto,
Snort, Ossec, Argus, Cain & Abel, OpenNMS, Nagios, CENTEROS, Ganglia,
Milestone, openHAB, IDenticard, FieldAware, and CIMPLICITY. In terms of data
generated, a format has not been defined yet. However, any standard structured format (such as, for example, JSON (JavaScript Object Notation)) can represent a valuable solution for the alarms/events generated by the SIEM system.
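Purely as an illustration, a hypothetical alarm serialized as JSON might look as follows; every field name in this Python sketch is an assumption, since the format has not been defined yet:

```python
import json
from datetime import datetime, timezone

# Hypothetical alarm for a detected brute-force attempt; all field names
# are assumptions, not a defined COMPACT schema.
alarm = {
    "event_id": "a1b2c3d4",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "source": "ossec",              # adapter that produced the raw event
    "category": "authentication",
    "severity": "high",
    "description": "Repeated failed logins for user 'admin'",
    "host": "lpa-gateway-01",
}
print(json.dumps(alarm, indent=2))
```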
**Will you re-use any existing data and how?**
It is not definitive in this phase, since the datasets are not currently
specified. However, in a first phase of the SIEM service implementation, we
foresee to use existing anonymized data present within the archives of some of
the LPAs involved in the project.
**What is the origin of the data?**
Data collected and analysed by the SIEM system will originate from testing activities carried out at the pilot sites (the LPAs involved in the project) and will rely on the collection and analysis of log files from the LPAs participating in the COMPACT project.
**What is the expected size of the data?**
At this stage of the project, it is not possible to predict the size of the data that will be processed, but we estimate that it will exceed a terabyte.
**To whom might it be useful ('data utility')?**
Data collected by the SIEM system can be useful to the other technical partners in charge of developing COMPACT’s tools and services, such as the risk assessment tool or the personalization of training courses for LPAs’ employees. The data could also be useful to other research groups working on similar research, as well as for testing alternative SIEM solutions.
## 2.3. INOV
**What is the purpose of the data collection/generation and its relation to
the objectives of the project?**
During the COMPACT project, INOV will collect data to test and demonstrate its
Business process intrusion detection system (BP-IDS). This data collection is
related with the project objective “SO3: Lower the entry barrier to timely
detection and reaction to cyber-threats”, and may occur during the tasks:
“Task 4.3 Threat intelligence and monitoring Component”; “Task 4.5 Integration
of solutions in a unified platform”; “Task 5.1 Validation and Demonstration
scenarios”; “Task 5.2 Trials Setup”; and “Task 5.3 Pilot execution and
demonstration”.
**What types and formats of data will the project generate/collect?**
It is not definitive in this phase, since the datasets are not currently
specified. However, it is expected that BP-IDS collects data from multiple
sources of data, such as: network traffic generated during the communications
of the monitored hosts; or by inspecting specific files present in the file-
system of the monitored hosts.
**Will you re-use any existing data and how?**
It is not definitive in this phase, since the datasets are not currently
specified. However, it is expected that all the collected data will be self-contained in the dataset used, rather than re-using existing data.
**What is the origin of the data?**
It has not been decided yet, the datasets need to be specified first in order
to respond to this question.
**What is the expected size of the data?**
It is difficult to estimate the size of the data at this stage, because it depends strongly on the network protocols and files being monitored.
**To whom might it be useful ('data utility')?**
The principal beneficiary of this dataset will be CMA, which will use the developed tools to monitor threats against its infrastructure. INOV will use it to adapt BP-IDS for LPAs. Besides INOV, this dataset might be useful to technical partners in the COMPACT project that require live data to adapt their technical solutions to LPA environments.
# 3\. FAIR data
Under Horizon 2020’s principle of open access to data, research data must be
FAIR: findable, accessible, interoperable and reusable. This will contribute
to the use of data in future research. 4
In order to be **Findable** :
* F1. (meta)data are assigned a globally unique and eternally persistent identifier.
* F2. data are described with rich metadata.
* F3. (meta)data are registered or indexed in a searchable resource.
* F4. metadata specify the data identifier.
In order to be **Accessible** :
* A.1. (meta)data are retrievable by their identifier using a standardized communications protocol.
* A1.1. the protocol is open, free, and universally implementable.
* A1.2. the protocol allows for an authentication and authorization procedure, where necessary.
* A2. metadata are accessible, even when the data are no longer available.
In order to be **Interoperable** :
* I1. (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation.
* I2. (meta)data use vocabularies that follow FAIR principles.
* I3. (meta)data include qualified references to other (meta)data.
In order to be **Re-usable** :
* R1. meta(data) have a plurality of accurate and relevant attributes.
* R1.1. (meta)data are released with a clear and accessible data usage license.
* R1.2. (meta)data are associated with their provenance.
* R1.3. (meta)data meet domain-relevant community standards.
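As an illustration of rich metadata (F2) with a plurality of relevant attributes (R1), the following Python sketch describes a hypothetical dataset using field names loosely inspired by DataCite/Dublin Core conventions; the DOI and all values are placeholders, not a mandated COMPACT schema:

```python
# All values below are placeholders, not a mandated COMPACT schema.
dataset_metadata = {
    "identifier": "10.5281/zenodo.0000000",  # placeholder DOI (F1, F4)
    "title": "Anonymised SIEM event logs from a COMPACT pilot site",
    "creators": ["CINI"],
    "publication_date": "2018-06-01",
    "keywords": ["SIEM", "security events", "local public administrations"],
    "license": "CC-BY-4.0",                  # clear usage licence (R1.1)
    "provenance": "Collected during COMPACT pilot execution",  # R1.2
}
```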
Answering the following questions will contribute towards compliance with the
FAIR data standards. The answers are provided in a comprehensive manner, not
on a yes/no basis.
## 3.1. Making data findable, including provisions for metadata
### 3.1.1. AIT
Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?
The scientific publications from AIT will end up with a DOI when accepted at a
conference.
**What naming conventions do you follow?**
None.
**Will search keywords be provided that optimize possibilities for re-use?**
Yes.
**Do you provide clear version numbers?**
Yes, versioning is already implemented in the document templates.
**What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and
how.**
AIT will use standard HCI classifiers from ACM.
### 3.1.2. CINI
**Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?**
Not defined yet. However, if the Zenodo Repository will be adopted for data
storing and sharing, the persistent identification through DOIs for sharing
research result will be adopted.
**What naming conventions do you follow?**
We refer to the “Glossary of Key Information Security Terms” provided by NIST
5 or, in turn, to the SANS Glossary of Security Terms 6 .
**Will search keywords be provided that optimize possibilities for re-use?**
At this stage we have not yet planned to provide keywords for optimizing re-
use.
**Do you provide clear version numbers?**
Not defined yet
**What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and
how.**
Not defined yet
### 3.1.3. INOV
Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?
Not defined yet
**What naming conventions do you follow?**
Not defined yet
**Will search keywords be provided that optimize possibilities for re-use?**
Not defined yet
**Do you provide clear version numbers?**
Not defined yet
**What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and
how.**
Not defined yet
## 3.2. Making data openly accessible
### 3.2.1. AIT
**What methods or software tools are needed to access the data?**
The COMPACT project strives to make data available in formats that can be read by free tools, so that people are not forced to buy software just to read through the COMPACT outcomes.
**Is documentation about the software needed to access the data included?**
No.
### 3.2.2. CINI
**Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.**
Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their data closed if relevant provisions are made in the
consortium agreement and are in line with the reasons for opting out. There
has been no opt-out in the COMPACT project yet.
Data produced and/or used in the project will be made openly available by default only after a pseudonymisation and/or anonymisation process, in order to prevent the data from being attributed to a specific person.
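A minimal sketch of such a pseudonymisation step is shown below; the field names and the salted-hashing approach are illustrative assumptions. Note that salted hashing alone amounts to pseudonymisation rather than full anonymisation, so further measures would be applied before open release:

```python
import hashlib

# Field names assumed for illustration; real records are not yet specified.
DIRECT_IDENTIFIERS = {"name", "phone", "address", "email"}


def pseudonymise_record(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the user ID with a salted hash,
    so that released records cannot easily be attributed to a person."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "user_id" in cleaned:
        digest = hashlib.sha256((salt + str(cleaned["user_id"])).encode("utf-8"))
        cleaned["user_id"] = digest.hexdigest()[:16]
    return cleaned
```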
**How will the data be made accessible (e.g. by deposition in a repository)?**
Data will be made accessible through a research data repository. The
consortium will take measures to enable third parties to access, mine,
exploit, reproduce, and disseminate the data free of charge.
**What methods or software tools are needed to access the data?**
The best candidate tool for data sharing – at the time of this writing – is
ZENODO, an OpenAIRE/CERN compliant repository. Zenodo builds and operates a
simple and innovative service that enables researchers, scientists, EU
projects and institutions to share, preserve and showcase multidisciplinary
research results (data and publications), that are not part of the existing
institutional or subject-based repositories of the research communities.
Zenodo enables researchers, scientists, EU projects and institutions to:

* easily share the long tail of small research results in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science;
* display the research results and receive credit by making the research results citable and integrating them into existing reporting lines to funding agencies like the European Commission;
* easily access and reuse shared research results.
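As a sketch of how a deposit to such a repository could be automated, the following assumes Zenodo’s documented REST deposit flow; the access token, file name and metadata shown are placeholders, not a tested COMPACT workflow:

```python
import requests  # third-party package (assumed available)

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "YOUR-ZENODO-ACCESS-TOKEN"  # placeholder personal access token

# 1. Create an empty deposition.
resp = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
resp.raise_for_status()
deposition = resp.json()

# 2. Upload a data file into the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("dataset.csv", "rb") as fp:
    upload = requests.put(
        f"{bucket_url}/dataset.csv", data=fp, params={"access_token": TOKEN}
    )
upload.raise_for_status()
```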
**Is documentation about the software needed to access the data included?**
Yes it is.
**Is it possible to include the relevant software (e.g. in open source
code)?**
It is possible, but not decided yet if open source code will be included.
**Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.**
The consortium plans to deposit data in an OpenAIRE compliant research data
repository.
**Have you explored appropriate arrangements with the identified repository?**
Not defined yet
**If there are restrictions on use, how will access be provided?**
Not defined yet
**Is there a need for a data access committee?**
Not defined yet
**Are there well described conditions for access (i.e. a machine-readable
license)?**
Not defined yet
**How will the identity of the person accessing the data be ascertained?**
Not defined yet
### 3.2.3. INOV
**Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.**
Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their data closed if relevant provisions are made in the
consortium agreement and are in line with the reasons for opting out. There
has been no opt-out in the COMPACT project yet.
Not defined yet
**How will the data be made accessible (e.g. by deposition in a repository)?**
Not defined yet
**What methods or software tools are needed to access the data?**
Not defined yet
**Is documentation about the software needed to access the data included?**
Not defined yet
**Is it possible to include the relevant software (e.g. in open source
code)?**
Not defined yet
**Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.**
Not defined yet
**Have you explored appropriate arrangements with the identified repository?**
Not defined yet
**If there are restrictions on use, how will access be provided?**
Not defined yet
**Is there a need for a data access committee?**
Not defined yet
**Are there well described conditions for access (i.e. a machine-readable
license)?**
Not defined yet
**How will the identity of the person accessing the data be ascertained?**
Not defined yet
## 3.3. Making data interoperable
### 3.3.1. AIT
**Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different
origins)?**
AIT will rely on XML when publishing HCI-patterns, which guarantees data
exchange with existing HCI-pattern providers.
**What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?**
PLML (pattern language mark-up language)
**Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?**
No.
**In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?**
Yes.
### 3.3.2. CINI
**Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different
origins)?**
Yes
**What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?**
Not defined yet
**Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?**
Not defined yet
**In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?**
Not defined yet
### 3.3.3. INOV
**Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different
origins)?**
Not defined yet
**What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?**
Not defined yet
**Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?**
Not defined yet
**In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?**
Not defined yet
## 3.4. Increase data re-use (through clarifying licenses)
### 3.4.1. CINI
**How will the data be licensed to permit the widest re-use possible?**
Not defined yet
**When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.**
At this stage of the project, it is not possible to predict when the data will be made available for re-use.
**Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the re-use of some data is
restricted, explain why.**
Anonymised data will be usable by third parties.
**How long is it intended that the data remains re-usable?**
We intend to store data and make it re-usable for an appropriate period of
time according to the key guidelines established by the following regulations:
Directive (EU) 2016/1148 of the European Parliament and of the Council of 6
July 2016 concerning measures for a high common level of security of network
and information systems across the Union (NIS Directive)
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27
April 2016 on the protection of natural persons with regard to the processing
of personal data and on the free movement of such data, and repealing
Directive 95/46/EC (General Data Protection Regulation)
**Are data quality assurance processes described?**
Not defined yet
### 3.4.2. INOV
**How will the data be licensed to permit the widest re-use possible?**
Not defined yet
**When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.**
Not defined yet
**Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the re-use of some data is
restricted, explain why.**
Not foreseeable at this stage
**How long is it intended that the data remains re-usable?**
For the duration of the project.
**Are data quality assurance processes described?**
Not defined yet
# 4\. Allocation of resources – the whole consortium
According to the Horizon 2020 rules, costs related to open access to research
data are eligible for reimbursement during the duration of the project under
the conditions defined in the COMPACT Grant Agreement, in particular Articles
6 and 6.2.D.3. 7 These are direct costs, related to subcontracting of
project tasks, such as subcontracting the open access to data.
**What are the costs for making data FAIR in your project?**
According to the budget, AIT has allocated EUR 10,000 for making data FAIR in the project.
**How will these be covered? Note that costs related to open access to
research data are eligible as part of the Horizon 2020 grant (if compliant
with the Grant Agreement conditions).**
Not defined yet
**Who will be responsible for data management in your project?**
A specific role has been foreseen in the project, the Data Controller (DC).
Salvatore D’Antonio from CINI has been appointed as Data Controller and will
be responsible for data management.
**Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?**
Not defined yet
# 5\. Data security
### 5.1.1. AIT
**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?**
Standard data at AIT is stored on state-of-the-art secured storage. For sensitive data, encrypted file stores are created on demand with strictly restricted access.
**Is the data safely stored in certified repositories for long term
preservation and curation?**
AIT runs regular backups on all data. This ensures preservation.
### 5.1.2. CINI
**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?**
Regarding security, all the data collected will be stored on a database only
accessible to authenticated users on the partner premises. Regarding the data
recovery, database backups will be stored on premises and only accessible to
CINI.
**Is the data safely stored in certified repositories for long term
preservation and curation?**
It is not definitive in this phase.
### 5.1.3. INOV
**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?**
Regarding security, all the data collected will be stored on a database only
accessible to authenticated users on the partner premises. Regarding the data
recovery, database backups will be stored on premises and only accessible to
INOV.
**Is the data safely stored in certified repositories for long term
preservation and curation?**
It is not definitive at this phase, but the collected data is not expected to be stored in a repository.
# 6\. Ethical and legal aspects
**Are there any ethical or legal issues that can have an impact on data
sharing? These can also be discussed in the context of the ethics review. If
relevant, include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).**
Two types of data will be used in the COMPACT research: personal data and
anonymised data.
Personal data is defined in the General Data Protection Regulation (GDPR) as
‘any information relating to an identified or identifiable natural person
(‘data subject’); an identifiable natural person is one who can be identified,
directly or indirectly, in particular by reference to an identifier such as a
name, an identification number, location data, an online identifier or to one
or more factors specific to the physical, physiological, genetic, mental,
economic, cultural or social identity of that natural person’.
When processing personal data, data protection legislation applies. Until May
25 th 2018, this is the Data Protection Directive (DPD) 8 and the relevant
legislation, transposing the DPD into national law, and after that date, the
General Data Protection Regulation (GDPR). 9 Deliverable D1.2 ‘S.E.L.P.
Management Plan’ sets out the management procedures, enabling the consortium
to comply with legal requirements. Ethics requirements are addressed in WP8,
Deliverables D8.1-8.3.
Anonymised data, on the other hand, are not subject to such requirements. 10
This is because once data have been successfully anonymised, their subjects
can no longer be (re-)identified. Therefore, there are no specific data
protection-relevant provisions in EU law, which hinder dissemination or
further use of anonymised data. Data must be anonymised in a manner that
absolutely prevents the data subject from being reidentified.
While there are no specific provisions in the Open Research Data Pilot requiring participants to anonymise data, the open research data from the COMPACT project will be anonymised before being made publicly available.
Anonymisation is a data processing operation, so the GDPR requirements apply
before and while it is being carried out, 11 especially the basic principles
such as data minimisation and purpose limitation. The procedure for carrying
out a GDPR-compliant anonymisation procedure is described in Deliverable D2.5,
‘S.E.L.P. Framework'.
Regarding possible intellectual property (IP) restrictions on the use of
research data, these are dealt with in the Consortium Agreement. Research data
qualifies as ‘results’, which are defined as any (tangible or intangible)
output of the action such as data, knowledge or information – whatever its
form or nature, whether it can be protected or not – that is generated in the
action, as well as any rights attached to it, including intellectual property
rights. Research results are owned by the partner which produced them.
Regarding access to such results, partners will conclude individual agreements
with end-users.
**Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?**
All participants will give their free and informed consent before any personal
data is obtained from them. In order to do so, they will be provided with
Informed Consent Forms and Information Sheets, which set out the purposes and
means of data collection and their relevance in the COMPACT project. They take
into account the principles of data minimisation and purpose limitation. Data
minimisation refers to processing on the data, which are adequate, limited and
necessary for the research purposes, including time limitation on storage and
amount of data. Purpose limitation means that processing will be carried out
for a specific, explicit and legitimate purpose, i.e. research for the
purposes of the COMPACT project, as well as storage for potential further
research, which is explained in the Informed Consent Form and Information
Sheet.
# 7\. Other
**Do you make use of other national/funder/sectorial/departmental procedures
for data management? If yes, which ones?**
Not defined yet (AIT, CINI, INOV).
0044_INNOPATHS_730403.md
# 1\. Introduction
This Data Management Plan presents a summary of the key data that will be used
and generated by the INNOPATHS project, and how this data will be managed to
ensure that it is FAIR – Findable, Accessible, Interoperable and Re-usable.
This is intended to be a ‘living’ document. The information presented below
will evolve and become more specific (or change) over time, as the INNOPATHS
project progresses and as details, practicalities and feedback from project
partners and key stakeholders emerges.
# 2\. Data Summary
The INNOPATHS project involves both primary and secondary data collection, and
the generation of new data.
Primary data collection is planned by activities under Tasks 1.3, 1.4, 1.5,
2.1, 2.2, 2.3, 2.4, 2.5, 3.1, 3.6 and 4.4. Such data collection will largely
take the form of semi-structured interviews with officials, analysts, technical
experts, academics and members of the INNOPATHS ‘Co-Design Group’ (CDG). The
collection, handling and storage of personal data will comply with the
procedures outlined in Deliverables 8.1 and 8.2.
Secondary data will be collected under a wide range of activities, tasks and
subtasks throughout the project. Such data will be largely sourced from the
scientific, government and grey literature and associated databases, along
with proprietary datasets. Specific examples of activities and secondary data
sources employed include:
* Key characteristics (e.g. cost, performance, spillover rates and associated uncertainty ranges) related to key technologies (both historic and characteristics projected for the future according to existing EU scenario, roadmap and transition studies), across the power, ICT, industry, buildings, transport and agriculture sectors, for the EU and Member States (T1.1 – T1.3, & T1.6)
* Energy-related investments by households, using data from the German Mobility
Panel (MOP), possibly in conjunction with the German Socio-Economic Panel
(SOEP) and the survey Mobility in Germany 2008. Analysis may also be extended
to Denmark or Norway, using similar datasets (T1.4).
* Effect of environmental legislation on sectoral environmental and economic performance, skill composition of the workforce, and wage premia to different occupational groups, with data required to conduct such analysis sourced from varied public databases, including WIOD, EU-KLEMS and ONET (T1.4).
* Current and historic flows of finance to both high- and low-carbon technologies, and financial instruments employed. Data will be sourced from both publically available and proprietary databases (e.g. IEA/OECD, BNEF, Thompson Reuters) (T1.5). These datasets are already available to project partners, and will not incur additional cost.
* Identify significant technological advancements and associated spillover effects for key energy-related technologies using patent analysis. Patent data will be purchased to allow this analysis, with a budget of €10,000 allocated for this purpose (T2.2).
* Industrial process technologies and their characteristics will be updated in the PRIMES industrial sector model, using the European Commission’s Best Available Technique database (T3.6)
* Effect on the labour market and industrial competitiveness from changes in energy prices (a proxy of climate policies) using French firm-level data. In particular, three data sources available from the French statistical office will be used: EACAI (on energy expenditures), DADS (on employment and skills) and FARE/FICUS (on productivity and balance sheets). Access to this data is given after presenting a project and paying an annual subscription fee (paid by project partner SPO, outside of the INNOPATHS project budget). These data are protected by the statistical confidentiality directly at the source, and access to is granted only remotely using fingerprint identification (T4.1).
* The effect of income (in)equality on the ability to transition to a low-carbon economy, using cross-country emissions data linked with the various freely available datasets on inequality (i.e. Milanovic all gini, the world wealth and income database (http://wid.world/) (T4.5).
* The impact of new low-carbon transition pathways on air pollution, security of supply resources, materials agriculture and land, drawing on publically-available databases including GAINS and EXIOPOL (T4.6).
Other primary and secondary data may be collected throughout the project, as
the need or opportunity arises. The purpose for such primary and secondary
data collection is multifaceted. Such data is required to (a) understand how
existing studies have treated different aspects of technology, the
environment, economy and society, to draw lessons for the analysis to be
conducted in INNOPATHS, (b) allow analysis to address specific questions posed
by the INNOPATHS project, using the most relevant and best data available, (c)
improve the quality of data inputs to, and the characterisation and detail of,
different models to be used to produce new low-carbon transition pathways, and
(d) to analyse the environmental and socio-economic impacts of such pathways,
beyond those assessed by the models that created them.
The data generated by the project in the tasks outlined above (along with
those from other activities and tasks) will be made available principally
through three publically accessible ‘online tools’:
* **Technology Matrix –** This will consist of graphical representations of underlying data on different parameters regarding different energy-related technologies. The user would be able to select different cost and performance (including environmental performance) variables for the x/y axes and to show the results in a wide range of units. There would also be the functionality for cross-country and cross-technology comparisons (when data at that level is available), a search function (by keywords), and a menu structure (or equivalent). It will also be possible to select future-looking cost and performance estimates for the period 2030-2050 (with uncertainty ranges). A perspective on ‘Technology Readiness Level’ (TRL) progression would also be provided.
* **Policy Assessment Framework -** This tool will be designed to exploit to the fullest the research carried out under WP2, which is dedicated to the creation of a framework, which will build on official EU criteria, to set out what we know about the outcomes of a range of energy policy instruments. The tool will allow the user to select a policy or a group of policies, and then visualize what the state of the art knowledge is on the different types of impact that policy has had. This knowledge will come from a wide range of literatures and countries and from the project itself. The indication of impacts would include the source of the impact, as well as the level of confidence in the results.
* **Low Carbon Pathways Platform -** This Platform will allow stakeholders to assess the socio-economic implications of medium- to long-term decarbonisation pathways, including their associated costs, benefit and risks, view and interact with the energy-economy pathways for the 21st century modeled in WP3, and extract the information relevant for their decision-making. They will be able to access the important variables (energy, emissions, technology deployment, prices) for the different models, sectors, scenarios, and countries (and combinations thereof). The user-friendly interface will provide quick introductions and how-to-recipes to facilitate information access by first-time users.
These tools, and the information and data contained and presented therein,
will be useful for a wide range of stakeholders, including academics and
researchers (not least those involved with INNOPATHS, in order to address the
key research questions posed by the project), national and local policy
makers, industry representatives, and the wider public.
A fourth online tool will also be developed and employed by the project:
* **Interactive Decarbonisation Simulator** – This tool has the goal of
giving policy makers, the general public and industry associates a more
intuitive understanding of different decarbonisation strategies, and what
different choices and targets in the various sectors and MS entails for the
other sectors and MS. It is a tool that is fully interactive; policymakers can
increase the mitigation effort in one area and decrease it in another, and the
simulator will give them a rough idea of the expected effect. The tool invites
users to interact with it - by designing a few different decarbonisation
strategies, one can quickly get an intuitive understanding of which measures
have a large effect and which don't, and where potential bottlenecks might
lie.
However, the Interactive Decarbonisation Simulator will draw on scenarios and
data generated by existing studies by project partner E3M, rather than those
generated by the INNOPATHS project. In addition, although it will be
accessible to the public through the INNOPATHS website alongside the three
online tools described above, it will be produced and hosted by E3M (with
project partner N&S applying INNOPATHS style and branding). As such, the
sections below refer only to the first three online tools described above
(except for Section 2.1, which is applicable to all four tools).
# 3\. FAIR Data
## 3.1 Making data findable & Metadata
### Discoverability and Metadata
For each online tool webpage/portal, we will adhere to search engine
optimisation (SEO) best practices. This means they will be easily discoverable
by search engines, using HTML metatags to describe the content of the tools in
a clear and easily understandable manner. The presence and location of the
online tools will be heavily advertised by INNOPATHS Deliverables,
publications and various dissemination channels (including the project flyer,
social media and events).
Once a user has navigated to the online tool, specific data may be found
through a variety of means, depending on the tool in question and the data of
interest. For example, the Policy Assessment Framework (PAF) tool will allow
users to access underlying information (e.g. effectiveness, cost-efficiency)
about different policy instruments and mixes at different levels of
granularity (at least two), including outcomes, level of confidence, and
source material. Different categories of data will be presented in different
ways. For example, the context for a particular policy may be displayed in the
form of a comment, the level of confidence on a particular outcome may be
shown as a number with a clear description (translation) of what that number
represents, and in some cases the ‘impact’ may be measured with a number or a
range.
It is likely that each online tool will have a keyword search, dropdown
filtering functionality and/or structured menus to help narrow down the user’s
selection and more easily navigate the tools to identify the data of interest.
Specific keyword and metadata terms and conventions will be defined as the
data is produced and online tools are created. However, for the Low Carbon
Pathways Platform (LCPP) the metadata will follow the Integrated Assessment
Modelling Consortium (IAMC) convention 1 for scenario and variable metadata,
thus containing information on the scenario definitions, the models used,
their version numbers, and the regional resolution.
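For orientation, the sketch below shows what IAMC-convention scenario records could look like in tabular form. Only the column structure (Model, Scenario, Region, Variable, Unit, years) reflects the convention referred to above; the model name, scenario label and values are invented.

```python
import pandas as pd

scenarios = pd.DataFrame(
    [
        ["MODEL-A 1.0", "2C_Default", "EU28", "Emissions|CO2", "Mt CO2/yr", 3200.0, 1100.0],
        ["MODEL-A 1.0", "2C_Default", "EU28", "Price|Carbon", "EUR/t CO2", 45.0, 220.0],
    ],
    columns=["Model", "Scenario", "Region", "Variable", "Unit", "2030", "2050"],
)

# Melting to long format (one row per data point) makes filtering by scenario,
# model, region or variable straightforward.
long_format = scenarios.melt(
    id_vars=["Model", "Scenario", "Region", "Variable", "Unit"],
    var_name="Year",
    value_name="Value",
)
print(long_format)
```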
### Versioning
To make it clear when an update has been made to either the functionality of
the online tools, or the data contained in and presented by the online tools,
a version number will be added to the relevant page, along with the date that
the tool was last updated.
## 3.2 Making data openly accessible
### Accessibility for Project Partners
During the construction of the online tools and their initial population with
data, all partner institutions will have access to the tools via a web
interface. In addition, for the Low Carbon Pathways Platform (LCPP), for the
purpose of convenient scenario data analysis, consolidated snapshots of
scenario data will be available to partner institutions. Selected stakeholders
(i.e. co-designers who will test and provide feedback on the design of the
tools) will also have access, under strict confidentiality arrangements.
The specific protocol for access and data use during this period will depend
on the specific tool. For example, for the LCPP, when submitting data to the
INNOPATHS scenario database for subsequent display on the LCPP tool, modelling
teams will agree to the internal use of their scenario data by all INNOPATHS
partner institutions. However, in order to account for the iterative process
of scenario calculation and quality control, the team producing the scenario
data must approve publication of research based on such data by project
partners. Individual modelling teams participating in the INNOPATHS project
shall retain control of their preliminary scenario data. Use of preliminary
scenario data by partners other than the modelling team generating the data
requires explicit permission from the modelling team concerned.
When the online tools are made available to the general public, differentiated
access permissions will be defined.
### Accessibility for the Public
All online tools will be fully accessible to the public. The datasets
contained within and presented by each of the tools will be viewable directly
within the tools themselves, or extractable to comma separated value (CSV)
file format for further analysis. The CSV format is a widely used format
applicable to most data analysis software.
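A minimal sketch of the CSV extraction path is shown below, using Python's standard csv module; the header and rows stand in for whatever selection a user might make in, say, the PAF tool, and are not real project data.

```python
import csv

# Illustrative PAF-style records selected by a user for download.
header = ["Policy instrument", "Country", "Outcome", "Confidence"]
rows = [
    ["Feed-in tariff", "DE", "Strong deployment effect", "High"],
    ["Carbon tax", "SE", "Moderate emission reduction", "Medium"],
]

with open("paf_extract.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```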
To preserve transparency, the code for each of the online tools will be made
open source and accessible to the public through GitHub (the Interactive
Decarbonisation Simulator will also be open source).
## 3.3 Making data interoperable
The form in which the data is presented will depend on the specific tool and
specific data of interest (e.g. numerical value, text, hyperlink, etc.). All
efforts will be made to ensure the data presented is easily understood and
easy to use for further analysis. This will include, for example, the use of
the IAMC variable template for the data presented by the LCPP. Efforts will
also be made to link to the common scenario databases and scenario
visualisations of the IAMC Working Group on scenarios and the Horizon 2020
project CD-LINKS.
As discussed above, all data presented by the online tools will be
downloadable into CSV format, for ease of interpretation and analysis.
## 3.4 Increasing data reuse
Any license conditions for the use of data presented by the online tools will
be clearly displayed, both on the online viewing platform and accompanying any
downloaded data.
The data for each online tool will be made available to the public once
related scientific work has been made available for publication. This ensures
that the data is subject to the additional quality control from the review
process. After the re-use embargo is lifted, the data may be freely used by
third parties for non-commercial and educational purposes.
Efforts will be made to preserve the online tools for public use for as long
as possible after the conclusion of INNOPATHS. Specific timeframes will be
clarified as the project progresses. Subject to future funding, the data
presented by the tools may be updated.
# 4\. Allocation of Resources
The costs for making the data generated by the project ‘FAIR’ will be minor,
and are included as part of the budget assigned to the project partners
responsible for producing the data for the online tools (AU, UCAM, E3M and
PIK), and the project partner responsible for producing the tools themselves
(N&S).
Responsibilities for data management rest in the first instance with the
project partners responsible for generating and collating them (as above), and
then with UCL, who will host the data and the tools through the UCL Research
Data Service (see below). Preserving long-term access to this data, through
the online tools, will be highly valuable to INNOPATHS stakeholders (e.g.
policy makers, industrial groups, NGOs), and is achievable at minimal, and
perhaps zero cost (specific value to be determined).
# 5\. Data Security
The online tools, and data generated by the project for use by these tools,
will be curated by the UCL Research Data Service 2 . UCL’s Research Data
Services (RDS) has the capability to store and access very large volumes of
electronic research data and data products, to support coordinated end-to-end
research workflows encompassing the use of both data storage and computational
resources (e.g. UCL’s high performance computing services), and to protect and
preserve digital data assets, including for future re-use and exploitation.
The project’s data storage strategy will consist of three components: private
web server storage, a secure short-term backup facility, and a long-term
archive. Consortium partners are able to upload and exchange data using the
private server. Web server storage is flexible, backed up, and can be readily
expanded if necessary. The long-term archive will ensure that data and the
online tools are preserved once the project comes to an end (for a timeframe
yet to be determined).
# 6\. Ethical Aspects
For an assessment and management of the ethical aspects of data collection and
use for the INNOPATHS project, please see Deliverables 8.1 and 8.2.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0046_InterFlex_731289.md
|
# 1\. INTRODUCTION & PROJECT BACKGROUND
## 1.1. Scope of the document
This report presents the data management life cycle for the data to be
collected, processed and/or generated by the InterFlex project.
As part of making research data findable, accessible, interoperable and re-
usable (FAIR), the deliverable will include information on:
* The handling of research data during & after the end of the project
* What data will be collected, processed and/or generated
* Which methodology & standards will be applied
* Whether data will be shared/made open access and
* How data will be curated & preserved (including after the end of the project).
Within the InterFlex project, it may be necessary to limit access to certain
information, in accordance with Article 12 of the Electricity Directive, which
requires that commercially sensitive information obtained in the course of
carrying out business remains confidential, and that information disclosed
regarding those activities which may be commercially advantageous is made
available in a non-discriminatory manner.
The document will be updated over the course of the project whenever
significant changes arise, such as (but not limited to) new data, changes in
consortium policies (e.g. new innovation potential, decision to file for a
patent); changes in consortium composition and external factors (e.g. new
consortium members joining or old members leaving).
The DMP will be updated as a minimum in time with the periodic
evaluation/assessment of the project: M18 and M36.
## 1.2. Notations, abbreviations and acronyms
The table below provides an overview of the notations, abbreviations and
acronyms used in the document.
## 1.3. EU Expectations from InterFlex
InterFlex is a response to the Horizon 2020 Call for proposals LCE-02-2016
(“Demonstration of smart grid, storage and system integration technologies
with increasing share of renewables: distribution system”).
This Call addresses the challenges of the distribution system operators in
modernizing their systems and business models in order to be able to support
the integration of distributed renewable energy sources into the energy mix.
Within this context, the LCE-02-2016 Call promotes the development of
technologies from a high TRL (technology readiness level) to a higher one.
InterFlex explores pathways to adapt and modernize the electric distribution
system in line with the objectives of the 2020 and 2030 climate-energy
packages of the European Commission. Six demonstration projects are conducted
in five EU Member States (Czech Republic, France, Germany, The Netherlands and
Sweden) in order to provide deep insights into the market and development
potential of the orientations that were given by the call for proposals, i.e.,
demand-response, smart grid, storage and energy system integration.
With Enedis as the global coordinator and ČEZ Distribuce as the technical
director, InterFlex relies on a set of innovative use cases.
Six industry-scale demonstrators are being set up in the participating
European countries.
Through the different demonstration projects, InterFlex will assess how the
integration of the new solutions can lead to a local energy optimisation.
Technically speaking, the success of these demonstrations requires that some
of the new solutions, which are today at TRLs 5-7, are further developed
reaching TRLs 7-9 to be deployed in real-life conditions. This allows new
business models and contractual relationships to be evaluated between the DSOs
and the market players.
**Environment** : Through the optimisation of the local energy system, the
project generates benefits in terms of increased energy efficiency (load
shifts to off-peak hours; optimized self-consumption in the case of prosumers,
increased awareness leading to active DSM and reduced electricity
consumption), power generation optimization (peak shaving, avoiding
electricity generation from carbonized peak load generation units) and
increased share of renewables (optimized integration of intermittent renewable
energy sources), resulting in the overall reduction of GHG emissions.
**Socio-economic** : The project stimulates the development of new services
for end-customers allowing for instance the development of demand response
service packages for small and large consumers as well as prosumers. The
provision of community storage solutions or the optimal use of multiple source
flexibilities should help to decrease the electricity bill without any
noticeable impact on the supply quality.
**Policy** : The Use cases of the project will help to
* Formulate recommendations for micro grid operation (control schemes and observability),
* Elaborate an appropriate regulatory framework for self-consumption and storage solutions (community or individual residential storage)
* Provide guidelines on the participation of distributed resources in DSO operations (modifications of grid codes).
_Figure 1: InterFlex Demo Map_
# 2\. DATA SUMMARY
## 2.1. Purpose of the data collection
The goal of the data collection is to design a structure for data
classification and to define the level of confidentiality and access rights for
each subcategory:
* To evaluate the technical and financial performance of the 6 demonstrators and the InterFlex project
* To communicate properly on the demonstrators and the results of the InterFlex project
* To make sure the methodology and results of the 6 demonstrators are exploitable and replicable
* Without affecting the confidentiality of certain data
_Figure 2: Level of confidentiality and access rights._ The figure defines, for
each data subcategory, its level of confidentiality, the owner of the data, the
recipients, and the perimeter of access rights. For each subcategory and each
applicant recipient, the owner can choose between sharing detailed data,
sharing aggregated or equivalent data, or no sharing.
Detailed system data need to be transformed before being exchanged with non-
authorised recipients:
* **Detailed data** : Raw data
* **Aggregated data** : Data based on detailed data that are aggregated at a sufficient level so that raw data cannot be identified (statistical rule), in line with competition laws
* **Equivalent data** : Data based on detailed data that are in an anonymous form or have modified values so that raw data cannot be identified
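As a rough illustration of the detailed-to-aggregated transformation, the sketch below sums meter readings per area and suppresses groups too small for individual raw values to be hidden. It assumes a pandas-style workflow; the data, grouping and threshold are invented, and the actual rules follow the statistical and competition-law constraints referred to above.

```python
import pandas as pd

# Detailed (raw) readings; purely illustrative values.
detailed = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "area": ["A", "A", "A", "B", "B"],
    "consumption_kwh": [310.5, 295.0, 402.2, 288.9, 350.1],
})

# Aggregate per area and suppress groups below the (illustrative) threshold,
# so that no individual raw reading can be inferred from the released data.
grouped = detailed.groupby("area")["consumption_kwh"].agg(["sum", "count"])
aggregated = grouped[grouped["count"] >= 3]

print(aggregated)  # only area A (3 customers) is released; area B is suppressed
```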
## 2.2. Types and formats of generated or collected data
Different types of data are collected or generated within the InterFlex
Project:
Table 2: Data classification
## 2.3. Data Structure and utility
The data structure has been defined gathering and compiling all project data
and processes. It may evolve and be updated during the life of the project.
The actual structure for Actors and Data for the InterFlex project are listed
in the tables below:
<table>
<tr>
<th>
**_Categories_ **
</th>
<th>
**_Type_ **
</th>
<th>
**_Subcategories_ **
</th>
<th>
**_Definition and utility_ **
</th>
<th>
**_Example_ **
</th> </tr>
<tr>
<td>
Actors
</td>
<td>
Role
</td>
<td>
DSO
</td>
<td>
Responsible for operating, ensuring the maintenance of and, if necessary,
developing the distribution system in a given area
</td>
<td>
Avacon, CEZ, E.ON, Enexis, Enedis
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
Role
</td>
<td>
Industrial partner
</td>
<td>
All industrial partners involved in Interflex project at a DEMO level
</td>
<td>
* GE
* Siemens
* Schneider …
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
Role
</td>
<td>
University and research partner
</td>
<td>
All university or research partners
involved in Interflex project at a
DEMO level
</td>
<td>
* RWTH
* AIT etc
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
Role
</td>
<td>
Retailer
</td>
<td>
Licensed supplier of electricity to an end-user
</td>
<td>
\- EDF - Engie...
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
Role
</td>
<td>
Legal Client
</td>
<td>
A legal client of a DSO that is involved at Demo scale
</td>
<td>
* Company producer
* Municipalities
* Tertiary service providers
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
Role
</td>
<td>
Physical client
</td>
<td>
A physical client of a DSO that is involved at Demo scale
</td>
<td>
\- Residential client
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
System
</td>
<td>
Charging
facilities
</td>
<td>
Facilities to charge electrical vehicles
</td>
<td>
\- Charging facilities
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
System
</td>
<td>
DER installation
</td>
<td>
Power plants that use renewable technologies and are owned by a legal person
</td>
<td>
* Photovoltaics panels
* Biomass farm
* Wind power, …
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
System
</td>
<td>
In house device
</td>
<td>
All devices working on electricity that can be found in a customer's dwelling.
</td>
<td>
* Heater
* Meter
* Local display
* Customer's battery
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
System
</td>
<td>
Communication
infrastructure
</td>
<td>
All the infrastructure used for communication at all levels (from the
customer's premises to the power command centre)
</td>
<td>
\- Modem - Routers
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
System
</td>
<td>
Network device
</td>
<td>
All devices placed on the MV/LV network
for monitoring or gathering information on the grid's situation or electrical
parameter values. It also includes the associated IS
</td>
<td>
* Secondary Substation control infrastructure
* RTU : Remote terminal units
* Circuits breakers
* sensors
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
System
</td>
<td>
IS IT
</td>
<td>
All the hardware and associated software used at the power command centre to
control and monitor the network
</td>
<td>
* SCADA
* Central database
* Control operation center
</td> </tr>
<tr>
<td>
Actors
</td>
<td>
System
</td>
<td>
Interactive communication device
</td>
<td>
All devices used to interact with customers in order to involve them in the
Demo
</td>
<td>
* Web portal
* Display used for communication
</td> </tr> </table>
_Table 3: List of actors_
<table>
<tr>
<th>
**_Categories_ **
</th>
<th>
**_Type_ **
</th>
<th>
**_Subcategories_ **
</th>
<th>
**_Definition and utility_ **
</th>
<th>
**_Example_ **
</th> </tr>
<tr>
<td>
Data
</td>
<td>
Document
</td>
<td>
Internal
document
</td>
<td>
All the documentation produced by a Demo to run operations and to monitor and
steer the project's development
</td>
<td>
* Meeting minutes
* Report on the cost's impact of selected flexibility plans
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Document
</td>
<td>
Interflex deliverable
</td>
<td>
All the deliverables that Demos have to produce during the project, as
agreed in the DoW
</td>
<td>
* Risk analysis
* Documentation on KPI
* Detailed use case - Report on technical
experimentation, market research,
…
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Document
</td>
<td>
Communication material
</td>
<td>
All the documentation that describes the project to the public and can be put
on the future website
</td>
<td>
* Purpose of the DEMO (leaflet)
* Brief description of use case
* Location of use case
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Financial data
</td>
<td>
Project financial data
</td>
<td>
All the financial data produced during the project and used
to prepare financial reports for the European Commission and internal reports
</td>
<td>
* Invoices
* Cost and time imputation
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Financial data
</td>
<td>
Solution cost and selling price
</td>
<td>
All the financial data that can be produced concerning estimated prices of
solutions for replication
</td>
<td>
* Unit product cost of hardware developed by a Demo
* Selling price of the solution developed
(software,…)
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Parameter
</td>
<td>
Condition parameter
</td>
<td>
All the external parameters that may influence the success of the use case
</td>
<td>
* Weather
* Time of day
* Day of week …
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Parameter
</td>
<td>
Scenario assumption
</td>
<td>
All the stated parameters that are necessary to determine a scenario for the
use case
</td>
<td>
* Location of islanding
* Experiment's location
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Parameter
</td>
<td>
Electrical parameter
</td>
<td>
All the electrical parameters that are used to supervise the network and its
condition
</td>
<td>
* Intensity
* Voltage
* Frequency
* Quality
</td> </tr> </table>
<table>
<tr>
<th>
Data
</th>
<th>
Parameter
</th>
<th>
Algorithm, formula, rule, specific model
</th>
<th>
All the intellectual assets created during the project to build
the software's contents
</th>
<th>
* Algorithm to optimize flexibility plan
* Simulation to determine location
of circuit breaker
* Voltage regulation algorithm
</th> </tr>
<tr>
<td>
Data
</td>
<td>
Parameter
</td>
<td>
Optimized value
</td>
<td>
Values of parameters that optimise the use case or the demo's performance
</td>
<td>
\- Optimization time of islanding
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Parameter
</td>
<td>
Forecast data
</td>
<td>
All the data used to forecast the consumption or production of customers
</td>
<td>
\- Forecast customer's consumption - Forecast photovoltaic panels' production
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Facility data
</td>
<td>
Network topology
</td>
<td>
All information on network devices and their location and interaction, mainly
coming from GIS (Geographic
Information System)
</td>
<td>
* Map of the network
* Substations location
* All the other data found in the GIS
(Geographical Information System)
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Facility data
</td>
<td>
Network state
</td>
<td>
All information concerning the network's status (global or local) at a precise
moment, useful for monitoring the network
</td>
<td>
* Feeding situation in a distribution area
* State of network regarding Limit value violation
* Location of constraint
* Flexibility needs of DSO
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Facility data
</td>
<td>
Customer's meter state and output
</td>
<td>
All the information concerning customer’s meter state and outputs information
</td>
<td>
\- Customer’s consumption or production
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Facility data
</td>
<td>
Other device state and output
</td>
<td>
All the information concerning device states and output information
</td>
<td>
* State of charge of batteries
* Consumption data coming from meter
* Production data coming from meter
* State of charge of storage
components
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Parameter
</td>
<td>
Information exchanged between IS or sent to device
</td>
<td>
All automated information sent between facilities in order to convey information
or orders for monitoring
</td>
<td>
* Order sent to breaker devices (open, close,…)
* Information on local network status coming from sensors - Order and roadmap sent to network devices (batteries, aggregator,…)
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Parameter
</td>
<td>
Detailed specification on devices
</td>
<td>
All detailed information (component references, specifications, processes,…)
useful for building the devices
</td>
<td>
* Detailed specification of the telecommunication infrastructure
* Detailed specification of interactive sensor network
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Network data
</td>
<td>
Network topology
</td>
<td>
All information on network devices and their location and interaction, mainly
coming from GIS (Geographic
Information System)
</td>
<td>
* Map of the network
* Substations location
* All the other data found in the GIS
(Geographical Information System)
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Network data
</td>
<td>
Network state
</td>
<td>
All information concerning the network's status (global or local) at a precise
moment, useful for monitoring the network
</td>
<td>
* Feeding situation in a distribution area
* State of network regarding Limit value violation
* Location of constraint
* Flexibility needs of DSO
</td> </tr>
<tr>
<td>
Data
</td>
<td>
KPI
</td>
<td>
Data for KPI
(input raw data)
</td>
<td>
All raw data that are used to calculate the final KPI
</td>
<td>
* Duration of experiment
* Customer response to DSO's
demand
* Electrical parameter used for KPI
</td> </tr>
<tr>
<td>
Data
</td>
<td>
KPI
</td>
<td>
KPI (KPI values)
</td>
<td>
All the KPI values and the way to calculate them
</td>
<td>
* Economic KPI
* System Efficiency KPI
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Customer data
</td>
<td>
Customer
contract’s data
</td>
<td>
All the data in customer's contact
that are used for contact or make payment
</td>
<td>
* Address
* Phone number
* Bank account details
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Customer data
</td>
<td>
Information sent to /received from customer
</td>
<td>
All the information and data that are exchanged between the DEMO and the
customer in order to involve customer in the experiment
</td>
<td>
* Customer's response to DSO's request to reduce consumption - Information and data available to customer in order to visualize its consumption
* Advices and encouragement sent to encourage a smart consumption
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Customer data
</td>
<td>
Customer
analysis (profile analysis, studies on client reactivity)
</td>
<td>
All the data that are produced in order to better understand the customer's
behaviour regarding the possibility to adopt smarter habits in their
electricity consumption
</td>
<td>
* Customer's typology and behaviour patterns
* Analysis on customer's response to DSO's request
</td> </tr> </table>
_Table 4: List of data_
# 3\. FAIR DATA
## 3.1. Making data findable, including provisions for metadata
In order to make data findable and usable according to the level of access
rights, rules have been defined to identify the data.
### KPIs
#### Data concerned
* All demo KPIs
* Common KPIs
#### Characteristics
* KPI values were created in order to be shared and published outside the Interflex project
* KPIs reflect the results of the demos and the InterFlex Project and are one of the main tools of the Technical Management
* For each KPI, the level of dissemination has been defined in the deliverable ‘D2.2 MinimalSetOfKPIs_CEZd_InterFlex_V1.0’
#### Rules/ Identification/Versioning
* A decision has been taken that only calculated values will be put inside the data clearing house located on the Project Intranet. The data collection frequency and responsibilities for data collection are defined for each KPI in the deliverable ‘D2.2 MinimalSetOfKPIs_CEZd_InterFlex_V1.0’:
  * Data name
  * Data ID
  * Methodology for data collection
  * Source/tools/instruments for data collection
  * Location of data collection
  * Frequency of data collection
  * Responsible for data collection
  * KPI ID
  * KPI Name
* The WP Leader (WPL) is responsible for KPI collection in each WP: the WPL collects the different KPIs in the database stored on the InterFlex Intranet, so that the Technical Director can assess the project on the basis of the KPI collection.
* Each Raw data and calculated value has an Object Identifier defined in deliverable ‘D2.2 MinimalSetOfKPIs_CEZd_InterFlex_V1.0’.
### InterFlex Deliverables
#### Data concerned
\- List of deliverables defined in the Grant Agreement (or its amended
version)
#### Characteristics
\- Depending on the deliverable, the perimeter of dissemination can be different
\- A level of dissemination is already pre-defined in the Grant Agreement
#### Rules/ Identification/Versioning
* Level of dissemination is chosen by the author and Technical committee validates the choice. Main deliverables and appendices may have different levels of confidentiality, especially if appendices are more detailed
* The deliverables are available on the Project Intranet and on the website for public audience
* Versioning and nomenclatures are defined in the deliverable ‘D10.1 ProjectManagementPlan_Enedis_InterFlex_V2.0’
### Demo’s local Data
#### Data concerned
* Internal documents
* Electrical parameters
* Forecast data
* Device state and outputs and information exchanged between facilities
* Customer analyses (profile analyses, studies on client reactivity…)
#### Characteristics
* Data used to run each of the Demos on a daily basis
* Data have low added value outside the source Demo, as Demos run separately without overlap
#### Rules to be applied
* Data should stay at Demo level and be used only by Demo partners for the achievement of their activity
* If another partner outside Demo needs these data, a written request and explanation should be provided about the way he is going to use the data (Agreement form)
> Sample data in an anonymous format can be sent only for illustration >
> Providing aggregated data should be the rule
* A global description will be integrated inside deliverable
### Project Financial Data
#### Data concerned
* Invoices
* Time spendings/imputations
* Cost/price
* Company internal financial documents
#### Characteristics
* Data can be at different levels of details
* Detailed data (cost by unit) are extremely sensitive
* Demos need to send financial data to the Coordinator, who will aggregate the data to present a global cost statement covering all Demos and the general expenses of each partner (for the internal financial report), broken down to individual WPs
#### Rules to be applied
* Detailed data should stay in partner's accounting system
* Data sent to the Coordinator should be in the detailed level described in the template provided by Coordinator
* All data sent to Coordinator must be kept strictly confidential and must not be disseminated
* Coordinator aggregates data to present to the consortium
* Company internal financial information is not shared
### Network Data
#### Data concerned
* Network topology
* Network state
#### Characteristics
* Network topology GIS information and Network state are highly confidential and sensitive
* Network state can be sensitive information as it can reveal grid's weakness and vulnerability
#### Rules to be applied
* Network topology with GIS information must not be shared except between DSOs
* If another partner needs network data, a written request and explanation should be provided about the way he is going to use the data (Agreement form)
> Data in an anonymous format (equivalent data without indication of location)
> can be sent
> Providing aggregated data should be the rule
### Demo Customer Data
#### Data concerned
* Customer's meter data
* Customer's contract data
#### Characteristics
* All this data are strictly under personal data protection (European and national laws)
* Detailed identification data (address, phone number,…) are sensitive information and must be secured
#### Rules to be applied
* Customer's contract must never be sent
* As stated in the laws:
> Customer must have an access to this data
> Data must be protected and traceability of the use must be made
> Data can't be disclosed to anyone without the full consent of customers on
> usage and access rights
* All this information has to be clearly stated in the customer's contract that is signed on entering the project
* In order to send information to other partners, if the need is clearly established, these data have to be delivered in an anonymous format (equivalent data) or aggregated format
* The DSO must ensure to record these data in compliance with their national data protection regulations
## 3.2. Making data openly accessible
Data that are used to manage the project within the consortium are stored on
the Project Intranet with a private login and password. This intranet is used
as a working tool for the sharing of documents related to InterFlex and
consists of a private area, accessible online to the project partners.
It allows the safe access to project information and reports, circulation of
preparatory and internal work, online exchanges and virtual communication
tools such as shared Agenda, Instant Messaging etc.
Groups and access roles have been defined in order to assure clearly
identified access rights.
_Figure 3: Tree view of the access rights of the Intranet_
Data that are public are accessible on the project website. Keywords and a
search tool are available in order to make the data more accessible.
Confidential data are identified with a confidential tag in order to protect
them.
The access rights per actor and per data category are described in the table below:
Table 5: Data classification
## 3.3. Making data interoperable
In order to identify and aggregate the data in an interoperable way the
InterFlex project uses the SGAM framework/ Use case methodology approach and
also the IEC PAS 62559 based template to describe in detail the Use Cases. See
deliverable D2.1.
The SGAM framework and its methodology are intended to present the design of
Smart Grid use cases in an architectural but solution and technology-neutral
manner.
The SGAM framework consists of five layers representing business objectives
and processes, functions, information exchange and models, communication
protocols and components. These five layers represent an abstract and
condensed version of the GWAC interoperability categories. Each layer covers
the smart grid plane, which is spanned by electrical domains and information
management zones. The intention of this model is to represent on which zones
of information management interactions between domains take place. It allows
the presentation of the current state of implementations in the electrical
grid, but furthermore to depict the evolution to future smart grid scenarios
by supporting the principles of universality, localization, consistency,
flexibility and interoperability.
InterFlex aims to get information on:
1. Description of the Use Case
2. Diagrams of the Use Case
3. Technical data - Actors
4. Step by Step Analysis of Use Case (can be extended by detailed info on “information exchanged”)
5. Information exchanged
Moreover, WP3 focuses on defining an interoperable API for the IT systems
involved in the flexibility transactions with cybersecurity constraints
(D3.6), Interoperability and interchangeability validation results (D3.7), and
Scalability and replicability analyses for all the use cases (D3.8).
## 3.4. Increase data re-use (through clarifying licenses)
In order to enable and promote data re-use, all data provided need to address
the following questions in a reasonable way, and the answers must be specified
in the Exploitation Plan of the project (D4.7 and D4.8, i.e. the 1st and 2nd
versions of the Exploitation Plan of the project results):
* How will the data be licensed to permit the widest possible re-use?
* When will the data be made available for re-use? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible.
* Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why.
* How long is it intended that the data remains re-usable?
* Are data quality assurance processes described?
# 4\. ALLOCATION OF RESOURCES
Enedis as Project Coordinator of the InterFlex project is responsible for data
management of the project.
All relevant data such as KPI results, publications and deliverables must be
accessible at least 5 years after the end of the project. As such, the
Intranet and Internet platforms will be available and maintained by Enedis
during this period from 2020 until 2024. The corresponding estimated expenses
are shown hereunder:
\- Intranet licences: 96 €/year/licence, 1 licence for each of the 20 partners (TBC)
\- Internet hosting and maintenance: 40 €/month
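The yearly figures and five-year totals shown in Table 6 below follow directly from these unit costs, as this trivial check illustrates:

```python
# Plausibility check of the direct data management costs, assuming 20 intranet
# licences at 96 EUR/year each and hosting at 40 EUR/month, for 2020-2024.
intranet_per_year = 20 * 96         # 1 920 EUR per year
internet_per_year = 12 * 40         # 480 EUR per year
years = 5                           # 2020 .. 2024

print(intranet_per_year * years)    # 9 600 EUR intranet total
print(internet_per_year * years)    # 2 400 EUR internet total
print((intranet_per_year + internet_per_year) * years)  # 12 000 EUR overall
```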
<table>
<tr>
<th>
**Cost category**
</th>
<th>
**Year 2020**
</th>
<th>
**Year 2021**
</th>
<th>
**Year 2022**
</th>
<th>
**Year 2023**
</th>
<th>
**Year 2024**
</th>
<th>
**Total**
</th> </tr>
<tr>
<td>
Intranet
</td>
<td>
1 920 €
</td>
<td>
1 920 €
</td>
<td>
1 920 €
</td>
<td>
1 920 €
</td>
<td>
1 920 €
</td>
<td>
**9 600 €**
</td> </tr>
<tr>
<td>
Internet
</td>
<td>
480 €
</td>
<td>
480 €
</td>
<td>
480 €
</td>
<td>
480 €
</td>
<td>
480 €
</td>
<td>
**2 400 €**
</td> </tr>
<tr>
<td>
**Total direct costs**
</td>
<td>
**2 400 €**
</td>
<td>
**2 400 €**
</td>
<td>
**2 400 €**
</td>
<td>
**2 400 €**
</td>
<td>
**2 400 €**
</td>
<td>
**12 000 €**
</td> </tr> </table>
_Table 6 : Direct data management costs excluding HR after the end of the
project_
# 5\. DATA SECURITY AND ETHICAL ASPECTS
## 5.1. Data security
Based on common works and agreements among GWP and Demo leaders, each
subcategory of data was assessed with three levels of confidentiality in order
to ensure data security.
Depending on the constraints applying to these types of data (laws, internal
rules…), it is possible to apply a level of confidentiality as follows:
\- Level 0:
  * Detailed data can never be shared
  * Aggregated data or equivalent data can be shared
\- Level 1:
  * Detailed data can be shared with some partners upon request and dedicated agreement
  * No restriction on aggregated data or equivalent data
\- Level 2:
  * No restriction whatsoever.
## 5.2. Ethical aspects
Ethics requirements in the protection of personal data must be taken into
account. Indeed, within the context of Interflex demos and exploitation of
related results, partners will be collecting or processing personal data.
As such, the D1.1 Ethics POPD deliverable (protection of personal data)
specifies for each partner:
* Certification by their competent Data Protection Authority of compliance with applicable local and European laws,
* Detailed information on the procedures that will be implemented for data collection, storage, protection, retention and destruction
* where applicable, providing templates of consent forms to be given out to customers whose personal data may be collected and used
# 6\. APPENDIX
List of InterFlex Common KPIs
<table>
<tr>
<th>
**Interflex Project KPI**
</th>
<th>
**KPI ID**
</th>
<th>
**KPI TYPE**
</th>
<th>
**KPI Description**
</th> </tr>
<tr>
<td>
Flexibility
</td>
<td>
WP2.2_KPI_1
</td>
<td>
Technical
</td>
<td>
Flexible power that can be used for balancing specific grid segment.
</td> </tr>
<tr>
<td>
Hosting capacity
</td>
<td>
WP2.2_KPI_2
</td>
<td>
Technical
</td>
<td>
Percentage increase of network hosting capacity for DER.
</td> </tr>
<tr>
<td>
Islanding
</td>
<td>
WP2.2_KPI_3
</td>
<td>
Technical
</td>
<td>
Capacity of the energy system to switch to islanding whilst keeping the power
quality requirement.
</td> </tr>
<tr>
<td>
Customer recruitment
</td>
<td>
WP2.2_KPI_4
</td>
<td>
Social
</td>
<td>
Measure whether demos are managing to recruit a sufficient customer base in order
to attain demo objectives.
</td> </tr>
<tr>
<td>
Active participation
</td>
<td>
WP2.2_KPI_5
</td>
<td>
Social
</td>
<td>
Reflects how versatile the demos are in leveraging flexibility from different
technologies.
</td> </tr> </table>
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0049_RURITAGE_776465.md
|
# 1\. Introduction
The RURITAGE project participates in the Pilot on Open Research Data launched
by the European Commission (EC) along with the H2020 programme. The aim of the
programme is to allow access to research data generated in H2020 projects. A
FAIR (findable, accessible, interoperable and re-usable) Data Management Plan
(DMP) is required as a deliverable for all projects participating in the Open
Research Data Pilot.
Open access is defined as the practice of providing free of charge on-line
access to scientific information that is reusable. Scientific information may
be defined as research data and scientific peer-reviewed journal publications.
In the context of RURITAGE, research data will concern Key Performance
Indicators developed through the project and will include socio-economic,
environmental and cultural heritage related data. Spatial data will also be
collected and stored within the RURITAGE ATLAS (an output from the project). In
addition, stakeholders' perception of change will be collected via the Cult-
Rural Toolkit, stored in an anonymous fashion and displayed in the ATLAS.
Personal data of participants in the project activities will also be collected
and stored securely.
The Consortium believes in the concepts of open science and in the benefits
that can be drawn from allowing the reuse of data at a larger scale.
Furthermore, there is a need to gather experience and knowledge relating to
innovative use of heritage for rural regeneration. In fact, the majority of
European heritage is found in rural areas; however, there is a much longer
tradition of heritage promotion in the urban context. Meanwhile, most rural areas
are facing chronic economic, social and environmental problems, resulting in
unemployment, disengagement, depopulation, marginalisation or loss of
cultural, biological and landscape diversity. In most cases, tangible and
intangible Cultural Heritage is threatened.
This project proposes to change this condition by demonstrating the heritage
potential for sustainable growth. Around Europe and in Third countries,
numerous examples of good practices show how CNH is emerging as a driver of
development and competitiveness through the introduction of sustainable and
environmentally innovative solutions and the application of novel business
models. The project will for the first time provide open access, high quality
data relating to the innovative use of heritage for rural regeneration.
Although the project embraces open access data, there will be legitimate
situations where access to data will be restricted for commercial
exploitation reasons. However, processes will be developed to limit such
restrictions, for example anonymising data or applying limited embargos to datasets.
## 1.2 Purpose of Data Management Plan
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the Consortium with regard to the
project research data.
The DMP covers the complete research data life cycle (Figure 1.1). It describes
the types of research data that will be generated or collected during the
project, the standards that will be used, how the research data will be
preserved and what parts of the datasets will be shared for verification or
reuse. It also reflects the current state of the Consortium agreements on data
management and must be consistent with exploitation and IPR requirements.
Figure 1.1: Research Data Cycle. (Source: University of Plymouth (2018).
_Research data cycle_ . Available online:
_https://plymouth.libguides.com/ld.php?content_id=31431849_ )
The DMP is not a fixed document, but will develop during the lifecycle of the
project, particularly whenever significant changes arise such as dataset
updates or changes in Consortium policies.
This document is the first version of the DMP, delivered in Month 6 (D8.3) of
the project. It includes an overview of the datasets to be produced by the
project, and the specific conditions that are attached to them. The next
versions of the DMP will get into more detail and describe the practical data
management procedures implemented by the project with reference with the IT
tools developed in WP4. At a minimum, the DMP will be updated in Month 24 and
Month 48 respectively.
This document has been produced following the EC guidelines for projects
participating in this pilot.
# 2\. Data Summary
The dataset types that have been identified in the initial DMP are focused on
the description of the action and on results obtained in the first months of
the project.
Therefore, Table 1.1 reports a list of initial types of research data
identified. The list may have datasets added or removed in later versions of
the DMP as the project develops. Datasets relating to the ATLAS, Cult-Rural
Toolkit and replicators baseline data will be developed within the next period
of the project and therefore will be added to the next version of the DMP.
Details for each of the current datasets have been included in the following
section.
## 2.1 Dataset
<table>
<tr>
<th>
**NAME of DATASET**
</th>
<th>
**RESPONSIBLE**
**PARTNER**
</th>
<th>
**WP**
</th> </tr>
<tr>
<td>
Role Model BP
</td>
<td>
TECNALIA
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
Participatory Management
</td>
<td>
UNESCO
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
Stakeholder Participants
Information
</td>
<td>
UNIBO
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
KPI Evidence
</td>
<td>
CARTIF
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
DSS Knowledge
</td>
<td>
POLITO
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Dissemination Information
</td>
<td>
ICLEI
</td>
<td>
WP7/WP6
</td> </tr> </table>
Table 1.1 DataSet Types
### 2.1.1 Dataset 1 Role_Model_BP
<table>
<tr>
<th>
**DATASET REFERENCE AND NAME:**
Dataset 1: Role_Model_BP
</th> </tr>
<tr>
<td>
**DATASET DESCRIPTION:**
The data collected in this dataset relates to best practices and lessons learnt
from RMs. It contains contextual data, a description of the process, and a
description of the stakeholders and resources involved. It also includes a
description of the actions that can be replicated. The data is in Excel
format, with a workbook for each RM.
</td> </tr>
<tr>
<td>
**ORIGIN, NATURE AND SCALE OF DATA:**
The data is gathered from the RMs via a questionnaire campaign. The scale of
the data relates to the number of RMs.
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA:**
Specific keywords will be attached to document to make them findable.
</td> </tr>
<tr>
<td>
**DATA SHARING:**
During the lifecycle of the project all raw data will be stored on the
RURITAGE SharePoint site, which is backed up via University of Plymouth IT
systems. Sensitive data will be shared between partners via SharePoint folders
with limited access. Sensitive data will not be shared outside the EU.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION:**
Research datasets that can be shared will be uploaded to the ZENODO repository
with detailed metadata to ensure discoverability. In addition to ZENODO, a
metadata record for each dataset will be added to the University of Plymouth’s
“PEARL” open publication/data repository ( _https://pearl.plymouth.ac.uk/_ )
in order to improve discoverability.
</td> </tr> </table>
### 2.1.2 Dataset 2 Participatory_Management
<table>
<tr>
<th>
**DATASET REFERENCE AND NAME:**
Dataset 2: Participatory_Management
</th> </tr>
<tr>
<td>
**DATASET DESCRIPTION:**
The data within this dataset relates to the theoretical and methodological
approach to the participatory process in the Rural Heritage Hubs. The
methodology has been divided into three phases: 1) Setting up the Hub
(identifying hub coordinator, hub space, hub stakeholders and multi-use of the
hub); 2) Activities to be implemented in the Hub (includes a detailed
individual calendar with activities to be implemented for each of the RMs and
Rs); 3) Monitoring the Hub (i.e. processes and indicators). A document
detailing the methodology will be compiled and will contain several annexes
including contact details of Hub coordinators. A Serious Game kit which will
be part of the RURITAGE Replicator Tool Box (WP5) will be produced and be free
to download by other institutions interested in being trained and in using the
game (DIY approach).
Word documents/PDFs containing the ‘match-making agreement’, the schedule for the
visits, required inputs and expected outputs will also be produced. This is
likely to contain contact details. Video recordings of presentation will be
produced and placed on YouTube channel. The video recordings will not include
personal data.
</td> </tr>
<tr>
<td>
**ORIGIN, NATURE AND SCALE OF DATA:**
This dataset is a document that builds on research about participatory
processes. It also includes a calendar of planned events and explains different
techniques that can be used in hub activities. It is a deliverable of the RURITAGE
project (D2.2 – public). The Serious Game kit will be part of the RURITAGE
Replicator Tool Box (WP5). The scale of the data will relate to the number of
RMs and Rs and future participatory activity.
Recordings of online presentation (min. 9 videos) to be uploaded on YouTube
will be produced.
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA:**
Specific keywords will be attached to documents to make them findable, and
keywords and digital object identifiers will be added to the game kit and YouTube
videos.
</td> </tr>
<tr>
<td>
**DATA SHARING:**
Videos will be shared on a YouTube channel that will be created for the project.
These videos will also be part of the WP5 Platform and the project web page.
The documents containing the methodology and approach will be shared via the
webpage and at events. However, Annex III, which contains the contact details, will
not be shared. Any raw data will be stored on the RURITAGE SharePoint site,
which is backed up via University of Plymouth IT systems. Sensitive data will
</td> </tr>
<tr>
<td>
be shared between partners via SharePoint folders with limited access to data.
Sensitive data will not be shared outside the EU
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION:**
Research datasets that can be shared will be uploaded to ZENODO repository
with detailed metadata to insure discoverability. In additional to ZENODO, a
metadata record for each dataset will be added to the University of Plymouth’s
“PEARL” open publication/data repository ( _https://pearl.plymouth.ac.uk/_ )
in order to improve discoverability. The documents outlining the methodology
will be upload to repository and open access papers will be produced to share
the method.
In order to insure longevity of video recordings as suitable format for
archiving will be explored.
</td> </tr> </table>
### 2.1.3 Dataset 3 Stakeholder_Participants_Information
<table>
<tr>
<th>
**DATASET REFERENCE AND NAME:**
Dataset 3: Stakeholder_Participants_Information
</th> </tr>
<tr>
<td>
**DATASET DESCRIPTION:**
This dataset will include the stakeholders’ databases coming from the 13 RMs
and the 6 Rs. A total of 38 databases will then be produced. Indeed, each RM
and each R will have to compile two different databases:
* One including the organizations involved as stakeholders in the process of the RHH
* One including details (anonymised personal information) of the citizens that will also participate into the RHH
</td> </tr>
<tr>
<td>
**ORIGIN, NATURE AND SCALE OF DATA:**
The databases will come from the RMs and Rs that will compile those also
including the personal information and contact details of the participants.
The databases will be fulfilled and transmitted in an .xlsx format.
_Organizations’ database_
The database that will be transmitted from RMs and Rs to responsible project
partners (CE and UNIBO) will include the following information:
* Name of the Organisation
* Organisation level
* Organisation Form
* Value chain (for profit organizations)
* Sector of activity (for non-profit organizations)
* Brief description of the organisation
* Organisation address
* Website
* Generic organisation email
* Twitter/Facebook handle of the organisation (if applicable)
* Additional comments
* Topic of interest: Pilgrimage, Sustainable local Food production, migration, art and festival, integrated landscape management, resilience
</td> </tr>
<tr>
<td>
The following information will be collected by the partners but will not be
transmitted to project partners and thus won’t be included in the databases:
* Name
* Role
* Residence
* Gender
* Age (range of age)
* Disability
* Email
* Telephone
_Citizens’ database_
The database that will be transmitted from RMs and Rs to responsible project
partners (CE and UNIBO) will include the following information:
* Residence
* Gender
* Age (Range of age)
* Disability
* Topic of interest: Pilgrimage, Sustainable local Food production, migration, art and festival, integrated landscape management, resilience
The following information will be collected by the partners but will not be
transmitted to project partners and thus won’t be included in the databases:
* Name
* Email
* Telephone
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA:**
Specific keywords will be attached to documents to make them findable; however,
this is only for internal use as the information is confidential.
</td> </tr>
<tr>
<td>
**DATA SHARING:**
The template with the full contact and personal details will be kept, together
with the relevant signed consent forms and information sheets, in a secure
folder on the project SharePoint site that is only accessible to the
responsible partners and won’t be shared with any other project partners. The
anonymized databases will be included in Del 3.2 that will be available to all
project partners and to the commission services. The deliverable is expected
to be submitted at M10 of the project implementation (End of March 2019).
Deliverable 3.2 will in any case be kept confidential, meaning that it will
not be shared with a wider audience.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION:**
Research datasets that can be shared will be uploaded to the ZENODO repository
with detailed metadata to ensure discoverability. In addition to ZENODO, a
metadata record for each dataset will be added to the University of Plymouth’s
“PEARL” open publication/data repository ( _https://pearl.plymouth.ac.uk/_ )
in order to improve discoverability.
Datasets that are confidential will be archived within the University of
Plymouth system with restricted access for 5 years to support any queries.
</td> </tr> </table>
### 2.1.4 Dataset 4 KPI_Evidence
<table>
<tr>
<th>
**DATASET REFERENCE AND NAME:**
Dataset 4: KPI_Evidence
</th> </tr>
<tr>
<td>
**DATASET DESCRIPTION:**
This dataset will provide quantifiable evidence of the potential role of CNH as a
driver for sustainable growth. WP4 will monitor the performance of the
deployed regeneration schemes in the 6 Rs through selected Key Performance
Indicators (KPIs).
</td> </tr>
<tr>
<td>
**ORIGIN, NATURE AND SCALE OF DATA:**
* Origin: Data are provided mainly by the Rs, or obtained from official statistics (Eurostat or the like).
* Nature: Most of the data are quantitative, either absolute values or percentages.
* Scale: Considering the worst case:
6 Rs + 6 additional Rs = 12 cases
Fewer than 100 KPIs
3 data gathering campaigns (baseline + intermediate + final)
TOTAL = 12 x 100 x 3 = 3,600 data fields (approx.)
The project exploitation and upscaling after the project’s end should also be
taken into account, so the scale could be 3 or 4 times the estimated value
(i.e. 10,800 – 14,400 data fields).
Optionally, some KPIs could also be collected from the RMs. In that case, there
will be no monitoring, i.e. no gathering campaigns, so only baseline data will
be collected.
13 RMs + 8 additional RMs = 21 cases
Fewer than 100 KPIs
TOTAL = 21 x 100 = 2,100 data fields
Overall TOTAL = 14,400 + 2,100 = 16,500 data fields (approx.)
Detailed information on data types and formats can be found in Deliverable
D4.1 ‘KPIs definition and evaluation procedures’.
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA:**
Data will be stored in a database (PostgreSQL, MySQL, or similar), preferably
an open source database, as INTEGER, FLOAT or STRING data types. Minimum metadata
will be data source, timestamp, validity, …
Data field names and table names will be auto-descriptive, avoiding the use of
abbreviations. All names will be lowercase, except for the first letter
between two consecutive words, e.g. tableName. Variables must include a letter
at the beginning of the name indicating the data type of the variable (‘i’ for
integer, ‘f’ for float and ‘s’ for string data), e.g. iAge,
fAverageTemperature, sAddress (a short sketch of this convention follows this
table).
Versioning will consist of a major version number, followed by a dot and a
minor version number, starting from ‘0.1’. Including new data in the
Monitoring database does not mean a new version. Minor changes to the
database, such as modifying or including a new data field, will increase the
minor version number. Major changes, such as removing or including a new table,
will increase the major version number.
</td> </tr>
<tr>
<td>
**DATA SHARING:**
Data will be open access. No personal data is stored, so no issues regarding
GDPR are expected. Data will be accessible through the RURITAGE platform. The
only tool necessary to access the data is a web browser, but a database
management system (DBMS) such as PostgreSQL or MySQL and database
administration tools like phpPgAdmin or phpMyAdmin could be necessary to
manage the dataset properly. This will be documented accordingly to enable
future reuse.
Licence conditions of official statistics obtained will be checked to ensure
they do not impact on the sharing of data outputs.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION:**
Research datasets will be uploaded to the ZENODO repository with detailed metadata
to ensure discoverability.
In addition to ZENODO, a metadata record for each dataset will be added to
the University of Plymouth’s “PEARL” open publication/data repository (
_https://pearl.plymouth.ac.uk/_ ) in order to improve discoverability.
</td> </tr> </table>
### 2.1.5 Dataset 5 DSS_Knowledge
<table>
<tr>
<th>
**DATASET REFERENCE AND NAME:**
Dataset 5: DSS_Knowledge
</th> </tr>
<tr>
<td>
**DATASET DESCRIPTION:**
This dataset relates to the data generated by the DSS. The DSS will use data
from other WPs and inbuilt rules to generate suggestions relating to
regeneration policies that the Rs can use. The results will be in textual form.
</td> </tr>
<tr>
<td>
**ORIGIN, NATURE AND SCALE OF DATA:**
Data are generated by the DSS using other datasets from WP1 and WP5 as input.
The scale of the data relates to the number of Rs involved (currently 6,
increasing to 12 over the life of the project) but will expand when the
project is upscaled.
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA:**
There are no applicable standards; however, all data stored/produced will be
documented and tagged with keywords.
</td> </tr>
<tr>
<td>
**DATA SHARING:**
During the lifecycle of the project, all raw data will be stored on the
RURITAGE SharePoint site, which is backed up via University of Plymouth IT
systems. Sensitive data will be shared between partners via SharePoint folders
with limited access. Sensitive data will not be shared outside the EU.
Data will also be accessible through the RURITAGE platform.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION:**
Research datasets that can be shared will be uploaded to the ZENODO repository
with detailed metadata to ensure discoverability. In addition to ZENODO, a
metadata record for each dataset will be added to the University of Plymouth’s
“PEARL” open publication/data repository ( _https://pearl.plymouth.ac.uk/_ )
in order to improve discoverability. Data will also be archived in the ARCHES
database.
</td> </tr> </table>
### 2.1.6 Dataset 6 Dissemination_Information
<table>
<tr>
<th>
**DATASET REFERENCE AND NAME:**
Dataset 6: Dissemination_Information
</th> </tr>
<tr>
<td>
**DATASET DESCRIPTION:**
During the WP7 Dissemination and Communication activities, the following
information might be temporarily requested from participants:
* Name of participant
* Name of organisation represented
* City where the organisation is located
* Work email address
* Work phone number
</td> </tr>
<tr>
<td>
**ORIGIN, NATURE AND SCALE OF DATA:**
The events that are likely to request this information are:
* EU and International cooperation with other similar projects. Information might be necessary for communication between our project and other organisations.
* Community events - Information might be necessary for registering and informing participants about the event.
* Photo Contest and Troubadour activities - Information might be necessary for registering and informing participants about the contest results, as well as for the troubadour diary.
* Public events, workshops (e.g. Dialogue Breakfast and joint meetings) and conferences - Information might be necessary for registering and informing participants about the event.
* Newsletters - Information might be necessary for subscribing.
* Summer Schools and Master Course – Information will be necessary for registering on courses.
The scale of the data is indicated in the D7.1 Communication and Dissemination
Plan (and its regular updates).
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA:**
Because the information will not be shared, additional descriptive metadata is
not required.
</td> </tr>
<tr>
<td>
**DATA SHARING:**
The information will not be shared externally. It is needed and used only for
the practical organisation and functioning of the cooperation, events,
contests and newsletter. The data will be accessed only by the core team
involved in the development of the particular event. The data will be stored
in restricted-access folders on the RURITAGE SharePoint site, which is backed
up via University of Plymouth IT systems.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION:**
All data will carefully respect the GDPR requirements and will be responsibly
administered and managed.
</td> </tr> </table>
# 3\. Open access to publication
The RURITAGE consortium is an open data pilot project within the H2020
programme; this means that all publications deriving from the project should
be published open access. To ensure open access to scientific journal
publications, an appropriate project budget has been allocated and will be
available to all partners producing scientific journal publications. Hence all
scientific journal publications will follow a “Gold” route and be open access
on the day of publication. The University of Plymouth also manages and
maintains “PEARL” (https://pearl.plymouth.ac.uk/), an open publication
repository; therefore all publications (“postprint” versions) will be
deposited in the repository within 20 days of acceptance. All “postprint”
versions will then be replaced by final versions when copyright allows.
RURITAGE intends to publish at least 10 publications.
# 4\. Open access to raw data
## 4.1 Data Storage, Security, back up and repository
All RURITAGE project datasets will be stored on the project SharePoint. The
site is hosted in the Microsoft Cloud (which sits on secure EU servers). The
site will be automatically backed up every 12 hours by the service provider
(Microsoft). SharePoint Online (part of Office 365) provides encryption during
data storage and transfer. It has been approved by the University Security
Architects for the storage of valuable research data (see security
certifications: _https://technet.microsoft.com/en-
GB/library/office-365-compliance.aspx_ ). The PI can use the site to share
documents with the partners, reducing duplication and the risks associated
with emailing documents. The PI can set up different access permission levels
to fit in with confidentiality requirements. When appropriate, research
datasets will be uploaded to the ZENODO repository with detailed metadata to
ensure discoverability. In addition to ZENODO, a metadata record for each
dataset will be added to the University of Plymouth’s “PEARL” open
publication/data repository ( _https://pearl.plymouth.ac.uk/_ ) in order to
improve discoverability. This will ensure wide dissemination of relevant
scientific outputs. Links will also be created to the OpenAIRE platform.
A dedicated dissemination plan will also ensure that materials are shared with
relevant groups and stakeholders.
# 5\. Ethics and Data protection
As defined in the relevant datasets, personal data protection will be ensured
at every step of the project. In particular, RURITAGE complies with the H2020
ethical standards and rules according to the following procedures:
* A signed consent form will be collected from all research participants before any data collection takes place. An information sheet will also be provided to all research participants. The following information will be included in the information sheet:
  * Details of what the study is about, who is undertaking the study and why it is being conducted;
  * Link to the web page containing the Privacy Notice for Research Participants;
  * Clear details of what participation would involve, i.e. what they would be asked to do, where, for how long, etc.;
  * The advantages/disadvantages of taking part;
  * Who is funding the study;
  * Who has reviewed/authorised the study;
  * The researcher’s contact details;
  * Another named person, besides the researcher, whom people can contact (e.g. with any questions/complaints);
  * What will happen to the data collected: how it will be stored, for how long and with whom/how it will be shared, whether it will be anonymised, how it will be published, including whether any automated decision-making will apply, as well as the significance and the envisaged consequences of such processing for the participant;
  * How long it will be retained and the security measures in place;
  * The categories of personal data collected;
  * What safeguards are in place in relation to personal data shared with other parties and/or transferred out of Europe;
  * The source of the personal data if not from the participant or if additional to them, for example whether it came from publicly accessible sources (e.g. online, social media, NHS);
  * That participation is voluntary; people are free to withdraw at any time without giving a reason and with no negative consequences; any timescales for withdrawal;
  * Eligibility criteria.
* A GDPR and Information Security online training course hosted on the project SharePoint site has been developed, and all partners responsible for collecting data will undertake the training. This will ensure that all researchers have a good understanding of the GDPR and that the study complies with the GDPR concerning the collection, processing and storage of personal data.
# 6\. Other Data Management issues
### 6.1 Responsibilities and Resource Allocation for Data Management
Each WP has been allocated appropriate resources to manage the planning,
collection, processing, publication to open access repositories and archiving
of data produced within their WP. Each partner will respect the processes
identified in the DMP and support the WP leaders. Resource has also been
allocated to develop and maintain the project SharePoint site that will store
data in a secure fashion. The SharePoint site will be maintained for the life
of the project and beyond. Both the Coordinator and University of Plymouth
have appropriate resources allocated to their project budgets to manage the
overall data management plan and provide suitable training and guidance to all
partners as and when required.
### 6.2 Intellectual Property Rights
In line with the Grant Agreement, the Consortium has a policy of protecting
the project’s results, whenever results are expected to be commercially
exploitable and whenever this protection is possible, reasonable and
justified. Where this is applicable, the necessary steps to protect the
associated IP will be included in the Action Plan for the relevant project
results. IPR will be managed through the detailed internal IP Protection Plan
that will be developed in WP8.
The ownership of results is strictly controlled by the Consortium Agreement
(CA) - Section 8, which includes all provisions related to the Ownership of
Results, Joint Ownership of Results, Use of Results and Transfer of Results.
Specifically relating to jointly owned results which could be commercially
exploited, including but not limited to RURITAGE Systematic Innovation Areas
(SIAs), RURITAGE branding, RURITAGE Decision Support System (DSS), Serious
Game Kit, RURITAGE Atlas, My Cult-Rural Toolkit; these will be the subject of
discussion among the parties that participate in the development of such
results. The parties shall make their best efforts to negotiate in good faith
and finalise joint ownership agreements before the end of the project.
---
0053_ReFreeDrive_770143.md | Horizon 2020 | https://phaidra.univie.ac.at/o:1140797
# Executive Summary
The main objective of this document, D9.2 Open Data Management Plan (ODMP), is
to collect, analyse and share open motor design, characterization and testing
data and experience to validate and de-risk future industrial innovations.
This objective has been addressed in this document, and there have been no
deviations in content or time from the deliverable objectives set out in the
ReFreeDrive Grant Agreement. Data gathering and management will be a
continuous action throughout the duration of the project.
The Consortium strongly believes in the concepts of open science, and in the
benefits that the European innovation ecosystem and economy can draw from
allowing the reuse of data at a larger scale. Besides, sharing and reusing
research data within the electric machine design research community will
eliminate barriers and reinforce an innovation culture.
The purpose of the ODMP is to provide an analysis of the main elements of the
data management policy that will be used by the Consortium with regard to the
project open research data.
The ODMP covers the complete research data life cycle. It describes the types
of research data that will be generated or collected during the project, how
the research data will be preserved and what parts of the datasets will be
shared for verification or reuse.
Research data linked to exploitable results will not be put into the open
domain if doing so would compromise their commercialisation prospects or if
they have inadequate protection, in line with H2020 obligations. The rest of
the research data will be deposited in an open access repository.
The ODMP is not a fixed document; on the contrary it will evolve during the
lifespan of the project. This first version of the ODMP includes an overview
of the datasets to be produced by the project, and the specific conditions
that are attached to them. The next versions of the ODMP will get into more
detail and describe the practical data management procedures implemented by
the ReFreeDrive project.
The expected types of research data that will be collected or generated along
the project will be discussed following the project work package structure.
# 1 Introduction
Open access is defined as the practice of providing on-line access to
scientific information that is free of charge to the reader and that is
reusable. In the context of research and innovation, scientific information
can refer to peer-reviewed scientific research articles or research data.
Research data refers to information, in particular facts or numbers, collected
to be examined and considered and as a basis for reasoning, discussion, or
calculation. In a research context, examples of data include statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings and images. The focus is on research data
that is available in digital form.
Nevertheless, data sharing in the open domain can be restricted for the
legitimate reason of protecting results that can reasonably be expected to be
commercially or industrially exploited. Strategies to limit such restrictions
will include anonymising or aggregating data, agreeing on a limited embargo
period or publishing selected datasets. It must be duly noted that the
automotive industry is highly competitive and ReFreeDrive project aims at
providing its industrial partners with added value innovation, which would not
be such if made public.
## _1.1 Purpose_
The purpose of the ODMP is to provide an analysis of the main elements of the
data management policy that will be used by the Consortium with regard to the
project research data.
The ODMP covers the complete research data life cycle. It describes the types
of research data that will be generated or collected during the project, the
standards that will be used, how the research data will be preserved and what
parts of the datasets will be shared for verification or reuse. Figure 1 shows
the research data life cycle, taken from [1], which has been used as guideline
for this deliverable.
The ODMP is not a fixed document, but will evolve during the lifespan of the
project, particularly whenever significant changes arise such as dataset
updates or changes in Consortium policies.
This document is the first version of the ODMP, delivered in Month 6 of the
project. It includes an overview of the datasets to be produced by the
project. The ODMP will be updated in month 18 if needed and again at the end
of the project.
This document has been produced following the EC guidelines for projects
participating in this pilot and the additional considerations described in ANNEX I:
KEY PRINCIPLES FOR OPEN ACCESS TO RESEARCH DATA.
**Figure 1. Research Data Life Cycle**
## _1.2 Research data types_
For this first release of ODMP, the data types that will be produced during
the project are focused on the Description of the Action (DoA) and on the
results obtained in the first months of the project.
According to such consideration, Table 1 reports a list of indicative types of
research data that each of the ReFreeDrive work packages will produce. This
list may be adapted with the addition or removal of datasets in the next
versions of the ODMP to take into consideration the project developments. A
detailed description of each dataset is given in the following sections of
this document.
**Table 1. Work packages and expected datasets of the ReFreeDrive project**
<table>
<tr>
<th>
**#**
</th>
<th>
**Work Package**
</th>
<th>
**Lead Partner**
</th>
<th>
**Expected Datasets**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Project Management
</td>
<td>
CIDAUT
</td>
<td>
None
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Boundary Conditions
</td>
<td>
PRIVÉ
</td>
<td>
KPIs, driving cycles and boundary conditions values
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Induction Machine Design
</td>
<td>
MDL
</td>
<td>
Design Simulation results
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Synchronous Reluctance
Machine Design
</td>
<td>
IFPEN
</td>
<td>
Design Simulation results
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
e-Drive Design
</td>
<td>
PRIVÉ
</td>
<td>
Control Simulation results
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Prototype Manufacturing
</td>
<td>
UAQ
</td>
<td>
Prototype Pictures
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
Powertrain Testing, Vehicle integration and Validation
</td>
<td>
CIDAUT
</td>
<td>
Test Results
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
Techno Economic Evaluation and Exploitation
</td>
<td>
JLR
</td>
<td>
Environmental assessment (LCA results)
</td> </tr>
<tr>
<td>
**9**
</td>
<td>
Dissemination and Communication
</td>
<td>
UAQ
</td>
<td>
None
</td> </tr> </table>
Specific datasets may be associated with scientific publications (i.e.
underlying data), public project reports and other raw or curated data
not directly attributable to a publication. The policy for open access is
summarised in Figure 2 below.
**Figure 2. ReFreeDrive timing of the different open access options** (exploitable research data are not deposited; deposited data linked to a publication follow Gold Open Access at the publication date or Green Open Access within 6 months of it; data not linked to a publication are deposited by project end)
Research data linked to exploitable results will not be put into the open
domain if doing so would compromise their commercialization prospects or if
they have inadequate protection, in line with H2020 obligations. The rest of
the research data will be deposited in an open access repository.
When the research data is linked to a scientific publication, the provisions
outlined in the Grant and Consortium agreements will be followed. Research
data needed to validate the results presented in the publication should be
deposited at the same time for “Gold” Open Access 1 or before the end of the
embargo period for “Green” Open Access 2 . Underlying research data will
consist of selected parts of the general datasets generated, for which the
decision to make that part public has been made.
Other datasets will be related to public reports or be useful for the
research community. They will be selected parts of the general datasets
generated, or full datasets, and will be published as soon as possible.
## _1.3 Responsibilities_
Each ReFreeDrive partner has to respect the policies set out in this ODMP.
Datasets have to be created, managed and stored appropriately and in line with
applicable legislation.
The Project Coordinator has a particular responsibility to ensure that data
shared through the ReFreeDrive website are easily available, but also that
backups are performed and that proprietary data are secured. CIDAUT will
ensure dataset integrity and compatibility for its use by different partners
during the project lifetime.
Validation and registration of datasets and metadata will be done by CIDAUT in
close collaboration with the Work Package Leader generating the respective
datasets. Metadata constitutes an underlying definition or description of the
datasets, and facilitates finding and working with particular instances of
data.
Backing up data for sharing through open access repositories will be done by
CIDAUT.
Quality control of these data is the responsibility of the relevant WP leader,
supported by the Project Coordinator.
If datasets are updated, the partner that possesses the data has the
responsibility to manage the different versions and to make sure that the
latest version is available in the case of publicly available data. WP1 will
provide naming and version conventions.
Last but not least, all partners must consult the concerned partner(s) before
publishing data in the open domain that can be associated to an exploitable
result, as outlined in the Grant and Consortium Agreements of this project.
# 2 Data Sharing
Relevant datasets will be stored in ZENODO [2] , which is the open access
repository of the Open Access Infrastructure for Research in Europe, OpenAIRE
[3]. ZENODO builds and operates a simple and innovative service that enables
researchers, scientists, EU projects and institutions to share and showcase
multidisciplinary research results (data and publications) that are not part
of the existing institutional or subject-based repositories of the research
communities. ZENODO enables researchers, scientists, EU projects and
institutions to:
* easily share the long tail of small research results in a wide variety of formats, including text, spreadsheets, audio, video, and images, across all fields of science;
* display their research results and get credited, by making the research results citable and integrating them into existing reporting lines to funding agencies like the European Commission;
* easily access and reuse shared research results.
Data access policy will be unrestricted since no confidentiality or
Intellectual Property Rights (IPR) issues are expected regarding the
environmental monitoring datasets. All collected datasets will be disseminated
without an embargo period unless linked to a green open access publication.
Data objects will be deposited in ZENODO under:
* Open access to metadata and data files, with data files provided over standard protocols such as HTTP and the Open Archive Initiative Protocol for Metadata Harvesting (OAI-PMH); a harvesting sketch follows this list.
* Use and reuse of data permitted.
* Privacy of its users protected.
## _2.1 Findable, Accessible, Interoperable, Reusable (FAIR) Principles_
FAIR Principles definition as referenced from FAIR principles description [4].
**2.1.1 To be Findable:**
* **F1** : (meta)data are assigned a globally unique and persistent identifier
  * A Digital Object Identifier (DOI) is issued to every published record on Zenodo.
* **F2** : data are described with rich metadata (defined by Reusable principle R1 below)
  * Zenodo's metadata is compliant with DataCite's Metadata Schema minimum and recommended terms, with a few additional enrichments. The DataCite Metadata Schema is a list of core metadata properties chosen for an accurate and consistent identification of a resource for citation and retrieval purposes, along with recommended use instructions.
* **F3** : metadata clearly and explicitly include the identifier of the data they describe
  * The DOI is a top-level and mandatory field in the metadata of each record.
* **F4** : (meta)data are registered or indexed in a searchable resource
  * Metadata of each record is indexed and searchable directly in Zenodo's search engine immediately after publishing.
  * Metadata of each record is sent to DataCite servers during DOI registration and indexed there.
**2.1.2 To be Accessible:**
* **A1** : (meta)data are retrievable by their identifier using a standardised communications protocol
  * Metadata for individual records as well as record collections are harvestable using the OAI-PMH protocol by the record identifier and the collection name.
  * Metadata is also retrievable through the public Representational State Transfer (REST) Application Programming Interface (API).
* **A1.1** : the protocol is open, free, and universally implementable
  * See point A1. OAI-PMH and REST are open, free and universal protocols for information retrieval on the web.
* **A1.2** : the protocol allows for an authentication and authorisation procedure, where necessary
  * Metadata are publicly accessible and licensed under public domain. No authorisation is ever necessary to retrieve them.
* **A2** : metadata are accessible, even when the data are no longer available
  * Data and metadata will be retained for the lifetime of the repository. This is currently the lifetime of the host laboratory CERN, which has an experimental programme defined for at least the next 20 years.
  * Metadata are stored in high-availability database servers at CERN, which are separate from the data itself.
**2.1.3 To be Interoperable:**
* **I1** : (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation
  * Zenodo uses JavaScript Object Notation (JSON) Schema as the internal representation of metadata and offers export to other popular formats such as Dublin Core or MARCXML. The Dublin Core Schema is a small set of vocabulary terms that can be used to describe digital resources (video, images, web pages, etc.), as well as physical resources such as books or CDs, and objects like artworks. The full set of Dublin Core metadata terms can be found on the Dublin Core Metadata Initiative website. MARCXML is an XML schema based on the common MARC21 standards. MARCXML was developed by the Library of Congress and adopted by it and others as a means of facilitating the sharing of, and networked access to, bibliographic information. Being easy to parse by various systems allows it to be used as an aggregation format, as it is in several software packages.
* **I2** : (meta)data use vocabularies that follow FAIR principles
  * For certain terms Zenodo refers to open, external vocabularies, e.g. license (Open Definition), funders (FundRef) and grants (OpenAIRE).
* **I3** : (meta)data include qualified references to other (meta)data
  * Each referenced external piece of metadata is qualified by a resolvable URL.
**2.1.4 To be Reusable:**
* **R1** : (meta)data are richly described with a plurality of accurate and relevant attributes
  * Each record contains a minimum of DataCite's mandatory terms, optionally with additional DataCite recommended terms and Zenodo enrichments.
* **R1.1** : (meta)data are released with a clear and accessible data usage license
  * License is one of the mandatory terms in Zenodo's metadata, referring to an Open Definition license.
  * Data downloaded by users is subject to the license specified in the metadata by the uploader.
* **R1.2** : (meta)data are associated with detailed provenance
  * All data and metadata uploaded is traceable to a registered Zenodo user.
  * Metadata can optionally describe the original authors of the published work.
* **R1.3** : (meta)data meet domain-relevant community standards
  * Zenodo is not a domain-specific repository, yet through compliance with DataCite's Metadata Schema, its metadata meets one of the broadest cross-domain standards available.
## _2.2 Archiving and Preservation_
Zenodo is hosted by CERN which has existed since 1954 and currently has an
experimental programme defined for the next 20+ years. CERN is a memory
institution for High Energy Physics and renowned for its pioneering work in
Open Access. Organisationally Zenodo is embedded in the IT Department,
Collaboration Devices and Applications Group, Digital Repositories Section
(IT-CDADR).
Zenodo is offered by CERN as part of its mission to make available the results
of its work (CERN Convention, Article II, §1 [5]).
Data files and metadata are backed up nightly and replicated into multiple
copies in the online system.
**2.2.1 Data storage**
All files uploaded to Zenodo are stored in CERN’s EOS service 3 in an
18-petabyte disk cluster. Each file copy has two replicas located on different
disk servers.
For each file, Zenodo stores two independent MD5 4 checksums. One checksum is
stored by Invenio 5 [6] and used to detect changes to files made from
outside of Invenio. The other checksum is stored by EOS and used for
automatic detection and recovery of file corruption on disks.
Zenodo may, depending on access patterns in the future, move the archival
and/or the online copy to The CERN Advanced STORage manager (CASTOR) [7] in
order to minimize long-term storage costs.
EOS is the primary low latency storage infrastructure for physics data from
the Large Hadron Collider 6 (LHC) [8], and CERN currently operates
multiple instances totalling 150+ petabytes of data with expected growth rates
of 30-50 petabytes per year. CERN’s CASTOR system currently manages 100+
petabytes of LHC data which are regularly checked for data corruption.
Invenio provides an object store like file management layer on top of EOS
which is in charge of e.g. version changes to files.
# 3 Datasets Description
Table 2 lists each of the datasets that will be produced during the
project, with their description and importance to the project.
**Table 2. Datasets generated by the ReFreeDrive Project**
<table>
<tr>
<th>
**Who (WPs**
**generating the dataset)**
</th>
<th>
**What (Dataset description)**
</th>
<th>
**Why (Importance of this dataset)**
</th>
<th>
**How (use of this dataset in the project)**
</th> </tr>
<tr>
<td>
WP2
</td>
<td>
KPIs: Project targets at vehicle levels
</td>
<td>
These figures set the design space for the project
electric motors
</td>
<td>
These values will drive the different designs
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
Full Subsystems
Technical specifications
</td>
<td>
Assigned boundary conditions at the subsystem level
</td>
<td>
These values will drive the different designs and lead the in vehicle
integration activities
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
Simulation Results: Electromagnetic, mechanical,
thermal, Noise
Vibration,
Harshness (NVH)
</td>
<td>
Induction Machine design expected result will help comparisons with other
technologies
</td>
<td>
This dataset will be the basis for at least one scientific publication.
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
Simulation Results: Electromagnetic, mechanical, thermal, NVH
</td>
<td>
Synchronous Reluctance design expected results will help comparisons with
other technologies
</td>
<td>
This dataset will be the basis for at least one scientific publication.
</td> </tr>
<tr>
<td>
WP3 & WP4
</td>
<td>
Material characterization values
</td>
<td>
Grain oriented and non grain oriented materials magnetic and mechanical
performance will help other designers reuse this knowledge
</td>
<td>
This information will be used for design purposes. This dataset will be the
basis for at least one scientific publication.
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
Control algorithm design results
</td>
<td>
Implementation of control strategies or innovation in control strategies
</td>
<td>
This dataset will be the basis for at least one scientific publication. It
will drive the power electronic configuration.
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
Prototype
Manufacturing
Pictures
</td>
<td>
Comparison with other technologies, technology demonstration feasibility
</td>
<td>
The project will use these pictures for communication purposes
</td> </tr>
<tr>
<td>
WP7
</td>
<td>
Motor Test Results: test results for the integrated e-Drive
</td>
<td>
These data will enable a comparison with other technologies and help designers
set ambitious
targets in future designs
</td>
<td>
This dataset will be the basis for at least one scientific publication.
The project will use these data for the techno economic evaluation and
exploitation strategies
</td> </tr>
<tr>
<td>
WP7
</td>
<td>
Powertrain test results: results in the powertrain test
</td>
<td>
These data will enable a comparison with other technologies and help
</td>
<td>
This dataset will be the basis for at least one scientific
</td> </tr>
<tr>
<td>
</td>
<td>
bench
</td>
<td>
designers set ambitious
targets in future designs
</td>
<td>
publication.
The project will use these data for the techno economic evaluation and
exploitation strategies
</td> </tr>
<tr>
<td>
WP7
</td>
<td>
Vehicle driving test results
</td>
<td>
These data will enable a comparison with other technologies at the vehicle
level
</td>
<td>
This dataset will be the basis for at least one scientific publication.
The project will use these data for the techno economic evaluation and
exploitation strategies
</td> </tr>
<tr>
<td>
WP7
</td>
<td>
Vehicle integration pictures
</td>
<td>
Technology demonstration
feasibility
</td>
<td>
The project will use these pictures for communication purposes
</td> </tr>
<tr>
<td>
WP8
</td>
<td>
Life Cycle Analysis
(LCA) results:
environmental
assessment and comparatives of the studied technologies
</td>
<td>
LCA data are used throughout the electric vehicle market for marketing,
communication, and new designs comparative evaluations.
</td>
<td>
LCA will be key to demonstrating the environmental advantages offered by the
rare-earth-free ReFreeDrive technologies
</td> </tr> </table>
---
0054_RINGO_730944.md | Horizon 2020 | https://phaidra.univie.ac.at/o:1140797
# 1\. INTRODUCTION
This initial Data Management Plan describes the existing and planned data
management, data access and data security policies of ICOS RI.
The structure of this report consists of a general overview of the data
management of the RINGO project as a whole, as well as a more detailed
description of data management.
The mission of the European Research Infrastructure ‘Integrated Carbon
Observation System’ (ICOS RI) is to enable research to understand
greenhouse gas (GHG) budgets and perturbations. The ICOS RI is a distributed
research infrastructure that provides the long-term observations required to
understand the present state and predict future behaviour of the global carbon
cycle and GHG emissions. ICOS RI ensures continuous, high-precision and
long-term greenhouse gas measurements in Europe and adjacent key regions of
Africa and Eurasia. The backbone of ICOS RI is the three measurement station
networks: the ICOS atmospheric, ecosystem and ocean networks. Together they
are organized within national measurement networks. Technological developments
and implementations, related to GHGs, will be promoted by the linking of
research, education and innovation.
ICOS Central Facilities (ICOS CF), which are Atmospheric Thematic Centre
(ATC), Ecosystem Thematic Centre (ETC), Ocean Thematic Centre (OTC) and the
Central Analytical Laboratories (CAL) have the specific tasks of collecting
and processing the data and samples (e.g. flask or radiocarbon samples in
Atmosphere or soil and plant tissue samples in Ecosystem observational
network) received from the national measurement networks.
ICOS ERIC is the legal entity of ICOS RI established to coordinate the
operations and the data of ICOS distributed research infrastructure, and to
develop, monitor and integrate the activities and the data of ICOS RI. The
ICOS Carbon Portal is the ICOS data centre from where ICOS data and ancillary
data sets will be published and be accessible for the users. Carbon Portal is
responsible for handling and providing ICOS data products. Carbon Portal is
being designed and envisioned as the single access point for environmental
scientists to discover, obtain, visualise and track observation measurements
produced from the observation stations as quickly as possible.
The design of the overall ICOS RI data management is challenged by the
complicated requirements of dataflow from the distributed acquisition via
centralized processing and quality assessment to the publication of the data
products. The ICOS national observation stations are highly distributed; data
are semantically diverse, organisational features of the National Networks
differ from country to country, and observational measurements and resulting
data life cycles vary between observational networks. Data definitions,
transfers and responsibilities have been discussed within ICOS RI for several
years. These discussions have been documented in numerous documents.
# 2\. Data summary
This section specifies the purpose of the data, the formats and origin, the
size and to whom it will be useful.
There are different kinds of data generated in the project. First of all,
there are experiments where new observational technologies are developed and
tested, either in the lab or in the field. The data is used to evaluate the
performance of the instrumentation and/or methods. Data and metadata are
essential for documentation and publishing of the results in the literature.
Data formats will be very similar to the current standard data formats used in
ICOS, except for new instrument specific raw data formats that are often
proprietary. The order of magnitude of the data generated is several gigabytes.
In several work packages methods are developed for new or existing measurement
strategies. For this existing (ICOS or pre-ICOS) measurements and/or model
simulations are used. The model data will be generated using existing models.
The data will be used to evaluate the different alternatives for measurement
strategies. Data and metadata are essential for documentation and publishing
of the results in the literature. Data formats will be very similar to the
current standard data formats used in ICOS and mainly consist of NetCDF files.
The order of magnitude of the data is tens of gigabytes.
In WP5 the historical data sets (pre-ICOS) will be re-evaluated for a limited
number of stations by going back to the original raw data and a re-analysis
using methodologies as close as possible to the current data processing
standards and strategies of ICOS RI, including evaluation of the uncertainties
in the individual measurements. The data will be used and offered to the users
as improved, quality controlled and recalibrated data sets extending the ICOS
dataset to the pre-ICOS period. This dataset will be essential for inverse
modelling experiments to complement the ICOS dataset with at least 10 years of
historical data. The order of magnitude of the data size is one gigabyte. This data
will be published through the ICOS Carbon Portal following the ICOS data
license (CC4BY) and data policy.
# 3\. FAIR data
# 3.1 Data findability, including provisions for metadata
This section outlines the discoverability and identifiability of the data and
the use of persistent and unique identifiers.
All data will be curated using standard EUDAT B2 services, making sure that
all data is discoverable through B2FIND. All final (Level 2) datasets will be
shared through the ICOS Carbon Portal, which will be fully implementing the
FAIR principles. All EUDAT and ICOS CP services make use of ePIC handles for
the identification of all data objects. (Collections of) Level 2 products will
also be minted DOI identifiers based on DataCite. All ICOS data object
metadata is shared with the GEOSS portal.
Naming conventions are not relevant due to the use of persistent identifiers
and the linked machine-readable description through metadata complying with
INSPIRE, with ISO 19115 as a subset. New versions of data objects naturally
receive their own unique persistent identifiers, and a link to the older
version is added in the metadata; likewise, the older version’s metadata is
updated with a reference to the newer version.
Keywords are part of the metadata, following the appropriate (community)
standards.
# 3.2 Data accessibility
This section specifies the extent of open access, how the data is made
available, what methods and tools are used.
Experiment and model data might be openly accessible only after the end of the
project, as soon as the results have been published. All publications will be
open access.
Whenever possible data will be openly accessible following the ICOS CC4BY
license. Through the EUDAT B2 services of B2FIND, B2DROP and B2SHARE and the
ICOS Carbon portal all metadata and data can be found and accessed.
Where relevant and possible with regard to property rights, developed software
will be made available through the open source repository GitHub or similar,
using a GPL license.
# 3.3 Data interoperability
This section covers what data and metadata vocabularies, standards and
methodologies are used to facilitate interoperability.
All RINGO and ICOS data and metadata are designed for interoperability and in
all cases follow, and in some cases even form, de facto (community) standards.
All metadata will be available in INSPIRE-compliant form. All ICOS Carbon
Portal data is available as linked open data and through an open SPARQL
endpoint (see the sketch below). The RINGO project-specific data will be
available through the EUDAT B2 services following the same standards.
# 3.4 Data re-use
This section specifies data licencing, availability and length of time for re-
use.
Wherever possible the data will be shared right after production following the
Creative Commons 4.0 International License with Attribution (CC4BY).
Experimental test data will in some cases only become available after the
end of the project or the publication of the results, whichever comes first,
and will be shared under the same CC4BY license.
The CC4BY license guarantees maximum re-use (and redistribution) while
maintaining the traceability of use and credit to the data providers and
their sponsors.
Data quality assurance and control is central to, and the raison d'être of,
ICOS and the RINGO project. About 80% of the effort spent in the ICOS Thematic
Centres is directed at data quality assurance.
ICOS RI has a time horizon of at least 20 years, the data will remain useful
and usable beyond that period. For example, now the time-series generated
since 1957 of CO 2 concentrations at Mauna Loa are still being used.
## 4\. Allocation of resources
The costs of making data FAIR can be estimated at 100% of the effort of the
ICOS Carbon Portal and 25% of the operational costs of the ICOS Thematic
Centres. About 10% of the RINGO budget is used for improving the
interoperability of the ICOS metadata.
The cost of long-term preservation of the data is at this moment impossible to
estimate. In the long term, the costs of storage are foreseen to decrease
tremendously. At this moment, the storage costs for ICOS are foreseen to be on
the order of 50 k€ per year.
## 5\. Data security
ICOS and RINGO produce non-sensitive data. Personal information is processed
and stored according to the ICOS privacy policy. For secure storage, ICOS
relies on the European e-Infrastructures EUDAT and EGI.
## 6\. Ethical aspects
Not relevant for the RINGO data.
## 7\. Other
There are several documents that are related to the data management plan
developed in RINGO. These are ICOS Data Policy document, ICOS Data lifecycle
document, ICOS Data License, and ICOS measurement protocol documentation for
Atmosphere, Ocean and Ecosystem community.
---
0057_Net4Society5_838335.md | Horizon 2020 | https://phaidra.univie.ac.at/o:1140797
# Executive Summary
This document is a deliverable of the Net4Society5 project, which is funded by
the European Union’s Framework Programme for Research and Innovation, Horizon
2020, under Grant Agreement # 838335.
This data management plan describes the data which will be used by the
project, in particular how this data will be collected, the activities for
which it will be utilized, and how and where this data will be stored.
Subsequent versions of this data management plan will outline how the data
used in Net4Society5 will be shared and preserved.
# Introduction
This Data Management Plan (DMP, D5.2) is a deliverable (month 6) of the
Net4Society5 project, which is funded by the European Union’s Horizon 2020
Programme under Grant Agreement # 838335. Net4Society5 is the transnational
network for National Contact Points (NCPs) working in Societal Challenge 6 –
“Europe in a changing world: inclusive, innovative and reflective Societies”
of Horizon 2020.
The main aims of the project focus on providing NCPs with professional and
tailor-made services designed to help them support their research communities
in their efforts to secure EU-funding. Also incorporated in these aims is the
objective of improving and strengthening the integration of SSH research and
researchers throughout the whole of Horizon 2020 as a way of fostering
interdisciplinarity. These aspects, along with promoting the outcomes of
social scientific and humanities research and its impact on society, comprise
the general scope of the project.
This document constitutes a first version of Net4Society’s Data Management
Plan. The purpose of this plan is to describe the main sorts of activities and
types of data used by the project, and the policy for data management to be
followed by the consortium. In this first version, focus rests on a
description of the various datasets generated and used by Net4Society5.
Because this DMP is a living document, subsequent versions of the plan will go
into further detail on the specifics concerning actual data management, as
well as reflect any changes made to management procedures. The plan will thus
evolve and cover the entire project lifecycle, including where, how, and with
which standards data is collected and used in the project.
The following section provides an overview of the specific datasets which will
be generated by
Net4Society5. In addition, this section describes the origins of the datasets
and the work packages to which they belong. This section is then followed by a
general overview of Net4Society5’s participation in the ongoing Pilot on Open
Research Data and approach to personal data protection. The remainder of the
plan then presents the datasets and concludes with a brief outlook toward the
next DMP update.
# Data Summary
The following table presents the different datasets that will be generated and
used during Net4Society5.
The list provided here presents an overview of the sets. As such, it is an
indicative list and will be adapted (either addition or removal of datasets)
as the project develops. Any changes will be taken into account in subsequent
versions of the DMP.
<table>
<tr>
<th>
**#**
</th>
<th>
**Data type**
</th>
<th>
**Description & Purpose **
</th>
<th>
**Utility**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Net4Society NCP mailing list
</td>
<td>
**_Description_ ** : This dataset contains contact information of the main
target group of the Net4Society project—
Societal Challenge 6
(SC6) National Contact Points (NCPs). NCPs are individuals who have been
officially nominated by their national bodies to provide assistance in the
form of information on all matters related to securing EU funding under the
H2020
</td>
<td>
This data could be useful for research related to better understanding NCP
needs, as well as for future projects which access researchers in SSH
disciplines.
</td> </tr> </table>
**Table 1. Datasets overview**
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset (DS) name**
</th>
<th>
**Origin**
</th>
<th>
**WP #**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
1
</td>
<td>
DS1_Subscribers_Net4Society_NCP_mailing_list
</td>
<td>
Publicly available data
</td>
<td>
1,2,4,5
</td>
<td>
.csv
</td> </tr>
<tr>
<td>
2
</td>
<td>
DS2_Net4Society_External Newsletter_Subscriber
</td>
<td>
Publicly available data
</td>
<td>
4
</td>
<td>
.csv
</td> </tr>
<tr>
<td>
3
</td>
<td>
DS2_Net4Society_Internal Newsletter_Subscriber
</td>
<td>
Publicly available data
</td>
<td>
4
</td>
<td>
.csv
</td> </tr>
<tr>
<td>
4
</td>
<td>
DS4_SSH Opportunities Document
</td>
<td>
Primary data
</td>
<td>
3
</td>
<td>
.docx; .pdf
</td> </tr>
<tr>
<td>
5
</td>
<td>
DS5_SSH Integration Monitoring Report_ESR Analysis
</td>
<td>
European Commission
</td>
<td>
3
</td>
<td>
.xls
</td> </tr>
<tr>
<td>
6
</td>
<td>
DS6_Net4Society_Internal_Website
</td>
<td>
publicly available data
</td>
<td>
4
</td>
<td>
.html
</td> </tr>
<tr>
<td>
7
</td>
<td>
DS7_Research Directory and Partner Search Tool
</td>
<td>
publicly available data
</td>
<td>
2
</td>
<td>
.csv
</td> </tr>
<tr>
<td>
8
</td>
<td>
DS8_Institutional Portraits
</td>
<td>
publicly available data
</td>
<td>
2
</td>
<td>
.xls
</td> </tr>
<tr>
<td>
9
</td>
<td>
DS9_Surveys
</td>
<td>
primary data
</td>
<td>
2,3,5
</td>
<td>
.xls
</td> </tr> </table>
Table 2 below describes the dataset and the purpose of the generation and
collection of data in relation to the project’s objectives. The table also
explains the utility of this data collection and generation and for whom it
might be useful.
**Table 2. Datasets description, purpose, and utility**
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
Programme. Information included in this dataset is the full name,
organizations, countries of origin, and email addresses of the SC6 NCPs.
**_Purpose_ ** : This information is collected in order to keep SC6 NCPs up-
to-date on news and developments related to H2020 that are necessary for their
daily work. This mailing list also provides the basis for dissemination of
information related to Net4Society project activities.
</th>
<th>
</th> </tr>
<tr>
<td>
2
</td>
<td>
DS2_Net4Society_External Newsletter_Subscriber
</td>
<td>
**_Description_ : ** This dataset is a mailing list of all contact information
(full name, email address, organization, country of origin) of non-NCP
stakeholders within the SC6 community. These stakeholders include researchers,
policy makers, research managers, and members of civil society organizations.
**_Purpose_ ** : To be able to send Net4Society5’s external newsletter and
e-magazine, ISSUES, to subscribers.
</td>
<td>
This data is useful for disseminating information related to EU-based research
and funding.
</td> </tr>
<tr>
<td>
3
</td>
<td>
DS2_Net4Society_Internal Newsletter_Subscriber
</td>
<td>
**_Description_ : ** This dataset is a mailing list of all contact information
(full name, email address, organization, country of origin) of SC6 NCPs who
voluntarily sign-up to be part of the network. **_Purpose_ ** : To be able to
send updates on project activities (past, up-
</td>
<td>
This data is useful for disseminating information related to Net4Society
organized events, EU-level events (DG-RTD) EU-based research and funding, and
relevant policy updates.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
coming) to SC6 NCPs.
</th>
<th>
</th> </tr>
<tr>
<td>
4
</td>
<td>
SSH Opportunities Document
</td>
<td>
**_Description_ ** : The data collected here is a compiled list of 2020 SSH-
flagged topics on all
H2020 Work Programmes.
**_Purpose_ ** : This dataset will be used to produce the 2019 edition of the
“Opportunities document for researchers from the Socio-economic sciences and
Humanities in H2020”.
</td>
<td>
This data is useful for providing potential applicants to Horizon 2020 funding
with key information for where they can apply. It is extremely well-received
and great value and use by researchers at all career levels, and in all
countries within Europe and the world (where applicable).
</td> </tr>
<tr>
<td>
5
</td>
<td>
DS5_SSH Integration Monitoring Report_ESR Analysis
</td>
<td>
**_Description_ : ** This dataset includes statistical information for all
projects funded under SSH-flagged topics in 2018 (Evaluation Summary Reports,
anonymised Parts A & B of Grant Agreements (without project participant
names), project acronyms, numbers, project participant organizations, LE
country code, LE participant description).
**_Purpose_ ** : This data is used to develop a second set of data which will
be used in the production of the data analysis for the
European Commission publication _5th Monitoring Report on SSH-flagged topics
funded in 2018 under Societal Challenges and Industrial Leadership priorities.
Integration of Social Sciences and_
_Humanities in Horizon_
_2020: Participants,_
_Budget and Disciplines_ .
</td>
<td>
This data is useful for evaluating the success of efforts to strengthen the
integration of SSH throughout the whole of H2020. The statistics can be useful
for research managers and funding organizations at local, national, and EU-
levels seeking to understanding areas where social scientific and humanities
researchers have been successful in EU
Framework
Programmes.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Research Directory and Partner Search Tool
</td>
<td>
**_Description_ ** : The dataset
</td>
<td>
This data is useful for
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
contains full names, organizational affiliations, research interests and areas
of expertise, email addresses, and phone numbers of researchers in an online
database located. The tool is accessible via the Net4Society project website.
The tool is updated with new calls for proposals each time a new SC6 Work
Programme is published. **_Purpose_ ** : The data is collected to support
researchers seeking partners for consortiumbuilding purposes in response to
calls announced in the various Horizon 2020 SC6 Work Programmes.
This data is voluntarily entered by interested researchers, primarily
coordinators, and stored in a repeatedly accessible online database.
</th>
<th>
any researcher looking to make professional connections, and in particular for
the purpose of building consortia to write and submit research proposals in
Horizon 2020. The tool is always accessible, not only when new calls are open,
meaning that interested parties always have an opportunity to search the
database and reach
out to potential partners.
</th> </tr>
<tr>
<td>
7
</td>
<td>
Institutional Profiles
</td>
<td>
**_Description_ ** : The data collected here are the full names of
researchers, their organizational affiliations, professional profiles, email
addresses, and phone numbers. This information is associated with researchers
of excellence located in EU member states which are underperforming in Horizon
2020. **_Purpose_ ** : This data is collected in order to compile and
publish profiles of researchers
</td>
<td>
The data collected can be useful for public authorities at the EU level
interested in strengthening the visibility and awareness of researchers in
non-traditionally strong countries within EU Framework Programmes.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
and institutions of excellence in underperforming EU member states. The aim is
to boost the chances of these researchers becoming part of winning consortia,
as well as becoming coordinators of projects. An additional aim is to
increase the success rate of researchers within H2020 who do not stem from a
traditional powerhouse country (i.e. Germany, France, England).
</th>
<th>
</th> </tr>
<tr>
<td>
8
</td>
<td>
Surveys
</td>
<td>
**_Description_ ** : This dataset contains answers from feedback forms and
other surveys aimed at participants of Net4Society events. Feedback forms
collect data from NCPs, whereas other surveys include quantitative surveys
targeting researchers who have taken part in
Net4Society-organized brokerage events. **_Purpose_ ** : Both types of surveys
serve the purpose of providing Net4Society5 insight into the nature of the
experiences event participants have had, and to gather information about what
sort of specific services they need or are seeking. The idea is to gain
insight into areas where services and tools provided by the project can be
enhanced, modified, or newly created to better address project
</td>
<td>
This data is useful to Net4Society, as it helps the project better
strategically plan its activities and consider which groups to address. This
information can also be of interest to public bodies interested in
understanding the needs and interests of stakeholders within the SC6
community.
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
stakeholder needs.
</td>
<td>
</td> </tr> </table>
# FAIR Data
_Participation in the Pilot on Open Research Data_
Although Net4Society5 as a project does not actively generate research data,
it does intend to make use of data of a personal nature in many of its
activities. Because of this reliance on personal data, the project has opted
to participate in the ongoing Pilot on Open Research Data. The consortium of
this project takes the protection of data very seriously, and it is for this
reason that it has chosen to produce this data management plan.
_Personal Data Protection_
For many of its planned activities, Net4Society5 will need to collect personal
data. The personal data to be collected will be of a basic nature (i.e. full
name, professional background, email address, phone number, organization,
country of origin). In collecting and using this data, Net4Society5 will work
in compliance with the EU’s General Data Protection Regulation (GDPR), as well
as all relevant national regulations. The principle of informed consent will
be followed to further ensure the proper handling of personal data, especially
in the administration of planned surveys. In such cases, data subjects who are
asked, for example, to respond to either event feedback forms, or satisfaction
surveys, will be explicitly informed over the use and purpose of the personal
data to be collected.
To further protect the personal data collected, Net4Society5 will make use of
secure folders which will be placed on secure servers. Access to these folders
will only be possible for those individuals specifically assigned to the task.
No other project members will have access.
# Outlook toward next DMP
The next version of the DMP will be prepared for month 19, which is the final
month of Net4Society’s runtime. In the next version, updates will be made on
how the data will be made interoperable, and on where and how the data will be
stored.
2\. DATA SET DESCRIPTION
# A. DESCRIPTION, NATURE AND SCALE OF THE DATA COLLECTED
The data generated by the M-CUBE project are related to the collection,
characterization and standardization of calculation codes for measurements,
the creation of antennas, and the production of various kinds of improved MRI
images.
The data will be collected through web interfaces on the intranet website of
the project and will serve to build up a web-based catalogue (the M-CUBE
repository) accessible on the M-CUBE public website. Hence, it will help
disseminate worldwide the data generated through the project.
# B. ORIGINS OF COLLECTED DATA
The origins of the collected data are the following:
1. UNIVERSITE D’AIX-MARSEILLE (AMU)
1a) Institut Fresnel
1b) Centre de Résonance Magnétique Biologique et Médicale
2. COMMISSARIAT A L’ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES (CEA)
3. UNIVERSITE CATHOLIQUE DE LOUVAIN (UCL)
4. CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE (CNRS)
5. UNIVERSITAIR MEDISCH CENTRUM UTRECHT (UMC UTRECHT)
6. AALTO-KORKEAKOULUSAATIO (AALTO)
7. SAINT PETERSBURG NATIONAL RESEARCH UNIVERSITY OF INFORMATION TECHNOLOGIES, MECHANICS AND OPTICS (ITMO)
8. THE AUSTRALIAN NATIONAL UNIVERSITY (ANU)
9. MULTIWAVE TECHNOLOGIES AG (Multiwave)
10. MR COILS BV (MR Coils BV)
# C. TO WHOM CAN THESE DATA BE USEFUL?
The aim of collecting these data is to organise the M-CUBE collection and to
propose a web based catalogue to the scientific communities. It will
constitute a readily accessible repository at the European and world level.
**It helps:**
* the scientific community of physicists in accessing high-quality data and tools for improving the next generation of MRI images;
* health authorities and doctors in facilitating the detection of diseases thanks to better images.
3\. HOW WILL THE DATA BE CREATED AND COLLECTED?
Each M-CUBE project partner generates data that supports the creation of new
kinds of antennas and facilitates radical improvements in the spatial and
temporal resolution of MRI images. These data are collected into a database,
through the M-CUBE website, using formatted forms that allow easy data entry.
All M-CUBE partners have provided details on what type of data they are going
to generate, a short description of the data, their formats, whether they wish
to open these data sets, and how they are going to archive and preserve them.
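To make the expected content of such a form entry concrete, the sketch below
shows how one catalogue record could be represented; the field names and
values are illustrative assumptions only, not the actual M-CUBE form schema.

```python
# A minimal sketch of one M-CUBE catalogue record as it might be
# captured through the website form. Field names and values are
# illustrative assumptions, not the project's actual schema.
import json

record = {
    "partner": "AMU/FRESNEL",
    "data_generated": "Calculation codes",
    "description": "Analytical model codes to calculate the resonant "
                   "frequency of volume and surface coils",
    "formats": ["MATLAB"],
    "open_access": True,          # opened after the related deliverable
    "preservation": ["Fresnel data center", "M-CUBE website"],
}

print(json.dumps(record, indent=2))
```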
The updated sets of data generated by M-CUBE partners are listed below.
## A. AMU – FRESNEL
<table>
<tr>
<th>
</th>
<th>
**AMU/FRESNEL**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data generated**
</td>
<td>
**Data set description**
</td>
<td>
**Formats**
</td>
<td>
**Open?**
</td>
<td>
**Data preservation**
</td> </tr>
<tr>
<td>
Calculation codes
</td>
<td>
Analytical model codes to calculate the resonant frequency of coils: volume
coils (e.g. birdcage) and surface coils (dipole loops)
</td>
<td>
MATLAB
</td>
<td>
YES. Post
deliverable
</td>
<td>
Fresnel Data center and
M-CUBE website
</td> </tr>
<tr>
<td>
Modelling results
</td>
<td>
Field maps, SNR maps, scattering matrix (Sij)
</td>
<td>
MATLAB,
.TXT, JPEG + others.
</td>
<td>
YES. Post
deliverable
</td>
<td>
Fresnel Data center and
M-CUBE website
</td> </tr>
<tr>
<td>
3D antenna
models
</td>
<td>
Architecture design, material &
composition
</td>
<td>
CST, HFSS,
COMSOL
</td>
<td>
YES. Post
deliverable
</td>
<td>
Fresnel Data center and
M-CUBE website
</td> </tr>
<tr>
<td>
Simulation results
</td>
<td>
Field maps, SNR maps, scattering matrix (Sij)
</td>
<td>
MATLAB,
.TXT, JPEG + others.
</td>
<td>
YES. Post
deliverable
</td>
<td>
Fresnel Data center and
M-CUBE website
</td> </tr>
<tr>
<td>
Electrical characterization
</td>
<td>
Field maps,
Electromagnetic compatibility
</td>
<td>
MATLAB,
.TXT, JPEG + others.
</td>
<td>
NO. Ethics reasons.
</td>
<td>
Fresnel Data center and
M-CUBE website
</td> </tr>
<tr>
<td>
Electronic characterization
</td>
<td>
Field maps,
Electromagnetic compatibility
</td>
<td>
MATLAB,
.TXT, JPEG + others.
</td>
<td>
NO. Ethics reasons.
</td>
<td>
Fresnel Data center and
M-CUBE website
</td> </tr> </table>
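As a worked illustration of the kind of analytical calculation these codes
perform, the sketch below tunes a simple LC coil model to the proton Larmor
frequency; the 7 T field strength and component values are assumptions made
for the example, not M-CUBE design data.

```python
# Sketch: tune a simple LC coil model to the proton Larmor frequency
# f = gamma_bar * B0, then verify with f0 = 1 / (2*pi*sqrt(L*C)).
# Field strength and inductance are illustrative assumptions.
import math

GAMMA_BAR = 42.577e6       # proton gyromagnetic ratio / 2*pi, Hz per tesla
B0 = 7.0                   # assumed static field (T)
L_COIL = 100e-9            # assumed coil inductance (H)

f_larmor = GAMMA_BAR * B0                                # ~298 MHz at 7 T
c_tune = 1.0 / (L_COIL * (2 * math.pi * f_larmor) ** 2)  # tuning capacitance

f0 = 1.0 / (2 * math.pi * math.sqrt(L_COIL * c_tune))
print(f"Larmor: {f_larmor / 1e6:.1f} MHz, coil resonance: {f0 / 1e6:.1f} MHz")
```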
## B. AMU – CRMBM
<table>
<tr>
<th>
**AMU/CRMBM**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data generated**
</td>
<td>
**Data set description**
</td>
<td>
**Formats**
</td>
<td>
**Open?**
</td>
<td>
**Data preservation**
</td> </tr>
<tr>
<td>
coil specifications
</td>
<td>
dimensions
</td>
<td>
_Value (cm)_
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
coil specifications
</td>
<td>
Maximum power
</td>
<td>
_Value (kW)_
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
coil specifications
</td>
<td>
homogeneity
</td>
<td>
_Values (FOV (cm) and percent variation over DSV area (cm))_
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
coil specifications
</td>
<td>
Signal to noise ratio in reference image obtained on phantom with reference
sequence
</td>
<td>
_Value (unitless)_
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
coil specifications
</td>
<td>
homogeneity Q factor of coils
</td>
<td>
dB - Hz - Ohm
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
coil evaluation
</td>
<td>
Reference voltage on reference phantom
</td>
<td>
_Value (V)_
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
B1 RF maps
(measurements)
</td>
<td>
maps
</td>
<td>
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
MRI measurements on small animals
</td>
<td>
MRI and MRS data
</td>
<td>
dicom
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
MRI measurements on phantoms
</td>
<td>
MRI and MRS data
</td>
<td>
dicom
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
MRI measurements on humans
</td>
<td>
MRI and MRS data
</td>
<td>
dicom
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
3D printer structure manufacturing
</td>
<td>
dimensions of antenna
</td>
<td>
ACIS SAT
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr>
<tr>
<td>
3D printer structure manufacturing
</td>
<td>
dimensions of antenna support for 3D printer
</td>
<td>
.STL
</td>
<td>
YES
</td>
<td>
on lab data server
</td> </tr> </table>
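Since the DICOM data sets above are marked as open, identifying information
has to be removed before release. The sketch below shows one way this could be
done with the pydicom library; the tag list is deliberately minimal and the
file name is a placeholder, whereas a real release would follow a complete
de-identification profile.

```python
# Sketch: strip direct patient identifiers from a DICOM file before
# sharing, using pydicom. The tag list is a minimal illustration; a
# real release would apply a full de-identification profile.
import pydicom

ds = pydicom.dcmread("example.dcm")            # placeholder file name

for keyword in ("PatientName", "PatientID", "PatientBirthDate"):
    if keyword in ds:
        setattr(ds, keyword, "")               # blank out the identifier

ds.remove_private_tags()                       # drop vendor-specific tags
ds.save_as("example_anon.dcm")
```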
## C. CEA-NEUROSPIN
<table>
<tr>
<th>
</th>
<th>
**CEA/NEUROSPIN**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data generated**
</td>
<td>
**Data set description**
</td>
<td>
**Formats**
</td>
<td>
**Open ?**
</td>
<td>
**Data preservation**
</td> </tr>
<tr>
<td>
_Types of Data_
</td>
<td>
_Description of the data_
</td>
<td>
_Standards and Metadata_
</td>
<td>
_Data Sharing_
</td>
<td>
_Archive and Preservation_
</td> </tr>
<tr>
<td>
Calculation codes
</td>
<td>
m-file
</td>
<td>
Matlab
</td>
<td>
No
</td>
<td>
No
</td> </tr>
<tr>
<td>
Modelling results
</td>
<td>
m-file
</td>
<td>
Matlab
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
3D antenna models
</td>
<td>
\-
</td>
<td>
CST, HFSS,
COMSOL
</td>
<td>
No
</td>
<td>
No
</td> </tr>
<tr>
<td>
Simulation results
</td>
<td>
\-
</td>
<td>
CST, HFSS,
COMSOL
</td>
<td>
Yes
</td>
<td>
No
</td> </tr>
<tr>
<td>
MRI measurements on small animals
</td>
<td>
quantitative or weighted MR images
</td>
<td>
Raw data
&DICOM
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
MRI measurements on phantoms
</td>
<td>
quantitative or weighted MR images
</td>
<td>
DICOM
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
MRI measurements on humans
</td>
<td>
quantitative or weighted MR images
</td>
<td>
DICOM
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr> </table>
## D. UCL
<table>
<tr>
<th>
</th>
<th>
**UCL**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data generated**
</td>
<td>
**Data set description**
</td>
<td>
**Formats**
</td>
<td>
**Open or private?**
</td>
<td>
**Data preservation**
</td> </tr>
<tr>
<td>
_Types of Data_
</td>
<td>
_Description of the data_
</td>
<td>
_Standards and Metadata_
</td>
<td>
_Data Sharing_
</td>
<td>
_Archive and Preservation_
</td> </tr>
<tr>
<td>
Calculation codes
</td>
<td>
Codes regarding E and H interactions; Codes
regarding array scanning method
</td>
<td>
executables and Matlab routines
</td>
<td>
Shared among partners in collaborative framework
</td>
<td>
Archive internal to consortium
</td> </tr>
<tr>
<td>
Modelling results
</td>
<td>
Effects of impedance surfaces, eigenmode analysis, active impedance studies,
etc
</td>
<td>
presentations, papers
</td>
<td>
presentations limited to consortium; papers public
</td>
<td>
Archive all
</td> </tr>
<tr>
<td>
3D antenna models
</td>
<td>
Wire or strip-type metal in dielectric volume (e.g. Teflon)
</td>
<td>
GMSH files or
STEP files
</td>
<td>
Shared among partners
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Simulation results
</td>
<td>
Effects of impedance surfaces, eigenmode analysis, active impedance studies,
etc
</td>
<td>
Text and Matlab data
files
</td>
<td>
Limited to
Consortium, except for papers
</td>
<td>
Archive all
</td> </tr>
<tr>
<td>
CAD MECA files
</td>
<td>
GMSH files for geometry
</td>
<td>
*.msh files
</td>
<td>
Open to all Consortium members
</td>
<td>
Archive all
</td> </tr>
<tr>
<td>
Electrical characterization
</td>
<td>
Near-field patterns of metamaterial antennas obtained with probes
</td>
<td>
Formats offered by
VNA (Vector
Network Analyzer), with proper scaling rule
</td>
<td>
Open to all Consortium members
</td>
<td>
Archive all
</td> </tr>
<tr>
<td>
Electronic characterization
</td>
<td>
Scattering matrix of N-port
MRI coils/birdcages
</td>
<td>
Formats offered by VNA
</td>
<td>
Limited to
Consortium, except for papers
</td>
<td>
Archive all
</td> </tr> </table>
## E. UMC UTRECHT
<table>
<tr>
<th>
</th>
<th>
**UMC UTRECHT**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data generated**
</td>
<td>
**Data set description**
</td>
<td>
**Formats**
</td>
<td>
**Open?**
</td>
<td>
**Data preservation**
</td> </tr>
<tr>
<td>
Calculation codes
</td>
<td>
Scripts and functions to process simulated and measured data
</td>
<td>
Plain text
(matlab, C++ or
Python)
</td>
<td>
YES, after finalizing project
</td>
<td>
Archived
</td> </tr>
<tr>
<td>
Simulation input
files
</td>
<td>
Input file for EM simulation (e.g.
Sim4Life) that contains simulation geometry (=antenna design) and simulation
settings
</td>
<td>
CST or Sim4Life format
</td>
<td>
Private
</td>
<td>
Archived
</td> </tr>
<tr>
<td>
3D antenna models
</td>
<td>
Antenna design with electronics, geometry and materials.
</td>
<td>
Powerpoint,
CAD
</td>
<td>
Private
</td>
<td>
Archived
</td> </tr>
<tr>
<td>
Simulation results
</td>
<td>
Resulting simulated field distributions
</td>
<td>
CST or Sim4Life format
</td>
<td>
private; data not stored long-term
</td>
<td>
The data is bulky and can always be reproduced with the simulation input
files. The data is therefore not stored long-term.
</td> </tr>
<tr>
<td>
MRI measurements on phantoms
</td>
<td>
2D or 3D images representing MRI measurements on phantoms
</td>
<td>
DICOM
</td>
<td>
YES, after finalizing project
</td>
<td>
Archived
</td> </tr>
<tr>
<td>
Electrical characterization
</td>
<td>
S11 and S12 response from bench measurements of antennas and metamaterial
structures
</td>
<td>
Touchstone
</td>
<td>
YES, after finalizing project
</td>
<td>
Archived
</td> </tr> </table>
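Touchstone files such as the bench measurements listed above can be read with
standard tooling; a minimal sketch using the scikit-rf library is given below,
with a placeholder file name.

```python
# Sketch: inspect bench-measured S-parameters from a Touchstone file
# with scikit-rf. The file name is a placeholder.
import numpy as np
import skrf

ntwk = skrf.Network("antenna.s2p")       # 2-port bench measurement
f_mhz = ntwk.f / 1e6                     # frequency axis in MHz
s11_db = 20 * np.log10(np.abs(ntwk.s[:, 0, 0]))

i_best = int(np.argmin(s11_db))          # deepest match of the antenna
print(f"Best S11: {s11_db[i_best]:.1f} dB at {f_mhz[i_best]:.1f} MHz")
```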
## F. ITMO
<table>
<tr>
<th>
</th>
<th>
**ITMO**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data generated**
</td>
<td>
**Data set description**
</td>
<td>
**Formats**
</td>
<td>
**Open?**
</td>
<td>
**Data preservation**
</td> </tr>
<tr>
<td>
Calculation codes
</td>
<td>
Codes for experimental data post-processing
</td>
<td>
Matlab, Python
</td>
<td>
To be defined
</td>
<td>
ITMO data center
</td> </tr>
<tr>
<td>
3D antenna models
</td>
<td>
Optimized coil and metasurface
structures as
project files in commercial software packages (CST, HFSS, Sim4Life)
</td>
<td>
CST, HFSS,
COMSOL
</td>
<td>
private
</td>
<td>
ITMO data center
</td> </tr>
<tr>
<td>
Simulation results
</td>
<td>
Calculated RF-field distributions of RF-coils (magnetic, electric fields, SAR,
SNR, B1+,B1-)
</td>
<td>
CSV data files
</td>
<td>
YES
</td>
<td>
M-CUBE website
</td> </tr>
<tr>
<td>
CAD MECA files
</td>
<td>
CAD models of RF-coils exported from
simulation tools (3D geometry)
</td>
<td>
IGES, STEP,
DXF, STL
</td>
<td>
YES
</td>
<td>
M-CUBE website
</td> </tr>
<tr>
<td>
MRI measurements on phantoms
</td>
<td>
Images obtained using metasurface-based coils
</td>
<td>
DICOM
</td>
<td>
YES
</td>
<td>
M-CUBE website
</td> </tr>
<tr>
<td>
MRI measurements on humans
</td>
<td>
Images obtained using metasurface-based coils
</td>
<td>
DICOM
</td>
<td>
YES
</td>
<td>
M-CUBE website
</td> </tr>
<tr>
<td>
Electronic characterization
</td>
<td>
Schematic models
</td>
<td>
P-cad
</td>
<td>
private
</td>
<td>
ITMO data center
</td> </tr> </table>
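Because the simulated field distributions above are shared as plain CSV files,
they can be post-processed with generic tools. The sketch below summarises an
exported B1+ map, assuming (purely for illustration) one header row and a
column layout of x, y, z, |B1+|.

```python
# Sketch: summarise a simulated B1+ map exported as CSV. The column
# layout (x, y, z, |B1+|) and the header row are assumptions.
import numpy as np

data = np.loadtxt("b1_plus_map.csv", delimiter=",", skiprows=1)
b1 = data[:, 3]                    # |B1+| column (assumed position)

mean_b1 = b1.mean()
cov = b1.std() / mean_b1           # coefficient of variation over the map
print(f"mean |B1+|: {mean_b1:.3e}, inhomogeneity (CoV): {cov:.2%}")
```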
## G. MULTIWAVE
<table>
<tr>
<th>
</th>
<th>
**MULTIWAVE**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data generated**
</td>
<td>
**Data set description**
</td>
<td>
**Formats of the data**
</td>
<td>
**Open ?**
</td>
<td>
**Data preservation**
</td> </tr>
<tr>
<td>
_Types of Data_
</td>
<td>
_Description of the data_
</td>
<td>
_Standards and Metadata_
</td>
<td>
_Data Sharing_
</td>
<td>
_Archive and Preservation_
</td> </tr>
<tr>
<td>
Calculation codes
</td>
<td>
Spectral Element Solver
</td>
<td>
Python source code
</td>
<td>
Private
</td>
<td>
Multiwave data center
</td> </tr>
<tr>
<td>
Modeling results
</td>
<td>
Mathematical modeling of antennas (equivalent circuits)
</td>
<td>
Python source code
</td>
<td>
Yes, partially
</td>
<td>
M-CUBE website
</td> </tr>
<tr>
<td>
3D antenna models
</td>
<td>
CAD models and Meshes
</td>
<td>
CST, HFSS,
COMSOL
</td>
<td>
Yes
</td>
<td>
M-CUBE website
</td> </tr>
<tr>
<td>
Simulation results
</td>
<td>
Post processing of IBVP solutions
</td>
<td>
ascii, txt, csv, hdf5
</td>
<td>
Yes
</td>
<td>
M-CUBE website
</td> </tr>
<tr>
<td>
CAD MECA
files
</td>
<td>
CAD files for geometry or mesh description
</td>
<td>
.nii, .stl, .nastran
</td>
<td>
Open
</td>
<td>
Multiwave data center
</td> </tr> </table>
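The HDF5 post-processing outputs listed above are self-describing and can be
explored with the h5py library; the sketch below assumes a file name and
dataset name chosen only for illustration.

```python
# Sketch: inspect a post-processed field solution stored in HDF5
# using h5py. File and dataset names are illustrative assumptions.
import h5py

with h5py.File("ibvp_solution.h5", "r") as f:
    print("available datasets:", list(f.keys()))
    e_field = f["E_field"][...]          # load the full array (assumed name)
    print("E-field array shape:", e_field.shape)
```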
## H. MR COILS
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
**MR COILS**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data generated**
</td>
<td>
**Data set description**
</td>
<td>
**Formats**
</td>
<td>
**Open?**
</td>
<td>
**Data preservation**
</td> </tr>
<tr>
<td>
_Types of Data_
</td>
<td>
_Description of the data_
</td>
<td>
_Standards and Metadata_
</td>
<td>
_Data Sharing_
</td>
<td>
_Archive and Preservation_
</td> </tr>
<tr>
<td>
MRI measurements on phantoms
</td>
<td>
B1 maps, SNR maps, g-factor maps
</td>
<td>
Dicom and matlab
</td>
<td>
Private, but open upon request
</td>
<td>
Synergy storage (at
MRCoils)
</td> </tr>
<tr>
<td>
Electrical characterization
</td>
<td>
test results checklist
</td>
<td>
Word or PDF
</td>
<td>
Private, but open upon request
</td>
<td>
Synergy storage (at
MRCoils)
</td> </tr> </table>
## I. CNRS - ESPCI
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
**CNRS**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data generated**
</td>
<td>
**Data set description**
</td>
<td>
**Formats of the data**
</td>
<td>
**Open ?**
</td>
<td>
**Data preservation**
</td> </tr>
<tr>
<td>
Calculation codes
</td>
<td>
Codes for computing eigenmodes from analytical models of metamaterial
structure
</td>
<td>
Matlab, python, C
</td>
<td>
YES, after publication
</td>
<td>
CNRS-ESPCI data center and
M-CUBE website
</td> </tr>
<tr>
<td>
Modelling results
</td>
<td>
Output of the calculation codes.
</td>
<td>
CSV, binary
</td>
<td>
YES, after publication
</td>
<td>
CNRS-ESPCI data center and
M-CUBE website
</td> </tr>
<tr>
<td>
3D antenna
models
</td>
<td>
CAD file with
electromagnetic properties
</td>
<td>
CST, HFSS, COMSOL
</td>
<td>
Private
</td>
<td>
CNRS-ESPCI data center
</td> </tr>
<tr>
<td>
Simulation results
</td>
<td>
E and H fields, S parameters
</td>
<td>
CSV
</td>
<td>
YES, after publication
</td>
<td>
CNRS-ESPCI data center and
M-CUBE website
</td> </tr>
<tr>
<td>
Electrical characterization
</td>
<td>
S parameters
</td>
<td>
CSV
</td>
<td>
YES, after publication
</td>
<td>
CNRS-ESPCI data center and
M-CUBE website
</td> </tr> </table>
4\. DATA SHARING
# A. DATA SHARING AND DISSEMINATION
The data will be made available worldwide through the M-CUBE Website thanks to
dedicated interfaces that will list the M-CUBE data files available and enable
the users to access all data and place enquiries on data of interest.
Access to the data is not restricted, except for adding and editing
permissions that are restricted to the M-CUBE partners.
The M-CUBE data will be widely open to any user, except where intellectual
property reasons and/or unmet quality criteria apply, and will respect the
following disclosure levels:
* Consortium: the data are made available to consortium partners only.
* Public: the data are listed on the public web interface and any user can place an enquiry about data of interest.
# B. DATA REPOSITORY
The data will be deposited on the M-CUBE database through the M-CUBE website
that is hosted on servers of the AMU-Institut Fresnel partner laboratory.
# C. DATA DISCOVERY
The data generated by the project correspond to the descriptions of the
calculation codes, modelling results, simulation results, coil specifications
and MRI measurements that will be generated by M-CUBE partners, laboratories
and SMEs, in their own infrastructure.
These data are discoverable by users either indirectly, using any web search
engine (e.g. Google, Yahoo, Bing), or directly, through the M-CUBE website's
internal search engine.
LINK: _http://www.mcube-project.eu/opendata/_
# D. REUSING DATA
The data are generated by all the M-CUBE partners. The collected data are
used by end users to create their own calculation codes and coils and to
improve their own measurements.
There is no limitation on the way the data can be reused; in practice, the
data can serve as a reference even after the project ends.
# EXECUTIVE SUMMARY
Data, or Big Data, has rapidly become a new resource and asset in the current
economy, including the agricultural sector. This development raises several
issues that have to be addressed, such as data availability, quality, access,
security, responsibility, liability, ownership, privacy, costs and business
models. In Europe several initiatives have already been established to work on
this in a general context (e.g. the EU open data policy, the General Data
Protection Regulation) and more specifically in agriculture (e.g. GODAN,
COPA-COGECA).
IoF2020 has to address these issues by developing a Data Management Plan
(DMP). This document (D1.4) provides a first version of a DMP containing a
general overview of relevant developments and a first inventory of needs and
issues that play a role in the use cases. This has resulted in a number of
general guidelines for data management in IoF2020:
* Research papers derived from the project should be published under open access policy
* Research data should be stored in a central repository according to the FAIR principles: findable, accessible, interoperable and re-usable
* The use cases in IoF2020 should be clearly aligned to the European Data Economy policy and more specifically in line with the principles and guidelines provided by the stakeholder community _i.e._ COPA-COGECA.
* All IoF2020 use cases should be GDPR-compliant.
A number of follow-up actions were identified to establish open, transparent
data management in IoF2020:
* Investigate what research data is involved in the use cases and other project activities and define how they should be treated according to the open data policy.
* Participate actively in the debate and developments of the European Data Economy and data sharing in agriculture.
* Analyze and explore the use cases in a deeper way in order to identify which data management issues potentially play a role and define plans how to deal with them.
* Concerning collaboration with other projects, a more systematic and structural approach should be explored in order to maximize the benefits and impact of the mutual activities on data management.
* A feasible, but sufficient, plan has to be developed to make the use cases GDPR-compliant.
As a result the DMP will be continuously adapted and updated during the
project’s period.
## 1 INTRODUCTION
### 1.1 CONTEXT AND BACKGROUND
Big Data is becoming a new resource, a new asset, also in the agricultural
sector. Big Data in the agricultural sector includes enterprise data from
operational systems, farm field sensor data (e.g. temperature, rainfall,
sunlight), farm equipment sensor data (from tractors, harvesters, milking
robots, feeding robots), data from wearable animal sensors (neck tag, leg
tag), harvested goods and livestock delivery vehicles sensor data (from farms
to processing facilities) etc. Increasingly, Big Data applications, Big Data
initiatives and Big Data projects are implemented and carried out, aiming for
improving the farm and chain performance (e.g., profitability and
sustainability) and support associated farm management decision making.
In IoF2020, different use cases are taking place in which data plays a key
role, involving farm companies that share their (big) farm data with
enterprises and organisations that strive to add value to that data.
implies that the data from one party is combined with data from other parties
in the chain, and then analysed and translated into advices, knowledge, or
information for farmers. In this way Big Data becomes an asset in supporting
farmers to further improve their business performance (e.g., higher yield,
better quality, higher efficiency). These data-driven developments often
involve collaborations between agri-IT companies, farmers’ cooperatives and
other companies in the food supply chain. These business-to-business
initiatives and interactions are increasingly conducted through inter-
organisational coordination hubs, in which standardised IT-based platforms
provide data and business process interoperability for interactions among the
organisations.
In Figure 1 an example of such a network of collaborating organisations is
provided. It involves four stakeholders around the farmer (i) sperm supplier,
(ii) milk processor, (iii) feed supplier and (iv) milking robot supplier.
Multiple farms can be involved, and for each stakeholder relation data are
collected from the farm and collected in a data platform. All stakeholders can
then receive multiple datasets back from this platform, depending on
authorization settings. This has to be arranged and governed by some form of
network administrative organization. It can be expected that the IoF2020 use
cases can be modelled into a similar picture.
[Figure: a cloud data platform linking the farmer with Supplier A (sperm),
Supplier B (feed), Supplier C (milking robot) and Customer X (milk), with data
flowing through the platform under a Network Administrative Organization]
_Figure 1 Example of a data sharing network in dairy farming in the agri-food
sector_
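To make the role of the authorization settings concrete, the sketch below
models the gatekeeping a network administrative organization might perform;
all stakeholder and dataset names are illustrative assumptions, not IoF2020
use-case data.

```python
# Sketch: authorization-governed data sharing as in Figure 1. Each
# stakeholder only receives the datasets the farmer has granted.
# Stakeholder and dataset names are illustrative assumptions.
authorizations = {
    "feed_supplier":  {"feed_intake", "milk_yield"},
    "milk_processor": {"milk_yield", "milk_quality"},
    "sperm_supplier": {"breeding_records"},
}

def datasets_for(stakeholder, requested):
    """Return only the requested datasets this stakeholder may access."""
    return requested & authorizations.get(stakeholder, set())

print(datasets_for("feed_supplier", {"milk_yield", "breeding_records"}))
# -> {'milk_yield'}
```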
There are multiple ways of expanding and intensifying such a Big Data network.
For instance, at the end of the market chain, consumers may play a role in
future. They could be interested in Big Data as a quality check for products,
and they could in principle provide consumer based data to the platform.
Moreover, the role of government can be relevant. While governments seem to be
interested in open data possibilities, this may cause uncomfortable privacy
concerns for suppliers in the network, who prefer to keep business models away
from the public.
The management of Big Data initiatives comprises challenges of privacy and
security, which lead to discouragement and distrust among farmers. Trust is
considered to be a starting point in increasing Big Data applications. Many
involved companies are refraining from sharing data because of the fear of
issues such as data security, privacy and liability. Corresponding to the
different stages of the data value chain (data capture, data storage, data
transfer, data transformation, data analytics and data marketing), several key
issues of Big Data applications have been identified and can be summarised as
data availability, data quality, access to data, security, responsibility,
liability, data ownership, privacy, costs, and business models. Ownership and
value of data appear to be important issues in discussions on the governance
of Big Data driven inter-organisational applications. A growing number of Big
Data initiatives address privacy and security concerns. Also Big Data
applications raise power-related issues that sometimes can lead to potential
abuses of data. And many companies expect to develop new business models with
data but are in a kind of deadlock and afraid of taking the first step. So the
challenge is how companies could or should deal with these inter-
organisational governance issues as they are considered as (the most)
hindering factors for fulfilling the promises and opportunities of Big Data in
agri-food sector.
Against this background, the network structure raises multiple questions in
the scope of data management. For instance, how is the communication among the
different actors? On what conditions do farmers take part, and how easy is it
to enter or leave the network? What is the role of a network administrative
organization and how openly do they perform together with partners of the
network?
### 1.2 OBJECTIVE OF THIS DOCUMENT
Task 1.4 ‘Development Data Management Plan’ is meant to address these
questions by developing a plan and specific guidelines for the use cases and
the project as a whole concerning data management.
The Data Management Plan (DMP) will be developed, outlining:
* how (research) data will be collected, processed or generated within the project;
* what methodology and standards will be adopted;
* whether and how this data will be shared and/or made open;
* and how this data will be curated and preserved during and after the project.
The DMP aims to ensure that IoF2020 activities are compliant with the H2020
Open Access policy and the recommendations of the Open Research Data pilot.
The DMP will furthermore explain how the project will be connected with other
past and on-going initiatives such as EIP-Agri, agINFRA and global channels,
such as OpenAIRE, CIARD and GODAN.
Under this task an Open Access Support Pack will be developed translating the
generic H2020 requirements and recommendations into specific guidelines and
advice that can be applied in the project.
This document (D1.4) will provide a first version of the Data Management Plan
containing a general overview of relevant developments and a first inventory
of needs and issues that play a role in the use cases. The application of the
DMP by all IoF2020 partners will be continuously monitored under this task and
an updated version of the DMP including more detailed specific support packs
for the use cases (D1.5) will be delivered in Month 36.
### 1.3 OUTLINE
The remainder of this document is organized as follows. Chapter 2 will
describe the approach how the results were found. This results in an overview
of general developments on data management, described in Chapter 3. Chapter 4
will provide a short overview of relevant past and on-going initiatives and
projects on data management in agriculture. Then a first inventory of the
needs and potential issues of the use cases will be described. Based on the
findings in these Chapters, a concrete Data Management Plan will be provided
in Chapter 6, followed by some general conclusions in Chapter 7.
## 2 APPROACH
The approach that was followed to generate this report consists of the
following steps (see Figure 2):
1. Identification and description of relevant external developments in the field of data management in general and more specific for agriculture (see Chapter 3).
2. Identification and description of relevant initiatives and/or projects (in the recent past of ongoing) in the field of (agricultural) data management (see Chapter 4).
3. Based on the results of the previous two steps a quick scan of the needs and issues in the IoF2020 use cases is made (see Chapter 5).
4. Based on the results of all previous steps a preliminary version of the IoF2020 Data
Management Plan is defined that provides general guidelines for the use cases
(see Chapter 6).
_Figure 2 Steps that were taken to achieve a first version of the Data
Management Plan for IoF2020_
After these steps the data management plan will be further refined and
tailored into a specific support pack for the use cases, but this is outside
the scope of this deliverable. The results of this deliverable will be
iteratively updated and refined as a living document but consolidated in a
next deliverable in Month 36 (D1.5).
## 3 EXTERNAL DEVELOPMENTS
In this chapter we will highlight and briefly describe some data management
developments that are most relevant to a H2020 innovation action project such
as IoF2020 in Section 3.1. In Section 3.2 we will then describe the possible
consequences for IoF2020 of these developments.
### 3.1 DATA MANAGEMENT DEVELOPMENTS
#### **3.1.1 H2020 Open Access Policy** 1
Open access (OA) can be defined as the practice of providing on-line access to
scientific information that is free of charge to the user and that is re-
usable. In the context of R&D, open access to 'scientific information' refers
to two main categories:
* Peer-reviewed scientific publications (primarily research articles published in academic journals)
* Scientific research data: data underlying publications and/or other data (such as curated but unpublished datasets or raw data)
It is now widely recognised that making research results more accessible to
all societal actors contributes to better and more efficient science, and to
innovation in the public and private sectors. The Commission therefore supports
open access at the European level (in its framework programmes), at the Member
States level and internationally.
#### Peer-reviewed scientific publications
All projects receiving Horizon 2020 funding are **required** to make sure that
any peer-reviewed journal article they publish is openly accessible, free of
charge (article 29.2. Model Grant Agreement).
#### Research data
The Commission is running a **pilot on open access** to research data in
Horizon 2020: the Open Research Data (ORD) pilot. This pilot takes into
account the need to balance openness with the protection of scientific
information, commercialisation and Intellectual Property Rights (IPR), privacy
concerns, and security, as well as questions of data management and
preservation. The pilot applies to research data underlying publications but
beneficiaries can also voluntarily make other datasets open. Participating
projects are required to develop a Data Management Plan, in which they will
specify what data will be open. In previous work programmes, the ORD Pilot was
limited to some specific areas of Horizon 2020. Starting with the 2017 work
programme, however, the ORD pilot was extended to cover **all thematic areas**
of Horizon 2020, thus realising the Commission's ambition of "open research
data per default" (but allowing for opt-outs).
#### More information
For details of how open access applies to beneficiaries in projects funded
under Horizon 2020, please see the **_Guidelines_ ** **_on_ ** **_Open_ **
**_Access_ ** **_to_ ** **_Scientific_ ** **_Publications_ ** **_and_ **
**_Research_ ** **_Data_ ** and/or the **_Guidelines_ ** **_on_ ** **_data_
** **_management_ ** .
Also:
* _Participants_ _Portal_
* _OpenAIRE_
* _Open_ _access_ _in_ _FP7_
#### **3.1.2 Open Research Data pilot** 2
_What is the open research data pilot?_
Open data is data that is free to access, reuse, repurpose, and redistribute.
The Open Research Data Pilot aims to make the research data generated by
selected Horizon 2020 projects accessible with as few restrictions as
possible, while at the same time protecting sensitive data from inappropriate
access.
If your Horizon 2020 project is part of the pilot, and your data meets certain
conditions, you must deposit your data in a research data repository where
they will be findable and accessible for others. Don’t panic - you are not
expected to share sensitive data or breach any IPR agreements with industrial
partners. You do not need to deposit all the data you generate during the
project either – only that which underpins published research findings and/or
has longer-term value. In addition to supporting your research’s integrity,
openness has many other benefits. Improved visibility means your research will
reach more people and have a greater impact – for science, society and your
own career. Recent studies have shown that citations increase when data is
made available alongside the publication; these papers also have a longer
shelf-life.
_Which H2020 strands are required to participate?_
Projects starting from January 2017 are by default part of the Open Data
Pilot. If your project started before earlier and stems from one of these
Horizon 2020 areas, you are automatically part of the pilot as well::
* Future and Emerging Technologies
* Research infrastructures (including e-Infrastructures)
* Leadership in enabling and industrial technologies – Information and Communication Technologies
* Nanotechnologies, Advanced Materials, Advanced Manufacturing and Processing, and
Biotechnology: ‘nanosafety’ and ‘modelling’ topics
* Societal Challenge: Food security, sustainable agriculture and forestry, marine and maritime and inland water research and the bioeconomy - selected topics in the calls H2020-SFS2016/2017, H2020-BG-2016/2017, H2020-RUR-2016/2017 and H2020-BB-2016/2017, as specified in the work programme
* Societal Challenge: Climate Action, Environment, Resource Efficiency and Raw materials – except raw materials
* Societal Challenge: Europe in a changing world – inclusive, innovative and reflective Societies
* Science with and for Society
* Cross-cutting activities - focus areas – part Smart and Sustainable Cities.
Maybe data sharing is not appropriate for your project; the _EC's Guide on
Open Access to Scientific Publications and Research Data_ lists conditions
that would allow or require you to opt out of the pilot. In that case, please
consider whether a partial opt-out is possible.
_What is a data management plan (DMP)?_
To help you optimise the potential for future sharing and reuse, a Data
Management Plan (DMP) can help you to consider any problems or challenges that
may be encountered and helps you to identify ways to overcome these. A DMP
should be thought of as a “living” document outlining how the research data
collected or generated will be handled during and after a research project.
Remember, the plan should be realistic and based around the resources
available to you and your project partners. There is no point in writing a
gold-plated plan if it cannot be implemented!
It should describe:
* The data set: What kind of data will the project collect or generate, and to whom might they be useful later on? The pilot applies to (1) the data and metadata needed to validate results in scientific publications and (2) other curated and/or raw data and metadata that may be required for validation purposes or with reuse value.
* Standards and metadata: What disciplinary norms will you adopt in the project? What is the data about? Who created it and why? In what forms is it available? Metadata answers such questions to enable data to be found and understood, ideally according to the particular standards of your scientific discipline. Metadata, documentation and standards help to make your data Findable, Accessible, Interoperable and Re-usable or FAIR for short; a machine-readable sketch follows this list.
* Data sharing: By default as much of the resulting data as possible should be archived as Open Access. Therefore legitimate reasons for not sharing resulting data should be explained in the DMP. Remember, no one expects you to compromise data protection or breach any IPR agreements. Data sharing should be done responsibly. The DMP Guidelines therefore ask you to describe any ethical or legal issues that can have an impact on data sharing.
* Archiving and preservation: Funding bodies are keen to ensure that publicly funded research outputs can have a positive impact on future research, for policy development, and for societal change. They recognise that impact can take quite a long time to be realised and, accordingly, expect the data to be available for a suitable period beyond the life of the project. Remember, it is not simply enough to ensure that the bits are stored in a research data repository, but also consider the usability of your data. In this respect, you should consider preserving software or any code produced to perform specific analyses or to render the data as well as being clear about any proprietary or open source tools that will be needed to validate and use the preserved data.
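As mentioned under "Standards and metadata" above, a DMP can be kept
machine-readable. The sketch below shows one dataset entry, loosely inspired
by the RDA DMP Common Standard; the field names and values are assumptions
for illustration, not a mandated schema.

```python
# Sketch of one dataset entry in a machine-readable DMP, loosely
# inspired by the RDA DMP Common Standard. All values are illustrative.
dataset_entry = {
    "title": "Dairy use-case sensor data",
    "description": "Farm sensor readings underpinning published results",
    "metadata_standard": "Dublin Core",        # assumed disciplinary choice
    "distribution": {
        "access": "open",                      # or "restricted", with reason
        "license": "CC-BY-4.0",
        "host": "institutional repository",    # repository to be selected
    },
    "preservation": "retained beyond project end",
}
print(dataset_entry["title"], "->", dataset_entry["distribution"]["access"])
```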
The DMP is not a fixed document. The first version of the DMP is expected to
be delivered within the first 6 months of your project, but you don’t have to
provide detailed answers to all the questions yet. The DMP needs to be updated
over the course of the project whenever significant changes arise, such as new
data or changes in the consortium policies or consortium composition. The DMP
should be updated at least in time with the periodic evaluation or assessment
of the project as well as in time for the final review. Consider reviewing
your DMP at regular intervals in the project and consider how you might make
use of scheduled WP and/or project staff meetings to facilitate this review.
_What practical steps should you take?_
1. When your project is part of the pilot, you should _create a Data Management Plan_ . Your institution may offer Research Data Management support to help you planning.
2. Also, you should _select a data repository_ that will preserve your data, metadata and possibly tools in the long term. It is advisable to contact the repository of your choice when writing the first version of your DMP. Repositories may offer guidelines for sustainable data formats and metadata standards, as well as support for dealing with sensitive data and licensing.
3. As noted earlier, you do not need to keep everything. Curating data requires time and effort so you want to make sure that you are putting your effort into the outputs that really matter. Select what data you’ll need to retain to support validation of your finding but also consider any data outputs that may have longer term value as well – for you and for others.
#### Links
* EC’s Guide on Open Access to Scientific Publications and Research Data in Horizon 2020
(updated August 25,
2016)
_http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/_
_h2020-hi-oa-pilot-guide_en.pdf_
* EC’s Guidelines on Data Management in Horizon 2020 (updated July 26,
2016):
_http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/_
_h2020-hi-oa-data-mgt_en.pdf_
* EC’s Agenda on Open Science: _https://ec.europa.eu/digital-agenda/en/open-science_
* DMPonline tool: _https://dmponline.dcc.ac.uk/_
* DCC How to Write a DMP guide: _http://www.dcc.ac.uk/resources/how-guides/develop-dataplan_
* DCC How to Select What Data to Keep guide: _http://www.dcc.ac.uk/resources/howguides/five-steps-decide-what-data-keep_
* DCC How to Licence Research Data guide: _http://www.dcc.ac.uk/resources/howguides/license-research-data_
* RDNL video The what, why and how of data management planning: _http://datasupport.researchdata.nl/en/start-de-cursus/iiplanfase/datamanagementplanning/_
* Software Sustainability Institute’s Software Management
Plan:
_https://www.software.ac.uk/sites/default/files/images/content/SMP_Checklist_2016__
_v0.1.pdf_
#### **3.1.3 European Data Economy**
Building a European data economy is part of the Digital Single Market
strategy. The initiative aims at fostering the best possible use of the
potential of digital data to benefit the economy and society. It addresses the
barriers that impede the free flow of data to achieve a European single
market.
#### General need for action
_Digital data_ is an essential resource for economic growth, competitiveness,
innovation, job creation and societal progress in general. The EU needs to
ensure that data flows across borders and sectors. This data should be
accessible and reusable by most stakeholders in an optimal way. A coordinated
European approach is essential for the development of the data economy, as
part of the _Digital Single_
_Market strategy_ . The European Commission adopted a _Communication_ on
"Building a European Data Economy", accompanied by a _Staff Working Document_
on January 2017, where it:
* looks at the rules and regulations impeding the free flow of data and present options to remove unjustified or disproportionate data location restrictions, and
* outlines legal issues regarding access to and transfer of data, data portability and liability of non-personal, machine-generated digital data.
The European Commission has launched a _public consultation_ and _dialogue
with stakeholders_ on these topics to gather further evidence. This process
will help identify future policy or legislative measures that will unleash
Europe's data economy. The development of the European Data economy is one of
the three emerging challenges identified in the _mid-term review_ . The
actions to be implemented are:
* to prepare a legislative proposal on the EU free flow of data cooperation framework (autumn 2017)
* to prepare an initiative on accessibility and re-use of public and publicly funded data (spring 2018)
In addition, the Commission will continue its work on liability and other
emerging data issues. For more details, read the _Communication_ .
#### Facing the challenge - removing data localisation restrictions: the free
flow of data
Free flow of data means the freedom to process and store data in electronic
format anywhere within the EU. It is necessary for the development and use of
innovative data technologies and services. In order to achieve the free flow
of data, the European Commission will collect more evidence on data location
restrictions and assess their impacts on businesses, especially SMEs and
start-ups, and public sector organisations. The Commission will also discuss
the justifications for and proportionality of those data location restrictions
with Member States and other stakeholders. It will then take justified and
appropriate follow-up actions, in line with _better regulation principles_ ,
to address the issue.
#### Exploring the emerging issues relating to the data economy
The European Commission is currently defining, scoping and articulating the
following issues in order to trigger and frame a dialogue with stakeholders:
* Non-personal machine-generated data need to be tradable to allow innovative business models to flourish, new market entrants to propose new ideas and start-ups to have a fair chance to compete.
* Data-driven technologies are transforming our economy and society, resulting in the production of ever-increasing amounts of data. This phenomenon leads to innovative ways of collecting, acquiring, processing and using data which can pose a challenge to the current legal framework.
* Access to and transfer of non-personal data, data liability, as well as portability of nonpersonal data, interoperability and standards are complex legal issues.
This _consultation_ process will contribute to the policy choices taken by
the European Commission in the future.
#### Useful links
* Have a look at the _workshops_ organised on how to build a European data economy.
* _Press release and MEMO_ _-_ _Q &A _
* _Communication_ on Building a European Data Economy
* _Staff Working Document_ on Building a European Data Economy
* _Factsheet_ on Building a European Data Economy
* Study on _Measuring the economic impact of cloud computing in Europe_
* Study on _Facilitating cross border data flow in the DSM_
* Intermediary study on _Cross-border data flow in the Digital Single Market: data location_ _restrictions_
* _Speech from Commissioner Oettinger_ at the Conference "Building European Data Economy" (17 October 2016).
* _Speech from Vice-President Ansip_ at the Digital Assembly 2016, "Europe should not be afraid of data" (29 September 2016).
#### **3.1.4 Agricultural Data developments**
In view of the technical changes brought forth by Big Data and Smart Farming,
we seek to understand the consequences for the stakeholder network and
governance structure around the farm in this section.
The literature suggests major shifts in roles of and power relations among
different players in existing agri-food chains. We observed the changing roles
of old and new software suppliers in relation to Big Data and farming and
emerging landscape of data-driven initiatives with prominent role of big tech
and data companies like Google and IBM. In Figure 3, the current landscape of
data-driven initiatives is visualized.
The stakeholder network exhibits a high degree of dynamics, with new players
taking over the roles played by other players and the incumbents assuming new
roles in relation to agricultural Big Data. As opportunities for Big Data have
surfaced in the agribusiness sector, big agriculture companies such as
Monsanto and John Deere have spent hundreds of millions of dollars on
technologies that use detailed data on soil type, seed variety, and weather to
help farmers cut costs and increase yields. Other players include various
accelerators, incubators, venture capital firms, and corporate venture funds
(Monsanto, DuPont, Syngenta, Bayer, DOW etc.).
_Figure 3 The landscape of the Big Data network with business players._
Monsanto has been pushing big-data analytics across all its business lines,
from climate prediction to genetic engineering. It is trying to persuade more
farmers to adopt its cloud services. Monsanto says farmers benefit most when
they allow the company to analyse their data - along with that of other
farmers - to help them find the best solutions for each patch of land.
While corporates are very much engaged with Big Data and agriculture, start-
ups are at the heart of action, providing solutions across the value chain,
from infrastructure and sensors all the way down to software that manages the
many streams of data from across the farm. As the ag-tech space heats up, an
increasing number of small tech start-ups are launching products giving their
bigger counterparts a run for their money. In the USA, start-ups like
FarmLogs, FarmLink and 640 Labs challenge agribusiness giants like Monsanto,
Deere, DuPont Pioneer. One observes a swarm of data-service start-ups such as
FarmBot (an integrated open-source precision agriculture system) and Climate
Corporation. Their products are powered by many of the same data sources,
particularly those that are freely available such as from weather services and
Google Maps. They can also access data gathered by farm machines and
transferred wirelessly to the cloud. Traditional agri-IT firms such as NEC and
Dacom are active with a precision farming trial in Romania using environmental
sensors and Big Data analytics software to maximize yields.
Venture capital firms are now keen on investing in agriculture technology
companies such as Blue River Technology, a business focusing on the use of
computer vision and robotics in agriculture. The new players to Smart Farming
are tech companies that were traditionally not active in agriculture. For
example, Japanese technology firms such as Fujitsu are helping farmers with
their cloud based farming systems. Fujitsu collects data (rainfall, humidity,
soil temperatures) from a network of cameras and sensors across the country to
help farmers in Japan better manage its crops and expenses. Data processing
specialists are likely to become partners of producers as Big Data delivers on
its promise to fundamentally change the competitiveness of producers.
Beside business players such as corporates and start-ups, there are many
public institutions (e.g., universities, USDA, the American Farm Bureau
Federation, GODAN) that are actively influencing Big Data applications in
farming through their advocacy on open data and data-driven innovation or
their emphasis on governance issues concerning data ownership and privacy
issues. Well-known examples are the Big Data Coalition, Open Agriculture Data
Alliance (OADA) and AgGateway. Public institutions like the USDA, for example,
want to harness the power of agricultural data points created by connected
farming equipment, drones, and even satellites to enable precision agriculture
for policy objectives like food security and sustainability. Precision farming
is considered to be the “holy grail” because it is the means by which the food
supply and demand imbalance will be solved. To achieve that precision, farmers
need a lot of data to inform their planting strategies. That is why USDA is
investing in big, open data projects. It is expected that open data and Big
Data will be combined together to provide farmers and consumers just the right
kind of information to make the best decisions.
Data ownership is an important issue in discussions on the governance of
agricultural Big Data generated by smart machinery such as tractors from John
Deere. In particular, value and ownership of precision agricultural data have
received much attention in business media. It has become a common practice to
sign Big Data agreements on ownership and control data between farmers and
agriculture technology providers. Such agreements address questions such as:
How can farmers make use of Big Data? Where does the data come from? How much
data can we collect? Where is it stored? How do we make use of it? Who owns
this data? Which companies are involved in data processing?
There is also a growing number of initiatives to address or ease privacy and
security concerns. For example, the Big Data Coalition, a coalition of major
farm organizations and agricultural technology providers in the USA, has set
principles on data ownership, data collection, notice, third-party access and
use, transparency and consistency, choice, portability, data availability,
market speculation, liability and security safeguards 3 . And AgGateway, a
non-profit organization with more than 200 member companies in the USA, has
drawn up a white paper that presents ways to incorporate data privacy and
standards 4 . It provides users of farm data and their customers with issues
to consider when establishing policies, procedures, and agreements on using
that data instead of setting principles and privacy norms. The European
farmers and agri-cooperatives association COPA-COGECA has recently also
published their ‘Main principles underpinning the
collection, use and
exchange of agricultural data’ 5 . The principles concern:
* _Ownership of farm data_ – data produced on the farm or during farming operations should be owned by the farmers themselves. If this data is used, also indirectly through combined services, the farmer should be somehow compensated for this.
* _Ownership of the underlying rights to derived data_ \- In any case, it should be clear when farm data is used and for what purpose; the farmer should be in full control of his/her data. When farm data is used by third parties anonymization and security are of utmost importance.
* _Duration, suspension and termination of supply_ – farmers must be provided with the possibility to opt out of a contract on data use with the right to delete all historical data.
* _Guarantee of compliance with laws and regulation_ – data collection should not be in conflict with general laws (e.g. on privacy) and data should not be used for unlawful purposes
* _Liability_ – in contracts on data use intellectual property rights of farmers and agri-cooperatives must be protected and liabilities must be clearly described. It is not always possible to go into all possible details so there should be a good balance between what is written in the contract and trust between partners.
Based on these type of principles several codes of conduct/practice with
concrete models for contracts have been developed such as the New Zealand Farm
Data Code of Practice 6 , the Dutch BO-Akkerbouw Code of Conduct 7 , and
Ag Data Transparency has established a certifying procedure 8 . More similar
codes and model contracts will pop-up in the near future.
The ‘Ownership Principle’ of the Big Data Coalition states that “We believe
farmers own information generated on their farming operations. However, it is
the responsibility of the farmer to agree upon data use and sharing with the
other stakeholders (...).” While having concerns about data ownership, farmers
also see how much companies are investing in Big Data. In 2013, Monsanto paid
nearly 1 billion US dollars to acquire The Climate Corporation, and more
industry consolidation is expected. Farmers want to make sure they reap the
profits from Big Data, too. Such change of thinking may lead to new business
models that allow shared harvesting of value from data.
In conclusion, Big data applications in Smart Farming will potentially raise
many power-related issues. There might be companies emerging that gain much
power because they get all the data. In the agrifood chain these could be
input suppliers or commodity traders, leading to a further power shift in
market positions. This power shift can also lead to potential abuses of data
e.g. by the GMO lobby or agricultural commodity markets or manipulation of
companies. Initially, these threats might not be obvious because for many
applications small start-up companies with hardly any power are involved.
However, it is a common business practice that these are acquired by bigger
companies if they are successful and in this way the data still gets
concentrated in the hands of one big player. It can be concluded that Big Data
is both a huge opportunity as a potential threat for farmers.
##### 3.1.5 General Data Protection Regulation 9
Security and privacy of personal data have increasingly become a public
concern, which has led to the General Data Protection Regulation (GDPR)
imposed by the EU in 2018. The aims of the GDPR are:
* It allows European Union citizens to better control their personal data. It also modernises and unifies rules allowing businesses to reduce red tape and to benefit from greater consumer trust.
* The GDPR is part of the _EU data protection reform package_ , along with the _data protection_ _directive for police and criminal justice authorities_ .
_Key points:_
#### **Citizens’ rights**
The GDPR strengthens existing rights, provides for new rights and gives
citizens more control over their personal data. These include:
* **easier access to their data** — including providing more information on how that data is processed and ensuring that that information is available in a clear and understandable way;
* **a new right to data portability** — making it easier to transmit personal data between service providers;
* a clearer **right to erasure (‘right to be forgotten’)** — when an individual no longer wants their data processed and there is no legitimate reason to keep it, the data will be deleted;
* **right to know when their personal data has been hacked** — companies and organisations will have to inform individuals promptly of serious data breaches. They will also have to notify the relevant data protection supervisory authority.
#### **Rules for businesses**
The GDPR is designed to create business opportunities and stimulate innovation
through a number of steps including:
* **a single set of EU-wide rules** — a single EU-wide law for data protection is estimated to make savings of €2.3 billion per year;
* a **data protection officer,** responsible for data protection, will be designated by public authorities and by businesses which process data on a large scale;
* **one-stop-shop** — businesses only have to deal with one single supervisory authority (in the
EU country in which they are mainly based);
* **EU rules for non-EU companies** — companies based outside the EU must apply the same rules when offering services or goods, or monitoring behaviour of individuals within the EU;
* **innovation-friendly rules** — a guarantee that data protection safeguards are built into products and services from the earliest stage of development (data protection by design and by default);
* **privacy-friendly techniques** such as **pseudonymisation** (when identifying fields within a data record are replaced by one or more artificial identifiers) and **encryption** (when data is coded in such a way that only authorised parties can read it); a minimal sketch of pseudonymisation follows this list;
* **removal of notifications** — the new data protection rules will scrap most notification obligations and the costs associated with these. One of the aims of the data protection regulation is to remove obstacles to free flow of personal data within the EU. This will make it easier for businesses to expand;
* **impact assessments** — businesses will have to carry out impact assessments when data processing may result in a high risk for the rights and freedoms of individuals;
* **record-keeping** — SMEs are not required to keep records of processing activities, unless the processing is regular or likely to result in a risk to the rights and freedoms of the person whose data is being processed.
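To illustrate the pseudonymisation technique named above, the following minimal Python sketch replaces identifying fields with random artificial identifiers while keeping the re-identification table separate. The record structure, field names and values are hypothetical and not taken from any IoF2020 use case:

```python
import secrets

# Hypothetical records; all field names and values are illustrative and not
# taken from any IoF2020 use case.
records = [
    {"farmer_name": "A. Jansen", "email": "a.jansen@example.org", "yield_t_ha": 8.2},
    {"farmer_name": "B. Murphy", "email": "b.murphy@example.org", "yield_t_ha": 7.6},
]

IDENTIFYING_FIELDS = ("farmer_name", "email")

def pseudonymise(records, identifying_fields):
    """Replace identifying fields with an artificial identifier.

    The lookup table mapping pseudonyms back to identities must be stored
    separately and protected; the pseudonymised records alone cannot be
    re-identified.
    """
    lookup = {}
    pseudonymised = []
    for record in records:
        pseudonym = secrets.token_hex(8)  # random artificial identifier
        lookup[pseudonym] = {f: record[f] for f in identifying_fields}
        cleaned = {k: v for k, v in record.items() if k not in identifying_fields}
        cleaned["subject_id"] = pseudonym
        pseudonymised.append(cleaned)
    return pseudonymised, lookup

data, key_table = pseudonymise(records, IDENTIFYING_FIELDS)
print(data[0])  # {'yield_t_ha': 8.2, 'subject_id': '<16 hex characters>'}
```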
The _European Commission_ must submit a report on the evaluation and review
of the regulation by 25 May 2020. The GDPR will apply as of 25 May 2018.
For more information, see:
* _Press release_ ( _European Commission_ )
* _‘2018 reform of EU data protection rules’_ ( _European Commission_ ).
### 3.2 CONSEQUENCES FOR IOF2020
The following consequences for IoF2020 could be derived from the developments
that were described in the previous section:
1. Any peer-reviewed journal article should be published with open access
2. Research data should be stored in a central repository where it is findable and accessible by others
1. Since IoF2020 is an Innovation Action type of project, the first question is whether _research_ data is actually involved; this has to be identified first
2. It should be further explored whether data sharing is appropriate in IoF2020 and if a partial opt-out for the Open Research Data pilot is appropriate
3. A data management plan needs to be written for IoF2020 according to the guidelines provided by the ORD Pilot
4. It should be identified if and to what extent the data-driven technologies in the IoF2020 use cases are aligned with the European Data Economy policy.
1. Since this policy is also under debate, IoF2020 should actively participate in this debate and development of this Data Economy.
5. It should be explored which issues around agricultural data sharing (e.g. data ownership/control, rights to use data, etc.) potentially play a role and whether principles and guidelines, especially those provided by IoF2020 partner COPA-COGECA, are applicable; and if yes, what type of contracts should be set up.
6. The IoF2020 use cases should be analysed for GDPR compliance and take measures on this where necessary.
## 4 RELEVANT INITIATIVES AND PROJECTS
In this chapter several projects and initiatives are described that are
dealing with (Agricultural) Data and are potentially relevant for IoF2020.
### 4.1 PROJECTS AND INITIATIVES
#### **4.1.1 GODAN** 10
GODAN supports the proactive sharing of open data to make information about
agriculture and nutrition available, accessible and usable to deal with the
urgent challenge of ensuring world food security. It is a rapidly growing
group, currently with over 584 partners from national governments, non-
governmental, international and private sector organisations that have
committed to a joint _Statement of Purpose._
The initiative focuses on building high-level support among governments,
policymakers, international organizations and business. GODAN promotes
collaboration to harness the growing volume of data generated by new
technologies to solve long-standing problems and to benefit farmers and the
health of consumers. GODAN encourages collaboration and cooperation between
stakeholders in the sector.
The GODAN initiative has its roots in the 2012 G-8 Summit, where G-8 leaders
committed to the New Alliance for Food Security and Nutrition, the next phase
of a shared commitment to achieving global food security.
As part of this commitment, they agreed to “share relevant agricultural data
available from G-8 countries with African partners and convene an
international conference on Open Data for Agriculture, to develop options for
the establishment of a global platform to make reliable agricultural and
related information available to African farmers, researchers and
policymakers, taking into account existing agricultural data systems.”
In April 2013, the commitment to convene an international conference on Open
Data for Agriculture was fulfilled when the G8 International Conference on
Open Data for Agriculture took place.
This conference worked to ‘obtain commitment and action from nations and
relevant stakeholders to promote policies and invest in projects that open
access to publicly funded global agriculturally relevant data streams, making
such data readily accessible to users in Africa and world-wide, and ultimately
supporting a sustainable increase in food security in developed and developing
countries.’
The GODAN initiative was a by-product of this conference and was announced at
the Open Government Partnership Conference in October 2013.
Any organization that supports open access to agriculture and nutrition data
can become member of GODAN. Partners include government, donors, international
and not-for-profit organizations and businesses. GODAN partners support the
shared principles based on GODAN’s Statement of Purpose:
* Agricultural and nutritional data to be available, accessible, usable and unrestricted
* Partners aim to build high level policy and private sector support for open data
* Encourage collaboration and cooperation across existing agriculture, nutrition and open data activities and stakeholders to solve long-standing global problems
GODAN partners commit to:
* Host regular conversations with our peer multilateral and local organisations to identify and share best practices and determine how to more effectively share data and provide useable analysis for local application
* Recruit new partners to GODAN
GODAN activities and its Secretariat are financially supported by the US
Government, the UK
Department for International Development (DFID), the Government of the
Netherlands, FAO, Technical Centre for Agricultural and Rural Cooperation
(CTA), GFAR, The Open Data Institute (ODI), the CGIAR and CABI.
_More information:_
* _Statement of Purpose_
* _Theory of Change_
* _Ownership of Open Data: Governance Options for Agriculture and Nutrition_
* _A Global Data Ecosystem for Agriculture and Food_
#### **4.1.2 EIP-Agri** 11
The agricultural European Innovation Partnership (EIP-AGRI) works to foster
competitive and sustainable farming and forestry that 'achieves more and
better from less'. It contributes to ensuring a steady supply of food, feed
and biomaterials, developing its work in harmony with the essential natural
resources on which farming depends.
The European Innovation Partnership for Agricultural productivity and
Sustainability (EIP-AGRI) was launched in 2012 to contribute to the
European Union's strategy 'Europe 2020' for smart, sustainable and inclusive
growth. This strategy sets the strengthening of research and innovation as one
of its five main objectives and supports a new interactive approach to
innovation: _European Innovation Partnerships._
The EIP-AGRI pools funding streams to boost interactive innovation. Having an
idea is one thing, turning it into an innovation action is another. Different
types of available funding sources can help get an agricultural innovation
project started, such as the **European Rural Development policy** or the EU's
research and innovation programme **Horizon 2020**. The EIP-AGRI contributes
to integrating different funding streams so that together they contribute to
the same goal and multiply results.
Rural Development will in particular support **Operational Groups** and
**Innovation Support**
**Services** within a country or region. **Horizon 2020** will fund multi-
actor projects and thematic networks involving partners from at least three EU
countries. Other policies may offer additional opportunities.
The EIP-AGRI brings together innovation actors (farmers, advisers,
researchers, businesses, NGOs and others) at EU level and within the rural
development programmes (RDPs). Together they form an EU-wide EIP network. EIP
Operational Groups can be funded under the RDPs, are project-based and tackle
a certain (practical) problem or opportunity which may lead to an innovation.
The Operational Group approach makes the best use of different types of
knowledge (practical, scientific, technical, organisational, etc.) in an
interactive way. An Operational Group is composed of those key actors that are
in the best position to realise the project's goals, to share implementation
experiences and to disseminate the outcomes broadly. The first Operational
Groups are currently being set up in several EU countries and regions.
The Rural Networks' Assembly, which was launched in January 2015, coordinates
two networks - the
EIP-AGRI Network and the European Network for Rural Development (ENRD). The
Assembly
includes several subgroups, one of them being the permanent Subgroup on
Innovation for agricultural productivity and sustainability. This Subgroup on
Innovation will support the EIP-AGRI Network.
The EIP-AGRI website has exciting and interactive features. All visitors can
voice their research needs, discover funding opportunities for innovation
projects and look for partners to connect with. Through the website's
interactive functions, users can share **innovative project ideas** and
practices, information about **research and innovation projects** ,
including projects' results, by filling in the available easy-to-use e-forms.
Various EIP-AGRI-related **publications** are available for download on the
website, providing visitors with information on a wide range of interesting
topics. Future functionalities will be developed for **Operational Groups**
and European funds managing authorities once the programmes start. Through
this collaborative effort, the EIP-AGRI website will become a one-stop-shop
for agricultural innovation in Europe.
The **EIP-AGRI network** is run by the European Commission (DG Agriculture and
Rural Development) with the help of the **EIP-AGRI Service Point** . The
Service Point offers a wide range of tools and services which can help you
further your ideas and projects. It also facilitates networking activities;
enhancing communication, knowledge sharing and exchange through conferences,
**Focus Groups** , workshops, seminars and publications.
A recent series of workshops on Agricultural Data is particularly relevant:
* ‘Data revolution: emerging new data-driven business models in the agri-food sector’, Sofia,
Bulgaria, 22-23 June 2016.
_https://ec.europa.eu/eip/agriculture/en/event/eip-agri-seminar%E2%80%98data-
revolution-emerging-new_
* ‘Data Sharing: ensuring fair sharing of digitisation benefits in agriculture’ Tuesday 4-5 April 2017, Bratislava, Slovakia. _https://ec.europa.eu/eip/agriculture/en/event/eip-agri-workshopdata-sharing_
* ‘Digital Innovation Hubs: mainstreaming digital agriculture’, 1-2 June 2017, Kilkenny, Ireland.
_https://ec.europa.eu/eip/agriculture/en/event/eip-agri-seminar-digital-
innovation-hubs_
_More about EIP-AGRI:_
* **_Brochure: EIP-AGRI network_ **
* **_Brochure: EIP-AGRI Service Point_ **
#### **4.1.3 AgInfra+** 12
**AGINFRA+** aims to exploit core e-infrastructures such as _EGI.eu_ ,
_OpenAIRE_ , _EUDAT_ and _D4Science_ , towards the evolution of the AGINFRA
data infrastructure, so as to provide a sustainable channel addressing
adjacent but not fully connected user communities around Agriculture and Food.
To this end, the project will develop and provide the necessary specifications
and components for allowing the rapid and intuitive development of variegating
data analysis workflows, where the functionalities for data storage and
indexing, algorithm execution, results visualization and deployment are
provided by specialized services utilizing cloud based infrastructure(s).
Furthermore, **AGINFRA+** aspires to establish a framework facilitating the
transparent documentation and exploitation and publication of research assets
(datasets, mathematical models, software components, results and publications)
within AGINFRA, in order to enable their reuse and repurposing from the wider
research community.
**AGINFRA** is the European research hub and thematic aggregator that
catalogues and makes discoverable publications, data sets and software
services developed by Horizon 2020 research projects on topics related to
agriculture, food and the environment.
It is part of the broader vision of the European research e-infrastructure
“European Open Science Cloud”, a synergy between _OpenAIRE_ , _EUDAT_ ,
_GEANT_ , _EGI_ , _LIBER_ .
With the integration of big data processing components from projects like the
Horizon 2020 BigDataEurope and the FP7 _SemaGrow_ , _AGINFRA_ evolves into a
big data analytics capable e-infrastructure for agri-food, to respond to the
needs of three (3) adjacent yet not fully connected user communities:
* _**H2020 SC1** _ Health
* _**H2020 SC2** _ Food security and sustainable agriculture
* _**H2020 SC5** _ Climate action and environment
**AGINFRA+** addresses the challenge of supporting user-driven design and
prototyping of innovative e-infrastructure services and applications. It
particularly tries to meet the needs of the scientific and technological
communities that work on the multi-disciplinary and multi-domain problems
related to agriculture and food. It will use, adapt and evolve existing open
e-infrastructure resources and services, in order to demonstrate how fast
prototyping and development of innovative data- and computing-intensive
applications can take place.
This project builds upon the extensive experience and work of its partners,
who are key stakeholders in the e-infrastructures ecosystem. It also
implements part of a strategic vision shared between **Agroknow** , the
National Agronomic Research Institute of France ( **INRA** ), the Alterra
Institute of the Wageningen University & Research Center ( **ALTERRA** ), the
National Institute for Risk Assessment of Germany ( **BfR** ), and the Food
and Agriculture Organization ( **FAO** ) of the United Nations - the latter
one, not participating as a funded beneficiary, but supporting the project and
its activities. These stakeholders are part of a core group of internationally
recognised players (including the Chinese Academy of Agricultural Sciences)
aiming to put in place a free global data infrastructure for research and
innovation in agriculture, food and environmental science. This data
infrastructure will become an incubator of the large infrastructure
investments that global donors (including the European Commission) make in the
field of agricultural research around the world.
**AGINFRA+** will evolve and develop further the resources and services of the
AGINFRA data infrastructure, which has been developed in the context of the
FP7 _agINFRA_ project. The new project will build upon core components of
AGINFRA, such as:
* the federated data and software registry of _CIARD RING_ ,
* the _AGINFRA API_ gateway for indexing and hosting executable software components for advanced data processing & analysis,
* the open source software stack for data analysis, indexing, publication and querying developed by projects such as FP7 _SemaGrow_ and H2020 _Big Data Europe_ ,
* the semantic backbone of the Global Agricultural Concept Scheme (GACS1) that has been based upon the alignment of FAO’s AGROVOC with the USDA’s National Agricultural Library
Thesaurus and CABI’s Thesaurus,
* the advanced research data set processing & indexing demonstrators developed within FP7 SemaGrow for specific scientific communities such as _Trees4Futures_ and _gMIP_ .
The envisaged pilots will focus on three societal challenges that are of
primary importance for our planet and for humanity:
* Food safety risk assessment and risk monitoring, addressing H2020 SC1 Health, demographic change and well-being.
* Plant phenotyping for food security, addressing H2020 SC2 Food security, sustainable agriculture and forestry, marine and maritime and inland water research, and the Bioeconomy.
* Agro-climatic and Economic Modelling, addressing H2020 SC5 Climate action, environment, resource efficiency and raw materials.
In order to realize its vision, AGINFRA+ will achieve the following
objectives:
* identify the requirements of the specific scientific and technical communities working in the targeted areas, abstracting (wherever possible) to new AGINFRA services that can serve all users;
* design and implement components that serve such requirements, by exploiting, adapting and extending existing open e-infrastructures (namely, OpenAIRE, EUDAT, EGI, and D4Science), where required;
* define or extend standards facilitating interoperability, reuse, and repurposing of components in the wider context of AGINFRA;
* establish mechanisms for documenting and sharing data, mathematical models, methods and components for the selected application areas, in ways that allow their discovery and reuse within and across AGINFRA and served software applications;
* increase the number of stakeholders, innovators and SMEs aware of AGINFRA services through domain specific demonstration and dissemination activities.
The development of fully defined demonstrator applications in each of the
three application areas will make it possible to showcase and evaluate the
AGINFRA components in the context of specific end-user requirements from
different scientific areas.
#### **4.1.4 BigDataEurope** 13
Big Data Europe will undertake the foundational work for enabling European
companies to build innovative multilingual products and services based on
semantically interoperable, large-scale, multilingual data assets and
knowledge, available under a variety of licenses and business models.
Big Data Europe aims to:
* Collect requirements for the ICT infrastructure needed by data-intensive science practitioners tackling a wide range of societal challenges; covering all aspects of publishing and consuming semantically interoperable, large-scale, multi-lingual data assets and knowledge.
* Design and implement an architecture for an infrastructure that meets requirements, minimizes the disruption to current workflows, and maximizes the opportunities to take advantage of the latest European RTD developments, including multilingual data harvesting, data analytics, and data visualization.
Societal challenges and their Big Data focus areas are:
* Health - heterogeneous data linking and integration, biomedical semantic indexing
* Food & Agriculture - large-scale distributed data integration
* Energy - real-time monitoring, stream processing, data analytics, decision support
* Transport - streaming sensor network and geospatial data integration
* Climate - real-time monitoring, stream processing and data analytics
* Social Sciences - statistical and research data linking and integration
* Security - real-time monitoring, stream processing and data analytics, image data analysis
The **Food and Agriculture pilot** (SC2) within BDE is focusing on
viticulture. The problem of discovery and linking of information is present in
every major area of agricultural research and agriculture in general. This is
especially true in viticulture where different research methodologies produce
a great amount of heterogeneous data from diverse sources; scientists need to
be able to find all this information so as to analyse and correlate it to
provide integrated solutions to the emerging problems in the European and
global vineyard. These problems arise largely because of the impact of climate
change and therefore the exploitation of the appropriate grapevine varieties
is very important. Factors to bear in mind include the intensity of diseases,
the intensification of the cultivation, the proper implementation of precision
viticulture systems that affect the quality of viticultural products and their
role in human health.
The overall goal of the SC2 Pilot is to demonstrate the ability of Big Data
technologies to complement existing community-driven systems (e.g. _VITIS_
for the Viticulture Research Community) with efficient large-scale back-end
processing workflows. The pilot deployment is organised in three Cycles with
different targeted objectives:
* **Pilot Cycle 1 (SC2 Pilot Pitch-Deck)** \- The goal of this Pilot Cycle is to showcase a large-scale processing workflow that automatically annotates scientific publications relevant to Viticulture. The focus of the first demonstrator cycle is on the Big Data aspects of such a workflow (i.e. storage, messaging and failure management) and not on the specificities of the NLP modules/tools used in this demonstrator (a toy sketch of such a workflow follows this list).
* **Pilot Cycle 2 (SC2 Pilot Maturity / Functionality Expansion)** \- The goal of this Pilot Cycle is to showcase the ability of scalable processing workflows to handle a variety of data types (beyond bibliographic data) relevant to Viticulture.
* **Pilot Cycle 3 (Lowering SC2 Community Boundaries)** \- The goal of this Pilot Cycle is to provide an engaging, intuitive graphical web interface addressing key data-oriented questions relevant to the Viticulture Research Community, and if possible, intuitive interfaces for end-users for sharing and linking their on-the-field generated data.
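As a toy illustration of the kind of workflow Pilot Cycle 1 describes, the sketch below annotates incoming documents with simple failure management (retry with backoff and a dead-letter list). The function names and the stand-in annotation step are invented for illustration; the real pilot relies on dedicated Big Data components for storage and messaging:

```python
import time

dead_letter: list = []  # documents that repeatedly failed annotation

def annotate(doc: str) -> dict:
    """Stand-in for an NLP annotation module; the pilot's focus is on the
    surrounding storage/messaging plumbing, not on the NLP itself."""
    return {"text": doc, "annotations": ["Vitis vinifera"]}  # dummy output

def process_with_retry(doc: str, max_retries: int = 3, backoff_s: float = 1.0):
    """Toy failure management: retry a failed annotation with linear backoff,
    parking the document in a dead-letter list after repeated failures."""
    for attempt in range(1, max_retries + 1):
        try:
            return annotate(doc)
        except Exception:
            if attempt == max_retries:
                dead_letter.append(doc)
                return None
            time.sleep(backoff_s * attempt)

# Stand-in for a message queue feeding publications into the workflow.
incoming = ["grapevine publication 1", "grapevine publication 2"]
annotated = [r for doc in incoming if (r := process_with_retry(doc)) is not None]
print(annotated)
```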
In SC2 Pilot Cycle 1, content mainly refers to open scientific publications
relevant to Viticulture, available at FAO/AGRIS and NCBI/PubMed in PDF format
(about 26K and 7K publications respectively). In Cycle 2, the content pool has
been extended to include:
* Weather Data, available via publicly available APIs (e.g. OpenWeatherMap, Weather Underground, AccuWeather etc.); a minimal consumption sketch follows below
* User-generated data, e.g. geotagged photos from leaves, young shoots and grape clusters, ampelographic data, SSR-marker data etc.
Additional data sources include:
* Sensor Data, measuring temperature, humidity and luminosity retrieved from sensors installed in selected experimental vineyards,
* ESA Copernicus Sentinel 2 Data, for selected experimental vineyards.
The goal of the inclusion of these data is to complement the existing SC2
Pilot Demonstrator Knowledge Base so as to support complex real-life research
questions, based on the correlation of environmental conditions with real
observations on crop production and quality.
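As an indication of how such publicly available weather APIs might be consumed, here is a minimal Python sketch. The endpoint URL, parameter names and response structure are placeholders and are not verified against the actual OpenWeatherMap or other vendor interfaces:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint in the style of the APIs named above; the URL,
# parameter names and response fields are placeholders, not a real interface.
BASE_URL = "https://api.example-weather.org/data/current"

def fetch_weather(lat: float, lon: float, api_key: str) -> dict:
    """Retrieve current weather conditions for an experimental vineyard."""
    params = urllib.parse.urlencode({"lat": lat, "lon": lon, "appid": api_key})
    with urllib.request.urlopen(f"{BASE_URL}?{params}", timeout=10) as resp:
        return json.load(resp)

# Usage (coordinates and key are illustrative; the call is commented out
# because the endpoint above does not exist):
# observation = fetch_weather(40.63, 22.94, api_key="YOUR_KEY")
# print(observation)
```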
#### **4.1.5 DATABIO** 14
The Data-Driven Bioeconomy project (DataBio) focuses on the production of the
best possible raw materials from agriculture, forestry and fishery for the
bioeconomy industry, to produce food, energy and biomaterials while taking
into account responsibility and sustainability.
In order to meet the above objectives, DataBio is controlling and putting to
use the innovative ICTs and information flows centered mostly around the use
of proximal and remote sensors, in order to provide a streamlined Big Data
Infrastructure for data discovery, retrieval, processing and visualizing, in
support to decisions in bioeconomy business operations.
The main goal of the **DataBio project** is to show the benefits of Big Data
technologies in the raw material production from **agriculture** ,
**forestry** and **fishery/aquaculture** for the bioeconomy industry **to
produce food** , **energy** and **biomaterials** responsibly and sustainably.
**DataBio** proposes to deploy a state of the art, big data platform on top of
the existing partners’ infrastructure and solutions – the **Big DATABIO
Platform** .
The work will be continuous cooperation of experts from end user and
technology provider companies, from bioeconomy and technology research
institutes, and of other partners.
DataBio is working on pilots in three areas: Agriculture, Fishery and
Forestry. The Agricultural pilots are divided into:
* **A. Precision Horticulture including vine and olives**
  * A1. Precision agriculture in olives, fruits, grapes and vegetables
  * A2. Big Data management in greenhouse eco-systems
* **B. Arable Precision Farming**
  * B1. Cereals and biomass crops
  * B2. Machinery management and environmental issues
* **C. Subsidies and insurance**
  * C1. Insurance
  * C2. CAP support
Associated partners and other stakeholders will also be actively involved in
the pilots.
The selected pilots and concepts will be transformed into pilot
implementations using co-innovative approaches and tools, with end users,
experts and other stakeholders from the bioeconomy sector providing the user
and domain understanding needed for the requirement specifications of the
ICT, Big Data and Earth Observation experts and other solution providers in
the consortium.
Based on the preparation and requirement specification work, the pilots are
implemented by selecting the most suitable market-ready or almost
market-ready Big Data and Earth Observation methods, technologies, tools and
services, to be integrated into the common **Big DATABIO Platform**.
During the pilots this close cooperation continues, and feedback from the
bioeconomy sector user companies is harnessed for technical and
methodological upgrades of the pilot implementations.
Based on the pilot results and the new solutions, new business opportunities
are also expected.
In addition, during the pilots the end users participate in trainings to
learn how to use the solutions, and developers from outside the consortium
will be active in hackathons to design and develop new tools, services and
applications for the platform.
**Databio’s expected achievements include but are not limited to:**
* Demonstrate increase of productivity in bioeconomy
* Increase of market share of Big Data technology providers in the bioeconomy sector
* More than double the use of Big Data technology in bioeconomy
* Leverage additional target sector investments by a factor of >5
* More than 100 organizations in demonstrations
* Liaison with other Big Data actions
* Closely working with BDVA
### 4.2 RELATIONSHIP WITH IOF2020
Through overlapping partners between the mentioned initiatives and projects
and IoF2020, there is already quite a natural ground for collaboration and
knowledge exchange on data management.
IoF2020’s coordinator Wageningen University & Research, especially, is
involved in most relevant initiatives and projects. However, since we are
dealing with relatively large initiatives, projects and organizations,
knowledge exchange and collaboration are not automatically guaranteed.
Specific attention needs to be paid to this. Work on very similar specific
pilot areas (e.g. viticulture, agricultural machinery data, etc.) would be a
good starting point for this.
Since the beginning of the IoF2020 project there have already been several
occasions (conferences, workshops, etc.) where explicit connections were made
with these projects and initiatives. It was decided to set up an explicit
collaboration with the DataBio project.
## 5 INVENTORY OF USE CASES
Based on the issues that are identified and described in the previous
chapters, a first scan of the IoF2020 use cases was made.
A quick scan for the constraints to data privacy and security was conducted by
a questionnaire at use case level. The results are listed in **Table 1** .
**Table 1** Potential constraints to data privacy and security for each use
case.
<table>
<tr>
<th>
**Trial and use cases**
</th>
<th>
**Data privacy & security constraints **
</th> </tr>
<tr>
<td>
**Arable Trial**
</td> </tr>
<tr>
<td>
UC 1.1. Within-field management zoning
</td>
<td>
A farmer’s data can be used outside his company only if he agrees. Other
partners provide proprietary software tools and information.
</td> </tr>
<tr>
<td>
UC 1.2 Precision Crop Management
</td>
<td>
The data collected by the Bosch systems are intended to remain confidential
and will be the property of the farmer who has acquired a system. The Bosch
cloud must be able to guarantee this confidentiality and the security of its
data.
The API-AGRO interface, an API management platform operated by a consortium
represented in the Use Case by Arvalis, makes it possible to add a layer of
confidentiality management and of controlled opening of those data to other
actors. The acquired data remain under the control of the system owner; it
will be possible to communicate them under conditions that still have to be
defined, while guaranteeing that the data owner controls the destination of
his data.
The Arvalis models and agro-climatic references that will be used for DSS are
the property of Arvalis.
</td> </tr>
<tr>
<td>
UC1.3 Soya Protein Management
</td>
<td>
No constraints so far about data privacy and security in UC 1.3.
</td> </tr>
<tr>
<td>
UC1.4 Farm Machine Interoperability
</td>
<td>
The constraints will be dependent on the agreements achieved with the partners
and farmers involved in the UCs that we collaborate with, i.e. UC 1.1 and UC
1.3. UC 1.4 members will have to sign agreements with the farmers for the
access to their data, but also NDAs for specific software elements and
equipment components with the other use case participants.
</td> </tr>
<tr>
<td>
**Dairy Trial**
</td> </tr>
<tr>
<td>
UC2.1 Grazing Cow Monitor
</td>
<td>
No specific data protection issues arise
</td> </tr>
<tr>
<td>
UC2.2 Happy Cow
</td>
<td>
The platform is secured to prevent access to cow/farm data by default. Login
opens the data associated with the user. Communication is encrypted.
</td> </tr>
<tr>
<td>
UC2.3 Silent Herdsman
</td>
<td>
An agreement on the use of the data acquired on each trial site has to be
established. The principle is that the data will be owned by the farmer and
will be made available for the purposes of the project. Any public reporting
of the key findings regarding the trial farms must be anonymised and cleared
by the data owners.
</td> </tr>
<tr>
<td>
UC2.4 Remote Milk Quality
</td>
<td>
To our present knowledge, there are no constraints. There will be no data
exchanged. Users’ data will be monitored and handled but not used; only
alerts and checks are exchanged.
</td> </tr>
<tr>
<td>
**Fruit Trial**
</td> </tr>
<tr>
<td>
UC3.1 Fresh table grapes chain
</td>
<td>
No issues mentioned.
</td> </tr>
<tr>
<td>
UC3.2 Big wine optimization
</td>
<td>
None at the time of writing. However, the following considerations will be
taken into account during the implementation phase; others could be added as
needs are discovered:
All communications are expected to be carried out over DTLS (Datagram
Transport Layer Security) version 1.2, i.e. ECC asymmetric cryptography for
key negotiation and authentication, in conjunction with AES symmetric
cryptography for efficient and optimal data exchange.
Since the protocol expected to be used is OMA LwM2M, all the security
features of this protocol are included: access control defining the list of
servers that can interact with the devices, and a bootstrap server and
commissioning service to guarantee the secure provisioning of all credentials
and configuration details.
</td> </tr>
<tr>
<td>
UC3.3 Automated olive chain
</td>
<td>
Farmers involved in the action would like to keep their personal data
private.
</td> </tr>
<tr>
<td>
UC3.4 Intelligent fruit logistics
</td>
<td>
Collection of data on customer sites using our IoT-RTI:
* Who owns the data collected on premise by the IoT-enabled RTI?
* Who can do what with the data?
* Which data can be shown to whom (data-driven business models!)?
* Encryption of data
Questions regarding security of IoT-Technology used in the pilot:
* Is there a danger of hacking, data capturing by others, interference with communication?
* Would it be possible to manipulate collected data?
* Can the central database be accessed by external parties?
* To which security level is it possible to encrypt the data?
</td> </tr>
<tr>
<td>
**Vegetable Trial**
</td> </tr>
<tr>
<td>
UC4.1 City farming leafy vegetables
</td>
<td>
The data generated in a city farm, in general, are related to plant growth and
the operation of a city farm. Data related to persons are out of scope. The
data are owned by the owner of the city farm. The data collected in a city
farm represent a value from an economic point of view. Access to the data in
general only takes place with the consent of the owner. Data access should be
protected. Also, any data link should be secured.
Measures need to be in place against unauthorized or unlawful access or
processing of data as well as against accidental loss or damage of data. Data
access rights management should be in place.
</td> </tr>
<tr>
<td>
UC4.2 Chain-integrated greenhouse production
</td>
<td>
Data should be owned by farmers and agri-business, and privacy issues should
be respected according to European regulation. As for Intellectual Property,
a Consortium Agreement (CA) will be negotiated between all partners, settling
among other things the internal organization of the consortium, reflecting
what has been described about the project management structure. With respect
to this use case, Background IP remains with the partner(s) who created it
and Foreground IP (if any) goes to the partner(s) that developed it.
</td> </tr>
<tr>
<td>
UC4.3 Added value weeding data
</td>
<td>
Yes, farmers need to provide access to their machine data and crop data.
</td> </tr>
<tr>
<td>
UC4.4 Enhanced quality certification system
</td>
<td>
No issues mentioned
</td> </tr>
<tr>
<td>
**Meat Trial**
</td> </tr>
<tr>
<td>
UC5.1 Pig farm management
</td>
<td>
Raw pig farm data is confidential and owned by the farmer – and must only be
accessed in aggregated fashion. Aggregated feed data must be kept in the farm
and only accessed by Cloud Service component when actually needed. All
accesses should be logged so that the farmer can be informed about what is being
read by whom. The same is true for slaughterhouse data.
</td> </tr>
<tr>
<td>
UC5.2 Poultry chain management
</td>
<td>
Privacy:
* Delivering data to platforms (who owns data…)
* The data remain the ownership of the provider – since SADA is an integrated company, all raw data is property of SADA
* Support on legal and ethical issues on workers manipulation model
Security:
* Authentication of the data delivered to the platform
</td> </tr>
<tr>
<td>
UC5.3 Meat Transparency and
Traceability
</td>
<td>
Data privacy and security must be handled within this project. Farms may
provide information which can result in detailed insights into their internal
processes. Data access restriction is provided by the access layer, which
implements a set of static and dynamic access rules for EPCIS events,
supporting different roles and actors. Data providers (farmers) remain the
owners of their data and decide which supply chain partner can access what
kind of information (see the sketch after this table).
</td> </tr> </table>
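As a toy illustration of the static and dynamic access rules described for UC5.3 in the table above, the sketch below filters EPCIS-style events per role (static rule) and per owner-managed grant (dynamic rule). Event fields, roles and the rule logic are invented for illustration and do not reflect the actual access layer implementation:

```python
# Toy illustration of static and dynamic access rules over EPCIS-style events.
# Event fields, roles and rule logic are invented for illustration only.
EVENTS = [
    {"type": "ObjectEvent", "epc": "urn:epc:id:sgtin:x.1", "step": "slaughtering",
     "owner": "farm_A"},
    {"type": "ObjectEvent", "epc": "urn:epc:id:sgtin:x.2", "step": "feeding",
     "owner": "farm_A"},
]

# Static rule: which event steps a role may see at all.
STATIC_RULES = {
    "retailer": {"slaughtering"},
    "veterinarian": {"feeding", "slaughtering"},
}

# Dynamic rule: the data provider (farmer) stays owner and manages per-partner
# grants that can change at any time.
OWNER_GRANTS = {"farm_A": {"retailer", "veterinarian"}}

def visible_events(role: str, requester: str) -> list:
    """Return only the events the requester may see under both rule types."""
    return [
        e for e in EVENTS
        if e["step"] in STATIC_RULES.get(role, set())
        and requester in OWNER_GRANTS.get(e["owner"], set())
    ]

print(visible_events("retailer", "retailer"))  # only the slaughtering event
```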
From Table 1 it can generally be concluded that the use cases differ
considerably in how far they have thought about data management issues. Some
use cases have considered these issues in much detail and are very aware of
what might arise; others say that they do not see any issue, or have only
thought about it in a general way.
The results from this scan will be shared and discussed between the use cases
and it is expected that they can learn from each other. This will result in a
more generic list of data management issues that in the end again can be made
specific for each use case.
For all trials and use cases together, an inventory was made of the types of
data that are expected, as presented in Table 2.
_Table 2 Data that is expected to be generated by all use cases and trials_
<table>
<tr>
<th>
**Type**
</th>
<th>
**Origin**
</th>
<th>
**Format**
</th>
<th>
**Estimated size**
</th> </tr>
<tr>
<td>
Dissemination material:
* press releases,
* leaflets,
* audio-visual material,
* posters,
* images/photos
</td>
<td>
IoF2020 consortia generated
</td>
<td>
Adobe Photoshop
(.psd)
MS PowerPoint (.ppt, pptx)
JPEG (.jpg, .jpeg)
MS Word (.doc, .docx)
Adobe Acrobat Reader
(.pdf)
Adobe Illustrator (.ai)
Hard copy
</td>
<td>
0.5 Gb
</td> </tr>
<tr>
<td>
Demographic and personal data of third parties interviewed
</td>
<td>
Interviews, questionnaires, cooperation agreements,
invitation letters to participate in the pilots, application forms, informed
consent forms
</td>
<td>
Google forms
MS Excel (.xls, .xlsx)
MS Word (.doc, .docx)
Adobe Acrobat Reader
(.pdf)
Comma Separated
Values (.csv)
</td>
<td>
0.5 Gb
</td> </tr>
<tr>
<td>
Demographic and personal data of partners within Use Cases
</td>
<td>
Survey, questionnaires
</td>
<td>
Google forms
MS Excel (.xls, .xlsx)
MS Word (.doc, .docx)
Adobe Acrobat Reader
(.pdf)
Comma Separated
Values (.csv)
</td>
<td>
0.5 Gb
</td> </tr>
<tr>
<td>
Project reports/deliverables with internal reviewing process
</td>
<td>
WP2 team generated
</td>
<td>
MS Excel (.xls, .xlsx)
MS Word (.doc, .docx)
</td>
<td>
0.5 Gb
</td> </tr>
<tr>
<td>
Project reports/deliverables with external reviewing process
</td>
<td>
WP2 team generated
</td>
<td>
MS Excel (.xls, .xlsx)
MS Word (.doc, .docx)
</td>
<td>
0.5 Gb
</td> </tr>
<tr>
<td>
Contact details of project partners and advisory/scientific board(s)
(Name, Email, Phone, Skype ID…)
</td>
<td>
Survey, questionnaires
</td>
<td>
MS Excel (.xls, .xlsx)
MS Word (.doc, .docx)
Adobe Acrobat Reader
(.pdf)
</td>
<td>
0.5 Gb
</td> </tr>
<tr>
<td>
Guidelines for consortium members
</td>
<td>
WP2 team generated
</td>
<td>
MS Word (.doc, .docx)
Adobe Acrobat Reader
(.pdf)
MS PowerPoint (.ppt, pptx)
</td>
<td>
0.5 Gb
</td> </tr>
<tr>
<td>
Outputs generated at project events
</td>
<td>
Agendas and meeting minutes,
Attendance sheets
</td>
<td>
MS Word (.doc, .docx)
Adobe Acrobat Reader
(.pdf)
MS PowerPoint (.ppt, pptx)
</td>
<td>
0.2 Gb
</td> </tr>
<tr>
<td>
Dataset produced by aggregating data during project implementation
</td>
<td>
Interview
Focus group discussion
Observation
Survey
</td>
<td>
MS Word (.doc, .docx)
Adobe Acrobat Reader
(.pdf)
MS PowerPoint (.ppt, pptx)
</td>
<td>
1 Gb
</td> </tr>
<tr>
<td>
A dataset documenting and providing evidence for either a report or a
publication produced in the context of project activities
</td>
<td>
Interview
Focus group discussion
Observation
Survey
Desk research etc.
</td>
<td>
MS Word (.doc, .docx)
Adobe Acrobat Reader
(.pdf)
</td>
<td>
0.5 Gb
</td> </tr>
<tr>
<td>
Peer-reviewed scientific publications
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Scientific publications with internal reviewing process
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## 6 DATA MANAGEMENT PLAN
### 6.1 GENERAL GUIDELINES FOR DATA MANAGEMENT IN IOF2020
From the previous Chapters in this document the following general guidelines
for data management in IoF2020 can be derived:
* Research papers that are derived from the project should be published according to the open access policy
* Research data should be stored in a central repository according to the FAIR principles: findable, accessible, interoperable and re-usable (a minimal sketch of a FAIR-oriented dataset description follows this list)
* The use cases in IoF2020 should be clearly aligned to the European Data Economy policy and more specifically in line with the principles and guidelines provided by the stakeholder community _i.e._ COPA-COGECA.
* All IoF2020 use cases should be GDPR-compliant.
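As a purely illustrative sketch of FAIR-oriented storage, the snippet below builds a minimal machine-readable description of a dataset before deposit in a repository. The schema is loosely DataCite-flavoured; neither the fields nor the values are an IoF2020-mandated format:

```python
import json

# Minimal, loosely DataCite-flavoured dataset description; the schema and all
# values are illustrative, not a format mandated by IoF2020.
dataset_record = {
    "identifier": "10.5281/zenodo.0000000",   # placeholder DOI -> findable
    "title": "Example within-field management zoning measurements",
    "creators": ["Example Farm Cooperative"],
    "publicationYear": 2018,
    "rights": "CC BY 4.0",                    # explicit licence -> accessible, re-usable
    "formats": ["text/csv"],                  # open format -> interoperable
    "subjects": ["precision agriculture", "IoT"],
}

# Deposit the description alongside the data in the central repository.
with open("dataset_record.json", "w", encoding="utf-8") as fh:
    json.dump(dataset_record, fh, indent=2)
```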
### 6.2 FURTHER ACTIONS AND RECOMMENDATIONS
The following steps should be taken to further develop data management in
IoF2020:
* Investigate what research data is involved in the use cases and other project activities and define how they should be treated according to the open data policy.
* Participate actively in the debate and developments of the European Data Economy and data sharing in agriculture.
* Analyze and explore the use cases in a deeper way in order to identify which data management issues potentially play a role and define plans how to deal with them.
* Although many collaborative actions with other relevant projects and initiatives are already taking place and there are many natural connections through the project partners, a more systematic and structural approach should be explored in order to maximize the benefits and impact of the mutual activities on data management.
* Collect information concerning GDPR from all IoF2020 use cases based on questionnaires that can be provided by several partners that are already GDPR-compliant.
* Prepare for each use case a Data Protection Policy, Privacy Policy Statement, Consent Forms and a Data Breach Notification Procedure.
For the latter two points, an IoF2020 GDPR package has already been developed
containing several tools and templates (see the separate file ‘ _IoF2020 GDPR
package.zip_ ’). However, a full implementation of this package is expected
to be too heavy a burden for all IoF2020 use cases. Through an iterative
approach with the use cases, and with advice from internal and external
experts, we will search for a light but sufficient implementation.
This Data Management Plan and its future updates will be actively communicated
with all partners in IoF2020 and the use cases in particular. It is planned to
have a workshop on this issue, possibly combined with other similar issues
such as ‘Ethics’ during the annual project meeting in Spring 2018 organized by
WP4.
## 7 CONCLUSIONS
Data, or Big Data, has rapidly become a new resource and asset in the current
economy, including in the agricultural sector. This development raises
several issues that have to be addressed, such as data availability, quality,
access, security, responsibility, liability, ownership, privacy, costs and
business models. In Europe several initiatives and projects have already been
established to work on this in a general context (e.g. the EU open data
policy) and more specifically in agriculture (e.g. GODAN, COPA-COGECA).
The consequences for IoF2020 are partly mandatory (open access publishing,
open research data and GDPR) and for the other part guiding principles that
also have to be further explored. In this document we have presented a first
version of a Data Management Plan with concrete actions to be taken in order
to establish open, transparent data management in IoF2020.
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0082_bIoTope_688203.md
**Executive Summary**
The bIoTope project lays the foundation for creating open innovation
ecosystems supporting the Internet of Things (IoT) by providing a platform
that enables companies to easily create new IoT systems and to rapidly harness
available information using advanced Systems-of-Systems (SoS) capabilities for
connected smart objects. The project will develop three large scale pilots in
smart cities to provide social, technical and business proofs-of-concept of
the bIoTope enabled SoS ecosystems.
This document describes the initial Data Management Plan of the bIoTope
project and the initial data sets that have been identified for the large
scale pilots. The data sets are described in this report in accordance with
the European Commission guidelines addressing key attributes of the data type,
format, metadata, use of standards and sharing modalities. At this early stage
of the project, benchmarks and other facilities to be used in the development
of the bIoTope components and their resultant data have not yet been fully
identified. These data sets will be described in future Data Management Plan
updates as the technology development tasks progress and relevant data sets
are developed or adopted.
**1\. Introduction**
The bIoTope project participates in the Open Data pilot under the Horizon 2020
Programme and, in the interest of supporting the bIoTope ecosystems, seeks,
whenever feasible, to share open data by making data sets used and created
within the project publicly available. This will apply both to data sets used
for the large scale pilots later in the project and to data used for
validation of the work carried out under the research and development tasks,
which may not be included as part of the formal reporting from the project.
This deliverable outlines the initial Data Management Plan (DMP) for bIoTope,
in line with the H2020 guidelines for data management plan creation 1 . It
identifies the initial classes of datasets that the project foresees to
utilise and create primarily with respect to the smart cities pilots and
target communities providing an outline of the type, format, metadata and
sharing modalities for the initial data sets identified in the early stages of
the project.
The purpose of this DMP is to provide a description of the main elements of
the data management policy that will be used by the consortium with regard to
all the data sets that will be generated or adopted by the project. The DMP is
not a fixed document and is intended to evolve during the operation of the
bIoTope project. This initial version of the DMP includes an overview of the
data sets to be produced by the project, and the specific conditions that are
attached to them. Updated versions of the DMP will provide more details and
describe the practical data management procedures that have been implemented
by the bIoTope project.
### 1.1. Data lifecycle
The data management planning for the bIoTope project is intended to evolve
over the project duration to eventually cover the entire data life cycle for
the data created or adopted by the project. Figure 1 depicts the typical
lifecycle for a given data set.
Data lifecycle support is an important aspect related to creating a
sustainable bIoTope ecosystem where evolving data sets can help sustain the
ecosystems and create opportunities for new applications and services that
further exploit the bIoTope technologies. Supporting policies and procedures
concerning bIoTope related data sets will be further analysed and outlined in
later deliverables addressing bIoTope ecosystem management and business
modelling.
**Figure 1: Data Lifecycle Elements 2 **
Figure 1 depicts a typical lifecycle and there can be many variations. For
example, some Internet of Things applications may utilise or share raw data
from Data Collection sources without any Data Analysis.
Key elements to be considered in managing data within the data lifecycle are
the following (a brief sketch of how these might be recorded per data set
follows the list):
* File formats
* Organisation and naming conventions
* Quality control
* Access modalities
* Persistence and recovery
* Metadata and data conversion
* Sharing and preservation
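To make these elements concrete, the sketch below shows how they could be recorded per data set. The field names and example values are illustrative only and do not represent a bIoTope-defined schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DataSetLifecycle:
    """Illustrative per-data-set record of the lifecycle elements above."""
    name: str
    file_formats: List[str]          # e.g. ["text/csv", "application/json"]
    naming_convention: str           # organisation and naming conventions
    quality_control: str             # e.g. "range checks on sensor values"
    access_modality: str             # e.g. "open", "restricted", "embargoed"
    retention_years: int = 5         # persistence and recovery horizon
    metadata_standard: str = "DCAT"  # metadata and data conversion target
    sharing_repository: str = ""     # sharing and preservation location

example = DataSetLifecycle(
    name="Lyon Bottle Banks",
    file_formats=["text/csv"],
    naming_convention="lyon_bottlebanks_YYYYMMDD.csv",
    quality_control="fill-level values validated against sensor range",
    access_modality="open",
    sharing_repository="example-repository.org",
)
print(example)
```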
These elements are addressed for the initial data sets described in this plan
in accordance with the European Commission guidelines. At this early stage of
the bIoTope project some of the data set information provided should be
considered preliminary and subject to change.
### 1.2. Intended audience
This deliverable is intended for use both internally within the bIoTope
project and externally to create awareness of the initial data sets that have
been identified for use in the bIoTope ecosystem and the planning that is
already taking place within the project concerning access and preservation of
data. This report provides initial guidance on data management to the project
partners and is particularly relevant for partners responsible for data
collection and large scale pilots. It should be considered a snapshot at this
stage, which will evolve throughout the project as further details concerning
the technologies and pilots are specified and further procedures and potential
infrastructures are created or modified for storing and managing project
related data.
### 1.3. Structure of this deliverable
This deliverable has been structured with two main sections addressing the
different aspects of data management within the bIoTope project as follows:
* Section 2 provides an overview of the initial data policies established for the project
* Section 3 describes the data sets already identified for use in the large scale pilots, some of which exist and others will be created during the project operation

A final section with conclusions is also included.
**2\. Project data policies**
### 2.1. Participation in the Pilot on Open Research Data
The bIoTope project participates in the Pilot on Open Research Data launched
by the European Commission along with the Horizon 2020 Programme. The
consortium strongly believes in the concepts of open research and development,
and in the benefits that the European Internet of Things community can draw
from allowing reuse of data on much larger scales across Europe. Therefore,
whenever feasible, data produced by the project can potentially be published
under open access procedures – though this objective may be constrained in
view of other principles related to IPR and security as described below.
### 2.2. IPR management and security
Many project partners have or will have Intellectual Property Rights (IPR) on
the project technologies and data, which for some partners are essential for
economic sustainability. The bIoTope project consortium will therefore have an
obligation to protect these data and to publish data only if the concerned
partner(s) have granted explicit authorisation. Another effect of IPR
management is that some of the data collected through the bIoTope project may
be of high value for application and service providers and therefore due
consideration of the business models and ecosystem should be taken in advance
of open access decisions for project data sets. All measures should therefore
be taken to prevent data from being leaked or hacked, which could potentially
undermine the ecosystem planning for the project or the commercial
opportunities for bIoTope project partners. Repositories used by the project
for data that have potential commercial value will be secured until decisions
are taken concerning open access by the respective partner(s), and in view of
the planned business models within the bIoTope ecosystem.
For sensitive data a holistic security approach will be undertaken to protect
the three main pillars of information security: confidentiality, integrity,
and availability. The security approach will consist of an assessment of
security risks for each data set followed by an impact analysis. This analysis
will be performed on the information and data processed by the bIoTope system,
their flows and any risk associated to their processing. Particular assessment
attention will be placed on any data sets containing personally identifiable
information.
### 2.3. Personal data protection
For some of the activities to be carried out by the project, it may be
necessary to collect basic personal data (e.g. full name, contact details,
background) for use in the large scale pilots, even though the project will
avoid collecting such data unless deemed necessary. Such data will be
protected in compliance with the EU's Data Protection Directive 95/46/EC1
aiming at protecting personal data. National legislations in Belgium, Finland
and France applicable to the project will also be strictly followed. All
personal data collected by the project will be done after giving data subjects
full details on the pilot experiments to be conducted, and after providing
options to opt out of collection of any personal data.
**3\. Initial bIoTope data sets**
The different data sets that will be gathered and processed by the bIoTope
project are described in the following subsections. The descriptions follow
the guidelines provided by the European Commission with respect to data set
characteristics to be described. These initial data sets will be updated and
extended with additional data sets by the project partners responsible for the
different pilots to be conducted later in the project, as well as by partners
involved in technology development of the core technology components where
benchmark and other relevant data sets may be generated or adopted during
development.
Table 1 provides an overview of the initial datasets identified for the
project.
**Table 1: Summary of initial bIoTope data sets**
<table>
<tr>
<th>
**No.**
</th>
<th>
**Data set name**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Lyon Bottle Banks
</td>
<td>
Point data representing the location of the metropolis bottle banks.
</td> </tr>
<tr>
<td>
2
</td>
<td>
Lyon Bottle Banks status
</td>
<td>
One or several data sets will be created to store the following measures
(including historical data) coming from each bottle bank sensor
</td> </tr>
<tr>
<td>
3
</td>
<td>
Lyon Real-time traffic conditions
</td>
<td>
Traffic density on the road sections, refreshed every minute
</td> </tr>
<tr>
<td>
4
</td>
<td>
Lyon Road real-time event
</td>
<td>
Point data representing a road perturbation.
</td> </tr>
<tr>
<td>
5
</td>
<td>
Lyon Temperatures and humidity measures
</td>
<td>
Point data supporting temperature and humidity.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Lyon Trees evapotranspiration
</td>
<td>
The data set stores, for each tree which is monitored, the evapotranspiration
rate.
</td> </tr>
<tr>
<td>
7
</td>
<td>
Brussels Schools location
</td>
<td>
Coordinates and details of schools along with their location.
</td> </tr>
<tr>
<td>
8
</td>
<td>
Brussels Entry points of school
</td>
<td>
Likely to be geo coordinates for entry points of schools in the Brussels
Capital region
</td> </tr>
<tr>
<td>
9
</td>
<td>
Brussels Green spaces
</td>
<td>
Geo localization of all the green spaces in the Brussels Capital region
</td> </tr>
<tr>
<td>
10
</td>
<td>
Brussels Waterflows
</td>
<td>
Geo localization of all the public water installation
(fountains, etc.) in the Brussels Capital region
</td> </tr>
<tr>
<td>
11
</td>
<td>
Brussels Stops of trams, metro, bus
</td>
<td>
Geo localization of all the stops of the public transport the
Brussels Capital region
</td> </tr>
<tr>
<td>
12
</td>
<td>
Brussels Itineraries of tram, metro, bus
</td>
<td>
Geo localization of all the routes of the public transport in the Brussels
Capital region
</td> </tr>
<tr>
<td>
13
</td>
<td>
Brussels Details of the stops of tram, metro, bus
</td>
<td>
Geo localization of all the additional information of the stops of the public
transport in the Brussels Capital region
</td> </tr>
<tr>
<td>
14
</td>
<td>
Brussels Timetable of tram, metro, bus
</td>
<td>
Timetable of the stops of the public transport in the Brussels Capital region
</td> </tr>
<tr>
<td>
15
</td>
<td>
Brussels Real-time travel time of tram, metro, bus
</td>
<td>
Realtime travel timing of the public transport in the Brussels Capital region
</td> </tr>
<tr>
<td>
16
</td>
<td>
Brussels Cyclist routes
</td>
<td>
Geo localization of the cyclist routes in the Brussels Capital region
</td> </tr>
<tr>
<td>
17
</td>
<td>
Brussels RER Bicycle
</td>
<td>
Geo localization of the cyclist routes in the Brussels Capital region
</td> </tr>
<tr>
<td>
18
</td>
<td>
Brussels Parking for 3 Bikes
</td>
<td>
Geo localization of the parking for 3 Bikes in the Brussels Capital region
</td> </tr>
<tr>
<td>
19
</td>
<td>
Brussels Free-service bike stations
</td>
<td>
Geo localization of the free-service bike stations in the
Brussels Capital region
</td> </tr>
<tr>
<td>
20
</td>
<td>
Brussels Free-service bike stations tariffs
</td>
<td>
Tariffs of the free-service bike stations in the Brussels Capital region
</td> </tr>
<tr>
<td>
21
</td>
<td>
Brussels Real-time flow of cyclists
</td>
<td>
Geo localization of the flows of cyclists in the Brussels Capital region
</td> </tr>
<tr>
<td>
22
</td>
<td>
Brussels Drive directions
</td>
<td>
General driving directions for emergency services in the
Brussels Capital region
</td> </tr>
<tr>
<td>
23
</td>
<td>
Brussels Zone 30
</td>
<td>
Geo localization of the Zones 30 Brussels Capital region
</td> </tr>
<tr>
<td>
24
</td>
<td>
Brussels Crossroads with red lights
</td>
<td>
Geo localization of crossroads and red lights in the Brussels
</td> </tr> </table>
<table>
<tr>
<th>
**No.**
</th>
<th>
**Data set name**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Capital region
</td> </tr>
<tr>
<td>
25
</td>
<td>
Brussels Public on-street parking
</td>
<td>
Geo localization of the public parking spots in the Brussels Capital region
</td> </tr>
<tr>
<td>
26
</td>
<td>
Brussels Realtime flow of cars
</td>
<td>
Geo localization of the car traffic flows in the Brussels Capital region
</td> </tr>
<tr>
<td>
27
</td>
<td>
Brussels Road works and events
</td>
<td>
Geo localization of the public road works and events
(weekly markets, etc.) in the Brussels Capital region
</td> </tr>
<tr>
<td>
28
</td>
<td>
Brussels Congestion of public roads
</td>
<td>
General congestion status of the public roads in the Brussels Capital region
</td> </tr>
<tr>
<td>
29
</td>
<td>
Brussels Sidewalks
</td>
<td>
Geo localization of the public sidewalks in the Brussels Capital region
</td> </tr>
<tr>
<td>
30
</td>
<td>
Brussels Pedestrian crossroads
</td>
<td>
Geo localization of the pedestrian crossroads in the Brussels Capital region
</td> </tr>
<tr>
<td>
31
</td>
<td>
Brussels Dangerous traffic points
</td>
<td>
Geo localization of the dangerous traffic points in the
Brussels Capital region
</td> </tr>
<tr>
<td>
32
</td>
<td>
Brussels Realtime flow of pedestrians
</td>
<td>
Geo localization of the realtime flow of pedestrians in the
Brussels Capital region
</td> </tr>
<tr>
<td>
33
</td>
<td>
Brussels Traffic signs
</td>
<td>
Geo localization of the traffic signs in the Brussels Capital region
</td> </tr>
<tr>
<td>
34
</td>
<td>
Brussels Fire station localization
</td>
<td>
Geo localization of the fire stations in the Brussels Capital region
</td> </tr>
<tr>
<td>
35
</td>
<td>
Brussels Hospitals localization
</td>
<td>
Geo localization of the hospitals in the Brussels Capital region
</td> </tr>
<tr>
<td>
36
</td>
<td>
Brussels Garbage trucks localization
</td>
<td>
Geo localization of garbage trucks in the Brussels Capital region
</td> </tr>
<tr>
<td>
37
</td>
<td>
Brussels Extra Long Busses localization
</td>
<td>
Geo localization of Extra Long Busses in the Brussels Capital region
</td> </tr>
<tr>
<td>
38
</td>
<td>
Brussels Reserved Traffic lanes
</td>
<td>
Geo localization of the reserved traffic lanes in the Brussels Capital region
</td> </tr>
<tr>
<td>
39
</td>
<td>
Brussels Fire trucks localization
</td>
<td>
Geo localization of the fire trucks in the Brussels Capital region
</td> </tr>
<tr>
<td>
40
</td>
<td>
Brussels Hydrants
</td>
<td>
Geo localization of the hydrants in the Brussels Capital region
</td> </tr>
<tr>
<td>
41
</td>
<td>
Helsinki KNX data
</td>
<td>
Water, electricity, heating consumption data from apartments.
</td> </tr>
<tr>
<td>
42
</td>
<td>
Helsinki Presence data
</td>
<td>
Data to identify whether a person is at home or not, using one or more
personalised services (e.g. GPS location of the mobile phone, personal
calendar, home/away button).
</td> </tr>
<tr>
<td>
43
</td>
<td>
Open Charging Station Vocabulary
</td>
<td>
A linked open vocabulary covering charging station services for e-Mobility
will be created.
</td> </tr> </table>
The description of each data set follows the template provided by the European
Commission and includes the following elements:
* **Data Set Name** – used to keep track of different data sets.
* **Contributor(s)** – organisations that are responsible for the data. This can be a project partner, but external organisations can also be contributors.
* **Description** – briefly summarises the type of data elements that exist, or will be created, and the format.
* **Standards** – any standards the data set might follow in the way elements are described or structured.
* **Quality Assurance** – procedures that might be in place to ensure quality is maintained such as consistency or reliability of the data.
* **Access** – indicates whether the existing data set, or the data to be created in the bIoTope project, will be publicly accessible or restricted.
* **Archiving and preservation** – indicates whether facilities for preserving the data are provided by a project partner or a third party, or whether the bIoTope project will need to create such facilities (e.g. the project website).
The characteristics of each of the identified data sets are summarized in the
following sections.
### 3.1. Lyon Bottle Banks
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Bottle banks [Silos à verre]
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Lyon metropolis - Cleanliness Department [Métropole de Lyon / Direction de la
propreté (DP)]
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Point data representing the location of the metropolis bottle banks.
Formats: WMS, WFS, KML, GeoJSON, Shapefile (zip), JSON
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
Coordinate reference system: RGF93 / CC46 (EPSG:3946)
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party ☐bIoTope project
☐Other (please describe):
</td> </tr> </table>
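Since the layer is published through OGC web services (WMS/WFS) in the RGF93 / CC46 projection, the sketch below illustrates one plausible way to consume it: request the features as GeoJSON over WFS and reproject the coordinates to WGS84. The endpoint URL and layer name are hypothetical placeholders, not the actual Lyon open-data service.

```python
# A minimal consumption sketch (not an official client): fetch the bottle-banks
# layer as GeoJSON over WFS and reproject from RGF93 / CC46 (EPSG:3946) to
# WGS84 (EPSG:4326). The endpoint URL and layer name are placeholders.
import requests
from pyproj import Transformer

WFS_URL = "https://data.example.org/wfs"  # hypothetical endpoint
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "silos_a_verre",  # hypothetical layer name
    "outputFormat": "application/json",
}

features = requests.get(WFS_URL, params=params, timeout=30).json()["features"]

# EPSG:3946 is a projected CRS in metres; convert to lon/lat degrees.
to_wgs84 = Transformer.from_crs("EPSG:3946", "EPSG:4326", always_xy=True)
for feature in features:
    x, y = feature["geometry"]["coordinates"]
    lon, lat = to_wgs84.transform(x, y)
    print(f"bottle bank at lon={lon:.6f}, lat={lat:.6f}")
```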
### 3.2. Lyon Bottle Banks Status
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Bottle banks status (to be created)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Lyon metropolis
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
One or several data sets will be created to store the following measures
(including historical data) coming from each bottle bank sensor:
* Filling rate
* Internal temperature
* Location (GPS)
* Acceleration (as an event, when the bottle bank is emptied), with timestamp
Format: to be defined
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party ☐bIoTope project
☐Other (please describe):
</td> </tr> </table>
### 3.3. Lyon Real-time traffic conditions
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Real-time traffic conditions [Etat du trafic temps reel]
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Lyon metropolis – Roads and mobility department / [Métropole de Lyon /
Direction de la voirie (DV)]
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Traffic density on the road sections, refreshed every minute
Formats: WMS, WFS, KML, GeoJSON, Shapefile (zip), JSON, XML
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☐External party ☐bIoTope project
☐Other (please describe):
</td> </tr> </table>
### 3.4. Lyon Road real-time event
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Road real-time event [Evènement routier temps reel]
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Lyon metropolis – Roads and mobility department / [Métropole de Lyon /
Direction de la voirie (DV)]
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Point data representing a road perturbation.
Formats: WMS, WFS, KML, GeoJSON, Shapefile (zip), JSON, XML
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
DATEX II descriptive information (type of perturbation, start date, end date)
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party ☐bIoTope project
☐Other (please describe):
</td> </tr> </table>
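DATEX II, the standard referenced above, is an XML-based model for road traffic events. Since reproducing its actual schema is out of scope here, the sketch below only walks a generic XML payload with Python's standard library; the element and attribute names are illustrative placeholders, not real DATEX II terms.

```python
# Generic sketch: walk an XML road-event document (e.g. a DATEX II payload)
# and print element names, attributes and text. No real schema is assumed;
# the payload below is an illustrative placeholder only.
import xml.etree.ElementTree as ET

xml_payload = """<roadEvents>
  <event type="roadworks" start="2017-05-01" end="2017-05-03"/>
</roadEvents>"""  # placeholder, not actual DATEX II

for elem in ET.fromstring(xml_payload).iter():
    print(elem.tag, elem.attrib, (elem.text or "").strip())
```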
### 3.5. Lyon Temperatures and humidity measures
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Temperatures and humidity measures (to be created)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Lyon metropolis – Climate plan unit / Citizens (crowdsourcing) / External
partners
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Point data recording temperature and humidity measurements.
Formats: to be defined.
</td>
<td>
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr> </table>
### 3.6. Lyon Trees evapotranspiration
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Trees evapotranspiration
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Lyon metropolis – Climate plan unit
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
The data set stores the evapotranspiration rate for each monitored tree.
Formats: to be defined.
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party ☐bIoTope project
☐Other (please describe):
</td> </tr> </table>
### 3.7. Brussels Schools location
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Schools location (3.1.4.1)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
CIRB (or Others)
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
To be determined, but likely to be similar to the following:
<table>
<tr>
<th>
**Nom**
</th> </tr>
<tr>
<td>
**Language**
</td> </tr>
<tr>
<td>
**NameFr**
</td> </tr>
<tr>
<td>
**NameNl**
</td> </tr>
<tr>
<td>
**AdressFr**
**Street**
**Number**
**PostalCode**
**City**
</td> </tr>
<tr>
<td>
**AdressNl**
</td> </tr>
<tr>
<td>
**Phone**
</td> </tr>
<tr>
<td>
**Fax**
</td> </tr>
<tr>
<td>
**eMail**
</td> </tr>
<tr>
<td>
**WebSite**
</td> </tr>
<tr>
<td>
**Director**
</td> </tr>
<tr>
<td>
**Coordinates**
</td> </tr> </table>
</td>
<td>
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☒2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.8. Brussels Entry points of school
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Entry points of school (3.1.4.2)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
CIRB (or Others)
</td>
<td>
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Coordinates for entry points of schools in the area
</td>
<td>
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☒2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.9. Brussels Green spaces
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Green spaces (3.1.4.3)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
CIRB / IBGE
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of all the green spaces in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJSON
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
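GeoJSON, the standard listed for this data set, encodes each green space as a feature that pairs a geometry with free-form properties. The sketch below assembles a minimal feature of that shape; the property names are illustrative placeholders, not the actual attribute schema of the Brussels data.

```python
# Minimal sketch of a GeoJSON feature of the kind used by the green-spaces
# layer. Property names are illustrative, not the published schema.
import json

green_space = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[          # one exterior ring, lon/lat pairs
            [4.3517, 50.8503],
            [4.3525, 50.8503],
            [4.3525, 50.8510],
            [4.3517, 50.8510],
            [4.3517, 50.8503],     # ring is closed by repeating the first point
        ]],
    },
    "properties": {"name_fr": "Parc exemple", "name_nl": "Voorbeeldpark"},
}

collection = {"type": "FeatureCollection", "features": [green_space]}
print(json.dumps(collection, indent=2))
```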
### 3.10. Brussels Waterflows
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Waterflows (3.1.4.4)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
IBGE
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of all the public water installations (fountains, etc.) in the
Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GML 3.2.1
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☒2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.11. Brussels Stops of trams, metro, bus
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Stops of trams, métro, bus (3.1.4.5)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
STIB
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of all the stops of the public transport in the Brussels Capital
region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GTFS
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☒2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
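GTFS, the standard referenced for the STIB data sets, publishes stops as a plain CSV file (stops.txt) whose column names are fixed by the specification, so stop locations can be read with nothing more than the standard library. A minimal sketch, assuming a locally downloaded feed:

```python
# Minimal sketch: read stop locations from a GTFS feed's stops.txt.
# stop_id, stop_name, stop_lat and stop_lon are standard GTFS columns.
import csv

with open("gtfs/stops.txt", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row["stop_id"], row["stop_name"],
              float(row["stop_lat"]), float(row["stop_lon"]))
```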
### 3.12. Brussels Itineraries of tram, metro, bus
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Itineraries of tram, métro, bus (3.1.4.6)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
STIB
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of all the routes of the public transport in the Brussels
Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GTFS
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.13. Brussels Details of the stops of tram, metro, bus
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Details of the stops of tram, métro, bus (3.1.4.7)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
STIB
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of additional information about the stops of the public
transport in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GTFS
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.14. Brussels Timetable of tram, metro, bus
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Timetable of tram, métro, bus (3.1.4.8)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
STIB
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Timetable of the stops of the public transport in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GTFS
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.15. Brussels Real-time travel time of tram, metro, bus
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Real-time travel time of tram, métro, bus (3.1.4.9)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
STIB
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Real-time travel times of the public transport in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GTFS
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.16. Brussels Cyclist routes
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Cyclist routes (3.1.4.10)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Brussels Mobility
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the cyclist routes in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJSON EPSG
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.17. Brussels RER Bicycle
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
RER Bicycle (3.1.4.11)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Brussels Mobility
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the cyclist routes in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJSON
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.18. Brussels Parking for 3 Bikes
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Parking for 3 Bikes (3.1.4.12)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Brussels Mobility
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the parking for 3 Bikes in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJSON, Shapefile, KML
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.19. Brussels Free-service bike stations
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Free-service bike stations (3.1.4.13)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
JCDecaux
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the free-service bike stations in the Brussels Capital
region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.20. Brussels Free-service bike stations tariffs
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Free-service bike stations tariffs (3.1.4.14)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
JCDecaux
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Tariffs of the free-service bike stations in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.21. Brussels Real-time flow of cyclists
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Real-time flow of cyclists (3.1.4.15)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Orange BE
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the flows of cyclists in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.22. Brussels Drive directions
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Drive directions (3.1.4.16)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
SIAMU
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
General driving directions for emergency services in the Brussels Capital
region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☒2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.23. Brussels Zone 30
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Zone 30 (3.1.4.17)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Brussels Mobility
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the Zones 30 in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
EPSG
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.24. Brussels Crossroads with red lights
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Crossroads with red lights (3.1.4.18)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Brussels Mobility
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of crossroads and red lights in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☒2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.25. Brussels Public on-street parking
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Public on-street parking (3.1.4.19)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Parking Brussels
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the public parking spots in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJSON EPSG
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.26. Brussels Realtime flow of cars
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Realtime flow of cars (3.1.4.20)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Orange BE
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the car traffic flows in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☒2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.27. Brussels Road works and events
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Road works and events (3.1.4.21)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Brussels Mobility
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the public road works and events (weekly markets, etc.) in
the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJSON
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☒2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.28. Brussels Congestion of public roads
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Congestion of public roads (3.1.4.22)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
WAZE
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
General congestion status of the public roads in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☒2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.29. Brussels Sidewalks
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Sidewalks (3.1.4.23)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
IBGE
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the public sidewalks in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
EPSG
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.30. Brussels Pedestrian crossroads
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Pedestrian crossroads (3.1.4.24)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Brussels Mobility
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the pedestrian crossroads in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.31. Brussels Dangerous traffic points
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Dangerous traffic points (3.1.4.25)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Brussels Mobility
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the dangerous traffic points in the Brussels Capital
region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.32. Brussels Realtime flow of pedestrians
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Realtime flow of pedestrians (3.1.4.26)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Orange BE
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the realtime flow of pedestrians in the Brussels Capital
region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☒Project partner ☐External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.33. Brussels Traffic signs
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Traffic signs
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Orange BE
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the traffic signs in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
To be determined
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☒1 School ☐2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.34. Brussels Fire station localization
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Fire station localization
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
SIAMU
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the fire stations in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJson
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☐1 School ☒2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.35. Brussels Hospitals localization
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Hospitals localization
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
SIAMU
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the hospitals in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJson
</td>
<td>
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☐1 School ☒2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.36. Brussels Garbage trucks localization
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Garbage trucks localization (RealTime)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Brussels Cleanliness (Bruxelles-Propreté)
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of garbage trucks in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJson
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☐1 School ☒2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.37. Brussels Extra Long Busses localization
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Extra Long Busses localization (RealTime)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
STIB
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of Extra Long Busses in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJson
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party
☐Other (please describe):
</td>
<td>
☐bIoTope project
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☐1 School ☒2 Emergency ☐ 3 Bikes
</td>
<td>
</td> </tr> </table>
### 3.38. Brussels Reserved Traffic lanes
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Reserved Traffic lanes
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Brussels Mobility / STIB
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the reserved traffic lanes in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJson
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☐1 School ☒2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.39. Brussels Fire trucks localization
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Fire trucks localization (RealTime)
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
SIAMU
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the fire trucks in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJson
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☐1 School ☒2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.40. Brussels Hydrants
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Hydrants
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
SIAMU
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Geo localization of the hydrants in the Brussels Capital region
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
GeoJson
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Use cases**
</td>
<td>
☐1 School ☒2 Emergency ☐ 3 Bikes
</td> </tr> </table>
### 3.41. Helsinki KNX data
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
KNX data
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
The facility management company of the Fiskars and Fregatti buildings in
Kalasatama, the residents, and the service providers (the companies ABB and Helen)
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Water, electricity and heating consumption data from the apartments.
Formats: not known yet
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
KNX
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr> </table>
### 3.42. Helsinki Presence data
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Presence data
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
Residents of the Fiskars and Fregatti houses
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
Data to identify whether a person is at home or not, using one or more
personalised services (for example, GPS location of the mobile phone,
personal calendar, or a home/away button).
Format: to be decided
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☐Public access ☒Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☒External party ☐bIoTope project
☐Other (please describe):
</td> </tr> </table>
### 3.43. Open Charging Station Vocabulary
<table>
<tr>
<th>
**Data set name or reference**
</th>
<th>
Open Charging Station Vocabulary (MobiVoc) - planned
</th> </tr>
<tr>
<td>
**Contributor(s)**
</td>
<td>
eccenca, Fraunhofer IAI, University of Bonn and everybody who is interested
</td> </tr>
<tr>
<td>
**Data set description and format**
</td>
<td>
A linked open vocabulary covering charging station services for e-Mobility
will be created.
Format: RDF
</td> </tr>
<tr>
<td>
**Standards (if any)**
</td>
<td>
Semantic Web Standards (W3C)
</td> </tr>
<tr>
<td>
**Quality assurance (if any)**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
☒Public access ☐Restricted access
☐Other (please describe):
</td> </tr>
<tr>
<td>
**Archiving and preservation responsibility**
</td>
<td>
☐Project partner ☐External party ☐bIoTope project
☒Other (please describe): not decided yet
</td> </tr> </table>
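Since the vocabulary is only planned at the time of writing, the sketch below merely illustrates the general shape of a linked open vocabulary built on W3C Semantic Web standards, here serialised as Turtle via rdflib; the namespace and all term names are hypothetical placeholders, not the eventual MobiVoc terms.

```python
# Illustrative sketch of an RDF description of a charging station.
# The namespace and every term below are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

MV = Namespace("http://example.org/mobivoc#")  # placeholder namespace

g = Graph()
g.bind("mv", MV)

station = MV["station/42"]
g.add((station, RDF.type, MV.ChargingStation))           # hypothetical class
g.add((station, RDFS.label, Literal("Example station")))
g.add((station, MV.numberOfChargingPoints, Literal(4)))  # hypothetical property

print(g.serialize(format="turtle"))
```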
# Conclusion
This DMP provides an overview of the data that the bIoTope project will
produce or adopt, together with the related challenges and constraints that
need to be taken into consideration. The analysis contained in this report
supports the procedures and infrastructures to be implemented by the bIoTope
project to efficiently manage both the existing data that will be used and the
data that will be produced within the project.
It is too early in the project for a complete identification of the data sets
that will be used or created, as the initial user requirements have only just
been completed and substantial design of the bIoTope technologies, including
the identification of required data sets, is still underway. Some of the data
that will need to be collected is not yet sufficiently well defined to be
described at the required level of detail in this preliminary plan; other data
sets will be identified later in the project. This first version of the DMP
should therefore be considered an initial view that will be updated
periodically as the project tasks progress.
By project completion, many bIoTope project partners will be owners or
producers of relevant data, in particular data associated with the large-scale
pilots. This implies specific responsibilities, and this initial version of
the DMP is intended to create awareness among the project partners of the
importance of appropriate procedures with respect to data collection,
publication in the case of open-access data sets, and the use of metadata to
increase the value of data, as well as the persistence of all the information
necessary for the optimal use and reuse of bIoTope-related data sets in
support of the bIoTope ecosystems.
Specific attention will be given to ensuring that the data made public
violates neither partner IPR rules nor regulations and good practices related
to personal data protection. For the latter point, procedures such as the
systematic anonymisation of personal data should be anticipated whenever data
created within the bIoTope project carries a potential for misuse or for the
disclosure of personally identifiable information, unless specific security
measures have been taken.
<table>
<tr>
<th>
**1**
</th>
<th>
**Introduction**
</th> </tr> </table>
The purpose of this document is to present the initial Data Management Plan
(DMP) of the BRAIN-IoT project and to provide the guidelines for maintaining
the DMP during the project.
The Data Management Plan methodology approach adopted for the compilation of
D6.1 has been based on the updated version of the “Guidelines on FAIR Data
Management in Horizon 2020 version 3.0 released on 26 July 2016 by the
European Commission Directorate-General for Research & Innovation” [1]. It
defines how data in general and research data in particular will be handled
during the research project and will make suggestions for the after-project
time. It describes what data will be collected, processed or generated within
the scope of the project, what methodologies and standards shall be followed
during the collection process, whether and how these data shall be shared
and/or made open for the evaluation needs, and how they shall be curated and
preserved.
All BRAIN-IoT data will be handled according to EU Data protection and Privacy
regulation and the General Data Protection Regulation (GDPR) [2].
The BRAIN-IoT DMP addresses the following issues:
* Data Summary
* FAIR data
* Making data findable, including provisions for metadata
* Making data openly accessible
* Making data interoperable
* Increase data re-use
* Allocation of resources
* Data security
* Ethical aspects
* Other issues
According to the EU’s guidelines regarding the DMP, the document will be
updated during the project lifetime (in the form of deliverables) whenever the
Project Board considers an update necessary.
BRAIN-IoT will be deployed in two pilot sites in Coruna and Valencia, Spain,
with the aim to be replicated in several other places and domains. More
specifically, BRAIN-IoT is also envisaged to be evaluated within the context
of use cases identified in running Large Scale Pilot (LSP) projects.
Currently (M5 of the project), the exact definition, deployment and usage of
BRAIN-IoT functionalities are not yet completely defined. Therefore, we will
need to update the DMP with the data that is being collected/created at each
pilot site according to their usage and whether they can be published as Open
Data.
# 1.1 Scope
This document is generated by WP6 ”Test, Demonstration and Evaluation”, and
more specifically by task T6.1 ”Integration and Lab-scale Evaluation”.
The scope of the DMP is to describe the data management life cycle for all
data sets to be collected, processed or generated in all Work Packages during
the 36 months of the Brain-IoT project. FAIR Data Management is highly
promoted by the Commission and since Brain-IoT deals with several kinds of
data, relevant attention has been given to this task.
However, the Data Management Plan is going to be updated throughout the course
of the project and more specifically, extended information on data and data
management will be included in the upcoming deliverables D6.3 – “ _Phase 1
Integration and Evaluation Framework_ ”, due on M16, and D6.5 - “ _Phase 2
Integration and Evaluation Framework”_ , due on M28.
# 1.2 Methodology
The DMP [1] concerns all the data sets that will be collected, processed
and/or generated, shared, and deleted when not needed anymore, within the
project.
The methodology the consortium follows to create and maintain the project DMP
is hereafter outlined:
1. Create a data management policy.
1. Using the elements that the EC guidelines [1] proposes to address for each data set.
2. Adding the strategy that the consortium uses to address each of the elements.
2. Create a DMP template that will be used in the project for each of the collected data sets, see Section 5 - DMP dataset description template.
3. Creating and maintaining DMPs
1. If a data set is collected, processed and/or generated within a work package, a DMP should be filled in. For instance, training data sets, example collections etc.
2. For each of the pilots, when it is known which data will be collected, the DMP for that pilot should be filled in.
4. The filled DMPs should be added to the upcoming D6.3 and D6.5, describing which data are collected within the project as well as how it is managed.
5. Towards the end of the project, an assessment will be made about which data is valuable to be kept as Open Data after the end of the project.
1. For the data that is considered to be valuable an assessment of how the data can be maintained and the cost involved will be made. The Consortium will also evaluate the possibility to share data, or a subset, under an Open Data Commons Open Database License (ODbL).
The deliverable is organized as follows:
**Chapter 2** outlines a data overview in the BRAIN-IoT project. It details
BRAIN-IoT data categories, data types and metadata.
**Chapter 3** outlines the data management policy in BRAIN-IoT about dataset
naming and collection, giving also an insight about the Open Research Data
Pilot under H2020 guidelines and FAIR Data principle, as well as how to
achieve it.
**Chapter 4** presents the identified approach to be used in order to describe
the set of data generated and collected by the project.
# 1.3 Related documents
<table>
<tr>
<th>
**ID**
</th>
<th>
**Title**
</th>
<th>
**Reference**
</th>
<th>
**Version**
</th>
<th>
**Date**
</th> </tr>
<tr>
<td>
DoA
</td>
<td>
Description of Action/ Grant Agreement
</td>
<td>
ISMB
</td>
<td>
1.0
</td>
<td>
2017-10-09
</td> </tr>
<tr>
<td>
D1.1
</td>
<td>
Project Handbook, Quality & Risk Management
Plan
</td>
<td>
IM
</td>
<td>
1.1
</td>
<td>
2018-02-19
</td> </tr>
<tr>
<td>
D2.1
</td>
<td>
Initial Visions, Scenarios and Use Cases
</td>
<td>
UGA
</td>
<td>
1.0
</td>
<td>
2018-06-23
</td> </tr> </table>
<table>
<tr>
<th>
**2**
</th>
<th>
**Data Management and the GDPR**
</th> </tr> </table>
The EU General Data Protection Regulation (GDPR) brings revolutionary changes
to European data protection laws. Some principles found from the GDPR are
defined to correspond to both the technological developments happened in
recent years, and to better answer the requirements for privacy protection in
the digitized world of today and tomorrow.
The principles relating to the personal data management are set out in GDPR’s
Article 5(1)
* Lawfulness, fairness and transparency: personal data shall be processed lawfully, fairly and in a transparent manner in relation to the data subject. To be more transparent while managing and processing data, making privacy policies more user friendly and promoting the rights of users could be considered.
* Purpose limitation: personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes.
* Data minimization: personal data shall be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed. Considering the purpose, only necessary data is managed and processed. Data minimization is strongly related to purpose limitation, since enough data should be collected to achieve the purpose, but only the strictly amount needed.
* Accuracy: personal data shall be accurate and, where necessary, kept up to date. The erasure or rectification of inaccurate personal data must be implemented without delay.
* Storage limitation: personal data shall be kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.
* Integrity and confidentiality: personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organizational measures.

Article 5(2) provides for perhaps the most important principle of all: the principle of accountability, which sets an obligation on data controllers to be responsible for, and able to demonstrate, compliance with the GDPR. It complements the GDPR’s transparency requirements; data controllers must not only comply with the GDPR but must also be able to demonstrate compliance, e.g. by documenting their decisions while managing and processing data.
The GDPR became applicable in May 2018. This means all partners within the
consortium have to follow the new rules and principles. Because the regulation
is so new, consortium tools and partner-specific guidelines for data
management are not yet fully available.
This chapter addresses how the founding principles of the GDPR will be
followed in the BRAIN-IoT project.
# 2.1 Lawfulness, fairness and transparency
BRAIN-IoT project describes all handling of personal data in its Data
Management Plan. Some of the answers requested cannot be provided at the
moment of writing this report. Therefore, updates to the plan will be provided
in the next deliverables. Meanwhile, the project Wiki tool (see D1.1 – “
_Project Handbook, Quality & Risk Management Plan _ ”), used as a logger of
all the ongoing activities related to the project, will be used as a working
tool to log also information related to DMP and will be updated accordingly as
soon as new information about data sets becomes available. The collected
information will subsequently be officially reported in the upcoming
deliverables D6.3 – “ _Phase 1 Integration and Evaluation Framework_ ”, due on
M16, and D6.5 - “ _Phase 2 Integration and Evaluation Framework”_ , due on
M28.
All data gathering from individuals will require informed consent of the
subjects who are engaged in the project. Informed consent requests will
consist of an information letter and a consent form. This will state the
specific causes for the experiment, or other activity, how the data will be
handled, stored, and shared. The request will also inform the subjects of
their rights to have data updated or removed, and the project’s policies on
how these rights are managed.
As far as possible, BRAIN-IoT project will anonymise the personal data.
Whenever considered necessary, further consent will be asked to use the data
for open research purposes, this includes presentations at conferences,
publications in journals as well as depositing a data set in an open
repository at the end of the project. The consortium tries to be as
transparent as possible in their collection of personal data: while collecting
data, information leaflet and consent form will describe the kind of
information, the manner in which it will be collected and processed, if, how,
and for which purpose it will be disseminated and if and how it will be made
open access. Finally, the subjects will have the possibility to request what
kind of information has been stored about them and they can request to be
removed from the results.
# 2.2 Purpose limitation
BRAIN-IoT project will not collect any data that is outside the scope of the
project. Each partner will only collect data which is needed within the scope
of their specific work package.
# 2.3 Data minimisation
BRAIN-IoT will collect only data that is relevant for the project’s research
questions and demonstration. However, while testing the system in an
environment including the interaction with human beings, it could be possible
to collect indirect data related to the personal behaviours of the involved
individuals. Since this data can be highly personal, it will be treated
according to all guidelines on personal data and won’t be shared without
anonymization or explicit consent of the involved persons.
# 2.4 Accuracy
All data collected will be checked for consistency.
Since all data is gathered within a specific timeframe, we have chosen not to
keep the data up to date, since doing so would hinder our research. However,
we will try to capture the data as accurately as possible; for example, a
“warehouse map” could be stored as “warehouse map, June 2018”. This removes
the necessity of keeping this information up to date.
# 2.5 Storage limitation
All data that will no longer be used for research purposes will be deleted as
soon as possible. All personal data, or data that can be traced back to
personal information or behaviours, will be made anonymous as soon as possible.
At the end of the project, if the data has been anonymised, the data set could
be considered to be released as open dataset. If data cannot be made
anonymous, it will be pseudonymised as much as possible and stored according
the archiving rules of the partner institutions who was responsible for the
management of the specific data to be stored.
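To make the pseudonymisation step concrete, the following is a minimal sketch (not a prescribed project mechanism) of how a partner could replace direct identifiers with stable codes using a keyed hash; the key, field names and record are hypothetical:

```python
import hmac
import hashlib

# Hypothetical project-internal key; it would be stored separately from the
# data set and destroyed once re-identification is no longer needed.
SECRET_KEY = b"brain-iot-pseudonymisation-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. a name) with a stable pseudonym.

    The same input always yields the same code, so records can still be
    linked, but the original value cannot be recovered without the key.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return "P-" + digest[:12]

record = {"name": "Jane Doe", "role": "warehouse operator"}
record["name"] = pseudonymise(record["name"])
print(record)  # e.g. {'name': 'P-3f2a...', 'role': 'warehouse operator'}
```

Note that keyed hashing is pseudonymisation, not anonymisation: whoever holds the key can re-identify subjects, which is why such data remains subject to the archiving rules described above.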
# 2.6 Integrity and confidentiality
All personal data will be handled with appropriate security measures applied.
Each partner who is responsible for the management of specific data will store
or share data through means and channels that comply with the GDPR.
# 2.7 Accountability
Within the scope of the project, the project and quality management is responsible for correct data management. Whether the partners follow the GDPR principles will be checked regularly during the project lifetime. For each data set, a responsible person has been appointed at partner level, who will be held accountable for that specific data set.
# 3 Data in BRAIN-IoT: an Overview
The BRAIN-IoT project will deal with a large amount of raw data to measure the
benefit of IoT and federation of IoT platforms within the two selected
scenarios, i.e. Service Robotics and Water Critical Infrastructure Management,
and also in other scenarios to be selected from the ones identified in Large
Scale Pilot (LSP) projects, i.e. AUTOPILOT, MONICA, ACTIVAGE, IoF2020 and
SynchroniCity.
From raw data, a large amount of derived data can be produced to address
multiple research needs and enable Smart Behaviours. Some processing, such as
cleaning, verification, conversion, aggregation, summarization or reduction
could also be applied to raw data according to specific needs derived from the
use cases.
In any case, data must be well documented in order to facilitate and foster sharing, to enable validity assessments, and to enable efficient usage.
Thus, each data set must be described using additional information called metadata. The latter must provide information about the data source, the data transformations applied, and the conditions under which the data was produced.
# 3.1 Data Set Categories
The BRAIN-IoT project will produce different categories of data sets:
* _Context data_ : data that describe the context of an experiment.
* _Acquired and derived data_ : data that contain all the collected information related to an experiment.
* _Aggregated data_ : data summary obtained by reduction of acquired data and generally used for data analysis.
## 3.1.1 Context Data
Context data is any information that helps to explain observation during a
measurement campaign. Context data can be collected, generated or retrieved
from existing data. For example, it contains information such as presence of
humans or presence of obstacles in the robot-path, quality of water, etc.
## 3.1.2 Acquired and Derived Data
Acquired data are all data collected during the course of the study for the sole purpose of the analysis. Derived data is created by different types of transformations, including data fusion, filtering, and classification. Derived data is usually required to address specific needs of the use cases and may contain derived measures and performance indicators referring to a time period in which specific conditions are met. This category includes measures from sensors on robotic platforms or IoT devices, and subjective data collected from either the users or the environment.
The following list outlines the data types and sources that will be collected:
**Service Robotics:**
The service robotics scenario identified three different interactions of
robots with external world (see D2.1):
* Robot-thing interaction (e.g. robot needs of crossing door or using lifts, interactions with conveyor belts etc.)
* Robot-environment interaction in order to have environment context information (e.g. alarm system failure/errors, obstacles or humans in the way, detection of beacons etc.)
* Robot-robot interaction to enable self-organization and collaborative features (such as map generation and shared resources)
These use cases make it possible to roughly outline an initial set of data types to be dealt with:
* robots involved in the scenario will be endowed with capabilities to scan and navigate the entire area of the warehouse and share the acquired information to update the knowledge base (e.g., map) on the go.
* robots will also be used for collecting additional information implicitly e.g. room temperature, presence of humans, also paying special attention to their privacy and any image recorded during operation, detection of in-path obstacles, other IoT devices etc.
* robots can also collect context information such as the presence of an alarm system, layout, number of items to interact with, and loads per day (which may also be an indicator of the performance and productivity of the factory), detection of beacons, etc.
**Critical Water Management Infrastructure scenario:**
Currently, part of the systems that support the business processes is implemented in a platform called SICA. Relevant data for this scenario concern the urban water domain.
More specifically, in D2.1 the following domains have been identified, which outline an initial set of types of data to be dealt with:
_RESOURCE_
* Connection of a multiparameter probe to measure the water quality control parameters in reserve water (surface waters).
* Connection with the gauging station to measure the circulating flow of the river (entries and exits of the reserve).
* Pluviometry and temperature.
_TREATMENT_
* Headstock deposit levels.
  * Volume
  * Cl/pH levels
* Pump systems at the plant.
  * Pump from the headstock to treatment.
  * Pump of treated water.
_DISTRIBUTION_
* Distribution deposit levels.
  * Volume
  * Cl/pH/turbidity
* Pump and repump systems.
* Section control systems.
  * Flows
  * Cl/pH/turbidity
* Tele-read meters:
  * Domestic. Control sections: ABERING platform.
  * Commercial. Sectors. iEcoCity platform.
  * Large clients. Complex multi-sensor remote systems.
* Control of green zone irrigation.
  * Irrigation programme. Connection with automatons.
  * Meteorological control: rain forecast.
## 3.1.3 Aggregated data
Aggregated data contains a specific part of the acquired or derived data (raw data). Its smaller size allows simple storage, e.g. in database tables, and easy usage suitable for data analysis. To obtain aggregated data, several data reduction processes are performed. The reduction process summarizes the most important aspects of the data into a list of relevant parameters or events, through one or more of the following processes: validation, curation, conversion, annotation. Aggregated data is generally created to answer different research questions. It is expected to be verified and cleaned, thus facilitating its use for analysis purposes.
# 3.2 Metadata
This section provides first recommendations regarding the description of the data produced by the BRAIN-IoT project. As the project will collect several data categories and several data types, several metadata descriptions must be provided to describe the characteristics of each measure or component, as well as the origin of the data and how it was produced and collected.
The BRAIN-IoT project will follow and adapt the metadata type recommendations provided by the FOT-Net Data project ( http://fot-net.eu/ ). This project identifies in its Data Sharing Framework several metadata types that can be applied to BRAIN-IoT. The following list provides a first version of the metadata that may be managed by the project in the upcoming months, together with its content. A more detailed version will be provided in the upcoming deliverables D6.3 – "_Phase 1 Integration and Evaluation Framework_", due in M16, and D6.5 – "_Phase 2 Integration and Evaluation Framework_", due in M28.
## 3.2.1 Metadata attributes of time-history data
Time-history data corresponds to the history of a measurement over time. It can be collected by legacy instruments, IoT devices or IoT platforms.
Time-history data stores the variation over time of a single or complex physical value. To enable its re-use, each dataset provides a metadata description that includes the following descriptive attributes:
* Precision (accuracy)
* Unit of measure
* Sample rate (frequency of the measure)
* Filtering (low-pass, interpolation, etc.)
* Origin (data source)
* Type (Integer, Float, String)
* Error codes (full description of error codes)
* Quality (Quality measure related to this measure)
* Enumeration specification (Defines how to convert constant to correct value, e.g.: 1 means Left, 2 means Right)
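As an illustration of how these attributes could be captured in practice, the sketch below models them as a small Python structure; the field names and the example sensor are hypothetical, not part of the project specification:

```python
from dataclasses import dataclass, field

@dataclass
class TimeHistoryMetadata:
    """Descriptive attributes of a time-history measure, as listed above."""
    precision: str            # accuracy, e.g. "+/- 0.5 degC"
    unit: str                 # unit of measure
    sample_rate_hz: float     # frequency of the measure
    filtering: str            # e.g. "low-pass", "interpolation", "none"
    origin: str               # data source (sensor, IoT device or platform)
    value_type: str           # "Integer", "Float" or "String"
    error_codes: dict = field(default_factory=dict)  # code -> description
    quality: str = "raw"      # quality measure related to this measure
    enumeration: dict = field(default_factory=dict)  # e.g. {1: "Left", 2: "Right"}

# Hypothetical description of a warehouse temperature measurement:
temperature_meta = TimeHistoryMetadata(
    precision="+/- 0.5 degC", unit="degC", sample_rate_hz=0.1,
    filtering="none", origin="warehouse-sensor-42", value_type="Float",
    error_codes={-1: "sensor offline"})
```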
## 3.2.2 Metadata attributes of aggregated data
As aggregated data varies depending on the purpose of the experiment, it can
be described as time history measures or as time segment. Time segment is a
sub-set of data parameters or measures generated by data summarization or data
reduction.
This metadata type should include the following descriptive attributes:
* Description (Purpose of the aggregated data)
* Definition (Algorithm applied on the aggregated measures)
* Origin (Measures used to calculate the aggregated data)
* Unit (Unit of output value)
## 3.2.3 Metadata attributes of self-reported data
Self-reported data corresponds to interviews, surveys or questionnaires. This
metadata type should include the following descriptive attributes:
* Description (Purpose of the questionnaire)
* Instructions (way how the collection process was executed)
* Type (Free text, single or multiple choices, etc.)
* Options (description of possible alternatives)
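A corresponding sketch for self-reported data, again with illustrative field values only:

```python
# Hypothetical metadata record for a questionnaire item, following the
# attribute list above (Description, Instructions, Type, Options).
question_metadata = {
    "description": "Perceived ease of use of the robot interface",
    "instructions": "Paper questionnaire handed out after each warehouse trial",
    "type": "single choice",
    "options": ["very easy", "easy", "neutral", "hard", "very hard"],
}
```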
# 4 BRAIN-IoT Data Management Policy
The responsible party for creating and maintaining the DMP for a data set is the partner that creates/collects that data. If a data set is collected, processed and/or generated within a work package, a DMP should be created. Before each pilot execution, it should be clear which data sets are collected/created in the pilot and how the data will be managed, i.e. the DMPs for the pilot data must be ready and accepted. This will be done individually for each of the pilots, because the pilots are in different domains and involve different types of data and events.
# 4.1 Naming and identification of the Data set
To have a mechanism for easily identifying the different collected/generated
data, we will use a naming scheme. The naming scheme for BRAIN-IoT datasets
will be a simple hierarchical scheme including country, pilot, creating or
collecting partner and a describing data set name. This name should be used as
the identification of the data set when it is published as Open Data in
different open data portals. The structure of the naming of the dataset will
be as follows:
BRAINIOT_{Country+Area Code or WP}_{Pilot Site or WP}_{Responsible
Partner}_{Description}_{Data Set Sub Index}
**Figure 1: BRAIN-IoT Data Set Naming Scheme**
The parts are defined as follows:
* BRAINIOT: Static for all data sets and is used for identifying the project.
* Country+Area Code: The two letter ISO 3166-1 country code for the pilot where data has been collected or generated plus the numeric routing code that identifies each geographic area in the telephone numbering plan, e.g. ES96.
* WP: the work package label along with the work package number, e.g., WP6.
* Pilot Site: The name of the pilot site where the data was collected, without spaces with CamelCaps in case of multiple words, e.g. ServiceRobotics etc.
* Responsible Partner: The partner that is responsible for managing the collected data, i.e. creates and maintains the Data Management plan for the data set. Using the acronyms from D1.1, e.g. ISMB
* Description: Short name for the data set, without spaces with CamelCaps in case of multiple words, e.g., WarehouseMap, WaterPollution, etc.
* Data Set Sub Index: Optional numerical index starting from 1. The purpose of the dataset sub index is that data sets created/collected at different times can be distinguished and have their individual meta data.
BRAINIOT_ES96_ServiceRobotics_ROB_Warehouse_1
**Figure 2: BRAIN-IoT Data Set Naming Example**
In the example shown in Figure 2, the data set is created within the BRAIN-IoT project in the city of Valencia, Spain, at the Service Robotics pilot site. Robotnik is responsible for the relevant Data Management plan for the dataset. The dataset contains warehouse data and is the first of a series of data sets collected at different times.
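As a sketch only, the naming scheme lends itself to a small helper that composes identifiers consistently across partners; the function name and signature are illustrative:

```python
from typing import Optional

def dataset_name(country_area_or_wp: str, pilot_site_or_wp: str,
                 partner: str, description: str,
                 sub_index: Optional[int] = None) -> str:
    """Compose a BRAIN-IoT data set identifier following the scheme
    BRAINIOT_{Country+Area Code or WP}_{Pilot Site or WP}_
    {Responsible Partner}_{Description}_{Data Set Sub Index}."""
    parts = ["BRAINIOT", country_area_or_wp, pilot_site_or_wp,
             partner, description]
    if sub_index is not None:  # the sub index is optional
        parts.append(str(sub_index))
    return "_".join(parts)

# Reproduces the example from Figure 2:
assert (dataset_name("ES96", "ServiceRobotics", "ROB", "Warehouse", 1)
        == "BRAINIOT_ES96_ServiceRobotics_ROB_Warehouse_1")
```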
There can be situations where the data needs to be anonymised with regards to
the location the data has been collected, for instance at some pilots it might
not be allowed to publish people count data with the actual event location for
security reasons. In these cases, the Country and Pilot Site will be replaced
by string UNKNOWN when it is made available as Open Data.
For data sets that are not connected to a specific pilot site the Pilot Site
should be replaced with the prefix WP followed by the Work Package number that
creates and maintains the Data Management plan for the dataset, e.g., WP6. The
same applies to the Country part which also should be replaced with the prefix
WP followed by the Work Package number in the cases where the data set is not
geographically dependent, such as pure simulations or statistics.
# 4.2 Data Summary / Data set description
The data collected/created needs to be described including the following
information:
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
  * Provide the identification of the re-used data, i.e. a BRAIN-IoT identifier or a pointer to external data, if possible.
* Specify the origin of the data
* State the expected data size (if known)
* Outline the data utility: to whom will it be useful
# 4.3 FAIR Data
FAIR data management means in general terms, that research data should be
“FAIR” (Findable, Accessible, Interoperable and Re-usable). These principles
precede implementation choices and do not necessarily suggest any specific
technology, standard, or implementation solution.
## 4.3.1 Making data findable, including provisions for metadata
This point addresses the following issues:
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism.
* Outline the naming conventions used.
* Outline the approach towards search keywords.
* Outline the approach for clear versioning.
* Specify standards for metadata creation (if any).
As far as the metadata are concerned, the way the consortium will capture and
store information should be described. For instance, for data records stored
in a database with links to each item, metadata can pinpoint their description
and location.
There are various disciplinary metadata standards; in addition, the BRAIN-IoT consortium has identified a number of available best practices and guidelines for working with Open Data, mostly from organisations or institutions that support and promote Open Data initiatives, which will be taken into account. These include:
* FOT-Net Data project
* Open Data Foundation
* Open Knowledge Foundation
* Open Government Standards
Furthermore, data should be interoperable and compliant with respect to data annotation and data exchange.
## 4.3.2 Making data openly accessible
The objectives of this aspect address the following issues:
* Specify which data will be made openly available and, in case some data is kept closed, explain the reason why.
* Specify how data will be made available.
* Will the data be added to any Open Data registries?
* Specify what methods or software tools are needed to access such data, if a documentation is necessary about the software and if it is possible to include the relevant software (e.g. in open source code).
* Specify where data and associated metadata, documentation and code are deposited.
* Data that will be considered safe in terms of privacy, and useful for release, could be made available for download under the ODbL License.
* Specify how access will be provided in case there are restrictions.
## 4.3.3 Making data interoperable
This aspect refers to the assessment of the data interoperability specifying
which data and metadata vocabularies, standards or methodologies will be
followed in order to facilitate interoperability. Moreover, it will address
whether standard vocabulary will be used for all data types present in the
data set in order to allow inter-disciplinary interoperability.
In the framework of the BRAIN-IoT project, we will deal with many different types of data coming from very different sources; in order to promote interoperability, we will follow the guidelines below (a sketch of the first guideline follows the list):
* OGC SensorThings API model for time series data [4], such as environmental readings etc.
* If the data is part of a domain with well-known open formats that are in common use, this should be selected.
* If the data does not fall in the previous categories, an open and easily machine-readable format should be selected.
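For the first guideline, a single water-quality reading could be represented as an OGC SensorThings API Observation roughly as follows; the Datastream id and the pH value are hypothetical, and a real deployment would POST the document to the /Observations resource of a SensorThings server:

```python
import json

# A water-quality reading expressed as a SensorThings Observation.
observation = {
    "phenomenonTime": "2019-06-01T10:15:00Z",  # when the value was measured
    "result": 7.4,                             # e.g. measured pH value
    "Datastream": {"@iot.id": 12},             # links the reading to its series
}
print(json.dumps(observation, indent=2))
```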
## 4.3.4 Increase Data Re-use
This aspect addresses the following issues:
* Specify how the data will be licensed to permit the widest reuse possible.
  * Tool to help selecting a license: https://www.europeandataportal.eu/en/content/show-license
  * If a restrictive license has been selected, explain the reasons behind it.
* Specify when data will be made available for re-use.
* Specify if the data produced and/or used in the project is useable by third parties, especially, after the end of the project.
* Provide a data quality assurance process description, if any.
* Specify the length of time for which the data will remain re-usable.
In order to maximize the reusability of data, the ODbL licence could be considered in some cases as a good candidate for distributing datasets. ODbL allows users:
* to copy, distribute and use the database;
* to produce works from the database;
* to modify, transform and build upon the database;
as long as they:
* attribute any public use of the database, or works produced from the database, in the manner specified in the ODbL; for any use or redistribution of the database, or works produced from it, the license of the database must be made clear to others and any notices on the original database kept intact;
* offer any adapted version of the database, or works produced from an adapted database, under the ODbL as well;
* when redistributing the database, or an adapted version of it, use technological measures that restrict the work (such as DRM) only if a version without such measures is also redistributed.
# 4.4 Allocation of Resources
This aspect addresses the following issues:
* Estimate the costs for making the data FAIR and describe the method of covering these costs.
o This includes, if applicable, the cost for anonymising data.
* Identify responsibilities for data management in the project.
* Describe costs and potential value of long-term preservation.
# 4.5 Data security and Privacy
Based on the self-assessment performed by the BRAIN-IoT consortium, no major ethics issues are foreseen for project activities. Nevertheless, the consortium recognizes the potential risks that the deployment of the IoT technology developed in BRAIN-IoT could generate. In fact, the project has a dedicated work package, WP5 "End-to-end Security, Privacy and Trust Enablers", specifically conceived to mitigate these risks, which focuses on:
* Threat Modelling and Assessment (Task 5.1);
* Decentralized Authorization, Authentication and Trust (Task 5.2);
* Privacy awareness and control (Task 5.3);
* End-to-end data security and provenance (Task 5.4).
The project consortium is committed to conducting responsible research and
innovation and will respect careful experimentation methodologies whenever end
users are present in experimentations:
* end users will get a complete briefing on the project, the experimentation and any potential risks as part of their training.
* the project will ensure that any end user involved understands and consents to the experiment.
In addition to the above approach, which will be adopted during the project implementation and beyond, BRAIN-IoT carried out an ethics self-assessment of risks at proposal stage and identified two main points of concern:
* the involvement of end users in the experiments run on the test sites;
* the potential collection and handling of personal data.
This evaluation has also been performed taking into consideration possible links between the BRAIN-IoT project and the IoT Large Scale Pilots. However, it is worth noting that, in those cases, BRAIN-IoT will act as a technology solution provider and will not handle the data and user involvement aspects (which will remain within the scope of the Large Scale Pilot projects).
In the following, the rules defined for handling data are presented. More specifically, the data collected will be treated as confidential, and security processes and techniques will be applied to ensure its confidentiality. Overall, the following general principles will apply to any data collection:
* Transparency of data usage: The user – the data subject in European Union (EU) parlance – shall give explicit consent to the usage of the data.
* Collected Data shall be adequate, relevant and not excessive: The data shall be collected on “need to know” principle. This principle is also known as “Data Minimization”. The principle also helps to setup the user contract, to fulfil the data storage regulation and enhance the “Trust” paradigm.
* Collector shall use data for an explicit purpose: Data shall be collected for legitimate reasons and shall be deleted (or anonymised) as soon as it is no longer relevant.
* Collector shall protect data at the communication level: The integrity of the information is important because modification of received information could have serious consequences for overall system availability. The user has accepted to disclose information to a specific system, not to all systems. The required level of protection depends on the data to be protected, weighing the cost of protection against the consequences of data disclosure to unauthorized systems.
* Collector shall protect collected data in data storage: The user has accepted to disclose information to a specific system, not to all systems. It may also be mandatory to obtain infrastructure certification. The required level of protection depends on the data to be protected, weighing the cost of protection against the consequences of data disclosure to unauthorized systems. For example, user financial information can be used to perform automatic billing; such data shall be carefully protected. Security keys on the device side and server side are very exposed and shall be properly protected against hardware attacks.
* Collector shall allow the user to access / remove Personal Data: Personal Data may be considered the property of the user. The user shall be able to verify the correctness of the data and request corrections if necessary. Dynamic Personal Data – for instance home electricity consumption – shall also be available to the user for consultation. For static user identity, this principle is simply the application of current European regulations regarding access to the user profile.
# 4.6 Ethical aspects
Some of the most mature tests and demonstrations of the project will be run in
“live” environment (city) where ordinary citizens are present.
The development of new human behaviours is an important impact of ICT that can be felt by end users as positive, neutral or negative. It is indeed a task of the project to perform a user-centred evaluation of the BRAIN-IoT solution (Task 6.2).
An additional task, which could possibly continue beyond the duration of the project, aims to ensure that end user involvement takes place under conditions that are as good as possible for the end users. This will involve the following activities:
* Engaging with end users only in an informed way: making sure they are aware of the presence of experiments and that relevant documentation, in an understandable format (plain language, avoiding technical jargon), is available.
* Gathering end user consent as a prerequisite for interaction and any data collection
* Providing a complaint procedure with a neutral third party
* Ensuring that end users are free to withdraw from the experiment at any moment, including after it has started, without any prejudice or disadvantage.
# 4.7 Other issues
Other issues will refer to other national/ funder/ sectorial/ departmental
procedures for data management that are used.
# 5 DMP dataset description template
During the course of the project, each work package will analyse which DMP components are relevant for its activities. Once the pilot definitions are ready with regard to which data is collected and how it is used, DMPs for the pilots will be created.
The following table is the template that shall be used to describe the datasets.
**Table 1: BRAIN-IoT Template for DMP**
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
**Identifier**
</td>
<td>
**Brain-IoT_WPX_TX.X_{Responsible Partner}_{Description}_{Data Set Sub Index}**
</td> </tr>
<tr>
<td>
**Revision History**
</td>
<td>
Partner / Name / Description of change
ISMB / Xu Tao / Created initial DMP
</td> </tr>
<tr>
<td>
**Dataset Description**
</td>
<td>
Each data set will have a full data description explaining the data provenance, origin and usefulness. Reference may be made to existing data that could be reused.
</td> </tr>
<tr>
<td>
**Findability**
</td>
<td>
1. Outline the discoverability of data (metadata provision).
2. Outline the identifiability of data and refer to a standard identification mechanism.
3. Outline the naming conventions used.
4. Outline the approach towards search keywords.
5. Outline the approach for clear versioning.
6. Specify standards for metadata creation (if any).
</td> </tr>
<tr>
<td>
**Accessibility**
</td>
<td>
1. Specify which data will be made openly available; if some data is kept closed, provide the rationale for doing so.
2. Specify how the data will be made available.
3. Specify what methods or software tools are needed to access the data, whether documentation about that software is included, and whether it is possible to include the relevant software (e.g. as open source code).
4. Specify where the data and associated metadata, documentation and code are deposited.
5. Specify how access will be provided in case there are any restrictions.
</td> </tr>
<tr>
<td>
**Interoperability**
</td>
<td>
1. Assess the interoperability of the data; specify what data and metadata vocabularies, standards or methodologies will be followed to facilitate interoperability.
2. Specify whether standard vocabulary will be used for all data types present in the dataset, to allow inter-disciplinary interoperability; if not, state whether a mapping to more commonly used ontologies will be provided.
</td> </tr>
<tr>
<td>
**Reusability**
</td>
<td>
1. Specify how the data will be licenced to permit the widest reuse possible.
2. Specify when the data will be made available for re-use; if applicable, specify why and for what period a data embargo is needed.
3. Specify whether the data produced and/or used in the project is usable by third parties, in particular after the end of the project; if the re-use of some data is restricted, explain why.
4. Describe data quality assurance processes.
5. Specify the length of time for which the data will remain re-usable.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
1. Explanation of the sharing policy applied to the data set, choosing between:
**Open**: open for public disposal.
**Embargo**: becomes public when the embargo period applied by the publisher is over; in this case the end date of the embargo period must be given in DD/MM/YYYY format.
**Restricted**: only for project-internal use.
2. Each data set must have its distribution license.
3. Provide information about personal data: state whether the data set entails personal data, whether it is anonymised, and how this issue is taken into account.
</td> </tr>
<tr>
<td>
**Archiving and Preservation**
</td>
<td>
The preservation guarantee and the data storage during and after the project (for example: databases, institutional repositories, public repositories, ...).
</td> </tr> </table>
# 6 Resource allocation
Costs for establishing and maintaining the BRAIN-IoT data repository are covered by the financial budget of BRAIN-IoT.
While the repository in itself is not maintained after the end of the project,
all files stored within the BRAIN-IoT repository shall be stored after the
project to meet the requirements of good scientific practice. A strategy for
storage of the files after the project is being developed and will be included
in the DMP later.
The responsibility for data management during and after the end of the project lies with the owners of the scenarios, who are also the providers of the data sources and the organisations mainly interested in the semantic value of the data itself.
# 7 Conclusions
This deliverable provides a planning overview of the data that BRAIN-IoT
project is going to deal with, together with related data processes and
requirements that need to be taken into consideration.
The descriptions of the data sets will be incrementally enriched along the
project lifetime. These descriptions include a detailed description,
standards, methodologies, sharing and storage methods.
The Data Management Plan outlined in this deliverable will be updated and further detailed in the upcoming deliverables D6.3 – "_Phase 1 Integration and Evaluation Framework_", due in M16, and D6.5 – "_Phase 2 Integration and Evaluation Framework_", due in M28.
**EXECUTIVE SUMMARY**
This deliverable focuses on data management planning within the HRADIO project. To create the data management plan, we applied the 'Guidelines on FAIR Data Management in Horizon 2020' 1 , which help to make research data findable, accessible, interoperable and reusable. In this deliverable we discuss the different types of data that will be gathered, the processing and storing of the data, and the data handling principles.
Data within the HRADIO project will be gathered for three objectives, namely technical integration, service harmonization and user engagement. Personal, technological and documents repository data will be collected. These data will be handled as partner specific data, consortium confidential data or open data.
This is a living document, meaning that it will be updated throughout the project's lifetime, when new data is gathered or when specific issues concerning data management arise. In this first version, the general (FAIR) principles for data management are discussed in section 2, followed by the allocation of resources (section 3), data security (section 4) and ethical aspects (section 5).
In addition to this deliverable, deliverable 7.1 about the ethical
requirements is submitted. In the latter, the ethical clearance took place
through the Vrije Universiteit Brussel Humanities Ethical Board. In addition,
an independent ethical expert is appointed.
# DATA SUMMARY
In this chapter, the purpose of data collection and the related objectives are
outlined. Also, the different types and formats of the gathered data are
discussed. Furthermore, the reuse of existing data is explained. Finally, the
origin, expected size of the data and the data utility are cited.
## PURPOSE OF THE DATA COLLECTION AND GENERATION
In the HRADIO study, the purpose of the data collection is related to three
main objectives for radio in the digital era as mentioned in the Grant
Agreement (page 4-5, part B) [1]:
* **Technical integration:** Today’s radio devices, although providing different reception technologies, lack integration. It’s often up to the listener to make the decision which technology currently will deliver the best and most cost-effective user experience. Application developers for mobile platforms need to be enabled to gain a comfortable access to embedded tuner hardware in order to integrate broadcast and broadband seamlessly into the applications.
* **Service harmonization:** The permanent availability of return channels paired with the versatility of applications on mobile devices, forces broadcasters into a competition with sophisticated services such as music streaming, on-demand content and general information services. In order not to be perceived as the “old” radio service, broadcasters must combine their traditional linear services together with their IP-based on-demand content in order to provide an integrated service for the listener, which matches the expectations of end-users.
* **User engagement:** Radio applications on mobile platforms enable broadcasters to get in direct contact with their listeners and increase audience engagement, enabling more interactive features such as personalization, targeted advertising, games and voting. This also opens up the possibility of measuring exactly the number of people currently listening to the program and analyzing their behavior, as each stream is sent out individually to the end user.”
The data collected within the project will be linked to these three main objectives. To achieve service harmonization and enhance user engagement, personal data from radio audiences will need to be collected, as well as specific feedback on user behavior, user experiences and the technical performance of the system. The specific types of data we will collect within the HRADIO project are discussed in the next section.
## TYPES AND FORMATS OF DATA GENERATED/COLLECTED
This section gives a description of the different data types we will collect
during the HRADIO project, how open the data will be, and in which format the
data can appear. A summary table is given, followed by a detailed explanation
of data types, data handling and data formats.
Table 1 - Data types and formats
<table>
<tr>
<th>
**Data types**
</th>
<th>
**Data openness**
</th>
<th>
**Data formats**
</th> </tr>
<tr>
<td>
Personal data:
* Personal profile data
* Sensitive personal data
* Behavioral data
</td>
<td>
Partner specific datasets
</td>
<td>
Textual, numerical, audio, pictures, video
</td> </tr>
<tr>
<td>
Technical data
</td>
<td>
Consortium confidential datasets
</td>
<td>
Logging data, user statistics,...
</td> </tr>
<tr>
<td>
Other datasets
</td>
<td>
Open datasets
</td>
<td>
Logos, templates, audit documents, meeting documents, reports, ...
</td> </tr> </table>
### Data types
#### Personal data
Three specific types of personal data will be gathered within the HRADIO
project.
* The first one is _**personal profile data**_. This contains sociodemographic data (for example: first name, surname, sex, place of birth, phone number, email address, profession, ...), radio usage data (user patterns via logging data or self-reported behavior), user feedback (in oral or written form, in response to interview and/or survey questions), and pictures or videos (taken during workshops or by respondents, for example of where they listen to radio). Contact details (physical address, phone number, email address) are only used to contact the participant for the research activity. These data will not be shared with other project partners who are not directly involved in the research activity. The data will also not be saved, nor distributed outside the project. Registration of participants to mailing lists will only happen when they give their explicit permission to do so. All users are anonymized when their data is used in project reports.
* A second specific type of data is _**sensitive personal data** _ . This type of dataset will be gathered when voice control is used as a function of the hybrid radio. This type of data will never be open data, as it can contain data about sexual life, political opinions, mental health,... Therefore, sensitive data will be treated in the same way as personal profile data.
* A third type of collected data is _**behavioral data** _ . This includes user patterns of the HRADIO services, for example how long the radio user is listening to a radio program. These data will also be anonymized before being made publicly available.
#### Technical data
Within the HRADIO project, also technical data will be gathered, such as
access logs for the quality assurance of the technical solution, user logging
data etc. Depending on the specific nature of the data that is gathered, the
data will be consortium confidential or open. The nature of the technical data
will become clear with the development of the pilots.
#### Other Datasets: documents repository
The repository of documents is used to share and store all documentation related to the execution of the project. The repository used by the consortium partners within the HRADIO project is imec's MyMinds communication platform, which stores and shares all project-related documents (public, consortium confidential and partner specific).
For public documents, we will also use the Zenodo repository as well as the
project website (hradio.eu) to allow open access. In what follows we describe
the different types of documents that can be archived in the repository:
* **User feedback** (interview transcripts, survey responses etc.). These data will be made public after anonymization of the data.
* **Video and images** of workshops and other research activities. Pictures will only be used after informed consent of the respondents. Videos of the workshops will not be made public, as these will not be anonymized.
* **Deliverables** : MyMinds will contain the repository of deliverables of the project available to all the partners. It includes public as well as confidential deliverables. Relevant public deliverables are also published on the website.
* **Reports** : reports that are public will be made available for third parties.
* **Meeting documents** : will contain documents presented or generated during plenary meetings, work package meetings and conference calls (such as meeting minutes and supporting materials). Depending on the specific meeting topic, meeting minutes can be made public outside the consortium.
* **Audit documents** : material exchanged among partners for audits preparation.
* **Work packages documents** : the documentation, specific to that WP will be accessible for all the partners including activities description, APs and others.
* **Templates and logos** : in this category reference documents and guides to generate standard documents for the project have been uploaded as well as logos and visual materials for media dissemination.
* **Dissemination** **materials** : papers, articles, blogs resulting from the project tasks,.. This dissemination material will also be made available on the project website.
### Data openness
There are different ways of data handling, depending on the specific type of data (personal, sensitive, behavioral, technical data or other data). A distinction has to be made between partner specific, consortium confidential, open and other datasets, as not all the data can be made public, and the consortium is simultaneously bound to regulations stemming from a) EC open access policies, b) EC and national data protection regulations, c) Grant Agreement stipulations and d) Consortium Agreement stipulations.
#### Partner specific datasets
Partner specific datasets cannot be shared with other partners of the
consortium. These data are generated by one specific partner, for example:
e-mail addresses or other personal data of respondents that engage in research
activities. These data will only be used by the partner that generated these
data and are thus confidential data.
#### Consortium confidential datasets
These datasets cannot be made public because of intellectual property or data
privacy constraints, but these datasets can be shared among the consortium
members for research purposes. Examples of consortium confidential datasets
are user patterns or user behavior that cannot be completely anonymized, or
business intelligence gathered for innovation management purposes.
#### Open datasets
Open and other datasets will be made public as much as possible, on the condition that the data are anonymized so that there are no recognizable patterns. Examples of openly available data are logos, station names, descriptions and bearers, dissemination material, etc.
Most of the open datasets can be reused, as these data will be anonymized and thus cannot be linked to specific participants. This means third parties are free to use these data, as the data will be available to everyone.
### Data formats
The formats of these gathered data will either be textual (i.e.
questionnaires, transcriptions,...), numerical (access logs, logging data,..)
audio or pictures/videos.
In the table below, we indicate if the data is confidential or public, where
the data will be shared and if the data is personal or technical data.
Table 2 - Data confidentiality and sharing location
<table>
<tr>
<th>
**Data type**
</th>
<th>
**Consortium confidential (C), partner specific (PS) or public (P) data**
</th>
<th>
**MyMinds**
</th>
<th>
**Project website**
</th>
<th>
**Zenodo**
</th> </tr>
<tr>
<td>
Personal profile data
</td>
<td>
PS
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Sensitive personal data
</td>
<td>
PS
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Behavioral data
</td>
<td>
PS
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Technical data
</td>
<td>
C or P
</td>
<td>
x
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Dissemination materials
</td>
<td>
P
</td>
<td>
x
</td>
<td>
x
</td>
<td>
x
</td> </tr>
<tr>
<td>
Deliverables
</td>
<td>
C or P
</td>
<td>
x
</td>
<td>
x
</td>
<td>
x (only P)
</td> </tr>
<tr>
<td>
Reports
</td>
<td>
C or P
</td>
<td>
x
</td>
<td>
x (only P)
</td>
<td>
x (only P)
</td> </tr>
<tr>
<td>
Meeting documents
</td>
<td>
C or P
</td>
<td>
x
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Templates and logos
</td>
<td>
P
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td> </tr>
<tr>
<td>
Audit
documents
</td>
<td>
C
</td>
<td>
x
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Workpackages documents
</td>
<td>
C
</td>
<td>
x
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
User feedback
</td>
<td>
P (after anonymization)
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td> </tr>
<tr>
<td>
Video and images
</td>
<td>
C or P (after consent)
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td> </tr> </table>
## REUSE OF EXISTING DATA
For piloting activities, it is possible that existing profile data is used.
For the Belgian pilot, Digimeter 2 profiles are used to sample and recruit
potential participants for the pilot activities. In this case, we will reuse data from these panels when participants agree to take part in the HRADIO project. Reusing data from these participants is only possible when the panel
manager provides the data from the participants. These data will always be
considered confidential data. Also, for the German pilot, RBB can rely to some
extent on approved user groups that might be recruited and complemented for
the pilot activities.
RBB will reuse data if participants accept to participate in the HRADIO
project. Also, in this case, the personal data will be considered
confidential.
Also, for the technical provision of radio centric metadata such as service
names, service descriptions, logos, links to web streams, podcasts or
slideshow images, existing data from the broadcast partners will be reused in
the HRADIO project. To assure technical interoperability, this data will be
provided by applying the RadioDNS/WorldDAB set of ETSI specifications e.g.
ETSI 102 818 and ETSI 103 270. These data will be public.
## ORIGIN OF THE DATA
All the data gathered within the HRADIO project, will be the result of
research and piloting activities. Piloting activities including workshops and
user tests will take place in Belgium and Germany, as well as in the UK.
Besides user feedback on the developed scenarios and HRADIO services, also
technical data from the HRADIO client device will be collected and monitored
(e.g. logging data).
## EXPECTED SIZE OF THE DATA (IF KNOWN)
Currently the expected data size is unknown.
## DATA UTILITY
Not all the data gathered within the context of the HRADIO project will be
made public (see supra), but some data will be publicly available. In this
case, third parties or other researchers and developers could have an insight
in specific developed services and user feedback on these services. This can
be useful for other research projects working on radio, for broadcasters, for
radio manufacturers etc.
# FAIR DATA
FAIR data 3 4 stands for findable, accessible, interoperable and reusable
data and is used to ensure good research data management. Below, these four
key elements are discussed more in-depth.
## MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA
### The discoverability of data (metadata provision)
Metadata is provided within the HRADIO project to aid discovery of data in
Zenodo, the platform we will use to share our public project data.
### Identifiability and locatability of data
We will use the Zenodo archiving system to share all open data generated
within the project. Articles, uploaded on Zenodo, will have a DOI-number.
### Used naming conventions
Metadata will be used to improve discoverability of the data. We will use
existing vocabularies to describe the metadata via the Data Catalogue
Vocabulary (DCAT) 5 . “ _Naming conventions, once adopted, help people
converge on a shared vocabulary and then make it easier to use the terms in
that vocabulary_ ” [2].
### Approach towards search keyword
Undefined at this moment.
### Outline the approach for clear versioning
The deliverables will have a clear version numbering, which is also indicated at the beginning of each deliverable in a version table. Each deliverable starts with version 0.1, and subsequent versions are numbered 0.2, etc. The final version is version 1.0. The table also indicates the content added and the revisions made in each version.
### Standards for metadata creation
Personal and technical data produced by HRADIO and intended for publication
will be modelled following linked data principles, so that the data may be
augmented with metadata using dedicated vocabularies.
Data formats are described in section 1.2.3. To complement these formats where needed and appropriate for data that the project will publish:
1. Provenance metadata will follow the PROV ontology 6 , a W3C standard.
2. Time-related metadata will follow the Time Ontology in OWL 7 , a W3C standard.
3. Geolocation metadata will follow the W3C Spatial Data on the Web Best Practices 8 .
4. Metadata to describe published datasets will follow the Data Catalogue Vocabulary (DCAT) 9 .
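As a minimal sketch of the last point, a published data set could be described with DCAT in JSON-LD along the following lines; the title, licence and URLs are placeholders, not actual project records:

```python
import json

# Hypothetical DCAT description of an anonymised, published data set.
dataset_metadata = {
    "@context": {"dcat": "http://www.w3.org/ns/dcat#",
                 "dct": "http://purl.org/dc/terms/"},
    "@type": "dcat:Dataset",
    "dct:title": "HRADIO pilot user feedback (anonymised)",
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dcat:keyword": ["hybrid radio", "user feedback"],
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:mediaType": "text/csv",
        # Placeholder record id; the real DOI/URL is assigned by Zenodo.
        "dcat:downloadURL": "https://zenodo.org/record/XXXXXXX/files/feedback.csv",
    },
}
print(json.dumps(dataset_metadata, indent=2))
```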
## MAKING DATA OPENLY ACCESSIBLE
### Openly available data and rationale for closed data
If possible, data will be made publicly available (see supra). This means that open and other datasets will be made public to a certain extent, on the condition that the data are anonymized so that there are no recognizable patterns. For example, participants' names can be changed into aliases or codes (i.e. P01, P02, ...). Also, radio service and program metadata will be openly available by default. Broad search statistics (the number of users that searched for genre x or for keyword y) are shared across search instances in the federated search architecture. However, detailed user search statistics and usage statistics are collected by search instances but kept closed, since privacy should be preserved as much as possible.
Some partner and consortium datasets are kept closed because of **intellectual property rights and data protection.** For example, broadcasters expect their stream URLs to be kept private (because misuse of streams by unauthorized third parties costs broadcasters money in wasted bandwidth), so for the project partners offering radio streams, the streams will only be available to authorized users.
Other private data, such as names and addresses of respondents (see supra), will be anonymized.
### Format data availability
To comply with the guidelines for open data availability, we will make use of
Zenodo to store and share all open project information free of charge. Also,
the project website will be used as a communication tool.
* The stored data will include the data sets and their associated metadata.
* Information about tools and instruments at the disposal of the beneficiaries and necessary for validating the results will also be stored (and, where possible, the tools and instruments themselves will be provided).
### Methods or software tools needed to access the data
All the transcriptions of the interviews and all data files are analyzed with dedicated software, such as SPSS or NVivo, which are commercially available software tools. All files that become publicly available will be accessible to third parties, for example by transforming them into a .txt or .csv format.
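As a minimal sketch (the records and file name are invented for illustration), exporting anonymised responses to CSV could look like this:

```python
import csv

# Anonymised interview summaries, using alias codes instead of names.
responses = [
    {"participant": "P01", "age_group": "25-34", "listens_daily": "yes"},
    {"participant": "P02", "age_group": "45-54", "listens_daily": "no"},
]

with open("user_feedback.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["participant", "age_group", "listens_daily"])
    writer.writeheader()
    writer.writerows(responses)
```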
### Deposition of data, metadata, documentation and code
The data will be made available via Zenodo.
### Provision of access in case of restrictions
The provision of access depends on how open the dataset is. Open data will be
publicly available via Zenodo. For project specific data, most data is
available via the internal project platform (MyMinds). For data that is
partner specific, each partner controls the access and sharing settings.
## MAKING DATA INTEROPERABLE
### Data interoperability
Data produced and stored by the metadata dissemination and communication
platform will be fully interoperable. On a syntactic level, this will be
ensured by using standard data description languages like XML and JSON. On a
semantic level, the description of radio service and program metadata will
follow the RadioDNS ETSI standard as defined in ETSI TS 103 270 (Hybrid lookup
for radio services) as well as ETSI TS 102 818 (DAB EPG). Publicly available
interfaces of the platform will follow well known RESTful service principles
for full interoperability with external systems.
### Vocabulary for inter-disciplinary interoperability
The HRADIO platform will use standard vocabularies for data description
wherever they exist (eg. RadioDNS service and program description) in order to
ensure interdisciplinary interoperability. The offered external interfaces are
fully described, open and extensible. More specifically, the project will use a linked data approach, for instance by providing the vocabulary context to turn a JSON document into a JSON-LD document. Where possible and useful, e.g. to improve public content indexing by search engines, the project will also align data with or provide mappings to the schema.org vocabulary, as the sketch below illustrates.
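In this sketch the service fields are invented, and the schema.org context is one possible vocabulary choice:

```python
import json

# An ordinary JSON document describing a radio service...
service = {"name": "Example FM", "description": "A hybrid radio service"}

# ...becomes JSON-LD by adding a context that maps its keys to a shared
# vocabulary (here schema.org and its RadioChannel type).
service_ld = {"@context": "https://schema.org",
              "@type": "RadioChannel",
              **service}
print(json.dumps(service_ld, indent=2))
```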
If - during development or after initial release - commonly used ontologies
emerge, the system can be extended or changed accordingly while still ensuring
backward compatibility. Another standard vocabulary used within the project is the TVAnytime 10 specification.
## INCREASE DATA REUSE
### Data licensing to permit the widest reuse possible
It has been agreed that the project results (research results, software,
business models, etc.), which will be provided as open-source components will
be protected under open source licenses (e.g. the EUPL, the LGPL or another
open source license). LGPL is a free software license which allows developers
to integrate software under the LGPL even into proprietary code 11 .
### Data embargo
This does not apply.
### Reuse of data by third parties after end of project
The open data will also remain available after the project duration. See
section 1.2.2 for an overview of open data.
### Length of time for which the data will remain reusable
In the Grant Agreement it is stated that the EU may — with the consent of the
beneficiary concerned — assume ownership of results to protect them, if a
beneficiary intends — up to four years after the period set out in Article 3
(see grant agreement)— to stop protecting them or not to seek an extension of
protection, except in any of the following cases:
1. the protection is stopped because of a lack of potential for commercial or industrial exploitation;
2. an extension would not be justified given the circumstances.
A beneficiary that intends to stop protecting results or not seek an extension
must — unless any of the cases above under Points (a) or (b) applies —
formally notify the Commission at least 60 days before the protection lapses
or its extension is no longer possible and at the same time inform it of any
reasons for refusing consent. The beneficiary may refuse consent only if it
can show that its legitimate interests would suffer significant harm [3].
We will comply with these guidelines.
### Data quality assurance processes
#### Personal data
In response to ethical considerations, attention is given to the gathering of
data by using an informed consent each time personal data is collected from
respondents (see annex 1 for a draft version of an informed consent). Besides
ethical considerations, all data within the project is gathered based on
scientifically validated methods. Reliability and validity criteria associated
with these methods are also taken into account, ensuring the data quality.
#### Broadcast content
For UKRP, data quality is assured for broadcasters which are members of
Radioplayer, since UKRP is able to help them provide metadata at a high
standard.
VRT data has been broadcasted in the past, and thus already underwent a
quality check in the media production process.
RBB plans to use audio in the pilots, which have already undergone the normal
production processes. This audio will be further adapted to ensure suitability
for HRADIO. In addition to the audio, we want to use different types of
metadata, ranging from SI data, to data already provided by RadioDNS, to newly
processed or accumulated data.
#### Technical data
For the technical activities within the project, the platform defines the
following data quality assurance processes:
* Search index data exchanged between two instances of the federated search system is checked by the receiving system on two levels: syntactically, only well-formed documents are accepted; semantically, only content signed by certified federation partners is accepted, and it is further filtered using a plausibility analysis.
* Service metadata that was entered manually using the corresponding external interface is checked syntactically and semantically as well (same for service metadata coming from the RadioDNS crawler).
* User statistics for the broad search mechanism are distributed amongst federation partners. Incoming data is checked syntactically and for plausibility. In addition, only signed content is accepted.
* Detailed user data collected by the user statistics module is checked syntactically and filtered based on a plausibility analysis.
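The two-level principle behind these checks can be sketched as follows; the field name, plausibility rule and signature flag are illustrative stand-ins for the real federation logic:

```python
import json

def accept_document(raw: bytes, signed_by_certified_partner: bool) -> bool:
    """Accept incoming search-index data only if it passes both levels."""
    try:
        doc = json.loads(raw)                # syntactic: well-formed only
    except ValueError:
        return False
    if not signed_by_certified_partner:      # semantic: signed content only
        return False
    # Plausibility analysis, e.g. listener counts cannot be negative.
    return doc.get("listener_count", 0) >= 0

print(accept_document(b'{"listener_count": 42}', True))   # True
print(accept_document(b'not json', True))                 # False
```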
# ALLOCATION OF RESOURCES
## COST ESTIMATION FOR MAKING YOUR DATA FAIR
Making service and program metadata generated within the project openly
accessible is associated with costs for the purchase and maintenance of server
infrastructure for search instances. Since the architecture is based on a
federated approach, costs are shared across participating service providers
depending on the chosen level of engagement. In addition, the publication of
different types of data will be free of charge or available at a reasonable
cost, for example when Zenodo is used. Different quality checks have also been
carried out, and standards are used within the HRADIO project.
At this moment it is not yet possible to estimate the specific cost of making
datasets FAIR; however, we expect the cost to be reasonable.
## DATA MANAGEMENT RESPONSIBILITY
Imec will be responsible for the overall data management in the HRADIO
project, but for data that is considered partner-specific (see supra), every
partner is responsible for following the guidelines agreed within the project
and described in this data management plan (see Section 1.2.2, Data
openness).
## COSTS AND POTENTIAL VALUE OF LONG TERM PRESERVATION
The project lead and work package leads decide on the opportunity for long-term
preservation through a discussion within the management board meeting. Data
will be kept during implementation of the action and for four years after the
period set out in Article 3 of the Grant Agreement [4]. During this time, the
parties must keep confidential any data, documents or other material (in any
form) that is identified as confidential at the time it is disclosed. If a
beneficiary requests it, the Commission may agree to keep such information
confidential for an additional period beyond the initial four years (see
supra).
Long-term preservation of service and program metadata would allow for deep
analysis of changes in radio programs over long periods of time. For user
data, long-term analysis could also be of value for service providers and
academic research, but issues of data protection and data sovereignty must be
considered as well. Costs depend on the chosen preservation strategy
(refreshing, migration, replication, emulation, encapsulation, ...). Other
project documents, including deliverables, can provide interesting benchmark
material for future projects.
# DATA RECOVERY, STORAGE AND TRANSFER
The International Standard ISO/IEC 17799 covers data security under the topic
of information security, and one of its main principles is that all stored
information, i.e. data, should be “owned”, so that it is clear whose
responsibility it is to protect and control access to that data and to keep it
safe from corruption and from involuntary disclosure outside the European
Economic Area. Any evidence of the identity of natural persons and any
sensitive data collected during the research will be destroyed at the end of
it. For all collected data, a responsible project partner is appointed who
ensures that the data is stored and managed in a secure way.
For internal sharing of project data, we will use the password-protected
MyMinds platform. This system has an adequate user administration system,
including individual rights assignment for project participants, and back-up
systems to ensure data preservation. For the open sharing of documents, we
will make use of the Zenodo system, which is provided by CERN with the support
of the EC. On this platform, only open project data will be shared.
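To illustrate how such an open deposit could be made programmatically, the sketch below uploads a file to Zenodo through its public REST deposition API. The endpoint, the token-based authentication and the publish action follow Zenodo's published API documentation, but the file name and metadata are placeholders; this is an illustrative sketch, not project tooling.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "..."}  # personal token from the Zenodo account settings

# 1. Create an empty deposition.
r = requests.post(ZENODO_API, params=TOKEN, json={})
r.raise_for_status()
dep_id = r.json()["id"]

# 2. Upload a file into the deposition (placeholder file name).
with open("hradio_open_dataset.csv", "rb") as fp:
    requests.post(f"{ZENODO_API}/{dep_id}/files", params=TOKEN,
                  data={"name": "hradio_open_dataset.csv"},
                  files={"file": fp}).raise_for_status()

# 3. Attach minimal descriptive metadata, then publish.
metadata = {"metadata": {
    "title": "HRADIO open dataset (example)",
    "upload_type": "dataset",
    "description": "Illustrative deposit of open project data.",
    "creators": [{"name": "HRADIO consortium"}],
}}
requests.put(f"{ZENODO_API}/{dep_id}", params=TOKEN,
             json=metadata).raise_for_status()
requests.post(f"{ZENODO_API}/{dep_id}/actions/publish",
              params=TOKEN).raise_for_status()
```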
# ETHICAL ASPECTS
Users will be informed about what data is collected/shared and the purpose of
the collection/sharing, so that they can make an informed decision on whether
or not to consent. If data were to be shared with third parties, the data
would be anonymized.
Requests for informed consent for data sharing and long-term preservation are
included in all research activities (e.g. interviews, user workshops,
surveys, ...) dealing with personal data.
Within work package 7 of the HRADIO project, which covers the ethical
requirements (see Deliverable 7.1), ethical clearance was obtained from the
Vrije Universiteit Brussel Humanities Ethical Board. In addition, an
independent ethical expert has been appointed.
# OTHER PROCEDURES
Each project partner responsible for piloting activities will ensure
compliance with its national data protection regulations. The project was also
assessed and approved by the ethical board of the Vrije Universiteit Brussel.
# CONCLUSIONS
In this deliverable, we have set out the data management plan for HRADIO. To
start with, we provided a data summary that explained the purpose and
objectives of data collection, the different types and formats of data, how we
plan to reuse existing data, and the origin, expected size and utility of the
data gathered in the project.
Secondly, we described how HRADIO will strive to follow the FAIR data
guidelines. This document presents our provisions to make sure that HRADIO
data is findable, accessible, interoperable and reusable.
Next, we illustrated the allocation of resources in the HRADIO project and the
arrangements that we have made in order to ensure data security. We provided
insight into the ethical aspects and legal issues that can have an impact on
data sharing within the project. To conclude, we explained other procedures
for data management which ensure the compliance of the HRADIO consortium
partners with national/funder/sectorial/departmental regulations.
As stated in the executive summary, this data management plan is a living
document. This means that the next step is to update the document throughout
the project’s lifetime whenever new data is gathered or specific issues
concerning data management arise. Updates to the data management plan will be
incorporated in the quarterly and periodic management reports (deliverables
D1.4 and D1.5).
0085_scMODULES_778609.md
# 1\. Introduction
This document (Deliverable 3.1 or D3.1) describes the project’s Dissemination
and Communication strategy and the activities carried out to reach the
objectives.
All data management aspects will be analysed in detail in D3.2 and D5.1.
## Scope of the document
The Dissemination and Communication Plan is the core document outlining the
project’s dissemination and communication activities. This plan is fundamental
for good coordination of all initiatives and also for defining the messages
that should be targeted to enhance the visibility of the project results.
This Dissemination and Communication Plan aims concretely to:
* Outline the main objectives of the dissemination actions;
* Identify the target audiences for each communications objective;
* Define the tools and channels to be used and the activities required to reach targeted audiences;
* Identify the dissemination KPIs, useful to measure the effectiveness and efficiency of the activities conducted;
* Explain how the dissemination activities will support the exploitation activity.
# 2\. Communication plan: goals
The communication strategy of the project seeks to achieve the following
objectives:
G.1. Publicize the initiative among the target audience of the project and
attract participants to its pilots
G.2. Publicize among museum audiences the contents published as a result of
this project, to encourage their use, analyze how they are used and measure
the success of the project
G.3. Show the value added by the platform to participants in the project to
convert them into customers, attract new clients, and position the platform
among future clients and partners as the reference solution:
1. for the publication of digitized collections of museums
2. for the creation of new digital experiences based on them
# 3\. How to achieve these goals
To achieve these objectives, a combination of communication and dissemination
actions has been defined through both offline and online channels, which we
believe are the most appropriate for this purpose.
Offline dissemination
* Professional event attendance
* PR / Press Releases
* Organization of presentation events
Online dissemination
* Content creation and publication on different channels (combined with SEO actions):
○ Website (blog)
○ Social media: Facebook, Twitter, Linkedin (professional forums and museum associations)
* Emailing and newsletters:
○ email campaigns (emails and newsletters) based on our network of contacts and on existing databases, focused on impacting potential candidates and generating leads
* SEM and SMM campaigns:
○ targeted campaigns in search and display networks, and in social networks (Google AdWords, Facebook Ads, Twitter Ads, ...)
<table>
<tr>
<th>
</th>
<th>
**G.1**
</th>
<th>
**G.2**
</th>
<th>
**G.3**
</th> </tr>
<tr>
<td>
Events attendance
</td>
<td>
**√**
</td>
<td>
x
</td>
<td>
**√**
</td> </tr>
<tr>
<td>
PR / Press releases
</td>
<td>
**√**
</td>
<td>
**√**
</td>
<td>
**√**
</td> </tr>
<tr>
<td>
Events organization
</td>
<td>
**√**
</td>
<td>
x
</td>
<td>
**√**
</td> </tr>
<tr>
<td>
Content creation and publication
</td>
<td>
**√**
</td>
<td>
x
</td>
<td>
**√**
</td> </tr>
<tr>
<td>
Email / Newsletters
</td>
<td>
**√**
</td>
<td>
x
</td>
<td>
**√**
</td> </tr>
<tr>
<td>
SEM and SMM
</td>
<td>
**√**
</td>
<td>
**√**
</td>
<td>
x
</td> </tr> </table>
# 4\. Target audience
Three main types of target audiences have been established to which
communication and marketing actions are directed:
* Potential customers (and/or channels to reach them)
1. The potential market for scMODULES is basically composed of museums and other institutions managing collections and art/cultural heritage (the GLAM sector: Galleries, Libraries, Archives and Museums). Nonetheless, Madpixel will concentrate on medium-sized and small museums specialised in pictorial art, for which scMODULES brings the highest differential USP. Therefore, our main targets are directors and those responsible for the digital area and user experience of this kind of institution. Additional channels:
■ Sectorial associations/groups that represent potential customers, such as
Network of European Museum Organisations (NEMO), the European
Museums Academy (EMA), American Alliance of Museums, The European Group on
Museum Statistics (EGMUS), or their national equivalents in target countries.
■ Highly reputed and influential professionals from the sector, such as art
exhibition curators, photographers, consulting companies and individuals that
work for museums.
■ Local stakeholders and evangelizers that have a high degree of influence.
* End users
○ Museum visitors / audience
○ Schools, universities and other educational centers
○ Consumers of other digital content
* Potential partners
○ Educational publishers
○ Digital multimedia services agencies specialized in museums, photographers, consulting companies and individuals that work for museums
## 4.1. Geographical location
According to the company exploitation strategy described in our project plan,
our target countries for this phase (and until 2020) are EU countries (with
special focus on Spain, the UK, France, Germany, the Netherlands and Italy)
and the USA. In later phases we will approach LATAM (especially Mexico and
Brazil), Canada and other European countries, and finally China, Japan and
Russia.
For communication actions during the project we will mainly focus on the EU
and the USA, although we are also considering running some tests in LATAM
(especially in Mexico), due to the potential of this market.
## 4.2. Segmentation of actions and value proposition
<table>
<tr>
<th>
**Target**
</th>
<th>
**Actions**
</th>
<th>
**Timeline**
</th>
<th>
**Value proposition**
</th> </tr>
<tr>
<td>
GLAM Sector
</td>
<td>
Events attendance
PR / Press releases
Events organization
Content creation and publication
Email / Newsletters
SEM and SMM
</td>
<td>
See timeline at point 7.1
</td>
<td>
The best way to “go digital”. Affordable premium solution to get the most of
digitized collections, opening new opportunities to:
* Create engaging and
innovative experiences
* Disseminate collections
* Reach new audiences
* Generate new revenue streams
Easy-to-use (no technical/design
skills required)
Multi-language
Multiple publishing options
</td> </tr>
<tr>
<td>
Sectorial Associations / groups
</td> </tr>
<tr>
<td>
Professionals from the sector
</td> </tr>
<tr>
<td>
Local stakeholders and evangelizers
</td>
<td>
Events organization
Content creation and publication
Email / Newsletters
</td> </tr>
<tr>
<td>
Museum visitors / audience
</td>
<td>
PR / Press releases
SEM and SMM
</td>
<td>
See timeline at point 7.1
</td>
<td>
New tool multi-device (mobiles, tablets, computers) to explore museums
collections:
* Innovative complement to
the physical visit
* Remote visit
* Visit preparation
</td> </tr>
<tr>
<td>
Schools, universities and other educational centers
</td>
<td>
Events attendance
PR / Press releases
Content creation and publication
Email / Newsletters
</td>
<td>
New educational asset, based on big art repository:
* High quality and resolution
* Multimedia and interactive
* Many possibilities of
activities for students based on it
</td> </tr>
<tr>
<td>
Consumers of
other digital content
</td>
<td>
PR / Press releases
SEM and SMM
</td>
<td>
Innovative audiovisual and multimedia experience based on art
</td> </tr>
<tr>
<td>
Educational publishers
</td>
<td>
Events attendance
Content creation and publication
Email / Newsletters
</td>
<td>
See timeline at point 7.1
</td>
<td>
New business opportunities based on innovative EdTech tool offering:
* Exclusive content
* Possibilities to create tailor made experiences for
schools and students
* New revenue streams
</td> </tr>
<tr>
<td>
Digital multimedia services agencies
specialized in museums, photographers, consulting
companies and individuals that work for museums
</td>
<td>
New business opportunities with museums:
* New category of projects
* Complementing other
services (web/apps
designing, photography,
…)
</td> </tr> </table>
# 5\. Main actions
## G.1: Publicize the initiative among the target audience of the project and
attract participants to their pilots
### Target audience
● Phase I:
○ Directors of institutions and those responsible for the digital area and
user experience in the GLAM sector (Galleries, Libraries, Archives and
Museums) and in other institutions managing collections and art/cultural
heritage.
○ Sectorial associations/groups that represent potential customers, such as
the Network of European Museum Organisations (NEMO), the European Museums
Academy (EMA), the American Alliance of Museums, The European Group on Museum
Statistics (EGMUS), or their national equivalents in target countries.
○ Highly reputed and influential professionals from the sector, such as art
exhibition curators, photographers, consulting companies and individuals that
work for museums.
○ Local stakeholders and evangelizers that have a high degree of influence.
● Phase II:
○ Educational publishers.
○ Digital multimedia services agencies specialized in museums.
### List of actions
Offline dissemination
* Attendance at professional events: attendance at sector conferences and participation with stands and sessions (workshops, others)
* PR: Press releases in general and professional media to publicize relevant project milestones:
○ Open call for participation in pilot projects
○ Relevant museums and institutions participating
* Organization of events: project presentation sessions, in collaboration with the first 10 participating museums and at their own facilities, focused on informing other museums of the same region about it
Online dissemination
* Content creation and publication
* Emailing and newsletters
* SEM and SMM campaigns: targeted campaigns in search and display networks, and in social networks (Google AdWords, Facebook Ads, Twitter Ads, ...), focused on impacting potential candidates and on generating leads
## G.2. Publicize among museum audiences the contents published as a result
of this project, to encourage their use, analyze how they are used and measure
the success of the project
### Target audience
* Museum Visitors
* Schools, universities and other educational centers
* Consumers of other digital content
### List of actions
Offline dissemination:
* PR: Press releases in general media to publicize relevant project milestones:
○ Relevant museums and institutions participating
○ Offer of digitized content (cultural/artistic heritage) and of experiences
based on it, available as a result of the project
Online dissemination:
* SEM and SMM campaigns: targeted campaigns in search and display networks, and in social networks (mainly Facebook Ads), focused on impacting end users and on generating visits to and downloads (apps) of the published content.
## G.3. Show the value added by the platform to participants in the project
to convert them into customers, attract new clients, and position the platform
among future clients and partners as the reference solution for digitized
content
### Target audience
* Directors of institutions, and responsible for the digital area and user experience of the GLAM sector (Galleries, Libraries, Archives and Museums) and of other institutions managing collections and art/cultural heritage.
* Sectorial associations/groups that represent potential customers, such as Network of European Museum Organisations (NEMO), the European Museums Academy (EMA), American Alliance of Museums, The European Group on Museum Statistics (EGMUS), or their national equivalents in target countries.
* Highly reputed and influential professionals from the sector, such as art exhibition curators, photographers, consulting companies and individuals that work for museums.
* Local stakeholders and evangelizers that have a high degree of influence.
* Educational publishers.
* Digital multimedia services agencies specialized in museums.
### List of actions
Offline dissemination
* Attendance at professional events: attendance at sector conferences and participation with stands and sessions (workshops, others)
* PR: Press releases in general and professional media, to publicize project results
* Events of presentation of the project results, for participants and potential clients, in different chosen locations.
Online dissemination
* Content creation and publication: focused on positioning Second Canvas as the reference solution for digitized content
* Emailing and newsletters
* SEM / SMM campaigns: targeted campaigns focused on impacting potential candidates and on generating leads
# 6\. KPIs
<table>
<tr>
<th>
Action
</th>
<th>
KPI
</th>
<th>
Target
</th> </tr>
<tr>
<td>
Professional events attendance
</td>
<td>
# of events with company’s presence
# of potential leads
</td>
<td>
10
50
</td> </tr>
<tr>
<td>
PR / Press Releases
</td>
<td>
# of press releases launched
# of target media publishing our press releases
</td>
<td>
5
25
</td> </tr>
<tr>
<td>
Organization of presentation events
</td>
<td>
# of events organized
# of attendees
# of potential leads
</td>
<td>
5
250
50
</td> </tr>
<tr>
<td>
Content creation and publication on different channels (combined with SEO
actions)
</td>
<td>
Company’s website: # of visitors / # of pages viewed / conversion rate
scModules: # of visitors / average session duration / conversion rate
Blog: # of posts published / # of visitors / average session duration
Facebook: # of subscribers / # of posts published / # of interactions / # of click-throughs
Twitter: # of followers / # of tweets / # of interactions / # of click-throughs
Linkedin: # of followers / # of posts published / # of impressions
Instagram: # of followers / # of posts published / # of interactions
</td>
<td>
15,000 / 30,000 / 4%
8,000 / 0:04:00 / 4%
20 / 2,000 / 0:05:00
500 / 180 / 7,000 / 1,000
900 / 480 / 8,000 / 4,000
200 / 100 / 5,000
400 / 130 / 5,200
</td> </tr>
<tr>
<td>
SMM
</td>
<td>
# of impressions
CTR
Conversion rate (downloads)
</td>
<td>
248,000
0.90%
0.7%
</td> </tr>
<tr>
<td>
Emailing and newsletters
</td>
<td>
Email campaigns: # of emails / open rate / conversion rate
Direct emails: # of emails / open rate / conversion rate
Newsletter: # sent / # of subscribers / open rate / conversion rate
</td>
<td>
80 / 26% / 20%
150 / 80% / 50%
12 / 130 / 26% / 20%
</td> </tr> </table>
# 7\. Timeline / Calendar
## 7.1. Timeline
<table>
<tr>
<th>
**Action**
</th>
<th>
**Goals**
</th>
<th>
**Timeline**
</th> </tr>
<tr>
<td>
**Professional event attendance**
</td>
<td>
G1
</td>
<td>
2017/Nov: MCN (Pittsburgh) - Confirmed: stand and workshop
</td> </tr>
<tr>
<td>
G1
</td>
<td>
2017/Nov: UAM (México DF) - Confirmed: stand and session
</td> </tr>
<tr>
<td>
G1
</td>
<td>
2017/Nov: Sharing is caring (Aarhus) - Confirmed: session
</td> </tr>
<tr>
<td>
G1
</td>
<td>
2018/Jan: SITEM (Paris) - TBC
</td> </tr>
<tr>
<td>
G1
</td>
<td>
2018/Apr: Museums and the web (Vancouver) - TBC
</td> </tr>
<tr>
<td>
G1
</td>
<td>
2018/Apr: We Are Museums - TBC
</td> </tr>
<tr>
<td>
G1
</td>
<td>
2018/May: AAM (Phoenix) - TBC
</td> </tr>
<tr>
<td>
G1, G3
</td>
<td>
2018/Jun: Museum Next (London) - TBC
</td> </tr>
<tr>
<td>
G1, G3
</td>
<td>
2018/Jun: Art#Connexion (Paris - June 2018) - TBC
</td> </tr>
<tr>
<td>
G3
</td>
<td>
2018/Nov: MCN (Denver) - TBC
</td> </tr>
<tr>
<td>
**PR / Press Releases**
</td>
<td>
G1
</td>
<td>
2017/Aug: Launching of the project
</td> </tr>
<tr>
<td>
G1
</td>
<td>
2017/Nov: Open call for participation in pilot projects
</td> </tr>
<tr>
<td>
G1, G3
</td>
<td>
2018/Jan: Relevant museums and institutions participating
</td> </tr>
<tr>
<td>
G2, G3
</td>
<td>
2018/May: Offer of digitized content (cultural / artistic heritage) and of
experiences based on it, available as a result of the project
</td> </tr>
<tr>
<td>
G3
</td>
<td>
2018/Sep: Project results
</td> </tr>
<tr>
<td>
**Organization of presentation events**
</td>
<td>
G1
</td>
<td>
2018/Jan: Museum TBD
</td> </tr>
<tr>
<td>
G1
</td>
<td>
2018/Apr: Museum TBD
</td> </tr>
<tr>
<td>
G1
</td>
<td>
2018/Jun: Museum TBD
</td> </tr>
<tr>
<td>
G3
</td>
<td>
2018/Sep: Museum TBD
</td> </tr>
<tr>
<td>
G3
</td>
<td>
2018/Nov: Museum TBD
</td> </tr>
<tr>
<td>
**Content marketing**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
\- Company’s website
</td>
<td>
G1, G3
</td>
<td>
2017/Nov: Launching
</td> </tr>
<tr>
<td>
\- scModules website
</td>
<td>
G1
</td>
<td>
2017/Nov: Launching
</td> </tr>
<tr>
<td>
\- Blog
</td>
<td>
G1, G3
</td>
<td>
2017/Nov: Launching
</td> </tr>
<tr>
<td>
2017/Dec - 2018/March: 1 post / month
</td> </tr>
<tr>
<td>
2018/Apr - 2019/Jan: 2 posts / month
</td> </tr>
<tr>
<td>
\- Facebook
</td>
<td>
G1, G2
</td>
<td>
2018/Feb - 2019/Jan: 15 posts / month
</td> </tr>
<tr>
<td>
\- Twitter
</td>
<td>
G1, G3
</td>
<td>
2018/Feb - 2019/Jan: 40 tweets / month
</td> </tr>
<tr>
<td>
\- Instagram
</td>
<td>
G1, G2
</td>
<td>
2018/Mar - 2019/Apr: 10 posts / month
</td> </tr>
<tr>
<td>
2018/Apr - 2019/Jan: 12 posts / month
</td> </tr>
<tr>
<td>
\- Linkedin
</td>
<td>
G1, G3
</td>
<td>
2018/Apr - 2019/Jan: 10 posts / month
</td> </tr> </table>
<table>
<tr>
<th>
**SMM**
</th>
<th>
G2
</th>
<th>
2018/Feb - 2019/Jan: Facebook promoted posts (included as a part of the …)
</th> </tr>
<tr>
<td>
**Direct Marketing**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Email campaigns
</td>
<td>
G1
</td>
<td>
2018/Feb-Mar and 2018/Oct-Nov
</td> </tr>
<tr>
<td>
Direct emailing
</td>
<td>
G1
</td>
<td>
2017/Aug - 2018/Mar
</td> </tr>
<tr>
<td>
Newsletter
</td>
<td>
G1
</td>
<td>
2018/Feb - 2019/Jan: 1 newsletter / month
</td> </tr> </table>
## 7.2. Calendar
In the original deliverable, the calendar is a week-by-week Gantt-style table
covering August 2017 to January 2019, with one row per action: Events
attendance, PR / Press releases, Events organization, Company’s website,
scModules website, Blog, Facebook, Twitter, Instagram, Linkedin, SMM, Email
campaigns, Direct emailing and Newsletter. The week-level markings did not
survive conversion of the document; the scheduling information is given in the
timeline of Section 7.1.
# 8\. Resources
The project requires the hiring of a dedicated team in charge of designing,
executing and analyzing the results of the different actions to be
implemented. The profiles to be contracted are the following:
* Responsible for online marketing
* Community manager
* Content marketer
Our proposal already includes a budget for communication activities, events
attendance, branding, project marketing and product positioning, which will
cover the actions planned.
<table>
<tr>
<th>
**Travels**
</th>
<th>
</th> </tr>
<tr>
<td>
_Communication activities, events attendance and project management related
meetings_
</td>
<td>
</td> </tr>
<tr>
<td>
**Concept**
</td>
<td>
**#**
</td>
<td>
**Cost (estimate)**
</td>
<td>
**People**
</td>
<td>
**Days**
</td>
<td>
**Subtotal**
</td> </tr>
<tr>
<td>
Flights
</td>
<td>
10
</td>
<td>
800.00€
</td>
<td>
2.5
</td>
<td>
</td>
<td>
20,000.00€
</td> </tr>
<tr>
<td>
Hotels
</td>
<td>
10
</td>
<td>
100.00€
</td>
<td>
2.5
</td>
<td>
3
</td>
<td>
7,500.00€
</td> </tr>
<tr>
<td>
Meals, others
</td>
<td>
10
</td>
<td>
60.00€
</td>
<td>
2.5
</td>
<td>
3
</td>
<td>
4,500.00€
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
**Total cost (estimate)**
</td>
<td>
**32,000.00€**
</td> </tr>
</table>
<table>
<tr>
<th>
**Other goods and services**
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
_Design, branding, project marketing and product positioning_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Concept**
</td>
<td>
**#**
</td>
<td>
**Cost (estimate)**
</td>
<td>
**Days**
</td>
<td>
**Subtotal**
</td> </tr>
<tr>
<td>
Website, landing page and blog design
</td>
<td>
3
</td>
<td>
1,500.00€
</td>
<td>
</td>
<td>
4,500.00€
</td> </tr>
<tr>
<td>
Online Marketing Campaigns (monthly budget)
</td>
<td>
12
</td>
<td>
250.00€
</td>
<td>
</td>
<td>
3,000.00€
</td> </tr>
<tr>
<td>
Press releases
</td>
<td>
5
</td>
<td>
1,500.00€
</td>
<td>
</td>
<td>
7,500.00€
</td> </tr>
<tr>
<td>
IP protection related actions
</td>
<td>
</td>
<td>
18,000.00€
</td>
<td>
</td>
<td>
18,000.00€
</td> </tr>
<tr>
<td>
Conferences attendance
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
_\- Stand_
</td>
<td>
6
</td>
<td>
2,500.00€
</td>
<td>
</td>
<td>
15,000.00€
</td> </tr>
<tr>
<td>
_\- AV equipment (renting)_
</td>
<td>
6
</td>
<td>
1,500.00€
</td>
<td>
2
</td>
<td>
18,000.00€
</td> </tr>
<tr>
<td>
_\- Other (posters, flyers, roll-ons, others)_
</td>
<td>
6
</td>
<td>
500.00€
</td>
<td>
</td>
<td>
3,000.00€
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**Total cost (estimate)**
</td>
<td>
**69,000.00€**
</td> </tr> </table>
0086_microSPIRE_766955.md
# Introduction
This document is a deliverable of the µSPIRE project, which is funded by the
European Union’s H2020 Programme under Grant Agreement No. 766955\. This first
version of the Data Management Plan (DMP), based on the knowledge available
and developed at the moment of the deliverable submission, describes the main
elements of the data management policy that will be used by the members of the
Consortium with regard to the data generated throughout the duration of the
project and after its completion.
The µSPIRE consortium shall implement procedures that are in line with the
national legislation of each consortium partner and with European Union
standards. This DMP will apply to all data under µSPIRE consortium control.
While we shall strive to make data open, we cannot overrule limitations that
partner institutions place on data that they have contributed to generating
(see e.g. the Grant Agreement).
The DMP will be regularly updated as necessary during the development of the
project activities. Subsequent editions of the DMP will provide additional
details. New versions will be released at months M18 and M36.
The DMP is released in compliance with the H2020 FAIR principles [1] (making
data Findable, Accessible, Interoperable and Reusable).
# Administrative data
**Project name:** micro-crystals Single Photon InfraREd detectors - µSPIRE
**Grant reference number:** H2020-FETOPEN-1-2016-2017 – ID: 766955
**Project description:** µSPIRE aims at establishing a technological platform
for homo- and hetero- structure based photonic and electronic devices using
the self-assembling of epitaxial crystals on patterned Si substrates. Emerging
micro-electronic and photonic devices strongly require the integration on Si
of a variety of semiconducting materials such as Ge, GaAs, GaN and SiC, in
order to add novel functionalities to the Si platform. µSPIRE pursues this
goal employing a novel deposition approach, which we termed vertical hetero-
epitaxy (VHE). VHE exploits the patterning of conventional Si substrates, in
combination with epitaxial deposition, to attain the self-assembly of arrays
of Ge and GaAs epitaxial micro-crystals elongated in the vertical direction,
featuring structural and electronic properties unparalleled by “conventional”
epitaxial growth. As a concrete demonstration of VHE potentialities, we will
deliver a complete set of novel photon counting detectors: VHE micro-crystals
will be used as the
elementary microcells for single-photon detectors with performances far beyond
those of current state-of-the-art devices.
**Consortium members:**
1. Politecnico di Milano – Polimi (coordinator)
2. Università degli Studi Milano Bicocca – Unimib
3. University of Glasgow – UGLA
4. Philipps-Universität Marburg – UMR
5. Technische Universität Dresden – TDU
6. Micro Photon Devices Srl.- MPD
**Project data contact:** Giovanni Isella
Politecnico di Milano - Polo Territoriale di Como
Via Anzani 42, 22100 Como
Phone: +39 0313327303
e-mail: [email protected]
# Dataset description and generation/collection
The data generated in µSPIRE will mainly be:
1. **Design Data**
2. **Protocol Data**
3. **Experimental and Characterization Data**
4. **Computational and Modeling Data**
A comprehensive description of such data is given in the tables here below:
## Design data
_Table 1: Design of substrate pattern_
<table>
<tr>
<th>
Type of study
</th>
<th>
</th>
<th>
Schematics representing the geometry and dimensions of the silicon (Si)
pillars etched in the Si wafer.
</th> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
DEV<progressive number> (substrates meant for device fabrication)
DIS<progressive number> (substrates meant for dislocation expulsion studies)
</td> </tr>
<tr>
<td>
Provenance of data
</td>
<td>
</td>
<td>
Original data produced by: Ugla, Unimib, Polimi.
</td> </tr>
<tr>
<td>
Type of data
</td>
<td>
</td>
<td>
Relevant dimensions of the pillars etched into the Si substrate: pillar shape
(square, round) orientation with respect to crystallographic directions,
pillar height, lateral dimension and spacing between adjacent pillars.
</td> </tr>
<tr>
<td>
Nature and formats
</td>
<td>
</td>
<td>
Text document(PDF).
Images (PNG).
Autocad file (gds).
</td> </tr>
<tr>
<td>
Amount of data
</td>
<td>
Based on previous studies, the amount of resulting data is estimated around
500MB per year. Some text format data files are also required for post-
processing in the laboratory and are anticipated to be around 5MB per year.
</td> </tr>
<tr>
<td>
Requirements for software and hardware
</td>
<td>
Substrate patterns are typically designed by using proprietary software such
as AutoCAD. Free viewers for gds files are available and all relevant
information can be exported as text document (PDF) or images (PNG).
</td> </tr> </table>
## Protocol data
_Table 2: Protocol Data_
<table>
<tr>
<th>
Type of study
</th>
<th>
</th>
<th>
Relevant parameters used during low-energy plasma-enhanced chemical vapour
deposition (LEPECVD) and molecular beam epitaxy (MBE) growth (substrate
temperature, plasma current, gas flow, pressure and growth rate) and
microfabrication.
</th> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
LG<number of deposition system (1 or 2)>_<progressive number identifying the
epitaxial growth>
MBE_<progressive number identifying the epitaxial growth>
</td> </tr>
<tr>
<td>
Provenance of data
</td>
<td>
</td>
<td>
Original data produced by: Polimi and Unimib.
</td> </tr>
<tr>
<td>
Type of data
</td>
<td>
</td>
<td>
Intended profile of the epilayer stack: silicon (Si), germanium (Ge) and
aluminum-gallium-arsenide (AlGaAs) concentration and thickness.
Relevant growth parameters: deposition temperature, deposition rate, plasma
parameters.
Substrate pattern as described in Table 1.
</td> </tr>
<tr>
<td>
Nature and formats
</td>
<td>
</td>
<td>
Text document (TXT) organized in columns with headers indicating the quantity
measured and unit of measure used.
</td> </tr>
<tr>
<td>
Amount of data
</td>
<td>
</td>
<td>
We expect a total of 1GB.
</td> </tr>
<tr>
<td>
Requirements for software and hardware
</td>
<td>
Any text editor.
</td> </tr> </table>
## Experimental and characterization data
_Table 3: Experimental and Characterization Data_
<table>
<tr>
<th>
Type of study
</th>
<th>
</th>
<th>
Morphological data (scanning electron microscopy – SEM, transmission electron
microscopy – TEM and high-resolution X-ray diffraction – HRXRD),
optical/spectroscopic data (photoluminescence – PL, µPL, Raman and µRaman)
and optoelectronic data (current-voltage characteristics, photoresponse).
</th> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
PL_<progressive number identifying the epitaxial growth as indicated in Table 2>
TEM_<progressive number>
AFM_<progressive number>
IV_<progressive number>
</td> </tr>
<tr>
<td>
Provenance of data
</td>
<td>
</td>
<td>
Original data produced by: Unimib (PL), Umar (TEM and SEM), Polimi (SEM, AFM
and electrical measurements).
</td> </tr>
<tr>
<td>
Type of data
</td>
<td>
</td>
<td>
Photoluminescence response of the sample.
TEM, SEM and AFM images showing crystal defects with atomic resolution and
micro-crystals morphology.
Electro-optical characterization of µSPIRE’s devices.
</td> </tr>
<tr>
<td>
Nature and formats
</td>
<td>
</td>
<td>
PL and electrical measurement data: text document (TXT) organized in columns
with headers indicating the quantity measured and unit of measure used.
TEM data: images (bmp, tiff, jpeg…) Post processed images can be visualized
with free software.
SEM data: images (bmp, tiff, jpeg…).
AFM data: raw data are in flt format and can exported in an image format (bmp,
tiff, jpeg…). Accessing raw data is possible by means of freely available
software such as Gwyddion.
Electrical measurements: text document (TXT) organized in columns with headers
indicating the quantity measured and unit of measure used.
</td> </tr>
<tr>
<td>
Amount of data
</td>
<td>
</td>
<td>
1TB
</td> </tr>
<tr>
<td>
Requirements for software and hardware
</td>
<td>
TXT file can be accessed with several free software.
Image file are also freely accessible.
Accessing .flt raw data is possible by means of freely available software such
as Gwyddion.
</td> </tr> </table>
## Computational and modeling data
_Table 4: Computational and Modeling Data_
<table>
<tr>
<th>
Type of study
</th>
<th>
</th>
<th>
Modeling of the morphological properties of Si, Ge and AlGaAs micro-crystals
as a function of substrate patterning and growth conditions.
Electronic design of µSPIRE devices: material (Si, Ge or GaAs), doping profile
and micro-crystals shape/dimensions.
Modeling of the micro-crystals bandstructure.
</th> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
MOD_<progressive number identifying the morphological modeling>
ELDES_<progressive number identifying the electronic modeling>
BAND_<progressive number identifying the bandstructure calculation>
</td> </tr>
<tr>
<td>
Provenance of data
</td>
<td>
</td>
<td>
Original data produced by: Polimi, TUD and Unimib
</td> </tr>
<tr>
<td>
Type of data
</td>
<td>
</td>
<td>
Morphological modeling: phase field calculation of micro-crystals morphology.
Electronic modelling: drift-diffusion Poisson calculations of optoelectronic
response of µSPIRE devices.
Bandstructure calculations: K-dot-P or effective mass calculation.
</td> </tr>
<tr>
<td>
Nature and formats
</td>
<td>
</td>
<td>
Morphological modeling: images (bmp, tiff, jpeg).
Electronic modeling: these calculations are performed using the proprietary
software Synopsys Sentaurus TCAD. Selected images can be exported in almost
any image format.
Bandstructure calculations: these calculations are performed using the
proprietary software Nextnano. Results are exported as TXT files.
</td> </tr>
<tr>
<td>
Amount of data
</td>
<td>
</td>
<td>
1TB
</td> </tr>
<tr>
<td>
Requirements for software and hardware
</td>
<td>
Any image viewer or TXT editor
</td> </tr> </table>
# Data management documentation and curation
This section describes the processes and actions that will be implemented
during the course of the research for data management, documentation, sharing
between partners and curation (saving and preservation).
_Table 5: File naming convention_
<table>
<tr>
<th>
**Convention**
</th>
<th>
_**[time_stamp]_microSPIRE_[data type]_[Partner]_[Version].[file format]** _
</th>
<th>
</th> </tr>
<tr>
<td>
</td>
<td>
Time Stamp
</td>
<td>
Data Type
</td>
<td>
Partner
</td>
<td>
Version
</td>
<td>
File format
</td> </tr>
<tr>
<td>
YYYY_MM_DD
</td>
<td>
Data set reference and name as describe in tables 1 to 4
</td>
<td>
Polimi,
Unimib,
Ugla
Umar
TDU
MPD
</td>
<td>
V1
</td>
<td>
According to software:
txt
csv
odt ods pdf
</td> </tr>
<tr>
<td>
**Examples**
</td>
<td>
2018_04_30_LEPECVD_10001_Polimi_v1.txt
</td>
<td>
</td> </tr> </table>
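As an illustration of this convention, the short sketch below assembles file names of the agreed form. It is a minimal sketch following Table 5; the helper name and the example values are illustrative only.

```python
from datetime import date

PARTNERS = {"Polimi", "Unimib", "Ugla", "Umar", "TDU", "MPD"}

def make_filename(day: date, data_type: str, partner: str,
                  version: int, file_format: str) -> str:
    """Build a file name following the Table 5 convention:
    [time_stamp]_microSPIRE_[data type]_[Partner]_[Version].[file format]"""
    if partner not in PARTNERS:
        raise ValueError(f"unknown partner: {partner}")
    stamp = day.strftime("%Y_%m_%d")
    return f"{stamp}_microSPIRE_{data_type}_{partner}_v{version}.{file_format}"

# Example (illustrative values):
print(make_filename(date(2018, 4, 30), "LG1_10001", "Polimi", 1, "txt"))
# -> 2018_04_30_microSPIRE_LG1_10001_Polimi_v1.txt
```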
## Data Managing: access, storage and back-up
µSPIRE data are mainly created _ex novo_ as a result of the research
activities listed from 1 to 4 in Section 3. Common rules will be used for file
naming in order to favour data sharing and accessibility.
The proposed naming convention is described in Table 5. The data managing
cycle is composed of three steps:
1. data production and storage by each partner;
2. data sharing and storage on a repository accessible only to the project participants;
3. data open-access on a public repository.
### Data production and storage by each partner
The first responsible for each data set is the Partner ho generated it,
therefore a copy of the data set will be stored and maintained on the Partner
servers according to its internal practices and regulation described in Table
6.
_Table 6_
<table>
<tr>
<th>
Polimi
</th>
<th>
All data is stored on a local server, Dr. Chrastina is the server
administrator and provides access and read/write permissions to the different
members of the group. Data transmission can be accessed through secure file
transfer protocol. A back-up procedure on an external hard disk is performed
weekly.
</th> </tr>
<tr>
<td>
Unimib
</td>
<td>
All local data is stored on a local server. Access and permission is granted
to the project members by the server administrator, Dr. Bergamaschini. The
access to the data is possible through secure file transfer protocol and
network file system export on local computers (password protected). The server
is configured as RAID-5 and a weekly backup is done to local hard drives.
</td> </tr>
<tr>
<td>
Ugla
</td>
<td>
All data is stored on a raid server which is backed up every evening to a set
of secondary discs. The server is also backed up to a tape system every month.
The server is administered by the School’s IT department and user access is
controlled by Prof. Paul. The server is accessed through secure file transfer
or secure https protocols.
</td> </tr>
<tr>
<td>
Umar
</td>
<td>
All data is stored on a local server, Dr. Beyer is the server administrator
and provides access and read/write permissions to the different members of the
group. Data transmission can be accessed through secure file transfer
protocol. The server is integrated in the daily backup scheme of the
university´s computer centre.
</td> </tr>
<tr>
<td>
TUD
</td>
<td>
All challenges related to storing, processing and managing data at TUD are
addressed according to the “Guidelines on the Handling of Research Data at
TUD”. TUD cooperates closely with the Saxon State and University Library
(SLUB) and in 2018 started an institutional repository and long-term
preservation infrastructure for research data (Project OpARA). This system
will be used as the central base for data management.
</td> </tr>
<tr>
<td>
MPD
</td>
<td>
All data is stored on external servers administered by our secure cloud
infrastructure provider. Mr Sandro Rizzetto is our internal server
administrator and provides access with individual read/write permissions to
the different MPD employees. Local MPD data can be accessed through a secure
VPN connection. A back-up procedure is performed daily by our external IT
infrastructure provider.
</td> </tr> </table>
### Data sharing and storage on a repository accessible only to the project
participants
Each Partner will transfer data which are considered relevant for the project
on a server set-up and maintained by personnel of the ICT services of the
Politecnico di Milano (ASICT Polimi). The system operates as a Git server (
_http://gitlab.polimi.it/_ ) . The access is granted by means of
username/password provided by ASICT.
Gitlab allows for access control: different users can be given different
access permissions. Each Partner will be allowed to add data to the
repository. Only the Coordinator will be allowed to delete data from the
repository. Versioning and profiling are implemented; it is therefore possible
to trace which changes have been made to the database and who made them.
Back-ups are performed daily and kept for 30 days (i.e. on any given day it
is possible to recover the data as they were on any of the 30 days before). A
full back-up is performed every 12 months and kept for 12 months. The gitlab
repository will ensure preservation of the data for at least 5 years after the
end of the project. We do not envisage any need to transfer data to a
different server during the project. The costs are covered by the Polimi
budget and consist of €200 for the gitlab set-up and €360/year for its
maintenance.
### Data open access
Data underpinning published papers will be made available through ZENODO
( _https://zenodo.org/_ ), an open-access repository for all fields of
science that accepts uploads in any data file format. ZENODO is recommended by
the Open Access Infrastructure for Research in Europe (OpenAIRE). ZENODO
assigns a persistent digital identifier to all published records. ZENODO will
also be used for the long-term preservation of data, including data not
shared, after completion of the project.
## Metadata standards and data documentation
Different information will be stored as metadata and associated to the
different types of data set, from 1 to 4, described in Section 3. Some
examples are given in Table 7.
_Table 7: Metadata_
<table>
<tr>
<th>
Metadata associated to
Design Data
</th>
<th>
Units of measurement (in case they are not specified in the TXT file headers),
software required to make the data usable.
</th> </tr>
<tr>
<td>
Metadata associated to
Protocol Data
</td>
<td>
Information allowing for the correct interpretation of the growth parameters
used: units of measurement of the relevant physical quantities (pressure, gas
flow, temperature, shutters pneumatic valves staus).
</td> </tr>
<tr>
<td>
Metadata associated to
Experimental and
characterization data
</td>
<td>
Equipment used for the measurements, measuring condition
(temperature), sample preparation,
Information on data analysis procedures, software required for making data
readable.
</td> </tr>
<tr>
<td>
Computational and
modelling data
</td>
<td>
Equipment used for the measurements, measuring condition
(temperature), sample preparation,
Information on data analysis procedures, software required for making data
readable.
</td> </tr> </table>
# Data sharing
The µSPIRE DMP is inspired by the FAIR data principles, i.e. making data
findable, accessible, interoperable and reusable [1]. Therefore, µSPIRE will
consider the following approaches, as far as applicable, for providing open
access to research data:
* data appearing in or underpinning publications will be made available on the web through the ZENODO platform under a Creative Commons licence of the CC BY-NC-SA (non-commercial share-alike) type (https://creativecommons.org/);
* open-source formats and software (e.g. CSV instead of Excel) will be preferred over their commercial counterparts;
* interoperability will be enforced by using metadata to specify the data type (as highlighted in Tables 1 to 4) and the software required to analyze and process the data. The metadata file (typically in the form of a README.txt file, sketched below this list) will be filled in by the authors to summarize the characteristics of each data set and give anyone who reads it a quick understanding of its content.
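A minimal sketch of such a README.txt stub is given below, assuming a plain key/value layout; the field names are illustrative and would be adapted to each data set type of Tables 1 to 4.

```python
from datetime import date

def write_readme(path: str, dataset: str, partner: str,
                 data_type: str, software: str, units: str) -> None:
    """Write a minimal README.txt metadata stub for one data set.
    The field names are illustrative, not a project-mandated schema."""
    lines = [
        f"Data set reference and name: {dataset}",
        f"Provenance (partner): {partner}",
        f"Type of data: {data_type}",
        f"Software required to read the data: {software}",
        f"Units of measurement: {units}",
        f"Date of deposit: {date.today().isoformat()}",
        "License: CC BY-NC-SA",
    ]
    with open(path, "w", encoding="utf-8") as fh:
        fh.write("\n".join(lines) + "\n")

# Example (illustrative values):
write_readme("README.txt", "LG1_10001", "Polimi",
             "LEPECVD growth protocol", "any text editor", "see column headers")
```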
The policy for open-access to the research data in µSPIRE will follow two core
principles:
1. The generated research data should generally be made as widely accessible as possible in a timely and responsible manner;
2. The research process should not be impaired or damaged by the inappropriate release of such data.
Data sets containing key information that could be patented for commercial or
industrial exploitation will be excluded from public distribution, and data
sets containing key information that could be used by the research team for
publications will not be shared until the embargo period applied by the
publisher is over.
µSPIRE will follow the Open Access mandate for its publications (Art. 29.2 of
the Grant Agreement). Each publication, including associated metadata, will be
deposited in individual partner repositories (institutional or subject-based).
The institutional repository of the Politecnico di Milano, compliant with
OpenAIRE rules, is _https://re.public.polimi.it/_ .
# Ethical Aspects
In the µSPIRE project, there are no ethical or legal issues that could impair
data management. The research does not create, process or store personal
data.
0087_CIRC-PACK_730423.md
**INTRODUCTION**
This is the first version of the DMP; it will be revised during the course of
the project within Task 1.4 Data Management Plan, taking into account new
data, changes in consortium policies regarding innovation potential or the
decision to file a patent, changes in the consortium composition, and external
factors.
This plan will establish the measures for promoting the findings during CIRC-
PACK’s lifecycle and will set the procedures for sharing project data.
Addressing the FAIR principles for research data (Findable, Accessible,
Interoperable and Re-usable), the CIRC-PACK DMP will consider:
* Data set reference and name
* Data set description
* Standards and metadata
* Data sharing and handling during and after the end of the project
* Archiving and preservation (including after the end of the project)
# DATA SUMMARY
The data that will be managed in CIRC-PACK is included under the umbrella of
the following categories.
* **End-user personal data** : During baseline definition and preliminary assessment of packaging value chain, surveys will be implemented in different countries in order to identify public perception and expectations about plastics and plastic packaging value chain, as well as to collect interesting info to be considered during project development regarding products design, waste management measures and dissemination activities. Besides, the exploitation and commercialization of knowledge and technical results achieved will be also addressed.
This data set is private and will be managed only by the organization hosting
the Data Repository where it is allocated. This collection will not be shared
unless the data are previously anonymized to remove any possible personal link
(a minimal sketch of such a step is given after this list). Where data are
shared, the process will comply with European regulation and users’ permission
will be requested. Data protection and privacy will be ensured in conjunction
with the market surveys.
* **Processes information of stages involved in demo cases:** Processes information will be required during demo cases execution and to carry out Life Cycle Assessment and Life Cycle Cost analysis (LCA/LCC) of the plastic packaging value chain (before and after the implementation of project innovations). These analyses will consider data such as flow diagrams, materials (input and output), energy consumption, transport of materials, storage conditions, raw materials, materials sorting, waste management, assets value and lifespan, investment and turnover, etc. Certain datasets may not be shared (or will need restrictions), legal and contractual reasons will be explained in that case.
* **Sensor Data** : this collection comprises all information collected from sensors involved in waste sorting facilities. This collection is public, as it is not related to any specific person and therefore it will be open.
* **Derived Data** : this comprises the results of applying Data Analytics techniques to the data streaming from the sensors. At this stage of the project, the vision is that the data in this collection will not serve as a means to identify any person; therefore they are considered public.
Regarding the nature of the data, in order to fulfil the security and
privacy requirements of this project, which are set by the Data Protection
Directive (Directive 95/46/EC on the protection of individuals with regard to
the processing of personal data and on the free movement of such data), the
project adopts the distinction made in this Directive between personal and
non-personal data. Data are considered personal data “when someone is able
to link the information to a person, even if the person holding the data
cannot make this link”. Any data that could be considered personal
data will be managed according to this Directive.
# STANDARDS AND METADATA
There are several domains considered in CIRC-PACK, each of which follows
different rules and recommendations. This is an early-stage identification
of standards:
* Microsoft Word 2010 for text based documents (or any other compatible version).
* MP3 or WAV for audio files.
* Quicktime Movie or Windows Media Video for video files.
* Quantitative data analysis will be stored in SAV file format (used by SPSS) from which data can be extracted using the open-source spread Perl script.
These file formats have been chosen because they are accepted standards and in
widespread use. Files will be converted to open file formats where possible
for long-term storage.
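As an illustration of this conversion step, the short sketch below reads an SPSS .sav file and exports it to an open format using the open-source pyreadstat Python library; the library choice and the file name are assumptions for illustration, not tools mandated by the project.

```python
# Illustrative sketch: reading an SPSS .sav survey file with the
# open-source pyreadstat library (one possible alternative to the
# Perl script mentioned above; the file name is hypothetical).
import pyreadstat

df, meta = pyreadstat.read_sav("survey_results.sav")

# The metadata object exposes the variable and value labels that
# make the dataset self-describing.
print(meta.column_names_to_labels)   # variable labels
print(meta.variable_value_labels)    # code labels per variable

# Export to an open file format for long-term storage.
df.to_csv("survey_results.csv", index=False)
```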
Metadata will comprise two formats: contextual information about the
data in a text-based document, and ISO 19115 standard metadata in an XML file.
These two formats are chosen to provide a full explanation of the data
(text format) and to ensure compatibility with international standards
(XML format).
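The following minimal sketch illustrates how an ISO 19115-style XML record could be generated programmatically; the element set and identifier are an illustrative skeleton only and would need to be extended considerably to be schema-valid against the full ISO/TC 211 standard.

```python
# Minimal, illustrative skeleton of an ISO 19115-style XML metadata
# record built with the standard library; a real record would carry
# many more mandatory elements from the ISO/TC 211 schemas.
import xml.etree.ElementTree as ET

GMD = "http://www.isotc211.org/2005/gmd"
GCO = "http://www.isotc211.org/2005/gco"
ET.register_namespace("gmd", GMD)
ET.register_namespace("gco", GCO)

record = ET.Element(f"{{{GMD}}}MD_Metadata")
ident = ET.SubElement(record, f"{{{GMD}}}fileIdentifier")
ET.SubElement(ident, f"{{{GCO}}}CharacterString").text = "circpack-dataset-001"  # hypothetical ID
abstract = ET.SubElement(record, f"{{{GMD}}}abstract")
ET.SubElement(abstract, f"{{{GCO}}}CharacterString").text = (
    "Sensor data collected from waste sorting facilities."
)

ET.ElementTree(record).write("dataset_metadata.xml",
                             xml_declaration=True, encoding="utf-8")
```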
# ALLOCATION OF RESOURCES
CIRCE will be responsible for data management in the CIRC-PACK project.
Costs related to open access to research data are eligible as part of the
Horizon 2020 grant (if compliant with the Grant Agreement conditions).
Resources for long term preservation, associated costs and potential value, as
well as how data will be kept beyond the project and how long, will be
discussed by the whole consortium during General Assembly meetings.
# DATA SHARING, ACCESS AND PRESERVATION
The digital data created by the project will be diversely curated depending on
the sharing policies attached to it. For both open and non-open data, the aim
is to preserve the data and make it readily available to the interested
parties for the whole duration of the project and beyond.
A public API will be provided to registered users, allowing them access to
the platform. Database compliance checks aim to ensure the correct
implementation of the security policy on the databases by verifying
vulnerabilities and incorrect data. The target is to identify excessive rights
granted to users and passwords that are too simple (or missing), and finally to
perform an analysis of the entire database. At this point, we can state that
at least the following measures will be considered to ensure proper
management of data:
1. Dataset minimisation. The minimum amount of data needed will be stored, so as to limit potential risks.
2. Access control list for user and data authentication. Depending on the dissemination level of the information, an Access Control List will be implemented specifying, for each user, the data sets that can be accessed (a minimal sketch follows this list).
3. Monitoring and logging of activity. The activity of each user on the project platform, including the data sets accessed, is registered in order to track and detect harmful behaviour by users with access to the platform.
4. Implementation of an alert system that reports, in real time, violations of procedures or hacking attempts.
5. Liability. Identification of a person who is responsible for keeping the stored information safe.
6. When possible, the information will also be made available through ZENODO.ORG, the initiative the EC has launched for open sharing of research data.
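The sketch below illustrates measures 2 and 3 (access control lists and activity logging); the user names, dataset identifiers and log format are hypothetical, not the project's actual platform API.

```python
# Minimal sketch of an access control list keyed by user, plus an
# activity log for every dataset access attempt. All identifiers here
# are illustrative placeholders.
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="access.log", level=logging.INFO)

ACL = {
    "analyst01": {"sensor-data", "derived-data"},  # public collections only
    "dpo01": {"sensor-data", "derived-data", "end-user-personal-data"},
}

def access_dataset(user: str, dataset: str) -> bool:
    """Grant access only if the dataset is on the user's ACL; log every attempt."""
    allowed = dataset in ACL.get(user, set())
    logging.info("%s user=%s dataset=%s allowed=%s",
                 datetime.now(timezone.utc).isoformat(), user, dataset, allowed)
    return allowed

assert access_dataset("analyst01", "sensor-data")
assert not access_dataset("analyst01", "end-user-personal-data")
```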
The mechanisms explained in this document aim to reduce as far as possible the
risks related to data storage. However, owing to the activities to be carried
out in the project, the length of time that data will be stored on the platform
is not yet defined, since the Big Data analysis and services run data
analytics procedures whose accuracy depends on the size of the data sets
considered.
## Non-Open research data
The non-open research data will be archived and stored long-term in the EMDESK
portal administered by CIRCE. The CIRCE platform is currently being employed
to coordinate the project's activities and to store all the digital material
connected to CIRC-PACK.
If certain datasets cannot be shared (or need restrictions), legal and
contractual reasons will be explained.
## Open research data
The open research data will be archived on the Zenodo platform
(http://www.zenodo.org). Zenodo is an EU-backed portal, developed at CERN,
that integrates with the GitHub hosting service and the Digital
Object Identifier (DOI) system (http://www.doi.org). The portal's aims are
inspired by the same principles that the EU sets for the pilot; Zenodo
thus represents a very suitable and natural choice in this context.
The repository services offered by Zenodo are free of charge and enable peers
to share and preserve research data and other research outputs of any size and
format: datasets, images, presentations, publications and software. The
digital data and the associated metadata are preserved through well-
established practices such as mirroring and periodic backups. Each uploaded
dataset is assigned a unique DOI, rendering each submission uniquely
identifiable and thus traceable and referenceable.
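As a hedged illustration, the sketch below deposits a file on Zenodo through its public REST API (documented at https://developers.zenodo.org); the access token and file name are placeholders and error handling is omitted for brevity.

```python
# Hedged sketch of depositing a dataset on Zenodo via its REST API.
import requests

TOKEN = "YOUR-ZENODO-TOKEN"  # personal access token, kept out of version control
api = "https://zenodo.org/api/deposit/depositions"

# 1. Create an empty deposition and read back its file bucket URL.
dep = requests.post(api, params={"access_token": TOKEN}, json={}).json()
bucket = dep["links"]["bucket"]

# 2. Upload the data file into the bucket.
with open("dataset.csv", "rb") as fh:
    requests.put(f"{bucket}/dataset.csv", data=fh,
                 params={"access_token": TOKEN})

# Publishing the deposition (a further POST to its 'publish' link)
# mints the DOI that makes the dataset citable.
```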
# ETHICAL ASPECTS
## Personal data collected through interviews and surveys imported from Turkey to the EU partners / exported from EU partners to Turkey
Personal data of users will be collected during the social acceptance and
participation in eco-design activities. Regarding personal data from Kartal (a
non-EU country), KARTALMUN will collect information at local level through
digital media (tablets, computers, etc.). These data will be collected and
included in an Excel file by an external Turkish entity. This Excel file will
show only aggregated data and will be analysed by OCU EDICIONES; it will not
show identity data and will be processed, together with the information from
the rest of the countries, in an aggregated way. All surveys will be processed
anonymously and no personal data will be exported from EU partners to a non-EU
country. Procedures to manage personal data in Turkey will be established by
KARTALMUN and the external entity, according to national regulation.
## Collection, storage and protection of personal data
EU country surveys will be based on a combined self-administered postal and
online sampling and data collection approach. The necessary measures will be
taken to guarantee optimal anonymization of collected, analysed and stored data.
Respondents will be comprehensibly informed about it. Whenever necessary,
survey respondents will be requested for their explicit consent (e.g. in case
of follow-up/second-layer contacts & questionnaires).
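A minimal sketch of the aggregation step described in this section, assuming hypothetical column names: raw survey rows (already stripped of identifiers) are reduced to per-country counts before being shared.

```python
# Illustrative sketch: reduce anonymized survey rows to aggregated
# per-country figures so no row can be traced back to a respondent.
import pandas as pd

raw = pd.DataFrame({
    "country":  ["TR", "TR", "ES", "ES", "ES"],
    "question": ["Q1", "Q1", "Q1", "Q1", "Q1"],
    "answer":   ["yes", "no", "yes", "yes", "no"],
})

# Only aggregated counts leave the collecting entity.
aggregated = (raw.groupby(["country", "question", "answer"])
                 .size().reset_index(name="count"))
print(aggregated)
```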
Collected data will be used exclusively within the context and for the purpose
of the CIRCPACK project. No data will be transmitted to any person, company or
organization not being involved on the CIRC-PACK project. Collected data will
not be used for commercial purposes.
All personal data will be collected, stored and destroyed only with, and in
accordance with, the consent of the personal data holder.
The research will comply with:
* ethical principles
* applicable international, EU and national law (in particular, EU Directive 95/46/EC).
Data collection, storage, protection, retention and destruction will be
carried out through the intranet system of the project: EMDESK.
Interviewees/beneficiaries/recipients will be informed about data security,
anonymity and use of data as well as asked for accordance. Participation
happens on a voluntary basis.
# LIST OF THE DATA-SETS
This section will list the data-sets produced within the CIRC-PACK project.
# TIMETABLE FOR UPDATES
As established in the Grant Agreement, a new version of the DMP will be
provided at the end of the project. Nevertheless, the document will be updated
after each Steering Committee meeting, if required. This is
the current Steering Committee calendar:
<table>
<tr>
<th>
**Meeting**
</th>
<th>
**Month**
</th>
<th>
**Dates**
</th>
<th>
**Country**
</th>
<th>
**Host Partner**
</th> </tr>
<tr>
<td>
**I SC**
</td>
<td>
month 7
</td>
<td>
16-17 November
2017
</td>
<td>
Italy
</td>
<td>
RINA
</td> </tr>
<tr>
<td>
**I GA, II SC**
</td>
<td>
month 13
</td>
<td>
May 2018
</td>
<td>
Italy
</td>
<td>
NOVAMONT
</td> </tr>
<tr>
<td>
**III SC**
</td>
<td>
month 19
</td>
<td>
November 2018
</td>
<td>
The Netherlands
</td>
<td>
BUMAGA BV
</td> </tr>
<tr>
<td>
**II GA, IV SC**
</td>
<td>
month 25
</td>
<td>
May 2019
</td>
<td>
Spain
</td>
<td>
AITIIP
</td> </tr>
<tr>
<td>
**V SC**
</td>
<td>
month 31
</td>
<td>
November 2019
</td>
<td>
Croatia
</td>
<td>
MI-PLAST
</td> </tr>
<tr>
<td>
**III GA, VI SC, Final meeting**
</td>
<td>
month 36
</td>
<td>
April 2020
</td>
<td>
Belgium
</td>
<td>
CIRCE
</td> </tr> </table>
0088_PlastiCircle_730292.md
# 1\. Overview of the PlastiCircle Project
## 1.1 Objectives
The main objective of PlastiCircle is to improve the Circular Economy of
Plastics (Closure of the European Plastic Loop 1 ). In order to achieve this
purpose, the PlastiCircle concept is centred on closing the plastic waste
chain in several ways:
1. Increasing the amount of plastic waste collected.
2. Reducing the costs of recovering plastic waste.
3. Increasing the quality of collected plastic waste.
4. Developing new value-added applications.
The combination of these measures will promote the recovery of the most
important plastic fraction in Europe (i.e. plastic packaging). The
PlastiCircle approach is based on innovation in the four stages associated
with plastic packaging treatment: collection, transport, sorting and recovery
in value-added products.
## 1.2 The PlastiCircle Consortium
A key factor will be the integration of the project results in a global
process to be implemented throughout Europe. The consortium has been designed
with this objective in mind as subsequently explained (see Figure 1).
Collection, transport, sorting and recycling improvements will be developed by
experienced RTDs and companies (ITENE, PICVISA, AXION, SINTEF and PROPLAST)
and then tested in three pilots in Valencia (Spain), Alba Iulia (Romania) and
Utrecht (The Netherlands), with the results finally exploited, disseminated
and communicated at EU level (PROPLAST, KIMbcn, ICLEI, PLAST-EU and ECOEMBES).

_Figure 1. PlastiCircle concept and consortium._

The participation of the Municipality of Velenje (MOV) as a follower city
should also be noted, with a view to learning from the 3 pilots, disseminating
results in the Balkan countries and Central Europe, and ensuring and
facilitating the incorporation of the PlastiCircle approach in the medium and
long term in European cities and regions (to assure PlastiCircle
replicability after the project).
# 2\. Scope of the DMP
## 2.1 Related Policies
The present Data Management Plan (DMP) complies with and has been developed
according to the following EU policies regarding Research Data and Data
Protection:
1. The Open Research Data Pilot (“ORD pilot”)
2. The General Data Protection Regulation
### 2.1.1 Open Research Data Pilot (ORD pilot) 2
The European Commission is running a flexible pilot under Horizon 2020 called
the **Open Research Data Pilot** (ORD pilot). The Open Research Data Pilot
aims to make the research data generated by selected Horizon 2020 projects
accessible with as few restrictions as possible, while at the same time
protecting sensitive data from inappropriate access.
According to this pilot, projects participating in the pilot must submit a
first version of the DMP (as a deliverable) within the first 6 months of the
project. The DMP needs to be updated over the course of the project whenever
significant changes arise.
Further details are provided in the Guidelines on FAIR Data Management in
Horizon 2020 (v.3, 26 July 2016).
### 2.1.2 General Data Protection Regulation (GDPR) (3) 3
The **General Data Protection Regulation (GDPR)** (Regulation (EU) 2016/679)
is a regulation by which the European Parliament, the European Council and the
European Commission intend to strengthen and unify data protection for all
individuals within the European Union (EU). It also addresses the export of
personal data outside the EU. The primary objectives of the GDPR are to give
citizens and residents back control of their personal data and to simplify the
regulatory environment for international business by unifying the regulation
within the EU. When the GDPR takes effect, it will replace the data protection
directive (officially Directive 95/46/EC) from 1995. The regulation was
adopted on 27/4/2016. It applies from 25/5/2018 after a two-year transition
period and, unlike a directive, it does not require any enabling legislation
to be passed by national governments.
Under GDPR it will not be necessary to submit notifications / registrations to
each local DPA of data processing activities, nor will it be a requirement to
notify / obtain approval for transfers based on the Model Contract Clauses
(MCCs). Instead, there will be internal **record keeping requirements** and
DPO appointment will be mandatory only for those controllers and processors
whose core activities consist of processing operations which require regular
and systematic monitoring of data subjects on a large scale or of special
categories of data or data relating to criminal convictions and offences
(which is not the case of PlastiCircle Project).
# 3\. Data Summary
## 3.1 Purpose of the data
The data (databases, datasets, etc.) that are required for PlastiCircle will
be used to:
1. Develop, test and evaluate the plastic value chain:
* Collection. Using smart containers provided with a user identification, identifiable labels and money compensation procedure (WP2).
* Transport. Based on the compaction of plastic, both in container and in the trucks; measuring container filling levels; optimizing collection routes and efficient driving (WP3).
* Sorting. An innovative technology based on a new film-stabilizing conveyor for the plastic sorter, able to achieve excellent performance on films. A special focus will be placed on the stages of material feeding, identification and ejection. The system will be based on Near-Infra-Red Hyperspectral Imaging (NIR-HSI), THz (TeraHertz) imaging, and hyperspectral whiskbroom/pushbroom shooting along with spectral shifting (WP4).
2. Demonstrate the potential to obtain added-value and innovative recovered products from the fractions previously sorted (circular economy approach) (WP5)
3. Elaborate the pilot tests in the cities of Valencia, Utrecht and Alba Iulia (WP6)
4. Integrate and validate the PlastiCircle approach from a technical, environmental, economic and social point view (WP7 and WP8).
## 3.2 Types and formats of data
The data to be collected, processed and stored in PlastiCircle project can be
categorized as follows:
1. Type of data based on its content:
* Citizen data
* Waste quantity and composition data
* Containers filling data
* Collection routes data
* Plastic identification
* Economic, social and environmental data
2. Type of data based on its collection time:
* Real-time data
* Historical (archived) data
3. Type of data, based on its sources:
* Crowdsourced data
* Open data
* Proprietary data
* Artificially generated data
4. Processes related to the abovementioned data are:
* Data collection and storage
* Data manipulation and management
* Data analysis
## 3.3 Data Formats
The following data formats are PlastiCircle's preferred choices to enable
sharing and long-term validity of the data (a short sketch contrasting two of
them follows this list):
* **JSON:** JSON is a simple file format that is very easy for any programming language to read. Its simplicity means that it is generally easier for computers to process than others, such as XML.
* **XML:** XML is a widely-used format for data exchange because it gives good opportunities to keep the structure in the data and the way files are built on, and allows developers to write parts of the documentation in with the data without interfering with the reading of them.
* **Spreadsheets:** Many authorities have information left in the spreadsheet, for example Microsoft Excel. This data can often be used immediately with the correct descriptions of what the different columns mean. However, in some cases there can be macros and formulas in spreadsheets, which may be somewhat more cumbersome to handle. It is therefore advisable to document such calculations next to the spreadsheet, since it is generally more accessible for users to read.
* **Comma Separated Values (CSV):** CSV files can be a very useful format because it is compact and thus suitable to transfer large sets of data with the same structure. However, the format is so spartan that data are often useless without documentation since it can be almost impossible to guess the significance of the different columns. It is therefore particularly important for the comma-separated formats that documentation or metainformation of the individual fields is provided and is sufficient and accurate.
* **Text Documents:** Classic documents in formats like RTF, ODF, OOXML, or PDF are sufficient to show certain kinds of documents, e.g., deliverables, reports, etc. Templates may be used whenever possible, so that displayed data can be re-used.
* **Plain Text (TXT):** Plain text documents (.txt) are chosen because they are very easy to read and process via plain text parsers. They generally exclude structural metadata.
* **HTML:** Nowadays much data is available in HTML format on various sites. This may well be sufficient if the data is very stable and limited in scope. In some cases, it could be preferable to have data in a form easier to download and manipulate, but as it is cheap and easy to refer to a page on a website, it might be a good starting point in the display of data. Typically, it would be most appropriate to use tables in HTML documents to hold data, and then it is important that the various data fields are displayed and are given IDs which make it easy to find and manipulate data.
* **Web Services:** For data that changes frequently, and where each pull is of limited size, it is very relevant to expose data through web services. There are several ways to create a web service, but some of the most used is SOAP and REST.
* **Proprietary formats:** Some dedicated systems, etc. have their own data formats that they can save or export data in. It can sometimes be enough to expose data in such a format - especially if it is expected that further use would be in a similar system as that which they come from.
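As a brief illustration of the trade-off between self-describing and compact formats, the sketch below serialises the same (hypothetical) container reading as JSON and as CSV.

```python
# Small sketch contrasting two of the preferred formats: the same
# hypothetical container reading as self-describing JSON and as
# compact CSV that relies on external documentation of its columns.
import csv
import json

reading = {"container_id": "VLC-0042",
           "fill_level_pct": 73,
           "timestamp": "2018-05-02T10:15:00Z"}

with open("reading.json", "w") as f:
    json.dump(reading, f, indent=2)        # keys travel with the data

with open("readings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=reading.keys())
    writer.writeheader()                   # header row is the only inline metadata
    writer.writerow(reading)
```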
The following Table contains guidance on file formats recommended for data
sharing, reuse and preservation.
_Table 1. File formats recommended for data sharing, reuse and preservation_
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Recommended formats**
</th> </tr>
<tr>
<td>
**Quantitative tabular data with extensive metadata.**
A dataset with variable labels, code labels, and defined missing values, in
addition to the matrix of data
</td>
<td>
SPSS portable format (.por)
delimited text and command ('setup') file
(SPSS, Stata, SAS, etc.) structured text or mark-up file of metadata
information, e.g. DDI XML file
</td> </tr>
<tr>
<td>
**Quantitative tabular data with minimal metadata**
A matrix of data with or without column headings or variable names, but no
other metadata or labelling
</td>
<td>
Comma-separated values (.csv) tab-delimited file (.tab) including delimited
text of given character set with SQL data definition statements where
appropriate
</td> </tr>
<tr>
<td>
**Geospatial data** vector and raster data
</td>
<td>
ESRI Shapefile (.shp, .shx, .dbf, .prj, .sbx,
.sbn optional)
geo-referenced TIFF (.tif, .tfw) CAD data (.dwg)
tabular GIS attribute data, geojson Geography Markup Language (.gml)
</td> </tr>
<tr>
<td>
**Quantitative data** textual
</td>
<td>
Rich Text Format (.rtf)
plain text, ASCII (.txt)
JSON
eXtensible Mark-up Language (.xml) text according to an appropriate Document
Type Definition (DTD) or schema
</td> </tr>
<tr>
<td>
**Digital image data**
</td>
<td>
TIFF 6.0 uncompressed (.tif)
PNG
JPEG
</td> </tr>
<tr>
<td>
**Digital audio data**
</td>
<td>
Free Lossless Audio Codec (FLAC) (.flac)
WAV (.wav)
MP3 (.mp3)
</td> </tr>
<tr>
<td>
**Digital video data**
</td>
<td>
MPEG-4 (.mp4)
OGG video (.ogv, .ogg) motion JPEG 2000 (.mj2)
</td> </tr>
<tr>
<td>
**Documentation**
</td>
<td>
Rich Text Format (.rtf)
PDF/UA, PDF/A or PDF (.pdf)
XHTML or HTML (.xhtml, .htm)
OpenDocument Text (.odt)
Doc, docx, xls, xlsx
</td> </tr> </table>
[Modified from source: _Managing and Sharing Research Data: A Guide to Good
Practice_ ]
## 3.4 Existing Data
During its lifetime, the PlastiCircle project will make use of existing data
in the possession of various partners, as well as other data sources (open
data). The existing data (or background) is defined as “data, know-how or
information (…) that is needed to implement the action or exploit the
results”. It includes inventions, experience and databases. The background
that each partner brings to the project has been defined in the Consortium
Agreement (CA) and is detailed below:
_Table 2. Background provided by each partner_
<table>
<tr>
<th>
**1\. INSTITUTO TECNOLÓGICO DE ENVASE TRANSPORTE Y LOGÍSTICA**
</th> </tr>
<tr>
<td>
**Describe Background**
</td>
<td>
**Specific limitations and/or conditions for implementation (Article 25.2**
**Grant Agreement)**
</td>
<td>
**Specific limitations and/or conditions for Exploitation (Article**
**25.3 Grant Agreement)**
</td> </tr>
<tr>
<td>
1)Collection: Characterization protocols for evaluating the segregation
quality of Municipal Solid Waste(MSW) fractions and specifically packaging,
commingled MSW, organic.
2)Transport:
-Improved algorithms for route optimization tested in urban vehicles, with an interface developed for routing management, able to help decision-making and recalculate routes in real time. - Hardware device for vehicle data acquisition from the CAN-BUS port (speed, GPS position, fuel level, brake use, etc.), connected to a database where data are stored and can be post-processed. - Knowledge in the deployment and use of traceability technologies, such as RFID or QR systems, and sensor devices (i.e. for detecting filling levels). 3)Sorting: Evaluation of
possibilities to sort new packaging materials by Near
</td>
<td>
INSTITUTO TECNOLOGICO
DEL EMBALAJE, TRANSPORTE Y LOGISTICA shall grant access to Background that is,
or will be found to be, necessary for the implementation of the
Project royalty free to the Party or Parties that Need access to implement
their work in the Project.
Provided that the access to background does not contravene non-disclosure
agreements or exploitation agreements with third parties.
ITENE shall not be obliged to grant Access Rights (i) to
Background not owned by
ITENE and/ or (ii) to
Background for which ITENE is not able to grant Access Rights, due to third
party rights and/ or (iii) to
Background for which ITENE is not able to grant Access Rights without paying
compensation to third parties and/or (iv) to information that was not
</td>
<td>
INSTITUTO
TECNOLOGICO DEL
EMBALAJE,
TRANSPORTE Y
LOGISTICA shall grant access to Background needed to use the
Results of the Project under fair and reasonable conditions to be agreed with
the Party or Parties that Need access to use the Results of the Project.
ITENE shall not be obliged to grant Access Rights (i) to Background not owned
by ITENE and/ or (ii) to Background for which ITENE is not able to grant
Access Rights, due to third party rights and/ or (iii) to Background for which
ITENE is not able to grant Access Rights without paying compensation to third
parties and/or (iv) to
</td> </tr> </table>
<table>
<tr>
<th>
InfraRed (NIR); possibility to add
markers in packaging materials that can then be sorted. 4)Recovery and recycling:
Evaluation of recyclability of new packaging materials and specifically
polymers incorporating nanomaterials and TIC technologies by injection,
extrusion and compression moulding.
5)Sustainability assessment:
Environmental, economic and social LCA (Life Cycle
Assessment) of new packaging materials (biomaterials, nanoreinforced
materials, smart packaging)
</th>
<th>
held by ITENE before they acceded to the Grant
Agreement
</th>
<th>
information that was not held by ITENE before they acceded
to the Grant
Agreement
</th> </tr>
<tr>
<td>
**2\. STIFTELSEN SINTEF (SINTEF)**
</td> </tr>
<tr>
<td>
**Describe Background**
</td>
<td>
**Specific limitations and/or conditions for implementation (Article 25.2**
**Grant Agreement)**
</td>
<td>
**Specific limitations and/or conditions for Exploitation (Article**
**25.3 Grant Agreement)**
</td> </tr>
<tr>
<td>
1. Transport: Software modules for predicting vehicle speed profiles and energy consumption.
2. Sustainability assessment:
Environmental and Economic LCA (Life Cycle Assessment) of new packaging
materials (biomaterials, nano-reinforced materials, smart packaging)
</td>
<td>
SINTEF shall grant access to Background that is necessary for the
implementation of the Project royalty free to
the Party or Parties that Need access to implement their work in the Project.
SINTEF shall not be obliged to grant Access Rights (i) to
Background not owned by
SINTEF and/ or (ii) to
Background for which SINTEF is not able to grant Access Rights, due to third
party rights and/ or (iii) to
Background for which SINTEF is not able to grant Access Rights without paying
compensation to third parties and/or (iv) to information that was not held by
SINTEF before they acceded to the Grant
</td>
<td>
SINTEF shall grant access to Background needed to use the Results of the
Project under fair and reasonable conditions to be agreed with the Party or
Parties that Need access to use the Results of the Project.
SINTEF shall not be obliged to grant Access Rights (i) to Background not owned
by SINTEF and/ or (ii) to Background for which SINTEF is not able to grant
Access Rights, due to third party rights and/ or (iii) to Background for which
SINTEF is not
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Agreement
</th>
<th>
able to grant Access Rights without paying compensation to third parties
and/or (iv) to information that was not held by SINTEF before they acceded
to the Grant
Agreement
</th> </tr>
<tr>
<td>
**3\. PICVISA**
</td> </tr>
<tr>
<td>
**Describe Background**
</td>
<td>
**Specific limitations and/or conditions for**
**implementation (Article**
**25.2 Grant Agreement)**
</td>
<td>
**Specific limitations and/or conditions for Exploitation**
**(Article 25.3 Grant Agreement)**
</td> </tr>
<tr>
<td>
1)Sorting of post-consumer plastics:
Evaluation and optimisation of mechanical and optical sorting of post-consumer
plastics. Evaluation and optimisation of marker/tracer technology to
sort plastics including technologies such as fluorescent markers and digital
watermarking. Specific knowledge on the sorting of flexible packaging and the
effect of packaging design on optical sorting.
</td>
<td>
PICVISA shall grant access to Background that is necessary for the
implementation of the Project royalty free to
the Party or Parties that Need access to implement their work in the Project.
</td>
<td>
PICVISA shall grant access to Background needed to use the Results of the
Project under fair and reasonable conditions to be agreed with the Party or
Parties that Need access to use the Results of the Project.
</td> </tr>
<tr>
<td>
**4\. AXION RECYCLING**
</td> </tr>
<tr>
<td>
**Describe Background**
</td>
<td>
**Specific limitations and/or conditions for implementation (Article 25.2**
**Grant Agreement)**
</td>
<td>
**Specific limitations and/or conditions for Exploitation (Article**
**25.3 Grant Agreement)**
</td> </tr>
<tr>
<td>
1)Sorting of post-consumer plastics:
Evaluation and optimisation of mechanical and optical sorting of post-consumer
plastics. Evaluation and optimisation of marker/tracer technology to sort
plastics including
</td>
<td>
AXION RECYCLING shall grant access to Background that is, or will be found to
be,
necessary for the implementation of the
Project royalty free to the Party or Parties that Need access to implement
their
</td>
<td>
AXION shall grant access to Background needed to use the Results of the
Project under fair and reasonable conditions to be
</td> </tr> </table>
<table>
<tr>
<th>
technologies such as fluorescent markers and digital watermarking. Specific
knowledge on the sorting of flexible packaging and the effect of packaging
design on optical sorting.
2)Washing of post-consumer plastics:
Evaluation and optimisation of washing technologies to remove contamination
from packaging. Knowledge on packaging design and the effect on the
washing/cleaning process.
3)Conversion of post-consumer plastics to secondary raw
material
Evaluation and optimisation of extrusion and upgrading technologies to produce
secondary raw materials from post-consumer plastics. Knowledge on the impact
of different materials on the quality recycled.
4) Testing of recycled plastics: Evaluation of recycled polymers to assess the
physical
properties and determine the suitability for different applications.
</th>
<th>
work in the Project.
Provided that the access to background does not contravene non-disclosure
agreements or exploitation agreements with third parties.
AXION shall not be obliged to grant Access Rights (i) to
Background not owned by
AXION and/ or (ii) to
Background for which AXION is not able to grant Access Rights, due to third
party
rights and/ or (iii) to
Background for which AXION is not able to grant Access Rights without paying
compensation to third parties and/or (iv) to information that was not held by
AXION before they acceded to the Grant
Agreement
</th>
<th>
agreed with the Party or Parties that Need access to use the Results of the
Project. AXION shall not be obliged to grant Access Rights (i) to Background
not owned by AXION and/ or (ii) to Background for which AXION is not able to
grant Access Rights, due to third party rights and/ or (iii) to Background for
which AXION is not able to grant Access Rights without paying compensation to
third parties and/or (iv) to information that was not held by AXION before
they acceded
to the Grant
Agreement
</th> </tr>
<tr>
<td>
**5\. CENTRO RICERCHE FIAT (CRF)**
</td> </tr>
<tr>
<td>
No data, know-how or information of CENTRO RICERCHE FIAT shall be Needed by
another Party for implementation of the Project (Article 25.2 Grant Agreement)
or Exploitation of that other Party’s Results (Article 25.3 Grant Agreement).
</td> </tr>
<tr>
<td>
**6\. GEMEENTE UTRECHT (UTRECHT)**
</td> </tr>
<tr>
<td>
No data, know-how or information of GEMEENTE UTRECHT shall be Needed by
another Party for implementation of the Project (Article 25.2 Grant Agreement)
or Exploitation of that other Party’s Results (Article 25.3 Grant Agreement).
</td> </tr> </table>
7. **FUNDACION DE LA COMUNITAT VALENCIANA PARA LA PROMOCION ESTRATEGICA EL DESARROLLO Y LA INNOVACION URBANA (INNDEA)**

No data, know-how or information of FUNDACION DE LA COMUNITAT VALENCIANA
PARA LA PROMOCION ESTRATEGICA EL DESARROLLO Y LA INNOVACION URBANA (LAS
NAVES) shall be Needed by another Party for implementation of the Project
(Article 25.2 Grant Agreement) or Exploitation of that other Party’s Results
(Article 25.3 Grant Agreement).
8. **PRIMARIA MUNICIPIULUI ALBA IULIA (ALBA)**

<table>
<tr>
<th>
**Describe Background**
</th>
<th>
**Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement)**
</th>
<th>
**Specific limitations and/or conditions for Exploitation (Article 25.3 Grant Agreement)**
</th> </tr>
<tr>
<td>
1) Collection: data and information about the existing waste collection system
and collection infrastructure at local level; expertise and information
necessary for the design of the compensation policies.
2) Transport: data and information about the existing truck fleet in Alba
Iulia (needed to design the guidance and best practices report on truck
traceability and driving behaviour guidance).
3) Communication: procedures, expertise and best practices that will be needed
in the regional case study workshop and in all the external communication
activities in which PRIMARIA MUNICIPIULUI ALBA IULIA takes part.
</td>
<td>
PRIMARIA MUNICIPIULUI ALBA IULIA shall grant access to Background that is, or
will be found to be, necessary for the implementation of the Project royalty
free to the Party or Parties that Need access to implement their work in the
Project, provided that the access to background does not contravene
non-disclosure agreements or exploitation agreements with third parties.
PRIMARIA MUNICIPIULUI ALBA IULIA shall not be obliged to grant Access Rights
(i) to Background not owned by PRIMARIA MUNICIPIULUI ALBA IULIA, and/or (ii)
to Background for which PRIMARIA MUNICIPIULUI ALBA IULIA is not able to grant
Access Rights, due to third party rights, and/or (iii) to Background for which
PRIMARIA MUNICIPIULUI ALBA IULIA is not able to grant Access Rights without
paying compensation to third parties, and/or (iv) to information that was not
held by PRIMARIA MUNICIPIULUI ALBA IULIA before they acceded to the Grant
Agreement.
</td>
<td>
PRIMARIA MUNICIPIULUI ALBA IULIA shall grant access to Background needed to
use the Results of the Project under fair and reasonable conditions to be
agreed with the Party or Parties that Need access to use the Results of the
Project.
PRIMARIA MUNICIPIULUI ALBA IULIA shall not be obliged to grant Access Rights
(i) to Background not owned by PRIMARIA MUNICIPIULUI ALBA IULIA, and/or (ii)
to Background for which PRIMARIA MUNICIPIULUI ALBA IULIA is not able to grant
Access Rights, due to third party rights, and/or (iii) to Background for which
PRIMARIA MUNICIPIULUI ALBA IULIA is not able to grant Access Rights without
paying compensation to third parties, and/or (iv) to information that was not
held by PRIMARIA MUNICIPIULUI ALBA IULIA before they acceded to the Grant
Agreement.
</td> </tr> </table>

9\. **MESTNA OBCINA VELENJE (MOV)**

No data, know-how or information of MESTNA OBCINA VELENJE shall be Needed by
another Party for implementation of the Project (Article 25.2 Grant Agreement)
or Exploitation of that other Party’s Results (Article 25.3 Grant Agreement).
**10\. SOCIEDAD ANONIMA AGRICULTORES DE LA VEGA DE VALENCIA (SAV)**

<table>
<tr>
<th>
**Describe Background**
</th>
<th>
**Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement)**
</th>
<th>
**Specific limitations and/or conditions for Exploitation (Article 25.3 Grant Agreement)**
</th> </tr>
<tr>
<td>
1) Collection: knowledge in monitoring the filling of waste containers by
sensors and in external communication from the sensors to a cloud platform;
knowledge and experience in the use of the best systems for the collection of
solid urban waste.
2) Transport: experience in data acquisition from the CAN-BUS port of vehicles
(data taken from more than 200 vehicles over 5 years); knowledge and
experience in the most efficient driving model as a function of the loading
system of the collector (rear-loading, lateral-loading, etc.) and in the field
of fuel consumption as a function of the time of use of the power take-off.
</td>
<td>
SAV shall grant access to Background that is, or will be found to be,
necessary for the implementation of the Project royalty free to the Party or
Parties that Need access to implement their work in the Project, provided that
the access to background does not contravene non-disclosure agreements or
exploitation agreements with third parties.
SAV shall not be obliged to grant Access Rights (i) to Background not owned by
SAV, and/or (ii) to Background for which SAV is not able to grant Access
Rights, due to third party rights, and/or (iii) to Background for which SAV is
not able to grant Access Rights without paying compensation to third parties,
and/or (iv) to information that was not held by SAV before they acceded to the
Grant Agreement.
</td>
<td>
SAV shall grant access to Background needed to use the Results of the Project
under fair and reasonable conditions to be agreed with the Party or Parties
that Need access to use the Results of the Project.
SAV shall not be obliged to grant Access Rights (i) to Background not owned by
SAV, and/or (ii) to Background for which SAV is not able to grant Access
Rights, due to third party rights, and/or (iii) to Background for which SAV is
not able to grant Access Rights without paying compensation to third parties,
and/or (iv) to information that was not held by SAV before they acceded to the
Grant Agreement.
</td> </tr> </table>

**11\. POLARIS M HOLDING (POLARIS)**

<table>
<tr>
<th>
**Describe Background**
</th>
<th>
**Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement)**
</th>
<th>
**Specific limitations and/or conditions for Exploitation (Article 25.3 Grant Agreement)**
</th> </tr>
<tr>
<td>
1) Collection: data and information about the existing waste collection system
and collection infrastructure at local level.
2) Transport: data and information about the existing truck fleet in Alba
Iulia (needed to design the guidance and best practices report on truck
traceability and driving behaviour guidance).
3) Sorting: data and information about the existing sorting solutions at local
level.
</td>
<td>
POLARIS M HOLDING shall grant access to Background that is, or will be found
to be, necessary for the implementation of the Project royalty free to the
Party or Parties that Need access to implement their work in the Project,
provided that the access to background does not contravene non-disclosure
agreements or exploitation agreements with third parties.
POLARIS M HOLDING shall not be obliged to grant Access Rights (i) to
Background not owned by POLARIS M HOLDING, and/or (ii) to Background for which
POLARIS M HOLDING is not able to grant Access Rights, due to third party
rights, and/or (iii) to Background for which POLARIS M HOLDING is not able to
grant Access Rights without paying compensation to third parties, and/or (iv)
to information that was not held by POLARIS M HOLDING before they acceded to
the Grant Agreement.
</td>
<td>
POLARIS M HOLDING shall grant access to Background needed to use the Results
of the Project under fair and reasonable conditions to be agreed with the
Party or Parties that Need access to use the Results of the Project.
POLARIS M HOLDING shall not be obliged to grant Access Rights (i) to
Background not owned by POLARIS M HOLDING, and/or (ii) to Background for which
POLARIS M HOLDING is not able to grant Access Rights, due to third party
rights, and/or (iii) to Background for which POLARIS M HOLDING is not able to
grant Access Rights without paying compensation to third parties, and/or (iv)
to information that was not held by POLARIS M HOLDING before they acceded to
the Grant Agreement.
</td> </tr> </table>

**12\. INDUSTRIAS TERMOPLÁSTICAS VALENCIANAS (INTERVAL)**

<table>
<tr>
<th>
**Describe Background**
</th>
<th>
**Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement)**
</th>
<th>
**Specific limitations and/or conditions for Exploitation (Article 25.3 Grant Agreement)**
</th> </tr>
<tr>
<td>
INTERVAL is a plastic bag manufacturer that has been working with recycled
materials for more than 30 years and has broad knowledge of the behaviour of
polyethylene in film applications. Nowadays INTERVAL makes its products with
plastics from agricultural or industrial waste.
</td>
<td>
INTERVAL shall grant access to Background that is necessary for the
implementation of the Project royalty free to the Party or Parties that Need
access to implement their work in the Project.
</td>
<td>
INTERVAL shall grant access to Background needed to use the Results of the
Project under fair and reasonable conditions to be agreed with the Party or
Parties that Need access to use the Results of the Project.
</td> </tr> </table>

**13\. ARMACELL Benelux S.A. (ARMACELL)**

No data, know-how or information of ARMACELL Benelux S.A. shall be Needed by
another Party for implementation of the Project (Article 25.2 Grant Agreement)
or Exploitation of that other Party’s Results (Article 25.3 Grant Agreement).
**14\. DERBIGUM / IMPERBEL (DERBIGUM)**

<table>
<tr>
<th>
**Describe Background**
</th>
<th>
**Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement)**
</th>
<th>
**Specific limitations and/or conditions for Exploitation (Article 25.3 Grant Agreement)**
</th> </tr>
<tr>
<td>
Derbigum owns the knowhow and technology to produce bituminous roofing
products based on the combination of bitumen and polymers. These polymers are
mainly PP and SBS based. Derbigum has the knowhow and technology to use
recycled polymers within its blends to produce the roofing products. It has
also developed a technology to reuse old roofing membranes without loss within
their new membranes. Derbigum has several patents to protect its technology.
</td>
<td>
Derbigum shall grant access to Background that is necessary for the
implementation of the Project royalty free to the Party or Parties that Need
access to implement their work in the Project.
</td>
<td>
Derbigum shall grant access to Background needed to use the Results of the
Project under fair and reasonable conditions to be agreed with the Party or
Parties that Need access to use the Results of the Project.
</td> </tr> </table>
#### 15\. CONSORZIO PER LA PROMOZIONE DELLA CULTURA PLASTICA PROPLAST (PROPLAST)

<table>
<tr>
<th>
**Describe Background**
</th>
<th>
**Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement)**
</th>
<th>
**Specific limitations and/or conditions for Exploitation (Article 25.3 Grant Agreement)**
</th> </tr>
<tr>
<td>
1) Sorting: Experience in sorting recycled plastic by analytical techniques,
such as Fourier Transform Infrared spectroscopy (FTIR), Thermal Gravimetric
Analysis (TGA) and Differential Scanning Calorimetry (DSC).
2) Recovery and recycling: Development and characterization of high
added-value plastic formulations based on recycled plastics.
</td>
<td>
Proplast will grant access – on a royalty-free basis – to background that is
necessary for the implementation of the project to the party or parties that
need access for implementing their work in the project, provided that the
access to background does not contravene non-disclosure agreements or
exploitation agreements with third parties.
Proplast shall not be obliged to grant access rights (i) to background not
owned by Proplast, and/or (ii) to background for which Proplast is not able to
grant access rights, due to third party rights, and/or (iii) to background for
which Proplast is not able to grant access rights without paying compensation
to third parties, and/or (iv) to information that was not held by Proplast
before they acceded to the Grant Agreement.
</td>
<td>
Proplast shall grant access to background needed to use the results of the
project under fair and reasonable conditions to be agreed with the party or
parties that need access for using the results of the project.
Proplast shall not be obliged to grant access rights (i) to background not
owned by Proplast, and/or (ii) to background for which Proplast is not able to
grant access rights, due to third party rights, and/or (iii) to background for
which Proplast is not able to grant access rights without paying compensation
to third parties, and/or (iv) to information that was not held by Proplast
before they acceded to the Grant Agreement.
</td> </tr> </table>

**16\. HAHN PLASTICS Ltd (HAHN)**

No data, know-how or information of Hahn Plastics Ltd shall be Needed by
another Party for implementation of the Project (Article 25.2 Grant Agreement)
or Exploitation of that other Party’s Results (Article 25.3 Grant Agreement).

**17\. ECOEMBALAJES ESPAÑA S.A. (ECOEMBES)**

<table>
<tr>
<th>
**Describe Background**
</th>
<th>
**Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement)**
</th>
<th>
**Specific limitations and/or conditions for Exploitation (Article 25.3 Grant Agreement)**
</th> </tr>
<tr>
<td>
Sustainability assessment: Environmental LCA (Life Cycle Assessment) of the
SCRAP model. FENIX database (specific database of the recovery, sorting and
recycling stages of household packaging waste management).
</td>
<td>
ECOEMBES shall grant access to Background that is, or will be found to be,
necessary for the implementation of the Project royalty free to the Party or
Parties that Need access to implement their work in the Project, provided that
the access to background does not contravene non-disclosure agreements or
exploitation agreements with third parties.
ECOEMBES shall not be obliged to grant Access Rights (i) to Background not
owned by ECOEMBES, and/or (ii) to Background for which ECOEMBES is not able to
grant Access Rights, due to third party rights, and/or (iii) to Background for
which ECOEMBES is not able to grant Access Rights without paying compensation
to third parties, and/or (iv) to information that was not held by ECOEMBES
before they acceded to the Grant Agreement.
</td>
<td>
ECOEMBES shall grant access to Background needed to use the Results of the
Project under fair and reasonable conditions to be agreed with the Party or
Parties that Need access to use the Results of the Project.
ECOEMBES shall not be obliged to grant Access Rights (i) to Background not
owned by ECOEMBES, and/or (ii) to Background for which ECOEMBES is not able to
grant Access Rights, due to third party rights, and/or (iii) to Background for
which ECOEMBES is not able to grant Access Rights without paying compensation
to third parties, and/or (iv) to information that was not held by ECOEMBES
before they acceded to the Grant Agreement.
</td> </tr> </table>

**18\. FUNDACIÓ KNOWLEDGE INNOVATION MARKET BARCELONA (KIMbcn)**

No data, know-how or information of FUNDACIÓ KNOWLEDGE INNOVATION MARKET
BARCELONA shall be Needed by another Party for implementation of the Project
(Article 25.2 Grant Agreement) or Exploitation of that other Party’s Results
(Article 25.3 Grant Agreement).
**19\. PLASTICSEUROPE (PLAST-EU)**

<table>
<tr>
<th>
**Describe Background**
</th>
<th>
**Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement)**
</th>
<th>
**Specific limitations and/or conditions for Exploitation (Article 25.3 Grant Agreement)**
</th> </tr>
<tr>
<td>
PlasticsEurope is one of the leading European trade associations, with centres
in Brussels, Frankfurt, London, Madrid, Milan and Paris. We are networking
with European and national plastics associations and have more than 100 member
companies, producing over 90% of all polymers across the EU28 member states
plus Norway, Switzerland and Turkey. Since the early 1990s the association of
European plastics manufacturers has been committed and prepared to contribute
to the enhancement of plastics waste management schemes.
Today, we call for a landfill ban of all recyclable and recoverable
post-consumer waste by 2025 and the establishment of recovery-oriented
collection schemes. These will need to be aligned with modern sorting
infrastructure and improved recycling and recovery in order to exploit the
fullest potential of this precious resource. Furthermore, with a focus on high
quality and market standards, this will stimulate markets for the more
resource-efficient use of end-of-life plastics throughout Europe.
Our actions are based on:
* Specific know-how and expertise compiled via studies and thorough evaluations of practices in high-performing member states,
* An open dialogue with all relevant stakeholders,
* Analysis of data on amounts of plastics waste, recycling and recovery in Europe, regularly compiled and made available to the broader public.
All PlasticsEurope publications are available at:
http://www.plasticseurope.org/information-centre/publications.aspx
The Unknown Life of Plastics, 2016
Plastics – the Facts 2015
Plastic Packaging: Born to Protect, 2012
The impact of plastic packaging on energy consumption and GHG emissions, 2011
The impact of plastics on life cycle energy consumption and greenhouse gas
emissions in Europe, 2010
</td>
<td>
PlasticsEurope should grant access to Background that is, or will be found to
be, necessary for the implementation of the Project royalty free to the Party
or Parties that Need access to implement their work in the Project, once the
data is public and has been internally approved, and provided that the access
to background does not contravene non-disclosure agreements or exploitation
agreements with third parties.
PlasticsEurope shall not be obliged to grant Access Rights (i) to Background
not owned by PlasticsEurope, and/or (ii) to Background for which
PlasticsEurope is not able to grant Access Rights, due to third party rights,
and/or (iii) to Background for which PlasticsEurope is not able to grant
Access Rights without paying compensation to third parties, and/or (iv) to
information that was not held by PlasticsEurope before they acceded to the
Grant Agreement.
</td>
<td>
PlasticsEurope should grant access to Background needed to use the Results of
the Project under fair and reasonable conditions to be agreed with the Party
or Parties that Need access to use the Results of the Project.
PlasticsEurope shall not be obliged to grant Access Rights (i) to Background
not owned by PlasticsEurope, and/or (ii) to Background for which
PlasticsEurope is not able to grant Access Rights, due to third party rights,
and/or (iii) to Background for which PlasticsEurope is not able to grant
Access Rights without paying compensation to third parties, and/or (iv) to
information that was not held by PlasticsEurope before they acceded to the
Grant Agreement.
</td> </tr> </table>
**20\. ICLEI EUROPEAN SECRETARIAT GMBH (ICLEI)**

No data, know-how or information of ICLEI EUROPEAN SECRETARIAT GMBH shall be
Needed by another Party for implementation of the Project (Article 25.2 Grant
Agreement) or Exploitation of that other Party’s Results (Article 25.3 Grant
Agreement).

**21\. CALAF INDUSTRIAL**

No data, know-how or information of CALAF INDUSTRIAL shall be Needed by
another Party for implementation of the Project (Article 25.2 Grant Agreement)
or Exploitation of that other Party’s Results (Article 25.3 Grant Agreement).
## 3.5 Origin of data
Each step of the Plastic Value Chain will generate its own data. This means
production of data in the collection, transport, sorting and recovery of the
plastic. Furthermore, analysis data will be generated to evaluate the whole
project approach, as well as to disseminate the project achievements.
_Figure 2. PlastiCircle data origin and fluxes_
### 3.5.1 Collection of plastic waste
The data from the collection of plastic waste will be produced using smart
containers. These smart containers are an innovative collection system that
will make it possible to monitor the pilot tests carried out in the cities of
Valencia, Utrecht and Alba Iulia. These containers will be provided with a
user identification system, label-dispensing functionalities, anti-fraud
measures, garbage level detection, and IoT communication protocols.
For user identification, the system will be provided with a reading system for
“citizen cards” (i.e. a unique and smart identification system based on NFC or
QR). Before depositing a garbage bag in the container, citizens will stick on
it a label dispensed by the container itself. This label, which will also be
designed in the project, will match the garbage bag with the citizen.
It will therefore be possible to know how many bags have been deposited by
each user and to check whether the separation of recycling material has been
done properly or whether there is unwanted material.
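One privacy-preserving way to implement this link, sketched below under the assumption that a keyed hash is acceptable to the project, is to store only an HMAC-derived pseudonym of the citizen card ID rather than the raw identifier; the key name and card ID format are hypothetical.

```python
# Hedged sketch: a keyed hash (HMAC) turns the NFC/QR card identifier
# into a stable pseudonym so deposits can be counted per user without
# storing the raw ID. Key handling is simplified for illustration.
import hashlib
import hmac

SECRET_KEY = b"project-held-secret"  # placeholder; would live in secure storage

def pseudonymise(card_id: str) -> str:
    """Stable pseudonym for a citizen card ID; the raw ID is never stored."""
    return hmac.new(SECRET_KEY, card_id.encode(), hashlib.sha256).hexdigest()[:16]

# The same card always maps to the same pseudonym, so bag counts per
# user can still be computed from deposit events.
print(pseudonymise("NFC-1234-ABCD"))
```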
With respect to garbage level detection, the filling level of containers will
be measured in real time using ultrasonic and/or optic sensors that will
provide information from the bins where these sensors are embedded.
The data from the sensors in the smart containers will be transmitted by
SigFox (UNB, Ultra Narrow Band radio technology), which allows coverage of
several kilometres, or LoRa (a low-power wireless protocol for the Internet of
Things, IoT). LoRa is likely the better option when bidirectionality is
needed, because of its symmetric link, i.e. when command-and-control
functionality is required. With SigFox, bidirectional command-and-control
functionality is possible, but for it to work appropriately the network
density would need to be higher (due to the asymmetric link); SigFox is
therefore better suited to applications that send only small and infrequent
bursts of data.
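As an illustration of why these low-power networks favour small payloads, the sketch below packs a fill-level report into a few bytes; the field layout is an assumption for illustration, not the project's actual uplink protocol.

```python
# Illustrative sketch of packing a fill-level report into the very
# small uplink payloads that SigFox (12 bytes max) and LoRa allow.
import struct

def encode_report(container_id: int, fill_pct: int, battery_mv: int) -> bytes:
    """container id (uint32), fill level 0-100 (uint8), battery in mV (uint16)."""
    return struct.pack(">IBH", container_id, fill_pct, battery_mv)  # 7 bytes

payload = encode_report(42, 73, 3600)
assert len(payload) == 7  # fits comfortably in a single SigFox frame
```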
### 3.5.2 Transport
The transport associated with the collection of packaging waste will be
monitored in order to optimize it. This includes a software platform to gather
all data, a truck traceability system, algorithms for route optimization, and
guidelines for efficient driving. Furthermore, this platform will integrate
all information received from the different data sources (components of the
smart container and truck solution). It will be an IoT (Internet of Things)
web-based platform which will make it possible to: create new users; check the
segregation performance of each user, the filling status of each container,
the remaining labels in each container and the truck position; define driving
behaviour guidance; and establish and assess the current collection route, as
well as raise alarms when containers are full.
In this sense, the system will make it possible to define optimal collection
routes based on the position of the containers and their filling levels. This
optimization will reduce empty travel, increasing the
efficiency of the global system.
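A deliberately simplified sketch of this idea follows: only containers above a fill threshold are visited, ordered by a greedy nearest-neighbour walk over straight-line distances. A production system would use road distances and a proper vehicle-routing solver; all coordinates and identifiers are illustrative.

```python
# Simplified route-selection sketch: skip containers below the fill
# threshold and order the rest by a greedy nearest-neighbour walk.
import math

containers = {  # id -> (x, y, fill %), illustrative coordinates
    "A": (0.0, 1.0, 90), "B": (2.0, 2.0, 40),
    "C": (1.0, 0.5, 75), "D": (3.0, 0.0, 82),
}
THRESHOLD = 70  # only containers at or above this level are collected

def route(start=(0.0, 0.0)):
    todo = {k: v for k, v in containers.items() if v[2] >= THRESHOLD}
    pos, order = start, []
    while todo:
        nxt = min(todo, key=lambda k: math.dist(pos, todo[k][:2]))
        order.append(nxt)
        pos = todo.pop(nxt)[:2]
    return order

print(route())  # ['A', 'C', 'D'] -- container B is not full enough to visit
```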
On the other hand, the transport monitoring process will use a system of
sensors connected to the CAN-Bus of each vehicle of the waste collector fleet.
These sensors will measure the key parameters to optimize the performance of
the vehicles and the behaviour of the drivers.
Electronic devices with GPRS communication will be connected to the CAN of
each vehicle. These devices will record, among others, the following data:
time of use of the power take-off (PTO) of the waste collectors, speeding,
RPM excess, acceleration, sudden braking, fuel consumption and excessive
idling. Reducing the time of use of the PTO when emptying the containers and
making the collectors work at the optimum RPM (i.e., on the optimum power and
consumption curve) will significantly reduce fuel consumption.
The data from the sensors will be sent periodically via GPRS and saved in a
cloud platform. If GPRS coverage is not available, the information will be
stored in the on-board computer, ready to be sent when coverage is restored.
After analysing, for example, the curves of maximum RPM, PTO or idling time,
the hardware installed in the waste collectors will be programmed with the
optimal operating values.
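The store-and-forward behaviour and threshold flagging described above could look like the following sketch; the RPM and idling limits are illustrative values, not project specifications.

```python
# Hedged sketch: telemetry records are queued while GPRS coverage is
# down and flushed once it returns; threshold breaches are flagged.
from collections import deque

MAX_RPM, MAX_IDLE_S = 2200, 300  # illustrative limits
buffer: deque = deque()

def record(sample: dict, coverage: bool, send) -> None:
    """Flag threshold breaches, then send or buffer the sample."""
    sample["alerts"] = [k for k, limit in (("rpm", MAX_RPM), ("idle_s", MAX_IDLE_S))
                        if sample.get(k, 0) > limit]
    buffer.append(sample)
    if coverage:
        while buffer:
            send(buffer.popleft())  # flush backlog in arrival order

record({"rpm": 2500, "idle_s": 20}, coverage=False, send=print)  # buffered
record({"rpm": 1800, "idle_s": 20}, coverage=True, send=print)   # flushes both
```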
A traceability system will be defined and developed with GPS, GPRS, and
CAN-data capabilities. The information generated will be communicated to the
PlastiCircle platform, and defined in a way that is portable and easily
adaptable to different trucks. The system will be tested in real applications
and optimized. All participants will help in the testing, giving input for
optimization. CITIES and WASTE MANAGERS will also provide data about their own
fleets.
### 3.5.3 Sorting
Data from the sorting of plastic packaging will also be collected and
analysed. This will make it possible to develop, integrate and validate
innovative technologies to sort valuable plastic fractions within the
packaging waste. This innovative technology will be based on a new
film-stabilizing conveyor for plastic sorters, able to achieve excellent
performance on films. Special attention will be paid to the stages of material
feeding, identification and ejection. The system will be based on
Near-Infra-Red Hyperspectral Imaging (NIR-HSI), THz (Tera-Hertz) imaging, and
hyperspectral whiskbroom/pushbroom scanning along with spectral shifting.
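As a highly simplified illustration of spectral identification, the sketch
below assigns an unknown spectrum to the best-correlating reference polymer.
The five-band "spectra" and reference values are invented for illustration; a
real NIR-HSI sorter classifies every pixel of a hyperspectral image with far
richer models.

```python
import numpy as np

# Illustrative reference "spectra" (5 bands); not real NIR measurements.
REFERENCE_SPECTRA = {
    "PET":  np.array([0.9, 0.4, 0.7, 0.2, 0.5]),
    "HDPE": np.array([0.3, 0.8, 0.2, 0.6, 0.4]),
    "PP":   np.array([0.5, 0.5, 0.9, 0.3, 0.1]),
}

def classify_polymer(spectrum: np.ndarray) -> str:
    """Return the reference polymer with the highest normalised correlation."""
    def normalise(s):
        s = s - s.mean()
        return s / np.linalg.norm(s)
    u = normalise(spectrum)
    scores = {name: float(normalise(ref) @ u)
              for name, ref in REFERENCE_SPECTRA.items()}
    return max(scores, key=scores.get)

measured = np.array([0.88, 0.42, 0.68, 0.22, 0.48])  # close to the PET reference
print(classify_polymer(measured))  # -> "PET"
```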
### 3.5.4 Recovery
Once the plastic packaging has been sorted, data from the recovery and
recycling of these plastics will be collected and analysed. This information
will make it possible to develop and validate added-value applications and
products from the previously sorted plastic packaging waste.
The parameters considered to meet the technical requirements of recycled
polymers for the new products will be: format (i.e. washed flake, extruded
pellet, sorted packaging etc.), maximum contamination level (i.e. % of PVC,
PS, bioplastics, metals, paper, etc.), as well as mechanical (e.g.
tensile/tear strength), optical (e.g. whiteness, yellowness index and opacity)
and processing properties (e.g. melt flow index) of the recycled material (raw
material for the applications).
In parallel, a characterization of all sorted fractions will be carried out
(contamination level, mechanical and processing properties) with a view to
determining whether they are aligned with the technical requirements
established by the industries. It should be noted that this characterization
will be made both before the pilots (sorted fractions currently available on
the market) and during/after the pilots (PlastiCircle sorted fractions).
Finally, an economic validation will be carried out. For this purpose, the
data on the production costs of each product will be collected, integrated and
analysed; this information will be provided by the five industrial partners in
the project: DERBIGUM, HAHN, INTERVAL, ARMACELL and CRF.
### 3.5.5 Sustainability assessment
The whole PlastiCircle system will be evaluated from a sustainability point of
view. The analysis will be centred on the three pillars of sustainability
(society, environment and economy) making use of a Life Cycle Assessment
approach.
In order to carry out this analysis, data will be needed from the current
plastic waste collection, transport, sorting and treatment in the three test
cities (Valencia, Alba Iulia and Utrecht), as well as from their integration
in the PlastiCircle project. All the partners will collaborate to collect this
information.
### 3.5.6 Data collection & dissemination plan
The Communication and Dissemination Manager (ICLEI) supported by the
consortium will formulate the dissemination strategy. Key elements of the
strategy will include: articulation of the project identity (branding);
identification of target audiences (public and private figures); specification
of channels for connecting with audiences (events and media platforms); cross-
integration of dissemination output (print, electronic and face-to-face). The
strategy will also propose ways of developing synergies with existing projects
in relevant thematic areas.
A dedicated website will provide an extensive record of all publications and
communications originated during the course of the project. This website will
ensure a rapid exchange and circulation of information between partners and
other stakeholders.
All partners will be invited to publish articles in magazines. At least one
article per PlastiCircle result will be published in specialised magazines,
either at national or European level. Scientific publications by ITENE,
Sintef, Axion, Proplast, PICVISA and SAV will follow an open-access model
according to the green model. Part of the life cycle data generated will be
disseminated according to the European Life Cycle Data Network guidelines.
**Training activities:**
In order to increase the participation of citizens, industry and waste
managers during and after the project, a training plan will be developed. The
table below shows the description of training activities:
_Table 3. Training activities of PlastiCircle project_
<table>
<tr>
<th>
Target Audience
</th>
<th>
Purpose
</th>
<th>
Result
</th>
<th>
Medium
</th>
<th>
Volume
</th>
<th>
Location
</th>
<th>
Date
</th>
<th>
Partners
</th> </tr>
<tr>
<td>
Cities and manufacturers of smart containers
</td>
<td>
Synergies of
PlastiCircle solution with other smart containers
</td>
<td>
8-12 attendees
</td>
<td>
Workshop, panel expert
</td>
<td>
1
</td>
<td>
Belgium
</td>
<td>
M4
</td>
<td>
SAV
</td> </tr>
<tr>
<td>
Citizens
</td>
<td>
Involve citizens on the design of the containers
</td>
<td>
Adapt containers to citizen needs
</td>
<td>
Workshop on co-creation methodology
</td>
<td>
1
</td>
<td>
Valencia
</td>
<td>
M5
</td>
<td>
INNDEA
</td> </tr>
<tr>
<td>
Citizens
</td>
<td>
Proper use of containers.
</td>
<td>
Improve
quality of
waste to be sorted
</td>
<td>
Workshops and videos on how to
use smart containers
</td>
<td>
10,000 brochures
1 video
</td>
<td>
Valencia
Alba Iulia
Utrecht
</td>
<td>
M25
M30
M34
</td>
<td>
INNDEA
ALBA
UTRECHT
</td> </tr>
<tr>
<td>
Waste managers
</td>
<td>
Proper use of our guidelines
</td>
<td>
Stakeholders trained
</td>
<td>
Training event:
segregation quality
</td>
<td>
1 per country
</td>
<td>
Valencia
Alba Iulia
Utrecht
</td>
<td>
M36
</td>
<td>
SAV, ITENE,
ECOEMBES
</td> </tr>
<tr>
<td>
Waste managers,
drivers
</td>
<td>
Proper use of our guidelines
</td>
<td>
Stakeholders trained
</td>
<td>
Training event:
efficient driving
</td>
<td>
1 per country
</td>
<td>
Valencia
Alba Iulia
Utrecht
</td>
<td>
M36
</td>
<td>
ITENE SINTEF
PLAST-EU
</td> </tr>
<tr>
<td>
Waste managers, sorting plants
</td>
<td>
Proper use of our guidelines
</td>
<td>
Stakeholders trained
</td>
<td>
Training event:
Optimal sorting
</td>
<td>
1 per country
</td>
<td>
Valencia
Alba Iulia
Utrecht
</td>
<td>
M36
</td>
<td>
PICVISA
</td> </tr> </table>
**Workshops:**
Co-design work is foreseen in order to adapt the PlastiCircle approach to the
needs of stakeholders. Co-design will mainly rely on Distributed Participatory
Design (DPD) and Mass Participatory Design (MPD). The application of both
methodologies has as its main objective the collection and incorporation of
input from all stakeholders into the final design of the PlastiCircle
approach. Distributed Participatory Design (DPD) will be based on meetings
organised by UTRECHT, INNDEA and ALBA IULIA in which the initial project
approach will be explained to stakeholders (especially citizens, but also
associations and companies, will be invited). In these meetings, stakeholders
will be asked for comments and suggestions on how to improve and adapt the
PlastiCircle approach to the specific needs of the cities under study. Mass
Participatory Design (MPD) will be based on the integration into the webpage
of a platform to compile comments/suggestions from stakeholders. INNDEA will
prepare the material needed for ICLEI to incorporate this platform into the
webpage. This platform will work as a social network in which stakeholders
will be able to give comments. The tasks and results of each partner will also
be presented on the platform, giving stakeholders the possibility to contact
them directly with suggestions for improving the whole system. In order to
boost the participation of stakeholders in the platform, visits to the waste
management plants in Utrecht, Valencia and Alba Iulia will be raffled among
participants. UTRECHT, INNDEA and ALBA IULIA will attach stickers to the
containers used in the pilots in order to inform citizens about the
possibility of giving comments on the platform.
**Individual visits:**
Moreover, visits to citizens are foreseen during the initial stage of each
pilot in the three cities. These visits, conducted by UTRECHT, INNDEA and ALBA
IULIA, will be used to inform the citizens about the pilot and also to collect
input from them through questionnaires.
**Questionnaires:**
General questionnaires will be prepared by INNDEA and SINTEF in English and
then adapted and translated into the local languages by the three cities.
Comments registered on the platform will be collected by ICLEI, and input
gathered via questionnaires/visits/meetings by INNDEA, UTRECHT and ALBA IULIA
respectively. All these comments will be sent to INNDEA, which will analyse
and present them at the PSC meetings. All partners will analyse these results,
taking decisions for the following stages of the project in order to align the
design with the requirements of stakeholders.
**Communication campaigns.**
Three types of communication activities are expected:
Awareness-raising campaigns among citizens to boost their understanding of the
new systems before implementation and to encourage participation in the pilots
(one per pilot city).
Waste management campaign to inform waste managers about the implementation
and use of the solutions and technologies in each pilot demonstrator.
A recycled plastic campaign in each participating country to show the expected
impact and benefits of using recycled plastic from packaging waste.
**Communication Material:**
Posters, roll-ups, business cards, flyers. A series of three posters is
planned to promote the project and will be available in the project languages.
The posters will be displayed at relevant events related to the circular
economy, within the beneficiaries’ organizations, and at conferences and
exhibitions. In addition, a minimum of 5,000 flyers or postcards will be
distributed at events and sent to relevant organizations from the European
polymer industries, waste managers and public authorities.
Participation in external congresses/conferences/fairs. The dissemination of
the project will be boosted by the participation of all the partners in
congresses/conferences/fairs at which the project outcomes will be highlighted
and presented to the other participants.
## 3.6 Data Volumes & Storage
### 3.6.1 PlastiCircle IoT cloud Platform
SAV will collaborate to set up the project’s IoT cloud Platform. The location,
architecture and structure of the repository and the related security
processes will be decided and documented.
The IoT cloud Platform is the official data repository of the PlastiCircle
project in addition to the project’s web-site.
All project data, public and private, will be stored in the IoT cloud
Platform, consistently and concurrently.
The IoT cloud Platform must support secure access to its storage facilities,
based on SSL/TLS certificates. ITENE and SAV will closely collaborate to
maintain the IoT cloud Platform.
### 3.6.2 Data Volumes
Expected (approximate) data volumes of existing datasets (i.e., those made
available to PlastiCircle) will be provided in the updated version of the DMP
in M30.
### 3.6.3 Data Storage
The existing project datasets are stored at the premises of the owning
partners / organizations. Copies of them will be delivered to PlastiCircle
storage after anonymization.
Datasets of relatively small volumes, such as surveys, spreadsheets,
interviews, focus-group data, along with accompanying consent forms, will also
be uploaded to and maintained at the project’s SharePoint space.
The minimum available capacity for data storage in the IoT cloud Platform
should be 1 TByte, extendible to 3 TBytes. This storage capacity is considered
sufficient for the scope and duration of PlastiCircle; however, if necessary,
it can be increased at very little cost.
### 3.6.4 Managed Access Procedures
The IoT cloud Platform will store all project data that are potentially of
large volume, typically comprising smart containers and transport data.
There will be password-controlled access to the IoT cloud Platform, and all
partners will be able to access the private and shared project data through
passwords that will be provided to them in a secure way.
Data will be uploaded to the IoT cloud Platform by any PlastiCircle partner
and downloaded via secure FTP. Typical secure FTP applications are sftp,
WinSCP, FileZilla, Cyberduck, etc.
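A minimal sketch of such an upload using the Python paramiko library over
SFTP; the host name, account and paths are placeholders, not actual platform
details:

```python
import paramiko

def upload_dataset(local_path: str, remote_path: str) -> None:
    """Upload one anonymised dataset to the platform over SFTP (encrypted)."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()            # trust only known hosts
    client.connect("iot-platform.example.org",
                   username="partner_account",
                   key_filename="/home/partner/.ssh/id_rsa")
    try:
        sftp = client.open_sftp()
        sftp.put(local_path, remote_path)     # encrypted file transfer
        sftp.close()
    finally:
        client.close()

upload_dataset("filling_levels_2019-03.csv",
               "/uploads/filling_levels_2019-03.csv")
```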
Data-sharing agreements between partners will not be necessary, because the
data on the repository will be anonymous.
Each partner is responsible for providing its own metadata and data descriptions.
Quality control on the metadata and data descriptions will be enforced by the
IoT cloud Platform manager (SAV) and provisions will be made to enable
metadata searching capability.
### 3.6.5 Third-party Data
During the project, all results, technologies and techniques will be analysed
in order to establish whether any of the results can be exploited by licensing
them to a third party, according to the market interest of the partners. The
main target stakeholders for this exploitation strategy will be public
administrations, which can adopt the solutions and transfer them to their
service providers, and recycling and waste management companies, mostly
environmental engineering firms that want to acquire the right to exploit the
solution in order to boost their competitiveness. The strategy will be mainly
driven by the aim of boosting the European economy as a whole. For this
reason, the exploitation rights granted to interested entities will be
non-exclusive, in order to maximise the positive effects of the PlastiCircle
project.
# 4\. FAIR data
In general terms, PlastiCircle’s research data should be “F.A.I.R.”, that is,
Findable, Accessible, Interoperable and Re-usable. These principles precede
implementation choices and do not necessarily suggest any specific technology,
standard or implementation solution.
It should be noted that participating in the ORD Pilot does not necessarily
imply opening up all PlastiCircle research data. Rather, the ORD Pilot follows
the principle "as open as possible, as closed as necessary" and focuses on
encouraging sound data management as an essential part of research best
practice.
The European Commission recognizes that there are good reasons to keep some or
even all research data generated in a project closed. The DMP explains which
data can be shared and under what terms and conditions.
## 4.1 Making data findable, including provisions for metadata
### 4.1.1 Provisions for Findability
Provisions and actions that are to be taken to ensure the discoverability of
PlastiCircle data include:
* Accompanying datasets with properly structured and accurate metadata
* Providing proper documentation identifying their content and potential uses
* Making data identifiable by using standard identification mechanisms and persistent, unique identifiers (e.g., Digital Object Identifiers (DOIs)), where applicable
* Advertising them in global search engines
* Publishing papers and reports with references to them
* Providing consistent naming conventions (e.g., using descriptive filenames, in English, with version numbers, etc.; a minimal validation sketch is given after this list)
* Ensuring accessibility of the hosting infrastructure (at least 99.9%)
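As an illustration of the naming-convention item above, the sketch below
validates filenames against one possible pattern (project prefix, descriptive
English name, ISO date, version number). The exact pattern is an assumption,
since the project has not standardised one in this DMP version.

```python
import re

FILENAME_PATTERN = re.compile(
    r"^plasticircle_"             # project prefix
    r"[a-z0-9]+(?:-[a-z0-9]+)*_"  # descriptive dataset name, hyphen-separated
    r"\d{4}-\d{2}-\d{2}_"         # ISO date of creation
    r"v\d+\.\d+"                  # version number, e.g. v1.0
    r"\.[a-z0-9]+$"               # file extension
)

def is_valid_name(filename: str) -> bool:
    """Check a dataset filename against the illustrative convention."""
    return FILENAME_PATTERN.match(filename) is not None

print(is_valid_name("plasticircle_filling-levels_2019-05-31_v1.0.csv"))  # True
print(is_valid_name("data_final2.csv"))                                  # False
```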
### 4.1.2 Metadata and documentation
Data should be documented and organized in order to be accessible. Good data
documentation includes information on:
* the context of data collection: aims, objectives and hypotheses
* data collection methods: data collection protocol, sampling design, instruments, hardware and software used, data scale and resolution, temporal coverage and geographic coverage
* dataset structure of data files, cases, relationships between files
* data sources used
* data validation, checking, proofing, cleaning and other quality assurance procedures carried out
* modifications made to data over time since their original creation and identification of different versions of datasets
* information on data confidentiality, access and use conditions, where applicable
At data-level, datasets should also be documented with:
* names, labels and descriptions for variables, records and their values
* explanation of codes and classification schemes used
* codes of, and reasons for, missing values
* derived data created after collection, with code, algorithm or command file used to create them
* weighting and grossing variables created
* data listing with descriptions for cases, individuals or items studied
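A dataset-level metadata record covering the items above might look as
follows; the field names and values are illustrative assumptions, and the
project may later adopt a community schema such as Dublin Core or DataCite.

```python
# Illustrative metadata record; field names are an assumption for this sketch.
dataset_metadata = {
    "title": "Smart-container filling levels, Valencia pilot",
    "context": "Collected to assess citizen segregation performance",
    "collection_method": "Ultrasonic fill-level sensors, periodic sampling",
    "temporal_coverage": "2019-04-01/2019-06-30",
    "geographic_coverage": "Valencia, Spain",
    "structure": "One CSV file per container; rows keyed by timestamp",
    "variables": {
        "ts": "observation timestamp, ISO 8601, UTC",
        "fill_pct": "estimated fill level, percent (0-100)",
        "battery_pct": "sensor battery level, percent",
    },
    "missing_value_code": "-1 (sensor offline)",
    "quality_assurance": "Range checks and duplicate-timestamp removal",
    "version": "v1.0",
    "access_conditions": "Anonymised; open after pilot completion",
}
```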
## 4.2 Making data openly accessible
The IoT cloud Platform will store all large volumes of data that are not
appropriate for storage on SharePoint (e.g. filling levels, ID data,
efficiency driving, collection routes, plastic characteristics, etc.).
Following the H2020 open access strategy for scientific publications, all the
publishable results will follow an open-access model according to a green
model. Previously, an IPR assessment will be done to achieve the best
exploitation strategy for each partner and avoid incompatibilities.
The “green model”, or self-archiving, will be the preferred route. However,
when the journal does not allow the deposit of articles in the repository, two
strategies will be followed:
* Publish the article in “gold model” open access, on payment of fees, or consider the possibility of publishing it in open-access journals.
* If gold or green open access is not possible, other relevant journals will be chosen.
In order to comply with green model requirements, beneficiaries will, at the
very least, ensure that their publications, if any, can be read online,
downloaded and printed. Each beneficiary will deposit a machine-readable
electronic copy of the published version or final peer-reviewed manuscript
accepted for publication in a repository for scientific publications. This
step will be followed even in case of open access publishing ('gold' open
access) in order to ensure long-term preservation of the article. The
repository for scientific publications is an online archive. Open Access
Infrastructure for Research in Europe (OpenAIRE) will be the entry point for
researchers to determine what repository to choose (http://www.openaire.eu).
After depositing publications and, where possible, underlying data,
beneficiaries will ensure open access to the deposited publication via the
chosen repository.
On the other hand, a pre-selection of journals and publications of interest
for the dissemination of the project results has been made:
_Table 4. Pre-selected Journals for Open Access dissemination strategy_
<table>
<tr>
<th>
**Pre-selected Journals for open access dissemination strategy**
</th> </tr>
<tr>
<td>
Advances in Recycling & Waste
Management
</td>
<td>
Advances in Materials Science and Engineering
</td> </tr>
<tr>
<td>
International Journal of Integrated
Engineering
</td>
<td>
Open Environmental Engineering Journal
</td> </tr>
<tr>
<td>
Sustainability: Science, Practice and Policy
</td>
<td>
Journal of Management and Science
</td> </tr>
<tr>
<td>
Environmental Research, Engineering and
Management
</td>
<td>
Journal of Waste Management
</td> </tr> </table>
### 4.2.1 Methods for Data Sharing
Methods for data sharing include:
* Secure FTP access (compressed files)
* Web-site (hyperlink based) file access
* Web-services (SOAP, REST) through database access
* API-based access (for application programmers)
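A minimal sketch of the web-service/API-based options above, using Python's
requests library; the endpoint, query parameters and token are hypothetical,
as the platform API had not been specified at the time of writing.

```python
import requests

BASE_URL = "https://iot-platform.example.org/api/v1"   # placeholder endpoint

# Query one month of fill-level records for a pilot city (illustrative API).
response = requests.get(
    f"{BASE_URL}/datasets/filling-levels",
    params={"city": "valencia", "from": "2019-04-01", "to": "2019-04-30"},
    headers={"Authorization": "Bearer <access-token>"},  # placeholder token
    timeout=30,
)
response.raise_for_status()
for record in response.json()["records"]:
    print(record["container_id"], record["fill_pct"])
```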
Potential data users include public administrations, which can adopt the
solutions and transfer them to their service providers, and recycling and
waste management companies, mostly environmental engineering firms that want
to acquire the right to exploit the solution in order to boost their
competitiveness. All shareable data will be released for general access as
soon as the dataset is complete, its quality is assured, and it is
sufficiently annotated to be widely useful. Before general release, the
adequacy of data storage and access procedures will be tested first by project
personnel, then by selected colleagues external to the project.
Publications describing the data collected and conclusions drawn from them
would be submitted soon thereafter. Other data will more appropriately be made
generally available at the time publications reporting on them are accepted.
Archived data will be made available initially as just described and are
intended to be available indefinitely or until judged no longer useful.
The project data are intended to be available indefinitely beyond the term of
the grant, or for as long as their hosting infrastructure (repositories, etc.)
are accessible.
The development of data-analysis tools is anticipated as an outcome of the
PlastiCircle project. Tools that are openly offered will be made available
through the project’s website and/or the IoT cloud Platform, with sufficient
tutorial material to allow future users to use the tools without undue
difficulty.
### 4.2.2 Data Repositories
As mentioned above, all project data will be stored (maintained and archived)
in PlastiCircle’s IoT cloud Platform. However, public information (i.e. public
deliverables) will be made open access through Open Data Repositories while
the project runs and after its end. This will make PlastiCircle open data more
persistent, even if the IoT cloud Platform is no longer available and
maintained several years after the end of the project.
An international list of global data repositories is available via _Re3data_
( _http://www.re3data.org_ ) .
Journal articles will be made available on an Open Access basis. Outputs
deposited in such repositories will be discoverable via search engines such as
Google Scholar, increasing visibility, the likelihood of citation, and the
raising of research profiles. Open access facilitates broader knowledge
transfer and open science, as it ensures that non-academic organisations, such
as small and medium-sized enterprises and charities with limited access to
journal outputs, are able to freely access published research via the
Internet.
## 4.3 Making data interoperable
Interoperability refers to allowing data exchange and re-use between
researchers, institutions, organizations, countries, etc. (i.e. adhering to
standards for formats, remaining as compliant as possible with available
(open) software applications, and particularly facilitating re-combinations
with different datasets from different origins).
Interdisciplinary interoperability of PlastiCircle data is ensured by:
* Having the data accessible via global and well-known data repositories
* Using consistent and standard metadata vocabularies
* Using standard and re-usable data formats
* Providing well-defined and standard access methods
* Providing open tools and open software for processing the data
* Adhering to open data standards as much as possible
* Using licensing types that facilitate broad use and open access of the data
## 4.4 Increase data re-use
Publicly available data from the PlastiCircle project will be provided in open
formats and with the least restrictive licenses possible, to allow the widest
possible reuse.
### 4.4.1 Data Sharing
Public (shareable) datasets from the PlastiCircle project (along with
accompanying metadata) will be shared in a timely fashion. It is generally
expected that timely release should be no later than 3 months after
publication of the main findings and that the data will remain available and
re-usable until at least 3 years after the end of the project.
Shareable data will have properly defined and specific (open) formats and
non-restrictive licensing descriptions, to facilitate easy re-use and
interoperability by third parties. Data licensing will respect the owners’
IPRs (requiring proper attribution), but will permit the re-use of the data
for non-commercial purposes.
The public datasets will be complete and consistent and a quality assurance
process will be enforced on them before they are made available to ensure that
they are usable and of acceptable quality. User feedback will be used to
further improve the quality of the data and increase their re-usability.
### 4.4.2 Expected Data Re-use
**Interested parties in PlastiCircle data include:**
* Stakeholders, including plastic producers/converters, waste managers and equipment firms
* Consumers/citizens
* Public organizations
* Researchers and academics conducting research in the field of plastic recycling and converting.
**Improving competitiveness in Europe:**
The project will create new business opportunities in the plastic sector in
terms of producers, converters, waste management, equipment and software. They
will manufacture and exploit eco-innovative solutions on collection,
transport, sorting and recovery of plastic waste, which will be launched on
the European and global markets.
### 4.4.3 IPR Ownership and Licensing
The definition of IPR policies and knowledge management for knowledge
protection in PlastiCircle will be based on the Consortium Agreement, in
addition to the IPR provisions in the Grant Agreement and Rules for
Participation of Horizon 2020. IPR and Innovation management will be described
in a Strategy Document (D. 8.4) and will result in the Exploitation and
Business Plan (D. 8.3) where each partner’s need will be identified. WP
leaders will be responsible within their WP for the identification, monitoring
and issue of project outcomes. They will fill in an IPR Chapter in their
periodic reporting, which will be added in the Exploitation and Business Plan.
In the General Assembly, IPR issues will be presented and decisions taken
jointly concerning the strategies and provisions for the protection of IPR in
specific cases. The main effort here will be devoted to the implementation of
the IPR-related decisions of the Project Steering Committee.
It should be noted that the Project Steering Committee (formed by one
representative of each partner) will be formed during the first six months of
the project, with its first meeting in Month 6.
For every dataset deposited to the PlastiCircle IoT cloud Platform, the
following information will have to be declared:
* Data description
* Data volume and format
* Ownership of data (i.e., who produced and who owns the data)
* Licensing type (“Creative Commons” type, or other)
The above requirement applies to both local project data, produced by
PlastiCircle partners during the lifetime of the project, as well as for
existing and third-party data that will be used during PlastiCircle.
**Open License (Legal Openness)**
In most jurisdictions, there are intellectual property rights in data that
prevent third-parties from using, reusing and redistributing data without
explicit permission. Even in places where the existence of rights is
uncertain, it is important to apply a license simply for the sake of clarity.
Thus, if you are planning to make your data available you should put a license
on it – and if you want your data to be open this is even more important.
For open data one of the licenses conformant with the Open Definition and
marked as suitable for data can be used. This list (along with instructions
for usage) can be found at:
_http://opendefinition.org/licenses_
A short instruction guide to applying an open data license can be found on the
Open Data Commons website:
_http://opendatacommons.org/guide_
Creative Commons (CC) licensing is described in:
_https://creativecommons.org/licenses/_
_https://en.wikipedia.org/wiki/Creative_Commons_
_https://en.wikipedia.org/wiki/Creative_Commons_license_
# 5\. Allocation of resources
## 5.1 Allocation of Responsibilities
People/groups involved in Data Management in PlastiCircle are the members of
the PSC:
_Table 5. Members of the PSC: People involved in the Data Management Plan_
<table>
<tr>
<th>
</th>
<th>
**Partner**
</th>
<th>
**Person**
</th>
<th>
</th>
<th>
**Partner**
</th>
<th>
**Person**
</th> </tr>
<tr>
<td>
1
</td>
<td>
ITENE
</td>
<td>
César Aliaga
</td>
<td>
11
</td>
<td>
POLARIS
</td>
<td>
Tarsosaga IonutNicolae
</td> </tr>
<tr>
<td>
2
</td>
<td>
SINTEF
</td>
<td>
Dr. Einar Hinrichsen
</td>
<td>
12
</td>
<td>
INTERVAL
</td>
<td>
Eva García
</td> </tr>
<tr>
<td>
3
</td>
<td>
PICVISA
</td>
<td>
Luis Seguí
</td>
<td>
13
</td>
<td>
ARMACELL
</td>
<td>
Sven Hendriks
</td> </tr>
<tr>
<td>
4
</td>
<td>
AXION
</td>
<td>
Richard McKinlay
</td>
<td>
14
</td>
<td>
DERBIGUM
</td>
<td>
Hans Aerts
</td> </tr>
<tr>
<td>
5
</td>
<td>
CRF
</td>
<td>
Vito Lambertini
</td>
<td>
15
</td>
<td>
PROPLAST
</td>
<td>
Marco Monti
</td> </tr>
<tr>
<td>
6
</td>
<td>
UTRECHT
</td>
<td>
Jan Bloemheuvel
</td>
<td>
16
</td>
<td>
HAHN
</td>
<td>
Howard
Waghorn
</td> </tr>
<tr>
<td>
7
</td>
<td>
INNDEA
</td>
<td>
Julian Torralba
</td>
<td>
17
</td>
<td>
ECOEMBES
</td>
<td>
Ana Rivas
</td> </tr>
<tr>
<td>
8
</td>
<td>
ALBA
</td>
<td>
Valentin Voinica
</td>
<td>
18
</td>
<td>
KIMbcn
</td>
<td>
Jordi Gasset
</td> </tr>
<tr>
<td>
9
</td>
<td>
MOV
</td>
<td>
Mirjam Britovšek
</td>
<td>
19
</td>
<td>
PLAST-EU
</td>
<td>
Irene Mora
</td> </tr>
<tr>
<td>
10
</td>
<td>
SAV
</td>
<td>
Jerónimo Franco
</td>
<td>
20
</td>
<td>
ICLEI
</td>
<td>
Kelly Cotel
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
21
</td>
<td>
CALAF
INDUSTRIAL
(THIRD PARTY)
</td>
<td>
Rodrigo Verbal
</td> </tr> </table>
## 5.2 Additional Resources & Costing
The cost of Data Management regarding PlastiCircle’s role and responsibilities
has been included in the project’s budget and no additional resources (i.e.
with extra costs) will be charged to the project.
# 6\. Data security
## 6.1 Data Security Policies
### 6.1.1 Confidentiality, Integrity, Availability
ITENE’s Information Security Management System (ISMS) is certified by AENOR,
according to ISO 27001. An ISMS is the most effective means of minimising
risks, ensuring that assets and risks are identified, that the impact on the
organisation is considered, and that the most effective controls and
procedures are adopted in line with business strategy.
The policies, procedures, and human and machine resources which constitute the
ISMS of the PlastiCircle project ensure an effective management of information
according to the “CIA” triad — Confidentiality, Integrity and Availability:
* confidentiality, ensuring that only those who are authorised can access the information,
* integrity, ensuring that the information and its processing methods are accurate and complete, and
* availability, ensuring that authorised users have access to the information and to related assets when they need it.
Physical security, network security and security of computer systems and files
are all considered to ensure security of data and prevent unauthorised access,
changes to data, disclosure or destruction of data.
The certification of Information Security Management by AENOR, in accordance
with UNE-ISO/IEC 27001:2014, helps to promote data protection activities, thus
improving the partners’ image and generating confidence with respect to third
parties.
### 6.1.2 Data Anonymization / Pseudonymization
According to the GDPR, “pseudonymization” refers to the processing of personal
data in such a manner that the personal data can no longer be attributed to a
specific data subject without the use of additional information, provided that
such additional information is kept separately and is subject to technical and
organisational measures to ensure that the personal data are not attributed to
an identified or identifiable natural person.
However, the explicit introduction of pseudonymization is not by itself
sufficient to preclude other data protection measures. Therefore, security
policies for data protection should always be enforced as strictly as
possible.
The controllers of the PlastiCircle IoT cloud Platform should require that
data pseudonymization (or anonymization) is enforced before any dataset is
uploaded to the IoT cloud Platform.
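A minimal sketch of this requirement: direct identifiers are replaced by a
keyed (HMAC-SHA-256) hash before upload, with the key, i.e. the GDPR
"additional information", kept separately from the published data. Field names
and key handling are illustrative assumptions.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"keep-this-key-offline-and-separate"   # never uploaded

def pseudonymize(record: dict, id_field: str = "citizen_id") -> dict:
    """Replace a direct identifier with a stable, non-reversible pseudonym."""
    out = dict(record)
    token = hmac.new(SECRET_KEY, str(out.pop(id_field)).encode(),
                     hashlib.sha256).hexdigest()
    out["subject_token"] = token[:16]   # pseudonym usable for linkage only
    return out

raw = {"citizen_id": "ES-46-000123", "deposits": 14, "avg_quality": 0.92}
print(json.dumps(pseudonymize(raw), indent=2))
```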
### 6.1.3 Encrypted Communications
Even if pseudonymization is done at the device, the user's privacy can still
be compromised because the data is typically communicated through WiFi
networks, and may thus be eavesdropped on and/or targeted by
man-in-the-middle attacks. For this reason, strong end-to-end encryption must
be enforced from the device (one end) to the central server (the other end).
On the server's end, the uploaded data should be decrypted, validated and
stored, stripped of the (hashed) device_ID. [Note: for statistical purposes,
we may store the hashed device_IDs, without linking them with their associated
data.]
The hashing and encryption keys should comply with symmetric/asymmetric
cryptography standards and techniques. Strong symmetric encryption (AES-256)
and strong asymmetric encryption (e.g. RSA-2048) should be used to provide the
strongest possible encryption available today. Server digital certificates are
protected on the server's side. Shared keys are exchanged with the client
devices using asymmetric cryptography and are applied in the symmetric
encryption/decryption of large data volumes (for efficiency reasons).
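A minimal sketch of this scheme using the third-party Python "cryptography"
package: the device encrypts each payload with a shared AES-256-GCM session
key (assumed here to have been exchanged beforehand via RSA) and transmits
only a hashed device ID alongside the ciphertext. Keys and payloads are
illustrative.

```python
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

shared_key = AESGCM.generate_key(bit_length=256)   # AES-256 session key

def device_send(device_id: str, payload: bytes) -> dict:
    """Encrypt a payload end-to-end; only a hashed device ID travels with it."""
    nonce = os.urandom(12)                          # unique per message
    ciphertext = AESGCM(shared_key).encrypt(nonce, payload, None)
    hashed_id = hashlib.sha256(device_id.encode()).hexdigest()
    return {"device": hashed_id, "nonce": nonce, "data": ciphertext}

def server_receive(msg: dict) -> bytes:
    """Decrypt and validate; storage would strip the hashed device ID."""
    return AESGCM(shared_key).decrypt(msg["nonce"], msg["data"], None)

msg = device_send("container-0042", b'{"fill_pct": 87}')
print(server_receive(msg))
```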
## 6.2 Storage & Backups
Data storage, in the context of Data Security, must be done in such a way as
to ensure the privacy and integrity of data and to prevent unauthorised
access, changes to data, and disclosure or destruction of data.
Transmitting (uploading or downloading) sensitive or personal data between
locations or within research teams must always be done using data encryption,
e.g. using secure FTP or other secure data transfer protocol, to ensure data
privacy and prevent unauthorized access of data (e.g. eavesdropping).
Access to data repositories should be password protected and access logs
should be maintained.
Archived data of personal or sensitive nature should be stored encrypted, with
strong encryption.
Before the PlastiCircle project is completed, the partners will decide which
data will have to be destroyed and which data will be maintained (and for how
long).
To ensure data integrity, avoid loss of data and maintain storage consistency,
regular data backups should be performed on a weekly and monthly basis, either
incremental or full. Data backups should be accompanied with appropriate and
corresponding data recovery procedures.
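A minimal sketch of one possible cadence, full backups on the first Sunday of
each month and incremental backups on the other Sundays; the schedule and the
stubbed backup action are assumptions, as the DMP only fixes the weekly and
monthly rhythm.

```python
import datetime

def backup_kind(day: datetime.date):
    """Return 'full', 'incremental', or None for a given calendar day."""
    if day.weekday() == 6 and day.day <= 7:     # first Sunday of the month
        return "full"
    if day.weekday() == 6:                      # every other Sunday
        return "incremental"
    return None

# Dry run over four weeks; a real deployment would invoke backup tooling here.
for offset in range(28):
    day = datetime.date(2019, 9, 1) + datetime.timedelta(days=offset)
    kind = backup_kind(day)
    if kind:
        print(day, kind)
```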
# 7\. Ethical aspects
An “Ethics Handbook” has been developed, as Deliverables D10.1, D10.2, D10.3
and D10.4 of Work Package 10, which addresses the ethics requirements of the
PlastiCircle project.
# 8\. Conclusion
The Data Management Plan (DMP) will be the guiding document for the project’s
data treatment and management. As has been seen, the DMP describes which data
are collected, processed or generated and how, and also outlines the
methodology and standards used. Furthermore, the DMP explains whether and how
this data is shared and/or made open, and how it is curated and preserved.
Finally, it should be taken into account that the DMP evolves during the
lifespan of the project. Thus, this initial version will be updated in M30 to
reflect the status of the PlastiCircle project with respect to its data
management.
0089_IL TROVATORE_740415.md
# Introduction to DMP development
IL TROVATORE contributes to the H2020 Open Research Data (ORD) Pilot, which
‘aims to make the research data generated by H2020 projects accessible with as
few restrictions as possible, while at the same time protecting sensitive data
from inappropriate use’ [1]. The IL TROVATORE participation in the ORD Pilot
calls for the development of a Data Management Plan (DMP), the 1st version of
which is presented herein; regularly updated versions of the DMP, which is a
‘living document’, will be submitted at the end of each reporting period or in
an _ad hoc_ manner when properly motivated.
Typically, a DMP is a document that describes the way the data will be treated
during the project lifetime and the fate of these data after the end of the
project [2]. DMPs may cover fully or partially the data life cycle, i.e., from
data production, selection/collection and organisation (e.g., databases),
through quality assurance & quality control, documentation (e.g., data types,
lab methods) and use of the data, to data preservation and sharing (e.g.,
dissemination strategies) [2]. Michener et al. [2] have summarised the
following 10 simple rules that govern the development of an efficient DMP:
1. **Determine the research sponsor requirements:** sponsors usually provide DMP requirements in either the public request/call for proposals or in an online grant proposal guide. The European Commission’s expectations from the DMPs produced within the ORD Pilot may be found in the Participant Portal H2020 Online Manual [3], which also provides a DMP template in ODT format (i.e., an OpenDocument format) [4]. This template was used to produce the IL TROVATORE DMP.
2. **Identify the data to be collected:** a good DMP plan includes sufficient information to understand the collected data, i.e., the data types, sources and volume, as well as the data and file formats. Since not all data-related information is known for the IL TROVATORE project at this stage, the DMP will be iteratively updated in the project lifetime.
3. **Define how the data will be organised:** defining the best approach to organise and manage the produced data can only happen when the types and volume of data are known or can at least be predicted with a reasonable level of accuracy. Therefore, the appropriate software tools for data organisation will be defined in one of the updated DMP versions that will be produced later in the project lifetime. The IL TROVATORE DMP proposes naming conventions for important documents (e.g., Deliverables, Milestones, etc.), while it considers the use of persistent unique identifiers (e.g., Digital Object Identifiers – DOIs) and versioning control (e.g., software and data products) whenever appropriate.
4. **Explain how the data will be documented:** this refers to the use of metadata that can allow data and files to be discovered, used, and properly cited. Metadata provide details on what, where, when, why and how the data were collected, processed and interpreted. Metadata also describe how data and files are named, physically structured, and stored; they also provide details about the experiments, analytical methods, and research context in which they were acquired. The metadata completeness and comprehensiveness can be directly associated with the data utility and longevity. A successful documentation strategy is based on 3 steps: (a) identify the type of information that must be captured so as to enable data discovery, access, interpretation, use and citation; (b) determine whether there is a community-based metadata schema or standard (i.e., preferred sets of metadata elements) that can be easily adopted. Often, a data-repository-specific content standard is recommended by the target data repository, archive, or domain professional organisation; (c) identify software tools that can be used to create and manage metadata content. A well-informed documentation strategy will only be defined for the IL TROVATORE metadata at a later stage of the project, i.e., when there is more information on the nature of the data (i.e., type, volume, format, etc.) to be collected in the project lifetime.
5. **Describe how data quality will be assured:** quality assurance and quality control (QA/QC) refer to the processes employed to measure, assess, and improve the quality of products (e.g., data, software). Since the 1 st version of the IL TROVATORE DMP is produced at an early stage of the project lifetime, where the optimum material testing methods (e.g., for joining materials, SiC/SiC composites, etc.) have not yet been identified, it is impossible to describe the QA/QC measures (e.g., instrument calibration and verification tests, statistical and visualisation approaches to detect errors, training activities, etc.) that will be employed in the project. The appropriate QA/QC measures will be described in later DMP versions, when the project beneficiaries have had the opportunity to make well-informed decisions on the most appropriate, activity-specific measures.
6. **Present a sound data storage and preservation strategy:** data storage and preservation are key to a good DMP, so as to ensure that data remain available for use by both their originators and others. The data storage and preservation strategy must consider the following 3 questions: (1) how long will the data be accessible, (2) how will the data be stored and protected over the duration of the project, and (3) how will the data be preserved and made available for future use. It has already been decided that the data will be first stored in the password-protected SharePoint area of the IL TROVATORE website ( _http://www.iltrovatore-h2020.eu/_ ) . The openly-accessible data sets, codes, etc., will be preserved in the Zenodo repository ( _https://zenodo.org/_ ) , which has been developed in the framework of the OpenAIRE project ( _https://www.openaire.eu/_ ) in order to share, curate and publish data produced by EC-funded research. (A minimal sketch of depositing data in Zenodo through its REST API is given after this list.)
7. **Define the project’s data policies:** a good DMP should include explicit policy statements on data management and data sharing. Such policy statements could touch upon issues, such as licensing or sharing arrangements of pre-existing (background) data as well as plans for retaining, licensing, sharing and embargoing data, codes, etc., produced in the project framework (foreground data). Developing a sound data policy that does not regard sensitive data with legal/ethical restrictions typically comprises 2 steps: (a) identify and describe relevant licensing and sharing arrangements by considering proprietary and intellectual property rights (IPR) laws as well as export control regulations on the research products (e.g., data, codes, software, etc.); and (b) explain how and when the data/research products will be made available (e.g., describe embargo/delay periods associated with publications or patenting, possible non-standard licenses and waivers, etc.). The IL TROVATORE IP management approach has been described in detail in the project Consortium Agreement (CA) document, which has been signed by the 30 beneficiaries in February 2018 [5].
The 1st version of the DMP also discusses publication embargo issues
associated with open access publishing in the framework of the project.
8. **Describe how the data will be disseminated:** the DMP must also address how and when the data products will be disseminated both within and beyond the Consortium boundaries. Dissemination can be ensured using both passive and active approaches. In the IL TROVATORE project, passive dissemination involves placing the data in the password-protected area (SharePoint) of the website, while active dissemination involves publishing the data in the Zenodo repository; moreover, open access will be ensured for all peer-reviewed scientific publications on the data produced within the framework of the project. Other means of active data dissemination may include (a) submitting the data (or data subsets) as appendices or supplementary materials in peer-reviewed Journal articles, and (b) publishing the data, metadata and relevant codes as “data papers” [6]. It is important to note that many Journals and data repositories provide guidelines on the appropriate citation of data by others, including the use of DOIs and recommended citation formats. Moreover, the data will be more usable and interpretable by all interested parties, provided they are disseminated using standard, non-proprietary approaches and when they are accompanied by metadata and associated codes used for data processing.
9. **Assign roles and responsibilities:** a good DMP describes with clarity the roles and responsibilities of every organisation involved in the project. These roles may include data collection, data entry, QA/QC measures, metadata creation and management, backup, data preparation and submission to an archive/repository, and systems administration. Time allocation and needed staff level of expertise must be carefully considered. A large-scale, multi-investigator, multidisciplinary project, such as IL TROVATORE, should consider a staff member dedicated to data management, probably within the premises of the project coordinator (SCK•CEN). As already mentioned, this 1 st DMP version is considered a ‘living document’ that will be updated at the end of each reporting period (m18, m36, m54), until a mature data management policy is established. It is also recommended to revisit the DMP frequently (e.g., on a quarterly basis) so as to reflect the evolution in policies and protocols. The DMP revision history will also be tracked, registering the dates when changes were made along with the person who made them. In order to describe the principles of data management after the end of the project, it is advisable to describe the policies and procedures of the data repository (i.e., Zenodo in the case of IL TROVATORE) where the data will be stored.
10. **Prepare a realistic budget:** data management is time-consuming and costly in terms of personnel, software, and hardware. At present, data management is entrusted to SCK•CEN, in association with the planned activities on website maintenance. However, it is clear that data management activities will grow as data production evolves in the project lifetime. Since IL TROVATORE brings together academic partners, who are interested in publishing, with industrial partners, who are keen on patenting and transferring the produced innovation to market, data management is an activity that cannot be taken lightly in this project. The budget/personnel that will be dedicated to data management at SCK•CEN is currently under consideration. The potential disagreement between entities that traditionally strive for publishing (i.e., research organisations, ROs) and entities that strive for patenting (i.e., industries) has already been discussed by the European IPR Helpdesk ( _https://www.iprhelpdesk.eu/_ ) in a dedicated report/fact sheet [7]; in the same fact sheet, the idea of bridging that commonly encountered ‘gap’ has been explored and alternative dissemination routes (e.g., defensive publication, open access, etc.) have been proposed.
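The sketch below illustrates such a deposit through the publicly documented
Zenodo REST API (create a deposition, attach a file, publish to mint a DOI);
the access token, metadata and file name are placeholders rather than project
data.

```python
import requests

TOKEN = {"access_token": "<personal-access-token>"}   # placeholder
API = "https://zenodo.org/api/deposit/depositions"

# 1. Create a new deposition with descriptive metadata (illustrative values).
metadata = {"metadata": {
    "title": "IL TROVATORE example dataset",
    "upload_type": "dataset",
    "description": "Illustrative deposit for the DMP.",
    "creators": [{"name": "Surname, Name", "affiliation": "SCK-CEN"}],
}}
dep = requests.post(API, params=TOKEN, json=metadata).json()

# 2. Attach the data file to the deposition.
with open("irradiation_data_v1.csv", "rb") as fh:
    requests.post(f"{API}/{dep['id']}/files", params=TOKEN,
                  data={"name": "irradiation_data_v1.csv"},
                  files={"file": fh})

# 3. Publish; Zenodo then mints a citable DOI for the record.
published = requests.post(f"{API}/{dep['id']}/actions/publish",
                          params=TOKEN).json()
print(published["doi"])
```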
Fig. 1 is a graphical representation of the relation between research and data
life cycles that provides reference to the 10 rules governing DMP development
[2].
**Figure 1.** Graphical representation of the link between research and data
life cycles; the red numbers refer to the 10 rules governing DMP development.
The research life cycle involves (1) formulation of ideas & hypotheses, (2)
data acquisition, (3) data analysis, visualisation & interpretation, and (4)
data publication or alternative dissemination (e.g., Conference presentations,
etc.). The data life cycle involves (1) DMP development, (2) discovery of
existing data, (3) collection & organisation of new data, (4) data quality
assurance, (5) data description (e.g., ascribing metadata), (6) use of data,
(7) data preservation, and (8) data sharing. Figure adapted from Ref. [2].
As already mentioned, the DMP is a ‘living document’ that aims to describe all
stages in the data life cycle, from data generation and collection to data
preservation and sharing, providing guidelines on data management both during
the project lifetime and after the project ends. The 1st version of the DMP
presented herein gives a tentative overview of important data management
aspects, such as the types and formats of data that will be
generated/collected during the project, the expected origin and reuse of data,
how to make data FAIR (findable, accessible, interoperable, and reusable),
etc. The next (updated) DMP version is scheduled for the end of the 1st
reporting period (m18) and will contain more information on appropriate
metadata & keywords, QA/QC measures, strategy to make data interoperable,
resources that will be dedicated to data management, etc.
# Data Summary
## Data collection/generation with respect to the project objectives
IL TROVATORE is a Research & Innovation Action (RIA) dedicated to the
improvement of nuclear energy safety on a global scale by validating select
accident-tolerant fuel (ATF) cladding material concepts in an industrially
relevant environment (i.e., via neutron irradiation in PWR-like water). The
first 2 years in the project lifetime are dedicated to the optimisation of the
candidate ATF cladding concepts (SiC/SiC composites, MAX phase- and oxide-
coated clads, GESA surface-alloyed clads and ODS-FeCrAl alloys) and the
assessment of their performance with respect to the cladding material property
requirements imposed by the ATF application. Years 3-4 of the project are
dedicated to the neutron irradiation of well-performing materials in PWR-like
water in the BR2 research reactor. The R&D activities will be concluded by the
post-irradiation examination (PIE) of neutron-irradiated ATF cladding
materials; it should be noted that the collection of PIE data will start much
before m48 (end of the total 2-year irradiation in BR2) by analysing the
early-sampled materials (1-3 dpa). The PERT chart of the IL TROVATORE workflow
is shown in Fig. 2, indicating that the successful implementation of this
project relies on the feedback loops (i.e., data exchange) between foreseen
activities.
**Figure 2.** PERT chart of the workflow in the IL TROVATORE project.
IL TROVATORE strives to deliver proof-of-concept cladding materials (TRL 5)
that are designed to address the stringent requirements of the ATF
application. In all phases of the project, application-driven material design
exchanges info/data with material production and material performance
assessment, in a conscious effort towards accelerated materials development
(AMD), as dictated by the dire global societal and industrial demand for safer
nuclear energy. A schematic representation of the AMD principle is given in
Fig. 3. Meeting the ambitious S&T objectives of IL TROVATORE means that a
continuous flow of high-quality data must be ensured between the different
stages in ATF cladding material development. For example, the development of
accurate models that can predict the in-service degradation of the candidate
ATF clads relies on reliable input data from processing (WP1-WP3), performance
evaluation (WP4-WP6) and validation (WP7-WP8). AMD also involves the
development of high-throughput screening tools, such as ion/proton irradiation
to recreate, in a fast and relatively inexpensive way, material-specific
defect microstructures similar to those induced by neutron irradiation. The
successful employment of ion/proton irradiation to emulate neutron-induced
defect microstructures in innovative nuclear materials, such as the ones
studied in IL TROVATORE, relies on a robust data management policy. In
accordance with Fig. 1, the first step involves the labour-intensive
collection and evaluation of existing ion/proton/neutron irradiation data for
all IL TROVATORE ATF cladding materials & constituents thereof; needless to
say, the critical evaluation of the quality and relevance of the existing
irradiation data requires the involvement of reviewers with background in
Radiation Materials Science. This has been already achieved for the majority
of the candidate ATF cladding materials in the 1st edition of the report
accompanying Milestone 4 – Mining of existing irradiation data [8]. The second
step implies the collection and organization of new ion/proton irradiation
data; the new data can be divided in two data subsets: (a) the data that are
needed to validate the material-specific ion/proton irradiation approach,
which will be used in the project to assess the radiation tolerance of the
candidate ATF clads prior to the BR2 neutron irradiation; and (b) the actual
data that will provide key information on the radiation tolerance of the new
materials and on the possible ways to improve it (feedback to WP1-WP3). The
robustness of the employed ion/proton irradiation approach is linked to the
quality of the ion/proton irradiation data (step 4 in Fig. 1); hence, it is
quite important to dedicate a part of the ion/proton irradiation campaigns in
the project to develop and validate the proposed irradiation approaches. Once
the quality of the irradiation data produced in the project is ensured, their
further description, use, preservation and sharing (steps 5-8 in Fig. 1) are
guaranteed. Apart from the value of the actual irradiation data, proposing a
robust approach of using ions & protons to assess the radiation tolerance of
innovative nuclear materials is a major step forward for the conservative
nuclear sector. The data subsets involved in establishing such an approach
will be openly published in the framework of Deliverable 6.3 – Best-practice
guidelines on the use of ion/proton irradiation to facilitate AMD of nuclear
materials (m54), as well as in an accompanying “gold open access” Journal
article. The data management involved in the establishment of a material-
specific, best-practice approach for ion/proton irradiation is a single
indication of the anticipated complexity of the mature (end-of-project) DMP of
the IL TROVATORE project.
**Figure 3.** The AMD principle in IL TROVATORE: material design, material
assessment, and the exploitation of innovative ATF material concepts form a
continuous feedback loop.
## Types and formats of collected data
The **research data** that are expected to be collected in the framework of
the IL TROVATORE project include: data in numerical and graphical format
(e.g., facts/numbers; graphs of the evolution of material properties with
processing/testing conditions); images (e.g., microstructural information on
all scales, from the microscale to the nanoscale); documents (e.g., reports
analysing/processing data subsets; peer-reviewed Journal articles with data in
appendix or supplement; patents; standards); presentations (e.g., Conference
presentations); etc.
Moreover, various **documents related with the project management/progress**
will be produced in the course of the project. These documents include:
Deliverables; Milestone reports; administrative documents (e.g., templates);
progress reports (e.g., BEBRs, IARs, PARs); minutes of progress meetings;
financial documents; presentations; etc. Documents associated with contractual
(Grant Agreement) obligations (e.g., Deliverables, Milestones, etc.) will be
uploaded to the ECAS portal and the password-protected area (SharePoint) of
the website ( _http://www.iltrovatore-h2020.eu/_ ) . These documents will
always be uploaded in PDF format.
**Dissemination-oriented documents** include: peer-reviewed Journal articles,
articles in Conference proceedings, educational/training notes/modules,
special Journal editions, etc.
**Communication-oriented documents/materials** comprise: brochures & leaflets,
video, website, etc.
The tentative list of datasets to be collected in the course of the IL
TROVATORE project is summarised in tabular form in the **Archive Plan** (see
**Annex 1** and IL TROVATORE Archive Plan_31032018.xlsx).
The 1st version of the DMP recommends the use of the following tentative
data formats:
**For numerical/tabular data:**
* delimited text (.txt) with characters not present in data used as delimiters
* widely-used formats: MS Excel (.xls/.xlsx), MS Access (.mdb/.accdb), dBase (.dbf), OpenDocument Spreadsheet (.ods)

**For textual data:**
* Rich Text Format (.rtf)
* PDF/UA, PDF/A
* plain text, ASCII (.txt)
* widely-used formats: MS Word (.doc/.docx)

**For images:**
* TIFF 6.0 uncompressed
* TIFF other versions (.tif, .tiff)
* JPEG (.jpeg, .jpg, .jp2), if the original is created in this format
* GIF (.gif)
* RAW image format (.raw)
* Photoshop files (.psd)
* BMP (.bmp)
* PNG (.png)
* Adobe Portable Document Format (PDF/A, PDF) (.pdf)
As explained in section 2.1 of this document, the success of the IL TROVATORE
project relies on the efficient flow of information between tasks (represented
by the feedback loops/interconnectivity arrows in Fig. 2). It is essential,
therefore, that the contributors to a specific task agree as soon as possible
on the type, format and typical size of data, the way to describe them
(metadata) and share them, as well as on any other aspect related to data
production and management (e.g., used standards, advanced test methods
developed as part of pre-normative research, etc.).
## Reuse and origin of existing data
The development of the ATF cladding material concepts considered in IL
TROVATORE does not start from zero (i.e., TRL 1 – basic principles observed).
All considered material concepts have already reached a satisfactory level of
technological maturity (TRL 4 – technology validated in the lab) or have, at
least, demonstrated an adequate manufacturing feasibility (TRL 3 –
experimental proof-of-concept). For truly innovative materials, such as the MAX
phase-coated clads, the processes required for their manufacturing (e.g.,
magnetron sputtering, cathodic arc deposition, cold spraying, etc.) have
already demonstrated a technological maturity and industrial scalability that
bodes well for reaching the targeted TRL 5 for these materials in the lifetime
of the project. Hence, a large pool of data is already available with respect
to the manufacturing and properties/performance of the envisaged materials &
constituents thereof within the IL TROVATORE Consortium. Some of these data
are proprietary (background IP) and protected by the technology drivers (WH,
MuroranIT, CEA/EDF, etc.); such data primarily pertain to material
manufacturing, and the progress of the project is not affected if they do not
become openly available to all members of the Consortium. In fact, such an
eventuality is undesirable, as it would jeopardize the commercial exploitation
of the considered innovation, undermining one of the main objectives of the
project, which is to help the involved industries reach the market as fast
as possible, thus enhancing nuclear energy safety worldwide. Data
pertaining to the performance of the state-of-the-art materials in the project
(e.g., data from cladding/coolant interaction tests, ion/proton/neutron
irradiation data on specific material concepts or their constituents, etc.)
will be openly shared between the data originators and the researchers
involved in the study of a particular aspect in material performance. These
data will be reused in order to achieve the S&T objectives of the project;
their possible sharing in an open context will be addressed on a case-by-case
basis. It is the aspiration of the project coordinator, Dr. K. Lambrinou, to
facilitate the publication of key articles based on the invaluable pool of
data that are available within the Consortium, reflecting the state-of-the-art
progress in ATF cladding material development for Gen-II/III LWRs. The
data/innovation produced in the framework of the project (i.e., foreground IP)
primarily belongs to the partners producing them. Again, open publication vs.
patenting/protecting of these data will be decided on a case-by-case basis. It
should be emphasised, however, that all Consortium members like the prospect
of maximising the reuse of data produced with European tax payers’ money by
making them FAIR (i.e., findable, accessible, interoperable and reusable), as
long as this does not jeopardise the industrial exploitability of produced
innovation. This _modus operandi_ is in agreement with the principle “as open
as possible, as closed as necessary”.
The provenance of the data that will be produced/used/reused in the framework
of IL TROVATORE – apart from relevant data that may be found in open
literature – is mainly expected to be identified amongst the members of the
international IL TROVATORE Consortium (30 partners: 28 from Europe, 1 from the
USA, and 1 from Japan). Moreover, relevant data are expected to be provided by
experts involved in the 3 expert advisory committees of the project, i.e., the
Scientific Advisory Committee (SAC), the End Users Group (EUG), and the
Standardization Advisory Committee (STC).
## Data utility
The data produced in the framework of the IL TROVATORE project will be
primarily used by the Consortium members, in order to achieve the project S&T
objectives. Therefore, the project data can be used by the **Gen-II/III LWR
community**, not only for scientific publications and test method/standard
development, but also for the core industrial purpose of expedited
commercialisation; with respect to the latter, the results of the BR2
irradiation in PWR-like water, and especially the collection of ‘high dpa’ data
(i.e., 6-7 dpa after 2 full years following the standard BR2 operational
schedule of 6 cycles/year; potentially extendable to 8-9 dpa for 8
cycles/year), are of high value for the technology drivers, so as to validate
their materials in an industrially relevant environment or identify points of
improvement for their preferred material concept(s).
The strong cross-cutting character of the IL TROVATORE R&D activities bodes
well for the use of the project data by various future-generation nuclear
systems, such as **Gen-IV systems** (e.g., LFRs, GFRs) and **fusion** .
Moreover, the investigated material solutions and the manufacturing processes
needed for application-driven performance optimisation are expected to produce
data that could potentially be used in **concentrated solar power** (CSP),
**aerospace** and other industrial sectors requiring the use of materials in
harsh/extreme environments.
Last but not least, the **educational & training activities** (i.e.,
Workshops, Summer School) planned in the course of the project are expected to
contribute to the formation of skilled scientific/technical personnel likely
to be needed to implement the new cutting-edge technologies in
diverse industrial sectors. The overall project activities are expected to
forge strong ties between academia and industries in the common quest towards
safer nuclear energy, while the produced know-how is expected to increase the
competitiveness of European industries by providing skilled personnel.
# FAIR Data
The H2020 Open Research Data (ORD) Pilot encourages the beneficiaries of H2020
projects, such as IL TROVATORE, to make their research data **findable** ,
**accessible** , **interoperable** and **reusable** ( **FAIR** ) [9]. Working
towards an enhanced reuse of scholarly data is desired by various
stakeholders, such as academia, funding agencies, industries and scholarly
publishers, and a concise set of guidelines known as the FAIR Guiding
Principles may be used as a guideline towards enhanced data reusability.
The FAIR Guiding Principles may be summarised, according to Wilkinson et al.
[10], as follows:
**To be Findable:**

F1. (meta)data are assigned a globally unique and persistent identifier

F2. data are described with rich metadata (defined by R1 below)

F3. metadata clearly and explicitly include the identifier of the data it describes

F4. (meta)data are registered or indexed in a searchable resource

**To be Accessible:**

A1. (meta)data are retrievable by their identifier using a standardised communications protocol

A1.1. the protocol is open, free, and universally implementable

A1.2. the protocol allows for an authentication and authorization procedure, where necessary

A2. metadata are accessible, even when the data are no longer available

**To be Interoperable:**

I1. (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation

I2. (meta)data use vocabularies that follow FAIR principles

I3. (meta)data include qualified references to other (meta)data

**To be Reusable:**

R1. (meta)data are richly described with a plurality of accurate and relevant attributes

R1.1. (meta)data are released with a clear and accessible data usage license

R1.2. (meta)data are associated with detailed provenance

R1.3. (meta)data meet domain-relevant community standards
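To make the principles above more tangible, the following is a purely illustrative sketch of a metadata record for a single hypothetical project dataset, annotated with the FAIR facet each field could express; all identifiers, field names and values are invented for illustration and are not prescribed by this DMP.

```python
# Illustrative only: a hypothetical metadata record for one project dataset,
# annotated with the FAIR principles each field addresses.
dataset_metadata = {
    # F1: globally unique, persistent identifier (placeholder DOI)
    "identifier": "10.5281/zenodo.0000000",
    # F2/R1: rich, descriptive attributes
    "title": "Example oxidation test data for a coated cladding sample",
    "creators": ["Example Author (Example Institute)"],
    "keywords": ["ATF cladding", "coating", "oxidation test"],
    # F3: the metadata explicitly names the data it describes
    "describes_file": "oxidation_test_data.csv",
    # A1: retrievable by its identifier over a standard, open protocol (HTTPS)
    "landing_page": "https://doi.org/10.5281/zenodo.0000000",
    # I1/I2: a shared, formal vocabulary (e.g. the W3C DCAT namespace)
    "conforms_to": "http://www.w3.org/ns/dcat",
    # R1.1: clear and accessible usage licence
    "license": "https://creativecommons.org/licenses/by/4.0/",
    # R1.2: detailed provenance
    "provenance": "Produced in DM2/WP5 coolant/cladding interaction tests",
}
```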
The FAIR Guiding Principles presented herein are the elaborated version of the
principles defined at the meeting held by the FORCE11 community [11] in
Leiden, The Netherlands, in 2014. FORCE11, i.e., The Future of Research
Communications and e-Scholarship, is a community – initiated in 2011 – of
scholars, librarians, archivists, publishers and research funders that
originally arose to help facilitate the change towards improved knowledge
creation and sharing [10].
This section describes the basic approach that will be adopted in the IL
TROVATORE project in order to make the produced data FAIR, i.e., findable,
accessible, interoperable and reusable. This approach is tentative and generic
and does not suggest specific technologies, standards, or implementation
solutions. The suggested approach in this 1st version of the DMP will be
updated, based on the return of experience, for the first time at the end of
the 1st Reporting Period, i.e., in month 18.
## Making data findable, including provisions for metadata
The success of IL TROVATORE depends on the smooth exchange and retrieval of
information/data. Updated information and peer-reviewed (high quality) data
are uploaded to the password-protected (SharePoint) area of the website, thus
becoming available to the persons involved in the project in a secure manner.
As explained in Deliverable D13.1 – Project website and logo [12], only WP and
Task leaders have read & write access rights to the SharePoint, so as to
ensure that only high-quality, peer-reviewed data are uploaded to the ‘members
only’ website section. As mentioned in section 2.2 of this document, an
_Archive Plan_ was created to manage, classify and archive information/data
produced in electronic format during the IL TROVATORE project. The _Archive
Plan_ may be seen in Annex 1, and is also submitted together with this
document as supplementary material (Excel file: IL TROVATORE Archive
Plan_31032018.xlsx). The _Archive Plan_ provides an inventory of type and
subtype of information (document/data) produced/used in the project; for each
information type/subtype, the following is specified:
* parent domain (domain in which the information was produced)
* dissemination level (public, restricted, confidential)
* storing place
* archiving/retention period
* action after the retention period
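For illustration only, the inventory fields listed above could be represented programmatically as follows; this is a minimal sketch, and all values in the example entry are hypothetical rather than taken from the actual _Archive Plan_.

```python
from dataclasses import dataclass

# Minimal sketch of one Archive Plan inventory entry, mirroring the fields above.
@dataclass
class ArchivePlanEntry:
    info_type: str               # type of information (document/data)
    info_subtype: str            # subtype of information
    parent_domain: str           # DM1-DM4: domain in which the information was produced
    dissemination_level: str     # "PU" (public), "CO" (confidential) or "R" (restricted)
    storing_place: str           # e.g. SharePoint, Zenodo
    retention_years: int         # archiving/retention period
    action_after_retention: str  # e.g. "delete" or "keep (scientific merit)"

# Hypothetical example entry:
example = ArchivePlanEntry("document", "Deliverable", "DM4", "PU",
                           "SharePoint", 10, "keep (scientific merit)")
```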
### Parent domain
In IL TROVATORE, all datasets will originate from one of the 4 project domains
(DMs):

Domain 1 (DM1) – Processing optimization and joining of ATF cladding materials

WP1 – Processing of engineered bulk materials
WP2 – Coating deposition & surface modification of clads & joints
WP3 – Joining of ATF clads & testing of joints/welds

Domain 2 (DM2) – Evaluation and pre-screening of ATF clads & joints

WP4 – Advanced characterization & testing of ATF clads & joints
WP5 – Coolant/cladding/fuel interaction tests
WP6 – Ion/proton irradiation & PIE of ATF clads & joints

Domain 3 (DM3) – In-service validation of ATF clads & joints

WP7 – Neutron irradiation of ATF clads & joints
WP8 – PIE of neutron-irradiated ATF clads & joints
WP9 – Predictive modelling activities

Domain 4 (DM4) – Access to end users

WP10 – Standardisation
WP11 – Exploitation of results
WP12 – Dissemination & Communication
### Dissemination level
Participation in the ORD Pilot should not jeopardize the commercialisation
potential of the project results and must not harm the interests of the
industrial partners that are prepared to invest in bringing the achieved
innovation to market. Therefore, the **datasets** that are granted permission
to become openly accessible will be selected based on the principle “as open
as possible, as closed as necessary”, taking into account the project policy
for protection of foreground IPR, as defined in the CA [5]. Datasets that
cannot be disclosed in view of activities leading to patenting or commercial
exploitation are assigned a **Restricted (R)** dissemination level (DL); this
means that these data are open only to select partners (i.e., the IP owners
who want to proceed with the commercial valorisation of the particular IP).
Datasets produced in the framework of the project, which cannot become openly
accessible but are available to the whole Consortium, are assigned a
**Confidential (CO)** dissemination level. These datasets are typically still
being analysed, processed and/or interpreted prior to publication. Datasets
that are published, either as appendix or supplement of scientific
publications or are placed on an openly accessible data repository (e.g.,
Zenodo) in support of publications produced within the project, are assigned a
**Public (PU)** dissemination level.
**Documents** associated with _contractual obligations_ (e.g., Deliverables,
Milestones, etc.) are assigned either a PU dissemination level (most
Deliverables) or a CO dissemination level (select Deliverables and all
Milestone reports). All _publications_ (Journal articles, Conference
Proceeding articles, etc.) are assigned a PU dissemination level. All _other
documents_ (material associated with education/training, standards and pre-
normative test methods/procedures, etc.) are assigned a dissemination level on
a case-by-case basis.
### Naming convention & metadata
The _Archive Plan_ introduces a tentative nomenclature so as to be able to
access efficiently the datasets (documents/data/other) produced in the project.
This nomenclature will be tested in the next phase of the project and, if
needed, will be updated at the end of the 1st reporting period (m18), based
on the return of experience. The proposed nomenclature takes into account the
following aspects:

* Type of dataset (document/data/other):
  * Deliverable
  * Milestone report
  * Report (EEBR, IAR, PAR, etc.)
  * Patent
  * Standard
  * Journal/Conference Proceedings article
  * Educational/training material for Workshops, Summer School, etc.
  * Leaflets/Brochures
  * Video
* Status of the dataset:
  * **D**raft (as-produced by author(s))
  * **R**eviewed by peers (usually within the institute(s) producing the dataset)
  * **A**pproved (by authors, reviewers, project coordinator)
  * **F**inal (published/submitted to the EC)
* Parent domain: **DM**1-4 (see also section 3.1.1)
* Dissemination level (see also section 3.1.2):
  * **PU** – Public (openly published, openly accessible)
  * **CO** – Confidential (accessible only by the Consortium members)
  * **R** – Restricted (accessible only by select Consortium members)
* Version: standard numbering (v1, v2, v3,…)
Most dataset types (documents, reports, Deliverables, etc.) are assigned a
naming convention that provides information on all above aspects. Particular
datasets will be assigned a DOI (digital object identifier) or a URL (uniform
resource locator). A DOI is a unique alphanumeric string assigned by a
registration agency (i.e., the International DOI Foundation) to identify
content and provide a persistent link to its location on the internet. For
example, the publisher assigns a DOI when a Journal article is published and
made available electronically [13]. A URL provides a way to locate a resource
on the web and contains the name of the protocol to be used to access the
resource, as well as the resource name. The first part of a URL identifies
which protocol to use, while the second part identifies the IP address or
domain name where the resource is located. For example, URL protocols include HTTP
(hypertext transfer protocol) and HTTPS (HTTP secure) for web resources [14].
The tentative naming convention for each type of dataset may be found in the
_Archive Plan_ (Excel file: IL TROVATORE Archive Plan_31032018.xlsx). Two
examples are provided herein for Deliverables and Milestone reports; the
version number (v#) drops out when the document reaches its final status and
is ready to be submitted to the EC (i.e., uploaded to the ECAS portal):
_Deliverable_ : ILTROVATORE_DM#_D##.#-Title_DL_v#
_Milestone report_ : ILTROVATORE_DM#_WP##_MS##-Title_CO_v#
In the above: (a) the WP number (WP##) is not needed for Deliverables, as this
is implied in the name of the Deliverable (i.e., D12.2 – Data Management Plan
is the 2nd Deliverable in WP12); this is not the case for Milestone reports
(MS1-MS15 relate to all WPs, and the MS number does not correspond to the WP
with the same number; MS12 is a special case, because it refers to WP1-WP6,
thus requiring a special naming convention, i.e.,
ILTROVATORE_DM1-2_W1-6_MS12-Title_CO_v#); and (b) DL is the dissemination
level (PU or CO for Deliverables, CO for Milestone reports).
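As a minimal sketch of how the convention could be applied consistently, the hypothetical helper below assembles a Deliverable file name from the components described above; it is not a tool mandated by the DMP, and the function name and example values are assumptions.

```python
def deliverable_name(domain, number, title, dl, version=None):
    """Assemble a Deliverable file name following the tentative convention
    ILTROVATORE_DM#_D##.#-Title_DL_v#. The version suffix is dropped (pass
    version=None) once the document reaches its final status."""
    name = f"ILTROVATORE_DM{domain}_D{number}-{title}_{dl}"
    if version is not None:
        name += f"_v{version}"
    return name

# e.g. a first draft of Deliverable D12.2 (Domain 4, public dissemination level):
print(deliverable_name(4, "12.2", "Data-Management-Plan", "PU", version=1))
# -> ILTROVATORE_DM4_D12.2-Data-Management-Plan_PU_v1
```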
It is too early in the project lifetime to ascribe **metadata** to each type
of dataset mentioned in the _Archive Plan_. It is advisable to attempt
doing so towards the end of the 1st reporting period, when the first
‘critical mass’ of data has been produced and a trend has been established
with respect to data type, size, format, etc. At that time, it is also
advisable to define the first set of **keywords** to ensure that the data
produced in the project will be optimally reused. Finding the right keywords
is indeed essential in retrieving the right type of information/data (step 2 –
discover existing data, in Fig. 1) needed for a particular project activity.
This became apparent during the preparation of the report accompanying
Milestone 4 – Mining of existing irradiation data, which also describes the
data mining methodology (Section 1.3 – Strategy of literature survey, in that
report) used to achieve that particular Milestone [8]. It is, therefore, very
important to define dataset-specific keywords early in the project, so as to
ensure maximum data reuse as well as optimal citation of the data provenance.
### Storage & retention period
Datasets (e.g., documents, data, graphs, codes, simulations, etc.) produced in
the framework of the project will be stored in the SharePoint platform of the
website ( _http://www.iltrovatore-h2020.eu/_ ) , for a period of 10 years.
As already explained in Deliverable 13.1 – Project Website and Logo [12], the
SharePoint platform is a secure environment accessible via password by the
persons contributing to the project. This platform is used to share
information/data generated in the project between the Consortium members as
well as with experts involved in the 3 expert advisory committees (STC, EUG,
STC) of the project (all these experts have signed non-disclosure agreements
(NDAs) with SCK•CEN); the latter surely applies to the members of the EEAB
(External Expert Advisory Board) that will be called to peer-review the
project achievements at the mid-term and end of the project (see Deliverable
D13.2 – Mid-term review report of the EEAB & Deliverable D13.3 – Final review
report of the EEAB in Annex 1 of the IL TROVATORE Grant Agreement [15]). The
structure on the SharePoint follows the project DMP _Archive Plan_, in
order to make data storage consistent and user-friendly. The first level of
the SharePoint platform is structured as follows:
* Calendar (meetings & events)
* Deliverables, Milestones and Reports
* Project documents
* Meetings
* Work packages – Documents & Data
* Practical information (guidelines, templates)
* Education & Training
* External Communication
Openly-accessible datasets, codes, publications (after the embargo period for
‘green’ open access articles), etc., will be stored in the Zenodo repository.
The _Archive Plan_ also distinguishes between documents that can
be eliminated (deleted) after the retention period and those that can be
stored for a longer time due to their merit (scientific or other).
## Making data openly accessible
IL TROVATORE fully supports the EC initiative towards the open access & open
data movement in Europe. This is not only reflected in the IL TROVATORE
participation to the H2020 ORD Pilot (‘open access to research data’), but
also in the commitment to ensure **open access** to all peer-reviewed
scientific publications stemming from the results of the project (‘open access
to peer-reviewed scientific research articles’). According to the Participant
Portal H2020 Online Manual, “open access can be defined as the practice of
providing online access to scientific information that is free of charge to
the reader” [3]. Both **self-archiving ('green' open access)** and **open
access publishing ('gold' open access)** options will be explored in IL
TROVATORE, according to the following basic guidelines:
* In **self-archiving ('green' open access)** , also called ‘parallel publishing’, an embargo period may apply. The maximum permitted embargo period is **6 months** from the initial publication date.
* In **open access publishing ('gold' open access)** , the publications become immediately openly available online.
‘Gold’ open access is typically covered by the authors by paying an article
processing charge (APC) to the Journal. The Journal may either be “gold”
(i.e., all papers have a ‘gold’ open access status) or “hybrid” (i.e.,
subscription Journals with the option to pay for individual papers). In IL
TROVATORE, all partners have reserved a part of their budget to ensure open
access to peer-reviewed scientific publications containing project results.
Even though open access fees are eligible costs in H2020 projects, such as IL
TROVATORE, each partner has a limited budget, which cannot cover ‘gold’ open
access fees for all Journal articles. A reasonable approach to this issue
could be the following:
* Articles that are expected to have high impact on the scientific community should be published in ‘gold’ open access. Indicative APCs are as follows: _Nature Communications_ – 3700 EUR; _Advanced Materials_ – 4375 EUR; American Chemical Society (ACS) Journals (e.g., _ACS Nano_ , _Journal American Chemical Society_ , etc.) – 1500-2000 USD for ACS members; etc.
* Regular articles (i.e., with average impact on the scientific community) could be primarily published in ‘green’ open access with select articles in ‘gold’ open access.
When preparing to submit an article to a peer-reviewed Journal in one of the
open access options, one should take the following facts into account:
1\. Some institutions/countries have agreements with publishers, so that the
‘gold’ open access fees are paid by the institution or the country’s library
organization and not by the authors. This is an option that merits
consideration whenever possible. Relevant information is provided below:
* **_Wiley_ : ** _https://authorservices.wiley.com/author-resources/Journal-Authors/licensing-openaccess/open-access/institutional-funder-payments.html_
* **_Institute of Physics (IOP)_ : ** IOP has agreements with select Universities and/or countries (e.g., Sweden, Norway, Austria, UK) for free ‘gold’ open access to its _subscription Journals_ : _https://publishingsupport.iopscience.iop.org/questions/paying-for-open-access/_
* **_Royal Society of Chemistry_ : ** Authors from institutions with full Journal subscriptions may claim vouchers (NB: limited number, on a first-come, first-served basis) for free ‘gold’ open access to their _subscription Journals_ . It is recommended to contact your library on this item.
* **_Springer_ : ** Springer has agreements with some Universities and/or countries (e.g., Sweden) for free ‘gold’ open access to their _subscription Journals_ .
2. Publishing in ‘gold’ open access Journals with low-to-moderate APCs: fully open access publishers (PLOS, MDPI) and fully open access Journals from “traditional” publishers should be considered. Examples are MDPI (Multidisciplinary Digital Publishing Institute) Journals like _Materials_ , _Coatings_ , etc. (850-1500 CHF), _RSC Advances_ (500 GBP during 2018, 750 GBP afterwards), _Materials Research Letters_ (500 USD), and mega-Journals, such as _ACS Omega_ (750 USD), _AIP Advances_ (1350 USD), _Scientific Reports_ (1370 EUR), etc.
3. Publishing in ‘green’ open access: authors could post/upload the accepted manuscript version (i.e., final version after review, but before any typesetting/production) to a repository such as Zenodo. Making a scientific publication openly accessible may entail an embargo period. The maximum permitted embargo period is 6 months from the initial date of publication. Some publishers require longer embargo periods (e.g., 12 or 24 months) for ‘green’ open access; however, publishing under those terms is not permitted. Useful information on Journal-specific embargo periods is given below:
* **_Nature_ ** (subscription) Journals allow parallel publishing with 6 months embargo.
* **_American Institute of Physics (AIP)_ ** Journals (e.g., _Applied Physics Letters_ , _Journal of Applied Physics_ ) allow parallel publishing without embargo.
* **_American Physical Society (APS)_ ** (e.g., _Physical Review_ Journals) allows parallel publishing of the published Journal version (not only the accepted manuscript) without embargo.
* **_Elsevier_ ** allows posting a preprint of the manuscript on the preprint server ArXiv, and updating the preprint with the accepted manuscript without embargo. This is sufficient to meet the open access requirements, even though it only applies to ArXiv. For other repositories, longer embargo periods apply (often up to 24 months).
All peer-reviewed Journal articles containing data/findings produced in the
project lifetime will be deposited on the **Zenodo repository** (
_https://zenodo.org/_ ) . As already mentioned, Zenodo has been developed in
the framework of the OpenAIRE project ( _https://www.openaire.eu/_ ) , which
was commissioned by the EC to support the nascent Open Data policy by
providing a catch-all repository for EC-funded research. CERN, an OpenAIRE
partner and pioneer in open source, open access and open data, made a major
contribution to the launching of Zenodo in May 2013 [16]. The name Zenodo is
derived from Zenodotus, the first librarian of the Ancient Library of
Alexandria and father of the first recorded use of metadata, a landmark in
library history [16].
Apart from the peer-reviewed Journal articles, **openly-accessible datasets,
codes, etc.** , will also be preserved in the Zenodo repository. These
datasets are firstly the datasets used to produce the peer-reviewed Journal
articles deposited in Zenodo, especially in case the used datasets have not
been published as appendices or supplementary material(s) in the original
articles. Other datasets, not necessarily associated with scientific articles,
will also become openly accessible in Zenodo during the project lifetime.
Deciding which datasets will become openly accessible, and the timeframe in
which this will happen, will occur on a case-by-case basis. These datasets should
not jeopardise either the future industrial exploitation of the project
findings (patenting, licensing, etc.) or their prospect of getting published
in peer-reviewed Journals. As already mentioned, IL TROVATORE is prepared to
support the EC initiative towards open access and open data movement in
Europe, while simultaneously respecting the principle “as open as possible, as
closed as necessary”.
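For orientation, the sketch below shows how a dataset deposit could be scripted against Zenodo's REST deposit API (create a deposition, upload a file, attach metadata, publish); the access token, file name and metadata values are placeholders, and this is an illustration rather than a prescribed project workflow.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "YOUR-ZENODO-ACCESS-TOKEN"  # placeholder personal access token

# 1) Create an empty deposition
r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
r.raise_for_status()
dep_id = r.json()["id"]

# 2) Upload the dataset file to the deposition
with open("example_dataset.csv", "rb") as fp:
    requests.post(f"{ZENODO_API}/{dep_id}/files",
                  params={"access_token": TOKEN},
                  data={"name": "example_dataset.csv"},
                  files={"file": fp}).raise_for_status()

# 3) Attach minimal descriptive metadata
metadata = {"metadata": {
    "title": "Example IL TROVATORE dataset",
    "upload_type": "dataset",
    "description": "Hypothetical dataset supporting a journal article.",
    "creators": [{"name": "Example, Author", "affiliation": "Example Institute"}],
}}
requests.put(f"{ZENODO_API}/{dep_id}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()

# 4) Publish the record; Zenodo then mints a DOI for it
requests.post(f"{ZENODO_API}/{dep_id}/actions/publish",
              params={"access_token": TOKEN}).raise_for_status()
```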
## Making data interoperable
The success of the IL TROVATORE project relies on the continuous exchange of
information and data between partners (Fig. 2), which implies that the
produced datasets must be **interoperable** . Datasets belonging to members of
the Consortium (i.e., ATF technology drivers; experts in different project
aspects, e.g., specific manufacturing processes and materials or constituents
thereof; etc.) at the project outset (background IP) are reused, mostly as
input of follow-up R&D activities. Datasets will be systematically exchanged
between researchers and institutions/organisations in the international
project setting (Europe, USA, Japan) in order to achieve its S&T objectives.
The SharePoint of the project website has been selected as the platform for
data exchange between partners since the early stages of the project lifetime;
datasets that are ready to become openly accessible will be uploaded to the
Zenodo repository (see section 3.2). **Metadata** will be ascribed to the
produced datasets later in the project lifetime, once the project data
landscape has become more familiar. Moreover, existing and under development
**standards** related to the project activities have already been identified
and are listed in Deliverable 10.1 – Standardisation roadmap, together with
standardised **vocabularies** for specific types of materials [17]. The
diversity of datasets (i.e., types, formats, provenance) might be accompanied
by an unavoidable level of incompatibility that could hinder full data
interoperability at the initial stages of the project; however, with careful
alignment between the data originators (e.g., opting for similar/compatible
data formats, open/widely used software applications, etc.) and elaboration of
a good strategy (vocabularies, standards, methodologies), the data are
expected to become fully interoperable within a short timeframe. Some tentative
recommendations towards data interoperability are given in Table 1.
**Table 1.** Tentative recommendations to make data interoperable in IL
TROVATORE.
<table>
<tr>
<th>
**No.**
</th>
<th>
**Recommendation to make data interoperable**
</th> </tr>
<tr>
<td>
1
</td>
<td>
When specific datasets will be the input of subsequent
tests/analysis/processing performed by others, expectations in terms of data
type & format, etc., should be clearly defined.
</td> </tr>
<tr>
<td>
2
</td>
<td>
The _Archive Plan_ concerning the nomenclature, dissemination level,
storing and archiving of each dataset type and sub-type should be followed or
collectively updated/improved.
</td> </tr>
<tr>
<td>
3
</td>
<td>
Documents that constitute contractual obligations (Deliverables, Milestone
reports, etc.) must be reviewed (in the original file format) and their final
version (in PDF format) must be uploaded by the project coordinator to the
SharePoint platform and the ECAS portal.
</td> </tr>
<tr>
<td>
4
</td>
<td>
Use the same data and metadata vocabularies to the extent possible; if possible,
use standard vocabularies for all data types in a dataset so as to allow
interdisciplinary interoperability; if needed, jointly generate new
vocabularies and use them during the project.
</td> </tr>
<tr>
<td>
5
</td>
<td>
Maximise data reuse within the project premises by uploading to the SharePoint
platform peer-reviewed, high-quality datasets as soon as possible. Maximise
data reuse beyond the project community by making them openly accessible,
i.e., by storing them in the Zenodo repository. Whenever possible, assign a
DOI to the openly accessible datasets.
</td> </tr>
<tr>
<td>
6
</td>
<td>
In scientific publications, strive to publish the datasets used to produce
the publications as appendices or supplements; if this is not possible,
upload the data together with the scientific publication to the Zenodo
repository according to the guidelines given in section 3.2. Also describe the
used standards, lab methods, processes, codes and software.
</td> </tr>
<tr>
<td>
7
</td>
<td>
The deposition of the openly accessible datasets in other repositories, such
as institutional repositories, is allowed and/or encouraged so as to maximise
data reuse.
</td> </tr> </table>
## Increase data re-use (through clarifying licenses)
The data produced in the framework of the IL TROVATORE project could
potentially benefit/interest various industrial sectors beyond the Gen-II/III
LWR community (e.g., Gen-IV LFRs/GFRs, fusion, CSP, aerospace, etc.). Hence,
the widest reuse of the project data could indeed be very beneficial for the
society, in terms of fostering diverse entrepreneurial initiatives on European
ground and beyond. It is quite early in the project lifetime, however, to
decide the data licensing approach so as to achieve the widest possible reuse,
especially in view of the fact that the value and possible impact of the data
cannot be estimated before the data have actually been produced. It is also
premature to define the timeframe in which the data will be made available for
reuse beyond the project community. Within the project community, the data are
either first protected (e.g., patented) and then made available for use by the
partners that need them as input via the SharePoint platform or they become
directly available on the SharePoint platform (once peer-reviewed with respect
to their quality). As already mentioned in section 3.1.4, the intended
archiving period for the project data is 10 years or even longer for specific
datasets; the openly accessible data in the project data pool will be
available for reuse for the same period of time.
# Allocation of Resources
As mentioned in section 3.2, all partners have reserved a part of their budget
to ensure open access to peer-reviewed scientific publications containing
project results/data by paying the APCs required by certain Journals. The
recommended approach for open access publications in IL TROVATORE has also
been described in section 3.2. Apart from the resources associated with open
access publishing, the resources (budget/personnel) that will be needed for
data management in a multi-investigator, multidisciplinary project such
as IL TROVATORE might not be negligible. This is a point of current
consideration at SCK•CEN, which is entrusted with the maintenance of the
project website and the management of the SharePoint, i.e., the platform that
will be used throughout the project lifetime for data exchange between
partners. The decision of the SCK•CEN management on this item will be
described in the next DMP version.
It is important to emphasise that each partner is responsible for the data
quality (including the data interoperability) generated and provided to other
partners for further R&D activities during the project. Moreover, each partner
is also expected to contribute to the overarching effort to make the project
data FAIR by doing so for the datasets that originate from that particular
partner. Last but not least, the Dissemination Manager (Prof. P. Eklund, LiU)
will oversee all activities associated with knowledge management &
protection, such as the contribution to the ORD Pilot and the continuous
update of the DMP. For this purpose, the Dissemination Manager will constantly
be in consultation with the Executive Board (EB) and the project coordinator.
Moreover, the Dissemination Manager will support the Consortium members with
the registration of publications in the Zenodo repository.
# Data Security
Data security relies on the infrastructural security of each Consortium member
and SCK•CEN, which is responsible for the construction and maintenance of the
project website ( _http://www.iltrovatore-h2020.eu/_ ). As explained in
D13.1 – Project Website and Logo [12], the website is divided into 2 areas:
* **Public website area (SiteCore):** the public (openly accessible) part of the website provides non-sensitive information (i.e., project scope & participants, generic description of WPs, events, useful links, list of publications, contact information).
* **Secure website area (SharePoint):** the SharePoint is the password-protected part of the website that is only accessible by members of the Consortium. All contributors have been provided with a username & password and have _read access rights_ to the SharePoint. Only WP and Task leaders have _read & write access rights_ to the SharePoint. Passwords can only be created by the ICT Department of SCK•CEN. Therefore, password requests should be sent to the IL TROVATORE project office (e-mail address: [email protected]).
Publications and openly accessible datasets will be stored in the Zenodo
repository, following the procedure described in section 3.2.
# Ethical Aspects
No ethical/legal issues with an impact on data sharing have so far been
identified. As confirmed in
D14.1 – GEN-NEC-Requirement No. 1 [18], both non-European partners (Drexel
University, DU, and Kyoto University, KU) will ensure that all ethical
standards and guidelines of H2020 will be rigorously applied in the course of
the IL TROVATORE project. The fact that both DU and KU are willing to respect
the H2020 ethical standards and guidelines implies that both partners are
willing (in fact, they are already doing so) to share data related to the
project R&D activities and to support the EC initiative for open access to
publications & open access to research data (ORD Pilot).
# Executive Summary
This document describes the Data Management Plan (DMP) for the LOGISTAR
project. The DMP provides an analysis of the main elements of the data
management policy that will be used throughout the project by the project
partners, with regard to all the datasets that will be generated, harvested
and/or used by the project. Documentation of this plan is a precursor to the
trials and pilot activities. The format of the plan follows the Horizon 2020
template [1].
_In more detail this document explains and describes:_
1. the LOGISTAR data identification and collection approach,
2. the LOGISTAR overall dataset structure, including an overview of identified data sources and datasets
3. the LOGISTAR overall data management plan and policy including
1. the policies for dataset reference and naming,
2. the dataset description (metadata scheme),
3. relevant standards and metadata,
4. guidelines for (secure) data sharing and
5. information security guidelines,
4. the approach for data archiving and preservation, and finally
5. Ethical aspects in regard to data management in the LOGISTAR project.
As data management is an ongoing process throughout the duration of the LOGISTAR
project, and data management in the project takes place in a dynamic
environment, this document is seen as a living document; this means
that the document will be developed and maintained continuously over time.
# 1\. Data Collection Procedure
This Data Management Plan (DMP) has been prepared by taking into account the
template of the “Guidelines on Data Management in Horizon 2020” 1 .
Elaboration of the DMP will allow LOGISTAR partners to address all issues
related to the management of data collected during the project, as well as
ethics. The DMP is planned as a deliverable for M6. However, it is a living
document which will be updated throughout the project based on the project progress.
The consortium will comply with the requirements of Regulation (EU) 2016/679
of the European Parliament and of the Council of 27 April 2016 (General Data
Protection Regulation) on the protection of individuals with regard to the
processing of personal data and on the free movement of such data.
Type of data, storage, recruitment process, confidentiality, ownership,
management of intellectual property and access: The Grant Agreement and the
Consortium Agreement are to be referred to for these aspects. The procedures
that will be implemented for data collection, storage, and access, sharing
policies, protection, retention and destruction will be according to the
requirements of the national legislation of each partner and in line with the
EU standards.
The Steering Committee of the project will also ensure that EU standards are
followed. Informed consent will be provided to all participants in the project
trials and pilots.
All collection of sensitive data will be done with full consideration of data
protection principles and industry standards, and will satisfy data protection
requirements in accordance with EU and non-EU directives and national
implementations thereof. Due to the nature of the services, it is NOT likely
that personal data will be captured and processed. Should sensitive/personal
data be involved, collection and processing will be done according to
the applicable data protection provisions, such as Regulation (EU) 2016/679 on
the protection of individuals with regard to the processing of personal data
and on the free movement of such data including article 29 working group
8/2010 opinion and Directive 2002/58 on Privacy and Electronic Communications.
For this reason, in case of personal data collection and processing, only
anonymous user data will be collected and securely stored. Anonymous
identification of user-provided information will be leveraged only to confirm
the authenticity of users interacting with the system and to prevent malicious
behaviour. No need to personally identify users through their information is
envisioned, nor to include sensitive data. The collected data will be treated
anonymously and, additionally, a set of various measures will be put in place in
order to protect user privacy and data security, by embedding privacy-by-design
principles from the early technical stages of the project. Where
needed, a prompt Privacy Impact Assessment (PIA) exercise will be performed.
The type, quality and quantity of accessed data will be regulated, by
designing and implementing adequate PIR (Privacy Information Retrieval) and
PPQ (Privacy Preserving Query) mechanisms.
By referring to the proposed work plan, it is worth noticing that all such
measures will be considered at all levels of the technical project
development, starting from WP1 (Market research: interviews, user needs
functional requirements analysis and network data collection), from WP2 where
data gathering and harmonization will be done (overall data storage and data
processing) up to WP3, WP4 and WP5 where data will be used to build different
algorithms and services, while new required and relevant technologies will be
developed as part of the project.
LOGISTAR is already aware of the following existing technological measures
necessary to minimize the associated privacy risks, such as:
* use of secure data storage, encrypted transfer of data over the capturing channels, controlled and auditable access for different classes of data;
* obscuring/removing user identities at the source of field trial data generation to prevent direct user tracing (see the pseudonymisation sketch after this list);
* obscuring personal location data through indirect or delayed routing to prevent individual localization as much as possible and limit user tracking through correlation of depersonalized data based on its location.
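As an illustrative sketch of the first two measures, and not a specification of the LOGISTAR implementation, user identities can be obscured at the capture point by replacing them with stable keyed pseudonyms; the key, record layout and truncation length below are assumptions.

```python
import hashlib
import hmac

# Secret key held only at the data capture point (placeholder value).
PSEUDONYM_KEY = b"replace-with-a-secret-key"

def pseudonymise(user_id: str) -> str:
    """Replace a real user identity with a stable keyed pseudonym (HMAC-SHA256),
    so records from the same user can still be correlated for analysis
    without revealing who the user is."""
    digest = hmac.new(PSEUDONYM_KEY, user_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

# Obscure the identity before the record leaves the capture point:
record = {"user": "driver-0042", "lat": 45.44, "lon": 10.99}
record["user"] = pseudonymise(record["user"])
```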
The procedure for data identification and collection in LOGISTAR has been
specified as follows, taking into account the specifics of the project:
1. Evaluation of the overall requirements elicited in WP1 (Market research: interviews, user needs functional requirements analysis and network data collection)
2. Evaluation of the available requirements specification of WP7 (Use Cases and Living Labs)
3. Development of a metadata schema for LOGISTAR (to manage data monitoring for the project along a unique schema), based on DCAT (Data Catalogue Vocabulary) [2] 2
4. Data monitoring and data identification for the LOGISTAR project (along the ODI Data Spectrum 3 ), i.e. open – shared – closed data. This includes the identification of data sources and datasets and the collection of respective metadata of these datasets to provide overview and search & browse mechanisms over the LOGISTAR data.
5. Setting up a data catalogue containing metadata (no data!) of the above identified and collected datasets
6. Development of data- and information security guidelines to ensure trusted and secure data sharing between partners and third parties
7. Data acquisition and harvesting by making use of the WP2 data storage layer and harvesting mechanisms. Plus continuous ingestion of new data, as well as updates and maintenance of existing data.
8. The metadata and data stores will be used for data analysis and visualisation (in WPs 3,4,5,7).
As pointed out above the data collection of LOGISTAR follows the ODI Data
Spectrum that includes data and information as follows:
_Fig.001: ODI Data Spectrum, https://theodi.org/about-the-odi/the-data-
spectrum/ _
The overall LOGISTAR Data Management approach follows the (Linked) Data Life
Cycle as follows:
_Fig. 002 (Linked) Data Life Cycle_
# 2\. Overall dataset structure
The Data Management Plan will present in detail the procedures for creating
‘primary data’ as well as its management. Separate datasets will be created
for each stakeholder, providing the same structure, in accordance with the
guide of Horizon 2020 for the Data Management Plan. Data gathered during
validation of LOGISTAR and functionality as preparation for the pilots or for
the purpose of scientific publications will be included in this dataset as
well.
The consortium will decide and describe the specific procedures that will be
used in order to ensure long-term preservation of the data sets. This field
will provide information regarding the duration of the data preservation, the
approximate end volume, the associated costs and the plans of the consortium
to cover the costs.
### 2.1. Purpose of data management
The main objective of the LOGISTAR project is to allow **effective planning
and optimizing of transport operations** in the supply chain by taking
advantage of **horizontal collaboration**, relying on the increasingly
**available real-time data** gathered from the interconnected environment.
**For this, a real-time decision making support tool and a real-time
visualization tool of freight transport will be developed** , with the purpose
of delivering information and services to the various agents involved in the
supply chain, i.e. freight transport operators, their clients, industries and
other stakeholders such as warehouse or infrastructure managers.
The data management activities and guidelines in LOGISTAR are built on top of
this main project objective – aligned with WP2 (Data Gathering and
Harmonisation), where the major objectives are as follows:
* the identification of broad, open and IoT data for the project as well as relevant stakeholder and stakeholders partner data (closed data on i.e. goods),
* the provision of the data acquisition layer of LOGISTAR (broad & open & IoT data) and
* the provision of the metadata / semantic layer of LOGISTAR, and finally
* the provision of the overall data storage layer of LOGISTAR including 3 stores: (i) event store (ii) big data store and (iii) metadata store
Thereby the ultimate goal is to prepare a managed collection of actionable
data to be used in other WPs. This includes strategies and mechanisms for
secure data storage and sharing, so that data can be used easily and securely
for analytics, visualisation, et al.
### 2.2. Sources, Types and Formats of Data
The following sources of relevant data have been identified as relevant data
sets for the LOGISTAR project:
* **Closed Data / shared data**
  * Data from Use Case partners (types of data: see below) coming from their transport management systems (TMS)
  * 3rd party data (external use case partners) providing data from TMS and/or specific datasets about routes, prices, et al.
  * Simulated transport data from project partners
* **Open Data**
  * EU Data Portal, _https://www.europeandataportal.eu/de/homepage_
  * EC Open Data Portal, _http://data.europa.eu/euodp/en/home_
  * Lighthouse Project: Transforming Transport, _https://data.transformingtransport.eu/_
  * Other Transport H2020 projects in place
  * EU Intelligence Transport Systems, e.g. safe & secure truck parking, _https://ec.europa.eu/transport/themes/its/road/action_plan/intelligent-truckparking_en_
  * Standards et al (e.g. GS1, ISO, W3C…)
  * Weather and traffic information data from relevant countries: UK, Italy, Europe
The following types of data have been identified as being relevant for the
LOGISTAR project. This list will be maintained and expanded over time along
the LOGISTAR project.
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Products ordered**
</td>
<td>
Product Code
</td> </tr>
<tr>
<td>
</td>
<td>
Order Number
</td> </tr>
<tr>
<td>
</td>
<td>
Quantity
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Order Data**
</td>
<td>
Order Number
</td> </tr>
<tr>
<td>
</td>
<td>
Facility Picking Order (this could be Facility code)
</td> </tr>
<tr>
<td>
</td>
<td>
Date & time order placed
</td> </tr>
<tr>
<td>
</td>
<td>
Date & time order ready for despatch
</td> </tr>
<tr>
<td>
</td>
<td>
Date & time required for delivery
</td> </tr>
<tr>
<td>
</td>
<td>
Delivery location of order (This could also be a code)
</td> </tr>
<tr>
<td>
</td>
<td>
Special delivery requests
</td> </tr>
<tr>
<td>
</td>
<td>
Orders to take from A to B
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Customer data**
</td>
<td>
Customer Code
</td> </tr>
<tr>
<td>
</td>
<td>
Location of customer
</td> </tr>
<tr>
<td>
</td>
<td>
Vehicle access constraints
</td> </tr>
<tr>
<td>
</td>
<td>
Opening Hours
</td> </tr>
<tr>
<td>
</td>
<td>
Typical delivery drop times
</td> </tr>
</table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Vehicle data**
</td>
<td>
Tractor ID
</td> </tr>
<tr>
<td>
</td>
<td>
Trailer ID
</td> </tr>
<tr>
<td>
</td>
<td>
Vehicle departed from location
</td> </tr>
<tr>
<td>
</td>
<td>
Date & time of departure
</td> </tr>
<tr>
<td>
</td>
<td>
Current location
</td> </tr>
<tr>
<td>
</td>
<td>
Date & time expected at destination
</td> </tr>
<tr>
<td>
</td>
<td>
Order IDs on vehicle
</td> </tr>
<tr>
<td>
</td>
<td>
Truck type
</td> </tr>
<tr>
<td>
</td>
<td>
Truck features (characteristics)
</td> </tr>
<tr>
<td>
</td>
<td>
Position of vehicles
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Tractor data**
</td>
<td>
Tractor ID
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Facility data** (These could be supplier/factory/warehouse)
</td>
<td>
Facility Code
</td> </tr>
<tr>
<td>
</td>
<td>
Location of facility
</td> </tr>
<tr>
<td>
</td>
<td>
Vehicle access constraints
</td> </tr>
<tr>
<td>
</td>
<td>
Opening Hours
</td> </tr>
<tr>
<td>
</td>
<td>
Typical load/unload times
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Product Profile**
</td>
<td>
Product code
</td> </tr>
<tr>
<td>
</td>
<td>
Ambient / chill / frozen / hazardous
</td> </tr>
<tr>
<td>
</td>
<td>
Dimensions
</td> </tr>
<tr>
<td>
</td>
<td>
Weight
</td> </tr>
<tr>
<td>
</td>
<td>
Stackability
</td> </tr>
<tr>
<td>
</td>
<td>
Contamination data
</td> </tr>
<tr>
<td>
</td>
<td>
Danger classes
</td> </tr>
<tr>
<td>
</td>
<td>
Moving of goods (location of goods)
</td> </tr>
<tr>
<td>
</td>
<td>
Pallet type
</td> </tr>
<tr>
<td>
</td>
<td>
Cases or quantity per pallet
</td> </tr>
<tr>
<td>
</td>
<td>
Cases or quantity per layer
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Costs**
</td>
<td>
Transportation costs
</td> </tr>
<tr>
<td>
</td>
<td>
calculation of costs (metrics)
</td> </tr>
<tr>
<td>
</td>
<td>
Rates negotiated
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Directives**
</td>
<td>
Chemical directives
</td> </tr>
<tr>
<td>
</td>
<td>
EU mobility directives
</td> </tr>
<tr>
<td>
</td>
<td>
EU transport directives
</td> </tr>
<tr>
<td>
</td>
<td>
National directives
</td> </tr>
<tr>
<td>
</td>
<td>
EU Environmental Directives
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Geo Information**
</td>
<td>
Regions
</td> </tr>
<tr>
<td>
</td>
<td>
Countries
</td> </tr>
<tr>
<td>
</td>
<td>
Addresses
</td> </tr>
<tr>
<td>
</td>
<td>
Routes
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Standards**
</td>
<td>
Article codes
</td> </tr>
<tr>
<td>
</td>
<td>
GS1 for retail
</td> </tr>
<tr>
<td>
</td>
<td>
Global location numbers
</td> </tr>
<tr>
<td>
</td>
<td>
Industry sectors (e.g. NACE codes)
</td> </tr>
<tr>
<td>
</td>
<td>
Country Codes (e.g. ISO)
</td> </tr>
<tr>
<td>
</td>
<td>
Languages (e.g. ISO 2 or 3 digits)
</td> </tr>
<tr>
<td>
</td>
<td>
City Codes (e.g. IATA 3 digit)
</td> </tr>
<tr>
<td>
</td>
<td>
Existing logistics taxonomies and ontologies
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Other data**
</td>
<td>
Vehicle filled
</td> </tr>
<tr>
<td>
</td>
<td>
Empty miles
</td> </tr>
<tr>
<td>
</td>
<td>
Types of containers
</td> </tr>
<tr>
<td>
</td>
<td>
CO2 emission / carbon footprint calculation
</td> </tr>
<tr>
<td>
</td>
<td>
miles empty
</td> </tr>
<tr>
<td>
</td>
<td>
events of interest (to be specified)
</td> </tr>
<tr>
<td>
</td>
<td>
weather data
</td> </tr>
<tr>
<td>
</td>
<td>
traffic information
</td> </tr>
<tr>
<td>
</td>
<td>
news articles (relevant)
</td> </tr>
<tr>
<td>
</td>
<td>
pollution level / location
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Facilities**
</td>
<td>
Addresses
</td> </tr>
<tr>
<td>
</td>
<td>
Opening hours
</td> </tr>
<tr>
<td>
</td>
<td>
Vehicle access (restrictions)
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Driver data**
</td>
<td>
availability of drivers
</td> </tr>
<tr>
<td>
</td>
<td>
time already worked / allowed to work
</td> </tr>
<tr>
<td>
</td>
<td>
working on the day
</td> </tr>
<tr>
<td>
</td>
<td>
driver schedules
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Rail data**
</td>
<td>
Train & schedules
</td> </tr>
<tr>
<td>
</td>
<td>
Capacity
</td> </tr>
<tr>
<td>
</td>
<td>
Rail operator
</td> </tr> </table>
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**Services**
</td>
<td>
Schedules
</td> </tr>
<tr>
<td>
</td>
<td>
Real time information (to be specified)
</td> </tr>
<tr>
<td>
</td>
<td>
Service Level constraints
</td> </tr> </table>
**In regards to formats of data**, the project has to deal with a big variety
of data – meaning data from several sources and in several formats. The data and
information will be used in unstructured format (e.g. documents), as well as
in semi-structured and structured format (e.g. tabular data like CSV files).
Some data will be harvested via APIs, and the format in which such data is
received needs to be specified in the course of the technical requirements
specification and architecture work in WP6.
_A preliminary list of data formats is as follows:_
* API data (output in several formats available)
* XML
* RDF
* CSV
* Relational DBs
* Json (LD)
* Documents (MS Word, XLS, PDFs et al)
* Etc.
**In regards to re-use of existing data**, the LOGISTAR project has a strong
focus on making use of existing data like standards (ISO, W3C, et al) in the
form of, for example, models, taxonomies, ontologies or controlled vocabularies
(like code lists), as well as on the use of open and broad data (e.g. in the
area of weather data, traffic information or environmental data) wherever
possible.
**The size of data is still not specified** at the moment of the creation of
this deliverable, but it can be said that LOGISTAR data has the following 3
main attributes: (i) big data (volume), (ii) velocity data (real-time /
streaming data) and (iii) high variety of data (different sources, different
formats).
# 3\. Management plans and policy
This section reflects the current status of the primary data envisioned in the
project. Being in line with the EU’s guidelines regarding the DMP (European
Commission, 2016 4 5 ), this document should address for each data set
collected, processed and/or generated in the project the following elements:
1. Data set reference and name
2. Data set description
3. Standards and metadata
4. Data sharing
5. Archiving and preservation
To this end, the consortium develops a number of strategies that will be
followed in order to address the above elements.
In this section, we provide a detailed description of these elements in order
to ensure their understanding by the partners of the consortium. For each
element, we also describe the strategy that will be used to address it.
Wherever possible the LOGISTAR project will follow the **EC Guidelines for
Open Access** 6 as well as the **principles of FAIR data** 7 , this means:
**FAIR data** are data which meet standards of _findability_ ,
_accessibility_ , _interoperability_ , and _reusability_ .
Remark: as LOGISTAR is also working with sensitive industry data from partners
and 3rd parties (e.g. data from transport management systems of project
partners), such data cannot be made publicly available – BUT the listed
principles can be applied for data sharing between the partners and inside the
consortium, where applicable and necessary.
### 3.1. Data set reference and name
Unique identification of datasets is ensured by following provisioned unique
naming convention drafted for the purpose of the LOGISTAR project. The
convention for the dataset naming is as follows:
1\. Each data set name consists of 4 different parts separated with a “:”,
e.g. **PartnerName:EntityGroup:EntityType:VarcharId**,
1. **PartnerName** represents the name (or the short name) of the organisation (e.g. data owner, data custodian) associated with the dataset:
i. UDEUSTO - UNIVERSIDAD DE LA IGLESIA DE DEUSTO ENTIDAD RELIGIOSA
ii. UCC - UNIVERSITY COLLEGE CORK - NATIONAL UNIVERSITY OF IRELAND, CORK
iii. CSIC - AGENCIA ESTATAL CONSEJO SUPERIOR DE INVESTIGACIONES CIENTIFICAS
iv. DNET - DRUSTVO ZA KONSALTING, RAZVOJ I IMPLEMENTACIJU INFORMACIONIH I KOMUNIKACIONIH TEHNOLOGIJA DUNAVNET DOO NOVI SAD
5. SWC - SEMANTIC WEB COMPANY GMBH
6. PRESTON - PRESTON SOLUTIONS LIMITED
7. MDST - MDS TRANSMODAL LIMITED
8. SAG - SOFTWARE AG
9. DBH - dbh Logistics IT AG
10. GENEGIS - GENEGIS GI SRL
11. AGLERS - AHLERS BELGIUM NV
12. ZAILOG - CONSORZIO ZAILOG
13. NESTLE - NESTLE UK LTD
14. PLADIS - UNITED BISCUITS (UK) LIMITED
15. CODOGNOTTO - CODOGNOTTO ITALIA SPA
2. **EntityGroup** – represents the category of data source, such as carrier name of the load
3. **EntityType** – represents the type of data source category
4. **VarcharId** – in some systems the data context may already have been assigned an ID, and in some cases certain data context IDs in databases are iterated automatically. In both cases this suffix is used as the final part of the ID; it can be textual and/or numerical.
An example of a dataset name generated using the above convention would
be:
Nestle:Group1:VehicleSpeed:0001
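For illustration, the following minimal Python sketch (not part of the project
tooling; the function names and the alphanumeric-only pattern are assumptions)
builds and validates names of this form:

```python
import re

def build_dataset_name(partner: str, group: str,
                       entity_type: str, varchar_id: str) -> str:
    """Concatenate the four parts of a dataset name, separated by ':'."""
    return f"{partner}:{group}:{entity_type}:{varchar_id}"

def is_valid_dataset_name(name: str) -> bool:
    """Check for exactly four non-empty, ':'-separated alphanumeric parts."""
    return bool(re.fullmatch(r"[A-Za-z0-9]+(:[A-Za-z0-9]+){3}", name))

assert build_dataset_name("Nestle", "Group1", "VehicleSpeed", "0001") == \
    "Nestle:Group1:VehicleSpeed:0001"
assert is_valid_dataset_name("Nestle:Group1:VehicleSpeed:0001")
```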
### 3.2. Data set description, Standards and Metadata
Data collected, processed or generated within the project will have a
description that explains the dataset in more detail (the metadata, MD). This
description will be provided by the data owner/producer and/or other
stakeholders. Information gathered during the LOGISTAR project will be
accompanied by context information (location, date, time) as well as publicly
available information such as weather data from online local measurement
stations.
The metadata schema (as follows) has been created by taking into account the
DCAT vocabulary 8 (a W3C recommendation, that is used for e.g. metadata for
open data across Europe and/or for the European Data Portal) but has been
slightly adapted to the needs of LOGISTAR.
The description will provide information as given in the table below.
<table>
<tr>
<th>
**Title**
</th>
<th>
Title of the dataset
</th> </tr>
<tr>
<td>
**Type of data**
</td>
<td>
The type of data, e.g. driver data, vehicle data, geo information
</td> </tr>
<tr>
<td>
**Data provider**
</td>
<td>
Provider of the data, not necessary owner
</td> </tr>
<tr>
<td>
**Dataset owner**
</td>
<td>
Owner of the data not necessary provider
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Brief description of the data features and the purpose of the data
</td> </tr>
<tr>
<td>
**Format (Media type)**
</td>
<td>
Doc, pdf, api, json, xml
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The license and terms under which the data can be used
</td> </tr>
<tr>
<td>
**Language**
</td>
<td>
ISO code of language
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
Yes, no, if available provide URI reference schema
</td> </tr>
<tr>
<td>
**Static / Dynamic dataset**
</td>
<td>
Information about whether the dataset is static (e.g. a dataset as a csv file)
or dynamic (e.g. real-time data via an API)
</td> </tr>
<tr>
<td>
**Data Type along the ODI Data Spectrum**
</td>
<td>
Controlled vocabulary (CV) to describe the type of data: open, shared, closed
</td> </tr>
<tr>
<td>
**Classification of sensitivity of data**
</td>
<td>
CV to describe whether a dataset is sensitive (attributes: Yes / No). If a
dataset is marked as sensitive: Yes, then the Information & Data Security
Checks and Guidelines (see below) need to be taken into account.
</td> </tr> </table>
Potentially this LOGISTAR MD schema can be adapted and expanded over time if
necessary, for example the following additional fields could be useful for
metadata in LOGISTAR:
* Data purpose (what is this data used for?)
* Temporal Coverage (of Data; e.g. year 2017 or June 2016)
* Geographical Coverage of data (e.g. UK or Scandinavia)
* Language (as an ISO code, e.g. EN)
* Size (approximate size in MB. The goal is to know whether a dataset is too large, in order to plan how to handle it, e.g. whether a file of a couple of hundred MB needs to be imported)
Finally, the MD schema will be used for data monitoring and identification and
for the LOGISTAR data catalogue (a catalogue of metadata of identified datasets
for LOGISTAR).
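As an illustration only, one such metadata record could be represented in code
roughly as follows (key names and all values are invented placeholders
mirroring the schema fields in the table above, not a normative serialisation):

```python
# Illustrative only: one LOGISTAR metadata record as a plain Python dictionary.
metadata_record = {
    "title": "Vehicle speed measurements",
    "type_of_data": "vehicle data",
    "data_provider": "NESTLE",          # provider, not necessarily owner
    "dataset_owner": "NESTLE",          # owner, not necessarily provider
    "description": "Speed readings from delivery vehicles",
    "format": "json",                   # media type
    "license": "restricted to consortium use",
    "language": "EN",                   # ISO language code
    "metadata": "yes",                  # URI of a reference schema if available
    "static_or_dynamic": "dynamic",     # e.g. real-time data via an API
    "odi_data_spectrum": "shared",      # controlled vocabulary: open / shared / closed
    "sensitive": "yes",                 # triggers the security checks and guidelines
}
```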
In regard to the use of standards and the re-use of models (e.g. controlled
vocabularies, taxonomies and/or ontologies), the relevant standards and
models will be screened and identified along with the requirements elicitation
and the use case specification in the LOGISTAR project. Sources are as follows:
* Standards
  * GS1
    * _https://www.gs1.ch/en/home/topics/master-data-based-on-gs1_
    * _https://www.gs1.org/standards_
    * _https://www.gs1.org/standards/gdsn_
  * ISO (logistics-related standards, but also terminology, metadata and information security related standards, as follows)
    * ISO 28000:2007: Specification for security management systems for the supply chain
    * ISO 704, Terminology work
    * ISO 1087-1, Vocabularies
    * ISO 11179, MDR
    * ISO 20943-1, MDR (consistency)
    * ISO 25964-1, Thesauri & Interoperability
    * ISO 27001, Information Security (certified end 2018)
  * W3C, _https://www.w3.org/_
* Resource for Vocabularies: _https://bartoc.org/_
* EC ISA2 (Core) Vocabularies: _https://ec.europa.eu/isa2/solutions/core-vocabularies_en_
* EU Data Portal, _https://www.europeandataportal.eu/de/homepage_
* Lighthouse Project: Transforming Transport, _https://data.transformingtransport.eu/_
The principal approach is to make use of existing standards and, wherever
necessary, to interlink, map or adapt them to the LOGISTAR requirements.
### 3.3. Secure Data sharing & Information Security Guidelines
The LOGISTAR project will define how data will be shared and more specifically
the access procedures, the embargo periods, the necessary software and other
tools for enabling re-use, for all datasets that will be collected, generated,
or processed in the project.
In case the dataset cannot be shared, the reasons for this will be mentioned
(e.g. ethical, rules of personal data, intellectual property, and commercial,
privacy-related, security-related).
In addition, beneficiaries do not have to ensure open access to specific parts
of the research data if the achievement of the action's main objective, as
described in Annex 1 of the DoW, would be jeopardised by making those specific
parts of the research data openly accessible. In this case, the data
management will present the reasons for not giving access.
_More concretely, the following mechanisms have been specified:_
_Remark_ : these mechanisms will be adapted over time following the dynamic
requirements of the LOGISTAR project.
* Datasets that are identified as ‘sensitive / closed’ data in the course of the data monitoring activities of LOGISTAR will be subject to a data security check and to the specified LOGISTAR data / information security guidelines (see as follows).
* Between the partners a clear Non-Disclosure Agreement (NDA) will be executed (in addition to the Consortium Agreement) that includes mechanisms and agreements for secure data sharing between the parties.
  * This NDA will be adapted to be used also for agreements on data sharing with 3rd parties in the course of the use case development in the project.
* The NDA mentioned above will include – in the form of an Annex – specific Information and Data Security Guidelines that apply to LOGISTAR secure data sharing.
The attributes for this are as follows:
* Store data (in the LOGISTAR store) from each data provider separately (physically) and ensure secure data transfer.
  * No unnecessary data transfer (e.g. by means of federated systems).
* Aggregation / anonymisation of data (decided on a dataset-by-dataset basis) where necessary and useful.
* For data analytics / prediction etc. data needs to be integrated; the people making use of such data need to be specified and listed (remark: data sharing can increase data quality (e.g. address data)).
* Mechanisms of TRUST (LOGISTAR operator = data stewardship) need to be established, specified and implemented.
* Compliance with (i) GDPR and (ii) other data regulations, including:
  * NO export of any data outside of the EU.
  * Any data breach, data loss or similar to be reported within e.g. 72 hours.
  * Data will be deleted on request within a specified duration (plus written confirmation).
  * Create and maintain a list of team members with access, per dataset and organisation.
  * Data processing agreements for PII (personally identifiable information) between partners to be executed.
  * Any data processing, storage et al. must follow industry standards.
  * Data sharing only on a ‘need to know’ basis, and any use is for LOGISTAR purposes only.
To ensure a stable and efficient solution, LOGISTAR will take into account best
practices of other projects and initiatives for secure data sharing, such as
NexTrust ( _http://www.nextrustproject.eu/_ ) or iShare (
_https://cordis.europa.eu/project/rcn/208159_de.html_ ).
# 4\. Archiving and preservation
The data sharing procedures will differ across the datasets depending on the
license and will be in accordance with the Grant Agreement.
Raw data will be converted to non-identifying codes (coded identification of
the vehicle) with the use of one-way hashing algorithms using random values
(a different salt for each tracked data set – salted cryptographic hashing),
while the original data will be discarded and the coded identification of the
vehicle will be stored for a maximum of 24 hours for the specific needs of the
system.
Appropriate technical and organisational measures will be taken against
unauthorised or unlawful processing of personal data in order to ensure that
the individual cannot be identified from the captured data. Furthermore, data
that could eventually lead to a subsequent determination of the individual’s
path (where, when, how fast) will be stored for a maximum of 24 hours. This
way the possibility of identifying an individual is sufficiently minimised.
The system will aggregate the data collected in short intervals which will
ensure the anonymity of the data.
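A minimal sketch of this pseudonymisation step, assuming SHA-256 as the one-way
hash function (the plan does not prescribe a specific algorithm) and an
in-memory store with the 24-hour retention window described above:

```python
import hashlib
import os
import time

RETENTION_SECONDS = 24 * 60 * 60  # 24-hour retention window

def make_salt() -> bytes:
    """Generate a fresh random salt for one tracked data set."""
    return os.urandom(16)

def pseudonymise(vehicle_id: str, salt: bytes) -> str:
    """Return a non-identifying code; the raw vehicle_id is not stored."""
    return hashlib.sha256(salt + vehicle_id.encode("utf-8")).hexdigest()

store = {}  # coded identifier -> creation timestamp (seconds)

def purge_expired(now: float) -> None:
    """Drop coded identifiers older than the retention window."""
    for code in [c for c, t in store.items() if now - t > RETENTION_SECONDS]:
        del store[code]

salt = make_salt()
store[pseudonymise("WDB1234567", salt)] = time.time()  # hypothetical identifier
purge_expired(time.time())
```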
# 5\. Ethical aspects
As LOGISTAR will harvest, store and process sensitive data, and potentially
also personal data, the project has established a separate work package
(WP10 - Ethics requirements) to tackle ethical issues.
The 'ethics requirements' that the project must comply with are included as
deliverables in this work package, see as follows:
D10.1: NEC - Requirement No. 1 [7] (Month 1)
The applicants must ensure that the research conducted outside the EU is legal
in at least one EU Member State.
D10.2: H - Requirement No. 2 [8] (Month 1)
The informed consent procedures that will be implemented for the participation
of humans must be submitted as a deliverable. Templates of the informed
consent/assent forms and information sheets (in language and terms
intelligible to the participants) must be kept on file.
D10.3: POPD - Requirement No. 3 [9] (Month 1)
Detailed information on the procedures for data collection, storage,
protection, retention, and destruction, and confirmation that they comply with
national and EU legislation must be included in the Data Management Plan.
However, relevant information that pertain to the interviewing/surveying
activities performed before the delivery of the Data Management Plan in M6
must also be provided by the start of these activities. In case personal data
are transferred from/to a non-EU country or international organisation,
confirmation that this complies with national and EU legislation, together
with the necessary authorisations, must be kept on file. Detailed information
on the informed consent procedures in regard to the collection, storage, and
protection of personal data must be submitted as a deliverable. Templates of
the informed consent forms and information sheets (in language and terms
intelligible to the participants) must be kept on file. In case of further
processing of previously collected personal data, relevant authorisations must
be kept on file.
D10.4: GEN - Requirement No. 4 [10] (Month 6)
An ethics mentor must be appointed to advise the project participants on
ethics issues relevant to the protection of personal data. A report on the
activities of the ethics mentor must be submitted with the Data Management
Plan.
### 5.1. Activities of the ethics mentor
The ethics mentor appointed for the LOGISTAR project is Dr Pedro Manuel Sasia,
who is a lecturer and researcher at the University of Deusto.
During the first 6 months of the project (from June 2018 to November 2018),
the activities of the ethics mentor have focused on establishing the
adequate procedures to fulfil the ethical requirements that are relevant to
the project, namely:
* Research out of Europe
Definition of Procedures to be followed for analysing the collection and
processing of personal data obtained or handled by LOGISTAR project in Serbia.
Data transfer agreement to comply with GDPR requirements. (Detailed in
Deliverable 10.1 [7])
* Informed consent
Definition of Procedures implemented for the participation of humans in
LOGISTAR’s research activities both in interviewing activities and for the
testing and validation of the system.
Informed consent templates
(Detailed in Deliverable 10.2 [8])
* Data management
Definition of procedures in relation with data collection, storage,
protection, retention and destruction in order to comply with the applicable
national and EU legislation. Particularly, data handling procedures to be
implemented related to interviewing activities that have taken place at the
beginning of the project (Detailed in Deliverable 10.3 [9])
The ethics mentor has been in close contact with the coordinator via direct
mail, phone and regular meetings, and has had access to all the information
about the activities of the project that could imply ethically relevant aspects
via email and via the common repository of LOGISTAR.
# 6\. Conclusions
The LOGISTAR project makes use of data along the whole ODI Data Spectrum,
i.e. closed, shared and open data, with the main attributes volume, velocity
and variety. Data comes from different sources such as open data sources,
consortium members and also 3rd parties in the course of the use case
realisation.
Parts of the data are sensitive data, and potentially even personal data;
secure data management and sharing is therefore an important issue to be
tackled by the project. This will be taken into account from a technical as
well as from an organisational viewpoint.
This Data Management Plan is created as a living document that is maintained
over time following the dynamic requirements of the LOGISTAR project, and it
acts as a guideline for the whole consortium in regard to any data management
in the project.
1. **Introduction**
This document constitutes the first issue of the Data Management Plan (DMP)
foreseen in the EU framework of the EnDurCrete project under Grant Agreement
No. 760639. The objective of the DMP is to establish the measures for
promoting the findings during the Project’s life and to detail what data the
Project will generate, whether and how it will be exploited or made accessible
for verification and re-use, and how it will be curated and preserved. The DMP
enhances and ensures the transferability of relevant Project information and
takes into account the restrictions established by the Consortium Agreement.
In this framework, the DMP aligns with the Dissemination, Communication and
Networking Plan. The first version of the DMP is delivered at month 8;
afterwards the DMP will be monitored and regularly updated up to the release
of the Final Data Management Plan. It is acknowledged that not all data types
will be available at the start of the Project; thus, if any changes occur to
the EnDurCrete project due to the inclusion of new data sets, changes in
consortium policies or external factors, the DMP will be updated in order to
reflect the actual data generated and the user requirements as identified by
the EnDurCrete consortium participants.
The EnDurCrete project aims to develop a new cost-effective sustainable
reinforced concrete for long-lasting and added-value applications. The concept
is based on the integration of novel low-clinker cement including high-value
industrial by-products, new nano and micro technologies and hybrid systems
ensuring enhanced durability of sustainable concrete structures with high
mechanical properties, self-healing and self-monitoring capacities.
EnDurCrete project comprises seven technical work packages as follows:
* WP1 Design requirements for structures exposed to aggressive environment
* WP2 Development and characterisation of new green and low-cost cementitious materials
* WP3 Innovative concrete technologies, including nano/microfillers, coatings and reinforcement
* WP4 Multifunctional and multiscale modelling and simulations of materials, components and structures
* WP5 Lab-scale performance testing and development of monitoring tools for concrete components & structures
* WP6 Prototyping, demonstration and solutions performance validation
* WP7 Life cycle assessment and economic evaluation, standardization and health and safety aspects
Two non-technical work packages ensure the facilitation of the technical work
and coordination of all the work packages, dissemination and communication of
the project results. These work packages consist of the following:
* WP8 Training, dissemination and exploitation
* WP9 Project Management
This document has been prepared to describe the data management life cycle for
all data sets that will be collected, processed or generated by the EnDurCrete
project. It is a document outlining how research data will be handled during
the Project, and after the Project is completed. It describes what data will
be collected, processed or generated and what methodologies and standards are
to be applied. It also defines if and how this data will be shared and/or made
open, and how it will be curated and preserved.
2. **Open Access**
Open access can be defined as the practice of providing online access to
scientific information that is free of charge to the reader and that is
reusable. In the context of R&D, open access typically focuses on access to
“scientific information”, which refers to two main categories:
* Peer-reviewed scientific research articles (published in academic journals), or
* Scientific research data (data underlying publications and/or raw data).
It is important to note that:
* Open access publications go through the same peer review process as non-open access publications.
* As an open access requirement comes after a decision to publish, it is not an obligation to publish; it is up to researchers whether they want to publish some results or not.
* As the decision on whether to commercially exploit results (e.g. through patents or otherwise) is made before the decision to publish (open access or not), open access does not interfere with the commercial exploitation of research results. 1
Benefits of open access:
* Unprecedented possibilities for the dissemination and exchange of information due to the advent of the internet and electronic publishing.
* Wider access to scientific publications and data including creation and dissemination of knowledge, acceleration of innovation, foster collaboration and reduction of the effort duplication, involvement of citizens and society, contribution to returns on investment in R&D etc.
[Figure: open access enables possibilities to access and share scientific
information, faster growth, fostered collaboration, the involvement of citizens
and society, building on previous research results, accelerated innovation,
increased efficiency, improved quality of results and improved transparency]
Figure 1 - Open Access benefits
The EC capitalises on open access and open science as they lower barriers to
accessing publicly funded research. This increases research impact and the
free flow of ideas, and facilitates a knowledge-driven society, at the same
time underpinning the EU Digital Agenda (OpenAIRE Guide for Research
Administrators - EC funded projects). The open access policy of the European
Commission is not a goal in itself, but an element in the promotion of
affordable and easily accessible scientific information, for the scientific
community itself but also for innovative small businesses.
**2.1 Open Access to peer-reviewed scientific publications**
Open access to scientific peer-reviewed publications (also known as Open
Access Mandate) has been anchored as an underlying principle in the Horizon
2020 Regulation and the Rules of Participation and is consequently implemented
through the relevant provisions in the Grant Agreement. Non-compliance can
lead, amongst other measures, to a grant reduction.
More specifically, Article 29 of the EnDurCrete GA: “Dissemination of results
- Open Access - Visibility of EU Funding” establishes the obligation to ensure
open access to all peer-reviewed articles relating to the EnDurCrete project.
_Article 29.2 EnDurCrete GA: Open access to scientific publications_
“Each beneficiary must ensure open access (free of charge online access for
any user) to all peer reviewed scientific publications relating to its
results.
In particular, it must:
1. as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications;
Moreover, the beneficiary must aim to deposit at the same time the research
data needed to validate the results presented in the deposited scientific
publications.
2. ensure open access to the deposited publication — via the repository — at the latest:
1. on publication, if an electronic version is available for free via the publisher, or
2. within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.
3. ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication.
The bibliographic metadata must be in a standard format and must include all
of the following:
* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable;
* a persistent identifier.”
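For illustration, the mandatory bibliographic metadata listed above could be
captured in a simple record like the following sketch (the key names and
placeholder values are assumptions, not a prescribed format):

```python
# Illustrative sketch only: the mandatory bibliographic metadata from
# Article 29.2 captured as a simple record. All values are placeholders.
bibliographic_metadata = {
    "funder": "European Union (EU)",
    "programme": "Horizon 2020",
    "action_name_and_acronym": "EnDurCrete",
    "grant_number": "760639",
    "publication_date": "YYYY-MM-DD",              # placeholder
    "embargo_period_months": 6,                    # if applicable
    "persistent_identifier": "doi:10.xxxx/xxxxx",  # placeholder DOI
}
```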
1. **Green Open Access**
The green open access route is also called self-archiving and means that the
published article or the final peer-reviewed manuscript is archived by the
researcher in an online repository before, after or alongside its publication.
Access to this article is often delayed (embargo period): publishers recoup
their investment by selling subscriptions and charging pay-per-download/view
fees during this exclusivity period. This model is promoted alongside the
“Gold” route by the open access community of researchers and librarians, and
is often preferred.
2. **Gold Open Access**
The gold open access route is also called open access publishing, or
author-pays publishing, and means that a publication is immediately provided
in open access mode by the scientific publisher. Associated costs are shifted
from readers to the university or research institute to which the researcher
is affiliated, or to the funding agency supporting the research. This model is
usually the one promoted by the community of well-established scientific
publishers.
**2.2 Open Access to research data**
“Research data” refers to information, in particular facts or numbers,
collected to be examined and considered as a basis for reasoning, discussion,
or calculation. In a research context, examples of data include statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings and images. The focus is on research
data that is available in digital form.
_Article 29.3 EnDurCrete GA: Open access to research data_
“Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:
1. deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:
1. the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;
2. other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan' (see Annex 1 of the EnDurCrete GA);
2. provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves).
This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.
The beneficiaries do not have to ensure open access to specific parts of their
research data if the achievement of the action's main objective, as described
in Annex 1, would be jeopardized by making those specific parts of the
research data openly accessible. In this case, the data management plan must
contain the reasons for not giving access.”
**2.3 Dissemination & Communication and Open Access **
For the implementation of the EnDurCrete project, a complete set of
dissemination and communication activities is scheduled, with the objective of
raising awareness in the research community, industry and the wider public
(e-newsletters, e-brochures, posters and events are foreseen for the
dissemination of EnDurCrete to key groups potentially related to the
exploitation of the project results). Likewise, the EnDurCrete website,
webinars, press releases and videos, for instance, will be developed for
communication to a wider audience. Details about all those dissemination and
communication elements are provided in the deliverable D8.2 Communication,
Networking and Dissemination Plan. The Data Management Plan and the actions
derived from it are part of the overall EnDurCrete dissemination and
communication strategy, which is included in the abovementioned deliverable.
**3 Objectives of Data Management Plan**
The purpose of the EnDurCrete Data Management Plan is to provide a management
assurance framework and processes that fulfil the data management policy that
will be used by the EnDurCrete project partners regarding all the dataset
types that will be generated by the EnDurCrete project. The aim of the DMP is
to control and ensure quality of project activities, and to manage the
material/data generated within the EnDurCrete project effectively and
efficiently. It also describes how data will be collected, processed, stored
and managed holistically from the perspective of external accessibility and
long-term archiving.
The content of the DMP is complementary to other official documents that
define obligations under the Grant Agreement and associated annexes, and shall
be considered a living document and as such will be the subject of periodic
updating as necessary throughout the lifespan of the Project.
[Figure: the EnDurCrete Data Management Plan in relation to the Communication,
Networking and Dissemination Plan (publications and the repository of research
data, i.e. open access to scientific publications and open access to research
data) and the Exploitation Plan (IPR management and IPR strategy, business
models and the Business Plan)]
**4 EnDurCrete Project Website - storage and access**
The EnDurCrete project website is used for storing both public and private
documents related to the project and its dissemination, and is meant to be
live for the whole project duration and a minimum of 2 years after the project
end. The public section of the website contains mainly public deliverables,
the brochure, the (roll-up) poster, presentations, scientific papers,
newsletters, magazine articles, videos, photos, etc. The Reserved Area section
of the project website includes confidential deliverables and
work-package-related documentation, and is used as the main exchange of
information among the Project partners.
The website _www.endurcrete.eu_ was launched during the early Project stage;
its design was done by the dissemination leader FENIX, which is also in charge
of website maintenance and regular updates. It is a dynamic and interactive
tool intended to ensure clear communication and wide dissemination of project
news, activities and results. The website is of primary importance due to the
expected impact on the target audiences. It was designed to give quick, simple
and neat information. The website is regularly updated with news and events
related to the EnDurCrete Project, press releases, magazine articles,
scientific papers, etc. The website is available in English.
To ensure the safety of the data, the partners will use their available local
file servers to periodically create backups of the relevant materials. The
EnDurCrete project website itself already has its own backup procedures.
The Project Coordinator (HC) of EnDurCrete, along with the Dissemination and
Exploitation Leader (FENIX), will be in charge of data management and all the
relevant issues.
**5 Data Management Plan implementation**
The organisational structure of the EnDurCrete project matches the complexity
of the Project and is in accordance with the recommended management structure
of the DESCA model Consortium Agreement. The organisational structure of the
Project is shown in the figure below.
[Figure: organisational structure of the EnDurCrete project]
The general and technical management of the Project is handled by the
**Project Coordination Group** (PCG). The PCG administers the project and acts
as a single point of contact between the EnDurCrete consortium and the
Commission. It provides the general direction to the project by regularly
reporting to the General Assembly (GA). The PCG comprises the Project
Coordinator (PC), the Chief Scientific-Technical Officer (CSO), the Chief
Financial Officer (CFO), and the Chief Administrative Officer.
Responsibilities of the PCG include:
* financial control,
* contractual issues,
* communication,
* IPR issues, and
* reporting to the Commission
The R&D work in the Project is divided in seven technical work packages and
two non-technical work packages. Each work package is managed by **Work
Package Leader** (WP Leader). WP Leaders are responsible for managing their
work package as a self-contained entity.
Tasks and responsibilities of the WP Leaders include, among others, the following:
* Coordination of the technical work in the WPs, including contribution to reporting
* Assessment of the WP progress to ensure the output performance, costs and timeline are met
* Identification of IPR issues and opportunities
* Organisation of the WP meetings
* Contribution to the dissemination activities
* Initiation of all actions necessary for reaching a solution or decision in consultation with the researchers involved and the PMs
In the case of technical problems at WP level, the WP Leader should be
notified as soon as possible.
In addition, each WP is further subdivided into its larger component tasks,
which are allocated to a **Task Leader** responsible for their coordination.
In the organisation structure following management bodies are identified:
* **General Assembly (GA):**
GA consists of one representative for each partner institution. Each
representative is responsible for the proper utilization of the contractor’s
resources allocated to the project and for the attainment of the objectives
assigned to his institution. Each representative further names a deputy who
has the necessary knowledge and authorization to represent its institution in
the framework of the EnDurCrete project.
* **Dissemination, Exploitation and Standardisation Board (DESB):**
DESB forms a project body that shall assist and support the GA on issues
concerning the exploitation of results and the resolution of disagreements.
It constitutes the central office coordinating all contacts towards
stakeholder communities and other dissemination and communication target
audiences. The DESB is also responsible for the performance of the innovation
management activities.
* **Demonstration Board (DB):**
DB coordinates the demonstration activities. The DB shall manage the
activities performed in different locations with a common systemic approach.
* **Technical Board (TB):**
TB is responsible for the technical activities of the Work Packages (WPs) and
consists of all the WP Leaders (WPLs). The TB reports directly to the GA and
is responsible for providing technical updates on the ongoing activities. The
TB is also an essential tool to keep the whole consortium informed about any
criticality, problem, or deviation from the original plan that may arise when
carrying out the technical activities.
The GA is supported by the **Advisory Board (AB)** , consisting of a number of
external experts selected on the basis of their profound and long-lasting
expertise in the fields of research, innovation and industrialisation.
Partners of the EnDurCrete project demonstrate relevant management
capabilities necessary to support and provide major contribution to all the
activities envisaged in the Project work. All partners and their roles in the
EnDurCrete project are listed in the following table.
Table 1 - EnDurCrete partners and their role in the project
<table>
<tr>
<th>
**No.**
</th>
<th>
**Partner short name**
</th>
<th>
**Partner legal name**
</th>
<th>
**Partner role in the EnDurCrete project**
</th>
<th>
</th> </tr>
<tr>
<th>
1
</th>
<th>
HC
</th>
<th>
HEIDELBERGCEMENT AG
</th>
<th>
HC is a Project coordinator and leader of Development and characterisation of
new green and low-cost cementitious materials. HC brings key knowledge on the
development of new environmentally friendly low-clinker binders and of
concrete mixes integrating novel additive technologies. In addition, HC is
responsible for the Project Management.
</th> </tr>
<tr>
<th>
2
</th>
<th>
RINA-C
</th>
<th>
RINA CONSULTING SPA
</th>
<th>
RINA-C develops requirements for structures exposed to harsh environmental
conditions, designs and optimises the smart textile self-monitoring
reinforcing system, performs modelling and simulation activities, calibrates
monitoring tools and performs structural health monitoring activities.
Additionally, RINA-C develops EnDurCrete business models and contributes to
exploitation. RINA-C also has small contributions to LCA and safety-related
aspects.
</th> </tr>
<tr>
<th>
3
</th>
<th>
CEA
</th>
<th>
COMMISSARIAT A L ENERGIE
ATOMIQUE ET AUX ENERGIES
ALTERNATIVES
</th>
<th>
CEA leads Multifunctional and multiscale modelling and simulations of
materials, components and structures, being involved in modelling and
simulations. CEA also contributes to Lab-scale performance testing and
development of monitoring tools for concrete components and structures. CEA is
responsible for the assessment of the exposure likelihood of the new nano-
modified EnDurCrete products.
</th> </tr>
<tr>
<th>
4
</th>
<th>
ACCIONA
</th>
<th>
ACCIONA CONSTRUCCION SA
</th>
<th>
ACCIONA provides its expertise to the demonstration and performance validation
activities in the demonstration sites located in Spain. ACCIONA also
collaborates in the definition of requirements for the concrete design mix and
the additives to be used, develops concrete mix designs integrating the newly
designed durability technologies and prepares concrete specimens for later
analysis. ACCIONA also participates with NDT technologies NT492 and electrical
resistivity measurements to assess corrosion in laboratory specimens.
</th> </tr>
<tr>
<th>
5
</th>
<th>
KVAERNER
</th>
<th>
KVAERNER AS
</th>
<th>
KVAERNER is primarily in charge of writing the requirements for offshore
platforms within Requirements and conceptual design of new components and
structures, and contributes to Multifunctional and multiscale modelling and
simulation of materials, components and structures. KVAERNER also performs
testing at the Stord shipyard to simulate North Sea water conditions.
</th> </tr>
<tr>
<th>
6
</th>
<th>
SIKA
</th>
<th>
SIKA TECHNOLOGY AG
</th>
<th>
SIKA is a leader of Innovative concrete technologies, including
nano/microfillers, coatings and reinforcement and coordinates the design and
development of new durable concrete systems incorporating innovative
technologies. SIKA is in charge of evaluating the compatibility of the novel
additives developed by other partners with common
additives in use in current concrete technology.
</th> </tr>
<tr>
<th>
7
</th>
<th>
ZAG
</th>
<th>
ZAVOD ZA GRADBENISTVO SLOVENIJE
</th>
<th>
ZAG’s main contribution to the project deals with lab-scale performance
testing, the demonstration in a real environment (Croatia), performance
validation (as far as concerns corrosion monitoring) and the promotion of
standardisation activities.
</th> </tr>
<tr>
<th>
8
</th>
<th>
VITO
</th>
<th>
VLAAMSE INSTELLING VOOR TECHNOLOGISCH ONDERZOEK
N.V.
</th>
<th>
VITO contributes customized sustainable supplementary cementitious materials
to the project and collaborates in the development of the low impact binder
with minimal Portland cement content. In addition, VITO contributes to the
environmental assessment and high-grade recyclability of the end products. In
particular VITO will establish second life reuse potential of the developed
concrete products.
</th> </tr>
<tr>
<th>
9
</th>
<th>
NTNU
</th>
<th>
NORGES TEKNISK-NATURVITENSKAPELIGE UNIVERSITET NTNU
</th>
<th>
NTNU leads the characterization of the novel cementitious materials and
contributes to modelling of the phase assemblage of the novel binders. In
addition, NTNU contributes to the simulations of the experimental laboratory
tests by providing experimental data and a critical review of the simulation
results. NTNU performs purpose-build tests to validate or estimate durability
parameters required for the numerical models.
</th> </tr>
<tr>
<th>
10
</th>
<th>
UNIVPM
</th>
<th>
UNIVERSITA POLITECNICA DELLE
MARCHE
</th>
<th>
UNIVPM is an academic leader of Lab-scale performance testing and development
of monitoring tools for concrete components and structures. UNIVPM develops
and optimizes
novel self-sensing cement based mixtures manufactured with green micro-fillers
and it contributes to their durability assessment. UNIVPM manages the
calibration and testing of the self-sensing/monitoring properties of the new
concrete. UNIVPM will develop advanced non-destructive testing tools for non-
intrusive in-field inspection, which will be used in selected demos.
</th> </tr>
<tr>
<td>
11
</td>
<td>
FENIX
</td>
<td>
FENIX TNT SRO
</td>
<td>
FENIX is in charge of training, dissemination and exploitation activities.
</td> </tr>
<tr>
<td>
12
</td>
<td>
GEO
</td>
<td>
GEONARDO ENVIRONMENTAL TECHNOLOGIES LTD
</td>
<td>
GEO leads Life cycle assessment and economic evaluation, standardization and
health and safety aspects and brings its expertise to address environmental
and economic sustainability (LCA and LCC) and standardisation aspects. It also
performs training activities on sustainable concrete products.
</td> </tr>
<tr>
<td>
13
</td>
<td>
AMSolution
</td>
<td>
PROIGMENES EREVNITIKES & DIAHIRISTIKES EFARMOGES
</td>
<td>
The main role of AMSolution is to develop and optimise new multi-functional
protective coatings. AMSolution is responsible for the development of a
multi-functional coating formulation with self-healing as well as solar/UV
reflection, hydrophobicity, anti-moulding and self-cleaning properties; the
investigation and optimisation of the encapsulation technique for the
achievement of the desired healing efficiency in the final coating
formulation; and finally, the execution of a variety of tests for the
confirmation of the full compatibility of the investigated materials.
</td> </tr>
<tr>
<td>
14
</td>
<td>
NTS
</td>
<td>
NUOVA TESI SYSTEM SRL
</td>
<td>
NTS brings expertise in precasting process and performance evaluation.
Additionally, NTS will manufacture the prototypes used for the demonstrations.
NTS is also a recipient of scope visits for adequate safety assessment and
management.
</td> </tr>
<tr>
<td>
15
</td>
<td>
IBOX
</td>
<td>
I-BOX CREATE S.L.
</td>
<td>
The main contribution of IBOX concerns the development and optimisation of
smart corrosion inhibitors, based on nano-modified clays.
</td> </tr>
<tr>
<td>
16
</td>
<td>
INFRA PLAN
</td>
<td>
INFRA PLAN KONZALTNIG JDOO ZA USLUGE
</td>
<td>
The main role of INFRA PLAN is to lead the demonstration activity on the Krk
bridge and contribute to ND monitoring activities. INFRA PLAN leads the
Prototyping, demonstration and performance validation in a bridge in Croatia,
by providing planning and the execution of the monitoring project.
</td> </tr> </table>
6. **Research data**
“Research data” refers to information, in particular facts or numbers,
collected to be examined and considered as a basis for reasoning, discussion,
or calculation. In a research context, examples of data include statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings and images. The focus is on research data
that is available in digital form.
As indicated in the Guidelines on Data Management in Horizon 2020 (European
Commission, Research & Innovation, October 2015), scientific research data
should be easily:
* _DISCOVERABLE_
The data and associated software produced and/or used in the project should be
discoverable (and readily located), identifiable by means of a standard
identification mechanism (e.g. Digital Object Identifier).
* _ACCESSIBLE_
Information about the modalities, scope, licenses (e.g. licencing framework
for research and education, embargo periods, commercial exploitation, etc.) in
which the data and associated software produced and/or used in the project is
accessible should be provided.
* _ASSESSABLE and INTELLIGIBLE_
The data and associated software produced and/or used in the project should be
easily assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. the minimal datasets are handled
together with scientific papers for the purpose of peer review, data is
provided in a way that judgments can be made about their reliability and the
competence of those who created them).
* _USEABLE beyond the original purpose for which it was collected_
The data and associated software produced and/or used in the project should be
useable by third parties even a long time after the collection of the data
(e.g. the data is safely stored in certified repositories for long-term
preservation and curation; it is stored together with the minimum software,
metadata and documentation to make it useful; the data is useful for the wider
public needs and usable for the likely purposes of non-specialists).
* _INTEROPERABLE to specific quality standards_
The data and associated software(s) produced and/or used in the project should
be interoperable allowing data exchange between researchers, institutions,
organisations, countries, etc.
Some examples of research data include:
* Documents (text, Word), spreadsheets
* Questionnaires, transcripts, codebooks
* Laboratory notebooks, field notebooks, diaries
* Audiotapes, videotapes
* Photographs, films
* Test responses, slides, artefacts, specimens, samples
* Collection of digital objects acquired and generated during the process of research
* Database contents (video, audio, text, images)
* Models, algorithms, scripts
* Contents of an application (input, output, logfiles for analysis software, simulation software, schemas)
* Methodologies and workflows
* Standard operating procedures and protocols.
In addition to the other records to manage, some kinds of data may not be
sharable due to the nature of the records themselves, or due to ethical and
privacy concerns (e.g. preliminary analyses, drafts of scientific papers,
plans for future research, peer reviews, communication with partners, etc.).
Research data also do not include trade secrets, commercial information,
materials that must be held confidential by the researcher until they are
published, or information that could invade personal privacy. Research
records that may also be important to manage during and beyond the project
are: correspondence, project files, technical reports, research reports, etc.
7. **Data sets of the EnDurCrete project**
Projects under Horizon 2020 are required to deposit the research data \- the
data, including associated metadata, needed to validate the results presented
in scientific publications as soon as possible; and other data, including
associated metadata, as specified and within the deadlines laid down in a data
management plan.
At the same time, projects should provide information (via the chosen
repository) about tools and instruments at the disposal of the beneficiaries
and necessary for validating the results, for instance specialised software(s)
or software code(s), algorithms, analysis protocols, etc. Where possible, they
should provide the tools and instruments themselves.
The types of data to be included within the scope of the EnDurCrete Data
Management Plan shall as a minimum cover the types of data that are considered
complementary to material already contained within declared Project
Deliverables. In order to collect the information generated during the
Project, the template for data collection will be circulated periodically
every 6 months. The scope of this template is to detail the research results
that will be developed during the EnDurCrete project, detailing the kind of
results and how they will be managed. The responsibility to define and
describe all non-generic data sets specific to an individual work package lies
with the WP leader.
_Data Set Reference and Name_
Identifier for the data set to be produced. All data sets within this DMP have
been given a unique field identifier and are listed in the table 10.1 (List of
the EnDurCrete project data sets and sharing strategy).
_Data Set Description_
A data set is defined as a structured collection of data in a declared format.
Most commonly a data set corresponds to the contents of a single database
table, or a single statistical data matrix, where every column of the table
represents a particular variable, and each row corresponds to a given member
of the data set in question. The data set may comprise data for one or more
fields. For the purposes of this DMP data sets have been defined by generic
data types that are considered applicable to the EnDurCrete project. For each
data set, the characteristics of the data set have been captured in a tabular
format as enclosed in table 4 (List of the EnDurCrete project data sets and
sharing strategy).
_Standards & Metadata _
Metadata is defined as “data about data”. It refers to structured information
that describes, explains, locates, and facilitates the means to make it easier
to retrieve, use or manage an information resource.
Metadata can be categorised in three types:
* Descriptive metadata describes an information resource for identification and retrieval through elements such as title, author, and abstract.
* Structural metadata documents relationships within and among objects through elements such as links to other components (e.g., how pages are put together to form chapters).
* Administrative metadata manages information resources through elements such as version number, archiving date, and other technical information for the purposes of file management, rights management and preservation.
There are a large number of metadata standards which address the needs of
particular user communities.
_Data Sharing_
During the period when the Project is live, the sharing of data shall be
defined by the configuration rules defined in the access profiles for the
project participants. Each individual project data set item shall be allocated
a “dissemination classification” (i.e. public or confidential) for the
purposes of defining the data sharing restrictions. The classification shall
be an expansion of the system of confidentiality applied to deliverable
reports provided under the EnDurCrete Grant Agreement.
The above levels are linked to the “Dissemination Level” specified for all
EnDurCrete deliverables as follows:
* PU Public
* CO Confidential, only for members of the consortium (including the Commission Services)
* EU-RES Classified Information: RESTREINT UE (Commission Decision 2005/444/EC)
* EU-CON Classified Information: CONFIDENTIEL UE (Commission Decision 2005/444/EC)
* EU-SEC Classified Information: SECRET UE (Commission Decision 2005/444/EC)
All material designated with a PU dissemination level is deemed uncontrolled.
In case the dataset cannot be shared, the reasons for this should be mentioned
(e.g. ethical, rules of personal data, intellectual property, commercial,
privacy-related, or security-related).
Data will be shared when the related deliverable or paper has been made
available in an open access repository. The expectation is that data related
to a publication will be openly shared. However, to allow the exploitation of
any opportunities arising from the raw data and tools, data sharing will
proceed only if all co-authors of the related publication agree. The Lead
Author is responsible for getting approvals and then, with FENIX assistance,
sharing the data and metadata on Zenodo (www.zenodo.org), a popular repository
for research data. The Lead Author will also create an entry on OpenAIRE
(www.openaire.eu) in order to link the publication to the data. A link to the
OpenAIRE entry will then be submitted to the EnDurCrete Website Administrator
(FENIX) by the Lead Author.
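As a non-authoritative illustration, a deposit via Zenodo’s REST API could
look roughly like the following Python sketch (the file name, metadata values
and token are placeholders; the endpoints follow Zenodo’s published API
documentation and should be verified against the current version before use):

```python
import requests

BASE = "https://zenodo.org/api/deposit/depositions"
TOKEN = "YOUR-ZENODO-TOKEN"  # placeholder: a personal access token

# 1. Create an empty deposition.
r = requests.post(BASE, params={"access_token": TOKEN}, json={})
r.raise_for_status()
dep_id = r.json()["id"]

# 2. Upload the data file that all co-authors agreed to share.
with open("dataset.csv", "rb") as fh:
    requests.post(f"{BASE}/{dep_id}/files",
                  params={"access_token": TOKEN},
                  data={"name": "dataset.csv"},
                  files={"file": fh}).raise_for_status()

# 3. Attach minimal metadata, then publish.
metadata = {"metadata": {
    "title": "EnDurCrete example dataset",            # placeholder
    "upload_type": "dataset",
    "description": "Data underlying a project publication (placeholder).",
    "creators": [{"name": "Lead Author"}],
}}
requests.put(f"{BASE}/{dep_id}", params={"access_token": TOKEN},
             json=metadata).raise_for_status()
requests.post(f"{BASE}/{dep_id}/actions/publish",
              params={"access_token": TOKEN}).raise_for_status()
```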
OpenAIRE is an EC-funded initiative designed to promote the open access
policies of the EC and help researchers, research officers and project
coordinators comply with them. OpenAIRE implements the Horizon 2020 Open
Access Mandate for publications and its Open Research Data Pilot and may be
used to reference both the publication and the data. Each EC project has its
own page on OpenAIRE, featuring project information, related project
publications and data sets, and a statistics section.
In case of any questions regarding the Open Access policy of the EC the
representatives of the National Open Access Desk for OpenAIRE should be
contacted.
_Data archiving and preservation_
Both Zenodo and OpenAIRE are purpose-built services that aim to provide
archiving and preservation of long-tail research data. In addition, the
EnDurCrete website, linking back to OpenAIRE, is expected to be available for
at least 2 years after the end of the Project. At the formal Project closure
all the data material that has been collated or generated within the Project
and classified for archiving shall be copied and transferred to a digital
archive (Project Coordinator responsibility).
The document structure and type definition will be preserved as defined in the
document breakdown structure and the work package groupings specified. At the
time of document creation, the document will be designated as a candidate data
item for future archiving. The process of archiving will be based on a data
extract performed within 12 weeks of the formal closure of the EnDurCrete
project.
The archiving process shall create unique file identifiers by the
concatenation of “metadata” parameters for each data type. The metadata index
structure shall be formatted in the metadata order. This index file shall be
used as an inventory record of the extracted files, and shall be validated by
the associated WP leader.
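As an illustration of this identifier scheme, the following sketch
concatenates a hypothetical set of metadata parameters (the actual parameter
set and separator are not fixed by this plan):

```python
def archive_identifier(work_package: str, data_type: str,
                       version: str, archive_date: str) -> str:
    """Concatenate metadata parameters, in metadata order, into one identifier."""
    return "_".join([work_package, data_type, version, archive_date])

# Each extracted file would receive one such identifier, and one line per file
# would be written to the index (inventory) file validated by the WP leader.
print(archive_identifier("WP5", "testdata", "v1", "2020-06-30"))
# -> WP5_testdata_v1_2020-06-30
```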
8. **Technical requirements of data sets**
The applicable data sets are restricted to the following data types for the
purposes of archiving. The technical characteristics of each data set are
described in the following sections. The copyright with respect to all data
types shall be subject to the IPR clauses in the Grant Agreement but shall be
considered royalty free. The use of file compression utilities, such as
“WinZip”, is prohibited. No data files shall be encrypted.
1. **Engineering CAD drawings**
The .dwg file format is one of the most commonly used design data formats,
found in nearly every design environment. It signifies compatibility with
AutoCAD technology. Autodesk created .dwg in 1982 with the launch of its first
version of AutoCAD software. It contains all the pieces of information a user
enters, such as: Designs, Geometric data, Maps, Photos.
2. **Static graphical images**
Graphical images shall be defined as any digital image irrespective of the
capture source or subject matter. Images should be composed such that they
contain only objects that are directly related to EnDurCrete activity and do
not breach the IPR of any third parties.
Image files are composed of digital data and can be of two primary formats,
“raster” or “vector”. It is necessary to represent data in the rasterised
state for use on computer displays or for printing. Once rasterised, an image
becomes a grid of pixels, each of which has a number of bits to designate its
colour equal to the colour depth of the device displaying it. The EnDurCrete
project shall only use raster-based image files. The allowable static image
file formats are JPEG and PNG.
There is normally a direct positive correlation between image file size and
the number of pixels in an image and the colour depth (bits per pixel) used in
the image. Compression algorithms can create an approximate representation of
the original image in a smaller number of bytes that can be expanded back to
its uncompressed form with a corresponding decompression algorithm.
Compression tools shall not be used unless absolutely necessary.
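For illustration, a non-compliant raster image could be converted to one of
the allowable formats with a few lines of Python, assuming the Pillow library
is available (file names are placeholders):

```python
from PIL import Image

# Convert an arbitrary raster image to PNG, a lossless allowable format.
with Image.open("capture.bmp") as im:
    im.convert("RGB").save("capture.png", format="PNG")
```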
3. **Animated graphical images**
Graphic animation is a variation of stop motion and possibly more conceptually
associated with traditional flat cell animation and paper drawing animation,
but still technically qualifying as stop motion consisting of the animation of
photographs (in whole or in parts) and other non-drawn flat visual graphic
material. The allowable animated graphical image file formats are AVI, MPEG,
MP4, and MOV. The WP leader shall determine the most suitable choice of format
based on equipment availability and any other factors. This is mainly valid
for the EnDurCrete project promo video, which is expected to contain animated
graphical images, infographics and on-site interviews.
Table 2 - Video formats
<table>
<tr>
<th>
**Format**
</th>
<th>
**File**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**MPEG**
</td>
<td>
.mpg
.mpeg
</td>
<td>
MPEG. Developed by the Moving Pictures Expert Group. The first popular video
format on the web. Used to be supported by all browsers, but it is not
supported in HTML5 (See MP4).
</td> </tr>
<tr>
<td>
**AVI**
</td>
<td>
.avi
</td>
<td>
AVI (Audio Video Interleave). Developed by Microsoft. Commonly used in video
cameras and TV hardware. Plays well on Windows computers, but not in web
browsers.
</td> </tr>
<tr>
<td>
**WMV**
</td>
<td>
.wmv
</td>
<td>
WMV (Windows Media Video). Developed by Microsoft. Commonly used in video
cameras and TV hardware. Plays well on Windows computers, but not in web
browsers.
</td> </tr>
<tr>
<td>
**QuickTime**
</td>
<td>
.mov
</td>
<td>
QuickTime. Developed by Apple. Commonly used in video cameras and TV hardware.
Plays well on Apple computers, but not in web browsers. (See MP4)
</td> </tr>
<tr>
<td>
**RealVideo**
</td>
<td>
.rm
.ram
</td>
<td>
RealVideo. Developed by Real Media to allow video streaming
with low bandwidths. It is still used for online video and Internet TV, but
does not play in web browsers.
</td> </tr>
<tr>
<td>
**Flash**
</td>
<td>
.swf
.flv
</td>
<td>
Flash. Developed by Macromedia. Often requires an extra component (plug-in) to
play in web browsers.
</td> </tr>
<tr>
<td>
**Ogg**
</td>
<td>
.ogg
</td>
<td>
Theora Ogg. Developed by the Xiph.Org Foundation. Supported by HTML5.
</td> </tr>
<tr>
<td>
**WebM**
</td>
<td>
.webm
</td>
<td>
WebM. Developed by the web giants, Mozilla, Opera, Adobe, and Google.
Supported by HTML5.
</td> </tr>
<tr>
<td>
**MPEG-4 or MP4**
</td>
<td>
.mp4
</td>
<td>
MP4. Developed by the Moving Pictures Expert Group. Based on QuickTime.
Commonly used in newer video cameras and TV hardware. Supported by all HTML5
browsers.
Recommended by YouTube.
</td> </tr> </table>
4. **Audio data**
An audio file format is a file format for storing digital audio data on a
computer system. The bit layout of the audio data (excluding metadata) is
called the audio coding format and can be uncompressed, or compressed to
reduce the file size, often using lossy compression. The data can be a raw
bitstream in an audio coding format, but it is usually embedded in a container
format or an audio data format with a defined storage layer. The allowable
audio file formats are MP3 and MP4. This is mainly valid for the EnDurCrete
project promo video, which is expected to contain interviews with key
partners, voice-over and music.
Table 3 - Audio formats
<table>
<tr>
<th>
**Format**
</th>
<th>
**File**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**MIDI**
</td>
<td>
.midi
.mid
</td>
<td>
MIDI (Musical Instrument Digital Interface). Main format for all electronic
music devices like synthesizers and PC sound cards. MIDI files do not contain
sound, but digital notes that can be played by electronics. Plays well on all
computers and music hardware, but not in web browsers.
</td> </tr>
<tr>
<td>
**RealAudio**
</td>
<td>
.rm
.ram
</td>
<td>
RealAudio. Developed by Real Media to allow streaming of audio with low
bandwidths. Does not play in web browsers.
</td> </tr>
<tr>
<td>
**WMA**
</td>
<td>
.wma
</td>
<td>
WMA (Windows Media Audio). Developed by Microsoft. Commonly used in music
players. Plays well on Windows computers, but not in web browsers.
</td> </tr>
<tr>
<td>
**AAC**
</td>
<td>
.aac
</td>
<td>
AAC (Advanced Audio Coding). Developed by Apple as the default format for
iTunes. Plays well on Apple computers, but not in web browsers.
</td> </tr>
<tr>
<td>
**WAV**
</td>
<td>
.wav
</td>
<td>
WAV. Developed by IBM and Microsoft. Plays well on Windows, Macintosh, and
Linux operating systems. Supported by HTML5.
</td> </tr>
<tr>
<td>
**Ogg**
</td>
<td>
.ogg
</td>
<td>
Theora Ogg. Developed by the Xiph.Org Foundation. Supported by HTML5.
</td> </tr>
<tr>
<td>
**MP3**
</td>
<td>
.mp3
</td>
<td>
MP3 files are actually the sound part of MPEG files. MP3 is the most popular
format for music players. Combines good compression (small files) with high
quality. Supported by all browsers.
</td> </tr>
<tr>
<td>
**MPEG-4 or MP4**
</td>
<td>
.mp4
</td>
<td>
MP4. Developed by the Moving Pictures Expert Group. Based on QuickTime.
Commonly used in newer video cameras and TV hardware. Supported by all HTML5
browsers. Recommended by YouTube.
</td> </tr> </table>
**8.5 Textual data**
A text file is structured as a sequence of lines of electronic text. These
text files shall not contain control characters other than line breaks, and in
particular no end-of-file marker. In principle, the least complicated textual
file format shall be used as the first choice.
On Microsoft Windows operating systems, a file is regarded as a text file if
the suffix of the name of the file is "txt". However, many other suffixes are
used for text files with specific purposes. For example, source code for
computer programs is usually kept in text files that have file name suffixes
indicating the programming language in which the source is written. Most
Windows text files use "ANSI", "OEM", "Unicode" or "UTF-8" encoding.
Prior to the advent of Mac OS X, the classic Mac OS system regarded the
content of a file to be a text file when its resource fork indicated that the
type of the file was "TEXT". Lines of Macintosh text files are terminated with
CR characters.
Being certified Unix, macOS uses POSIX format for text files. Uniform Type
Identifier (UTI) used for text files in macOS is "public.plain-text".
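As a minimal illustration of the encoding and line-ending differences
described above, the following Python sketch (with a hypothetical file name)
reads a text file as UTF-8 with universal newline handling, so that CR, LF and
CRLF terminators are all normalised:

```python
# Minimal sketch: read a text file as UTF-8 with universal newline handling.
# "notes.txt" is a hypothetical file name used for illustration only.

def read_lines(path: str) -> list[str]:
    # newline=None (the default) enables universal newlines, so "\r",
    # "\n" and "\r\n" line terminators are all translated to "\n".
    with open(path, encoding="utf-8", newline=None) as handle:
        return handle.read().splitlines()

if __name__ == "__main__":
    for line in read_lines("notes.txt"):
        print(line)
```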
**8.6 Numeric data**
Numerical data is information that often represents a measured physical
parameter. It shall always be captured in number form. Other types of data can
appear to be in number form (e.g. a telephone number); however, these should
not be confused with true numerical data, which can be processed using
mathematical operators.
**8.7 Process and test data**
Standard Test Data Format (STDF) is a proprietary file format originating
within the semiconductor industry for test information that has since become a
standard widely used throughout many industries. It is a commonly used format
produced for/by automatic test equipment (ATE). STDF is a binary format, but
it can be converted either to an ASCII format known as ATDF or to a
tab-delimited text file. Software tools exist for processing STDF-generated
files and performing statistical analysis on a population of tested devices.
EnDurCrete innovation development shall make use of this file type for system
testing.
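As a sketch of the kind of statistical processing mentioned above, the
following Python fragment assumes the binary STDF file has already been
converted to a tab-delimited text file; the file name and column names are
hypothetical and would have to match the actual converted output:

```python
# Minimal sketch: analyse an STDF file previously converted to tab-delimited
# text. The file name and the columns "device_id", "parameter" and "value"
# are hypothetical placeholders for the actual converted output.
import pandas as pd

df = pd.read_csv("test_results.txt", sep="\t")

# Simple per-parameter statistics across the population of tested devices.
summary = df.groupby("parameter")["value"].agg(
    ["count", "mean", "std", "min", "max"]
)
print(summary)
```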
**8.8 Adobe Systems**
Portable Document Format (PDF) is a file format developed by Adobe Systems for
representing documents in a manner that is independent of the original
application software, hardware, and operating system used to create those
documents. A PDF file can describe documents containing any combination of
text, graphics, and images in a device-independent and resolution-independent
format. These documents can be one page or thousands of pages, very simple or
extremely complex, with a rich use of fonts, graphics, colour, and images. PDF
is an open standard, and anyone may write applications that read or write PDFs
royalty-free. PDF files are especially useful for documents such as magazine
articles, product brochures, or flyers in which the original graphic
appearance is to be preserved online.
**9 GDPR compliance**
At every stage, the EnDurCrete project management and Project Consortium will
ensure that the Data Management Plan is in line with the norms of the EU and
Commission [as expressed in the General Data Protection Regulation (GDPR)
(Regulation (EU) 2016/679)] and will promote best practice in Data Management.
The GDPR comes into force on 25 May 2018.
The responsibility for the protection and use of personal data 2 lies with
the project partner collecting the data. Questionnaire answers shall be
anonymized at as early a stage of the process as possible, and data making it
possible to connect the answers to individual persons shall be destroyed. The
consent of the participant will be asked in all questionnaires conducted
within the EnDurCrete project. This will include a description of how and why
the data is to be used. The consent must be clear and distinguishable from
other matters and provided in an intelligible and easily accessible form,
using clear and plain language. It must be as easy to withdraw consent as it
is to give it.
The questionnaire participants will not include children or other groups
requiring a supervisor. Furthermore, when asking for somebody’s contact
information, the asking party shall explain why this information is requested
and for what purposes it will be used.
_Controller and Processor_
Controller means the natural or legal person, public authority, agency or
other body which, alone or jointly with others, determines the purposes and
means of the processing of personal data.
Processor refers to a natural or legal person, public authority, agency or
other body which processes personal data on behalf of the controller.
_Data Protection Officer_
The Data Protection Officer (DPO) is responsible for overseeing data
protection strategy and implementation to ensure compliance with GDPR
requirements. Under the GDPR, there are three main scenarios where the
appointment of a DPO by a controller or processor is mandatory:
* The processing is carried out by a public authority
* The core activities of the controller or processor consist of processing operations which require regular and systematic monitoring of data subjects on a large scale; or
* The core activities of the controller or processor consist of processing on a large scale of sensitive data or data relating to criminal convictions / offences.
Each EnDurCrete partner shall assess its own data processing activities to
understand whether they fall within the scope of the requirements set out
above. If they do, then it will be important to either fulfil the DPO position
internally or from an external source. For those organisations to whom the
requirements do not apply, they may still choose to appoint a DPO. If they
choose not to appoint a DPO, then it is recommended to document the reasoning
behind that decision.
_Data protection_
European citizens have a fundamental right to privacy. In order to protect
this right of individual data subjects, anonymisation and pseudonymisation can
be used.
Anonymisation refers to personal data processing with the aim of irreversibly
preventing the identification of the individual to whom it relates. For the
anonymized types of data, the GDPR does not apply, as long as the data subject
cannot be re-identified, even by matching his/her data with other information
held by third parties.
Pseudonymisation refers to the processing of personal data in such a manner
that the data can no longer be attributed to a specific data subject without
the use of additional information. 3 To pseudonymize a data set, the
additional information must be kept separately and subject to technical and
organizational measures to ensure non-attribution to an identified or
identifiable person. In other words, pseudonymized data constitutes the basic
privacy-preserving level allowing for some data sharing; it represents data
where direct identifiers (e.g. names) or quasi-identifiers (e.g. unique
combinations of date and zip code) are removed or replaced using a
substitution algorithm, impeding correlation of readily associated data to the
individual’s identity. For such data, the GDPR applies and appropriate
compliance must be ensured.
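Although, as noted below, EnDurCrete will not itself apply pseudonymisation, a
minimal sketch of the technique just described could look as follows in
Python; the field names are illustrative assumptions, and the secret key would
have to be generated and stored separately under appropriate technical and
organisational measures:

```python
# Minimal sketch of pseudonymisation: direct identifiers are replaced by
# keyed hashes. Field names and records are hypothetical; the secret key
# must be stored separately from the pseudonymised data.
import hmac
import hashlib
import secrets

secret_key = secrets.token_bytes(32)  # keep apart, never with the data

def pseudonymise(identifier: str) -> str:
    # HMAC-SHA256, so the mapping cannot be reversed without the key.
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "answer": "agree"}
pseudonymised = {"subject_id": pseudonymise(record["name"]),
                 "answer": record["answer"]}
print(pseudonymised)
```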
Due to the limited amount and less harmful nature of the personal data that is
collected within the EnDurCrete project, neither pseudonymisation nor
anonymisation will be used. Other means of data security will be used to
protect data collected in the framework of the Project.
_Breach Notification_
Under the GDPR, breach notification will become mandatory in all member states
where a data breach is likely to “result in a risk for the rights and freedoms
of individuals”. This must be done within 72 hours of first having become
aware of the breach. Data processors will also be required to notify their
customers, the controllers, “without undue delay” after first becoming aware
of a data breach.
_Right to be Forgotten_
Also known as Data Erasure, the right to be forgotten entitles the data
subject to have the data controller erase his/her personal data, cease further
dissemination of the data, and potentially have third parties halt processing
of the data. The conditions for erasure include the data no longer being
relevant to the original purposes for processing, or a data subject
withdrawing consent.
It should also be noted that this right requires controllers to compare the
subjects' rights to "the public interest in the availability of the data" when
considering such requests. If a data subject wants his/her personal data to be
removed from a questionnaire, the non-personal data shall remain in the
analysis of the questionnaire.
_Data portability_
The GDPR introduces data portability, which refers to the right of a data
subject to receive the personal data concerning them, which they have
previously provided, in a 'commonly used and machine-readable format', and the
right to transmit that data to another controller.
The personal data collected within EnDurCrete project will be in electronic
form, mostly in Microsoft Excel file forms .xls or .xlsx. In case the data
subject requests to transmit his/her data to another controller there should
be no technical limitations for providing them.
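As an illustration of how such a request could be served from the Excel files
mentioned above, the following Python sketch (with hypothetical file and
column names) extracts the records of a single data subject and exports them
to CSV, a commonly used machine-readable format:

```python
# Minimal sketch: export one data subject's records from an Excel workbook
# to CSV for a portability request. File and column names are hypothetical;
# reading .xlsx with pandas requires the openpyxl package to be installed.
import pandas as pd

data = pd.read_excel("questionnaire_data.xlsx")

subject_email = "jane.doe@example.org"  # the requesting data subject
subject_rows = data[data["email"] == subject_email]

subject_rows.to_csv("portability_export.csv", index=False)
```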
_Privacy by design and by default_
Privacy by design refers to the obligation of the controller to implement
appropriate technical and organisational measures, such as pseudonymisation,
which are designed to implement data protection principles, such as data
minimisation, in an effective manner and to integrate the necessary safeguards
into the processing.
Privacy by default means that the controller shall implement appropriate
technical and organisational measures for ensuring that only personal data
which are necessary for each specific purpose of the processing are processed.
That obligation applies to:
* the amount of personal data collected,
* the extent of personal data processing,
* the period of personal data storage, and
* the accessibility of personal data.
In particular, such measures shall ensure that by default personal data are
not made accessible without the individual’s intervention to an indefinite
number of natural persons. 4
Personal data collected during the EnDurCrete project will be used only by
project partners, including linked third parties, and only for purposes needed
for the implementation of the project. Also within the EnDurCrete project, if
a member of the project consortium asks for personal data, the partner holding
the data should consider whether that data is needed for the implementation of
the Project. If personal data is provided, the data shall not be distributed
further within or outside the Project.
_Records of processing activities_
Records of data processing and plans for the use of data will be kept by the
WP Leaders of those work packages that collect personal data.
**10 Expected research data of the EnDurCrete project**
Expected research data of the EnDurCrete project is listed below. The table
template will be circulated periodically in order to monitor the data sets and
set the strategy for their sharing.
Table 4 - List of the EnDurCrete project data sets and sharing strategy
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level** 5
</th> </tr>
<tr>
<td>
WP1
Design requirements for structures exposed to aggressive environment
</td>
<td>
RINA-C
</td>
<td>
Task 1.1: Requirements and design process for marine environment
</td>
<td>
M1-M5
</td>
<td>
RINA-C
</td>
<td>
List of technical directives, surveys, standards and regulations for concrete
materials in the target applications
</td>
<td>
Report describing the development of guideline documents for concrete
structures exposed to the different aggressive environments (marine,
continental and offshore).
</td>
<td>
.pdf
</td>
<td>
PU
</td> </tr>
<tr>
<td>
Design requirements for concrete structures exposed to marine environment
</td>
<td>
Report reviewing actual technical directives, surveys, standards and
regulations applying to concrete materials for harbours and maritime
construction at European level, as well as some key national documents.
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 1.2: Requirements and design process for continental
environment (road
</td>
<td>
M1-M6
</td>
<td>
RINA-C
</td>
<td>
List of technical directives, surveys, standards and regulations for concrete
materials in
</td>
<td>
Report describing the development of guideline documents for concrete
structures exposed to the different aggressive
</td>
<td>
.pdf
</td>
<td>
PU
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
infrastructures)
</th>
<th>
</th>
<th>
</th>
<th>
the target applications
</th>
<th>
environments (marine, continental and offshore).
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Design requirements for concrete structures exposed to continental environment
(road infrastructures)
</th>
<th>
Report reviewing actual technical directives, surveys, standards and
regulations applying to concrete materials for continental construction at
European level, as well as some key national documents.
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 1.3: Requirements and design process for offshore platforms
</th>
<th>
M1-M5
</th>
<th>
KVAERNER
</th>
<th>
List of technical directives, surveys, standards and regulations for concrete
materials in the target applications
</th>
<th>
Report describing the development of guideline documents for concrete
structures exposed to the different aggressive environments (marine,
continental and offshore).
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<th>
Offshore design requirements
</th>
<th>
Design loads, design process, design requirements, environmental exposure
scenario, concrete constituencies and composition, references
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* Confidential data:
Reserved Area on the EnDurCrete website
* Public data:
EnDurCrete website
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
EnDurCrete website and RINA-C server
</td>
<td>
**Data management**
**Responsibilities**
</td>
<td>
Eriselda Lirza
</td> </tr> </table>
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level**
</th> </tr>
<tr>
<td>
WP2
Development and characterisation of new green and low-cost cementitious
materials
</td>
<td>
HC
</td>
<td>
Task 2.1: Optimisation of a novel Portland Composite Cement, including
sustainable supplementary cementitious materials
</td>
<td>
M2-M7
</td>
<td>
HC
</td>
<td>
D2.1 Report on in depth characterisation of Portland Composite
Cement components
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Results of the characterisation of individual cement components
</td>
<td>
Raw data and results of the various experiments carried out within T2.1 (TGA,
XRD, calorimetry, PSD, trace elements and heavy metals, SEM images)
</td>
<td>
.xlsx
.png
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 2.2: Development of customised separate grinding technology for each PCC
component
</td>
<td>
M5-M8
</td>
<td>
HC
</td>
<td>
D2.2 Report on optimization of most promising mixes to be further investigated
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Results of the cement development
</td>
<td>
Raw data and results of the tested cements within T2.2 (strength, workability,
PSD)
</td>
<td>
.xlsx
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 2.3:
Characterization of the novel cementitious materials
</td>
<td>
M5-M8
</td>
<td>
NTNU
</td>
<td>
Results of the hydration study: phase assemblage and reaction degree of the
hydrated novel binders
</td>
<td>
Raw data and results of the various experiments performed within T2.3 (TGA,
XRD, calorimetry, chemical shrinkage, rheological
measurements, SEM-EDS,
MIP)
</td>
<td>
.xlsx
.docx
.tif
.jpg
</td>
<td>
CO
</td> </tr>
<tr>
<td>
D2.3 Report on rheological
</td>
<td>
This report describes the results of the rheological
</td>
<td>
.docx .pdf
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
measurements and packing
</th>
<th>
tests performed in T2.3
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
D2.4 Report on the hydration study
</th>
<th>
This report describes the results of the hydration tests performed in T2.3
</th>
<th>
.docx .pdf
</th>
<th>
CO
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* Confidential data:
Reserved Area on the EnDurCrete website and Heidelberg Cement server
* Raw data and results:
Heidelberg Cement
server
* Public data: EnDurCrete website
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
EnDurCrete website and Heidelberg
Cement server
</td>
<td>
**Data management**
**Responsibilities**
</td>
<td>
Gerd Bolte
Arnaud Muller
</td> </tr>
<tr>
<td>
**WP number and name**
</td>
<td>
**WP**
**lead**
</td>
<td>
**Task number and name**
</td>
<td>
**Duration**
</td>
<td>
**Task lead**
</td>
<td>
**Dataset name**
</td>
<td>
**Dataset description**
</td>
<td>
**Format**
</td>
<td>
**Level**
</td> </tr>
<tr>
<td>
WP3
Innovative concrete technologies, including nano/ microfillers, coatings and
reinforcement
</td>
<td>
SIKA
</td>
<td>
Task 3.1: Development and optimization of smart corrosion inhibitors, based on
nano-modified clays
</td>
<td>
M1-M15
</td>
<td>
IBOX
</td>
<td>
Protocols with the synthesis description
</td>
<td>
Files with the description of the different steps to follow for the
development and optimization of smart corrosion inhibitors,
based on nano-modified
clays
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Characterization graphs
</td>
<td>
Graphs with the characterization of the synthesized products applying
different techniques, such as X-ray
</td>
<td>
.pdf
.jpeg
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
diffraction, thermogravimetric analysis, infrared or ultra violet spectroscopy
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Task 3.2: Development and optimization of novel self-sensing
carbon-based green micro-fillers
</th>
<th>
M1-M15
</th>
<th>
UNIVPM
</th>
<th>
Analysis of carbon-based green micro-fillers behaviour in cement
</th>
<th>
Composition and properties, graphs
</th>
<th>
.xlsx;
.pdf;
.docx
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Test data
</th>
<th>
Data of tests performed during the characterization of the self-sensing
properties
</th>
<th>
.jpg;
.xlsx;
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 3.3: Development and optimization of new multi-functional protective
coatings
</th>
<th>
M1-M15
</th>
<th>
AM-SOLUTIONS
</th>
<th>
Development of self-healing coatings
</th>
<th>
Images coming from microscopy techniques for the evaluation of microcapsules
and coatings.
</th>
<th>
.png
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Development of self-cleaning coatings
</th>
<th>
Contact angle measurements for the evaluation of self-cleaning performance of
particles and coatings.
</th>
<th>
.xlsx
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Development of anti-moulding coatings
</th>
<th>
Measurements and photos of anti-moulding properties of particles and coatings.
</th>
<th>
.xlsx, .png
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Development of light-reflective coatings
</th>
<th>
Measurements of thermal behaviour of the coatings under IR lamps.
</th>
<th>
.xlsx
</th>
<th>
CO
</th> </tr>
<tr>
<th>
D3.6 Development and evaluation of new multi-functional
</th>
<th>
Report on self-healing agents for developed EnDurCrete coatings
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
protective coatings
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Task 3.4: Evaluation of compatibility of additives in concrete
</th>
<th>
M4-M12
</th>
<th>
SIKA
</th>
<th>
\-
</th>
<th>
\-
</th>
<th>
\-
</th>
<th>
\-
</th> </tr>
<tr>
<th>
\-
</th>
<th>
\-
</th>
<th>
\-
</th>
<th>
\-
</th> </tr>
<tr>
<th>
Sub Task 3.5.1: Development of mix designs according to
requirements defined in WP1
</th>
<th>
M3-M9
</th>
<th>
HC
</th>
<th>
D3.1 Report on optimized mix designs using novel binders
</th>
<th>
Report
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Results of the concrete development
</th>
<th>
Raw data and results of concrete development (mix design/concrete composition,
strength, workability and durability results)
</th>
<th>
.xlsx
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Sub Task 3.5.2: Implementation of novel additives
</th>
<th>
M6-M12
</th>
<th>
HC
</th>
<th>
Implementation of novel additives in concrete
</th>
<th>
Raw data and results of the impact of the novel additive technologies (mix
design/concrete composition, resulting concrete properties)
</th>
<th>
.xlsx
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Sub Task 3.5.3:
Preparation of concrete specimens for lab-scale testing
</th>
<th>
M9-M15
</th>
<th>
ACCIONA
</th>
<th>
\-
</th>
<th>
\-
</th>
<th>
\-
</th>
<th>
\-
</th> </tr>
<tr>
<th>
Sub Task 3.5.4:
Validation, final tuning and roll out to WP6
</th>
<th>
M20-
M22
</th>
<th>
HC
</th>
<th>
D3.9 Report on optimized mix designs using novel binders and additives ready
for upscaling in WP6
</th>
<th>
Report
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Optimized mix design
</th>
<th>
Raw data and results of
</th>
<th>
.xlsx
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
using additives
</th>
<th>
concrete development with additives (mix design/concrete composition,
resulting concrete properties)
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Task 3.6: Design and integration of the multifunctional self-monitoring
reinforcing system
</th>
<th>
M1-M15
</th>
<th>
RINA-C
</th>
<th>
Textiles datasheets
</th>
<th>
Datasheets of textiles selected as candidates for the application
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Optical fiber sensors datasheets
</th>
<th>
Datasheets of optical fiber sensors selected as candidates for the application
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Test images
</th>
<th>
Pictures related to the
technological embedding
tests
</th>
<th>
.jpg
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Test videos
</th>
<th>
Videos of the
technological embedding
tests
</th>
<th>
video
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Sub Task 3.6.1: Design of multifunctional self-monitoring reinforcing system
</th>
<th>
M1-M12
</th>
<th>
RINA-C
</th>
<th>
Textiles datasheets
</th>
<th>
Datasheets of textiles selected as candidates for the application
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Optical fiber sensors datasheets
</th>
<th>
Datasheets of optical fiber sensors selected as candidates for the application
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Subtask 3.6.2: Integration of the multifunctional self-monitoring reinforcing
system
</th>
<th>
M6-M15
</th>
<th>
NTS
</th>
<th>
Test images
</th>
<th>
Pictures related to the
technological embedding
tests
</th>
<th>
.jpg
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Test videos
</th>
<th>
Videos of the
technological embedding
tests
</th>
<th>
video
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th>
<th>
* Confidential data:
Reserved Area on the EnDurCrete website
* Public data:
EnDurCrete website, Zenodo
</th>
<th>
**Data Archiving and preservation**
</th>
<th>
EnDurCrete website and servers of the respective partners
</th>
<th>
**Data management**
**Responsibilities**
</th>
<th>
TBD
</th> </tr>
<tr>
<td>
**WP number**
**and name**
</td>
<td>
**WP**
**lead**
</td>
<td>
**Task number and name**
</td>
<td>
**Duration**
</td>
<td>
**Task lead**
</td>
<td>
**Dataset name**
</td>
<td>
**Dataset description**
</td>
<td>
**Format**
</td>
<td>
**Level**
</td> </tr>
<tr>
<td>
WP4
Multifunctional and multiscale modelling and simulations of
materials, components and structures
</td>
<td>
CEA
</td>
<td>
Task 4.1: Completed EnDurCrete MODA template
</td>
<td>
M1-M41
</td>
<td>
RINA-C
</td>
<td>
EnDurCrete MODA
</td>
<td>
Modelling work flow and description of the single modelling steps
</td>
<td>
.docx
.ppt
</td>
<td>
PU
</td> </tr>
<tr>
<td>
Task 4.2: Multiscale modelling of the ageing mechanical and diffusive
properties of the new materials due to hydration and degradation
</td>
<td>
M3-M30
</td>
<td>
CEA
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Sub task 4.2.1:
Simulation of the phase assemblage of the novel binders
</td>
<td>
M3-M24
</td>
<td>
NTNU
</td>
<td>
Results of the modelling of the phase assemblage
</td>
<td>
Input data, modelling code, and results of the modelling of the phase
assemblage performed within T4.2.1
</td>
<td>
.xlsx .docx
</td>
<td>
CO
</td> </tr>
<tr>
<td>
D4.1 Report on modelling of the
</td>
<td>
This report presents the results of the modelling
</td>
<td>
.docx
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
phase assemblage of the novel binders
</th>
<th>
activities performed in T4.2.1, which constitutes inputs for T4.2.2 (D4.2)
</th>
<th>
.pdf
</th>
<th>
</th> </tr>
<tr>
<th>
Subtask 4.2.2:
Multiscale modelling of the material mechanical and diffusive properties at
the cement paste, mortar and concrete scale
</th>
<th>
M3-M30
</th>
<th>
CEA
</th>
<th>
D4.2 Report on multiscale analytical modelling at the cement paste, mortar and
concrete scale
</th>
<th>
Report describes
modelling methods and results
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Results of the multiscale analytical modelling
</th>
<th>
Input data and results of the modelling (evolution of the effective properties
as a function of phase assemblage)
</th>
<th>
.xlsx
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 4.3: Computational analyses of micromesostructures for model testing and
corrosion and cracking investigations
</th>
<th>
M9-M36
</th>
<th>
CEA
</th>
<th>
D4.3 Report on computational analyses of micromesostructures
</th>
<th>
Report describes modelling and simulations at micromeso scale
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Results of the computational analyses of micromesostructures
</th>
<th>
Results of the computational analyses: evolution of effective properties,
degradation (carbonation), cracking (carbonation-induced corrosion)
</th>
<th>
.xlsx,
.png
.jpg
.gif
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 4.4: Computational
analyses of macrostructures for service life estimation
</th>
<th>
M30-
M39
</th>
<th>
RINA-C
</th>
<th>
Report on computational analyses of macrostructures for service life
estimation, including
corrosion phenomena
and critical
</th>
<th>
This report describes macro modelling and simulations, aiming ultimately at
service life prediction of critical infrastructures.
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
environments
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
</td>
<td>
* Confidential data:
Reserved Area on the EnDurCrete website
* Public data:
EnDurCrete website, Zenodo
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
EnDurCrete website and servers of the respective partners
</td>
<td>
**Data management**
**Responsibilities**
</td>
<td>
Benoît Bary
</td> </tr>
<tr>
<td>
**WP number**
**and name**
</td>
<td>
**WP**
**lead**
</td>
<td>
**Task number and name**
</td>
<td>
**Duration**
</td>
<td>
**Task lead**
</td>
<td>
**Dataset name**
</td>
<td>
**Dataset description**
</td>
<td>
**Format**
</td>
<td>
**Level**
</td> </tr>
<tr>
<td>
WP5
Lab-scale performance
testing and development of monitoring tools for concrete
components & structures
</td>
<td>
UNIVPM
</td>
<td>
Task 5.1: Lab-scale performance testing
</td>
<td>
M12-
M20
</td>
<td>
ZAG
</td>
<td>
Lab-scale development of self-healing coatings
</td>
<td>
Images coming from microscopy techniques for the evaluation of the coatings.
</td>
<td>
.png
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Reports on air permeability tests
</td>
<td>
Reports
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Reports on carbonation tests
</td>
<td>
Reports
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Reports on chloride diffusion tests
</td>
<td>
Reports
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Reports on water absorption and penetration of water tests
</td>
<td>
Reports
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Reports on porosity tests
</td>
<td>
Reports
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Reports on FT and FTS tests
</td>
<td>
Reports
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Reports on corrosion
</td>
<td>
Reports
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
tests
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
D5.1 Report on durability testing
</th>
<th>
Report on the durability tests results performed in T5.1, with the goal of
assessing the durability of novel concrete EnDurCrete solutions (against
several benchmarks) and giving inputs for computational model calibration.
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 5.2: Calibration and laboratory testing of self-
sensing/monitoring properties
</th>
<th>
M12-
M21
</th>
<th>
UNIVPM
</th>
<th>
Test data
</th>
<th>
Data of tests performed during the metrological characterization of the self-
sensing/monitoring properties
</th>
<th>
.mat, .json, .xls, .pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 5.3: Advanced NDT tools for nonintrusive in-field inspection
</th>
<th>
M12-
M30
</th>
<th>
UNIVPM
</th>
<th>
Test data
</th>
<th>
Data of tests performed during the metrological characterization of the self-
sensing/monitoring properties
</th>
<th>
.mat, .json, .xls, .pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Sub Task 5.3.1: NDT solutions for cracks/sub-surface damages and moisture
</th>
<th>
M12-
M30
</th>
<th>
UNIVPM
</th>
<th>
Test data
</th>
<th>
Data of tests performed during the metrological characterization of the self-
sensing/monitoring properties
</th>
<th>
.jpg, .mat, .json, .xls, .pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Test data
</th>
<th>
Data of tests performed during the metrological characterization of the self-
sensing/monitoring properties
</th>
<th>
.mat, .json, .xls, .pdf
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
Sub Task 5.3.2: Ion migration under electrical field
</th>
<th>
M12-
M30
</th>
<th>
CEA
</th>
<th>
Test data
</th>
<th>
Data of tests performed during the ion migration
under electrical field measurement
</th>
<th>
.jpg
.txt
.xlsx
.mphbin
.mphtxt
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Ion migration under electrical field study
</th>
<th>
Report on the feasibility of Ion migration under electrical field
measurement as NDT solutions
</th>
<th>
.doc
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Sub Task 5.3.3:
Electrical resistivity
measurement
</th>
<th>
M12-
M30
</th>
<th>
ACCIONA
</th>
<th>
Electrical resistivity
measurement
</th>
<th>
Evaluation of the
electrical resistivity in EnDurCrete concretes
</th>
<th>
.docx
</th>
<th>
CO
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
EnDurCrete project website in Reserved
Area
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
Regular backup of data on server, managed by IT
departments and
EnDurCrete website
</td>
<td>
**Data management**
**Responsibilities**
</td>
<td>
Gian Marco Revel;
Paolo Chiariotti
</td> </tr>
<tr>
<td>
**WP number**
**and name**
</td>
<td>
**WP**
**lead**
</td>
<td>
</td>
<td>
**Task number and name**
</td>
<td>
</td>
<td>
**Duration**
</td>
<td>
**Task lead**
</td>
<td>
**Dataset name**
</td>
<td>
**Dataset description**
</td>
<td>
**Format**
</td>
<td>
**Level**
</td> </tr>
<tr>
<td>
WP6
Prototyping, demonstration and solutions performance validation
</td>
<td>
ACCIONA
</td>
<td>
</td>
<td>
Task 6.1:
Demonstration and
Validation Plan
</td>
<td>
</td>
<td>
M17-M22
</td>
<td>
UNIVPM
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Task 6.2: Prototyping, demonstration and
</td>
<td>
</td>
<td>
M22-M40
</td>
<td>
ACCIONA
</td>
<td>
Evaluation of coatings performance
</td>
<td>
Images coming from optical observation
</td>
<td>
.png
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
performance validation in a maritime port in Spain
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
(including microscopy techniques) for the evaluation of the coatings.
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Evaluation of
EnDurCrete concretes and additives in a maritime port
</th>
<th>
Strength, porosity, permeability, chloride content, electrical conductivity,
and SEM results
</th>
<th>
.docx
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 6.3: Prototyping, demonstration and performance validation in a tunnel in
Spain
</th>
<th>
M22-M40
</th>
<th>
ACCIONA
</th>
<th>
Evaluation of coatings performance
</th>
<th>
Images coming from optical observation (including microscopy techniques) for
the evaluation of the coatings.
</th>
<th>
.png
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Evaluation of
EnDurCrete concretes and additives in a tunnel
</th>
<th>
Strength, porosity, permeability, chloride content, electrical conductivity,
and SEM results
</th>
<th>
.docx
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 6.4: Prototyping, demonstration and performance validation in an offshore
structure in Norway
</th>
<th>
M22-M40
</th>
<th>
KVAERNER
</th>
<th>
Evaluation of coatings performance
</th>
<th>
Images coming from optical observation (including microscopy techniques) for
the evaluation of the coatings.
</th>
<th>
.png
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 6.5: Prototyping, demonstration and performance validation in a bridge in
Croatia
</th>
<th>
M22-M40
</th>
<th>
INFRA PLAN
</th>
<th>
Evaluation of coatings performance
</th>
<th>
Images coming from optical observation (including microscopy techniques) for
the evaluation of the coatings.
</th>
<th>
.png
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
D6.3 Ready-mix concrete prototypes
ready for demonstration
</th>
<th>
Report on prototypes for the bridge demo.
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
D6.6 Pilot
deployment report on the Adriatic coast bridge demo site
</th>
<th>
Report describing the bridge pilot, the installation made, including the
sensors and monitoring equipment.
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 6.6: Analysis of the results and validation of EnDurCrete solutions
</th>
<th>
M27-M40
</th>
<th>
RINA-C
</th>
<th>
Evaluation of coatings performance
</th>
<th>
Images coming from optical observation (including microscopy techniques) for
the evaluation of the coatings.
</th>
<th>
.png
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Demo data analysis 2
</th>
<th>
Data related to physical and mechanical parameters (corrosion progress,
mechanical strength, porosity, water permeability, Chloride content,
electrical conductivity etc.)
</th>
<th>
.xls .doc
</th>
<th>
CO
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
Confidential data:
Reserved Area on the EnDurCrete
website
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
EnDurCrete website and partners’ servers
</td>
<td>
**Data management**
**Responsibilities**
</td>
<td>
Rosa Lample
</td> </tr>
<tr>
<td>
**WP number**
**and name**
</td>
<td>
**WP**
**lead**
</td>
<td>
**Task number and name**
</td>
<td>
**Duration**
</td>
<td>
**Task lead**
</td>
<td>
**Dataset name**
</td>
<td>
**Dataset description**
</td>
<td>
**Format**
</td>
<td>
**Level**
</td> </tr>
<tr>
<td>
WP7
</td>
<td>
GEO
</td>
<td>
Task 7.1: Environmental
</td>
<td>
M2-M42
</td>
<td>
GEO
</td>
<td>
Life cycle inventories
</td>
<td>
The inventory of all
</td>
<td>
.xls
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
Life cycle assessment and economic evaluation, standardization and health and
safety aspects
</th>
<th>
</th>
<th>
and economic viability of the novel products based on LCA and LCCA
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
inputs (material, energy etc.) related to all considered (sub)products
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
D7.1 Sustainability Life Cycle Assessment of the new product types
</th>
<th>
Short report providing an overview of the key factors influencing the
sustainability of the novel products
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<th>
D7.2 Life Cycle Analysis at material level
</th>
<th>
Intermediate report on LCA of the new materials, cradle-to-gate
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
D7.5 Report on environmental and economic viability of the novel products
based on the findings of the LCA and LCCA
</th>
<th>
Final report including LCA on the product level (cradle-to-grave) and life
cycle cost analysis
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<th>
Task 7.2:
Standardisation
</th>
<th>
M6-M42
</th>
<th>
ZAG
</th>
<th>
D7.4
Recommendations
for updates of current European standards and national technical requirements
</th>
<th>
Report on technical recommendations collected during the project for updates
of existing standards or for future standards
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<th>
Recommendations
for updates of current European standards and national technical requirements
</th>
<th>
Presentation on technical recommendations collected during the project for
updates of existing standards or for future standards
</th>
<th>
.pdf
</th>
<th>
For relevant CEN TCs
</th> </tr>
<tr>
<th>
Task 7.3: Assessment of the exposure likelihood of the new nano-
</th>
<th>
M1-M42
</th>
<th>
CEA
</th>
<th>
Samples pictures
</th>
<th>
Pictures of the different samples tested
(before/after) to illustrate
</th>
<th>
.jpg
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
modified EnDurCrete products
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
the final report
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Mechanical tests and artificial ageing protocols
</th>
<th>
Description of the mechanical tests and the climatic ageing performed on
samples, and of the standards used to perform them
</th>
<th>
.docx
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Real-time measurements raw data
</th>
<th>
Data obtained by CPC, FMPS and OPC during mechanical tests on non-aged and
aged samples
</th>
<th>
.txt
.xlsx
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Off-line
measurements raw data
</th>
<th>
SEM/TEM pictures, EDS spectra, XPS data on samples collected during the
mechanical tests
</th>
<th>
.jpg
.tif
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Report on assessment of nanomaterial exposure likelihood
</th>
<th>
Global report on the evaluation of the general exposure likely to occur along
the value chain and the life cycle of the new “EnDurCrete” concrete materials
developed in WP3. The report is based on data obtained by CEA, IBOX, NTS and
DAPP
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 7.4: Health, safety and risk assessment and management activities
</th>
<th>
M6-M42
</th>
<th>
CEA
</th>
<th>
Questionnaire for scoping visit
</th>
<th>
Questionnaire addressed to EnDurCrete partners who handle or synthesise
nanoparticles, used to plan the scoping visit and, in a second phase, the
measurement campaign
</th>
<th>
.doc
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
(exposure assessment)
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Raw data obtained during the measurement campaign (real-time and off-line)
</th>
<th>
Real-time measurements and off-line characterisations obtained during the
measurement campaign performed in EnDurCrete partner facilities where
nanoparticles are used (handled or synthesised)
</th>
<th>
.txt
.jpg
.tif
.xlsx .doc
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Report on health and safety assessment and management measures
</th>
<th>
Global report in two parts: the first regarding occupational exposure
assessment and management, the second regarding risk assessment and
management. The report is based on data obtained by CEA, IBOX, NTS and DAPP
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* Confidential data:
Reserved Area on the EnDurCrete website
* Public data:
EnDurCrete website, Zenodo
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
EnDurCrete website and servers of the respective partners
</td>
<td>
**Data management**
**Responsibilities**
</td>
<td>
Jakub Heller
</td> </tr>
<tr>
<td>
**WP number**
**and name**
</td>
<td>
**WP**
**lead**
</td>
<td>
**Task number and name**
</td>
<td>
**Duration**
</td>
<td>
**Task lead**
</td>
<td>
**Dataset name**
</td>
<td>
**Dataset description**
</td>
<td>
**Format**
</td>
<td>
**Level**
</td> </tr>
<tr>
<td>
WP8
Training,
</td>
<td>
FENIX
</td>
<td>
Task 8.1: Dissemination, Communication and
</td>
<td>
M1-M42
</td>
<td>
FENIX
</td>
<td>
D8.1 Project Website
</td>
<td>
Report describing the project website, including
</td>
<td>
.pdf
</td>
<td>
PU
</td> </tr> </table>
<table>
<tr>
<th>
dissemination and exploitation
</th>
<th>
</th>
<th>
Networking
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
public and private area
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
D8.2 Communication,
Networking and
Dissemination plan
</th>
<th>
Report identifying target audiences, key messages, communication channels,
roles and timelines
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
D8.3 Promo material design
</th>
<th>
Images and logos from project partners, photos/videos from dissemination
events, project promo videos consisting of animated graphical images, filming,
voice over and music. Promo materials shared online
</th>
<th>
.eps,
.jpeg,
.png,
.mpeg,
.avi,
.mp4,
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<th>
D8.4 Initial Data Management Plan
</th>
<th>
Initial data management plan analysing the main data uses and restrictions,
with focus on open access publication
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<th>
D8.7 Progress report on dissemination and networking activities and awareness
campaign
</th>
<th>
Progress report on performed dissemination and networking activities and
activities towards spreading project awareness among stakeholders and public
workshop organization
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<th>
D8.9 Final Data
Management Plan
</th>
<th>
Final data management
plan, including references to open
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
access publication developed by the Consortium
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
D8.11 Final report on dissemination and networking activities and awareness
campaign
</th>
<th>
Final report on performed dissemination and networking activities and
activities towards spreading project awareness among stakeholders and public
workshop organization
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<th>
Task 8.2: Exploitation and IPR management
</th>
<th>
M3-M42
</th>
<th>
FENIX
</th>
<th>
D8.5 Market Assessment
</th>
<th>
Preliminary market assessment mapping concrete market and other relevant
sectors information
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
D8.6 Initial
Exploitation Plan
</th>
<th>
Initial identification of the key project exploitable results,
characterization of each result and its expected use, individual partners’
exploitation plans and identification of potential risks
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
D8.10 Final
Exploitation Plan
</th>
<th>
Report on final version of the exploitation plan, consolidating comprehensive
exploitation strategy
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 8.3: Business models
</th>
<th>
M12-M42
</th>
<th>
RINA-C
</th>
<th>
D8.8 Business models
</th>
<th>
Business models for the new technologies, paving the way for future
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
market uptake
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Task 8.4: Training
Activities
</th>
<th>
M24-M42
</th>
<th>
GEO
</th>
<th>
D8.12 Report on training activities and guidelines
</th>
<th>
Report on training activities and guidelines and webinars for easy
installation, use and disassembly of the new solution
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* Confidential data:
Reserved Area on the EnDurCrete
website
* Promo material (PU): EnDurCrete website, social network profiles, videos on YouTube, thematic portals Public data:
EnDurCrete website
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
EnDurCrete website and FENIX server
</td>
<td>
**Data management**
**Responsibilities**
</td>
<td>
Petra Colantonio
</td> </tr>
<tr>
<td>
**WP number**
**and name**
</td>
<td>
**WP**
**lead**
</td>
<td>
**Task number and name**
</td>
<td>
**Duration**
</td>
<td>
**Task lead**
</td>
<td>
**Dataset name**
</td>
<td>
**Dataset description**
</td>
<td>
**Format**
</td>
<td>
**Level**
</td> </tr>
<tr>
<td>
WP9
Project
Management
</td>
<td>
HC
</td>
<td>
Task 9.1: Project
Coordination
</td>
<td>
M1-M42
</td>
<td>
HC
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Task 9.2: Consortium Management
</td>
<td>
M1-M42
</td>
<td>
HC
</td>
<td>
D9.1: Periodic and final reports
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Task 9.3: Administrative and Financial
Management
</td>
<td>
M1-M42
</td>
<td>
HC
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
Confidential data:
Reserved Area on the EnDurCrete website and Heidelberg Cement server
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
EnDurCrete website
Heidelberg Cement server
</td>
<td>
**Data management**
**Responsibilities**
</td>
<td>
Arnaud Muller
</td> </tr> </table>
**Publication**
The EnDurCrete Consortium intends to submit papers for scientific/industrial
publication during the course of the EnDurCrete project. In the framework of
the Dissemination, Communication and Networking Plan agreed by the GA, project
partners are responsible for the preparation of the scientific publications as
well as for the selection of the publisher considered most relevant for the
subject matter. Each publisher has its own policies on open access: under
Green Open Access (self-archiving), researchers can deposit a version of their
published work into a subject-based or institutional repository; under Gold
Open Access, researchers can instead publish in an open access journal, where
the publisher provides free online access.
After a paper is published and the licence for open access is obtained, the
project partner will contact the leader of Training, dissemination and
exploitation (FENIX), who is responsible for EnDurCrete data management.
FENIX will upload the publication to the project website and deposit it in the
OpenAIRE repository ZENODO, indicating in the metadata the project it belongs
to. Dedicated pages per project are visible on the OpenAIRE portal.
For adequate identification of accessible data, all the following metadata
information will be included:
* Information about the grant number, name and acronym of the action: European Union (UE), Horizon 2020 (H2020), Innovation Action (IA), EnDurCrete acronym, GA N° 760639
* Information about the publication date and embargo period if applicable: Publication date, Length of embargo period
* Information about the persistent identifier (for example a Digital Object Identifier, DOI): Persistent identifier, if any, provided by the publisher (for example an ISSN number)
More detailed rules and processes for OpenAIRE and ZENODO can be found in the
FAQ at _https://www.openaire.eu/support/faq_ .
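As an illustrative sketch only (not an authoritative deposit schema), the
metadata items listed above could be assembled along these lines before a
ZENODO deposit; all field names and placeholder values are assumptions for
illustration:

```python
# Minimal sketch of the metadata accompanying an EnDurCrete publication
# deposited in ZENODO. Field names are illustrative, not the official
# Zenodo deposit schema; date, embargo and DOI values are placeholders.
publication_metadata = {
    "funder": "European Union (EU)",
    "programme": "Horizon 2020 (H2020)",
    "action_type": "Innovation Action (IA)",
    "project_acronym": "EnDurCrete",
    "grant_number": "760639",
    "publication_date": "2019-01-01",   # placeholder
    "embargo_period_months": 0,          # 0 if no embargo applies
    "persistent_identifier": "doi:10.xxxx/xxxxx",  # placeholder DOI
}
print(publication_metadata)
```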
**Conclusions**
This deliverable contains the first release of the Data Management Plan for
the EnDurCrete project and provides preliminary guidelines for the management
of the project results during the project and beyond. Data management related
to data generation, storage and sharing has been addressed. The report will be
revised as required to meet the needs of the EnDurCrete project and will be
formally reviewed every six months and at the end of the project to ensure
ongoing fitness for purpose.
# EXECUTIVE SUMMARY
The Data Management Plan (DMP) describes QROWD's data management life cycle
for the data to be collected, processed and/or generated, as part of making
its research data findable, accessible, interoperable and re-usable (FAIR),
following the guidelines for FAIR Data Management 1 . FAIR data management
guarantees that the advancements and results developed on top of these data
can be replicated and exploited by future EU-funded initiatives and the
community in general. This document provides an update on our processes after
the end of the project.
The key target readers of the DMP are the reviewers of the Open Research Data
Pilot programme, who will check the compliance of the project with H2020
guidelines; members of the research community interested in replicating and/or
reproducing the results of QROWD's research and development; and the technical
partners of the consortium, who will use it as a guide for publishing and
maintaining data associated with bespoke research.
In this deliverable we describe the processes, policies and tools the QROWD
consortium used to ensure FAIR data management, that can be summarized as
follows:
* Findability: Datasets produced by research in the project will be assigned a DOI and deposited in Zenodo. Datasets meant to be part of the open source
* Accessibility: Research datasets will be published in Zenodo and linked to OpenAire
* Interoperability: In addition to the descriptions used for accessibility, we will select the most appropriate standards to format the data. We will pay particular attention to those issued from the OASC initiative.
* Re-usability: All research data outputs will be published with an open license with the exception of those corresponding to the TomTom use case.
We also describe the measures we took regarding data protection and privacy
with personal data.
The deliverable is complemented by the data catalog (D4.1), that was also
updated to reflect changes in the second half of the project.
# SUMMARY OF UPDATES
* The Data Summary was updated to reflect updates to the Data Catalog (D4.1), which now features the list of datasets used during the whole project (it previously covered only up to M12)
* In the Data Summary, we updated the non-publication of datasets including GPS coordinates of citizens, as these were deemed personal data whose publication falls outside the scope of the consent given
* In the FAIR section, we removed mentions of the DCAT-AP extension for scientific datasets to describe our datasets, as it remained only a draft. We added mentions of our usage of ML-Schema and crowd-voc (described in D3.4)
* In the FAIR section, we now state our choice of Fiware Data Models to increase interoperability with cities that have adopted OASC standards.
* Throughout, we added Zenodo as the main archive for data and software outputs.
* Added a section on Data Protection, describing the assessment of the risks and the measures we took for processing the personal data required for the modal split use case.
* Added a mention of data we did not publish due to Data Protection considerations: the GPS stream and trip confirmation data.
# DATA SUMMARY
Data in QROWD is divided in three groups, according to its use and its
relation with other WPs:
1. Data collected and generated for testing and improving the analytics and hybrid discovery capabilities of the QROWD platform. This comprises the input data in RDF format on which the analysis/discovery processes will be run, the "ground truth" data used to assess the effectiveness of the newly developed approaches, and the crowdsourced data used either as part of the "ground truth", or to improve the analysis/discovery processes. All these data will be made openly accessible.
2. Data collected and generated for the Trento municipality use case (cf. D2.2). This comprises data transformed from municipality's data sources (static datasets and sensor data) and data collected from citizens as a complement to sensor data. Transformations from Open Data will remain open. In the initial version of this document, we expected that an appropriately curated and anonymized subset of the crowdsourced data that allows the reproduction of experiments conducted in connection with this use case will be published, provided that data subjects have agreed with the release of their particular GPS traces. However, GPS traces were considered sensitive data by the Data Protection Officer, and consent was only granted for a limited time (up to the end of the project). Therefore, we decided to not make GPS traces openly available. Only
3. Data collected and generated for the TomTom use case (cf D1.1). This use case re-uses some data from the Trento municipality, but also includes data proprietary to TomTom that cannot be released due to being core to TomTom's business model.
During the project, data will be under the custody of the research/industrial
partner leading the work package, in the case of WP2, UniTN and MT will be
joint controllers. In the case of WP1, the custody is shared between TomTom
and InfAI. During this phase, access to the data will be restricted to
consortium partners.
The full list of data used and collected in the project is in the Data Catalog
deliverable (c.f. D4.1). Most of it is data already considered open and
without any GDPR implications, except for a combination of identifying fields
and sensor streams collected through the i-Log application, namely:
* Name, email, gender, age range, number of vehicles available, number of members of the household, preferred vehicle
* GPS stream
* Accelerometer stream
* Gyroscope stream
* User feedback from inferred trips
* User input for missing trips
Section 5 describes the data protection measures we took to guarantee a
seamless and GDPR compliant workflow.
# FAIR DATA
## FINDABILITY
Each dataset that we consider important for research purposes developed during
the project will be assigned a Digital Object Identifier through the Zenodo
service. In the initial version of this document we considered the use of the
DCAT-AP extension for scientific datasets recently proposed by the European
Commission Joint Research Centre 2 . However, as the DCAT-AP extension for
scientific datasets did not proceed further than an unofficial draft, we
ultimately decided to only use Zenodo.
Datasets acquired and transformed through the data acquisition framework
(D4.2) will be findable through the internal CKAN infrastructure of the
framework for use by the project consortium.
Internal versioning will follow the incremental 0.x format until publication
of the linked scientific contribution, after which the numbering will follow
the 1.x format for minor corrections or improvements.
## ACCESSIBILITY
Data corresponding to research conducted for the Trento municipality use case
that is considered anonymous or securely pseudonymised will be released (cf.
section 5). Data corresponding to the TomTom use case will not be publicly
available because TomTom's business model relies on it. The possibility of
releasing a subset of the data is currently being discussed internally. This
plan will be updated according to the final decision made.
Code corresponding to research conducted for WP5 and WP6 will be released to
the community with an open license.
While research is being undertaken, accessibility to datasets will be limited
to the members of the consortium. During this stage, partners leading the
specific research will be in charge of the storage and accessibility for the
rest of the partners that require it.
Following the directives of OpenAire, all research outputs that could be
released with open licenses are deposited in Zenodo 3 . This includes
releases from QROWD's Github repository corresponding to the latest versions
of code used in the project.
Concerning the tools required to access QROWD’s research data: at the time of
the first version of this plan, we expected that all datasets would be
available in RDF format, so that any RDF graph store and SPARQL engine could
be used to load and query them. As a result of the re-use of other established
data models and formats (e.g. those used by OASC), only a fraction of the
produced datasets are in RDF.
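For the RDF fraction, a minimal sketch of loading and querying a dataset with
the rdflib toolkit follows; the file name and the query are illustrative, not
actual QROWD resources.

```python
from rdflib import Graph

g = Graph()
# Any common RDF serialization (Turtle, N-Triples, RDF/XML) loads the same way.
g.parse("trento_mobility.ttl", format="turtle")  # illustrative file name

# A generic SPARQL query; the predicate is a placeholder, not a QROWD term.
query = """
SELECT ?s ?title WHERE {
    ?s <http://purl.org/dc/terms/title> ?title .
} LIMIT 10
"""
for row in g.query(query):
    print(row.s, row.title)
```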
## INTEROPERABILITY
To ensure interoperability, we aligned our datasets to the FiWare data models
for transportation and parking 4 , as these are the models used by the OASC
organisation. We also re-used the ML-Schema vocabulary to describe datasets
resulting from machine-learning experiments, and we developed the crowd-voc 5
vocabulary for describing datasets produced with crowdsourcing.
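To give a flavour of this alignment, the sketch below shapes a vehicle
observation as an NGSI-v2-style entity, the structure used by the FiWare data
models. The identifier and values are invented, and the attribute names are
taken loosely from the public FiWare Vehicle model, so they should be checked
against the current specification.

```python
import json

# A hypothetical observation shaped after the FiWare "Vehicle" data model.
entity = {
    "id": "urn:ngsi-ld:Vehicle:trento-bus-042",  # illustrative identifier
    "type": "Vehicle",
    "vehicleType": {"type": "Text", "value": "bus"},
    "location": {
        "type": "geo:json",
        "value": {"type": "Point", "coordinates": [11.1217, 46.0679]},
    },
    "speed": {"type": "Number", "value": 32.5},
}

print(json.dumps(entity, indent=2))
```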
## RE-USABILITY
Datasets connected to the Trento Municipality use case (WP2) will be re-usable
according to one of the following schemes:
1. Data transformed from existing data sources (curated or not) will have the same license as the original source.
2. Data collected and generated by the project that is connected to a scientific publication will be made available with a Creative Commons license. In the case of data coming from crowdsourcing, appropriate anonymisation processes will be applied before release (cf. D11.1).
Datasets connected to the TomTom use-case (WP1) will not be re-usable.
# RESOURCE ALLOCATION
The archiving infrastructure and resources will be provided by Southampton and
UniTN. Successive updates of the DMP will be led by Soton, as coordinating
partner.
3 _https://zenodo.org/search?page=1&size=20&q=qrowd_
4 _https://www.fiware.org/developers/data-models/_
5 https://doi.org/10.5281/zenodo.3373397
# DATA SECURITY
Southampton has a secure, enterprise-scale, coherent storage solution for
active research data. The data stored within this facility is regularly backed
up, and a copy of the back-up is regularly moved off-site to a secure location
for disaster recovery purposes. The research data storage platform is solely
for the storage of research data. Final versions of datasets will be deposited
in Zenodo. UniTN’s infrastructure abides by the European Commission
Recommendation on access to and preservation of scientific information (July
17th 2012) and the H2020/ERC Model Grant Agreement, and has all the security
measures needed to avoid data loss and intrusion.
Data for the TomTom business case will be held by TomTom, following their
industry-grade security measures. Concerning location-identifying data, we
cite the following excerpt from TomTom's policy:
“ _Within_ _24 hours of you shutting down your device or app, TomTom
automatically and irreversibly destroys the data that would allow you or your
device to be identified from the location data we received._
_For Traffic, SpeedCameras, Danger Zones and Weather we delete the information
within 20 minutes after you have stopped using the service by shutting down
your device or app. We do not know where you have been and cannot tell anyone
else, even if we somehow were forced to._
_This, now anonymous, information is used to improve TomTom's products and
services, such as TomTom maps, Traffic, products based on traffic patterns and
average speeds driven, and for search queries to inform businesses how well-
received their information is. These products and services are also used by
government agencies and businesses.”_
TomTom data used in the project is always aggregated or stripped of its
identifiers.
# DATA PROTECTION
QROWD processes personal data of citizens as part of the modal split
estimation of WP2. In this section, we detail the measures we took to ensure
GDPR compliance, in light of the need for several partners of the consortium
to process data.
Under the advice and assistance of the Data Protection Officer of the
University of Trento, we completed a checklist to assess data protection. The
original document, in Italian, is annexed to this document. We summarize the
key details as follows.
Roles were established as follows:
* Joint Data Controllers: Municipality of Trento and University of Trento
* Data Processors: University of Southampton, InfAI
The following table summarizes the data collected and who processed it.
<table>
<tr>
<th>
**Data field**
**(P = Personal)**
</th>
<th>
**Purpose**
</th>
<th>
**Processed by**
</th> </tr>
<tr>
<td>
Name (P)
</td>
<td>
To address citizen
</td>
<td>
Controllers
</td> </tr>
<tr>
<td>
Email (P)
</td>
<td>
Contact citizen
</td>
<td>
Controllers
</td> </tr>
<tr>
<td>
Gender (P)
</td>
<td>
Enable aggregations by gender
</td>
<td>
Controllers
</td> </tr>
<tr>
<td>
Age range
</td>
<td>
Enable aggregations by age range
</td>
<td>
Controllers
</td> </tr>
<tr>
<td>
Vehicles available and preferred
</td>
<td>
Enable aggregations
</td>
<td>
Controllers
</td> </tr>
<tr>
<td>
GPS Trace (P)
</td>
<td>
Automatic detection of trips and changes of transport mode.
</td>
<td>
Controllers and processors
</td> </tr>
<tr>
<td>
Accelerometer
</td>
<td>
Automatic detection of trips and changes of transport mode.
</td>
<td>
Controllers and processors
</td> </tr>
<tr>
<td>
Gyroscope
</td>
<td>
Automatic detection of trips and changes of transport mode.
</td>
<td>
Controllers and processors
</td> </tr>
<tr>
<td>
Manual confirmation and input of trips (P)
</td>
<td>
Validate automated detection of trips and changes of transport mode. Collect
trip data
</td>
<td>
Controllers and processors
</td> </tr>
<tr>
<td>
Preferred time to receive questions
</td>
<td>
Sets time to send questions to users.
</td>
<td>
Controllers and processors
</td> </tr> </table>
The evaluation revealed that none of the conditions of Article 35.3 of the
GDPR apply to our collection and processing, namely:
* No decisions with legal effect will be taken based on the collected data
* No special categories of personal data are involved
* No large-scale monitoring of public areas
In line with the principle of minimisation, a pseudonym was generated by the
controllers to identify traces belonging to the same user, so that the
identifying fields (name and email) need not be accessible to data processors.
We also designed the API calls available to data processors to avoid any
leakage of demographic information, returning only aggregated results.
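A minimal sketch of such a pseudonymisation step, assuming an HMAC over the
identifying fields with a secret key held only by the joint controllers; the
actual mechanism used in the project may differ.

```python
import hmac
import hashlib

CONTROLLER_SECRET = b"key-held-only-by-the-joint-controllers"  # placeholder

def pseudonym(name: str, email: str) -> str:
    """Derive a stable pseudonym so that all traces of one user can be
    linked without exposing the identifying fields to data processors."""
    message = f"{name}|{email}".encode("utf-8")
    return hmac.new(CONTROLLER_SECRET, message, hashlib.sha256).hexdigest()

# Processors only ever see the derived identifier, never name or email.
print(pseudonym("Jane Doe", "jane@example.org"))
```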
All identified risks to confidentiality, integrity, and availability were
evaluated as "Low" or "Very Low". The table below shows the risk log.
<table>
<tr>
<th>
Risk #
</th>
<th>
Description
</th>
<th>
Probability
</th>
<th>
Impact
</th>
<th>
Mitigation
</th> </tr>
<tr>
<td>
1
</td>
<td>
Loss of a device containing personal data
</td>
<td>
Low
</td>
<td>
High
</td>
<td>
Minimize the number of devices where personal data is stored
</td> </tr>
<tr>
<td>
2
</td>
<td>
Personal data wrongly sent to unauthorized party
</td>
<td>
Low
</td>
<td>
Medium
</td>
<td>
Check for record integrity before sending personal data back to participants
</td> </tr>
<tr>
<td>
3
</td>
<td>
Web server/service misconfiguration leaks personal data
</td>
<td>
Very low
</td>
<td>
High
</td>
<td>
Audit server configurations before experiments
</td> </tr>
<tr>
<td>
4
</td>
<td>
Information (e.g. Google accounts) of a participant needed for providing a
service is modified
</td>
<td>
Low
</td>
<td>
Low
</td>
<td>
Instruct participants to keep records stable during the experiments
</td> </tr>
<tr>
<td>
5
</td>
<td>
A linking record is lost.
</td>
<td>
Low
</td>
<td>
Low
</td>
<td>
Correct backup management
</td> </tr>
<tr>
<td>
6
</td>
<td>
Loss of a participant's data
</td>
<td>
Low
</td>
<td>
Medium
</td>
<td>
Correct backup management
</td> </tr>
<tr>
<td>
7
</td>
<td>
A critical service is
down
</td>
<td>
Very low
</td>
<td>
Medium
</td>
<td>
Infrastructure test.
</td> </tr> </table>
0102_LAY2FORM_768710.md
# Executive summary
This deliverable provides the first version of the Data Management Plan for
project LAY2FORM. It presents a preliminary description of how research data
collected and generated within the scope of the project will be handled during
and after the end of LAY2FORM activities, namely concerning the standards and
sharing approaches. The Data Management Plan will be continuously reviewed and
updated in months 24 and 48, as planned in the LAY2FORM deliverables list.
This document follows the template provided by the European Commission 1 .
# Introduction
LAY2FORM project participates in the Open Research Data Pilot (ORD pilot)
according to Articles 29.2 and 29.3 of the GAM. This participation entails the
sharing and reuse of research data generated by H2020 programme funded
projects to improve and maximize their impact. Notwithstanding this, the ORD
Pilot addresses the balance required between openness and protection of
scientific knowledge for the sake of commercialization, privacy and security
purposes, following the principle “as open as possible, as closed as
necessary”.
The main purpose of this DMP is to describe the data management policies to be
followed by LAY2FORM consortium. More specifically, this document presents an
overview of the types of datasets to be generated and collected during the
project, the data standards and how data will be shared and preserved for
later reuse.
This DMP reflects the Consortium Agreement established by the partners and
currently in force. This DMP is also consistent with exploitation and IPR
requirements of the project.
Within the scope of the project, any research data linked or potentially
linked to results that can be exploited by any consortium partner will not be
placed in the open domain, to protect commercialization interests. Any other
research data not linked to exploitable results will be deposited in an open
access repository.
This DMP is a document to be continuously reviewed during the course of the
project.
The first version of the DMP presented in this document will be updated in M24
(September-2019) and M48 (September-2021) of the project as formal
deliverables, with more detail on the procedures for data management.
# Summary of data types
Within the framework of the project, LAY2FORM research data, including
datasets with data, metrics and procedures, will be open for benchmarking
purposes, as well as the technical data needed to validate the results
presented in the deposited scientific publications.
A summary of the data types, their formats and standards to be generated and
collected during the project are provided in Table 1 . During the project and
in the planned future issues of the Data Management Plan, this list may be
updated if necessary.
_Table 1 - Summary of data types, formats and standards_
<table>
<tr>
<th>
**Types of Data**
</th>
<th>
**Data formats and standards**
</th> </tr>
<tr>
<td>
Experimental/ observation-derived data.
</td>
<td>
Microsoft Office (docx, xlsx, pptx,…) and Adobe Acrobat (pdf) will be the
reference file formats.
LaTeX may be used for the production of scientific and technical
documentation.
</td> </tr>
<tr>
<td>
Models and representations (Product, system and process)
</td> </tr>
<tr>
<td>
Project and WP management documents (reports, presentations,…).
</td> </tr>
<tr>
<td>
Scientific and technical publications
</td> </tr>
<tr>
<td>
FEM structural simulation studies (Product, system and tooling)
</td>
<td>
FEM simulation results will be stored in the ERF format (based on the HDF5
file format)
ISO-STEP - according to ISO 10303 and ISO 14649 10-17 - will be the standard
for CAx data; non-applicable “ab-origine” data is stored in original SW tool
used and exported in ISO-STEP.
Some data linked to DSS will be stored under an ASCII file format
</td> </tr>
<tr>
<td>
FEM process simulation studies (hotforming)
</td> </tr>
<tr>
<td>
Software and algorithms (CAx, Decision Support System)
</td> </tr>
<tr>
<td>
Measurements raw data obtained during system/process/part characterization
</td>
<td>
Data will be filed in original format - sensor/application specific - then
exported to ASCII file using TXT and CSV format.
</td> </tr>
<tr>
<td>
Images
</td>
<td>
JPEG compressed format or equivalent. TIFF uncompressed for shearography and
thermography.
</td> </tr>
<tr>
<td>
Videos (short movies, animations, …)
</td>
<td>
MPEG codec / AVI format.
</td> </tr> </table>
The manufacturing process pilot to be developed within the scope of LAY2FORM
has been mapped and its several sub-processes identified. In addition, several
parameters, both for process control and process defects, have been analysed.
This is perceived as a critical stage in terms of data management, in that a
preliminary overview of the parameters has been defined for each sub-process.
Table 2 presents some of these input parameters and the quality criteria
evaluated.
Most of the data collected will be used for controlling the manufacturing
process through the Self-adaptive system and the Decision Support System.
The datasets collected will serve as a basis for the creation of a historical
dataset.
_Table 2 - Summary of main manufacturing steps with some associated control /
defect parameters._
<table>
<tr>
<th>
**Manufacturing stage**
</th>
<th>
**Process control data**
</th>
<th>
**Process defect data***
</th> </tr>
<tr>
<td>
Texturing
</td>
<td>
\-
</td>
<td>
Surface roughness
</td> </tr>
<tr>
<td>
Tape feeding
</td>
<td>
Feeding velocity, tape alignment
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Tape cutting
</td>
<td>
Laser frequency, displacement speed.
</td>
<td>
Heat affected zone (laser cutting), metal bent, fibre dragging
</td> </tr>
<tr>
<td>
US spot bonding
</td>
<td>
US frequency, US holding time, US pressure, number of welded spots
</td>
<td>
Local resin degradation, welding quality.
</td> </tr>
<tr>
<td>
Stacked layup
</td>
<td>
\-
</td>
<td>
Size of lap and gaps, local fibre damage, tape misalignment
</td> </tr>
<tr>
<td>
Stack heating
</td>
<td>
Consolidation temperature
</td>
<td>
Blank under or over heated. Material degradation.
</td> </tr>
<tr>
<td>
Stack compaction
</td>
<td>
Loading rate (press ramp speed), consolidation pressure/vacuum, holding time
</td>
<td>
Intimate contact, thermoplastic overflow, fibre drag, fibre misalignment,
nonhomogeneous fibre volume fraction
</td> </tr>
<tr>
<td>
Cooling
</td>
<td>
Cooling rate
</td>
<td>
Warpage
</td> </tr>
<tr>
<td>
Consolidated blank
</td>
<td>
\-
</td>
<td>
Intimate contact, void distribution, void size, void content, thermoplastic
overflow, fibre drag, fibre misalignment, non homogeneous fibre volume
fraction.
</td> </tr>
<tr>
<td>
Holding consolidated blank
</td>
<td>
Blank holder gripping force
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Heating
</td>
<td>
Blank temperature
</td>
<td>
Blank under or over heated, matrix degradation.
</td> </tr>
<tr>
<td>
In-place tension control
</td>
<td>
Blank in-plane stress
</td>
<td>
Material slip / fibre breakage
</td> </tr>
<tr>
<td>
Transfer to press position
</td>
<td>
Transfer time, transfer speed, trajectory: speed and position
</td>
<td>
Transfer time out of process window
</td> </tr>
<tr>
<td>
Molten matrix consolidated blank (shearography)
</td>
<td>
\-
</td>
<td>
Shear wrinkling, sagging, nonuniform temperature, matrix degradation, adhesive
degradation, temperature loss, transfer-time mismatch, void content, position
placement
</td> </tr>
<tr>
<td>
Forming
</td>
<td>
Mould temperature, loading rate, press trajectory: speed, position and time,
moulding pressurization: time, force and speed, moulding de-
pressurization: time and speed, die temperature
</td>
<td>
Parallelism, Wrinkles, fibre misalignment, insert misalignment, fibre
breakage.
</td> </tr>
<tr>
<td>
Force control (blank holder)
</td>
<td>
Clamp force, blank in-plane stress
</td>
<td>
Material slip
</td> </tr>
<tr>
<td>
Consolidation
</td>
<td>
Consolidation time
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Molten formed sub product
</td>
<td>
\-
</td>
<td>
Intra-ply shear, fibre wrinkling, fibre buckling, fibre waviness, air
entrapment, fibre damage.
</td> </tr>
<tr>
<td>
Cooling
</td>
<td>
Cooling rate, clamp force
</td>
<td>
Premature solidification, nonuniform surface temperature, transverse cracking,
metal/composite adhesion
</td> </tr>
<tr>
<td>
Demoulding
</td>
<td>
Cooling rate
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Formed sub product
</td>
<td>
\-
</td>
<td>
Residual stress/warpage/spring-in, transverse cracking, metal/composite
adhesion, crystallization
</td> </tr>
<tr>
<td>
Edge trimming
</td>
<td>
Laser type, laser power, trajectory
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Final product
</td>
<td>
\-
</td>
<td>
Geometry, delamination, mechanical properties, matrix degradation
</td> </tr> </table>
* The process defect data mentioned in the table will only be generated provided that the defect detection technologies contemplated within the framework of the LAY2FORM project are relevant for each type of defect. This will be assessed during the course of the project.
# FAIR data
3.1 Data findability
Appropriate provisions will be implemented within the scope of LAY2FORM to
make data generated in the project findable in repositories. Sets of metadata
attributes will be created through a machine-readable index in a searchable
resource to facilitate the findability of the data.
At the current stage, the metadata requirements for the different data sets to
be generated are under development. Notwithstanding this, and with respect to
scientific publications to be deposited in repositories, the project will
comply with the minimum metadata format requirements, namely: 1) the terms
['European Union (EU)' and 'Horizon 2020'] or ['Euratom' and 'Euratom research
and training programme 2014-2018']; 2) the name of the action, acronym and
grant number; 3) the publication date and the length of the embargo period (if
applicable); and 4) a persistent identifier (such as a Digital Object
Identifier, DOI).
3.2 Data accessibility
There are three access levels for the data generated by LAY2FORM:
* _Public_ : access and download are permitted only to users registered on the project website ( _http://lay2form-project.eu/_ ), who log in to the system with a password. The list of files open to the public for dissemination will be kept available and accessible for download via the website. Interested users need to request a login and a password from the official project e-mail. The purpose is to track data use and build a contact database for the project.
* _Confidential_ : accessible only to the subscriber of the Grant Agreement, namely their beneficiaries and linked third parties. Confidential data exchanged between consortium partners is made accessible through a repository in an internal secured server hosted by the PC. Data is available to partners through a webFTP site from which access is granted via a login and password provided by the PC.
* _Open access_ : peer-reviewed scientific publications and the experimental data needed to validate results will be deposited in a repository using the “gold model” as the preferred route, meaning that such data will be available to both subscribers and the wider public, with permitted reuse. Furthermore, the LAY2FORM consortium will seek opportunities to provide open access to non-peer-reviewed scientific publications, such as monographs or conference proceedings.
Some of the data generated during the manufacturing process, by the
self-adaptive system, or coming from the process simulation will most likely
be stored on a cloud service hosted by AWS. The storage conditions are
currently under study and have not yet been implemented in the project. These
data will also be available to the project partners.
In order to promote exchange and raise awareness on the project results with
other researchers, stimulating concurrent research to accelerate the uptake
and further development of breakthrough LAY2FORM technology concepts, research
data will be deposited in a public research data repository, namely Zenodo.
Both public and confidential data will be maintained accessible to authorized
users for at least 5 years after the end of the project.
3.3 Data interoperability
Data interoperability, or the ability for data to be processed by different
systems, will be ensured through standardised terms or controlled vocabularies
with qualified references to other metadata, so that data can be machine
readable, following the FAIR principles.
At the present moment, metadata requirements are under study by the
consortium. Metadata templates and generators may be used to support this
task. The Dublin Core Schema, developed by the Dublin Core Metadata
Initiative, is a set of vocabulary terms for describing digital and physical
resources, and may be used for this purpose.
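As an illustration only, the sketch below builds a minimal Dublin Core
description of a hypothetical LAY2FORM dataset with the rdflib library; the
DOI, title, and dates are placeholders chosen to satisfy the minimum metadata
requirements listed in Section 3.1.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
ds = URIRef("https://doi.org/10.5281/zenodo.0000000")  # placeholder DOI

g.add((ds, DCTERMS.title, Literal("LAY2FORM hotforming trial dataset")))
g.add((ds, DCTERMS.description,
       Literal("European Union (EU), Horizon 2020, LAY2FORM, GA 768710")))
g.add((ds, DCTERMS.issued, Literal("2019-09-30")))      # publication date
g.add((ds, DCTERMS.license,
       URIRef("https://creativecommons.org/licenses/by-nc-nd/4.0/")))

print(g.serialize(format="turtle"))
```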
3.4 Data re-use (through clarifying licences)
Data re-use, or the ability for data to be understood by humans and machines
through sets of precise and relevant metadata attributes, will be ensured by a
clarifying data usage licence. Within the scope of project LAY2FORM, a set of
rights will be retained by the copyright holders, and a Creative Commons
licence will be used for this purpose.
Having considered the different rights granted by the six licences available
from Creative Commons, the **CC BY-NC-ND 4.0** licence will be used for the
data generated in the project. The following attributes of this licence should
be noted:
* **BY** : stands for attribution. A user of the data must give appropriate credit, provide a link to the licence, and indicate if changes were made.
* **NC** : stands for non-commercial. A licence with an NC modifier cannot be used for a commercial purpose, such as being sold or used in an advertisement. You may not use the material for commercial purposes. A commercial use is one primarily intended for commercial advantage or monetary compensation.
* **ND** : stands for no derivatives. This limits the creation of derivative works based upon the original, such as rewriting or translations. A user of the data must not distribute modified, remixed, transformed or built upon material.
# Allocation of resources
Data management will be assumed by the project coordinator – INEGI – which
takes responsibility for the communication activities under the frame of WP1
and for the associated costs eligible for this purpose.
The costs related to open access publications were already considered in the
estimated budget of the project.
# Data security
Multiple levels of security will be implemented to access, transfer, store and
back up data files. Encryption methods (Secure Sockets Layer, SSL) will be
implemented on the PC server, the project website and the cloud infrastructure
Amazon Web Services (AWS), which is currently under study.
Regarding AWS, the data will be stored on the S3 and Glacier services and used
by any AWS service (SageMaker, Redshift, …) located in the European region
(Ireland, Frankfurt, London or Paris). To enable disaster recovery, all data
available on AWS will be replicated within an AWS Region across multiple
Availability Zones.
AWS provides specific features and services which customers can leverage as
they seek to comply with the GDPR
( _https://aws.amazon.com/compliance/gdprcenter/?nc1=h_ls_ ).
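A minimal sketch of how a measurement file could be pushed to the envisaged
AWS storage, assuming the boto3 SDK, an EU region, and server-side encryption
at rest; the bucket name and file are placeholders, since the final
configuration is still under study.

```python
import boto3

# Region restricted to Europe, as foreseen in the plan (e.g. Frankfurt).
s3 = boto3.client("s3", region_name="eu-central-1")

with open("sensor_run_001.csv", "rb") as fh:   # illustrative measurement file
    s3.put_object(
        Bucket="lay2form-research-data",        # placeholder bucket name
        Key="measurements/sensor_run_001.csv",
        Body=fh,
        ServerSideEncryption="AES256",          # encrypt the object at rest
    )
```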
# Ethical aspects
The ethical framework of the data management within project LAY2FORM is
currently being assessed. In this regard, the General Data Protection
Regulation (GDPR), which entered into force on the 25th of May 2018, is under
study in order to ensure compliance with the newest legislation.
0104_Pret-a-LLOD_825182.md
# Introduction
1.1. Scope
This document contains the initial version of the Prêt-à-LLOD Data Management
Plan (DMP). The DMP is a living document and will be regularly updated.
Successive stable versions of the DMP will be published in M24 and M36. This
document is complemented by “D5.2 Policy-based language Data Management” (due
in M24) and is related to “D7.1 Ethics Requirements I” (delivered in M3).
The Data Management Plan adheres to and complies with the “H2020 Data
Management Plan – General Definition” given by the European Commission (EC)
online 1 , where the DMP is described as follows:
“ _A DMP describes the data management life cycle for the data to be
collected, processed and/or generated by a Horizon 2020 project. As part of
making research data findable, accessible, interoperable and reusable (FAIR),
a DMP should include information on:_
* _the handling of research data during and after the end of the project_
* _what data will be collected, processed and/or generated_
* _which methodology and standards will be applied_
* _whether data will be shared/made open access and_
* _how data will be curated and preserved (including after the end of the project)”_
Prêt-à-LLOD adopts policies compliant with the official FAIR guidelines [1]
(findable, accessible, interoperable and re-usable), as mandated by the EC.
Also, Prêt-à-LLOD participates in the Open Research Data Pilot (ORDP 2 ) and
is obliged to deposit the produced research data in a research data
repository, as per Art. 29.3 of the Grant Agreement.
This Section 1 concludes with the presentation of preliminary concepts;
Section 2 is the Data Management Plan itself and follows the template proposed
by the EC 3 .
1.2. Preliminary concepts
**Zenodo**
Zenodo 4 is a general-purpose open-access repository widely used for
publishing deliverables and data of H2020 projects. Zenodo exposes the data to
OpenAIRE 5 , a network of Open Access repositories that supports the EC
publication policies.
Resources in Zenodo (and consequently also in OpenAIRE) are identified with a
Digital Object Identifier (DOI), they can be versioned, and there are good
chances that they will enjoy long-term preservation. Moreover, common search
engines such as Google Scholar or Microsoft Research are aware of the assets
hosted at Zenodo, so they enjoy high visibility.
A Prêt-à-LLOD community has been created in the Zenodo portal.
https://zenodo.org/communities/pret-a-llod/
Both deliverables and research data will be published in this Zenodo
community.
**Prêt-à-LLOD Data Portal**
The “ **Prêt-à-LLOD** **Data Portal** ” will be the data portal of the
project and will host the description of relevant datasets (metadata). The
data portal will also be used to host newly created language resources as long
as their size is manageable. This data portal will be open to the general
public and users will be able to search for datasets, visualize their
description and eventually download the resource itself.
The Prêt-à-LLOD Data Portal will be built using the CKAN 6 technology (a
standard software package for data portals) and Linghub, a Linked Data based
portal already describing language resources [3]. Since many internet users
are already familiar with CKAN, its visual appearance will be retained,
customized only for the needs of this project and the corporate image of
Prêt-à-LLOD. A CKAN data access API will be exposed to offer information on
the dataset metadata.
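By way of example, CKAN's standard action API can be queried as in the sketch
below; the portal address is a placeholder, since the final URL of the
Prêt-à-LLOD Data Portal is not yet fixed.

```python
import requests

PORTAL = "https://data.pret-a-llod.example"  # placeholder portal address

# package_search is part of CKAN's standard action API.
resp = requests.get(
    f"{PORTAL}/api/3/action/package_search",
    params={"q": "lexicon", "rows": 5},
)
for dataset in resp.json()["result"]["results"]:
    print(dataset["name"], "-", dataset.get("title", ""))
```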
# Data Management Plan
The sections of this document and the questions hereinafter are taken from the
_Horizon 2020 FAIR Data Management Plan (DMP) template._ The use of the
template is recommended by the European Commission.
2.1 Data summary
<table>
<tr>
<th>
**1\. Data summary**
</th> </tr>
<tr>
<td>
a) What is the purpose of the data collection / generation and its relation to
the objectives of the project?
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
The declared objectives of this project are:
* to support the exchange of multilingual cross-sectoral data
* to develop interoperable language technology services and language data
* to favour the sustainability of language technologies and language resources
Consequently, the collection and generation of data are core activities for this project, and their purpose can be summarized as ‘prepare linguistic data so that it can power multilingual applications in a digital single market’.
An initial list of 52 processing activities is documented in annex A of “D7.1.
Ethics Requirements I” [2].
</th> </tr>
<tr>
<td>
b) What types and formats of data will the project generate / collect?
</td> </tr>
<tr>
<td>
</td>
<td>
The vast number of formats that will be handled by this project does not allow
a preliminary enumeration. Data in different formats will be collected and
eventually transformed. The preferred type for the generated data is the one
which most favours interoperability ― this will be RDF (Resource Description
Framework 7 ) in its different serializations.
The types of data in Table 1 have been identified, with respect to their
meaning:
<table>
<tr>
<th>
**Name**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Catalogue metadata**
</td>
<td>
Description of existing data resources
</td> </tr>
<tr>
<td>
**Open linguistic data**
</td>
<td>
Existing open linguistic data already available prior to Prêt-à-LLOD.
</td> </tr>
<tr>
<td>
**New linguistic data**
</td>
<td>
Transformation of existing resources or creation of new resources in the LLOD
(Linguistic Linked Open Data cloud [8]) by the Prêt-à-LLOD project. These
assets are considered results of this project.
</td> </tr>
<tr>
<td>
**Experiment-related data**
</td>
<td>
Data produced in the course of reports generations, execution of experiments
(e.g. experiments for automated linking), etc., often related to research
publications.
</td> </tr> </table>
_Table 1. Types of data generated or collected by Prêt-à-LLOD according to
their meaning_
The following types of data have been identified, according to their openness.
<table>
<tr>
<th>
**Name**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Private to partners
</td>
<td>
Available to the partner who owns it
</td> </tr>
<tr>
<td>
Available to partners
</td>
<td>
Not public, only available to the partners. Non-Disclosure Agreements (NDAs)
are not necessary; the Consortium Agreement suffices.
</td> </tr>
<tr>
<td>
Published as Open Data
</td>
<td>
Both public and available with an open license.
</td> </tr> </table>
_Table 2. Types of data in Prêt-à-LLOD, according to their openness._
</td> </tr>
<tr>
<td>
c) Will you re-use any existing data and how?
</td> </tr>
<tr>
<td>
</td>
<td>
This project will extensively reuse linguistic resources, eventually
republishing them, possibly after a transformation.
</td> </tr>
<tr>
<td>
d) What is the origin of the data?
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Datasets available in the LLOD cloud and resources available in other data
catalogues (OLAC 8 , LRE Map 9 , META-SHARE 10 , Clarin 11 , Retele
12 ), and private data resources that will not be exposed.
</th> </tr>
<tr>
<td>
e) What is the expected size of the data?
</td> </tr>
<tr>
<td>
</td>
<td>
The size of the data is broken down per data type:
Catalogue metadata: ~1 GB
Open linguistic data: not to be stored by Prêt-à-LLOD
New linguistic data: ~100 GB
Experiment-related data: ~10 GB
These figures have been estimated considering the experience of some of the
Prêt-à-LLOD partners in the past FP7-funded LIDER project 13 .
</td> </tr>
<tr>
<td>
f) To whom might the data be useful ('data utility')?
</td> </tr>
<tr>
<td>
</td>
<td>
Two large communities are identified: (i) the community of researchers and
practitioners of linguistics and social sciences and (ii) the community of
computer scientists and developers with interests in natural language
processing.
</td> </tr> </table>
2.2. FAIR data
<table>
<tr>
<th>
**2\. FAIR data**
</th> </tr>
<tr>
<td>
**2.1 Making data findable, including provisions for metadata**
</td> </tr>
<tr>
<td>
a) Are the data produced and / or used in the project discoverable and
identifiable?
</td> </tr>
<tr>
<td>
</td>
<td>
**Catalogue metadata** will be available at the Prêt-à-LLOD Data Portal.
Data will be discoverable because each dataset will have a description using
the standard DCAT vocabulary 14 (see Figure 1; a sketch of such a description
follows this table); in particular, DCAT-AP, the "DCAT application profile for
European data portals", developed in the framework of the EU ISA Programme 15
, which has become a de-facto standard.
**New linguistic data** produced by this project will also be offered through
the Prêt-à-LLOD Data Portal. Identifiability will be supported because each
data item in all datasets will have a unique identifier (IRI 16 ) accessible
through the Web.
**Experiment-related data** will be published in Zenodo, in turn connected
with OpenAIRE and every major indexer of scientific documents. Eventually,
small pieces of data will also be available from source code repositories
(e.g. a Gitlab instance hosted on the premises of the coordinating institution
in Ireland).
</td> </tr> </table>
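A minimal sketch of such a DCAT-style dataset description, written as Turtle
and parsed with rdflib; every IRI and literal, including the portal address,
is illustrative only.

```python
from rdflib import Graph

# An illustrative DCAT description of a catalogued language resource.
ttl = """
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<https://data.pret-a-llod.example/dataset/sample-lexicon>
    a dcat:Dataset ;
    dct:title "Sample multilingual lexicon" ;
    dct:language "en", "ga" ;
    dcat:keyword "lexicon", "linked data" ;
    dct:license <https://creativecommons.org/licenses/by/4.0/> .
"""

g = Graph()
g.parse(data=ttl, format="turtle")
print(f"{len(g)} triples describing the dataset")
```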
<table>
<tr>
<th>
</th>
<th>
_Figure 1. Metadata elements in the DCAT specification_
</th> </tr>
<tr>
<td>
b) What naming conventions do you follow?
</td> </tr>
<tr>
<td>
</td>
<td>
We defined the following two policies:
1. Identification of datasets. Datasets are identified by a slug (a user-friendly, URL-valid name of a resource).
2. URI minting policy, to be decided at a later stage of the project.
</td> </tr>
<tr>
<td>
c) Will search keywords be provided that optimize possibilities for re-use?
</td> </tr>
<tr>
<td>
</td>
<td>
The use of keywords is natural in the Prêt-à-LLOD Data Portal and in Zenodo.
Zenodo’s commitment to FAIR policies is made explicit 17 .
</td> </tr>
<tr>
<td>
d) Do you provide clear version numbers?
</td> </tr>
<tr>
<td>
</td>
<td>
The use of semantic versioning is inherent to Zenodo. The Prêt-à-LLOD Data
Portal will also provide versioning and provenance mechanisms.
</td> </tr>
<tr>
<td>
e) What metadata will be created?
</td> </tr>
<tr>
<td>
</td>
<td>
The stored data are described using the standard metadata schemas Qualified
Dublin Core and DCAT.
Zenodo's metadata is compliant with DataCite's Metadata Schema minimum and
recommended terms, with a few additional enrichments 18 .
</td> </tr>
<tr>
<td>
**2.2 Making data openly accessible**
</td> </tr>
<tr>
<td>
a) Which data produced and / or used in the project will be made openly
available as the default?
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
By default, all metadata in Zenodo and Prêt-à-LLOD Data Portal are openly
available as soon as the record is published.
</th> </tr>
<tr>
<td>
b)
</td>
<td>
How will the data be made accessible (e.g. by deposition in a repository)?
</td> </tr>
<tr>
<td>
</td>
<td>
All data are stored in Zenodo and the Prêt-à-LLOD Data Portal. All metadata in
Zenodo and the Prêt-à-LLOD Data Portal are publicly available in an Open
Access modality. Eventually, language resources created by Prêt-à-LLOD will be
introduced in the well-known language resources catalogues (OLAC, LRE Map,
META-SHARE, Clarin, Retele).
</td> </tr>
<tr>
<td>
c)
</td>
<td>
What methods or software tools are needed to access the data?
</td> </tr>
<tr>
<td>
</td>
<td>
The extensive use of open specifications and consolidated standards ensures
that no special software tools are needed to access the data. Eventually,
experiment-related data may require additional software (e.g. GATE 19 ).
</td> </tr>
<tr>
<td>
d)
</td>
<td>
Is documentation about the software needed to access the data included?
</td> </tr>
<tr>
<td>
</td>
<td>
Not necessary for the time being.
</td> </tr>
<tr>
<td>
e)
</td>
<td>
Is it possible to include the relevant software (e.g. in open source code)?
</td> </tr>
<tr>
<td>
</td>
<td>
Not necessary for the time being.
</td> </tr>
<tr>
<td>
f)
</td>
<td>
Where will the data and associated metadata, documentation and code be
deposited?
</td> </tr>
<tr>
<td>
</td>
<td>
The following data stores are foreseen:
― The Prêt-à-LLOD Data Portal defined in Section 1.2, hosted in Ireland, for
catalogue data and some newly generated resources.
― A Gitlab instance, hosted in Ireland, for small datasets.
― Zenodo for research-related data.
Data will also be mirrored, whenever possible, in projects with whom liaisons
will be established. In particular, relevant data will also be passed to the
ELG (European Language Grid 20 ) project, “Towards the Primary Platform for
Language Technologies in Europe”.
</td> </tr>
<tr>
<td>
g)
</td>
<td>
Have you explored appropriate arrangements with the identified repository?
</td> </tr>
<tr>
<td>
</td>
<td>
The aforementioned repositories are either self-managed by Prêt-à-LLOD
partners or already intended for these purposes.
Formal arrangements with ELG are still pending.
</td> </tr>
<tr>
<td>
h)
</td>
<td>
If there are restrictions on use, how will access be provided?
</td> </tr>
<tr>
<td>
</td>
<td>
No restrictions have been identified at this stage, but the commercial
interest of the partners may lead to the creation of private data.
</td> </tr> </table>
<table>
<tr>
<th>
i) Is there a need for a data access committee?
</th> </tr>
<tr>
<td>
</td>
<td>
No. The partner institutions’ rules governing data access will be followed and
implemented, together with the FAIR principles adopted by this Plan.
</td> </tr>
<tr>
<td>
j) Are there well described conditions for access (i.e. a machine readable
license)?
</td> </tr>
<tr>
<td>
</td>
<td>
Licenses in the Prêt-à-LLOD Data Portal are represented in a machine-readable
form, using the most common metadata descriptor (dct:license, see Figure 1)
pointing to the standard licenses’ URLs.
Whenever linked data is published, standard practices will be followed to
publish the rights information [5].
Moreover, in some cases, a fully machine-readable representation of the
licenses is given using the Open Digital Rights Language (ODRL) 21 ; a sketch
is given after this table. Licenses from the RDFLicense dataset are also used
[4].
</td> </tr>
<tr>
<td>
k) How will the identity of the person accessing the data be ascertained?
</td> </tr>
<tr>
<td>
</td>
<td>
Not necessary for the time being.
</td> </tr>
<tr>
<td>
**2.3 Making data interoperable**
</td> </tr>
<tr>
<td>
a) Are the data produced in the project interoperable?
</td> </tr>
<tr>
<td>
</td>
<td>
Both Zenodo and the Prêt-á-LLOD Data Portal use standard interfaces, protocols
and metadata, etc. Using standard metadata schemas in Zenodo, metadata can
easily be converted into other metadata schemas.
</td> </tr>
<tr>
<td>
b) What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?
</td> </tr>
<tr>
<td>
</td>
<td>
DCAT (described above) and the CKAN schema 22 based on it. Linghub currently
makes use of the META-SHARE OWL ontology [6].
</td> </tr>
<tr>
<td>
c) Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?
</td> </tr>
<tr>
<td>
</td>
<td>
Yes, see above (2.3.b).
</td> </tr>
<tr>
<td>
d) In case it is unavoidable that you use uncommon or generate project
specific ontologies or vocabularies, will you provide mappings to more
commonly used ontologies?
</td> </tr>
<tr>
<td>
</td>
<td>
The use of RDF as the meta-format grants the easy definition of links between
equivalent metadata elements.
</td> </tr>
<tr>
<td>
**2.4 Increase data re-use (through clarifying licences)**
</td> </tr>
<tr>
<td>
a) How will the data be licensed to permit the widest re-use possible?
</td> </tr>
<tr>
<td>
</td>
<td>
Open by default, using the CC-BY license (Creative Commons Attribution 4.0
International 23 ) unless this hampers the business model of our partners.
</td> </tr> </table>
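As announced in Section 2.2 (j) above, a fully machine-readable ODRL policy
granting open use could be sketched in JSON-LD roughly as follows; the policy
IRI and target are placeholders.

```python
import json

# A hypothetical ODRL policy permitting use of an open dataset.
policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Set",
    "uid": "https://data.pret-a-llod.example/policy/1",  # placeholder IRI
    "permission": [{
        "target": "https://data.pret-a-llod.example/dataset/sample-lexicon",
        "action": "use",
    }],
}

print(json.dumps(policy, indent=2))
```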
<table>
<tr>
<th>
b) When will the data be made available for re-use?
</th> </tr>
<tr>
<td>
</td>
<td>
Data will be made available as soon as it is created and no data embargoes
are foreseen.
</td> </tr>
<tr>
<td>
c) Are the data produced and / or used in the project useable by third
parties, in particular after the end of the project?
</td> </tr>
<tr>
<td>
</td>
<td>
Making data ready to use is the motto of this project, and every possible
measure will be taken to maximize its usability.
</td> </tr>
<tr>
<td>
d) How long is it intended that the data remains re-usable?
</td> </tr>
<tr>
<td>
</td>
<td>
Data in the Prêt-à-LLOD Data Portal may not be supported after the end of the
project, but because it will be mirrored in the ELG, long-term preservation
will be possible.
Research data will enjoy long-term preservation, as it will be uploaded to
Zenodo.
</td> </tr>
<tr>
<td>
e) Are data quality assurance processes described?
</td> </tr>
<tr>
<td>
</td>
<td>
No. Future versions of this DMP may include a definition of such process.
</td> </tr> </table>
2.3. Allocation of resources
<table>
<tr>
<th>
**3 Allocation of resources**
</th> </tr>
<tr>
<td>
a) What are the costs for making data FAIR in your project?
</td> </tr>
<tr>
<td>
</td>
<td>
None that is not foreseen in the Grant Agreement: making data FAIR is an
explicit objective of this project.
</td> </tr>
<tr>
<td>
b) How will these be covered?
</td> </tr>
<tr>
<td>
</td>
<td>
Not applicable.
</td> </tr>
<tr>
<td>
c) Who will be responsible for data management in your project?
</td> </tr>
<tr>
<td>
</td>
<td>
Víctor Rodríguez Doncel (UPM) will be responsible for the management of open
data in this project. The management of private data will be the
responsibility of the partners that produced it.
</td> </tr>
<tr>
<td>
d) Are the resources for long term preservation discussed?
</td> </tr>
<tr>
<td>
</td>
<td>
The cooperation agreements with the ELG project are headed towards long term
preservation.
</td> </tr> </table>
2.4. Data security
<table>
<tr>
<th>
**4 Data security**
</th> </tr>
<tr>
<td>
a) Is the data safely stored in certified repositories for long term
preservation and curation?
</td> </tr>
<tr>
<td>
</td>
<td>
Most data (catalogue data, newly created resources) will be published under an
open license. This data does not need any special security measures. In case
partners generate private data with personal information, security measures
</td> </tr>
<tr>
<td>
</td>
<td>
will have to be adopted to comply with the General Data Protection Regulation
(GDPR, Regulation (EU) 2016/679).
</td> </tr>
<tr>
<td>
b) What provisions are in place for data security?
</td> </tr>
<tr>
<td>
</td>
<td>
Not yet described.
</td> </tr> </table>
2.5. Ethical aspects
<table>
<tr>
<th>
**5 Ethical aspects**
</th> </tr>
<tr>
<td>
a) Are there any ethical or legal issues that can have an impact on data
sharing?
</td> </tr>
<tr>
<td>
</td>
<td>
Ethical aspects have been extensively documented in Prêt-à-LLOD deliverables
“D7.1 Ethics Requirements I” and in “D7.4 Ethics Requirements 4”.
</td> </tr>
<tr>
<td>
b) Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?
</td> </tr>
<tr>
<td>
</td>
<td>
See Prêt-à-LLOD deliverable “D7.1. Ethics Requirements I”.
</td> </tr> </table>
2.6. Other issues
<table>
<tr>
<th>
**6 Other issues**
</th> </tr>
<tr>
<td>
a) Do you make use of other national / funder / sectorial / departmental
procedures for data management?
</td> </tr>
<tr>
<td>
</td>
<td>
― National University of Ireland Galway (NUIG) is subject to an _“Insight Open
Source Release Process”_ procedure.
_―_ Universidad Politécnica de Madrid (UPM) is subject to “ _Normativa sobre
protección de resultados de investigación de la Universidad Politécnica de
Madrid_ ” and “ _Reglamento del comité de ética de actividades i+d+i de la
Universidad Politécnica de Madrid_ ”.
These procedures are compatible with the provisions made in this data
management plan.
</td> </tr> </table>
0107_MyoChip_801423.md
# Data Management Plan Overview
This document constitutes Deliverable 3.2 “Data Management Plan” of the
project “Building a 3D innervated and irrigated muscle on a chip”, hereinafter
referred to as MyoChip, funded by the European Union’s
H2020-FETOPEN-2016-2017 programme under Grant Agreement number 801423.
According to the European Commission's suggested guidelines, participating
projects are required to develop a Data Management Plan (DMP). This Data
Management Plan describes the types of data that will be generated during the
project, the standards that will be used to generate and store data, and the
ways data will be shared, reused and preserved. It is important to note that
the DMP is not a closed document; on the contrary, it will be
adjusted/corrected throughout the duration of the project.
# Open Research Data Pilot
The Open Research Data Pilot is part of the Open Access to Scientific
Publications and Research Data Program in H2020. It aims to improve and
maximize access to data generated by Horizon 2020 projects as well as to
promote its reuse. As mentioned in Article 29.3 of the Grant Agreement, this
pilot also takes into account the need to carefully balance openness with
protection of scientific information. Taking all of this into consideration,
the MyoChip consortium will decide what information will be made public on a
case-by-case basis, always making a careful analysis of potential conflicts
with commercialization, intellectual property rights protection, etc.
Effective dissemination and exploitation of MyoChip results depend on proper
management of data and its intellectual property. The terms and conditions
pertaining to ownership, access rights, exploitation of background and
dissemination of results are defined in the Consortium Agreement signed by
all the partners.
Some examples of sections in our Consortium Agreement are:
8. Section: Results
1. Ownership of Results
Results are owned by the Party that generates them. Where Results are
generated from work carried out jointly by two or more Parties and it is not
possible for the purpose of applying for, obtaining and/or maintaining the
relevant patent protection or any other intellectual property right, to
separate the contributions made by the respective Parties to such Results,
those Parties shall have joint ownership of such Results.
2. Joint ownership
Joint ownership is governed by Grant Agreement Article 26.2 with the following
additions:
Unless otherwise agreed:
each of the joint owners shall be entitled to use their jointly owned Results
for non-commercial research activities on a royalty-free basis, and without
requiring the prior consent of the other joint owner(s); and
joint ownership agreements should be formalized between the specific partners,
before the first exploitation act of the Joint Results concerned, to regulate
the share, protection and commercial exploitation of project results.
3. Transfer of Results
8.3.1
Each Party may transfer ownership of its own Results following the procedures
of the Grant Agreement Article °30.
8.3.2
It may identify specific third parties it intends to transfer the ownership of
its Results to in Attachment (3) to this Consortium Agreement. The other
Parties hereby waive their right to prior notice and their right to object to
a transfer to listed third parties according to the Grant Agreement Article
30.1.
8.3.3
The transferring Party shall, however, at the time of the transfer, inform the
other Parties of such transfer and shall ensure that the rights of the other
Parties will not be affected by such transfer. Any addition to Attachment (3)
after signature of this Agreement requires a decision of the General Assembly.
8.3.4
The Parties recognize that in the framework of a merger or an acquisition of
an important part of its assets, it may be impossible under applicable EU and
national laws on mergers and acquisitions for a Party to give the full 45
calendar days prior notice for the transfer as foreseen in the Grant
Agreement.
8.3.5
The obligations above apply only for as long as other Parties still have - or
still may request - Access Rights to the Results.
8.4 Dissemination
8.4.1
For the avoidance of doubt, nothing in this Section 8.4 has impact on the
confidentiality obligations set out in Section 10.
8.4.2 Dissemination of own Results
8.4.2.1
During the Project and for a period of 2 year after the end of the Project,
the dissemination of own Results by one or several Parties including but not
restricted to publications, presentations, data and related metadata, shall be
governed by the procedure of Article 29.1 of the Grant Agreement subject to
the following provisions.
Prior notice of any planned publication shall be given to the other Parties at
least 45 calendar days before the publication. Any objection to the planned
publication shall be made in accordance with the Grant Agreement in writing to
the Coordinator and to the Party or Parties proposing the dissemination within
30 calendar days after receipt of the notice. If no objection is made within
the time limit stated above, the publication is permitted.
8.4.2.2
An objection is justified if
(a) the protection of the objecting Party's Results or Background would be
adversely affected (b) the objecting Party's legitimate interests in relation
to the Results or Background would be significantly harmed.
The objection has to include a precise request for necessary modifications.
8.4.2.3
If an objection has been raised the involved Parties shall discuss how to
overcome the justified grounds for the objection on a timely basis (for
example by amendment to the planned publication and/or by protecting
information before publication) and the objecting Party shall not unreasonably
continue the opposition if appropriate measures are taken following the
discussion.
8.5
The objecting Party can request a publication delay of not more than 90
calendar days from the time it raises such an objection. After 90 calendar
days the publication is permitted.
8.5.1 Dissemination of another Party’s unpublished Results or Background
A Party shall not include in any dissemination activity another Party's
Results or Background without obtaining the owning Party's prior written
approval, unless they are already made public.
8.5.2 Cooperation obligations
The Parties undertake to cooperate to allow the timely submission,
examination, publication and defence of any dissertation or thesis for a
degree that includes their Results or Background subject to the
confidentiality and publication provisions agreed in this Consortium
Agreement.
8.5.3 Use of names, logos or trademarks
Nothing in this Consortium Agreement shall be construed as conferring rights
to use in advertising, publicity or otherwise the name of the Parties or any
of their logos or trademarks without their prior written approval.
9. Section: Access Rights
9.1 Background included
9.1.1
In Attachment 1, the Parties have identified and agreed on the Background for
the Project and have also, where relevant, informed each other that Access to
specific Background is subject to legal restrictions or limits.
Anything not identified in Attachment 1 shall not be the object of Access
Right obligations regarding Background.
9.1.2
Any Party may add further own Background to Attachment 1 during the Project by
written notice to the other Parties. However, approval of the General Assembly
is needed should a Party wish to modify or withdraw its Background in
Attachment 1.
9.2 General Principles
9.2.1
Each Party shall implement its tasks in accordance with the Consortium Plan
and shall bear sole responsibility for ensuring that its acts within the
Project do not knowingly infringe third party property rights.
9.2.2
Any Access Rights granted expressly exclude any rights to sublicense unless
expressly stated otherwise.
9.2.3
Access Rights are granted on a non-exclusive basis.
9.2.4
Results and Background shall be used only for the purposes for which Access
Rights to it have been granted.
9.2.5
Requests for Access Rights shall be made, and Access Rights granted, in
writing. The granting of Access Rights may be made conditional on the
acceptance of specific conditions aimed at ensuring that these rights will be
used only for the intended purpose and that appropriate confidentiality
obligations are in place.
9.2.6
The requesting Party must show that the Access Rights are Needed.
9.3 Access Rights for implementation
9.3.1 Access Rights to Results and Background Needed for the performance of
the own work of a Party under the Project shall be granted on a royalty-free
basis, unless otherwise provided for in Attachment 1.
9.3.2
If the Background or Results take the form of non-consumable materials,
including but not limited to instruments, databases, software, protocols, said
background or Results will be returned to the providing Party by the receiving
Party at the end of the project, and eventual copies of said materials will be
destroyed by the receiving party. Notwithstanding the above provisions, the
Receiving and the providing party may negotiate conditions in which said
materials will be kept by the receiving Party, at fair and reasonable
conditions
9.4 Access Rights for Exploitation
9.4.1 Access Rights to Results
Access Rights to Results if Needed for commercial Exploitation of a Party's
own Results shall be granted on Fair and Reasonable conditions.
Provided the Party(ies) (co) owner(s) of the Results concerned gives its prior
written consent, Access rights to Results for internal and/or non-commercial
collaborative research activities shall be granted on a royalty-free basis for
academic Parties and on financial conditions for industrial Parties.
9.4.2
Access Rights to Background if Needed for Exploitation of a Party’s own
Results, including for research on behalf of a third party, shall be granted
on Fair and Reasonable conditions.
9.4.3
A request for Access Rights may be made up to twelve months after the end of
the Project or, in the case of Section 9.7.2.1.2, after the termination of the
requesting Party’s participation in the Project.
9.4.4
A Member which can show that its own liabilities, intellectual property rights
or other legitimate interests would be severely affected by the granting of
such access right, or that granting such access right would infringe its legal
obligations, will have the right to refuse such access right to Results or
Background.
The Member that wishes to exercise the veto must identify the exact rights
that will be affected by the decision, quantify the damages and identify the
moment when the damages will actually occur.
9.5 Access Rights for Affiliated Entities
Affiliated Entities have Access Rights under the conditions of the Grant
Agreement Articles 25.4 and 31.4. if they are identified in Attachment 4
(Identified Affiliated Entities) to this Consortium Agreement.
Such Access Rights must be requested directly by the Affiliated Entity from
the Party that holds the Background or Results.
Alternatively, the Party granting the Access Rights may individually agree
with the Party requesting the Access Rights to have the Access Rights include
the right to sublicense to the latter's Affiliated Entities listed in
Attachment 4. Access Rights to Affiliated Entities shall be granted on Fair
and Reasonable conditions and upon written bilateral agreement.
Affiliated Entities which obtain Access Rights in return fulfil all
confidentiality and other obligations accepted by the Parties under the Grant
Agreement or this Consortium Agreement as if such Affiliated Entities were
Parties.
Access Rights may be refused to Affiliated Entities if such granting is
contrary to the legitimate interests of the Party which owns the Background or
the Results.
Access Rights granted to any Affiliated Entity are subject to the continuation
of the Access Rights of the Party to which it is affiliated, and shall
automatically terminate upon termination of the Access Rights granted to such
Party.
Upon cessation of the status as an Affiliated Entity, any Access Rights
granted to such former Affiliated Entity shall lapse. Further arrangements
with Affiliated Entities may be negotiated in separate agreements.
9.6 Additional Access Rights
The Parties agree to negotiate in good faith any additional Access Rights to
Results as might be asked for by any Party, upon adequate financial conditions
to be agreed.
9.7 Access Rights for Parties entering or leaving the consortium
9.7.1 New Parties entering the consortium
As regards Results developed before the accession of the new Party, the new
Party will be granted Access Rights on the conditions applying for Access
Rights to Background.
9.7.2 Parties leaving the consortium
9.7.2.1 Access Rights granted to a leaving Party
9.7.2.1.1 Defaulting Party
Access Rights granted to a Defaulting Party and such Party's right to request
Access Rights shall cease immediately upon receipt by the Defaulting Party of
the formal notice of the decision of the General Assembly to terminate its
participation in the consortium.
9.7.2.1.2 Non-defaulting Party
A non-defaulting Party leaving voluntarily and with the other Parties' consent
shall have Access Rights to the Results developed until the date of the
termination of its participation.
It may request Access Rights within the period of time specified in Section
9.4.3.
9.7.2.2 Access Rights to be granted by any leaving Party
Any Party leaving the Project shall continue to grant Access Rights pursuant
to the Grant Agreement and this Consortium Agreement as if it had remained a
Party for the whole duration of the Project.
9.8 Specific Provisions for Access Rights to Software
9.8.1 Definitions relating to Software
“Application Programming Interface”
means the application programming interface materials and related
documentation containing all data and information to allow skilled Software
developers to create Software interfaces that interface or interact with other
specified Software.
"Controlled Licence Terms" means terms in any licence that require that the
use, copying, modification and/or distribution of Software or another work
(“Work”) and/or of any work that is a modified version of or is a derivative
work of such Work (in each case, “Derivative Work”) be subject, in whole or in
part, to one or more of the following:
1. (where the Work or Derivative Work is Software) that the Source Code or
2. other formats preferred for modification be made available as of right to any third party on request, whether royaltyfree or not;
3. that permission to create modified versions or derivative works of the Work or Derivative Work be granted to any third party;
4. that a royalty-free licence relating to the Work or Derivative Work be granted to any third party.
For the avoidance of doubt, any Software licence that merely permits (¬but
does not require any of) the things mentioned in (a) to (c) is not a
Controlled Licence (and so is an Uncontrolled Licence).
“Object Code” means software in machine-readable, compiled and/or executable
form including, but not limited to, byte code form and in form of machine-
readable libraries used for linking procedures and functions to other
software.
“Software Documentation” means software information, being technical
information used, or useful in, or relating to the design, development, use or
maintenance of any version of a software programme.
“Source Code” means software in human readable form normally used to make
modifications to it including, but not limited to, comments and procedural
code such as job control language and scripts to control compilation and
installation.
9.8.2 General principles
For the avoidance of doubt, the general provisions for Access Rights provided
for in this Section 9 are applicable also to Software as far as not modified
by this Section 9.8.
Parties’ Access Rights to Software do not include any right to receive Source
Code or Object Code ported to a certain hardware platform or any right to
receive Source Code, Object Code or respective Software Documentation in any
particular form or detail, but only as available from the Party granting the
Access Rights.
The intended introduction of Intellectual Property (including, but not limited
to, Software) under Controlled Licence Terms in the Project requires the
approval of the General Assembly to implement such introduction into the
Consortium Plan.
9.8.3 Access to Software
Access Rights to Software that is Results shall comprise:
* Access to the Object Code; and,
* where normal use of such an Object Code requires an Application Programming Interface (hereafter API), Access to the Object Code and such an API; and,
* if a Party can show that the execution of its tasks under the Project or the Exploitation of its own Results is technically or legally impossible without Access to the Source Code, Access to the Source Code to the extent necessary.
Background shall only be provided in Object Code unless otherwise agreed
between the Parties concerned.
9.8.4 Software licence and sublicensing rights
9.8.4.1 Object Code
9.8.4.1.1 Results - Rights of a Party
Where a Party has Access Rights to Object Code and/or API that is Results for
Exploitation, such Access shall, in addition to the Access for Exploitation
foreseen in Section 9.4, as far as Needed for the Exploitation of the Party’s
own Results, comprise the right:
* to make an unlimited number of copies of Object Code and API; and
* to distribute, make available, market, sell and offer for sale such Object Code and API as part of or in connection with products or services of the Party having the Access Rights;
provided however that any product, process or service has been developed by
the Party having the Access Rights in accordance with its rights to exploit
Object Code and API for its own Results.
If it is intended to use the services of a third party for the purposes of
this Section 9.8.4.1.1, the Parties concerned shall agree on the terms thereof
with due observance of the interests of the Party granting the Access Rights
as set out in Section 9.2 of this Consortium Agreement.
9.8.4.1.2 Results - Rights to grant sublicenses to end-users
In addition, Access Rights to Object Code shall, as far as Needed for the
Exploitation of the Party’s own Results, comprise the right to grant in the
normal course of the relevant trade to end-user customers buying/using the
product/services, a sublicense to the extent as necessary for the normal use
of the relevant product or service to use the Object Code as part of or in
connection with or integrated into products and services of the Party having
the Access Rights and, as far as technically essential:
* to maintain such product/service;
* to create for its own end-use interacting interoperable software in accordance with Directive 2009/24/EC of the European Parliament and of the Council of 23 April 2009 on the legal protection of computer programs.
9.8.4.1.3 Background
For the avoidance of doubt, where a Party has Access Rights to Object Code
and/or API that is Background for Exploitation, Access Rights exclude the
right to sublicense. Such sublicensing rights may, however, be negotiated
between the Parties.
9.8.4.2 Source Code
9.8.4.2.1 Results - Rights of a Party
Where, in accordance with Section 9.8.3, a Party has Access Rights to Source
Code that is Results for Exploitation, Access Rights to such Source Code, as
far as Needed for the Exploitation of the Party’s own Results, shall comprise
a worldwide right to use, to make copies, to modify, to develop, to
create/market a product/process and to create/provide a service.
If it is intended to use the services of a third party for the purposes of
this Section 9.8.4.2.1, the Parties shall agree on the terms thereof, with due
observance of the interests of the Party granting the Access Rights as set out
in Section 9.2 of this Consortium Agreement.
9.8.4.2.2 Results – Rights to grant sublicenses to end-users
In addition, Access Rights, as far as Needed for the Exploitation of the
Party’s own Results, shall comprise the right to sublicense such Source Code,
but solely for purpose of adaptation, error correction, maintenance and/or
support of the Software. Further sublicensing of Source Code is explicitly
excluded.
9.8.4.2.3 Background
For the avoidance of doubt, where a Party has Access Rights to Source Code
that is Background for Exploitation, Access Rights exclude the right to
sublicense. Such sublicensing rights may, however, be negotiated between the
Parties.
9.8.5 Specific formalities
Each sublicense granted according to the provisions of Section 9.8.4 shall be
made by a traceable agreement specifying and protecting the proprietary rights
of the Party or Parties concerned.
# FAIR Data
Beneficiaries must ensure that their research data respect FAIR principles,
i.e. they are Findable, Accessible, Interoperable and Reusable (FAIR).
## Findable
Partners agree to label all data with a name, date and keywords to facilitate
data search and retrieval. All metadata files should be kept together with the
raw data they describe.
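As an illustration, a sidecar metadata file can keep the name, date and
keywords next to the raw data file it describes. The sketch below is a minimal
example, not an agreed MyoChip tool; the file names, fields and the
`.meta.json` convention are assumptions.

```python
import json
from datetime import date
from pathlib import Path

def write_sidecar_metadata(data_file: str, author: str, keywords: list) -> Path:
    """Write a JSON metadata file next to the raw data file it describes."""
    data_path = Path(data_file)
    metadata = {
        "name": data_path.name,            # label: file name
        "date": date.today().isoformat(),  # label: creation date
        "keywords": keywords,              # labels used for search & find
        "author": author,
    }
    sidecar = Path(str(data_path) + ".meta.json")  # kept together with raw data
    sidecar.write_text(json.dumps(metadata, indent=2))
    return sidecar

# Hypothetical example: describe a confocal microscopy video file.
write_sidecar_metadata("experiment_042.czi", "Partner A", ["myotube", "confocal"])
```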
We are currently working with iMM's Communication and IT teams to create the
new MyoChip website. The website will include a web platform that allows each
partner to access data stored on a server protected by login and password.
## Accessible
Data will be made "as open as possible, as closed as necessary". Open data
will be made accessible by publication and/or deposition in a repository such
as Zenodo (https://zenodo.org).
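As an illustration of depositing open data on Zenodo, the sketch below follows
Zenodo's documented REST deposit flow at the time of writing; the token, file
name and metadata values are placeholders, and the final publish step is
omitted.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "REPLACE_WITH_PERSONAL_ACCESS_TOKEN"  # placeholder

# 1. Create an empty deposition.
r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2. Upload the data file into the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("dataset.csv", "rb") as fh:
    requests.put(f"{bucket_url}/dataset.csv", data=fh,
                 params={"access_token": TOKEN}).raise_for_status()

# 3. Attach minimal metadata; publishing (a further POST) is left out here.
metadata = {"metadata": {
    "title": "MyoChip example dataset",  # placeholder values
    "upload_type": "dataset",
    "description": "Open data deposited under the MyoChip DMP.",
    "creators": [{"name": "MyoChip consortium"}],
}}
requests.put(f"{ZENODO_API}/{deposition['id']}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()
```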
Decisions on the specific identification of closed data, or of data subject to
embargo under IP policies, will be made in due time.
## Interoperable
All partners will use vocabularies that follow FAIR principles, i.e. that are
accessible and broadly applicable. Data/metadata files should include clear
references to other data/metadata files.
## Re-usable
Licensing policies will be defined once the general dissemination, IP
protection and exploitation policies have been drawn up more clearly.
Typically, a 6-12 month embargo may be applied after acceptance of the
relevant publications. Data will be made available and reusable through open
data repositories for periods of up to 10 years.
# MyoChip Data Set Description
The MyoChip project will generate mainly electronic data; however, some
records will also exist in handwritten form, for example in lab books. The
MyoChip project will ensure that all electronic files follow the FAIR policy
explained earlier. The expected size of the data across all partners could
amount to close to 100 TB over the entire project. The majority of the data
will come from software used for experimental setup, equipment and data
analysis. All partners have identified the datasets most likely to be produced
during the different phases of the project. These will be updated as necessary
in the next versions of the DMP. All types of data are listed in the table
below.
<table>
<tr>
<th>
**Type of**
**Data**
</th>
<th>
**Formats used on MyoChip**
</th>
<th>
**Source**
</th> </tr>
<tr>
<td>
**Documents**
</td>
<td>
.docx, .doc,.xlsx, .xls, .pptx,
.ppt, .pdf, .txt
</td>
<td>
* Protocols elaborated by the partners
* Project meetings (minutes, presentations, other supporting documents), exchange of ideas
* Group meeting discussions
* Literature review: references in a Zotero, EndNote, Mendeley or other database;
* Word documents with search details (databases, strategies, results) and reviews
</td> </tr>
<tr>
<td>
**Video files**
</td>
<td>
.gif, .avi, .mov, .mp4, .m4p, .mpe, .czi
</td>
<td>
Different microscopes and analysis software.
</td> </tr>
<tr>
<td>
**Digital images**
</td>
<td>
.tif, .tiff, .gif, .jpeg, .jpg, .jif, .jfif, .jp2, .jpx, .j2k, .j2c, .fpx,
.pcd, .png, .pdf, .czi, .sld, .ai, .lsm
</td>
<td>
Different microscopes and analysis software.
</td> </tr>
<tr>
<td>
**CAD files**
</td>
<td>
.dwg, .dxf, .gds, .cif, .stl, .step
</td>
<td>
Computer Assisted Design (CAD) files
</td> </tr>
<tr>
<td>
**Code**
</td>
<td>
</td>
<td>
Development of software
</td> </tr>
<tr>
<td>
**Database**
**files**
</td>
<td>
.sqlite, .enl, .data
</td>
<td>
Literature reference software such as Zotero, Endnote or Mendeley
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
.csv
</td>
<td>
Measurements / sensor outputs
</td> </tr> </table>
<table>
<tr>
<th>
**Reusing and Sharing**
</th>
<th>
**Archiving and preserving (including storage and backup)**
</th> </tr>
<tr>
<td>
All types of data are accessible to all the partners on demand. Partners share
and reuse data. Files will be shared at meetings.
</td>
<td>
The data will be stored by the partner collecting it (on their own computers
and/or institutional servers).
</td> </tr> </table>
0108_GENE-SWitCH_817998.md
DATA MANAGEMENT PLAN
**Project Number:** 817998
**Project Acronym:** GENE-SWitCH
**Project title:** The regulatory GENomE of SWine and CHicken: functional
annotation during development
**Author:** Peter Harrison, EMBL-EBI
**Version:** 1.1
**Disclaimer**
The information in this document is provided as is and no guarantee or
warranty is given that the information is fit for any particular purpose. The
user thereof uses the information at its sole risk and liability.
**History of changes**
<table>
<tr>
<th>
**VERSION**
</th>
<th>
**PUBLICATION DATE**
</th>
<th>
**CHANGES**
</th> </tr>
<tr>
<td>
**1.0**
</td>
<td>
06.12.2019
</td>
<td>
Initial version from WP3
</td> </tr>
<tr>
<td>
**1.1**
</td>
<td>
23.12.2019
</td>
<td>
Minor revisions from consortium comments
</td> </tr> </table>
# 1\. Data Summary
The GENE-SWitCH project aims to deliver to the livestock community two
functionally mapped monogastric genomes, for chicken and pig. These functional
maps and associated data collections will enable immediate translation into
the pig and poultry sectors for developments in sustainable production. For
example, within the project the datasets will be employed to evaluate the
effect of maternal diet on the epigenome of pig foetuses. A key aspect of the
generated datasets will be the extensive associated rich and controlled
metadata, information that is key for the development of phenome to genome
resources. The project will utilise existing pig and chicken datasets from the
FAANG collection and wider community from the public archives as test datasets
for the development of its openly developed bioinformatic pipelines and to
enrich its new functional maps; it will also use the existing reference
genomes from the community as a starting point for its own improved mapped
genomes. The tissue samples for sequencing are being collected for pig from
the INRA experimental unit GenESI in Lusignan, France and chicken samples from
the UK National Avian Research Facility (www.narf.ac.uk). In total the project
will generate data from 14 different assays, including both the core FAANG
assays and additionally DNA methylation, Hi-C and Whole Genome Sequencing
(Table 1). The Data Management Plan will be periodically updated throughout
the project to reflect changes in the data produced by the project and any
changes in storage and release. A future update will include the sizes of the
datasets produced by the project.
Table 1. The number of GENE-SWitCH assays and samples across developmental
stages.
All processed data generated in the project will be shared using standard
bioinformatic data file formats (i.e. FASTQ, FASTA, SAM/BAM, GFF/GTF, BED, and
VCF). The project will make extensive use of existing open-access legacy
datasets, and of additional datasets generated during the lifetime of the
project, identified and accessed from the EMBL-EBI public archives. Overall,
the project contributes to the global FAANG coordinated effort through i) the
delivery of high-quality functional genome maps for the pig and the chicken,
ii) the demonstrable impact of these new data resources on developments in the
breeding industry, and iii) the production of cutting-edge bioinformatic
pipelines and experimental techniques that will be of wide benefit to the
scientific research community and breeding industries.
# 2\. FAIR data
### 2.1. Making data findable, including provisions for metadata
The proposed data deposition of GENE-SWitCH data through FAANG to the EMBL-EBI
public archives will ensure the generated data is highly discoverable. GENE-
SWitCH utilises the FAANG metadata standards (
_https://data.faang.org/ruleset/samples#standard_ ). All data submissions
will be validated through the FAANG validation and submission tools, which are
themselves being updated as part of the GENE-SWitCH project and are accessible
at _https://data.faang.org/validation/samples_ .
Deposition in the public archives gives every data file a unique
accession. These accessions are globally recognised by the comparable archives
at the National Center for Biotechnology Information
(NCBI; _https://www.ncbi.nlm.nih.gov/_ ) and the DNA Databank of Japan (DDBJ;
_https://www.ddbj.nig.ac.jp/index-e.html_ ). Different assay files are
linked through the inclusion of the BioSamples identifier in all data
submissions so that all of the datasets generated on each sample can be easily
grouped and accessed from downstream presentation resources. GENE-SWitCH will
conform with the FAANG record naming conventions. The FAANG data portal
utilises ElasticSearch to ensure that all ontology validated metadata fields
are keyword searchable using its predictive simultaneous search across
samples, reads and analyses ( _https://data.faang.org/search_ ) . It will be
possible to search for GENE-SWitCH data as part of an overall search, or to
pre-limit the search to return only GENE-SWitCH project data results. The
data portal utilises the rich ontology supported metadata to provide filters
that allow a user to explore the GENE-SWitCH data based on species,
technology, breeds, sex, material, organism part, cell type, assay type,
archive, and sequencing instrument. All software will be appropriately
versioned using an agreed versioning structure from its coding standards
document of work package 2\.
### 2.2. Making data openly accessible
All samples and 'omics data will be deposited in the EMBL-EBI public archives,
which include BioSamples, the European Nucleotide Archive, the European
Variation Archive, PRIDE and the BioImage Archive. These are widely recognised
and approved repositories for the long-term storage of biological data, and
the deposition routes are established with the FAANG Data Coordination Centre
(DCC), which is itself based within the Molecular Archives cluster at EMBL-EBI.
Apart from the reserved right of first publication stipulated in the FAANG
Data Sharing statement ( _https://www.faang.org/data-share-principle_ ),
there are no restrictions on use of the data, no data access committee is
required and apart from anonymous usage analytics no tracking of individual
data use will be made. The following data sharing statement is available both
via the websites and Application Programmatic Interfaces (machine readable) of
the public archives and FAANG data portal.
_**"** This study is part of the FAANG project, promoting rapid prepublication
of data to support the research community. These data are released under Fort
Lauderdale principles, as confirmed in the Toronto Statement (Toronto
International Data Release Workshop. Birney et al. 2009. Pre-publication data
sharing. Nature 461:168-170). Any use of this dataset must abide by the FAANG
data sharing principles. Data producers reserve the right to make the first
publication of a global analysis of this data. If you are unsure if you are
allowed to publish on this dataset, please contact the FAANG Consortium
([email protected]) to enquire. The full guidelines can be found at _ _
_http://www.faang.org/data-share-principle_ . **"** _
The EMBL-EBI public archives are fully aware and accepting of incoming FAANG
data including the data of the GENE-SWitCH project. Whilst the FAANG metadata
is fully machine readable and the license is available to both web and
programmatic users, further improvements will be investigated to further
improve the machine readability, in collaboration with the requirements of the
other H2020 SFS30 projects. The FAANG DCC will investigate specific license
API endpoints, html embedding of license links and license structure
formatting to improve machine-based access, a key component of FAIR
compliance.
The submission model for GENE-SWitCH will make the data available for direct
download from both the FAANG data portal that utilises the underlying public
archives infrastructure and from the public archives themselves. This provides
by default a range of data access methods including web browser download, FTP,
Aspera, Globus and API access to give flexibility to data consumers. All of
these download options are open source and the archives have extensive
documentation on the various data access options. The FAANG data portal
collates the files from the various underlying archives to a single access
point. The FAANG API provides programmatic users with the access FTP addresses
to make a secondary call to download the data files themselves.
All GENE-SWitCH software will be publicly developed on the FAANG GitHub
repository, so that the development process is open to community input and
available pre-publication. All of the GENE-SWitCH repositories will be given
the prefix 'proj-gs-' within the FAANG GitHub repository (
_https://github.com/FAANG_ ) . GENE-SWitCH will ensure that in the FAANG data
portal the bioinformatic pipeline that was used to generate the analysis file
is linked from the analysis file results page. This ensures that the analysis
file, the raw data that generated it, the protocols and the bioinformatic
pipelines are all downloadable from the same location. The software will
include complete documentation, nextflow workflow management and be
containerised in Docker. No specific tools are required to access the data
from the data portals or the FAANG data portal, as they will use standard
accepted file formats of the public archives. The FAANG data portal will
provide a GENE-SWitCH project slice that will allow the data portal and
programmatic access interface to provide a bulk download of all GENE-SWitCH
data at once, this will be available at _http://data.faang.org/projects/gene-
switch_ .
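For illustration, the two-step pattern described above (query the portal API
for file records, then fetch each file from its FTP address) could look as
follows. The search route and response fields below are hypothetical; the
actual endpoints are described in the FAANG data portal documentation.

```python
import urllib.request
import requests

# Hypothetical search route and response fields -- consult the FAANG data
# portal documentation for the real API.
API_URL = "https://data.faang.org/api/file/_search"

hits = requests.get(API_URL, params={"q": "GENE-SWitCH"}).json()["hits"]["hits"]

for hit in hits:
    ftp_address = hit["_source"]["url"]        # e.g. "host/path/file", no scheme
    filename = ftp_address.rsplit("/", 1)[-1]
    # Secondary call: retrieve the data file itself from its FTP address.
    urllib.request.urlretrieve(f"ftp://{ftp_address}", filename)
```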
### 2.3. Making data interoperable
GENE-SWitCH data will be submitted through the FAANG DCC that will ensure the
data is interoperable with other FAANG datasets and highly reusable by the
wider livestock community. To ensure interoperability with all other FAANG
datasets, including the other three H2020 SFS30 projects, GENE-SWitCH will
employ the latest version of the FAANG metadata standards (and utilise all
future updates
future updates
to these standards), currently version 3.8
( _https://github.com/FAANG/dcc-metadata/tree/master/rulesets_ ), and in a more
readable form at _https://data.faang.org/ruleset/samples#standard_ . It will
ensure its compliance with these standards by running all data through the
FAANG validation software prior to submission to the public archives. GENE-
SWitCH will develop coding standards to ensure that all pipelines developed by
the consortium are easily utilised; they will be containerised to ease
installation and reuse. For its pipelines it will utilise open software
applications, which will be implemented with the Nextflow workflow manager and
containerised using Docker to ensure consistent reuse across the project and
by downstream users.
To ensure interdisciplinary interoperability GENE-SWitCH will utilise the
recommended ontologies of the FAANG metadata standards as set by the FAANG
Metadata and Data Sharing Committee. A specific action of the project will be
through the coordination of the FAANG DCC to improve the coverage and quality
of ontologies for use in livestock metadata recording, and the consortium will
publish a manuscript on the state of the art and usage of ontologies. Wherever
an ontology is not possible we will employ controlled lists to prevent
erroneous metadata recording. The ontologies that will be utilised in the
project will be:
OBI https://www.ebi.ac.uk/ols/ontologies/obi
NCBI Taxonomy https://www.ebi.ac.uk/ols/ontologies/ncbitaxon
EFO https://www.ebi.ac.uk/ols/ontologies/efo
LBO https://www.ebi.ac.uk/ols/ontologies/lbo
PATO https://www.ebi.ac.uk/ols/ontologies/pato
VT https://www.ebi.ac.uk/ols/ontologies/vt
ATOL https://www.ebi.ac.uk/ols/ontologies/atol
EOL https://www.ebi.ac.uk/ols/ontologies/eol
UBERON https://www.ebi.ac.uk/ols/ontologies/uberon
CL https://www.ebi.ac.uk/ols/ontologies/cl
BTO https://www.ebi.ac.uk/ols/ontologies/bto
CLO https://www.ebi.ac.uk/ols/ontologies/clo
SO https://www.ebi.ac.uk/ols/ontologies/so
GO https://www.ebi.ac.uk/ols/ontologies/go
NCIT https://www.ebi.ac.uk/ols/ontologies/ncit
CHEBI https://www.ebi.ac.uk/ols/ontologies/chebi
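All of these ontologies are browsable through the EBI Ontology Lookup Service
(OLS) linked above, which also exposes a search API; as an illustration, a
term label can be checked against an ontology before metadata submission. This
is a minimal sketch against the OLS search endpoint as documented at the time
of writing; the response fields may change.

```python
import requests

OLS_SEARCH = "https://www.ebi.ac.uk/ols/api/search"

def lookup_term(label: str, ontology: str = "uberon") -> dict:
    """Search the EBI Ontology Lookup Service for a term label."""
    r = requests.get(OLS_SEARCH, params={"q": label, "ontology": ontology})
    r.raise_for_status()
    docs = r.json()["response"]["docs"]
    if not docs:
        raise ValueError(f"No {ontology} term found for {label!r}")
    best = docs[0]  # OLS ranks close label matches first
    return {"label": best["label"], "id": best["obo_id"], "iri": best["iri"]}

# e.g. validate a tissue annotation before a FAANG metadata submission
print(lookup_term("liver"))  # expected: UBERON:0002107
```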
### 2.4. Increase data re-use (through clarifying licences)
GENE-SWitCH data will be publicly released in the EMBL-EBI archives at the
earliest opportunity and pre-publication. This will be submitted to the
archives without embargo so that it is immediately released to the public.
This is in accordance with the FAANG data sharing principles
( _https://www.faang.org/data-share-principle_ ) , that is based upon the
principles of the Toronto
( _https://www.nature.com/articles/461168a_ ) and Fort Lauderdale
( _https://www.genome.gov/Pages/Research/WellcomeReport0303.pdf_ )
agreements. This reserves the right for GENE-SWitCH to make the first
publication with the data, whether a dataset has an associated publication is
tracked clearly in the FAANG data portal ( _https://data.faang.org/home_ ) .
All datasets will be clearly labelled with these data sharing principles, with
the following statement:
_**"** This study is part of the FAANG project, promoting rapid prepublication
of data to support the research community. These data are released under Fort
Lauderdale principles, as confirmed in the Toronto Statement (Toronto
International Data Release Workshop. Birney et al. 2009. Pre-publication data
sharing. Nature 461:168-170). Any use of this dataset must abide by the FAANG
data sharing principles. Data producers reserve the right to make the first
publication of a global analysis of this data. If you are unsure if you are
allowed to publish on this dataset, please contact the FAANG Consortium
([email protected]) to enquire. The full guidelines can be found at _ _
_http://www.faang.org/data-share-principle_ . **"** _
This enables the wider community to immediately make use of the data that
GENE-SWitCH produces, providing maximal value to researchers. All software
developed by the consortium will be openly licensed for reuse; an example is
the GENE-SWitCH RNA-Seq pipeline ( _https://github.com/FAANG/proj-
gs-rna-seq/blob/master/LICENSE_ ). In accordance with the GENE-SWitCH coding
standards, this license file will be displayed in the root folder of all
repositories.
Data quality assurance processes and metrics will be investigated and
implemented by work package 2 as part of the pipeline development process.
It is intended that, through the accurate recording of metadata, associated
protocols and analysis software, and deposition in public archives, the data
will remain available long after the project grant ends, for the lifetime of
the underlying public archives. The data will therefore be reusable by any
party, although at some point the datasets may be superseded by those produced
on newer technologies. There will be no restriction on third-party use of the
data. The data generation work packages will apply the latest recommended
community standards for data quality, comply with any standards set by FAANG
working groups or the public archives, and, for the generation and execution
of bioinformatics analyses, will utilise the latest open-source, published and
recognised analysis software for the construction of the project's pipelines.
# 3\. Allocation of resources
GENE-SWitCH directly funds the activity of the FAANG Data Coordination Centre
(DCC) to conduct data management and coordination for the project. The
proposal has specific tasks and deliverables that will ensure the data
generated in the project will conform to FAIR data principles. This in
particular enhances the existing FAANG metadata standards, archival support
tools, data portal discovery and data visualisations to improve findability,
accessibility, interoperability and reusability of GENE-SWitCH data. These
enhancements will also benefit the entire FAANG community as improvements will
apply to all FAANG data. Thus the costs associated with ensuring GENE-SWitCH
data is FAIR have been fully factored into the costs provided to EMBL-EBI in
work package 3. All work packages that generate and analyse data have
appropriate funding for the accurate recording and provision of metadata,
through the validation and submissions software provided by the DCC. Data
management is the responsibility of the FAANG Data Coordination Centre at
EMBL-EBI that is operated by Peter Harrison and Guy Cochrane.
GENE-SWitCH will use the EMBL-EBI public archives for the long-term
preservation of its generated data; these resources have separate long-term
funding that will preserve the data long after the grant ends. The inclusion of
the data within the FAANG consortium data portal
( _https://data.faang.org/home_ ) and Ensembl browser (
_https://www.ensembl.org/index.html_ ) also ensures the functional annotation
of genomes will remain accessible by the community in the long term, as these
are likely to continue to receive separate funding.
# 4\. Data security
GENE-SWitCH will at the earliest opportunity submit all data to the public
archives at EMBL-EBI. Intermediate results and ongoing analyses will be
conducted and stored on the EMBL-EBI embassy cloud platform that is located in
the same data centre as the public archives. Access to the GENE-SWitCH
Embassy Cloud analysis platform is controlled by user-specific ssh keys issued
only to consortium members. As soon as an analysis is finished, it will be
submitted to
the relevant EMBL-EBI archive for immediate public release without embargo.
The EMBL-EBI archives are internationally recognised repositories for the
long-term secure storage of scientific data. The EMBL-EBI archives are
recognised ELIXIR Core Data Resources
( _https://elixir-europe.org/platforms/data/core-data-resources_ ). All data
will be assigned a unique identifier for long term identification and
preservation of the datasets. The EMBL-EBI data centres that host the public
archives providing the long-term data storage, and the embassy cloud platform
for the analysis and intermediate processing of GENE-SWitCH data are state of
the art. EMBL-EBI uses three discrete Tier III plus data centres in different
geographical locations to ensure long-term security. Research data is also
replicated through the International Nucleotide Sequence Database
Collaboration (INSDC; _http://www.insdc.org/_ ) agreements, which see the
data replicated at the National Center for Biotechnology Information (NCBI;
_https://www.ncbi.nlm.nih.gov/_ ) and the DNA Databank of Japan (DDBJ;
_https://www.ddbj.nig.ac.jp/index-e.html_ ), centres that agree to recognise
each other's accessioned datasets.
EMBL-EBI commits to store the data for the lifetime that the archives remain
active, this will be far beyond when the GENE-SWitCH grant ends, ensuring this
data remains available to the scientific community for years to come.
# 5\. Ethical aspects
The proposed GENE-SWitCH data management plan complies fully with all
international, EU and national legal and ethical requirements. GENE-SWitCH
data sharing and long-term preservation is not subject to informed consent.
GENE-SWitCH will fully comply with the General Data Protection Regulation for
its activities and web services.
# 6\. Other issues
As well as complying with H2020 procedures for data management, the GENE-
SWitCH project will abide by the data sharing policy of the Functional
Annotation of Animal Genomes (FAANG) coordinated action (
_https://www.faang.org/data-share-principle_ ) . This statement outlines the
expectations of all FAANG projects that contribute to the coordinated action
in terms of data recording, archiving and sharing. The statement includes the
principles of the Toronto ( _https://www.nature.com/articles/461168a_ ) and
Fort Lauderdale
( _https://www.genome.gov/Pages/Research/WellcomeReport0303.pdf_ )
agreements. The requirements set out in the FAANG data sharing principles do
not conflict with those imposed by the EU H2020 data management principles.
FAANG Data Sharing Statement
This document describes the principles of data sharing held by the FAANG
consortium. This document is subject to approval by the FAANG steering
committee. Any queries about this document should be sent to
[email protected] .
<table>
<tr>
<th>
**Definitions**
**Archive** means one of the archives hosted at the EBI, NCBI or DDBJ. These
include the ENA, GenBank, ArrayExpress and GEO. A full list of the FAANG
recommended archives is available as part of the FAANG metadata
recommendations.
**Submission** means data and metadata submission to one of the FAANG
recommended Archives.
**FAANG member** means an individual who has signed up to the FAANG consortium
through the FAANG website and agreed to the FAANG core principles.
**Data** means any assay or metadata generated for or associated with FAANG
experiments.
**Analysis** means any computational process where raw assay data is aligned,
transformed or combined to produce a new product.
**Internal** means data that is only accessible via the FAANG private shared
storage.
**Private** shared storage means a storage space hosted at EMBL-EBI that has
password access via FTP, aspera and Globus Grid FTP technologies.
**Public** means all data available through the FAANG public FTP site, which
has no password and is accessible to everyone.
</th> </tr> </table>
FAANG recognizes that quickly sharing the data generated by the consortium
with the wider community is a priority. Rapid data sharing before publication
ensures that everyone can benefit from the data created by FAANG and can take
advantage of improved understanding of the functional elements in these animal
genomes to aid their own research.
All raw data produced for a FAANG associated project will be submitted to the
archives without any hold until publication date, thus allowing the data to be
publicly available immediately after successful archive submission and useful
to the community as soon as possible.
The FAANG analysis group will turn the raw data into primary and integrated
analysis results. Primary analysis results consist of sample-level analyses,
such as alignment to a reference genome or quantification of signal in the
assay. Integrated analysis results represent analyses which draw together
data from multiple samples and/or experiments, such as genome segmentation or
differential analysis results.
The majority of these analysis results
will not be archived before publication but FAANG recognizes the need to share
them both within the consortium and with the community. Initially all files
that are not archived will be shared between FAANG members in private shared
storage hosted at the EMBL-EBI. Any individual who signs up to FAANG and
agrees to the **_Toronto principles_** will be allowed access to this.
There will be metadata files in the private data sharing area, which make
credit for different datasets as clear as possible.
FAANG expects to make multiple releases each year. A data release will involve
declaring a data freeze and copying all files associated with that data freeze
from the private shared storage to the public FTP site. In the first instance
these data freezes will contain the primary analysis results. As FAANG's
analyses progress, the data freeze will be expanded to include integrative
analysis too. The data freeze process will be coordinated by the FAANG Data
Coordination Centre and will be based on consultation with FAANG members.
FAANG will also aim to release all data associated with a paper before
publication even if it lies outside this standard freeze cycle. The public
data will be available to the whole community.
All FAANG public data is released under the **_Fort Lauderdale principles_**.
The FAANG website, data portal and FTP site will all have clear data reuse
statements on them.
When considering internal FAANG data, if one FAANG member wishes to publish
using data generated by another FAANG member they should first contact the
data generator and clarify the member's publication strategy. Collaboration is
for everyone's benefit and is strongly encouraged. The FAANG Steering
Committee commits to report to journal editors and the laboratories involved
any event that disregards the rights of data creators (including biological
measurements as well as analysis of such measurements).
All members of FAANG can and will continue to do experimental and analysis
work outside of FAANG and the other data generated is not required to meet the
same data sharing expectations.
Only FAANG data can be distributed through the private storage and public FTP
site.
0112_i-GRAPE_825521.md
# Executive Summary
The i-GRAPE project aims to develop a new generation of analytical methods for
stand-alone, on-the-field control of the grape maturation phase and vine
hydric stress, based on highly integrated photonics components and modules.
data generated and collected throughout the project. This will consider:
1. Handling of data collected before, during, and after the project,
2. Identification of the major data sources,
3. Methodology and standards for data management,
4. Data sharing policy,
5. Data curation and preservation.
Following the EU’s guidelines regarding the Data Management Plan, this
document may be updated - if appropriate - during the project lifetime.
# Introduction
The present document is deliverable D6.3, the Data Management Plan. It
establishes the framework and guidelines for managing the data generated and
collected during the project. The target audience of the document, being a
public deliverable, is the i-GRAPE consortium, the European Commission and the
general public. The document is composed of the sections below, with the
following rationale:
* Section 2 - definitions,
* Section 3 - data sources,
* Section 4 - data storage and backup,
* Section 5 - data management principles,
* Section 6 - conclusion and revision cycle.
# Definitions
**Data Management Plan (DMP):** a working document which outlines how all
available datasets will be handled both during the active project phase and
after its end.
**Data Owners:** the individuals or groups of individuals who are held
accountable for a dataset and who have legal ownership rights to a dataset
even though that dataset may have been collected, collated or disseminated by
another party.
**Dataset:** a collection of data created in the course of i-GRAPE project or
secondary data that can be published with due permission of the user. Datasets
are the fundamental unit of data management.
**Metadata:** Information about datasets stored in a repository. Metadata is
the term used to describe the summary information or characteristics of a set
of data. In general terms, this means the What, Who, Where, When and How of
the data. For example, in the particular case of data generated by i-GRAPE
sensors, this can include the area of geographic information, the grape
variety, or the temperature.
**Primary data:** original data that has been uploaded by the user
specifically for the purpose in mind.
**Secondary data:** data that was captured for a purpose other than the one in
mind and is being reused, usually in a different context.
# Data sources
Given the multidisciplinary character of the project, there are several
complementary sources of data that will be generated and collected during the
project. Three major data sources can be identified:
**Historical datasets:** collection of already existing time series data that
target vine hydric stress and grape maturation parameters relevant to the
development of the project. These datasets are owned and supplied by the
project’s end-user of i-GRAPE’s technology (Sogrape).
**Field datasets:** systematic collection of data generated during the
monitoring of grapes and / or vine plants during a season. This includes data
acquired by i-GRAPE sensors, reference instrumentation (e.g. benchtop and
portable optical instrumentation such as spectrophotometers or fluorometers),
and information generated by wet chemistry assays.
**Experimental datasets** : collection of data generated during the
experimental activities related to the project (e.g. simulations of electronic
circuits, simulations of optical nanostructures, optical data collected with
reference materials for standardization).
# Data storage and backup
The data to be produced within i-GRAPE project will be securely stored and
backups made regularly.
The legal principles related to data storage and backup will be stated in the
i-GRAPE Data Policy, as described below in section 5.2.
# Data management principles
This section provides information on i-GRAPE project principles for data
management, considering the benefits, drivers, principles and mechanisms
needed for data acquisition, storage, security, retrieval, dissemination,
archiving and disposal. The i-GRAPE key principles for data management are
described in the sub-sections below.
## Data lifecycle control
All i-GRAPE datasets will be managed in order to ensure that the data is
usable and securely stored in the i-GRAPE database.
## Data policy
The datasets will be acquired by all partners during the activities of the
project. Each partner is responsible for adding its datasets to the database
provided by INL.
Datasets will be managed by the i-GRAPE consortium under the procedures
established by this plan. Datasets will be periodically maintained in order to
make them usable in the long-term.
Intellectual Property Rights (IPR) management will specify any restrictions on
the use of the data. All data that is considered not essential for IPR
protection will be made accessible to the public.
## Metadata
All datasets produced during the i-GRAPE project will contain compiled
metadata summarizing the major characteristics of the dataset. The i-GRAPE
consortium will set guidelines for establishing consistent metadata across the
different datasets generated. This metadata record will allow correct
identification and suitable reuse / reprocessing of the dataset.
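As an illustration of such a record, the sketch below captures the
What/Who/Where/When/How of a hypothetical field dataset produced by an i-GRAPE
sensor; all field names and values (variety, coordinates, temperature) are
placeholders until the consortium fixes its metadata guidelines.

```python
import json
from datetime import datetime, timezone

# Placeholder record: field names and values are illustrative only.
record = {
    "what": {"dataset": "grape_maturation_2020", "variety": "Touriga Nacional"},
    "who": {"owner": "Sogrape", "collected_by": "i-GRAPE sensor node 7"},
    "where": {"region": "Douro", "latitude": 41.16, "longitude": -7.78},
    "when": datetime.now(timezone.utc).isoformat(),
    "how": {"instrument": "i-GRAPE optical sensor", "air_temperature_degC": 24.5},
}

# Store the metadata record next to the dataset it describes.
with open("grape_maturation_2020.meta.json", "w") as fh:
    json.dump(record, fh, indent=2)
```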
## Data access and dissemination
Access and dissemination of the data will comply with the following
principles:
* The right to use or provide access to data can be passed to a third party, subject to the dissemination policies.
* IPR management will define the access level of the data.
* Open Access will be applied to research data, upon assessing that:
* IPR, Copyright, and data ownership of the consortium and/or third party data are respected;
* No sensitive or confidential information is disclosed.
* The potential for re-use and exploitation of data will be considered.
* Public access to data available under i-GRAPE platform will be provided in compliance with the General Data Protection Regulation.
## Data audit
The i-GRAPE consortium will periodically perform audits of the datasets in
order to verify compliance with the policies of the present document.
# Conclusions and revision cycles
The Data Management Plan is currently under implementation and includes all
datasets identified in section 3. The i-GRAPE consortium is responsible for
verifying the application of rules stated in the present document.
This document should be reviewed according to the quality and quantity of data
generated throughout the project. The revision cycles should coincide with
the consortium meetings.
0113_SACOC_831977.md
# DATASETS
At this stage of the project, the following datasets are envisaged:
## REFERENCE DESIGN
The reference design (v0) consists of a flat plate with no fins or any other
heat exchanger geometry; it is intended to validate the flow conditions inside
the wind tunnel and the heat transfer.
### Experimental data: IDs **data_v0_exp_UPV** and **data_v0_exp_Purdue**
Table 1 _metadata for dataset Data_v0_exp_UPV_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
Experimental data for the reference design
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
UPV
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Reference design v0 (flat plate)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the experimental data of the reference design
gathered at UPV with the purpose of providing realistic conditions to create
and validate the CFD model
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Experimental data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, MAT file, Excel workbook
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v0_exp_UPV
</td> </tr>
<tr>
<td>
Source
</td>
<td>
UPV wind tunnel
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v0_CFD, Data_v0_exp_Purdue
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
Table 2 _metadata for dataset Data_v0_exp_Purdue_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
Validation data for the reference design
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
Purdue
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Reference design v0 (flat plate)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the experimental data of the reference design
gathered at Purdue
with the purpose of validating both CFD simulations and initial experimental
measurements
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Experimental data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, MAT file, Excel workbook
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v0_exp_Purdue
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Purdue wind tunnel
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v0_CFD, Data_v0_exp_UPV
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
### Numerical data: ID **data_v0_cfd**
Table 3 _metadata for dataset Data_v0_cfd_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
CFD results for the reference design
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
UPM
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Reference design (flat plate)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the numerical results from the CFD calculations
carried out by UPM on
the v0 design geometry
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Numerical simulation data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, HDF5, proprietary CFD format
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v0_cfd
</td> </tr>
<tr>
<td>
Source
</td>
<td>
UPM calculations
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v0_exp_UPV, Data_v0_exp_Purdue
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
## STANDARD DESIGN
The standard design (v1) will include typical heat exchanger fins and will
represent current state of the art approaches for SACOCs design.
### Experimental data: IDs **data_v1_exp_UPV** and **data_v1_exp_Purdue**
Table 4 _metadata for dataset Data_v1_exp_UPV_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
Experimental data for the standard design
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
UPV
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Design v1 (standard fins)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the experimental data of the v1 design gathered
at UPV with the purpose of providing realistic conditions to create and
validate the CFD model
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Experimental data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, MAT file, Excel workbook
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v1_exp_UPV
</td> </tr>
<tr>
<td>
Source
</td>
<td>
UPV wind tunnel
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v1_CFD, Data_v1_exp_Purdue
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
Table 5 _metadata for dataset Data_v1_exp_Purdue_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
Experimental data for the standard design
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
Purdue
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Design v1 (standard fins)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the experimental data of the v1 design gathered
at Purdue with the purpose of providing realistic conditions to create and
validate the CFD model
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Experimental data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, MAT file, Excel workbook
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v1_exp_Purdue
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Purdue wind tunnel
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v1_CFD, Data_v1_exp_UPV
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
### Numerical data: dataset ID **data_v1_cfd**
Table 6 _metadata for dataset Data_v1_cfd_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
CFD results for the standard design
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
UPM
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Design v1 (standard fins)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the numerical results from the CFD calculations
carried out by UPM on the v1 design geometry
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Numerical simulation data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, HDF5, proprietary CFD format
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v1_cfd
</td> </tr>
<tr>
<td>
Source
</td>
<td>
UPM calculations
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v1_exp_UPV, Data_v1_exp_Purdue
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
## ADVANCED DESIGN 1
The next iteration (v2) will feature an innovative design that departs from
the current approach of fin-type SACOCs.
### Experimental data: IDs **data_v2_exp_UPV** and **data_v2_exp_Purdue**
Table 7 _metadata for dataset Data_v2_exp_UPV_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
Experimental data for the advanced design 1
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
UPV
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Design v2 (advanced design 1)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the experimental data of the v2 design gathered
at UPV with the purpose of providing realistic conditions to create and
validate the CFD model
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Experimental data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, MAT file, Excel workbook
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v2_exp_UPV
</td> </tr>
<tr>
<td>
Source
</td>
<td>
UPV wind tunnel
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v2_CFD, Data_v2_exp_Purdue
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
Table 8 _metadata for dataset Data_v2_exp_Purdue_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
Experimental data for the advanced design 1
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
Purdue
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Design v2 (advanced design 1)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the experimental data of the v2 design gathered
at Purdue with the purpose of providing realistic conditions to create and
validate the CFD model
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Experimental data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, MAT file, Excel workbook
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v2_exp_Purdue
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Purdue wind tunnel
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v2_CFD, Data_v2_exp_UPV
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
### Numerical data: dataset ID **data_v2_cfd**
Table 9 _metadata for dataset Data_v2_cfd_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
CFD results for the advanced design 1
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
UPM
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Design v2 (advanced design 1)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the numerical results from the CFD calculations
carried out by UPM on
the v2 design geometry
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Numerical simulation data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, HDF5, proprietary CFD format
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v2_cfd
</td> </tr>
<tr>
<td>
Source
</td>
<td>
UPM calculations
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v2_exp_UPV, Data_v2_exp_Purdue
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
## ADVANCED DESIGN 2
The final iteration (v3) will feature a different innovative design for the
SACOC geometry.
### Experimental data: IDs **data_v3_exp_UPV** and **data_v3_exp_Purdue**
Table 10 _metadata for dataset Data_v3_exp_UPV_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
Experimental data for the advanced design 2
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
UPV
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Design v3 (advanced design 2)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the experimental data of the v3 design gathered
at UPV with the purpose of providing realistic conditions to create and
validate the CFD model
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Experimental data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, MAT file, Excel workbook
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v3_exp_UPV
</td> </tr>
<tr>
<td>
Source
</td>
<td>
UPV wind tunnel
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v3_CFD, Data_v3_exp_Purdue
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
Table 11 _metadata for dataset Data_v3_exp_Purdue_
#### FIELD VALUE
<table>
<tr>
<th>
Title
</th>
<th>
Experimental data for the advanced design 2
</th> </tr>
<tr>
<td>
Creator
</td>
<td>
Purdue
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Design v3 (advanced design 2)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the experimental data of the v3 design gathered
at Purdue with the purpose of providing realistic conditions to create and
validate the CFD model
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Experimental data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, MAT file, Excel workbook
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v3_exp_Purdue
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Purdue wind tunnel
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v3_cfd, Data_v3_exp_UPV
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
### Numerical data: dataset ID **Data_v3_cfd**
Table 12 _metadata for dataset Data_v3_cfd_
<table>
<tr>
<th>
FIELD
</th>
<th>
VALUE
</th> </tr>
<tr>
<td>
Title
</td>
<td>
CFD results for the advanced design 2
</td> </tr>
<tr>
<td>
Creator
</td>
<td>
UPM
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Design v3 (advanced design 2)
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset will include all the numerical results from the CFD calculations
carried out by UPM on
the v3 design geometry
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Contributor
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Date
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Numerical simulation data
</td> </tr>
<tr>
<td>
Format
</td>
<td>
ASCII text, HDF5, proprietary CFD format
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
Data_v3_cfd
</td> </tr>
<tr>
<td>
Source
</td>
<td>
UPM calculations
</td> </tr>
<tr>
<td>
Relation
</td>
<td>
Data_v3_exp_UPV, Data_v3_exp_Purdue
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
TBD
</td> </tr> </table>
# FAIR DATA
## Making data findable, including provisions for metadata
The data produced in the project and described in the aforementioned Tables
1-12 will be issued with DOIs once it has reached an approved level of
maturity for consumption by interested parties. It is envisaged that the DOIs
will be assigned by the final, public data repository, which at this point is
expected to be Zenodo. The metadata described in the aforementioned tables
follows the Dublin Core standard and will be uploaded to the repository
alongside the data and indexed, thus making the data discoverable.
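As an illustration only, the Dublin Core record from Table 9 could be serialised into a machine-readable sidecar file to accompany the repository upload. The following minimal Python sketch assumes nothing beyond the table contents; the output filename is illustrative, and fields still marked TBD are kept as placeholders:

```python
import json

# Dublin Core fields for dataset Data_v2_cfd, as listed in Table 9.
metadata = {
    "title": "CFD results for the advanced design 1",
    "creator": "UPM",
    "subject": "Design v2 (advanced design 1)",
    "description": ("Numerical results from the CFD calculations "
                    "carried out by UPM on the v2 design geometry"),
    "publisher": None,    # TBD in the plan
    "contributor": None,  # TBD in the plan
    "date": None,         # TBD in the plan
    "type": "Numerical simulation data",
    "format": "ASCII text, HDF5, proprietary CFD format",
    "identifier": "Data_v2_cfd",
    "source": "UPM calculations",
    "relation": ["Data_v2_exp_UPV", "Data_v2_exp_Purdue"],
    "rights": None,       # TBD in the plan
}

# Write a JSON sidecar that can be uploaded and indexed alongside the data.
with open("Data_v2_cfd.metadata.json", "w") as fh:
    json.dump(metadata, fh, indent=2)
```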
## Naming conventions
The dataset ID will be Data_[design iteration]_[experimental/CFD]_[Facility
(if experimental)]; inside the dataset, the variables will be named according
to habitual aerospace engineering vocabulary. It is envisaged that both static
and total thermodynamic quantities will be provided. This will be updated with
the precise structure of the dataset once the data becomes available.
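A minimal sketch of how the stated convention could be composed and checked programmatically; the function name and regular expression below are illustrative, not project tooling:

```python
import re

def dataset_id(iteration: int, kind: str, facility: str | None = None) -> str:
    """Compose an ID following
    Data_[design iteration]_[experimental/CFD]_[Facility (if experimental)]."""
    if kind == "exp":
        if facility is None:
            raise ValueError("experimental datasets must name a facility")
        return f"Data_v{iteration}_exp_{facility}"
    if kind == "cfd":
        return f"Data_v{iteration}_cfd"
    raise ValueError("kind must be 'exp' or 'cfd'")

# Pattern accepting the IDs used in the tables above, e.g. Data_v2_exp_Purdue.
ID_PATTERN = re.compile(r"^Data_v[0-3]_(cfd|exp_(UPV|Purdue))$")

assert dataset_id(2, "exp", "Purdue") == "Data_v2_exp_Purdue"
assert ID_PATTERN.match(dataset_id(3, "cfd"))
```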
## Search keywords
Adequate keywords will be provided alongside the data to maximize the re-use
potential. This will be updated with such keywords once the data is uploaded.
## Version numbers
Presently the following overall version numbers are envisaged.
* v0 Reference design: a flat plate with no special heat exchanger geometry
* v1 Standard design: SACOC equipped with standard fins
* v2 Advanced design 1: an innovative heat exchange geometry (TBD)
* v3 Advanced design 2: another innovative heat exchange geometry (TBD)
Sub-versions will be added to this document if required.
## Searchable metadata
The metadata will be added to the repository search mechanism.
## Standardized formats
It is envisaged that standardised data formats will be used, such as:
* ASCII text (.txt, .csv, .dat)
* Level 5 MAT-file format (MATLAB) (.mat)
* Microsoft Excel Workbook (.xls)
* Portable Document Format (.pdf)
* HDF5 (.hdf)
This section will be updated if additional data formats are used.
## Open file formats
Some of the formats may require the use of proprietary tools. The intention,
however, is to provide copies of all the data in openly accessible formats.
## Open source tools
The objective is that all data be made accessible through open source tools,
such as Python, Octave, Calc, etc. for the experimental data, and ParaView,
VTK, etc. for the simulation data.
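By way of illustration, each of the listed formats can already be read with open-source Python libraries; a minimal sketch, assuming the files exist locally (the file names are placeholders):

```python
import pandas as pd            # ASCII/CSV tables
import h5py                    # HDF5 files
from scipy.io import loadmat   # Level 5 MAT-files

# ASCII/CSV experimental data, e.g. wind-tunnel probe readings.
table = pd.read_csv("Data_v2_exp_UPV.csv")

# Level 5 MAT-file: returns a dict mapping variable names to NumPy arrays.
mat = loadmat("Data_v2_exp_UPV.mat")

# HDF5 numerical results, e.g. CFD fields.
with h5py.File("Data_v2_cfd.h5", "r") as f:
    datasets = list(f.keys())
```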
# MAKING DATA OPENLY ACCESSIBLE
## Openly available data
All data underlying the scholarly publications produced by the SACOC project
will be made openly available. It is envisaged at this point that the data
will cover the thermodynamic variables describing the flow state in the
aforementioned designs.
## Data location
The data will be included at first in the internal project repository with a
short description of the test case represented by the data and the information
contained in the data (limited metadata). Once the data has reached
appropriate maturity, it will be uploaded to a public repository such as
Zenodo, where it will be available free of charge to any user.
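Should Zenodo be confirmed as the public repository, the upload can be scripted against its REST deposition API. The sketch below is illustrative only; the access token is a placeholder and the call details should be verified against Zenodo's current API documentation:

```python
import requests

ZENODO = "https://zenodo.org/api/deposit/depositions"
TOKEN = "..."  # personal access token (placeholder)

# Create an empty deposition, then stream the data file into its file bucket.
dep = requests.post(ZENODO, params={"access_token": TOKEN}, json={})
dep.raise_for_status()
bucket = dep.json()["links"]["bucket"]

with open("Data_v2_cfd.h5", "rb") as fh:
    requests.put(f"{bucket}/Data_v2_cfd.h5", data=fh,
                 params={"access_token": TOKEN}).raise_for_status()
```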
## Methods or software tools
The data will be accessible through standard software such as Excel, MATLAB,
etc. It is expected that all data will be accessible through open source tools
such as Python, Octave, ParaView, etc.
## Software documentation
The data should be easily ingested into the software, as it will be provided
in standardized file formats. Examples for popular applications may be
included.
## Included software
Links will be included to open source software that can be used to access the
data. Custom software written by the consortium members to process the data
may be included. This document will be updated to reflect such decisions.
## Location of data and associated metadata, documentation and code
During the project, and for each WP, data will be stored at the partners’
datacenters and replicated in the shared SACOC repository provided by SAFRAN.
Open data will be deposited in an open access repository. At the moment Zenodo
has been identified as the most likely candidate.
## Special arrangements with the identified repository
At the moment it is not envisaged that the processed scientific data will
surpass the standard Zenodo quota per dataset. If that were the case, special
arrangements would be sought.
## Restrictions on use and access provision
At this stage, it is envisaged that access to the data will be open.
Restrictions on use will be defined by the relevant license. It is expected
that this license will be Creative Commons’ Attribution-NonCommercial-
ShareAlike 4.0 International (CC-BY-NC-SA 4.0). This document will be updated
to reflect changes in these decisions.
_Data access committee_
Not expected
_Conditions for access_
The acceptance of the license will be included in a machine-readable format.
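One possible machine-readable form, assuming the CC-BY-NC-SA 4.0 choice stands, is an SPDX license identifier embedded in a metadata sidecar; a minimal sketch (the filename and field names are assumptions):

```python
import json

# SPDX identifier for Creative Commons BY-NC-SA 4.0; tools can match on it
# without parsing free-text rights statements.
license_info = {
    "identifier": "Data_v2_cfd",
    "license": "CC-BY-NC-SA-4.0",
    "licenseUrl": "https://creativecommons.org/licenses/by-nc-sa/4.0/",
}

with open("Data_v2_cfd.license.json", "w") as fh:
    json.dump(license_info, fh, indent=2)
```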
_Identification_
Data will be publicly accessible without identification.
# MAKING DATA INTEROPERABLE
It is an objective that all data produced in the project be as interoperable
as possible, thus allowing data exchange and re-use between researchers,
institutions, organisations, countries, etc. In particular, participants will
adhere to the aforementioned standards for formats that are compatible with
open source software applications, thereby facilitating re-combination with
datasets from different origins.
This is especially important when comparing experimental datasets with
numerical results. Care will be taken in defining a common frame of reference
for all variables. These variables will be described using established
engineering vocabulary in order to maximize the interoperability of the data.
# INCREASE DATA RE-USE (THROUGH CLARIFYING LICENCES)
## Data licensing
It is envisaged that the Creative Commons’ Attribution-NonCommercial-
ShareAlike 4.0 International (CC-BY-NC-SA 4.0) license will be used for the
public datasets.
## Time framework for data availability within the project
Data will be made available for re-use immediately upon publication of the
accompanying article. There will be no embargo period for the data.
## Re-use after the end of the project
As the data will be deposited in Zenodo, it is expected that re-use will
continue after the end of the project. However, no permanent support, storage
facility or point of contact for the general public is expected after the
closing of the project budget.
_Time framework for data availability after the project conclusion_
Data will remain in Zenodo for as long as the repository operators allow.
# ALLOCATION OF RESOURCES
_Costs for making data FAIR in your project_
Will be updated in the next DMP versions.
_Covering of FAIR costs_
Will be updated in the next DMP versions.
_Responsible for data management_
The coordinator will be responsible for data management in SACOC.
_Resources for long term preservation_
Will be decided in agreement with the Topic Manager in the next DMP versions.
_National or institutional repositories_
At this stage, no deposits in national or institutional repositories are
envisaged.
# DATA SECURITY
## Provisions in place for data security
Partners’ data is stored in the respective datacenters of each institution,
which are secured and backed up by different means. Additionally, a secure,
managed shared repository has been provided by SAFRAN for use by all partners
and to secure the data transfers.
## Certified repositories for long term preservation and curation
Long term preservation is expected to be carried out by depositing the
scientific data in the Zenodo repository. Raw and auxiliary data will be kept
for a period to be decided and then deleted, as no budget can be allocated for
data preservation or curation after the end of the project.
# ETHICAL ASPECTS
The participants have not identified any ethical issues regarding the data, as
no experiments or data concern living organisms, nor is any impact on the
environment expected, as the tests will be carried out in a closed,
appropriate facility.
# OTHER ISSUES
At this stage, the data will not be subject to any additional data management
procedures, as this DMP will be used as a common framework that supersedes the
individual procedures of each member of the consortium.
# Executive Summary
This deliverable provides the plan for the management of data in the eFactory
project. It describes the methods applied to making data findable, openly
accessible, interoperable, re-useable and secure. Furthermore, the legal
framework, risks and measures associated to ethical aspects, mechanisms for
data protection as well as governance and trust are addressed.
In order to realise the **FAIR principle** along the eFactory project, this
deliverable describes the following main mechanisms:
**Document management:** The project team set up common procedures and
practices that are used for handling documents within eFactory: the common
WebDAV repository “ownCloud”, the usage of the Microsoft OneDrive cloud
storage, internal templates with the document metadata that supports them, as
well as the eFactory glossary.
**Data management:** The eFactory Marketplace framework provides the ground
for interlinking multiple marketplaces from different platforms. Within this
context, primarily the exchange of user data for single-sign-on, (meta-) data
related to third-party offerings and accountancy services (for tracking the
user journeys) are implemented. To access the different marketplaces, the
eFactory Marketplace framework accesses each external marketplace through a
central component called Data Spine.
**Data accessibility:** The Data Spine provides an open, platform-independent
and secure communication and interoperability infrastructure with interfaces
for the loosely coupled platforms, tools and services (e.g. third-party
marketplaces). According to the current design, the security framework
associated with Data Spine stores user data, i.e. username, password (and most
probably the email address and/or phone number for password recovery).
**Data interoperability:** Considering overall interoperability, the Data
Spine is the gluing mechanism in the context of connecting multiple tools,
services and platforms to realise a federated platform and ecosystem. Based on
the identification of common standards and abstractions, the APIs, connectors
and interfaces that need to be implemented for the tools, systems and
platforms federated through the Data Spine are defined and realised within the
project. Besides the eFactory Data Spine being in the centre of the
interoperability towards platforms, external interoperability is also
fostered by means of open experiments of smart factory tools and solutions as
well as the related data within the federated eFactory ecosystem.
When it comes to **Data Security and Privacy** , the eFactory project
carefully analyses the implications of, and compliance with, the relevant
regulations on data management and consumption. This includes ensuring
compliance with the GDPR (General Data Protection Regulation) and the NIS
Directive (Directive on Security of Network and Information Systems). Besides the fact
that the eFactory Consortium Agreement explicitly states that the project
partners are GDPR compliant based on the requirements of the regulation, the
following security controls are addressed within eFactory in the context of
data integrity and quality:
* Data input validation
* Data and metadata protection
* Data protection at rest
* Data protection in shared resources
* Notification of data integrity violations
* Informed consent by design
In addition, the eFactory project defines and implements **Data Governance and
Trust mechanisms** , covering information governance, a policy-based control
of information to meet all legal, regulatory, risk and business demands as
well as data governance, involving processes and controls to ensure that
information at the data level is true, accurate, and unique (not redundant).
It involves data cleansing to strip out corrupted, inaccurate, or extraneous
data and de-duplication, to eliminate redundant occurrences of data.
Considering **Ethical Aspects** , eFactory does not introduce any critical
issues or problems. However, several considerations typical to ICT and on-site
industrial trials, where employees are also involved in the demonstration and
evaluation stages, are considered. Here, the consortium is fully aware of
these and has the necessary experience to address them seamlessly by being
compliant with the relevant international and national law, regulations as
well as directives, e.g.
* The Universal Declaration of Human Rights and the Convention 108 for the Protection of Individuals with Regard to Automatic Processing of Personal Data
* Directive 95/46/EC & Directive 2002/58/EC of the European parliament regarding issues with privacy and protection of personal data and the free movement of such data
Despite the far-reaching provisions implemented, **Potential Risks and Related
Mitigation Activities** in the context of data management are continuously
analysed by the eFactory team, covering the following domains: data security,
storage and processing of personal data, as well as confidentiality, privacy
control, and transparency.
# 0 Introduction
## 0.1 eFactory Project Overview
eFactory – European Connected Factory Platform for Agile Manufacturing – is a
project funded by the H2020 Framework Programme of the European Commission
under Grant Agreement 825075 and conducted from January 2019 until December
2022. It engages 30 partners (Users, Technology Providers, Consultants, and
Research Institutes) from 11 countries with a total budget of circa 16M€.
Further information can be found at eFactoryproject.eu.
In order to foster the growth of a pan-European platform ecosystem that
enables the transition from “analogue-first” mass production, to “digital
twins” and lot-size-one manufacturing, the eFactory project will design, build
and operate a federated digital manufacturing platform. The platform will be
bootstrapped by interlinking four base platforms from FoF-11-2016 cluster
funded by the European Commission, early on. This will inform the design of
the eFactory Data Spine and the associated toolsets to fully connect the
existing user communities of the 4 base platforms. The federated eFactory
platform will also be offered to new users through a unified Portal with
value-added features such as single sign-on (SSO), user access management
functionalities to hide the complexity of dealing with different platform and
solution providers.
## 0.2 Deliverable Purpose and Scope
The purpose of this deliverable “D11.10 Data Management Plan” is to document
the framework for the management of all generated data in the project with a
special focus on the FAIR data approach. Data management refers to all aspects
of creating, housing, delivering, maintaining, archiving and preserving data;
it is one of the essential areas of responsible conduct of research.
## 0.3 Target Audience
The deliverable at hand is of public nature, providing the eFactory project
team the foundation for handling data generated and managed in the eFactory
project.
## 0.4 Deliverable Context
This document is one of the cornerstones for achieving the project aims. Its
relationship to other documents is as follows:
* **Description of Action (DOA):** Provides the foundation for the actual research and technological content of eFactory. Importantly, the Description of Action includes a description of the overall project work plan
* **Project Handbook (D1.1)** : Provides the foundation for the practical work in the project throughout its duration and helps to ensure that the project partners follow the same well-defined procedures and practices also in terms of information sharing
## 0.5 Document Structure
This deliverable is broken down into the following sections:
* **Section 0 Introduction:** An introduction to this deliverable including a general overview of the project, an outline of the purpose, scope, context, status, and target audience of the deliverable at hand.
* **Section 1 Data Summary:** Provides an overview on data used and generated in the eFactory project as well as related parameters.
* **Section 2 FAIR Data:** Describes the ways applied to make data findable, openly accessible, interoperable and re-useable.
* **Section 3 Allocation of Resources:** Outlines the efforts towards the realisation of the FAIR data approach.
* **Section 4 Data Security:** Presents details about relevant regulations, data integrity and quality, data storage, data privacy, federated identity management and a blockchain approach.
* **Section 5 Ethical Aspects:** Provides information on relevant legal frameworks as well as potential data management risks and related mitigation measures.
* **Section 6 Other Issues:** Outlines project activities related to data protection, governance and trust.
* **Annexes:**
* **Annex A** : Document History
## 0.6 Document Status
This document is listed in the Description of Action as “public”.
## 0.7 Document Dependencies
This document has no preceding documents or further iterations.
## 0.8 Glossary and Abbreviations
A definition of common terms related to eFactory, as well as a list of
abbreviations, is available at: _https://www.efactory-project.eu/glossary_
## 0.9 External Annexes and Supporting Documents
Annexes and Supporting Documents:
• None
## 0.10 Reading Notes
• None
# 1 Data Summary
The following table summarises the data generated and/or managed within the
eFactory project as well as its fundamental parameters.
<table>
<tr>
<th>
**eFactory Context**
</th>
<th>
Internal Documents
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Documents set-up and updated during the preparation and execution of the
eFactory project. They include the Consortium Agreement (CA), Description of
Action (DoA), document templates, meeting minutes, working documents and the
eFactory deliverables. The handling of eFactory related documents is done
based on OwnCloud, a solution for document management and storage.
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
Provision of all information to successfully perform the eFactory project
tasks
</td> </tr>
<tr>
<td>
**Formats**
</td>
<td>
.docx, .pptx, .xlsx, .pdf, .txt
</td> </tr>
<tr>
<td>
**Origins**
</td>
<td>
eFactory partners
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
Typically <20MB
</td> </tr>
<tr>
<td>
**Utility**
</td>
<td>
Depending on the dissemination level: the interested public, eFactory partners
and/or the EC
</td> </tr> </table>
<table>
<tr>
<th>
**eFactory Context**
</th>
<th>
Marketplace
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
The eFactory Marketplace framework provides the ground for interlinking of
multiple marketplaces from different platforms.
Within this context primarily the exchange of user data for single-sign-on,
(meta) data related to third-party offerings (such as applications and
services) and accountancy services (for tracking the user journeys) are
implemented.
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
Exchange and administration of all data to provide the user of the eFactory
Marketplace a state-of-the-art service interaction and to enable user tracking
and affiliate revenue models in the eFactory ecosystem
</td> </tr>
<tr>
<td>
**Formats**
</td>
<td>
Database entries
</td> </tr>
<tr>
<td>
**Origins**
</td>
<td>
eFactory partners
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
Depending on registered users
</td> </tr>
<tr>
<td>
**Utility**
</td>
<td>
eFactory partners including the eFF and third-party organisations, marketplaces
and platforms that aim to make use of developed applications and services
</td> </tr> </table>
<table>
<tr>
<th>
**eFactory Context**
</th>
<th>
Data Spine
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
The Data Spine provides an open, platform-independent and secure communication
infrastructure with interfaces for the loosely coupled platforms, tools and
services (e.g. third-party marketplaces).
According to the current design, the security framework associated with the
Data Spine may store user data for authorisation and authentication purposes.
It is not envisioned to store any other data.
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
Management of user data (username, password, email address and/or phone number
for password recovery) to offer the authorisation, authentication and user-
management services such as those associated with the user single-sign-on
functionality
</td> </tr>
<tr>
<td>
**Formats**
</td>
<td>
Database entries and Data Spine source code (open-source) and related
specification documents
</td> </tr>
<tr>
<td>
**Origins**
</td>
<td>
eFactory partners, user communities of marketplaces, platforms and generally
the users in the eFactory ecosystem
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
Depending on registered users
</td> </tr>
<tr>
<td>
**Utility**
</td>
<td>
eFactory partners including the eFF, third-party marketplaces and platforms
along with their user-communities
</td> </tr> </table>
<table>
<tr>
<th>
**eFactory Context**
</th>
<th>
Dissemination and Promotion
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Dissemination material generated and provided by the eFactory consortium
includes presentations, contributions and publications at domain-specific
conferences and journals, software not covered by IPR as well as research data
not affected by IPR or data privacy.
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
Gain maximum awareness towards the eFactory project and its results as well as
the eFactory ecosystem, including the eFactory Foundation
</td> </tr>
<tr>
<td>
**Formats**
</td>
<td>
.docx, .pptx, .xlsx, .csv, .pdf
Source code (open-source) and related specification documents
</td> </tr>
<tr>
<td>
**Origins**
</td>
<td>
eFactory partners
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
Typically <100MB
</td> </tr>
<tr>
<td>
**Utility**
</td>
<td>
Interested public, eFactory partners, and/or the EC
</td> </tr> </table>
# 2 FAIR Data
## 2.1 Making Data Findable, Including Provisions for Metadata
### 2.1.1 Document Management
This section introduces common procedures and practices that are used for
handling various kinds of documents within eFactory: the common WebDAV
repository “ownCloud”, the usage of the Microsoft OneDrive cloud storage,
internal templates with the document metadata that supports them, as well as
the eFactory glossary.
#### OwnCloud
The eFactory document management approach aims at reducing the burden for
project partners to synchronise, store, and locate documents. For this, the
ownCloud solution for document management and storage is used; it is also
referred to as synchronised file storage using the WebDAV protocol. It is
similar in operation to the well-known Dropbox solution, except that it is
self-hosted. This is convenient since it avoids issues associated with the
geo-location of confidential material. OwnCloud is used within eFactory for
the exchange and transfer of documents in progress and documents extensively
used by all partners, e.g. the current version of the DOA or the eFactory
templates.
The ownCloud software is installed on servers of the eFactory project partner
ASC, who is located in Germany. **Access to ownCloud is personalised** via a
dedicated username and password. If it is necessary to share the ownCloud
folder with further colleagues, the ICE Project Office needs to be contacted.
A sample ownCloud **folder structure** is shown in the following figure. It
**follows a hierarchical approach** , grouping horizontal documents like the
Consortium Agreement, the Description of Action and templates as well as
current and historical versions of work package related contents (subfolders
such as “[Working]” and “[Final]” for the according documents).
Figure 1: Sample ownCloud Folder Structure
The following list briefly describes the intended content of each key folder:
* CRITICAL: Critical documents for the project as mentioned above
* Admin: Source versions of previous and current EU Contract (including DOA) and CA
* Marketing and Templates: Logos, Graphics, Brochures, etc. and their sources
* Meetings: Resources and results primarily for physical meetings such as plenaries
* Reference Information: Important non-project document such as the Annotated Model Grant Agreement (AMGA)
* Pictures and Fun: Pictures from eFactory related events
* Work Packages: Contains subfolders for each work package and then within each subfolder, each task, and within each task there are subfolders for each deliverable.
Beyond the solution-level access management mentioned above, note that
ownCloud does not offer an access rights model for individual folders.
#### OneDrive
Although ownCloud provides distributed sharing and allows offline editing in
common office tools in a latency-free way, it does not support multiparty
editing and the handling of conflicts. Thus, a two-part solution is taken by
eFactory, using the Microsoft OneDrive cloud office solution based on
Microsoft Excel spreadsheets for the recording of common financial or survey
information. Each project partner is given a **coded link to this repository**
and maintains the link securely such that only partners can access it.
#### Document Templates
In eFactory, Microsoft Word, Excel, and PowerPoint, as part of the Microsoft
Office suite, are used for most documents. For Microsoft Word and PowerPoint,
templates have been created and are available in ownCloud. To make sure that
documents can be easily exchanged, all partners need to make use of at least
Microsoft Office 2013.
For all formal deliverables, and informal ones that are submitted to the EC,
the Microsoft Word template is applied (file: “ **eFactory Document Template
xxx** ”).
Within eFactory it is also mandatory to make use of the eFactory Microsoft
PowerPoint template for external presentations regarding eFactory – i.e. at
non-eFactory events and review meetings. It is preferred to use this for
internal meetings as well. If eFactory is only a minor part of a presentation,
e.g. to show the different projects a partner is involved in, it is _not_
mandatory to make use of the eFactory Microsoft PowerPoint template, but it
should be considered (file: “ **eFactory Presentation Template** ”).
### 2.1.2 Document Metadata
#### Deliverable Cover Page and Footer
The Word deliverable template cover page defines certain styles that are then
referenced via field codes in other parts of the document – e.g. the status
information on page 2 and in page footers of this deliverable. This allows
information to be entered once and automatically referenced correctly
throughout the document. This includes information for WP/Deliverable ID,
name, status, etc.
#### Deliverable Status Information
The following states are used for deliverables:
* **Draft** : The working versions of a deliverable, i.e. work in progress which is not ready for review yet
* **For EU Approval** (implying Consortium Approved): A deliverable which has been accepted by the project-internal reviewers and is therefore sent to the EC (for approval)
* **EU Approved:** A deliverable accepted by the EC and therefore ready for publication at the eFactory Website
#### Naming Conventions and Versioning
In general, file names need to be meaningful and unique, and they should
include the word ‘eFactory’ at the start to distinguish from other projects.
For deliverables, this means that the file name indicates the deliverable
number, its version, and any further specific information:
* Example: “EU-ID D104 – eFactory-ID D1.3.1a – Periodic Report (M6) – Draft - v0.9.0 - ICE”
* General Format: “EU-ID D[N] – eFactory-ID D[N].[N][a] – [AAA][(Mx)] – [BBB] - v[M].[M].[M] [- CCC]”
* The spaces (“ “) and hyphens (“-“) are critical parts of the structural format and must be used
* EU-ID D[N] 🡪”EU-ID D104” The [N] represents the sequential number of the deliverable and which is used by the EU. This number can be found in the Budget XLS on “ownCloud/_CRITICAL…” During the drafting process, the editor should already include this ID
* eFactory-ID D[N].[N][a] 🡪 “eFactory-D1.3.1a” is the first (“a”) formal release of deliverable D1.3.1. Note that the [a] indicator is only used if there are multiple versions of the same deliverable. Typically, these [a] versions are related to living or period deliverables
* [AAA][Mx] 🡪 “Periodic Report (M6)” i.e. the name of the document and for iterative deliverables the Month of the deliverable
* [BBB] 🡪
* “Draft” - Labelled draft until the document is ready for reviewer 1
* “Reviewer1[a]”: The first (“a”) version ready for the internal Reviewer1 of the deliverable
* “Reviewer 2[a]” _:_ The first (“a”) version ready for the internal Reviewer2 of the deliverable
* “For EU Approval”: The version of the document, which is submitted to the EC. For this (and subsequent versions) the version number is deleted
* “Accepted” _:_ The version of the deliverable that has been accepted by the EC. It is published at the project Website if the deliverable is marked as public
* v[M].[M].[M] 🡪 “v0.9.0” is the 9th major draft version 0.9.0 of the deliverable. It is better to number below “1.0” so that the final output of the consortium can be identified as “1.0”
* [- CCC] 🡪 “- ICE” indicates a branch of the deliverable typically signified by a partner (e.g. ICE) or an individual’s acronym (UW). Branches should only be temporal documents, e.g. to decrease the risk of version conflicts as different partners may work on various parts of a document in parallel
If generating a PDF (for example for the definitive version to the EC), e.g.
from a Word document it should have the same filename as the original document
except for the file extension (e.g. “pdf”).
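Purely as an illustration of the convention above, the following Python pattern checks a file name against the General Format; the regular expression is an assumption distilled from the stated examples, not project tooling:

```python
import re

# En dashes (–) separate the major fields; plain hyphens precede the
# version and the optional branch, mirroring the example above.
DELIVERABLE_NAME = re.compile(
    r"^EU-ID D(?P<eu>\d+) – "
    r"eFactory-ID D(?P<wp>\d+)\.(?P<num>\d+)(?:\.(?P<sub>\d+))?(?P<iter>[a-z])? – "
    r"(?P<name>.+?)(?: \(M(?P<month>\d+)\))? – "
    r"(?P<status>.+?)"
    r"(?: - v(?P<version>\d+\.\d+\.\d+))?"
    r"(?: - (?P<branch>[A-Za-z]+))?$"
)

m = DELIVERABLE_NAME.match(
    "EU-ID D104 – eFactory-ID D1.3.1a – Periodic Report (M6) – Draft - v0.9.0 - ICE"
)
assert m and m.group("version") == "0.9.0" and m.group("branch") == "ICE"
```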
#### Microsoft Office Metadata
Microsoft Office allows metadata properties for each document to be entered.
In eFactory, the fields “Author” and “Title” are used. Usually, the author
information is filled in automatically, provided the author (deliverable lead)
stated the full name in the Word personalisation properties. The title needs
to be filled in manually and should be the same as on the first page of a
document.
#### Deliverable Confidentiality Information (Dissemination Levels)
There are two different dissemination levels for eFactory project
deliverables: **Public (PU)** deliverables, which are potentially available to
everybody and **Confidential (CO)** deliverables, which are available only for
the members of the eFactory consortium. The dissemination levels of all
eFactory deliverables have been defined within Table 3.2c of the eFactory DOA.
Information regarding the dissemination levels must be marked in each
deliverable as defined in the eFactory template. Furthermore, a brief
description of the dissemination level and the logic for it needs to be given
in section 0.6 (Document Status) of each deliverable.
### 2.1.3 Data Management: eFactory Marketplace
The extensible eFactory Marketplace framework offers the interlinking of
multiple marketplaces from different platforms (i.e. NIMBLE, COMPOSITION,
DIGICOR and vf-OS). The framework will provide components, which can easily be
integrated in each platform to enable the access to external marketplaces.
Furthermore, in order to enable an integrated affiliate model for supporting
sustainable eFactory business models, an Accountancy Service (AS) is intended
to gather tracking data from user journeys; the figure below provides a
representative example:
Figure 2: User journey example in the eFactory Marketplace
Rather than creating a centralised marketplace from scratch, the marketplace
framework in eFactory interlinks existing marketplaces, enabling users to
access offered tools and services through a unified interface, which will be
embedded in the eFactory Portal and other platform’s marketplaces.
#### Standards and open / reusable metadata
To access the different marketplaces, the eFactory Marketplace framework will
access each external marketplace through the central component Data Spine. As
each external marketplace provides different data structures, the Data Spine
will provide a mechanism to implement the necessary metadata as well as the
conversion logic between data models of different marketplaces. As a central
aim in this context, the marketplace framework needs to handle minimum
complexity in dealing with the offerings of multiple marketplaces as possible.
## 2.2 Making Data Openly Accessible
### 2.2.1 eFactory Data Spine
The realisation of the eFactory Data Spine is envisioned to be based on an
**open-source technology** .
According to the current design, the security framework associated with the
Data Spine will store user data, i.e. username, password (and most probably
the email address for password recovery). It is not envisioned to store any
other transactional data; therefore, the Data Spine does not provide any
components for data storage and data management. Note: the eFactory Platform
could provide components that allow data storage and management; however, the
utilisation of these components (e.g. for analysis or decision support) will
be solely at the discretion of the eFactory users.
The Data Spine provides an open, platform-independent and secure communication
and data exchange infrastructure with interfaces for the loosely coupled
platforms, tools and services. This enables, for example, the analysis and
fusion of real-time data to securely capture multi-tier supply chain
intelligence. Clustering and propagation of business and supply chain
intelligence will also be possible through the Data Spine. It will improve the
competitiveness of the networked partner companies and increase the
possibility for collaborations between organisations from different domains;
allowing companies to share best practices and address dynamic market needs.
The high-level architecture of the Data Spine, reflecting the approach
described, is schematically shown in the following figure.
Figure 3: eFactory Data Spine Architecture
To give this ecosystem a maximum of flexibility, the tools and services
interlinked within the eFactory platform are offered as far as possible as
open-source resources under an Apache Licence (Version 2.0). This is already
the IPR and licencing basis preferred in the four base platforms in eFactory.
The Apache Licence only requires preservation of the copyright notice
(attribution) but is otherwise permissive as it allows further use of the
tools for any purpose, to distribute them, to modify them, and to distribute
modified versions of the tools, under the terms of the license, without
concern for royalties. The open-source nature of the eFactory tools also
supports the strategic goal of co-creation of smart factory technologies. The
adoption of permissive open-source licensing allows users to utilise the
eFactory tools as standalone or combined/integrated functionalities. In
addition, the eFactory platform provides open interfaces to the interoperable
Data Spine, allowing the interconnectivity of eFactory platform with external
platforms, tools and services. Moreover, a platform level SDK is developed –
building upon the SDK from (EU H2020) vf-OS to enable the development,
customisation and integration of smart factory applications. The SDK (in Task
5.5 of eFactory work program) provides a Studio environment with intuitive
interfaces, integrated libraries, execution environment and connectors to
industrial systems and data sources to enable prototyping, application
development and testing.
### 2.2.2 Software Versioning and Revision Control System
An eFactory instance of the open-source tool GitLab 1 has been installed and
is to be used for all development activities. GitLab covers the full software
development lifecycle, from source code management to integrated bug tracking
mechanisms and continuous integration support. As GitLab provides many
optional modules covering all DevOps activities, during the project runtime it
will be decided if additional functionality will be added to the eFactory
GitLab instance.
The source code of the open-source components (e.g. Data Spine) will be
accessible in the project GitLab repository. Eventually, the open-source
components to be developed during the eFactory project will be hosted in a
publicly accessible software/code repository.
### 2.2.3 Dissemination of Results
Regarding the dissemination of project results, the eFactory partners are
fully aware of the **open access policy** that applies to scientific
publications as stated in Article 29.2 of the H2020 Grant Agreement, Open
Access to Scientific Publications. In this sense, all peer-reviewed
publications arising from eFactory will be made freely and openly available
via an online repository and the project website. The actions which will be
taken by the project are:
* All presentations, contributions and publications even partially funded by the project will include the project logo, as well as the meta-data prescribed by the EC i.e. the acknowledgement of the grant agreement number, the term EU Horizon 2020, the name of the project, publication date and a persistent identifier
* The publications funded by the project will be uploaded to social networks such as ResearchGate as well as open-access repositories such as OpenAIRE
(https://openaire.eu) and Zenodo (https://zenodo.org), no later than 6 months
after their original date of publication
* Software not covered by IPR will be open source licensed and openly distributed to the community (e.g. via SourceForge or GitHub)
* The open access to research data article (GA Article 29.3) will also apply to eFactory. This will allow the consortium to:
* Deposit all the data generated in the project (especially data used in scientific publications) not affected by IPR or data privacy issues in an open repository
such as FIWARE-LAB or the relevant open data initiatives of the partners
involved in the proposal
* Provide information available at the repository about tools and instruments at the disposal of the beneficiaries and necessary for validating the eFactory results
Moreover, appropriate presentation materials will be published at the project
web site under a Creative Commons license.
Some of the important industrial fairs in Europe are already being used (e.g.
participation in AIX Expo, Paris Airshow and the IDSA Summit) to present the
project results to a broad public. Partners will provide appropriate data and
information to contribute towards project dissemination activities, which will
be made visible through the project website and social media channels.
## 2.3 Making Data Interoperable
### 2.3.1 Overall Interoperability
The rapid growth of smart manufacturing enterprises and digital manufacturing
platforms around Europe raises challenges of interoperability and questions
regarding the suitability of existing platforms to support agile
collaborations needed for lot-size-one production, particularly in cross-
sectorial scenarios. Regarding industrial data acquisition and processing,
advancements in CPS and IoT technologies have resulted in the proliferation of
new communication mechanisms and protocols that add to the complexity of
handling real time data exchange and analysis. The use of proprietary
technology for data transfer and the lack of adherence to standard protocols
can hinder the realisation and smooth operations of connected factories.
In its very centre, the **eFactory project realises a federated smart factory
ecosystem** by initially interlinking four smart factory platforms, from the
FoF-11-2016 cluster, through an open and interoperable Data Spine (see also
Chapter 2.2.1). The federation of the four base platforms is complemented by
industrial platforms, collaboration tools and smart factory systems,
specifically selected to support connected factories in lot-size-one
manufacturing.
The figure below schematically shows the information/data flow achieved by the
interoperation of available and emerging smart factory tools and services.
Starting at the bottom layers, there are groups of manufacturing firms
registered with the four base platforms or – as would be expected from an open
ecosystem – firms associated with another, similarly targeted platform. Each
of the four base platforms offer communication with external entities via open
APIs that are not homogenized yet. Furthermore, as the case of further
external platforms illustrates, there will not be a standardised cross-
platform interoperation layer for some time to come. This brings us to the
first important technical innovation of eFactory, the Data Spine:
Figure 4: Technical Concept of eFactory
The Data Spine interlinks the APIs of the participating platforms so that each
platform’s functionality is visible and accessible at the level of eFactory.
Overarching this, interoperable security features will give eFactory a layer
of tools that can be used transparently at the platform and marketplace. The
functionality of the eFactory Platform and Marketplace is thus composed of:
* Selected services offered by each of the original component platforms present in eFactory
* Services offered by any further platform that is willing to expose its API for alignment via the Data Spine
* Third party apps that are offered directly via eFactory either as free or paid services
* Dedicated management facilities to manage the governance, security and cloud deployment, etc.
For the ecosystem of eFactory, many engagement options arise from the
federated nature of the system: Manufacturers may
* Connect directly to the eFactory platform
* Develop new tools and services using the eFactory SDK
* Use the marketplace to transparently use underlying services that may come from any of the participating platforms, with internal cross-billing managed by eFactory, in the case of commercial offerings
### 2.3.2 Platform Interoperability
The interoperable Data Spine is the gluing mechanism that connects multiple
tools, services and platforms to realise an integrated platform. Based on the
identification of common standards and abstractions, the APIs, connectors and
interfaces that need to be implemented for the tools, systems and platforms
federated through the Data Spine are defined and realised within the project.
The implementation of the eFactory Data Spine through open-source technologies
will interlink and establish interoperability between - initially the existing
deployments of four base platforms (COMPOSITION, DIGICOR, NIMBLE and vf-OS)
along with their respective tools and services as schematically shown in the
following figure.
Figure 5: Federation of the 4 base platforms
The interconnectivity of the four base platforms will be followed by the
integration of other platforms (such as ValueChain’s iQluster, Siemens’s Mind
Works, Fortiss’s Future Factory, C2K’s Industreweb) and standalone tools
brought forward by the eFactory partners. Here, the Data Spine will enable the
integration of third-party platforms through a modular plugin system. Data
model conversions between two or more platforms will have to be handled by
so-called “Processing Flows” that have to be implemented in order to make the
data interoperable between the platforms. The related generic flow of data is
schematically shown below:
Figure 6: Generic data flow between eFactory-connected platforms
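As a hedged illustration of what such a Processing Flow amounts to at the data level, the sketch below maps a marketplace offering between two hypothetical schemas; all field names are invented for illustration, since the real data models are defined within the project:

```python
# Hypothetical source record, as one base platform's marketplace might emit it.
source_offer = {
    "catalogueLine": "3D-printed bracket",
    "priceAmount": 12.5,
    "priceCurrency": "EUR",
    "sellerPartyId": "party-42",
}

def to_common_offering(record: dict) -> dict:
    """One step of a Processing Flow: rename and regroup fields so that the
    offering conforms to a (hypothetical) common eFactory data model."""
    return {
        "title": record["catalogueLine"],
        "price": {"amount": record["priceAmount"],
                  "currency": record["priceCurrency"]},
        "provider": record["sellerPartyId"],
    }

print(to_common_offering(source_offer))
```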
### 2.3.3 External Interoperability
The basic building blocks of the eFactory ecosystem are the individual tools,
systems and platforms that are provided by different partners (and external
entities) to the eFactory project. These tools, systems and platforms are
interlinked through the Data Spine. In this respect, the tools, systems and
platforms need to be able to communicate through the Data Spine technology,
which means relevant interfaces and APIs to handle heterogeneous data will be
defined during the eFactory project.
Regarding the interoperability of smart factory tools and solutions, open
experiments are performed (through a funded call) with a focus on the
enhancement of the eFactory platform, e.g. through the integration of
innovative solutions in the federation. The open experimentation in eFactory
will include:
* Experiments that integrate a 3rd party application in the eFactory platform, providing a validation scenario to demonstrate the seamless access and utilisation of the 3rd party system/application by eFactory services and users
* Experiments that focus on the integration of 3rd party platforms with eFactory through agreements on security framework (e.g. single-sign-on, user authorisation, rights management etc) with the emphasis to provide eFactory users with wider access to Industry4.0 and digital manufacturing solutions
### 2.3.4 Message Interoperability
In the context of eFactory Data Spine, data model interoperability corresponds
to the ability to share information among partner messages and processes, as
well as to trigger appropriate actions based on the events received from
existing eFactory platforms. Considering the interoperability guidelines
designed and developed in Task 3.2, the data model interoperability task
aligns the data models of the federated platforms to support meaningful
message exchange and viable business processes that spread across two or more
of the existing eFactory platforms. The task utilises the proven methods and
open-source tools for data-model alignment to establish synergies and resolve
overlaps and conflicts between different data models.
### 2.3.5 Data Analytics
eFactory enhances the data handling, analytics and interoperability modules
from the four base platforms to (a) use/integrate them within the eFactory
platform in a way that makes them accessible to a wider user base in
cross-platform scenarios, and (b) expose them as analytic services that can be
used in the pilots/experiments on an on-demand basis. The TRL of the existing analytic
toolset (such as COMPOSITION’s Deep Learning toolkit and vf-OS’s Data Analytic
services) is enhanced with the aim of capturing in-factory implicit data
knowledge and providing the analytics that can help optimise the manufacturing
processes. By deploying the analytic services as untrained plug-and-play
applications, the eFactory platform will provide the means to analyse
heterogeneous datasets and propagate meaningful information to dashboards and
HMIs. The handling of the data by the analytic services will be the
responsibility of the service providers – as typical in a federated ecosystem.
The eFactory project will provide secure data storage service, if needed by
the analytic services, to temporarily store the raw or analysed data from
processes, shop-floor and manufacturing systems – see Section 4.3. However, no
handling or analytics of sensitive data (e.g. personal details or data of high
business value) is envisioned in the project.
## 2.4 Increase Data Re-Use
In the eFactory project, the sharing and re-use of data for research and
experimentation purposes will be determined by the data owner i.e. the entity
that has the data under its jurisdiction. It is necessary to take into account
that the data owner and data provider may not be the same entity. In line with
EC’s interests, the eFactory project supports the exchange, sharing and re-use
of non-personalised data through the Data Spine and other eFactory solutions,
with the fair-use policy that the data is used with the consent of the owner.
The data used for the validation of eFactory tools will be made available for
use in further experimentation (e.g. open-calls) through an open-access
repository.
It is important to note that the eFactory project does not include any purely
technological solutions to prevent the mis-use of data during or after the
project lifetime. However, it supports these important aspects by putting in
place the necessary authentication and authorisation checks that govern the
access and (to a certain extent) the utilisation of data stored in the eFactory
platform. Furthermore, the project supports the development of collaborative
solutions (within the project or through open-calls) and provides an
appropriate technology infrastructure to address the data sovereignty and data
protection issues.
# 3 Allocation of Resources
The management of data in the eFactory project is carried out through the
provisioning of relevant tools and systems, as described in Section 2.1.1.
These systems (such as OwnCloud) provide the required level of fairness
towards data sharing, security and privacy. During the eFactory project, the
data management systems (described in Section 2.1.1) are provided by the
project partners as part of their commitment towards the project.
The management of the data in the eFactory project is a collective activity of
all partners, where the project manager takes the lead role of establishing
the procedures and monitoring the utilisation of available infrastructure. The
underlying infrastructure is maintained by the respective owners e.g. ASC is
the owner of the OwnCloud document management system and therefore responsible
for ensuring the continuous provisioning and quality of service of OwnCloud
system. Similarly, the ownership of the other infrastructure e.g. Data Spine,
Marketplace etc. will be defined during the course of the project.
The management of data is the responsibility of data owners who decide which
data to share, with whom, for what purpose and under what conditions. The
provisioning of data for research purposes is ensured by putting in place the
relevant procedures (based on H2020 guidelines) and by using open-access
repositories. This data will be limited to the purpose of the research and
prototyping activities conducted within the scope of this project, in
accordance with the data minimisation principle. If processing of personal
data is needed, an explicit confirmation will be put in place to
make explicit that the beneficiary has lawful basis for the data processing
and that the appropriate technical and organisational measures are in place to
safeguard the rights of the data subjects.
# 4 Data Security
In Task 5.3, the eFactory team defines and implements data governance
mechanisms, covering the following aspects (for more information see Section
6.2):
* Information governance, a policy-based control of information to meet all legal, regulatory, risk, and business demands
* Data governance, involving processes and controls to ensure that information at the data level is true, accurate, and unique (not redundant). It involves data cleansing to strip out corrupted, inaccurate, or extraneous data and de-duplication, to eliminate redundant occurrences of data
For the security analytics in Task 6.2, some of the following open datasets
will be considered:
* https://github.com/defcom17/NSL_KDD
* http://www.shubhamsaini.com/datasets.html
* https://web.archive.org/web/20150205070216/http://nsl.cs.unb.ca/NSL-KDD/
## 4.1 Regulation
The project carefully analyses the implications of, and compliance with, the
relevant regulations on data management and consumption. This includes
ensuring compliance with the GDPR (General Data Protection Regulation) 2 and
the NIS Directive (Directive on Security of Network and Information Systems) 3. The
tasks responsible for data storage (T4.3) and the security framework (T6.2) are
the core activities concerned with the management of data and ensuring the
compliance with relevant data security and privacy regulations. Furthermore,
the eFactory Consortium Agreement explicitly states that the project partners
are GDPR compliant.
## 4.2 Data Integrity and Quality
Based on GDPR requirements, the following security controls are addressed
within eFactory in the context of data integrity and quality; a minimal
validation sketch follows the list.
* **Data input validation** : Controls over various factors like predictable behaviour, manual override, timing, etc. corresponding to the Data Quality Principle and the GDPR requirement for verifying sensitive data for its accuracy, completeness and for being up-to-date
* **Data and metadata protection** : Protection against unauthorised access and manipulation, automated restricted access and cryptographic protection for supporting subject’s requests to access personal data and deletion of personal data and/or personal data modification
* **Data protection at rest** : Cryptographic protection and off-line storage (GDPR requirement for deletion and/or modification of personal data by the data subject)
* **Data protection in shared resources** : Cryptographic protection (GDPR requirement for deletion of personal data and/or personal data modification by the data subject)
* **Notification of data integrity violations** : Monitoring services for detecting, reporting and investigating personal data breaches as well as for reviewing existing privacy notices and keeping them up-to-date
* **Informed consent by design** : User must have an informed consent on the data usage, which prevents the use of data in a way that is not according to the user wish (GDPR requirement for implementing privacy procedures for seeking, recording, and managing user’s consent)
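As a minimal sketch of the first control, data input validation, the following illustrative check rejects incomplete or stale records before they enter a store; the field names and the staleness threshold are assumptions, not project specifications:

```python
from datetime import datetime, timedelta

REQUIRED_FIELDS = ("username", "email", "updated_at")
MAX_AGE = timedelta(days=365)  # assumed threshold for "up-to-date" data

def validate_user_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in record]
    if not problems:
        if "@" not in record["email"]:
            problems.append("email address is not plausible")
        if datetime.utcnow() - record["updated_at"] > MAX_AGE:
            problems.append("record is stale and should be re-verified")
    return problems

issues = validate_user_record(
    {"username": "jdoe", "email": "jdoe@example.org",
     "updated_at": datetime.utcnow()})
assert issues == []
```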
## 4.3 Data Storage
Data gathered from shop-floors (Task 4.1) and analysed data (Task 4.2) is
stored in a secure data-store that will be made available as docker
containers, allowing users to deploy the container in the cloud or
on-premises. Access to the data storage is secured such that only authenticated
(using the single-sign on credentials) and authorised persons within the
federation are granted access. These data protection mechanisms ensure fine-
grained access control based upon the User Managed Access (UMA) standard,
where the data owners can themselves control who can use the data (even when
this is stored in the cloud). Privacy enforcing mechanisms are utilised to
ensure that stored personal data (if any) complies with privacy regulations
(in particular, GDPR), e.g., access to any personal data in the store must
follow informed consent. The data storage may also store and disclose personal
data in pseudonymised data sets – the data store provides support to
developers to convert data sets to a pseudonymised format (where personal data
is involved). Moreover, tools are also created to evaluate the extent to which
sensitive personal data is at risk of disclosure using the chosen form of
pseudonymisation and it is ensured that cross federation security and privacy
is achieved in a holistic end-to-end manner.
## 4.4 Data Privacy
The eFactory project places specific emphasis on data privacy by putting in
place procedures where parties attempting to access information must be
authenticated (confirming their identity) and authorised (confirming they have
permission from the data owner for access). During the project data
confidentiality is maintained, whereby access to data is revealed only to
authorised parties.
Within Task 4.3 (Secure Data Storage Solution), data owners can configure
access to stored data using the User Managed Access (UMA) protocol standard,
which works in conjunction with OAuth to authenticate user identities. For
this purpose, a holistic (platform level) framework for security, privacy and
management of data, as well as users on the eFactory platform, is developed
within the eFactory project Task 6.2. In terms of data and information
security, the framework specifies and implements the protocols that ensure
eFactory’s (i.e. interlinked platform, systems and tools) compliance with
relevant cybersecurity and privacy mechanisms. This includes mechanisms and
standards related to data security (e.g. encryption, cryptography) and privacy
(e.g. GDPR and NIS).
In terms of user management, the framework ensures that eFactory users have
seamless access to the integrated resources while their security and privacy
concerns are satisfied. A preliminary study of the 4 base
platforms identified a common set of security protocols and standards (e.g.
OpenID Connect, OAuth2.0 and SAML 2.0) that are being used across them. The
open-source solutions KeyCloak and WSO2 have been identified as extensible
solutions that implement those open protocols and standards to provide
delegated identity management and role-based access management. These
technology implementations provide foundations for centralised access and
security infrastructure for the eFactory platform. In addition, the
standardised data encryption and cryptography techniques used in base
platforms are tuned to work in conjunction to ensure security and privacy of
data exchanged through eFactory. Moreover, during the eFactory project
continuous checks are done so that the interconnected platforms, systems and
tools in the federation adhere to the holistic security and privacy concepts.
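For illustration, the sketch below shows the two-step UMA 2.0 flow as Keycloak implements it: a client first authenticates to obtain an OAuth2 access token, then exchanges it for a requesting-party token (RPT) carrying the data owner's permissions. The host, realm and client names are placeholders, not actual eFactory endpoints.

```python
# Hedged sketch of the Keycloak-style UMA 2.0 flow; endpoints are invented.
import requests

BASE = "https://idp.example.org/realms/efactory"          # placeholder host/realm
TOKEN_URL = f"{BASE}/protocol/openid-connect/token"

# Step 1: authenticate the client (confirming its identity).
access = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "analytics-app",                          # placeholder client
    "client_secret": "...",
}).json()["access_token"]

# Step 2: exchange it for an RPT (confirming the data owner granted access).
rpt = requests.post(TOKEN_URL, data={
    "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
    "audience": "secure-data-store",                       # placeholder resource server
}, headers={"Authorization": f"Bearer {access}"}).json()

# The RPT's permissions are then evaluated by the data store on every request.
```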
## 4.5 Federated Identity Management
To access the administrative environment and for the separation of duties,
eFactory uses the Federated Identity Management, which includes:
* Single Sign On (SSO): It replaces various passwords with a single set of enterprise credentials and provides a consistent authentication experience
* Access security: It centralises access control with a policy-driven security layer for all apps and APIs
When it comes to integrating diverse platforms, priority is given to
standardised and modern technologies for identification, authorisation and
authentication. Furthermore, registration is clearly separated
from the access to resources and backup authentication methods are put in
place.
The following figure shows the outline of the Security, Privacy & User
Management framework in eFactory:
Figure 7: Security, Privacy & User Management Framework
## 4.6 Blockchain Approach for Secure Data Exchange
The blockchain approach is currently being tested in several application
domains, including financial services, eHealth and supply chain management.
For traceability in supply chains, blockchain can be used to provide an audit
trail for products and their associated manufacturing and supply chain data.
Blockchain and Distributed Ledger Technologies (DLTs) stand to offer an end-
to-end accountancy mechanism that can facilitate product data integration,
services interoperability, cost-effectiveness and increased trust in supply
and value chain management. Towards this goal, companies such as IBM, Oracle,
and SAP are building their blockchain platforms on Hyperledger, a blockchain
technology more suitable to building business applications. Microsoft Azure,
Amazon AWS and IBM have all started offering blockchain as a service to
streamline the adoption of the technology and its applicability in several
fields.
While the most prominent use of blockchain is in cryptocurrencies, such as
Bitcoin, it can be used in several applications such as fulfilment,
agreements/contracts, tracking and, of course, payments. The value it offers
is inherent to the technology, which is essentially a distributed ledger of
transactions kept on cryptographically protected blocks. As such,
transactions across multiple parties, protected by a security and privacy
layer, are immutable, offering transparency and trust in supply chain
management.
eFactory leverages blockchain technology to ensure trust, security and
automated exchange of supply chain data among all authorised actors. The goal
is to ensure the origin, quality, compliance and appropriate handling of
data/documents tracked throughout connected factories, while supporting
interoperability and product traceability. The eFactory blockchain service
realised in Task 5.4 is sector-agnostic, serving cross-sectorial
stakeholders (production, distribution, customers, etc.). As a federation
level solution, no single entity owns the blockchain process; instead, all
stakeholders can access and use the Blockchain-as-a-Service platform.
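The tamper-evidence property underpinning such an audit trail can be illustrated with a toy hash-chained ledger: each block stores the hash of its predecessor, so retroactively editing any past supply chain record invalidates every later link. This is a teaching sketch only, not the Hyperledger-based service realised in Task 5.4.

```python
# Toy hash-chained ledger illustrating why a blockchain audit trail is
# tamper-evident. Supply chain events here are invented examples.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash the block body (data, previous hash, timestamp) deterministically."""
    body = {k: block[k] for k in ("data", "prev", "ts")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(data: dict, prev_hash: str) -> dict:
    block = {"data": data, "prev": prev_hash, "ts": time.time()}
    block["hash"] = block_hash(block)
    return block

chain = [make_block({"event": "genesis"}, "0" * 64)]
chain.append(make_block({"lot": "A-17", "event": "shipped"}, chain[-1]["hash"]))
chain.append(make_block({"lot": "A-17", "event": "received"}, chain[-1]["hash"]))

def verify(chain: list) -> bool:
    """Recompute every hash and link; any retroactive edit breaks the chain."""
    return all(
        cur["prev"] == prev["hash"] and block_hash(cur) == cur["hash"]
        for prev, cur in zip(chain, chain[1:])
    )

assert verify(chain)
chain[1]["data"]["event"] = "diverted"   # tamper with a past record...
assert not verify(chain)                 # ...and verification now fails
```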
# 5 Ethical Aspects
eFactory does not introduce any critical ethical issues or problems. However,
several considerations typical to ICT and on-site industrial trials, where
employees are also involved in the demonstration and evaluation stages, shall
be considered. The consortium is fully aware of these and has the necessary
experience to address them seamlessly as summarised below.
## 5.1 Legal Framework
eFactory proposed solutions do not expose, use or analyse personal sensitive
data for any purpose. In this respect, no ethical issues related to personal
sensitive data are raised by the technologies to be employed in the industrial
pilots planned in Greece, Germany, and Spain. Furthermore, the eFactory
consortium considers during the project lifetime the ethical rules and
standards of H2020, and those reflected in the Charter of Fundamental Rights
of the European Union. Generally speaking, ethical, social and data protection
considerations are crucial and are given all due attention. eFactory addresses
any ethical and other privacy issues in Task 1.4 for the investigation,
management and monitoring of ethical and privacy issues that could be relevant
to its envisaged technological solution and will establish a close-cooperation
with the Ethics Helpdesk of the European Commission.
Besides these general conditions, the consortium is aware that a number of
privacy and data protection issues could be raised by the activities (i.e. in
all pilots planned in WP9 activities) to be performed in the scope of the
project. The project involves the carrying out of data collection in all
industrial pilots and trials in order to assess the technology and
effectiveness of the proposed smart factory and digital manufacturing
solutions. For this reason, if any human participants need to be involved in
certain aspects of the project, this will be done in full compliance with any
European, international and national legislation and directives relevant to
the country where the data collection takes place. The eFactory partners
found the following regulations to be relevant, and they are considered when
dealing with personal data:
* The Universal Declaration of Human Rights 4 and the Convention 108 5 for the Protection of Individuals with Regard to Automatic Processing of Personal Data
* Directive 95/46/EC 6 & Directive 2002/58/EC 7 of the European parliament regarding issues with privacy and protection of personal data and the free movement of such data.
Specifically, when dealing with personal data the eFactory partners will
observe the following guidelines:
* Unnecessary personal data collection is avoided (for example, unless it is absolutely required for security or it constitutes the nature of a research study, there is no collection of personal details, identities, or bio-identification data at registration to eFactory software systems, i.e. nicknames can be used instead of real names whenever possible)
* The personal data needed for statistical analysis is collected anonymously, i.e. without association with the names of individuals;
* Any personal data is collected only with the explicit permission of the individuals in question
* The personal data collected is treated confidentially and carefully (taking proper technical means of information protection, e.g. storing general and personal data separately, using encryption for personal data and identities, deleting personal data when it becomes unnecessary)
* Individuals are given the right to access their personal data and the analysis and user models made based on it
To further ensure that the fundamental human rights and privacy needs of
participants are met whilst they take part in the project, in the evaluation
plans a dedicated section will be delivered for providing ethical and privacy
guidelines for the execution of the industrial trials. In order to protect the
privacy rights of participants, a number of best practice principles are
followed. They include:
* Data is not collected without the explicit informed consent of the individuals under observation. This involves being open with participants about what they are involving themselves in and ensuring that they have agreed fully to the procedures/research being undertaken by giving their explicit consent
* No data collected is sold or used for any purposes other than the current project
* A data minimisation policy is applied at all levels of the project and is supervised by each Industrial Pilot Demonstration responsible. This ensures that no data which is not strictly necessary to the completion of the current study is collected
* Any shadow (ancillary) personal data obtained during the course of the research is immediately deleted. However, the ultimate plan is to minimise this kind of ancillary data as much as possible. Special attention is also paid to comply with the Council of Europe Committee of Ministers Recommendation R(87)15 on regulating the use of personal data in the police sector, Art. 2
* The collection of data on individuals solely on the basis that they have a particular racial origin, particular religious convictions, sexual behaviour or political opinions or belong to particular movements or organisations which are not proscribed by law is prohibited and is not done within the project
* Compensation – if and when provided – will correspond to a simple reimbursement for working hours lost as a result of participating in the study. Special attention is paid to avoid any form of unfair inducement
* If employees of partner organisations are to be recruited, specific measures will be in place in order to protect them from a breach of privacy/confidentiality and any potential discrimination. In particular their names will not be made public and their participation will not be communicated to their managers
The pilot implementation activities (Task 9.1 –Task 9.4) are performed in
three European countries under the leadership of the pilot coordinating
partner. Below the relevant national legislation for the countries involved in
the pilot is outlined:
**Greek Pilot** (Kleemann, ELDIA, MilOil):
* Law 2472/1997 (and its amendment by Law 3471/2006) of the Hellenic Parliament
* Regulatory authorities and ethical committees
* Hellenic Data Protection Authority http://www.dpa.gr/
**German Pilot** (Airbus, Innovint Aircraft Interior GmbH, Walter Otto Müller
GmbH & Co.KG, AM Allied Maintenance GmbH):
* Federal Commissioner for Data Protection and Freedom of Information (https://www.bfdi.bund.de/DE/Home/home_node.html)
* Data protection authorities for its various states (https://www.ldi.nrw.de/mainmenu_Service/submenu_Links/Inhalt2/Aufsichtsbehoerden/Aufsichtsbehoerden.php)
**Spain Pilot** (AIDIMME, LAGRAMA):
* Organic Law 3/2018, of December 5 th , of Personal Data Protection and guarantee of the digital rights (https://www.boe.es/eli/es/lo/2018/12/05/3)
* Law 34/2002, of July 11 th , of services of the information society and electronic commerce (https://www.boe.es/eli/es/l/2002/07/11/34/con)
* Law 9/2014, of May 9 th , General on Telecommunications.
(https://www.boe.es/eli/es/l/2014/05/09/9/con)
In addition to the relevant national legislation, the main EU and
international policy documents that are relevant to eFactory are listed below:
* Charter of Fundamental Rights of the European Union
* European Convention for the Protection of Human Rights and Fundamental Freedoms
* Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data
* Directive 2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications)
## 5.2 Risks and Related Measures
The table below summarises the ethical risks identified in relation to
eFactory activities. Within WP1 (Task 1.4), such risks are further elaborated
prior to the execution of the industrial pilots, and the results will be
included in the corresponding reports of WP9.
<table>
<tr>
<th>
**No.**
</th>
<th>
**Ethical Risk**
</th>
<th>
**Description of Risk**
</th>
<th>
**Foreseen Risk Management Measures**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Data Security
</td>
<td>
Difficulty in ensuring the security of shared personal data in the trials.
</td>
<td>
Special attention will be given to ensuring confidentiality and to
incorporating privacy-enhancing technologies (pseudonymisation, etc.) to
ensure protection from data breaches. eFactory partners have the capacity and
the experience to cope with the delivery of security mechanisms, if needed.
</td> </tr>
<tr>
<td>
2
</td>
<td>
Storage and process of personal data,
Confidentiality
</td>
<td>
Measurements from various sensors will be transmitted wirelessly.
Difficulty in ensuring the security of privacy-related data collected before
and/or during the execution of the trials.
</td>
<td>
CERTH have the expertise and the know-how from similar past and ongoing
research projects, towards providing the necessary ethical guidelines that
should be adopted during the execution of the trials. Local ethical committee
(and the National committee, if needed) will be informed towards getting an
official permission for the execution of the selected trials.
</td> </tr>
<tr>
<td>
3
</td>
<td>
Loss of Privacy Control
</td>
<td>
Storage and process of privacy-related data towards the validation of the
eFactory integrated tools in the selected trials.
</td>
<td>
For activities related to the factory optimisation, existing data will be
initially categorised and only those that are not exposing privacy or ethical
issues will be utilised. In any case, if needed for conducting the research
activities of the project, records or data dealing with privacy will be
anonymised and will be totally destroyed after the research study.
The data management policy will always take care that such activities are not
forbidden by the law of the country in which the information was collected,
stored and analysed.
</td> </tr>
<tr>
<td>
4
</td>
<td>
Delegation of
Control
Privacy
Incidental
Findings
</td>
<td>
Need to notify proper trial authorities.
</td>
<td>
Within Task 1.4, a sub-activity has been included to address local and
European legislation. In that context, all the pilots will be performed
according to them and relevant data protection authorities will be informed on
time.
</td> </tr>
<tr>
<td>
5
</td>
<td>
Lack of
Transparency
</td>
<td>
Work of professionals (Workers, Employees in selected trials, etc.).
</td>
<td>
An ethics manual will be delivered for each of the trials so that all
activities performed are in compliance with national and European
legislation. Prior to the execution of the pilots, the local ethical
committees will be informed of any data analysis or collection needed as part
of the eFactory evaluation, and the necessary documents will be created by the
respective Industrial Pilot Responsible in order to obtain ethical approval.
</td> </tr> </table>
Summarising, privacy-related issues within the eFactory project are related
to:
* Concerns arising from the project’s activities and fields of implementation (use of existing data or newly collected information through the shop floor involving human activities or confidential information dealing with enterprise performance)
* Privacy protection and confidentiality of volunteers for the shop-floor data analysis and potential new collection during the industrial trials. Here, special guidelines will be delivered in the ethics manual of eFactory and informed consent will be created for the implied data utilisation by requesting all involved persons to read, be informed and sign the appropriate forms
# 6 Other Issues
## 6.1 Data Protection
In the course of the entire project, the fundamental rights of data protection
and the right to privacy of the volunteer research participants will be
strictly followed. Furthermore, the developments and tests performed within
eFactory project life will observe the Charter of Fundamental Rights of the
European Union 11 (2000/C 364/01). The following articles of this Charter
apply directly to this project:
* Article 1: Human dignity is inviolable. It must be respected and protected
* Article 7: Everyone has the right to respect for his or her private and family life, home and communications
* Article 8.1: Everyone has the right to the protection of personal data concerning him or her
* Article 8.2: Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data, which has been collected concerning him or her, and the right to have it rectified
* Article 8.3: Compliance with these rules shall be subject to control by an independent authority – in this case this responsibility lies with the eFactory Project Manager (ICE)
* Article 23: Equality between men and women must be ensured in all areas, including employment, work and pay. The principle of equality shall not prevent the maintenance or adoption of measures providing for specific advantages in favour of the underrepresented sex
## 6.2 Governance Rules and Trust Mechanisms
Task 5.3 of the eFactory project sets up a formal model of distributed
collaborative activities where each activity is defined by its contribution to
the overall goal in a recursive approach. Formal contracts capture the way in
which companies are given responsibility for activities, and ensure the
results of the activity conform to the relevant regulations. The contracting
framework also covers the process used by a company to implement its activity,
ensuring compliance at both process and results level. Using this theoretical
model, the requirements for relevant regulations, smart contracting
mechanisms, secure message exchange, company sourcing, monitoring protocols
and coordination mechanisms are developed, ensuring support for regulatory
compliance and trusted distribution and coordination of activities.
Based on these activities, the following governance rules and trust mechanisms
are implemented.
### Information level
• Policy based control of information to meet legal, regulatory and business
demands
### IT level
• Aligning IT efforts with the business objectives of eFactory
### Data level
* Ensuring that data are accurate and true
* Eliminating corrupted and inaccurate data (data cleansing); see the sketch below
* Eliminating redundant data (de-duplication)
* Ensuring security controls for data integrity and quality
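As a purely illustrative example of the data-level rules above, the following sketch applies cleansing and de-duplication to a small shop-floor data set using pandas; the column names and plausibility thresholds are invented.

```python
# Minimal sketch of data-level governance: cleansing and de-duplication.
# Requires pandas; column names and thresholds are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "machine_id": ["M1", "M1", "M2", "M2", None],
    "temp_c":     [61.2, 61.2, 58.9, 9999.0, 60.1],   # 9999.0: sensor glitch
})

df = df.dropna(subset=["machine_id"])     # remove incomplete records
df = df[df["temp_c"].between(-40, 150)]   # remove implausible readings
df = df.drop_duplicates()                 # de-duplication
```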
**Introduction**
# 1.1 Scope
This document is the deliverable # D7.5 – “Data Management Plan - Update” for
the EU H2020 (COMPET-5-2015–Space) project “ **A Gaia and Herschel Study of
the Density Distribution and Evolution of Young Massive Star Clusters** ”
(Grant Agreement Number: **687528** ), acronym: **StarFormMapper** (SFM)
project.
**2\. Description of Work**
WP 7, “Data Management and Curation”, is aimed at the provision of central
storage for data associated with the project, together with its public access.
In addition, the documentation and metadata required for full access will be
properly described.
# 2.1 Server Status Update
At the time of the last update to the Data Management Plan (hereafter DMP)
contained in the deliverable D7.5 (submitted in September 2018), our intention
was still to follow the scheme outlined in our initial DMP (deliverable D7.1).
That scheme outlined:
“ _The project has allowed for separate servers at the Leeds and Madrid nodes,
which are now fully installed and functional. These will provide backup to
each other.”_
The intention was that these two servers would serve any data gathered by the
project, as well as acting as the gateway to any online resources developed
during the project. We still intend that the servers will carry out the former
task. However, the success of Quasar as a company, and the development by
them of their DEAVI server/client architecture, and in particular its approval
as a suitable Added Value Interface to sit near to the actual ESA archives at
ESAC, somewhat negates the need for the latter. We will deal with this in
detail below. First, we consider the current state of the servers and outline
exactly what we will provide on them.
In the last period, we noted that:
_The Leeds "data repository/backup" is functioning but due to changes in IT
staffing and management is not yet available as an external facing resource.
We cannot at the moment give a specific timing for this to happen, as the
staff required to set it up are beyond our control, as are the specific
details as to how this will be provided._
The last review requested that we provide an appropriate backup plan in the
event that this situation did not improve. The approved scheme was that the
Leeds server be transferred to Cardiff, who would provide the capability to
install the software required to provide this external-facing service. The
transfer has now occurred (though, as with all things related to Leeds IT
currently, rather late, since this occurred in early Sep 19) and the required
setup is underway.
It is still our intention to make publicly available all data gathered for
the project, together with appropriate descriptions and metadata. We can now
be clearer about what these data encompass. First, all of the simulations
developed at Cardiff will be provided to the community through the web, using
a detailed front end that provides filtering according to the type of
simulation, the initial parameters, etc. Secondly, we have acquired data for
the star formation region NGC2264 with both the JVLA and CFHT. The first of
these datasets will be analysed before the end of the project and the reduced
data products made available from this server as well. The second of these
datasets is unlikely to be fully analysed, but we will provide it “as is” at
the project completion and update it in time after the project end. We note
that we still promise to provide these data to the community as a resource
until ten years after the project inception (see 2.3 below).
The Madrid servers are fully functional and have been providing simulation
data to the consortium for over a year now. These data are also now public, as
described in D3.4.
# 2.2 Archive Data Analysis Update
The Quasar servers are running the Docker s/w that allows us to interface with
their developing toolset. This is part of the final adopted access protocol
for the project which will eventually become public. Testing of this has
proved the basic methodology. It was our original intention that these
services would be provided through our own servers going forward. We no longer
feel that there is adequate time to do this with the server transferred to
Cardiff, as it will need to be set up again from scratch.
However, since our last update, Quasar have been successful in fully testing
and deploying their client/server architecture (described in deliverables WP
4.3, 4.4, 4.10 and 4.12), both locally to them but more impressively as an
add-on to ESA’s archive services through their GAVIP platform. In addition,
Quasar as a company have been successful in expanding, and have a much clearer
long term future than at the start of the project.
This therefore opens up the option of a new route to our goal of a server that
can be used to apply our software to the ESA archives.
* First, Quasar are now able to commit themselves to being part of the long term data access solution, since the framework they have developed is used for other projects they are working on.
* Second, the success of the GAVIP trial allows for the possibility that the software can also be run there, nearer to the archive, which was one of our original goals. We cannot commit ESA to supporting GAVIP obviously, so our primary supported access will be through Quasar.
We aim to demonstrate fully the state of both the GAVIP and Quasar service
before we submit the deliverable D7.2 which has been delayed due to the IT
issues at Leeds. Our intention is to submit this by the end of this calendar
year.
# 2.3 Fair Data Update
There are no changes to the availability, openness, re-use provisions or the
requirement for making data findable. The only modification is on the item:
_“In particular, the University of Leeds will commit to hosting the server
mentioned in Section 2 for a period of at least 10 years.”_
Obviously this requirement now devolves to Cardiff.
# 2.4 Data Security Update
Now that we have a feasible plan for supporting the server in Cardiff, longer
term viability for the servers at Quasar, and for deploying our algorithms
through GAVIP, we can be confident on the longer term data security. Both
Quasar SR and the University of Cardiff will now work together to ensure that
as a minimum first step the server transferred to Cardiff provides a backup
facility to the data stored in Madrid, whether simulation data, observational
data, or project records, software, webpages etc. This is within the control of
the project.
1. **Executive Summary**
This document comprises deliverable D1.2 Data Management and Quality Assurance
Plan of WP1 Management Working Package. The document is prepared according to
“H2020 templates: Data management plan v1.0 – 13.10.2016”. In general terms,
the research data is aimed to follow the 'FAIR' principle that refers to **f**
indable, **a** ccessible, **i** nteroperable, and **r** e-usable data. A-WEAR
will participate in the Open Research Data Pilot of Horizon 2020 and hence
will make the research data publicly available.
This document is a living document in which information can be made available
on a finer level of granularity through updates as the implementation of the
A-WEAR project progresses and when significant changes occur. This document
would be updated in the context of the periodic evaluation/assessment of the
project if any changes appear.
2. **Partners**
This section provides a list of A-WEAR partners and corresponding
abbreviations. We remark that the abbreviations are used here only for the
sake of compactness and may not reflect the company's official abbreviation.
**Table 2-1 A-WEAR Beneficiaries**
<table>
<tr>
<th>
**Consortium Member**
</th>
<th>
**Legal Entity**
**Short Name**
</th>
<th>
**Country**
</th> </tr>
<tr>
<td>
TAU
Tampere University (formerly Tampere University of Technology)
</td>
<td>
TAU (formerly
TUT)
</td>
<td>
Finland
</td> </tr>
<tr>
<td>
UJI
Universitat Jaume I de Castellon
</td>
<td>
UJI
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
BUT
Brno University of Technology
</td>
<td>
BUT
</td>
<td>
Czech
Republic
</td> </tr>
<tr>
<td>
UPB
University “Politehnica” of Bucharest
</td>
<td>
UPB
</td>
<td>
Romania
</td> </tr>
<tr>
<td>
URC
Universita Mediterranea di Reggio Calabria
</td>
<td>
UNIRC
</td>
<td>
Italy
</td> </tr> </table>
# Table 2-2 A-WEAR partner organizations
<table>
<tr>
<th>
**Partner Organization**
</th>
<th>
**Legal Entity**
**Short Name**
</th>
<th>
**Country**
</th> </tr>
<tr>
<td>
NET
Netcope technologies
</td>
<td>
Netcope
</td>
<td>
Czech
Republic
</td> </tr>
<tr>
<td>
CIT
CITST
</td>
<td>
CITST
</td>
<td>
Romania
</td> </tr>
<tr>
<td>
NXP
NXP Semiconductors
</td>
<td>
NXP
</td>
<td>
Romania
</td> </tr>
<tr>
<td>
WPS
Wirepas
</td>
<td>
Wirepas
</td>
<td>
Finland
</td> </tr>
<tr>
<td>
DLI
Digital Living International Oy
</td>
<td>
DLI
</td>
<td>
Finland
</td> </tr>
<tr>
<td>
BEIA
Beia Consult International
</td>
<td>
BEIA
</td>
<td>
Romania
</td> </tr>
<tr>
<td>
S2G
S2 Grupo
</td>
<td>
S2 GRUPO
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
ERI
Ericsson
</td>
<td>
Ericsson
</td>
<td>
Finland
</td> </tr>
<tr>
<td>
CPD
City of Castellón, police department
</td>
<td>
\-
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
IDOM
IDOM Consulting, Engineering, Architecture S.A.U.
</td>
<td>
IDOM
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
SWO
Sewio Networks
</td>
<td>
SEWIO
</td>
<td>
Czech
Republic
</td> </tr>
<tr>
<td>
T6E
T6 Ecosystems
</td>
<td>
T6-ECO
</td>
<td>
Italy
</td> </tr> </table>
The Working Package (WP) structure of the project is illustrated below.
**Figure 1 Working Packages in A-WEAR**
3. **Data Management Plan**
A-WEAR is part of a flexible pilot under Horizon 2020 called the Open Research
Data Pilot (ORD pilot). The ORD pilot aims to improve and maximize access to
and re-use of research data generated by Horizon 2020 projects and it takes
into account the need to balance openness and protection of scientific
information, commercialization and Intellectual Property Rights (IPR), privacy
concerns, security as well as data management and preservation questions.
1. **Data Summary**
1. **Data Purpose**
All technical WPs, namely WP2-WP5 (see Figure 1) will rely on various data
types (simulated, measurements-based, etc.) in order to analyze, verify, test,
and improve the developed algorithms and methods.
2. **Data Types and Formats**
The data will consist of software developed by the project team (e.g., based on
Matlab, C/C++, Python, VHDL, Java, Android OS, Wear OS, etc.), raw data
measurements from the field experiments and testbed campaigns, mathematical
and statistical models, and channel traces and context-awareness metrics in
time, frequency or space domains. Data types can include multidimensional
time-series, structured data, and unstructured data – such as image analysis,
video analysis, audio analysis and machine generated data analysis. The data
types specific to eHealth studies in WP3 will be electronic health records,
clinical data based on HL7 standard, DICOM files.
3. **Data Re-using**
Open-access datasets and other open-access data might also be used for the
scope of A-WEAR research. The main open-access repositories that we plan to
use are:
1. EU Zenodo repository ( _www.zenodo.org_ ) : it contains research papers, datasets with measurements, software tools (Matlab, Python, etc.), etc.
2. EU OpenAIRE ( _https://www.openaire.eu/_ ) is the EU emerging repository for open science. We remark that the metadata records of the published data sets on Zenodo can be easily loaded into the OpenAIRE platform.
3. EU Open Data Portal ( _https://data.europa.eu/euodp/en/home_ ) : a repository of an expanding range of data from the European Union institutions and other EU bodies
4. Github repository ( _https://github.com/_ ) : it mostly contains software tools, but reduced-scope datasets with measurements are also available. GitHub and Zenodo are tightly coupled, and publishing major releases of data in Zenodo from GitHub is trivial (almost automated).
5. Crawdad ( _https://crawdad.org/_ ) is an archive for wireless datasets at Dartmouth university
6. ArXiv ( _https://arxiv.org/_ ) is an archive of pre-print publications and unpublished work by research community in various fields (ICT, physics, mathematics, etc.)
7. Stanford Large Network Dataset Collection ( _https://snap.stanford.edu/data/#email_ ) is a library of relevant datasets for research on large social and information networks
8. Kaggle ( _https://www.kaggle.com/datasets_ ) : it provides a large collection of datasets and models in various areas, such as health domain (e.g., physiological parameters), demographics, data visualization tools, etc.
9. CodeOcean ( _https://codeocean.com_ ) is a collection of scientific codes (software) associated with papers published in IEEE venues, with the target of making the research results reproducible and reusable
10. UC Irvine machine learning repository ( _https://archive.ics.uci.edu/ml/index.php)_
11. Finnish open data ( _https://www.avoindata.fi/en_ ) is a repository of open data sets from Finnish R&D units, covering all vertical industries, such as smart cities, agriculture, energy, health, etc.
12. US government open data ( _https://www.data.gov/_ ) is a huge repository of data, software tools, and other resources with the purpose to help the research in various areas and to develop web and mobile applications
13. Romanian government open data ( _http://data.gov.ro/_ ) is a Romanian repository of open-access data collected from public administration institutions in Romania
All A-WEAR researchers will be highly encouraged to add all their publications
(and other relevant material) on Zenodo, in addition to other institutional or
personal repositories.
In addition to the above-mentioned open-access repositories, an open health
data repository will be available soon (managed by the Finnish National
Institute for Health and Welfare), as an “Act on the Secondary Use of Health
and Social Data” has been in effect in Finland 1 since 1 May 2019.
The available open-source datasets can be used in various manners, such as:
1. benchmark data to test the developed algorithms in A-WEAR;
2. benchmark unlabeled (blind) data to organize competitions in A-WEAR or other events (e.g., at IPIN annual events);
3. benchmark software codes to compare the developed algorithms with existing state-of-the-art from scientific literature;
4. benchmark calibration scenarios and parameters for the purposes of cross-validation;
5. PhysioNet offers free web access to large collections of recorded physiologic signals (PhysioBank) and related open-source software (PhysioToolkit).
4. **Data Origin**
**3.1.5. Data origin at Beneficiaries**
The data will be collected utilizing the hardware (HW) existing in the A-WEAR
team, such as Arduino, Raspberry Pi-s, wireless sensor/actuator devices of
various nature, RFID systems, Intel Galileo dedicated to medical devices,
brain computer interface systems, and Artix-7 development boards, as well as
software (SW) tools available on mobile devices (e.g. via Google play for
Android devices) or developed by the A-WEAR team, software tools for network
traffic analyses (such as Wireshark or similar), external open source software
tools available online.
In order to attain A-WEAR objectives, large or massive wearable data might be
collected through crowdsensing approaches for the purpose of social and
consumer applications, such as eHealth and public safety (e.g., for studies in
WP3). Crowdsensed data can be either stored/processed in the aggregated
manner, or represented in a user-tailored form according to the target
application.
The eHealth data used in our studies come from PhysioBank that offers a
variety of medical signals and data. Also, some data come from different
paraclinical investigation obtained from emergency hospitals of Bucharest
based on signed protocols. During our future experiments, data collection will
also contain information coming from the wearable healthcare monitoring
systems that may consist of various types of biosensors. These sensors are
measuring significant physiological parameters such as blood pressure,
electrocardiogram (ECG), muscle electromyography (EMG), oxygen in the blood,
body temperature and patient position.
Sensors found or planned on top android devices will be: Accelerometer,
Gyroscope, Magnetometer, Barometric pressure sensor, ambient temperature, Heart
rate monitor, Oximetry sensor, Skin conductance sensor, skin temperature
sensor, blood glucose, wrist force sensors, ECG. Some sports sensors (bicycle)
will also be considered: speed, cadence, power, heart rate.
The cloud context-aware platform will be based on the UJI’s experience in open
sensor platforms for the smart city context. Crowdsourcing positioning data
will be collected and used for reduced training efforts and for the study of
the device-free localization methods (e.g., addressed in WP4).
Data origin for WP5 studies will rely on the different laboratory as well as
field measurement campaigns, where the data will be collected via a variety of
personal wearable devices (e.g. AR/VR glasses, smart watches, smart wristband,
etc.) as well as industrial sensors (e.g. electricity meters, environmental
sensors, etc.). The research will have to tackle the analysis of various data
types including multidimensional timeseries, structured data, unstructured
data – such as image analysis, video analysis, audio analysis and machine
generated data analysis.
**3.1.6. Data origin at Partner Organisations**
During the secondments the researchers might have access to data sets which
are provided for commercial purposes and are the property of the partner
organisations or third parties. In these cases, the access rights and the use
of the data for scientific purposes will be agreed separately. In any case,
opening the data sets and supporting the researchers' possibilities to do
research and graduate will be targeted, as well as extending their skills for
advanced career possibilities. Whenever possible, these data sets will be
opened to allow the maximum exploitation of the outcome of the action, both
for further scientific purposes beyond the action and for commercialization
by industry.
**3.1.7. Estimated Data Sizes**
The data sizes are expected to be on average below 100 GB of data per year;
however, occasionally huge amounts of data may be captured via USRP
measurements with high sampling frequencies, and such data can easily reach a
few snapshots of 100 GB or more in size (e.g., hours of data collected at high
sampling frequencies, which might be relevant for the 5G studies of ESR4, may
require in excess of several hundred GB of storage; also smartphone data such
as gyroscope data, accelerometer data, WiFi and BLE data, etc. can typically
reach about 150 MB/hour in uncompressed form).
**3.1.8. Data Utility**
The main bulk of generated data to be made available will be SW and
measurements data. Sharing platform for the consortium data is the Microsoft
OneDrive and the main sharing platforms for the open-source data to be created
in A-WEAR are the EU Zenodo and GitHub. As parts of the university
infrastructure at TAU, the vital components of hosting and sharing the data
are expected to stay in place in the long term (i.e., minimum15 years after
the project’s end), thus ensuring continued access to the collected data.
**3.1.9. Data sharing across the consortium**
Data not containing any personal information and not raising any ethical
concerns (e.g., such as simulated data) will be shared between the Consortium
units on-a-need basis. The sharing of any other data types across the
consortium will be based on anonymity of data. Where partners require access
to data to enable a synthesis of findings across studies, this will be
provided in strongly anonymized form only. We will comply with the EC Data
Protection Directive 2 and its newer amendments 3 at all steps in our
project and after its end.
**3.2. FAIR data**
**3.2.1. Making data findable, including provisions for metadata**
A-WEAR project will follow the EU FAIR data guiding principles in order to
make it Findable, Accessible, Interoperable, and Reusable. A-WEAR goals are to
follow FORCE11 FAIR data principles 4 where possible:
**Metadata** to be created within A-WEAR refers to any summaries and
documentations about data to be produced within the project (measurements, SW,
simulated data, analytical/mathematical model, etc.), publications, conference
slides, workshop presentations, etc. Most metadata will be available through
the project deliverables and open-access publications, as well as through the
project dissemination channels (webpage, blog, Twitter, Youtube channels,
etc.). We will be guided by the following FAIR principles:
In order to make the data findable in A-WEAR, the following procedures will be
followed:
1. Clear version numbers will be used for the project deliverables.
2. The target publication venues of A-WEAR are open-access peer-reviewed journals and conferences, due to wide dissemination opportunities. Those are easy to access online and offer a simple keyword, author or DOI (digital object identifier) search through their homepages or publication search engines such as “sciencedirect.com” or “scopus.com” (a programmatic search example is sketched after this list).
3. Standard metadata will be generated for publications (Springer, SCOPUS, ISI, etc.).
4. Standard metadata will be used for software on GIT.
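As a hedged illustration of the keyword/DOI search mentioned in the list, the snippet below queries Zenodo's public REST API for records matching a search term; no authentication is needed for searching published records, and the query string is only an example.

```python
# Sketch of programmatic findability: search published Zenodo records by
# keyword and print their DOIs. The query string is an invented example.
import requests

resp = requests.get("https://zenodo.org/api/records",
                    params={"q": "A-WEAR wearable localization", "size": 5})
for hit in resp.json()["hits"]["hits"]:
    print(hit.get("doi", "?"), "-", hit["metadata"]["title"])
```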
**3.2.2. Making data openly accessible**
27 out of 35 A-WEAR deliverables are to be open-access deliverables. Relevant
measurement data and simulation-based data generated within A-WEAR project
will also be made available at least on Zenodo open-access repository, and
possibly to the Beneficiaries’ relevant webpages. Relevant software codes
developed within A-WEAR will be made available at least on the GitHub repository, and
possibly to the Beneficiaries’ webpages. The **relevance** will be agreed upon
by discussions within the Advisory Board of AWEAR, by considering the
following aspects:
1. the open-access data must be useful for the research community at large, e.g., by providing valuable benchmark solutions (SW, measurements, etc.).
2. the open data might involve a reasonable tradeoff between size and potential usefulness (e.g., huge datasets of radio frequency (RF) or intermediate frequency (IF) samples at high sampling frequency may not be relevant enough to be shared directly with the community or might not fit into the current upper size limits of existing repositories, but they might be made available on request to interested researchers).
The main target groups of the dissemination are the scientific community, the
industrial stakeholders in wearables and IoT, the authorities and bodies
responsible for development national and EU knowledge societies and digital
economy, potential end users (including all population in contact with a
wearable device, primary end users, public service personnel, etc.), high-
school pupils (as the future users of wearables and current users of
Internet), and persons developing multinational PhD and cross-sector
trainings.
Several activities will be considered in A-WEAR to ensure that there is a
clear channel of communication between the ESRs and both the scientific
community and the general public as target groups. The main goal of these
activities will be to share results and, more generally, to create awareness
of the importance of A-WEAR research themes to society and to raise awareness
of the MSCA Actions, aiming to follow FAIR principles.
A-WEAR will use 10 dissemination and outreach activities listed in Table 3-1
and conference and journal publications and workshop participation.
# Table 3-1 The 10-step involvement in social media in A-WEAR, in addition to
the project webpage
<table>
<tr>
<th>
**Additional dissemination activities besides webpage, scientific
publications, conference & workshop participation, and patents. All ESRs will
be involved in all these activities. One or two ESRs/task will lead the
efforts **
</th>
<th>
**Lead ESRs**
</th> </tr>
<tr>
<td>
**Webropol** survey active all through the EJD where users and stakeholders
will be free to share their concerns and challenges regarding the technology
(on one hand) and applications (on the other hand) of wearables
</td>
<td>
1,9
</td> </tr>
<tr>
<td>
**Facebook** open group for A-WEAR public awareness
</td>
<td>
10
</td> </tr>
<tr>
<td>
**LinkedIn** open group regarding discussions in the areas of A-WEAR with
**blog** posts on LinkedIn, including ESRs’ blog inputs on their experiences
within the EJD (technical, social, experiences associated to mobility in other
country, lesson learnt and
</td>
<td>
4,5
</td> </tr>
<tr>
<td>
best practices) with at least two posts/quarter
</td>
<td>
</td> </tr>
<tr>
<td>
Adding A-WEAR open-source measurement data on **open repositories,** such as
EU **Zenodo, GitHub** or **Bitbucket** – Fellows 3 and 13 will be in charge
with finding out the distribution terms for the open repositories, informing
the other fellows of those and regularly reminding each of them to distribute
their open measurement data through those repositories
</td>
<td>
3,13
</td> </tr>
<tr>
<td>
ESRs will maintain a **youtube** channel with video clips and fellows
testimonies related to the main topic of the project, providing lessons and
general-purpose talks, to spread the relevance of the activities carried out
in the network
</td>
<td>
6,14
</td> </tr>
<tr>
<td>
**Twitter** 140-character postings with links to results and elevator pitches
</td>
<td>
8
</td> </tr>
<tr>
<td>
ESRs will attempt contact with **local mass-media** to spread the activities
of the consortium, the Marie Curie Actions, and of individual activities
</td>
<td>
2,12
</td> </tr>
<tr>
<td>
Each ESR will post his/her publications (at least the abstract) on
**ResearchGate** and participate in the ResearchGate discussions related to
A-WEAR topics
</td>
<td>
7
</td> </tr>
<tr>
<td>
ESRs from each beneficiary will organize a **A-WEAR Open Day** (one per
beneficiary) where general audience will be invited to visit the host
facilities and create attraction to the conducted research activities &
doctoral studies
</td>
<td>
11
</td> </tr>
<tr>
<td>
Each ESR will commit to act as Marie Curie Ambassadors and visit **local
schools and universities** , as well as **local councils** , exposing the
activities and results of the network. They will give at least 2 public
presentations per ESR within the 36 months of contract. The specific election
of places to give the talk will be left for decision of the ESRs with the
support of the nominated supervisors.
</td>
<td>
15
</td> </tr> </table>
Research papers will be published as open-access by taking up self-archiving
rights for journals and conferences that have them, or if necessary paying the
open-access fees where self-archiving and or free open access is not possible.
Also, online pre-publication in ArXiv will be recommended to fellows.
In addition, related to data accessibility, participants to the conferences
where A-WEAR fellows will present their work will have a direct access to the
information through oral presentations, posters and conference proceedings.
The A-WEAR webpage is hosted at TAU. The project will have a dedicated webpage
that aims to promote the ESRs’ skills and progress in their careers (so that
they are available for the best possible employment opportunities) and the
training network, to disseminate the results achieved, and to announce the
events organized within this project. This website will be supported by a set
of static content pages (institutional content) and will integrate a more
dynamic area, eventually adding a blog and making it easy for any participant
in the network to collaboratively update and create new content.
A-WEAR beneficiaries commit to make their results available in open–access as
much as possible, through at least the followings: a) majority of deliverables
in public access; b) publications via open-access option in IEEE and other
publication forums; c) dissemination of results on open-forums such as
ResearchGate, personal webpages, and open library pages (e.g., TUT has own
open-access portfolios: Dpub and TUTCRIS, UJI publish the papers also on their
webpages, etc.); d) less sensitive and privacy-preserving measurement data to
be provided in open access.
The next European Researchers’ Night will be organized on 27 September 2019,
together with the rest of Europe and at 11 other locations in Finland. During
one day and night, visitors from all walks of life can take part in different
kinds of workshops, panel discussions and exhibitions as researchers open
their doors to the public. The A-WEAR researchers will participate in the
events yearly, either by talking about their own work or by visiting the
relevant scientists. All such events have been very popular in Finland in the
past. They are organized in collaboration between the universities and
research organisations and are free of charge.
The webpages of the partner organisations will be linked to the project
webpage. The partner organisations are encouraged to refer the A-WEAR action
in their professional or commercial occasions always when possible.
For cryptography-related papers, the free IACR ePrint repository (
_https://www.iacr.org/eprint/_ ) will be used.
**3.2.3. Making data interoperable**
In order to make the data interoperable in A-WEAR, standard open formats will
be used for storage. Additional metadata will be described and clarified.
Proprietary software and language-dependent formats will be avoided where
possible (e.g., during industrial secondments).
**3.2.4. Making data re-usable (through clarifying licenses)**
In order to make the data re-usable in A-WEAR, the following procedures will
be followed:
1. The datasets will typically be shared under the Creative Commons Attribution-NonCommercial 4.0 International License. Commercial licenses will be available upon request. Data is expected to be available as long as the service used for sharing the data is operational. Data published in scientific journals in the form of a journal article will be in open access.
2. We will encourage the use of the collected data in open challenges at dedicated conferences (e.g. the IPIN annual open challenge on indoor localization).
3. We will include information on how the data were created and will describe experiment details.
A-WEAR team recognizes the importance of software licensing from the outset of
the project, but given the uncertainty we have now of the potential value of
such tools in the future, the exact licenses to be used in case-by-case
situation will be refined through the project. This is because some of the
created data may only be demonstrators or proofs of concept. Other types of
created data, on the contrary, may be valuable tools for launching marketable
ideas via start-ups. Once we have a clearer idea of what type of data
will be produced by the ESRs during the project (e.g., only open sources
tools, mix of proprietary and software tools, etc.), the DMP will be updated
with relevant information. At this point, we are exploring the spectrum of
licenses as a preliminary step based on the information found at
_https://choosealicense.com/_ .
Regarding the WP3 studies (eHealth domain), we note that the
interoperability of medical data is a key concept for electronic health
records, measuring the communication and cooperation capacity between
different healthcare entities that allows the exchange of information through
electronic health records or other medical information systems. The
interoperability of medical data is realized by means of the HL7 standard,
which assures automated conversion of information into structured data.
Interoperability of medical data also means interoperability with medical
devices that capture (generate) medical information from medical sensors and
different devices like Holter, ECG, MRI, ECOGRAF, etc., interoperability with
emergency support systems and with other systems that can quickly and
efficiently deliver the medicines needed for patients unable to move, and
interoperability of medical information through the creation of medical
social media portals for physicians, where they can access studies, updated
medical guides and patient records.
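To make the automated conversion into structured data tangible, the sketch below parses a hand-written HL7 v2-style message with plain Python: segments are carriage-return separated and fields pipe-delimited. The message content is invented for the example; production systems would use a full HL7 library.

```python
# Illustrative parse of an invented HL7 v2-style message into structured
# fields; real systems would use a dedicated HL7 library and full validation.
msg = ("MSH|^~\\&|MONITOR|WARD1|EHR|HOSP|202005011200||ORU^R01|42|P|2.5\r"
       "PID|1||P12345||DOE^JANE\r"
       "OBX|1|NM|8867-4^Heart rate^LN||72|bpm")

# Segments are separated by carriage returns; fields by pipes.
segments = {line.split("|")[0]: line.split("|") for line in msg.split("\r")}
patient_id = segments["PID"][3]          # "P12345"
observation = segments["OBX"][5]         # "72"
unit = segments["OBX"][6]                # "bpm"
```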
**3.2.5. Data – related procedures at A-WEAR units**
**3.2.5.1. TAU**
The public datasets will be archived and shared through the Research data
storage IDA
( _https://openscience.fi/ida_ ) and Research data finder ETSIN (
_https://openscience.fi/etsin_ ) services provided by the Finnish IT Centre
for Science (CSC) and endorsed by the Finnish Ministry of Education and
Culture. In addition, the data will be promoted on the project web site,
through relevant publications (with related DOIs and keywords) and through
presentation in scientific and public events.
All the data collected during the subjective experiments will be anonymized
and aggregated. The created test databases will be made available through web
sites where applicable.
**3.2.5.2. UJI**
The public datasets will be archived and published in Zenodo and on the
IndoorLoc platform
( _http://indoorloc.uji.es_ ) provided by Universitat Jaume I. In addition,
the data will be promoted on the project web site, through relevant
publications (with related DOIs and keywords) and through presentation in
scientific and public events.
All the data collected during the subjective experiments will be anonymized
and aggregated. The created test databases will be made available through web
sites where applicable.
**3.2.5.3. BUT**
We will publish the data sets in Open data of the Czech Republic,
_https://opendata.gov.cz/_ . In addition, the project outputs (generic data,
software, etc.) will be promoted on the project web site, through relevant
publications (with related DOIs and keywords) and through presentation in
scientific and public events.
All the data collected during the subjective experiments will be anonymized
and aggregated. The created test databases will be made available through web
sites where applicable.
**3.2.5.4. UPB**
UPB will have a local site linked to _http://cs.pub.ro_ on which the
collected data will be available. The data will be also promoted on the
project web site, through relevant publications. Regarding ESR8 research, the
collected data will be non-personal data (e.g. radio measurements).
**3.2.5.5. URC**
After the execution of any test, the resulting collected data are analyzed for
modelling, model verification or contribution purposes. The data collected
during any subjective experiment will be anonymized and aggregated.
Data is then kept on personal computers with password security, with access
granted only to the people involved in the study or co-authors of the relevant
article for scientific purposes. The created test databases will be made
available through web sites where applicable. The collected data will be
disseminated through relevant publications (with related DOIs and keywords)
and through presentation in scientific and public events. Publications will be
available on the project web site and through other channels.
Should an experiment involve external subjects supplying data (such as
position, for example), although these data are anonymized, subjects are also
informed that they have the right to have their collected results destroyed if
they wish, by supplying the nickname they chose or the number they were
assigned.
**3.3. Allocation of resources**
The data will be prepared during regular working hours of the ESR. The data
will be stored on OneDrive provided by TAU. No additional cost to the project
is expected for the OneDrive repository. The costs for publications in Open
Access journals can be between 1000 and 4000 EUR per paper and this will be
covered by A-WEAR project allocated resources.
**3.4. Data security**
All data will be stored on secured password protected computers. Data will not
be stored on unencrypted flash drives. For possible vulnerable data (such as
the data regarding anonymous user traces and operator collected data) to be
commonly used by several Consortium partners, a password-protected space on
the project web server will be created and data will be stored in encrypted
form.
The sharing platform for the consortium data is the Microsoft OneDrive. As
parts of the university infrastructure at TAU, the vital components of hosting
and sharing the data are expected to stay in place in the long term, thus
ensuring continued access to the collected data in a secure way.
Raw data collected from volunteers will only be retained for the lifetime of
the research project and stored on OneDrive and password-protected computers
according to the participants’ information security policy. It will not be
stored beyond that period unless explicit permission is requested and given by
the research participants for an extension period (which may necessitate an
appropriate consent form amendment).
All research participants will be informed of the nature and limits of
confidentiality in accordance with the data protection and privacy legislation
in the jurisdiction where the research is to be carried out. The web surveys
to be organized within A-WEAR will be done with volunteers and fully
consenting to fill in the web surveys and the data will be collected
anonymously, and without storing the IP of the respondent (e.g., Webropol
survey tool to be used in Dissemination activities has such an option). No
data will be collected from children or vulnerable adults or any other person
deemed unable to express his/her full and free consent.
**3.5. Ethical aspects related to data management**
Detailed description of the ethical aspects is given in deliverable D1.3
Collection of ethical clearance procedure and forms available at each
beneficiary.
**3.5.1. Wearables data of individuals**
Making data anonymous and implementing privacy-enhancing mechanisms will
require access to non-anonymized or weakly anonymized data at some point. In
A-WEAR, all data sets used will come from informed and volunteering
individuals and from approved databases that will only be used for training
purposes. Anonymization techniques will be implemented as soon as feasible,
and databases will not be shared between institutions.
Ethical assessment is a key component of the adoption of new medical
technologies. Ethical problems resulting from the inherent risks of
Internet-enabled devices can appear due to the sensitivity of health-related
data and their impact on the delivery of healthcare. These issues can also
come from the fact that devices range from single-sensor wearable devices to
complex spatial networks capable of measuring health-related behaviors for the
management of health and well-being. When talking about ethical issues
concerning eHealth wearables, we refer to the ethics of devices and data.
eHealth wearables are generally carried by the user at home, in residential
care, in the workplace or in public spaces. In each case, a door into private
life is created, enabling the collection and analysis of data about the user’s
health and behaviors. The lives of users can be digitized, recorded, and
analyzed, creating opportunities for data sharing, processing and mining. That
is why privacy should be respected, and ethical consent forms will be signed
with all the individuals involved in any experiments and medical reports.
**3.5.2. Assessing privacy intrusion**
In order to assess the acceptability and privacy intrusion of specific
privacy-enhancing solutions, partners may choose to conduct surveys. For web
surveys, the information will be collected anonymously and with full consent
of the participants, on a volunteer basis. For in-person surveys, if any,
information sheets and consent forms will be provided, storage of personal
data will be avoided whenever possible and data with potential for re-
identification will be safely deleted as soon as it is feasible.
**3.5.3. Webropol surveys**
If qualitative methodologies are used, participants will be duly informed in
writing of the nature of the research and their involvement, their rights
during and after their participation and the final goal of the study. Data
will be collected and stored anonymously, as the Webropol survey tool has an
option for fully anonymous data collection.
**3.6. Other aspects**
**3.6.1. Issue register**
The “Issue Register” is a log of any issue which arises during the course of
the project and will be maintained by the Project Coordinator in a separate
folder on OneDrive. The Issue Register will collate issues as they arise;
issues will then be analyzed and escalated or dealt with accordingly.
An issue could be anything which is of concern to an ESR, beneficiary,
partner, supervisor, etc.; this could be a deviation from the project plan,
the identification of a new risk, or simply a concern.
For example, an issue such as “Partner X is not responding to emails” may be a
precursor to larger problems which could impact A-WEAR progress; therefore, a
raised issue may trigger a new risk being identified and added to the risk
register.
This register will also provide an additional tool for keeping track of the
risks and a means of responding immediately with mitigating activities.
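Purely as an illustration of the kind of record such a register might hold (the register itself is a document in a OneDrive folder, and every field name below is invented), a minimal sketch:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class IssueEntry:
    """One row of the Issue Register (all field names are illustrative)."""
    issue_id: int
    raised_by: str                     # ESR, beneficiary, partner, supervisor, ...
    description: str
    raised_on: date
    escalated: bool = False
    linked_risk: Optional[str] = None  # set when the issue triggers a new risk
    resolution: str = ""

# An issue is logged, then escalated and linked to the risk register.
issue = IssueEntry(
    issue_id=1,
    raised_by="WP2 supervisor",
    description="Partner X is not responding to emails",
    raised_on=date(2020, 3, 2),
)
issue.escalated = True
issue.linked_risk = "R8"  # e.g. flagged as a precursor to a recruitment-delay risk
```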
**3.6.2. Lessons Learnt Log**
A lessons log will also be maintained by the Project Coordinator to record
lessons generated from the “Issue Register” and any other lessons from the
project. The lessons log will categorize the lessons by their significance to
different parties and by the stage in future projects at which the log should
be reviewed. The A-WEAR project lessons log will be available to all
consortium members to ensure that lessons learnt in this project may be
applied to future projects.
The Lessons Learnt Log will also provide information relevant for identifying
the significant results
* linked to dissemination, exploitation and impact potential of the outcome overall and the management and usage of data in particular, and
* with significant immediate or potential impact in science or industry.
**4\. Quality Assurance Plan**
Internal quality assurance (QA) of all deliverables will be carried out prior
to submission to the Commission. First of all, each deliverable is assigned up
to two internal reviewers. To that purpose, a draft copy will be delivered to
the internal reviewers one month before its due date for comments on technical
as well as formal quality.
The reviewer has specific responsibility for providing feedback to the lead
authors on more detailed quality assurance in terms of presentation, quality
of writing, consistency, clarity, etc. Reviewing will be done by using the
reviewing form, provided in Annex 1, in order to ensure consistency in the
reviewing process.
At the same time, draft copies of deliverables will be circulated
electronically to all partners for additional comments. Review forms and
comments are to be sent in the agreed way within a maximum of two weeks to
those responsible for deliverables, which gives the latter one week correction
time before the final version is delivered to the PC team and submitted.
External quality assurance is to be gained from the various contacts within
the advisory board, during dissemination events, and through the various other
contacts that will be established, including any ongoing and future academic
and industrial collaborations. The appointed external advisory board member
will also be granted one week of reviewing time, with one week of revision
time for the deliverable authors. In addition, as ESRs will publish in
peer-reviewed open-access publication channels, the peer-reviewed papers
provide a reliable quality check.
Feedback from industry representatives during meetings, public/professional
presentations, etc., as well as the possible implementation of the outcomes as
part of the product development of the partner organisations, will also attest
to the good quality of the outcomes.
**4.1. Project Coordinator Involvement**
The project coordinator (PC) team is officially responsible for sending all
deliverables to the European Commission for the evaluation process. It
cooperates with the deliverable leaders and the Training and Project Manager
on all relevant matters to ensure the quality of the project’s deliverables.
The PC receives (in cc) the deliverables for peer reviewing from the
respective responsible party, followed by the results of peer reviewing from
each assigned reviewer. Finally, the PC receives confirmation of the
satisfactory implementation of the recommendations.
The PC team should send reminders and alerts in due time to the responsible
parties in order to remind them of the deadlines for deliverable submission
and of the procedure to be followed within the quality assurance phase. The PC
team receives the deliverables for peer reviewing from the respective
responsible partner and organizes the quality assurance procedure. If
necessary, the PC team is also in charge of
1. Compiling the related peer review reports with recommendations or, if necessary, sending the deliverable to another partner who will be in charge of peer reviewing.
2. Delivering the results of peer reviewing to the deliverable author and other beneficiaries.
3. Verifying the satisfactory implementation of the recommendations of the peer review report, in cooperation with the responsible partner.
PC team is also responsible for keeping track of the reviewer’s assignments
and storage of the related data.
**4.2. Review responsibilities and process**
A deliverable is sent by the responsible person 15 days prior to the deadline
for any comments and revision.
The compiled peer review report (if there are any major comments) is sent to
the relevant author(s) within 7 calendar days. If there are no major comments,
the minor comments are provided via teleconference or email to the responsible
person.
The Author(s) of the deliverable, in cooperation with the other partners (if
applicable), carries out the required improvements with the highest priority,
and sends it back to the project coordinator team within a further 5 calendar
days.
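Purely as an illustration, the resulting review timeline can be computed with simple date arithmetic; the due date below is an invented example, not a project deadline:

```python
from datetime import date, timedelta

deadline = date(2021, 6, 30)  # deliverable due date (invented example)

sent_to_reviewers = deadline - timedelta(days=15)            # sent 15 days prior to the deadline
review_report_due = sent_to_reviewers + timedelta(days=7)    # compiled peer review report
revised_version_due = review_report_due + timedelta(days=5)  # author corrections sent back

print(sent_to_reviewers, review_report_due, revised_version_due)
# 2021-06-15 2021-06-22 2021-06-27
```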
**4.3. Simplified quality assurance procedure**
In order to achieve QA, each reviewer will provide their comments either in
free form, with tracked changes on the deliverable, or by filling in the form
given in Annex 1.
The reviewers will keep in mind the following questions when reviewing a
deliverable:
1. are the objectives/goals of the deliverable clearly presented?
2. does the work include references to relevant material and literature? (if applicable)
3. is there sufficient detail in all areas?
4. is the information technically sound?
5. are the findings clear and well argued?
6. does the deliverable provide inputs as expected for the subsequent work?
7. is the information clearly presented?
8. is the work cohesive and consistent?
9. is the writing style appropriate?
10. is the graphical content appropriate?
**4.4. Risk management**
TAU, as coordinator, is in charge of the risk management procedure. Each
partner has the responsibility to report immediately to their respective WP
leader and to the PC any risky situation that may arise and may affect the
successful completion of A-WEAR objectives. In case of problems or delays, the
AB will be consulted, and it may set up task forces in order to take the
necessary actions. In case there is no resolution, the PC together with the AB
will establish mitigation plans to reduce the impact of the risk occurring.
Table 4-1 shows the implementation risk analysis.
# Table 4-1 Implementation risks and mitigation procedures
<table>
<tr>
<th>
**Type**
</th>
<th>
**Risk No.**
</th>
<th>
**Description of Risk**
</th>
<th>
**Probability / Impact**
</th>
<th>
**WP No**
</th>
<th>
**Proposed mitigation measures**
</th> </tr>
<tr>
<td>
**Technical**
</td>
<td>
R1
</td>
<td>
Input measurement data is unavailable in research literature at the time when
mathematical modelling work has to start
</td>
<td>
High/Low
</td>
<td>
2
</td>
<td>
Conduct own measurements in BUT-TAU LTE test network similar to how it was
done in _http://wislab.cz/our-work/lte-assisted-wifidirect_
</td> </tr>
<tr>
<td>
R2
</td>
<td>
Scarce availability of off-the-shelf devices for implementing and testing real
use cases
</td>
<td>
Low/
Medium
</td>
<td>
2
</td>
<td>
A-WEAR team has a wide expertise in the implementation of simulators and
testbeds in order to overcome the considered issues
</td> </tr>
<tr>
<td>
R3
</td>
<td>
Insufficient crowdsensed data in WP3 studies
</td>
<td>
Low/
Medium
</td>
<td>
3
</td>
<td>
Collecting data through all A-WEAR units as much as possible; using analytical
models & existing open-source data to supplement the missing measurements
</td> </tr>
<tr>
<td>
R4
</td>
<td>
Standardization efforts in wearables is highly dynamic; new emerging standards
may rely on privacy assumptions we have not considered
</td>
<td>
High/ Low
</td>
<td>
4
</td>
<td>
Actively following the standardization efforts in wearables, IoT and future
wireless communications in order to adjust the hypotheses and project work
accordingly.
</td> </tr>
<tr>
<td>
R5
</td>
<td>
Noisy mmWave and industrial data or inappropriate data format, not suitable
for machine learning analysis
</td>
<td>
Low/
Medium
</td>
<td>
5
</td>
<td>
Collecting data through all A-WEAR units in both supervised and unsupervised
modes from very beginning of the project; supplementing unavailable data with
statistical models; discussing with industrial partners for finding out
suitable/standardized formats
</td> </tr>
<tr>
<td>
R6
</td>
<td>
Some of the envisioned tasks may require collaboration with experts in other
fields (user experience, control theory, SW engineering, etc.)
</td>
<td>
Medium/ Medium
</td>
<td>
5
</td>
<td>
Utilize the rich contact network of the consortium units to seek prompt advice
in complex matters related to other fields of knowledge; proactive role of AB
in providing timely feedback on tasks planning and completion
</td> </tr>
<tr>
<td>
**Administrative**
</td>
<td>
R7
</td>
<td>
Integration problems in building the SW and HW platforms
</td>
<td>
Low/
Medium
</td>
<td>
2-5
</td>
<td>
A-WEAR team has a wide expertise in SW, HW and SoC and active discussions and
feedback from AB will help to overcome the problems.
</td> </tr>
<tr>
<td>
R8
</td>
<td>
Delays in recruitment process
</td>
<td>
Medium/ Low
</td>
<td>
1-7
</td>
<td>
The positions will be actively advertised through various channels, in
addition to the joint network links of all partners
</td> </tr>
<tr>
<td>
R9
</td>
<td>
More than 36 months needed to complete the double/joint PhD degree
</td>
<td>
High/ Medium
</td>
<td>
1-7
</td>
<td>
Each beneficiary commits to ensure all needed resources in terms of funding &
supervision to allow the ESRs to finish their joint/double degree.
</td> </tr>
<tr>
<td>
R10
</td>
<td>
Potential problems in leading a consortium of 17 partners
</td>
<td>
Low/ High
</td>
<td>
</td>
<td>
The PC has worked successfully before (projects, publications, …) with 47% of
the 17 A-WEAR units; the PC has experience in leading large national consortia
and receives strong support from TAU Research Services (which has extensive
experience with ITNs and EU projects) to address promptly any issues that
might appear
</td> </tr>
<tr>
<td>
R12
</td>
<td>
Scientific misconduct
</td>
<td>
Low/
Medium
</td>
<td>
1-5
</td>
<td>
Termination of contract and recruitment of replacement
</td> </tr>
<tr>
<td>
R13
</td>
<td>
An industrial partner going bankrupt
</td>
<td>
Low/ Low
</td>
<td>
1-5
</td>
<td>
Replacing the industrial secondment unit with new industrial partners,
suitable to the addressed objectives.
</td> </tr>
<tr>
<td>
R14
</td>
<td>
Topic divergences from the scheduled A-WEAR network
events in table 1.2b
</td>
<td>
Medium/ Low
</td>
<td>
7
</td>
<td>
If some of the planned lecturers are not available, we will invite new
lecturers to cover the core topics in a comprehensive, non-overlapping manner
</td> </tr> </table>
0123_SUITCEYES_780814.md
# Executive Summary
This Data Management Plan (DMP) defines effective _governance_ and
_management_ of research data generated and/or used within the SUITCEYES
project. It addresses issues of data generation, ownership, storage, access,
exchange, use, openness, protection, preservation and destruction.
This document (and its subsequent updated versions) will act as a guideline
and provide an overview of research-data-related procedures within the
project. It aims to facilitate collaboration and help avoid unnecessary
duplication of work and data creation. It further defines procedures and
routines for easy and effective information sharing during the project and
beyond. It is also a useful tool for ensuring continuity and bridging gaps
even when new members join the project.
After the introductory parts, Chapter 3 of the document presents an adaptation
of five (5) Data Governance Domains of _Data principles_ , _Data quality_ ,
_Metadata_ , _Data access_ , and _Data lifecycle_ which aim at defining the
decision-making structures that govern data-related issues within the project.
Following these, Chapter 4 provides a further description of routines for data
management. The approach to reaching informed consent and other agreements
with users taking part in requirements, user studies, video-recordings and
other R&D activities has also been included. The document is concluded with a
short summary and two appendices.
# Introduction and Rationale
This deliverable – Data Management Plan (D8.14) – incorporates both Data
Governance and Data Management and accordingly defines the principals,
procedures, and routines that are put in place for the management of research
data within the SUITCEYES project.
As such, this document and its subsequent versions act as a guide to help form
an up-to-date overview of the project-related data and related procedures.
This DMP is produced with the aim of facilitating information sharing and
collaboration, while avoiding unnecessary duplication of work. It also defines
the basis for various choices and is meant to help the members make sound and
appropriate decisions when needed. This DMP is also meant to create continuity
even in potential cases of membership change.
This document presents the data governance approach adopted and describes the
research data that will be collected, generated and/or used. It outlines the
related data types and the ways in which the data will be handled both during
and after the project. Furthermore, it will describe which data will be made
available openly, which data will be kept protected, and the reasons why.
This DMP will remain a living document and will be updated when needed as the
project progresses. This deliverable is licensed under the Creative Commons
License CC BY-NC-SA (AttributionNonCommercial-ShareAlike).
## SUITCEYES and Open Data
SUITCEYES is a three-year long (2018-2020) Horizon 2020 RIA project with a
focus on facilitating communication in cases of deafblindness through a smart
haptic interface. The project will address three challenges of (a) improved
perception of the physical surroundings, (b) improved exchanges of semantic
contents, and (c) enjoyable learning through gamification.
The project involves many areas of research including disability studies, user
studies, psychophysics, sensor technologies, face and object recognition,
semantics and knowledge management, social media studies, gamification and
affective computing, and smart textiles. As such, much data will be accessed,
used and/or generated during the project.
Open data typically refers to the free online distribution of research data
and results for access and reuse by third parties, benefiting both future
research and society. With the advances of the “open” movement, a growing
demand for open and interoperable research data has emerged.
The view held in SUITCEYES is that all research, especially research funded by
public money, should benefit the whole of society, be instrumental for
progress, and act as a stepping stone for further research. We will therefore
make the research results available through different channels and strive to
provide open access to the project data, as far as possible. However, not all
data generated and used within SUITCEYES are suitable or appropriate for
sharing and reuse, and hence, SUITCEYES has chosen not to participate in the
Open Research Data Pilot. The decision to opt out has been based on two
factors, the potential for exploitation of results by some partners, but more
importantly, the vulnerability of some of the project’s study participants and
the sensitivity of the data that will be generated in the project. SUITCEYES
involves user-studies (including interviews and observations) of sensitive
nature. The participants, due to the small population and specific
circumstances of each participant, are potentially easily recognizable.
Although we anonymize user-study-generated data soon after collection and
before sharing among the project members, this data is still not suitable for
being made openly available for wider access and use. This decision is
supported by the results of multiple studies 1 2 3 that have shown the ease of
de-anonymisation even in areas where stringent efforts have been made to
remove identifying data.
## Data Governance and Data Management
In this document we have broadened the scope to also include data governance,
based on the structural features outlined by Khatri and Brown (2010) 4, who in
their state-of-the-art contribution defined five data governance domains. Data
governance draws a distinction between the governance of data and the
management of data: management concerns making and implementing decisions,
while governance is concerned with creating the structure within which those
decisions are made. As they exemplify, “governance includes establishing who
in the organization holds decision rights for determining standards for data
quality. Management involves determining the actual metrics employed for data
quality” (Khatri & Brown, 2010: 148). By extending the scope of this document
beyond data management issues to also include data governance, a more
comprehensive approach is adopted.
# Data Governance
Khatri and Brown (2010) have identified five (5) decision domains for data
governance, comprising (i) _Data principles_, (ii) _Data quality_, (iii)
_Metadata_, (iv) _Data access_ and (v) _Data life cycle_. Khatri and Brown
(2010) propose that a decision-making structure needs to be created for each
of these domains; the full table with more detail is provided in Appendix 1.
In the following subsections, we outline the plan for SUITCEYES data
governance according to these five domains.
## Data principles
Data principles concern the overarching ideas about the kind of decisions that
are to be made relative to the four other domains. These principles also
introduce boundary requirements for the use of data as well as standards for
data quality. The role of this domain is to clarify the role of data as an
asset. For the current DMP the following data principles have been established
as presented in Table 1.
Table 1: Adaptation of Khatri and Brown (2010)’s Domain of Data Principles in
SUITCEYES
<table>
<tr>
<th>
**Domain Decisions**
</th>
<th>
**Potential Roles or Locus of Accountability**
</th> </tr>
<tr>
<td>
* In SUITCEYES data is used for multiple purposes, including forming an understanding of the user needs, preferences and aspirations; forming an informed overview of the related policies; experimentations and conduct of research towards project goals and production of haptic, intelligent, personalized interface (HIPI).
* The various uses of data are communicated continuously at regular meetings which are held at various levels and in different formats. Furthermore, written documentations and an information sharing tool are further mechanisms for communicating uses of the data.
* Datasets are seen as assets and are therefore valuable and should be managed accordingly.
* Ownership implies responsibility and accountability for keeping data assets securely stored.
* As stipulated in related agreements, some of the data generated in SUITCEYES is of a sensitive nature, the use of which is regulated by the project’s internal operational and ethical guidelines.
* Such data assets should be handled through principles of privacy by design and data minimization.
* All practices should be compliant to General Data Protection Regulation (GDPR).
* The current DMP should be used to guide all handling of data assets. It should also be revised regularly to accommodate the development of new insights, challenges and problems.
</td>
<td>
* The Project Management Board (PMB) is the ultimate decision-making body within
SUITCEYES, and it is responsible to oversee the existence of appropriate data
governance structures within the project.
* The PMB is responsible for decisions made on data management issues.
* The PMB should refer difficult issues of data management to the Ethical Advisory Board (EAB) for advice.
* The project DMP (the text at hand and its subsequent updates) will also be reviewed by the project’s EAB.
* The default structure for ownership is that the partner creating the data also owns it, as defined in Grant Agreement (GA) article 26; some data may be subject to JOINT ownership governed by GA Article 26.2, with further stipulations in PCA section 8.
* Data assets and their ownerships are clearly defined in a related tool shared with all project members.
* Securely stored refers both to protection against breach and to instances of force majeure, e.g. fire, data crash, etc.
</td> </tr>
</table>
## Data quality
Data quality is connected to accurate, complete and trustworthy data being
available for various research tasks in a timely fashion. Lack of data quality
is a fundamental problem for most data intensive work and one of the core
issues that can be attended to through the DMP. There are multiple dimensions
involved in data quality which will be presented with the help of the
following table (Table 2). The role of data quality domain is to establish
requirements of intended uses of data.
Table 2: Adaptation of Khatri and Brown (2010)’s Domain of Data Quality in
SUITCEYES
<table>
<tr>
<th>
**Domain Decisions**
</th>
<th>
**Potential Roles or Locus of Accountability**
</th> </tr>
<tr>
<td>
* **Accuracy** refers to the correlation between the recorded value, the actual value and the kind of value needed for the research task. A number of crucial questions emerge regarding the user study data. For example: Will the interview transcripts correctly reflect the responses of the participants? Do the local conventions vary across the partner countries? How will the principles of data minimization and privacy by design affect the data sharing and interpretation as cross-referenced in the different countries of the studies? These accuracy concerns are addressed through regular meetings and joint analyses among the researchers participating in user studies and collaboration with the User-Data Working Group. For the technical aspects of the project, experimental data, ontologies and so on, there will be other questions asked, and the accuracy of that data will be verified based on domain-specific scientific measures and in collaboration with an Analytical-Data Working Group.
* **Timeliness** refers to up-to-date values being available at the right time. Timeliness is typically a challenge in complex projects with many dependencies and potential bottlenecks. It is crucial for the project members across the board to be aware of the relationships between the different tasks and deadlines to ensure the smooth and timely delivery of results needed for the next phase of the project, in their own and other work packages.
* **Completeness** defines the need for the data to be as detailed, deep and broad as necessary for the research tasks. Related to the user studies, the ambitions for adherence to GDPR legislation and the principles of data minimization and privacy by design will be carefully balanced with the need to capture the data that is necessary for the design of the HIPI and an informed understanding of user needs and preferences. For Analytical data, while collection of broad data may not be bound by the same concerns, a challenge in cases of machine learning may be the lack of enough data. Such a challenge is carefully considered and potential solutions are investigated.
* **Credibility** refers to the need that the sources of data assets must be trustworthy. In the user studies utmost care is taken to ensure that the most relevant participants are recruited and best user-study practices are put in place to ensure high levels of trustworthiness in the results. Regarding the technical and analytical data, in general Best Available Technology (BAT) and evaluation methods will be utilized.
* A set of measures defined in task and work package meetings in collaboration with UDWG and ADWG will be used for **evaluation** of data quality and associated data collection procedures.
</td>
<td>
* The PMB will develop and assign responsibilities to a _User-Data_
_Working Group_ (UDWG) to oversee **accuracy** , **timeliness** ,
**completeness** and **credibility** of user data.
* The PMB will develop and assign responsibilities to an _Analytical-Data Working Group_ (ADWG) to oversee accuracy, timeliness, completeness and credibility of analytical and technical data.
* There are a number of guidelines and tools devised and routines put in place in the project in order to ensure the **timely** conduct of all project tasks. The project PCA defines the members’ responsibility towards one another, the need for timely deliverance and measures to ensure compliance.
* The project has established a large network of contacts to allow collection of rich set of user data which will promote data completeness and credibility.
* Similarly, continued environmental scanning keeps the project members informed of emerging data sources and recent research to promote access to the most relevant and appropriate sets of data for the conduct of research tasks within SUITCEYES.
* Measures for data quality will be discussed, set, and documented in regular task and work package related meetings. These measures will be promoted and followed up by the two data working groups UDWG and ADWG.
</td> </tr>
</table>
## Metadata
Metadata includes descriptions of data assets. Proper use of metadata
facilitates findability and, in the long run, the quality of research. The
role of the metadata domain is to establish the semantics or “content” of data
so that it is interpretable by the users. Khatri and Brown (2010) make a
distinction between three types of metadata. These will be reviewed below
according to the way that they are seen as relevant for the project.
Table 3: Adaptation of Khatri and Brown (2010)’s Domain of Metadata in
SUITCEYES
<table>
<tr>
<th>
**Domain Decisions**
</th>
<th>
**Potential Roles or Locus of Accountability**
</th> </tr>
<tr>
<td>
* For describing and documenting different datasets a tool in the form of a project-wide spreadsheet has been devised with multiple columns that each describe an attribute of the dataset at hand. This tool generally includes the following sub-categories of information.
* **Content metadata** describes the contents of different datasets, including whether data has been generated through user studies, policy studies, or technical experimental research streams. The list of related attributes includes (but is not limited to) dataset identifier, data description, source and mode of creation. It also describes whether the data can be shared openly or is of sensitive nature and special care is required.
* **Storage metadata** involves information about means of data storage. This involves the choice of local and cloud-based as well as level of cryptology necessary for different kinds of data. For each dataset and based on the level of sensitivity (whether it can be openly shared or not), ownership and data type, the appropriate means of storage is defined.
* **User metadata** relates to various annotations that different project members may associate with various data assets. This can involve notations on usage, findability, preferences and user history.
* **General metadata** refers to all the other attributes and information recorded about each dataset, these include area of use, ownership, date of creation, history of change, the standards used, general technical format of data, compatibility level with different analysis tools, the procedure for metadata and data update, and more.
</td>
<td>
* The UDWG and the ADWG in collaboration with different instances in the project will develop a plan for the types of data used and create and provide guidelines related to appropriate storage procedure for each set of data.
* UDWG and ADWG oversee that the metadata tool is kept updated (as potentially new types and sets of data may emerge) and includes sufficient details to provide appropriate information towards effective further data generation, use, and potential reuse, storage and long-term preservation.
* The members collaborate closely with the two data work groups to facilitate their task of overseeing the upkeep of metadata information.
</td> </tr> </table>
## Data access
Building upon compliance with GDPR as well as principles of data minimization
and privacy by design, there needs to be a clear plan for data access in
place. The role of this domain is to specify access requirement of data. The
UDWG and the ADWG will be tasked with development of a plan for data rights to
various data assets. The PMB will monitor development of this plan. The plan
will also be evaluated by the EAB as elaborated in the following table.
Table 4: Adaptation of Khatri and Brown (2010)’s Domain of Data access in
SUITCEYES
<table>
<tr>
<th>
**Domain Decisions**
</th>
<th>
**Potential Roles or Locus of Accountability**
</th> </tr>
<tr>
<td>
* **Risk assessment** related to data value and sensitivity will be conducted on a regular basis.
* **Data access** related to sensitive material within the project is based on a clearly defined need for purposes of research as monitored and decided upon by the PMB.
* Mechanisms for sharing such data should be based on cryptographic technology.
* Other data assets can be shared publicly ( **Open Data** ) but decisions on such initiatives will be taken at a later stage of the project when all of the research needs of the project are clearly understood.
* Appropriate **naming conventions** as well as use of **standards** for data sets should be adopted to ensure interoperability within the project.
</td>
<td>
* Sharing of sensitive data will be monitored and decided upon by the PMB.
* UDWG and ADWG will be tasked to oversee the procedures for information security and alert of deviations and potential risks.
* Continued dialogue with partner organisation IT support centres will take place to ensure being kept updated on security and preservation issues.
</td> </tr> </table>
## Data lifecycle
All data moves through various lifecycle stages and this DMP is designed with
an awareness of this. The role of this domain is to determine the definition,
production, retention and retirement of data. Informed decisions related to
each stage of the data lifecycle have gained increased importance in the light
of compliance with the GDPR principles of data minimization and privacy by
design. Some of the measures and related decisions are outlined in Table 5.
Table 5: Adaptation of Khatri and Brown (2010)’s Domain of data lifecycle in
SUITCEYES
<table>
<tr>
<th>
**Domain Decisions**
</th>
<th>
**Potential Roles or Locus of Accountability**
</th> </tr>
<tr>
<td>
* **Data inventory** will be conducted on multiple occasions during the project.
* **Data lifecycle plan** will be defined as part of the metadata tool mentioned above. That is, at the time of data creation, not only will the data be defined, but plans will also be put in place for the data lifecycle, including long-term retention and even future destruction if appropriate.
* Towards **compliance with legislation** and other regulations, some data and records are required to be kept for given periods of time. An overview of such regulation is formed through access to related guidelines and in collaboration with partner organisations’ archival departments.
* At the time of project conclusion, the sensitive data will be securely archived based on the timelines specified in the metadata tool or made publicly available through different channels.
</td>
<td>
* During the project, the UDWG and the ADWG will be tasked with recommendations for data life cycle management.
* The UDWG and the ADWG will also supply recommendations on data that might at some stage be made available as open data.
* The UDWG and the ADWG might also make recommendations of changing physical storage and alternating practices of metadata during various stages of data lifecycles.
* Formal decisions related to the recommendations made by the UDWG and the ADWG will be made by the PMB.
</td> </tr> </table>
# Data Management
Based on the data governance structures described above, the following
sections describe the project-related datasets and actual measures and steps
employed to ensure effective production, access, use, reuse, storage and
preservation of data within SUITCEYES making it FAIR – findable, accessible,
interoperable and re-usable.
## Data collected/generated within SUITCEYES
There are a number of different data types and datasets either collected or
generated within SUITCEYES. Some of this data is defined as sensitive and not
suitable for sharing or making openly available to third parties. Other data
will be deemed intellectual assets of partners intended for future
exploitation, or are subject to other copyright issues, and will not be shared
openly, at least not before adequate measures have been taken. A third group
of data will be made available under open data principles for use and reuse by
third parties.
The main datasets in SUITCEYES contain data resulting from user-studies,
policy studies, bibliographic searches, collections of semantic vocabularies,
algorithms, technical experiments in the project. For administrative purposes
we have grouped these data under two broad categories of User Data and
Analytical Data.
All sensitive data that include personal information or result from interviews
and observations are placed in the first group. All the other datasets
(although some are not related to experiments) fall into the second category.
Table 6 provides a summary of these data and their main categories. The
separate group of Social Data is also included in the table; it covers data
about the potential interest of different groups, social media data and
disseminated information about the project.
Table 6: Main categories of data and the methods of its collection/generation
<table>
<tr>
<th>
Category
</th>
<th>
Type of Study
</th>
<th>
Methods of collection/ generation
</th>
<th>
Data
</th> </tr>
<tr>
<td>
User-Data
</td>
<td>
User studies
</td>
<td>
Interviews, observations, reaction tracking, audio visual recording,
qualitative data analysis tools
</td>
<td>
Transcripts, psychophysical data and informed consent forms
</td> </tr>
<tr>
<td>
User-Data
</td>
<td>
Policy studies
</td>
<td>
Interviews with decision makers
</td>
<td>
Transcripts
</td> </tr>
<tr>
<td>
Analytical-Data
</td>
<td>
Policy studies
</td>
<td>
Policy documents collection
</td>
<td>
Policy documents
</td> </tr>
<tr>
<td>
Analytical-Data
</td>
<td>
Literature studies
</td>
<td>
Searches in bibliographic databases (e.g. Web of Science).
</td>
<td>
Bibliographic metadata, collections of articles and other publications
</td> </tr>
<tr>
<td>
Analytical-Data
</td>
<td>
Semantics
</td>
<td>
Searches for and collection of a set of sign language vocabularies
</td>
<td>
Sets of sign language vocabularies
</td> </tr>
<tr>
<td>
Analytical-Data
</td>
<td>
Semantics
</td>
<td>
Searches for collections of social haptic signals
</td>
<td>
Sets of social haptic signals
</td> </tr>
<tr>
<td>
Analytical-Data
</td>
<td>
Deploying visual understanding algorithms
</td>
<td>
Wearable RGB-D (Red, Green, Blue and Depth) cameras or RGB and depth sensors
</td>
<td>
Benchmark datasets from wearable cameras for activity recognition, object
detection, face and hand detection, and navigation
</td> </tr>
<tr>
<td>
Analytical-Data
</td>
<td>
User studies
</td>
<td>
Scientific instruments (temperature loggers, am-meters, thermography, optical
microscopy, video, tensiometers, martindale etc.)
</td>
<td>
Technical
measurement data (temperature, time, vibration amplitudes, frequencies)
</td> </tr>
<tr>
<td>
Social-Data
</td>
<td>
Social interactions
</td>
<td>
Appropriate accounts on the social media
</td>
<td>
Data about potential interest of groups, disseminated information about the
project
</td> </tr> </table>
## Standards for collection, creation, and reuse
There are a set of standards and guidelines related to each type of data
collected or generated within the project. For example, there is a detailed
interview protocol that defines clearly the aim of the interviews; the
procedure for the interview, analysis, collaboration between the researchers
who conduct the user-studies; instructions about how the interviews are to be
conducted, the instructions for ensuring informed consent by the participants;
the interview questions and more. In other cases, for example for
bibliographic studies, the information scientist within the project will apply
best practice for data collection, pre-processing, analysis and
visualizations. Similarly, each of the researchers in the project will apply
their field expertise to ensure compliance with standards and data quality.
Furthermore, information is recorded about each set of data in a descriptive
accompanying document. The type of information included may vary from one
descriptive overview document to the next, but in general it may comprise
information about the contents, the source, the means of data collection or
generation, privacy level, storage details, retention instructions, member(s)
involved in data creation, notes, dates of creation, use, versions, tools
involved, methods used, and so on.
Additionally, a select set of such information is uniformly captured in a
Metadata Tool (Appendix 2), which for each dataset within the project defines
the dataset’s unique id and name, data type, description, ownership, purpose,
area of use, size, level of data sensitivity, depository, duration of
preservation, reuse instructions, accompanying metadata, required tools and
methods, data quality assurance process, and pertinent ethical considerations.
The information provided in the Metadata Tool therefore gives guidelines as to
whether each specific set of data can be shared for reuse or is of a sensitive
nature and needs to be protected and not shared.
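Purely for illustration, one row of such a Metadata Tool could be represented as a record along the following lines; the field names follow the enumeration above, while all values are invented and do not describe a real project dataset:

```python
# One Metadata Tool row as a plain record (illustrative values only).
dataset_record = {
    "id": 1,
    "name": "SC-DS_1_WoS-809 items-deafblindness-2018-02-02",
    "data_type": "bibliographic metadata",
    "description": "Web of Science search results on deafblindness",
    "ownership": "HB",                              # partner acronym (assumed)
    "purpose": "literature study",
    "area_of_use": "WP2",
    "size": "809 items",
    "sensitivity": "non-sensitive",                 # drives storage and sharing decisions
    "depository": "BOX",
    "preservation_duration": "project + 5 years",   # invented value
    "reuse_instructions": "open after publication",
    "tools_and_methods": "Web of Science export; bibliometric analysis",
    "quality_assurance": "reviewed by information scientist",
    "ethical_considerations": "none",
}
```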
## Ownership and responsibilities
In SUITCEYES, some data will be generated and other data may be captured for
further analysis and use within the project. The ownership of the collected
data will remain with the original owner. For the rest, the default structure
for data ownership in SUITCEYES is that the partner creating the data also
owns it as defined in GA article 26. Where some data is subject to JOINT
ownership, this is governed by Grant Agreement Article 26.2. Further
stipulations are listed in the project’s consortium agreement, section 8.
For the sake of clarity and future reference, the name(s) of data generator(s)
and owner(s) is (are) clearly stated in the Metadata Tool. The data owners are
responsible for providing the required information that informs the project of
the level of data sensitivity and the means of storage and retention.
The PMB is the ultimate decision-making body within SUITCEYES and as such it
is also responsible for decisions made on data management issues. The PMB is
also responsible to oversee the existence of appropriate data governance
structures within the project. Towards this, the PMB will develop and assign
responsibilities to two Data Working Groups one for User-Data and one for
Analytical-Data. These working groups will collaborate with project members
and will oversee accuracy, timeliness, completeness and credibility of data.
## Dataset labelling convention
Each dataset will be assigned a unique identifying number (each dataset
receiving the next consecutive available number as indicated on the Metadata
Tool).
To facilitate the labelling of datasets, much of the identifying information
about each dataset will be provided in the Metadata Tool and a potential
accompanying data descriptive file. The actual labelling of the dataset will
therefore be as follows:
#### SC-DS_ID_Name
Where SC-DS is indicative of SuitCeyes-DataSet. ID refers to the dataset’s
unique identifying number. Finally, Name is to be formed in a way to provide
immediate meaningful information about the content of the dataset.
Example: **SC-DS_1_WoS-809 items-deafblindness-2018-02-02**
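A small helper that builds labels following this convention could look as follows; this is a sketch only, as the project does not prescribe any tooling for label generation:

```python
def dataset_label(dataset_id: int, name: str) -> str:
    """Build a SUITCEYES dataset label: SC-DS_<ID>_<Name>."""
    if dataset_id < 1:
        raise ValueError("dataset IDs are consecutive positive integers")
    return f"SC-DS_{dataset_id}_{name}"

print(dataset_label(1, "WoS-809 items-deafblindness-2018-02-02"))
# -> SC-DS_1_WoS-809 items-deafblindness-2018-02-02
```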
## Storage and sharing (during and after the project)
Currently, the main depository for SUITCEYES data is BOX, which is a data
storage and sharing solution, procured nationally within Sweden for use by
Swedish universities and their collaborators. This storage facility has been
approved as meeting the standards set by the GDPR requirements. Some data may
be stored locally at partner organisations in accordance with secure GDPR-
compliant guidelines. The consortium identifies the level of security;
sensitiveness; storage requirements; retention instructions; sharing routines;
and the specifics of archiving, preservation, and or destruction for each set
of data as they emerge and as the project progresses. These parameters will
define how the data within the project are to be handled.
## Data quality and evaluation
The consortium has defined guidelines, procedures, and routines to ensure a
general level of quality of data and research work by the means of the
structures that are put in place. In addition to this, each member of the
project is competent in his or her specific area of research and well familiar
with related guidelines and best practices to ensure quality of work and to
apply appropriate evaluation measures. Furthermore, the internal review
structures and collaborative feedback from colleagues are a further means of
ensuring the quality of work, data and research.
## Ethical and legal compliance
As mentioned earlier, SUITCEYES involves user studies. The partners involved
in these studies either hold or will seek and obtain ethics approval from
their national ethics review boards. Based on the procedures described above,
the sensitive data generated or used within the project will be subject to
guidelines and related best practices. After assigning codes for future cross-
referencing purposes, personal identification information will be removed from
the interview and observation data, soon after data collection and before
sharing (only if needed) with other project members.
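As an illustration of this code-assignment step only (the names, the code format and the replacement logic are all invented, and real de-identification of qualitative material also needs manual review for indirect identifiers), a minimal sketch in Python:

```python
import itertools

_counter = itertools.count(1)
code_book = {}  # participant name -> code; kept separately and access-restricted

def assign_code(participant_name: str) -> str:
    """Assign a stable cross-referencing code to a participant."""
    if participant_name not in code_book:
        code_book[participant_name] = f"P{next(_counter):03d}"
    return code_book[participant_name]

def deidentify(transcript: str, participant_name: str) -> str:
    """Replace the participant's name with their code before sharing.

    Only a first pass: transcripts still need manual review for indirect
    identifiers before any sharing among project members.
    """
    return transcript.replace(participant_name, assign_code(participant_name))

text = "Interviewer: Anna, how do you use the haptic interface at home?"
print(deidentify(text, "Anna"))
# -> "Interviewer: P001, how do you use the haptic interface at home?"
```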
The consortium partners are also aware of the GDPR (EU) 2016/679 5, which is a
regulation in EU law on data protection and privacy for all individuals within
the European Union. It also addresses the export of personal data outside the
EU. The GDPR aims primarily to give individuals control over their personal
data and to simplify the regulatory environment for international business by
unifying the regulation within the EU. It also involves decisions regarding
which data types can be characterized as containing sensitive information and
how such information is stored and shared. Each partner is aware of the need
to examine the possibility of protecting its results and must adequately
protect them during the project and after its end. This is especially
important if the results can reasonably be expected to be commercially or
industrially exploited and protecting them is possible, reasonable, and
justified. The consortium considers its own legitimate interests and the
legitimate interests (especially commercial) of the other partners. Already at
the stage of submitting the project application, the consortium decided to opt
out of the Pilot on Open Research Data in Horizon 2020. This decision is
connected with:
* allowing the protection of results (e.g. patenting)
* incompatibility with privacy/data protection
Where possible within the SUITCEYES project, the partners are using healthy
adults so as to minimise the burden on the target user group. In other
words, people with deafblindness are only being called upon where their
“expert user” perspective is required. For healthy users, ethical risks are
low – there is no sensitive data being recorded, so the only concerns are
around data protection and health and safety. The partners will only work with
those people with deafblindness who have the capacity to consent, and who have
the communication skills to carry out an interview. Interviews will be
conducted via a caregiver, so that the participant is with someone familiar
and with someone skilled in acting as an intermediary. Interviews will be
conducted in a location of the participants' choice – in this way they will be
in a familiar environment. Naturally, informed consent forms will always be
applied.
## The approach and format of reaching informed consent
The approach used to reach informed consent, or any other agreement with end
users taking part in requirements and other user studies, has been proposed
and applied from the beginning of the project. This approach and the format of
reaching informed consent for this target group are also useful for other R&D
activities and are made available/published as an important output (the
templates used by the consortium from the beginning of the project are shown
in Appendix 3). Moreover, non-sensitive data are stored in a Google Drive
folder (according to D1.1 Quality Assurance Plan). YouTube videos and other
dissemination materials involving the presence of users and outside
participants require informed consent from the target group presented in the
video or materials. In case the consortium decides to video-record the
interviews, iterations of the agile process for WP7, or other activities and
applications involving persons outside the consortium, the DMP requires the
use of a consent form that includes a reference on how to treat these
materials (please check the forms in Appendix 3).
At the start, the consortium informs participants that giving consent to the
processing of any personal data collected in the context of SUITCEYES is
entirely voluntary, and that the consortium commits to protecting personal
data and processing it only according to applicable laws and regulations such
as the GDPR. Consent may be withdrawn at any time; however, withdrawing it
might not result in the immediate cessation of use of the material, and it
usually does not affect material that has already been made public.
Participants are informed that the material collected will be used in the
internal training of project members and/or published, and in which media
channels (e.g. billboards, newspapers, website, TV programs, Twitter, YouTube,
LinkedIn). It is important to note that the consortium informs participants
that published material will probably reach a large audience and that the
consortium is not able to control other uses of the material. It is also
mentioned that publishing the material on social media means that the material
is transferred to companies based in the United States. These companies are
members of the “Privacy Shield” agreement and are thus considered to ensure an
adequate level of protection of personal data.
SUITCEYES ensures transparency in the processing of personal data.
Participants may request information about how their personal data is
processed and may obtain a copy of it in a structured, commonly used and
machine-readable format. The consortium can rectify or supplement personal
data that is inaccurate or incomplete.
It is possible to erase personal data under certain circumstances; however,
personal data that has already been made public, e.g. published on social
media, is usually not affected by a withdrawn consent. Because of legal
provisions, we may also be prevented from immediately erasing personal data.
Lodging a complaint with the supervisory authority is also possible. Privacy
is therefore an important issue for the SUITCEYES consortium, and we do our
best to protect the personal data of internal and external members of the
project.
Various formats of reaching informed consent have been used in SUITCEYES from
the beginning of the project, e.g.:
* The consent forms for the interviews with the users within WP2 in Greek, German, Swedish, Dutch and English
* Non-Disclosure Agreements that were signed with the advisors, symposia participants, and
e.g. persons who helped in transcribing some parts of the interviews
* The informed consent forms used for experiments in the Netherlands (in Dutch)
* The informed consent form (in English) used by HB for taking photos, filming and publishing
(for various university applications, also used in the project)
* A letter of consent regarding video/audio recording or photos in German
* Universal consent form created in the second year of the project in accordance with the project identity.
The templates of these informed consent forms can be found in Appendix 3.
# Summary
This DMP incorporates both Data Governance and Data Management and accordingly
defines the principles, procedures, and routines that are put in place for the
management of research data within the SUITCEYES project. The SUITCEYES
partners will make the research results available through different channels
and strive to provide open access to the project data as far as possible.
However, not all data generated and used within the project are suitable or
appropriate for sharing and reuse, and hence SUITCEYES has chosen not to
participate in the Open Research Data Pilot. The main datasets in SUITCEYES
contain data resulting from user studies, policy studies, bibliographic
searches, collections of semantic vocabularies, algorithms, and technical
experiments in the project. For administrative purposes these data have been
grouped under two broad categories of User Data and Analytical Data. For the
sake of clarity and future reference, the name(s) of data generator(s) and
owner(s) is (are) clearly stated in the Metadata Tool. The data owners are
responsible for providing the required information that informs the project of
the level of data sensitivity and the means of storage and retention (relevant
informed consent forms are also used). The PMB, as the ultimate
decision-making body within SUITCEYES, is also responsible for decisions made
on data management issues. Each dataset is assigned a unique identifying
number (each dataset receiving the next consecutive available number as
indicated on the Metadata Tool) and the main depository for SUITCEYES data is
BOX, a data storage and sharing solution. This storage facility has been
approved as meeting the standards set by the GDPR requirements. Some data may
be stored locally at partner organisations in accordance with secure
GDPR-compliant guidelines.
The DMP will be updated over the course of the project whenever significant
changes arise, such as (but not limited to):
* new data
* changes in consortium policies (e.g. new innovation potential, decision to file for a patent)
* changes in consortium composition and external factors (e.g. new consortium members joining or old members leaving).
# Appendices
### Appendix 1
Framework for data decision domains (Khatri & Brown, 2010: 149)
### Appendix 2
**SUITCEYES Metadata Tool**
The following information is recorded for each set of data collected or
generated within the project. (One dataset item is included here to
exemplify.)
### Appendix 3
**Informed consent forms used in SUITCEYES**
The consent form used for the interviews with the users within WP2 (in Greek)
The consent form used for the interviews with the users within WP2 (in German)
The consent form (last page) used for the interviews with the users within WP2
(in Swedish)
The consent form used for the interviews with the users within WP2 (in Dutch)
The consent form used for the interviews with the users within WP2 (in
English)
The form of Non-Disclosure Agreements that were signed with the advisors,
symposia participants,
and e.g. persons who helped in transcribing some parts of the interviews
The informed consent forms used for experiments in the Netherlands (in Dutch)
The informed consent form (in English) used by HB for taking photos, filming
and publishing (for
various university applications, also used in the project)
Letter of consent regarding video/audio recording or photos in German
Universal consent form created in the second year of project in accordance
with the project identity
0126_SEA-TITAN_764014.md
# INTRODUCTION
The SEA TITAN project participates in the Pilot on Open Research Data (ORD)
launched by the European Commission (EC) along with the H2020 programme [1].
This pilot is part of the Open Access to Scientific Publications and Research
Data programme in H2020. The goal of the programme is to foster access to
research data generated in H2020 projects. The use of a Data Management Plan
(DMP) is required for all projects participating in the Open Research Data
Pilot.
Open access is defined as the practice of providing on-line access to
scientific information that is free of charge to the reader and that is
reusable. In the context of research and innovation, scientific information
can refer to peer-reviewed scientific research articles or research data.
Research data refers to information, in particular facts or numbers, collected
to be examined and considered and as a basis for reasoning, discussion, or
calculation. In a research context, examples of data include statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings and images. The focus is on research data
that is available in digital form.
The Consortium strongly believes in the concepts of open science, and in the
benefits that the European innovation ecosystem and economy can draw from
allowing the reuse of data at a larger scale.
Furthermore, there is a need to gather experience in wave technology,
especially power performance and operating data. Experience in wave energy
remains very limited, yet it is essential for fully understanding the
challenges in device performance and reliability. The limited data and
experience that currently exist are rarely shared, as testing is partly
privately sponsored.
This project proposes to remove this roadblock by delivering, for the first
time, open-access, high-quality power take-off (PTO) performance, reliability
and operational data to the wave energy development community.
Nevertheless, data sharing in the open domain can be legitimately restricted
to protect results that can reasonably be expected to be commercially or
industrially exploited. Strategies to limit such restrictions will include
anonymizing or aggregating data, agreeing on a limited embargo period, or
publishing selected datasets.
## Purpose of the Data Management Plan
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the Consortium with regard to the
project research data.
The DMP covers the complete research data life cycle. It describes the types
of research data that will be generated or collected during the project, the
standards that will be used, how the research data will be preserved and what
parts of the datasets will be shared for verification or reuse. It also
reflects the current state of the Consortium agreements on data management and
must be consistent with exploitation and IPR requirements.
The DMP is not a fixed document, but will evolve during the lifespan of the
project, particularly whenever significant changes arise such as dataset
updates or changes in Consortium policies.
This document is the first version of the DMP, delivered in Month 3 of the
project. It includes an overview of the datasets to be produced by the
project and the specific conditions attached to them. The next versions of
the DMP will go into more detail and describe the practical data management
procedures implemented by the SEA TITAN consortium. At a minimum, the DMP
will be updated in Month 18 (D8.6) and Month 36 (D8.7). This document has
been prepared taking into account the “Template horizon 2020 data management
plan (DMP)” [Version 1.0. of 10 October 2016] and the additional
considerations described in ANNEX I: KEY PRINCIPLES FOR OPEN ACCESS TO
RESEARCH DATA.
## Research Data Types in SEA TITAN
For this first release, the DMP highlights the data types expected to be
produced during the SEA TITAN project life span; these datasets will be
revised in subsequent iterations of the document if found redundant or
insufficient.
Accordingly, Table 1 lists indicative types of research data that SEA TITAN
will produce. This list may be adapted, with the addition or removal of
datasets, in future versions of the DMP to reflect project developments. A
detailed description of each dataset is given in the following sections of
this document.
<table>
<tr>
<th>
#
</th>
<th>
Dataset reference
</th>
<th>
Lead partner
</th>
<th>
Related WP(s)
</th> </tr>
<tr>
<td>
1
</td>
<td>
DS_AMSRM_Performance
</td>
<td>
CIEMAT
</td>
<td>
WP2, WP3, WP4, WP5
</td> </tr>
<tr>
<td>
2
</td>
<td>
DS_AMSRM_Feasibility
</td>
<td>
CIEMAT
</td>
<td>
WP2, WP3, WP4, WP5
</td> </tr>
<tr>
<td>
3
</td>
<td>
DS_Cooling_System_performance
</td>
<td>
CIEMAT
</td>
<td>
WP6
</td> </tr> </table>
### Table 1. SEA TITAN types of data
Specific datasets may be associated with scientific publications (i.e.
underlying data), public project reports, and other raw or curated data not
directly attributable to a publication. The policy for open access is
summarized below.
Research data linked to exploitable results will not be put into the open
domain if doing so would compromise their commercialisation prospects or if
the results have inadequate protection; this is an H2020 obligation. The rest
of the research data will be deposited in an open access repository. When
research data are linked to a scientific publication, the provisions
described in ANNEX II: SCIENTIFIC PUBLICATIONS will be followed.
Research data needed to validate the results presented in a publication
should be deposited at the same time for “Gold” Open Access ( _the authors
make a one-off payment to the publisher so that the scientific publication is
immediately published in open access mode_ ) or before the end of the embargo
period for “Green” Open Access ( _due to the contractual conditions of the
publisher, the scientific publication can undergo an embargo period of up to
six months from the publication date before the author can deposit the
published article or the final peer-reviewed manuscript in open access
mode_ ).
Underlying research data will consist of those selected parts of the general
datasets generated for which the decision to make them public has been made.
Other datasets will be related to public reports or be useful to the research
community. They will consist of selected parts of the general datasets
generated, or of full datasets, and will be published as soon as possible.
## Responsibilities
Each SEA TITAN partner has to respect the policies set out in this DMP.
Datasets have to be created, managed and stored appropriately and in line with
applicable legislation.
The Project Coordinator has a particular responsibility to ensure that data
shared through the SEA TITAN website are easily available, but also that
backups are performed and that proprietary data are secured.
WEDGE GLOBAL, as WP1 leader, will ensure dataset integrity and compatibility
for its use during the project lifetime by different partners.
Validation and registration of datasets and metadata are the responsibility
of the partner that generates the data in the WP. Metadata constitute an
underlying definition or description of the datasets and facilitate finding
and working with particular instances of data.
Backing up data for sharing through open access repositories is the
responsibility of the partner possessing the data. Quality control of these
data is the responsibility of the relevant WP leader, supported by the Project
Coordinator.
If datasets are updated, the partner that possesses the data has the
responsibility to manage the different versions and to make sure that the
latest version is available in the case of publicly available data.
WP1 will provide naming and version conventions. Last but not least, all
partners must consult the concerned partner(s) before publishing, in the open
domain, data that can be associated with an exploitable result.
# CHANGELOG
This document has been reviewed and no modifications are required so far.
# DATASETS DESCRIPTION
## DS_AMSRM_PERFORMANCE
Throughout the AMSRM development, the representative variables to be obtained
during the different design and testing procedures are grouped into two
stages: calculation of specifications and experimental test performance.
**Calculation of the specifications of the PTO**
During the simulation of the system, corresponding to WP2, the data obtained
to define and place the linear generator in the different WEC technologies
will be:
* Available space (length, width, height)
* Maximum stroke
* Maximum velocity
* Maximum force
After evaluating the WECs in the different scenarios proposed for each WEC
technology, different values of force, velocity and stroke will be obtained.
These data will be private, shared only internally among the project
partners, since they are sensitive data concerning the technologies involved.
**Experimental tests performance**
Finally, during the laboratory tests carried out in WP5, a set of data will
be collected for each of the scenarios tested, each corresponding to one type
of WEC technology and a certain sea location, reproducing a certain sea
state:
* Force values as a function of the current applied to the generator phases, for different velocities and current levels.
* Output power, supplied to the grid as a function of the force and velocity. Mechanical power will be also calculated, obtaining a complete global efficiency map.
These data will be mostly public, since they are considered part of the
results obtained from the project and part of the dissemination plan.
## DS_AMSRM_FEASIBILITY
This dataset is obtained as a result of the design stage of the PTO solution.
Based on that solution, a PTO module will be defined to develop a prototype.
During the design of both the linear generator, the power converters and the
control platform, corresponding to WP3, different variables will be defined as
a result of the calculations:
* Based on Finite Elements Method (FEM) analysis, force map depending on the position, velocity and the current level. Force validation will demonstrate the feasibility of the proposed solution.
* Losses provided by the losses model, depending on the position, velocity and current level.
* Expected efficiency map depending on the position, velocity and current level. The losses model and efficiency map will allow the development of an energy matrix to explore the economic feasibility of the system when applied to the different WEC technologies.
* Thermal behaviour will be analysed along the different operation situations defined, validating the feasibility of the system.
These data will be private; only some of them will be shared internally among
the project partners, since they are sensitive data corresponding to the
know-how of the machine.
## DS_COOLING_SYSTEM_PERFORMANCE
Related to the thermal behaviour of the system, and considering that the PTO
will be evaluated for different WEC technologies and sea states, the time
evolution of temperature will be analysed in those scenarios at the following
points:

* At the linear generator: temperature at the machine coils (at least two measurements), the translator magnetic circuit and the bearings (at least two measurements)
* At the power electronic converters: IGBT case, water cooling fluid, ambient
Related to the SLSG, since only calculation and preliminary design are
carried out during the project, no thermal data will be provided. However,
the superconducting solution requires, as one of the main results of the
solution definition, a cryostat: the system in charge of bringing the machine
down to the required low temperature. In any case, only an engineering
solution will be defined; no results or datasets will be produced.
# STANDARDS AND METADATA
This aspect will be defined as part of Task 7.3, Standardization activities:
the identification and analysis of related existing standards and the
contribution to ongoing and future standardization developments based on the
results of the project.
The participation of a Standardization Body (UNE) provides the relevance,
knowledge and experience in the standardization system and its internal
procedures. Other project partners will provide technical support for the
development of this task.
An analysis of the applicable standardization landscape is expected to be
completed by M6, and the contribution to ongoing and future standardization
developments will be defined in detail by M36. This part of the document will
therefore be updated as soon as more information is available to the
consortium.
# DATA SHARING
During the lifecycle of the SEA-TITAN project, datasets will be stored and
systematically organized in a database. An online data query tool will be
operational by Month 18 and open for dissemination by Month 24. The database
schema and the queryable fields will also be made publicly available to
database users, as a way to better understand the database itself. In
addition to the project database, relevant datasets will also be stored in
ZENODO [5], the open access repository of the Open Access Infrastructure for
Research in Europe, OpenAIRE [4].
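Purely as an illustration of such a systematically organized database (the
actual schema and queryable fields will be defined and published by the
consortium; every name below is hypothetical), a minimal dataset registry
mirroring Table 1 could look like this:

```python
import sqlite3

# Hypothetical registry of SEA-TITAN datasets, mirroring the columns of
# Table 1; the real project database schema will differ.
conn = sqlite3.connect("sea_titan_registry.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS datasets (
        id INTEGER PRIMARY KEY,
        dataset_reference TEXT NOT NULL,   -- e.g. 'DS_AMSRM_Performance'
        lead_partner TEXT NOT NULL,        -- e.g. 'CIEMAT'
        related_wps TEXT,                  -- e.g. 'WP2, WP3, WP4, WP5'
        access_level TEXT                  -- 'public' or 'private'
    )
""")
conn.execute(
    "INSERT INTO datasets (dataset_reference, lead_partner, related_wps, access_level) "
    "VALUES (?, ?, ?, ?)",
    ("DS_AMSRM_Performance", "CIEMAT", "WP2, WP3, WP4, WP5", "public"),
)
conn.commit()

# The kind of query a database user could run against the published schema:
for row in conn.execute("SELECT dataset_reference, lead_partner FROM datasets"):
    print(row)
```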
Data access policy will be unrestricted if no confidentiality or IPR issues
are expected by the relevant Work Package leader in consensus with the Project
Coordinator. All collected datasets will be disseminated without an embargo
period unless linked to a green open access publication.
Otherwise, in order to protect the commercial and industrial prospects of
exploitable results, aggregated data will be used to limit this restriction,
and the aggregated dataset will be disseminated as soon as possible. In the
case of the underlying data of a publication, this may imply an embargo
period for green open access publications.
Data objects will be deposited in ZENODO under the following conditions (a
deposit sketch follows the list):
* Open access to data files and metadata, with data files provided over standard protocols such as HTTP and OAI-PMH.
* Use and reuse of data permitted.
* Privacy of its users protected.
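As a hedged illustration of the deposit step, the sketch below follows
Zenodo's documented REST deposit workflow (create a deposition, upload a file
to its bucket, attach metadata, publish); the token, file name and metadata
values are placeholders rather than actual SEA-TITAN artefacts:

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "YOUR-ZENODO-TOKEN"  # personal access token (placeholder)

# 1) Create an empty deposition.
resp = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
resp.raise_for_status()
deposition = resp.json()

# 2) Upload a data file into the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("ds_amsrm_performance.csv", "rb") as fp:  # placeholder file
    requests.put(f"{bucket_url}/ds_amsrm_performance.csv",
                 data=fp, params={"access_token": TOKEN}).raise_for_status()

# 3) Attach minimal metadata, then publish the record.
metadata = {"metadata": {
    "title": "SEA-TITAN DS_AMSRM_Performance (illustrative)",
    "upload_type": "dataset",
    "description": "Global efficiency map from laboratory tests.",
    "creators": [{"name": "SEA-TITAN Consortium"}],
}}
requests.put(deposition["links"]["self"],
             params={"access_token": TOKEN}, json=metadata).raise_for_status()
requests.post(deposition["links"]["publish"],
              params={"access_token": TOKEN}).raise_for_status()
```

On publication Zenodo mints a DOI for the record, which can then be
referenced from the project database and from related publications.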
# ARCHIVING AND PRESERVATION
The SEA-TITAN project database will be designed to remain operational for at
least 5 years after the project end. By the end of the project, the final
dataset will be transferred to the ZENODO repository, which ensures
sustainable archiving of the final research data. Items deposited in ZENODO
will be retained for the lifetime of the repository, which is currently tied
to the lifetime of the host laboratory, CERN, whose experimental programme is
defined for at least the next 20 years.
Metadata and persistent identifiers in Zenodo are stored in a PostgreSQL
instance operated on CERN’s Database on Demand infrastructure, with a
12-hourly backup cycle and one backup sent to tape storage once a week.
Metadata are additionally indexed in an Elasticsearch cluster for fast and
powerful searching. Metadata are stored in JSON format in PostgreSQL, in a
structure described by versioned JSONSchemas; all changes to metadata records
on Zenodo are versioned and happen inside database transactions. In addition
to the metadata and data storage, Zenodo relies on Redis for caching and on
RabbitMQ and Python Celery for distributed background jobs.
# 0127_PLANMAP_776276.md
# Executive Summary
The updated PLANMAP Data Management Plan (DMP) is provided. Map-wide metadata
structure and fields for PLANMAP mapping products are described and
exemplified. The long-term data repository venues for PLANMAP products are
listed and specified. Delivery file formats are updated, as reflected by
deliverables. Described aspects in this DMP include: beneficiaries producing
data, adherence to FAIR principles, data types, formats and standards,
metadata, documentation, intellectual property and data storage, archiving and
curation during and after the project.
# Introduction
PLANMAP will both use and produce data. Different data categories can be
distinguished in the framework of PLANMAP:
**Base mapping data:**
* **A)** Individual higher-level data products derived from raw experiment data (which are archived on PDS or PSA), e.g. map-projected individual images or custom calibrated cubes (e.g. CTX, OMEGA, CRISM)
* **B)** Custom processed or mosaicked data from multiple data products (i.e. derived hyperspectral summary products, multi-image mosaics) available in public archives or repositories (e.g. PSA, PDS, USGS)
* **C)** Individual higher-level data products already produced by experiment teams and available from PDS/PSA archives (e.g. HRSC)
* **D)** Custom processed or mosaicked data from multiple data products (i.e. derived hyperspectral summary products, multi-image mosaics) produced by the consortium
**Integrated mapping products:**
* Intermediate temporary mapping products (for scientific discussion and sharing within the consortium)
  * Raster imagery/data
  * Vector mapping data
* Finished geological maps (see _D2.1 (Mapping Standards)_, Rothery et al., 2018, and _D2.2-public_ (Morphostratigraphic maps), Rothery et al., 2019)
  * Standard USGS-like geological maps
* Integrated geo-spectral and geo-stratigraphic maps
  * Geo-structural maps
  * Geo-modelling maps
  * Landing site and traverse maps
  * In-situ integrated maps
  * Digital outcrop models
  * Subsurface models
* 3D models for Virtual Reality environments (from one or more of the above categories)
The fate of the datasets and data products belonging to these categories
differs. Individual data products (A, B, C) are preserved for the long term
in appropriate archives. Their eventual reduction and reprocessing is
reproducible with well-known open source tools and is supported for the long
term by robust institutions and agencies (e.g. USGS/NASA). Intermediate
mapping products are instrumental to producing the final, released and/or
published PLANMAP digital mapping products (see the section on Data Storage
and Management during the project); their long-term storage is not planned,
but documentation, in the form of wikis or individual documents, will be
preserved during the course of the project, and significant summaries and
excerpts will be included as deliverable text and annexes, and can also be
used as ancillary material attached to scientific publications.
The current long-term archiving and availability of PLANMAP data is as follows
(please refer to relevant subsections below):
* All raster, vector and layout (PDF) mapping data
  * Short-term: on the PLANMAP data archive at _https://data.planmap.eu/_
  * Long-term: on the ESA PSA DOI-granting guest storage facility at _https://www.cosmos.esa.int/web/psa/psa_gsf_
* Additional ancillary geologic models and specific 3D products
  * Short-term: on the PLANMAP data archive at _https://data.planmap.eu/_
  * Long-term: on the University of Padova DOI-granting data repository at _http://researchdata.cab.unipd.it_
* Additional ancillary specific compositional products
  * Short-term: on the PLANMAP data archive at _https://data.planmap.eu/_
  * Long-term: on the INAF DOI-granting data repository
# Scope of the document
The present document updates the type of data, their characteristics and their
use, archiving and preservation plans throughout the PLANMAP project.
Intellectual property rights are also clarified, as well as specific per-
partner data use and responsibilities. The document outlines the basic data
management directions that are going to be updated throughout the project and
issued at discrete steps.
# Beneficiaries using data
All beneficiaries will use data, in either individual or, in most cases,
combined forms. Access to archived (NASA/ESA) mission data is free for
anyone. Some data will have temporary team-only access (during an embargo of
up to several months), such as mapping data used by PLANMAP researchers (see
the section on Data Storage and Management during the project).
# Beneficiaries producing data
All beneficiaries will produce either derived data (higher-level data
products) or new data derived by both human and computer/algorithm-assisted
mapping.
In particular, beneficiaries are set to produce these data categories (see
Annex A, B):
* UNIPD
  * mosaics and higher-level products derived from planetary archives
  * vector mapping products (geologic/geomorphologic maps)
  * 3D models
  * (subject to increase/expansion)
* OU
  * mosaics and higher-level products derived from planetary archives
  * vector mapping products (geologic/geomorphologic maps)
  * (subject to increase/expansion)
* WWU
  * mosaics and higher-level products derived from planetary archives
  * vector mapping products (geologic/geomorphologic maps)
  * (subject to increase/expansion)
* INAF
  * mosaics and higher-level products derived from planetary archives
  * (subject to increase/expansion)
* CNRS
  * 3D and virtual reality models
  * digital outcrop maps
  * vector mapping products (geologic/geomorphologic maps and models)
  * (subject to increase/expansion)
* JacobsUni
  * mosaics and higher-level products derived from planetary archives
  * vector mapping products (geologic/geomorphologic maps)
  * (subject to increase/expansion)
# Adherence to FAIR principles
Data produced by PLANMAP will impact future robotic and space exploration,
mainly through mature, finished, published mapping products. Underlying data
and special mapping products will be of scientific use also before and beyond
that. Accessibility to the data will be provided in different forms:

Findable data:

* Longer-term discoverability will be guaranteed via connected institutional repositories (ESA, UNIPD, INAF), VESPA sharing and inclusion in planetary data archives that are accessible and commonly used by the community.
* Shorter-term discoverability will be supported by the PLANMAP web map and data access pages.

Accessible data:

* Geological mapping products will have multiple levels of accessibility, with variable scale and complexity, from individual units to finished products and thematic maps.

Interoperable data:

* OGC standards for CRS and formats will be adopted.
* Data discovery interoperability will be granted via the use of the state-of-the-art VESPA EPN-TAP (Virtual European Solar and Planetary Access EuroPlanet Table Access Protocol) for data search and query.

Re-usable data:

* Raw data will be used and processed/reduced, with embedded re-usability upstream with respect to PLANMAP.
* Custom base-map data (e.g. mosaics), partial mapping products and processed/derived datasets underlying geological mapping products (standard, non-standard, integrated, etc.) will be usable by others, also in the future, regardless of the final geological mapping products.
* Integrated and/or final mapping products will be re-usable directly or indirectly, with access to combined information content or individual layers (see _D2.1 (Mapping Standards)_, _D2.2-public_) with relevant topologies (units, contacts, etc.).
# Data types, formats and standards
PLANMAP uses existing datasets and data products and creates new products
deriving from combination or derivation of existing, processed data products,
as well as from completely new mapping (e.g. units), see _D2.1 (Mapping
Standards)_ (Rothery et al., 2018).
# Data
**Raw data**
Planetary archives, PDS3, PDS4 imagery and cubes.
## Base mapping data
OGC-compliant data already available from external entities (e.g. USGS) or
base mapping data produced by PLANMAP partners, some in PDS standards/formats.
## Integrated mapping products
Integrated mapping products with individual layers are being produced in
OGC-compliant formats, both raster and vector, as well as in suitable 3D
formats (see Annex A). All individual layers/components of maps are in
geospatial format, with a CRS suitable for the specific mapping project:
in-situ, local (mostly non-standard, see _D2.1 (Mapping Standards)_ ),
regional or global (both standard and non-standard).
# Metadata
The aim of including metadata is to allow reproducibility by providing
information about the processing steps performed. Map-wide metadata including
both geometric and bibliographic information are provided for each map (e.g.
as accessible on _https://data.planmap.eu/_ ).
## Raw data
Metadata from processed raw data are the same as those from archived data.
SPICE kernel version and software used (e.g. USGS ISIS) should be recorded.
Isis Cube labels (i.e. recording cumulative processing steps and used
ancillary data, metadata, CRS and alike). The information is going to be
recorded in processing labels and as temporary output in ASCII format.
## Base mapping data
Projection, cubes and images used, type of control network used, and relevant
additional information available from original derived data producers (e.g.
USGS, ESA, academic institutions or local PLANMAP base mapping data producers
or groups) will be recorded.
## Integrated mapping products
Metadata for integrated mapping products will be both map-related and sub-map
(i.e. geological unit)-related.
Map-related metadata include, as a minimum:
* Used datasets and products
* Mapping individuals
* CRS
* Summary of used tools and documented workflow
Unit-related data/metadata, recorded and updated during the mapping process,
include as a minimum (see also _D2.1 (Mapping Standards)_ ):
* Individual products and layers used to determine unit extent and contacts
* Eventual interpolation/extrapolation of data underlying mapped unit outline
* Qualitative assessment on uncertainties involved in the unit determination
The authors of the maps, the programs and processing used, and the basic
information needed to allow reproducibility of the underlying workflow are
included in the documentation. In addition, geocoding of units (i.e.
associating toponyms with locations and mapped surface units) will be
produced in order to ease searching of individual maps as well as of
individual units and their occurrence within maps (see e.g.
_http://geometrics.jacobsuniversity.de/_ , Rossi et al., 2018).
## Map types
PLANMAP map-types, as per DoA, include:
* **S** = Stratigraphic
* **C** = Compositional
* **M** = Morphologic
* **G** = Geo-structural
* **I** = Integrated
* **D** = Additional DOM-specific mapping products for individual or multiple lander/rover-imaged outcrops can be included
## Map-level metadata
Complementing the metadata related to individual units, each PLANMAP map
includes several map-wide fields, exemplified below:
<table>
<tr>
<th>
**Field**
</th>
<th>
**Field description (and example entries)**
</th> </tr>
<tr>
<td>
Map name (PM_ID)
</td>
<td>
PM-MER-MS-H02_3cc_01
</td> </tr>
<tr>
<td>
Target body
</td>
<td>
Mercury
</td> </tr>
<tr>
<td>
Title of map
</td>
<td>
Geologic Map of the Victoria Quadrangle (H02), Mercury
</td> </tr>
<tr>
<td>
Bounding box - Min
Lat
</td>
<td>
22.5°
</td> </tr>
<tr>
<td>
Bounding box - Max
Lat
</td>
<td>
65°
</td> </tr> </table>
<table>
<tr>
<th>
Bounding box - Min Lon (0-360)
</th>
<th>
270°
</th> </tr>
<tr>
<td>
Bounding box - Max Lon (0-360)
</td>
<td>
360°
</td> </tr>
<tr>
<td>
Author(s)
</td>
<td>
Valentina Galluzzi; Laura Guzzetta; Luigi Ferranti; Gaetano di Achille; David
A. Rothery; Pasquale Palumbo
</td> </tr>
<tr>
<td>
Type
</td>
<td>
Released
</td> </tr>
<tr>
<td>
Output scale
</td>
<td>
1:3M
</td> </tr>
<tr>
<td>
Original Coordinate Reference System
</td>
<td>
Lambert conformal conic
Center longitude: 315°
Standard parallel 1: 30°
Standard parallel 2: 58°
Datum: 2440 km (non-IAU, MESSENGER team datum)
</td> </tr> </table>
<table>
<tr>
<th>
Data used
</th>
<th>
MESSENGER MDIS BDR v0 uncontrolled basemap (166
m/pixel)
MESSENGER MDIS 2013 complete uncontrolled basemap (250 m/pixel)
MESSENGER MDIS uncontrolled mosaics v6, v7, v8 (250 m/pixel)
MESSENGER MDIS partial mosaic (USGS) (200 mpp)
MESSENGER MDIS 2011 albedo partial mosaic (USGS) (200 m/pixel)
Mariner 10 + MESSENGER flyby uncontrolled basemap (USGS) (500 m/pixel)
MESSENGER MLA DTM (665 m)
MESSENGER MDIS M2 flyby stereo-DTM (DLR) (1000 m)
</th> </tr>
<tr>
<td>
Standards adhered to
</td>
<td>
Mapping scale: Tobler (1987); Output scale: USGS; Symbology: USGS FGDC and
other new symbols
</td> </tr>
<tr>
<td>
DOI
</td>
<td>
10.1080/17445647.2016.1193777
</td> </tr>
<tr>
<td>
Aims
</td>
<td>
Morpho-stratigraphic analysis of Mercury's units and BepiColombo target
selection.
</td> </tr> </table>
<table>
<tr>
<th>
Short description
</th>
<th>
Mercury’s quadrangle H02 ‘Victoria’ is located in the planet’s northern
hemisphere and lies between latitudes 22.5° N and 65° N, and between
longitudes 270° E and 360° E. This quadrangle covers 6.5% of the planet’s
surface with a total area of almost 5 million km2. Our 1:3,000,000-scale
geologic
map of the quadrangle was produced by photo-
interpretation of remotely sensed orbital images captured by the MESSENGER
spacecraft. Geologic contacts were drawn between 1:300,000 and 1:600,000
mapping scale and constitute the boundaries of intercrater, intermediate and
smooth plains units; in addition, three morpho-stratigraphic classes of
craters larger than 20 km were mapped. The geologic map reveals that this area
is dominated by Intercrater Plains encompassing some almost-coeval, probably
younger, Intermediate Plains patches and interrupted to the northwest, north-
east and east by the Calorian Northern Smooth Plains. This map represents the
first complete geologic survey of the Victoria quadrangle at this scale, and
an improvement of the existing 1:5,000,000 Mariner 10-based map, which covers
only 36% of the quadrangle.
</th> </tr>
<tr>
<td>
Related products
</td>
<td>
Geologic Map of the Hokusai Quadrangle (H05), Mercury
Geologic Map of the Shakespeare Quadrangle (H03), Mercury (pre-Planmap)
Geologic Map of the Kuiper Quadrangle (H06), Mercury (prePlanmap)
</td> </tr> </table>
<table>
<tr>
<th>
Units Definition (polygon styling)
</th>
<th>
Smooth Plains, sp, 255-190-190
Northern Smooth Plains, spn, 245-162-122
Intermediate Plains, imp, 245-122-122
Intercrater Plains, icp, 137-90-68
Crater material-well preserved, c3, 255-255-115
Crater material-degraded, c2, 92-137-68
Crater material-heavily degraded, c1, 115-0-0
Crater floor material-smooth, cfs, 255-255-175
Crater floor material-hummocky, cfh, 205-170-102
</th> </tr>
<tr>
<td>
Stratigraphic info
</td>
<td>
This map has an associated database of craters larger than 5 km used for basic
crater frequency analysis for N(5), N(10), and N(20).
</td> </tr>
<tr>
<td>
Other comments
</td>
<td>
Since the mapping scale (~1:400k) was much higher than the output scale (1:3M)
the polylines of the map were not smoothed.
This map is currently being updated to fit the new controlled MESSENGER's end-
of-mission basemaps.
A post-release boundary merging was done with the H03 and H05 quadrangles.
This map uses a legend also for feature labels.
</td> </tr>
<tr>
<td>
Heritage used
</td>
<td>
former Mariner 10 map by McGill and King (1983)
</td> </tr>
<tr>
<td>
Link to other repositories
</td>
<td>
(crater database link)
(shapefiles database link)
</td> </tr>
<tr>
<td>
Acknowledgements beyond Planmap
</td>
<td>
This research was supported by the Italian Space Agency (ASI) within the
SIMBIOSYS project [ASI-INAF agreement number I/022/10/0]. Rothery was funded
by the UK Space Agency (UKSA) and STFC.
</td> </tr> </table>
Table 1: Exemplary map-wide metadata for PLANMAP products, implemented for a
PLANMAP in-kind contribution (geologic map by Galluzzi et al., 2016); see
_https://data.planmap.eu/pub/mercury/PM-MER-MS-H02_3cc_01/_
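To illustrate how such map-wide metadata could travel alongside the data
files, the sketch below serializes a subset of the Table 1 fields to JSON;
the field names mirror the table, but the sidecar-file convention is an
illustrative assumption, not a PLANMAP specification:

```python
import json

# Subset of the map-wide metadata fields of Table 1 (values from the
# Victoria quadrangle example); the sidecar-file layout is hypothetical.
map_metadata = {
    "map_name_pm_id": "PM-MER-MS-H02_3cc_01",
    "target_body": "Mercury",
    "title": "Geologic Map of the Victoria Quadrangle (H02), Mercury",
    "bounding_box": {"min_lat": 22.5, "max_lat": 65.0,
                     "min_lon_0_360": 270.0, "max_lon_0_360": 360.0},
    "type": "Released",
    "output_scale": "1:3M",
    "doi": "10.1080/17445647.2016.1193777",
}

# Write a JSON sidecar next to the map data so the metadata travel with it.
with open("PM-MER-MS-H02_3cc_01.json", "w") as fp:
    json.dump(map_metadata, fp, indent=2)
```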
## Documentation
Documentation of PLANMAP will be available on the project wiki space (
_https://wiki.planmap.eu/display/planmap)_ , which will be kept functional
after project-end based on best-effort and availability of resources. The
internal wiki space is used for both internal project coordination and
technical, scientific documentation. The latter, in evolved form, will be also
shared via the project public wiki space (
_https://wiki.planmap.eu/display/public_ ).
The types of documentation in the PLANMAP wiki include:
* Summary of relevant activities per WP
* Procedures and workflows
* Mapping use case description
* Best practices and recommendations
* Tutorials on data handling and mapping
* Other documents
# Software
The software used to access and analyze PLANMAP data will be based on open
standards, in particular OGC standards. Both open source and proprietary
software (such as QGIS, ArcGIS and the like) will therefore be suitable for
accessing PLANMAP data. A particular case is the software employed for 3D
geological modelling, for which open source alternatives rarely exist. Two
criteria will be considered in the choice of software package: a) its
suitability for the task to be undertaken, and b) the academic licensing
scheme adopted. Under the same suitability conditions, software packages
granting low-cost/affordable licensing schemes for academic purposes will be
favored.
The consortium will use a wide range of publicly available open source and
commercial tools to perform mapping tasks. Additionally, algorithmic and
programmatic methods that add value to interactive human-computer mapping
will also use, as far as possible, open source tools, packages and libraries.
Software, tools and scripts or snippets developed throughout the project will
be shared both internally and externally via the PLANMAP GitHub organization
and relevant repositories ( _https://github.com/planmap-eu_ ). Some
repositories might be private, with access restricted to beneficiaries,
during the early phases of the project. Ultimately, all will be made public
and will remain available indefinitely after the end of PLANMAP.
**Data exploitation, accessibility and intellectual property**
Intellectual property rights on individual science outputs (e.g. individual
papers) will be held by the scientific collaborators and the publishing
venue/journal. Data and maps published on the PLANMAP data archive (ESA Guest
Storage Facility, INAF, UNIPD or other institutional data repositories), and
their long-term evolution, are cited either via their dataset DOI or via
relevant linked publications.
# Data and metadata
Produced base mapping data are provided as CC-BY (attribution).
Published maps (of any kind) are going to be provided, free to use, with CC-BY
(attribution).
Acknowledgment of the PLANMAP EC H2020 Space project is requested from those
using PLANMAP-derived data. A relevant acknowledgement message will be
included in the documentation provided to ESA, as well as within the global
metadata of VESPA-shared datasets.
# Documentation
Documentation licensing will follow Creative Commons CC-BY-4.0 (
_https://creativecommons.org/licenses/by/4.0/_ ). Documentation will be also
available, complementing or copying information on the public wiki space, on
GitHub and possibly other public repositories.
At the end of the project the entire body of documentation will be
consolidated and available both on the PLANMAP public (
_https://wiki.planmap.eu/display/public_ ) wiki and Github (
_https://github.com/planmap-eu_ ).
# Software
Software developed by PLANMAP partners is going to be open source, with the
possible exception of specific software involving SMEs (e.g subcontracted
within virtual-reality tasks). GPLv3 is recommended, or any other license
covered by the Open Source initiative (
_https://opensource.org/licenses/category_ ).
Specific licensing for WP involving SMEs and potential exploitation beyond the
project of pre-existing or specific technological aspects WP5 will be
established and documented.
Software, tools and scripts produced by PLANMAP will be available as soon as
they are considered usable, on the public GitHub organisation (
_https://github.com/planmap-eu_ ). Private repositories will be used during
the course of the project, but will cease to exist at its end and all will be
made public.
# Data/Software citation
Archived data coming from mission archives will be cited following the custom
of quoting experiment-description papers and any relevant follow-up papers
(e.g. Malin et al., 2007; McEwen et al., 2007; Neukum et al., 2004; Jaumann
et al., 2007).
Datasets from NASA/ESA archives (PDS, PSA) follow the citation requirements of
those archives. In the case of NASA public domain data, the experiment-
description papers (e.g. Malin et al., 2007) would be cited in scientific
publications. ESA data follow similar citation styles (suggested citations are
included in the PSA entry pages).
Datasets produced by the PLANMAP consortium may be cited via:
* Relevant peer-reviewed publications or published maps indicated in the dataset metadata (similar to PSA/VESPA)
* Dataset-specific DOI, i.e via OpenAIRE/Zenodo/GitHub for relevant datasets
* Eventual additional DOI-generating data services that might become available during the project lifetime
# Data storage and management during project
During day-to-day operations and technical/scientific activities of the
consortium, data will be stored on each partner's premises as well as, when
relevant, on shared network resources (such as cloud, FTP and web mapping data
access services). In principle, data will be made publicly available as soon
as possible during the project, respecting publication embargoes.
# Data curation, archiving, preservation and security

## Data curation
Base data and maps (see Annex A) undergo archiving review by the archive
maintainers (PDS, PSA). If any issue is encountered (e.g. missing or
problematic labels or metadata, and/or problems with the data themselves),
the PLANMAP consortium will share that information with the respective
archive data publishers (PDS, PSA).
Mapping is an iterative, interactive process that will go through a few levels
of interactive, informal and formal scientific review within the PLANMAP
partners and consortium. Before a final map is produced (and its related
scientific publication is submitted), preliminary versions will be shared on
the PLANMAP web page, wiki and web-mapping data access page (
_https://maps.planmap.eu/_ and its file-based access site on
_https://data.planmap.eu/_ ).
In case underlying base mapping data require or are subject to improvements
that affect the mapping, newer versions will be used and posted, and the
metadata updated.
# Data preservation
Input data (base data and maps) are preserved by the respective archives and
are not under PLANMAP responsibility.
Custom higher-level imagery, cubes, virtual environments and 3D models
produced by PLANMAP partners during the course of the project will be
preserved on PLANMAP storage services. After the project, data will be shared
(see the Data Sharing subsection), optimally with some redundancy and in
different geographic locations, for longer-term availability.
# Data security
The PLANMAP data processed, produced and analysed are not sensitive. No
specific security measures are planned. Data recovery, in case of storage
failures, will be optimised by the use of central backups and local copies
across PLANMAP partner institutions.
# Data sharing
Data sharing will be performed via four possible channels:
* Individual partners, e.g. on their own websites or repositories, using industry standards for geodata (e.g. web-GIS)
* The PLANMAP consortium, via the web-GIS and data-access web page, linked from the PLANMAP web page
* ESA PSA, upon delivery of data and mapping products
* VESPA, via distributed VO-compliant systems for integrated mapping products and, in the future, potentially sub-map, mapping-unit-level access (e.g. individual mapping units)
## EPN-TAP VESPA-based sharing on premise
VESPA-shared data contain data-product-level metadata pointing to actual data
sources. The release of data (see DoA) is planned in steps, to conclude by the
end of the project.
**Exemplar metadata for mapping products to be released via VESPA:**
A set of mandatory, documented metadata for VESPA services exists (
_https://voparisconfluence.obspm.fr/display/VES/Implementing+a+VESPA+service_
), plus optional ones. These are mostly related to data products (in the PDS
sense), with some dataset-wide. Individual-unit granularity is not yet
covered by VESPA technical capabilities.
New developments of VESPA (currently not implemented, but envisaged for
future VESPA developments within the lifetime of PLANMAP) should allow for
metadata-based discovery and search that could extend the geographic data
search and experiment metadata search with feature/unit data/metadata search.
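For context, here is a minimal sketch of how a client might query an EPN-TAP
service with the astropy-affiliated pyvo package; the service URL and schema
name are placeholders for whichever VESPA service ends up hosting PLANMAP
products, while the selected columns are standard EPN-TAP parameters:

```python
import pyvo

# Hypothetical EPN-TAP service endpoint; VESPA services expose a table
# named <schema>.epn_core with standard columns such as granule_uid,
# target_name, dataproduct_type and access_url.
service = pyvo.dal.TAPService("http://example.org/tap")

query = """
    SELECT granule_uid, target_name, dataproduct_type, access_url
    FROM planmap.epn_core
    WHERE target_name = 'Mercury'
"""
for row in service.search(query):
    print(row["granule_uid"], row["access_url"])
```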
## ESA PSA data deliveries
Individual data products already existing in planetary archives (PDS, PSA)
will not be released to the PSA, as they are already available there in
either raw or processed form.
**Exemplary metadata for mapping products to be released via PSA:**
Release to PSA of non-PDS geological mapping data will include relevant
documentation covering, at minimum:
* target body
* geographic extent (bounding box) of the mapping product
* CRS
* additional fields as described in the map-wide metadata table in the sections above.
Data exchange formats for archived data will include:
* For raster data: preferentially GeoTIFF
* For vector data: preferentially OGC GeoPackage
* Additional files, or the same release raster files provided in different formats, might include e.g. the ISIS3 cube (.cub) format, ENVI (.img + .hdr) or similar.

A copy of the data in GeoTIFF (raster) and GeoPackage (vector) will in any
case be provided, where relevant (see the sketch below).
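As a sketch of producing these exchange formats with the commonly used
geopandas and rasterio packages (input file names are placeholders, and
reading an ISIS3 cube assumes GDAL's ISIS3 driver is available in the local
build):

```python
import geopandas as gpd
import rasterio

# Vector: convert a mapping layer (e.g. geological units) to OGC GeoPackage.
units = gpd.read_file("geologic_units.shp")        # placeholder input
units.to_file("geologic_units.gpkg", driver="GPKG")

# Raster: re-write a basemap as GeoTIFF, preserving CRS and geotransform.
with rasterio.open("basemap.cub") as src:          # placeholder input
    profile = src.profile
    profile.update(driver="GTiff")
    with rasterio.open("basemap.tif", "w", **profile) as dst:
        dst.write(src.read())
```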
# 0130_ReMAP_769288.md
# Introduction
## **Project Summary**
ReMAP “Real-time Condition-based Maintenance for adaptive Aircraft Maintenance
Planning” (hereinafter also referred to as “ReMAP” or “the project”) is a
European project that started on the 1st of June 2018 and has a duration of
four years. The project addresses the specific challenge of taking a step
forward in the adoption of Condition-Based Maintenance in the aviation
sector. To achieve this, a data-driven approach will be implemented, based on
hybrid machine learning and physics-based algorithms for systems, and
data-driven probabilistic algorithms for systems and structures. A similar
approach will be followed to develop a maintenance management optimisation
solution, capable of adapting to the real-time health conditions of the
aircraft fleet. These algorithms will run on an open-source IT platform for
adaptive fleet maintenance management. The proposed Condition-Based
Maintenance solution will be evaluated according to a safety risk assessment,
ensuring its reliable implementation and promoting an informed discussion on
regulatory challenges and concrete actions towards the certification of
Condition-Based Maintenance.
## **Purpose of this Document**
Deliverable D9.3 Data Management Plan (DMP) addresses the way research data
are managed in the ReMAP project within the Open Research Data Pilot (ORD
Pilot). The ORD pilot aims to improve and maximise access to and re-use of
research data generated by Horizon 2020 projects, considering the need to
balance openness with the protection of sensitive information,
commercialisation and Intellectual Property Rights (IPR), privacy concerns,
and questions of data management and preservation.
DMPs are a key element of good data management, as they describe the
management of the data to be collected, processed and published during a
research project, creating awareness about research data management topics
such as data storage, backup, data access, data sharing, archiving and
licensing.
ReMAP hereby states its adherence to the FAIR 1 data principles, whereby
research data are made Findable, Accessible, Interoperable and Re-usable for
the community, responsibly considering possible restrictions on public data
sharing.
## **Context**
This Data Management Plan is closely linked to the report on D10.1 Ethics
Requirements – POPD, submitted to the European Commission at the end of
October 2018; section 4 of that report gave a general overview of the ReMAP
data management strategy regarding interview data.
In the following chapters, we develop the ReMAP DMP making use of the UK
Digital Curation Centre template for an _Initial DMP_ . It is acknowledged
that a DMP is a living document; therefore, as the implementation of the
project progresses and significant changes occur, we will update this plan
accordingly, at a finer level of granularity, at the end of each project
period (M18, M36 and M48), using the templates for the _Detailed DMP_ and
_Final Review DMP_ .
It is important to mention that, at the outset of the project, we have engaged
the Data Steward of the Faculty of Aerospace Engineering at Delft University
of Technology (TUD), Dr. Heather Andrews ([email protected]). TUD
has appointed dedicated Data Stewards 2 at every faculty with the goal of
improving awareness of good research data management practices in a
disciplinary manner. Data Stewards are then the first point of contact for
research data management advice. The input and guidance of the faculty’s Data
Steward are at the basis of this plan.
# Data summary
### Purpose
The aim of ReMAP is to develop a maintenance management optimisation solution
to monitor the real-time health condition of aircraft. The algorithms
resulting from this project will run on an open-source IT platform built by
the consortium.
The data collected, stored, protected and analysed throughout this project
consist of:
* personal data from interviews and workshop participants (see D10.1 Ethics Requirements – POPD deliverable) and,
* technical data (KLM operations, aircraft sensors and structural laboratory tests).
Personal data from interviews and workshop participants will be used for both
dissemination purposes and research purposes. Data collection, storage,
protection and analysis procedures regarding personal data used for
dissemination purposes have already been presented in the D10.1 Ethics
Requirements – POPD deliverable. In this DMP, only the management of
interview data used for research purposes will be discussed.
Regarding the technical data, this consists of: i) data provided by KLM on
aircraft operations, health monitoring and flight information; ii) laboratory
test data on aircraft structural elements; iii) programming algorithms (code)
to analyse and model different aspects of the maintenance management of
aircraft; iv) technology design and assessment results; and v) external data
collected from multiple sources relevant to the project (e.g., EUROCONTROL
Monoradar Weather Data or the EASA Air quality data repository).
_Table 1. Research activities to be done per partner_ describes the research
that will be carried out by each collaborating partner, and its purpose,
focusing on the technical data mentioned above.
<table>
<tr>
<th>
ATOS
</th>
<th>
_IT Platform for Integrated Fleet Health Management solution (IFHM)_
Development of an IT platform to collect data from systems and sensors, and
provide it to the different algorithms and decision support solutions.
Collaborating partners: ENSAM, IPN, KLM, TUD
</th> </tr>
<tr>
<td>
CTEC & STEC
</td>
<td>
_Development of sensor technology for Structural Health Management (SHM)_
Procurement, development and integration of promising sensor technologies for
damage monitoring in aeronautical composite structures.
Collaborating partners: ENSAM, UPAT
</td> </tr>
<tr>
<td>
ENSAM
</td>
<td>
_Damage monitoring of complex aeronautic structures by means of Ultrasonic
Lamb Waves_
Develop hardware and software systems able to monitor damages in composite
structures by means of Lamb waves emitted and received by piezoelectric
elements.
Collaborating partners: CTEC
</td> </tr>
<tr>
<td>
EMB
</td>
<td>
_Development of predictive algorithms for aircraft systems Prognostics &
Health Monitoring (PHM) _
Develop algorithms for predicting the Remaining Useful Life (RUL) of aircraft,
based on data from aircraft models provided by KLM (KLC).
Collaborating partners: ATOS, KLM, ONERA, UTRCI, UC
</td> </tr>
<tr>
<td>
KLM
</td>
<td>
_Development, verification and test of an IFHM_
KLM will provide the data to the rest of the partners. Data provided by KLM
consist of operations data, aircraft health monitoring data, and flight data
for different aircraft models. These datasets are commercially and
safety-sensitive; thus, they are subject to strict institutional and national
rules and protocols (see Annex A).
Collaborating partners: ATOS, EMB, ONERA, TUD, UTRCI, UC
</td> </tr>
<tr>
<td>
ONERA
</td>
<td>
_Safety risk assessment of the IFHM_
Identification of hazards and safety barriers related with CBM technologies.
Future CBM regulations and industrial processes discussion.
Collaborating partners: EMB, KLM, ONERA, TUD
</td> </tr>
<tr>
<td>
OPT
</td>
<td>
_Design and manufacture of aircraft structure coupons and study of ReMAP’s
impact on aircraft weight_
Design, test and manufacture components for experimental tests (WP4). Study
of the impact of a Condition-Based Maintenance (CBM) approach on weight
reduction of aircraft structures.
</td> </tr>
<tr>
<td>
</td>
<td>
Collaborating partners: EMB, TUD, UPAT
</td> </tr>
<tr>
<td>
TUD
</td>
<td>
_Development of predictive algorithms for aircraft structures and systems &
maintenance scheduling decision support _ _tool & safety risk assessment _
Develop validated multi-disciplinary SHM system methodologies towards
remaining useful life estimation (prognosis) in the presence of adverse
conditions during flight. Several sensing technologies are going to be used
along with an ambitious extended test campaign. This campaign will result in a
massive SHM database upon which the diagnostic and prognostic methodologies
are going to be developed and validated.
Development of the adaptive aircraft fleet maintenance schedule solution,
including the definition of an uncertainty mapping.
Model development for safety assessment of CBM technologies included in the
_IFHM._
Collaborating partners: CTEC, EMB, ENSAM, OPT, STEC, TUD
</td> </tr>
<tr>
<td>
UTRCI
</td>
<td>
_Development of system level Prognostics & Health Monitoring (PHM) &
Condition-Based Monitoring (CBM) _ _technologies_
Develop PHM models to predict and detect degradation and failures in aircraft
systems and components by using data on aircrafts, weather conditions,
component removals, among others.
Collaborating partners: ATOS, EMB, KLM, ONERA, UC
</td> </tr>
<tr>
<td>
UC
</td>
<td>
_Development of system level Prognostics & Health Monitoring (PHM) &
maintenance planning decision support tool _ Enabling edge computing and
Actionable information extraction and visualization for optimal maintenance
Development of efficient machine learning algorithms for optimal maintenance.
Development of a user interface for the maintenance planning decision support
tool. Development of an adaptive plan and uncertainty mapping.
Collaborating partners: ATOS, EMB, IPN, KLM, ONERA, TUD
</td> </tr>
<tr>
<td>
UPAT
</td>
<td>
_Structural Health Management: Diagnostics & Remaining Useful Life (RUL)
Prognostics _
Develop validated multi-disciplinary SHM system methodologies towards
remaining useful life estimation (prognosis) in the presence of adverse
conditions during flight. Several sensing technologies are going to be used
along with an ambitious extended test campaign. This campaign will result in a
massive SHM database upon which the diagnostic and prognostic methodologies
are going to be developed and validated.
Collaborating partners: CTEC, EMB, ENSAM, OPT, STEC, TUD
</td> </tr> </table>
**Table 1. Research activities to be done per partner**
### Data Types and Formats
As explained above, there are two main sets of data in this project: technical
data (KLM data, laboratory data, programming algorithms, design and assessment
data, external data) and personal data.
_Technical data:_
The IFHM solution resulting from ReMAP will be developed, validated and
demonstrated based on KLM’s operational data. KLM will follow internal
protocols to anonymize all data before sharing it with the consortium
partners, meaning that the research team will not have access to personal
data. The KLM operational data includes:
— Aircraft Health Monitoring data, used as input and/or validation data for
to-be-developed model for assessing condition and prognosis of individual
aircraft systems. This data is owned by KLM and it is restricted data under
governmental and company regulations.
— Aircraft Maintenance data, used to support the development of the
maintenance schedule solution and as validation data for the to-be-developed
models for assessing the condition and prognosis of individual aircraft
systems. These data are owned by KLM and are restricted under governmental
and company regulations.
— Risk assessment data, used to map and mitigate the operational, technical,
commercial, economic and health & safety risks associated with aircraft
maintenance. These data are owned by KLM and are restricted under
governmental and company regulations.
Most of these datasets correspond to log files, reporting documents and
tabular data. Health monitoring data and flight data from aircraft models are
observational data from sensor measurements. KLM will provide the monitoring
data to partners in .csv format, while log files and reports will be provided
in .pdf format. It is important to mention that no personal information (e.g.
regarding ground staff, flight crew, etc.) will be disclosed by KLM, nor any
information that can, directly or indirectly, be linked to KLM staff.
Aside from KLM’s data, there will be experimental data from laboratory tests
on aircraft structural elements and on composite generic elements and
subcomponents typically found in modern commercial aircraft. This type of
data will be generated during the project at UPAT and TUD premises. The data
correspond to sensor recordings obtained during the tests, as well as Finite
Element Analysis outputs from simulation work. Various file formats will be
involved, depending on the monitoring technique and the associated software
used to record the data. However, all data will be converted to .txt, .csv or
.dat files after some raw data processing, to increase data interoperability.
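A minimal sketch of such a conversion step, assuming a proprietary recorder
exports tab-separated text; the file names and column labels below are
placeholders rather than actual ReMAP channels:

```python
import pandas as pd

# Hypothetical raw sensor export (tab-separated, no header) from a
# structural test rig; column names are placeholders.
raw = pd.read_csv(
    "strain_gauge_raw.txt", sep="\t", header=None,
    names=["time_s", "channel_1_microstrain", "channel_2_microstrain"],
)

# Basic cleaning before sharing: drop incomplete rows and sort by time.
raw = raw.dropna().sort_values("time_s")

# Write the interoperable .csv file mandated for shared laboratory data.
raw.to_csv("strain_gauge_clean.csv", index=False)
```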
The code generated throughout the project will mainly be in the MATLAB,
Python and R languages. For development and validation, researchers will also
make use of external data (e.g. on weather conditions, pollution information,
etc.) collected from multiple sources, including public repositories and
other existing data gathering channels available to partners (e.g.,
EUROCONTROL Monoradar Weather Data or the EASA Air quality data repository).
Finally, the information generated with the design and development of sensor
technology and aircraft structures will be produced during the project,
together with data related with the assessment of the performance of the
multiple technologies developed for the IFHM solution proposed.
_Personal Data:_
As mentioned in Section 3 of D10.1 Ethics Requirements – POPD deliverable, the
project will carry out interviews to team members and external workshop
participants. In order to collect, store and use the personal data from
interviews, the consortium shall seek the informed consent of each individual,
following the policy of the EU for Data Protection (see D10.1 Ethics
Requirements – POPD deliverable). The individual subjects will be informed
about all aspects of the research in which they are being asked to participate
and the future use of the data they might provide.
The interviews will be recorded as audio-visual footage. The recording of
each interview will be stored on the work laptop/computer of the IPN
researcher in charge of the interview. The data will be saved in a private
password-protected folder accessed only by the respective IPN researcher. The
IPN team or another partner will anonymise and transcribe the data (e.g. into
.docx files). The transcriptions will then be shared with relevant consortium
researchers via SURFdrive ( _https://www.surfdrive.nl_ ), a
password-protected cloud storage service. The raw interview data will then be
transferred to a private repository in the DataverseNL environment (
_https://dataverse.nl/_ ). This environment is expected to be safe enough for
the recorded material. In case the raw interview data contain highly
sensitive information, the data will be saved on a Project Data drive at TUD;
this drive is maintained by TUD IT and is meant for confidential data. The
folder containing the raw data will be managed by Dimitrios Zarouchas
([email protected]) from TUD, and will be accessible to the project coordinator
Bruno Santos ([email protected]) and to Mónica Ferreira ([email protected]), the
IPN coordinator for WP9.
If it becomes pertinent for the research purpose of the project, the
interview data might be released to the public. This will happen only if the
respective interviewee agrees to it via email, in reaction to a consent
request sent by either IPN (WP9 leader) or TUD (project coordinator)
explaining the context, the purpose, the content to be made public and the
right to reject the request, which is assumed by default. If any interviewee
asks that the recordings not be kept beyond the notes taken, the recordings
will be deleted from the repositories and only the notes will be circulated
among the consortium.
### Data Size
The estimated size of the data delivered by KLM to partners is about 2 TB in
total, taking as a reference data from 30 aircraft operating during a time
frame of 3 years. Included in these 2 TB is also the external data (e.g.,
weather conditions and pollution information) taken from public repositories
and/or data gathering channels available to partners.
The estimated size of the laboratory data is expected to be in the order of
100 GB.
The processing of the data and its use for algorithm development might be on
the order of 1 TB.
The data regarding the design and development of sensor technologies and
aircraft technology, together with the data resulting from the technology
assessment, should be on the order of a few tens of GB.
The estimated size of the interview data, considering the audio-visual
footage, is undetermined, but it may be on the order of a few hundred GB
(considering they will mainly be audio files plus the transcriptions in .docx
documents).
### Data Utility
The final outcome of ReMAP will be useful for aircraft manufacturers (OEMs),
maintenance service providers and airlines around the world. The data
generated throughout the project will be useful as follows:
— The data from the laboratory tests will be useful to structural scientists,
OEMs and airline researchers, for the future development and training of SHM
prognostics and diagnosis data-driven algorithms.
— The sensor technology design and reliability performance data will be useful
to sensor companies and OEMs, for the analysis and development of sensor
solutions for SHM in future aircraft.
— The safety risk assessment data will be essential for the development of the
common roadmap towards the implementation of CBM in practice. In particular,
it will be useful for the discussion with EASA about continuing airworthiness
regulations Part-M and Part-145, and with the Maintenance Steering Group-3 (or
MSG-3) regarding aircraft maintenance procedures and how they could be adapted
to CBM.
— The IFHM validation and test data will also be essential for the development
of the common roadmap. In this case, regarding the involvement of maintenance
service providers and airlines in a common solution to the implementation of
CBM. The results will test the overall capability of the multiple CBM
technologies developed, including the adaptive aircraft maintenance schedule
approach proposed.
# FAIR data
## **Making data findable, including provisions for metadata**
#### Metadata
During the research project, KLM will deliver the data to partners in a
structured way with proper documentation (README file) indicating (at least):
1. Data origin and collection methodology
2. Structure and organisation of data files
3. Data manipulations applied prior to sharing
4. Variable names and descriptors (if applicable)
5. Definitions of terminology and acronyms
At the moment there are no metadata standards defined, but KLM might select a
metadata standard at a later stage for partners to use when describing the
data. Even though KLM data will not be disclosed to the public (because of
safety and commercial reasons; see Annex A), such a standard would be the one
used for other publishable datasets, for consistency within the project. If no
discipline-specific standard is used, then the Dublin Core ( _www.dublincore.org_
) metadata standard will be adopted (and further information on the data will
be delivered in file headers or in separate documentation files). This last
statement also applies to documentation files (e.g. reports) and other types
of data, including experimental data and input/output tabular data for code
training, testing and validation.
Regarding code, it will be developed and managed mainly via GitLab
(partners may use their own account or an account provided by ATOS) and
Subversion (SVN), both of which allow metadata to be attached easily. The
metadata standard to be adopted for all technology codes will be discussed as
part of the IT platform requirements and specifications (WP2). As stated
above, if no discipline-specific standard is used, then the Dublin Core
metadata standard will be adopted (and further information on the data will be
delivered in file headers or in separate documentation files).
Once scientific journal publications are published (in Open Access),
publishable data (according to the Consortium Agreement) will be publicly
archived for the long term via the 4TU.Centre for Research Data archive
(documentation, experimental data and tabular data;
_https://researchdata.4tu.nl/en/home/_ ), or similar online archives,
following their metadata standards (Dublin Core). TUD researchers can
currently upload up to 1 TB of data per year free of charge. This should
suffice for the data that will be archived for the long term. The 4TU.Centre
for Research Data Archive ensures data will be well preserved and findable in
the long term (each uploaded dataset is given a unique persistent digital
identifier). In order to allow for responsible public reuse of the data,
datasets will be publicly released under an open content license (CC-BY). More
specifically, metadata of the files in the dataset will be given in XML format
(if necessary) following the standards agreed upon during the project.
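To make the above concrete, the sketch below shows how a minimal Dublin Core
record could be generated in XML; it is an illustration only, and every field
value is a placeholder rather than an actual ReMAP archive entry.

```python
# Minimal sketch of a Dublin Core metadata record in XML, of the kind
# that could accompany a dataset archived at 4TU.Centre for Research
# Data. All field values below are placeholders, not real records.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

record = ET.Element("metadata")
for element, value in [
    ("title", "ReMAP example laboratory dataset"),      # placeholder
    ("creator", "ReMAP consortium"),
    ("date", "2020-01-01"),                             # placeholder
    ("format", "text/csv"),
    ("rights", "CC-BY 4.0"),
    ("identifier", "doi:10.4121/uuid-placeholder"),     # placeholder DOI
]:
    child = ET.SubElement(record, "{%s}%s" % (DC_NS, element))
    child.text = value

ET.ElementTree(record).write("dataset_metadata.xml",
                             encoding="utf-8", xml_declaration=True)
```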
#### Version Control and File Naming Conventions
Partners working on programming algorithms will use GitLab repositories to
work collaboratively with their research team members. GitLab allows for clear
code version management, and it is already available to the respective
partners.
The Subversion tool will also be used by some partners to keep track of
versioning for documents related to other technical data files.
Reports and other types of data will be managed manually, for which file
naming conventions will be followed as the project progresses. These will be
described in D1.1 Project Handbook.
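Since the conventions will only be fixed in D1.1, the helper below is purely
illustrative of what a dated, versioned naming pattern could look like; the
`ReMAP_<WP>_<description>_<date>_v<version>` pattern is an assumption, not the
agreed convention.

```python
# Illustrative only: a possible file naming helper. The actual
# convention will be defined in D1.1 Project Handbook.
from datetime import date

def build_filename(wp: str, description: str, version: int,
                   ext: str = "csv") -> str:
    """Compose a dated, versioned file name for a project output."""
    stamp = date.today().isoformat()
    return f"ReMAP_{wp}_{description}_{stamp}_v{version:02d}.{ext}"

print(build_filename("WP5", "fatigue-test", 3))
# e.g. ReMAP_WP5_fatigue-test_2019-06-01_v03.csv
```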
## **Making data openly accessible:**
The data provided by KLM cannot be open to the public for safety and
commercial reasons (see Annex A for the provisions for KLM Operational Data
use). Processed and/or analysed data might be released to the public via the
4TU.Centre for Research Data Archive after proper discussion with KLM (KLC).
Codes and auxiliary scripts built upon the processed data provided by KLM
might be released via GitLab after proper discussion with all partners.
During the project, laboratory data will be accessible only to consortium
partners. Some processed data might be subject to Intellectual Property Rights
of the respective partner(s) that generate the data, and thus, will be
restricted for use within the consortium only. Whenever a journal article is
ready to be published (in Open Access), the laboratory data related to the
journal article will be released openly to the public via the 4TU.Centre for
Research Data Archive. This data includes all data necessary for the re-use of
the results, as well as the data needed to validate them.
The final technological outcome of ReMAP will be an open IT cloud platform
where the finalized algorithms can work in an interoperable manner. This IT
platform will be open source. The platform will be built in a modular way,
following an architecture that will allow the integration of third-party data
analytics and maintenance management solutions, or the exploitation of
solutions developed by the ReMAP consortium (the latter not necessarily open).
Regarding the interviews with team members and workshop participants, the raw
interview data (audio-visual footage) will not be released to the public. It
will only be accessible to relevant partners for a maximum period of 10 years
and erased afterwards. The data will be kept in the long term for recording
and auditing of the project, and so it can be used for future research and
learning, provided the interviewee grants such permission.
Nonetheless, as described, the participants have the “Right to be forgotten”.
They can request at any time the elimination of their personal data (such as
names, emails, contact details, information from interviews) stored in the
project data storage services and IPN servers.
If requested by the participants when giving consent for the interview, only
anonymized transcripts of the interviews might be used for research purposes.
The anonymized transcripts will be available only to relevant consortium
partners during the project. If the transcripts are used as material for a
journal article, then the anonymized transcripts will be published via the
4TU.Centre for Research Data Archive at the same time the journal publication
is released.
The outcomes of workshops that can be made public will be shared via
DataverseNL, with references to them on the project’s website. Unless informed
consent is given, the public outcomes from these workshops will be free of any
personal references. The workshop coordinator will be responsible for
preparing these results and anonymizing the information when necessary.
#### Documentation and Software for Data Access
As all openly publishable material will be made available via the 4TU.Centre
for Research Data Archive, DataverseNL or GitLab, the datasets will be easily
downloadable from these platforms together with the respective metadata.
Open and standard formats will be preferred for archived data files (e.g.,
.csv, .txt) and code (e.g., python). Proper documentation files will be
delivered together with the datasets in order to facilitate reuse of data.
In some cases, MATLAB files (.mat) will also be released, as MATLAB will be
used to record, process and visualize data in some research lines of ReMAP.
MATLAB is licensed software, widely used in the engineering community, and
it is already available to all partners. CATIA software will also be used for
computer-aided engineering. This is licensed software already available to
partners. Output design files of CATIA software will be converted to other
formats whenever the data can be open to the public, to facilitate reuse.
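As an illustration of such a conversion, the sketch below reads a hypothetical
.mat file with SciPy and exports a single variable to CSV; the file name and
the variable name `strain` are assumptions rather than actual ReMAP outputs.

```python
# Minimal sketch of converting a MATLAB .mat file to an open CSV
# format for public release. File and variable names are hypothetical.
import numpy as np
from scipy.io import loadmat

data = loadmat("lab_test.mat")        # dict mapping variable names to arrays
strain = np.asarray(data["strain"])   # "strain" is a hypothetical variable
np.savetxt("lab_test_strain.csv", strain.reshape(strain.shape[0], -1),
           delimiter=",", header="strain exported from lab_test.mat")
```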
## **Making data interoperable:**
As mentioned above, all publishable data will be delivered in open and
standard data formats. Discipline-specific metadata is currently under
discussion. If applicable, metadata will be delivered in XML/JSON format
together with the data (depending on the chosen format).
Proper documentation (README) files will be delivered accordingly. Tabular
data and codes (auxiliary scripts) will be archived with informative and
explanatory headers to facilitate data re-use and interoperability.
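As an example of what such an explanatory header could look like, the sketch
below writes a small CSV file with commented header lines; the column names,
units and values are hypothetical.

```python
# Sketch of archiving tabular data with an explanatory header;
# the columns, units and rows are illustrative examples only.
rows = [(0.0, 1.25, 310), (0.1, 1.31, 324)]   # illustrative data only

with open("fatigue_test.csv", "w") as f:
    f.write("# ReMAP example dataset - laboratory fatigue test\n")
    f.write("# Columns: time [s], load [kN], strain [microstrain]\n")
    f.write("# See the accompanying README file for collection details\n")
    f.write("time,load,strain\n")
    for time, load, strain in rows:
        f.write(f"{time},{load},{strain}\n")
```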
All code will be managed via GitLab and/or SVN; both are platforms that
encourage interoperability between different workflows.
The final IFHM solution will gather different algorithms that will be
interoperable with each other. The open IT platform on which the algorithms
will be implemented will have proper documentation and manuals for
researchers to use.
## **Increase data re-use (through clarifying licenses):**
The data provided by KLM may never be disclosed, for safety and commercial
reasons, neither after the end of the project nor after the end of the 4-year
non-disclosure term stated in the Consortium Agreement (see Annex A). All
other data that cannot be disclosed (except KLM data) will be kept on the
respective institutional server for the long term (at least 4 years after the
end of the project), accessed only by team members within the institution, for
auditing and validation purposes. It is also acknowledged that, for some of
the outcomes, copyright and IPR rules as stated in the Consortium Agreement
may apply. This will include some of the _Prognostics & Health Monitoring
(PHM) solutions to be developed_ . Any subsequent release of this data will be
determined based on an internal review and a decision by the Steering
Committee, supported by the Project Board.
Since the results from this project will make a strong impact on the aviation
sector (airlines, manufacturers, maintenance service providers, etc.), we find
it extremely important to share the data responsibly. Hence, datasets that
will be open to the public will be released alongside the scientific journal
publications, after proper discussion with partners. The datasets will be
published via repositories such as GitLab (algorithms, code, auxiliary
scripts) and the 4TU.Centre for Research Data Archive (documentation, images,
tabular data, etc.) under open content licenses in order to increase data
re-use (e.g., the CC-BY license for documentation and the MIT license for
software and code). In the same way, and in order to motivate re-use of data,
the journal articles associated with these datasets will be published in open
access and/or self-archived on ReMAP’s website and in subject repositories,
following the publishers’ self-archiving policies.
Regarding the final IT platform, this will be distributed, deployed, and
exploited according to an open source license. Users may integrate their own
data analytics solutions into the platform or use ReMAP’s proposed solutions
(both those made available in the public domain and those copyrighted and
subject to usage fees).
# Allocation of resources
### Costs
In principle, no costs are expected for the archiving of the publishable data
via the 4TU.Centre for Research Data Archive or via GitLab. TUD researchers
can currently upload up to 1 TB of data per year to the 4TU.Centre for
Research Data Archive free of charge. TUD researchers also have free-of-charge
access to the DataverseNL environment. Also, most of the software used for
version control and data processing is already available at each institution,
as are the storage capacity and privately accessed drives managed by each
institution’s IT department.
The IT platform will be developed and implemented in ATOS infrastructure,
using Cloud services with rental costs included in the budget of the project
(ATOS ‘other goods and services’ budget).
### Responsibilities
The following table specifies the team members who will be in charge of the
management of the data within each research line of study. It is important to
mention that each institution has support staff who can provide advice
whenever data management issues arise, and who will be contacted if
necessary.
<table>
<tr>
<th>
Partner
</th>
<th>
Name
</th>
<th>
Email address
</th> </tr>
<tr>
<td>
TUD
</td>
<td>
Dimitrios Zarouchas
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
ATOS
</td>
<td>
Javier García Hernández
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
ENSAM
</td>
<td>
Nazih Mechbal
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
CTEC
</td>
<td>
Frank Claeyssen
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
EMB
</td>
<td>
Rúben Menezes
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
KLM
</td>
<td>
Floris Freeman
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
OPT
</td>
<td>
Nicole Cruz
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
UTRCI
</td>
<td>
Anarta Ghosh
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
UC
</td>
<td>
Bernardete Ribeiro
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
UPAT
</td>
<td>
Theodoros Loutas
</td>
<td>
[email protected]
</td> </tr> </table>
In case any of the above team members is unavailable, other research team
members (within the same institution) will have access to the data, as all
data will be stored on the respective institutional servers, with access
provided only to team members.
# Data security
KLM will take care of anonymizing the data before sharing it with partners.
Servers that are set up by KLM will have a redundancy scheme (e.g. RAID5 or an
alternative) or a backup plan to mitigate risks of disk failure. KLM internal
servers will keep a copy of the original raw data, in case the processed data
is no longer available. Also, KLM will monitor ReMAP’s progress through an
internal knowledge management system (e.g. Confluence). All data that KLM
staff generate for ReMAP will be stored in an Office365 or similar
environment, such that other KLM staff can pick up work in case of long-term
illness or unforeseen staff changes.
Some data will be processed on work laptops of research team members only when
allowed (given the sensitivity of the data). Master copies will be kept on the
drives of each respective institution. The IT departments of each institution
will maintain the data regarding backups (redundancy) and secure storage
(access restricted to team members only). Only team members within each
institution will have access to the data during the research project. Such
data access will be set up by the respective IT departments of each
institution. The data that will remain closed to the public will be archived
on each partner’s servers for at least 4 years after the end of the project.
SURFdrive will be used for temporary data storage and for data sharing among
different partners, coordinated by TUD (coordinator). Google Drive might also
be used for the sharing and temporary storage of non-sensitive data.
# Ethical aspects
Please refer to the D10.1 Ethics Requirements – Protection of Personal Data
(POPD) deliverable for the management of personal data for both communication
and research purposes.
It is important to mention that, in case ethics-related questions or issues
arise throughout the project, these will be reported to Bruno Santos
( [email protected]_ ) and will be discussed accordingly among team
members. Additional advice can be sought from the Human Research Ethics
Committee of TUD (at [email protected]_ ).
# Other
ReMAP will make use of the TUD Research Data Framework Policy, which can be
found at:
_https://www.tudelft.nl/en/2018/library/researchdatamanagement/tu-delft-
research-data-framework-policy-published/_
# Executive Summary
This deliverable is the first version of the SySTEM 2020 Data Management Plan
(DMP). It has been written in detail so as to be a useful support tool for the
project consortium in managing the data collected over the duration of the
project.
The DMP presents a data summary, describes the provisions for sharing FAIR
data, and addresses data security. It also addresses the allocation of
resources and data management roles and responsibilities. In addition, an
appendix is included containing the Grant Agreement and Consortium Agreement
Provisions.
The DMP will be constantly updated throughout the life of the project,
including before the first assessment (M15, July 2019) and at the end of the
project before the final review (M36, June 2021). It will also be reviewed if
there are any significant changes that affect the project, such as changes in
relevant policies, necessary adaptations to research methodologies, or any
other developments that occur that affect data management.
This deliverable includes a guide for data collection, storage and the
ongoing and future activities of handling data, during and even after the
project is completed. The detailed information included in the DMP covers the
data management lifecycle for all data sets that will be collected, processed
or generated by the research project. Further, the methodology and standards
are outlined, and it is explained whether and how this data will be shared
and/or made open, and how it will be curated and preserved.
# Introduction
SySTEM 2020 will focus on science learning outside the classroom, mapping the
field across Europe, evaluating a number of transdisciplinary programmes to
design best principles for educators in this field, and also examining
individual learning ecologies by piloting self-evaluation tools for learners
which will document science learning outside of the classroom. This study will
map practices in 18 EU countries and Israel, including in-depth studies in 8
of these countries, covering learners between 9–20 years from various
backgrounds, including those from geographically remote, socio-economically
disadvantaged, minority and/or migrant communities. This document (Deliverable
1.5 “Data Management Plan”) describes the plan for data management for the
duration of the SySTEM 2020 project, and how data will be made available after
the end of the project (M36, July 2021).
Data is collected for the duration of the project using the following data
collection methods:
* A longitudinal questionnaire which surveys young people aged 9–20 years in all of the 19 participating countries on their individual learning ecologies in and outside of the school setting. A consent sheet for the guardians of minor respondents, as well as for the respondents themselves, is set up.
* Experience sampling method (ESM) is used to implement a further, smaller case survey, which is going to be answered by a subset of the young learners participating in the longitudinal questionnaire using an app run on their own smartphone. Alongside this, a smaller subset will also be involved in creating Learning Portfolios and Self-Monitoring Tools.
* A large part of the project is the creation of an online data visualisation of STE(A)M initiatives outside the classroom across Europe and beyond. Requirements for the data acquisition, access and publishing for this visualisation can be found in detail in D2.2 _User requirements and parameters for the map_ .
The rest of this document is structured as follows:
* Section 2 is a brief overview of FAIR data and the legal framework, including the EU regulation on personal data protection (GDPR) and the H2020 provisions for open access to research data.
* Section 3 discusses the different data usage scenarios and the key issues to be examined in relation to each scenario. These issues include decisions on e.g. data anonymization, privacy and security protection measures, licensing etc.;
* Section 4 is the conclusion, with a description of how the document will be maintained in the future.
# FAIR data
## Overview
A good DMP under H2020 should comply with the FAIR Data Handling Principles.
Sharing data in line with the FAIR principles requires that the data is
Findable, Accessible, Interoperable and Reusable (FAIR).
The European Commission (2016) considers the FAIR principles fulfilled if a
DMP includes the following information:
1. “The handling of research data during and after the end of the project”
2. “What data will be collected, processed and/or generated”
3. “Which methodology and standards will be applied”
4. “Whether data will be shared/made open access”, and
5. “How data will be curated and preserved (including after the end of the project)”.
The above information is provided in Section 3 of this document:
1. Data summary (typologies and contents of data collected and produced)
2. Data collection (which procedures for collecting which data)
3. Data processing (which procedures for processing which data)
4. Data storage (data preservation and archiving during and after the project)
5. Data sharing (including provisions for open access)
## Legal framework
This section gives a brief overview of the key references making up the DMP's
external context. The next paragraphs respectively deal with:
1. The General Data Protection Regulation, which came into force in May 2018;
2. The terms of the H2020 Open Research Data Pilot (ORDP) the SySTEM 2020 consortium has adhered to;
3. The resulting, relevant provisions of both the Grant and the Consortium Agreements;
## The EU Personal Data Protection Regulation (GDPR)
Regulation (EU) 2016/679 sets out the new General Data Protection Regulation
(GDPR) framework in the EU, notably concerning the processing of personal data
belonging to EU citizens by individuals, companies or public sector/non-
government organisations, irrespective of their location. It is therefore
important that the SySTEM 2020 consortium takes GDPR into consideration in its
data management.
GDPR was adopted on 27 April 2016 and became enforceable on 25 May 2018,
after a two-year transition period. The regulation has replaced the
previous Data Protection Directive (95/46/EC) and its national
implementations. Being a regulation, not a directive, GDPR does not require
Member States to pass any enabling legislation but is directly binding and
applicable. The GDPR text is available on the Eur-Lex website. The GDPR
provisions do not apply to the processing of personal data of deceased persons
or of legal entities. Nor do they apply to data processed by an
individual for purely personal reasons, or to activities carried out at home,
provided there is no connection to a professional or commercial activity. When
an individual uses personal data outside the personal sphere, for socio-
cultural or financial activities, for example, then the data protection law
has to be respected.
On the other hand, the legislative definition of personal data is quite broad,
as it includes any information relating to an individual, whether it relates
to his or her private, professional or public life. It can be anything from a
name, a home address, a photo, an email address, bank details, posts on social
networking websites, medical information, or a computer’s IP address.
It is worth noting that the specific requirements of GDPR for privacy and
security will be separately dealt with in other SySTEM 2020 Deliverables [such
as D8.1 (H - Requirement No. 1); D8.2 (POPD - Requirement No. 2); D8.3 (OEI -
Requirement No. 3); D8.4 (OEI - Requirement No. 4); D8.5 (H - POPD –
Requirement No. 6)].
## Open Access in Horizon 2020
In H2020, the European Commission (EC) has launched a flexible pilot for open
access to research data (ORDP), aiming to improve and maximise access to and
reuse of research data generated by funded Research & Development (R&D)
projects, while at the same time taking into account the need to balance
openness with privacy and security concerns, protection of scientific
information, commercialisation and intellectual property rights (IPR). This
latter need is aided by an opt-out rule, according to which it is possible at
any stage - before or after the GA signature - to withdraw from the pilot, but
legitimate reasons must be given, such as IPR/privacy/data protection or
national security concerns.
With the Work Programme 2017, the ORDP has been extended to cover all H2020
thematic areas by default. This has in particular generated the obligation for
all consortia to deliver a Data Management Plan (DMP), in which they specify
what data the project will generate, whether or not it will be freely
disclosed, how it will be made accessible for verification and reuse, and how
it will be curated and preserved. The ORDP applies primarily to the data
needed to validate the results presented in scientific publications. Other
data can however be provided by the beneficiaries of H2020 projects on a
voluntary basis. The costs associated with the Gold Open Access rule, as well
as the creation of the DMP, can be claimed as eligible in any H2020 grant. The
SySTEM 2020 consortium has decided to adhere to the Green and Gold Open Access
rule.
# SySTEM 2020 Data Management Plan
In this section, the different data usage scenarios will be discussed in
detail, in relation to data summary, data collection, data processing, data
storage, data sharing, and finally data security. In this way, the data
management lifecycle of the SySTEM 2020 project will be presented in full.
The three scenarios that make up the SySTEM 2020 data management lifecycle
are:
1. Original data produced by the SySTEM 2020 consortium and/or individual members of it (e.g. the questionnaires with young people, the credentialisation tool and the population of the map);
2. Existing data already in possession of the SySTEM 2020 consortium and/or individual members of it prior to the beginning of the project (see Appendix 1);
3. Existing data sourced/procured by the SySTEM 2020 consortium and/or individual members of it during the project’s timeline.
It is also important to note that the datasets handled within the three above
scenarios can belong to either of these three categories:
* Confidential data (for business and/or privacy protection);
* Anonymised and Public data (these two aspects go hand in hand);
* Non anonymised data (the residual category).
## Data Summary
The following table summarizes the typologies and contents of data collected
and produced during the project timeline.
<table>
<tr>
<th>
</th>
<th colspan="3">
**TYPES OF DATASETS**
</th> </tr>
<tr>
<td>
**DATA USAGE SCENARIOS**
</td>
<td>
**Confidential**
</td>
<td>
**Anonymised,**
**Pseudonymised and**
**Public**
</td>
<td>
**Non anonymised**
</td> </tr>
<tr>
<td>
**Original data produced by the SySTEM 2020 consortium**
</td>
<td>
Any possible sensitive data from the questionnaires, the map, learning
portfolios, experience sampling method, codesign meeting and credentialisation
tool
Personal data from young people doing the surveys and their parents, such as
email and mail addresses and phone
numbers
New contacts established
</td>
<td>
Summaries of
questionnaires/interviews/
learning portfolios
Photos/videos of learners shot during the activities
will not include names
End user data, stakeholders and policy makers data on public
display
Contact data within deliverables
</td>
<td>
Photos/videos shot of adults during public events and project workshops
Audio recordings (e.g. Skype)
Data in the project internal repositories
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the SySTEM 2020 consortium and/or
partners**
</td>
<td>
Data embedded in some of the Background knowledge (see
Appendix A)
Contact databases
</td>
<td>
Data embedded in some of the Background
knowledge (see Appendix
A)
Data embedded in case studies materials
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data already in**
**possession of the SySTEM 2020 consortium and/or partners**
</td>
<td>
Raw data in possession of the pilots or of any third party involved in the
pilots
</td>
<td>
Free and open data (including from scientific and statistical publications)
</td>
<td>
N/A
</td> </tr> </table>
**Table 1** : Summary of relevant data for the SySTEM 2020 research agenda
* For any photos/videos shot during the project internal events and meetings, as well as public events related to the project, it is crucial to collect an informed consent form from all the participants, with an explicit disclaimer in case of intended publication of those personal images on e.g. newspapers, internet sites, or social media groups. This will bring the data back into the Confidential category, where it can be stored and/or processed for legitimate reasons. When sharing photos/videos of learners, no names will be provided, ensuring greater privacy.
* For any audio recordings stored, e.g. in the project’s official repository (currently Team Drive) or in individual partners’ repositories, care must be taken of the risk of involuntary disclosure and/or the consequences of misuse for any unauthorized purpose. The same applies to the personal data of each partner in the consortium.
* Informed consent forms must be signed (also electronically) by all participants taking part in questionnaires, learning portfolios and interviews. Detailed procedures on informed consent and their storage are reported in the deliverables for WP8.
* Informed consent is also required when using available contacts (be they pre-existing to the project or created through it) to disseminate information via e.g. newsletters or dedicated emails. In this respect, the GDPR provisions are particularly binding and must be carefully considered.
* As a general rule, access conferred to Background knowledge on a royalty free basis during a project execution does not involve the right to sublicense. Therefore, attention must be paid by each partner of SySTEM 2020 to ensure the respect of licensing conditions at all times and by every member of the team.
* This also applies to any dataset sourced or procured from third parties during the SySTEM 2020 project’s lifetime.
The following table describes how the DMP is most relevant for each WP:
<table>
<tr>
<th>
**Work**
**Package**
</th>
<th>
**The DMP is most relevant for...**
</th> </tr>
<tr>
<td>
WP1: MANAGE
</td>
<td>
How the DMP will be used as a document to define how the project data will
be collected and stored
</td> </tr>
<tr>
<td>
WP2: MAP
</td>
<td>
How the data collected in the map will be stored, how map participants will be
informed that the data is being collected and opt in to be contacted
</td> </tr>
<tr>
<td>
WP3:
EXAMINE
</td>
<td>
How the data from the questionnaires will be collected and stored, analysed
and then the results made public with responses anonymised
</td> </tr>
<tr>
<td>
WP4:
IDENTIFY &
CO-DESIGN
</td>
<td>
How the data collected during the contextual inquiry and co-design workshop
will be stored, analysed and shared – taking into account any needs to
anonymise results based on the sensitivity of the data.
</td> </tr>
<tr>
<td>
WP5:
DEVELOP &
EXECUTE
</td>
<td>
How the data collected through the self-evaluation tools for learners,
facilitators and organisers will be collected, stored and analysed; how the
Learning Portfolios will be stored, and anonymised or pseudonymised where
appropriate, before being shared publicly.
</td> </tr>
<tr>
<td>
WP6:
EVALUATE
</td>
<td>
How the data collected through experience sampling method and further
evaluation and self-reflection techniques can be collected, stored and
analysed according to this DMP. In accordance with the required consent,
collected data will be made available as open-access dataset.
</td> </tr>
<tr>
<td>
WP7: SHARE
</td>
<td>
How the results from the project will be shared via open access and made
public, while sensitive data still being anonymised.
</td> </tr>
<tr>
<td>
WP8: ETHICS
</td>
<td>
How data management will be referenced in the consent forms, and GDPR
followed.
</td> </tr> </table>
**Table 2** : Description of relevance of DMP for each WP in the SySTEM 2020
project
## Data Collection
The following table summarizes the procedures for collecting project related
data.
<table>
<tr>
<th>
</th>
<th colspan="3">
**TYPES OF DATASETS**
</th> </tr>
<tr>
<td>
**DATA USAGE SCENARIOS**
</td>
<td>
**Confidential**
</td>
<td>
**Anonymised and Public**
</td>
<td>
**Non anonymised**
</td> </tr>
<tr>
<td>
**Original data produced by the SySTEM 2020 consortium**
</td>
<td>
Questionnaires, experience sampling method, interviews, Learning Profiles,
workshops, meeting with stakeholders, co-design sessions, evaluation sessions
</td>
<td>
Newsletters
Publications
Open Access repositories
</td>
<td>
Events coverage
– directly or via specialised
agencies
A/V conferencing systems
Internal repositories
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the SySTEM 2020 consortium and/or
partners**
</td>
<td>
Seamless access and use during project execution
</td>
<td>
Seamless access and use during project execution
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data already in**
**possession of the SySTEM 2020 consortium and/or partners**
</td>
<td>
Licensed access and use during project execution
</td>
<td>
Free, open access and use during project execution
</td>
<td>
N/A
</td> </tr> </table>
**Table 3** : Summary of SySTEM 2020 data collection procedures
Data will be collected in both paper and digital forms (CSV, PDF, Word, xls
spreadsheets and textual documents will be the prevalent formats). For the
data collected via paper (e.g. the questionnaires), these documents will be
scanned by the partners and third parties and stored digitally. Original
copies will be destroyed within 2 months of being collected. In case of
audio/video recordings and images, the most appropriate standards will be
chosen and adopted (such as .gif, .jpg, .png, .mp3, .mp4, .mov and .flv).
Website pages can be created in .html and/or .xml formats.
Research data in the SySTEM 2020 project is primarily generated by the members
of the consortium. This data mainly takes the form of questionnaires, Learning
Portfolios, and data collected from the map. Research data generated
throughout the project includes primarily qualitative material, as well as
some quantitative information, e.g. information about the questionnaire
participants. Curated and anonymised materials will be made publicly
available. This includes the following:
* Data gathering and analysis templates;
* Templates and guidelines for evaluation data gathering and analysis;
* Completed templates from each pilot containing evaluation material from workshops and other activities/events carried out in the co-creation labs;
* Questionnaire responses (originally Microsoft Excel documents);
* Intermediate evaluation analysis outputs based on data gathered in workshops and other activities/events carried out in the co-creation labs.
## Data Processing
The following table summarizes the procedures for processing SySTEM 2020
related data that can be envisaged at this project’s stage.
<table>
<tr>
<th>
</th>
<th colspan="3">
**TYPES OF DATASETS**
</th> </tr>
<tr>
<td>
**DATA USAGE SCENARIOS**
</td>
<td>
**Confidential**
</td>
<td>
**Anonymised and Public**
</td>
<td>
**Non anonymised**
</td> </tr>
<tr>
<td>
**Original data produced by the SySTEM 2020 consortium**
</td>
<td>
Anonymisation
Analyses/
Visualisation
</td>
<td>
Qualitative and quantitative evaluation Analyses/
Visualisation
</td>
<td>
Selection/ destruction
Blurring of identities
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the SySTEM 2020 consortium and/or
partners**
</td>
<td>
Anonymisation
Statistical evaluation
</td>
<td>
Analyses/
Visualisation Qualitative and quantitative evaluation
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data already in**
**possession of the SySTEM 2020 consortium and/or partners**
</td>
<td>
Anonymisation
Statistical evaluation
</td>
<td>
Analyses/
Visualisation Qualitative and quantitative evaluation
</td>
<td>
N/A
</td> </tr> </table>
**Table 4** : Summary of SySTEM 2020 data processing procedures
State-of-the-art tools will be used to process/visualise the data used or
generated during the project. Typically, the partners are left free to adopt
their preferred suite (such as Microsoft Office™ for PC or Mac, Apple’s
iWork™ and OpenOffice™ or equivalent). However, the following tools
are the ones mainly used by the consortium:
* Google’s shared productivity tools (the so-called G-Suite™) are used for the co-creation of outputs by multiple, not co-located authors;
* Adobe Acrobat™ or equivalent software is used to visualise/create the PDF files;
* Photoshop™ or equivalent software is used to manipulate images;
* State-of-the-art browsers (such as Mozilla Firefox™, Google Chrome™, Apple Safari™ and Microsoft Internet Explorer™) are used to navigate and modify the Internet pages, including the management and maintenance of social media groups;
* Google Hangouts or Skype™ (depending on the number of participants) are the selected tools for audio/video conferencing, which may also serve to manage public webinars;
* Tools like LimeSurvey and Google Forms are used for the administration of online surveys with remotely located participants;
* Dedicated YouTube™ channels can help broadcast the video clips produced by the consortium to a wider international audience, in addition to the project website;
* Mailchimp™ or equivalent software is helpful to create, distribute and administer project newsletters and the underlying mailing lists;
* At the moment, only email is used for the consortium's internal communication flow.
For research data collected and generated in the project, a fit-for-purpose
file naming convention will be developed in accordance with best practice for
qualitative data, such as described by the UK Data Archive (2011). This will
involve identifying the most important metadata related to the various
research outputs. Key information includes content description, date of
creation, version, and location of where data was created.
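As a sketch of how this key information could be recorded alongside each
research output, the snippet below writes it to a JSON sidecar file; all field
values and the file name are placeholders, since the actual convention will
only be developed during the project following the UK Data Archive guidance.

```python
# Sketch only: recording the key metadata named above as a JSON
# sidecar next to a research output. All values are placeholders.
import json

metadata = {
    "content_description": "questionnaire responses, pilot country X",
    "date_of_creation": "2019-03-01",
    "version": "v02",
    "location_created": "co-creation lab, partner institution",
}

with open("questionnaire_responses_v02.meta.json", "w") as f:
    json.dump(metadata, f, indent=2)
```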
## Data Storage
The following table summarizes the procedures for storing project related
data, during and after the SySTEM 2020 lifetime, and the most frequently used
repositories.
<table>
<tr>
<th>
</th>
<th colspan="3">
**TYPES OF DATASETS**
</th> </tr>
<tr>
<td>
**DATA USAGE SCENARIOS**
</td>
<td>
**Confidential**
</td>
<td>
**Anonymised and Public**
</td>
<td>
**Non anonymised**
</td> </tr>
<tr>
<td>
**Original data produced by the SySTEM 2020 consortium**
</td>
<td>
Individual partner repositories
Common project repository (the current one is Team Drive)
</td>
<td>
Project website
</td>
<td>
Individual partner repositories
Common project repository
</td> </tr>
<tr>
<td>
**Existing data**
**sourced/procured by the SySTEM 2020 consortium and/or partners**
</td>
<td>
Individual partner repositories Specific software
Repositories
</td>
<td>
Project website
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data already in**
**possession of the SySTEM 2020 consortium and/or partners**
</td>
<td>
Individual partner repositories Third party repositories Cloud repositories
</td>
<td>
Project website
</td>
<td>
N/A
</td> </tr> </table>
**Table 5** : Summary of SySTEM 2020 data storage procedures
Google Team Drive is the selected tool for SySTEM 2020’s data and information
repository since it is GDPR compliant. This includes both the project
deliverables (including relevant references utilised for their production or
generated from them as project publications, e.g. journal articles, conference
papers, e-books, manuals, guidelines, policy briefs, white papers etc.) and
any other related information, including relevant datasets.
Additionally, the project coordinator will make sure that the official project
repository periodically generates back-up files of all data, in case anything
gets lost or corrupted, or becomes unusable at a later stage (including after
the end of the project). The same responsibility falls to each partner for the
local repositories they utilise (in some cases, these are handled by large
organisations such as universities; in others, by small organisations or even
personal servers or laptops).
As the license that the consortium will establish for the final datasets has
yet to be determined, their intermediate versions will be deemed
**business confidential** and restricted to circulation only within the
consortium.
Finally, each digital object identified as an R&D result, including its
associated metadata, will be stored in a dedicated open access repository
managed by SGD, for the purpose of both preserving that evidence and making it
more visible and accessible to the scientific, academic and corporate world.
In addition to the SGD open access server, other datasets may be stored on the
following repositories:
* The SySTEM 2020 website (with links on/to the Social Media profiles);
* Individual partner websites and the social media groups they are part of;
* The portals of the academic publishers where scientific publications will be accepted;
* Other official sources such as OpenAIRE/Zenodo and possibly EUDAT.
## Data Sharing
Data sharing will be a very important aspect of the SySTEM 2020 project;
however, it needs to be ensured that it is done in a useful and legitimate
manner. When sharing, it is of utmost importance to keep in mind not only the
prescriptions and recommendations of extant rules and norms (including this
DMP), as far as confidentiality and personal data protection are concerned,
but also the risk of voluntary or involuntary transfer of data from the inside
to the outside of the European Economic Area (EEA).
In fact, while the GDPR also applies to the management of EU citizens'
personal data (for business or research purposes) outside the EU, not all
countries worldwide are subject to bilateral agreements with the EU as far as
personal data protection is concerned. For instance, US-based organisations
are bound by the so-called EU-U.S. Privacy Shield Framework, which concerns
the collection, use, and retention of personal information transferred from
the EEA to the US. This makes the transfer of data from the partners to any US-
based organisation relatively exempt from legal risks. This may not be the
same in other countries worldwide, however, and the risk in question is less
hypothetical than one may think, if we consider the case of personal sharing
of raw data with e.g. academic colleagues abroad for the purpose of attending
a conference. It is also for this reason that the sharing of non-anonymized
data is discouraged altogether, as shown in the table.
It must be kept in mind that one of the SySTEM 2020 partners is based in
Israel, outside the EU. However, Israel is one of the countries associated
with Horizon 2020.
“ _The association to Horizon 2020 is governed by Article 7 of the Horizon
2020 Regulation._ _Legal entities from Associated Countries can participate
under the same conditions as legal entities from the Member States.
Association to Horizon 2020 takes place through the conclusion of an
International Agreement._ ”
<table>
<tr>
<th>
</th>
<th colspan="3">
**TYPES OF DATASETS**
</th> </tr>
<tr>
<td>
**DATA USAGE SCENARIOS**
</td>
<td>
**Confidential**
</td>
<td>
**Anonymised and Public**
</td>
<td>
**Non anonymised**
</td> </tr>
<tr>
<td>
**Original data produced by the SySTEM 2020 consortium**
</td>
<td>
Personal email communication Shared repositories
</td>
<td>
Project website Open access repository
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the SySTEM 2020 consortium and/or
partners**
</td>
<td>
Personal email communication Shared access to software repositories
</td>
<td>
Project website Open access repository
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data already in possession of the SySTEM 2020 consortium and/or
partners**
</td>
<td>
Personal email communication Shared repositories
</td>
<td>
Project website Open access repository
</td>
<td>
N/A
</td> </tr> </table>
**Table 6** : Summary of SySTEM 2020 data sharing procedures
We intend to make curated research data from the questionnaires, Learning
Portfolios and map accessible, as well as cross-cutting material reflecting on
the evaluation of the project as a whole. This data will be useful for
researchers, practitioners and others wishing to duplicate or adapt the SySTEM
2020 model for researching science learning outside the classroom. The
analysis material included in the research data will highlight the strengths
and challenges of the approach, allowing others to learn from the experiences
of the project.
Regarding how long the data is intended to remain re-usable, we will adhere to
the standards of the chosen repositories for the project. The curated research
data related to the activities conducted in the questionnaires will be made
available at M32 (after the collection and analysis is finished). By the end
of the project at M36, all data that is not affected by an embargo will be
made available through the appropriate repositories.
## Data Security
Research data is shared between project partners and stored in a collaborative
online working platform during the project’s lifetime, Google Team Drive:
_https://gsuite.google.com/learning-center/products/drive/get-started-team-
drive/#!/_
Team Drive is provided by Science Gallery Dublin as project coordinator.
Science Gallery Dublin is using the GSuite for Education licence, which is
provided at no charge by Google. Team Drive is ISO 27017 certified for Cloud
Security and is fully compliant with GDPR regulations.
Uncurated and unanalysed material created during the project is stored locally
by the SySTEM 2020 partners according to their institutional data management
and storage guidelines (see D8.1 and D8.2). All final versions of evaluation
data collected in the project and analysis outputs of this material will be
saved in a standardised filing system with dedicated naming conventions.
Consent forms will be kept beyond the end of the SySTEM 2020 project, as
detailed in D8.1 and D8.3. Additional research data such as personal notes,
unused photos and video clips etc. will be safely deleted and discarded after
the end of the project. This includes all data not made publicly available
for the long term.
# Conclusions and Future Work
## Conclusions
This document is the fundamental deliverable concerning the SySTEM 2020 Data
Management Plan (DMP), in fulfilment of the requirements of WP1. The main
reason for planning an early version of the DMP is so that it can be most
useful over the lifetime of the project as a guideline for the activities to
follow, beginning in M9 with the launch of the map and the questionnaires.
However, to ensure the right balance between the current data management
framework and the real data that will come from the questionnaires, the
Experience Sampling Method, Learning Portfolios and the map, the project
coordinators will check with the Executive Board every six months whether any
changes need to be made to the DMP.
The DMP has gathered together all the information regarding legal frameworks
and GDPR, and has described how SySTEM 2020 will align with the principles
of FAIR data handling according to the EC requirements (which the SySTEM
2020 consortium and individual partners and third parties are bound to
respect). It has also summarized the key aspects of data collection,
processing, storage and sharing (the typical contents of a DMP) within the
proposed data lifecycle elements, particularly highlighting - first and
foremost, to the attention of the partners - some key aspects of data
management that go beyond the operational link with open access and interact
with privacy and security policies, as well as with the way background
knowledge is handled.
It is hoped that the DMP will enable all partners and third parties to
understand the different actions required when handling data of different
natures, and how to store and share it securely, in keeping with the EC
requirements.
## Maintenance of Data Management Plan
### Responsibilities of Data Management
The responsibilities for the data management are distributed and controlled as
follows:
_Data collection, storage and backup:_
The partners and third parties who collect data (e.g. questionnaires and
Learning Portfolios). The responsibilities also include taking care of data
security and personal data protection. Project partners who receive data from
data collectors for project evaluation (AALTO and ZSI) have the same
obligations.
_Metadata production in view of depositing the research data:_
The partners and third parties, based on guidelines provided by the project
coordinators (SGD). They will be provided with templates for the tasks
involving data, which they will be required to fill in. This will ease and
ensure consistency of the data provision, which will be controlled by the
project coordinators.
_Data deposition and sharing:_
The project coordinators (SGD), involving individual partners where further
information or clarification is required (e.g. with regard to content authors
or contributors, related material).
_Maintenance and updates of the Data Management Plan:_
The DMP will be maintained and updated by the project coordinators in
consultation with the Executive Board. Updates will be done according to the
planned schedule and when needed due to changes in consortium policies,
research methodology or other significant developments which affect the DMP.
_Responsibilities in general:_
Each partner or third party is obliged to respect and follow the rules of the
Grant Agreement and the Guidelines to the Rules on Open Access to Scientific
Publications and Open Access to Research Data in Horizon 2020 (European
Commission 2017). Support by the project coordinator or another partner in
following the rules does not transfer these obligations to the supporting
partner.
### Data Management Plan Maintenance
The SySTEM 2020 Data Management Plan will be maintained and updated by the
project coordinators of the project (SGD) in consultation with the Executive
Board.
The DMP is a “living document” that will be updated at least before the first
assessment of the project (M15, December 2018) and at the end of the project
lifecycle, before the final review (M36, September 2020). Furthermore, the plan
will be updated if needed due to changes in consortium policies, research
methodology or other significant developments which affect the DMP.
### Contacts for the Data Management Plan
Kali Dunne, Science Gallery Dublin (Project Manager):
[email protected]_
Joanna Crispell, Science Gallery Dublin (European Projects Researcher):
[email protected]_
Derek Williams, Science Gallery Dublin (Technical Manager):
[email protected]_
# Executive Summary
The Data Management Plan describes all the data management processes related
to the WorkingAge project. First of all, the document will define some general
principles about data management policy and scientific publications in a
research context. Then, the Data Management Plan will define all the
procedures to collect, manage and store data, with the priority of being GDPR
compliant. The document will also define technical procedures, such as
pseudonymization and data encryption, directly related to GDPR compliance.
Finally, the Data Management Plan will describe official roles already defined
by the GDPR, such as the Data Controller, who will deal with data management
and data protection issues.
# 1 Introduction
This document is Version 1 of the Data Management Plan (DMP), presenting an
overview of data management processes, as agreed among WorkingAge
(WA) project’s partners. This DMP will first establish some general principles
in terms of data management and Open Access.
Subsequently, it will be structured as proposed by the European Commission in
H2020 Programme – Guidelines on FAIR Data Management in Horizon 2020, covering
the following aspects:
* Data Summary;
* FAIR Data;
* Allocation of resources;
* Data security;
* Ethical aspects;
The DMP is a “living” document outlining how the research data collected or
generated will be handled during and after the WorkingAge project. The DMP is
updated over the course of the project whenever significant changes arise.
# 2 General principles for data management
## 2.1 Data collected and personal data protection
**Within the WorkingAge (WA) project, partners collect and process research
data and data for general project management purposes, according to their
respective internal data management procedures and in compliance with
applicable regulations** . Data collected for general purposes may include
contact details of the partners, their employees, consultants and
subcontractors and contact details of third parties (both persons and
organisations) for coordination, evaluation, communication, dissemination and
exploitation activities. Research data are collected and processed in relation
with the research pilots. During the project lifetime, data are kept on
computers dedicated to this purpose, which are securely located within the
premises of the project partners. Data archiving, preservation, storage and
access, is undertaken in accordance with the corresponding ethical standards
and procedures of the partner institution where the data is captured,
processed or stored. The data is preserved for a minimum of 10 years (unless
otherwise specified). All data susceptible of data protection are subject to
standard anonymization and stored securely (with password protection). The
costs for this are covered by the partner organization concerned.
**Confirmation that the aforementioned processes comply with national and EU
legislation is provided by each partner and verified by each Data
Controller.**
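As an illustration of the kind of pseudonymization technique such procedures
may rely on, the sketch below derives a stable, keyed pseudonym from a
participant identifier; the secret key and identifier are placeholders, and
the actual WA procedures are defined by each partner's internal data
management policies.

```python
# Minimal sketch of keyed pseudonymization for participant identifiers,
# assuming a project-wide secret key held by the Data Controller.
# Illustrative only; not the actual WA procedure.
import hashlib
import hmac

SECRET_KEY = b"replace-with-key-held-by-data-controller"  # placeholder

def pseudonymize(participant_id: str) -> str:
    """Return a stable pseudonym that cannot be reversed without the key."""
    digest = hmac.new(SECRET_KEY, participant_id.encode("utf-8"),
                      hashlib.sha256)
    return digest.hexdigest()[:16]

print(pseudonymize("participant-042"))  # same input -> same pseudonym
```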
# 3 Research data and Open Access
The WorkingAge project is part of the H2020 Open Research Data Pilot (ORDP),
and publication of the scientific results is chosen as a means of
dissemination. In this framework, open access is granted to publications and
research data, and this process is carried out in line with the Guidelines on
Open Access to Scientific Publications and Research Data in Horizon 2020.
The strategy to apply Open Access for the project’s scientific results is
revised, step by step, according to personal data protection regulations, the
results of the ethical approval process of the research protocols and the
provisions of the Consortium Agreement. If needed, it will be possible to “opt
out” from this open access strategy for specific and well-defined subsets of
data.
## 3.1 Scientific publications
Open access is applicable to different types of scientific publications
related to the research results, including their bibliographic metadata, such
as:
* journal articles;
* monographs and books;
* conference proceedings, abstracts and presentations;
* grey literature (informally published written material).
Grey literature also includes reports and deliverables of the project related
to the research, whose Dissemination level is marked as Public.
Open access is granted as follows:
* Step 1 – Depositing a machine-readable electronic copy of a version accepted for publication in repositories for scientific publications (before or upon publication).
* Step 2 – Providing open access to the publication via the chosen repository.
For access to publications, a hybrid approach is considered (both green OA and
gold OA), depending on the item and the dissemination channels that will be
available:
* Green OA (self-archiving) – depositing the published article or the final peer-reviewed manuscript in repository of choice and ensure open access within at most 6 months (12 months for publications in the social sciences and humanities).
* Gold OA (open access publishing) – publishing directly in open access mode/journal.
## 3.2 Data Management Policy
The Data Management Policy will address the points below and will detail the
current status of reflection within the consortium regarding the data that is
being produced. According to ORDP requirements, the WorkingAge DMP observes
the FAIR (Findable, Accessible, Interoperable and Reusable) data management
principles.
In order to properly address Data Management and Data Protection issues, every
partner that deals with data must appoint a Data Controller. This role is
taken by the legal representative of each partner managing (i.e. generating
and/or processing) data, as stated in Article 4 and Article 24 of the GDPR.
Each partner may also nominate a Local Data Manager, who will manage data on
behalf of the Data Controller as a result of the relationship that links them.
Local Data Managers will be responsible for contacting the users in order to
provide the information sheet for the WA project and obtain consent forms.
## 3.3 Research data
In addition, open access is granted also to underlying research data (data
needed to validate results presented in publication) and their associated
metadata, any other data (not directly attributable to the publication and raw
data) and information on the tools needed to validate the data and, if
possible, access to these tools (code, software, protocols etc.).
Open access is granted as follows.
* Step 1 – Depositing the research data in a research data repository. A repository is an online database service, an archive that manages the long-term storage and preservation of digital resources and provides a catalogue for discovery and access.
* Step 2 – Enabling access and usage free of charge for any user (as far as possible).
The consortium will try to publish as much research data as possible, but this
will be decided on a case-by-case basis, in order to remain compliant with the
GDPR by publishing only non-sensitive and non-personal data.
## 3.4 Other project’s outcomes
Any other outcomes of the project are disseminated according to the
dissemination level indicated in the Description of Action; they are also
subject to protection in accordance with the Consortium Agreement and its
provisions on Access Rights.
# 4 FAIR Data management plan
## 4.1 Data summary
The Data Summary provides an overview of the purpose and the nature of data
collection and generation, and its relation to the objective of the WorkingAge
(WA) project.
### 4.1.1 Objectives of the project and research
As a complex research undertaking, the WA project requires careful planning,
management and administration in its development and implementation. Work has
been structured in ten work packages: six covering Research and Innovation
work, two for tests (Integration and User tests), one for exploitation and
dissemination, one for the definition of all specifications and one management
work package. WP1 includes all management issues. Prior to the main research
cycle, the consortium will carry out WP2, which sets the basis for all
subsequent research, including the selection of tools and participants,
optimizing the time for the tests. Research cycles (at least two are expected)
will start with the definition of the interventions for the users (WP3) at
work and in daily life. This work will be the basis for WP4 (HCI platform),
WP5 (IoT infrastructure and services) and WP6 (Data Analysis), which occur in
parallel (sharing produced knowledge). Work on the ethics and security domain
is performed in WP7. In Deployment and Integration (WP8), the final prototypes
and the optimizations from the research cycles will be adapted for the tests.
The intervention models and measurement prototypes will then be integrated and
used to collect data in WP9 (Test Performance). In WP10, Standardization and
Business Development, Commercialization and IPR Management will be addressed,
considering the future market release of the studied solution. This WP will
also summarize all the actions proposed for dissemination.
### 4.1.2 Purpose of the data collection during the project
WorkingAge will use innovative Human-Computer-Interaction (HCI) methods
(augmented reality, virtual reality, gesture/voice recognition, eye tracking,
neurometrics) to measure the user’s cognitive and emotional states and create
communication paths. At the same time, Internet of Things (IoT) sensors will
detect environmental conditions. The purpose is to promote healthy habits of
users in their working environment and daily living activities in order to
improve their working and living conditions.
### 4.1.3 Relation to the objectives of the project
By studying the profile of the >50 (year old) workers and the working place
requirements in three different working environments (Office, Driving and
Manufacturing), both profiles (user and environment) will be considered.
Information obtained will be used for the creation of interventions that will
lead to healthy ageing inside and outside the working environment.
WorkingAge will test and validate an integrated solution that will learn the
user’s behaviour, health data and preferences and, through continuous data
collection and analysis, will interact naturally with the user. This
innovative system will provide workers assistance in their everyday routine in
the form of reminders, risk avoidance and recommendations. In this way the
WorkingAge project will create a sustainable and scalable product that will
empower its users, easing their lives by attenuating the impact of ageing on
their autonomy, work conditions, health and well-being.
### 4.1.4 Processing of the data and consent form
Processing of all the WorkingAge project data will take place in several
countries, complying with the GDPR and other local legislation. Consent forms
in paper format will be stored in the respective country in which they are
generated.
### 4.1.5 The types and formats of data generated/collected
All data are stored digitally; the different types are defined as follows:
#### 4.1.5.1 Raw data
Raw data is data produced by all the devices used in the measurements: EEG
(Electroencephalography), ECG (Electrocardiography), GSR (Galvanic Skin
Response), Camera (video and images), Voice Recognition, Movement and Pose
Recognition.
#### 4.1.5.2 Online pre-processing
Online pre-processing is the action of checking the quality of, decoding
and/or modifying raw data before storing or uploading it (e.g. by filtering
techniques).
#### 4.1.5.3 Online markers / annotation / indicators calculation
Thanks to sensor data and online calculations one can identify specific
events (e.g. interruptions during the working task) and/or compute
various indicators (e.g. performance indicators).
#### 4.1.5.4 Offline markers / annotation / indicators calculation
Thanks to observers and/or offline calculations one can retrieve contextual
information, identify specific events (e.g. unsafe situations, events too
complex to identify automatically online), and/or compute various indicators.
Such processing can be used to enrich and/or manually or automatically
annotate or mark information in the database.
#### 4.1.5.5 Offline data acquisition
Some data are acquired using either questionnaires or interviews that are
potentially supported by markers and/or annotation data. Such data (sometimes
called subjective data) can be stored in raw form (audio-visual recording or
scanned documents), or encoded form. Moreover, it is possible that data will
be acquired offline, deriving from service providers (e.g. weather forecast).
#### 4.1.5.6 WorkingAge database
The WorkingAge database will consist of data collected during studies (online
/ offline), plus markers / annotations / indicators (online / offline). The
project database will be organized in directories: each partner will use a
directory in which to store the different types of acquired data. All data
stored in the database will be encrypted, and each dataset will be accessible
only to the partner who acquired it and to other partners who own the
secondary decryption key.
### 4.1.6 Data sources
In this section, all the sources of data are briefly discussed. The data will
be generated in two test phases ( _in-LAB_ and _in-COMPANY_ ) on three
different time scales (single test, week test, long-term test).
It is noted that the upcoming D2.6 “Study Protocols for the Test” will describe
the tests in more detail, which may introduce some modifications.
#### 4.1.6.1 In-LAB Phase
##### 4.1.6.1.1 Single Tests
Experiments will be performed with one individual on a single occasion. Single
tests will run in two sessions, and two series of these tests will be
implemented. These tests will be performed with up to 90 individuals. They
will involve the analysis of in-depth aspects and the validity of the
intervention, focusing on user expectations, usability and validity. Users of
these tests will not have age requirements (only gender and/or health
requirements) and will not be rewarded.
In-LAB tests will be performed at the end of the development of three modules
of the WA system, namely WP4, WP5 and WP6. The aim of these tests is to verify
the functionality of each element of the WA system, irrespective of its
effectiveness, which will be tested in the second set of tests described
hereinafter. Researchers and students from the responsible partners will be
involved in testing the measure, teach and adapt modules, whereas tests for
the middleware will consist mainly of software tests to verify the
communication among the elements of the WA system. Preliminary in-lab tests
aim to verify that the different modules are able to detect mental strain and
monitor the user’s interaction with the system, to validate the technical
functions, and to identify software bugs in the offline and online training
system. Therefore, users will be asked to carry out representative tasks with
the developed training system. Three types of activities will be included in
these tests:
* “Offline” assessment, consisting of questionnaires covering demographic questions and tests for perceptive, cognitive or motoric capabilities. Skills and constitutional characteristics will also be queried through questionnaires.
* Real-time measurements, consisting of physiological indicators for mental strain, such as pupil diameter, blinking rate, skin conductance, cerebral activity, body temperature and heart rate. Other measurements to create HCI interactions will also be tested.
* Performance indicators, which will be tracked, e.g. time for decisions, execution steps for the task, mistakes and redundancies.
For these tests, users recruited by each organization developing a module will
interact with the system: UCAM for facial expression analysis and recognition,
EXO for gesture recognition, RWTH for eye tracking, ITCL for body pose
recognition, AUD and POLIMI for voice and BS for EEG, ECG.
#### 4.1.6.2 In-Company Phase
At the moment of writing, the pilot tests are planned to be performed in Spain
and Greece by the end-user organizations Grupo Antolin (Spain), Piraeus Bank
(Greece) and the FirstAid ambulance company (Greece), led by INTRAS. This multi-site
design will allow the evaluation of the WA system in different social and
cultural contexts. The variety of partners’ profiles will allow the consortium
to test the WA solution in heterogeneous environments: some tests will be more
focused on the company dimension involving occupational health and safety
professionals or human resources managers, while others will address the
worker’s environments with the support of ICT or organizational departments.
Pilot application will consist of the following phases: (i) protocol design,
(ii) analysis of the pilot study with sample size considerations, (iii) pilot
applications, (iv) assessment of results. The research body of the consortium
will focus this pilot application to seek the potential mechanism of efficacy
for a new intervention and investigate those indicators that are triggering
the aforementioned intervention. The selection of the sample sizes for the WA
project includes judgement- and aim-specific considerations, as well as
practical feasibility, so as to support proper conclusions and interventions.
The inclusion criterion for the tests is being healthy and aged 50+; the
exclusion criteria are neuropsychiatric disorders or addiction problems.
##### 4.1.6.2.1 Single Tests
Experiments will be performed with one individual on a single occasion. These
tests will be used to fine-tune the subsystems for the users that will perform
the week and long-term tests.
These tests will aim to assess reliably the psychological, physical, cognitive
and social health status in the presence of an occupational health specialist
by means of the HCI services. This experiment should be done with a large
group of 30 subjects (10 for each use case) from the total 90, and would be
preceded by development of the different assessment methodologies. These will
include information from existing renowned tests, such as: quality of life
(WHOQOL-BREF), activities of daily living (ADCS-MCI-ADL), reduction in
health resource consumption (EQ-5D), Mini-Nutritional Assessment (MNA),
Health-related quality of life (HRQOL), Life’s Simple 7 metric, EQ5, ESM,
PHQ, GAD score, tests of executive functioning, PAST, fluency tests, long-term
memory tests, working memory tests and the Lubben Social Network Scale. These
methodologies will be tested with questionnaires, speech, or AR interaction.
Other tests may be added, according to the decisions of the specialists.
Performance should be compared with questionnaires and other tests managed by
a trained specialist (external validity). Additionally, test-retest
reliability should be assessed.
##### 4.1.6.2.2 Week Tests
Experiments will run for several sessions and involve up to 45 individuals
from the final 90 individuals that will test the system. These experiments
will assess unmonitored interaction, track occupation-related parameters for
analysis, and test whether the system detects a health risk and delivers an
intervention. They will also be used to assess the ability of WA to
self-improve. Objective reactions (are they still using it after three weeks?)
and subjective reactions (what did they like best? what didn’t they like?)
will also be tracked.
The scenarios considered in the tests, called evaluation scenarios, will be
defined for the three use cases. These specify what tasks the study
participants will be asked to perform while the effectiveness of the WA system
is measured. As part of the evaluation scenarios, the most frequent errors
likely to occur when using the traditional interaction systems will be found.
The goal will be to improve the interaction approach and, hence, decrease the
occurrence of such errors.
These tests (of the week category) will comprise a whole series of
experiments, with a total of 45 users involved (15 per use case) from the 90
users. The interventions could take many forms. In order to test their
effectiveness in all WA aspects (physical, psychosocial, working and health)
they must be activated one at a time and not all at once.
##### 4.1.6.2.3 Long-Term Tests
This test type will assess the ultimate goal of prevention and monitoring on a
long-range time scale. Up to 90 individuals will be monitored for about a year
towards the end of the project. This last testing session will also
investigate issues and benefits that may arise with long-term usage, without
the interference of more controlled testing conditions, and at the same time
test adherence and compliance. The final report will include predictions and
compiled advice on how to further pursue this objective in future research and
development. There will also be a follow-up after 6 months to see whether the
technology is still used (reflecting sustainability). Users will be rewarded
with the equipment needed for the experiment. These users will include the
week-test users and new users performing the questionnaires of the single
tests to fine-tune the system.
In order to test the adherence rate of the solution, 90 users (30 per use case)
will have to use the complete solution for a long period of time (1 year).
Participants’ average weekly compliance rate will be calculated. Dropout and
compliance tests, as well as the Technology Acceptance Model (TAM), will also
be considered for evaluation. The following Key-Point Indicators will be
covered: i) reported average weekly compliance with the indications; ii) use
of the networks to report improvement; iii) attrition rate.
A compendium of testing tools for the assessment of the physical,
psychosocial, working and health wellbeing of the worker in the context of
primary prevention will be applied during the year.
The evaluation methodology will be user-centred and will describe a series of
Key Performance Indicators (KPIs) supported by the knowledge and input of the
parallel research. This evaluation will be supported by the evidence gathered
during the validation with real users.
## 4.2 FAIR Data
In general terms, research data generated in the WorkingAge project are – in
as far as possible – “FAIR”, that is findable, accessible, interoperable and
reusable.
### 4.2.1 Findability - Making data findable, including provisions for metadata
Publications are provided with bibliographic metadata (in accordance with the
guidelines). Unique and persistent identifiers are used (such as Digital
Object Identifiers - DOI), when possible also applying existing standards
(such as ORCID for contributor identifiers). As per the European Commission
guidelines, bibliographic metadata that identify the deposited publication are
in a standard format and include the following:
* The terms ["European Union (EU)" & "Horizon 2020"].
* The name of the action, acronym and grant number.
* The publication date, the length of the embargo period (if applicable) and a persistent identifier.
Datasets are provided with appropriate machine-readable metadata (see
paragraph 4.2.3), and keywords are provided for all types of data.
#### 4.2.1.1 Naming conventions and versioning
Files are named according to their content to ease their identification within
the project, following this format:
* Country code, e.g. 00 for Italy, 01 for Spain, 02 for UK (possibly, we could also include the partner code).
* Dominant hand, R for right handed or L for left handed.
* Gender, M for male or F for female.
* Participant order.
* Age.
* Protocol code, e.g. LBA for In-LAB Acceptability.
* Data Type, e.g. A for EEG data, B for ECG data, C for GSR data.
Each partner will keep this pseudo-ID on its side; in particular, it will be
stored by the Data Controller. BrainSigns, as Data Manager of the project,
will receive only the anonymous data label containing the partner’s name and
the participant’s order. An example is reported below:
<table>
<tr>
<th>
Mapping on Data Controller’s Side
</th>
<th>
Label uploaded on server (BrainSigns)
</th> </tr>
<tr>
<td>
03RM0001045LBAE
</td>
<td>
EX001
</td> </tr> </table>
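For illustration, the mapping above can be scripted. The following is a minimal Python sketch, not part of the project deliverables: the four-digit participant order and three-digit age widths are inferred from the example, and all field values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    """Fields of the WA naming convention listed above."""
    country_code: str   # e.g. "03" (hypothetical)
    dominant_hand: str  # "R" or "L"
    gender: str         # "M" or "F"
    order: int          # participant order
    age: int
    protocol_code: str  # e.g. "LBA" for In-LAB Acceptability
    data_type: str      # e.g. "A" for EEG, "B" for ECG, "C" for GSR

def pseudo_id(p: Participant) -> str:
    """Pseudo-ID kept only on the Data Controller's side."""
    return (f"{p.country_code}{p.dominant_hand}{p.gender}"
            f"{p.order:04d}{p.age:03d}{p.protocol_code}{p.data_type}")

def anonymous_label(partner_prefix: str, order: int) -> str:
    """Label uploaded to the server (BrainSigns) -- no personal fields."""
    return f"{partner_prefix}{order:03d}"

p = Participant("03", "R", "M", 1, 45, "LBA", "E")
print(pseudo_id(p))               # -> 03RM0001045LBAE
print(anonymous_label("EX", 1))   # -> EX001
```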
### 4.2.2 Accessibility – Making data openly accessible
Data and related documentation are made available by depositing them in the
repository of choice (Zenodo), together with the publications, and are
accessible free of charge for any user. **Zenodo is a repository built by
CERN, within the OpenAIRE project, with the aim of supporting the EC’s Open
Data policy by providing a set of tools for funded research.** Zenodo
provides tools to deposit publications and related data and to link them. Any
needed restriction in access to the data is evaluated before final
publication, in accordance with ethical aspects (conducting research with
humans and children) and with protection of personal data.
All the consent forms related to the WorkingAge activities will explicitly
indicate that the pseudonymized dataset will be published in a public
repository. In case of privacy issues, the Zenodo repository allows the
publisher to restrict data access, requiring the data owner’s approval before
the data can be downloaded.
### 4.2.3 Interoperability - Making data interoperable
Metadata models were evaluated among those available in the Metadata
Standards Directory.
The Dublin Core standard (Table 1) was selected to add metadata to each of the
datasets identified in sub-section 4.1.
**Table 1 –** DC Metadata Element Set
<table>
<tr>
<th>
**Term name**
</th>
<th>
**contributor**
</th> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/contributor
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Contributor
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
An entity responsible for making contributions to the resource
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Examples of a Contributor include a person, an organization, or a service.
Typically, the name of a Contributor should be used to indicate the entity
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**coverage**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/coverage
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Coverage
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
The spatial or temporal topic of the resource, the spatial applicability of
the resource, or the jurisdiction under which the resource is relevant
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Spatial topic and spatial applicability may be a named place or a location
specified by its geographic coordinates. Temporal topic may be a named period,
date, or date range. A jurisdiction may be a named administrative entity or a
geographic place to which the resource applies.
Recommended best practice is to use a controlled vocabulary such as the
Thesaurus of Geographic Names [TGN]. Where appropriate, named places or time
periods can be used in preference to numeric identifiers such as sets of
coordinates or date ranges
</td> </tr>
<tr>
<td>
References
</td>
<td>
http://www.getty.edu/research/tools/vocabulary/tgn/index.html
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**creator**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/creator
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Creator
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
An entity primarily responsible for making the resource
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Examples of a Creator include a person, an organization, or a service.
Typically, the name of a Creator should be used to indicate the entity
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**date**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/date
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Date
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
A point or period of time associated with an event in the lifecycle of the
resource
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Date may be used to express temporal information at any level of granularity.
Recommended best practice is to use an encoding scheme, such as the W3CDTF
profile of ISO 8601 [W3CDTF]
</td> </tr>
<tr>
<td>
References
</td>
<td>
http://www.w3.org/TR/NOTE-datetime
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**description**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/description
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Description
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
An account of the resource
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Description may include but is not limited to: an abstract, a table of
contents, a graphical representation, or a free-text account of the resource
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**format**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/format
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Format
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
The file format, physical medium, or dimensions of the resource
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Examples of dimensions include size and duration.
Recommended best practice is to use a
controlled vocabulary such as the list of Internet Media Types [MIME]
</td> </tr>
<tr>
<td>
References
</td>
<td>
http://www.iana.org/assignments/media-types/
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**identifier**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/identifier
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Identifier
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
An unambiguous reference to the resource within a given context
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Recommended best practice is to identify the resource by means of a string
conforming to a formal identification system
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**language**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/language
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Language
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
A language of the resource.
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Recommended best practice is to use a controlled vocabulary such as RFC 4646
[RFC4646].
</td> </tr>
<tr>
<td>
References
</td>
<td>
http://www.ietf.org/rfc/rfc4646.txt
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**publisher**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/publisher
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Publisher
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
An entity responsible for making the resource available.
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Examples of a Publisher include a person, an organization, or a service.
Typically, the name of a Publisher should be used to indicate the entity.
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**relation**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/relation
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Relation
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
A related resource.
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Recommended best practice is to identify the related resource by means of a
string conforming to a formal identification system.
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**rights**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/rights
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Rights
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
Information about rights held in and over the resource.
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Typically, rights information includes a statement about various property
rights associated with the resource, including intellectual property rights.
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**source**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/source
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Source
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
A related resource from which the described resource is derived.
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
The described resource may be derived from the related resource in whole or in
part. Recommended best practice is to identify the related resource by means
of a string conforming to a formal identification system.
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**subject**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/subject
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Subject
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
The topic of the resource.
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Typically, the subject will be represented using keywords, key phrases, or
classification codes. Recommended best practice is to use a controlled
vocabulary.
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**title**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/title
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Title
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
A name given to the resource.
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Typically, a Title will be a name by which the resource is formally known.
</td> </tr>
<tr>
<td>
**Term name**
</td>
<td>
**type**
</td> </tr>
<tr>
<td>
URL
</td>
<td>
http://purl.org/dc/elements/1.1/type
</td> </tr>
<tr>
<td>
Label
</td>
<td>
Type
</td> </tr>
<tr>
<td>
Definition
</td>
<td>
The nature or genre of the resource.
</td> </tr>
<tr>
<td>
Comment
</td>
<td>
Recommended best practice is to use a controlled vocabulary such as the DCMI
Type Vocabulary [DCMITYPE]. To describe the file format, physical medium, or
dimensions of the resource, use the Format element.
</td> </tr>
<tr>
<td>
References
</td>
<td>
http://dublincore.org/documents/dcmi-type-vocabulary/
</td> </tr> </table>
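As an illustration of how these elements might be attached to a dataset in practice, the following minimal Python sketch builds a Dublin Core record keyed by the element URIs in Table 1; all field values are hypothetical placeholders, not actual project metadata.

```python
# Minimal sketch: a Dublin Core record for a hypothetical WA dataset,
# keyed by the element URIs listed in Table 1.
DC = "http://purl.org/dc/elements/1.1/"

record = {
    DC + "title": "WorkingAge in-LAB single test - EEG (example)",
    DC + "creator": "Acquiring partner organization (hypothetical)",
    DC + "subject": "mental strain; EEG; healthy ageing",
    DC + "description": "Pseudonymized EEG recordings from in-LAB single tests.",
    DC + "date": "2020-06-15",                    # W3CDTF profile of ISO 8601
    DC + "format": "application/octet-stream",    # an Internet Media Type [MIME]
    DC + "language": "en",                        # RFC 4646 language tag
    DC + "identifier": "10.5281/zenodo.0000000",  # placeholder DOI
    DC + "rights": "Access restricted pending ethical approval",
}

for term, value in record.items():
    print(f"{term}\n    {value}")
```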
### 4.2.4 Data re-use and licensing
Publications and underlying data will be made available at the end of each
experimental phase, once all data are collected and analysed. All the data
indicated as Open Data will be made available for re-use after the end of the
project. The licences for publications and related data will be defined in the
final version of this plan, based on the final data, in order to verify
compliance with personal data protection regulations and the ethical approval
process results.
## 4.3 Allocation of resources
Costs related to open access to research data in Horizon 2020 are eligible for
reimbursement under the conditions defined in the H2020 Grant Agreement [6.2
D.3], as well as under other articles relevant to the chosen cost category.
Costs cannot be claimed retrospectively. Project beneficiaries will be
responsible for applying for reimbursement of costs related to making data
accessible to others beyond the consortium.
## 4.4 Data Security
All research data produced during WorkingAge will be stored on a dedicated
hard drive and on a separate Network Attached Storage (NAS) device for backup
purposes. All partners will transfer only pseudonymized data. Each transfer
will be protected by end-to-end encryption. Data transfer will be supported by
the sharing platform freeNAS, physically located at BrainSigns and exposed
through HyperText Transfer Protocol over Secure Socket Layer (HTTPS). All data
stored on the server will be encrypted. The software used for the encryption
and decryption procedure will be GnuPG. Each partner will own the key to
decrypt only the data that it acquired. BrainSigns will not have access to the
other partners’ datasets.
If any partner needs to access another partner’s research data, the data owner
will encrypt the data and will provide the secondary decryption key only to
the partner who needs access.
The data owner will keep a register of all recipients of decryption keys.
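A minimal sketch of this encrypt-before-upload step, using the python-gnupg wrapper around GnuPG: the key ID, file names and keyring location are hypothetical, and key generation and distribution are assumed to happen out of band.

```python
import gnupg  # python-gnupg, a wrapper around the gpg binary

gpg = gnupg.GPG(gnupghome="/secure/gnupg")  # hypothetical keyring location

# Encrypt a pseudonymized dataset so that only the acquiring partner
# (holder of the matching private key) can decrypt it.
with open("EX001_eeg.dat", "rb") as f:
    result = gpg.encrypt_file(
        f,
        recipients=["[email protected]"],  # hypothetical key ID
        output="EX001_eeg.dat.gpg",
    )
assert result.ok, result.status

# Granting another partner access means re-encrypting for a secondary
# recipient; the data owner records every key handover in a register.
```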
### 4.4.1 Pseudonymization Process at Local level
Data results in the platform are not associated with the user’s identity.
The name of the research participant appears on the consent forms. All data in
the platform are pseudonymized by assigning an anonymized user code to each
participant.
Information on the association between platform users and participants at each
experimental location is transmitted to each Local Data Manager in Excel
format together with the specific data. The Excel sheet is secured through
256-bit AES (Advanced Encryption Standard) encryption and a password. The Data
Controller is responsible for the Excel sheet’s security. The mapping between
participants and platform users at each experimental location is stored by the
centre following the legal requirements of the country.
All data collected during the study through the platform are associated with
the platform user. This means that all shared reports, results, internal
communications and external publications contain no personal data of the
participant.
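A minimal Python sketch of this local pseudonymization step: participant names and the code format are hypothetical, and the real mapping sheet is an AES-256-protected Excel file rather than the plain CSV written here for brevity.

```python
import csv
import secrets

def new_user_code(existing: set) -> str:
    """Generate a non-identifying platform user code, e.g. 'WA-4F7A2C'."""
    while True:
        code = "WA-" + secrets.token_hex(3).upper()
        if code not in existing:
            return code

# Identity-to-code mapping kept ONLY by the Data Controller; in the project
# it is stored in a password-protected, AES-256-encrypted sheet.
mapping = {}
for name in ["Alice Example", "Bob Example"]:  # hypothetical participants
    mapping[name] = new_user_code(set(mapping.values()))

with open("controller_mapping.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["participant", "user_code"])
    writer.writerows(mapping.items())

# Reports, results and publications reference only the user_code.
```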
### 4.4.2 Data maintenance and storage at central WA level
#### 4.4.2.1 Data access in freeNAS platform
Research and research-related personal data collected are encrypted and stored
in the systems of the organization where the data were produced. Personal data
are only accessible by the Data Controller of each organization.
Access is restricted to each participant, under their fictional pseudo-
identity, and to the members of the Data Controller organization and
WorkingAge research team.
Each access to the research data is logged, together with information
identifying the authorized user who requested access to the data.
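A minimal sketch of such access logging (the file path and record fields are hypothetical):

```python
import logging

# Hypothetical audit logger: one record per access request.
access_log = logging.getLogger("wa.data_access")
handler = logging.FileHandler("data_access.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
access_log.addHandler(handler)
access_log.setLevel(logging.INFO)

def log_access(user_id: str, dataset: str, granted: bool) -> None:
    """Record which authorized user requested which dataset, and the outcome."""
    access_log.info("user=%s dataset=%s granted=%s", user_id, dataset, granted)

log_access("controller_EX", "EX001_eeg.dat.gpg", True)
```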
#### 4.4.2.2 Process of backups of freeNAS platform
Each partner will send the pseudonymized and encrypted dataset to the server
through the freeNAS platform after the conclusion of each experimental
session. Partners will not transmit the primary decryption key. Each partner’s
Data Controller will be responsible for the security of the decryption key.
## 4.5 Ethical aspects
The project will conform to privacy and confidentiality guidance from the EU
guidance notes, “Data protection and privacy ethical guidelines”, and to the
Data Protection Directive (Directive 95/46/EC,
http://ec.europa.eu/justice/data-protection/index_en.htm). The following
ethical issues have been considered in the WA project and are addressed below
for each country involved:
* Notification/Authorizations of the Tests
* Data processing in the Cloud
* Data Controllers
* Video recording
The ethical aspects of the research generating the scientific data of the
project are covered in the following deliverables, also taking into
consideration the European Commission Ethics Summary Report for the project.
* D 7.1 - Ethical and Legal report.
* D 7.2 - Security and Privacy Model.
The correspondence between the participant’s code, described in paragraph
4.2.1, and the participant’s identity is kept in a suitably encrypted table
held on a secure computer at the Data Controller’s premises. No reference
to the participant’s code will be written on the respective consent form.
The ethics committee will include one member for each partner of the
consortium. This member will also act as the contact point for data privacy
issues and compliance with the data management plan. Contacts of the DPOs of
data collectors will be included in the consent forms as per the Grant
Agreement.
# FAIR DATA
## Making data findable, including provisions for metadata
_Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?_
_What naming conventions do you follow?_
_Will search keywords be provided that optimize possibilities for re-use?_
Data will be encoded using the conventions for Sociology specified in the DDI
Codebook. Data will initially be made available through the INFORM project
website ( _http://www.formal-informal.eu/en/_ ). After the end of the
project, data will be held and made available through the UK Data Archive
(UKDA).
_Do you provide clear version numbers?_
The Management Board has decided that only final versions of research data
will be made publicly available, and that raw data and working versions will
be circulated only internally.
_What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and
how._
Categories and keywords follow the benchmarks developed through the Data
Documentation Initiative (DDI, version 3.2). The principal referent will be
the metadata standards for Sociology, although as the project is
multidisciplinary additional keywords may be taken from the standards for
other disciplines (Anthropology, Economics, Political Science). All of these
are available through the DDI Codebook.
## Making data openly accessible
_Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions._
_Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their data closed if relevant provisions are made in the
consortium agreement and are in line with the reasons for opting out._
All processed quantitative data as well as interview summaries will be made
openly available. Unshared data will remain at the discretion of the party
that has obtained it.
_How will the data be made accessible (e.g. by deposition in a repository)?_
In the first instance data will be made available through the INFORM website.
After the project is completed data will be deposited in the UK Data Archive
(UKDA). The UKDA will assign a PID.
_What methods or software tools are needed to access the data?_
_Is documentation about the software needed to access the data included?_
_Is it possible to include the relevant software (e.g. in open source code)?_
Text files will be accessible using any of the widely used software packages
available for reading and manipulation of text. They will be downloadable and
can be analysed using any type of QCA software. Survey data will be
downloadable and suitable for analysis using any of the standard statistical
software packages currently in use.
No special software that is not commonly in use is required to access the
data.
_Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible._
After the project is completed data will be deposited in the UK Data Archive
(UKDA).
_Have you explored appropriate arrangements with the identified repository?_
This will be done in the final period of the INFORM project.
_If there are restrictions on use, how will access be provided?_
There are no restrictions on the use of publicly shared INFORM project data.
_Is there a need for a data access committee?_
The INFORM Management Board carries responsibility for issues related to data
access. After the end of the project period responsibility for issues related
to data access rests with the Project Coordinator.
_Are there well described conditions for access (i.e. a machine readable
license)?_
Since the data will be made available under a Creative Commons licence, we
will use the machine-readable codes obtained through the CC license chooser
tool.
_How will the identity of the person accessing the data be ascertained?_
It is not necessary to ascertain the identities of people accessing the
publicly shared data.
## Making data interoperable
_Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?_
_What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?_
_Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?_
Categories and keywords follow the benchmarks developed through the Data
Documentation Initiative (DDI, version 3.2). The principal referent will be
the metadata standards for Sociology, although as the project is
multidisciplinary additional keywords may be taken from the standards for
other disciplines (Anthropology, Economics, Political Science). All of these
are available through the DDI Codebook.
Text files will be accessible using any of the widely used software packages
available for reading and manipulation of text. They will be downloadable and
can be analysed using any type of QCA software. Survey data will be
downloadable and suitable for analysis using any of the standard statistical
software packages currently in use.
No special software that is not commonly in use is required to access the
data.
_In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?_
We will not be using uncommon, project-specific ontologies or vocabularies,
but standard ones defined in the DDI Codebook for the Social Sciences. In
fact, we have chosen to follow standards defined by the Data Documentation
Initiative (DDI) in order to make our data interoperable.
## Increase data re-use (through clarifying licences)
_How will the data be licensed to permit the widest re-use possible?_
_When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible._
Data will be made available under a Creative Commons licence accessible to all
users under the condition that INFORM is acknowledged as the source of the
data in case of publication. The embargo period applies to the period before
participants in the INFORM project release the first publications of project
data.
_Are the data produced and/or used in the project useable by third parties, in
particular after the end of the project? If the re-use of some data is
restricted, explain why._
All publicly shared data will be made available for use by third parties.
No restrictions are applied.
_How long is it intended that the data remains re-usable?_
We expect data to be preserved in perpetuity in the UK Data Archive (UKDA).
_Are data quality assurance processes described?_
As part of the descriptive metadata we will describe, for quantitative data,
the procedures of sampling, data collection, testing of the logical
consistency of data, and the coding of data (including refusals to answer,
non-responses and missing data); for interviews, the sampling approach,
interview guidelines and interview procedures; and for ethnographic field
reports, the procedures for observation and documentation. This will enable
assessment of their accuracy and overall data quality.
Further to the FAIR principles, DMPs should also address:
# ALLOCATION OF RESOURCES
_What are the costs for making data FAIR in your project?_
_How will these be covered? Note that costs related to open access to research
data are eligible as part of the Horizon 2020 grant (if compliant with the
Grant Agreement conditions)._
Open Access costs are budgeted in the INFORM grant. If necessary it will be
possible to apply for coverage of additional costs through the UCL Research
Office.
_Who will be responsible for data management in your project?_
The INFORM Management Board carries overall responsibility for data
management. Within the Management Board the individuals carrying principal
responsibility are Eric Gordy (Project Coordinator), Predrag Cvetičanin
(Research Coordinator) and Klavs Sedlenieks (Outreach Coordinator).
_Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?_
As already stated, decisions on what data will be kept and for how long will
be made by the Management Board of the INFORM project, but resources for
long-term data preservation are not yet determined (these will be determined
during the final year of the project). After the expiry of the project period,
responsibility for data preservation resides with the Project Coordinator.
# DATA SECURITY
_What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?_
All publicly shared data, in addition to being available through the INFORM
website and appropriate repositories, are also held by the coordinating
institution (UCL) and the institution coordinating the field research (CESK).
In the event of data loss or corruption this form of triangulation allows for
recovery of data.
The publicly shared project data includes no sensitive data calling for
special measures related to security.
Data that may contain sensitive information (ethnographic field data and
qualitative interview data) will be stored in an encrypted form in password-
protected environments. Procedures will be devised to ascertain maximum
separation of any identifiers and the data.
_Is the data safely stored in certified repositories for long term
preservation and curation?_
The publicly shared data will be deposited in the UK Data Archive
(UKDA) for long-term preservation and curation.
# ETHICAL ASPECTS
_Are there any ethical or legal issues that can have an impact on data
sharing? These can also be discussed in the context of the ethics review. If
relevant, include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA)._
The research proposal is being reviewed by UCL Research Ethics Committee. We
have identified the main area in which particular sensitivity is required as
the safety/privacy of those people subject to the research.
We will not be working with people who would be regarded as vulnerable by any
standard definition, such as children. Most of the ethnographic portions of
the study involve the collection of data based on the knowledge, attitudes and
practices of adults that, while not necessarily personal, may be sensitive in
the sense that people are persecuted for their political views in many parts
of the world. This requires a critical commitment to the preservation of
anonymity, not only in the final presentations of our data but also in the
storage of this data prior to and after publication. The data made available
to researchers will be anonymised data. Raw data will not be included in the
project archive.
In practical terms, this requires ensuring that all fieldworkers are trained
in methods of keeping and storing their field notes in formats (electronic or
otherwise) that would not allow a third party to identify persons either from
names or other distinguishing features such as job titles. We will work
together with UCL Research Ethics Committee to ensure that we consistently
abide by data protection concerns with respect to the safe storage of personal
data.
The sample of research sites includes non-EU countries, some of which are
lower-income countries. Although standard schemes of benefit-sharing do not
apply, consciousness of inequalities in the relationship between international
researchers and domestic publics forms an essential element of the ethos of
ethnographic research. The experienced researchers on the project will be
sensitive to power differentials inherent in this type of international
research. Our research plan involves the engagement of domestic academics in
the research as members of the project advisory board and as local advisors to
the field researchers (as described above). We also intend to seek internal
university funding for workshops in the host countries of the research at
which findings will be presented and shared with the local academic and policy
communities.
Because the project consortium consists of researchers from a range of
countries, communication and coordination of ethical standards will be
essential. Guidelines in compliance with EU standards and UCL procedures will
be distributed to all researchers as part of the project coordination.
_Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?_
The survey and interview instruments include information to respondents
regarding the anonymisation of data and data preservation.
# OTHER ISSUES
_Do you make use of other national/funder/sectorial/departmental procedures
for data management? If yes, which ones?_
In the final instance policies will be determined in compliance with the
guidelines provided by the coordinating institution. The overseeing office is
UCL Research Data Services ( _https://www.ucl.ac.uk/research-
itservices/research-data-service_ ) , which assures compliance with
procedures outlined by the UCL Research Data Policy
( _https://www.ucl.ac.uk/library/research-support/researchdata/policies_ ) .
# Introduction and aim
The ultimate goal of the GetReal Initiative is to drive the sustainable
adoption of tools, methodologies and best practices from IMI GetReal and
thereby increase the quality of real-world evidence (RWE) generation in drug
development and regulatory/Health Technology Assessment (HTA) processes across
Europe. In this way, the project is committed to maximizing the societal value
of public and private investments in the IMI GetReal project.
The GetReal Initiative Description of Action (DoA) includes a Data Management
Plan (DMP) as deliverables 3.13, 3.14 and 3.15, as part of WP3. Together with
the GetReal Initiative Consortium Agreement, the DMP provides a general
framework regarding data management, data protection, data ownership,
accessibility and sustainability requirements.
Overall, the DMP provides a description of the data management, regarding
generated research data, that will be applied during the GetReal Initiative
project including:
* A description of the data repositories, who is able to access the data, and who owns the data.
* The main DMP elements for each of the studies contributing (or sharing data) to GetReal Initiative and its tools.
* The time period for which data will be stored.
* The standards for data collection, validation and evaluation.
* The possibilities of and conditions for sharing data.
* The implementation of data protection requirements.
The DMP is an evolving document; therefore, some aspects may be updated and/or
expanded in later versions of the document. An updated version of the document
will be uploaded as deliverables 3.14 and 3.15 in M12 and M24 respectively.
In summary, the GetReal Initiative DMP gives guidance and provides an
oversight of general data management, while each study needs to provide
specific data management information including, but not limited to, data
capture systems, data analysis systems, data protection and data privacy
measures, including description of de-identification of data sets and access
rules. In cases where the research results are not open access a justification
needs to be provided.
The following descriptions regarding research data and personal data are used:
**Research data** 1
_Refers to information, in particular facts or numbers, collected to be
examined and considered as a basis for reasoning, discussion, or calculation.
In a research context, examples of data include statistics, results of
experiments, measurements, observations resulting from fieldwork, survey
results, interview recordings and images. The focus is on research data that
is available in digital form. Users can normally access, mine, exploit,
reproduce and disseminate openly accessible research data free of charge._
**Personal data** 2
_Personal data is any information that relates to an identified or
identifiable living individual. Different pieces of information, which
collected together can lead to the identification of a particular person, also
constitute personal data. Personal data that has been de-identified, encrypted
or pseudonymised but can be used to reidentify a person remains personal data
and falls within the scope of the law._
_Personal data that has been rendered anonymous in such a way that the
individual is not or no longer identifiable is _no longer_ considered personal
data. For data to be truly anonymised, the anonymisation must be irreversible.
_
# General principles
This is the first version of the DMP for GetReal Initiative. The DMP is a
working document and will evolve during the course of the project. The
document will regularly be updated to reflect the project progress. Table 1
lists the deliverables related to the multiple versions of the DMP for GetReal
Initiative.
_Table 1 GetReal Initiative DMP deliverables_
<table>
<tr>
<th>
**Deliverable no.***
</th>
<th>
**Deliverable name**
</th>
<th>
**WP no.**
</th>
<th>
**Short name of lead participant**
</th>
<th>
**Type**
</th>
<th>
**Dissemination level**
</th>
<th>
**Delivery date**
</th> </tr>
<tr>
<td>
3.13
</td>
<td>
Data Management
Plan (M6)
</td>
<td>
3
</td>
<td>
UMCU
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
November
2018
</td> </tr>
<tr>
<td>
3.14
</td>
<td>
Data Management
Plan (M12)
</td>
<td>
3
</td>
<td>
UMCU
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
June 2019
</td> </tr>
<tr>
<td>
3.15
</td>
<td>
Data Management
Plan (M24)
</td>
<td>
3
</td>
<td>
UMCU
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
May 2020
</td> </tr> </table>
_DMP = Data Management Plan; WP = Work Package; R = Document, Report; PU =
public_
_*Accordingly to the Description of Action for GetReal Initiative, page 37_
The DMP follows the ‘FAIR data principle’, i.e. data should be findable,
accessible, interoperable and reusable 3 . The general principles on access
rules are defined in the GetReal Initiative Consortium Agreement (Section 8
Intellectual property – Access rights).
GetReal Initiative makes use of one information exchange platform, the GetReal
Initiative member area. The member area is a password-secured web space where
consortium members can store and exchange reports and documents. The platform
is not meant for sharing patient research datasets. The member area is hosted
by UMCU, contact person: Florian van der Nolle ( _f.l.vandernolle-
[email protected]_ ).
# Overview of data types generated and collected in GetReal Initiative
The GetReal Initiative will generate and collect the following types of data:
user surveys, interview reports, usage data, website analytics, market data,
and recruitment data. No personal data will be transferred to or from a non-EU
country or international organisation. Interview reports are the most likely
to contain confidential information.
The individual research projects and their main researchers will ensure
appropriate data storage and protection. Therefore, they are asked to complete
a small dataset-specific DMP table, as described in this DMP (chapter 4). The
processes will be worked out and implemented between M6 and M24, in
collaboration between WP3 Project management and the main researchers.
A summary of the data generated in the project can be found in Table 2; this
table will be updated over the course of the project. All generated data are
expected to be useful for the GetReal Initiative project, especially for the
sustainability of the tools, methodologies and best practices generated in IMI
GetReal.
_Table 2 Summary of data generated in GetReal Initiative_
<table>
<tr>
<th>
**Group**
</th>
<th>
**Task**
</th>
<th>
**Objective**
</th>
<th>
**Design**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
Task Force A
</td>
<td>
1.2.3
</td>
<td>
Evaluation and maximization of the value/use of the PragMagic tool
</td>
<td>
User data
</td>
<td>
Numerical + textual
</td> </tr>
<tr>
<td>
Task Force A
</td>
<td>
1.2.2
</td>
<td>
Identify reasons for (not) conducting pragmatic trials, as well as describing
encountered challenges in the consideration, planning, conduct and evaluation
of pragmatic trials.
</td>
<td>
Survey
</td>
<td>
Textual
</td> </tr>
<tr>
<td>
Task Force A
</td>
<td>
1.2.2
</td>
<td>
Substantiate results from surveys and provide more in-depth insights into
specific challenges and their possible solutions
</td>
<td>
Interviews
</td>
<td>
Multimedia + textual
</td> </tr>
<tr>
<td>
Task Force B
</td>
<td>
1.3.3/ 1.3.3
</td>
<td>
Determine requirements for ADDIS uptake and implement/test improvements to the
ADDIS tool
</td>
<td>
User data
</td>
<td>
Numerical + textual
</td> </tr>
<tr>
<td>
Task Force B
</td>
<td>
1.3.3
</td>
<td>
Determine requirements for ADDIS uptake
</td>
<td>
Interviews
</td>
<td>
Multimedia + textual
</td> </tr>
<tr>
<td>
Work
Package 2
</td>
<td>
2.1/2.3
</td>
<td>
Conducting a market research and establishing the value of the GetReal brand
and the tools
</td>
<td>
Interviews
</td>
<td>
Multimedia + textual
</td> </tr>
<tr>
<td>
Work
Package 3
</td>
<td>
3.6
</td>
<td>
Maintaining the public website
</td>
<td>
User data
</td>
<td>
Numerical + textual
</td> </tr> </table>
# Operational data management requirements for GetReal Initiative research
projects
All individual studies within the project will need to complete the study-
specific DMP table (Table 3). The table will be shared on the member area. The
data owners are responsible for the completion of the table. Thereafter, the
completed table shall be shared with the data management team in WP3, who will
review it for completeness and for compliance with the DMP and the CA.
Furthermore, the completed tables will be added to the annex of future
versions of the DMP.
The data owners of the respective datasets are responsible for complying with
all legal and ethical requirements for data collection, handling, protection
and storage. This includes adherence to regulations and guidelines such as
(but not limited to) the EU Clinical Trials Directive 2001/20/EC, Good
Clinical Practice (GCP) and Good Pharmacoepidemiology Practice (GPP), as
applicable.
_Table 3 study specific DMP table (adapted from the Data Management General
Guidance of the DMP Tool_ 4 _)_
<table>
<tr>
<th>
**General Overview**
</th>
<th>
</th> </tr>
<tr>
<td>
**Title**
</td>
<td>
_Name of the dataset or research project that produced it_
</td> </tr>
<tr>
<td>
**Task**
</td>
<td>
_GetReal Initiative task/subtask where dataset was generated_
</td> </tr>
<tr>
<td>
**Data owner**
</td>
<td>
_Name(s) and address(es) of the organizations or people who own the data_
</td> </tr>
<tr>
<td>
**Start and end date**
</td>
<td>
_Start and end date of the study_
</td> </tr>
<tr>
<td>
**Methods**
</td>
<td>
_Explain how data is generated and analysed, listing equipment and software
used_
</td> </tr>
<tr>
<td>
**Type of data**
</td>
<td>
_User data or Interview data;_
_Does the dataset contain personal data?_
</td> </tr>
<tr>
<td>
**Processing**
</td>
<td>
_How is the data altered or processed (e.g. normalized), including de-
identification procedures_
</td> </tr>
<tr>
<td>
**Sources**
</td>
<td>
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
_E.g. Citations to data derived from other sources, including details of where
the source data is held and how it was accessed._
</td> </tr>
<tr>
<td>
**Funder**
</td>
<td>
_Information regarding financial support such as research grants, or indicate
that the data owner funds the study._
</td> </tr>
<tr>
<td>
**Content description**
</td>
<td>
</td> </tr>
<tr>
<td>
**Subject**
</td>
<td>
_Describe the subjects or content of the data_
</td> </tr>
<tr>
<td>
**Language**
</td>
<td>
_All languages used in the dataset_
</td> </tr>
<tr>
<td>
**Variable list and codebook**
</td>
<td>
_List all variables in the data file_
</td> </tr>
<tr>
<td>
**Data quality**
</td>
<td>
_Description of data quality standards and procedures to assure data quality_
</td> </tr>
<tr>
<td>
**Technical description**
</td>
<td>
</td> </tr>
<tr>
<td>
**File inventory**
</td>
<td>
_Files associated with the project, including extensions_
</td> </tr>
<tr>
<td>
**File formats**
</td>
<td>
_Format of the files_
</td> </tr>
<tr>
<td>
**File structure**
</td>
<td>
_Organization of the data file(s)_
</td> </tr>
<tr>
<td>
**Checksum** _(if applicable)_
</td>
<td>
_A digest value computed for each file that can be used to detect changes (a minimal sketch follows this table)_
</td> </tr>
<tr>
<td>
**Necessary software**
</td>
<td>
_Names of any special-purpose software packages required to create, view,
analyse, or otherwise use the data_
</td> </tr>
<tr>
<td>
**Access**
</td>
<td>
</td> </tr>
<tr>
<td>
**Rights**
</td>
<td>
_Any known intellectual property rights, statutory rights, licenses, or
restrictions on use of the data_
</td> </tr>
<tr>
<td>
**Access information**
</td>
<td>
_Where and how your data can be accessed by other researchers_
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
_Description of how data will be shared, including access procedures_
</td> </tr>
<tr>
<td>
**Ethical and legal issues**
</td>
<td>
_Description of any ethics and legal issues associated with the dataset, if
any_
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
_Description of how and to what extent long-term preservation of the data is
assured. This includes information on how this long-term preservation is
supported._
</td> </tr> </table>
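As an illustration of the checksum entry above, here is a minimal Python sketch (the folder name is a hypothetical placeholder) that computes a SHA-256 digest for each file in a dataset directory; recomputing the digests later allows changes to be detected:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical dataset folder; print one checksum line per file.
for file_path in sorted(Path("dataset").rglob("*")):
    if file_path.is_file():
        print(f"{sha256_of(file_path)}  {file_path}")
```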
# Sharing and secondary use of data generated within GetReal Initiative
The information collected from the completed study-specific DMP tables will be
uploaded to the member area. This will enable easy identification of the
available datasets and their respective data owners by consortium members. The
data owners are responsible for appropriate findability outside the
consortium.
To achieve the objectives of GetReal Initiative, it is imperative to follow
the collaborative approach the partners agreed on when signing the consortium
agreement. This includes the necessity to share data from the individual
studies for the implementation of the project, while respecting data
protection and the intellectual property of the partners’ work. For those
individual studies within the project that need to use data generated in
another task, the metadata will contain the data owner’s contact details, so
that a requester can reach out if they need access to the results.
All data generated within the project will be made accessible for
verification and re-use in subsequent research in due time, taking into
account intellectual property rights. When third parties want to use data
that was generated or collected as part of the GetReal Initiative project, the
consortium PMO office should be contacted via Florian van der Nolle (
[email protected]_ ). Granting access to external parties
will be considered by the Coordinating Team (CT) and the data owner; decisions
are made on a case-by-case basis. A separate procedure for accessing
consortium data after the end of the project will be described in the final
version of the Data Management Plan (D3.15, M24).
# Personal data
The collection, handling, storage and exchange of personal data will be
conducted in a secure manner, through secure channels, and under the
applicable international, IMI and national laws and regulations. Only data of
relevance for the proposed research will be collected; no excess data will be
stored. GetReal Initiative researchers commit to the highest standards of data
security and protection in order to preserve the personal rights and interests
of study participants. They will adhere to the provisions set out in:
* Regulation (EU) 2016/679 - General Data Protection Regulation (GDPR) 5
* Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) 6
# Ethical aspects
The partners of GetReal Initiative and the associated partners are required to
adhere to all relevant international, IMI and national legislation and
guidelines relating to the conduct of studies. ‘Ethics requirements’ are set
out in more detail by Work Package 4 in Deliverables 4.1-4.4.
0137_EPRISE_732695.md
# INTRODUCTION
To boost the benefits of public investment in research funded under H2020, the
European Commission wants to improve access to scientific information,
including both publications and research data. H2020 already mandates open
access to all scientific publications and, from 2017, is running the Open
Research Data (ORD) pilot. The ORD pilot’s aim is to enable and maximise
access to and re-use of research data generated by Horizon 2020 projects.
Specifically, open access following the FAIR principles is encouraged, i.e.
research data should be Findable, Accessible, Interoperable and Reusable.
The pilot applies across all thematic areas of the H2020 work programme,
including Research and Innovation Actions (RIA), Innovation Actions (IA) and
Coordination and Support Actions (CSA).
As EPRISE is a CSA, peer-reviewed scientific publications do not fall within
the scope of the project. However, the deliverables containing the most
relevant results will be made available on the project website, as will all
the dissemination material.
The project will produce research data, and this deliverable describes the
types of data, how they will be collected, processed and/or generated, which
methodology and standards will be applied, and which data will be shared and
preserved, and with which tools. This Data Management Plan is not a static
document and will be updated before the project reviews.
The EPRISE project aims to coordinate regional and European strategies and
financial resources. It also looks to support SMEs working in the photonics
industry to overcome barriers in four markets: Medical Technologies,
Pharmaceuticals, Agriculture and Food. Thus, it will produce two categories of
data, namely data about regions and data about markets. A selection of this
data will be made open access by means of an online database during and after
the end of the project.
# DESCRIPTION AND PURPOSE
Research data will be organised into three datasets. For each dataset, a brief
description is provided, including the type of data that will be collected or
generated, its purpose and relation to the objectives of the project, the data
utility.
## Regional Photonics Dataset
This dataset arises from the activities of the work package WP2 “Regional
Grounds and Opportunities”. It contains information on:
* Regional photonics ecosystem, namely companies, universities, research technology organisations (RTOs), clusters and networks;
* Regional Research and Innovation Smart Specialization Strategy (RIS3) and previous regional/national/European investments in the four target markets.
Data is both qualitative (description of organisations’ core activity,
investigation into RIS3) and quantitative (company size, investment amount).
The purpose of this data collection is:
* Mapping photonics activities and actors in the majority of regions of the eight countries covered by the EPRISE consortium. This relates to the project objective of raising regional authorities' awareness about the potential of photonics-based technologies and their applications on the four target markets;
* Writing success stories with the aim of highlighting regions’ photonics sector;
* Producing case studies to profile European/regional co-funding scenarios in relation to the project objective of promoting co-funding initiatives and coordinating regional photonics strategies within Europe.
The dataset has a twofold utility. Throughout the project, generated data
(success stories) will be useful to public authorities and policy makers to
showcase their regional photonics ecosystem to peers and potential partner
regions with the aim of developing new collaborations. After the project, the
open access database issued from the dataset will provide information useful
for the whole photonics community.
## Market Data
Information on Medical Technologies, Pharmaceuticals, Agriculture and Food
sectors includes data related to the access to these markets for photonics
companies, data about specific market experts, data about system integrators
and end-users. The aim is to cover the whole value chain. This data will be
collected and generated through the WP3 (“Go to Market services”) and WP4
(“Photonics SMEs networking”) activities and will be organised into two
separate datasets.
### Go-to-Market Dataset
This dataset consists of qualitative data, namely the results of a survey of
the market barriers that companies developing photonics-based products face in
the four target markets. The survey also contains information about the
potential interest of companies, integrators and end-users in participating in
the events organised in the framework of EPRISE (“European Photonics
Roadshow”).
The purpose of the dataset is:
* identifying a list of Go-to-Market challenges that companies encounter with the aim of organising events tailored to their needs and expectations;
* producing a list of companies, integrators and end-users to pre-arrange B2B meetings for the Roadshow, because one of the project objectives is to boost collaboration along the whole value chain.
As a result, the dataset will be useful to both SMEs and integrators. It will
encourage business development in the form of testing or adaptation of new or
existing photonics technologies on the basis of the end-users’ feedback.
### Dataset of Experts
In order to set-up a network of market specific experts in the chosen target
markets, WP3 will create a dedicated dataset, including their contact details
and area of expertise.
The purpose of this qualitative dataset is the organisation of Go-to-Market
sessions during the European Photonics Roadshow. By matching the list of Go-
to-Market challenges (Go-to-Market Dataset) to the suited expert, SMEs will be
provided with concrete solutions on how to overcome market barriers. Also,
project partners will be able to search the dataset for an expert to help
answer requests which come in from SMEs during the project. This fits with the
project aim of assisting SMEs in accessing the four target markets through
qualified advice tailored to their needs.
Thanks to this dataset, partner clusters will rely on a network of experts
when providing their members with business support as well as when organising
future events based on a well-established format. Additionally, companies will
enhance their business skills, while experts will benefit from increased
visibility.
# DATA COLLECTION
This section describes how the data will be collected and generated, the
origin of the data, how the data will be organised during the project,
including naming conventions, version control and folder structure. It
outlines how the consistency and quality of the data collection will be
controlled and documented to help secondary users to understand and re-use it.
Finally, it details how data will be stored and backed up during the research.
## Methodology
The methodology for the gathering of data about regional photonics ecosystems
was established by WP2’s leader and co-leader in the first two months of the
project. The dataset will be built-up by merging information (basically name
of the company/research institute, country, postal and website address) from
the already published Photonics 21, EPIC (European Photonics Industry
Consortium) and OASIS (Open the Access to Life Science Infrastructures for
SMEs) databases. This data will be checked, updated and improved by including
additional information such as home region of photonics companies,
universities, networks and clusters, contact details when available, company
size, target markets, keywords or description of the company core activity.
Companies will be classified as follows:
* Companies fabricating and developing photonics components or products;
* Companies manufacturing photonics-enabled products/systems;
* Industries dependent on photonics for product manufacturing.
Each partner will collect this information for as many regions as possible in
their country. New data will be derived from partner clusters’ databases,
cluster fieldwork and, as detailed in Section 4.2, from direct online
submission by organisations on the project website. Financial data and
information about a company’s evolution and maturity will be extracted from
additional existing national databases (for example “Allabolag” in Sweden or
“Corporama” in France). Regarding regional RIS3s and information on previous
regional and European investments and funding, data published by the regions
themselves (calls for bids, announcements, reports, roadmaps) or results of
other EU funded projects will be exploited. The latter will be obtained via
the CORDIS portal or by establishing collaborations with other H2020 projects,
such as Europho21.
Data for the Go-to-Market Dataset will be collected via interviews with
companies, integrators and end-users. The interviews will be conducted by
partners during the first year of the project. An electronic template to guide
the interviews has been produced by the WP4’s and WP3’s leaders. It will be
used by every partner during the regional events and their organisation and
follow-up. The questionnaire contains questions that are common to all
interviewees (for example their interest in participating in the Roadshow),
others concerning end-users only (their awareness of photonics) or photonics
companies only (questions about Go-to-Market barriers and time to market).
Finally, the Datasets of Experts will be based on the EPRISE consortium
fieldwork and previous partners' contacts.
## Standards enabling data re-use
The management procedures enabling future use of data are described below:
* _Data formats_ : to facilitate data re-use standard file formats such as Excel and Word files (xls and docx extensions) will be used;
* _Metadata provision:_ The web-based tool developed for data sharing is described in the Section 4.2. No metadata is needed to re-use data imported from this online database.
* _Documentation provision_ : For online submission of data (see Section 4.2), procedural information will be provided on the dedicated webpage, namely how to fill in the data form, how to extract data, how gathered information will be used and how confidentiality aspects will be managed. A brief description of the dataset and its purpose will be included as well.
* _Naming conventions_ : Files will be named according to their content for easy identification. The name of the file version uploaded to the web-based tool chosen for data storage and internal sharing (see subsection 3.3) will be preceded by the prefix “EPRISE_GANo732695”. Partners will attach the suffix “_initials” to the original filename when editing files (a sketch of the convention follows this list).
* _Version handling, folder structure_ : If multiple versions of a file are kept, the version number will be specified after the filename. In the internal shared space data will be stored using a folder structure following WP organisation. Once the files are uploaded, versioning will be automatically managed by the web-based tool in the case of further uploads or editing and co-editing performed directly in the shared space.
* _Quality assurance:_ project management structure, decentralized responsibility and crosscheck of the results (see deliverable D1.1, “Project management Guide”) will ensure data quality.
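As a hypothetical illustration of the naming and versioning convention described above (the helper function, example filename and extension are assumptions, not part of the EPRISE tooling):

```python
def eprise_filename(content, version=None, editor_initials=None,
                    extension="xlsx"):
    """Build a filename following the EPRISE convention sketched above."""
    name = "EPRISE_GANo732695_" + content
    if version is not None:
        name += f"_v{version}"          # version number after the filename
    if editor_initials is not None:
        name += f"_{editor_initials}"   # suffix attached by the editing partner
    return f"{name}.{extension}"

# e.g. EPRISE_GANo732695_RegionalPhotonicsDataset_v2_AB.xlsx
print(eprise_filename("RegionalPhotonicsDataset", version=2,
                      editor_initials="AB"))
```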
## Storage
Data collected and generated during the project as well as deliverables and
dissemination materials will be stored in an online space shared by partners.
A Microsoft Sharepoint site was set-up during the first months of the project
in the framework of the WP5 (“Dissemination and communication activities”).
This server-based web-application enables internal communication and exchange
of data and documents. The site is hosted by the WP5 leader on a server
located at his Sedgefield site in the UK. Stored files are regularly backed-up
(daily and weekly with the backups held off-site) and it is possible to track
their history, compare different file versions and restore previous versions.
It is accessible to all team members of each partner working on the project
via a personal account.
# DATA SHARING
This section focuses on data sharing, including which data will be retained,
how it will be made accessible and how it will be preserved during the project
and beyond the lifetime of the grant. It determines whether access will be
public or restricted to specific groups and how ethics and legal compliance
will be managed. The tools set-up for enabling re-use, procedures for
accessing existing data and submitting new data are also described. Finally, a
long-term preservation plan for the data is presented.
## Data Selection and Confidentiality
_Regional Photonics Dataset_
Information contained in the Regional Photonics Dataset will be partially
shared. The adopted selection criteria for data on photonics actors are the
following:
* Name of the organisation (company, RTO, network, cluster), country, region, postal and website addresses, target markets, and keywords/description will be open access;
* Company size and internal company classification will be used only within the consortium. Company classification could affect the data organisation on the online database described in Section 4.2;
* Contact details will be kept confidential (GDPR compliant) as they are personal data which is not publicly available. The contact details of the related cluster will be provided instead. Confidential details about company activity and strategy or trade secrets of new data issued from partner clusters’ databases will not be included in the company description.
Additionally, organisations will be informed about which data will be made
publicly visible and about the possibility of obtaining a correction of
inaccurate data or opting out at any moment. Companies submitting data
directly on the website will be informed about the applied confidentiality
policy when completing the form via dedicated documentation (see Sections 3.3
and 4.2).
Other regional data, namely information on RIS3, success stories and case
studies will be published only in the form of public deliverables or
dissemination material.
_Market Datasets_ :
The use of the Go-to-Market dataset will be restricted to the consortium.
Information will be exploited to organise business events and to write market
booklets for dissemination. As a general rule, questionnaire results published
in market booklets will be made anonymous. In case of publication of
information susceptible to disclosing confidential data (testimony, Go-to-
Market session outcomes) formal consent will be asked.
Regarding the Dataset of Experts, a mid-term (Month 15) version will be
published in the form of confidential deliverable made accessible only to
partners to organise the events and to the European Commission. An updated
version will be made open access in the last project period. Experts’ contact
details will be published in the online database only after having received
their permission.
Finally, depending on the questionnaire outcomes, it will be decided whether
it is worth adding some of the data about integrators and end-users included
in the Go-to-Market Dataset to the Regional Photonics Dataset. The personal
data protection rules mentioned previously for the Regional Photonics Dataset
will be applied.
## Accessibility
The Regional Photonics Dataset and the Dataset of Experts will be shared via
an online database that will be published on the project website (
_https://eprise.eu/_ ) . The former will be made available at the end of the
first year of the project (Month 12), the latter at the end of the second year
(Month 24). Online data will be regularly updated as the activities of the WP2
and WP3 progress. The template used by partners for interviews with companies,
integrators and end-users (see Section 3.1) will be adapted and made available
on the website as an online questionnaire for external users from the second
year of the project.
A registration page has been designed on the website to allow organisations or
experts to directly provide their information to the EPRISE project with
permission to use it for the purposes described in this document (Sections 2.1
and 2.2). After they register with the site, they will be able to log in and
create or modify their own profile at any time by filling in a web form.
All data collected online will be stored encrypted in a secure area of the
server which is hosting the website. The data will be exported securely at
regular intervals and then saved on the Sharepoint site for partners to access
and use it. Data stored in the Sharepoint site will be processed following the
selection procedures described in Section 4.1 and exported to the online
database. Data is stored in either csv or xls format and the standard language
for relational database management systems Structured Query Language (SQL) is
used. Website users will be able to search with different criteria depending
on the database section (section dedicated to the Regional Photonics Dataset
or section dedicated to the Dataset of Experts). Search results will be
displayed on a results webpage.
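As a rough illustration of the SQL-based storage and search described above, the sketch below uses Python's built-in sqlite3 module; the table name, columns and search criterion are assumptions chosen to mirror the open-access fields listed in Section 4.1, not the actual EPRISE schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the website's SQL database
conn.execute("""
    CREATE TABLE regional_photonics (
        name TEXT, country TEXT, region TEXT,
        website TEXT, target_markets TEXT, keywords TEXT
    )
""")
conn.execute(
    "INSERT INTO regional_photonics VALUES (?, ?, ?, ?, ?, ?)",
    ("Example Photonics Ltd", "Sweden", "Skane",
     "https://example.org", "MedTech", "laser diagnostics"),
)

# Search by different criteria, e.g. all organisations in a target market.
for row in conn.execute(
        "SELECT name, region FROM regional_photonics "
        "WHERE target_markets LIKE ?", ("%MedTech%",)):
    print(row)
```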
## Long-term preservation
In addition to the setting-up of the online database, public deliverables and
dissemination materials will be published on the project website. At the end
of the project, a Smart Book based on market data, including Go-to-Market
session outcomes, will be uploaded on the website in the form of an ebook.
This will ensure that the data will be preserved as long as the website
exists. At the end of the project, the Smart Book will be also distributed as
a long-lasting publication with an associated ISBN code (international
standard ISO 2108) through a service provided by the CNR partner.
The Photonics Dataset and the Dataset of Experts have long-term value and will
be preserved and curated beyond the lifetime of the project. The EPRISE
consortium is discussing with Photonics21 the possibility of transferring
Regional Photonics Dataset information to their website for long-term
preservation. Photonics21 is renewing its website; therefore, more
details about this collaboration will be available in October 2017. The
information contained in the Dataset of Experts will be included in partner
clusters’ databases for further exploitation in business event organisation
and business support to their members. Partners are considering transferring
data about experts to the Photonics21 website as well. Further details will be
provided in the updated version of this Data Management Plan.
# RESPONSIBILITIES AND RESOURCES
WP leaders (WP2, WP3 and WP4) are in charge of the management of the data
collected and generated through the activities of their own work package. This
includes data storage in the space shared by partners and related procedures
(folder structure, naming, versioning, documentation provision) as well as
data selection and the protection of confidential data (anonymization, consent
request). As the WP5 deals with dissemination and communication activities,
its leader will manage the technical aspects such as development and
maintenance of the web-based tool developed for gathering and sharing
information (database) and related infrastructures (website, webserver). He
will also be in charge of the preservation of any digital content, including
data storage (Sharepoint site set-up) and the back-up of the data. The project
Manager will coordinate all the activities related to data management.
During the project, the costs related to the Sharepoint site and the website
will be covered by the WP5 leader’s budget. 25 user licences for the
Sharepoint site have been purchased; those licences are not time-limited. The
Sharepoint site will be kept open for six months after the end of the project
to allow partners to download copies of any information they want to keep.
After that, the site will be closed and the data archived by the WP5 leader.
The project website is hosted by a web-hosting company, which charges hosting
fees and fees for ownership of the domain name. The partners will decide
commercially which partner(s) will host the site after the end of the project.
The data will then be transferred to that partner’s preferred web host just
prior to the end of the project. Smart Book publication will be free of
charge.
0138_SHIP2FAIR_792276.md
# 1. INTRODUCTION
The SHIP2FAIR Data Management Plan (DMP) gives an overview of the data and
information collected throughout the project and shows the interaction and
interrelation of the data collecting activities within and between the work
packages. The DMP will also link these activities to the SHIP2FAIR partners
and discuss their responsibilities with respect to all aspects of data
handling.
Furthermore, the SHIP2FAIR DMP will lay out the procedures for data
collection, consent, storage, protection, retention and destruction of data,
and confirm that they comply with national and EU legislation. The DMP will
ensure that the exchange of company and industry data is in full compliance
with the participating companies’ and industries’ internal data protection
strategies. This DMP aims at providing an effective framework to ensure
comprehensive collection and handling of the data used in the project.
Thereby, and wherever trade secrets of the participating companies and
industries are not violated, SHIP2FAIR strives to comply with the open access
policy of Horizon 2020.
The DMP is intended to be a living document which will be adjusted to the
specific needs of SHIP2FAIR throughout the project’s runtime and will be
adapted whenever appropriate.
This is the first version of the DMP, to be revised during the course of the
project within Task 1.1 Consortium Management, taking into account new data,
changes in consortium policies regarding innovation potential or the decision
to file a patent, and changes in the consortium composition and external
factors.
This plan will establish the measures for promoting the findings during
SHIP2FAIR’s lifecycle and will set the procedures for the sharing of project
data. Addressing the FAIR principles for research data (Findable, Accessible,
Interoperable and Re-usable), the SHIP2FAIR DMP will consider:
* Data set reference and name
* Data set description
* Standards and metadata
* Data sharing and handling during and after the end of the project
* Archiving and preservation (including after the end of the project)
The following document made use of the HORIZON 2020 FAIR DATA MANAGEMENT PLAN
TEMPLATE and was written with reference to the Guidelines to FAIR data
management in Horizon 2020 [1] and the GDPR (Regulation (EU) 2016/679).
# 2 SHIP2FAIR DATA SUMMARY
In line with the EU’s guidelines regarding the DMP, this document should
address, for each data set collected, processed and/or generated in the
project, the following characteristics: dataset description, reference and
name, standards and metadata, data sharing, archiving and preservation. At
this point in time, an estimation of the size of the data cannot be given. To
this end, the consortium has developed a number of strategies that will be
followed in order to address the above elements.
This section provides a detailed description of these elements in order to
ensure their understanding by the partners of the consortium. For each
element, we also describe the strategy that will be used to address it.
## 2.1 Data set description, reference and name
In order to be able to distinguish and easily identify data sets, each data
set will be assigned a unique name. This name can also be used as the
identifier of the data set.
All data files produced, including emails, include the term “SHIP2FAIR”,
followed by a file name which briefly describes the content, followed by a
version number (or the term “FINAL”), followed by the short name of the
organisation which prepared the document (if relevant).
Each data set that will be collected, processed or generated within the
project will be accompanied by a brief description.
## 2.2 Standards and metadata
This version of the SHIP2FAIR DMP does not include a compilation of all the
metadata about the data being produced in the SHIP2FAIR project, but there are
already several domains considered in the project which follow different
rules and recommendations. This is a very early-stage identification of
standards:
* Microsoft Office 2010 for text based documents (or any other compatible version) .doc, .docx, .xls, .xlsx, .ppt, .pptx. Also, especially where larger datasets need to be dealt with, .csv and .txt file formats will be used. All finished and approved documents will also be made available as .pdf documents.
* Illustrations and graphic design will make use of Microsoft Visio (Format: .vsd), Photoshop (Format: different types possible, mostly .png), and will be made available as .jpg, .psd, .tiff and .ai files.
* PFDs, PIDs and layouts will preferentially use inkscape.org, an open source software for vector graphics. (Format: .svg), and will be made available as .png, .jpg and .pdf files.
* MP3 or WAV for audio files.
* Quicktime Movie or Windows Media Video for video files.
These file formats have been chosen because they are accepted standards and in
widespread use. Files will be converted to open file formats where possible
for long-term storage.
Metadata will comprise two formats: contextual information about the data in a
text-based document, and ISO 19115 standard metadata in an XML file.
These two formats for metadata are chosen to provide a full explanation of the
data (text format) and to ensure compatibility with international standards
(xml format).
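As a simplified illustration of the XML half of this approach, the sketch below writes a heavily reduced, ISO 19115-style record using Python's standard library; the chosen element names are an assumption-based subset for illustration only, not a complete or validated ISO 19115 document:

```python
import xml.etree.ElementTree as ET

# Heavily simplified, assumption-based subset of an ISO 19115-style record.
record = ET.Element("MD_Metadata")
ET.SubElement(record, "fileIdentifier").text = "SHIP2FAIR-dataset-001"
ET.SubElement(record, "language").text = "eng"
ET.SubElement(record, "dateStamp").text = "2018-10-01"
identification = ET.SubElement(record, "identificationInfo")
ET.SubElement(identification, "title").text = "Example SHIP2FAIR dataset"
ET.SubElement(identification, "abstract").text = (
    "Contextual description of the data, kept alongside the text document."
)

ET.ElementTree(record).write("metadata.xml", encoding="utf-8",
                             xml_declaration=True)
```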
## 2.3 Data sharing, access and preservation
The digital data created by the project will be curated differently depending
on the sharing policies attached to it. For both open and non-open data, the
aim is to preserve the data and make it readily available to the interested
parties for the whole duration of the project and beyond. A public Application
Programming Interface (API) will be provided to registered users, allowing
them access to the platform. Database compliance checks aim to ensure the
correct implementation of the security policy on the databases, verifying
vulnerabilities and incorrect data. The target is to identify excessive rights
granted to users and passwords that are too simple (or even the lack of a
password), and finally to perform an analysis of the entire database. At this
point, we can assure that at least the following measures will be considered
for assuring proper management of data (a small access-control sketch follows
this list):
* Dataset minimisation. The minimum amount of data needed will be stored so as to prevent potential risks.
* Access control list for user and data authentication. Depending on the dissemination level of the information, an Access Control List will be implemented, reflecting for each user the data sets that can be accessed.
* Monitoring and Log of activity. The activity of each user in the project platform, including the data sets accessed, is registered in order to track and detect harmful behaviour of users with access to the platform.
* Implementation of an alert system that informs in real time of the violation of procedures or about hacking attempts.
* Liability. Identification of a person who is responsible for keeping the stored information safe.
* When possible, the information will be also made available in the initiative that the EC has launched for open data sharing from research, which is ZENODO.ORG [2].
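To make the access-control and logging measures above concrete, here is a minimal Python sketch; the dataset names, user names and log format are purely illustrative assumptions, not the project's actual platform:

```python
import logging

logging.basicConfig(level=logging.INFO)  # stands in for the activity log

# Hypothetical ACL: dataset -> set of users allowed to read it.
ACL = {
    "site_energy_profiles": {"alice", "bob"},
    "public_reports": {"alice", "bob", "guest"},
}

def can_access(user: str, dataset: str) -> bool:
    """Check the ACL and log every access attempt (monitoring measure)."""
    allowed = user in ACL.get(dataset, set())
    logging.info("user=%s dataset=%s allowed=%s", user, dataset, allowed)
    return allowed

print(can_access("guest", "site_energy_profiles"))  # False, and logged
```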
The mechanisms explained in this document aim at minimising the risks related
to data storage.
### 2.3.1 Non-Open research data
The non-open research data will be archived and stored long-term in the EMDESK
portal administered by CIRCE. The CIRCE platform is currently being employed
to coordinate the project's activities and to store all the digital material
connected to SHIP2FAIR. If certain datasets cannot be shared (or need
restrictions), legal and contractual reasons will be explained.
### 2.3.2 Open research data
The open research data will be archived on the Zenodo platform (
_http://zenodo.org_ ). Zenodo is an EU-backed portal based on the well-
established GIT version control system ( _https://git-scm.com_ ) [3] and the
Digital Object Identifier (DOI) system ( _http://www.doi.org_ ) [4]. The
portal’s aims are inspired by the same principles that the EU sets for the
pilot; Zenodo thus represents a very suitable and natural choice in this
context. The repository services offered by Zenodo are free of charge and
enable peers to share and preserve research data and other research outputs of
any size and format: datasets, images, presentations, publications and
software. The digital data and the associated metadata are preserved through
well-established practices such as mirroring and periodic backups. Each
uploaded dataset is assigned a unique DOI, rendering each submission uniquely
identifiable and thus traceable and referenceable.
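A minimal sketch of depositing a file on Zenodo through its public REST API is shown below; the access token and file name are placeholders, and the exact endpoints should be checked against the current Zenodo API documentation:

```python
import requests

BASE = "https://zenodo.org/api"
TOKEN = "YOUR_ZENODO_TOKEN"  # placeholder personal access token

# 1. Create an empty deposition.
r = requests.post(f"{BASE}/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2. Upload a data file into the deposition's file bucket.
bucket = deposition["links"]["bucket"]
with open("dataset.csv", "rb") as fh:  # hypothetical file
    requests.put(f"{bucket}/dataset.csv", data=fh,
                 params={"access_token": TOKEN}).raise_for_status()

print("Deposition created; a DOI is assigned on publish:", deposition["id"])
```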
# 3 ALLOCATION OF RESOURCES
Data management in SHIP2FAIR will be done as part of the WP1 and CIRCE, as
project coordinator, will be responsible for data management in SHIP2FAIR
project. CIRCE has allocated a part of the overall WP1 budget and person
months to these activities. For the time being, the project coordinator is
responsible for FAIR data management. Costs related to open access to research
data are eligible as part of the Horizon 2020 grant (if compliant with the
Grant Agreement conditions). Resources for long term preservation, associated
costs and potential value, as well as how data will be kept beyond the project
and how long, will be discussed by the whole consortium during General
Assembly (GA) meetings.
# 4 DATA SECURITY
For the duration of the project, datasets will be stored on the responsible
partner’s storage system. Every partner is responsible for ensuring that the
data are stored safely and securely and in full compliance with European Union
data protection laws. After the completion of the project, all
responsibilities concerning data recovery and secure storage will pass to the
repository storing the dataset.
All data files will be transferred via secure connections and in encrypted and
password-protected form (for example with the open source 7-zip tool providing
full AES-256 encryption: http://www.7-zip.org/ or the encryption options
implemented in MS Windows or MS Excel). Passwords will not be exchanged via
e-mail but in personal communication between the partners.
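As one possible way to script the AES-256 encrypted, password-protected archives mentioned above, the sketch below uses the third-party pyzipper package (an assumption; the project text names the 7-zip tool itself, and the password shown is a placeholder to be exchanged in person):

```python
import pyzipper  # third-party package: pip install pyzipper

PASSWORD = b"exchange-in-person-not-by-email"  # placeholder password

# Write an AES-256 encrypted, password-protected archive.
with pyzipper.AESZipFile("transfer.zip", "w",
                         compression=pyzipper.ZIP_DEFLATED,
                         encryption=pyzipper.WZ_AES) as zf:
    zf.setpassword(PASSWORD)
    zf.write("dataset.csv")  # hypothetical file to transfer

# Recipient side: read back with the password received in person.
with pyzipper.AESZipFile("transfer.zip") as zf:
    zf.setpassword(PASSWORD)
    data = zf.read("dataset.csv")
```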
# 5 ETHICAL ASPECTS
This section deals with ethical and legal compliance issues, such as consent
for data preservation and sharing, protection of the identity of individuals
and companies, and how sensitive data will be handled to ensure it is stored
and transferred securely. Data protection and good research ethics are major
topics for the consortium of this project. Good research ethics entails taking
great care to prevent any situation in which sensitive information could be
misused, and this is what the consortium wants to guarantee for this project.
Research data containing personal data will only be disseminated for the
purpose specified by the consortium. Furthermore, all processes of data
generation and data sharing have to be documented and approved by the
consortium to guarantee the highest standards of data protection.
SHIP2FAIR partners have to comply with the ethical principles as set out in
Article 34 of the Grant Agreement, which states that all activities must be
carried out in compliance with:
* ethical principles (including the highest standards of research integrity — as set out, for instance, in the European Code of Conduct for Research Integrity including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct) and
* applicable international, EU and national law (in particular, EU Directive 95/46/EC).
## 5.1 Informed Consent
An Informed Consent Form will be handed out to any individual participating in
SHIP2FAIR interviews, workshops or other activities which may lead to the
collection of data which will subsequently be used in the project. An example
of the Informed Consent Form is shown in the Annex of this document.
## 5.2 Confidentiality
SHIP2FAIR partners must treat any data, documents or other material as
confidential during the implementation of the project. Further details on
confidentiality can be found in Article 36 of the Grant Agreement, along with
the obligation to protect results in Article 27.
## 5.3 Involvement of non-EU countries
The SHIP2FAIR non-EU partner (TVP) has confirmed that the ethical standards
and guidelines of Horizon 2020 will be rigorously applied, regardless of the
country in which the research is carried out. Activities carried out outside
the EU will be executed in compliance with the legal obligations of the
country where they are carried out, with the extra condition that the
activities must also be allowed in at least one EU Member State.
In SHIP2FAIR data will be transferred between the named non-EU country
(Switzerland) and countries in the European Union to allow for joined analyses
and storage of all data in the common database. All data transferred between
project partners (within or outside the EU) will be restricted to
pseudonymized or anonymized data and transfer will only be made in encrypted
form via secured channels.
## 5.4 Management of ethical issues
Personal data collected within this project will only be stored, analysed and
used anonymously. The individuals will be informed comprehensively about the
intended use of the information collected from them and have to agree to the
data collection for this scientific purpose with their active approval in the
form of a written consent.
The identity of any individual interviewed or otherwise engaged in the
project (e.g. by email correspondence) will be protected by this anonymization
of the data. The anonymization process guarantees that no particular
individual can be identified anymore. Statistics and tables of quantitative
research will be published in a manner such that it will not be possible to
identify any person.
The legal experts of this project will guarantee that this process, including
the information for the individuals about data protection issues, fully
complies with national and EU laws.
Data collection, storage, protection, retention and destruction will be
carried out through the intranet system of the project: EMDESK.
Interviewees/beneficiaries/recipients will be informed about data security,
anonymity and use of data as well as asked for accordance. Participation
happens on a voluntary basis.
# 6 TIMETABLE FOR UPDATES
After each Steering Committee meeting, the document will be updated, if
required. This is the current Steering Committee calendar:
# 7. LIST OF DATA SETS
This section will list the data-sets produced within the SHIP2FAIR project.
For each partner involved in the collection or generation of research data a
short technical description is given stating the context in which the data has
been created.
0139_MUSA_644429.md
# Executive summary
This document describes the Data Management Plan (DMP) for the _Multi-cloud
Security Applications_ (MUSA) Project (see Appendix A). This is the second
release of the DMP; during the project life cycle the DMP will be updated as
described in Section 1. This second version supersedes the previous
deliverable D6.3, whose content is partially replicated in this document. D6.3
is _Obsolete_ after the release date of this document.
This document describes the policy adopted for the management of data produced
during the project activity. It describes the types of data the project will
generate/collect, which standards will be used, how and in which cases the
data will be exploited, shared and/or made accessible to others, and how the
data will be curated and preserved, even after the project duration.
The document is structured as follows: the introductory Section 1 describes
the DMP life cycle and explains the context of the document. Then, Section 2
gives an overview of the expected type of data to be managed. Each of the
following sections (Section 3 and Section 4) is devoted to a type of data,
describing the policies adopted for their management.
# Introduction
## Purpose of the document
This document describes the _Multi-cloud Security Applications_ (MUSA) Project
Data Management Plans (DMPs), as introduced in the Horizon 2020 Work Programme
for 2014-15:
_“A further new element in Horizon 2020 is the use of Data Management Plans
(DMPs) detailing what data the project will generate, whether and how it will
be exploited or made accessible for verification and re-use, and how it will
be curated and preserved._
_The use of a Data Management Plan is required for projects participating in
the Open Research Data Pilot. Other projects are invited to submit a Data
Management Plan if relevant for their planned research.”_
The MUSA DMP is a live document, updated during the project as illustrated in
Figure 1, which assumes three incremental releases of the DMPs, at months M6,
M18, and M36 (end of project) respectively.
The DMP addresses the management procedures for each type of data generated in
the project. Similarly, the DMP description document, as also introduced in
Section 1.2, will contain a section for reporting each type of data produced
during the project, as per the H2020 reporting guidelines.
*Figure 1: DMP release cycle, with incremental releases at M6 (initial), M18 (second) and M36 (final).*
Any new version of the DMP will include all the information of the previous
release, which will be considered obsolete from the release date of the new
DMP; i.e., the DMP released at M18 will contain all the sections of the DMP
released at M6. Note that if the DMP released at M18 contains corrections to
sections in common with M6, the policies described in the DMP released at M18
are valid for the remainder of the project.
Each release of the DMP, including the initial release, will report the
management policies only for the data actually produced at the release date of
the DMP. The first section after the introduction will report the description
of all the types of data that the MUSA project is expected to produce.
## Structure of the document
The DMP contains an initial Section 2 that outlines the possible types of data
produced by the project. For each type of data, a dedicated section describes
the management policies; this release contains Section 3, devoted to
Scientific Publications and Section 4, which describes Public Reports.
Each section devoted to a type of data contains:
1. a description of the type of data;
2. a description of the standards adopted for that data and/or a description of their format (metadata);
3. a description of the way in which such data are shared;
4. a description of how to access to such data;
5. a description of how to discover such data;
6. a description of the mechanisms used in the MUSA project to archive and preserve such data.
The document includes in Appendix A the overview of MUSA motivation and
background, common to all MUSA deliverables.
## Relationships with other deliverables
All deliverables indirectly affect this document, due to the data they
contain. According to Section 1.2, this deliverable contains a section for
each type of data produced by the project.
## Contributors
All partners contributed to the definition of the policies adopted for the
data management plan, CeRICT and Tecnalia are the main contributors of the
deliverable.
The following documents are directly related to D6.7:
  * D6.3 _Data Management Plan_ (M6) contains the initial version of the DMP, delivered at month 6.
  * D6.9 _Final data management report_ (M36) will contain the final version of the DMP for the MUSA project, to be delivered at the end of the project in month 36.
# Expected Types of Data in MUSA
In order to collect the data types produced during the project, for this
second release of the DMP we focused on the description of the work and on the
results obtained in the first 18 months of the project.
Accordingly, this section reports the data types produced during the first
months of the project. Table 1 gives a very brief description of each of them
and a few considerations related to the policies to be applied to each type of
data. A complete section of the DMP is dedicated to each data type reported in
Table 1.
### Table 1: MUSA types of data available at M18
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Description**
</th>
<th>
**Notes**
</th> </tr>
<tr>
<td>
**Scientific**
**Publications**
</td>
<td>
Publications containing results of the project.
</td>
<td>
Scientific publications are subject to copyright, depending on the editorial
form they assume. DMP policies have to take into account both the need for
large diffusion and the need for a well-evaluated editorial collocation.
<tr>
<td>
**Public Reports**
</td>
<td>
MUSA public deliverables and eventual internal reports and whitepapers.
</td>
<td>
Eventual internal reports and whitepapers could be produced during the
project. DMP rules outline how they are made publicly available.
</td> </tr> </table>
Based on the work done in the first eighteen months of the project, we have
already identified a set of possible data types that will be made available in
the next releases of the DMP. Table 2 below reports these data types together
with a few considerations for them.
Note that we removed the multi-cloud application scenarios collected during
the first year of the project from the data sets: while useful for the MUSA
framework definition, they are already included in Deliverable D1.1.
Moreover, we added the Cloud Threat Catalogue to the data sets. Both the
Security Metric Catalogue and the Cloud Threat Catalogue are not publicly
available yet, as they are still under work.
### Table 2: MUSA expected types of data
<table>
<tr>
<th>
**Data Type**
<th>
**Description**
</th>
<th>
**Notes**
</th> </tr>
</th> </tr>
<tr>
<td>
**Research Data**
</td>
<td>
Data, which supports Scientific Publications and/or Public
Reports for validation of results.
</td>
<td>
Annotated data of a corresponding type dependant on the context where data was
captured (e.g., different types of logs, configuration files, etc.).
</td> </tr>
<tr>
<td>
**Open Source**
**Software**
</td>
<td>
Software produced during the project under Open Source licenses.
</td>
<td>
The Consortium Agreement describes the ownership rules for the code. DMP
policies should only describe how the code is made publicly available if there
is such an interest.
</td> </tr>
<tr>
<td>
**Security Metrics Catalogue**
</td>
<td>
The complete set of Security
Metrics used in the project.
</td>
<td>
The MUSA framework focuses on supporting security aspects for multi-cloud
application development and operation. Security metrics are a known research
topic and any contribution to collect standard quantifiable metrics is of
interest for the project.
</td> </tr>
<tr>
<td>
**Cloud Threats**
**Catalogue**
</td>
<td>
Catalogue of security Threats and risks in Cloud.
</td>
<td>
The MUSA framework supports tools that simplify risk analysis in the cloud, in
order to generate Security SLAs.
Cloud Security Threats, together with detailed information that helps to
identify when such threats apply, are being collected and made available to
the community.
Any contribution to enlarge such Threats Catalogue is of interest for the
project.
</td> </tr> </table>
# Scientific Publications
## Scientific Publications Data Set Description
This data set will contain all the Scientific Publications developed in the
project for the promotion of all the MUSA results.
In the first 18 months of the project, the following Scientific Publications
have been developed:
* “Towards Self-Protective Multi-Cloud Applications MUSA – a Holistic Framework to Support the Security-Intelligent Lifecycle Management of Multi-Cloud Applications”. Written by Erkuden Rios, Eider Iturbe, Leire Orue-Echevarria, Massimiliano Rak and Valentina Casola. Presented in CLOSER 2015 “5th International Conference on Cloud Computing and Services Science”.
* “Security in Cloud-based Cyber-physical Systems”. Written by Juha Puttonen, Samuel Olaiya Afolaranmi, Luis Gonzalez Moctezuma, Andrei Lobov, Jose L. Martinez Lastra. Presented in SecureSysComm2015.
* “Self-protecting multi-cloud applications”. Written by Antonio M. Ortiz, Erkuden Rios, Wissam Mallouli, Eider Iturbe, Edgardo Montes de Oca. Presented in IEEE Security and Privacy in the Cloud (SPC) 2015\.
* “Methodology to obtain security controls in Multi-cloud applications” Written by Samuel Olaiya Afolaranmi, Luis Gonzalez Moctezuma, Massimiliano Rak, Valentina Casola, Erkuden Rios and Jose L. Martinez Lastra. Presented in CLOSER 2016 “6h International Conference on Cloud Computing and Services Science”.
* “Enhancing Security in Cloud-based Cyber-physical Systems” Written by Juha Puttonen, Samuel Olaiya Afolaranmi, Luis Gonzalez Moctezuma, Andrei Lobov, Jose L. Martinez Lastra. Journal of Cloud Computing Research.
* “SLA-driven Monitoring of Multi-Cloud Application Components using the MUSA framework”. Written by Erkuden Rios, Wissam Mallouli, Massimiliano Rak, Valentina Casola and Antonio M. Ortiz. Presented in STAM 2016.
* “Per-service Security SLA: a New Model for Security Management in Clouds”. Written by V. Casola, A. De Benedictis, J. Modic, M. Rak, U. Villano. Presented in WETICE 2016.
* “A Security SLA-driven Methodology to Set-up Security Capabilities on Top of Cloud Services”. Written by Valentina Casola, Alessandra De Benedictis, Madalina Erascu, Massimiliano Rak and Umberto Villano (to be presented in SWISM 2016).
* “Scoring Cloud Services through Digital Ecosystem Community Analysis”. Written by Jaume Ferrarons Llagostera, Smrati Gupta, Victor Muntés-Mulero, Josep-Lluis Larriba-Pey, Peter Matthews. Presented in EC-Web 2016 (DEXA 2016 Conference).
## Standards and Metadata
Each MUSA Scientific Publication will follow the template that is asked in the
publication procedures of the different conferences, books or publications
where the publications will be presented.
## Data Sharing
MUSA project will support the open access approach to Scientific Publication
(as defined in article 29.2 of the Grant Agreement). Scientific Publication
covered by an editorial copyright will be made available internally to the
partners and shared publicly through references to the copyright owners web
sites.
Whenever is possible, a Scientific Publication, as soon as possible and at the
latest six months after the publication time, will be deposited in a machine-
readable electronic copy of the published version or final peer-reviewed
manuscript accepted for publication in a repository for scientific
publications. Moreover, the beneficiary should aim at depositing at the same
time the research data needed to validate the results presented in the
deposited scientific publications.
TECNALIA has just finalised the development of its own repository, which is
accessible by RECOLECTA [3] (a platform that gathers all scientific
repositories at Spanish national level) and OpenAire [4] (a new platform aimed
at gathering H2020 EU-funded projects’ scientific publications). The
repository fulfils international interoperability standards and protocols to
gain long-term sustainability.
All scientific publications of the MUSA project are intended to be available
through OpenAire repository and the potential delayed access (‘embargo
periods’) required by specific publishers and magazines will be negotiated in
a case-by-case basis.
## Access to MUSA Scientific Publications
MUSA Scientific Publications will have open access to the deposited
publication — via the repository — at the latest:
* On publication, if an electronic version is available for free via the publisher, or
* Within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.
## Discover the MUSA Scientific Publications
For MUSA Scientific Publications, open access will be ensured, via the
repository, to the bibliographic metadata that identify the deposited
publication. The bibliographic metadata must be in a standard format and must
include all of the following (a minimal illustration follows the list):
* The terms "European Union (EU)" and "Horizon 2020";
* The name of the action, acronym and grant number;
* The publication date, and the length of the embargo period if applicable;
* A persistent identifier.
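A minimal, assumption-based illustration of these required fields as a simple record (the date and DOI values are placeholders; the action name, acronym and grant number are taken from the project itself):

```python
# Minimal illustration of the required bibliographic metadata fields.
bibliographic_metadata = {
    "funder": "European Union (EU)",
    "programme": "Horizon 2020",
    "action": "Multi-cloud Security Applications",
    "acronym": "MUSA",
    "grant_number": "644429",
    "publication_date": "2016-06-01",    # placeholder date
    "embargo_period_months": 6,          # if applicable
    "persistent_identifier": "doi:10.XXXX/placeholder",  # placeholder DOI
}

for field, value in bibliographic_metadata.items():
    print(f"{field}: {value}")
```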
## Archiving and Preservation
Scientific publication repositories increase the visibility (and therefore the
impact) of the work of the authors and the organisations to which they belong,
using standardised international protocols that guarantee the visibility of
documents in search engines. These same protocols allow the metadata of the
repository and the files within it to be collected by external systems
(collectors) to offer new services (e.g., search across multiple
repositories).
TECNALIA owns the _TECNALIA Publications_ repository, which is an open access
repository accessible by RECOLECTA [3] and OpenAire [4] as explained before.
The _TECNALIA Publications_ repository is visible through Google and fulfils
international interoperability standards and protocols to gain long-term
sustainability.
The aim of the consortium is that all scientific publications of the MUSA
project will be available through the OpenAire repository, which allows
searching publications per project. The potential delayed access (‘embargo
periods’) required by specific publishers and magazines will be negotiated in
a case-by-case basis.
# Public Reports
MUSA produces, as an open set of data, a number of reports which summarize the
main project activities, together with the deliverables marked as public.
The project deliverables will be publicly released, where prescribed in the
description of work, only after acceptance by the European Commission.
Internal reports and whitepapers will be made publicly available according to
an agreement among the report authors.
## Public Report Data Set Description
The following table shows the public deliverables at month 18 of the project.
It is worth noting that all the deliverables in the list have already been
delivered, but are not yet publicly available, pending EC approval.
#### Table 3: Public deliverables at M18
<table>
<tr>
<th>
**Deliverable (number)**
</th>
<th>
**Deliverable name**
</th>
<th>
**Work package number**
</th> </tr>
<tr>
<td>
D1.1
</td>
<td>
Initial MUSA framework specification.
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
D1.2
</td>
<td>
Guide to security management in multi-cloud applications lifecycle.
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
D2.1
</td>
<td>
Initial SbD methods for multi-cloud applications.
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
D5.1
</td>
<td>
MUSA case studies work plan
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
D6.1
</td>
<td>
MUSA brochure and public website
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D6.2
</td>
<td>
Dissemination Strategy
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D6.3
</td>
<td>
Data Management Plan
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D6.4
</td>
<td>
Communication Plan
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D6.5
</td>
<td>
Networking plan
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D6.6
</td>
<td>
Dissemination, communication and networking report
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D6.7
</td>
<td>
Data management report
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D7.1
</td>
<td>
Initial market study, trends, segmentation and requirements
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
D7.2
</td>
<td>
Business scenarios analysis
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
D7.5
</td>
<td>
Standards analysis and strategy plan
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
D7.6
</td>
<td>
Revised standards strategy plan
</td>
<td>
WP7
</td> </tr> </table>
## Standards and MetaData
MUSA public deliverables follow a standard template available on the internal document management system (https://intranet.musa-project.eu). The executive summary, at the beginning of each document, gives a brief summary of the deliverable content. All the information about the document is reported in Section 1 (Introduction).
All Introduction sections contain:
* A description of the purpose of the deliverable (section 1.1).
* A description of the structure of the deliverable (section 1.2).
* A description of relationships with other deliverables (section 1.3).
* A list of contributors (section 1.4).
* A section summarizing acronyms and abbreviations (section 1.5).
* A section reporting the revision history (section 1.6).
* A section describing the changes applied in different versions after evaluation by the Commission (section 1.7) - optional.
## Data Sharing
All public reports/deliverables will be published through the MUSA website [1], which contains a section where all the MUSA Public Results are published and made available for free to the general public.
## Access to MUSA Public Deliverables
The public repository can be accessed through the Public Results section of the MUSA website. No identification is required to access these public reports.
## Discover the MUSA Public Deliverables
The MUSA website will be made as visible as possible, and discovery should be possible through any web search engine.
## Archiving and Preservation
All final versions of the deliverables are maintained on the internal document management system (https://intranet.musa-project.eu), which is based on Alfresco. All reports available on the website are archived together with the website infrastructure (see D6.1 _MUSA brochure and public website_).
0140_StarFormMapper_687528.md
# Introduction
## 1.1 Scope
This document is deliverable D7.5 – “Data Management Plan - Update” for the EU H2020 (COMPET-5-2015 – Space) project “**A Gaia and Herschel Study of the Density Distribution and Evolution of Young Massive Star Clusters**” (Grant Agreement Number: **687528**), acronym: **StarFormMapper** (SFM).
# Description of Work
WP 7, “Data Management and Curation”, is aimed at the provision of central storage for data associated with the project, together with its public access. In addition, the documentation and metadata required for full access will be properly described.
## Update
Our original intention on moving to annual reporting for the Data Management Plan was that these reports would be scheduled yearly after the last report submitted in Period 1 (which would have been May 2018 and 2019). Inadvertently this became November 2017 and 2018 (i.e. the anniversary of the initial report). No full report was submitted in November 2017 because of this oversight. Instead, this document provides a brief overview of the position at the end of Period 2.
_This project is being funded by the European Union’s Horizon 2020 research and innovation actions (RIA) programme under the grant agreement No 687528._
The initial Data Management Plan (D7.1) was submitted and approved during
period 1. This document will deal only with updates to the plan. Quotes in
italics are taken from the relevant section of the original plan.
It is still our intention to make publicly available all data gathered for the project, together with appropriate descriptions and metadata. The relevant descriptions will be developed in the time before the next review.
## Data Summary Update
“ _The project has allowed for separate servers at the Leeds and Madrid nodes,
which are now fully installed and functional. These will provide backup to
each other.”_
The separate site servers are fully functional – those in Madrid are also
capable of serving data through public access although none is as yet offered
(private access is provided). The server in Leeds is not as yet capable of
acting as a web host. The data currently provided are the simulations
generated by the project. No other restricted access data exists as yet for
the project.
The Quasar servers are running the Docker software that allows us to interface with their developing toolset. This is part of the final adopted access protocol for the project, which will eventually become public. Testing of this has proved the basic methodology.
The Leeds "data repository/backup" is functioning but due to changes in IT
staffing and management is not yet available as an external facing resource.
We cannot at the moment give a specific timing for this to happen, as the
staff required to set it up are beyond our control, as are the specific
details as to how this will be provided. The specific issue in question is the
location of any external facing facility, and the limits that may be placed on
us with regard to the capability of such a facility. The minimum capability
that will certainly be offered is public data access in an archive sense. We
stress that this affects only our ability to serve data - the website for the
project is otherwise fully functional at Leeds. This situation will be
resolved before the next update.
## Fair Data Update
There are no changes to the availability, openness, re-use provisions or the
requirement for making data findable. The only modification is on the item:
_“In particular, the University of Leeds will commit to hosting the server
mentioned in Section 2 for a period of at least 10 years.”_
Obviously this requires the actions outlined in Section 2.2 to be taken first.
## Data Security Update
A fuller description of the plan for ongoing data security, particularly given
the aim stated above in 2.3, will be provided at the next update. Other
options may be viable if the actual data repository size is relatively
limited, as it is at the moment. Both Quasar SR and the University of Leeds
will work together to ensure that as a minimum first step the Leeds node
provides a backup facility to the data stored in Madrid. This is within the
control of the project.
0143_BRISK II_731101.md
# Introduction
The BRISK2 project will consolidate the creation of a centre of excellence in the field of 2nd and 3rd generation biofuels through the coordination of leading European research infrastructures. Via an integrated approach, the entire value chain of biomass conversion is covered, from the preparation of the biomass feedstock, to conversion, then treatment and finally through to efficient utilization. Beyond conventional biomass sources and thermochemical conversion (tackled in the first BRISK project), BRISK2 also includes novel biogenic sources such as green and marine biomass. Moreover, the scope of biomass conversion is broadened to include biochemical conversion and new biorefinery approaches.
BRISK2 will ultimately improve the success of the implementation of biofuels
in Europe by helping to consolidate bioenergy expertise and knowledge,
providing opportunities for international collaboration, fostering a culture
of cooperation and leading to new bioenergy research activities across Europe.
Sharing knowledge on biofuel production is the core of the project activities,
and this philosophy will be closely linked to the management and dissemination
of research data.
With this background, the present document reports the Data Management Plan
(DMP) that will be implemented within the BRISK2 project. The DMP contains all
the data-related activities that will be performed throughout the entire data
cycle (including data generation, processing and analysis, storage, sharing
and preservation). Given the strong focus of BRISK2 on cooperation on research
infrastructure to promote innovation in biofuels production, the vast majority
of the generated data will be made publicly accessible in order to maximize
the impact of the use of the shared biofuel infrastructure offered within the
project. BRISK2 is automatically part of the H2020 Open Data Pilot, being a research infrastructure project that started in May 2017. With the ultimate objective of producing FAIR data (findable, accessible, interoperable and reusable), the H2020 Open Data Pilot crucially influences the development of the DMP, which is described in detail in Chapter 2.
# Data management plan
The Data Management Plan (DMP) that will be implemented in BRISK2 is based on
the Annex 1 of the "Horizon 2020 DMP" template [1] and the “Guidelines to the
rules on open access to scientific publications and Open Access to Research
Data in Horizon 2020” [2] prepared by the European Commission, the OpenAIRE
guidelines [1][3][4], and the derived guidelines of the UK Digital Curation
Centre (DCC) [5][6][7][8][9]. All these sources are based on the FAIR
principle (ensuring findable, accessible, interoperable and re-usable data).
As part of the H2020 Open Research Data Pilot, the data generated within
BRISK2 must be stored in a research data repository and made accessible for
the public. The most relevant questions related to the handling of data
throughout the project are addressed in the sections below.
## Decision tree for the dissemination or preservation of the data
According to the Grant Agreement of BRISK2, article 29.1 (pp. 48-50),
“Unless it goes against their legitimate interests, each beneficiary must — as
soon as possible —‘disseminate’ its results by disclosing them to the public
by appropriate means (other than those resulting from protecting or exploiting
the results), including in scientific publications (in any medium).
This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.
A beneficiary that intends to disseminate its results must give advance notice
to the other beneficiaries of – unless agreed otherwise – at least 45 days,
together with sufficient information on the results it will disseminate (…).
Each beneficiary must ensure open access (free of charge online access for any
user) to all peer-reviewed scientific publications relating to its results”.
Figure 1 summarizes the decision process within the BRISK2 project to determine whether the generated research data should be disseminated or preserved as confidential. Consistent with the Grant Agreement, taking into account the nature and objective of this project (sharing European research infrastructure on biofuel production and promoting exchange between academia, research and industry), and given the fact that BRISK2 participates in the H2020 Open Access Programme, it is expected that the vast majority of the produced data will be made publicly available in the form of databases (e.g. the Phyllis-2 database and others), project deliverables (all of them with public dissemination level), open-access publications, other dissemination material (posters, presentations, etc.) or datasets. All this material will be placed in a number of data repositories with public access. All peer-reviewed publications derived from the project will be published as gold open access and will also be accessible at (some of) the repositories. In the unlikely event that a partner considers that certain data should be kept confidential, the partner should justify this to the project coordinator, who in turn will decide whether the data should remain confidential or be published. However, data will be withheld from public access only under exceptional circumstances. In case the data are part of a publication (all of which will be released as open access), a data embargo of at most 6 months might be applied before the release of the data in the repository.
**FIGURE 1. DECISION TREE WITHIN BRISK2 FOR THE PRESERVATION OR DISSEMINATION
OF DATA.**
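As an illustration only, the decision logic of Figure 1 can be condensed into a few lines of Python. The function below is a sketch; the argument names and return strings are our own paraphrase of the figure, not project-defined terminology.

```python
def dissemination_route(confidentiality_requested: bool,
                        coordinator_upholds_confidentiality: bool,
                        part_of_publication: bool) -> str:
    """Sketch of the Figure 1 decision process (wording is illustrative)."""
    if confidentiality_requested:
        # The requesting partner must justify confidentiality to the
        # project coordinator, who takes the final decision.
        if coordinator_upholds_confidentiality:
            return "keep confidential (exceptional circumstances only)"
    if part_of_publication:
        # Publications are gold open access; the data may be embargoed.
        return "release in repository after an embargo of at most 6 months"
    return "immediate open access via the project repositories"
```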
## Data generated within the project
The ultimate objective of the data produced within BRISK2 is to contribute to
the reinforcement of the European biofuel sector by sharing knowledge and
promoting cooperation in the use of strategic research infrastructure. As part
of the H2020 Open Data Pilot and in consistency with its nature of promoting
research collaboration, virtually all the research data produced within BRISK2
will be made publicly accessible.
Table 1 presents an overview of the data generated or received/collected by
the work packages involved in the generation, management and dissemination of
research data within the project, and an outline of the basic measures for the
management of these data. As can be seen, intense interaction and
collaboration is required between the different work packages: WP4 will act as
focal point and coordinator of the research data generated in WP5-WP8 (joint
research activities, JRA) as well as the TA work packages (WP9-WP23). WP4 will
in turn assist and collaborate with WP5-WP8 for the establishment of
measurement protocols and standards to ensure the delivery of goodquality
data.
**TABLE 1. OVERVIEW OF DATA PRODUCED OR RECEIVED BY DIFFERENT WORK PACKAGES OF
BRISK2.**
<table>
<tr>
<th>
**WP**
</th>
<th>
**Data generated**
</th>
<th>
**Data collected**
</th>
<th>
**Data management actions**
</th> </tr>
<tr>
<td>
WP3. Promotion and dissemination
</td>
<td>
\-
</td>
<td>
Description of experimental facilities for TA
</td>
<td>
Public website
</td> </tr>
<tr>
<td>
Internal project documents: agendas, minutes, templates, etc.
</td>
<td>
Internal partner area in website
</td> </tr>
<tr>
<td>
WP4. Protocols, databases and benchmarking
</td>
<td>
\-
</td>
<td>
Data from JRA and TA actions: Characterization data
* Protocols/methodologies
* Results of round robin tests and measurement campaigns.
* Techno-economic evaluation and LCA of biorefinery chain.
</td>
<td>
Phyllis-2 database
Project website BRISK2 public repositories (Zenodo, Research
Gate)
Other repositories (e.g. Gas Analysis wiki)
</td> </tr>
<tr>
<td>
WP5-WP8
WP9-WP23
</td>
<td>
* Characterization data: marine and green biomass, biorefinery streams, solid biofuels, etc.
* Description of TA experimental facilities.
* Protocols/procedures for data determination.
* Results of round robin tests, measurement campaigns and TA activities.
* Techno-economic
evaluation and LCA of biorefinery chain.
</td>
<td>
\-
</td>
<td>
Preparation of data according to format requirements agreed in advance
Supply of data to
WP4
Publications/ deliverables
</td> </tr> </table>
The results of the project, collected both within the JRA work packages (WP5-WP8) and within the TA work packages (WP9-WP23), will be supplied to WP4 in a pre-agreed format for dissemination in the form of databases (Phyllis-2) and benchmarking activities. The data will be collected, pooled and benchmarked within WP4 and disseminated in close collaboration with WP3.
As a second step, a selection has to be made of the data generated within the
project that will be part of the open access datasets. For this task, the
systematic data appraisal procedure of the Digital Curation Centre described
in “Five steps to decide what data to keep” [6] has been followed. The data
appraisal can be found in Appendix A.
## FAIR data – datasets generated in the project
Once the possible input research data are identified (see Section 2.3), a systematic method based on the Digital Curation Centre guidelines has been applied (see Appendix A) to select which data will be included in datasets in public access tools (Zenodo data repository, Phyllis-2 database, Research Gate). Based on the type of data expected, a preliminary list of datasets (which will be updated in the course of the project) is given in Table 2.
**TABLE 2. OVERVIEW OF DATASETS GENERATED IN BRISK2.**
<table>
<tr>
<th>
**Area**
</th>
<th>
**Dataset**
</th>
<th>
**WP**
</th> </tr>
<tr>
<td>
All areas
</td>
<td>
Development of protocols
</td>
<td>
WP5, WP6, WP7
</td> </tr>
<tr>
<td>
Biomass
characterization
</td>
<td>
Characterization of marine biomass and biorefinery streams
</td>
<td>
WP5, WP6, WP7
</td> </tr>
<tr>
<td>
Solid biofuels (torrefied biomass, biochars, ash behavior)
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
Thermochemical
conversion
</td>
<td>
TGA round robin: Pyrolysis, torrefaction, char oxidation and char gasification
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Round robin pyrolysis
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Round robin gasification/combustion
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Measurement of trace compounds in gasification producer gas
</td>
<td>
WP6, WP7
</td> </tr>
<tr>
<td>
Sampling method for pyrolysis and gasification plants
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
Biochemical conversion
</td>
<td>
High-throughput characterization techniques/ ATR-FTIR and online measurement
for characterization & pretreatment
</td>
<td>
WP6, WP7
</td> </tr>
<tr>
<td>
Biomass pretreatment
</td>
<td>
WP6, WP7
</td> </tr>
<tr>
<td>
Market analysis, techno-economic analysis and LCA of biorefinery chain
</td>
<td>
WP8
</td> </tr> </table>
A particular type of dataset produced in BRISK2 is the Phyllis-2 database, where the results of the characterization of new and advanced biomass resources will also be collected and made public. The management of the Phyllis-2 database is described in detail in Section 2.5.
## Phyllis-2 database
Within WP4 of BRISK2, the Phyllis-2 database (_https://www.ecn.nl/phyllis2/_), which currently contains more than 3000 data records of biomass and waste composition and properties, will be extended and upgraded. Phyllis-2 can be considered a special part of the data management plan of BRISK2. Specific plans for the improvement of Phyllis-2 within the project include the following:
* Extension of the number of datasets by including compositional data and properties of green and marine biomass, as well as products, residues and intermediates of biorefinery processing.
* Creation of standard file formats for more efficient submission and uploading of datasets.
* Inclusion of the standard file template(s) at the Phyllis-2 site to promote and encourage the submission of data from external parties, thus increasing the impact of the database.
* Get journal editors involved by requesting that the data submitted as supplementary information is delivered using the standard file format of Phyllis-2.
* Implementation of algorithms for the automatization of the protocols for submission, delivery, check and upload of data records to the database.
* Merging and interlinking with existing databases (e.g. Zeewier portal, Atlas seaweed database, etc.).
* Interlinking of Phyllis-2 with the rest of BRISK2 public access tools (Zenodo repository, project website, Research Gate).
* Optionally, upgrading the current layout to a mobile-friendly one.
Partner ECN (WP4 leader) will be responsible for the updating, upgrading and maintenance of Phyllis-2 in the course of the BRISK2 project. ECN will also keep Phyllis-2 as a heritage product after the project is finalized.
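To illustrate what a standard submission template might look like, the sketch below writes a single hypothetical characterization record to CSV with Python's standard library. The column names and values are assumptions made for illustration; the actual template (fields, units, conventions) will be defined by ECN within WP4.

```python
import csv

# Hypothetical column set for a biomass characterization record; the
# real Phyllis-2 template is set within WP4 of the project.
FIELDS = ["sample_id", "material_type", "moisture_wt_pct",
          "ash_wt_pct_dry", "C_wt_pct_daf", "H_wt_pct_daf",
          "N_wt_pct_daf", "HHV_MJ_per_kg_dry", "method_reference"]

with open("phyllis2_submission.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow({
        "sample_id": "BRISK2-0001",        # illustrative identifier
        "material_type": "marine biomass",
        "moisture_wt_pct": 12.4,           # illustrative values below
        "ash_wt_pct_dry": 18.9,
        "C_wt_pct_daf": 41.2,
        "H_wt_pct_daf": 5.6,
        "N_wt_pct_daf": 2.1,
        "HHV_MJ_per_kg_dry": 14.8,
        "method_reference": "<protocol id>",
    })
```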
## Project website
On the project website _www.brisk2.eu_, there will be a link to the ‘partner area’, a password-protected area of the site accessible only to partners, which will protect any confidential project information. This area will provide both a document drop and a partners’ forum where documents can be stored, such as:
* Restricted project documents.
* Templates for deliverables, etc.
* Agendas and minutes of project meetings and WP meetings.
* Completed application forms
* Conference posters and presentations
* User guides
* Any other necessary documentation
Thus, the BRISK2 website will be used as a private administrative repository
for minutes, agendas, deliverable drafts, and miscellaneous documents and
items to be shared with partners only, as described in the agreement.
The project website will also include a publicly accessible area for documents, whereby articles can be uploaded, published, tagged and categorised as required. The project website will also act as a signpost to the other public access tools of BRISK2 (Zenodo, Phyllis-2, Research Gate). The WP3 leader (Aston University) will be the partner responsible for coordinating document management and public dissemination through the BRISK2 website.
## Open access data repository
As mandated by the H2020 Open Access Pilot [1][2], the datasets, articles,
reports and other grey literature generated in the course of the project will
be stored in a data repository for public access. This will ensure the re-use
of data according to the FAIR principle. As stated by OpenAIRE guidelines
[3][4], the BRISK2 repository will provide a landing page for each dataset,
with metadata that will make the data findable, with clear identification and
easy access to promote the reuse of the data.
Within BRISK2, besides Phyllis-2 (see Section 2.5) and the project website (Section 2.6), data will be stored at the Zenodo repository site (_www.zenodo.org_). Zenodo assigns a DOI to the uploaded files and ensures a better possibility for data curation after the completion of the project. The data repository will be updated and maintained by WP4 partners at least for the duration of BRISK2. Optionally, a BRISK2 community might be created. LNEG (WP4.1 task leader) will be responsible for updating the data related to protocols, whereas partner CENER (WP4.3 leader) will be responsible for the benchmarking activities of the project. ECN (WP4 leader) will have ultimate responsibility for the proper management of data in the Phyllis-2, Zenodo and Research Gate repositories, whereas Aston University (WP3 leader) will be responsible for data management on the project website.
The BRISK2 website will act as a signposting facility to the Zenodo repository, directing web traffic from the project site in that direction. The data repository will be regularly updated with new contributions to the datasets. Users of the BRISK2 data repository will thus comply with the Zenodo conditions of use.
Besides Zenodo, a BRISK2 community will be created in Research Gate, where all
public deliverables and datasets will be uploaded for public access. The
combined implementation of the website, Phyllis-2, Zenodo and Research Gate
will maximize the impact of the project. The different platforms will be
interlinked.
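For illustration, the sketch below deposits a file on Zenodo via its public REST API using the Python `requests` library. The endpoints follow Zenodo's published API documentation; the token, file name and metadata values are placeholders, and the exact workflow adopted by WP4 may differ.

```python
import requests

TOKEN = "<personal access token>"   # placeholder
BASE = "https://zenodo.org/api/deposit/depositions"

# 1. Create an empty deposition.
r = requests.post(BASE, params={"access_token": TOKEN}, json={})
r.raise_for_status()
dep = r.json()

# 2. Upload the data file to the deposition's file bucket.
bucket = dep["links"]["bucket"]
with open("phyllis2_submission.csv", "rb") as fp:
    requests.put(f"{bucket}/phyllis2_submission.csv",
                 data=fp, params={"access_token": TOKEN}).raise_for_status()

# 3. Attach minimal metadata (illustrative values).
metadata = {"metadata": {
    "title": "BRISK2 example dataset",
    "upload_type": "dataset",
    "description": "Illustrative BRISK2 deposition.",
    "creators": [{"name": "BRISK2 consortium"}],
}}
requests.put(f"{BASE}/{dep['id']}", params={"access_token": TOKEN},
             json=metadata).raise_for_status()

# 4. Publish the deposition, which mints the DOI.
requests.post(f"{BASE}/{dep['id']}/actions/publish",
              params={"access_token": TOKEN}).raise_for_status()
```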
## Metadata
The metadata associated with the different datasets will specify, on the one hand, information about the generation of the dataset (project, organization, date, dataset type, scientific keywords) and, on the other hand, the list of parameters (and associated units) included in the file.
Accordingly, a format and structure is proposed below for the datasets generated within BRISK2 that will be placed for open access in the repository site. Each dataset (which will be identified with keywords and a unique DOI to ensure the findability and interoperability of the data according to the FAIR principle) will include the following information (following the example from [11]):
1. General
* Dataset title (see Section 2.4)
* Project acronym: BRISK2
* Type: collection
* Authors: acronyms of the partner(s) participating in the dataset Date of creation
* Date of latest modification
2. Description of dataset
* Description of dataset
* Field of research
* Keywords
* Related publications/ websites/ data/ services
3. Team
* Authors/contributors to dataset
* Dataset manager
* Contact person
4. Terms of use
* Access Rights/conditions
* Rights
5. Data file(s).
6. Citation and DOI.
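A minimal machine-readable rendering of such a record might look as follows. This is a sketch only, with placeholder values; the nesting simply mirrors the six groups listed above rather than any format mandated by the project.

```python
# Sketch of a dataset metadata record following the structure above;
# all values are placeholders.
dataset_record = {
    "general": {
        "title": "Round robin pyrolysis",   # one of the Table 2 datasets
        "project_acronym": "BRISK2",
        "type": "collection",
        "authors": ["<partner acronyms>"],
        "date_created": "<YYYY-MM-DD>",
        "date_modified": "<YYYY-MM-DD>",
    },
    "description": {
        "abstract": "<description of dataset>",
        "field_of_research": "<field>",
        "keywords": ["pyrolysis", "round robin"],
        "related": ["<publications / websites / data / services>"],
    },
    "team": {
        "contributors": ["<names>"],
        "dataset_manager": "<name>",
        "contact_person": "<name>",
    },
    "terms_of_use": {
        "access_rights": "<conditions>",
        "rights": "<licence>",
    },
    "files": ["<data file(s)>"],
    "citation": {"doi": "<DOI>"},
}
```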
## Stakeholders interested in the results of BRISK2
The addressed audience of the BRISK2 project results includes:
1. The scientific and applied R&D community, in particular those in the areas of biofuel production, thermochemical conversion, biochemical conversion, biorefinery platform, marine biomass production.
2. Technology suppliers in these areas.
3. Seaweed cultivators and investors in cultivation of marine biomass.
4. Biomass producers.
5. Biofuel producers.
6. Transport sector and engine manufacturers.
7. Public entities (definition of policies and legal framework to increase the competitiveness of the European biofuel industry): associations, technology platforms, Biobased Industries Consortia, etc.
## Roles and responsibilities and internal information flow
Data management issues fall within the competencies of WP3 and WP4. The
required resources (PM effort, software), which have already been estimated
during the setting up of the project, are covered by the project funding.
The main responsibilities related to data management in BRISK2 (which might be
updated if necessary in the course of the project) are distributed as follows:
* The project partners will approve the selection of the project data repositories (Zenodo, website, Research Gate).
* The partners involved in the generation of each dataset will be responsible for the preparation of standard file templates for the delivery of results and the agreement on methodologies. The respective WP leader (WP5-WP8) to which the dataset belongs, as well as WP4 partners, will assist in the preparation of these templates and the methodology/protocols. The use of these templates is crucial for a proper information flow within the project.
* WP5-WP8 leaders will be responsible for the first quality-check and the submission to WP4 of the research data generated in each dataset (see Table 2) according to the agreed format throughout the duration of the project.
* TA hosts (WP9-WP23 leaders) will be responsible for the first quality-check and the submission to WP4 of the research data generated in TA activities according to the agreed format throughout the duration of the project.
* The WP4 leader will receive and do a second quality-check on the research input data received from WP5-WP8 and TA work packages, and will in turn distribute the data to Task leaders 4.1 (protocols) and 4.3 (benchmarking).
* ECN (Task 4.2 leader) will be responsible for the upgrading and maintenance of the Phyllis 2 database.
* LNEG (Task 4.1 leader) will be responsible for the maintenance of data related to protocols/methodology.
* CENER (Task 4.3 leader) will be responsible for the maintenance of data related to benchmarking activities.
* Aston University (WP3 leader) will be responsible for the launching and maintenance of the project website.
* ECN (WP4 leader) will be responsible for the update and maintenance of the repository sites at Zenodo and Research Gate.
* The leaders of WP3 (dissemination) and WP4 (protocols, database and benchmarking) will collaborate with each other for the interlinking of the different data platforms (website, Phyllis2, Zenodo, Research Gate).
* Regular meetings will be held between the WP4 partners (if required, also with representation from WP5-WP8) to update the data management activities. The frequency will vary depending on the workload, but at least 3 WP4 meetings are envisioned. In addition, the WP4 leader might participate in other WP meetings as the representative for data management activities.
* The DMP will be accordingly updated and adapted by the WP4 leader based on the outcome of the project (lessons learned, best practices).
The internal information flow within BRISK2 is shown schematically in Figure 2 (for the sake of clarity, only the JRA activities in WP5-WP8 are included in the graph). Project partners involved in each dataset (small green circles) will supply the data (according to a pre-agreed format) to the partners coordinating each dataset (DS). The dataset coordinators will in turn supply the data to the respective WP leader (WP5-WP8), who will perform a first quality check of the data and supply it to the WP4 leader. The WP4 leader will in turn distribute the data to the corresponding task leader: protocols (Task 4.1), characterization data for the Phyllis-2 database (Task 4.2) and benchmarking (Task 4.3). WP4 partners will in turn assist the WP5-WP8 leaders and dataset coordinators (dashed arrows) in the establishment of protocols and the feedback on best practices from benchmarking. WP3 and WP4 will collaborate with each other in the coordination of the different public access tools. In the case of the Transnational Access work packages (WP9-WP23), the data generated will first be quality-checked by the corresponding host, who will then submit the data to the WP4 leader.
**FIGURE 2. PROPOSED INTERNAL INFORMATION FLOW WITHIN BRISK2 PROJECT.**
The action list for the implementation of the DMP is shown in Appendix B.
0144_UnCoVerCPS_643921.md
# Introduction
The Data Management Plan (DMP) is based on the information (models, tools, and data) used in each task. UnCoVerCPS has an open access policy; the DMP has been written following the Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020 and the Guidelines on Data Management in Horizon 2020. The required information was collected from all the partners following Annex 1 provided by the European Commission in the Guidelines on Data Management in Horizon 2020. The template covers the following points:
* Identification;
* Description;
* Standards and metadata;
* Sharing policy;
* Archiving and preservation.
The aim of the consortium is to implement structures that ensure open access to scientific results, software tools, and benchmark examples.
# Elements of the UnCoVerCPS data management policy
The tables below summarize the data, models, and tools that have been produced. The shared information establishes the basis for the validation of each use case of the project. It should be noted that the scale of each element may not directly correspond to its end volume, as the latter depends on the format of the data collected.
## Technische Universität München
### Element No. 1
Reference _TUM MP_ 1
Name Annotated motion primitives
Origin Generated from MATLAB
Nature Data points and sets
Scale Medium
Interested users People performing motion planning
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Can be integrated in most motion planners
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use Not required
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 100 MB
Associated costs None
Costs coverage N/a
**Table 1:** _TUM MP_ 1
### Element No. 2
Reference _TUM MT_ 1
Name Manipulator trajectories
Origin Recorded from experiments with a robotic manipulator for safe human-robot interaction
Nature Joint angles and velocities over time
Scale Medium
Interested users People researching in human-robot collaboration
Underpins scientific publications No
Existence of similar data Yes
Integration and/ or reuse Data can be compared, but not integrated
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use Not required
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 1 GB
Associated costs None
Costs coverage N/a
**Table 2:** _TUM MT_ 1
### Element No. 3
Reference _CORA_ 1
Name CORA
Origin N/a (software tool)
Nature Software
Scale N/a (software tool)
Interested users People performing formal verification of CPSs
Underpins scientific publications Yes
Existence of similar data N/a (software tool)
Integration and/ or reuse Integrated in MATLAB
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use CORA is already a tool
Dissemination Level Open access
Repository Bitbucket
Storing time 12/01/2022
Approximated end volume 10 MB
Associated costs None
Costs coverage N/a
**Table 3:** _TUM CORA_ 1
### Element No. 4
Reference _CommonRoad_ 1
Name Traffic scenarios for trajectory planning of automated vehicles
Origin Various
Nature Complete traffic situations
Scale Medium
Interested users People performing motion planning
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Platform independent (XML format)
Standards and Metadata Created by ourselves
Access procedures Website: commonroad.in.tum.de
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use Not required
Dissemination Level Open access
Repository GitLab
Storing time 12/01/2030
Approximated end volume 1 GB
Associated costs None
Costs coverage N/a
**Table 4:** _TUM CommonRoad_ 1
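Since the scenarios are distributed in a platform-independent XML format, they can be read programmatically. The sketch below assumes the `commonroad-io` Python package distributed via commonroad.in.tum.de; the file path is a placeholder, and the package postdates parts of this plan, so this is an illustration rather than project-mandated tooling.

```python
# Sketch: reading a CommonRoad XML scenario, assuming the commonroad-io
# package (pip install commonroad-io); the file path is a placeholder.
from commonroad.common.file_reader import CommonRoadFileReader

scenario, planning_problem_set = CommonRoadFileReader(
    "path/to/scenario.xml").open()

# A scenario bundles the road network (lanelets) and traffic participants.
print(scenario.lanelet_network)
print(len(scenario.dynamic_obstacles), "dynamic obstacles")
```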
## Université Joseph Fourier Grenoble 1
### Element No. 1
Reference _UJF SX_ 1
Name SpaceEx
Origin N/a (software tool)
Nature Software
Scale N/a (software tool)
Interested users Academia, researchers
Underpins scientific publications Yes
Existence of similar data N/a (software tool)
Integration and/ or reuse N/a
Standards and Metadata Not existing
Access procedures Available at spaceex.imag.fr
Embargo period None
Dissemination method Website
Software/tools to enable re-use None
Dissemination Level Open access
Repository Institutional (forge.imag.fr)
Storing time 31/12/2020
Approximated end volume 50 MB
Associated costs None
Costs coverage N/a
**Table 5:** _UJF SX_ 1
## Universität Kassel
### Element No. 2
Reference _UKS Con_ 1
Name Control Strategies
Origin Generated from MATLAB
Nature Algorithm
Scale Scalable
Interested users Partners using control algorithms
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Implemented in MATLAB
Standards and Metadata Not existing
Access procedures Will be made available on website
Embargo period Available after publication
Dissemination method E-mail
Software/tools to enable re-use MATLAB
Dissemination Level Restricted to project partners until publication
Repository N/a
Storing time 31.12.2020
Approximated end volume _ < _ 10 _MB_ Associated costs None
Costs coverage N/a
**Table 6:** _UKS Con_ 1
## Politecnico di Milano
### Element No. 1
Reference _PoliMi MG_ 1
Name Microgrid data
Origin Measured and generated from MATLAB
Nature Data points
Scale Medium
Interested users Researchers working on microgrid energy management
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Can be integrated in larger microgrid units
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method UnCoVerCPS website
Software/tools to enable re-use Not required
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 6 MB
Associated costs None
Costs coverage N/a
**Table 7:** _PoliMi MG_ 1
## GE Global Research Europe
### Element No. 1
Reference _GEGR Model_ 1
Name MATLAB/Simulink model of wind turbine dynamics
Origin Designed in MATLAB/Simulink
Nature MATLAB/Simulink Model
Scale Small
Interested users All project partners working on verification
Underpins scientific publications Yes
Existence of similar data Yes, but existing models are typically more complex
Integration and/ or reuse Can be reused with verification tools accepting MATLAB/Simulink models
Standards and Metadata N/a
Access procedures Made available to project partners upon request
Embargo period N/a
Dissemination method Limited to consortium partners
Software/tools to enable re-use MATLAB/Simulink
Dissemination Level Limited to consortium partners
Repository GE-internal repository
Storing time December 2019
Approximated end volume 1 MB
Associated costs N/a
Costs coverage N/a
**Table 8:** _GEGR Model_ 1
### Element No. 2
Reference _GEGR Data_ 1
Name Wind turbine load data
Origin Generated in MATLAB/Simulink
Nature Data on wind, turbine power, turbine speed, turbine
loads
Scale Medium
Interested users All project partners working on verification
Underpins scientific publications Yes
Existence of similar data Yes, but typically based on more complex models
Integration and/ or reuse Reuse in verification tools
Standards and Metadata N/a
<table>
<tr>
<th>
Access procedures
Embargo period
Dissemination method
Software/tools to enable re-use
Dissemination Level
Repository
Storing time
Approximated end volume
Associated costs Costs coverage
</th> </tr> </table>
Made available to project partners upon request
N/a
Limited to consortium partners MATLAB/Simulink
Limited to consortium partners
GE-internal repository
December 2019
100 MB
N/a
N/a
**Table 9:** _GEGR Data_ 1
## Robert Bosch GmbH
### Element No. 1
Reference _BOSCH Model_ 1
Name Simulink Model of an Electro-Mechanical Brake
Origin Designed in Simulink
Nature Simulink Model
Scale Small
Interested users People working on (simulation-based) verification
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Can be used with verification tools accepting Simulink models
Standards and Metadata Not existing
Access procedures Download from ARCH website
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use Mathworks Simulink
Dissemination Level Open access
Repository ARCH website (linked from UnCoVerCPS)
Storing time 12/01/2022
Approximated end volume 1 MB
Associated costs None
Costs coverage N/a
**Table 10:** _BOSCH Model_ 1
<table>
<tr>
<th>
**Element No.**
</th> </tr> </table>
2
<table>
<tr>
<th>
Reference
Name
</th> </tr> </table>
_BOSCH Model_ 2
SpaceEx Models of an Electro-Mechanical Brake with
Conformance Monitor
<table>
<tr>
<th>
Origin
Nature
Scale
Interested users
Underpins scientific publications
Existence of similar data
Integration and/ or reuse
Standards and Metadata
</th> </tr> </table>
Designed in SpaceEx
SpaceEx Model
Small
People working on formal verification
Yes
No
Can be used with verification tools accepting SpaceEx models
Not existing
<table>
<tr>
<th>
Access procedures
Embargo period
Dissemination method
Software/tools to enable re-use
Dissemination Level
Repository
Storing time
Approximated end volume
Associated costs
Costs coverage
</th> </tr> </table>
Download from UnCoVerCPS website
N/a
Website
SpaceEx
Open access (for distribution refer to LICENSE.txt)
UnCoVerCPS website
12/01/2022
6 KB
None
N/a
**Table 11:** _BOSCH Model_ 2
### Element No. 3
Reference _BOSCH Tests_ 1
Name Parametric test case instances
Origin Generated with MATLAB
Nature MATLAB mat files
Scale Medium
Interested users People working on test case generation for conformance testing
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse The instances from the parametric test cases for conformance testing in the automated driving use case can be loaded using MATLAB. For more details and associated models see [2] and deliverable D1.3 [1]
Standards and Metadata Not existing
<table>
<tr>
<th>
Access procedures
Embargo period
Dissemination method
Software/tools to enable re-use
Dissemination Level
Repository
Storing time
Approximated end volume
Associated costs Costs coverage
</th> </tr> </table>
Download from UnCoVerCPS website
N/a
Website
MATLAB
Open access (for distribution refer to LICENSE.txt)
UnCoVerCPS website
12/01/2022
6 KB
None
N/a
**Table 12:** _BOSCH Tests_ 1
## Esterel Technologies
### Element No. 1
Reference _ET SCADE_
Name SCADE
Origin N/a (software tool)
Nature Software
Scale N/a (software tool)
Interested users People working on code generation
Underpins scientific publications Yes
Existence of similar data N/a (software tool)
Integration and/ or reuse API access to models
Standards and Metadata Scade
Access procedures Licensing, academic access
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use SCADE
Dissemination Level Commercial access or Academics programs
Repository Proprietary
Storing time _ > _ 20 _years_
Approximated end volume N/a
Associated costs N/a
Costs coverage N/a
**Table 13:** _ET SCADE_
### Element No. 2
Reference _ET SCADE HYBRID_
Name Scade Hybrid
Origin N/a (software tool)
Nature Software
Scale N/a (software tool)
To whom it could be useful Code generation
Underpins scientific publications Yes
Existence of similar data N/a (software tool)
Possibilities for integration and/or reuse API access to models
Standards and Metadata Scade Hybrid
Access procedures Licensing, academic access
Embargo periods N/a
Technical mechanisms for dissemination ftp, email
Software and other tools to enable re-use SCADE
Access widely open or restricted to specific groups Commercial access or Academics programs
Repository Proprietary
Data set will not be shared N/a
How and for how long data should be stored > 20 years
Approximated end volume N/a
Associated costs N/a
Costs coverage N/a
**Table 14:** _ET SCADE HYBRID_
### Element No. 3
Reference _ET SX2SH_
Name sx2sh
Origin N/a (software tool)
Nature Software
Scale N/a (software tool)
To whom it could be useful Code generation
Underpins scientific publications Yes
Existence of similar data N/a (software tool)
Possibilities for integration and/or reuse SpaceEx format
Standards and Metadata SpaceEx / Scade Hybrid
Access procedures Licensing, academic access
Embargo periods N/a
Technical mechanisms for dissemination ftp, email
Software and other tools to enable re-use SpaceEx / SCADE
Access widely open or restricted to specific groups Academics programs
Repository Proprietary
Data set will not be shared N/a
How and for how long data should be stored > 20 years
Approximated end volume N/a
Associated costs N/a
Costs coverage N/a
**Table 15:** _ET SX2SH_
## Deutsches Zentrum für Luft- und Raumfahrt
### Element No. 1
Reference _DLR MA_ 1
Name Maneuver Automata
Origin Generated from MATLAB
Nature Datapoints, sets and graph structures
Scale Big
Interested users People researching in motion planning
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Low probability of reuse
Standards and Metadata Not existing
Access procedures Request from author
Embargo period N/a
Dissemination method Reduced version will be placed on UnCoVerCPS website
Software/tools to enable re-use MATLAB
Dissemination Level Open access
Repository UnCoVerCPS website, DLR SVN
Storing time 12/01/2022
Approximated end volume 10 GB
Associated costs None
Costs coverage N/a
**Table 16:** _DLR MA_ 1
### Element No. 2
Reference _DLR TEST_ 1
Name Vehicle Trajectories
Origin Recorded during testdrives with one or two vehicles
Nature Datapoints
Scale Medium
Interested users People researching in driver assistance systems, vehicle
automation, vehicle cooperation, Car2X
Underpins scientific publications Yes
Existence of similar data Yes
Integration and/ or reuse Data can be compared, but not integrated
Standards and Metadata Not existing
<table>
<tr>
<th>
Access procedures
Embargo period
Dissemination method
Software/tools to enable re-use
Dissemination Level
Repository
Storing time
Approximated end volume
Associated costs Costs coverage
</th> </tr> </table>
Download from website or request from author
N/a
UnCoVerCPS website
MATLAB
Open access
UnCoVerCPS website, DLR SVN
12/01/2022
5 GB
None
N/a
**Table 17:** _DLR TEST_ 1
### Element No. 3
Reference _DLR TEST_ 2
Name Communication Messages
Origin Recorded during testdrives with one or two vehicles
Nature Sent and received messages of Car2Car-Communication/Vehicle cooperation
Scale Medium
Interested users People researching in driver assistance systems, vehicle
automation, vehicle cooperation, Car2X
Underpins scientific publications Yes
Existence of similar data Yes
Integration and/ or reuse Data can be compared, but not integrated
Standards and Metadata Not existing
Access procedures Download from website or request from author
Embargo period N/a
Dissemination method UnCoVerCPS website
Software/tools to enable re-use MATLAB
Dissemination Level Open access
Repository UnCoVerCPS website, DLR SVN
Storing time 12/01/2022
Approximated end volume 1 GB
Associated costs None
Costs coverage N/a
**Table 18:** _DLR TEST_ 2
## Fundacion Tecnalia Research & Innovation
### Element No. 1
Reference _TCNL TV D_
Name Twizy Vehicle Data
Origin Recorded from experiments with TCNL’s automated vehicle
Nature Vehicle’s trajectory, accelerations (lateral, longitudinal), speed, yaw as well as control commands leading to these values. Recorded from vehicle’s CAN bus, DGPS and IMU.
Scale Medium
Interested users Research group in automated vehicles
Underpins scientific publications No
Existence of similar data Yes
Integration and/ or reuse Data was used in the vehicle identification
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method UnCoVerCPS website
Software/tools to enable re-use Matlab/Simulink
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 10 MB
Associated costs None
Costs coverage N/a
**Table 19:** _TCNL TV D_
### Element No. 2
Reference _TCNL DTCD_
Name Dynacar Trace Conformance Data
Origin Recorded from Multi-body Dynacar Simulator. Data
used in the trace conformance testing of the Tecnalia
vehicle
Nature The reference data is based on high fidelity simulations
of a multi-body vehicle model.
Scale Medium
Interested users Conformance and validation test researchers
Underpins scientific publications No
Existence of similar data No
Integration and/ or reuse Data can be compared
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method UnCoVerCPS website
Software/tools to enable re-use Matlab/Simulink and Dynacar
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 15 MB
Associated costs None
Costs coverage N/a
**Table 20:** _TCNL DTCD_
### Element No. 3
Reference _TCNL DCLCV_
Name Dynacar Closed Loop Controller Validation
Origin Recorded from our Automated driving testing tool,
based on the Dynacar Simulator. Data used to validate
different controllers.
Nature The reference data is based on high fidelity simulations
and real vehicles for the Automated Driving Use Case.
Scale Medium
Interested users Control research groups
Underpins scientific publications Yes
Existence of similar data Yes
Integration and/ or reuse Data can be compared
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method UnCoVerCPS website
Software/tools to enable re-use Matlab/Simulink and Dynacar
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 10 MB
Associated costs None
Costs coverage N/a
**Table 21:** _TCNL DCLCV_
## R.U. Robots Ltd
<table>
<tr>
<th>
**Element No.**
</th> </tr> </table>
1
<table>
<tr>
<th>
Reference
Name
</th> </tr> </table>
_RUR SS_ 1
Safety System for Human-Robot Colaboration Test
Bed
<table>
<tr>
<th>
Origin
Nature
Scale
Interested users
Underpins scientific publications
Existence of similar data
Integration and/ or reuse
Standards and Metadata
</th> </tr> </table>
N/a (software tool) Software
N/a (software tool)
People performing formal verification of CPSs
Yes
N/a (software tool)
High possibility for reuse in other control systems
Not existing
<table>
<tr>
<th>
Access procedures
Embargo period
Dissemination method
Software/tools to enable re-use
Dissemination Level
Repository
Storing time
Approximated end volume
Associated costs
Costs coverage
</th> </tr> </table>
Download from website or request from authors
N/a
Website
Compiler for appropriate programming language
Open access
Not know at this stage
12/01/2022
10 MB - estimated
None
N/a
**Table 22:** _RUR SS_ 1
# Conclusions and future developments
The tables above display the current practice proposed by the consortium
regarding the management of data sets, models and software tools. As
UnCoVerCPS has not collected huge amounts of data during its lifespan,
partners decided to include other elements apart from data sets in the data
management plan. The consortium will continue to provide open access to the
models and tools employed beyond the funding period of UnCoVerCPS.
REFERENCES
0148_UTILE_733266.md
# The Data Management Plan (DMP)
This DMP provides details regarding all the (research) data collected and
generated within the UTILE project. In particular, it explains the way data is
handled, organized and made openly available to the public, and how it will be
preserved after the project is completed. This DMP also provides
justifications when versions or parts of the project data cannot be openly
shared due to third-party copyright issues, confidentiality or personal data
protection requirements or when open dissemination could jeopardize the
project’s achievements.
This DMP reflects the current state of the UTILE project. The details and the final number of the project data sets may vary during the course of this project. The variations will be recorded in updated versions of this DMP.
## 2.1 Data summary
The overall objective of UTILE is to defragment and actively bring together both Innovation Providers and Innovation Developers by setting up an innovative ICT Marketplace as a valorisation one-stop-shop, and additionally to develop a value-adding strategy ensuring
1. to facilitate and catalyse innovation towards effective valorisation,
2. to develop a viable business case ensuring the sustainability of this initiative.
UTILE will evaluate all FP7 and H2020 Health projects present in CORDIS at the effective date of 1 April 2017. The evaluation will be conducted by health-focused TTOs with direct engagement of actual market end-users (biotech, pharma, investors) in Europe and the USA, to identify the research results with the highest potential for translation and exploitation.
The data used in UTILE are derived from the following sources of information:
1. EU project reports, provided by the EU (CORDIS) 1 ;
2. EU project questionnaires 2 , requested from former project coordinators;
3. Evaluation methods, consisting of first evaluation round (III.A) 3 and second evaluation round (III.B) 4 , developed by UTILE partners in WP2.
The EU project reports (I) are (final and/or intermediate) reports which are
publicly available and published in CORDIS.
Within UTILE these project reports will be used by the TTOs for the evaluation
of the projects.
Within UTILE, a questionnaire has been developed (II). The questionnaire is a web-based form on the UTILE website (Marketplace) 5. The EU will send former project coordinators of all projects a request to complete this questionnaire. The questionnaire requests specific project information that acts as complementary information to the (final) reports, to improve the evaluation process of the TTOs. The questionnaire explicitly requests non-confidential information that can be published publicly, since a submitted questionnaire will be uploaded directly into the backend of the Marketplace. With a response rate of 10-20%, it is expected that around 120-240 completed questionnaires will be received.
For the evaluation of the projects, UTILE will develop and implement an evaluation method (III). This evaluation will assist the matchmaking process for linking Innovation Providers with Innovation Developers. The evaluation method is a two-round method. The first evaluation round (III.A), also known as the basic assessment, will be done based upon the EU project reports (I). To ensure consistency among the TTOs, a Basic Assessment Form (BAF) was developed; in a pilot, a number of projects was assessed by all TTOs and the results were comparable.
The result of this evaluation will be a web-based information sheet in the backend of the Marketplace. All projects will go through the first evaluation round.
After the selection of projects from the first evaluation round (based on either a positive rank in the BAF or a positive questionnaire), the second evaluation round (III.B) will take place. This evaluation round is also known as the deep dive assessment. It will be an extended assessment, of which the exact method is still under development. It will be a thorough evaluation that will at least include the information from the first evaluation round and, if available, a completed questionnaire. The main goal of the deep dive is to verify whether the research results selected in the first evaluation round are promising enough for further valorisation. Similar to the first evaluation round, the result will be an evaluation report in the form of a web-based information form.
The evaluation method (III) developed in UTILE can be of use to other evaluators and will be made public. Information from the evaluation method (III) will be published in the Marketplace for other users to obtain information about the projects or a scientific area.
## 2.2 Fair Data
This DMP follows the EU guidelines 6 and describes the data management procedures according to the FAIR principles 7. The acronym FAIR identifies the main features that the project data must have in order to be findable, accessible, interoperable and re-usable. The data used and collected in this project are: the project reports (I), the questionnaires (II) and the assessments (III.A & III.B). Since the project reports (I) are already public data and made FAIR by the European Commission, the questionnaires (II) and the evaluation round reports (III.A & III.B) need to be made FAIR by UTILE.
## Findable data - Metadata
To make the data (questionnaires and evaluation reports) findable, unique data identifiers are used. All this information will be stored in a database; each entity will be a single table linked (related) to other entities by foreign keys. Thanks to these unique data identifiers and the relations between data entities, the data will be searchable on keywords such as project ID, date-time of upload or project title, and will be easily findable for project partners and external users. In addition, data not created for the functioning of the Marketplace (i.e. the deliverables) will be saved using specified naming conventions (i.e. "73326_D#_[file description]") on the backend of the UTILE website. Deliverable 3.1, "Technical documentation" 8, describes the design and architecture of the UTILE website and Marketplace as well.
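As an illustration of the entity-per-table, foreign-key design described above, the sketch below creates two such tables with Python's built-in sqlite3 module. The table and column names are hypothetical; the actual schema is specified in deliverable 3.1.

```python
import sqlite3

# Hypothetical schema sketch; the real schema is defined in D3.1.
conn = sqlite3.connect("marketplace.db")
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS project (
    project_id INTEGER PRIMARY KEY,   -- unique data identifier
    title TEXT NOT NULL,
    uploaded_at TEXT                  -- date-time of upload
)""")
cur.execute("""CREATE TABLE IF NOT EXISTS questionnaire (
    questionnaire_id INTEGER PRIMARY KEY,
    project_id INTEGER NOT NULL REFERENCES project(project_id),
    submitted_at TEXT
)""")
conn.commit()

# Keyword-style lookup, e.g. searching projects by title:
cur.execute("SELECT project_id, title FROM project WHERE title LIKE ?",
            ("%health%",))
```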
## Accessible & Interoperable data
The database and Java code will reside in the INNEN subnet and will not be accessible from outside. Only developers with a secured connection will have access to the database interface. Project information, incl. project reports (I), questionnaires (II) and evaluation reports (III.A & III.B), will be accessible for registered users as explained in the technical documentation of the Marketplace. The project information, which is accessible after login, will be exposed through an HTML & JavaScript interface. The Marketplace will display the project information as text files, e.g. in pdf format, all in English. Since the Marketplace uses an HTML & JavaScript interface, no additional proprietary software is needed.
## Reusable data
The data collected in this project will be reusable by others, since no
additional software is needed to download the information, and the information
can easily serve as a source for other users. The evaluation method (III)
developed within UTILE consists of several steps and decisions which lead to
the evaluation and ranking of the projects. It will be published as a
deliverable on the UTILE website. Since this is public information, the method
will be reusable.
## 2.3 Allocation of Resources
For the hosting and maintenance of the UTILE website there is a small
allocation of resources, but this is minimal and accounted for in WP3. The
data will be saved on the website, which will be hosted by a leading hosting
service with high-level physical and IT security. INNEN will be responsible
for the data, will make backups of the information and will provide
maintenance where necessary. After the project, the platform will be made
sustainable, for example through paying users funding the maintenance of the
platform. The knowledge gathered within the project will be preserved on the
Marketplace website. The sustainability of the project will be investigated in
WP6, as part of the exploitation plan.
## 2.4 Data Security
The data security will mainly depend on the security of the website, since all
data will be located and transferred there. INNEN will be responsible for this
security and for providing backups where necessary. Since all data will be
transferred within the website, security issues with physical transferable
disks do not apply. In addition, e-mail is used only for communication
purposes. The UTILE website will be served entirely over a secure connection
using the SSL (Secure Sockets Layer) protocol. The application will run on a
dedicated machine not accessible from other applications or domains. The
dedicated machine will be hosted by a leading hosting service with high-level
physical and IT security. A backup of the entire system will be made daily
with a complete snapshot of the Linux virtual machine. Application backups
(database, files, etc.) will be executed every 6-12 hours. Backups will be
stored on separate storage disks provided by the hosting service. Since there
will only be one up-to-date version, there will not be any version
specifications; the backups will carry coded version labels for security and
maintenance reasons.
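As an illustration of such an application backup cycle, here is a minimal
sketch; the DMP does not prescribe tooling, so the PostgreSQL database, the
`pg_dump` call and all paths are assumptions made for the example only:

```python
import shutil
import subprocess
from datetime import datetime, timezone
from pathlib import Path

BACKUP_ROOT = Path("/mnt/backup")  # assumed separate storage disk

def application_backup() -> Path:
    """Dump the database and copy application files; run every 6-12 hours."""
    # The timestamped directory name doubles as the "coded version" label.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = BACKUP_ROOT / stamp
    target.mkdir(parents=True)
    # Database dump (assuming a PostgreSQL backend; pg_dump is its standard tool).
    with open(target / "db.dump", "wb") as out:
        subprocess.run(["pg_dump", "--format=custom", "marketplace"],
                       stdout=out, check=True)
    # Copy uploaded files (questionnaires, evaluation reports, ...).
    shutil.copytree("/srv/marketplace/files", target / "files")
    return target
```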
The security of the public website part (front-end) will use the SSL protocol
for logged-in pages. For the back-end of the website, restricted access is
provided to the project partners via registered login, so that only logged-in
users over a secure connection will be allowed into the backend of the
website.
## 2.5 Ethical Aspects
There are some ethical aspects which may play a role in the implementation of
the project:
1. Tracking of search behaviour
To obtain information about the interest of the market, it is useful to track
the search behaviour of users of the Marketplace. It could provide
policymakers, such as the European Commission, with relevant data on market
interest and thus give direction to policy making. The tracking of search
behaviour might become an ethical aspect which needs to be monitored. For
anonymous front-end users, UTILE will probably use Google Analytics
(analytics.google.com). For registered users, UTILE will store the timestamp
of the search action and the parameters used for the search. All other
information details (e.g. account name, username, etc.) raise ethical issues
and will thus be avoided. Only an admin account will have access to that
information, for the recovery of registration problems.
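A minimal sketch of such privacy-aware search tracking for registered users
might look as follows (illustrative only; the function and file names are
hypothetical, not part of the Marketplace codebase):

```python
import json
from datetime import datetime, timezone

def log_search(params: dict, logfile: str = "search_log.jsonl") -> None:
    """Record only the timestamp and the search parameters.

    Deliberately omits account name, username and any other identifying
    details, in line with the constraints described above.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "params": params,
    }
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(event) + "\n")

log_search({"keyword": "photonics", "scientific_area": "ICT"})
```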
2. Permission form, terms and conditions, disclaimer
For requesting information through the questionnaire, the questionnaire will
contain a permission form (terms and conditions, disclaimer; under development
by the UTILE Ethics Board) stating that UTILE may use the provided information
for the execution of the project. It needs to state the project for which the
questionnaire is conducted, what kind of information is requested, what will
be done with this information, and how it will be stored and protected. It
will be emphasised that only public, non-confidential data should be shared
through the questionnaire.
# Conclusions
Within this DMP all the types and sources of information are described. It
provides an overview of how the data within UTILE will be used and handled.
0150_StarFormMapper_687528.md
# 1\. Introduction
1.1. Scope
This document is the deliverable “D7.1 Data Management Plan v1” for the EU
H2020 (COMPET-5-2015 – Space) project “A Gaia and Herschel Study of the
Density Distribution and Evolution of Young Massive Star Clusters” (Grant
Agreement Number: 687528), with abbreviated code name StarFormMapper (SFM)
project. The structure and content are based on [AD1].
1.2. Applicable documents
The following documents contain requirements and references, which are
applicable to the work:
\- [AD1] SFM proposal: A Gaia and Herschel Study of the Density Distribution
and Evolution of Young Massive Star Clusters. Reference 687528. Reference:
Space2015-StarFormMapper-Parts1-3, Issue: 1.1, Date: 04/08/2015.
# 2\. Data Summary
The project relies on the analysis of images and catalogued properties of
objects related to star formation regions. The catalogues are in standard
database formats for astronomy (VO-compatible XML or FITS tables), and the
images will be FITS files. This allows for long-term curation of the data.
Any simulations present will either be stored in an appropriate generic form
(we are considering how to provide these as HDF5 files) or suitable software
to read them will be provided through our own dedicated servers. The exact
solution here is still to be determined. There are no restrictions on the use
of these data, which are all public. Any other data acquired for the project
(e.g. from ground-based telescopes) will have their final products made
available after the nominal telescope proprietary period ends (usually 12
months).
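For illustration, both formats can be read with standard astronomy tooling;
the following minimal sketch assumes the astropy library (an assumption — the
DMP does not mandate a particular library) and uses hypothetical file names:

```python
from astropy.io import fits
from astropy.table import Table

# Images are FITS files; catalogues are VO-compatible XML or FITS tables.
with fits.open("region.fits") as hdul:   # hypothetical file name
    image = hdul[0].data                 # primary HDU holds the image array
    header = hdul[0].header              # standard metadata travels with the file

catalogue = Table.read("cluster_members.xml", format="votable")
print(catalogue.colnames)
```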
The Gaia and Herschel data will be taken from the ESA Science Archive (and
eventually may be fed directly from there into the project server). Any other
observational data will be taken from other relevant science archives but will
require local hosting, as those archives do not have the same feed-in
capability. The original metadata will be added to as appropriate for the
project.
We estimate the final data size for the project that will be stored locally to
be modest (since we are only interested in relatively small areas of the sky).
Our initial DMP is scoped on a final dataset of up to 8TB.
The project has allowed for separate servers at the Leeds and Madrid nodes,
which are now fully installed and functional. These will provide backup to
each other. In addition, off-server local backups (to local NAS) will also be
made up to the 8TB limit noted above. These facilities are allowed for within
the cost of the project. The initial project datasets will be limited to
simulation data which are estimated to require ~300MB each.
# 3\. Fair Data
3.1. Making data findable, including provisions for metadata:
Both FITS and the IVOA formats are standard, and the metadata requirements are
established.
The Madrid node will be responsible for creating and documenting all data
standards required for the project. They have prior experience with large ESA
archives which will be adopted by this project.
3.2. Making data openly accessible
Data will be open access, and made available through the dual public access
servers at Leeds and Madrid. The exact format of the interface has not yet
been decided, but will be addressed in a series of documents to be prepared by
Quasar.
3.3. Making data interoperable:
We will use standard IVOA metadata and file types.
3.4. Increase data re-use (through clarifying licenses):
* Quasar Science Resources (QSR) will gather from public archives the data needed to run the project's algorithms, and the derived products will be accessible to the consortium. At the end of the H2020 project, the consortium will decide what to do with these data.
* The idea is to have the data available as soon as possible, while keeping in mind that releasing data prematurely could be counterproductive. Accordingly, the consortium will decide when to make data public.
* There are no plans to restrict any kind of data. It should all be available at the end of the project.
* QSR will provide the interface to run the scientific validations. As the project progresses, data quality tests, a form of scientific validation, will be developed by the scientific teams.
* There are no restrictions about this point. As long as there are resources the data will remain re-usable. In particular, the University of Leeds will commit to hosting the server mentioned in Section 2 for a period of at least 10 years.
# 4\. Allocation of Resources
We have adopted open standards from the outset, so there are limited
additional costs in ensuring fair access to all data gathered.
The scientific validation of simulations and data products will be developed
by the scientific teams and therefore will be their responsibility. Data
products, meta-accessibility of data and SW interfaces to scientific
algorithms will be the responsibility of QSR.
Overall data management is the responsibility of the management group, with
day-to-day leadership of the DM being led by Leeds.
# 5\. Data Security
There are no sensitive data held by this project.
Immediate storage will be provided on the servers at Leeds and Madrid paid for
by the project. Off-line backup in Madrid will be provided by local NAS also
paid for by the project. Off-line backup in Leeds will be provided within
space already purchased by the Astrophysics Group for general data curation.
Medium term storage of key products is envisaged to continue through these two
routes. Longer term storage may evolve, particularly in line with the
University of Leeds data storage policies.
# 6\. Ethical Aspects
There are no ethical implications for this project.
# 7\. Other
All of our policies at Leeds are also consistent with the DMP requirements of
STFC, which the IT support team are already familiar with, and which is
aligned with the EU's policy on open access. The local IT team at Leeds will
at all times also follow institutional guidance. Quasar will adopt similar
policies.
0151_Co-ReSyF_687289.md
# Introduction
The Co-ReSyF project will deploy a dedicated data access and processing
infrastructure and user portal, with automated tools, methods and standards to
support research applications using Earth Observation (EO) data for monitoring
of Coastal Waters, leveraging system components deployed as part of the SenSyF
project (www.sensyf.eu). The main objective is to facilitate the access to
Earth Observation data and processing tools for the Coastal research
community, aiming at the provision of new Coastal Water services based on EO
data.
Through Co-ReSyF's collaborative front end, even researchers inexperienced in
EO will be able to upload their applications to the Cloud Platform, in order
to compose and configure processing chains, for easy deployment and
exploitation on a cloud computing infrastructure. Users will be able to
accelerate their development of high-performing applications taking full
advantage of the scalability of resources available from Terradue Cloud
Platform’s Application integration service. The system’s facilities and tools,
optimized for distributed processing, include EO data access catalogues,
discovery and retrieval tools, as well as a number of preprocessing tools and
toolboxes for manipulating EO data. Advanced users will also be able to go
further and take full control of the processing chains and algorithms by
having access to dedicated cloud back-end services, and to further optimize
their applications for fast deployment addressing big data access and
processing.
The Co-ReSyF capabilities will be supported and initially demonstrated by a
series of early adopters who will develop new research applications for the
coastal domain, guide the definition of requirements and serve as system beta
testers. Following this, a competitive call will be issued within the project
to further demonstrate and promote the usage of the Co-ReSyF release. These
pioneering researchers will be given access not only to the Cloud Platform
itself, but also to extensive training material on the system and on Coastal
Waters research themes, as well as to the project's events, including the
Summer School and Final Workshop.
## Purpose and Scope
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the Co-ReSyF
project with regard to all the datasets that will be generated by the project.
The DMP is not a fixed document, but evolves during the lifespan of the
project.
## Document Structure
The structure of the document is as follows:
* Chapter 2: Description of the datasets to be produced
* Chapter 3: Description of the standards and metadata used
* Chapter 4: Description of the method for sharing the data
* Chapter 5: Solutions for the archiving and preservation
# Data Sets Description
## SAR_BATHYMETRY_DEM
Digital Elevation model of the sea bed surface derived from a collection of
SAR images of the area of interest. The derived bathymetry data will be
applicable to the region of the sea bed extending from the coastline, at
depths between 10 m and 200 m.
The data may allow monitoring of the coastal bathymetry evolution from
multiple images (at multiple times). Also, in rapidly changing morphological
conditions, such as during coastal storms or tsunamis, if the SAR-image
conditions are valid one can obtain pre- and post-disaster bathymetries, which
are extremely useful for disaster/risk management and coastal management in
general. The bathymetry evolution would also be of extreme importance for data
assimilation studies with numerical morphodynamical or storm-surge models.
Other uses of this product are: coastal engineering studies, coastal
morphological evolution studies, coastal wave and current numerical modelling,
etc.
## OPT_BATHYMETRY_DEM
Digital Elevation model of the sea bed surface derived from a collection of HR
optical images of the area of interest. The derived bathymetry data will be
applicable to the region of the sea bed extending from the coastline, at
depths from 0 m to 10 m (shallow waters).
This data will be complementary to the SAR_BATHYMETRY_DEM, covering the
shallow waters region that cannot be covered by the SAR technique. The usage
of the data will be the same as for the SAR_BATHYMETRY_DEM (refer to Section
2.1).
## WAT_BENTHIC_CLASS
Data containing the classification of the sea floor by its class of sea bottom
albedo type. The derived data will be applicable to the region of the sea bed
extending from the coastline, at depths from 0 m to 10 m (shallow waters).
For shallow ocean waters, knowledge of the optical bottom albedo is necessary
to model the underwater and above-water light field, to enhance underwater
object detection or imaging, and to correct for bottom effects in the optical
remote sensing of water depth or inherent optical properties (IOP’s).
Measurements of the albedo can also help one identify the bottom-sediment
composition, determine the distribution of benthic algal or coral communities,
and detect objects embedded in the sea floor.
## WAT_QUALITY_PROP
Data containing the water quality properties (Chlorophyll-a concentration,
particulates backscattering coefficient, and absorption of Coloured dissolved
organic materials), for a region of interest. The derived data will be
applicable to the region of the sea bed extending from the coastline, at
depths from 0 m to 10 m (shallow waters).
Knowledge of the water quality properties can be used for environmental
analysis of the quality of the coastal waters and their evolution with the
growing of the coastal city areas. Knowledge of the Chlorophyll-a
concentration (biomass) is one of the most useful measurements in limnology
and oceanography. The biomass can be used for studying phytoplankton community
structure, the size frequency distribution of the algal cells and seasonal
shifts within the plankton community. Phytoplankton abundance is related to
natural cycles in nutrient availability and to the input of phosphate and
nitrate. Excess phosphate and nitrate can come from groundwater or water
treatment plants and sewer overflow (nitrate and phosphate are not removed in
most sewage treatment plants). Excess nutrients can cause blooms of
phytoplankton, which can contribute to bottom water anoxia under stratified
conditions.
## VESSEL_DETECTION_TESTS
Data containing the position of the detected vessels for a time interval and
region of interest, and the real position of vessels from historical AIS
databases for the same time interval and region of interest.
The data may be used by researchers to analyse the performance of their
detection algorithms with respect to the number of false alarms and true
positives for ship detection. Ship detection plays an important role in
monitoring illegal ship activities and controlling a country's borders. It
can also be used for marine traffic statistics in order to identify the
areas with intensive ship routes.
## OIL_SPILL_DETECTION_TESTS
Data containing the position of the detected oil spills for a time interval
and region of interest, and the real position of the oil spills from
historical databases for the same time interval and region of interest.
The data may be used by researchers to analyse the performance of their
detection algorithms with respect to the number of false alarms and true
positives for oil spill detection. Oil spill detection is used to monitor
the illegal dumping of oil from vessels and maintaining the environmental
integrity of the coastline. The identification of oil spills may also play an
important role in case of environmental disasters in order to assist in the
clean-up procedures of the environmental agencies.
## WAT_BOUNDARY_MAPS
The data are composed of two ocean boundary maps for the period and region of
interest. One map delineates the boundaries between pixels exhibiting
different seasonalities, where prime zones for water mass mixing can be found.
The other map provides a randomised SST/Chl boundary output, to determine
whether the patterns evident in the true boundary data are indeed patterns
worth basing scientific research on.
Water mass convergence zones are critically important to maintaining our
ocean’s fisheries and ecological systems, frequently representing areas where
colder, nutrient rich water mixes with warmer waters. This interaction fuels
increased plant growth at the base of the ocean food chain, feeding the system
that produces the overwhelming majority of the fish we eat. As ocean surface
phytoplankton grows, sickens, and dies, their photosynthetic activity
fluctuates accordingly. Satellite-derived measurements of chlorophyll activity
provide a measurable estimate of this photosynthetic activity. Furthermore,
with near-daily repeat times over fixed points, some optical datasets
currently extend over a continuous 15 years.
## COAST_ALTIMETRY_TRACKS
Data containing the sea surface height (SSH), sea level anomaly (SLA),
significant wave height (SWH) and wind speed data, with their respective
geographical coordinates and time stamps, derived from the ALES retracker
algorithm for the region of interest and the selected time period with a
sampling frequency of 20 Hz. In addition, an extra dataset with the range and
the corrections applied in order to derive the main parameters will also be
part of the data.
Sea level rise is an important indicator of climate change and one of its
greatest impacts on society. Due to sea level rise, many regions of the
world's coasts will be at much increased risk of flooding during the course of
the 21st century, and hundreds of millions of people currently living just
above sea level may have to be relocated, and coastal infrastructure moved or
rebuilt.
The rise rate shows huge regional variations so it is essential for coastal
planners to have regional observations of sea level rise rate, which in
combination with regional models will lead to regional projections. Tide
gauges measure sea level variation but are affected by local vertical land
movement, for instance due to subsidence. Altimetry, and coastal altimetry in
particular, provide complementary measurements that are not affected by
vertical land variation. In essence the coastal planners need the integration
of both types of measurement.
# Standards and Metadata
The Co-ReSyF catalogue uses the OGC® OpenSearch Geo and Time extensions
standard to expose data + metadata. The baseline of the standard is the
OpenSearch v1.1 specification (A9.com, Inc, n.d.). The Geo and Time extensions
provide the main queryables to manage geospatial data products, and the
specification is standardized at the Open Geospatial Consortium (OGC, n.d.).
According to the selected standard baseline and extensions, the data catalogue
queryable elements are as follows:
* count={count?}
* startPage={startPage?}
* startIndex={startIndex?}
* sort={sru:sortKeys?}
* q={searchTerms?}
* start={time:start?}
* stop={time:end?}
* timerelation={time:relation?}
* name={geo:name?}
* uid={geo:uid?}
* bbox={geo:box?}
* geometry={geo:geometry?}
* submitted={dct:dateSubmitted?}
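For illustration, a query against such an OpenSearch endpoint can be assembled
directly from these queryables; in the minimal sketch below the endpoint URL
and parameter values are placeholders, not the actual Co-ReSyF catalogue
address:

```python
from urllib.parse import urlencode

ENDPOINT = "https://catalogue.example.org/search"  # placeholder endpoint URL

params = {
    "q": "bathymetry",               # {searchTerms?}
    "start": "2016-01-01T00:00:00Z", # {time:start?}
    "stop": "2016-06-30T23:59:59Z",  # {time:end?}
    "bbox": "-9.5,36.8,-6.2,39.7",   # {geo:box?}: west,south,east,north
    "count": 20,                     # {count?}: results per page
}
print(ENDPOINT + "?" + urlencode(params))
```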
# Data Sharing
The Cloud Platform will be the privileged storage and cataloguing resource for
managing the information layers produced by the project and described above.
The Data Agency hosted by Terradue is the dedicated Cloud Platform service in
charge of this type of operations. The catalogue entries of the Data Agency
are open and web accessible.
The data products referenced by a catalogue entry might require user
authentication to allow download from a storage repository, depending on the
policies (e.g. an embargo period before full public release) applied by the
producer organisation. User registration procedures shall be described on the
Co-ReSyF portal and be simple for registrants to follow (based on a single
sign-on principle).
To query the Data Agency catalogue from within application scripts, users can
take advantage of the opensearch-client tool (Terradue, n.d.). Here are the
available keywords to receive direct results:
* wkt -> retrieve the product geometry in Well Known Text format
* enclosure -> the URL to download the product
* startdate -> product start datetime
* enddate -> product end datetime
* identifier -> product ID
# Archiving and preservation
While the Terradue Cloud Platform provides the data management and operations
backbone from which data products can be directly accessed, the Cloud Platform
supports distributed computing protocols that allow different storage
locations to be managed for the same dataset.
Long term data preservation of the produced datasets is foreseen to follow two
main tracks:
1. Management of product copies onto the European Science Cloud, with as of today, physical resources being provided by the organization EGI (European Grid Infrastructure).
2. Management of products copies onto partners own storages offering an access point to the Platform to perform the data staging operations.
Option 1 is the default option for long-term preservation of the product
copies and will be assured as long as the Co-ReSyF platform is operational
with no additional cost to the partners for storing the data.
Option 2 is an alternative for cases where the generator of the dataset
prefers to keep control of the data. This option is not foreseen to be used
for the datasets described above; if it is used, it is the responsibility of
the owner of the repository to arrange financial support for ensuring the
preservation of the data.
# Reference Documents
x Co-ReSyF. (2016). _GRANT AGREEMENT-687289_ . European Commission, Research
Executive Agency.
x Terradue (n.d.). opensearch-client. Terradue Developer Cloud Sandbox.
Retrieved June 22 nd ,
2016, from
_http://docs.terradue.com/developersandbox/reference/man/bash_commands_functions/catalogue/opensearchclient.html?highlight=opensearch_
x A9.com, Inc (n.d.). OpenSearch Specifications 1.1. OpenSearch.org. Retrieved
June 22 nd ,
2016, from _http://www.opensearch.org/Specifications/OpenSearch_
x OGC (n.d.). OpenSearch Geo and Time Extensions. Opengeospatial.org.
Retrieved June 22 nd , 2016, from
_http://www.opengeospatial.org/standards/opensearchgeo_
## END OF DOCUMENT
0157_SUPERTED_800923.md
2\. **Theory data:** Theoretical predictions about different quantities of
interest, typically obtained from a derivation based on a sequence of
assumptions and approximations, and a subsequent numerical (or in some cases
analytical) solution of the resulting equations. Here the data can mean (i)
the numerical code used to obtain those results, or (ii) the curves produced
in the numerical simulations.
The purposes of the data management within the SUPERTED project are:

1. Taking care of the integrity of the data, its proper storage, and its documentation.
2. Enabling efficient communication of intermediate data among the project members.
3. Identifying three categories of data: publishable data (PU), supporting data (SU), and data that may require IPR protection (IPR).
4. In the spirit of open data initiatives, attempting to publish as much data as possible along with the research publications based on that data, together with metadata helping to search that data.
# Data collection
Data collection takes place in work packages 1-4 as part of the research
targeting the SUPERTED goals. We identify slightly different ways of data
collection for the experiments and theory work.
1. **Experimental data:** All experimental partners of the project maintain electronic lab books on their experimental activity. These form the basis for collecting experimental data. Based on these lab books, we regularly gather a set of relevant information regarding each project on the SUPERTED wiki site. This site is open only to the SUPERTED project partners. To ensure a useful representation of the data, gathering the information into the wiki site requires some selection by the researchers. This is because of the way experimental data are gathered in our field of science: a large part of the data turns out not to be relevant for the general outcome for some trivial or uninteresting reason - for example, because the sample under study was not of the intended type, or because of equipment failure. The selected set of information is presented in the wiki site under the following generic form, structured within the different work packages/project deliverables.
Acronyms: PU = publishable as such after the relevant paper has been submitted, PE = publishable after submitting the relevant paper, but needs editing, CO = confidential, do not publish, IPR = possibly relevant for patenting; PA = part of the published manuscript, SU = supplementary information, EX = published separately

**Overall aim:**

1. First aim
2. Second aim

**Specific aims:**

1. First specific aim
2. Second specific aim

**Process description - for example, different sample batches:**

(For each item, add the month/year, a reference to your lab book, and your name and institute)

* Month/year
* Some overall description of this set of data
* Link to a summary of the data with pictures/descriptions - lab book reference - (name, institute)

**Conclusions (update during the process):**

* Conclusion for the process, or overall conclusions from the research

**Open questions:**

* Possible new open questions whose solutions would help carry out the process
The main purpose of this website is to serve efficient communication among the
project partners. At the same time, it makes it possible to directly identify
parts of the data that can be used either as part of a publication, as part of
supplementary information material, or as part of an external set of data to
be published on an archival site. In addition, we will actively aim at
identifying data that needs to stay confidential for future research projects
or because it is linked to protecting intellectual property rights. The
different types of data will be identified via the acronyms and colour codes
listed above.
2. **Theory data:** Theoretical physics work does not produce lab books. However, it may support presenting the experimental data by producing predictions of the expected behaviour of the experimentally studied observables. This is often not new publishable theory as such, but rather the use of existing theory within the parameter regime of the experiment. Such predictions often arise from generic numerical codes and can be presented in graphical form as curves. The SUPERTED project will include such curves and codes from the theory work within the wiki site as parts of the experimental projects. This set of data then includes identification related to its publishability, similarly to the experimental data.
In addition, SUPERTED includes separate theory collaboration projects between
two of the partners. For such projects, we will create separate wiki site
pages, with the following format:
Acronyms: PU = publishable as such after the relevant paper has been submitted, PE = publishable after submitting the relevant paper, but needs editing, CO = confidential, do not publish, IPR = possibly relevant for patenting; PA = part of the published manuscript, SU = supplementary information, EX = published separately

**Overall aim:**

1. First aim
2. Second aim

**Specific aims:**

1. First specific aim
2. Second specific aim

**Intermediate results:**

(For each item, add a title of the subproject, month/year, and your name and institute)

* Title of the subproject (month/year)
* Some overall description
* Link to a summary of the data with pictures/descriptions - (name, institute)
* Link to codes (Matlab/Python/C++/Mathematica) producing this behavior

**Conclusions (update during the process):**

* Conclusion for the process, or overall conclusions from the research

**Open questions:**

* Possible new open questions whose solutions would help carry out the process
# Data storage, preservation and data sharing
The SUPERTED wiki site will act as an intermediate-stage repository for the
data gathered during the project, in addition to the lab books of the
participating experimental groups. Access of the participants to this wiki
site will be maintained until at least 3 years after the end of the project.
The wiki site is provided by the University of Jyväskylä for this project.
**Figure 1: Main page of SUPERTED project intranet or wiki site. Confluence
wiki is used in this project.**
In addition, access to the lab books will be maintained until at least 3 years
after the project. The data format within the wiki site consists of
descriptions (ASCII text) and pictures (png/jpg/pdf and other commonly
readable formats). For the data chosen to be part of published scientific
papers (identified with the acronyms PU/PE and/or PA/SU/EX), we will use one
or more of the following ways to ensure data sharing and open access:
## PA: part of the published manuscript or SU: supplementary information
If the data is part of the published manuscript or the supplementary
information published together with the manuscript, it typically does not need
further efforts in terms of preservation as we will publish our results in
well-known journals that have long-term storage of the information. In
addition, if not forbidden by the journal, we will submit the manuscripts to
the arXiv repository (https://arxiv.org) 3 .
_Self-archived at JYU._ Research conducted at JYU is self-archived (parallel
published) in the JYX publication archive. At JYU, the researcher submits a
research article (both the final PDF and the final draft version) using the
TUTKA form in connection with registering the publication data for the
University Library. (Final draft = Author's Accepted Manuscript (AAM) = the
authors' version = post-print = post-review = the version after peer-review
changes but before copy editing and formatting by the publisher.) The
University Library verifies the publication permission of the article, checks
the archived version and possible embargo, and saves the article in JYX.
## EX: published separately
Besides supplementary information material to the published works, in certain
cases we will opt to move part of the background content of the project wiki
page directly into a public data repository (for example, the University of
Jyväskylä JYX publication archive or the zenodo.org service provided by the
OpenAIRE project) for long-term storage with open access. These data are then
linked to and from the published scientific articles.
# Documentation and metadata
The following table indicates an example structure of the metadata included in
the beginning of the self-archived datasets. It is compatible with the
DataCite 4.0 metadata schema 4 .
<table>
<tr>
<th>
**Project and GA number**
</th>
<th>
**SUPERTED 800923**
</th> </tr>
<tr>
<td>
Identifier
</td>
<td>
Superted.xxx (xxx is a running number)
</td> </tr>
<tr>
<td>
Creator
</td>
<td>
Name and institute (+ contact details)
</td> </tr>
<tr>
<td>
Title
</td>
<td>
Title of the data
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
Name and institute (+ contact details)
</td> </tr>
<tr>
<td>
Publication Year
</td>
<td>
Year
</td> </tr>
<tr>
<td>
Resource type
</td>
<td>
Experimental/simulation/code
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Short description of the presented data
</td> </tr>
<tr>
<td>
Data source
</td>
<td>
E.g. experiment performed in Pisa on date
</td> </tr>
<tr>
<td>
Version
</td>
<td>
Possible version information (number)
</td> </tr>
<tr>
<td>
Rights
</td>
<td>
Licence (if any) or use constraints
</td> </tr> </table>
The SUPERTED project maintains up-to-date documentation and metadata to ensure
that the research data produced in the project will be 'FAIR', that is
findable, accessible, interoperable and reusable 5 .
# Intellectual property rights
The Intellectual Property Rights (IPR) Management Plan ( _Separate Deliverable
4.1_ ) of SUPERTED project has been elaborated as a set of rules and
protocols. In SUPERTED project, we follow three main principles:
1. The recognition of knowledge brought in (background) and generated within the project (foreground) by each partner has been assigned by default, based on the information contained in the Grant and Consortium Agreements. This ensures respect of partners' rights without causing administrative burden.
2. There is a dedicated task (_Deliverable 4.1_, carried out by partners JYU and BIHUR) to actively monitor the IPR generation and propose paths to exploit and disseminate the results while avoiding conflicts. This allows all knowledge generation to be dealt with systematically while letting partners focus on the technical activity.
3. Management of IPR (including conflicts) will in the first instance be carried out by the teams involved in the project. This makes the most of the teams' technical knowledge when discussing issues and finding solutions.
# Responsibilities and resources
All SUPERTED partners are responsible for complying with the DMP and its
procedures for data collection, handling and preservation. JYU is responsible
for overseeing this implementation and for ensuring that the plan is reviewed
and revised during the project. This DMP is a plan, and its actual
implementation may reveal better ways to operate; we therefore leave open the
possibility of improving the policies during the project. Any changes in the
DMP will, however, require approval by the SUPERTED board, either in a common
Skype meeting or in an annual project meeting. The contact person for
communication is Tero Heikkilä, [email protected].
## Bibliography
1. European Commission, Data management in Horizon 2020 Online Manual. Available from: http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-data-management/data-management_en.htm#A1-template
2. University of Jyväskylä, Open Science and Research. Available from: https://openscience.jyu.fi/en/open-science-jyu
3. arXiv®, Cornell University. Available from: https://arxiv.org
4. DataCite Metadata Schema 4.0. Available from: https://schema.datacite.org/meta/kernel-4.0/doc/DataCite-MetadataKernel_v4.0.pdf
5. European Commission. Guidelines on FAIR Data Management in Horizon 2020, July 2016. Available from: http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf
0158_DESTINATIONS_689031.md
# Executive Summary
Throughout the whole work programme, the CIVITAS DESTINATIONS project embeds
the process of data management and the procedure for compliance with the
ethical/privacy rules set out in the Ethics Compliance Report (D1.1). The data
management procedures within the CIVITAS DESTINATIONS project arise from the
detail of the work rather than from the overall raison d'être of the project
itself, which is part of the EC Horizon 2020 programme, Mobility for Growth
sub-programme.
D1.7 represents the third edition of the Project Data Management Plan (PDMP)
related to the data collected, handled and processed by the CIVITAS
DESTINATIONS project in the horizontal (HZ) WPs until February 2019.
According to the guidelines and indications defined in the Ethics Compliance
Report (D1.1), the overall approach to data management issues adopted by the
CIVITAS DESTINATIONS project is described in section 2.2.
The Project Data Management Plan is structured as follows:
* Section 2 provides the introduction to the role of the Project Data Management Plan (PDMP) in the project.
* Section 3 identifies the different typologies of data managed by the whole CIVITAS DESTINATIONS project (the data described are cumulated since the beginning of the project until M30).
* On the basis of the data typologies identified in section 3, section 4 details the specific data collected and generated by CIVITAS DESTINATIONS (the data described are cumulated since the beginning of the project until M30).
* Section 5 focuses on Horizontal (HZ) WPs and it specifies the data managed/processed and the procedures adopted (when applicable) at this level.
# Introduction
## Objectives of the CIVITAS DESTINATIONS project
The CIVITAS DESTINATIONS project implements a set of mutually reinforcing and
integrated innovative mobility solutions in six medium-small urban piloting
areas in order to demonstrate how to address the lack of a seamless mobility
offer in tourist destinations.
The overall objective of the CIVITAS DESTINATIONS project is articulated in
the following operational goals:
* Development of a Sustainable Urban Mobility Plan (SUMP) for residents and tourists focusing on the integrated planning process that forms the basis of a successful urban mobility policy (WP2);
* Development of a Sustainable Urban Logistics Plan (SULP) targeted on freight distribution processes to be integrated into the SUMP (WP5);
* Implementation and demonstration of pilot measures to improve mobility for tourists and residents (WP3-WP7);
* Development of guidelines to sites for stakeholder engagement (WP2-WP8);
* Development of guidelines to sites for the definition of business models to sustain the site pilot measures and the future implementation of any other mobility actions/initiatives designed in the SUMP (WP8);
* Development of guidelines to sites for the design, contracting and operation of ITS (WP8);
* Evaluation of results both at the project level and at site level (WP9);
* Cross-fertilization of knowledge and best practice replication including cooperation with Chinese partners (WP10);
* Communication and Dissemination (WP11).
## Role of PDMP and LDMP in CIVITAS DESTINATIONS
The role and the positioning of the PDMP within the whole CIVITAS DESTINATIONS
project (in particular with the Ethics Compliance Report, D1.1) is detailed in
the following:
* The PDMP specifies the project data typologies managed in CIVITAS DESTINATIONS;
* Based on the identified data typologies, the PDMP details the data which are collected, handled, accessed, and made openly available/published (eventually). The PDMP provides the structure (template) for the entire Data Management reporting both at Horizontal (WP8, WP9, WP10) and Vertical (from WP2 to WP7) level;
* The LDMP (D1.10) describes the procedures for data management implemented at site level.
## PDMP lifecycle
The CIVITAS DESTINATIONS project includes a wide range of activities, spanning
from the users' needs analysis for the demonstration measures, including
SUMP/SULP (surveys for data collection, assessment of the current mobility
offer, which could include the management of data coming from previous surveys
and existing data sources, personal interviews/questionnaires, collection of
requirements through focus groups and co-participative events, etc.), to the
operation of the measures (data of users registered to use the demo services,
management of images for access control, management of video surveillance
images in urban areas, infomobility, bookings of mobility services,
payment/validation data, data on the use of services for promotional purposes:
green credits, etc.), and to data collection from ex-ante to ex-post
evaluation.
Data can be grouped in some main categories, but the details vary from WP to
WP (in particular for the demonstration ones) and from site to site.
Due to the duration of the project, data to be managed will also evolve during
the project lifetime.
For the abovementioned reasons, the approach used for the delivery of the PDMP
and LDMP is to restrict the data collection to each six-monthly period: this
also allows the project partners, in particular Site Managers, to keep track
of and control the data to be provided.
This version of the PDMP covers the period of project activities until
February 2019.
# Data collected and processed in CIVITAS DESTINATIONS
The CIVITAS DESTINATIONS project covers different activities (identified in
section 2.1) and deals with an extended range of possible data to be
considered.
The term “data” can be related to different kinds/sets of information
(connected to the wide range of actions taking place during the project).
A specification of the “data” collected/processed in DESTINATIONS is required
together with a first comprehensive classification of the different main
typologies involved.
In particular, data in DESTINATIONS can be divided between the two following
levels:
1. Data collected by the project;
2. Data processed/produced within the project.
**Data collected** by the project can be classified in the following main
categories:
* Data for SUMP-SULP elaboration (i.e. baseline, current mobility offer, needs analysis, etc.);
* Data required to set up the institutional background to support SUMP-SULP elaboration, design and operation of demo measures;
* Data for the design of mobility measures in demo WPs (i.e. baseline, current mobility offer, needs analysis, etc.);
* Data produced in the operation of demo mobility measures (i.e. users’ registration to the service, validation, transactions/payment, points for green credits, etc.);
* Data collected to carry out the ex-ante and ex-post evaluation;
* Data required to develop guidelines supporting the design/operation of demo measures;
* Data used for knowledge exchange and transferability;
* Data used for dissemination.
Data collected by the CIVITAS DESTINATIONS project are mainly related to local
activities of the demonstration measures design, setup and implementation.
This process deals mostly with responsibilities of Site Managers. This is
reflected in the production of the LDMP for which each site provides its
contribution.
**Data processed/produced** by the project are mainly:
* SUMP/SULP;
* Demonstration measures in the six pilot sites;
* Outputs coming from WP8 (business principles and scenarios, ITS contracting documents, etc.), WP9 (evaluation) and WP10 (transferability).
Regarding these data, the data management process is mostly the responsibility
of the Horizontal WP Leaders/Task Leaders, and it is described in this
Deliverable.
The activities which have taken place since the beginning of the CIVITAS
DESTINATIONS project are the following (here the reporting is restricted to
the activities of interest for the data management process):
* **WP2** – collection of information on SUMP baseline
* **WP3** , **WP4** , **WP6** , **WP7** – User needs analysis, design and implementation of demonstration of services and measures, operation of demo services and measures
* **WP5:** collection of information on SULP baseline. User needs analysis design and implementation of services and measures, operation of demo services and measures
### • WP8
* Task 8.1 – Stakeholder mapping exercise detailing the organisations in each of the six sites which have differing levels of power and interest in the site measures. This included the collection of the names, email addresses and phone numbers of key individual contacts in these organisations. Development of guidelines on how to engage the identified stakeholders.
* Task 8.2 – Elaboration of the documents for the call for tender for subcontracting professional expertise on business model trainings and coaching activities to be provided to the project sites. Launch of the tender, collection of participants' offers, evaluation of the offers and awarding of the tender to META Group srl. Coordination of sub-contracting activities by ISINNOVA.
* Task 8.3 – Provision of guidelines for the design of ITS supporting demo measures, provision of guidelines for tendering/contracting ITS, provision of guidelines for ITS testing.
### • WP9
* Task 9.1 and 9.3: Identification of indicator categories for ex-ante/ex-post evaluation. Continuous coordination activity in order to support LEMs (Local Evaluation Managers) and discuss the definition of their measures impact indicators (in accordance with the guidelines distributed in December 2016), the preparation of the local Gantt charts and the setting of the ex-ante impact evaluations. Close and continuous cooperation with the SATELLITE project.
* Task 9.2: Preparation and delivery of the draft evaluation report (delivered 4th of July 2017)
* Data collection through MER (Measure Evaluation Report) and PER (Process Evaluation Report)
### • WP10
* Participation in ITB-China 2017
* Urban Mobility Management Workshop in Beijing
* On-site technical visits in Beijing and Shenzhen
* Launch of the platform of followers
# Detail of data categories
In the following, the typologies of "sensitive" data produced, handled or
managed by these activities are identified. The description of the data
management procedures is provided in section 5 (for horizontal WPs) and in
D1.10 (for demo WPs and site activities).
### WP2
_Task 2.2 - Task 2.3 Mobility context analysis and baseline_

Data collection/survey for SUMP elaboration:
* Census/demographic data;
* Economics data;
* Tourists flow;
* Accessibility in/out;
* O/D matrix;
* Data on network and traffic flow (speed, occupancy, incidents, etc.);
* Emissions/Energy consumption;
* Pollution;
* Questionnaires on travel behaviour, attitudes, perceptions and expectations;
* On-field measuring campaign carried out during the data collection phase.
_Task 2.6 Smart metering and crowdsourcing_

Automatic data collection supporting SUMP development:

* Traffic flow;
* Passenger counting.
### WP3
_Task 3.2 User needs analysis, requirements and design_
Data collection/survey for safety problem assessment at local level and design
of demo measures:
* Data about network, cycling lanes, walking paths, intersections, crossing points, traffic lights;
* Traffic data (combined with WP2),
* Road safety statistics (number of incidents on the network, etc.) combined with WP2;
* Emissions/Energy consumption (combined with WP2);
* Survey on users’ needs and expectations;
* Reports coming from stakeholder and target users focus group;
* Statistics produced by Traffic Management System, Traffic Supervisor or similar.
### WP4
_Task 4.2 User needs analysis, requirements and design_
Data collection/survey for extension/improvement of sharing services and
design of demo measures:
* Data on sharing/ridesharing service demand;
* Data on sharing/ridesharing service offer;
* Statistics produced by the platform of management of bike sharing already operated (registered users, O/D trips, etc.);
* Survey on users’ needs and expectations;
* Reports coming from stakeholder and target users focus group.
Data collection/survey for take up of electrical vehicles and design of demo
measures:
* Data on the demand for electrical vehicles and recharge points;
* Data on the offer of electrical vehicles and recharge points;
* Survey on users’ needs and expectations;
* Reports coming from stakeholder and target users focus group.
_Task 4.4/Task 4.5/Task 4.6 Demonstration of demo services_

Data collection during service demonstration:
* Registered service users and related info;
* Data collected during the service operation;
* User satisfaction analysis.
### WP5
_Task 5.2 Logistics context and user needs analysis for piloting services on freight logistics_

Data collection/surveys for SULP elaboration:
* Network/traffic data (combined with WP2);
* Data on shops, supply process, logistics operators, etc.;
* Energy/emissions consumption (combined with WP2);
* On-field measuring campaign carried out during the data collection phase;
* Questionnaires/survey on supply/retail process;
* Reports coming from stakeholder and target users focus group.
Data collection/surveys for demo logistics services:
* Data related to the used cooked oil collection process currently adopted;
* Survey on users’ needs and expectations;
* Reports coming from stakeholder and target users focus group.
_Task 5.6/Task 5.7 Demonstration of demo services_

Data collection during service demonstration:
* Registered service users and related info;
* Data collected during the service operation;
* User satisfaction analysis.
### WP6
_Task 6.2 User needs analysis, requirements and design_
Data collection for the design of demo measures for increasing awareness of
sustainable mobility:
* Network/traffic data (combined with WP2);
* Energy/emissions consumption (combined with WP2);
* Data on mobility and tourist “green services”, green labelling initiatives and promotional initiatives already under operation;
* Survey on users’ needs and expectations;
* Reports coming from stakeholder and target users focus group.
Data collection for the design of demo measures for mobility demand
management:
* Survey on users’ needs and expectations;
* Reports coming from stakeholder and target users focus group.
_Task 6.4/Task 6.5/Task 6.6 Demonstration of demo measures_

Data collection during service demonstration:
* Registered service users and related info;
* Data collected during the service operation;
* User satisfaction analysis.
### WP7
_Task 7.2 User needs analysis, requirements and design_

Data collection for the design of demo measures for Public Transport services:
* Data on PT service demand;
* Data on PT service offer;
* Statistics produced by the systems already operated (i.e. ticketing);
* Survey on users’ needs and expectations;
* Reports coming from stakeholder and target users focus group.
_Task 7.4/Task 7.5/Task 7.6 Demonstration of demo measures_

Data collection during service demonstration:
* Registered service users and related info;
* Data collected during the service operation;
* User satisfaction analysis.
### WP8
_Task 8.1_
* Data on stakeholders:
  * Contact names of individuals working at the stakeholder organisations;
  * Email addresses of the individuals;
  * Phone numbers of the stakeholder organisations.
_Task 8.2_
* Information provided by tender participants in their offer:
  * General information on the tender participants (contact details and address, authorized signature and subcontracting, declarations);
  * Information to prove the professional and technical capability to carry out the activities requested in the tender (description of the proposed methodology, curricula vitae of the experts).
* Information supporting CANVAS development for relevant measures in the sites
_Task 8.3_

N/A – The data collected in this WP in the reference period are not included
in the list of "sensitive" data identified in D1.1.
### WP9
_Task 9.2 – Task 9.3 – Task 9.4 Evaluation Plan, Ex-ante/Ex-post evaluation_
* Baseline (BAU): baselines are calculated in different ways, including surveys, according to the measures they refer to. The data used are listed below:
  * Economic impacts (operating revenues, investment costs, operating costs);
  * Energy consumption (fuel consumption, energy resources);
  * Environmental impacts (air quality, emissions, noise);
  * Sustainable mobility (modal split, traffic level, congestion level, vehicle occupancy, parking, public transport reliability and availability, opportunity for walking, opportunity for cycling, bike/car sharing availability, road safety, personal safety, freight movements);
  * Societal impacts (user acceptance, awareness and satisfaction, physical accessibility towards transport, car availability, bike availability);
  * Health impacts.
### WP10
_Task 10.4 – Cross-fertilisation among consortium members and beyond_
* Information provided by tender participants in their offer;
* Management of personal data required to register to the platform.
_Task 10.5 – International cooperation in research and innovation in China_
* Data to prepare a collective brochure in Mandarin per site (as detailed below);
* Contacts collected by visitors in China cooperation events.
### WP11
N/A – The data collected in this WP in the reference period are not included
in the list of "sensitive" data identified in D1.1.
# Data Management Plan
## WP2-WP7
The Data Management Plan for the demonstration measures (WP2-WP7) is detailed
in Deliverable D1.10 – Local Data Management Plan (LDMP) – third edition
(M19-M30).
## WP8
For each of the data categories identified in section 4, the following table
describes the management procedures.
<table>
<tr>
<th>
**WP8 – Task 8.1**
</th> </tr>
<tr>
<td>
**Stakeholder mapping**
</td> </tr>
<tr>
<td>
**Data management and storing procedures**
</td> </tr>
<tr>
<td>
8.1.1
</td>
<td>
How are the data collected by sites stored?
</td>
<td>
Data has been inputted by the six cities into proforma Excel files, issued by
Vectos (electronic format)
</td> </tr>
<tr>
<td>
8.1.2
</td>
<td>
Please detail where the data are stored and in which modality/format (if
applicable)
</td>
<td>
Information provided by the six cities is stored on the Vectos internal server
in electronic format.
</td> </tr>
<tr>
<td>
8.1.3
</td>
<td>
How data are used
(restricted use/public use)? Are they made publicly available?
</td>
<td>
Email addresses and individuals’ names are restricted and are only for the use
of the sites when liaising with stakeholders.
</td> </tr>
<tr>
<td>
8.1.4
</td>
<td>
Who is the organization responsible for data storing and management?
</td>
<td>
Vectos, Paul Curtis is overall responsible for the collation of the data and
storing centrally on the Vectos server. The six site managers are responsible
for the storing of their respective stakeholder data, with the following
variances:
* Andreia Quintal, HF (individual names, individual email addresses stored internally by Vectos)
* Antonio Artiles Del Toro, GUAGUAS (organisation phone numbers and individual email addresses stored internally by Vectos)
* Maria Stylianou, LTC (data stored internally only)
* Alexandra Ellul, TM (individual names, individual email addresses stored internally by Vectos)
* Stavroula Tournaki, TUC – (data held internally only)
* Renato Bellini, Elba – (data held internally only)
</td> </tr>
<tr>
<td>
8.1.5
</td>
<td>
By whom (organization, responsible) are data accessible?
</td>
<td>
Data are accessible to Vectos via the internal server. They are also
accessible by each site partner that provided the details, via their own
servers.
</td> </tr> </table>
**Table 1: Description of WP8 (Task 8.1) data management procedures
(stakeholder mapping)**
<table>
<tr>
<th>
**WP8 – Task 8.2**
</th> </tr>
<tr>
<td>
**Management of the call for tender for the selection of expert support for
business development for the more relevant site measures**
</td> </tr>
<tr>
<td>
**Data management and storing procedures**
</td> </tr>
<tr>
<td>
8.2.1
</td>
<td>
How are data collected from tender participants stored?
</td>
<td>
Tender participants have sent their offers in electronic format.
</td> </tr>
<tr>
<td>
8.2.2
</td>
<td>
Please detail where the data are stored and in which modality/format (if
applicable)
</td>
<td>
Information provided by the participants is stored in the ISINNOVA archive in
electronic format. Details of the awarded participant (META Group srl) have
also been forwarded to the ISINNOVA accounting system for the management of
payment procedures.
</td> </tr>
<tr>
<td>
8.2.3
</td>
<td>
How are the data used (restricted use/public use)? Are they made publicly
available?
</td>
<td>
Information is restricted and managed in accordance with the confidentiality
rules required for tender management. It will not be made publicly
available.
</td> </tr>
<tr>
<td>
8.2.4
</td>
<td>
Who is the organization responsible for data storing and management?
</td>
<td>
ISINNOVA, Ms. Loredana MARMORA
</td> </tr>
<tr>
<td>
8.2.5
</td>
<td>
By whom (organization, responsible) are data accessible?
</td>
<td>
Data have been accessed by the ISINNOVA team involved in the tender
management and awarding, and by the members of the evaluation board (three
people from ISINNOVA and two from Madeira).
Data related to the awarded participant (META Group srl) are also available
to ISINNOVA accounting staff for payment management.
</td> </tr> </table>
**Table 2: Description of WP8 (Task 8.2) data management procedures – call for
tender**
## WP9
For each of the data categories identified in section 4, the following table
describes the management procedures.
<table>
<tr>
<th>
**WP9**
</th> </tr>
<tr>
<td>
**Data management and storing procedures**
</td> </tr>
<tr>
<td>
9.7.1
</td>
<td>
How are data collected by sites for the ex-ante evaluation stored?
</td>
<td>
Ex-ante and ex-post data collected by the Local Evaluation Managers (LEMs)
and Site Managers are stored in an ad hoc Excel file according to a
structured data collection template.
Information is provided by sites through the MER and PER templates.
</td> </tr>
<tr>
<td>
9.7.2
</td>
<td>
Please detail where the data are stored and in which modality/format (if
applicable)
</td> </tr>
<tr>
<td>
9.7.3
</td>
<td>
How will the data be used?
</td>
<td>
These data will then be transposed into the Measures Evaluation Report
according to the format provided by the SATELLITE project. They will be used
in aggregated form.
</td> </tr>
<tr>
<td>
9.7.4
</td>
<td>
Who is the organization responsible for data storing and management?
</td>
<td>
ISINNOVA
</td> </tr>
<tr>
<td>
9.7.5
</td>
<td>
By whom (organization, responsible) are data accessible?
</td>
<td>
Data are accessible by the ISINNOVA evaluation manager (Mr. Stefano Faberi)
and his colleagues.
</td> </tr> </table>
**Table 3: Description of WP9 data management procedures**
## WP10
For each of the data categories identified in section 4, the following table
describes the management procedures.
<table>
<tr>
<th>
**WP10**
</th> </tr>
<tr>
<td>
**Participation in ITB China 2017**
</td> </tr>
<tr>
<td>
**Data management and storing procedures**
</td> </tr>
<tr>
<td>
10.1.1
</td>
<td>
How are the collected data stored?
</td>
<td>
Data collected from the sites are included in a promotional brochure in
Mandarin.
The business cards collected by GV21 during the ITB China trade fair (and
collateral events) have been used to send follow-up emails and to identify
follow-up actions that could be conducted by the sites (possibly outside the
project, as no budget for ITB China 2017 follow-up actions is allocated in
the DESTINATIONS project). A specific archive has been created to store the
business-card data.
</td> </tr>
<tr>
<td>
10.1.2
</td>
<td>
Please detail where the data are stored and in which modality/format (if
applicable)
</td> </tr>
<tr>
<td>
10.1.3
</td>
<td>
How will the data be used?
</td> </tr>
<tr>
<td>
10.1.4
</td>
<td>
Who is the
organization responsible for data storing and management?
</td>
<td>
GV21
</td> </tr>
<tr>
<td>
10.1.5
</td>
<td>
By whom (organization, responsible) are data accessible?
</td>
<td>
Data are accessible by GV21 (Mrs. Julia Perez Cerezo) and her colleagues.
</td> </tr> </table>
**Table 4: Description of WP10 data management procedures (Participation in
ITB China 2017)**
<table>
<tr>
<th>
**Urban Mobility Management Workshop in Beijing (June 2018)**
**On-site Technical Visits in Beijing and Shenzhen**
</th> </tr>
<tr>
<td>
**Data management and storing procedures**
</td> </tr>
<tr>
<td>
10.2.1
</td>
<td>
How are the collected data stored?
</td>
<td>
Names and contact details of attendees and people met at the technical visit
have been collected by GV21 and stored in electronic format. The file is
kept in a specific archive (see 10.1.1).
The data are retained for future promotion activities but are not currently
used.
</td> </tr>
<tr>
<td>
10.2.2
</td>
<td>
Please detail where the data are stored and in which modality/format (if
applicable)
</td> </tr>
<tr>
<td>
10.2.3
</td>
<td>
How will the data be used?
</td> </tr>
<tr>
<td>
10.2.4
</td>
<td>
Who is the organization responsible for data storing and management?
</td>
<td>
GV21
</td> </tr>
<tr>
<td>
10.2.5
</td>
<td>
By whom (organization, responsible) are data accessible?
</td>
<td>
Data are accessible by GV21 (Mrs. Julia Perez Cerezo) and her colleagues.
</td> </tr> </table>
**Table 5: Description of WP10 data management procedures**
**(On Site Technical Visits, Urban Mobility Management Workshop in Beijing)**
<table>
<tr>
<th>
**WP10**
</th> </tr>
<tr>
<td>
**Management of the call for tender for the selection of IT provider in charge
of the setup of the platform of followers**
</td> </tr>
<tr>
<td>
**Data management and storing procedures**
</td> </tr>
<tr>
<td>
10.3.1
</td>
<td>
How are data collected from tender participants stored?
</td>
<td>
Tender participants have sent their offers in electronic format.
</td> </tr>
<tr>
<td>
10.3.2
</td>
<td>
Please detail where the data are stored and in which modality/format (if
applicable)
</td>
<td>
Information provided by the participants is stored by the Project
Dissemination Manager (PDM) and the CPMR financial services.
Data are stored in electronic format on the CPMR server.
</td> </tr>
<tr>
<td>
10.3.3
</td>
<td>
How are the data used (restricted use/public use)? Are they made publicly
available?
</td>
<td>
The stored data were used to evaluate and select the successful bidder. They
remain available in case of an INEA audit.
</td> </tr>
<tr>
<td>
10.3.4
</td>
<td>
Who is the organization responsible for data storing and management?
</td>
<td>
CPMR
</td> </tr>
<tr>
<td>
10.3.5
</td>
<td>
By whom (organization, responsible) are data accessible?
</td>
<td>
Data are accessible by CPMR (Mr. Panos Coroyannakis) and his colleagues.
The offers have been shared with the project PCO & PM teams via email.
</td> </tr> </table>
**Table 6: Description of WP10 data management procedures (tender for platform
for followers)**
<table>
<tr>
<th>
**WP10**
</th> </tr>
<tr>
<td>
**Follower registration to the DESTINATIONS platform**
</td> </tr>
<tr>
<td>
**Data management and storing procedures**
</td> </tr>
<tr>
<td>
10.4.1
</td>
<td>
How are the collected data stored?
</td>
<td>
The data are collected and stored on the platform’s administration site,
which is hosted on the server of the platform designer, INEVOL.
The data will only be used to invite followers to join the platform by
sending a password to qualified followers.
</td> </tr>
<tr>
<td>
10.4.2
</td>
<td>
Please detail where the data are stored and in which modality/format (if
applicable)
</td> </tr>
<tr>
<td>
10.4.3
</td>
<td>
How will the data be used?
</td> </tr>
<tr>
<td>
10.4.4
</td>
<td>
Who is the organization responsible for data storing and management?
</td>
<td>
CPMR and the platform designer organisation, INEVOL.
</td> </tr>
<tr>
<td>
10.4.5
</td>
<td>
By whom (organization, responsible) are data accessible?
</td>
<td>
Data are accessible by CPMR personnel Mr. Panos Coroyannakis, Mr. Stavros
Kalognomos and the platform designers INEVOL.
</td> </tr> </table>
**Table 7: Description of WP10 data management procedures (operation of
platform for followers)**
# 2\. Data Management Plan (DMP) Applied Guiding Principles
According to the requirements, the NUTRIMAN DMP observes FAIR (Findable,
Accessible, Interoperable and Reusable) Data Management Protocols. This Data
Management Plan of NUTRIMAN is coordinated by Work Package 7, and is
articulated around the following key points:
I. This Data Management Plan (DMP) has been prepared by taking into account
the template of the Guideline on "Open access & Data management":
* http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-data-management/data-management_en.htm and
* http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf
The elaboration of the DMP will allow NUTRIMAN partners to address all issues
related to IP protection and data.
The NUTRIMAN network Data Management Plan (DMP) describes the data management
life cycle for the data to be collected, processed and/or generated by the
project, ensuring that they are findable, accessible, interoperable and
re-usable, while remaining consistent with exploitation and Intellectual
Property Rights requirements.
The NUTRIMAN DMP is an official project Deliverable (D7.2) due in Month 5 (28
February 2019), but it will be a live document throughout the project.
The NUTRIMAN DMP is intended to be a living document in which information can
be made available at a finer level of granularity through updates as the
implementation of the project progresses and when significant changes occur.
A clear version number will be added to each NUTRIMAN DMP update.
This initial first version will evolve depending on significant changes
arising and periodic reviews at reporting stages of the project. This NUTRIMAN
DMP will be updated over the course of the project whenever significant
changes arise, such as (but not limited to):
* new data
* changes in consortium policies (e.g. new innovation potential, decision to file for a patent)
* changes in consortium composition and external factors (e.g. new consortium members joining or old members leaving).
As a minimum, the NUTRIMAN DMP will be updated in the context of the periodic
evaluation/assessment of the project.
II: Ethics requirements: For some of the activities to be carried out by the
project, it may be necessary to collect basic personal data (e.g. full name,
contact details, background), even though the project will avoid collecting
such data unless deemed necessary.
The consortium will comply with the requirements of Regulation (EU) 2016/679
of the European Parliament and of the Council of 27 April 2016 on the
protection of natural persons with regard to the processing of personal data
and on the free movement of such data, and repealing Directive 95/46/EC
(General Data Protection Regulation).
The ethics-related issues of NUTRIMAN are covered by separate official
project deliverables (D1.1 H-Requirements No. 1 and D1.2 POPD-Requirements
No. 2) submitted in December 2018. All personal data collected by the project
will be collected only after giving data subjects full details of the
activities to be conducted, and after obtaining signed informed consent
forms.
III: Type of data, storage, confidentiality, ownership, management of
intellectual property and access: Procedures that will be implemented for data
collection, storage, access, sharing policies, protection, retention and
destruction will be in line with EU standards as described in the Grant
Agreement and the Consortium Agreement, particularly Article 18, Keeping
Records — Supporting Documentation; Article 23, Management of Intellectual
Property; Article 24, Agreement on Background; Article 25, Access Rights to
Background; Article 26, Ownership of Results; Article 27, Protection of
Results — Visibility of EU Funding; Article 30, Transfer and Licensing of
Results; Article 31, Access Rights to Results; Article 36, Confidentiality;
Article 37, Security-related Obligations; Article 39, Processing of Personal
Data; Article 52, Communication between the Parties; and “Annex I –
Description of Work” of the Grant Agreement.
The NUTRIMAN DMP includes information on:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved (including after the end of the project).
# 3\. Data Summary
## The purpose of the data collection/generation and its relation to the
objectives of the project
NUTRIMAN is a Nitrogen and Phosphorus Thematic Network that compiles
knowledge on recovered bio-based fertiliser products, technologies,
applications and practices, connecting market-competitive and commercially
“ready for practice” innovative results from high-maturity applied research
programmes and common industrial practice, for the interest and benefit of
agricultural practitioners.
The NUTRIMAN general objective is to improve the exploitation of the N/P
nutrient management and recovery potential of “ready for practice” cases not
sufficiently known by practitioners, and to bring integrated coherence to
this strategic priority. NUTRIMAN focuses on compiling knowledge and on the
exchange of best practices and methodologies for innovative organic and
low-input farming, in particular the cost-efficient and safe supply of
recovered innovative N/P fertilizers. NUTRIMAN supports the collection,
provision and EU-wide efficient delivery of easily accessible, multilingual,
practice-oriented abstracts and training materials in the thematic area of
N/P nutrient recycling.
NUTRIMAN aims to provide cross-border knowledge exchange by disseminating
matured FP7/H2020/LIFE/OG innovative research results that are close to
being put into practice but are not sufficiently known by practitioners,
using the EIP common format for abstracts. NUTRIMAN aims to deliver a
substantial number of practice abstracts in the common EIP-AGRI format and
training materials in the thematic area of nutrient recovery. The practice
abstracts and training materials will remain easily and openly available in
the long term, beyond the project period, on the multilingual,
practice-oriented NUTRIMAN web platform, managed by the coordinator.
Relevant WPs for data collection:
* WP2: specific collection and provision of practice-oriented knowledge for Nitrogen and Phosphorus recovery innovative TECHNOLOGIES and PRODUCTS ready for practice
* WP3: practice abstracts in the common EIP-AGRI format and training material.
The purpose and specific objectives of the NUTRIMAN data collection:
* Inventory of matured FP7/H2020/LIFE/Operational Groups (OGs) innovative research results in the field of Nitrogen and Phosphorus recovery EU28 technologies, which are close to being put into practice but not sufficiently known by large industrial agricultural practitioners. (WP2)
* Inventory of matured FP7/H2020/LIFE/OGs innovative research results in the field of Nitrogen and Phosphorus recovery EU28 products, which are close to being put into practice but not sufficiently known by SME small- and medium-scale users and agricultural practitioners. (WP2)
* Evaluation of technologies, products and practices, both by experts and by the potential end-users. (WP2)
* Collection of Practice abstracts in common EIP-AGRI format
(https://ec.europa.eu/eip/agriculture/en/eip-agri-common-format). The EIP
common format consists of a set of basic elements characterising the given
project. (WP3)
As the NUTRIMAN project progresses and data is identified and collected,
further information on the specific datasets will be outlined in subsequent
versions of the DMP. Additional datasets may be identified and added to future
versions of the NUTRIMAN DMP as necessary.
## Types and formats of data that the project will generate/collect
The organization of data collection and the most convenient format will be
the responsibility of the relevant task leader, and the data will be
integrated into a database hosted on the project internal database.
* We are expecting to collect >100 filled questionnaires (WP2) in the form of Word documents, with a size of ~500 KB per questionnaire.
* We are expecting to collect 100 practice abstracts in the common EIP-AGRI format (Excel files), with a size of ~500 KB per practice abstract.
## The origin of the data
The NUTRIMAN network collects open data that are free to access, reuse,
repurpose and redistribute, with full transparency. The project will not
generate/collect any protected sensitive research data or confidential
technical and business information. Concerns relating in particular to
privacy, trade secrets, national security, legitimate commercial interests
and to intellectual property rights and copyright shall be duly taken into
account. The following origins are used for data collection:
* NUTRIMAN consortium partners’ projects: a database of 100 projects (47 EU / 25 national / 25 linked projects)
* Biorefine Cluster Europe (BCE; www.biorefine.eu, managed by UGent), which connects projects and professionals to NUTRIMAN in the field of nutrient and energy recovery and recycling
* CORDIS database (https://cordis.europa.eu/): EU FP7/H2020 projects
* LIFE project database (https://ec.europa.eu/easme/en/life)
* Database of the Ministry of Agriculture and Rural Development, Chamber of Agriculture: AKIS, Operational Groups projects
* Collecting from the vendors/owners of the innovative technologies/products via NUTRIMAN Questionnaire. A specific project questionnaire has been set up and placed on the NUTRIMAN web page by P1 TERRA. https://nutriman.net/questionnaire. NUTRIMAN is not collecting protected sensitive data (confidential technical and business information).
* Collecting from the vendors/owners of the Practice abstracts in common EIPAGRI format. The EIP common format used for reporting on projects and disseminate the results. This common format consists of a set of basic elements characterising the project and includes one or more "practice abstract"(s).
## The expected size of the data
* We are expecting to collect 100 filled questionnaires in the form of Word documents, with a size of ~500 KB per questionnaire; the expected total size is 50 MB.
* We are expecting to collect 100 practice abstracts in the common EIP-AGRI format (Excel files), with a size of ~500 KB per practice abstract; the expected total size is 50 MB.
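As a quick sanity check of these estimates, the arithmetic can be reproduced
in a few lines (a minimal sketch; the file counts and the ~500 KB per-file
size are the figures quoted above, using decimal units):

```python
# Back-of-the-envelope check of the expected NUTRIMAN data volume.
KB, MB = 1_000, 1_000_000  # decimal units, matching the estimates above

questionnaires = 100 * 500 * KB       # WP2 questionnaires, ~500 KB each
practice_abstracts = 100 * 500 * KB   # EIP-AGRI practice abstracts, ~500 KB each

print(f"Questionnaires:     {questionnaires / MB:.0f} MB")       # 50 MB
print(f"Practice abstracts: {practice_abstracts / MB:.0f} MB")   # 50 MB
print(f"Total:              {(questionnaires + practice_abstracts) / MB:.0f} MB")
```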
## Data utility: to whom might it be useful?
The multilingual, practice-oriented NUTRIMAN master web platform
(https://www.nutriman.eu) will be used to spread the collected practice
abstracts and knowledge towards farmers and agricultural practitioners about
the insufficiently exploited P and N recovery innovative research results
(technologies, products and practices). NUTRIMAN will use the
well-established networks of the project partners at national and regional
level, which will significantly boost local knowledge transfer to reach the
largest possible number of farmers. The following stakeholders might use the
collected data at both national/regional and EU28 levels:
* Individual farmers and farmer groups
* Agricultural networks and organisations: farmer associations and cooperatives, chambers of Agriculture, producers Organisations
* Agricultural practitioners
* Regulators and policymakers
* Representatives of the European Commission, DG AGRI, DG GROW.
At international level: relevant international organisations, such as FAO.
## _Detailed NUTRIMAN Data inventory table_
<table>
<tr>
<th>
Dataset No:
DS1
</th>
<th>
Dataset name:
Practice-oriented knowledge for Nitrogen and Phosphorus recovery innovative
TECHNOLOGIES
</th> </tr>
<tr>
<td>
Data identification
</td> </tr>
<tr>
<td>
Dataset description:
</td>
<td>
Collection of practice-oriented knowledge for Nitrogen and Phosphorus recovery
innovative TECHNOLOGIES
</td> </tr>
<tr>
<td>
Data Subject:
</td>
<td>
Type of data
</td> </tr>
<tr>
<td>
Qualified and quantified technology description:
</td>
<td>
* processing aim, process conditions, feed flexibility and description:
* energy/water use:
* added value innovative technical content:
* location:
* emissions and environmental/climate impacts (as of EU/MS regulations)
</td> </tr>
<tr>
<td>
INPUT tons/year and tons/hour
</td>
<td>
* input material(s) specs, input material availability at economical industrial scale, logistics and cost/ton
* at minimum economical industrial scale:
* scale-up options:
</td> </tr>
<tr>
<td>
OUTPUT tons/year and tons/hour
</td>
<td>
* at minimum economical industrial scale:
* N/P nutrient concentration % and plant availability:
* Fertilizing product category selection
</td> </tr>
<tr>
<td>
Maturity and status description:
</td>
<td>
• TRL/IRL level or beyond
</td> </tr>
<tr>
<td>
CAPEX in EURO
</td>
<td>
• Capital Expenditure for economical industrial scale:
</td> </tr>
<tr>
<td>
OPEX in EURO
</td>
<td>
• Operational Expenditure for economical industrial scale:
</td> </tr>
<tr>
<td>
IPR status:
</td>
<td>
• Intellectual property rights, such as industrial properties (industrial
designs, models, know-how, patents) and copyrights, trademarks.
</td> </tr>
<tr>
<td>
EC/MS Authority permits:
</td>
<td>
* Permit number:
* Issuing Authority:
* Permit area:
* Permit validity :
</td> </tr>
<tr>
<td>
New/Existing data:
</td>
<td>
Existing
</td> </tr>
<tr>
<td>
Source:
</td>
<td>
* NUTRIMAN consortium partners’ projects: a database of 100 projects (47 EU / 25 national / 25 linked projects)
* Biorefine Cluster Europe (BCE; www.biorefine.eu, managed by UGent), which connects projects and professionals to NUTRIMAN in the field of nutrient and energy recovery and recycling
* CORDIS database (https://cordis.europa.eu/): EU FP7/H2020 projects
* LIFE project database (https://ec.europa.eu/easme/en/life)
* Database of the Ministry of Agriculture and Rural Development, Chamber of Agriculture: AKIS, Operational Groups projects
* Collecting from the vendors/owners of the innovative technologies/products via the NUTRIMAN Questionnaire. A specific project questionnaire has been set up and placed on the NUTRIMAN web page by P1 TERRA: https://nutriman.net/questionnaire. NUTRIMAN is not collecting protected sensitive data (confidential technical and business information).
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2: Specific collection and provision of practice-oriented knowledge for
Nitrogen and Phosphorus recovery innovative TECHNOLOGIES and PRODUCTS ready
for practice
T2.1. Collection of matured FP7/H2020/LIFE/OGs/national innovative
research results from the field of Nitrogen and Phosphorus recovery EU28
technologies and products, which are near to be put into practice, but not
sufficiently known by practitioners.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
P5 UGent
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
P5 UGent
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
P1 TERRA
</td> </tr>
<tr>
<td>
Involving partners:
</td>
<td>
All NUTRIMAN consortium partners and linked Third Parties.
</td> </tr>
<tr>
<td>
Method of capture/Standards
</td> </tr>
<tr>
<td>
Method of Data capture
</td>
<td>
European Nutrient Recycling Contest Questionnaire
https://nutriman.net/questionnaire
</td> </tr>
<tr>
<td>
Format of data capture and expected size:
</td>
<td>
We are expecting to collect 100 filled questionnaires in the form of Word
documents, with a size of ~500 KB per questionnaire. The expected total size
is 50 MB.
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data utility. Who outside of the consortium might use the data?
</td>
<td>
Restricted; the data are not used outside the consortium.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication
</td>
<td>
No data sharing and/or publication.
</td> </tr>
<tr>
<td>
Data access policy. Type of access: Restricted
(only for members of the Consortium and the
Commission Services) or Public
</td>
<td>
Restricted only for the members of the Consortium, Farmers Advisory Board and
the Commission
Services.
</td> </tr>
<tr>
<td>
Ethical issue Y/N
Personal data protection
</td>
<td>
Yes. During this data collection it is necessary to collect basic personal
data (e.g. full name, contact details), in compliance with the requirements
of Regulation (EU) 2016/679 of the European Parliament and of the Council of
27 April 2016 (General Data Protection Regulation). All personal data will
be collected only after giving data subjects full details of the activities
to be conducted, and after obtaining signed informed consent forms. An
information sheet and consent form are provided together with the
Questionnaire.
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
This DS1 dataset will be preserved in the coordinator TERRA HUMANA’s
infrastructure for at least 10 years after the closure of the project.
</td> </tr> </table>
<table>
<tr>
<th>
Dataset No:
DS2
</th>
<th>
Dataset name:
Practice-oriented knowledge for Nitrogen and Phosphorus recovery innovative
PRODUCTS
</th> </tr>
<tr>
<td>
Data identification
</td> </tr>
<tr>
<td>
Dataset description:
</td>
<td>
Collection of practice-oriented knowledge for Nitrogen and Phosphorus recovery
innovative PRODUCTS
</td> </tr>
<tr>
<td>
Data Subject:
</td>
<td>
Type of data:
</td> </tr>
<tr>
<td>
Fertilizing product category selection as of EC Fertilizers
Regulation revision COM (2016) 157
</td>
<td>
• Category of fertilizing product
</td> </tr>
<tr>
<td>
Status of the product development incl.
TRL/IRL level
</td>
<td>
• TRL/IRL level or beyond
</td> </tr>
<tr>
<td>
Input material(s) specification:
</td>
<td>
• Specification of the input material
</td> </tr>
<tr>
<td>
Quality characterization
(Nutrients) as of EC
Fertilizers Regulation revision COM (2016)
157
</td>
<td>
* Overall texture of the product (granulometry, moisture)
* Organic carbon content (% of dry matter by weight):
* Total carbon content (% of dry matter by weight):
* Total Nitrogen content % dry matter:
* Phosphorus content mg/kg dry matter:
* Other macro and micro elements (mg/kg dry matter):
* Plant available nutrient content % (e.g water soluble, citric acid soluble nutrient content):
* Dry matter content:
* Particle density (g cm-3):
* pH
</td> </tr>
<tr>
<td>
Product safety as of EC Fertilizers Regulation revision COM (2016) 157
</td>
<td>
* Metals/metalloids: As, Cd, Cr, Cu, Hg, Ni, Pb, Zn (mg/kg dry matter):
* PAH16 or PAH19 (mg/kg dry matter):
* PCB6 (mg/kg dry matter):
* PCDD/F (ng WHO Toxicity equivalents/kg dry matter):
</td> </tr>
<tr>
<td>
Product testing condition
</td>
<td>
* Countries
* Condition
</td> </tr>
<tr>
<td>
Product economics, EXW wholesale: € cost/ton:
</td>
<td>
• Ex-works product availability at the manufacturer’s location for
professional large-scale users
</td> </tr>
<tr>
<td>
User recommendations
</td>
<td>
• incl. doses/ha, application conditions, formulations, and so on
</td> </tr>
<tr>
<td>
EC/MS Authority permits for product use:
</td>
<td>
* Permit number:
* Issuing Authority:
* Permit area:
* Permit validity (crops):
</td> </tr>
<tr>
<td>
New/Existing data:
</td>
<td>
Existing
</td> </tr>
<tr>
<td>
Source:
</td>
<td>
* NUTRIMAN consortium partners’ projects: a database of 100 projects (47 EU / 25 national / 25 linked projects)
* Biorefine Cluster Europe (BCE; www.biorefine.eu, managed by UGent), which connects projects and professionals to NUTRIMAN in the field of nutrient and energy recovery and recycling
* CORDIS database (https://cordis.europa.eu/): EU FP7/H2020 projects
* LIFE project database (https://ec.europa.eu/easme/en/life)
* Database of the Ministry of Agriculture and Rural Development, Chamber of Agriculture: AKIS, Operational Groups projects
* Collecting from the vendors/owners of the innovative technologies/products via the NUTRIMAN Questionnaire. A specific project questionnaire has been set up and placed on the NUTRIMAN web page by P1 TERRA: https://nutriman.net/questionnaire. NUTRIMAN is not collecting protected sensitive data (confidential technical and business information).
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2: Specific collection and provision of practice-oriented knowledge for
Nitrogen and Phosphorus recovery innovative TECHNOLOGIES and PRODUCTS ready
for practice.
T2.1. Collection of matured FP7/H2020/LIFE/OGs/national innovative research
results from the field of Nitrogen and Phosphorus recovery EU28 technologies
and products, which are near to be put into practice, but not sufficiently
known by practitioners.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
P5 UGent
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
P5 UGent
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
P1 TERRA
</td> </tr>
<tr>
<td>
Involving partners:
</td>
<td>
All NUTRIMAN partners and linked Third Parties.
</td> </tr>
<tr>
<td>
Method of capture/Standards
</td> </tr>
<tr>
<td>
Method of Data capture
</td>
<td>
European Nutrient Recycling Contest Questionnaire
https://nutriman.net/questionnaire
</td> </tr>
<tr>
<td>
Format of data capture and expected size:
</td>
<td>
We are expecting to collect 100 filled questionnaires in the form of Word
documents, with a size of ~500 KB per questionnaire. The expected total size
is 50 MB.
</td> </tr>
<tr>
<td>
Info about metadata and documentation
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data utility. Who outside of the consortium might use the data?
</td>
<td>
Restricted; the data are not used outside the consortium.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication
</td>
<td>
No data sharing and/or publication.
</td> </tr>
<tr>
<td>
Data access policy. Type of access: Restricted
(only for members of the Consortium and the
Commission Services) or Public
</td>
<td>
Restricted only for the members of the Consortium, Farmers Advisory Board and
the Commission
Services.
</td> </tr>
<tr>
<td>
Ethical issue Y/N
Personal data protection
</td>
<td>
Yes. During this data collection it is necessary to collect basic personal
data (e.g. full name, contact details), in compliance with the requirements
of Regulation (EU) 2016/679 of the European Parliament and of the Council of
27 April 2016 (General Data Protection Regulation). All personal data will
be collected only after giving data subjects full details of the activities
to be conducted, and after obtaining signed informed consent forms. An
information sheet and consent form are provided together with the
Questionnaire.
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
This DS2 dataset will be preserved in the coordinator TERRA HUMANA’s
infrastructure for at least 10 years after the closure of the project.
</td> </tr> </table>
<table>
<tr>
<th>
Dataset No:
DS3
</th>
<th>
Dataset name:
PRACTICE ABSTRACTS in the common EIP-AGRI format
</th> </tr>
<tr>
<td>
Data identification
</td> </tr>
<tr>
<td>
Dataset description:
</td>
<td>
Development of practice abstracts in the common EIP-AGRI format
(https://ec.europa.eu/eip/agriculture/en/eip-agri-common-format)
The NUTRIMAN practice abstracts are syntheses, in simple and less technical
language, of the practice abstracts of FP7 and H2020 projects on nutrient
recovery and innovative fertilizers. They are easily accessible end-user
materials, developed and managed in substantial numbers, that feed into the
European Innovation Partnership (EIP) ‘Agricultural Productivity and
Sustainability’
</td> </tr>
<tr>
<td>
Data Subject:
</td>
<td>
Type of data:
</td> </tr>
<tr>
<td>
Project information
</td>
<td>
* Title
* Geographical location
* Project period and project status
* Project coordinator contact data
* Website hosting information on the project results and audiovisual materials
</td> </tr>
<tr>
<td>
Project Partners
</td>
<td>
• contact data
</td> </tr>
<tr>
<td>
Practice abstracts
</td>
<td>
• Short summary for practitioners
</td> </tr>
<tr>
<td>
New/Existing data:
</td>
<td>
Existing
</td> </tr>
<tr>
<td>
Source:
</td>
<td>
• Collecting from the vendors/owners of the innovative technologies/products.
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3: PRACTICE ABSTRACTS in the common EIP-AGRI format and training materials
T3.1. Development of practice abstracts in the common EIP-AGRI format.
T3.2. Translation of the 25 selected best practice abstracts into partners’
native languages
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
P10 UNITO
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
P10 UNITO
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
P1 TERRA
</td> </tr>
<tr>
<td>
Involving partners:
</td>
<td>
All NUTRIMAN consortium partners and linked Third Parties.
</td> </tr>
<tr>
<td>
Method of capture/Standards
</td> </tr>
<tr>
<td>
Method of Data capture
</td>
<td>
Collecting from the vendors/owners of the Practice abstracts in the common
EIP-AGRI format. The EIP common format is used for reporting on projects and
disseminating the results. This common format consists of a set of basic
elements characterising the project and includes one or more “practice
abstract”(s).
</td> </tr>
<tr>
<td>
Format of data capture and expected size:
</td>
<td>
We are expecting to collect 100 practice abstracts in the common EIP-AGRI
format (Excel files), with a size of ~500 KB per practice abstract. The
expected total size is 50 MB.
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data utility. Who outside of the consortium might use the data?
</td>
<td>
The following stakeholders might use the collected data both at
national/regional and EU28 levels.
* Individual farmers and farmer groups
* Agricultural networks and organisations: farmer associations and cooperatives, chambers of agriculture, producer organisations
* Agricultural practitioners
* Regulators and policymakers
* Representatives of the European Commission, DG AGRI, DG GROW.
At international level: relevant international organisations, such as FAO.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication
</td>
<td>
A multilingual, interactive, practice-oriented NUTRIMAN farmer web platform
will be developed for the provision of easily accessible practice-oriented
knowledge on Nutrient Recovery and Nutrient Recycling.
The generated practice abstracts and training materials will also be shared
through the EIP network.
A best-practice-abstract booklet will be published for the 25 selected best
practice abstracts in the common EIP-AGRI format, translated into partners’
native languages.
</td> </tr>
<tr>
<td>
Data access policy. Type of access: Restricted
(only for members of the Consortium and the
Commission Services) or Public
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Ethical issue Y/N
Personal data protection
</td>
<td>
Yes. During this data collection it is necessary to collect basic personal
data (e.g. full name, contact details of the project coordinator), in
compliance with the requirements of Regulation (EU) 2016/679 of the European
Parliament and of the Council of 27 April 2016 (General Data Protection
Regulation). All personal data will be collected only after giving data
subjects full details of the activities to be conducted, and after obtaining
signed informed consent forms.
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
This DS3 dataset will be preserved in the coordinator TERRA HUMANA’s
infrastructure for at least 10 years after the closure of the project.
The multilingual, interactive, SME- and practice-oriented NUTRIMAN master web
platform will remain open in the long term (10 years) beyond the project
period and will be maintained by the coordinator, P1 TERRA.
</td> </tr> </table>
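To make the DS3 structure concrete, the sketch below shows how a single
practice-abstract record with the fields listed above could be represented
and written out for the project-internal database (a minimal illustration;
the field names follow the DS3 “Type of data” column and are not the
official EIP-AGRI schema, and all values are placeholders):

```python
import csv

# One illustrative practice-abstract record; fields mirror the DS3 inventory
# above (title, location, period/status, coordinator contact, website, short
# summary). Values are placeholders, not real project data.
record = {
    "title": "Example N/P recovery practice",
    "geographical_location": "EU28",
    "project_period_and_status": "2019-2021, ongoing",
    "coordinator_contact": "coordinator@example.org",
    "website": "https://www.nutriman.net",
    "practice_abstract": "Short summary for practitioners, in plain language.",
}

# Append the record to a simple CSV file, mirroring the Excel-based collection.
with open("practice_abstracts.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=record.keys())
    if f.tell() == 0:  # write the header only for a new, empty file
        writer.writeheader()
    writer.writerow(record)
```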
# 4\. FAIR data
NUTRIMAN will work to ensure that its data are ‘FAIR’, that is findable,
accessible, interoperable and re-usable, according to the points below, in
line with the H2020 Guidelines on FAIR Data Management in Horizon 2020:
http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf
### 4.1. Making data findable, including provisions for metadata
PROTOCOL – Storing NUTRIMAN data and making it ‘Findable’
NUTRIMAN is developing an open-access practice abstract database for public
information, gathered from the technology and product developers/owners. All
and any data, information and knowledge streams to and through NUTRIMAN
will be documented, including written confirmations from the legal information
provider, e.g. the legal owner and manager of the data, information and
knowledge. Protecting intellectual property rights, copyrights and innovative
ideas in dissemination activities may be an issue and needs to be considered
already at the early planning stage of NUTRIMAN activities.
NUTRIMAN is developing an open-access practice database for public
information (practice abstracts in the common EIP-AGRI format) in a language
easily understandable by agricultural practitioners, gathering the main
outputs and results for the interest and benefit of farmers. Confidential
information streams, if any, will be carefully managed and legally
documented.
The collection and provision of easily accessible, practice-oriented data,
information and knowledge in the NUTRIMAN thematic area will remain available
in the long term, at least ten years beyond the project period, on the
project website (https://www.nutriman.net), which is maintained by the
coordinator.
### 4.2. Making data openly accessible
The open-access, practice-oriented database being developed by NUTRIMAN will
be available on the NUTRIMAN website (https://www.nutriman.net) for public
information (practice abstracts in the standard common EIP-AGRI format) in a
language easily understandable by agricultural practitioners, gathering the
main outputs and results for the interest and benefit of farmers.
Confidential information streams, if any, will be carefully managed and
legally documented.
### 4.3. Making data interoperable
The collected practice abstracts in the common EIP-AGRI format are
multilingual and interoperable, allowing data exchange and re-use between
researchers, institutions, organisations, countries and farmers. The EIP-AGRI
common format facilitates knowledge flows on innovative and practice-oriented
projects in the thematic area of N/P nutrient recycling. The use of this
EIP-AGRI format also enables farmers, advisers, researchers and all other
actors across the EU to contact each other. The database of practice
abstracts will be collected and published in the standard EIP-AGRI common
format (https://ec.europa.eu/eip/agriculture/en/eip-agri-common-format).
### 4.4. Increase data re-use (through clarifying licences)
The collected practice abstracts in the common EIP-AGRI format will be placed
on the open NUTRIMAN website (https://www.nutriman.net), where the widest
possible access and re-use will be permitted. The collected and published
practice abstracts in the common EIP-AGRI format will remain re-usable for 10
years after the closure of the project.
# 5\. Allocation of resources
Responsible person for data management of the project: Edward Someus/Terra
Humana
Ltd.
# 6\. Data security
All research data underpinning publications will be made available for
verification and re-use unless there are justified reasons for keeping
specific datasets confidential. The main elements when considering
confidentiality of datasets are:
* Protection of intellectual property regarding new processes, products and technologies, where the data could be used to derive sensitive information that would impact the competitive advantage of the consortium or its members.
* Commercial agreements as part of the procurement of components or materials that might foresee the confidentiality of data.
* Personal data that might have been collected in the project where sharing them is not allowed by the national and European legislation.
# 7\. Ethical aspects
The NUTRIMAN consortium complies with the requirements of Regulation (EU)
2016/679 of the European Parliament and of the Council of 27 April 2016 on
the protection of natural persons with regard to the processing of personal
data and on the free movement of such data, and repealing Directive 95/46/EC
(General Data Protection Regulation). The ethics-related issues of NUTRIMAN
are covered by separate official project deliverables (D1.1 H-Requirements
No. 1 and D1.2 POPD-Requirements No. 2) submitted in December 2018.
NUTRIMAN has a dedicated work package (WP1) to ensure that ethical
requirements are met for all personal data processing undertaken in the
project in compliance with H2020 ethical standards and Regulation (EU)
2016/679. All NUTRIMAN partners will assure that the EU standards regarding
ethics and data management are fulfilled.
_The NUTRIMAN project complies with H – Requirement No. 1 (D1.1):_
* Details on the procedures and criteria that will be used to identify/recruit research participants have been developed and submitted as a deliverable (D1.1).
* Detailed information on the informed consent procedures that will be implemented for the participation of humans (stakeholders such as agricultural participants, growers, farmers and advisers), and with regard to data processing, has been developed and submitted as a deliverable (D1.1).
* Templates of the informed consent forms and information sheets covering voluntary participation, data protection and data preservation issues (in language and terms intelligible to the participants) have been developed, and the English version has been submitted as a deliverable (D1.1).
## _The NUTRIMAN project complies with POPD – Requirement No. 2 (D1.2):_
* Each NUTRIMAN beneficiary confirmed that it has appointed a Data Protection Officer (DPO) and that the contact details of the DPO are made available to all data subjects involved in the research. For beneficiaries not required to appoint a DPO under the General Data Protection Regulation 2016/679 (GDPR), a detailed NUTRIMAN Data Protection Policy for the project has been developed and submitted as a deliverable (D1.2).
* A description of the technical and organisational measures that will be implemented to safeguard the rights and freedoms of the data subjects/research participants has been submitted as a deliverable (D1.2).
# Executive Summary
WASP is a research project that aims to bring changes to flexible and
wearable electronics by developing new printing technologies for the
definition of electronic devices and circuits on paper. The main goals of the
project are:
1. To demonstrate electronic functionalities and emerging electronic applications enabled by nanomaterials on paper.
2. To demonstrate a sustainable technology for low-cost and flexible electronics.
3. To demonstrate a wearable paper-based technology, including sensing and communication functionalities for health care applications.
4. To demonstrate multi-scale modeling of paper-based devices and a complete design tool chain for printed electronic circuits and systems.
During the development of the project, WASP will generate data from the
experimental and theoretical research activities. In this sense, and as a
project participating in the Open Research Data Pilot (ORDP) in Horizon 2020,
WASP will make its research data findable, accessible, interoperable and
reusable (FAIR).
The present document corresponds to the deliverable D1.6 “Data Management
Plan” of the WASP project and was produced as part of WP1 “Management and
Dissemination”. It contains information about the main elements of the data
management policy that will be used by the Consortium with regard to the
project research data, including the production and management of the
research data throughout the project and the conditions and aspects related
to them. This is the first version of the DMP document. It will be
systematically reviewed, and more details and descriptions of the data
management procedures implemented by the WASP project will be included.
This deliverable includes the analysis of the most relevant aspects of the
data management policy. The document is divided into seven sections,
corresponding to the highlighted points of the Data Management Plan (DMP)
scheme: a) a general introduction describing the framework of the DMP; b) Data
Summary; c) FAIR data; d) Allocation of resources; e) Data security; f)
Ethical aspects and g) Other issues. Every section includes information
about the research data generated/collected, the standards that will be used,
the preservation of the data, and how the datasets will be shared for
verification or reuse.
# Introduction
The deliverable D1.6 – Data Management Plan (DMP) of the WASP project provides
the strategy for managing data generated and collected during the project. It
consists of a document that includes information about how the WASP research
data will be handled during and after the project, and it will be reviewed
and updated as the project evolves. The DMP covers data handling, the types
and formats of the data generated and collected, the methodologies applied,
and how the data will be shared and preserved.
The use of a DMP is required for all projects participating in the Open
Research Data Pilot. The document has been prepared taking into account the
template of the “Guidelines on Data Management in Horizon 2020”
(https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf).
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy. In the particular case of the WASP project, the
expected types of research datasets that will be collected or generated
during the project lie in the following categories: i) materials/devices
modelling and circuit design; ii) materials and device fabrication and
characterization; iii) system integration and demonstration; iv) process
scalability, sustainability and exploitation.
From the mentioned categories, the WASP project will generate several
datasets, including experimental measurement data, codes developed in
different programming languages, data coming from the atomistic simulations,
and scientific articles. The data will be processed and analysed, and they
will be preserved using appropriate naming rules and metadata schemes. This
information will then be organized in a DMP, considering open-science
resources that are interoperable and trusted.
This document is the first version of the DMP, delivered in Month 6. It
includes an overview of the datasets to be produced by the project and the
specific conditions related to them. The document will be updated at regular
intervals, including more details about the procedures implemented within the
WASP project.
_**Figure 1: Research data life-cycle**_
# Data Summary
<table>
<tr>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data collection/generation purpose, based on the project objectives.**
</td>
<td>
The generation and collection of data has several purposes, in particular the
reproducibility of the obtained results (both experimental and theoretical),
dissemination (through articles), and the training of researchers (PhD
students and post-docs), who will benefit from a detailed description of the
methods/results obtained within the project.
</td> </tr>
<tr>
<td>
**Data types and formats generated/ collected**
</td>
<td>
In the WASP project several datasets will be generated, which can be
classified and identified schematically as follows:
**Dataset1** : Materials formulation for solution processing deposition
techniques
**Dataset2** : Development of a procedure to planarize/functionalize the paper
substrates
**Dataset3** : Sensing components
**Dataset4** : Energy generation and storage
**Dataset5** : Transponder design
**Dataset6** : Device models
Each dataset will be assigned a specific name/number. Inside these
datasets, the formats of the data generated/collected will correspond to:
1. Raw data in ASCII format (.txt, .xls, .xlsx)
2. Open-source codes developed in different programming languages (Fortran, C and python)
3. Text-based documents related to the description of the methodologies implemented in the project and publications in scientific journals (.doc, .tex, .pdf).
4. Illustrations, graphics and presentations (.png, .jpg, .tiff, .eps, .ppt, .pptx, .pdf).
Further information about changes or the inclusion of additional datasets
during the progress of the project will be included in subsequent versions of
the DMP.
If specialized software is used, information about free readers will be
provided.
</td> </tr>
<tr>
<td>
**Origin of the data**
</td>
<td>
Based on the nature of the data generated, the data originate from two
principal sources: the simulation activity, especially ab initio and
atomistic simulations, and the experimental measurements of the devices. A
further origin is the development of open-source codes, which allows complete
transparency of the obtained results.
</td> </tr>
<tr>
<td>
**Size of the data**
</td>
<td>
The size of the files varies depending on the data generated. From the
simulation activity we will get raw data in ASCII format of the order of
tens to hundreds of Gigabytes, coming especially from ab initio and atomistic
simulations. From the experimental measurements, the amount of data will be
smaller, with an expected collection of the order of hundreds of Megabytes at
most. With respect to document size, files of the order of Megabytes are
expected.
</td> </tr>
<tr>
<td>
**Re-use of existing data**
</td>
<td>
Some databases will be used, for instance the 2D materials database
(https://cmr.fysik.dtu.dk/c2db/c2db.html) to obtain information about the
computational simulation parameters to be used to model the devices.
Moreover, the DFT pseudopotentials included in the Quantum Espresso package
(https://www.quantum-espresso.org/pseudopotentials) will be used in order to
describe the interaction among the atoms in the atomistic simulations.
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
A repository of the generated data and implemented methodologies will be
fundamental for the training of new researchers in the field and for those
joining the research group, so as to reduce the learning curve and to allow
faster integration within the WASP project activity. The data obtained from
experimental measurements and simulations can be used by theoretical groups
as input for theoretical modelling, or as a reference for future comparison
and for the reproducibility of published results. The data will also be
important for the private sector, for commercial applications and for other
research groups working in the field.
The data will be suitable for use by other research groups working on the
following topics: nanomaterials, flexible electronics, sensors, multi-scale
modelling.
</td> </tr> </table>
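Since much of the raw data will be plain multicolumn ASCII (as noted above
and in section 3.3), a few lines of Python are enough to read such files
back; a minimal sketch, with the file name and column meanings purely
illustrative:

```python
import numpy as np

# Load a whitespace-separated multicolumn ASCII raw-data file.
# File name and column meanings are illustrative placeholders.
data = np.loadtxt("WASP_Dataset6_Device-models_v01.txt", comments="#")

voltage, current = data[:, 0], data[:, 1]  # e.g. an I-V characteristic
print(f"{len(voltage)} points, current range: "
      f"{current.min():.3e} .. {current.max():.3e} A")
```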
# FAIR Data
3.1 Making data findable, including provisions for metadata
<table>
<tr>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data discoverable with metadata, identifiable and locatable by means of a
standard identification mechanism (e.g.**
**DOI)**
</td>
<td>
WASP has chosen the ZENODO platform (http://zenodo.org) as its data
repository and archive. It supports FAIR data management principles, the GIT
version control system and the Digital Object Identifier (DOI) system. ZENODO
is part of the OpenAIRE collaboration and allows researchers to locate data
and publications, assigning persistent identifiers and data citations in
order to link to them. The guidelines provided by ZENODO will be used by WASP
to ensure that data are uploaded in the right format to comply with FAIR
principles.
</td> </tr>
<tr>
<td>
**Naming conventions**
</td>
<td>
The research data, documents and other products obtained in the WASP project
will be identified, collected and structured using a naming convention,
consisting of the project and dataset name and an identification number
related to the dataset. Partners will be informed about the specific
information and metadata parameters that will support FAIR data management.
</td> </tr>
<tr>
<td>
**Re-use through keywords provided**
</td>
<td>
Keywords will be provided and updated with the project advancement.
</td> </tr>
<tr>
<td>
**Version numbers used**
</td>
<td>
Individual file names will contain version numbers that will be incremented at
each version.
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
Metadata is descriptive information that helps other researchers find the
data in an online repository. Detailed metadata will give other researchers
the information needed to determine whether a dataset is relevant and useful
for their own research. The metadata created for all of the project’s
datasets will fulfil the repository’s (Zenodo) requirement for a minimum set
of metadata.
</td> </tr> </table>
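As an illustration of the naming convention and version numbering described
above, a small helper could build file names from the project name, dataset
name, dataset number and version (a minimal sketch; the exact pattern is an
assumption, since the plan fixes the ingredients but not the precise format):

```python
def dataset_filename(dataset_id: int, dataset_name: str,
                     version: int, extension: str = "txt") -> str:
    """Compose a file name from project name, dataset name, dataset
    identification number and an incrementing version number."""
    safe_name = dataset_name.replace(" ", "-")
    return f"WASP_Dataset{dataset_id}_{safe_name}_v{version:02d}.{extension}"

# Example: the third dataset ("Sensing components"), second version.
print(dataset_filename(3, "Sensing components", 2))
# -> WASP_Dataset3_Sensing-components_v02.txt
```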
3.2 Making data openly accessible
<table>
<tr>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data openly available and data shared under restrictions.**
</td>
<td>
The data generated by the research activity (simulations and experimental
measurements, codes, etc.) will be included in the repository chosen for the
project (the ZENODO platform) and will be made openly available unless there
is a specific reason not to publish them. With respect to the multi-scale
simulations, the website of the open-source code NanoTCAD ViDES
(http://vides.nanotcad.com), an open-source simulation framework that
represents a benchmark for devices based on two-dimensional materials, will
be available, as well as the specific website of the WASP project
(https://www.wasp-project.eu).
The beneficiaries must give each other access (under reasonable conditions) to
background needed for exploiting their own results.
</td> </tr>
<tr>
<td>
**Accessibility of the data (e.g.**
**deposition in a repository)**
</td>
<td>
The datasets will be transferred to the ZENODO repository, with open access
to data files and metadata, and with use and re-use of the data permitted.
Additional data storage will be ensured by individual partner institutions’
data repositories. Data will also be included in the HPC resources of the
University of Pisa and made publicly available, in order to ensure
availability well beyond the timespan of the project.
As far as published articles are concerned, and in particular those
published in journals that do not offer readers free downloads, we will
upload the final version of the draft of the article to public repositories
such as arXiv.org, as well as to the publication section of the WASP website,
where we will also attach a tarball of the raw data included in the published
figures.
Within the timespan of the project, we will consider the possibility of
including the generated codes in repositories (such as GitHub or
SourceForge).
</td> </tr>
<tr>
<td>
**Methods or software to access the data**
</td>
<td>
The data deposited on ZENODO will be accessible to the public without restrictions. The data will be accessed through standard, readily available software. In case specific software tools are developed, a text document with information about such software and how to use it will be provided.
</td> </tr>
<tr>
<td>
**Relevant software (open source code)**
</td>
<td>
Most of the software needed within the project is already available. In the particular case of specific software developed in the project, the source will be deposited in the repository. In particular, links to the open-source software used to carry out the modelling and simulations (Quantum Espresso and related codes for atomistic simulations, and NanoTCAD ViDES for nanoscale device simulation) will be included.
</td> </tr>
<tr>
<td>
**Data, metadata, documentation and code repositories**
</td>
<td>
The WASP project has chosen the ZENODO platform ( _http://zenodo.org_ ) as its repository and data archive. It is built around the FAIR data management principles. ZENODO is part of the OpenAIRE collaboration. The guidelines provided by ZENODO will be used by WASP to ensure that data is uploaded in the right format to comply with the FAIR principles.
</td> </tr>
<tr>
<td>
**Access under restriction on use**
</td>
<td>
There are no restrictions on the use of the published data, but users will be
required to acknowledge the Consortium and the source of the data in any
resulting publications.
</td> </tr>
<tr>
<td>
**Data access committee**
</td>
<td>
There will be no need for a data access committee.
</td> </tr>
<tr>
<td>
**Conditions for access**
</td>
<td>
Zenodo provides well-described conditions for access.
</td> </tr>
<tr>
<td>
**Identity of the users accessing the data**
</td>
<td>
Zenodo does not require any special permissions or registration to access repository files. In order to upload data, users are required to register.
</td> </tr> </table>
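The deposition workflow described in the table above can be automated against Zenodo’s documented REST deposition API. The sketch below is a hedged outline, not the project’s actual pipeline: the access token, file name and metadata values are placeholders, and error handling is omitted for brevity.

```python
# A hedged sketch of one Zenodo deposit via the documented REST deposition API
# (https://developers.zenodo.org). Token, file name and metadata values are
# placeholder assumptions.
import requests

BASE = "https://zenodo.org/api/deposit/depositions"
params = {"access_token": "YOUR_ZENODO_TOKEN"}  # hypothetical token

# 1. Create an empty deposition.
dep = requests.post(BASE, params=params, json={}).json()

# 2. Attach the data file.
with open("WASP_nanodevice-sim_0017_v1.dat", "rb") as fh:
    requests.post(f"{BASE}/{dep['id']}/files", params=params,
                  data={"name": "WASP_nanodevice-sim_0017_v1.dat"},
                  files={"file": fh})

# 3. Set the minimum metadata Zenodo requires (CC0, as chosen by the project).
metadata = {"metadata": {
    "upload_type": "dataset",
    "title": "WASP example dataset (illustrative)",
    "creators": [{"name": "Doe, Jane", "affiliation": "WASP consortium"}],
    "description": "Raw multicolumn ASCII data (illustrative example).",
    "license": "cc-zero",
}}
requests.put(f"{BASE}/{dep['id']}", params=params, json=metadata)

# 4. Publish: Zenodo mints a DOI for the record at this step.
requests.post(f"{BASE}/{dep['id']}/actions/publish", params=params)
```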
3.3 Making data interoperable
<table>
<tr>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Interoperability of the data produced, data exchange and re-use between
researchers, institutions, organisations, countries, etc.**
</td>
<td>
The generated and shared data will consist of simple raw data (ASCII files with a multicolumn structure). These files can be opened by any user with a simple text editor. It is also possible to release the data directly embedded in the files used to generate the figures of the articles. The programs used to obtain and visualise the data are open source (for example, Xmgrace or Python); see the sketch after this table.
</td> </tr>
<tr>
<td>
**Data, metadata, vocabularies, standards or methodologies for
interoperability**
</td>
<td>
Vocabularies will be used in metadata fields in order to support consistent, accurate and quick indexing and retrieval of relevant data. Keywords will be used for indexing and as subject headings of the data and metadata. Vocabularies and keywords will follow the standards used by OpenAIRE and will be updated during project execution, in order to increase the interoperability of the project’s data and metadata.
</td> </tr>
<tr>
<td>
**Inter-disciplinary interoperability by standard vocabularies for data
types**
</td>
<td>
Standard vocabularies will be used for all datasets in order to ensure inter-
disciplinary interoperability and re-use. All datasets will use the same
standard vocabularies for data and metadata capture/creation.
</td> </tr>
<tr>
<td>
**Mappings to ontologies if uncommon or specific ontologies or vocabularies are unavoidable**
</td>
<td>
The compatibility of our project-specific ontologies and vocabularies will be guaranteed through appropriate mapping to more commonly used ontologies.
</td> </tr> </table>
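As the table above notes, the shared raw data are plain multicolumn ASCII files that any open-source tool can read. A minimal Python sketch follows; the file name and the physical meaning of the columns are assumptions made for illustration.

```python
# Minimal sketch of re-using one of the raw multicolumn ASCII files described
# above. The file name and column meanings are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("WASP_nanodevice-sim_0017_v1.dat")  # hypothetical file
x, y = data[:, 0], data[:, 1]                         # assumed column order

plt.plot(x, y, marker="o")
plt.xlabel("Column 1 (e.g. gate voltage, V)")   # assumed physical meaning
plt.ylabel("Column 2 (e.g. drain current, A)")  # assumed physical meaning
plt.savefig("dataset_plot.png")
```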
3.4 Increase data re-use (through clarifying licenses)
<table>
<tr>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**License for the re-use**
</td>
<td>
The data will be openly available under the CC0 Creative Commons license. All open content is openly accessible through open APIs. Metadata is exported via OAI-PMH and can be harvested (see the sketch after this table).
</td> </tr>
<tr>
<td>
**Availability of the data for re-use. Application of any embargo.**
</td>
<td>
Data will be fully available in the repositories (both data and codes), except in those cases where we believe that disclosure could be detrimental to the competitive advantage gained with respect to other groups, or if we decide to secure the intellectual property with patents. If needed, datasets can be deposited under embargo status, in which case the repository will restrict access to the data until the end of the embargo period, after which they become available automatically.
</td> </tr>
<tr>
<td>
**Third-party use of the data produced. Restrictions on the re-use of data.**
</td>
<td>
The data produced and/or used during the project will be deposited in a public repository (in our case ZENODO), and access to it by third parties will be unrestricted.
</td> </tr>
<tr>
<td>
**Duration of data re-usability**
</td>
<td>
There will be no time limit on the re-use of the data/information by third parties.
</td> </tr>
<tr>
<td>
**Description of the data quality assurance**
</td>
<td>
Repetition and comparison of the measurements, adherence to standards for data recording, the use of specific vocabularies and terminology, characterisation of the measurement set-ups and validation of the collected data will assure data quality.
</td> </tr> </table>
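The OAI-PMH export mentioned in the table above can be harvested with a few lines of Python against Zenodo’s documented OAI-PMH endpoint. The query below lists Dublin Core titles and is illustrative only; a project-specific set would normally be specified.

```python
# Hedged sketch: harvesting Dublin Core metadata from Zenodo's documented
# OAI-PMH endpoint. A project-specific set would normally be passed via the
# "set" parameter; it is omitted here for illustration.
import requests
import xml.etree.ElementTree as ET

resp = requests.get("https://zenodo.org/oai2d",
                    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"})
root = ET.fromstring(resp.content)

ns = {"dc": "http://purl.org/dc/elements/1.1/"}
for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
    title = record.find(".//dc:title", ns)
    if title is not None:
        print(title.text)
```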
Allocation of resources
<table>
<tr>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Costs of making data FAIR**
</td>
<td>
The repository services offered by ZENODO are free of charge and enable peers to share and preserve research data of several sizes and formats: datasets, images, presentations, publications and software.
  * Data archiving at ZENODO: free of charge, including the DOI assigned to each dataset.
  * Copyright licensing with Creative Commons: free of charge.
  * Cost of the domain name: 12 EUR/yr.
The eventual costs have been kept to a minimum by making only relevant data FAIR.
</td> </tr>
<tr>
<td>
**How will these be covered?**
</td>
<td>
Costs related to open access to research data are eligible as part of the
Horizon 2020 grant (if compliant with the Grant Agreement conditions).
Resources for long term preservation, associated costs and potential value, as
well as how data will be kept beyond the project and how long, will be
discussed by the whole consortium during General Assembly (GA) meetings.
</td> </tr>
<tr>
<td>
**Responsibility for data management**
</td>
<td>
The PI and the project manager will lead the coordination of updates to the data management plan. The project manager will be responsible for organising data backup and storage, data archiving, and for depositing the data in the repository (ZENODO).
</td> </tr>
<tr>
<td>
**Resources for long-term preservation, costs, value and availability of the data.**
</td>
<td>
The value of preservation will be determined as the project progresses. The costs of preparing datasets for archiving will be covered by the project itself, while long-term preservation will be provided, and its costs covered, by the selected disciplinary repository. There are therefore no additional costs associated with the long-term preservation of the data.
</td> </tr> </table>
Data security
<table>
<tr>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data security provisions, data security, data storage and transfer of
sensitive data.**
</td>
<td>
In order to ensure the security of the data, several actions will be taken: data will be stored in at least two separate locations, and files will be labelled in a systematic and structured way in order to keep the final dataset coherent. Deposition in the Zenodo public repository will provide additional security, as it keeps multiple replicas in a distributed file system that is backed up on a nightly basis.
</td> </tr>
<tr>
<td>
**Storage of the data**
</td>
<td>
The data will be safely stored in the ZENODO open-access repository. CERN is working towards ISO certification of the organisational and technical infrastructure that ZENODO relies on for long-term preservation. Every partner will be responsible for the data it produces and will ensure that the data is stored safely and securely and in agreement with EU data protection laws. At the end of the project, the repository chosen to store the datasets will be responsible for data recovery and secure storage.
</td> </tr> </table>
Ethical aspects
<table>
<tr>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Ethical or legal issues on data sharing**
</td>
<td>
The WASP partners involved in the project will follow EU and national standards regarding ethics and data management. They must comply with the ethical principles and confidentiality requirements (in accordance with Article 34 of the Grant Agreement). WASP partners must treat any data, documents or other material as confidential during the implementation of the project.
</td> </tr>
<tr>
<td>
**Consent for data sharing and long term preservation.**
</td>
<td>
Research data containing personal data will only be disseminated for the purpose specified by the consortium. Moreover, the data generated and shared has to be documented and approved by the consortium to guarantee the highest standards of data protection.
</td> </tr> </table>
Other issues
<table>
<tr>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Use of national/funder/sectorial/departmental procedures for data management**
</td>
<td>
Data management will be compliant with the Horizon 2020 research data policy and European laws.
</td> </tr> </table>
0164_DESTINATIONS_689031.md
# Executive Summary
This document represents the third edition of the Local Data Management Plan (LDMP), relating to the modalities of involvement of human participants and the data collected or under collection, handled and processed by CIVITAS DESTINATIONS sites over the period M1-M30 (until February 2019). This document is updated
on a yearly basis in order to integrate the different data typologies the
project will manage in its progress. The collection of data is carried out
over a six-month period to allow Site Managers to easily cope with this task.
This document follows the methodological approach adopted by CIVITAS
DESTINATIONS project and described in D1.7 (PDMP – third edition) according to
the guidelines defined in the Ethics Compliance Report (D1.1).
This deliverable is structured as follows:
* Section 2 is an introduction of the document covering the identification of objectives for its elaboration and delivery, the role of Local Data Management Plan (LDMP) into the whole CIVITAS DESTINATIONS project and the cross-relations with Project Data Management Plan (PDMP);
* Section 3 details the modalities of involvement of human participants and summarizes the sensitive data collected/handled;
* The specific data collected and generated by DESTINATIONS sites in the period M1-M30 (until February 2019) is detailed in Annex 1. The Annex is organized per site, with tables for the data collected with reference to each demo WP (WP2-WP7 and WP9).
# Role of Project and Local DMPs in DESTINATIONS
The PDMP – third edition (D1.7) defines the overall approach adopted by the project, identifies the data typologies involved, describes the data collected/handled/processed by the horizontal WPs (WP8-WP11) and sets the framework for the LDMP.
The LDMP details the data collected or under collection by the CIVITAS DESTINATIONS sites over the period M1-M30 (until February 2019).
Data has been collected through the contribution of the Site Managers (SMs), according to the template defined in the PDMP – first edition (D1.2). The LDMP can be considered an integration of the Project Data Management Plan – third edition (D1.7), which sets the framework for approaching data management in the CIVITAS DESTINATIONS project.
# Local Data Management Plan
In the following sections, the DESTINATIONS Local Data Management Plans are presented. In order to improve readability, this section focuses on the main topics: the involvement of human participants and the identification of whether/how sensitive data have been collected by the sites during the design and operation of the demonstration measures. Detailed specifications of the data and a description of the collection, management and storage procedures are provided in the following Annex (per site and per WP).
<table>
<tr>
<th>
**FUNCHAL (MAD)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.1
</td>
<td>
In the case data collection processes involve human participants, please
describe the selection process
</td>
<td>
The procedures and criteria used to identify target participants for the collection processes follow a fair and random method, assuring a representative sample.
This participant sampling process is essential to ensure that a full cross-section of individuals is surveyed (nationalities, students, etc.). The sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample.
The selection of the participants also guarantees the non-discrimination and
non-exclusion principles.
The data collection process assures above all the accuracy and integrity of
the research (the travel patterns, attitudes and socio-demographic
characteristics of the respondents) and will not code specific people or
households (anonymous data).
The data collection process takes place in predefined places, seen as core locations for meeting the target groups (schools – students, airport – tourists, etc.) and as the best opportunity to evaluate the measures’ effects.
Data is collected mostly through questionnaires, applied voluntarily and
randomly to the participants, assuring a representative sample related to each
CIVITAS measure in place.
</td> </tr> </table>
<table>
<tr>
<th>
**FUNCHAL (MAD)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.1.1
</td>
<td>
Which kind of inclusion/exclusion criteria have been adopted?
</td>
<td>
No inclusion/exclusion criteria were adopted.
</td> </tr>
<tr>
<td>
1.1.2
</td>
<td>
Have participants been included on a volunteer basis?
</td>
<td>
Yes. Questionnaires are filled out on a voluntary basis by the participants.
</td> </tr>
<tr>
<td>
1.1.3
</td>
<td>
Please confirm that the Informed Consent has been requested. Please keep copy
of the Informed Consent form adopted.
Please provide enclosed with this document a copy of one Informed Consent
sheet (in original language) together with a very brief text in English
describing in which data collection procedure the Consent has been asked and
which information have been given to the participants
</td>
<td>
Yes (when applicable).
All tourists willing to participate in the Focus Group dynamic (i.e. who want to join the tourist panel) are given a **Data Protection and Privacy Note** to read and sign. Consent forms are stored in the HF office.
</td> </tr>
<tr>
<td>
1.1.4
</td>
<td>
Have persons not able to provide Informed Consent been included as research
participants? In this case which procedures to get Informed Consent have been
adopted? And/or to ensure that they have not been subjected to any coercion?
</td>
<td>
No.
</td> </tr>
<tr>
<td>
1.1.5
</td>
<td>
Have participants been selected among any vulnerable group? In this case
please detail the motivations and the ethical rules applied
</td>
<td>
No. Random participants selected.
</td> </tr>
<tr>
<td>
1.1.6
</td>
<td>
Please specify which kind of personal data have been handled in the operation
of the local measures?
</td>
<td>
Yes. Name, phone number and e-mail, as
described in Annex (row 2.1.2.1)
</td> </tr>
<tr>
<td>
**FUNCHAL (MAD)**
</td> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.1.7
</td>
<td>
Which kind of actions have been put into practice in order to manage this data (i.e. procedures for anonymising the data, database protection, allocation of access rights, etc.)?
</td>
<td>
Personal data will be managed (collection, storing and access) in accordance
with EU GDPR regulation.
The analysis of data will not reveal specific respondents to questionnaires.
The respondents will be anonymous codes and the codes will be used to mark
specific individuals in order to track their responses before and after a
CIVITAS measure and then used in the ‘panel analysis’. Following the analysis,
the codes will be erased and the data stored as anonymous. (described in
Annex, row 2.1.2.2).
As described in Annex (row 2.1.3.1), a separate Excel database was created to
store the personal data provided, which is protected by a strong password,
file stored on a PC only and where access to it is prohibited to any other
person.
The participants will be anonymous codes to prevent tracking.
</td> </tr> </table>
**Table 1: Description of involvement modalities for research participants in
Madeira**
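As referenced in row 1.1.7 above, the following is a minimal sketch of such a coding procedure: each respondent receives a random anonymous code, the code-to-identity mapping is kept in a separate restricted file, and erasing that file after the panel analysis leaves only anonymous data. The file and column names are illustrative assumptions, not Funchal’s actual implementation.

```python
# Illustrative coding procedure (hypothetical file and column names): each
# respondent receives a random anonymous code; the code-to-identity mapping
# is written to a separate, access-restricted file that can be erased after
# the panel analysis, leaving only anonymous data.
import csv
import secrets

with open("respondents.csv", newline="") as src, \
     open("responses_coded.csv", "w", newline="") as coded, \
     open("code_mapping_RESTRICTED.csv", "w", newline="") as mapping:
    reader = csv.DictReader(src)  # assumed columns: name, phone, email, answers
    coded_writer = csv.writer(coded)
    map_writer = csv.writer(mapping)
    coded_writer.writerow(["code", "answers"])
    map_writer.writerow(["code", "name", "phone", "email"])
    for row in reader:
        code = secrets.token_hex(4)  # random anonymous code
        coded_writer.writerow([code, row["answers"]])
        map_writer.writerow([code, row["name"], row["phone"], row["email"]])
# Deleting code_mapping_RESTRICTED.csv after the panel analysis removes the
# only link between codes and identities.
```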
<table>
<tr>
<th>
**RETHYMNO (RET)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.2
</td>
<td>
In the case data collection processes involve human participants, please
describe the selection process
</td>
<td>
According to the research methodology applied by the Municipality and the assigned subcontractor, the human participants involved were selected randomly, while, in order to obtain a more precise sample, stratified sampling was applied to the final completed forms.
</td> </tr>
<tr>
<td>
1.2.1
</td>
<td>
Which kind of
inclusion/exclusion criteria have been adopted?
</td>
<td>
Inclusion / exclusion criteria were not adopted; the sample was selected
randomly
</td> </tr>
<tr>
<td>
1.2.2
</td>
<td>
Have participants been included on a volunteer basis?
</td>
<td>
Yes
</td> </tr> </table>
<table>
<tr>
<th>
**RETHYMNO (RET)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.2.3
</td>
<td>
Please confirm that the Informed Consent has been requested? Please keep copy
of the Informed Consent form adopted.
Please provide enclosed with this document a copy of one Informed
Consent sheet (in original language) together with a very brief text in
English describing in which data collection procedure the Consent has been
asked and which information have been given to the
participants
</td>
<td>
The questionnaires were anonymous. No personal data were collected, and all participants were included on a volunteer basis. Therefore, no informed consent forms needed to be used.
</td> </tr>
<tr>
<td>
1.2.4
</td>
<td>
Have persons not able to provide Informed Consent been included as research
participants? In this case which procedures to get Informed Consent have been
adopted? And/or to ensure that they have not been subjected to any coercion?
</td>
<td>
All participants were informed about the procedure and type of data collected by the researchers and were included on a volunteer basis. As noted in 1.2.3, due to the survey set-up (anonymous, no personal data), procedures to obtain Informed Consent forms have not taken place.
</td> </tr>
<tr>
<td>
1.2.5
</td>
<td>
Have participants been selected among any vulnerable group? In this case
please detail the motivations and the ethical rules applied
</td>
<td>
No. Random sampling was used, drawing from people passing by in selected public spaces.
</td> </tr>
<tr>
<td>
**RETHYMNO (RET)**
</td> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.2.6
</td>
<td>
Please specify which kind of personal data have been handled in the operation
of the local measures?
</td>
<td>
The questionnaires were anonymous and no personal data were collected or
handled from Municipality of Rethymno
</td> </tr>
<tr>
<td>
1.2.7
</td>
<td>
Which kind of actions have been put into practice in order to manage this data (i.e. procedures for anonymising the data, database protection, allocation of access rights, etc.)?
</td>
<td>
N/A
</td> </tr> </table>
**Table 2: Description of involvement modalities for research participants in
Rethymno**
<table>
<tr>
<th>
**LIMASSOL (LIM)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.3
</td>
<td>
In the case data collection processes involve human participants, please
describe the selection process
</td>
<td>
The sampled data will be random through the distribution of questionnaires in
Limassol region. The survey will involve randomly selected tourists and local
citizens for questions.
</td> </tr>
<tr>
<td>
1.3.1
</td>
<td>
Which kind of inclusion/exclusion criteria have been adopted?
</td>
<td>
* Include local citizens and tourists over 18 years old
* The questions and answers will take place at the same time
</td> </tr>
<tr>
<td>
1.3.2
</td>
<td>
Have participants been included on a volunteer basis?
</td>
<td>
Yes
</td> </tr> </table>
<table>
<tr>
<th>
**LIMASSOL (LIM)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.3.3
</td>
<td>
Please confirm that the Informed Consent has been requested. Please keep copy
of the Informed Consent form adopted. Please provide enclosed with this
document a copy of one Informed Consent sheet (in original language) together
with a very brief text in English describing in which data collection
procedure the Consent has been asked and which information have been given to
the
participants
</td>
<td>
Questionnaires have been randomly distributed to tourists and local citizens around the city centre of Limassol, and the questions are asked orally.
</td> </tr>
<tr>
<td>
1.3.4
</td>
<td>
Have persons not able to provide Informed Consent been included as research
participants? In this case which procedures to get Informed Consent have been
adopted? And/or to ensure that they have not been subjected to any coercion?
</td>
<td>
Answering the questions was considered voluntary work, and participants have
not been subjected to any coercion.
</td> </tr>
<tr>
<td>
1.3.5
</td>
<td>
Have participants been selected among any vulnerable group? In this case
please detail the motivations and the ethical rules applied
</td>
<td>
No
</td> </tr>
<tr>
<td>
**LIMASSOL (LIM)**
</td> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.3.6
</td>
<td>
Please specify which kind of personal data have been handled in the operation
of the local measures?
</td>
<td>
Questionnaires have been randomly distributed to citizens/tourists in order to
give real information about the mobility situation of Limassol city centre.
</td> </tr>
<tr>
<td>
1.3.7
</td>
<td>
Which kind of actions have been put into practice in order to manage this data (i.e. procedures for anonymising the data, database protection, allocation of access rights, etc.)?
</td>
<td>
Questionnaires were anonymous and will be securely stored in our data files.
</td> </tr> </table>
**Table 3: Description of involvement modalities for research participants in
Limassol**
<table>
<tr>
<th>
**ELBA (ELB)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.4
</td>
<td>
In the case data collection processes involve human participants, please
describe the selection process
</td>
<td>
Tourists for the dedicated survey on travel behavior, attitudes and opinions were selected randomly. The survey on the travel needs, attitudes, opinions and level of satisfaction of TPL users was carried out on the bus and at the information office of the local Public Transport Company (CTT Nord). The survey regarding opinions on and the level of satisfaction with the additional TPL service by boat (Chicchero) was targeted at passengers (tourists and residents) selected randomly. The survey regarding the e-bike long-term rental service initiative and customer satisfaction was targeted at tourists and participating hoteliers.
</td> </tr>
<tr>
<td>
1.4.1
</td>
<td>
Which kind of
inclusion/exclusion criteria have been adopted?
</td>
<td>
Considering the above selection process, the only exclusion criterion was unwillingness to answer.
</td> </tr>
<tr>
<td>
1.4.2
</td>
<td>
Have participants been included on a volunteer basis?
</td>
<td>
Yes
</td> </tr> </table>
<table>
<tr>
<th>
**ELBA (ELB)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.4.3
</td>
<td>
Please confirm that the Informed Consent has been requested? Please keep copy
of the Informed Consent form adopted. Please provide enclosed with this
document a copy of one Informed Consent sheet (in original language) together
with a very brief text in English describing in which data collection
procedure the Consent has been asked and which information have been given to
the
participants
</td>
<td>
Respondents were informed that data would be collected anonymously and for statistical analysis only, so statistical confidentiality is guaranteed.
For this reason, there was no need to collect formal Informed Consent, but verbal consent for the interview was obtained.
</td> </tr>
<tr>
<td>
1.4.4
</td>
<td>
Have persons not able to provide Informed Consent been included as research
participants? In this case which procedures to get Informed Consent have been
adopted? And/or to ensure that they have not been subjected to any coercion?
</td>
<td>
All the involved participants have been able to provide a verbal consent for
the interview.
</td> </tr>
<tr>
<td>
1.4.5
</td>
<td>
Have participants been selected among any vulnerable group? In this case
please detail the motivations and the ethical rules applied.
</td>
<td>
The selection was/will be random and no selection of specific vulnerable group
will be adopted.
</td> </tr>
<tr>
<td>
**ELBA (ELB)**
</td> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.4.6
</td>
<td>
Please specify which kind of personal data have been handled in the operation
of the local measures?
</td>
<td>
No personal data has been handled.
</td> </tr>
<tr>
<td>
1.4.7
</td>
<td>
Which kind of actions have been put into practice in order to manage this data (i.e. procedures for anonymising the data, database protection, allocation of access rights, etc.)?
</td>
<td>
Not applicable
</td> </tr> </table>
**Table 4: Description of involvement modalities for research participants in
Elba**
<table>
<tr>
<th>
**MALTA (MAL)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.5
</td>
<td>
In the case data collection processes involve human participants, please
describe the selection process
</td>
<td>
Participants in the telephone surveys with local residents under MAL4.1, MAL6.2 and MAL7.1 were selected following a stratified random sampling strategy using the telephone directory of one of the main national telephony providers.
Participants in the in-person surveys with local residents and tourists under MAL6.3 and MAL7.1 were randomly selected for participation at the airport, ferry terminal and cruise line terminal in the case of MAL6.3, and while waiting to board the ferry or whilst on the ferry for MAL7.1.
</td> </tr>
<tr>
<td>
1.5.1
</td>
<td>
Which kind of
inclusion/exclusion criteria have been adopted?
</td>
<td>
Respondents under the age of 18 were excluded.
</td> </tr>
<tr>
<td>
1.5.2
</td>
<td>
Have participants been included on a volunteer basis?
</td>
<td>
Yes. The respondents were asked whether they would like to participate in the
research. During the introduction, the interviewer explained that it is on a
voluntary basis.
</td> </tr> </table>
<table>
<tr>
<th>
**MALTA (MAL)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.5.3
</td>
<td>
Please confirm that the Informed Consent has been requested? Please keep copy
of the Informed Consent form adopted.
Please provide enclosed with this document a copy of one Informed
Consent sheet (in original language) together with a very brief text in
English describing in which data collection procedure the Consent has been
asked and which information have been given to the
participants
</td>
<td>
Consent has been requested verbally during the telephone survey, as well as during in-person surveys. The respondent was also able to stop during the interview process should he/she wish to do so.
We do not have copies of the Informed Consent form, as the research was done over the telephone or in person. Such Consent is not required since there is no follow-up after the research.
</td> </tr>
<tr>
<td>
1.5.4
</td>
<td>
Have persons not able to provide Informed Consent been included as research
participants? In this case which procedures to get Informed Consent have been
adopted? And/or to ensure that they have not been subjected to any coercion?
</td>
<td>
People who declined participation were not included as research participants.
In the telephone survey, a larger sample than required was extracted to
compensate for non-response or refusal to participate.
None of the participants have been subjected to coercion to participate.
</td> </tr>
<tr>
<td>
1.5.5
</td>
<td>
Have participants been selected among any vulnerable group? In this case
please detail the motivations and the ethical rules applied
</td>
<td>
Elderly people have been included in the surveys in order to ensure a
representative sample.
</td> </tr>
<tr>
<td>
**MALTA (MAL)**
</td> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.5.6
</td>
<td>
Please specify which kind of personal data have been handled in the operation
of the local measures?
</td>
<td>
No personal data has been collected
</td> </tr>
<tr>
<td>
1.5.7
</td>
<td>
Which kind of actions have been put into practice in order to manage this data (i.e. procedures for anonymising the data, database protection, allocation of access rights, etc.)?
</td>
<td>
No personal data has been collected
</td> </tr> </table>
**Table 5: Description of involvement modalities for research participants in
Malta**
<table>
<tr>
<th>
**LAS PALMAS DE GRAN CANARIA (LPA)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.6
</td>
<td>
In the case data collection processes involve human participants, please
describe the selection process
</td>
<td>
The interviews for the mobility survey carried out in LPA3.1 were made by
using Computer Assisted Telephone Interview (CATI) software. This software
automatically selects the people to interview based on criteria in order to
reach a representative sample.
</td> </tr>
<tr>
<td>
1.6.1
</td>
<td>
Which kind of
inclusion/exclusion criteria have been adopted?
</td>
<td>
The criterion adopted was to reach a sample proportional to the whole universe (inhabitants of Las Palmas de Gran Canaria and the whole island of Gran Canaria), based on age, gender, employment status, etc.
</td> </tr>
<tr>
<td>
1.6.2
</td>
<td>
Have participants been included on a volunteer basis?
</td>
<td>
Once the CATI software dialled the phone numbers, the interviewer asked the interviewees for their consent to have their answers recorded.
</td> </tr> </table>
<table>
<tr>
<th>
**LAS PALMAS DE GRAN CANARIA (LPA)**
</th> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.6.3
</td>
<td>
Please confirm that the Informed Consent has been requested? Please keep copy
of the Informed Consent form adopted.
Please provide enclosed with this document a copy of one Informed
Consent sheet (in original language) together with a very brief text in
English describing in which data collection procedure the Consent has been
asked and which information have been given to the
participants
</td>
<td>
There is an Informed Consent for each interview carried out for the mobility survey. However, the Informed Consents of the participants have not been merged into a single document (audio or transcribed in a sheet document).
</td> </tr>
<tr>
<td>
1.6.4
</td>
<td>
Have persons not able to provide Informed Consent been included as research
participants? In this case which procedures to get Informed Consent have been
adopted? And/or to ensure that they have not been subjected to any coercion?
</td>
<td>
No persons without Informed Consent were included in the survey.
</td> </tr>
<tr>
<td>
1.6.5
</td>
<td>
Have participants been selected among any vulnerable group? In this case
please detail the motivations and the ethical rules applied
</td>
<td>
No.
</td> </tr>
<tr>
<td>
**LAS PALMAS DE GRAN CANARIA (LPA)**
</td> </tr>
<tr>
<td>
**Details of involvement modalities of research participants**
</td> </tr>
<tr>
<td>
1.6.6
</td>
<td>
Please specify which kind of personal data have been handled in the operation
of the local measures?
</td>
<td>
Please see WP3 description (LPA3.1).
</td> </tr>
<tr>
<td>
1.6.7
</td>
<td>
Which kind of actions have been put into practice in order to manage this data (i.e. procedures for anonymising the data, database protection, allocation of access rights, etc.)?
</td>
<td>
Please see WP3 description (LPA3.1).
</td> </tr> </table>
**Table 6: Description of involvement modalities for research participants in
Las Palmas**
# Conclusions
Summarizing the information provided in the previous section 3:
* Human participation in the mobility measures demonstrated in CIVITAS DESTINATIONS is mainly related to the questionnaires/interviews/surveys carried out for the assessment of local needs (design of the measures) and the assessment of impacts and level of satisfaction (evaluation of the measures). The selection of participants has been carried out randomly; participants have always been able to provide informed consent and have been free to decline participation. Informed Consent has been requested in different ways (written/verbal). The purpose of collecting the Informed Consent varies case by case: in a large number of cases, it focused mainly on informing the participants why the data was being collected (when the data collected are not sensitive), and sometimes on specifying the procedures for data storage and handling (when sensitive data are collected);
* A procedure for collecting examples of the Informed Consents adopted at the sites has already been established; this action also fosters the exchange of practices among the sites;
* Data has been collected mostly in an anonymous and aggregated way. Much of it has been accessed from public sources. In the few cases where personal data has been collected, appropriate procedures for Informed Consent and data handling have been established;
* In general, the data collected by the sites are made available for dissemination purposes in an aggregated way or as an extraction, not in a publicly accessible “open” data format.
0167_SENSITIVE_801347.md
# DATA COLLECTION
For all the data that will be collected, describe the collection procedure in as much detail as possible, including, where possible, pictures, images, etc.
## • Mouse RNA-seq data
Mouse tissue from the esophagus and the large intestine will be isolated from healthy and diseased mice and processed for the generation of RNA, which will be used for the generation of sequencing libraries to be run on the Illumina NextSeq 500 sequencer within the Greek Genome Center at BRFAA. Raw data will be processed with appropriate algorithms, and comparisons will be performed between healthy tissues and diseased tissues at various stages of the disease. Molecular biomarkers that discriminate between the healthy and diseased state will be identified (a toy illustration of such a comparison follows).
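As a toy illustration of the healthy-versus-diseased comparison referenced above, the sketch below computes simple log2 fold changes from a hypothetical gene-count table; the project’s actual analysis will use dedicated RNA-seq algorithms, and all file and column names here are assumptions.

```python
# Toy example only: simple log2 fold changes between healthy and diseased
# samples from a hypothetical gene-count table. The real analysis will use
# dedicated RNA-seq algorithms, as stated in the text.
import numpy as np
import pandas as pd

counts = pd.read_csv("gene_counts.csv", index_col="gene")     # hypothetical file
healthy = counts[["healthy_1", "healthy_2"]].mean(axis=1)     # assumed columns
diseased = counts[["diseased_1", "diseased_2"]].mean(axis=1)  # assumed columns

log2fc = np.log2((diseased + 1) / (healthy + 1))  # pseudocount avoids log(0)
candidates = log2fc[log2fc.abs() > 2].sort_values()
print(candidates)  # candidate discriminating biomarkers
```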
## • Optical data from mice and human tissues
Mouse tissue from the esophagus and the large intestine from healthy and diseased mice, as well as human specimens collected at UMCG, will be imaged with the hybrid Raman/Scattering microscope (see Fig. 1 for the designs of the microscope), as well as with the existing SHG, THG and OA microscope at HMGU. Specific acquisition protocols will ensure imaging of the same tissue regions with both microscopes. Raw data will be processed with appropriate algorithms, and comparisons will be performed between healthy tissues and diseased tissues at various stages of the disease. Molecular biomarkers that discriminate between the healthy and diseased state will be identified in combination with the RNA-seq analysis.
**Fig. 1.** Designs of the (a) Raman and (b) scattering modules as coupled in
the hybrid microscope.
## • Clinical validation of the endoscope on patients
**_Adenoma trial in high-risk colorectal cancer Lynch syndrome (LS) patients_ ** : We will perform SRS/Scattering endoscopy of the rectum to identify patients with adenomas and the accompanying field cancerization fingerprints, with guidance from standard surveillance colonoscopy. Results will be compared with the number of adenomas detected in these patients and the degree of standard histopathology. On average, 30-50% of Lynch patients will have one or more polyps during colonoscopy. Patients without adenomas will serve as controls. Control biopsies of normal tissue will be collected for ex vivo control.
**_Barrett trial in patients with early stage esophageal cancer_ ** :
SRS/Scattering endoscopy will be performed on patients with dysplasia or early
stage esophageal cancer patients, who are scheduled for endoscopic mucosal
resection (EMR). As control, we will include Barrett patients undergoing
surveillance endoscopy without dysplastic lesions. Field cancerization
alterations will be assessed in the normal adjacent Barrett epithelium.
Control biopsies of normal tissue will be collected for ex vivo control.
The hybrid SRS/Scattering endoscope will consist of two polarization-maintaining fibers for scattering (illumination/collection) and one excitation fiber plus a number of collection fibers for SRS spectroscopy. Before in vivo human application, the endoscope will undergo constancy and mechanical-integrity tests against repeated standard clinical washing procedures. The number, type and distribution of fibers will be determined during the design and validation of the test probes and will be based on the findings from the multimodal microscope, as well as on clinical needs. Finally, the developed endoscope will follow the “mother-daughter” approach, where it is passed through the working channel of a normal endoscope (Figure 2) and thus used during normal surveillance colonoscopy/endoscopy procedures.
**Figure 2 Design of standard gastrointestinal endoscope (GIF-H190) that is
currently used as a standard endoscope in the UMCG. Images adapted from EVIS
EXERA III gastrovideoscope brochure.**
# DATA MANAGEMENT
SENSITIVE participates in the Open Research Data Pilot (ORDP) which aims to
improve access to and re-use of research data generated by Horizon 2020
projects and applies primarily to the data needed to validate the results
presented in scientific publications.
To support the FAIR principles, SENSITIVE is oriented towards the Zenodo solution. Zenodo is built and developed by researchers, for Open Science. The OpenAIRE project, at the forefront of the open access and open data movements in Europe, was commissioned by the EC to support its nascent Open Data policy by providing a catch-all repository for EC-funded research. CERN, an OpenAIRE partner and a pioneer in open source, open access and open data, provided this capability, and Zenodo was launched in 2013. In support of its research programme, CERN has developed tools for Big Data management and extended Digital Library capabilities for Open Data. Through Zenodo, these Big Science tools can be effectively shared with the long tail of research.
For **findable** data, Zenodo provides a Digital Object Identifier (DOI), which is issued to every published record. Zenodo's metadata is compliant with DataCite's Metadata Schema minimum and recommended terms, with a few additional enrichments. The DOI is a top-level and mandatory field in the metadata of each record. The metadata of each record is indexed and searchable directly in Zenodo's search engine immediately after publishing. The metadata of each record is sent to DataCite servers during DOI registration and indexed there.
For making data openly **accessible**, Zenodo's metadata for individual records as well as record collections is harvestable using the OAI-PMH protocol, by record identifier and by collection name. Metadata is also retrievable through the public REST API (a short sketch follows). OAI-PMH and REST are open, free and universal protocols for information retrieval on the web. Metadata is publicly accessible and licensed under the public domain; no authorization is ever necessary to retrieve it. Data and metadata will be retained for the lifetime of the repository.
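A short sketch of the public REST retrieval mentioned above, using Zenodo's documented records endpoint; the query string is an illustrative assumption.

```python
# Sketch of retrieving record metadata through Zenodo's public REST API;
# the query string is an illustrative assumption.
import requests

resp = requests.get("https://zenodo.org/api/records",
                    params={"q": "SENSITIVE endoscopy", "size": 5})
for hit in resp.json()["hits"]["hits"]:
    print(hit.get("doi"), "-", hit["metadata"]["title"])
```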
For making data **interoperable** Zenodo provides a formal, accessible,
shared, and broadly applicable meta(data) language. Zenodo uses JSON Schema as
internal representation of metadata and offers export to other popular formats
such as Dublin Core or MARCXML. Each referenced external piece of metadata is
qualified by a resolvable URL.
For making data **re-usable**, each record in Zenodo contains a minimum of DataCite's mandatory terms, optionally with additional DataCite recommended terms and Zenodo's enrichments. The license is one of the mandatory terms in Zenodo's metadata and refers to an Open Definition license. Data downloaded by users is subject to the license specified in the metadata by the uploader. All uploaded data and metadata are traceable to a registered Zenodo user. Metadata can optionally describe the original authors of the published work. Zenodo is not a domain-specific repository, yet through compliance with DataCite's Metadata Schema, its metadata meets one of the broadest cross-domain standards available.
# DATA SECURITY / SHARING
Guidelines for data security and personal data protection will be followed. To
protect data storage and processing, the following safety measures will be
undertaken:
* Compliance with the **General Data Protection Regulation** – (EU) 2016/679 (EU **GDPR** )
* Reporting of data security incidents including personal data breach to data controller of the related dataset within two working days after becoming aware of it.
The datasets should be shared with all partners once all relevant legal
procedures (informed consent, CDA, MTA, ethical approval) are in place.
The informed consent of study participants will cover the sharing of data
collected directly from the study participant and/or based on the measurement
of the derived bio-samples from the study participant. Even though every
effort will be made to keep data confidential, there is a possibility that
information can be linked to the individual. We will inform the research
subject about this possibility and respect his decision, if he/she wishes to
completely anonymise his/her data.
# ETHICAL ASPECTS
All research activities performed for the collection, use, transfer and
protection of patient data, including biological samples will follow the
ethical standards and guidelines of Horizon 2020 including the Charter of
Fundamental Rights of the European Union and the European Convention on Human
Rights. Patients’ data and biological samples will be lawfully collected and
processed. Medical research in human subjects will follow the procedures
described in the World Medical Association’s Declaration of Helsinki and the
Oviedo Bioethics Convention (Convention on Human Rights and Biomedicine). In
addition, all procedures will comply with National law and the European
Union’s General Data Protection Regulation (GDPR) . Collection, use, storage
and otherwise processing of human genetic data, human proteomic data and of
the biological samples from which they are derived will comply with UNESCO's
Universal Declaration on the Human Genome and Human Rights and International
Declaration on Human Genetic Data.
All SENSITIVE partners confirm that the ethical standards and guidelines of
Horizon 2020 will be rigorously applied, regardless of the country in which
the research is carried out. No studies will commence before the approval of
relevant ethics committee has been obtained where such an approval is
required. Biomedical research will comply with international and EU
conventions and declarations. Appropriate informed consent form is a
prerequisite for obtaining approval. Informed consent procedures and data
processing, including data/sample collection, and use, data transfers, storage
and security will comply with the new EU’s General Data Protection Regulation.
All data related processes, from collection and sharing to data research and
sustainability will be in compliance with the legal requirements established
by GDPR (General Data Protection Regulation).
# CONCLUSION
This is the first version of the SENSITIVE Data Management Plan. The plan
follows the Horizon 2020 guidelines for findable, accessible, interoperable and
reusable (FAIR) data and will address the EU’s General Data Protection
Regulation (GDPR). The project participates in the Open Research Data Pilot
(ORDP), which aims to improve access to and re-use of research data generated
by Horizon 2020 projects and applies primarily to the data needed to validate
the results presented in scientific publications. The SENSITIVE DMP is
intended to be a “living document” and will be updated in the context of the
periodic reporting of the project.
0168_HARMONIC_847707.md
# Data summary
Although the HARMONIC project involves the collection and processing of sensitive personal data on medical and radiological history (all data, whether genetic, biometric and/or health data, are subject to medical confidentiality and data protection), we will ensure that, whenever possible, our data are Findable, Accessible, Interoperable and Reusable.
Protocols are developed for each of the four scientific WPs (as shown in Figure 2 below):
**—** WP2: protocol to build the cohort of cancer patients
**—** WP3: protocol to build the cohort of cardiac patients
**—** WP4: dose reconstruction protocols (one protocol dedicated to each cohort)
**—** WP5: biology protocol
**FIGURE 2** : Organisation of the project
## DATA PURPOSE
The HARMONIC project combines retrospective and prospective data collection in
two paediatric populations: paediatric patients treated with ionising
radiation for cancer or for cardiac defects. Data collection will be performed
in the participating hospitals after local, regional/national ethics approvals
are obtained. Our study will not affect treatment.
Data collected and processed within HARMONIC will be relevant and limited to the purpose of the research project in accordance with the 'data minimisation' principle.
To fulfil the objectives of HARMONIC in _evaluating specific outcomes,
reconstructing_ _doses and investigating radiation-induced cellular responses
and biomarkers of_ _sensitivity_ , data collection will include:
**—** In WP2, extracting data from medical records and hospital databases such
as clinical outcomes, lab test results, specific prescriptions, collecting
data from electronic and paper records from the radiotherapy departments
(Treatment planning, DICOM RT data), obtaining neurovascular/MRI imaging and
cardiac echography data, collecting blood and saliva samples and interviewing
participants via questionnaires. Linkage with regional/national registries or
healthcare databases is anticipated.
**—** In WP3, collecting data from electronic and paper records from the
radiology and cardiology departments (DICOM data). Linkage with cancer,
mortality, other local/national registries and with health insurance databases
will be performed.
Retrospective data collection will be performed in all hospitals participating
in WP3 and three centers participating in WP2 (UZ Leuven (Belgium), Gustave
Roussy (France) and University Hospital Essen (Germany)). Linkage with
regional/national registries or healthcare databases will be done at the
national level, following the regulation in place in each participating
country. All personal identifiers necessary for linkage will be dissociated, with the link between the study ID and personal information stored in password-protected computer files, either at the study centre or in the appropriate authority departments, depending on the country. Access to these files will be strictly limited to authorised project personnel. Only pseudonymised data will be included in the study database and transferred to participating centres for processing and analysis.
Prospective data collection is anticipated in all medical centres
participating in WP2 (KU Leuven (Belgium), DC Aarhus University hospital
(Denmark), Centre François Baclesse (France), Gustave Roussy (France),
University Hospital Essen (Germany)) and in at least two Italian cardiology
centres (Genova, Bergamo) of WP3.
Prospective data collection will be performed under informed consent signed with the clinicians in the participating hospitals. For that, adapted documents on the study description and procedures will be prepared to explain the project to the participant and his or her legal representative and to guarantee their comprehension. Material will be prepared specifically for the assent process, to ensure understanding of and consent to participation in the study.
Informed consent will include
**—** In WP2 and WP3, collection of human cells and tissues (blood and
saliva):
* A safe amount of blood will be collected.
* Saliva collection is a non-invasive procedure; it is performed in dedicated tubes.
**—** Specifically, in WP2, prolonged MRI acquisition of 7 to 14 minutes per
scan (but no additional MRI) resulting in minimal discomfort.
## DATA TRANSFER
The HARMONIC project is collaborative by nature, with the main objective of pooling data and building European cohorts of paediatric patients. Transfer of data or material between partners (or with linked third parties) is therefore essential and will be performed after Data/Material Transfer Agreements are signed. Templates of these agreements are included in the Consortium Agreement and are provided in Annex-1 of this document. Note that only pseudonymised data will be transferred, through secured servers, to the central databases for analyses.
## DATA COLLECTION
Data collection will be done independently for the two cohorts. For preservation and long-term access, data collection will be accompanied by proper documentation and associated metadata. Files will include the data itself, documentation files with a description of how the data was collected, and metadata describing each behavioural task.
### 1\. CANCER PATIENTS
A detailed table of the cohort database is provided in Annex-2, with data type, coding system and time point of collection (assuming prospective data collection can be pursued in the last year of the project in parallel with the analyses). The database is structured around a core dataset including the data that should be available for all patients, to allow building the cohort and reconstructing doses (Task 4.2 in WP4). It also includes task-specific datasets needed to perform the WP2 tasks and subtasks on:
**—** Endocrine dysfunctions
**—** Cardiovascular diseases
**—** Neurovascular damages
**—** Second cancer
**—** QoL and societal impact
All data are classified in mandatory or optional as follows:
**—** Level I (Task 2.1, all centres): Mandatory for all patients
**—** Level II (Task 2.1, all centres): Optional for all patients / if information is available
**—** Level III
  * a) (Additional data, Tasks 2.2 to 2.6, in participating centres): Mandatory for patients included in the specific Task
  * b) (Additional data, Tasks 2.2 to 2.6, in participating centres): Optional for patients included in the specific Task
Optional information will be used to conduct specific sub-analyses within each
task.
The database of the project (HARMONIC-RT) will be developed at INSERM (partner 2): an experienced data manager (permanent staff) will be responsible for developing the CRFs, setting up the database and supervising its maintenance. A budget has been secured to hire a data manager (half-time for 4 years) to ensure the daily maintenance of the database and data quality control. Beyond the resources directly allocated to the project, INSERM can benefit from the support of its permanent IT and data management staff, who are experienced in setting up and maintaining databases for large-scale epidemiological studies.
INSERM will develop the structure of the database and the CRFs, in close
collaboration with Essen and with the help of all investigators. An overview
of the circuit of data is provided in Figure 3 below. The CRFs and the
database will be developed by an experienced data manager. The use of RedCap (
_https://www.project-redcap.org/_ ) is currently being investigated as a potential secure web application to build and manage the database.
A significant proportion of the included patients will come from Essen, which already has a local registry. Automatic data transfer from this database to the HARMONIC-RT centralized database will be explored.
Data collection will be performed in participating centres:
**—** Investigators from each participating centre will have access rights to their own structured data at any time, but only to their own data unless they contribute to a task group. Task group investigators will have access to the core data plus the task-specific data provided by all centres contributing to that task
**—** Linkage with external registries or databases (e.g. national cancer registries, health insurance data) will be made at the centre or sponsor level, because personal identifiers are needed for this purpose, and only pseudonymized data will go to the centralized database
**—** Results of WP2, WP4 and WP5 tasks and subtasks (e.g. estimated radiation doses, measured biomarkers) will be transferred to the centralized database
**FIGURE 3**
:
Overview of the circuit of data
_Reconstruction of doses and optimization_ (task 4.2 of WP4). Data mandatory for dose reconstruction are the height, weight and sex of the patient, and DICOM files including CT images, treatment planning data and the delivered dose (see Annex-2, Radiotherapy table). These data will be used together with the characteristics of the machine (beam commissioning data: Linac, beam data and detectors) to estimate doses to the main organs and structures (whether they are in or out of the radiation field) (see Annex 3 for details). Pseudonymised DICOM files will be transferred between the treatment centres and the partners involved in dosimetry activities via a secure dedicated platform (several options are being investigated considering technical specifications and compliance with GDPR); a minimal sketch of such pseudonymisation follows.
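A minimal sketch, using the pydicom library, of how identifying DICOM fields might be replaced by a study ID before such a transfer. The tag list is an illustrative assumption and not the project's validated de-identification profile; height, weight and sex are deliberately left untouched because they are mandatory for dose reconstruction, as stated above.

```python
# Hedged sketch using pydicom: replace identifying fields with the study ID
# before transfer. The tag list is an illustrative assumption, not the
# project's validated de-identification profile; height, weight and sex are
# left untouched because they are mandatory for dose reconstruction.
import pydicom

def pseudonymise(path_in: str, path_out: str, study_id: str) -> None:
    ds = pydicom.dcmread(path_in)
    ds.PatientName = study_id  # identity replaced by the study ID
    ds.PatientID = study_id
    for keyword in ("PatientBirthDate", "PatientAddress", "OtherPatientIDs"):
        if hasattr(ds, keyword):
            delattr(ds, keyword)  # drop direct identifiers if present
    ds.save_as(path_out)

pseudonymise("plan_original.dcm", "plan_pseudonymised.dcm", "HARMONIC-0042")
```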
Contouring of the structures of interest will be provided by the clinical centres to the dosimetry team (WPE, SCK●CEN, CEA and UZH) so that doses can be estimated based on existing analytical models, the available phantom library and Monte-Carlo simulation. A whole-body representation of the patient will be created automatically by matching the patient structures acquired for treatment planning through imaging with a computational human phantom.
Based on the data collected for organ dose estimation, we will refine current dosimetry systems and further develop and validate dosimetric tools, in particular tools for estimating dose in the clinic from secondary neutrons in PBT. This will provide the medical community with means to improve real-time patient-specific dosimetry and to investigate the overall radiation burden in radiotherapy patients, including the contribution from CT imaging for therapy planning and re-planning.
### 2\. CARDIAC PATIENTS
Individual epidemiological data for study subjects included in the study will
be collected by participating national centres. Databases will be organised in
a similar way in all participating countries, with a relational database
structure similar to the one presented in Annex-4. Data will be stored locally,
and standard procedures will be implemented to send the data to the centralized
database, which will be hosted at ISGlobal on a password-protected SQL server
with access restricted to the data manager. The preferred mode of
transferring the data and the type of files will be defined in collaboration
with the responsible data base manager of national study centre. Each data
transfer should be accompanied by a description of the data: list of datasets,
number of rows in each file and the coding system used. The anticipated size of
the database (incl. dose data) is approximately 500 GB. National data managers
will be in direct contact with the data manager at ISGlobal; they will send
data samples to test data quality, validity and compatibility, and feedback on
the data received will be provided. ISGlobal (following agreement with national
centres) will append national datasets into one international dataset and
transfer it to the centre responsible for the specific analysis task
(Statistical analyses) as defined in Annex I (Description of work) of the
Grant agreement. This will be done in close collaboration with national teams
and the centre responsible for specific analysis task, adjusting for the needs
of each specific data analyses task.
Basic data will be collected mainly from hospital electronic records
concerning the patient and cardiac procedure. Health insurance databases will
also be used where possible.
Basic data which will be transferred will include:
**—** Study ID (the link with the patient ID will remain at national level in a
password-protected file with limited access, to allow linkage with external
registries or databases, e.g. national cancer registries or health insurance
data)
**—** Name/code of the cardiology department
**—** Date of cardiac catheterisation
**—** Machine type if available
**—** Name of procedure according to the classification built within the
project.
**—** Height and weight if available
**—** Dosimetric data: radiation dose structured reports (RDSRs) will be
obtained when available. The information from these RDSRs will be used to
estimate exposure parameters (see Annex 5).
Data on potential confounding will also be collected, where possible. It will
include data on transplants, confounding by indication/predisposing syndromes
and socio-economic status. Also, since these patients might have been
subjected to other types of medical (diagnostic) procedures, efforts will be
made to reconstruct, where possible, the personal radiological history of the
patients, including history of CT examinations.
_Reconstruction of doses and optimization (WP4)_
Detailed dose reconstructions will be performed for examinations in which
RDSRs can be obtained. The use of dedicated software for data collection is
currently under investigation. The information from these RDSRs will be used
to estimate exposure parameters (beam angle, x-ray energy, etc.) for
examinations in which less detailed data were recorded. Dose estimation will
be performed using computer modelling which will be validated using physical
measurements in tissue-equivalent anthropomorphic phantoms.
Target organs for dose reconstruction will include breast (right and left),
bone marrow, brain, thyroid, heart, lens, oesophagus and lungs. Estimated
organ doses will be linked to the patient study ID.
Optimization of interventional procedures is difficult, as doses depend
largely on factors including the difficulty of the case, the experience of the
operator and the type of procedure. Our data will provide a unique picture of
the trends in radiation doses, patient characteristics and procedure types over
time. This information will be useful in the setting of indication-specific
diagnostic reference levels (DRLs) for radiation protection purposes. The
dosimetry system developed during the project, linking organ doses to DAP or
K_a,r as a function of a discrete set of variables, will be of use to hospitals
and researchers in radiation dose audits and dose reduction research.
To further contribute to optimization of doses in interventional cardiology,
we propose to develop, in collaboration with 2 to 3 pilot sites, an innovative
Augmented Reality (AR) Computer System that is able to support the operator
and assess the procedure. The operator will be supported with AR information
(wearing a dedicated headset) to increase image quality. The tool can then be
presented in dedicated training sessions to improve the realization of future
procedures.
### 3\. BIOMARKERS ANALYSES
All patients, in both the radiotherapy (WP2) and the interventional cardiology
(WP3) studies, will receive a consent form and a short invitation presenting
the purpose and aims of the research in a concise and clear manner. For each
patient who agrees to participate, blood and saliva samples will be collected.
Other patient data relevant for biomarker and mechanistic studies include
demographics and treatment data: age at first cancer diagnosis, sex, other
vascular and cardiotoxic treatments (e.g. cumulative doses of anthracyclines,
alkylating agents and other chemotherapy, including start and end dates),
radiation doses to the different organs and substructures of interest estimated
by WP4, and follow-up data, e.g. health events including vascular and cardiac
damage, hypertension, etc. Estimated doses of interest for WP5 activities are
doses to the heart, brain and large/middle vessels (mediastinal carotids,
circle of Willis, cerebral arteries).
Blood and saliva samples from 300 patients will be collected in WP2 and WP3.
Blood will be collected at three time points: (1) before the start of
radiotherapy/interventional cardiology; (2) on the day of finishing
radiotherapy/interventional cardiology, or any time up to three months after
finishing therapy; and (3) one year after finishing radiotherapy/interventional
cardiology.
Biological samples will receive a unique identification number prior to any
transfer of material (the link between the patient ID and the code will be kept
at the collection centre in password-protected files). Centralization of
material will be performed at SU (Sweden), where material will be sent in boxes
containing dry ice to keep the samples (blood, serum, plasma) frozen during
transport.
The biomarkers which will be investigated are associated with changes induced
by radiation at the level of the transcriptome (miRNA), the proteome (plasma
and saliva protein profiling; and RPPA) and the epigenome (gene expression
regulation and protein modification) as well as inflammation and oxidative
stress levels (see Deliverable 5.1 for specific details).
## DATABASE STRUCTURE VALIDATION
The CRFs will be reviewed against the database specifications documents as
well as the protocols to ensure the following:
**—** There is no duplication of data being collected.
**—** The forms are in the correct order.
**—** All associated code lists are correct.
**—** Data required for the statistical analysis have been included.
After completing the above, test data will be entered into the database to
ensure that all the specifications for each field have been programmed
correctly, including:
**—** Tab order of the fields.
**—** Sufficient field length.
**—** Entering of valid vs invalid data.
These checks ensure that when the database goes live, the data management
system works as expected.
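For illustration only, the valid-versus-invalid checks described above can be
scripted. The following is a minimal Python sketch with hypothetical field
specifications, not the actual HARMONIC CRF fields:

```python
# Minimal sketch: hypothetical field specifications, not the actual HARMONIC CRF.
import re

FIELD_SPECS = {
    "study_id": {"max_len": 9, "pattern": r"^\d{9}$"},
    "sex":      {"max_len": 1, "pattern": r"^[MF]$"},
}

def accepts(field: str, value: str) -> bool:
    """Return True if the value satisfies the field's length and format spec."""
    spec = FIELD_SPECS[field]
    return len(value) <= spec["max_len"] and re.fullmatch(spec["pattern"], value) is not None

assert accepts("study_id", "130005689")       # valid data is accepted
assert not accepts("study_id", "13-0005689")  # invalid format is rejected
assert not accepts("sex", "female")           # field length is enforced
```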
Any changes that are required to the CRF during the study (assuming they are
approved by relevant ethical committees), will follow the above validation and
verification procedures before going into the production database.
## DATA CLEANING
We assume that data will be checked and validated by providers. A list of
logical validation checks will be generated and shared with all contributing
centres (a list of basic checks is provided in Annex 6). The validations will
be performed at the level of the centre (or national level in WP3). Additional
validation and cleaning will be performed at entry in centralized database and
prior to analyses for specific tasks.
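As an illustration of the kind of logical validation check listed in Annex 6,
the following minimal Python sketch flags records failing an invented
cross-field rule (the rule and the records are hypothetical; the actual checks
are defined in Annex 6):

```python
# Minimal sketch: an invented logical rule and invented records, for illustration;
# the actual list of checks is given in Annex 6.
from datetime import date

def procedure_before_birth(records):
    """Return the Study IDs of records whose procedure date precedes the birth date."""
    return [r["study_id"] for r in records if r["procedure_date"] < r["birth_date"]]

records = [
    {"study_id": "130005689", "birth_date": date(2010, 3, 1), "procedure_date": date(2012, 6, 4)},
    {"study_id": "130005690", "birth_date": date(2015, 1, 1), "procedure_date": date(2014, 6, 4)},
]
assert procedure_before_birth(records) == ["130005690"]  # flagged for review
```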
# Data sharing
Intellectual property and data generated by this project will be managed in
agreement with the guidelines of EC and based on the Consortium Agreement and
Publication Policy. HARMONIC beneficiaries will disclose all inventions
developed within the project and such inventions will be reported and managed
according to the EC guidelines.
Integration and re-use of the data relies on the data being well organized and
adequately documented. The shared data are expected to be of interest to the
scientific and more specifically the radiation protection community, the
medical community and the general public. The type of data and tools to be
produced in HARMONIC include (but are not restricted to) background
demographic information, results from computational model simulations, and
software. Data and data products will be made available with as few
restrictions as possible. The publication of the research data or tools may
take place during the project or at the end of the project in accordance with
normal scientific practices.
Access to the project tools will be made available for educational, research
and non-profit purposes. This access may be provided via web-based
applications.
Research data documenting, supporting and validating research results will be
made available after the main findings of the final research dataset have been
accepted for publication. This research data will be processed to prevent the
disclosure of personal data. Reported study results will pertain to analyses
of aggregate data. No individual’s name will be associated with any published
or unpublished report of this study.
Results of the HARMONIC project will be disseminated mainly through open
access (free of charge online access for any user) peer-reviewed scientific
publications. Most beneficiaries have secured a budget for manuscript
publication.
Each beneficiary publishing results will:
**—** as soon as possible and at the latest on publication, deposit a machine-
readable electronic copy of the published version or final peer-reviewed
manuscript accepted for publication in a repository for scientific
publications; the repository of University Pompeu Fabra, Barcelona, is used by
the coordinator (ISGlobal);
**—** deposit at the same time, the research data needed to validate the
results presented in the deposited scientific publications and/or publish data
papers.
**—** deposit — via the repository — the bibliographic metadata that identify
the deposited publication
**—** The bibliographic metadata shall include all of the following:
* the terms “Euratom” and “Euratom research and training programme 2014-2018”;
* the project name (Health effects of cArdiac fluoRoscopy and MOderN radIotherapy in paediatriCs, HARMONIC) and the grant number 847707;
* the publication date, and length of embargo period if applicable, and
* a persistent identifier or digital object identifier (DOI) for the submitted data set(s).
# Ethical Requirements (POPD 4 & 5)
As described in Deliverable 1.1 - Ethics and Data Protection Requirements,
general procedures to be included in the research protocol to safeguard the
privacy of study subjects are:
**—** Pseudonymization will be implemented as a general standard meaning that
all material obtained in the framework of the project (questionnaires,
diagnostic images and imaging data, detailed information on treatment if
available) will be identified through a code, the name and/or other personal
data that could allow the identification of the participant will never be
indicated. This unique identifier will link all basic data required for the
study.
**—** The master key file linking the centre’s study numbers with personal
identifiers will be maintained in a password protected file with limited
access.
**—** All files containing personal data will be stored in encrypted and/or
password-locked files. Access to these files will be limited to authorized
project personnel;
**—** Written consent to use personal data will be obtained from the
participants consenting to be involved in specific parts of the project. The
contact details of the DPO will be made available to the participants.
**—** The patient will be free to withdraw his or her consent to the study at
any time
**—** Separate age-graded information sheets and consent/assent forms will be
available for minors.
**—** If, according to the project requirements, it is necessary to transfer
personal data, participants will be properly informed in the consent form and
measures to ensure personal data protection will be implemented. Transfer of
anonymized data will be done according to the current legislation.
**—** Reported study results will pertain to analyses of aggregate data. No
individual’s name will be associated with any published or unpublished report
of this study.
**—** All project personnel will be trained in the importance of
confidentiality of individual records and required to sign a confidentiality
agreement.
**—** All data transfers will be completed using secured servers.
## PSEUDONYMISATION TECHNIQUES
The codification of a study subject consists of a country code and a study
subject code. This entire code is called the “Study ID of study subject”
throughout this document. A Study ID must not be a number, such as a social
security or health insurance number, that could allow the identification of
the subject.
Study IDs of the study subjects are therefore created as 9-digit numbers:
**—** Country code (2-digit number)
**—** Study subject code (7-digit number)
As an example: the subject with national ID=5689 in the country coded as 13
would have the ID=130005689. Keeping the country code inside the ID allows
rapid grouping of subjects in analysis.
If the country code cannot be set for any reason, the code “99-unknown” will
be used. However, each patient should belong to a specific national cohort.
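A minimal Python sketch of this encoding, for illustration (the worked example
and the 99-unknown fallback are taken from the text above):

```python
# Minimal sketch of the Study ID encoding described above.
from typing import Optional

def make_study_id(subject_code: int, country_code: Optional[int] = None) -> str:
    """2-digit country code + zero-padded 7-digit subject code; 99 = unknown country."""
    cc = 99 if country_code is None else country_code
    if not (0 <= cc <= 99 and 0 <= subject_code <= 9_999_999):
        raise ValueError("code out of range")
    return f"{cc:02d}{subject_code:07d}"

assert make_study_id(5689, 13) == "130005689"  # worked example from the text
assert make_study_id(5689).startswith("99")    # unknown country falls back to 99
```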
Pseudonymisation of DICOM data will be performed under the following principles
_(1,2)_ to ensure that there are no reasonably likely means to identify the
patient from the use of this data:
**—** All DICOM tags associated with patient ID (example tag 0010,0020) will
be systematically replaced by Study ID or any other relevant general term.
**—** A code will be automatically generated (or Study ID will be entered) as
a new field.
**—** Where required for de-identification, date of birth (month/year to be
kept) and/or date of examination will be removed.
**—** Link between Patient ID and code will be kept on a password protected
file in the centre providing data.
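As an illustration only, a minimal Python sketch of these rules, assuming the
pydicom library and hypothetical file paths; a production tool would apply a
full DICOM de-identification profile rather than the handful of tags shown:

```python
# Minimal sketch, assuming the pydicom library and hypothetical file paths;
# a production tool would apply a full DICOM de-identification profile.
import pydicom

def pseudonymise(in_path: str, out_path: str, study_id: str) -> None:
    ds = pydicom.dcmread(in_path)
    ds.PatientID = study_id    # replace tag (0010,0020) with the Study ID
    ds.PatientName = study_id  # replace the patient name as well
    # Keep only month/year of birth: truncate YYYYMMDD to the first of the month.
    if "PatientBirthDate" in ds and len(str(ds.PatientBirthDate)) == 8:
        ds.PatientBirthDate = str(ds.PatientBirthDate)[:6] + "01"
    ds.save_as(out_path)
```

The link between the original Patient ID and the generated code would be
written to the password-protected file kept at the providing centre, as
described above.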
“To ascertain whether means are reasonably likely to be used to identify the
natural person, account should be taken of all objective factors, such as the
costs of and the amount of time required for identification, taking into
consideration the available technology at the time of the processing and
technological developments.” (GDPR, Recital 26)
## TRANSFER OF DATA TO NON-EU COUNTRIES
Transfer of pseudonymised data is anticipated from and to Switzerland (for
dose reconstruction purposes). The European Commission has recognised that
Switzerland is providing adequate protection (Adequacy decision). The effect
of such a decision is that personal data can flow from the EU (and Norway,
Liechtenstein and Iceland) to Switzerland without any further safeguard being
necessary. In other words, transfers to the country in question will be
assimilated to intra-EU transmissions of data.
# Data security, storage and archiving
## DATA SECURITY AND INTEGRITY
Data must never be sent unprotected via public networks or external wireless
connections (including email); it must be protected by an encryption method
that guarantees the information cannot be read or manipulated by third
parties.
### 1\. CANCER PATIENTS
All data sent to INSERM is stored on the Institution’s server, within an
internal network duly protected to guarantee the integrity and security of the
information. Access is restricted to users with unique, personal and
non-transferable authorisation.
The REDCap application used at CESP for data collection is hosted on a
virtualized infrastructure (VMWARE). Access to the application is possible
from outside the CESP network, through a web proxy. The application is
protected by a double authentication (a first authentication at the level of
the proxy and a second at the level of the application itself). The
identifiers are different and nominative. Communication between the
application and the remote stations is encrypted with the implementation of
the https protocol.
In terms of network communications, security is provided by a STORMSHIELD
firewall that filters traffic between the web proxy and the virtual server.
Daily backups are performed on disks and are then copied to tapes stored
off-site in a fireproof box.
### 2\. CARDIAC PATIENTS
All data sent to ISGlobal is stored on the Institution’s server, within an
internal network duly protected by the _IT Resources Service_ to guarantee the
integrity and security of the information. Access is restricted to users with
unique, personal and non-transferrable authorisation. All information housed
on the ISGlobal servers is of a confidential nature. All the computer
equipment of the Institution is equipped with anti-virus software, connected
to a central database where all the virus definitions are up to date, and
there are periodically scheduled scans. Also, all files coming from an
external source (USB devices, e-mail, etc.) are scanned in real time to avoid
accidental infections.
The data backup and storage infrastructure comprises six NAS server devices.
Of these, five are in the PRBB 1 data centre and are used to manage the
storage of all the institution’s data (administration and research projects)
and backup tasks corresponding to said data. A sixth Synology NAS is located in
a nearby data centre, providing a physical device outside the main headquarters
as an external backup.
There are four kinds of backup copies with two levels (1-internal, 2-external)
to be able to restore the institution’s data and systems in different
scenarios (see below):
* Internal backup of data
* Internal backup of Server
* External backup of data
* External backup of server
The back-up strategy in place is meant to prevent accidental loss of data; to
allow the service to be reset to its previous, fully functional status in
situations as diverse as corruption of the operating system, configuration
errors or errors caused by an IT attack; and to restore all the centre’s data
and data infrastructure in the event of a critical incident at the
institution’s data centre.
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
**DATA**
</th>
<th>
**SERVER**
</th> </tr>
<tr>
<td>
INTERNAL
</td>
<td>
PROTOCOL
</td>
<td>
Hyperbackup & Active Backup for Server
</td>
<td>
Snapshots
</td> </tr>
<tr>
<td>
</td>
<td>
FREQUENCY
</td>
<td>
daily
</td>
<td>
3 times a week
</td> </tr>
<tr>
<td>
</td>
<td>
RETENTION
</td>
<td>
30 days
</td>
<td>
15 days
</td> </tr>
<tr>
<td>
EXTERNAL
</td>
<td>
PROTOCOL
</td>
<td>
Hyperbackup
</td>
<td>
Hyperbackup and Snapshots
</td> </tr>
<tr>
<td>
</td>
<td>
FREQUENCY
</td>
<td>
Twice a week
</td>
<td>
Twice a week
</td> </tr>
<tr>
<td>
</td>
<td>
RETENTION
</td>
<td>
15 days
</td>
<td>
15 days
</td> </tr> </table>
## STORAGE AND ARCHIVING
Parties shall negotiate, toward the end of the project, an agreement to
provide guidance on future use of the databases including the following
provisions:
* The access policy to the databases for the Parties and their affiliated entities
* The modalities for third parties to access databases
* The modalities to maintain the integrity of the datasets, the storage place for the databases and modalities to keep copies on remote storage
* The legal framework
### 1\. CANCER PATIENTS
The HARMONIC-RT database is meant to initiate a pan-European registry of
particle beam therapy (and photon therapy at a later stage) in children and
adolescents. It is anticipated that toward the end of the project, agreements
will be signed for data to constitute such a sustainable registry. Complete
documentation for long-term data use and preservation will be provided.
### 2\. CARDIAC PATIENTS
The ISGlobal data manager will follow ISGlobal guidelines to provide accurate
and complete documentation for data preservation, will ensure that the data
are curated in a relevant long-term archive, and will ensure that the data
remain available after project funding has ended. We will create metadata for
long-term data preservation.
If no agreements are signed, the data will be returned to the providing
centre, in the format used for analysis and it will be destroyed from the
centralized database.
### 3\. BIOLOGICAL SAMPLES
Samples will be kept at -20 °C for short-term storage (6 months) and at -80 °C
for long-term storage. The plan is to analyse the samples within 6 months, and the
left-over samples will be sent to the original centres upon request.
Otherwise, the samples will be decoded and used for new method development or
discarded after the results are published.
Source: 0169_ICO2CHEM_768543.md (Horizon 2020), https://phaidra.univie.ac.at/o:1140797
# Data Summary
ICO2CHEM aims at developing a new production concept for converting waste CO2
to value-added chemicals. The focus is on the production of white oils and
high-molecular-weight aliphatic waxes. The technological core of the project
consists in the combination of a Reverse Water Gas Shift (RWGS) reactor
coupled with an innovative modular Fischer-Tropsch (FT) reactor. The aim of
the project is to demonstrate the production of white oils and aliphatic
high-molecular-weight waxes from industrial CO2 and H2 streams at Industrial
Park Höchst in Frankfurt am Main, Germany. During the project, experimental
data will be generated relating to the development of catalysts, reactors,
product analysis and demonstration runs. In addition, market, techno-economic
and life cycle analyses will be produced; these analyses require experimental
data obtained during the project.
Data generated will be mainly numerical data (e.g. analytical data, test
results) and graphical data (e.g. process flow charts, P&I diagrams) in
electronic format. The project will also generate project management
documents, such as minutes of the meetings, deliverables and progress reports.
# FAIR data
2.1 Making data findable
According to practices established in other EU projects coordinated by VTT,
the official data (i.e. consortium agreement, reports, deliverables, minutes,
meeting presentations, publications, publication permissions, etc.) will be
stored centrally on VTT’s External SharePoint server, where all partners have
access to the data.
Raw data will be stored in accordance with the internal rules of each partner.
However, when needed, the raw data can also be shared between the partners via
SharePoint.
2.2 Making data openly accessible
During the project, the data will be used by the consortium members. After the
project is closed, all requests for further use of data will be considered
carefully and whenever possible approved by the Coordinator and by the General
Assembly. Permission for data use will be granted providing there are no IPR
or confidentiality issues involved or any direct overlap of research questions
with the primary research.
All essential results will be documented as deliverable reports. Written
deliverables will be stored on VTT’s external SharePoint server. All partners
have access to all documents stored on the SharePoint server. Most
of the deliverables produced during the project are confidential. With the
approval of partners, the results will be disseminated through several routes
described in DoA and D8.2 Dissemination plan.
Open access publication will be ensured to all peer-reviewed scientific
publications. All publications are under the permission policy described in
the CA: Prior notice of any planned publication and patent application shall
be given to the other Parties at least 30 calendar days before the
publication. Any objection to the planned publication shall be made in
accordance with the Grant Agreement in writing to the Coordinator and to the
Party or Parties proposing the dissemination within 21 calendar days after
receipt of the notice. If no objection is made within the time limit stated
above, the publication is permitted. In case of objection, the objection has to
include a precise request for the necessary modifications. The objecting Party
can request a
publication delay of not more than 90 calendar days from the time it raises
such an objection. After 90 calendar days, the publication is permitted
provided that confidential Information of the objecting Party has been removed
from the Publication as indicated by the objecting Party.
2.3 Increase data re-use (through clarifying licences)
The publication permission procedure described in Section 2.2.2 must be
followed during the Project, and one year after the project. Permission for
data use will be granted providing there are no IPR or confidentiality issues
involved.
# Allocation of resources
The Coordinator is responsible for data management in the project. Data
management is included in Task 8.2 Intellectual property rights (IPR) and
exploitation. Costs related to open access publishing are included in
partners’ budgets.
# Data security
SharePoint data backups are taken every day. The SharePoint workspace will be
maintained for 20 years after the end of the project, which also guarantees the
availability of SharePoint documents for the same period. VTT is responsible
for the maintenance of the SharePoint workspace.
# Ethical aspects
The project does not raise any ethical issues mentioned in the administrative
proposal form.
All participants in this project are committed to the responsible engineering
principles and will conform to the current legislation and regulations in
countries where the research will be carried out. Ethical standards and
guidelines of Horizon2020 will be rigorously applied regardless of the country
in which the research is carried out. The ethical principles of the research
integrity will be respected as set out in the European Code of Conduct for
Research Integrity 1 .
1 http://ec.europa.eu/research/participants/data/ref/h2020/other/hi/h2020-ethics_code-of-conduct_en.pdf
Source: 0170_ROMSOC_765374.md (Horizon 2020), https://phaidra.univie.ac.at/o:1140797
# Introduction
This document describes the data management life cycle for the data to be
collected, processed and/or generated within the ROMSOC project. Carefully
managing research data is an essential part of good research practice and
starts with adequate planning. According to the Open Research Data Pilot open
access to research data that is needed to validate the results presented in
scientific publications has to be guaranteed. Moreover, open access to
scientific peer-reviewed publications is obligatory in the Horizon 2020
programme.
The purpose of this document is to help to make the research data findable,
accessible, interoperable and reusable (FAIR). It specifies how data will be
handled both during and after the research project and reflects on data
collection, data storage, data security and data retrieval. The Data
Management Plan presented herein has been prepared by taking into account the
template of the Version 3.0 of the “Guidelines on FAIR Data Management in
Horizon 2020”. 1
The ROMSOC Data Management Plan (DMP) is an evolving document that will be
edited and updated throughout the project. This initial version of the DMP
will be officially delivered in project-month 6 (February 2018). It will be
updated over the course of the project and resubmitted to the European
Commission (EC) whenever significant changes arise (e.g. new data, changes in
consortium or consortium composition), and as a minimum as part of the
Progress Report in project-month 13 (September 2018) and as part of the
Periodic Reports in project-months 24 (August 2019) and 48 (August 2021).
# Data Summary
The ROMSOC network will produce mainly two kinds of research data: models to
describe real-world systems and processes, and algorithms for simulation and
optimization in the form of sophisticated software. A collection of benchmarks
for model hierarchies will be created and these benchmarks will be open
access. Besides, another type of research data are scientific publications
(e.g., technical reports, peer-reviewed publications, software manuals) and
related consolidated simulation results. For text-based documents the data
format PDF/A is binding. ZIP files are used for sets of related files. In this
case an associated publication is saved independently, not as the content of
the ZIP file, so that it can be indexed and the content can be found via full-
text search.
# FAIR data
In 2014, the Technische Universität Berlin (TU Berlin) established a research
data infrastructure. It was developed by the Service Center Research Data
and Publications (SZF); a virtual organization, where the University Library,
the IT-Service-Center tubIT and the Research Department of TU Berlin cooperate
to support the researchers of TU Berlin in all questions concerning their
research data (https://www.szf.tuberlin.de/). The research data infrastructure
complies with the requirements of the funding organizations, e.g. the Deutsche
Forschungsgemeinschaft (DFG) or the European Commission (EC). The technical
core of the research data infrastructure at TU Berlin is the institutional
repository DepositOnce. The repository DepositOnce is based on the open source
repository software DSpace. In DepositOnce consolidated research data and all
information necessary to verify or reuse the data (e.g. scripts, calculations,
software etc.) as well as publications can be stored. DepositOnce provides a
workflow for the description and upload of the data files. There is a record
for each data set (research data as well as publications). Each record has a
persistent identifier (Digital Object Identifier (DOI)). Research data may be
linked to the corresponding publications and vice versa via their DOIs. Other
core functions of DepositOnce are versioning and an embargo function (i.e. a
blocking period for the publication of full texts and research data until a
certain date).
## Making data findable, including provisions for metadata
DepositOnce automatically assigns a DOI to every submitted record and each of
its versions for a persistent identification of the data. To get DOIs, the
University Library (as head of SZF) has concluded a contract with the DOI
registration agency DataCite. DepositOnce uses the standard metadata schema
Qualified Dublin Core to describe the stored data. To meet the requirements of
its service partners, e.g. the DOI registration agency DataCite or Open Access
initiatives like OpenAIRE, some additional qualifiers have been added. Based
on the principle of using standard metadata schemes in DepositOnce, metadata
can easily be converted into other metadata schemes. All metadata in
DepositOnce are made publicly available in the sense of Open Access. To enable
search engines and service providers to index contents stored in DepositOnce,
all reasonable steps are taken, such as generating sitemaps and offering an
OAI-PMH interface. DepositOnce is included in common search engines, e.g. Google
Scholar, BASE – Bielefeld Academic Search Engine and others. Furthermore the
DOI registration agency DataCite itself acts as a data provider: While
registering a DOI all important metadata are sent to DataCite. Both nationally
and internationally defined standards and interfaces – such as the Dublin Core
Metadata Initiative, the regulations of the Open Archives Initiative (OAI) or
the German National Library’s xMetaDissPlus format – are used for the formal
and content-related indexing of the digital objects. Metadata is captured by
means of an upload form into which the authors enter the data. In doing so,
the authors attribute a class according to the Dewey Decimal Classification
(DDC) as well as free keywords in German and English.
According to the principle of ”Good Scientific Practice” (e.g. in order to
guarantee correct citation) the data file and the descriptive metadata cannot
be changed after they have been published in DepositOnce. One of the core
functions of DepositOnce is versioning, which allows new versions of published
records while previous versions are kept available. To every new version, a
new persistent identifier (DOI) is assigned. Previous and new versions are
linked to each other automatically.
## Making data openly accessible
DepositOnce is committed to the open access concept and is one of the tools
for implementing open access objectives in line with the _Berlin Declaration
on Open Access to Knowledge in the Sciences and Humanities_ :
* The metadata of digital objects stored on DepositOnce is freely accessible on the Internet and may be obtained, saved and made accessible to third parties via open interfaces that are accessible to anyone. The metadata is subject to the CC0 license.
* After publication, the digital objects are normally freely accessible. This applies unconditionally to publications. Access protection may be arranged for research data; to this end, DepositOnce has a management system for access rights.
In order to ensure permanent access to the digital objects, the latter are
assigned quotable, lasting indicators in the form of Digital Object
Identifiers, which are registered and administered at DataCite. Once
published, digital objects cannot be changed. DepositOnce provides a
versioning feature to document new findings. A search feature for both the
bibliographic metadata and the full texts is available on the pages of
DepositOnce. Digital objects can be researched in local, regional and
transregional library catalogs and search engines as well as via the DataCite
Metadata Store and the academic search engine BASE. To increase the visibility
of its service, DepositOnce is registered with the Registry of Open Access
Repositories (ROAR), the Directory of Open Access Repositories (OpenDOAR) and
the Registry of Research Data Repositories (re3Data). As a registered OAI Data
Provider, DepositOnce fulfills the requirements of OAI-PMH Version 2.0. The
base URL is https://depositonce.tu-berlin.de/oai/request.
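For illustration, metadata can be harvested from this interface with any
OAI-PMH client. The following is a minimal Python sketch using only the
standard library (the `verb` and `metadataPrefix` values are standard OAI-PMH
2.0 parameters; large result sets would additionally require handling
resumption tokens):

```python
# Minimal sketch using only the standard library; responses are OAI-PMH 2.0 XML.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://depositonce.tu-berlin.de/oai/request"
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

with urlopen(f"{BASE_URL}?{urlencode(params)}") as response:
    print(response.read().decode("utf-8")[:500])  # first part of the XML listing
```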
## Making data interoperable
The production of high quality, portable, and easy-to-use scientific software
must heavily rely on rigorous implementation and documentation standards. To
ensure such high quality demands, the Working Group on
Software (WGS) produced in 1990 the Implementation and Documentation Standards
[1] for the SLICOT 2 library (Subroutine Library in Control and Systems
Theory), a library of widely used control system design algorithms. All
software that is developed within the ROMSOC project will follow these
implementation and documentation standards. The collection of benchmarks for
model hierarchies that will form the basis for interdisciplinary research and
for the training program in the ROMSOC project will follow the standards set
by the SLICOT Benchmark Collection [2] and the collection of benchmark
examples for model reduction of linear time invariant dynamical systems [3].
By following these software documentation and implementation standards a
uniform, understandable user interface is ensured that is fundamental to user
acceptance of the developed software routines and necessary to ensure
portability, reliability, and ease of maintenance of the software itself.
## Increase data re-use (through clarifying licences)
Any publication at DepositOnce requires that the author permanently transfers
to the Operator (i.e. the SZF) the non-exclusive right to reproduce the
publication and make it publicly accessible. Any printed or electronic
publication of the research results – with amendments or in excerpts, if
applicable –prior to, or after their publication at DepositOnce remains at the
absolute discretion of the author. The transfer of the non-exclusive right of
use entitles the Operator of DepositOnce to permanently
* make accessible to the public electronic copies of the digital object (upon expiry of the news embargo, if any) and, if need be, to modify the digital object (with regard to its form) in order to enable its display on future computer systems;
* announce and transmit digital objects to third parties, for instance, as part of the libraries’ national collection mandates, particularly for the purpose of long-term archiving;
* transfer agreed rights and obligations to another repository (for instance, any repository succeeding DepositOnce). This also entitles DepositOnce to assign to a third party the right to supply the digital object to the public (e.g., through a facility specializing in enabling the long-term availability of such objects).
In addition, beyond the German copyrights, authors may transfer certain rights
of use to the general public by means of suitable open-content licenses (for
instance a Creative Commons license or software licenses such as GPL, BSD, MIT
or Apache licenses). For publications the Creative Commons license
‘Attribution CC BY’ will be used.
# Allocation of resources
The service of DepositOnce itself is free of charge. Costs may arise for
additional storage space. According to the policy for safeguarding good
scientific practice of TU Berlin, SZF as the service provider of DepositOnce
guarantees the storage of the research data for at least 10 years. In
cooperation with other partners and institutions (e.g. the Zuse Institute
Berlin and the Kooperativer Bibliotheksverbund Berlin-Brandenburg), SZF will
develop a concept for long-term preservation. According to the concept of the
research data infrastructure the task-sharing between the stakeholders is as
follows:
* The researchers are responsible for the quality check of the research results: After having collected a large amount of data during the project they select those data that shall be preserved. In the submission process to DepositOnce they describe these data and upload the data files. Researchers are also responsible to create new versions of published submissions if necessary.
* SZF is responsible for the formal check of the submitted research data: The uploaded data files are not published immediately but stored in an intermediate store. SZF checks e.g. whether the metadata fields are filled properly, whether PDFs can be opened, etc. If there are any questions, the SZF-Team contacts the submitter of the data. Accepted submissions are published by SZF and stored in DepositOnce.
* tubIT is responsible for the IT infrastructure which includes safe storage and the accessibility of all data.
# Data security
The DepositOnce servers that store the research data and their metadata are
part of the security concept of tubIT. Every new IT service at TU Berlin has
to go through an approval procedure that is conducted by the Data Protection
Officer of TU Berlin and the Staff Council. DepositOnce has successfully
passed this procedure. DepositOnce uses the virtual server infrastructure of
tubIT that is secured by several firewalls. All servers, networks and backup
services are maintained by tubIT. The security concept strictly restricts any
physical access to the data center and any remote access to the servers. The
metadata as well as the research data are backed-up at least once a day in
form of database dumps and files. A cronjob verifies file checksums once a
week and ensures the data integrity. The administrators at the University
Library are responsible for the further enhancement and development of the
software. They are also responsible for the recovery of the DepositOnce
services out of the backup. Digital objects are stored for the long term, that
is, in accordance with the recommendations of the German Research Foundation,
for at least ten years (see _Guidelines for Safeguarding Good Academic
Practice_ at TU Berlin). In cooperation with a suitable facility specialized
in long-term archiving, the Operator aims to digitally preserve the digital
objects stored in DepositOnce. The digital preservation of publications is
ensured through the long-term archiving system of the German National Library.
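For illustration, the weekly checksum verification mentioned above might look
like the following minimal Python sketch (the manifest file and paths are
hypothetical; the actual tubIT job is not published):

```python
# Minimal sketch with a hypothetical checksums.json manifest mapping
# file paths to their recorded SHA-256 digests.
import hashlib
import json
import pathlib

def sha256(path: pathlib.Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

manifest = json.loads(pathlib.Path("checksums.json").read_text())
corrupt = [name for name, digest in manifest.items()
           if sha256(pathlib.Path(name)) != digest]
if corrupt:
    print("integrity check failed for:", ", ".join(corrupt))
```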
# Ethical aspects
In order to protect the participants’ identity all contents that are stored in
DepositOnce have to be anonymized according to the policy for safeguarding
good scientific practice of TU Berlin. When submitting a data file in
DepositOnce the submitter has to confirm that the research data which has been
submitted do not contain any personal data. If personal data are contained
they must be anonymized completely according to canonical standards and the
human subjects must have consented to the data collection as well as to the
publication of the (anonymized) data. Ownership of results as well as access
rights to Background and Software are regulated in the Consortium Agreement
(Section 8 and 9). Any ethical or legal issues that can have an impact on data
sharing will be discussed in the context of the ethical reports (in project-
month 12, 24 and at the end of the project).
# Other issues
We follow the _Guidelines on FAIR Data Management in Horizon 2020_ , i.e.,
research data should be findable, accessible, interoperable and reusable. TU
Berlin doesn’t have a Research Data Management policy. However, since 2002 TU
Berlin has a directive for safeguarding good scientific practice:
_”Richtlinien zur Sicherung guter wissenschaftlicher Praxis an der TU Berlin”_.
We also respect the research data policy of Friedrich-Alexander-Universität
Erlangen-Nürnberg (FAU). 3
Source: 0171_TAHYA_779644.md (Horizon 2020), https://phaidra.univie.ac.at/o:1140797
# 1\. INTRODUCTION
TAHYA is part of the Horizon 2020 Open Research Data Pilot, the Pilot project
of the European Commission which aims to improve and maximize access to and
reuse of research data generated by projects. The focus of the Pilot is on
encouraging good data management as an essential element of research best
practice.
The Deliverable D8.6 Data Management Plan (DMP) represents the first version
of the DMP of the TAHYA project. TAHYA is a Research and Innovation Action
project funded under the Fuel Cells and Hydrogen 2 Joint Undertaking that will
last 36 months. As such, TAHYA participates in ORD Pilot, and, therefore, is
providing, as requested, the current deliverable six months after the
beginning of the project (M6, June 2018).
The DMP is not a fixed document, but it is likely to evolve during the whole
lifespan of the project, serving as a working document. This document will be
updated as needed during the Project General Assemblies.
The purpose of the current deliverable is to present the 1st version of the
Data Management Plan of the TAHYA project. The deliverable has been compiled
through collaborative work between the
coordinator and the consortium partners who were involved in data collection,
production and processing. It includes detailed descriptions of all datasets
that will be collected, processed or generated in all Work Packages during the
course of the 36 months of TAHYA project. The deliverable is submitted six
months after project start as required by the European Commission (EC) through
the latest guidelines: The Open Research Data Pilot (ORD Pilot). For the
methodological part, the latest EC guidelines 1 have been adopted for the
current deliverable.
The deliverable is structured in the following sections:
1. An introduction to the deliverable and a brief description on how Data Management is approached in Horizon 2020 (H2020) program along with the importance of it.
2. A description of the methodology used, an analysis of the chapters of the provided template and last the methodological steps followed in TAHYA.
3. A description of the datasets to be used in TAHYA reflected on the template provided by the EC.
4. A summary table with all the datasets included in the 1st TAHYA DMP.
# 2\. DATA MANAGEMENT IN H2020 PROGRAM
According to the latest Guidelines on FAIR Data Management in Horizon 2020
released by the EC DirectorateGeneral for Research & Innovation on the 26th of
July 2016 “ _beneficiaries must make their research data findable, accessible,
interoperable and reusable (FAIR) ensuring it is soundly managed_ ”.
FAIR data management is part of the ORD Pilot promoted by the European
Commission. The purpose of the ORD is to improve and maximize access to and
re-use of research data generated by H2020 projects and to take into account
the need to balance openness and protection of scientific information,
commercialization and Intellectual Property Rights (IPR), privacy concerns,
security, as well as data management and preservation issues.
The inclusion of a DMP is a key element for FAIR data management in a H2020
project. In a DMP, the data management life cycle for the data to be
collected, processed and/or generated by a H2020 project is described and
analysed. The DMP should also include information on (a) the handling of
research data during & after the end of the project, (b) what data will be
collected, processed and/or generated, (c) which methodology & standards will
be applied, (d) whether data will be shared/made open access and (e) how data
will be curated & preserved (including after the end of the project).
# 3\. METHODOLOGY
## a. DMP Template
In order to assist the beneficiaries with the completion of the DMP, the EC
produced and provided a template that act as a basis for data description. The
template contains a set of questions that beneficiaries should answer with a
level of detail appropriate to the project. If no related information is
available for a given dataset, then the phrase “ _Non-applicable_ ” or N/A
will be used. In the following paragraphs, the main sections and proposed
contents of the template are listed and presented, along with the way TAHYA
reflects to these sections.
## b. Data summary
In this section, beneficiaries are asked to describe (a) the purpose of the
data collection or generation and how this purpose reflects to the objectives
set in the project as a whole, (b) the types and formats of data that will be
generated or collected, (c) the origin of the data, (d) the expected size of
the data, and also (e) whether existing data will be reused and (f) the
usefulness of the described datasets.
## c. FAIR data
### a) Making data findable, including provisions for metadata
This section includes a description of metadata and related standards, the
naming and keywords to be used. In the context of TAHYA the following naming
convention will be used for all the datasets of the project. First the work
package number will be placed, then the serial number of the dataset within
this work package and last the dataset title, all separated with underscore
(Data_<WPno>_<serial number of dataset>_<dataset title>).
An example can be the following Data_WP2_1_specifications_data.
However, it has to be noted that this naming convention describes only the
general dataset that can contain files of different size and format. The
naming of each separate file follows a different naming convention that is
proposed by the partners who create the files.
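A minimal Python sketch of the dataset naming convention, for illustration
(the example value is the one given above):

```python
# Minimal sketch of the Data_<WPno>_<serial number>_<dataset title> convention.
def dataset_name(wp_no: int, serial: int, title: str) -> str:
    return f"Data_WP{wp_no}_{serial}_{title}"

assert dataset_name(2, 1, "specifications_data") == "Data_WP2_1_specifications_data"
```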
The use of a standard identification mechanism for the datasets of TAHYA
will be decided by the project consortium. If it turns out to be necessary,
the use of the guidelines and standards provided by the International DOI
Foundation (IDF) and the DOI system (ISO 26324) will be considered.
### b) Making data openly accessible
This section includes a description of the data that will be made accessible
and how. It also explains why some datasets cannot be made open due to
possible, legal, contractual or ethical issues. It is possible that some
beneficiaries have decided to keep their data closed. A description of the
potential data repositories is also included along with the potential software
tools required to access the data.
In the context of TAHYA, the following options for open repositories of data,
metadata, documentation or code will be considered: (a) the Registry of
Research Data Repositories, (b) Zenodo, (c) OpenAIRE.
In the context of the TAHYA DMP, no arrangements have been made yet with an
identified repository. This will be discussed by the consortium during the
upcoming plenary meeting. Currently, the data are collected and preserved on a
private platform, Project Netboard.
### c) Making data interoperable
In this section, data interoperability is detailed for every dataset of TAHYA.
Issues such as the allowing of data exchange between researchers, institutions
or even countries are covered along with all the technicalities including
standards for formats, metadata vocabularies or ontologies of vocabularies.
The issue of interoperability will be discussed among the consortium members
in the upcoming project plenary meeting.
### d) Increase data re-use (through clarifying licenses)
This section describes the licenses, if any, under which data will be re-used
in TAHYA. It includes provisions regarding the period when data will be
available for reuse and if third parties will have the option to use the data
and when.
### e) Allocation of resources
FAIR data management in TAHYA project is under WP9 –Dissemination and
Exploitation strategy lead by Partner N°6 Absiskey, in close collaboration
with the Coordinator. Within the project budget, a specific amount of person
months has been dedicated for these activities. All costs related to FAIR data
management that will occur during project implementation will be covered by
the project budget. Any other cost that may relate to long term data
preservation will be discussed among consortium members.
### f) Data security
Data security is of major importance in the TAHYA project. Special attention
will be given to the security of sensitive data. The protection of data will
be ensured through procedures and appropriate technologies, on Project
NetBoard like the use of HTTPS protocol for the encryption of all internet
transactions and appropriate European and Internet security standards from
ISO, ITU, W3C, IETF and ETSI. If data will be kept in a certified repository,
then the security standards of that repository will apply.
### g) Ethical aspects
With respect to the H2020 ethics self-assessment, the TAHYA proposal and the
use case scenarios to be defined will not be concerned with any ethical issue.
### h) Other issues
In this section, other issues can be covered not included above such as the
use of other national/funder/sectorial/departmental procedures for data
management.
## d. Methodological steps in TAHYA
For the 1st version of the TAHYA DMP, the following methodological steps were
followed:
1. Absiskey and the Coordinator, responsible for the implementation of WP9 (Dissemination and Exploitation strategy), sent to all partners, well in advance, an email notifying them about the upcoming deliverable. Contributions were requested from all partners involved in any data collection in each task of the WPs. They were asked to answer a questionnaire on which data they were expecting to produce and collect during the project.
2. In parallel, the latest guidelines from the EC regarding data management were sent to all partners to be informed. Sufficient time was given to send their input.
3. The project team collaborated efficiently and contributed with the needed information.
The first version of the TAHYA DMP is intended to provide an initial screening
of the data to be collected, processed and produced within TAHYA. It is also
the first attempt to collect the vision and input from all the partners
involved in any data management option. During the upcoming Project Steering
Boards in October 2018, special attention will be given to data management in
order to provide further clarifications and conclusions.
# 4\. DATASETS
## a. WP1 – Project Decision making and innovation management
N/A
## b. WP2 – End-users' specifications, product, safety & service definition
N/A
Specifications are highly confidential results held by VW and cannot be
shared.
## c. WP3 – Design and Prototyping
Design and development results for the liner, composite and OTV are highly
confidential, held by the industrial partners (Optimum, Raigi and Anleg), and
cannot be shared with the scientific community.
Only one task in this WP could lead to open access to data: Task 3.5,
optimisation of the filling and venting process.
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**WP3_1_Simulation assumptions_data**
</th> </tr>
<tr>
<td>
1\. Data summary
</td>
<td>
**Purpose:** Simulation data
**Data formats** : *.xlsx, *.docx, *.pdf, *.pptx
**Will you re-use any existing data and how?**
* yes
**What is the origin of the data?**
* simulation,
* experimental protocols, measurement conditions **What is the expected size of the data?**
* The total file of this dataset will be approximately 0,2 Gb.
**To whom might it be useful ('data utility')?**
* scientific community
</td> </tr>
<tr>
<td>
2\. FAIR Data
</td>
<td>
Findable, Accessible, Interoperable, Re-usable
</td> </tr>
<tr>
<td>
2.1 Making data findable, including provisions for metadata
</td>
<td>
Description of the data:
\- WP3_1_Simulation assumptions_data
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
\- Assumption data on gas and temperature distribution will be made available
to members of the project consortium and to members of the international
scientific community for research purposes. The data will be stored on Project
Netboard, a secure platform, and on TUC servers.
</td> </tr>
<tr>
<td>
2.3 Making data interoperable
</td>
<td>
\- The data produced in the project are interoperable, that is, they allow
data exchange and re-use between researchers, institutions and organisations.
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
Specify the licenses and the conditions for sharing and reusing the data:
\- No specific conditions
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
All costs related to the data collection and processing are covered by the
project budget with dedicated person months under WP9.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
Audio, doc and xls files will be deposited in ABSISKEY servers and will be
protected with the ABSISKEY server’s security protocol:
PNB security is provided by OVH. OVH is currently the number 3 web hosting
provider worldwide.
OVH condition of security: https://www.ovh.co.uk/aboutus/security.xml and
https://www.ovh.co.uk/aboutus/datacentres.xml
In addition to this security, a complete database backup is performed each day
and stored during one week. Each week a secured backup is stored on CD.
Finally, every connection to Project NetBoard is made using the https protocol
to login to the platform _._
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
There are no ethical issues regarding the project.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
N/A
</td> </tr> </table>
## d. WP4 – Design verification phase
To be defined later in the project according to the IP management of
developments realised in WP3.
## e. WP5 – System validation phase and safety aspects
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**WP5_1_performance results_data**
</th> </tr>
<tr>
<td>
1\. Data summary
</td>
<td>
**Purpose:** Results of the hydraulic performance and fire resistance tests.
**Data formats** : *.xlsx , *.docx, *.pdf , *.pptx, *.mp4, *.asc , *.opj
**Will you re-use any existing data and how?**
* N/A
**What is the origin of the data?**
* Experimental measurement data recorded with universal measurement amplifiers **What is the expected size of the data?**
* The total size of all data will be approx. less than 2 Gb.
**To whom might it be useful ('data utility')?**
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
\- Project partners, scientific community, notified bodies, competent
authority
</th> </tr>
<tr>
<td>
2\. FAIR Data
</td>
<td>
Findable, Accessible, Interoperable, Re-usable
</td> </tr>
<tr>
<td>
2.1 Making data findable, including provisions for metadata
</td>
<td>
Description of the data:
\- The following naming for the dataset will be WP5_1_BAM_performance
results_data
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
* mainly results of normalised data needed for publications (national/international scientific community and standards committee)
* raw measurement data only to members of the consortium
* Shared documents/data uploaded to Project Netboard All Data will be stored on company internal shared directory with limited number of users
</td> </tr>
<tr>
<td>
2.3 Making data interoperable
</td>
<td>
All published data are generated with common programs and stored within common
file formats and therefore they are interoperable and can be re-used between
researchers, institutions, companies, etc.
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
Specify the licenses and the conditions for sharing and reusing the data:
\- Creative Commons:
(CC BY-ND 3.0 DE or CC BY-NC-ND 3.0 DE) https://creativecommons.org/
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
All costs related to the data collection and processing are covered by the
project budget with dedicated person months under WP9.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
Files will be deposited on ABSISKEY servers and protected by the ABSISKEY
server's security protocol:
PNB security is provided by OVH, currently the third-largest web hosting
provider worldwide.
OVH security conditions:
_https://www.ovh.co.uk/aboutus/security.xml_ and
_https://www.ovh.co.uk/aboutus/datacentres.xml_
In addition to this security, a complete database backup is performed each
day and retained for one week; each week a secured backup is stored on CD.
Finally, every connection to Project NetBoard uses the HTTPS protocol to log
in to the platform.
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
There are no ethical issues regarding the project.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
N/A
</td> </tr> </table>
## f. WP6 – Manufacturing process
N/A according to IP definition and management.
## g. WP7 – Economical aspects and implementation strategy
N/A
This topic and associated results are highly confidential and cannot be shared
by industrial partners.
## h. WP8 – RCS standardisation work
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**WP8_1_safety levels composite cylinders_data**
</th> </tr>
<tr>
<td>
1\. Data summary
</td>
<td>
**Purpose:** Identify and improve existing safety level of composite cylinders
and standards.
**Data formats** : *.xlsx, *.docx, *.pdf, *.pptx
**Will you re-use any existing data and how?**
* N/A
**What is the origin of the data?**
* Experimental test results
**What is the expected size of the data?**
* The total size of all data will be less than approximately 0.2 GB.
**To whom might it be useful ('data utility')?**
* Project partners, scientific community, notified bodies, competent authority
</td> </tr>
<tr>
<td>
2\. FAIR Data
</td>
<td>
Findable, Accessible, Interoperable, Re-usable
</td> </tr>
<tr>
<td>
2.1 Making data findable, including provisions for metadata
</td>
<td>
Description of the data:
\- The dataset will be named WP8_1_BAM_safety levels composite cylinders_data
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
* mainly results of normalised data needed for publications (national/international scientific community and standards committee)
* shared documents/data uploaded to Project Netboard
All data will be stored on a company-internal shared directory with a limited number of users.
</td> </tr>
<tr>
<td>
2.3 Making data interoperable
</td>
<td>
All published data are generated with common programs and stored in common
file formats; they are therefore interoperable and can be re-used by
researchers, institutions, companies, etc.
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
Specify the licenses and the conditions for sharing and reusing the data:
\- Creative Commons:
(CC BY-ND 3.0 DE or CC BY-NC-ND 3.0 DE) https://creativecommons.org/
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
All costs related to the data collection and processing are covered by the
project budget with dedicated person months under WP9.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
Audio, doc and xls files will be deposited on ABSISKEY servers and protected
by the ABSISKEY server's security protocol:
PNB security is provided by OVH, currently the third-largest web hosting
provider worldwide.
OVH security conditions: _https://www.ovh.co.uk/aboutus/security.xml_ and
_https://www.ovh.co.uk/aboutus/datacentres.xml_
In addition to this security, a complete database backup is performed each
day and retained for one week; each week a secured backup is stored on CD.
Finally, every connection to Project NetBoard uses the HTTPS protocol to log
in to the platform.
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
There are no ethical issues regarding the project.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
N/A
</td> </tr> </table>
## i. WP9 – Dissemination and exploitation strategy
N/A
**5. SUMMARY TABLE**
<table>
<tr>
<th>
**WP /**
**Task**
</th>
<th>
**Responsible partner**
</th>
<th>
**Dataset name**
</th>
<th>
**File types**
</th>
<th>
**Findable**
</th>
<th>
**Accessible**
</th>
<th>
**Interoper able**
</th>
<th>
**Reusable**
</th>
<th>
**Size**
</th>
<th>
**Security**
</th>
<th>
**Ethics**
</th> </tr>
<tr>
<td>
WP3
T3.5
</td>
<td>
TUC
</td>
<td>
WP3_1_Simulation
assumptions_data
</td>
<td>
*.xlsx, *.docx, *.pdf, *.pptx
</td>
<td>
WP3_1_Simulation
assumptions_data
</td>
<td>
Results published (papers, presentations, etc.)
\- repository: Project Netboard
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
<0.2 GB
</td>
<td>
Kept in AK /TUC servers
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
WP8 /
T08.01,
T08.02, T08.03.
</td>
<td>
BAM
</td>
<td>
WP8_1_safety levels composite cylinders_data
</td>
<td>
*.xlsx, *.docx,
*.pdf, *.pptx
</td>
<td>
WP8_1_BAM_safety levels composite cylinders_data
</td>
<td>
* Results published (papers, presentations, etc.)
* repository: Project Netboard and company internal
</td>
<td>
Yes
</td>
<td>
License publications
CC BY-ND 3.0 DE or CC BY-NC-ND
3.0 DE
</td>
<td>
<0.2 GB
</td>
<td>
Kept in AK
/BAM
servers
</td>
<td>
N/A
</td> </tr> </table>
| https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0172_SKILLFUL_723989.md |
# Executive Summary
This report constitutes Deliverable 5.2 (Data Management Plan) of the
SKILLFUL project, part of WP5 (Pilots). The SKILLFUL project aims to identify
the skills and competences needed by the transport workforce of the future
(2020, 2030 and 2050 respectively) and to define the training methods and
tools to meet them. Within this context, SKILLFUL has developed new training
schemes, which have been adapted to the particular needs of the transportation
professionals of the present and the future.
These training schemes have been validated through the realisation of Pilots
in 27 Pilot sites in 12 different countries (namely, Brazil, Denmark, Finland,
France, Germany, Greece, Ireland, Italy, Lithuania, Portugal, Slovakia and
Spain).
The Data Management Plan (DMP) has been prepared (and updated) following the
regulations of the Pilot action on Open Access to Research Data of Horizon
2020. In this 3rd and final version of the deliverable, the necessary aspects
of the data management process, mainly related to the realisation of the
SKILLFUL pilots, are described.
# Introduction
SKILLFUL ( _http://skillfulproject.eu/_ ) deals with one of the main
challenges for the transportation sector: the ability to attract new
employees, as well as to equip existing ones with the competences required to
address the needs of the constantly changing and developing transportation
sector. Its vision is to identify the skills and competences needed by the
transport workforce of the future (2020, 2030 and 2050 respectively) and to
define the training methods and tools to meet them. Within this context, the
project objectives can be described as follows:
* to critically review the existing, emerging and future knowledge and skills requirements of workers at all levels in the transportation sector, with emphasis on competences required by important game changers and paradigm shifters (such as electrification and greening of transport, automation, MaaS, etc.);
* to structure the key specifications and components of the curricula and training courses that will be needed to meet these competence requirements optimally, with emphasis on multidisciplinary education and training programmes;
* to identify and propose new business roles in the education and training chain, in particular those of “knowledge aggregator”, “training certifier” and “training promoter”, in order to achieve European wide competence development and take-up in a sustainable way.
For the aforementioned objectives to be achieved, the whole project process
has been structured that way that it can be divided into three major
categories/ steps:
* **Step 1** : Identification of Future Trends/ Needs & Best Practices
* **Step 2** : Development of Training Schemes & Definition of Profiles and Competences
* **Step 3** : Verification and Optimization of training schemes
During the third step of the SKILLFUL project and the procedure of the
training schemes piloting and verification, data have been collected during
the realisation of the 27 pilots that were organised during the years 2018 and
2019. The Data Management Plan is a deliverable directly connected to
evaluation and pilot plans for each of the pilot sites.
This final version of the deliverable (Month 36) includes a description of
the datasets developed and used for the pilots' analysis, following the
regulations of the Pilot action on Open Access to Research Data of Horizon
2020 [1].
## Interrelations
Data Management aspects are closely related to:
1. Ethics issues in SKILLFUL, especially in the context of collecting, managing and processing data from real-life users (including Pilots’ participants),
2. Legal issues related to personal data (including sensitive personal data), security and privacy.
Therefore, this document has been updated as the work evolved and in close
synergy with the work of Activity 7.4 “A7.4 Quality Assurance, Ethics, Equity
and Gender issues”.
# Data processes during the SKILLFUL Pilots
The Guidelines on Data Management in Horizon 2020 have been taken into
consideration and used to identify and define the data management procedures
that the SKILLFUL project has followed. Data collection, storing, accessing
and sharing abide by international legislation (Data Protection Directive
95/46/EC “on the protection of individuals with regard to the processing of
personal data and on the free movement of such data” and the EU General Data
Protection Regulation 2016/679 (GDPR), which took effect on 25 May 2018) and
guidelines [2]. This final update (M36) contains descriptions of the
structures of the datasets used for the evaluation of the SKILLFUL pilots.
Subjective data have been collected during several types of qualitative
surveys of the project (i.e. within WP1, WP2 and WP4), workshops (i.e. within
WP1, WP2) and, of course, during the realisation of the pilots (WP5). These
data are collected, managed and processed by SKILLFUL partners and have been
anonymised in all cases. In the case of the pilots, subjective data deal
mostly with the evaluation and assessment of the piloted SKILLFUL courses, as
well as with satisfaction/perceived quality and acceptance (perceived/rated
by users) of these courses, according to the pilot participants, namely the
following:
* Trainers
* Trainees
* Organisers
* Stakeholders
Apart from data, metadata have been collected to define the characteristics
of the data and, in many cases, to facilitate processing, storing and,
finally, understanding the data collected during the pilots. Metadata range
from quality descriptions of datasets, which are important when the data are
used by analysts who did not participate in the data collection and therefore
need to understand as much as possible about the related processes and
procedures, to descriptions of how the data were aggregated.
For the evaluation phase of the SKILLFUL pilots, an online questionnaire was
developed ( _https://www.soscisurvey.de/skillfulTCE/_ ). A web link to the
questionnaire was distributed to all survey participant groups (the 4 types
mentioned above). The following types of data have been logged, managed and
processed in this SKILLFUL system during the performance and evaluation of
the SKILLFUL pilots:
* **Personalisation data:** Data concerning each survey respondent's profile (each pilot participant, as listed above) have been collected through the evaluation questionnaires provided. More specifically, the following information is required:
  * Gender information
  * Curriculum background (only for trainees)
  * Years of professional experience (only for trainees)
All information is acquired in anonymised form.
* **User feedback data:** Through the second part of each questionnaire (for all 4 types of participants), an upper-level (subjective) evaluation of each pilot course examined has been obtained from the participants. The feedback concerns various aspects of the course, such as (indicatively) its content and usefulness, appropriateness, its potential for providing new skills, its learning outcomes and how they could improve the job opportunities of the individual participants. Feedback is also provided on the organisation of the pilot course (i.e. resources, functionality of the classroom, technical equipment, timetable, teaching and learning methods, the trainers' background knowledge and skills, etc.). At the end of each questionnaire, users were asked to indicate their overall rating of the examined pilot course, and to provide general comments and suggestions for improvement.
All information has been acquired in anonymised form.
Figure 1: Example of the anonymisation of the data obtained during the
SKILLFUL pilots
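To make the anonymisation illustrated in Figure 1 concrete, the following is a minimal sketch of one common approach: replacing direct identifiers with salted one-way hashes. All names, fields and the salt value are hypothetical; this is not the actual SKILLFUL tooling.

```python
import hashlib

# Hypothetical secret salt, held only by the single person at each pilot
# site who manages the protected local contact database.
SALT = b"pilot-site-secret"

def anonymous_id(direct_identifier: str) -> str:
    """Map a direct identifier (e.g. an e-mail address) to a stable,
    non-reversible anonymous participant ID."""
    digest = hashlib.sha256(SALT + direct_identifier.encode("utf-8"))
    return digest.hexdigest()[:12]

# Only non-identifying attributes travel downstream with the anonymous ID.
record = {
    "participant_id": anonymous_id("jane.doe@example.org"),  # fictitious
    "gender": "F",
    "curriculum_background": "logistics",
    "years_of_experience": 8,
}
print(record)
```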
## Data storage and back up
The data obtained during SKILLFUL were regularly and securely stored and
backed up. In some cases multiple copies were made: the data were initially
collected by CERTH, which was also responsible for monitoring and optimising
the online questionnaires, and periodically (biweekly) copies of the data
were sent to VTT, the partner responsible for analysing the pilot information
and consolidating the results (processor).
The following data storage options have been used:
* **External hard drives/USB sticks:** used in long-term trials (WP5) and local evaluations. They have served as backups and intermediate storage units before transferring data to a permanent/long-term storage place.
* **Personal computers and laptops:** similarly, these have mainly served as short-term options and for transferring data after the evaluation sessions to a selected storage place.
* **Network/fileservers:** large datasets are stored here, serving as the long-term storage solution. Regular backups ensure data are not lost or corrupted.
## Data ownership and preservation
Any data gathered during the lifetime of the project are owned by the
beneficiary or beneficiaries (joint ownership) that produce them, according
to subsection 3, Art. 26 of the signed Grant Agreement (723989-SKILLFUL).
Data will be preserved after the end of the project (for a period of two
years) only for complete datasets that partners have agreed to share with
other researchers (if any). However, since the data obtained from the
participants during the SKILLFUL project, and especially during the pilots,
mainly concern personal features related to the training and professional
skills of the participants, and since the data pools are small, the data are
not intended to be reused or shared with third parties. The data collected do
not include physiological measurements and information, so their nature makes
them inappropriate for algorithmic applications.
# SKILLFUL data privacy policy
Participants’ personal data have been used in strictly confidential terms and
have been published only as statistics (anonymously).
The stored data refer only to users’ gender, professional background and
nationality (no other identifier was collected). Moreover, the stored data
relate only to users’ activities connected with their specific position and
job, not to a person’s beliefs or political or sexual preferences. It is also
important to mention that any data related to the performance of each pilot
participant/user in their job/position duties (“incidental findings”) are not
part of the SKILLFUL research; they have not been taken into account and no
relevant information will be disclosed to any 3rd party, including the
trainees’ colleagues and management. The following data have _not_ been
stored:
* Name, address, telephone, fax, e-mail, photo, etc. of the user (any direct or indirect link to user ID).
* User location (retrieved dynamically by the system every time, but not stored).
* Any other preferences/actions of the users, except the ones mentioned explicitly above.
* To whom users communicate, their frequent contacts, etc.
## During pilots
During the SKILLFUL Pilot tests:
1. In cases where any personal data (i.e. names, addresses and contact details) of transport professionals participating in the pilots were required, these were provided only to a single person at each pilot site, to be stored in a protected local database (to contact participants and arrange the tests). None of these persons participated in the evaluation process or the analysis of the data. Each participant was registered in the database through an anonymous ID.
2. These personal data have been kept in the database only for the duration of each trial (short-term trials: up to 1 week; long-term trials: up to 1 month). Such data have not been communicated to any other partner or person at each pilot site. Once the pilots were completed, all relevant information was deleted.
3. Since personal data have been deleted, no follow-up studies with the same people will be feasible.
The partners of the consortium agree and declare that personal data have been
used in strictly confidential terms and been published only as statistics
(anonymously).
# Conclusions
This deliverable (D5.2) documents and has monitored, since the beginning of
the SKILLFUL project, the data management processes that have been structured
and followed, mainly regarding the analysis of the data provided by real
users (i.e. through questionnaires, dissemination events, workshops and the
project's pilot tests), with emphasis on the proper management of personal
data and the protection of participants.
This deliverable has acted as a reference document, since ethical
considerations, especially about data protection, privacy and security, have
been elaborated in it, in cooperation with the SKILLFUL Ethics Board.
| https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0173_OPERA_654444.md |
# 1\. INTRODUCTION
## 1.1 OPERA MOTIVATION
The OPERA project participates in the Pilot on Open Research Data launched by
the European
Commission (EC) along with the H2020 programme. This pilot is part of the Open
Access to Scientific Publications and Research Data programme in H2020. The
goal of the programme is to foster access to research data generated in H2020
projects. The use of a Data Management Plan (DMP) is required for all projects
participating in the Open Research Data Pilot.
Open access is defined as the practice of providing on-line access to
scientific information that is free of charge to the reader and that is
reusable. In the context of research and innovation, scientific information
can refer to peer-reviewed scientific research articles or research data.
Research data refers to information, in particular facts or numbers collected
to be examined and considered, and as a basis for reasoning, discussion, or
calculation. In a research context, examples of data include statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings and images. The focus is on research data
that is available in digital form.
The Consortium strongly believes in the concepts of open science, and in the
benefits that the European innovation ecosystem and economy can draw from
allowing the reuse of data at a larger scale.
Furthermore, there is a need to gather experience in open sea operating
conditions, structural and power performance, and operating data in wave
energy. In fact, there has been very limited open sea experience in wave
energy, even though such experience is essential to fully understand the
challenges in device performance, survivability and reliability. The limited
operating data and experience that currently exist are rarely shared, as
testing is partly privately sponsored.
This project proposes to remove this roadblock by delivering, for the first
time, open access to high-quality open sea operating data for the wave energy
development community.
Nevertheless, data sharing in the open domain can be restricted where there
is a legitimate reason to protect results that can reasonably be expected to
be commercially or industrially exploited [1]. Strategies to limit such
restrictions will include anonymising or aggregating data, agreeing on a
limited embargo period, or publishing selected datasets.
## 1.2 PURPOSE OF THE DATA MANAGEMENT PLAN
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the Consortium with regard to the
project research data.
The DMP covers the complete research data life cycle. It describes the types
of research data that will be generated or collected during the project, the
standards that will be used, how the research data will be preserved and what
parts of the datasets will be shared for verification or reuse. It also
reflects the current state of the Consortium agreements on data management and
must be consistent with exploitation and IPR requirements.
The DMP is not a fixed document, but will evolve during the lifespan of the
project, particularly whenever significant changes arise such as dataset
updates or changes in Consortium policies.
This document is the final version of the DMP which was first delivered in
Month 6 of the project (D8.5) and later updated (D8.6).
This document has been produced following the EC guidelines for project
participating in this pilot and additional consideration described in ANNEX I:
KEY PRINCIPLES FOR OPEN ACCESS TO RESEARCH DATA.
## 1.3 RESEARCH DATA TYPES IN OPERA
The data types that will be produced during the project are focused on the
Description of the Action (DoA) and their results.
According to such consideration, Table 1.1 reports a list of categories of
research data that OPERA will produce. These research data types have been
mainly defined in WP1, including data structures, sampling and processing
requirements, as well as relevant standards. This list may be adapted with the
addition or removal of datasets in the next versions of the DMP to take into
consideration the project developments. A detailed description of each dataset
is given in the following sections of this document.
#### TABLE 1.1: OPERA TYPES OF DATA
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset category**
</th>
<th>
**Lead partner**
</th>
<th>
**Related WP(s)**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Environmental monitoring
</td>
<td>
TECNALIA
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Mooring performance
</td>
<td>
UNEXE
</td>
<td>
WP1, WP2, WP5
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Bi-radial performance
</td>
<td>
IST
</td>
<td>
WP1, WP3
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Power output
</td>
<td>
OCEANTEC
</td>
<td>
WP1, WP4, WP5
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Power quality
</td>
<td>
UCC
</td>
<td>
WP1, WP5
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Offshore operations
</td>
<td>
TECNALIA
</td>
<td>
WP6, WP7
</td> </tr> </table>
Specific datasets may be associated to scientific publications (i.e.
underlying data), public project reports and other raw data or curated data
not directly attributable to a publication. Datasets can be both collected,
unprocessed data as well as analysed, generated data. The policy for open
access are summarised in the following picture.
Research data linked to exploitable results will not be put into the open
domain if they compromise its commercialisation prospects or have inadequate
protection, which is a H2020 obligation. The rest of research data will be
deposited in an open access repository.
When the research data is linked to a scientific publication, the provisions
described in ANNEX
II: SCIENTIFIC PUBLICATIONS will be followed. Research data needed to validate
the results presented in the publication should be deposited at the same time
for “Gold” Open Access 1 or before the end of the embargo period for “Green”
Open Access 2 . Underlying research data will consist of selected parts of
the general datasets generated, and for which the decision of making that part
public has been made.
Other datasets will be related to any public report or be useful for the
research community. They will be selected parts of the general datasets
generated or full datasets (e.g. up to 2 years of key operating data) and be
published as soon as they become available.
## 1.4 ROLES AND RESPONSIBILITIES
Each OPERA partner has to respect the policies set out in this DMP. Datasets
have to be created, managed and stored appropriately and in line with
applicable legislation.
The Project Coordinator has a particular responsibility to ensure that data
shared through the OPERA website are easily available, but also that backups
are performed and that proprietary data are secured.
OCEANTEC, as WP1 leader, will ensure dataset integrity and compatibility for
its use during the project lifetime by different partners.
Validation and registration of datasets and metadata is the responsibility of
the partner that generates the data in the WP. Metadata constitutes an
underlying definition or description of the datasets, and facilitate finding
and working with particular instances of data.
Backing up data for sharing through open access repositories is the
responsibility of the partner possessing the data.
Quality control of these data is the responsibility of the relevant WP leader,
supported by the Project Coordinator.
If datasets are updated, the partner that possesses the data has the
responsibility to manage the different versions and to make sure that the
latest version is available in the case of publicly available data. WP1 will
provide naming and version conventions.
Last but not least, all partners must consult the concerned partner(s) before
publishing data in the open domain that can be associated to an exploitable
result.
# 2\. DATA COLLECTION, STORAGE AND BACK-UP
The OPERA project will generate data resulting from instrumentation recordings
during the lab testing and open-sea testing. In addition to the raw,
uncorrected sensor data, converted and corrected data, as well as several
other forms of derived data will be produced.
Instrumentation, data acquisition and logging systems are thoroughly described
in D1.1 [2]. A database management system will be used in the project to
create, read, update and delete data in a database. The software platform
being used is MySQL 5.7.9. A SCADA system will allow partners to access
monitoring information locally and remotely.
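As an illustration of how a partner might retrieve monitoring data from such a database, the sketch below uses the MySQL Connector/Python driver; the connection details, table and column names are assumptions for illustration, since the actual schema is defined in D1.1.

```python
import mysql.connector  # pip install mysql-connector-python

# All connection parameters and the schema below are hypothetical.
conn = mysql.connector.connect(
    host="db.example.org", user="opera_reader",
    password="********", database="opera_wp1",
)
cursor = conn.cursor()

# Fetch one hour of 2 Hz pressure-gauge samples for post-processing.
cursor.execute(
    "SELECT ts, pressure_kpa FROM wave_mutriku "
    "WHERE ts BETWEEN %s AND %s ORDER BY ts",
    ("2017-06-01 00:00:00", "2017-06-01 01:00:00"),
)
samples = cursor.fetchall()  # e.g. feed these into spectral analysis

cursor.close()
conn.close()
```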
The following sections describe the different datasets that will be produced
in the course of the project.
## 2.1 ENVIRONMENTAL MONITORING DATA
Environmental monitoring data will be collected at two locations, namely the
Mutriku shoreline plant and the open sea test site BiMEP. These numeric
datasets will be directly obtained through observations, and derived using
statistical parameters and models.
In general, environmental monitoring datasets will be useful for further
research activities beyond the scope of OPERA objectives. Metocean
observations are common practice for different uses. Dataset could be
integrated and reused, particularly for the characterisation of wave resource
and the estimation of device performance. They will be also valuable for
technology developers who plan to test their devices at either Mutriku or
BiMEP.
Although the raw datasets are useful by themselves, it is the objective of the
OPERA project to use the data as a basis for at least one scientific
publication.
A short description of the environmental monitoring datasets is given next. At
present two datasets have been made public through ZENODO as indicated in the
tables.
**TABLE 2.1: WAVE RESOURCE AT MUTRIKU**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Wave_Mutriku
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
* Wave resource data 200 m off the shoreline plant.
* Main data are the pressure fluctuations over time.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• RBR & Isurki Pressure Gauges
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* TXT for instrument recordings and derived data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
* Scilab program to transform pressure into wave height and period.
* Spectral analysis software.
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• 2 GB (6 months @ 2 Hz sampling frequency)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Internal USB memory stick, on-site database server, and real-time
replication onto cloud-hosted database server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Daily back-ups on both local and cloud-hosted servers. 15-day retention
period for incremental backups in the latter.
</td> </tr>
<tr>
<td>
**Link**
</td>
<td>
• _https://zenodo.org/record/832847#.WpPuDLNG0kI_ (version 1.0)
</td> </tr> </table>
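As a back-of-envelope plausibility check of the estimated size above (the bytes-per-record figure is an inference, not a documented value), 6 months of 2 Hz sampling yields roughly 31 million records, so 2 GB implies on the order of 60–65 bytes per stored record:

```python
# Rough consistency check for the DS_Wave_Mutriku size estimate.
SECONDS_PER_MONTH = 30 * 24 * 3600            # ~2.6 million seconds
samples = 6 * SECONDS_PER_MONTH * 2           # 6 months @ 2 Hz -> 31,104,000
bytes_per_record = 2e9 / samples              # ~64 bytes per record
print(f"{samples:,} samples, ~{bytes_per_record:.0f} B per record")
```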
**TABLE 2.2: WAVE RESOURCE AT BIMEP**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Wave_BiMEP
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
* Wave resource at 300 m up-wave of the WEC.
* Datasets mainly consist of wave parameters such as wave H s , T p , direction and spreading.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• TRIAXYS surface following buoy
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* TXT for instrument recordings
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• Spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
* 150 MB of statistical data (20 min x 2 years)
* 1 GB of real-time data (20 min x 2 Hz sampling frequency when real-time communications activated)
* 8 GB (2 years @ 2 Hz sampling frequency)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Internal USB memory stick, on-site database server and real-time replication
onto cloud-hosted database server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr>
<tr>
<td>
**Link**
</td>
<td>
• _https://zenodo.org/record/1311593#.W1Xp55N9hPY_ (version 2.1)
</td> </tr> </table>
## 2.2 MOORING PERFORMANCE DATA
Experimental data will be collected at the DMAC facility at UNEXE [4]. In
addition, field tests will be conducted at the open sea test site at BiMEP.
The mooring performance datasets will be both experimental and observational,
raw and derived (using statistical parameters and models).
The mooring performance dataset will be useful to inform technology
compliance, survivability and reliability, as well as economic improvements.
It will also be valuable for the certification processes of other technology
developers. These data will be the basis for at least one scientific
publication on the comparison and validation of dynamic response and mooring
loads from field measurements.
A short description of the mooring performance datasets is given below.
**TABLE 2.3: TETHER LOADS AT DMAC**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Tethers_Lab
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Characterisation of design load behaviour, fatigue and durability of several
elastomeric tether specimens.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• DMAC facility
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Experimental
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• CSV (processed data)
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• Labview, Optitrack Motive and Matlab
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• 8.7 GB (50 Hz sampling frequency)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Network storage
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Network drive is backed up daily with two-disk fault tolerance (i.e.
backups are safe even if two disks fail). Backups are stored in a different
building and protected by a dedicated UPS.
</td> </tr> </table>
#### TABLE 2.4: MOORING LOADS AT BIMEP
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Mooring_BiMEP
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
* Extreme loads and motion response to different sea states will be monitored.
* The loading data will be combined with the environmental monitoring dataset to derive the final mooring performance dataset.
* Comparison between the polyester lines and the elastomeric mooring tethers.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
* MARMOK-A-5 prototype.
* A mooring condition monitoring has been implemented for the project consisting of 4 load shackles deployed in two mooring nodes of the prototype.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* TXT for raw instrument recordings
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• ≤ 400 GB (2.5 years recording x 16 measurements @ 20 Hz)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
## 2.3 BIRADIAL TURBINE PERFORMANCE DATA
Experimental data will be collected at existing IST Turbomachinery Laboratory
(Dry Lab) for tests in varying unidirectional flow. Also, field tests will be
conducted both at Mutriku shoreline plant and the BiMEP open sea test site.
Bi-radial turbine performance data will be both experimental and
observational, raw and derived (using statistical parameters and models).
The bi-radial turbine performance dataset will be useful to assess turbine
efficiency and reliability. The loading data will be combined with the
environmental monitoring dataset to derive the final bi-radial turbine
performance dataset.
This dataset will be the basis for at least one scientific publication on the
description of the biradial turbine dry tests performed at the IST
turbomachinery test rig.
A short description of the bi-radial turbine performance datasets is given
below.
#### TABLE 2.5: BI-RADIAL TURBINE PERFORMANCE AT DRY LAB FACILITY
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Biradial_Turbine_Lab
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Assess turbine performance though unidirectional steady-state and
alternating flow.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
* IST Turbomachinery Laboratory.
* Sensor data acquired at a frequency of 1kHz for turbine pressure head, plenum temperature and humidity, turbine rotational speed, turbine flow rate and the instantaneous position of the flow control valve.
* The voltage and the current of the three AC phases at the input and output of the power electronics were acquired at a frequency of 62.5kHz.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Experimental
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• Matlab “mat” files and comma separated value “csv” text files.
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
* Matlab (Experimental data acquisition)
* A special purpose parallelized C++ software (Data filtering),
</td> </tr>
<tr>
<td>
</td>
<td>
• A software package written in the Julia language (Computation of the
instantaneous and time-averaged turbine shaft power, electrical power and
available pneumatic power)
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• 320 GB
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Local PC storage
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Static data stored at three computers
</td> </tr> </table>
**TABLE 2.6: BI-RADIAL TURBINE PERFORMANCE AT MUTRIKU**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Biradial_Turbine_Mutriku
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Assess turbine performance and collect extensive data on drivers of
components fatigue such as high rpm and accelerations; electrical, temperature
and pressure load cycles; humidity in the cabinet (which exacerbates
electrical stress damage); rate of salt accumulation and corrosion.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
* Mutriku Wave Power Plant.
* Bi-radial turbine-generator set and chamber #9 have been instrumented for the project.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• ≤ 50 GB (6-month recording x 150 measurements @ 4 Hz)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
**TABLE 2.7: BI-RADIAL TURBINE PERFORMANCE AT BIMEP**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Biradial_Turbine_BiMEP
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Internal water level, chamber pressure/temperature/humidity, rotation speed
and torque to assess turbine efficiency in response to different sea states to
compare turbine performance drivers of components fatigue
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
* MARMOK-A-5 prototype.
* Bi-radial turbine-generator set and hull structure have been instrumented for the project.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• MySQL database for real-time dataMS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• ≤ 100 GB (12-month recording x 150 measurements at 4 Hz)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
## 2.4 POWER OUTPUT DATA
Experimental data will be collected at the electrical test rigs of UCC [5]
and TECNALIA [6]. In addition, field test data will be collected at the
Mutriku shoreline plant and at the BiMEP open sea test site. Numerical models
will also be used to extend the dataset beyond sea-trial data; in the latter
case, specialist software may be needed for further processing of the data.
Selected parts of the generated datasets will be made public. Power output
data will be both experimental and observational, raw and derived (such as
mean, standard deviation, minimum and maximum values).
Power output data will be useful to identify sources of uncertainty in power
performance prediction and for the certification processes of other technology
developers.
This dataset will be the basis for at least one scientific publication. At
the time of writing this deliverable, a publication is in preparation
comparing operational data from control strategies applied to the biradial
turbine in the Mutriku Wave Power Plant. This publication does not require
BiMEP experimental data. Nonetheless, a short description of all possible
power output datasets is given below.
#### TABLE 2.8: POWER OUTPUT AT ELECTRICAL TEST RIG
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Power_Output_Lab
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Generator speed, voltage, frequency and electric power.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• Electrical test rigs of UCC and TECNALIA
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Experimental and Simulation
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• MS Excel
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• MATLAB numerical model of the Mutriku Wave Power Plant
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• 20 GB (7 CLs x approx. 300 MB)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Network storage
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Daily back-ups
</td> </tr> </table>
#### TABLE 2.9: POWER OUTPUT AT MUTRIKU
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Power_Output_Mutriku
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Generator speed, voltage, frequency and electric power, including phase
voltages & currents.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
* Mutriku Wave Power Plant.
* Bi-radial turbine-generator set and chamber #9 have been instrumented for the project.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• ≤ 50 GB (6-month recording x 150 measurements @ 4 Hz)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
#### TABLE 2.10: POWER OUTPUT AT BIMEP
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Power_Output_BiMEP
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Generator speed, voltage, frequency and electric power, including phase
voltages & currents.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• MARMOK-A-5 prototype. Bi-radial turbine-generator set and hull structure
have been instrumented for the project.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• ≤ 100 GB (12-month recording x 150 measurements at 4 Hz)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
## 2.5 POWER QUALITY DATA
Experimental data will be collected at the electrical test rig of UCC [10].
Field test data will also be collected at the Mutriku shoreline plant.
Simulation models may be used to assess power quality for other operating
conditions, such as varying control algorithms, resource conditions, grid
strengths and control schemes, using a dry lab to create a wider profile for
the WEC. Power quality data will be both experimental and observational, raw
and derived (using statistical parameters and models). Selected parts of the
experimental datasets will be made public.
Power quality data will be useful to identify sources of uncertainty in
assessing the impact of the wave energy converter on the performance of the
grid. They will be also valuable for the certification processes of other
technology developers.
This dataset will be the basis for at least one scientific publication on the
approach and results of Power Quality monitoring of OWC devices and the fault
response of WEC on small grid.
A short description of the power quality datasets is given next.
#### TABLE 2.11: POWER QUALITY AT ELECTRICAL TEST RIG
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Power_Quality_Lab
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Current, voltage, power quality characteristic parameters (such as voltage
fluctuations, harmonics, inter-harmonics, active/reactive power, and flicker).
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• Electrical test rig at UCC
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Experimental and Simulation
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• MS Excel
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• MATLAB Simulink numerical model of the Mutriku Wave Power Plant
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
* Maximum 1.2 GB per 10-minute test (at 20 kHz sampling frequency).
* 4 signals at 20 kHz for 10 minutes per test
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Network storage
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Daily back-ups
</td> </tr> </table>
#### TABLE 2.12: POWER QUALITY AT MUTRIKU
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Power_Quality_Mutriku
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Data will be collected from both a single turbine and the plant as a whole,
obtaining valuable conclusions about how aggregation of multiple turbines
affects the power quality.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• Mutriku Wave Power Plant.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
* LabView
* Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
* > 200 GB (12-month recording x 12 measurements @ 20 kHz).
* Given the large data storage requirements, the measurements will be triggered, and not carried out continuously. After sufficient power quality analysis has been carried out at 20 kHz, the sampling rate will then be reduced (to approximately 12 kHz, and 10 kHz).
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
The power quality data from Mutriku will support the publication on the
approach and results of power quality monitoring of OWC devices, whereas the
electrical test rig will address the fault response of a WEC on a small grid.
## 2.6 OFFSHORE OPERATIONS DATA
Field tests will be conducted at the BiMEP open sea test site. The offshore
operations data will be combined with the environmental monitoring dataset to
derive the final dataset. Offshore operations data will be observational and
derived.
Offshore operations data will be useful to reduce the uncertainty on the
determination of risk and cost of offshore operations, and to optimise these
activities. The offshore logistics experience can be extrapolated to different
scenarios of larger deployment with a view to more accurately assess the
economies of scale and identify logistics bottlenecks when deployed in large
arrays.
Collected datasets will be used for the global modelling of costs, namely the
integration of real sea operating data in an economic model. These datasets
alone are however not expected to be sufficient for a scientific publication.
Therefore, no underlying data is foreseen regarding Offshore Operations Data.
Nevertheless, a short description of the offshore operations datasets is given
below.

**TABLE 2.13: OFFSHORE OPERATIONS**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Offshore_Operations
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Failures, type of maintenance, offshore resources (such as vessels,
equipment, personnel, parts and consumables), health & safety, and activity
log.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• Unlike the previous datasets, these are not based on process instrumentation
and therefore will not be stored in the WP1 database.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Observational
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• MS Excel
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• n/a
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• 10 MB
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Network storage
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Daily back-ups on a separate server
</td> </tr> </table>
# 3\. DATA STANDARDS AND METADATA
The following standards should be used for data documentation:
* Ocean Data Standards Project [7] : it contains an extensive number of references on Oceanographic Data Management and Exchange Standards. It includes references on Metadata, Date and Time, Lat/Lon/Alt, Country names, Platform instances, Platform types, Science Words, Instruments, Units, Projects, Institutions, Parameters, Quality Assurance and Quality Control.
* ISO 19156:2011 [8] : it defines a conceptual schema for observations, and for features involved in sampling when making observations. These provide models for the exchange of information describing observation acts and their results, both within and between different scientific and technical communities.
* IEC TS 62600-101 [9] : technical specification for wave energy resource assessment and characterisation.
* DNVGL-OS-E301 [10] : it contains criteria, technical requirements and guidelines on design and construction of position mooring systems. The objective of this standard is to give a uniform level of safety for mooring systems, consisting of chain, steel wire ropes and fibre rope.
* IEC TS 62600-10 [11] : technical specification for assessment of mooring system for Marine Energy Converters (MECs).
* IEC TS 62600-100 [12] : technical specification on power performance assessment of electricity producing wave energy converters
* IEC TS 62600-102 [13] : technical specification on wave energy converter power performance assessment at a second location using measured assessment data
* IEC TS 62600-30 [14] : technical specification on electrical power quality requirements for wave, tidal and other water current energy converters
* IEC 61000-4-7:2002 [15] : further instructions on processing harmonic current components for power supply systems and equipment connected thereto.
* FRACAS [16] : Failure Reporting, Analysis and Corrective Action System.
* ISO 14224:2006 [17] : collection and exchange of reliability and maintenance data for equipment.
Metadata records will accompany the data files in order to describe,
contextualise and facilitate external users to understand and reuse the data.
OPERA will adopt the DataCite Metadata Schema [18] , a domain agnostic
metadata schema, as the basis for harvesting and importing metadata about
datasets from data archives. The core mission of DataCite is to build and
maintain a sustainable framework that makes it possible to cite data through
the use of persistent identifiers.
The following metadata should be created to identify datasets (an illustrative record sketch follows this list):
* Identifier: A unique string that identifies the dataset
* Author/Creator: The main researchers involved in producing the data in priority order
* Title: A name or title by which a data is known
* Publisher: The name of the entity that holds, archives, publishes prints, distributes, releases, issues, or produces the data.
* Publication Year: The year when the data was or will be made publicly available
* Subject: Subject, keyword, classification code, or key phrase describing the resource.
* Contributor: Name of the funding entity (i.e. "European Union" & "Horizon 2020")
* Size: Unstructured size information about the dataset (in GBs)
* Format: Technical format of the dataset (e.g. cvs, txt, xml, ...)
* Version: The version number of the dataset
* Access rights: Provide a rights management statement for the dataset. Include embargo information if applicable
* Geo-location: Spatial region or named place where the data was gathered
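A minimal sketch of such a metadata record, serialised as JSON, is shown below; all field values are invented placeholders, and only the field names follow the list above.

```python
import json

# Illustrative only: the DOI and all values are placeholders, not a real
# OPERA record.
dataset_metadata = {
    "identifier": "10.5281/zenodo.0000000",
    "creator": ["Surname, Name"],
    "title": "DS_Wave_Mutriku wave resource data",
    "publisher": "OPERA project",
    "publicationYear": 2017,
    "subject": ["wave energy", "metocean"],
    "contributor": "European Union & Horizon 2020",
    "size": "2 GB",
    "format": "txt",
    "version": "1.0",
    "rights": "CC-BY, no embargo",
    "geoLocation": "Mutriku, Bay of Biscay",
}
print(json.dumps(dataset_metadata, indent=2))
```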
# 4\. DATA SHARING AND REUSE
During the life cycle of the OPERA project datasets will be stored and
systematically organised in a database tailored to comply with the
requirements of WP1 (for more details on the database architecture, please see
D1.1 Process instrumentation definition [2] ). An online data query tool was
operational in Month 12, and available for open dissemination by Month 18. The
database schema and the queryable fields will also be publicly available to
the database users as a way to better understand the database itself.
In addition to the project database, relevant datasets will also be stored in
ZENODO [19] , the open access repository of the Open Access Infrastructure
for Research in Europe, OpenAIRE [20] .
All collected datasets will be disseminated without an embargo period unless
linked to a green open access publication. Data objects will be deposited in
ZENODO under:
* Open access to data files and metadata, with data files provided over standard protocols such as HTTP and OAI-PMH (see the sketch below).
* Use and reuse of data permitted.
* Privacy of its users protected.
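As a sketch of such programmatic access (assuming Zenodo's public REST API and the DS_Wave_Mutriku record ID from Table 2.1; the exact response fields may differ between API versions):

```python
import json
import urllib.request

# Record 832847 is the DS_Wave_Mutriku deposit linked in Table 2.1.
url = "https://zenodo.org/api/records/832847"
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

print(record["metadata"]["title"])
for f in record.get("files", []):
    print(f.get("key"), f.get("size"))
```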
Data access policy is summarised in the following table.
### TABLE 4.1: DATA ACCESS POLICY
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Data access policy**
</th> </tr>
<tr>
<td>
DS_Wave_Mutriku
</td>
<td>
* Unrestricted since no confidentiality or IPR issues are expected regarding the environmental monitoring datasets
* Licence: CC-BY
</td> </tr>
<tr>
<td>
DS_Wave_BiMEP
</td> </tr>
<tr>
<td>
DS_Tethers_Lab
</td>
<td>
* Restricted to WP2 participants, in order to protect the commercial and industrial prospects of exploitable results (KER1 and KER3).
* Samples of aggregated data (e.g. load averages or extreme load ranges) will be shared in the open domain for the most relevant sea states.
* Licence: CC-BY-ND
</td> </tr>
<tr>
<td>
DS_Mooring_BiMEP
</td> </tr>
<tr>
<td>
DS_Biradial_Turbine_Lab
</td>
<td>
* Restricted to WP3 participants, in order to protect the commercial and industrial prospects of exploitable results (KER1 and KER2).
* Samples of aggregated data (e.g. chamber pressure, air flow, mechanical power) will be shared in the open domain.
* Licence: CC-BY-ND
</td> </tr>
<tr>
<td>
DS_Biradial_Turbine_Mutriku
</td> </tr>
<tr>
<td>
DS_Biradial_Turbine_BiMEP
</td> </tr>
<tr>
<td>
DS_Power_Output_Lab
</td>
<td>
* Restricted to WP4 and WP5 participants, in order to protect the commercial and industrial prospects of exploitable results (KER1, KER4 and KER6).
* Samples of aggregated data (e.g. electric power for the different control laws) will be shared in the open domain.
* Licence: CC-BY-ND
</td> </tr>
<tr>
<td>
DS_Power_Output_Mutriku
</td> </tr>
<tr>
<td>
DS_Power_Output_BiMEP
</td> </tr>
<tr>
<td>
DS_Power_Quality_Lab
</td>
<td>
* Restricted to WP5 participants, in order to protect the commercial and industrial prospects of exploitable results (KER1, KER4 and KER6).
* Samples of aggregated data (e.g. active, reactive power and power factor) will be shared in the open domain.
* Licence: CC-BY-ND
</td> </tr>
<tr>
<td>
DS_Power_Quality_Mutriku
</td> </tr>
<tr>
<td>
DS_Offshore_Operations
</td>
<td>
• Not to be shared in the open domain in order to protect the commercial and
industrial prospects of partners.
</td> </tr> </table>
# 5\. DATA ARCHIVING AND PRESERVATION
The OPERA project database will be designed to remain operational for 5 years
after project end. By the end of the project, the final dataset will be
transferred to the ZENODO repository, which ensures sustainable archiving of
the final research data.
Items deposited in ZENODO will be retained for the lifetime of the
repository, which is currently the lifetime of the host laboratory CERN,
whose experimental programme is defined for at least the next 20 years. Data
files and metadata are backed up on a nightly basis and replicated in
multiple copies in the online system. All data files are stored along with an
MD5 checksum of the file content, and files are regularly checked against
their checksums.
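A minimal sketch of the checksum verification described above (the file names are hypothetical):

```python
import hashlib
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through MD5 so large datasets fit in constant memory."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum stored alongside the data file.
data_file = Path("DS_Wave_Mutriku.txt")           # hypothetical file
expected = Path("DS_Wave_Mutriku.txt.md5").read_text().split()[0]
if md5_of(data_file) != expected:
    raise RuntimeError("checksum mismatch - possible corruption")
```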
| https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0179_OPERA_654444.md |
# 1\. INTRODUCTION
## 1.1 OPERA MOTIVATION
The OPERA project participates in the Pilot on Open Research Data launched by
the European Commission (EC) along with the H2020 programme. This pilot is
part of the Open Access to Scientific Publications and Research Data programme
in H2020. The goal of the programme is to foster access to research data
generated in H2020 projects. The use of a Data Management Plan (DMP) is
required for all projects participating in the Open Research Data Pilot.
Open access is defined as the practice of providing on-line access to
scientific information that is free of charge to the reader and that is
reusable. In the context of research and innovation, scientific information
can refer to peer-reviewed scientific research articles or research data.
Research data refers to information, in particular facts or numbers, collected
to be examined and considered, and used as a basis for reasoning, discussion, or
calculation. In a research context, examples of data include statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings and images. The focus is on research data
that is available in digital form.
The Consortium strongly believes in the concepts of open science, and in the
benefits that the European innovation ecosystem and economy can draw from
allowing the reuse of data at a larger scale.
Furthermore, there is a need to gather experience in open sea operating
conditions, structural and power performance, and operating data in wave
energy. There has been very limited open sea experience in wave energy, yet
such experience is essential in order to fully understand the challenges in
device performance, survivability and reliability. The limited operating data
and experience that currently exist are rarely shared, as testing is partly
privately sponsored.
This project proposes to remove this roadblock by delivering, for the first
time, open-access, high-quality open sea operating data to the wave energy
development community.
Nevertheless, data sharing in the open domain can be restricted for
legitimate reasons, namely to protect results that can reasonably be expected
to be commercially or industrially exploited. Strategies to limit such
restrictions will include anonymising or aggregating data, agreeing on a
limited embargo period, or publishing selected datasets.
## 1.2 PURPOSE OF THE DATA MANAGEMENT PLAN
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the Consortium with regard to the
project research data.
The DMP covers the complete research data life cycle. It describes the types
of research data that will be generated or collected during the project, the
standards that will be used, how the research data will be preserved and what
parts of the datasets will be shared for verification or reuse. It also
reflects the current state of the Consortium agreements on data management and
must be consistent with exploitation and IPR requirements.
**FIGURE 1.1: RESEARCH DATA LIFE CYCLE (ADAPTED FROM UK DATA ARCHIVE [1] )**
The DMP is not a fixed document, but will evolve during the lifespan of the
project, particularly whenever significant changes arise such as dataset
updates or changes in Consortium policies.
This document is the first version of the DMP, delivered in Month 6 of the
project. It includes an overview of the datasets to be produced by the
project, and the specific conditions that are attached to them. The next
versions of the DMP will get into more detail and describe the practical data
management procedures implemented by the OPERA project with reference with the
IT tools developed in WP1. At a minimum, the DMP will be updated in Month 18
(D8.6) and Month 30 (D8.7) respectively.
This document has been produced following the EC guidelines for projects
participating in this pilot and the additional considerations described in
ANNEX I: KEY PRINCIPLES FOR OPEN ACCESS TO RESEARCH DATA.
## 1.3 RESEARCH DATA TYPES IN OPERA
For this first release of DMP, the data types that will be produced during the
project are focused on the Description of the Action (DoA) and on the results
obtained in the first months of the project.
Accordingly, Table 1.1 lists the indicative types of research data that OPERA
will produce. These research data types have been mainly defined in WP1,
including data structures, sampling and processing requirements, as well as
relevant standards. This list may be adapted with the addition or removal of
datasets in the next versions of the DMP to take into consideration the
project developments. A detailed description of each dataset is given in the
following sections of this document.

### TABLE 1.1: OPERA TYPES OF DATA
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset**
</th>
<th>
**Lead partner**
</th>
<th>
**Related WP(s)**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Environmental monitoring
</td>
<td>
TECNALIA
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Mooring performance
</td>
<td>
UNEXE
</td>
<td>
WP1, WP2, WP5
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Biradial performance
</td>
<td>
IST
</td>
<td>
WP1, WP3
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Power output
</td>
<td>
OCEANTEC
</td>
<td>
WP1, WP4, WP5
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Power quality
</td>
<td>
UCC
</td>
<td>
WP1, WP5
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Offshore operations
</td>
<td>
TECNALIA
</td>
<td>
WP6
</td> </tr> </table>
Specific datasets may be associated with scientific publications (i.e.
underlying data), public project reports, and other raw or curated data
not directly attributable to a publication. The policy for open access is
summarised in the following figure.
**FIGURE 1.2: RESEARCH DATA OPTIONS AND TIMING**
Research data linked to exploitable results will not be put into the open
domain if doing so would compromise their commercialisation prospects or if
they have inadequate protection; this is an H2020 obligation. The rest of the
research data will be deposited in an open access repository.
deposited in an open access repository.
When the research data is linked to a scientific publication, the provisions
described in ANNEX II: SCIENTIFIC PUBLICATIONS will be followed. Research data
needed to validate the results presented in the publication should be
deposited at the same time for “Gold” Open Access 1 or before the end of the
embargo period for “Green” Open Access 2 . Underlying research data will
consist of selected parts of the general datasets generated, and for which the
decision of making that part public has been made.
Other datasets will be related to public reports or be useful for the
research community. They will be selected parts of the general datasets
generated, or full datasets (i.e. up to 2 years of key operating data), and
will be published as soon as possible.
## 1.4 RESPONSIBILITIES
Each OPERA partner has to respect the policies set out in this DMP. Datasets
have to be created, managed and stored appropriately and in line with
applicable legislation.
The Project Coordinator has a particular responsibility to ensure that data
shared through the OPERA website are easily available, but also that backups
are performed and that proprietary data are secured.
OCEANTEC, as WP1 leader, will ensure dataset integrity and compatibility for
its use during the project lifetime by different partners.
Validation and registration of datasets and metadata is the responsibility of
the partner that generates the data in the WP. Metadata constitutes an
underlying definition or description of the datasets, and facilitates finding
and working with particular instances of data.
Backing up data for sharing through open access repositories is the
responsibility of the partner possessing the data.
Quality control of these data is the responsibility of the relevant WP leader,
supported by the Project Coordinator.
If datasets are updated, the partner that possesses the data has the
responsibility to manage the different versions and to make sure that the
latest version is available in the case of publicly available data. WP1 will
provide naming and version conventions.
Last but not least, all partners must consult the concerned partner(s) before
publishing data in the open domain that can be associated to an exploitable
result.
# 2\. ENVIRONMENTAL MONITORING
## 2.1 DATASET REFERENCE AND NAME
DS_Environmental_Monitoring
## 2.2 DATASET DESCRIPTION

The DS_Environmental_Monitoring datasets mainly consist of several wave
parameters (such as wave Hs, Tp, direction and spreading), but may also
include wind, tide, current and temperature parameters. These numeric datasets
will be directly obtained through observations, and derived using statistical
parameters and models. In the latter case, specialist software may be needed
for further processing of the data.
Environmental monitoring data will be collected at two locations, namely the
Mutriku shoreline plant and the open sea test site BiMEP.
Currently at Mutriku, the single environmental parameter collected is the wave
elevation in the Oscillating Water Column. In order to increase the
characterisation of the wave resource, a new wave instrument will be installed
about 300 m off the shoreline for approximately 6 months. It will be of the
bottom mounted pressure gauge type.
The main wave instrument for the BiMEP test site is the reference buoy, a
Fugro-OCEANOR Wavescan buoy deployed in March 2009, which has recorded almost
continuously except for a significant gap from January 2013 to April 2014.
This wave instrument records 17-minute heave, pitch and roll time series, from
which omnidirectional and directional spectra can be estimated, as well as
standard sea-state parameters. It is located about 1.1 km to the WSW of the
prototype deployment. Additionally, a surface-following buoy will be installed
close to the prototype for the research activities of the project at BiMEP for
approximately two years.
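As an illustration of how standard sea-state parameters can be derived from
such records, the following minimal Python sketch estimates the
omnidirectional spectrum and the significant wave height Hm0 = 4√m0 from a
heave time series; the sampling frequency and record length are illustrative
values, not the buoy's actual configuration.

```python
import numpy as np
from scipy.signal import welch

def sea_state_from_heave(heave, fs):
    """Estimate standard sea-state parameters from a heave record [m]."""
    # Omnidirectional variance density spectrum via Welch's method
    f, S = welch(heave, fs=fs, nperseg=256)
    m0 = np.trapz(S, f)         # zeroth spectral moment [m^2]
    hs = 4.0 * np.sqrt(m0)      # significant wave height Hm0 [m]
    tp = 1.0 / f[np.argmax(S)]  # peak period [s]
    return hs, tp

# Example with a synthetic 17-minute record; 1.28 Hz is an illustrative rate.
fs = 1.28
t = np.arange(0, 17 * 60, 1 / fs)
heave = np.sin(2 * np.pi * 0.1 * t) + 0.2 * np.random.randn(t.size)
print(sea_state_from_heave(heave, fs))
```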
In general, environmental monitoring datasets will be useful for further
research activities beyond the scope of the OPERA objectives. Metocean
observations are common practice for different uses. Datasets could be
integrated and reused, particularly for the characterisation of the wave
resource and the estimation of device performance. They will also be valuable
for technology developers who plan to test their devices at either Mutriku or
BiMEP.
Although the raw datasets are useful by themselves, it is the objective of the
OPERA project to use the dataset as a basis for at least one scientific
publication.
## 2.3 STANDARDS AND METADATA
There have been many discussions about processing data and information in
oceanography. Many useful ideas have been developed and put into practice, but
there have been few successful attempts to develop and implement international
standards for managing data.
The Ocean Data Standards Project [2] contains an extensive number of
references on Oceanographic Data Management and Exchange Standards. It
includes references on Metadata, Date and Time, Lat/Lon/Alt, Country names,
Platform instances, Platform types, Science Words, Instruments, Units,
Projects, Institutions, Parameters, Quality Assurance and Quality Control.
ISO 19156:2011 defines a conceptual schema for observations, and for
features involved in sampling when making observations. These provide models
for the exchange of information describing observation acts and their results,
both within and between different scientific and technical communities.
Additionally, regarding the wave energy application, the relevant standard is
the technical specification for wave energy resource assessment and
characterization, IEC TS 62600-101 [4].
The environmental monitoring system will be integrated in the existing IT
infrastructure at Mutriku and BiMEP. A SCADA system will be developed that
allows partners to access monitoring information locally and remotely.
TECNALIA will be responsible for version control and validation of datasets to
be shared open access.
## 2.4 DATA SHARING
During the lifecycle of the OPERA project datasets will be stored and
systematically organised in a database tailored to comply with the
requirements of WP1 (for more details on the database architecture, please see
D1.1 Process instrumentation definition). An online data query tool will be
operational by Month 12 and open for dissemination by Month 18. The database
schema and the queryable fields will also be publicly available to the
database users as a way to better understand the database itself.
In addition to the project database, relevant datasets will be also stored in
ZENODO [5] , which is the open access repository of the Open Access
Infrastructure for Research in Europe, OpenAIRE [6] .
Data access policy will be unrestricted since no confidentiality or IPR issues
are expected regarding the environmental monitoring datasets. All collected
datasets will be disseminated without an embargo period unless linked to a
green open access publication. Data objects will be deposited in ZENODO under:
* Open access to data files and metadata, with data provided over standard protocols such as HTTP and OAI-PMH (see the sketch below).
* Use and reuse of data permitted. Privacy of its users protected.
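As a minimal sketch of the first point, discovery metadata for deposited
records can be harvested from ZENODO's public OAI-PMH endpoint over HTTP; the
set name used below is a hypothetical placeholder, not a confirmed identifier.

```python
import requests

OAI_ENDPOINT = "https://zenodo.org/oai2d"  # ZENODO's public OAI-PMH endpoint

def list_records(set_spec, prefix="oai_dc"):
    """Harvest one page of Dublin Core metadata records over OAI-PMH."""
    params = {"verb": "ListRecords", "metadataPrefix": prefix, "set": set_spec}
    response = requests.get(OAI_ENDPOINT, params=params, timeout=30)
    response.raise_for_status()
    return response.text  # OAI-PMH XML; parse with xml.etree if needed

# "user-opera" is a hypothetical community set name.
print(list_records("user-opera")[:500])
```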
## 2.5 ARCHIVING AND PRESERVATION
The OPERA project database will be designed to remain operational for 5 years
after project end. By the end of the project, the final dataset will be
transferred to the ZENODO repository, which ensures sustainable archiving of
the final research data.
Items deposited in ZENODO will be retained for the lifetime of the repository,
which is currently the lifetime of the host laboratory CERN; CERN has an
experimental programme defined for at least the next 20 years. Data files and
metadata are backed up on a nightly basis, as well as replicated in multiple
copies in the online system. All data files are stored along with an MD5
checksum of the file content. Regular checks of files against their checksums
are made.
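The same fixity check can be reproduced by partners on local copies of the
data. A minimal Python sketch follows; the manifest entry (file name and
checksum) is a hypothetical placeholder.

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 checksum of a file, reading it in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest):
    """Check files against a {path: expected_md5} manifest."""
    for path, expected in manifest.items():
        status = "OK" if md5sum(path) == expected else "CORRUPT"
        print(status, path)

# Hypothetical manifest entry; file name and checksum are placeholders.
verify({"DS_Wave_Mutriku_2017.txt": "9e107d9d372bb6826bd81d3542a419d6"})
```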
# 3\. MOORING PERFORMANCE
## 3.1 DATASET REFERENCE AND NAME
DS_Mooring_Performance
## 3.2 DATASET DESCRIPTION

The DS_Mooring_Performance datasets consist of extreme loads and motion
response (6 DOF) to different sea states for the mooring lines. Mooring
performance data will be both experimental and observational, raw and derived
(using statistical parameters and models).
Experimental data will be collected at the DMAC facility in UNEXE [7] . These
tests will be focused on the characterisation of design load behaviour,
fatigue and durability of several elastomeric tether specimens. Raw data will
be used. Selected parts of the experimental datasets generated will be made
public.
Field tests will be conducted at the open sea test site at BiMEP. A mooring
condition monitoring will be implemented for the project consisting of 4 load
shackles deployed in two mooring nodes of the prototype. Extreme loads and
motion response to different sea states will be monitored. The loading data
will be combined with the environmental monitoring dataset to derive the final
mooring performance dataset. Selected parts of the field test datasets
generated will be made public.
The mooring performance dataset will be useful to inform technology
compliance, survivability and reliability, as well as economic improvements.
It will also be valuable for the certification processes of other technology
developers.
This dataset will be the basis for at least one scientific publication.
## 3.3 STANDARDS AND METADATA
In order to ensure the required compatibility, this dataset will use the same
ocean data standards as the previous environmental monitoring dataset for
data and metadata capture/creation.
The offshore standard DNVGL-OS-E301 [8] contains criteria, technical
requirements and guidelines on the design and construction of position mooring
systems. The objective of this standard is to provide a uniform level of
safety for mooring systems consisting of chain, steel wire ropes and fibre
ropes.
Besides, regarding the wave energy application, the relevant standard is the
technical specification for assessment of mooring system for Marine Energy
Converters (MECs) IEC TS 62600-10 [9] .
During the OPERA project, a SCADA system will be developed that allows
partners to access monitoring information locally and remotely. UNEXE will be
responsible for version control and validation of datasets to be shared open
access.
## 3.4 DATA SHARING
As described before, during the lifecycle of the OPERA project, datasets will
be stored and systematically organised in a database tailored to comply with
the requirements of WP1. An online data query tool will be operational by
Month 12 and open for dissemination by Month 18. The database schema and the
queryable fields will also be publicly available to the database users as a
way to better understand the database itself.
Full data access policy will be restricted to WP2 participants, in order to
protect the commercial and industrial prospects of exploitable results (ER1
and ER3). However, aggregated data will be used in order to limit this
restriction.
The aggregated dataset will be disseminated as soon as possible. In the case
of the underlying data of a publication this might imply an embargo period for
green open access publications.
Data objects will be deposited in ZENODO under open access to data files and
metadata, permitting its use and reuse, as well as protecting privacy of its
users.
## 3.5 ARCHIVING AND PRESERVATION
As described before, the OPERA project database will be designed to remain
operational for 5 years after project end. By the end of the project, the
final dataset will be transferred to the ZENODO repository, which ensures
sustainable archiving of the final research data.
# 4\. BIRADIAL TURBINE PERFORMANCE
## 4.1 DATASET REFERENCE AND NAME
DS_Biradial_Turbine_Performance
## 4.2 DATASET DESCRIPTION

The DS_Biradial_Turbine_Performance datasets mainly consist of internal water
level, chamber pressure/temperature/humidity, rotation speed and torque, used
to assess turbine efficiency in response to different sea states. Biradial
turbine performance data will be both experimental and observational, raw and
derived (using statistical parameters and models).
Experimental data will be collected at an existing rig in the IST
Turbomachinery Laboratory for tests in varying unidirectional flow. Built-in
sensors will measure rpm, pressure differential across the rotor, vibration,
and generator temperature, voltage and current. Raw data will be used.
Selected parts of the experimental datasets generated will be made public.
Field tests will be conducted both at Mutriku shoreline plant and the BiMEP
open sea test site.
Testing at Mutriku will assess turbine performance and collect extensive data
on drivers of components fatigue such as high rpm and accelerations;
electrical, temperature and pressure load cycles; humidity in the cabinet
(which exacerbates electrical stress damages); rate of salt accumulation and
corrosion.
Similar data will be collected at BiMEP and the results will be compared.
Additionally, low-frequency accelerometers will assess loads on the rotor and
bearings.
The loading data will be combined with the environmental monitoring dataset to
derive the final biradial turbine performance dataset. Non-dimensional values,
aggregated data and selected parts of the field test datasets generated will
be made public.
The biradial turbine performance dataset will be useful to assess turbine
efficiency and reliability.
This dataset will be the basis for at least one scientific publication.
## 4.3 STANDARDS AND METADATA
In order to ensure the required compatibility, this dataset will use the same
ocean data standards than the previous environmental monitoring dataset for
data and metadata capture/creation.
DNV GL will advise on applicable rules and standards to ensure appropriate
design and data capture for open ocean operating conditions.
During the OPERA project, a SCADA system will be developed that allows
partners to access monitoring information locally and remotely. IST will be
responsible for version control and validation of datasets to be shared open
access.
## 4.4 DATA SHARING
OPERA project datasets will be stored and systematically organised in a
database tailored to comply with the requirements of WP1. An online data query
tool will be operational by Month 12 and for open dissemination by Month 18.
The database schema and the queryable fields will also be publicly available
to the database users as a way to better understand the database itself.
Full data access policy will be restricted to WP3 participants, in order to
protect the commercial and industrial prospects of exploitable results (ER1
and ER2). However, aggregated data will be used in order to limit this
restriction.
The aggregated dataset will be disseminated as soon as possible. In the case
of the underlying data of a publication this might imply an embargo period for
green open access publications.
Data objects will be deposited in ZENODO under open access to data files and
metadata, permitting its use and reuse, as well as protecting privacy of its
users.
## 4.5 ARCHIVING AND PRESERVATION
As described before, the OPERA project database will be designed to remain
operational for 5 years after project end. By the end of the project, the
final dataset will be transferred to the ZENODO repository, which ensures
sustainable archiving of the final research data.
# 5\. POWER OUTPUT
## 5.1 DATASET REFERENCE AND NAME

DS_Power_Output

## 5.2 DATASET DESCRIPTION

The DS_Power_Output datasets mainly consist of generator speed, voltage,
frequency and electric power. Power output data will be both experimental and
observational, raw and derived, such as mean, standard deviation, minimum and
maximum values.
Experimental data will be collected at the electrical test rigs of UCC [10]
and TECNALIA [11]. Field test data will be collected at the Mutriku shoreline
plant and at the BiMEP open sea test site. Numerical models will also be used
to extend the dataset beyond sea-trials data; in that case, specialist
software may be needed for further processing of the data. Selected parts of
the datasets generated will be made public.
Power output data will be useful to identify sources of uncertainty in power
performance prediction. They will also be valuable for the certification
processes of other technology developers.
This dataset will be the basis for at least one scientific publication.
## 5.3 STANDARDS AND METADATA
In order to ensure the required compatibility, this dataset will use the same
ocean data standards than the previous environmental monitoring dataset for
data and metadata capture/creation.
Additionally, regarding the wave energy application, the relevant standards
are the technical specification on power performance assessment of electricity
producing wave energy converters IEC TS 62600-100 [12] , and the technical
specification on wave energy converter power performance assessment at a
second location using measured assessment data IEC TS 62600-102 [13] .
As indicated in the technical specifications, the datasets shall provide a
record of sea state and electrical power production over time. Each aggregated
data record shall be date and time stamped using ISO 8601. The records shall
be annotated with quality control flags giving the results of the quality
control checks carried out during recording and analysis.
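A minimal sketch of such an aggregated record is shown below; the field names
and flag vocabulary are illustrative assumptions, not the exact schema
prescribed by IEC TS 62600-100.

```python
import json
from datetime import datetime, timezone

# Illustrative aggregated power-output record; field names are assumptions.
record = {
    "timestamp": datetime(2017, 6, 1, 12, 0, tzinfo=timezone.utc).isoformat(),
    "hs_m": 2.1,               # significant wave height of the sea state [m]
    "tp_s": 11.4,              # peak period [s]
    "electric_power_kw": 8.7,  # mean electric power over the record [kW]
    "qc_flags": {"range_check": "pass", "spike_check": "pass", "gap_check": "fail"},
}
print(json.dumps(record, indent=2))  # timestamp: 2017-06-01T12:00:00+00:00
```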
A SCADA system will be developed that allows partners to access monitoring
information locally and remotely. OCEANTEC will be responsible for version
control and validation of datasets to be shared open access.
## 5.4 DATA SHARING
During the lifecycle of the OPERA project datasets will be stored and
systematically organised in a database tailored to comply with the
requirements of WP1. An online data query tool will be operational by Month 12
and open for dissemination by Month 18. The database schema and the queryable
fields will also be publicly available to the database users as a way to
better understand the database itself.
Full data access policy will be restricted to WP4 and WP5 participants, in
order to protect the commercial and industrial prospects of exploitable
results (ER1, ER4 and ER6). However, aggregated data will be used in order to
limit this restriction.
The aggregated dataset will be disseminated as soon as possible. In the case
of the underlying data of a publication this might imply an embargo period for
green open access publications.
Data objects will be deposited in ZENODO under open access to data files and
metadata, permitting its use and reuse, as well as protecting privacy of its
users.
## 5.5 ARCHIVING AND PRESERVATION
As described before, the OPERA project database will be designed to remain
operational for 5 years after project end. By the end of the project, the
final dataset will be transferred to the ZENODO repository, which ensures
sustainable archiving of the final research data.
# 6\. POWER QUALITY
## 6.1 DATASET REFERENCE AND NAME

DS_Power_Quality

## 6.2 DATASET DESCRIPTION

The DS_Power_Quality datasets consist of current, voltage and power quality
characteristic parameters (such as voltage fluctuations, harmonics,
inter-harmonics, active/reactive power, and flicker). Power quality data will
be both experimental and observational, raw and derived (using statistical
parameters and models).
Experimental data will be collected at the electrical test rig of UCC [10].
Field test data will be collected at the Mutriku shoreline plant. Simulation
models may be used to assess power quality for other operating conditions,
such as varying control algorithms, resource conditions and grid strengths,
using a dry lab to create a wider profile for the WEC. In that case,
specialist software may be needed for further processing of the data. Selected
parts of the experimental datasets generated will be made public.
In Mutriku, data will be collected from both a single turbine and the plant as
a whole, obtaining valuable conclusions about how aggregation of multiple
turbines affects the power quality. Non-dimensional values, aggregated data
and selected parts of the Mutriku field test datasets generated will be made
public.
Power quality data will be useful to identify sources of uncertainty in power
performance prediction. They will also be valuable for the certification
processes of other technology developers.
This dataset will be the basis for at least one scientific publication.
## 6.3 STANDARDS AND METADATA
In order to ensure the required compatibility, this dataset will use the same
ocean data standards than the previous environmental monitoring dataset for
data and metadata capture/creation.
Additionally, regarding the wave energy application, the relevant standard is
the technical specification on electrical power quality requirements for
wave, tidal and other water current energy converters, IEC TS 62600-30 [14].
Further instructions on processing harmonic current components are given in
IEC 61000-4-7:2002 [15], for power supply systems and equipment connected
thereto.
A SCADA system will be developed that allows partners to access monitoring
information locally and remotely. UCC will be responsible for version control
and validation of datasets to be shared open access. Additional attention must
be given to integrating and combining the power quality datasets with others,
due to the largely varying timescales; one way to align such signals is
sketched below.
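A minimal pandas sketch, assuming a hypothetical 20 kHz voltage signal,
aggregating it onto a slower timebase before joining it with slower datasets
such as the environmental monitoring records:

```python
import numpy as np
import pandas as pd

# Hypothetical one-second voltage signal sampled at 20 kHz
fast = pd.Series(
    np.random.randn(20_000),
    index=pd.date_range("2017-06-01 12:00", periods=20_000, freq="50us"),
)

# Aggregate onto a slower 100 ms timebase (mean and standard deviation)
# so the fast signal can be joined with slower datasets.
slow = fast.resample("100ms").agg(["mean", "std"])
print(slow.head())
```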
## 6.4 DATA SHARING
Datasets will be stored and systematically organised in a database tailored to
comply with the requirements of WP1. An online data query tool will be
operational by Month 12 and open for dissemination by Month 18. The database
schema and the queryable fields will also be publicly available to the
database users as a way to better understand the database itself.
Full data access policy will be restricted to WP5 participants, in order to
protect the commercial and industrial prospects of exploitable results (ER1,
ER4 and ER6). However, aggregated data will be used in order to limit this
restriction.
The aggregated dataset will be disseminated as soon as possible. In the case
of the underlying data of a publication this might imply an embargo period for
green open access publications.
Data objects will be deposited in ZENODO under open access to data files and
metadata, permitting its use and reuse, as well as protecting privacy of its
users.
## 6.5 ARCHIVING AND PRESERVATION
The OPERA project database will be designed to remain operational for 5 years
after project end. By the end of the project, the final dataset will be
transferred to the ZENODO repository, which ensures sustainable archiving of
the final research data.
# 7\. OFFSHORE OPERATIONS
## 7.1 DATASET REFERENCE AND NAME

DS_Offshore_Operations

## 7.2 DATASET DESCRIPTION

The DS_Offshore_Operations datasets consist of failures, type of maintenance,
offshore resources (such as vessels, equipment, personnel, parts and
consumables), health & safety records, and an activity log.
Offshore Operations data will be observational and derived.
Field tests will be conducted at the BiMEP open sea test site. The offshore
operations data will be combined with the environmental monitoring dataset to
derive the final dataset. Full datasets will be made public.
Offshore operations data will be useful to reduce the uncertainty in the
determination of risk and cost of offshore operations, and to optimise these
activities. The offshore logistics experience can be extrapolated to different
scenarios of larger deployment, with a view to assessing the economies of
scale more accurately and identifying logistics bottlenecks when devices are
deployed in large arrays.
Although the raw datasets are useful by themselves, it is the objective of the
OPERA project to use the dataset as a basis for at least one scientific
publication.
## 7.3 STANDARDS AND METADATA
Unlike the previous datasets, these are not based on process instrumentation
and therefore will not be stored in the WP1 database. This dataset can be
imported from, and exported to a CSV, TXT or Excel file.
Failure data will be reported according to the Failure Reporting, Analysis and
Corrective Action System (FRACAS) [16] and ISO 14224:2006, Collection and
exchange of reliability and maintenance data for equipment [17].
The DataCite Metadata Schema [18] will be used for publication of the offshore
operations datasets. DataCite is a domain-agnostic list of core metadata
properties chosen for the accurate and consistent identification of data for
citation and retrieval purposes.
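A minimal sketch of such a record, restricted to the mandatory DataCite
properties, is given below; the DOI and publication year are placeholders
rather than an actual registration.

```python
# Minimal subset of the mandatory DataCite Metadata Schema properties.
# The DOI and publication year are placeholders, not a real registration.
datacite_record = {
    "identifier": {"identifierType": "DOI", "identifier": "10.5281/zenodo.XXXXXX"},
    "creators": [{"creatorName": "OPERA Consortium"}],
    "titles": [{"title": "DS_Offshore_Operations: offshore operations dataset"}],
    "publisher": "Zenodo",
    "publicationYear": "2019",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
}
print(datacite_record["identifier"]["identifier"])
```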
TECNALIA will be responsible for version control and validation of datasets to
be shared open access.
## 7.4 DATA SHARING
As described before, the datasets will be organised in files tailored to
comply with the requirements of WP6. The file structure will also be publicly
available to the data users as a way to better understand the files
themselves.
Only the aggregated dataset will be disseminated, in order to protect the
commercial and industrial prospects of exploitable results (ER1 and ER8). In
the case of the underlying data of a publication, this might imply an embargo
period for green open access publications.
Data objects will be deposited in ZENODO under open access to data files and
metadata, permitting its use and reuse, as well as protecting privacy of its
users.
## 7.5 ARCHIVING AND PRESERVATION
As described before, the OPERA project database will be designed to remain
operational for 5 years after project end. By the end of the project, the
final dataset will be transferred to the ZENODO repository, which ensures
sustainable archiving of the final research data.
# 1\. INTRODUCTION
## 1.1 OPERA MOTIVATION
The OPERA project participates in the Pilot on Open Research Data launched by
the European
Commission (EC) along with the H2020 programme. This pilot is part of the Open
Access to Scientific Publications and Research Data programme in H2020. The
goal of the programme is to foster access to research data generated in H2020
projects. The use of a Data Management Plan (DMP) is required for all projects
participating in the Open Research Data Pilot.
Open access is defined as the practice of providing on-line access to
scientific information that is free of charge to the reader and that is
reusable. In the context of research and innovation, scientific information
can refer to peer-reviewed scientific research articles or research data.
Research data refers to information, in particular facts or numbers collected
to be examined and considered, and as a basis for reasoning, discussion, or
calculation. In a research context, examples of data include statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings and images. The focus is on research data
that is available in digital form.
The Consortium strongly believes in the concepts of open science, and in the
benefits that the European innovation ecosystem and economy can draw from
allowing the reuse of data at a larger scale.
Furthermore, there is a need to gather experience in open sea operating
conditions, structural and power performance, and operating data in wave
energy. There has been very limited open sea experience in wave energy, yet
such experience is essential in order to fully understand the challenges in
device performance, survivability and reliability. The limited operating data
and experience that currently exist are rarely shared, as testing is partly
privately sponsored.
This project proposes to remove this roadblock by delivering, for the first
time, open-access, high-quality open sea operating data to the wave energy
development community.
Nevertheless, data sharing in the open domain can be restricted for
legitimate reasons, namely to protect results that can reasonably be expected
to be commercially or industrially exploited [1]. Strategies to limit such
restrictions will include anonymising or aggregating data, agreeing on a
limited embargo period, or publishing selected datasets.
## 1.2 PURPOSE OF THE DATA MANAGEMENT PLAN
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the Consortium with regard to the
project research data.
The DMP covers the complete research data life cycle. It describes the types
of research data that will be generated or collected during the project, the
standards that will be used, how the research data will be preserved and what
parts of the datasets will be shared for verification or reuse. It also
reflects the current state of the Consortium agreements on data management and
must be consistent with exploitation and IPR requirements.
The DMP is not a fixed document, but will evolve during the lifespan of the
project, particularly whenever significant changes arise such as dataset
updates or changes in Consortium policies.
This document is an update of the DMP that was delivered in Month 6 of the
project (D8.5), which included an overview of the datasets to be produced by
the project and the specific conditions attached to them. The current version
of the DMP goes into more detail and describes the practical data management
procedures implemented by the OPERA project, with reference to the IT tools
developed in WP1. The final version of the DMP will be delivered in Month 30
(D8.7).
This document has been produced following the EC guidelines for projects
participating in this pilot and the additional considerations described in
ANNEX I: KEY PRINCIPLES FOR OPEN ACCESS TO RESEARCH DATA.
## 1.3 RESEARCH DATA TYPES IN OPERA
The data types that will be produced during the project are based on the
Description of the Action (DoA) and the results obtained. Accordingly,
Table 1.1 lists the categories of research data that OPERA will produce.
These research data types have been mainly defined in WP1, including data
structures, sampling and processing requirements, as well as relevant
standards. This list may be adapted with the addition or removal of datasets
in the next versions of the DMP to take into consideration the project
developments. A detailed description of each dataset is given in the following
sections of this document.
### TABLE 1.1: OPERA TYPES OF DATA
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset category**
</th>
<th>
**Lead partner**
</th>
<th>
**Related WP(s)**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Environmental monitoring
</td>
<td>
TECNALIA
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Mooring performance
</td>
<td>
UNEXE
</td>
<td>
WP1, WP2, WP5
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Bi-radial performance
</td>
<td>
IST
</td>
<td>
WP1, WP3
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Power output
</td>
<td>
OCEANTEC
</td>
<td>
WP1, WP4, WP5
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Power quality
</td>
<td>
UCC
</td>
<td>
WP1, WP5
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Offshore operations
</td>
<td>
TECNALIA
</td>
<td>
WP6
</td> </tr> </table>
Specific datasets may be associated with scientific publications (i.e.
underlying data), public project reports, and other raw or curated data not
directly attributable to a publication. Datasets can be both collected,
unprocessed data and analysed, generated data. The policy for open access is
summarised in Figure 1.2 (research data options and timing).
Research data linked to exploitable results will not be put into the open
domain if doing so would compromise their commercialisation prospects or if
they have inadequate protection; this is an H2020 obligation. The rest of the
research data will be deposited in an open access repository.
When the research data is linked to a scientific publication, the provisions
described in ANNEX II: SCIENTIFIC PUBLICATIONS will be followed. Research data
needed to validate the results presented in the publication should be
deposited at the same time for “Gold” Open Access 1 or before the end of the
embargo period for “Green” Open Access 2 . Underlying research data will
consist of selected parts of the general datasets generated, and for which the
decision of making that part public has been made.
Other datasets will be related to public reports or be useful for the
research community. They will be selected parts of the general datasets
generated, or full datasets (e.g. up to 2 years of key operating data), and
will be published as soon as they become available.
## 1.4 ROLES AND RESPONSIBILITIES
Each OPERA partner has to respect the policies set out in this DMP. Datasets
have to be created, managed and stored appropriately and in line with
applicable legislation.
The Project Coordinator has a particular responsibility to ensure that data
shared through the OPERA website are easily available, but also that backups
are performed and that proprietary data are secured.
OCEANTEC, as WP1 leader, will ensure dataset integrity and compatibility for
its use during the project lifetime by different partners.
Validation and registration of datasets and metadata is the responsibility of
the partner that generates the data in the WP. Metadata constitutes an
underlying definition or description of the datasets, and facilitates finding
and working with particular instances of data.
Backing up data for sharing through open access repositories is the
responsibility of the partner possessing the data.
Quality control of these data is the responsibility of the relevant WP leader,
supported by the Project Coordinator.
If datasets are updated, the partner that possesses the data has the
responsibility to manage the different versions and to make sure that the
latest version is available in the case of publicly available data. WP1 will
provide naming and version conventions.
Last but not least, all partners must consult the concerned partner(s) before
publishing data in the open domain that can be associated to an exploitable
result.
# 2\. DATA COLLECTION, STORAGE AND BACK-UP
The OPERA project will generate data resulting from instrumentation recordings
during the lab testing and open-sea testing. In addition to the raw,
uncorrected sensor data, converted and corrected data, as well as several
other forms of derived data will be produced.
Instrumentation, data acquisition and logging systems are thoroughly described
in D1.1 [2]. A database management system will be used in the project to
create, read, update and delete data from a database. The software platform
being used is MySQL 5.7.9. A SCADA system will allow partners to access
monitoring information locally and remotely; a minimal sketch of writing to
the project database is given below.
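A minimal Python sketch, assuming the mysql-connector-python driver; the
connection details and the table and column names are hypothetical
placeholders, not the actual WP1 schema.

```python
import mysql.connector  # mysql-connector-python

# Connection details and table/column names are hypothetical placeholders.
conn = mysql.connector.connect(
    host="localhost", user="opera", password="secret", database="opera_wp1"
)
cur = conn.cursor()
cur.execute(
    "INSERT INTO wave_mutriku (ts, pressure_kpa) VALUES (%s, %s)",
    ("2017-06-01 12:00:00", 103.2),
)
conn.commit()  # make the inserted row durable
cur.close()
conn.close()
```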
The following sections describe the different datasets that will be produced
in the course of the project.
## 2.1 ENVIRONMENTAL MONITORING DATA
Environmental monitoring data will be collected at two locations, namely the
Mutriku shoreline plant and the open sea test site BiMEP. These numeric
datasets will be directly obtained through observations, and derived using
statistical parameters and models.
In general, environmental monitoring datasets will be useful for further
research activities beyond the scope of the OPERA objectives. Metocean
observations are common practice for different uses. Datasets could be
integrated and reused, particularly for the characterisation of the wave
resource and the estimation of device performance. They will also be valuable
for technology developers who plan to test their devices at either Mutriku or
BiMEP.
Although the raw datasets are useful by themselves, it is the objective of the
OPERA project to use the data as a basis for at least one scientific
publication.
A short description of the environmental monitoring datasets is given next.
### TABLE 2.1: WAVE RESOURCE AT MUTRIKU
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Wave_Mutriku
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Wave resource data 200 m off the shoreline plant. Main data are the
pressure fluctuations over time.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
RBR & Isurki Pressure Gauges
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* TXT for instrument recordings and derived data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Scilab program to transform pressure into wave height and period. Spectral
analysis software.
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
2 GB (6 months @ 2 Hz sampling frequency)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
Internal USB memory stick, on-site database server, and real-time
replication onto cloud-hosted database server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
Daily back-ups on both local and cloud-hosted servers. 15-day retention
period for incremental backups in the latter.
</td> </tr> </table>
### TABLE 2.2: WAVE RESOURCE AT BIMEP
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Wave_BiMEP
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
* Wave resource at 300 m up-wave of the WEC.
* Datasets mainly consist of wave parameters such as wave H s , T p , direction and spreading.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
TRIAXYS surface following buoy
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* TXT for instrument recordings
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
* 150 MB of statistical data (20 min x 2 years)
* 1 GB of real-time data (20 min x 2 Hz sampling frequency when real-time communications activated)
* 8 GB (2 years @ 2 Hz sampling frequency)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
Internal USB memory stick, on-site database server and real-time replication
onto cloud-hosted database server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
## 2.2 MOORING PERFORMANCE DATA
Experimental data will be collected at the DMAC facility at UNEXE [4]. In
addition, field tests will be conducted at the open sea test site at BiMEP.
Mooring performance data will be both experimental and observational, raw and
derived (using statistical parameters and models).

The mooring performance dataset will be useful to inform technology
compliance, survivability and reliability, as well as economic improvements.
It will also be valuable for the certification processes of other technology
developers. These data will be the basis for at least one scientific
publication.
A short description of the mooring performance datasets is given below.
### TABLE 2.3: TETHER LOADS AT DMAC
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Tethers_Lab
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Characterisation of design load behaviour, fatigue and durability of several
elastomeric tether specimens.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
DMAC facility
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Experimental
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
CSV (processed data)
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Labview, Optitrack Motive and Matlab
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
8.7 GB (50 Hz sampling frequency)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
Network storage
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
Network drive is backed up daily with two-disk fault tolerance (i.e.
backups are safe even if two disks fail). Backups are stored in a different
building and protected by a dedicated UPS.
</td> </tr> </table>
### TABLE 2.4: MOORING LOADS AT BIMEP
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Mooring_BiMEP
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
* Extreme loads and motion response to different sea states will be monitored.
* The loading data will be combined with the environmental monitoring dataset to derive the final mooring performance dataset.
* Comparison between the polyester lines and the elastomeric mooring tethers.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
* MARMOK-A-5 prototype.
* A mooring condition monitoring has been implemented for the project consisting of 4 load shackles deployed in two mooring nodes of the prototype.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* TXT for raw instrument recordings
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
≤ 400 GB (2.5 years recording x 16 measurements @ 20 Hz)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
## 2.3 BIRADIAL TURBINE PERFORMANCE DATA
Experimental data will be collected at the existing IST Turbomachinery
Laboratory (Dry Lab) for tests in varying unidirectional flow. Field tests
will also be conducted both at the Mutriku shoreline plant and the BiMEP open
sea test site. Bi-radial turbine performance data will be both experimental
and observational, raw and derived (using statistical parameters and models).
The bi-radial turbine performance dataset will be useful to assess turbine
efficiency and reliability. The loading data will be combined with the
environmental monitoring dataset to derive the final bi-radial turbine
performance dataset.
This dataset will be the basis for at least one scientific publication.
A short description of the bi-radial turbine performance datasets is given
below.
### TABLE 2.5: BI-RADIAL TURBINE PERFORMANCE AT DRY LAB FACILITY
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Biradial_Turbine_Lab
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Assess turbine performance through unidirectional steady-state and
alternating flow.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
* IST Turbomachinery Laboratory.
* Sensor data acquired at a frequency of 1 kHz for turbine pressure head, plenum temperature and humidity, turbine rotational speed, turbine flow rate and the instantaneous position of the flow control valve.
* The voltage and the current of the three AC phases at the input and output of the power electronics were acquired at a frequency of 62.5 kHz.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Experimental
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
Matlab “mat” files and comma separated value “csv” text files.
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
* Matlab (Experimental data acquisition)
* A special purpose parallelized C++ software (Data filtering),
* A software package written in the Julia language (Computation of the instantaneous and time-averaged turbine shaft power, electrical power and available pneumatic power)
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
320 GB
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
Local PC storage
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
Static data stored at three computers
</td> </tr> </table>
### TABLE 2.6: BI-RADIAL TURBINE PERFORMANCE AT MUTRIKU
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Biradial_Turbine_Mutriku
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Assess turbine performance and collect extensive data on drivers of
components fatigue such as high rpm and accelerations; electrical, temperature
and pressure load cycles; humidity in the cabinet (which exacerbates
electrical stress damage); rate of salt accumulation and corrosion.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
* Mutriku Wave Power Plant.
* Bi-radial turbine-generator set and chamber #9 have been instrumented for the project.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
≤ 50 GB (6-month recording x 150 measurements @ 4 Hz)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
### TABLE 2.7: BI-RADIAL TURBINE PERFORMANCE AT BIMEP
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Biradial_Turbine_BiMEP
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Internal water level, chamber pressure/temperature/humidity, rotation speed
and torque to assess turbine efficiency in response to different sea states to
compare turbine performance drivers of components fatigue
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
* MARMOK-A-5 prototype.
* Bi-radial turbine-generator set and hull structure have been instrumented for the project.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
≤ 100 GB (12-month recording x 150 measurements at 4 Hz)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
## 2.4 POWER OUTPUT DATA
Experimental data will be collected at the electrical test rigs of UCC [5] and
TECNALIA [6]. In addition, field test data will be collected at the Mutriku
shoreline plant and at the BiMEP open sea test site. Numerical models will
also be used to extend the dataset beyond sea-trials data; in that case,
specialist software may be needed for further processing of the data. Selected
parts of the datasets generated will be made public. Power output data will be
both experimental and observational, raw and derived, such as mean, standard
deviation, minimum and maximum values.
Power output data will be useful to identify sources of uncertainty in power
performance prediction and for the certification processes of other technology
developers.
This dataset will be the basis for at least one scientific publication. A
short description of the power output datasets is given below.
### TABLE 2.8: POWER OUTPUT AT ELECTRICAL TEST RIG
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Power_Output_Lab
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Generator speed, voltage, frequency and electric power.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Electrical test rigs of UCC and TECNALIA
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Experimental and Simulation
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
MS Excel
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
MATLAB numerical model of the Mutriku Wave Power Plant
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
20 GB (7 CLs x approx. 300 MB)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
Network storage
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
Daily back-ups
</td> </tr> </table>
### TABLE 2.9: POWER OUTPUT AT MUTRIKU
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Power_Output_Mutriku
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Generator speed, voltage, frequency and electric power, including phase
voltages & currents.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
* Mutriku Wave Power Plant.
* Bi-radial turbine-generator set and chamber #9 have been instrumented for the project.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
≤ 50 GB (6-month recording x 150 measurements @ 4 Hz)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
### TABLE 2.10: POWER OUTPUT AT BIMEP
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Power_Output_BiMEP
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Generator speed, voltage, frequency and electric power, including phase
voltages & currents.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
MARMOK-A-5 prototype. Bi-radial turbine-generator set and hull structure
have been instrumented for the project.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
≤ 100 GB (12-month recording x 150 measurements at 4 Hz)
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
On-site database server and real-time replication onto cloudhosted database
server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
## 2.5 POWER QUALITY DATA
Experimental data will be collected at the electrical test rig of UCC [10].
Field test data will also be collected at the Mutriku shoreline plant.
Simulation models may be used to assess power quality for other operating
conditions, such as varying control algorithms, resource conditions and grid
strengths, using a dry lab to create a wider profile for the WEC. Power
quality data will be both experimental and observational, raw and derived
(using statistical parameters and models). Selected parts of the experimental
datasets generated will be made public.
Power quality data will be useful to identify sources of uncertainty in
assessing the impact of the wave energy converter on the performance of the
grid. They will also be valuable for the certification processes of other
technology developers.
This dataset will be the basis for at least one scientific publication.
A short description of the power quality datasets is given next.
### TABLE 2.11: POWER QUALITY AT ELECTRICAL TEST RIG
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Power_Quality_Lab
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Current, voltage, power quality characteristic parameters (such as voltage
fluctuations, harmonics, inter-harmonics, active/reactive power, and flicker).
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Electrical test rig at UCC
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Experimental and Simulation
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
MS Excel
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
MATLAB Simulink numerical model of the Mutriku Wave Power Plant
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
  * Maximum 1.2 GB per 10-minute test (at 20 kHz sampling frequency).
  * 4 signals at 20 kHz for 10 minutes per test (a consistency check is given after this table).
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
Network storage
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
Daily back-ups
</td> </tr> </table>
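The two size bullets above can be reconciled with a short calculation; interpreting the implied ~25 bytes per sample as text-based (CSV/Excel) rather than packed binary storage is our assumption:

```python
# Consistency check of "1.2 GB per 10-minute test" for 4 signals at 20 kHz.
signals = 4
rate_hz = 20_000
duration_s = 10 * 60

samples = signals * rate_hz * duration_s     # 48 million samples per test
implied = 1.2e9 / samples                    # ~25 bytes per sample

# ~25 bytes/sample is far above a 4- or 8-byte binary value, which suggests
# (our assumption) text rows with timestamps rather than packed binary data.
print(f"{samples:,} samples -> {implied:.0f} bytes/sample implied")
```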
**TABLE 2.12: POWER QUALITY AT MUTRIKU**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Power_Quality_Mutriku
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Data will be collected from both a single turbine and the plant as a whole,
obtaining valuable conclusions about how aggregation of multiple turbines
affects the power quality.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Mutriku Wave Power Plant.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Observational and derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
* MySQL database for real-time data
* MS Excel for derived and filtered data
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
  * LabVIEW
* Statistical and spectral analysis software
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
  * > 200 GB (12-month recording × 12 measurements at 20 kHz).
  * Given the large data storage requirements, the measurements will be triggered rather than carried out continuously. After sufficient power quality analysis has been carried out at 20 kHz, the sampling rate will be reduced (to approximately 12 kHz and then 10 kHz; see the decimation sketch after this table).
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
On-site database server and real-time replication onto a cloud-hosted
database server
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
* Daily back-ups on both local and cloud-hosted servers.
* 15-day retention period for incremental backups in the latter.
</td> </tr> </table>
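The sampling-rate reduction mentioned above can be illustrated with a minimal sketch; the synthetic 50 Hz test signal and the use of SciPy are our assumptions, not part of the plant's actual acquisition chain:

```python
# Minimal sketch of reducing a 20 kHz power quality record to 12 kHz and
# 10 kHz; decimate/resample_poly low-pass filter before downsampling to
# avoid aliasing.
import numpy as np
from scipy.signal import decimate, resample_poly

fs_raw = 20_000
t = np.arange(0, 1.0, 1 / fs_raw)        # one second of synthetic data
x = np.sin(2 * np.pi * 50 * t)           # 50 Hz grid-frequency test tone

x_12k = resample_poly(x, 3, 5)           # 20 kHz -> 12 kHz (rational factor)
x_10k = decimate(x, 2)                   # 20 kHz -> 10 kHz (integer factor)
print(len(x), len(x_12k), len(x_10k))    # 20000 12000 10000
```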
## 2.6 OFFSHORE OPERATIONS DATA
Field tests will be conducted at the BiMEP open sea test site. The offshore
operations data will be combined with the environmental monitoring dataset to
derive the final dataset. Collected datasets will be made public. Offshore
operations data will be observational and derived.
Offshore operations data will be useful to reduce the uncertainty in the
determination of the risk and cost of offshore operations, and to optimise
these activities. The offshore logistics experience can be extrapolated to
different scenarios of larger deployment, with a view to assessing economies
of scale more accurately and identifying logistics bottlenecks in large-array
deployments.
Although the raw datasets are useful by themselves, it is the objective of the
OPERA project to use the dataset as a basis for at least one scientific
publication.
A short description of the offshore operations datasets is given below.
**TABLE 2.13: OFFSHORE OPERATIONS**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
DS_Offshore_Operations
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Failures, type of maintenance, offshore resources (such as vessels,
equipment, personnel, parts and consumables), health & safety, and activity
log.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Unlike the previous datasets, these are not based on process instrumentation
and therefore will not be stored in the WP1 database.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Observational
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
MS Excel
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
n/a
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
10 MB
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
Network storage
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
Daily back-ups on a separate server
</td> </tr> </table>
# 3\. DATA STANDARDS AND METADATA
The following standards should be used for data documentation:
* Ocean Data Standards Project [7] : it contains an extensive number of references on Oceanographic Data Management and Exchange Standards. It includes references on Metadata, Date and Time, Lat/Lon/Alt, Country names, Platform instances, Platform types, Science Words, Instruments, Units, Projects, Institutions, Parameters, Quality Assurance and Quality Control.
* ISO 19156:2011 [8] : it defines a conceptual schema for observations, and for features involved in sampling when making observations. These provide models for the exchange of information describing observation acts and their results, both within and between different scientific and technical communities.
* IEC TS 62600-101 [9] : technical specification for wave energy resource assessment and characterisation.
* DNVGL-OS-E301 [10] : it contains criteria, technical requirements and guidelines on design and construction of position mooring systems. The objective of this standard is to give a uniform level of safety for mooring systems, consisting of chain, steel wire ropes and fibre rope.
* IEC TS 62600-10 [11] : technical specification for assessment of mooring system for Marine Energy Converters (MECs).
  * IEC TS 62600-100 [12] : technical specification on power performance assessment of electricity producing wave energy converters.
  * IEC TS 62600-102 [13] : technical specification on wave energy converter power performance assessment at a second location using measured assessment data.
  * IEC TS 62600-30 [14] : technical specification on electrical power quality requirements for wave, tidal and other water current energy converters.
  * IEC 61000-4-7:2002 [15] : instructions on processing harmonic current components for power supply systems and equipment connected thereto.
  * FRACAS [16] : the Failure Reporting, Analysis and Corrective Action System.
  * ISO 14224:2006 [17] : collection and exchange of reliability and maintenance data for equipment.
Metadata records will accompany the data files in order to describe and
contextualise the data and to help external users understand and reuse it.
OPERA will adopt the DataCite Metadata Schema [18] , a domain-agnostic
metadata schema, as the basis for harvesting and importing metadata about
datasets from data archives. The core mission of DataCite is to build and
maintain a sustainable framework that makes it possible to cite data through
the use of persistent identifiers.
The following metadata should be created to identify datasets (a minimal
example record is sketched after this list):
* Identifier: A unique string that identifies the dataset
* Author/Creator: The main researchers involved in producing the data in priority order
* Title: A name or title by which a data is known
  * Publisher: The name of the entity that holds, archives, publishes, prints, distributes, releases, issues, or produces the data.
  * Publication Year: The year when the data was or will be made publicly available
  * Subject: Subject, keyword, classification code, or key phrase describing the resource.
* Contributor: Name of the funding entity (i.e. "European Union" & "Horizon 2020")
* Size: Unstructured size information about the dataset (in GBs)
  * Format: Technical format of the dataset (e.g. csv, txt, xml, ...)
* Version: The version number of the dataset
* Access rights: Provide a rights management statement for the dataset. Include embargo information if applicable
* Geo-location: Spatial region or named place where the data was gathered
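To make the list above concrete, the following illustrative record uses field names loosely modelled on the DataCite schema; all values are placeholders, not real project metadata:

```python
# Hypothetical dataset record covering the fields listed above.
dataset_metadata = {
    "identifier": "10.5281/zenodo.0000000",           # placeholder DOI
    "creators": ["Surname, Name"],
    "title": "DS_Power_Output_Mutriku",
    "publisher": "OPERA project consortium",
    "publicationYear": 2019,
    "subjects": ["wave energy", "power output", "OWC"],
    "contributors": ["European Union", "Horizon 2020"],
    "sizes": ["50 GB"],
    "formats": ["csv", "xlsx"],
    "version": "1.0",
    "rights": "CC-BY-ND",
    "geoLocations": ["Mutriku, Basque Country, Spain"],
}
```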
# 4\. DATA SHARING AND REUSE
During the life cycle of the OPERA project, datasets will be stored and
systematically organised in a database tailored to comply with the
requirements of WP1 (for more details on the database architecture, please see
D1.1 Process instrumentation definition [2] ). An online data query tool was
operational in Month 12 and available for open dissemination by Month 18. The
database schema and the queryable fields will also be made publicly available
to database users as a way to better understand the database itself.
In addition to the project database, relevant datasets will also be stored in
ZENODO [19] , which is the open access repository of the Open Access
Infrastructure for Research in Europe, OpenAIRE [20] .
All collected datasets will be disseminated without an embargo period unless
linked to a green open access publication. Data objects will be deposited in
ZENODO under:
  * Open access to data files and metadata, with data files provided over standard protocols such as HTTP and OAI-PMH (a harvesting sketch is given below).
  * Use and reuse of data permitted.
  * Privacy of its users protected.
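As a minimal sketch of what harvesting over OAI-PMH could look like, the following queries Zenodo's OAI-PMH endpoint; the community set name `user-opera` is an assumption:

```python
# Harvest Dublin Core metadata records from Zenodo over OAI-PMH.
import requests
import xml.etree.ElementTree as ET

resp = requests.get(
    "https://zenodo.org/oai2d",
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc",
            "set": "user-opera"},          # assumed community set name
    timeout=30,
)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Print the Dublin Core title of every harvested record.
for title in root.iter("{http://purl.org/dc/elements/1.1/}title"):
    print(title.text)
```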
Data access policy is summarised in the following table.
## TABLE 4.1: DATA ACCESS POLICY
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Data access policy**
</th> </tr>
<tr>
<td>
DS_Wave_Mutriku
</td>
<td>
* Unrestricted since no confidentiality or IPR issues are expected regarding the environmental monitoring datasets
* Licence: CC-BY
</td> </tr>
<tr>
<td>
DS_Wave_BiMEP
</td> </tr>
<tr>
<td>
DS_Tethers_Lab
</td>
<td>
* Restricted to WP2 participants, in order to protect the commercial and industrial prospects of exploitable results (KER1 and KER3).
  * Samples of aggregated data (e.g. load averages or extreme load ranges) will be shared in the open domain for the most relevant sea states.
  * Licence: CC-BY-ND
</td> </tr>
<tr>
<td>
DS_Mooring_BiMEP
</td> </tr>
<tr>
<td>
DS_Biradial_Turbine_Lab
</td>
<td>
* Restricted to WP3 participants, in order to protect the commercial and industrial prospects of exploitable results (KER1 and KER2).
* Samples of aggregated data (e.g. chamber pressure, air flow, mechanical power) will be shared in the open domain.
* Licence: CC-BY-ND
</td> </tr>
<tr>
<td>
DS_Biradial_Turbine_Mutriku
</td> </tr>
<tr>
<td>
DS_Biradial_Turbine_BiMEP
</td> </tr>
<tr>
<td>
DS_Power_Output_Lab
</td>
<td>
* Restricted to WP4 and WP5 participants, in order to protect the commercial and industrial prospects of exploitable results (KER1, KER4 and KER6).
* Samples of aggregated data (e.g. electric power for the different control laws) will be shared in the open domain.
* Licence: CC-BY-ND
</td> </tr>
<tr>
<td>
DS_Power_Output_Mutriku
</td> </tr>
<tr>
<td>
DS_Power_Output_BiMEP
</td> </tr>
<tr>
<td>
DS_Power_Quality_Lab
</td>
<td>
* Restricted to WP5 participants, in order to protect the commercial and industrial prospects of exploitable results (KER1, KER4 and KER6).
* Samples of aggregated data (e.g. active, reactive power and power factor) will be shared in the open domain.
* Licence: CC-BY-ND
</td> </tr>
<tr>
<td>
DS_Power_Quality_Mutriku
</td> </tr>
<tr>
<td>
DS_Offshore_Operations
</td>
<td>
  * Only the aggregated dataset (e.g. operation time, forecast vs recorded wave conditions) will be shared in the open domain, in order to protect the commercial and industrial prospects of exploitable results (KER1 and KER8).
  * Licence: CC-BY-NC-ND
</td> </tr> </table>
# 5\. DATA ARCHIVING AND PRESERVATION
The OPERA project database will be designed to remain operational for 5 years
after project end. By the end of the project, the final dataset will be
transferred to the ZENODO repository, which ensures sustainable archiving of
the final research data.
Items deposited in ZENODO will be retained for the lifetime of the repository,
which currently matches the lifetime of the host laboratory, CERN, whose
experimental programme is defined for at least the next 20 years. Data files
and metadata are backed up on a nightly basis, as well as replicated in
multiple copies in the online system. All data files are stored along with an
MD5 checksum of the file content. Regular checks of files against their
checksums are made.
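As a minimal illustration of the fixity check described above (recomputing each file's MD5 checksum and comparing it with the stored value), assuming nothing about Zenodo's internal tooling:

```python
# Recompute and verify MD5 checksums for stored data files.
import hashlib
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large datasets need not fit in memory."""
    digest = hashlib.md5()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: Path, stored_checksum: str) -> bool:
    """True if the file still matches the checksum recorded at deposit."""
    return md5sum(path) == stored_checksum
```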
**3 MIND Data-sets**
The MIND Data Management Plan (DMP) has three main purposes:
1. To support the project partners when publishing the datasets generated by the project.
2. Support the Coordinator in assuring the availability of the data generated by the project.
3. Assist external parties in analysing the work performed and accessing the data generated by the project.
To ensure that all three purposes are fulfilled, we have standardized the
information that needs to be given regarding each of the datasets generated by
the project.
The project partner responsible for producing a dataset will fill out the two
tables shown below, which will be added as individual sub-chapters to chapter
3 of this document (3.1, 3.2, 3.3 etc.). Assistance in doing this will be
available through both the work package managers and the coordinators.
<table>
<tr>
<th>
**Metadata**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Dataset reference and/or name**
</td>
<td>
Unique name identifying the dataset. Identifier should start with EU-
MIND2020-WPx where “x” is the relevant work package number, followed by a
three digit number.
Example: “ _EU-MIND2020-WP4-001_ ” (a validation sketch is given after this
table).
</td> </tr>
<tr>
<td>
**MIND Datatype**
</td>
<td>
Choose one or more of the relevant data types: Experimental Data,
Observational Data, Raw Data, Derived Data, Physical Data (samples), Models,
Images and Protocols. Alternatives are further described in Appendix 1.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Source of the data. Reference should include work package number, task number
and the main project partner or laboratory which produced the data.
</td> </tr> </table>
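As a small sketch, the naming convention above can be validated mechanically; allowing multi-digit work package numbers is our assumption:

```python
# Validate MIND dataset identifiers such as "EU-MIND2020-WP4-001".
import re

DATASET_ID = re.compile(r"^EU-MIND2020-WP\d+-\d{3}$")

assert DATASET_ID.match("EU-MIND2020-WP4-001")
assert not DATASET_ID.match("EU-MIND2020-WP4-1")   # number must be 3 digits
```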
<table>
<tr>
<td>
**Dataset description**
</td>
<td>
Provides a brief description of the data set along with the purpose of the
data and whether it underpins a scientific publication. This should allow
potential users to determine if the data set is useful for their needs.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Provides a brief description of the relevant standards used and lists relevant
metadata in accordance with the description in Appendix 1. The usage of the
Directory Interchange Format is optional.
</td> </tr>
<tr>
<td>
**Science Keywords**
</td>
<td>
List relevant scientific key words to ensure that the data can be efficiently
indexed so others may locate the data.
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Description of how data will be shared both during and after the MIND2020
project. Include access procedures, embargo periods (if any), outlines of
technical mechanisms for dissemination and necessary software and other tools
for enabling re-use, and definition of whether access will be widely open or
restricted to specific groups. Information should include a reference to the
repository where data will be stored. In case the dataset cannot be shared,
the reasons for this should be mentioned (e.g. ethical, rules of personal
data, intellectual property, commercial, privacy-related, security-related).
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Description of the procedures that will be put in place for long-term
preservation of the data.
</td> </tr> </table>
# 3.1 “Dataset one”
<table>
<tr>
<th>
**Metadata**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Dataset reference and/or name**
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
…
</td> </tr> </table>
# 3.2 “Dataset two”
<table>
<tr>
<th>
**Metadata**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Dataset reference and/or name**
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
…
</td> </tr> </table>
# APPENDIX A: MIND2020 DATA TYPES
## 1 Experimental Data
### 1.1 Dataset description
The experimental data originate from measurements performed in a laboratory
environment, be it _in situ_ or _ex situ_ . The data comprise point or
continuous numerical measurements (e.g. pH, temperature, microbial counts,
dose measurements).
The data will be collected either on a sample basis (sampling an experiment at
a certain point in time) or on an experiment scale (without sampling the
experimental set-up). Data can be derived from either destructive or
preservative analyses. Experimental data collection can occur automatically or
manually, and will be available in a digital or a hard copy format. In the
case of the latter, experimental data will first be copied to e.g. a lab book
and then digitized.
Experimental data are supposed to be unique, in the way that new experiments
will be set up, producing fresh data. In some cases, similar data will be
available from previous/other experiments within the project, within the
partners’ institution or from overlapping projects, allowing comparison and
integration of the newly obtained data.
Experimental data will be used in downstream statistical analyses (hypothesis
testing, correlations, etc.), interpretations, quantifications and modelling
approaches.
### 1.2 Standards and metadata
Experimental data are obtained using standardized laboratory techniques which
are calibrated when applicable. Positive and negative controls are used and
standards, internal or external, are introduced.
Metadata (summary information) can optionally be provided according to a
Directory Interchange Format (DIF). A DIF allows users of data to understand
the contents of a dataset and contains those fields which are necessary for
users to decide whether a particular dataset would be useful for their needs.
## 2 Observational Data
### 2.1 Dataset description
Observational research (or field research) is a type of correlational (i.e.,
non-experimental) research in which a researcher observes ongoing behaviour.
### 2.2 Standards and metadata
The metadata for observational data should include any standards used and the
necessary information so that an external researcher has the possibility to
analyse how the data was gathered.
## 3 Raw Data
### 3.1 Dataset description
_Raw data_ are primary data collected from a source, not subjected to
processing or any other manipulation.
Raw data are derived from a source, including analysis devices like a
sequencer, spectrometer, chromatograph etc. In most cases, raw data are
digitally available. In some cases (e.g. sequencing), the raw data will be
very extensive datasets.
Raw data has the potential to become information after extraction,
organization, analysis and/or formatting. It is therefore used as input for
further processing.
### 3.2 Standards and metadata
Raw data are obtained using standardized laboratory techniques which are
calibrated when applicable. Positive and negative controls are used and
standards, internal or external, are introduced.
Metadata should at least include standards, techniques and devices used.
Metadata can optionally be provided according to a DIF. A DIF allows users of
data to understand the contents of a dataset and contains those fields which
are necessary for users to decide whether a particular dataset would be useful
for their needs.
## 4 Derived Data
### 4.1 Dataset description
Derived data are the output of the processing or manipulation of raw data.
Derived data originate from the extraction, organization, analysis and/or
formatting of raw data, in order to derive information from the latter. In
most cases, derived data are digitally available, as are the raw data. Derived
data will allow for the interpretation of laboratory experiments, e.g. through
statistical analysis or bioinformatics processing.
### 4.2 Standards and metadata
Manipulation of data will be performed using a ‘scientific code of conduct’,
i.e. maintaining scientific integrity and therefore not falsifying the output
or its representation.
Metadata should include any standard or method or best practice used in the
analysis. Metadata can optionally be provided according to a Directory
Interchange Format. A DIF allows users of data to understand the contents of a
dataset and contains those fields which are necessary for users to decide
whether a particular dataset would be useful for their needs.
## 5 Physical Data (samples)
### 5.1 Dataset description
Physical data are samples that have been produced by an experiment or taken
from a given environment. Sampling of an environment or experiment is
performed in order to obtain information through analyses. As such,
experimental, raw and or derived data will be obtained from physical data.
When the analyses are destructive, the samples cannot be stored for later use.
When the analyses are preservative, samples can be stored for later use, but
only for a limited time. Environmental samples will primarily be samples from
the Underground Research Facilities and analogue sites.
### 5.2 Standards and metadata
When sampling an environment or experiment, blank samples are taken as well,
as a reference. In case of microbiological samples, a blank can be a non-
inoculated experiment.
Metadata should include description of the origin of the sample, age,
processing, storage conditions and expected viability of the sample (as some
sets of samples can only be stored for a limited time, due to their nature).
## 6 Models
### 6.1 Dataset description
Representation or simplified version of a concept, phenomenon, relationship,
structure or system used for facilitating understanding by eliminating
unnecessary components.
### 6.2 Standards and metadata
References and metadata should include existing standards of the discipline
used, tools used in the modelling and focus of the modelling.
## 7 Images
### 7.1 Dataset description
Imaging data are optical semblances of physical objects.
Objects of macro- and microscopic scale can be imaged in a variety of ways
(e.g. photography, electron microscopy), enabling the optical appearance to be
captured for later use or for sharing. When required, the optical appearance
can be magnified (e.g. microscopy) and manipulated to enable the
interpretation of the objects (mostly samples from an environment or
experiment). Imaging data support the interpretation of other data, like
experimental data. Some imaging data will be raw data (3.3), which need to be
derived through image processing to enable interpretation.
### 7.2 Standards and metadata
Advanced imaging devices are calibrated to ensure proper visualization.
Metadata which are provided are time of imaging, device settings and
magnification/scale when appropriate. In addition, metadata will be provided
about the object that is being imaged.
## 8 Protocols
### 8.1 Dataset description
A protocol is a predefined written procedural method in the design and
implementation of experiments or sampling. In addition to detailed procedures
and lists of required equipment and instruments, protocols often include
information on safety precautions, the calculation of results and reporting
standards, including statistical analysis and rules for predefining and
documenting excluded data to avoid bias.
### 8.2 Standards and metadata
Protocols enable standardization of a laboratory method to ensure successful
replication of results by others in the same laboratory or by partners’
laboratories.
Metadata for Protocols should include the purpose of the protocols, references
to standards and literature.
# OPENAIRE2020 INITIAL DATA MANAGEMENT PLAN
2.1 Content collected from providers
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Data set description**
</th>
<th>
**Standards and metadata**
</th>
<th>
**Data sharing**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
publications (XML records)
</td>
<td>
Records are collected from institutional/thematic repositories, journals or
aggregators of these. The records match their original data models and are
therefore heterogeneous.
</td>
<td>
Dublin Core
</td>
<td>
Not public at this stage of processing; OpenAIRE must keep synchronized with
data providers and respect their will of visibility (i.e. all records in
OpenAIRE must be available from one of the data providers). The data is not
shared as it is not in the mission of OpenAIRE to do so; the same records can
be collected from the original data sources.
</td>
<td>
No need to preserve the data, which can be recollected any time from the
original data sources.
</td> </tr>
<tr>
<td>
datasets (XML records)
</td>
<td>
Records are collected from data repositories or aggregators of these. The
records match their original data models and are therefore heterogeneous.
</td>
<td>
DataCite
</td>
<td>
Same policy as for publications records.
</td>
<td>
Same policy as for publications records.
</td> </tr>
<tr>
<td>
projects (XML records)
</td>
<td>
Records are collected from so-called entity registries, which are data sources
providing and maintaining the authoritative list of project entities (e.g.
CORDA, WellcomeTrust, FCT Portugal). The records match their original data
models and are therefore heterogeneous.
</td>
<td>
Proprietary
</td>
<td>
Same policy as for publications records.
</td>
<td>
Same policy as for publications records.
</td> </tr>
<tr>
<td>
CRIS metadata (XML records)
</td>
<td>
Records are collected from CRIS systems and regard publications, datasets,
persons, and projects. The records match their original data models and are
therefore heterogeneous.
</td>
<td>
CERIF-XML OpenAIRE
</td>
<td>
Same policy as for publications records.
</td>
<td>
Same policy as for publications records.
</td> </tr>
<tr>
<td>
full-texts (files of publications relative to metadata records collected in
OpenAIRE)
</td>
<td>
Files of publications relative to metadata records collected in OpenAIRE.
Files are collected only if the data sources give the permission to do so.
</td>
<td>
PDF, XML, HTML
</td>
<td>
Files are used for the purpose of mining only and are never distributed to
OpenAIRE end-users or third-party services, in agreement with content
providers. Files are not shared due to agreements with the data sources, which
provide the files for the purpose of mining/inference but want to remain the
reference for downloads by users or services world-wide ("get the hits").
</td>
<td>
Regular backups only.
</td> </tr> </table>
2.2 Content collected from OpenAIRE users
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Data set description**
</th>
<th>
**Standards and metadata**
</th>
<th>
**Data sharing**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
Claims (relationships between publications/datasets and
projects/datasets/publications)
</td>
<td>
Relationships between publications/datasets and
projects/datasets/publications. Such relationships are provided by authorized
(logged-in) end-users and are meant to enrich or fix the OpenAIRE information
space (aggregated information, enhanced by inference processing).
</td>
<td>
The metadata created are relationships between publications and projects,
datasets and projects, publications and datasets. The relationships are XML
records, following an internally defined schema.
</td>
<td>
Not public at this stage of processing. Relationships alone do not make any
sense; they will be openly shared and accessed once integrated in the OpenAIRE
information space.
</td>
<td>
Backups of the DB that contains the relationships are kept on a regular basis.
The OpenAIRE infrastructure is the keeper of this information and must
preserve it over time. Preservation is ensured by ICM data centre preservation
policies.
</td> </tr> </table>
2.3 Generated and aggregated contents
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Data set description**
</th>
<th>
**Standards and metadata**
</th>
<th>
**Data sharing**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
Disambiguated objects (similarity relationships between publication, author
and organization objects)
</td>
<td>
Generated for internal purposes.
</td>
<td>
Proprietary internal representation.
</td>
<td>
Not public at this stage of processing.
</td>
<td>
These data can be regenerated.
</td> </tr>
<tr>
<td>
Data produced by the OpenAIRE mining system IIS
</td>
<td>
Origin: all this data is ingested from internal OpenAIRE databases
(Information Space) and eventually lands in internal databases (Information
Space). Nature: the data consists of information extracted from scientific
documents. Scale: terabytes of data. Existence of similar data: similar data
is provided by other systems that allow for exploring artefacts of scholarly
communication, e.g. Google Scholar, Microsoft Academic Search, ArnetMiner,
CiteSeerX. The inference system enriches the existing metadata of the
documents available in OpenAIRE's Information Space with inferred information:
(1) available at least in the upcoming version of the beta instance of the
portal (new.openaire.eu): citation links between documents, similarity
relationships between documents, classification labels attached to documents
(labels such as “chemistry”, “medicine”, “electrochemistry”, “legal”), links
from documents to the projects that funded these documents, links from
documents to their so-called EGI contexts, and links from documents to data
sets cited by these documents; (2) available inside IIS but not yet integrated
with the rest of the OpenAIRE system: affiliations of the authors of
scientific documents, references from documents to Protein Database entries
corresponding to proteins mentioned in these documents; (3) planned:
references to some other biomedical databases.
</td>
<td>
The data conforms to schemas developed internally. The schemas are described
using the Avro Interface Description Language.
</td>
<td>
The data is ingested internally by other OpenAIRE systems and eventually
presented to the user through the portal openaire.eu or through some
machine-readable APIs. The volume of produced data is of the order of
terabytes.
</td>
<td>
These data can be regenerated. The system's philosophy is not to store any
data: it ingests data from the OpenAIRE Information Space, processes it, and
produces data that is then exported back to the OpenAIRE Information Space.
</td> </tr>
<tr>
<td>
Information Space (object/knowledge graph of metadata, formed by aggregating
content collected from providers and from users and enriching it with content
we generated)
</td>
<td>
Generated for internal purposes.
</td>
<td>
The Information Space assumes several manifestations: HBASE internal
representation; XML files on HDFS; XML files on the OAI-PMH Publisher;
statistics in a relational database.
</td>
<td>
Publicly and openly available from the web portal (www.openaire.eu) and the
APIs (api.openaire.eu): OAI-PMH, REST search, LOD (to be implemented).
</td>
<td>
These data can be regenerated.
</td> </tr> </table>
2.4 Content published openly through the portal and the API
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Data set description**
</th>
<th>
**Standards and metadata**
</th>
<th>
**Data sharing**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
metadata, original and enriched by cleaning and mining (publications,
datasets, projects, people, organizations, datasources)
</td>
<td>
Origin, nature and scale: the content we publish in the portals and APIs is
the content that we collect from providers and from users, and the one we
process and generate. These are described in detail above, so the origin,
nature and scale are directly connected to the origin, nature and scale of
that content. It is useful to users who are interested in the scientific
results of funded research (e.g. researchers, project officers, funders...)
and it also promotes the OA initiative. The data can be reused through the API
and is already used by the EC Participant Portal, the EC CORDIS portal, and
various projects to present their publications in their pages, etc.; for more
information consult
https://www.openaire.eu/index.php?option=com_content&view=article&id=719:api&catid=61:newsletter-items
</td>
<td>
XML, JSON, CSV, TSV, HTML. The oaf schemata are internally developed and are
for the XML records:
https://www.openaire.eu/schema/0.2/oaf-result-0.2.xsd,
https://www.openaire.eu/schema/0.2/oaf-project-0.2.xsd,
https://www.openaire.eu/schema/0.2/oaf-person-0.2.xsd,
https://www.openaire.eu/schema/0.2/oaf-org-0.2.xsd,
https://www.openaire.eu/schema/0.2/oaf-datasource-0.2.xsd.
JSON records are simple interpretations of the XML format in JSON format. CSV,
TSV and HTML records contain subsets of the metadata of the XML records. We
have a special XML schema between OpenAIRE and the European Commission for
publications and project records; these also contain a subset of the metadata
of the oaf schemata.
</td>
<td>
Publicly and openly available from the web portal (www.openaire.eu) and the
APIs (api.openaire.eu): OAI-PMH, HTTP API (a query sketch is given after this
table).
</td>
<td>
The portal and the APIs are the available systems for searching and accessing
publications' metadata. The management of the metadata is part of the
underlying systems.
</td> </tr>
<tr>
<td>
users
</td>
<td>
This information comes directly from the users, upon signing up to the
OpenAIRE portal and through the possibility of editing their personal data.
</td>
<td>
As the portal is based on the Joomla CMS, the users are stored following the
schema that Joomla has defined.
</td>
<td>
User personal data are not disclosed to Third Party Entities and are only
aggregated and used within the context of the OpenAIRE2020 Project. Any
personal data published voluntarily by the user on the web application will be
visible to other users of the web application. User activity is monitored and
collected for internal usage statistics and evaluation of the portal services.
Passwords are kept encrypted for security reasons and are secret to all users,
including administrators.
</td>
<td>
The OpenAIRE platform uses the provided user data for authentication and for
offering controlled access to the online services. Backups of the DB that
contains the user metadata are kept on a regular basis. Access to the DB that
contains user metadata is open only to the administrators of the platform. The
OpenAIRE infrastructure is the keeper of this information and must preserve it
over time. Preservation is ensured by ICM data centre preservation policies.
</td> </tr>
<tr>
<td>
articles
</td>
<td>
The articles are authored by registered users with a special role. These
articles contain information related to Open Access and OpenAIRE2020
project-related topics.
</td>
<td>
HTML. As the portal is based on the Joomla CMS, the articles are stored in the
DB that is provided by Joomla, following the schema that Joomla has defined
for articles.
</td>
<td>
Publicly and openly available from the web portal (www.openaire.eu). Possibly
shared on third party social sites (Facebook, Twitter, etc.) through social
sharing.
</td>
<td>
Backups of the DB that contains the articles are kept on a regular basis. The
OpenAIRE infrastructure is the keeper of this information and must preserve it
over time. Preservation is ensured by ICM data centre preservation policies.
</td> </tr> </table>
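As a hedged sketch of the HTTP API mentioned in the data sharing column (api.openaire.eu), assuming the publicly documented search endpoint and parameter names:

```python
# Query the OpenAIRE HTTP search API for publication metadata (XML).
import requests
import xml.etree.ElementTree as ET

resp = requests.get(
    "http://api.openaire.eu/search/publications",
    params={"keywords": "open access", "size": 5},   # assumed parameter names
    timeout=30,
)
resp.raise_for_status()
root = ET.fromstring(resp.content)
for title in root.iter("title"):
    print(title.text)
```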
# Data sharing
1. Survey Monkey staff data are securely stored in the CRG professional Dropbox, and access is regulated through IDs and passwords. Following feedback from the evaluator during the first review meeting, we decided not to make the collected data publicly available. The criticism concerned the survey questions and the potential ambiguity of the collected answers.
2. Project Implicit pre-analysed the data and shared them with the CRG Project Manager. Project Implicit is only allowed to store and pre-analyse the data; further analysis and utilisation of these data is the responsibility of LIBRA. The LIBRA coordinator is collaborating with José García-Montalvo (UPF) to analyse the data in detail. An agreement for sharing the data has been signed between CRG and UPF. Once the data analysis is published, the underlying data will be made publicly available in Zenodo (unfortunately after the end of the project). Raw data sets cannot be entirely shared with the public in the original format since in some cases individuals could be identified through the combination of profile data (e.g. a female PI researcher in an institute where there are only few female PIs). Thus, data will be assessed carefully before being made public, always guaranteeing anonymity.
3. Data from each LIBRA IO will be analysed by ASDO and reports based on the data are shared with the consortium.
# Data Archiving and Preservation
The raw data will be archived for 20 years, and the intermediate data will be
preserved for at least 2 more years after the end of the project, at CRG’s
data infrastructure.
There are no associated costs for archiving the raw and intermediate data at
the CRG infrastructure.
# Data Management Policy at ASDO for institutional data from LIBRA
In the framework of WP1 (Initial assessment) and WP7 (Monitoring and
evaluation), data have been collected and managed as follows.
## 1\. Data origin
Data used for WP1 and WP7 pertaining to the implementing organisations (IOs)
came from the following sources:
* Data from scientific and policy literature
* Official IO documents
* Institutional IO websites
* Internal documents provided by the IOs
* Direct participation in meetings and events organised by the IOs
  * Monitoring sessions through at-distance interviews with members of the LIBRA Team
  * Monitoring sessions carried out through face-to-face meetings with members of the LIBRA Team
* Focus groups conducted with representatives of the IOs
* Interviews with representatives of the IOs
### 1.2. Data collection purposes
Data have been gathered with the aims of:
* As for WP1 (Initial assessment)
* Developing the basis for the Gender Equality Plans (GEPs) of the 10 IOs
* Developing a picture of gender arrangements at each IO
* Providing the basis of higher-level comparisons and benchmarking
* Providing an information basis for designing and implementing WPs 3–6
* As for WP7 (Monitoring and evaluation)
* Overseeing the flow of actions in the GEPs (to verify progress, see how well it aligns with the expected results and impacts, and to monitor any problems arising)
* Controlling the compliance with GEP deadlines
* Determining the main obstacles during implementation and facilitate their mitigation.
### 1.3. Data types, utility and public availability
All the collected data have been used for developing two deliverables, i.e.,
D1.3 “Diagnostic report of the IOs, including relevant resources” and D7.3
“Mid-term report on monitoring and assessment at IO and WP level”.
As for the **types of data** , apart from written sources already publicly
available or made available by the IOs, they include:
* Electronic questionnaires filled by representatives of the IOs where information (also of statistical nature) were asked;
* Audio-recordings of face-to-face activities (focus groups, monitoring sessions, part of the interviews conducted with representatives of IOs), stored in MP3 format
* Audio-recordings of the at-distance monitoring sessions, stored in MP3 format
* Anonymised transcriptions (text files) of the audio recordings
* Anonymised transcriptions (text files) of handwritten notes of interviews.
As for the **utility** of these data, they have been used only for the
purposes described above and exclusively in the framework of the project.
As for the **public availability** of these data, the following issues are to
be considered:
* Both D1.3 and D7.3 are confidential and cannot be (and will be not) made publicly available
  * In both reports, personal data and opinions have been anonymised, and any personal features of the interviewee/participant are not mentioned where this could lead to their identification; this is not the case for GEP team leaders, who have been identified as such, even though their names have not been mentioned
* Original recordings and their transcription, presently stored by ASDO, are not and will not be made publicly available due to privacy and data protection reasons
* Original recordings will be deleted after five years from the final payment of the project, when the obligation of keeping all project documentation to the aim of a possible audit will be expired.
# Data storage security at Survey Monkey
In this Annex we address the question about “What arrangements or assurances
have been made or provided from Survey Monkey for the data that is stored on
their servers. Are they in compliance with EU law in this area?”
As described in deliverable D9.2 (Data Management Plan), LIBRA used the Survey
Monkey platform to run the survey about staff perceptions of gender equality
related topics. The deliverable D9.2 was submitted in April 2016 and approved
during the first REA-organised project evaluation in February 2017 as
compliant with the European Data Protection Directive (Directive 95/46/EC on
the protection of individuals with regard to the processing of personal data).
We ran the survey during the first half of 2016 on the institutional Survey
Monkey account owned by the BI team. The account holder is responsible for the
management of the collected data. After closing the survey, BI downloaded the
data from the Survey Monkey server to the BI servers. **All data were deleted
from the Survey Monkey servers** . All raw data were transferred to the CRG
servers to be archived as foreseen in the Data Management Plan and then
deleted from the BI servers.
Also after the enforcement of the **General Data Protection Regulation (EU)
2016/679 (GDPR)** in May 2018, Survey Monkey remains compliant with EU law
_as stated on their website_ (
_https://help.surveymonkey.com/articles/en_US/kb/surveymonkey-gdpr#requests_ )
. Cited from the website:
_Under GDPR, EU data subjects are entitled to exercise the following rights:_
_Right of Access: Find out what kind of personal information is held about you
and get a copy of this information._
_Right of Rectification: Ask for your information to be updated or corrected._
_Right to Data Portability: Receive a copy of the information you've provided
under contract so you can provide it to another organization._
_Right to Restrict Use: Ask for your personal information to stop being used
in certain cases, including if you believe that the personal information about
you is incorrect or the use is unlawful._
_Right to Object: Object to use of your information where a party is
processing it on legitimate interest basis, and object to have your personal
information deleted._
_Right to Erasure (also known as Right to be Forgotten): Request that your
personal information be deleted in certain cases._
Personal Data requests (such as right to access, right to erase, right to be
forgotten, or others) can be submitted by e.g. the account holder or
respondents:
_https://help.surveymonkey.com/contact?form=GDPR_
# INTRODUCTION
This document is the Data Management Plan (DMP) of the project. The final
version of this document will be available as “D2.8 Final report of the data
management activities” in M36. This document is complemented by “D7.2 IPR and
Data Protection Management”, which was delivered in M6.
The Data Management Plan adheres to and complies with the _H2020 Data
Management Plan – General Definition_ given by the EC online, where the DMP is
described as follows:
_“A DMP describes the data management life cycle for the data to be collected,
processed and/or generated by a Horizon 2020 project. As part of making
research data findable, accessible, interoperable and reusable (FAIR), a DMP
should include information on:_
* _the handling of research data during and after the end of the project_
* _what data will be collected, processed and/or generated_
* _which methodology and standards will be applied_
* _whether data will be shared/made open access and_
* _how data will be curated and preserved (including after the end of the project)”_
Section 2 follows the template proposed by the EC 1 . Lynx adopts policies
compliant with the official FAIR guidelines [1] (findable, accessible,
interoperable and re-usable).
Lynx participates in the Open Research Data Pilot (ORDP) and is obliged to
deposit the produced research data in a research data repository. For this
purpose, the Zenodo repository has been chosen, which exposes the data to
OpenAIRE (a European project supporting Open Science), granting its long-term
preservation.
The descriptions of the most relevant datasets for compliance have been
published in a Lynx Data Portal, using the open source CKAN data portal
software 2 . Metadata is provided for every relevant dataset, and data is
selectively provided whenever it can be republished without license
restrictions and its relevance for the project is high. This deliverable also
describes a catalogue of relevant legal and regulatory data models and a
strategy for the homogenisation of the data sources.
Finally, the document describes the _Multilingual Legal Knowledge Graph_ for
Compliance, or Legal Knowledge Graph for short (Section 6), which is the
backbone on which the Lynx services rest (Figure 1).
**Figure 1.** Schematic description of the Multilingual Legal Knowledge Graph
for Compliance, linking European Directives (general legal goals for every
European Member State), European Regulations (legislative acts binding in
every European Member State), National Legislation (every Member State has
different national and regional legislation in force), industry standards
(technical documents in occasions necessary to achieve certification) and case
law (judgements, sentences).
# DATA MANAGEMENT PLAN
This Section is the Data Management Plan as of M18. It follows the template
proposed by the EC and is applicable to the data used in or generated by Lynx,
with the sole exception of pilot-specific data, whose management may be
further specified in per-pilot DMPs. If the implementation of the pilots
required a different DMP, either new DMP documents or new additions to this
document shall be defined by the pilot leaders and the resulting work included
in future versions of this document.
The EC promotes the access to and reuse of research data generated by Horizon
2020 projects through the Open Research Data Pilot. This project commits to
the rules 2 on open access to scientific peer-reviewed publications and
research data that beneficiaries have to follow in projects funded or
co-funded under Horizon 2020 [33]. In particular:
― Lynx has developed and keeps up to date a Data Management Plan (this version
is a snapshot of a continuously evolving document).
― Lynx has deposited the data in a research data repository –Zenodo. Lynx has
a community in Zenodo, and CKAN provides a stable repository for data results.
The data outcomes of the project live in CKAN.
― Lynx makes sure third parties can freely access, mine, exploit, reproduce
and disseminate it – where applicable and not in conflict with any IPR
considerations.
― Lynx has made clear what tools will be needed to use the raw data to
validate research results – standard formats have been used for data at every
moment.
The next sections and their questions are taken from the Horizon 2020 FAIR DMP
template, which is recommended by the European Commission but voluntary.
## DATA SUMMARY
<table>
<tr>
<th>
**1\. Data summary**
</th> </tr>
<tr>
<td>
a) What is the purpose of the data collection / generation and its relation to
the objectives of the project?
</td> </tr>
<tr>
<td>
</td>
<td>
The main objective of Lynx is “to create an ecosystem of smart cloud services
to better manage compliance, based on a legal knowledge graph (LKG) which
integrates and links heterogeneous compliance data sources including
legislation, case law, standards and other aspects”. In order to deliver these
smart services, data is collected and integrated into a Legal Knowledge Graph,
described in more detail in Section 6.
</td> </tr>
<tr>
<td>
b) What types and formats of data will the project generate / collect?
</td> </tr>
<tr>
<td>
</td>
<td>
The very nature of this project makes the number of formats too high to be
foreseen in advance. However, the project will be keen on gathering data in
RDF format or producing RDF data itself. RDF will be the format of choice for
the meta model, using standard vocabularies and ontologies as data models.
More details on the initially considered data models are given in Section 4.
</td> </tr>
<tr>
<td>
c) Will you re-use any existing data and how?
</td> </tr>
<tr>
<td>
</td>
<td>
The core part of the LKG is created by reusing existing datasets, either
copying them into the consortium servers (only if strictly needed) or using
them directly from the sources.
</td> </tr>
<tr>
<td>
d) What is the origin of the data?
</td> </tr>
<tr>
<td>
</td>
<td>
Although Lynx is greedy in gathering and linking as much compliance-related
data as possible from any possible source, it can be foreseen that the Eur-Lex
portal will become the principal data source. Users of the Pilots may
contribute their own data (e.g. private contracts, paid standards), which will
be neither included into the LKG nor made publicly available.
</td> </tr>
<tr>
<td>
e) What is the expected size of the data?
</td> </tr>
<tr>
<td>
</td>
<td>
The strong reliance of Lynx on external open data sources minimizes the amount
of data that Lynx will have to physically store. No massive data storage
infrastructure is foreseen.
</td> </tr>
<tr>
<td>
f) To whom might the data be useful ('data utility')?
</td> </tr>
<tr>
<td>
</td>
<td>
Data will be useful for SMEs and EU citizens alike through different portals.
</td> </tr> </table>
## FAIR DATA
<table>
<tr>
<th>
**2\. FAIR data**
</th> </tr>
<tr>
<td>
**2.1 Making data findable, including provisions for metadata**
</td> </tr>
<tr>
<td>
a) Are the data produced and / or used in the project discoverable and
identifiable?
</td> </tr>
<tr>
<td>
</td>
<td>
Data is discoverable through a dedicated data portal
(http://data.lynx-project.eu), further described in Section 3. Data assets
will be identified
with a harmonized policy to be defined in the forthcoming months. Research
data may be linked to the corresponding publications and vice versa via their
DOIs.
</td> </tr>
<tr>
<td>
b) What naming conventions do you follow?
</td> </tr>
<tr>
<td>
</td>
<td>
A specific URI minting policy has been defined in Section 5 to identify data
assets.
</td> </tr>
<tr>
<td>
c) Will search keywords be provided that optimize possibilities for re-use?
</td> </tr>
<tr>
<td>
</td>
<td>
Open datasets described in the Lynx data portal are findable through standard
forms including keyword search.
</td> </tr>
<tr>
<td>
d) Do you provide clear version numbers?
</td> </tr>
<tr>
<td>
</td>
<td>
Zenodo supports DOI versioning.
</td> </tr>
<tr>
<td>
e) What metadata will be created?
</td> </tr>
<tr>
<td>
</td>
<td>
Metadata records describing each dataset are downloadable as DCAT-AP entries
in the CKAN portal. Assets in Zenodo also have metadata records.
</td> </tr>
<tr>
<td>
**2.2 Making data openly accessible**
</td> </tr>
<tr>
<td>
a) Which data produced and / or used in the project will be made openly
available as the default?
</td> </tr>
<tr>
<td>
</td>
<td>
**Open data** : **data in the LKG** .
The adopted approach is “as open as possible, as closed as necessary”. Data
assets produced during the project will preferably be published as open data.
Nevertheless, during the project some datasets will be created from existing
private resources (e.g. dictionaries by KDictionaries), whose publication
would irremediably damage their business model. These datasets will not be
released as open data.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Datasets in the LKG will in any case be published along with a license. This
license will be specified as a metadata record in the data catalog, which can
also be exported as RDF using the appropriate vocabulary terms (dct:license)
and eventually using machine-readable licenses (a small sketch is given after
this table).
**Open data: research data.**
In December 2013, the EC announced their commitment to open data through the
Pilot on Open Research Data, as part of the Horizon 2020 Research and
Innovation Programme. The Pilot’s aim is to “improve and maximise access to
and reuse of research data generated by projects for the benefit of society
and the economy”. In the frame of this Pilot on Open Research Data, results of
publicly-funded research should be disseminated more broadly and faster, for
the benefit of researchers, innovative industry and citizens.
The Lynx project chose to participate in the Open Research Data Pilot (ORDP).
Consequently, publishing as “open” the digital research data generated during
the project is a contractual obligation (GA Art. 29.3). This provision does
not include the pieces of data which are derivative of private data of the
partners. Their openness would endanger their economic viability and
jeopardize the Lynx project itself (which is sufficient reason not to open the
data as per GA Art. 29.3).
Every Lynx partner will ensure Open Access to all peer-reviewed scientific
publications relating to its results. Lynx uses Zenodo as the online
repository (https://zenodo.org/communities/lynx/) to upload public
deliverables and possibly part of the scientific production. Zenodo is a
research data repository created by OpenAIRE to share data from research
projects. Records are indexed immediately in OpenAIRE, which is specifically
aimed to support the implementation of the EC and ERC Open Access policies.
Nevertheless, in order to avoid fragmentation, the Lynx webpage will act as
the central information node.
The following categories of outputs require Open Access to be provided free of
charge by Lynx partners, to related datasets, in order to fulfil the H2020
requirements of making it possible for third parties to access, mine, exploit,
reproduce and disseminate the results contained therein:
* _Public deliverables_ will be available both at Zenodo and the Lynx website at http://lynxproject.eu/publications/deliverables. See Figure 2 and Figure 3.
* _Conference and Workshop presentations_ may be published at Slideshare under the account https://www.slideshare.net/LynxProject.
* _Conference and Workshop papers and articles for specialist magazines_ may be also reproduced at: http://lynx-project.eu/publications/articles.
* _Research data and metadata_ are also available. Metadata and selected data is available in the CKAN data portal, http://data.lynx-project.eu, produced research data at Zenodo.
Information will be also given about tools and instruments at the disposal of
the beneficiaries and necessary for validating the results.
</th> </tr> </table>
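A small sketch of exporting a dataset licence as RDF with the dct:license term mentioned above, using rdflib; the dataset URI is a placeholder:

```python
# Attach a machine-readable licence to a dataset description in RDF.
from rdflib import Graph, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
dataset = URIRef("http://data.lynx-project.eu/dataset/example")   # placeholder
licence = URIRef("https://creativecommons.org/licenses/by/4.0/")  # CC-BY 4.0
g.add((dataset, DCTERMS.license, licence))

print(g.serialize(format="turtle"))
```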
**Figure 2.** Lynx public deliverable at Zenodo.
**Figure 3.** Deliverables on the Lynx website.
<table>
<tr>
<td>
b) How will the data be made accessible (e.g. by deposition in a repository)?
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Data descriptions (metadata) are accessible through a dedicated data portal,
hosted in Madrid and available under http://data.lynx-project.eu. Data from
small datasets is also available from the web server –where _small_ means a
file size that does not compromise the web server availability. Eventually the
metadata descriptions will be uploaded into other repositories, such as Retele
3 resources in Spanish language, ELRC-SHARE 4 in general and others to be
identified. In addition, the cooperation with the CEF eTranslation 5
TermBank project will be considered, in view of sharing terminological domain-
specific resources.
</th> </tr>
<tr>
<td>
c) What methods or software tools are needed to access the data?
</td> </tr>
<tr>
<td>
</td>
<td>
Relevant datasets whose license is liberal are available as downloadable
files. Eventually, a SPARQL endpoint will be set in place for those datasets
in RDF form. Also, the CKAN technology on which the portal is based offers an
API using standard JSON structures to access the data (a minimal query sketch
is given after this table). The CKAN platform provides documentation on how to
use the API (http://docs.ckan.org/en/ckan2.7.3/api/).
</td> </tr>
<tr>
<td>
d) Is documentation about the software needed to access the data included?
</td> </tr>
<tr>
<td>
</td>
<td>
Yes, tools to visualize RDF and JSON are given.
</td> </tr>
<tr>
<td>
e) Is it possible to include the relevant software (e.g. in open source code)?
</td> </tr>
<tr>
<td>
</td>
<td>
Some of the software to be developed in Lynx is expected to be published as
Open Source. Other software to be developed in Lynx will be derived from
private or non-open source code and, thus, not be made publicly accessible.
</td> </tr>
<tr>
<td>
f) Where will the data and associated metadata, documentation and code be
deposited?
</td> </tr>
<tr>
<td>
</td>
<td>
Lynx uses a private source code repository (https://gitlab.com/superlynx).
Open data is deposited in the Lynx open data portal; consortium-internal data
within the project intranet. The choice of Nextcloud is justified as the
information resides within UPM secured servers in Madrid, avoiding third
parties and granting the privacy and confidentiality of the data. Gitlab, as a
major provider and host of code repositories, is a common choice among
developers but if necessary code might be also hosted at UPM.
</td> </tr>
<tr>
<td>
g) Have you explored appropriate arrangements with the identified repository?
</td> </tr>
<tr>
<td>
</td>
<td>
Zenodo already foresees the existence of H2020 consortiums.
</td> </tr>
<tr>
<td>
h) If there are restrictions on use, how will access be provided?
</td> </tr>
<tr>
<td>
</td>
<td>
All metadata in Zenodo are openly accessible as soon as the record is
published, even if there are restrictions like an embargo on the publications
or research data themselves. In this way, it is always possible to contact the
author of the data to ask for individual agreements on accessing the data,
even if there are general restrictions.
</td> </tr>
</table>
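As a minimal sketch of the CKAN action API mentioned above, assuming CKAN's standard /api/3/action/ endpoint pattern applied to the Lynx data portal:

```python
# List datasets in the Lynx CKAN portal and inspect the first one.
import requests

BASE = "http://data.lynx-project.eu/api/3/action"

datasets = requests.get(f"{BASE}/package_list", timeout=30).json()["result"]
detail = requests.get(f"{BASE}/package_show",
                      params={"id": datasets[0]}, timeout=30).json()["result"]
print(detail.get("title"), detail.get("license_id"))
```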
<table>
<tr>
<td>
i) Is there a need for a data access committee?
</td> </tr>
<tr>
<td>
</td>
<td>
As of today, there is no need for a Data Access Committee 6 .
</td> </tr>
<tr>
<td>
j) Are there well described conditions for access (i.e. a machine readable
license)?
</td> </tr>
<tr>
<td>
</td>
<td>
Description of data assets include a link to well-known licenses, for which
machine readable versions exist. Either Creative Commons Attribution
International 4.0 (CC-BY) or Creative Commons Attribution Share-Alike
International 4.0 (CC-BY-SA) will be the recommended licenses.
</td> </tr>
<tr>
<td>
k) How will the identity of the person accessing the data be ascertained?
</td> </tr>
<tr>
<td>
</td>
<td>
The Lynx intranet (Nextcloud) provides standard access control
functionalities. The servers are located in a secured data centre at UPM. The
access point is https://delicias.dia.fi.upm.es/lynxnextcloud/. Access is
secured by asymmetric keys or passwords, and communications use SSL.
</td> </tr>
<tr>
<td>
**2.3 Making data interoperable**
</td> </tr>
<tr>
<td>
a) Are the data produced in the project interoperable?
</td> </tr>
<tr>
<td>
The preferred format of the LKG is RDF, which guarantees interoperability
between institutions, organisations and countries. This choice optimally
facilitates re-combination with different datasets from different origins.
Zenodo uses standard interfaces, protocols, metadata, etc., and CKAN implements
standard API access.
</td> </tr>
<tr>
<td>
b) What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?
</td> </tr>
<tr>
<td>
</td>
<td>
Specific data and metadata vocabularies will be defined throughout the entire
project. An initial collection has already been edited and has been published
at http://lynx-project.eu/data2/datamodels (see also Figure 4).
</td> </tr>
<tr>
<td>
c) Will you be using standard vocabularies for all data types present in your
data set, to allow interdisciplinary interoperability?
</td> </tr>
<tr>
<td>
</td>
<td>
Standard vocabularies will be used insofar as possible, such as the ECLI
ontology, the Ontolex model and other similarly widespread vocabularies. These
choices enable inter-disciplinary collaboration. For example, Ontolex 7 is
standard in the language resources and technologies communities, whereas the
ELI ontology 8 (European Legislation Identifier) is standard in the European
legal community.
</td> </tr>
<tr>
<td>
d) In case it is unavoidable that you use uncommon or generate project
specific ontologies or vocabularies, will you provide mappings to more
commonly used ontologies?
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
If vocabularies or ontologies are further defined, they will be published
online, documented and mapped to other standard ontologies. Figure 4
illustrates a possible visualization for the data models.
**Figure 4.** A catalogue of relevant ontologies and vocabularies
</th> </tr>
<tr>
<td>
**2.4**
</td>
<td>
**Increase data re-use (through clarifying licences)**
</td> </tr>
<tr>
<td>
a)
</td>
<td>
How will the data be licensed to permit the widest re-use possible?
</td> </tr>
<tr>
<td>
</td>
<td>
Data in Zenodo is openly licensed.
</td> </tr>
<tr>
<td>
b)
</td>
<td>
When will the data be made available for re-use?
</td> </tr>
<tr>
<td>
</td>
<td>
_**Guidance:** If an embargo is sought to give time to publish or seek patents,
specify why and how long this will apply, bearing in mind that research data
should be made available as soon as possible._
</td> </tr>
<tr>
<td>
</td>
<td>
No data embargoes are foreseen. Public data is published as soon as possible,
but private data will remain private as long as the interested parties,
rightsholders of the data, decide.
</td> </tr>
<tr>
<td>
c)
</td>
<td>
Are the data produced and / or used in the project usable by third parties,
in particular after the end of the project?
</td> </tr>
<tr>
<td>
</td>
<td>
Lynx aims at building an LKG for compliance. In the long term, the LKG may
be repurposed and the data portal may become a reference entry point for finding
open, linguistic legal information as RDF.
</td> </tr>
<tr>
<td>
d)
</td>
<td>
How long is it intended that the data remains re-usable?
</td> </tr>
<tr>
<td>
</td>
<td>
Some of the datasets require maintenance (e.g. legislation and case law must
be kept up to date). Whereas a core of information may still be of interest
even with no maintenance, those datasets directly used by services under
exploitation will be maintained. In any case, metadata records describing the
datasets will include a field informing on the last modification date.
</td> </tr>
<tr>
<td>
e)
</td>
<td>
Are data quality assurance processes described?
</td> </tr>
<tr>
<td>
</td>
<td>
Only formal aspects of data quality are expected to be assured. In particular,
the 5-star 9 paradigm is considered, and the data portal will describe this
quality level in due time.
</td> </tr> </table>
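To make the access paths above concrete, the following is a minimal sketch (not
project code) of querying a SPARQL endpoint of the kind foreseen for the RDF
datasets, using the Python SPARQLWrapper library. The endpoint URL is that of
the UNESCO thesaurus catalogued later in this document; the query itself is
illustrative.

```python
# Hedged sketch: querying a SPARQL endpoint with SPARQLWrapper.
# The endpoint is the UNESCO thesaurus one catalogued in the "Catalogue of
# datasets" section; an eventual LKG endpoint would be queried the same way.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://skos.um.es/sparql/")
endpoint.setQuery("""
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?concept ?label
    WHERE {
        ?concept skos:prefLabel ?label .
        FILTER (lang(?label) = "en")
    }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["concept"]["value"], "->", binding["label"]["value"])
```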
## ALLOCATION OF RESOURCES
<table>
<tr>
<th>
**3 Allocation of resources**
</th> </tr>
<tr>
<td>
a) What are the costs for making data FAIR in your project?
</td> </tr>
<tr>
<td>
</td>
<td>
The cost of publishing FAIR data includes (a) maintenance of the physical
servers; (b) time devoted to the data generation and (c) long term
preservation of the data. Zenodo is free. Maintaining the hosting for CKAN
costs money, but this has been foreseen in the budget.
</td> </tr>
<tr>
<td>
b) How will these be covered?
</td> </tr>
<tr>
<td>
</td>
<td>
Resources to maintain and generate data are covered by the project. Long term
preservation of the data is free of charge, achieved by uploading the research
data to Zenodo.
</td> </tr>
<tr>
<td>
c) Who will be responsible for data management in your project?
</td> </tr>
<tr>
<td>
</td>
<td>
UPM is responsible for managing data in the data portal, and for managing
private data in the intranet. UPM is not responsible for keeping the personal
data collected to provide the pilot services; that responsibility lies with the
directly involved partners (openlaws, Cuatrecasas, DNV GL).
UPM is responsible for the Zenodo account, and must approve ( _curate_ ) every
upload.
</td> </tr>
<tr>
<td>
d) Are the resources for long term preservation discussed?
</td> </tr>
<tr>
<td>
</td>
<td>
Public deliverables and research data are being uploaded to Zenodo, which
guarantees their long term preservation (a deposit sketch follows this table). A
specific community has been created in Zenodo 10 . Alternatively, if
difficulties are found with Zenodo, datasets may also be uploaded to Figshare
11 or B2Share 12 , where a permanent DOI is retrieved. Other sites such as
META-SHARE, ELRC-SHARE or the European Language Grid may be considered in
addition, to guarantee long term preservation and maximize impact and
dissemination.
</td> </tr> </table>
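As an illustration of how such an upload can be automated, the sketch below uses
Zenodo's public REST API (documented at https://developers.zenodo.org) from
Python. The access token and file name are placeholders; in the project, every
upload is curated by UPM as stated above.

```python
# Hedged sketch of a Zenodo deposit via its REST API; not the project's
# actual tooling. ACCESS_TOKEN and the file name are placeholders.
import requests

ACCESS_TOKEN = "..."  # placeholder: personal token with deposit scope
params = {"access_token": ACCESS_TOKEN}

# 1. Create an empty deposition.
r = requests.post("https://zenodo.org/api/deposit/depositions",
                  params=params, json={})
r.raise_for_status()
deposition_id = r.json()["id"]

# 2. Attach a file to the deposition (it can then be completed and
#    published from the Zenodo web interface by the curator).
with open("deliverable.pdf", "rb") as fp:
    r = requests.post(
        f"https://zenodo.org/api/deposit/depositions/{deposition_id}/files",
        params=params,
        data={"name": "deliverable.pdf"},
        files={"file": fp},
    )
r.raise_for_status()
```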
## DATA SECURITY
<table>
<tr>
<th>
**4 Data security**
</th> </tr>
<tr>
<td>
a) Is the data safely stored in certified repositories for long term
preservation and curation?
</td> </tr>
<tr>
<td>
</td>
<td>
UPM is physically storing data on their servers: webpage, files and data in
the Nextcloud system, the CKAN data catalogue and mailing lists. Source code
is hosted at Gitlab on a Dutch data center.
These pieces of data are both digitally and physically secured in a data
centre. Backups are made of these systems, to external hard disks or other
machines. In principle, no personal data will be kept at UPM, and the pilot
leaders will define specific DMP with specific data protection provisions and
specific data security details.
</td> </tr>
<tr>
<td>
b) What provisions are in place for data security?
</td> </tr> </table>
Relevant data which is open shall be uploaded to Zenodo. In addition,
relevant language datasets produced in the course of Lynx will be uploaded to
catalogues of language resources.
## LEGAL, ETHICAL AND SOCIETAL ASPECTS
<table>
<tr>
<th>
**5 Ethical aspects**
</th> </tr>
<tr>
<td>
a) Are there any ethical or legal issues that can have an impact on data
sharing?
</td> </tr>
<tr>
<td>
</td>
<td>
**Legal framework**
EU citizens are granted the rights of privacy and data protection by the
Charter of Fundamental Rights of the EU. In particular, Art. 7 states that “
_everyone has the right to respect for private and family life, home and
communications_ ”, whereas Art. 8 regulates that “ _everyone has the right to
the protection of personal data concerning him or her_ ” and that processing
of such data must be “ _on the basis of the consent of the person concerned or
some other legitimate basis laid down by law_ .”
These rights are developed in detail by the General Data Protection Regulation
(GDPR), Regulation 2016/679/EC, which has been in force in every Member State
since 25th May 2018. This regulation imposes obligations on the Lynx consortium,
as is also recalled by Art. 39 of the Lynx Grant Agreement (GA): “ _the
beneficiaries must process personal data under the Agreement in compliance
with applicable EU and national law on data protection_ ”. The same GA also
reminds beneficiaries that they “ _may grant their personnel access only to data
that is strictly necessary for implementing, managing and monitoring the
Agreement_ ” (GA Art. 39.2).
_Personal data_ is, according to GDPR art. 4.1 “ _any information relating to
an identified or identifiable natural person (‘data subject’); an identifiable
natural person is one who can be identified, directly or indirectly, in
particular by reference to an identifier such as a name, an identification
number, location data, an online identifier or to one or more factors specific
to the physical, physiological, genetic, mental, economic, cultural or social
identity of that natural person_ ”, whereas _data processing_ is (art. 4.2): “
_any operation or set of operations which is performed on personal data or on
sets of personal data, whether or not by automated means, such as collection,
recording, organisation, structuring, storage, adaptation or alteration,
retrieval, consultation, use, disclosure by transmission, dissemination or
otherwise making available, alignment or combination, restriction, erasure or
destruction_ ”. With these definitions, Pilot 1 will most likely have to
collect and process personal data, and possibly other Pilots as well.
The purposes for which personal data will be collected are justified in
compliance with art. 5.b, and the processing of personal data is legitimate in
compliance with art. 6. Pilot 1 and any other pilots processing personal data
will have to implement the necessary legal provisions to respect the rights of
the data subjects.
Several internal communication channels have been established for Lynx:
mailing lists, a website and an intranet. The three servers are hosted at UPM
and comply with the Spanish legislation.
The Lynx web site (http://lynx-project.eu) is compliant regarding the
management of cookies with _Ley 34/2002, de 11 de julio, de servicios de la
sociedad de la información y de comercio electrónico_ . Lynx will most likely
handle datasets with personal data (Pilot 1), as users will be registered in
the Lynx platform to enjoy personalised services and to upload contracts with
personal data. The consortium will adopt any measure to comply with the
current legislation.
**Ethical and societal aspects**
The ethical aspect of greatest interest is the processing of personal data.
The processing of personal data may become a possibility in the framework of
Pilot 1. GA Article 34 “Ethics and research integrity” is binding and shall be
respected. Ethical and privacy related concerns are fully addressed in Section
3.2 of Deliverable 7.2 “ _IPR and Data Protection management documents_ ”.
Besides, the ethics issues identified are already being handled by the pilot
organisations during their
</td> </tr>
<tr>
<td>
</td>
<td>
daily operation activities, as they already deal with national laws and EU
directives regarding the use of information in their daily services; clearance
for processing, storage methods, data destruction, etc. has been provided to
these organisations a priori and is not case specific. The research to be done
during Lynx does not raise any other issues, and the project will make sure
that it follows the same patterns and rules used by the pilot organisations,
which will guarantee the proper handling of ethical issues and adherence to
national, EU-wide and international laws and directives, without violating the
terms of the programme.
The societal impact of this project is expected to be positive, enhancing the
access of EU citizens to legislation and contributing towards a fairer Europe.
In addition to the best effort made by the project partners, members of the
Advisory Board may be requested to issue a statement on the ethical and
societal impact of the Lynx project. A more detailed internal assessment of
the Legal, Ethical and Societal impact of this project is made in Section 2.6.
Finally, the Lynx websites will try to comply with the W3C recommendations on
accessibility, such as the Web Content Accessibility Guidelines (WCAG) 2.0
–which covers a wide range of recommendations for making Web content more
accessible.
</td> </tr>
<tr>
<td>
b) Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?
</td> </tr>
<tr>
<td>
</td>
<td>
Whenever the operation of the pilots starts, pilot leaders will report these
consent documents.
</td> </tr> </table>
## ASSESSMENT OF LEGAL, ETHICAL AND SOCIETAL IMPACT ASPECTS
### Lynx methodology for the impact assessment
The Lynx strategy for dealing with legal, ethical and societal aspects was
initially included in _D2.1 Initial Data Management Plan_ and _D7.2 IPR and
Data Protection Management Documents._ The main issue identified as posing
potential risks in terms of ethical, legal and societal impact was the
potential effect on some human rights, in particular the right to privacy and
data protection. To manage these risks the Consortium put in place a series of
measures as a result of the Initial Recommendations.
At this stage of the project, as part of the Ongoing Monitoring devised in
Paragraph 3.3.4 of D7.2, the UAB partner has proceeded to review the status of
implementation of the risk management strategy. Furthermore an ethical and
societal impact assessment has been conducted to verify that no other issues
have arisen now that the project has advanced in the development of the Lynx
solution.
Paragraph 2.2 below contains the Ethical and Societal impact assessment. This
assessment has been conducted following the methodology developed by the H2020
e-SIDES project. 13 In particular, Deliverable 2.2 of the e-SIDES project
contains a list of ethical, legal, societal and economic issues of Big Data
technologies. This list has been verified against the Lynx project, explaining
how Lynx avoids each one of the issues on the list.
Paragraph 2.3 presents the review of the status of implementation of the
Initial Recommendations. The original strategy for the management of privacy
and data protection presented in _D7.2 IPR and Data Protection Management
Documents_ included a two-fold perspective: recommendations for the
requirements elicitation techniques to be deployed in Tasks 1.1 and 4.1, and
recommendations for the Lynx Solution. Since then, Tasks 1.1 and 4.1 have
finished and been reported in the corresponding deliverables ( _D1.1
Functional requirements analysis report_ and _D4.1 Pilots requirements analysis
report_ ). Therefore, an update is necessary only in relation to the
recommendations for the Lynx solution. Below we have included a review of the
status of implementation of each of these recommendations at this stage of the
project, as well as an indication of whether there are still concerns related
to some of them, in the form of mid-term recommendations.
### General ethical and societal aspects: Ethical and societal impact
assessment
#### Ethical impact assessment
* **Human welfare:** Discrimination of humans by big data-mediated prejudice can occur. Detrimental implications can emerge in the contexts of employment, schooling or travelling by various forms of big data-mediated unfair treatment of citizens.
**Lynx** : Personal data is not the type of data relevant for Lynx. Lynx
integrates and links heterogeneous compliance data sources including
legislation, case law, standards and other private documents such as
contracts. Within these sources personal data may be contained. However,
personal data _per se_ is not analysed or processed in order to extract
patterns, trends, decisions or connexions related to humans and human
behaviour. Therefore Lynx will not impact human welfare.
* **Autonomy:** Big data-driven profiling practices can limit free will, free choice and be manipulative in raising awareness about, for instance, news, culture, politics and consumption.
**Lynx** : Lynx does not entail automated decision making nor profiling,
therefore autonomy is preserved.
* **Non-maleficence:** Non-transparent data reuse in the world of big data are vast and could have diverse detrimental effects for citizens. This puts non-maleficence as a value under pressure.
**Lynx** : The only foreseen reuse is that of personal data contained in case-
law. However, this is openly available data and therefore can be used as part
of the legal documents to provide compliance services. The reuse is therefore
transparent and there is no risk of maleficence.
* **Justice (incl. equality, non-discrimination, digital inclusion):** Systematic unfairness can emerge, for instance, by generating false positives during preventative law enforcement practices or false negatives during biometric identification processes. (Such instances put constant pressure on the value of justice.)
**Lynx** : Lynx does not entail automated decision making nor profiling. The
aim of Lynx is not to identify, characterize or give access to services to
individuals.
* **Accountability (incl. Transparency):** For instance, in the healthcare domain patients or in the marketing domain consumers often do not know what it means and who to turn to when their data is shared via surveys for research and marketing purposes.
**Lynx** : As part of their Data Protection Policy, users of the Lynx
technology should disclose to their clients that their personal data may be
processed by the Lynx technology.
* **Trustworthiness (including honesty and underpinning also security):** Citizens often do not know how to tackle a big data-based calculation about them or how to refute their digital profile, in case there are falsely accused, e.g.: false negatives during biometric identification, false positives during profiling practices. Their trust is then undermined. The technology operators trust at the same time lies too much in the system.
**Lynx** : Lynx does not entail automated decision making nor profiling. It
does not generate any type of conclusion on individuals or individual’s
behaviours.
* **Privacy:** Simply the myriad of correlations between personal data in big data schemes allows for easy identifiability, this can lead to many instances for privacy intrusion.
**Lynx** : Privacy and data protection implication of Lynx are described in
further detail in the List of legal issues.
* **Dignity:** For instance, when revealing too much about a user, principles of data minimization and design requirements of encryption appear to be insufficient. Adverse consequences of algorithmic profiling, such as discrimination or stigmatization also demonstrate that dignity is fragile in many contexts of big data.
**Lynx** : Lynx does not entail automated decision making nor profiling,
therefore dignity is preserved.
* **Solidarity:** Big data-based calculations in which commercial interests are prioritized rather than nonprofit- led interests, are examples of situations in which solidarity is under pressure. For instance, immigrants are screened by big data-based technologies, they may not have the legal position to defend themselves from potential false accusations resulting from digital profiling which can be seen as a non-solidary treatment.
**Lynx** : Lynx does not entail automated decision making nor profiling,
therefore solidarity is not put under pressure.
* **Environmental welfare** : Big data has rather indirect effects on the environment. But for instance, lithium mining for batteries is such. (But extending the life-expectancy of batteries and, for instance, using more sun-energy for longer-lasting batteries could be helpful.)
#### Societal impact assessment
* **Unequal access** : People are not in the same starting position with respect to data and data-related technologies. Certain skills are needed to find one’s way in the data era. Privacy policies are usually long and difficult to understand. Moreover, people are usually not able to keep their data out of the hands of parties they don’t want to have them.
**Lynx** : Lynx technologies are foreseen to be used by experienced, trained
professionals. No personal data will be processed other than that contained in
case law (openly available data) and private documents such as contracts
(consent and privacy policy of user). The users of the Lynx technologies will
make sure that their clients understand when their personal data may be
processed by the Lynx technologies. However, it is important to remember that
personal data per se will not be analysed or processed in order to extract
patterns, trends, decisions or connexions related to humans and human
behaviour.
* **Normalisation:** The services offered to people are selected on the basis of comparisons of their preferences and the preferences of people considered similar to them. People are put into categories whose characteristics are determined by what is most common. There is pressure toward conformity: the breadth of choices is restricted, and pluralism and individuality are pushed back.
**Lynx** : Lynx does not collect nor process any data on preferences and or
characteristics of individuals. It is important to remember that personal data
per se will not be analysed or processed in order to extract patterns, trends,
decisions or connexions related to humans and human behaviour.
* **Discrimination:** People are treated differently based on different individual characteristics or their affiliation to a group. The possibility to reproach people with things they did years ago or to hold people accountable for things they may do in the future affects people’s behaviour. The data as well as the algorithms may be incorrect or unreliable, though.
**Lynx** : Lynx does not process any data on characteristics of individuals or
behaviours. It is important to remember that personal data per se will not be
analysed or processed in order to extract patterns, trends, decisions or
connexions related to humans and human behaviour.
* **Dependency:** People depend on governmental policy for security and privacy purposes. It is considered a misconception that people can be self-governing in a digital universe defined by big data.
People choosing not to disclose personal information may be denied critical
information, social support, convenience or selection. People also depend on
the availability of services provided by companies. It is considered a risk if
there are no alternatives to services that are based on the collection or
disclosure of personal data.
**Lynx** : Lynx does not determine access to public services. As for private
companies, Lynx adds value to the service provided by their users to their
clients. If a client rejects the processing of his/her personal data by the
Lynx technologies, the company will provide the service nonetheless, just
without the improvement in efficiency.
* **Intrusiveness:** Big data has integrated itself into nearly every part of people’s online life and to some extent also in their offline experience. There is a strong sentiment that levels of data surveillance are too intimate but nevertheless many press ‘agree’ to the countless number of ‘terms and conditions’ agreements presented to them.
**Lynx** : Lynx does not request personal data from its users or third
parties. It does not intrude individual’s private lives.
* **Non-transparency:** Algorithms are often like black boxes to people, they are not only opaque but also mostly unregulated and thus perceived as incontestable. People usually cannot be sure who is collecting, processing or sharing which data. Moreover, there are limited means for people to check if a company has taken suitable measures to protect sensitive data.
**Lynx** : Lynx users will make sure that their privacy policy includes all
the relevant information on the Lynx platform, the processing of personal
data, the data controller and processors, etc. More information on this can be
found in the list of legal issues.
* **Abusiveness:** Even with privacy regulations in place, large-scale collection and storage of personal data make the respective data stores attractive to many parties including criminals. Simply anonymised data sets can be easily attacked in terms of privacy. The risk of abuse is not limited to unauthorised actors alone but also to an overexpansion of the purposes of data use by authorised actors (e.g. law enforcement, social security).
**Lynx** : Lynx does not entail large-scale collection and storage of personal
data. Minor amounts of personal data may be processed as part of some of the
sources used by Lynx, namely case-law and private documents such as contracts.
# CATALOGUE OF DATASETS
This section describes a catalogue of relevant legal, regulatory and
linguistic datasets. Datasets in the Legal Knowledge Graph are those necessary
to provide compliance related services that also meet the requirement of being
published as linked data. The purpose of Lynx Task 2.1 is twofold:
1. Identify as many open datasets as possible that are potentially relevant to the problem in question (either in RDF or not)
2. Build the Legal Knowledge Graph by identifying existing linked data resources or by transforming existing datasets into linked data whenever necessary
Figure 5 represents the Legal Knowledge Graph as a collection of datasets
published as linked data. The LKG lies amidst another cloud of datasets in
various formats, either structured or not (such as PDF, XLS or XML). This
section contains: (a) the methodology followed to describe datasets of
interest; (b) the methodology to transform existing resources into LKG
datasets; (c) a description of the Lynx data portal and the related technology
and (d) an initial list of relevant datasets.
**Figure 5.** Datasets in the LKG (published as RDF) and out of it (PDF, XLS, XML…)
## METHODOLOGY FOR CATALOGUING DATASETS
Data assets potentially relevant to the Lynx project are those that might help
provide multilingual compliance services. They might be referenced by
datasets in the LKG as external references.
The identification and description of these datasets is being made during the
project in a cooperative way, during the entire project lifespan. The
methodology has consisted of the following steps:
1. _Identification of datasets of possible interest_
   * Identification of relevant datasets by the partners;
   * Discovery of relevant datasets by browsing data portals, reviewing literature and making general searches.
2. _Description of resources_
   * Description of the resources identified in Step 1 using an agreed template (spreadsheet) with metadata records (see Section 3.1.1).
3. _Publication of dataset descriptions_
   * Publication of the dataset descriptions in the CKAN Open Data Portal via the CKAN form;
   * Transformation of the metadata records to RDF using the DCAT-AP vocabulary (to be automated from the spreadsheet; see the sketch after this list).
This process is being carried out iteratively throughout the project.
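As an indication of what the automated transformation in step 3 could look
like, the sketch below builds a DCAT description with the Python rdflib library
(DCAT-AP is an application profile of DCAT). The property selection is
illustrative rather than the project's final mapping; the values are those of
the UNESCO thesaurus entry used as an example later in this section.

```python
# Hedged sketch: one spreadsheet record expressed with the DCAT vocabulary
# using rdflib. Property choice is illustrative, not the final DCAT-AP mapping.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

ds = URIRef("http://data.lynx-project.eu/dataset/unesco-thesaurus")
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("UNESCO Thesaurus", lang="en")))
g.add((ds, DCTERMS.publisher, Literal("UNESCO")))
for lang in ("en", "es", "fr", "ru"):
    g.add((ds, DCTERMS.language, Literal(lang)))

# One dcat:Distribution per "resource" block of the template (cf. Table 2).
dist = URIRef("http://skos.um.es/sparql/")
g.add((ds, DCAT.distribution, dist))
g.add((dist, RDF.type, DCAT.Distribution))
g.add((dist, DCTERMS.description, Literal("SPARQL endpoint")))

print(g.serialize(format="turtle"))
```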
### Template for data description
Every Lynx partner, within their domain of expertise, has described an initial
list of data sources of interest for the project. In order to homogeneously
describe the data assets, a template with metadata records has been created
with the due consensus among the partners.
The template for data description contains two main blocks: one with general
information about the dataset and another with information about the resource.
Within this context, “dataset” makes reference to the whole asset, while
“resource” defines each one of the different formats in which the dataset is
published. For instance, the UNESCO thesaurus is a single dataset which can be
found as two different resources: as a SPARQL Endpoint and as a downloadable
file in RDF.
Thereby, the metadata records in Table 1 describe information about the
dataset as a whole.
As the project progressed, it was necessary to add a new property to the first
metadata selection reported in D2.1, _Initial Data Management Plan_ .
At this stage of the project, the Lynx Data Portal collects a large number of
resources; however, not all of them are included in the Legal Knowledge Graph.
Such external resources are present in the portal since they can be useful in
further processes. Therefore, a classification between the datasets in the
LKG and the external resources is made by means of the Boolean parameter
“Direct LKG Link”.
<table>
<tr>
<th>
**Field**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Title
</td>
<td>
the name of the dataset given by the author or institution that publishes it.
</td> </tr>
<tr>
<td>
URI
</td>
<td>
identifier pointing to the dataset.
</td> </tr>
<tr>
<td>
Type in the LKG
</td>
<td>
type of dataset in the legal knowledge graph (language, data, etc.).
</td> </tr>
<tr>
<th>
Type
</th>
<th>
type of dataset (term bank, glossary, vocabulary, corpus, etc.).
</th> </tr>
<tr>
<td>
Domain
</td>
<td>
topic covered by the dataset (law, education, culture, government, etc.).
</td> </tr>
<tr>
<td>
Identifiers
</td>
<td>
other type of identifiers assigned to the dataset (ISRN, DOI, Standard ID,
etc.).
</td> </tr>
<tr>
<td>
Description
</td>
<td>
a brief description of the content of the dataset.
</td> </tr>
<tr>
<th>
Availability
</th>
<th>
if the dataset is available online, upon request or not available.
</th> </tr>
<tr>
<td>
Languages
</td>
<td>
languages in which the content of the dataset are available.
</td> </tr>
<tr>
<td>
Creator
</td>
<td>
author or institution that created the dataset.
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
institution publishing the dataset.
</td> </tr>
<tr>
<td>
License
</td>
<td>
license of the dataset (Creative Commons, or others).
</td> </tr>
<tr>
<td>
Other rights
</td>
<td>
if the dataset contains personal information.
</td> </tr>
<tr>
<td>
Jurisdiction
</td>
<td>
jurisdiction where the dataset applies (if necessary).
</td> </tr>
<tr>
<td>
Date of this entry
</td>
<td>
date of registration of the dataset in the CKAN.
</td> </tr>
<tr>
<td>
Proposed by
</td>
<td>
Lynx partner or Lynx organisation proposing the dataset.
</td> </tr>
<tr>
<td>
Number of entries
</td>
<td>
number of terms, triplets or entries that the dataset contains.
</td> </tr>
<tr>
<td>
Last update
</td>
<td>
date in which the last modification of the dataset took place.
</td> </tr>
<tr>
<td>
Dataset organisation
</td>
<td>
name of the Lynx organisation registering the dataset.
</td> </tr>
<tr>
<td>
Direct LKG Link **[NEW]**
</td>
<td>
indicates whether a dataset is directly represented in the LKG or is an
external resource.
</td> </tr> </table>
**Table 1.** Fields describing a data asset
The second set of metadata records, listed in Table 2, gives additional
information about the resources through which the dataset can be accessed. This
section is repeated as many times as needed (depending on the number of formats
in which the dataset is published).
<table>
<tr>
<th>
**Field**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Description
</td>
<td>
description of the type of resource (i.e. downloadable file, SPARQL endpoint,
website search application, etc.).
</td> </tr>
<tr>
<td>
Data format
</td>
<td>
the format of the resource (RDF, XML, SKOS, CSV, etc.).
</td> </tr>
<tr>
<td>
Data access
</td>
<td>
technology used to expose the resource (relational database, API, linked data,
etc.).
</td> </tr>
<tr>
<td>
Open format
</td>
<td>
if the format of the resource is open or not.
</td> </tr>
<tr>
<td>
URI
</td>
<td>
the URI pointing to the different resources.
</td> </tr> </table>
**Table 2.** Fields describing a resource associated to a data asset
The template was materialized as a spreadsheet distributed among the partners.
### Lynx Data Portal
With the aim of publishing the metadata of the harvested datasets, a data
portal has been made available under http://data.lynx-project.eu.
This data portal uses the technology of CKAN. The Comprehensive Knowledge
Archive Network (CKAN) is a web-based management system for the storage and
distribution of open data. The system is open source 14 , and it has been
deployed on the UPM servers using containerization technologies –Rancher 15
, a leading solution to deploy Docker containers in a Platform as a Service
(PaaS).
The CKAN open data portal gives access to the resources gathered by all the
members of the Lynx project. In the same way, members are able to register and
describe their harvested resources to jointly create the Lynx Open Data
Portal. To correctly display the relevant information about the datasets, the
CKAN application uses the metadata described in Section 4.2.1. As a result,
each dataset presents the interface shown in Figure 6.
**Figure 6.** Screenshot of the Lynx Data Portal
The “Data and Resources” section corresponds to the “Resource information”
metadata block and “Additional Info” contains the metadata of the “Dataset
information” table.
The CKAN data portal allows faceted browsing, with filters such as language,
format and jurisdiction. At this moment, there are 67 datasets classified in
the CKAN, but this number will grow. For the metadata records to be correctly
displayed on the website, it was required to establish a correspondence
between the metadata in the spreadsheet and the structure in the JSON file
that gives shape to the CKAN platform.
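As an illustration, such a correspondence can be captured as a simple mapping
table; the CKAN-side keys below are those visible in the "extras" block of the
API answer shown further down, while the spreadsheet column names and the
helper function are assumptions made for this sketch.

```python
# Hedged sketch of the spreadsheet-to-CKAN correspondence. CKAN keys are taken
# from the "extras" block of the API answer below; column names are assumed.
SPREADSHEET_TO_CKAN_EXTRAS = {
    "Type in the LKG": "lkg_type",
    "Domain": "domain",
    "Languages": "language",
    "Creator": "creator",
    "Publisher": "publisher",
    "Jurisdiction": "jurisdiction",
    "Other rights": "other_rights",
    "Last update": "last_update",
    "License": "licence",
    "Date of this entry": "date",
    "Proposed by": "partner",
    "Identifiers": "identifier",
    "Availability": "availability",
    "Number of entries": "total_number",
}

def to_ckan_extras(row):
    """Map one spreadsheet row (column name -> value) to CKAN 'extras'."""
    return {ckan: row.get(col, "") for col, ckan in SPREADSHEET_TO_CKAN_EXTRAS.items()}
```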
In the Lynx Data Portal, each dataset can be accessed through its own URI,
which is built using the ID of each resource. Dataset IDs are shown in
Table 3, contained in the next section. As a result, dataset URIs look like
the example below, where the ID is unesco-thesaurus:
http://data.lynx-project.eu/dataset/unesco-thesaurus
The CKAN API enables direct access to the metadata records. The API is
intended for developers who want to write code that interacts with CKAN sites
and their data, and it is documented online 16 . For example, the REST GET
method:
http://data.lynx-project.eu/api/rest/dataset/unesco-thesaurus
will return the answer below (a sketch using the newer CKAN action API follows it):
{"license_title": null, "maintainer": null, "private": false,
"maintainer_email": null, "num_tags": 0, "id":
"efaf72c9-f8da-4257-b77e-c1f90952d71a", "metadata_created":
"2018-04-11T08:35:41.813169", "relationships": [],
"license": null, "metadata_modified": "2018-04-11T08:39:59.429186", "author":
null, "author_email": null,
"download_url": "http://skos.um.es/sparql/", "state": "active", "version":
null, "creator_user_id": "3b131ddc-4bbf-
42ff-9c33-ee1c4f7adb5c", "type": "dataset", "resources": [{"Distribuciones":
"SPARQL endpoint", "hash": "",
"description": "SPARQL endpoint", "format": "SKOS", "package_id":
"efaf72c9-f8da-4257-b77e-c1f90952d71a",
"mimetype_inner": null, "url_type": null, "formatoabierto": "", "id":
"2a610dc8-15cd-4f17-aee0-149201c427cd",
"size": null, "mimetype": null, "cache_url": null, "name": "SPARQL endpoint",
"created": "2018-04-
11T08:39:13.979840", "url": "http://skos.um.es/sparql/", "cache_last_updated":
null, "last_modified": null,
"position": 0, "resource_type": null}, {"Distribuciones": "Downloadable
files", "hash": "", "description":
"Downloadable files in RDF and Turtle.", "format": "RDF", "package_id":
"efaf72c9-f8da-4257-b77e-c1f90952d71a", "mimetype_inner": null, "url_type":
null, "formatoabierto": "", "id": "81ddd071-4018-4850-b5d8-04b4f5badd7d",
"size": null, "mimetype": null, "cache_url": null, "name": "Downloadable
files", "created": "2018-04-
11T08:39:59.170137", "url": "http://skos.um.es/unescothes/downloads.php",
"cache_last_updated": null,
"last_modified": null, "position": 1, "resource_type": null}],
"num_resources": 2, "tags": [], "groups": [],
"license_id": null, "organization": {"description": "", "title": "OEG",
"created": "2018-04-05T08:10:35.821305",
"approval_status": "approved", "is_organization": true, "state": "active",
"image_url": "", "revision_id":
"66f3c9c3-9bdf-4ebe-8ed2-54b4aea30375", "type": "organization", "id":
"d4250a6e-d1d4-4a2d-8e40-b663271d8404", "name": "oeg"}, "name": "unesco-
thesaurus", "isopen": false, "notes_rendered": "<p>The UNESCO Thesaurus is a
controlled and structured list of terms used in subject analysis and retrieval
of documents and publications in several fields.</p>", "url": null,
"ckan_url": "http://data.lynx-project.eu/dataset/unesco-thesaurus", "notes":
"The UNESCO Thesaurus is a controlled and structured list of terms used in
subject analysis and retrieval of documents and publications in several
fields.\r\n", "owner_org": "d4250a6e-d1d4-4a2d-8e40-b663271d8404",
"ratings_average": null, "extras": {"lkg_type": "language", "domain":
"Education, Science, Culture, Politics, Countries, Information",
"total_number": "4408 (skos concepts)", "language": "en, es, fr, ru",
"creator": "Research group of Information Technology (University of Murcia)",
"publisher": "UNESCO", "jurisdiction": "", "other_rights":
"no", "last_update": "2015", "licence": "Creative Commons 3.0,
https://creativecommons.org/licenses/by-ncsa/3.0/deed.es_ES", "date":
"11/04/18", "partner": "UPM", "identifier": "", "availability": "online"},
"ratings_count": 0, "title": "UNESCO Thesaurus", "revision_id":
"67553ea8-aa13-4dfe-905d-eb499d2d78e9"}
## TRANSFORMATION OF RESOURCES
The minimum content of the LKG is the collection of datasets necessary for the
execution of the Lynx pilots that are published as linked data. Whereas
transformation of resources to linked data is not a central activity of Lynx,
the project foresees that some resources will exist but not as linked data,
and a transformation process will be necessary.
The cycle of activities usually made when publishing linked data is shown in
Figure 7.
**Figure 7.** Usual activities for publishing linked data. Figure taken from
[25].
Whereas the specification is derived from the pilots and the use case needs,
the modelling process leans on existing data models, to be harmonized as
described in Section 4.2. The generation of linked data is the transformation
of existing resources. These transformations differ depending on the source
format:
* From unstructured text: extraction tools (PoolParty, OpenCalais, SketchEngine, etc.) and dedicated harvesters are used to create resources in the LKG.
* From relational databases: technologies such as R2RML exist and their use is foreseen, but as of M18 they have not been used.
* For tabular data: Open Refine and similar tools have been used (a scripted sketch of this path follows the list).
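For the tabular path, the transformation can also be scripted instead of (or
alongside) Open Refine. Below is a minimal sketch using Python's csv module and
rdflib, where the input file name and its column headers ("term", "definition")
are hypothetical:

```python
# Hedged sketch of the tabular-data path: CSV rows to SKOS concepts in RDF.
# File name, column headers and the base namespace are illustrative only.
import csv
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://data.lynx-project.eu/resource/")  # illustrative base
g = Graph()
g.bind("skos", SKOS)

with open("glossary.csv", newline="", encoding="utf-8") as fh:
    for i, row in enumerate(csv.DictReader(fh)):
        concept = EX[f"concept/{i}"]
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(row["term"], lang="en")))
        g.add((concept, SKOS.definition, Literal(row["definition"], lang="en")))

g.serialize("glossary.ttl", format="turtle")
```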
## CATALOGUE OF DATASETS
This section contains the datasets catalogued as of M18.
### Datasets in the regulatory domain
Within the initial version of this document (D2.1), three datasets in the
regulatory domain were identified:
* Eur-Lex: a database of legal information containing EU law (EU treaties, directives, regulations, decisions, consolidated legislation, etc.), preparatory acts (legislative proposals, reports, green and white papers, etc.), EU case-law (judgments, orders, etc.), international agreements, etc. A huge database updated daily, with some texts dating back to 1951.
* Openlaws: Austrian laws (federal laws and those of the 9 regions) and rulings (from 10 different courts), German federal laws, European laws (regulations, directives) and rulings (General Court, European Court of Justice). It includes Eur-Lex, 11k national acts and 300k national cases in a neo4j graph.
* DNV-GL: standards, regulations and guidelines for the public, usually in PDF.
As the project has progressed, many other datasets have been collected and a
new structure has been defined accordingly. Pilot 1 changed its focus from
“Data Protection” to “Contracts”. Therefore, the harvested legal corpora have
been organised accordingly: Contracts, Labour Law and Industrial Standards.
Regarding Pilot 1, the contract corpus is provided by openlaws. Most documents
are in Austrian German and contain personal data that must not be disclosed.
Thus, these files are private and not published in the Data Portal.
Nevertheless, openlaws will provide more contracts in future stages, and some
of them are expected to be bilingual, combining German and English
information. Hence, they will need to be processed ad hoc.
Since Labour Law (Pilot 2) is a huge field in itself, this specific corpus is,
in turn, divided into three subtopics:
* Collective agreements: official documents about conditions of work for a specific sector, at the same level as ordinary laws.
* Judgements: case law related to labour law in the different jurisdictions.
* Legislation: at European Union level and Member State level.
Finally, each corpus is accordingly separated as per the four languages of the
project: English, German, Spanish and Dutch. See Figure 8 for a clear idea of
the structure of the Lynx datasets in the regulatory domain.
As for Pilot 3, Industrial Standards, which is led by DNV, most of the
documents are in Dutch. However, a few of them are also in English. Just like
the Pilot 1 corpus, the Industrial Standards corpus is, at this moment, for
private use only.
### Datasets in the language domain
Using the methodology described in Section 3.1, several sites and repositories
have been surveyed. One of the sources of most interest for linguistic open
data is the Linked Open Data Cloud 17 , or LOD cloud, due to its open nature
and its adequate format as linked data (RDF). In particular, the Linguistic
Linked Open Data Cloud 18 is a subset of the LOD cloud which provides
exclusively linguistic resources, sorted by typology. The different types of
datasets in the Linguistic Linked Open Data Cloud are:
* Corpora
* Terminology, thesauri and Knowledge Bases
* Lexicons and Dictionaries
* Linguistic Resource Metadata
* Linguistic Data Categories
* Typological Databases
Within this project, the first three types of resources have been shortlisted
as the most useful.
Besides consuming linked data or RDF in general, other valuable non-RDF
resources can be included in the graph, possibly once converted to RDF. Many
non-RDF resources of interest in this context can be found in data portals
like the European Data Portal, the Library of Congress or the Termcoord public
portal, which is of particular interest for its multilingual glossaries in the
domain of law.
Due to the huge amount of information and open data available nowadays, it is
essential to establish these limits so as to gather only the relevant
resources. Should more types of datasets be required, they will be harvested at
a later stage. The resources already published as linked data that have been
identified as of interest for Lynx are listed below:
* STW Thesaurus for Economics: a thesaurus that provides a vocabulary on any economic subject. It also contains terms used in law, sociology and politics (monolingual in English) [30].
* Copyright Termbank: a multilingual term bank of copyright-related terms that has been published connecting WIPO definitions, IATE terms and definitions from Creative Commons licenses (multilingual).
* EuroVoc: a multilingual and multidisciplinary thesaurus covering the activities of the EU. It is not specifically legal, but it contains pertinent information about the EU and its politics and law (multilingual).
* AGROVOC: a controlled vocabulary covering all the fields of the Food and Agriculture Organization (FAO) of the United Nations. It contains general information and has been selected since it shares many structures with other important resources (multilingual).
* IATE: a terminological database developed by the EU which is constantly being updated by translators and terminologists. Amongst other domains, the terms are related to law and EU governments (multilingual). A transformation to RDF was made in 2015.
Resources published in other formats have been considered as well. Structured
formats include TBX (used for term bases), CSV and XLS. Exceptionally,
resources published in non-machine-readable formats might be considered.
Consequently, the following resources published by the EU have also been
listed as usable, although they are not included in the Linguistic Linked Open
Data Cloud:
* INSPIRE Glossary: a term base developed by the INSPIRE Knowledge Base of the European Union. Although this project is related to the field of spatial information, the glossary contains general terms and definitions that specify the common terminology used in the INSPIRE Directive and in the INSPIRE Implementing Regulations (monolingual, en).
* EUGO Glossary: a term base addressed to companies and entrepreneurs that need to comply with administrative or professional requirements to perform a remunerated economic activity in Spain. This glossary is part of a European project and contains terms about regulations that are valuable for Lynx's purposes (monolingual in Spanish).
* GEMET: a general thesaurus, conceived to define a common general language to serve as the core of general terminology for the environment. This glossary is available in RDF and it shares terms and structures with EuroVoc (multilingual).
* Termcoord: a portal supported by the European Union that contains glossaries developed by the different institutions. These glossaries cover several fields including law, international relations and government. Although the resources are available in PDF, at some point these documents could be treated and transformed into RDF if necessary (multilingual).
In the same way, the United Nations also maintains consolidated terminological
resources. Given their intergovernmental domain, the following resources have
been selected:
* UNESCO Thesaurus: a controlled list of terms intended for the subject analysis of texts and document retrieval. The thesaurus contains terms in several domains such as education, politics, culture and social sciences. It has been published as a SKOS thesaurus and can be accessed through a SPARQL endpoint (multilingual).
* InforMEA Glossary: a term bank developed by the United Nations and supported by the European Union with the aim of gathering terms on Environmental Law and Agreements. It is available as RDF and will be upgraded to a thesaurus during the following months (multilingual).
* International Monetary Fund Glossary: a terminology list containing terms on economics and public finances related to the European Union. It is available as a PDF downloadable file; however, it may be transformed in future work (multilingual).
On the other hand, other linguistic resources (supported neither by the EU nor
by the UN) have been spotted. Some of them are already converted into RDF:
* Termcat (Terminologia Oberta): a set of terminological databases supported by the government of Catalonia. They contain term equivalents in several languages. Part of these terminological databases were previously converted into RDF and are part of the TerminotecaRDF project. They can be accessed through a SPARQL endpoint (multilingual).
* German Labour Law Thesaurus: a thesaurus that covers all main areas of labour law, such as the roles of employee and employer and legal aspects around labour contracts. It is available through a SPARQL endpoint and as RDF downloadable files (monolingual, de).
* Jurivoc: a juridical thesaurus developed by the Federal Supreme Court of Switzerland in cooperation with Swiss legal libraries. It contains juridical terms arranged in a monohierarchic structure (multilingual).
* SAIJ Thesaurus: a thesaurus that organises legal knowledge through a list of controlled terms which represent concepts. It is available in RDF and is intended to ease users’ access to information related to the Argentine legal system, whether found in a file or in a documentation centre (monolingual, es).
* CaLaThe: a thesaurus for the domain of cadastre and land administration that provides a controlled vocabulary. It is interesting because it shares structures and terms with AGROVOC and the GEMET thesaurus, and it can be downloaded as an RDF file (monolingual, en).
* CDISC Glossary: a glossary that contains definitions of terms and abbreviations relevant for medical laws and agreements. It is available in several formats, including OWL (monolingual, en).
Finally, one last resource, available only as PDF, has also been considered:
* Connecticut Glossary: a glossary of legal terms published by the Judicial Branch of the State of Connecticut. It can be transformed into a machine-readable format and from there into RDF, since it provides equivalences of legal terms from English into Spanish (bilingual).
Table 3 lists all the resources, as a review of the information presented
above. In addition, the set of identified linguistic resources has also been
represented in an interactive graph, in which each dataset is coloured as per
the domain it covers (Figure 9).
<table>
<tr>
<th>
**ID**
</th>
<th>
**Name**
</th>
<th>
**Description**
</th>
<th>
**Language**
</th> </tr>
<tr>
<th>
**iate**
</th>
<th>
IATE
</th>
<th>
EU terminological database.
</th>
<th>
EU languages
</th> </tr>
<tr>
<td>
**eurovoc**
</td>
<td>
Eurovoc
</td>
<td>
EU multilingual thesaurus.
</td>
<td>
EU languages
</td> </tr>
<tr>
<td>
**eur-lex**
</td>
<td>
EUR-Lex
</td>
<td>
EU legal corpora portal.
</td>
<td>
EU languages
</td> </tr>
<tr>
<td>
**conneticutlegal-glossary**
</td>
<td>
Connecticut Legal Glossary
</td>
<td>
Bilingual legal glossary.
</td>
<td>
en, es
</td> </tr>
<tr>
<td>
**unescothesaurus**
</td>
<td>
UNESCO Thesaurus
</td>
<td>
Multilingual multidisciplinary thesaurus.
</td>
<td>
en, es, fr, ru
</td> </tr>
<tr>
<td>
**library-ofcongress**
</td>
<td>
Library of Congress
</td>
<td>
Legal corpora portal.
</td>
<td>
en
</td> </tr>
<tr>
<td>
**imf**
</td>
<td>
International Monetary Fund
</td>
<td>
Economic multilingual terminology.
</td>
<td>
en, de, es
</td> </tr>
<tr>
<td>
**eugo-glossary**
</td>
<td>
EUGO Glossary
</td>
<td>
Business monolingual dictionary.
</td>
<td>
es
</td> </tr>
<tr>
<td>
**cdisc-glossary**
</td>
<td>
CDISC Glossary
</td>
<td>
Clinical monolingual glossary.
</td>
<td>
en
</td> </tr>
<tr>
<td>
**stw**
</td>
<td>
STW Thesaurus for Economics
</td>
<td>
Economic monolingual thesaurus.
</td>
<td>
en
</td> </tr>
<tr>
<td>
**edp**
</td>
<td>
European Data Portal
</td>
<td>
EU datasets.
</td>
<td>
EU languages
</td> </tr>
<tr>
<td>
**inspire**
</td>
<td>
INSPIRE Glossary (EU)
</td>
<td>
General terms and definitions in English.
</td>
<td>
en
</td> </tr>
<tr>
<td>
**saij**
</td>
<td>
SAIJ Thesaurus
</td>
<td>
Controlled list of legal terms.
</td>
<td>
es
</td> </tr>
<tr>
<td>
**calathe**
</td>
<td>
CaLaThe
</td>
<td>
Cadastral vocabulary
</td>
<td>
en
</td> </tr>
<tr>
<td>
**gemet**
</td>
<td>
GEMET
</td>
<td>
General multilingual thesauri.
</td>
<td>
en, de, es, it
</td> </tr>
<tr>
<td>
**informea**
</td>
<td>
InforMEA Glossary (UNESCO)
</td>
<td>
Monolingual glossary on environmental law.
</td>
<td>
en
</td> </tr>
<tr>
<td>
**copyrighttermbank**
</td>
<td>
Copyright Termbank
</td>
<td>
Multilingual term bank of copyright-related terms.
</td>
<td>
en, es, fr, pt
</td> </tr>
<tr>
<td>
**gllt**
</td>
<td>
German labour law thesaurus
</td>
<td>
Thesaurus with labour law terms.
</td>
<td>
de
</td> </tr>
<tr>
<td>
**jurivoc**
</td>
<td>
Jurivoc
</td>
<td>
Juridical terms from Switzerland.
</td>
<td>
de, it, fr
</td> </tr>
<tr>
<td>
**termcat**
</td>
<td>
Termcat
</td>
<td>
Terms from several fields including law.
</td>
<td>
ca, en, es, de, fr,
it
</td> </tr>
<tr>
<td>
**termcoord**
</td>
<td>
Termcoord
</td>
<td>
Glossaries from EU institutions and bodies.
</td>
<td>
EU languages
</td> </tr>
<tr>
<td>
**agrovoc**
</td>
<td>
Agrovoc
</td>
<td>
Controlled general vocabulary.
</td>
<td>
29 languages
</td> </tr> </table>
**Table 3.** Initial set of resources gathered.
**Figure 9.** Datasets represented by domain.
# DATA MODELS
## INTRODUCTION
### Existing data models in the regulatory domain
A number of vocabularies and ontologies for documents in the legal domain have
been published in the last few years. Núria Casellas surveyed 52 legal
ontologies in 2011 [18], and in the meantime many other new ontologies have
appeared; but in practice, only a few of them are of direct interest for the
LKG, as not every published legal ontology is created with the intention of
supporting data models. Some ontologies had the intent of formalizing abstract
conceptualizations. For example, ontology design patterns in the legal domain
have been explored [17] – but these works are of little interest for supporting
data publication.
The XML schema Akoma Ntoso 19 was initially funded by the United Nations and,
some years later, became an OASIS specification alongside Legal RuleML 20 .
MetaLex [12] was an XML vocabulary for the encoding of the structure and
content of legislative documents, which included in newer versions
functionality related to timekeeping and version management. The European
Committee for Standardization (CEN) adopted MetaLex and evolved the schema into
an OWL ontology. MetaLex was extended in the context of the FP6 ESTRELLA
project (2006-2008), which developed a network of ontologies known as the Legal
Knowledge Interchange Format (LKIF). The LKIF ontologies are still available
and a reference in the area 21 [14]. Licenses used for the publication of
copyrighted work have been modelled with the ODRL (Open Digital Rights
Language) language [27].
The European Legislation Identifier (ELI) is a system to make legislation
available online in a standardised format, so that it can be accessed,
exchanged and reused across borders [13]. ELI describes a new common framework
to unify and link national legislation with European legislation. ELI, as a
framework, proposes a URI template for the identification of legal resources
on the web, and it also provides an OWL ontology for supporting the
representation of metadata of legal events and documents. The European Case
Law Identifier (ECLI), much like ELI, was introduced more recently for
modelling case law. The BO-ECLI project, funded under the Justice Programme of
the European Union (2015-2017), aimed to broaden the use of ECLI and to further
improve the accessibility of case law.
### Data models in the linguistic domain
Similarly, a large amount of language resources can already be found across
the Semantic Web. Such datasets are represented with various schemas,
depending on given factors such as the inner structure of the dataset,
language, content or the objective of its publication, to mention but a few.
The _Simple Knowledge Organization System_ ( _SKOS_ ) aims to represent the
structure of organization systems such as thesauri and taxonomies, since they
share many similarities. It is widely used within the Semantic Web context,
since it provides an intuitive language and can be combined with formal
representation languages such as the Web Ontology Language (OWL). _SKOS XL_
works as an extension of SKOS to represent lexical information [23].
With regard to multilingualism in ontologies, the _Linguistic Information
Repository_ ( _LIR_ ) was proposed as a model for ontology localisation: it
supports the localisation of the ontology's terminological layer without
modifying the ontology conceptualisation. LIR allows enriching ontology
entities with the linguistic information necessary for the localisation and
cultural adaptation of the ontology [24].
Another model intended for the representation of linguistic descriptions
associated to ontology concepts is _Lexinfo_ [20]. It contains a complete
collection of linguistic categories. Currently, it is used in combination with
other models such as Ontolex (described in the next paragraph), to describe
the properties of the linguistic objects that describe ontology entities.
Other repositories of linguistic categories are ISOcat 22 , OLiA 23 or
GOLD 24 .
The _Lexicon Model for Ontologies_ , or _lemon_ [26], was especially created to
represent lexical information in the Semantic Web, covering some needs that
previous models did not. This model evolved in the context of a W3C
Community Group into _lemon-Ontolex_ , now better known as _Ontolex_ 25 .
In this model, linguistic descriptions are likewise separated from the
ontology and point to the corresponding concept in the ontology. The
structure of this model is divided into a core set of classes and different
modules containing various types of linguistic information, ranging from
morpho-syntactic properties of lexical entries, lexical and terminological
variation and translation, decomposition of phrase structures, syntactic
frames and mappings to the ontological predicates, to morphological
decomposition of lexical forms. Linguistic annotations such as data categories
and linguistic descriptors are not captured in the model but referred to by
pointing to models that contain them (see the LexInfo model above).
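To make the role of these models concrete, the sketch below builds one
multilingual SKOS concept with the Python rdflib library, using the four
project languages; the URI and the labels are invented for illustration (an
Ontolex lexicalisation would add separate lexical entries pointing at such a
concept).

```python
# Hedged sketch: a multilingual SKOS concept of the kind the models above
# represent. URI and labels are invented.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDF, SKOS

g = Graph()
g.bind("skos", SKOS)

c = URIRef("http://example.org/lkg/concept/employment-contract")
g.add((c, RDF.type, SKOS.Concept))
g.add((c, SKOS.prefLabel, Literal("employment contract", lang="en")))
g.add((c, SKOS.prefLabel, Literal("Arbeitsvertrag", lang="de")))
g.add((c, SKOS.prefLabel, Literal("contrato de trabajo", lang="es")))
g.add((c, SKOS.prefLabel, Literal("arbeidsovereenkomst", lang="nl")))
g.add((c, SKOS.altLabel, Literal("work contract", lang="en")))

print(g.serialize(format="turtle"))
```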
## LYNX DATA MODELS
### Strategy for the harmonisation of data models
Users of the LKG need a uniform collection of data models in order to
integrate heterogeneous resources. An initial collection is provided in this
Deliverable, but it will be under constant maintenance until the end of the
project.
In order to select the data models, top-down and bottom-up approaches have
been conducted simultaneously, as illustrated by Figure 10. Work has been
carried out in parallel: on the one hand, a top-down approach extracted a
list of formats, vocabularies and ontologies that can be chosen to satisfy
the functional requirements of the pilots; on the other hand, a bottom-up
approach explored every possible format, vocabulary or ontology of interest,
with special attention to the most widespread ones.
[Figure 10 depicts the two parallel tracks. Top-down approach: an analysis of the functional and technical requirements of the pilots determines a list of vocabularies and ontologies of choice (analysis of functional requirements, analysis of technical requirements, identification of the necessary vocabularies and formats, selection of vocabularies and ontologies). Bottom-up approach: a survey of ontologies and vocabularies tries to comprehensively identify the most widely spread formats (identification of vocabularies and ontologies in the domain, generation of minimal metadata descriptions, publication on the Lynx web as a catalogue of vocabularies).]
**Figure 10.** Strategy for the selection of data models in Lynx
### Definition of Lynx Documents
The added value of the Lynx services revolves around better processing of
heterogeneous, multilingual documents in the legal domain. Hence, the most
important data structure is the _Lynx Document_ . Lynx Documents may be
grouped in _Collections_ , and may be enriched with _Annotations_ .
The main entities to deal with can be defined as follows:
  * **Lynx Documents** are the basic information units in Lynx: identified pieces of text, possibly with structure, metadata and annotations. A **Lynx Document Part** is a part of a Lynx Document.
  * **Collections** are groups of Lynx Documents with any logical relation. There may be one collection per use case, per jurisdiction, etc.
  * **Annotations** are enrichments of Lynx Documents, such as summaries, translations, recognized entities, etc.
Because most AI algorithms dealing with documents focus on text (the
manipulation of images, videos or tables is less developed), the essence of a
Lynx Document is its text version. Thus, the key element in a Lynx Document is
an identified piece of text. The document can be annotated with an arbitrary
number of metadata elements (creation date, author, etc.), and eventually
structured for a minimally attractive visual representation.
Original documents are transformed as represented in Figure 11: first, they
are acquired by harvesters from their heterogeneous sources and formats, being
structured and represented in a uniform manner. Then, they are enriched with
annotations (such as named entities like persons, organisations, etc.).
[Figure 11 depicts this pipeline: an original document passes through a harvester to become a LynxDocument (id, text, metadata, parts), which enrichment workflows turn into a LynxDocumentAnnotated (id, text, metadata, parts, annotations).]
**Figure 11 Original documents and Lynx Documents**
The elements in a complete Lynx Document, with annotations, are depicted in
Figure 12. Metadata is defined as a list of attribute-value pairs. Parts are
defined as text fragments delimited by two offsets, possibly with a title and
a parent, so that they can be nested. Annotations also refer to text fragments
delimited by two offsets, and describe such a fragment in different manners
(e.g. ‘it refers to a Location, which is Madrid, Spain’).
[Figure 12 depicts a LynxDocument with its id, its text, its metadata (pairs of properties and value lists), its parts (id, ini, end, title, parent) and its annotations (ini, end, anchorOf, classReference, id).]
**Figure 12 Elements in a Lynx Document**
Lynx Documents can be serialized as RDF documents. Explicit support is given
to their serialization as JSON-LD version 1.0, and a JSON-LD context is
available at:
http://lynx-project.eu/doc/jsonld/lynxdocument.json
The format of a Lynx Document is shared among the three pilots and is valid
for every type of document. Refinements of this schema are possible: for
example, even if an initial table of metadata records is described, new fields
can be added as they become necessary for the pilot implementation.
### Lynx Documents with metadata
The simplest possible Lynx Document as a JSON file is shown in the listing
below.
{
  "@context": "http://lynx-project.eu/doc/jsonld/lynxdocument.json",
  "@id": "doc001",
  "@type": "http://lynx-project.eu/def/lkg/LynxDocument",
  "text": "This is the first Lynx document, a piece of identified text."
}
The first line declares the context (@context), which describes how to
interpret the rest of the JSON-LD document; it references an external file.
The second line (@id) declares the identifier of the element. The complete URI
identifying the document is created from this string together with the @base
declared in the context. The @type declares the type of the document, and
finally the text element holds the text of the document.
The text is not repeated in the fragments, in order to save space. Alternative
transformations of this JSON structure are possible and recommended for
specific implementation needs (e.g. OLS in Pilot 1).
The JSON-LD version can, however, be automatically converted into other RDF
syntaxes. For example, the Turtle version of the same document follows.
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://lkg.lynx-project.eu/res/doc001>
  a <http://lynx-project.eu/def/lkg/LynxDocument> ;
  rdf:value "This is the first Lynx document, a piece of identified text." .
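Such a conversion can be automated. As a minimal illustrative sketch (not official Lynx tooling), the following Python snippet parses a Lynx Document given as JSON-LD with rdflib, assuming rdflib version 6 or later (which bundles a JSON-LD parser); the inline context is an assumption standing in for the remote lynxdocument.json context.

```python
# Illustrative sketch: JSON-LD -> Turtle with rdflib (assumes rdflib >= 6).
# The inline @context is a stand-in for the remote Lynx context file.
import json
from rdflib import Graph

doc = {
    "@context": {
        "@base": "http://lkg.lynx-project.eu/res/",
        "text": "http://www.w3.org/1999/02/22-rdf-syntax-ns#value",
    },
    "@id": "doc001",
    "@type": "http://lynx-project.eu/def/lkg/LynxDocument",
    "text": "This is the first Lynx document, a piece of identified text.",
}

g = Graph()
g.parse(data=json.dumps(doc), format="json-ld")  # parse the JSON-LD document
print(g.serialize(format="turtle"))              # emit the Turtle rendering
```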
Metadata is a collection of property-value pairs, where each property maps to
a list of values. This is better illustrated with the example below.
{
  "@context": "http://lynx-project.eu/doc/jsonld/lynxdocument.json",
  "@id": "doc002",
  "@type": "http://lynx-project.eu/def/lkg/LynxDocument",
  "text": "This is the second Lynx document.",
  "metadata": {
    "title": ["Second Document"],
    "subject": ["testing", "documents"]
  }
}
This is rendered as RDF Turtle in the next listing.
@prefix lkg: <http://lkg.lynx-project.eu/def/lkg/> .
@prefix dc: <http://purl.org/dc/terms/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

<http://lkg.lynx-project.eu/res/doc002>
  a <http://lynx-project.eu/def/lkg/LynxDocument> ;
  lkg:metadata [
    dc:subject "testing", "documents" ;
    dc:title "Second Document"
  ] ;
  rdf:value "This is the second Lynx document." .
The language tag can be defined with the @language JSON-LD element, as an
additional context element. This makes strings (RDF literals) carry the
language tag, here set to Spanish.
{
  "@context": ["http://lynx-project.eu/doc/jsonld/lynxdocument.json",
               {"@language": "es"}],
  "@id": "doc003",
  "@type": "http://lynx-project.eu/def/lkg/LynxDocument",
  "text": "Un documento en español."
}
**Figure 17 Example of Lynx Document with language tag (JSON-LD)**
### Lynx Documents with structuring information
Parts and structuring information can be included as shown in the next
example. Parts are defined by the offset (begin and final character of the
excerpt). They can be nested because they have a parent property and they can
be possibly identified. Fragment identifiers can be built as described in the
NIF specification 26 . The example below shows an example of nested
fragments, as Art. 2.1
{
  "@context": "http://lynx-project.eu/doc/jsonld/lynxdocument.json",
  "@id": "doc004",
  "@type": "http://lynx-project.eu/doc/lkg/LynxDocument",
  "text": "Art.1 This is the fourth Lynx document. Art.2 This is the fourth Lynx document. Art 2.1. Empty.",
  "metadata": {
    "title": ["A document with parts."]
  },
  "parts": [
    {
      "offset_ini": 0,
      "offset_end": 39,
      "title": "Art.1"
    },
    {
      "@id": "http://lkg.lynx-project.eu/res/doc004/#offset_41_94",
      "offset_ini": 41,
      "offset_end": 94,
      "title": "Art.2"
    },
    {
      "offset_ini": 80,
      "offset_end": 94,
      "title": "Art.2.1",
      "parent": {
        "@id": "http://lkg.lynx-project.eu/res/doc004/#offset_41_94"
      }
    }
  ]
}
**Figure 18 Example of Lynx Document with structure (JSON-LD)**
In the following example, the Turtle RDF version is shown.
@prefix eli: <http://data.europa.eu/eli/ontology#> .
@prefix nif: <http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#> .
@prefix dc: <http://purl.org/dc/terms/> .
@prefix lkg: <http://lkg.lynx-project.eu/def/lkg/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

<http://lkg.lynx-project.eu/res/doc004>
  a <http://lynx-project.eu/doc/lkg/LynxDocument> ;
  eli:has_part [
    nif:beginIndex 0 ; nif:endIndex 39 ; dc:title "Art.1"
  ], <http://lkg.lynx-project.eu/res/doc004/#offset_41_94>, [
    lkg:parent <http://lkg.lynx-project.eu/res/doc004/#offset_41_94> ;
    nif:beginIndex 80 ; nif:endIndex 94 ; dc:title "Art.2.1"
  ] ;
  lkg:metadata [ dc:title "A document with parts." ] ;
  rdf:value "Art.1 This is the fourth Lynx document. Art.2 This is the fourth Lynx document. Art 2.1. Empty." .

<http://lkg.lynx-project.eu/res/doc004/#offset_41_94>
  nif:beginIndex 41 ; nif:endIndex 94 ; dc:title "Art.2" .
**Figure 19 Simple example of Lynx Document (Turtle)**
Two classes suffice to represent Lynx Documents without annotations as UML
objects (see Figure 20).
**Figure 20 UML class diagram representation of Lynx document and Lynx
document part.**
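Since parts carry offsets rather than copies of the text, a consumer has to slice the document text to materialise an excerpt. The following Python sketch illustrates this under the field names of the JSON-LD examples above (offset_ini, offset_end, parent); it is illustrative, not official Lynx client code.

```python
# Illustrative sketch: resolving part excerpts from character offsets.
def resolve_parts(doc: dict) -> list:
    text = doc["text"]
    resolved = []
    for part in doc.get("parts", []):
        ini, end = part["offset_ini"], part["offset_end"]
        resolved.append({
            "title": part.get("title"),
            "excerpt": text[ini:end],    # the text itself is never duplicated
            "nested": "parent" in part,  # nested parts point to a parent IRI
        })
    return resolved

doc = {
    "text": "Art.1 This is the fourth Lynx document. Art.2 This is the fourth Lynx document. Art 2.1. Empty.",
    "parts": [{"offset_ini": 0, "offset_end": 39, "title": "Art.1"}],
}
print(resolve_parts(doc))  # [{'title': 'Art.1', 'excerpt': 'Art.1 This is ...', 'nested': False}]
```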
### Lynx document with annotations
Annotations are represented using NIF. The next example shows a Lynx Document
with one annotation, highlighting a reference to London, which is a Location.
{
  "@context": "http://lynx-project.eu/doc/jsonld/lynxdocument.json",
  "@id": "doc005",
  "@type": "http://lynx-project.eu/doc/lkg/LynxDocument",
  "text": "I was born in London long time ago.",
  "metadata": {
    "title": ["An annotated document"]
  },
  "annotations": {
    "annotation": [
      {
        "@id": "http://lynx-project.eu/res/id000#offset_29_35",
        "@type": ["nif:String", "nif:RFC5147String"],
        "anchorOf": "London",
        "offset_ini": "14",
        "offset_end": "20",
        "referenceContext": "http://lkg.lynx-project.eu/res/doc005",
        "taClassRef": "http://dbpedia.org/ontology/Location",
        "taIdentRef": "http://dbpedia.org/resource/London"
      }
    ]
  }
}
**Figure 21 Annotated Lynx Document (JSON LD).**
The equivalent RDF Turtle excerpt follows, with the prefixes as above.
<http://lkg.lynx-project.eu/res/doc005>
  a <http://lynx-project.eu/doc/lkg/LynxDocument> ;
  lkg:metadata [ dc:title "An annotated document" ] ;
  lkg:annotations [ lkg:annotation <http://lynx-project.eu/res/id000#offset_29_35> ] ;
  rdf:value "I was born in London long time ago." .

<http://lynx-project.eu/res/id000#offset_29_35>
  a nif:String, nif:RFC5147String ;
  nif:anchorOf "London" ;
  nif:beginIndex 14 ; nif:endIndex 20 ;
  nif:referenceContext <http://lkg.lynx-project.eu/res/doc005> ;
  itsrdf:taClassRef <http://dbpedia.org/ontology/Location> ;
  itsrdf:taIdentRef <http://dbpedia.org/resource/London> .
**Figure 22 Annotated Lynx Document (Turtle).**
The use of nif:annotationUnit is optional, but useful for avoiding colliding
annotations. The last line would then be replaced by the following excerpt.
More details on NIF-related properties are given in Table 5.
nif:annotationUnit [ itsrdf:taIdentRef <http://vocabulary.semantic-web.at/CBeurovoc/C8553> ] .
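A simple sanity check follows from the NIF semantics: the anchorOf string must equal the substring of the document text between the two offsets. Below is a hedged Python sketch of such a check, with field names as in Figure 21; offsets are cast to int because that example serializes them as strings.

```python
# Illustrative sketch: verify that an annotation's anchor matches its offsets.
def check_annotation(doc: dict, ann: dict) -> bool:
    ini, end = int(ann["offset_ini"]), int(ann["offset_end"])
    return doc["text"][ini:end] == ann["anchorOf"]

doc = {"text": "I was born in London long time ago."}
ann = {"offset_ini": "14", "offset_end": "20", "anchorOf": "London"}
assert check_annotation(doc, ann)  # "London" spans characters 14-20
```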
### List of recommended metadata fields and their representation
<table>
<tr>
<th>
**Group**
</th>
<th>
**Property**
</th>
<th>
**Usage**
</th>
<th>
**RDF property**
</th> </tr>
<tr>
<td>
**basic elements**
</td>
<td>
id
</td>
<td>
Lynx identifier of the document
</td>
<td>
dct:identifier
</td> </tr>
<tr>
<td>
text
</td>
<td>
Text of the document
</td>
<td>
rdf:value
</td> </tr>
<tr>
<td>
parts
</td>
<td>
Parts of the document
</td>
<td>
eli:has_part
</td> </tr>
<tr>
<td>
**general**
</td>
<td>
type
</td>
<td>
Type of document (legislation, case law, etc.)
</td>
<td>
dct:type
</td> </tr>
<tr>
<td>
rank
</td>
<td>
Sub-type of document (constitution, law, etc.)
</td>
<td>
eli:type_document
</td> </tr>
<tr>
<td>
language
</td>
<td>
Language of the document
</td>
<td>
dct:language
</td> </tr>
<tr>
<td>
jurisdiction
</td>
<td>
Jurisdiction using ISO
</td>
<td>
eli:jurisdiction
</td> </tr>
<tr>
<td>
wasDerivedFrom
</td>
<td>
Original URL if the document was extracted from the web
</td>
<td>
prov-o:wasDerivedFrom
</td> </tr>
<tr>
<td>
title
</td>
<td>
Title of the document
</td>
<td>
dct:title
</td> </tr>
<tr>
<td>
hasAuthority
</td>
<td>
Authority issuing the document
</td>
<td>
lkg:hasAuthority
</td> </tr>
<tr>
<td>
nick
</td>
<td>
Alternative names of the document
</td>
<td>
foaf:nick
</td> </tr>
<tr>
<td>
version
</td>
<td>
Consolidated, draft or bulletin
</td>
<td>
eli:version
</td> </tr>
<tr>
<td>
subject
</td>
<td>
Subjects or keywords of the document
</td>
<td>
dct:subject
</td> </tr>
<tr>
<td>
**identifiers**
</td>
<td>
id_local
</td>
<td>
Local identifier (e.g. BOE-A-20191234)
</td>
<td>
eli:id_local
</td> </tr>
<tr>
<td>
identifier
</td>
<td>
Official identifier (e.g. ELI etc.)
</td>
<td>
dct:identifier
</td> </tr>
<tr>
<td>
**dates**
</td>
<td>
first_date_entry_in_force
</td>
<td>
Date when enters into force
</td>
<td>
eli:first_date_entry_in_force
</td> </tr>
<tr>
<td>
date_no_longer_in_force
</td>
<td>
Date when repealed / expired
</td>
<td>
eli:date_no_longer_in_force
</td> </tr>
<tr>
<td>
version_date
</td>
<td>
Date of publication of the document
</td>
<td>
eli:version_date
</td> </tr>
<tr>
<td>
**mappings**
</td>
<td>
hasEli
</td>
<td>
Official identifier (ELI, ECLI or equivalent)
</td>
<td>
lkg:hasEli
</td> </tr>
<tr>
<td>
hasPDF
</td>
<td>
Link to the PDF version
</td>
<td>
lkg:hasPDF
</td> </tr>
<tr>
<td>
hasDbpedia
</td>
<td>
Link to the equivalent dbpedia version
</td>
<td>
lkg:hasDbpedia
</td> </tr>
<tr>
<td>
hasWikipedia
</td>
<td>
Link to the equivalent wikipedia version
</td>
<td>
lkg:hasWikipedia
</td> </tr>
<tr>
<td>
sameAs
</td>
<td>
Equivalent document
</td>
<td>
owl:sameAs
</td> </tr>
<tr>
<td>
seeAlso
</td>
<td>
Related documents
</td>
<td>
rdfs:seeAlso
</td> </tr>
<tr>
<td>
**Internal**
</td>
<td>
creator
</td>
<td>
Creators of the documents in Lynx (person or software)
</td>
<td>
dct:creator
</td> </tr>
<tr>
<td>
created
</td>
<td>
Date when created in Lynx (internal)
</td>
<td>
dct:created
</td> </tr> </table>
##### Table 4 List of recommended metadata fields and their representation
Table 4 lists the recommended metadata fields and their RDF representation.
<table>
<tr>
<th>
**Element**
</th>
<th>
**Meaning**
</th>
<th>
**Values / example**
</th> </tr>
<tr>
<td>
**itsrdf:taClassRef**
</td>
<td>
Class of the annotated context
</td>
<td>
dbo:Person, dbo:Location, dbo:Organization, dbo:TemporalExpression
</td> </tr>
<tr>
<td>
**itsrdf:taIdentRef**
</td>
<td>
URL from external resource, such as DBPedia, Wikidata, Geonames, etc.
</td>
<td>
http://dbpedia.org/resource/London
</td> </tr>
<tr>
<td>
**itsrdf:taConfidence**
</td>
<td>
Confidence
</td>
<td>
[0..1]
</td> </tr>
<tr>
<td>
**nif:summary**
</td>
<td>
Summary
</td>
<td>
text
</td> </tr> </table>
**Table 5 List of some NIF-related properties and their values**

Table 6 lists the prefixes used in this section.
<table>
<tr>
<th>
**Vocabulary**
</th>
<th>
**Prefix**
</th>
<th>
**URL**
</th> </tr>
<tr>
<td>
**LKG Ontology**
</td>
<td>
lkg
</td>
<td>
http://lkg.lynx-project.eu/def/
</td> </tr>
<tr>
<td>
**Dublin Core**
</td>
<td>
dct
</td>
<td>
http://purl.org/dc/terms/
</td> </tr>
<tr>
<td>
**RDF**
</td>
<td>
rdf
</td>
<td>
http://www.w3.org/1999/02/22-rdf-syntax-ns#
</td> </tr>
<tr>
<td>
**European Legislation Ontology**
</td>
<td>
eli
</td>
<td>
http://data.europa.eu/eli/ontology#
</td> </tr>
<tr>
<td>
**W3C Provenance Ontology**
</td>
<td>
prov-o
</td>
<td>
https://www.w3.org/TR/prov-o/
</td> </tr>
<tr>
<td>
**Friend of a Friend Ontology**
</td>
<td>
foaf
</td>
<td>
http://xmlns.com/foaf/spec/
</td> </tr>
<tr>
<td>
**NLP Interchange Format**
</td>
<td>
nif
</td>
<td>
http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#
</td> </tr>
<tr>
<td>
**ITS 2.0 / RDF Ontology**
</td>
<td>
itsrdf
</td>
<td>
http://www.w3.org/2005/11/its/rdf#
</td> </tr> </table>
**Table 6 Prefixes used in this document**
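When producing RDF programmatically, binding these prefixes once keeps the serialized output aligned with the listings in this section. A minimal sketch with rdflib follows (the choice of rdflib is an assumption about tooling, not a project requirement).

```python
# Illustrative sketch: bind the Table 6 prefixes in an rdflib graph.
from rdflib import Graph, Namespace

PREFIXES = {
    "lkg": "http://lkg.lynx-project.eu/def/",
    "dct": "http://purl.org/dc/terms/",
    "eli": "http://data.europa.eu/eli/ontology#",
    "nif": "http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#",
    "itsrdf": "http://www.w3.org/2005/11/its/rdf#",
}

g = Graph()
for prefix, uri in PREFIXES.items():
    g.bind(prefix, Namespace(uri))  # rdf, rdfs, owl, foaf, prov are known by default
```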
# URI MINTING POLICY
## BACKGROUND
This section highlights the importance of choosing a good URI naming strategy.
URIs (or, to be more precise, IRIs, as per RFC 3987 on Internationalized
Resource Identifiers) are the natural identifiers for resources in Lynx. An
IRI is a sequence of Unicode characters (Unicode/ISO 10646) that can be used
to mint identifiers drawing on a wider set of characters than the one defined
for URIs in RFC 3986. Choosing good IRIs is key for at least the following
reasons:
― They make it easier for humans to understand what the resource in question
is. URIs conveying information on the identified resource and its nature (e.g.
its class) are easier for humans to remember and understand. URIs then play
the role of documenting ontologies and RDF resources in natural language.
Strictly speaking this is a misuse of URIs, and it complicates the operation
of resources in multilingual environments [32], but it is common practice.
― They ease the execution of automated tasks, such as resource mapping [34],
information extraction [35] or natural language generation.
The W3C does not provide a normative recommendation on how to mint URIs.
However, it was Tim Berners-Lee himself who, as early as 1998, compiled a list
of good practices in his article _Cool URIs don’t change_ 27 . Berners-Lee
introduced the concept of _URI design_ , which has proven to be a challenge
for the Semantic Web community.
A second reference is the W3C Note _Common HTTP Implementation Problems_ 28 ,
issued in the context of the Technical Architecture Group. This note
elaborates on the ideas of Berners-Lee’s article, specifying some rules for
choosing URIs: (i) use short URIs as much as possible; (ii) choose a case
policy; (iii) avoid URIs in mixed case; and (iv) as a case policy, choose
either “all lowercase” or “first letter uppercase”. More recently, the _Best
Practices for Publishing Linked Data_ 29 specification issued by a W3C
Working Group only recommended that ‘ _A URI structure will not contain
anything that could change_ ’ and that URIs shall be constructed ‘ _to ensure
ease of use during development_ ’.
However, no more precise rules are given by the W3C. Some recommend using
hyphens; others claim that a camel-case policy for the local names suffices.
## ALTERNATIVE URI MINTING STRATEGIES
Given that there is technically no clear recommendation on how to choose sets
of URIs, two alternatives can be considered: either URIs are meaningful,
conveying information on the resource and its structure, or they are
meaningless, on the grounds that URIs should not be semantically interpreted.
For example, for a given judgment, one might consider including in the URI
either:
― the title of the judgment
― the unique reference number of the judgment
― the internal record number in the Lynx databases
This section describes the pros and cons of these alternatives.
### Structured, non-opaque URIs
Now that the Semantic Web has grown mature and widely accepted, public
institutions have also issued guides on URI minting, all of them leaning
towards structured, non-opaque URIs. Most notably, the UK Cabinet Office
published the recommendation “Designing URIs for the public sector”, the
government of the Netherlands issued a similar document 30 , and the Spanish
Norma Técnica de Interoperabilidad contains a chapter on the matter,
“ _Definición de un esquema de URI_ ” 31 . Finally, the European Commission
published in 2014 the document _Towards a common approach for the management
of persistent HTTP URIs by EU Institutions_ , to be used in the EU portals.
These documents specify the path structure for URIs, establishing a clear
separation of different types of data (a bus line is not a police office) and
defining naming conventions.
These conventions emphasize the need for stability and scalability and
specifically address the problem of managing large amounts of data on the Web.
**Spanish case** . For example, the Spanish norm defines the following URI
pattern:
http://{base}/{carácter}[/{sector}][/{dominio}][.{ext}][#{concepto}]
If this strategy were applied to Lynx, _base_ would be lynx-project.eu;
_carácter_ would be either **def** (for ontologies and vocabularies), **kos**
(for dictionary data, thesauri, taxonomies and other knowledge organization
data), **cat** (for catalogue information) or **res** (for resources, such as
a document); _sector_ would be one word describing the domain sector (economy,
justice-legislation, etc.), which for Lynx might be
standards/legislation/caselaw/doctrine/others; _dominio_ would be the
specific data type (e.g. judgment); and _concepto_ would be the id of the
resource ( _ext_ being the extension). An example of a URI following the
Spanish recommendation would be:
http://lynx-project.eu/res/caselaw/judgment/C23987
**UK case** . The UK recommendation is a well-detailed document, which
proposes the following URI pattern for documents:
http://{domain}/doc/{concept}/{reference}. Applied to Lynx, this would yield
the following URI for the same judgment:
http://lynx-project.eu/doc/judgment/C23987
**Dutch case** . The Dutch administration has adopted the URI pattern
http://{domain}/{type}/{concept}/{reference}. The example for Lynx would read:
http://lynx-project.eu/id/judgment/C23987
### Opaque URIs
In the _Architecture of the World Wide Web_ 33 , which is a W3C
Recommendation, we read: “ _Agents making use of URIs should not attempt to
infer properties of the referenced resource_ ”. This recommendation is
directly opposed to the strategies mentioned in the previous section, and
favours opaque URIs, or at least URIs carrying less semantics. For example,
Tim Berners-Lee (1998) recommended not putting too much semantics into a URI,
and not binding URIs to some classification or topic (as points of view
change).
Opaque URIs can be generated automatically, are easier to manage and avoid
character-encoding problems. In an intrinsically multilingual project such as
Lynx, there should be no cultural bias against languages with accents and
other local characters (such as the Spanish Ñ).
An example of an opaque URI is the one chosen by the Spanish National Library
(BNE) to identify the writer Miguel de Cervantes:
http://datos.bne.es/ **persona** /XX1718747
From this URI it can be inferred that it refers to a person, but no clue is
given as to which person. On the contrary, the DBpedia policy for Cervantes
hides the type of entity, but makes clear who the referred writer is:
http://es.dbpedia.org/resource/ **Miguel_de_Cervantes**
## LYNX URI MINTING STRATEGY
Considering the advantages and disadvantages examined in the previous
sections, Lynx has chosen the URI patterns described in Table 7.
<table>
<tr>
<th>
**Type of resource**
</th>
<th>
**URI pattern / Example**
</th> </tr>
<tr>
<td>
Ontology
</td>
<td>
http://lkg.lynx-project.eu/def/{onto_id} (e.g. http://lkg.lynx-project.eu/def/core)
</td> </tr>
<tr>
<td>
Ontology element
</td>
<td>
http://lkg.lynx-project.eu/def/{onto_id}/{element} (e.g. http://lkg.lynx-project.eu/def/core/Document)
</td> </tr>
<tr>
<td>
KOS (thesauri, terminologies)
</td>
<td>
http://lkg.lynx-project.eu/kos/{kos_id}/{id} (e.g. http://lkg.lynx-project.eu/kos/contracts_terms/24232)
</td> </tr>
<tr>
<td>
Resource
</td>
<td>
http://lkg.lynx-project.eu/res/{id} (e.g. http://lkg.lynx-project.eu/res/23983)
</td> </tr> </table>
**Table 7. URI patterns for different resources**

The advantages of this choice are:
― Problems derived from character encoding are avoided
― Automatic generation of ids is possible, avoiding the problems derived from auto-increment
― Implementors are free to choose their own ids
― No collisions between resources sharing a name
― Relatively short URIs
― Easy scalability (no types of resources are predefined)
― Lynx URIs do not compete with official ones such as ELI or ECLI.
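As a hedged illustration of this strategy, the helper below mints URIs following the Table 7 patterns. The use of short uuid4 hex identifiers is an assumption made here to show collision-free, encoding-safe ids; implementors remain free to choose their own id scheme.

```python
# Illustrative sketch: minting Lynx URIs per the Table 7 patterns.
import uuid

BASE = "http://lkg.lynx-project.eu"

def mint_resource_uri() -> str:
    # uuid4 hex is an assumed id scheme, chosen to avoid auto-increment issues
    return f"{BASE}/res/{uuid.uuid4().hex[:12]}"

def mint_ontology_element_uri(onto_id: str, element: str) -> str:
    return f"{BASE}/def/{onto_id}/{element}"    # e.g. .../def/core/Document

def mint_kos_uri(kos_id: str, concept_id: str) -> str:
    return f"{BASE}/kos/{kos_id}/{concept_id}"  # e.g. .../kos/contracts_terms/24232
```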
# THE MULTILINGUAL LEGAL KNOWLEDGE GRAPH
As stated in the introduction, a secondary goal of this document is to define
the Legal Knowledge Graph that will be developed during the Lynx project,
together with a linguistic regulatory Linked Open Data Cloud.
## SCOPE OF THE LEGAL KNOWLEDGE GRAPH
The amount of legal data made accessible, either openly or under payment
modalities, by legal information providers is hard to imagine. LexisNexis
claimed 32 to have 30 terabytes of content; WestLaw accounted for more than
40,000 _databases_ . Their value can be roughly estimated: as of 2012, the
four big players (WestLaw, LexisNexis, Wolters Kluwer and Bloomberg Legal)
totalled about $10,000M in revenues. Language data (e.g. resources with any
kind of linguistic information) belongs to a much smaller domain but is
still unmanageable as a whole.
The Lynx project is interested in a small fraction of the information
belonging to these domains. In particular, Lynx is in principle interested
only in using the data necessary to provide the compliance services described
in the pilots. Data of interest is regulatory data (legal and standards-
related) and language data (to cover the multilingual aspects of the
services). The intersection of these domains is of the utmost interest, and
Lynx will try to comprehensively identify every possible open dataset in this
core category. These ideas are represented in Figure 23.
[Figure 23 depicts the scope as the intersection of language data (corpora, terminological databases, thesauri and glossaries, lexicons and dictionaries, linguistic resource metadata, typological databases) and legal data (law, case law, opinions and recommendations, doctrine, books and journals, standards and technical norms, sectorial good practices); the core data of the Lynx multilingual LKG sits in the overlap between the language data and the legal data needed for compliance in the Lynx pilots.]
**Figure 23.** Scope of the multilingual Legal Knowledge Graph
The definitions of both _language data_ and _regulatory data_ are indeed
fuzzy, but flexible enough to accommodate data of many different kinds
whenever necessary (geographical data, user information, etc.). Because data
in the Semantic Web is inseparable from its data models, and data models are
accessed in the same manner as the data itself, ontologies and vocabularies
are part of the LKG as well. Moreover, any kind of metadata (describing
documents, standards, etc.) is also part of the LKG, as is the description of
the entities producing the documents (courts, users, jurisdictions). In order
to provide the compliance services, both primary and secondary law are of use,
with different degrees of interest, and any relevant document in a wide sense
may become part of the Legal Knowledge Graph. This is illustrated in Figure 25.
[Figure 24 distinguishes the Lynx Multilingual LKG, i.e. resources whose IRI is within the lynx-project.eu domain, from the wider Multilingual LKG, i.e. resources whose IRI is outside the lynx-project.eu domain but directly linked.]
**Figure 24 Lynx LKG and LKGs**
We may define the Lynx Multilingual LKG as the set of entities and relations
whose IRIs are within the http://lynx-project.eu top-level domain. However,
its resources are connected to resources published by other entities, which
together constitute a wider LKG. Figure 24 represents this idea, together with
the notion of private resources, which are accessible only to authorized
users (e.g. contracts visible only to the parties).
**Figure 25.** Types of information in the Legal Knowledge Graph
## KNOWLEDGE GRAPHS
In the realm of Artificial Intelligence, a knowledge graph is a data structure
for representing information, in which entities are represented as nodes,
their attributes as node labels, and the relationships between entities as
edges. Knowledge graphs such as Google’s 33 , Freebase [2] and WordNet [3]
turn data into knowledge, and they have become important resources for many
AI and NLP applications such as information search, data integration, data
analytics, question answering and context-sensitive recommendations.
Large knowledge graphs include millions of concepts and billions of
relationships. For example, DBpedia describes about 30M entities connected
through 10,000M relationships. Entities belong to classes described in
ontologies. There are different manners of representing knowledge graphs, not
the least important being the W3C specifications of the Semantic Web: RDF,
RDFS and OWL. RDF data is accessible online in different forms: as file
dumps, through SPARQL endpoints or dedicated APIs, or simply published online
as Linked Data [4].
### Legal Knowledge Graphs
In the last few years, a number of Legal Knowledge Graphs have been created
for different applications. The MetaLex Document Server offers legal documents
as versioned Linked Data [10], including Dutch national regulations. Finnish
[9] and Greek [8] legislation is also offered as Linked Data.
The Publications Office of the EU maintains CELLAR, the central content and
metadata repository for storing official publications and bibliographic
resources produced by the institutions of the EU [11]. The content of CELLAR,
which includes EU legislation, is made publicly available by the Eur-Lex
service, which also offers a SPARQL endpoint.
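As an illustration of this access mode, the snippet below sends a trivial query to a public SPARQL endpoint using the SPARQLWrapper library; the endpoint URL for the Publications Office service is stated here as an assumption and should be checked against the official documentation before use.

```python
# Illustrative sketch: querying a public SPARQL endpoint (endpoint URL assumed).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://publications.europa.eu/webapi/rdf/sparql")
sparql.setQuery("SELECT ?s WHERE { ?s ?p ?o } LIMIT 5")  # trivial smoke-test query
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["s"]["value"])
```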
The FP7 EUCases project (2013-2015) offered European and national case law and
legislation linked in an open data stack (http://eucases.eu).
Finally, Openlaws offers a platform based on linked open data, open-source
software and open innovation processes [5][6][7]. Lynx will benefit from the
expertise of Openlaws, which will be the preferred source for the data models,
methods and algorithms. Other projects are also using Semantic Web
technologies: in the area of data protection, the H2020 SPECIAL project 34 is
devoted to easing the collection of user consents and to representing policies
as RDF; the H2020 MIREL project 35 (2016-2019) runs a network of experts
defining a formal framework and developing tools for mining and reasoning with
legal texts; and e-Compliance, an FP7 project (2013-2016), focused on using
Semantic Web technologies for regulatory compliance in the maritime domain.
### Linguistic Knowledge Graphs
In the last few years, the language technology community has shaped the
Linguistic Linked Open Data Cloud: the graph of language resources available
in RDF and published as Linked Data [16]. The graph, represented in Figure 26,
resembles that of the Linked Data Cloud, but is limited to the language
domain.
**Figure 26.** Linguistic Linked Open Data Cloud 36
A major resource contained in this graph is _DBpedia_ , a vast network that
structures data from Wikipedia and links it with other datasets available on
the Web [3]. The result is published as Open Data available for consumption
by both humans and machines. Different versions of DBpedia exist for
different languages.
Another core resource in the LOD Cloud is _BabelNet_ [15], a huge multilingual
semantic network, generated automatically from various resources and
integrating the lexicographical information of _WordNet_ and the encyclopaedic
knowledge of Wikipedia. BabelNet also applies machine translation to obtain
information for several languages. As a result, BabelNet is considered an
encyclopaedic dictionary that contains concepts and named entities connected
by a great number of semantic relations.
_WordNet_ is one of the best-known Linguistic Knowledge Graphs: a large
online lexical database that contains nouns, verbs, adjectives and adverbs in
English [3]. These words are organised in sets of synonyms that represent
concepts, known as _synsets_ . WordNet uses these synonyms to represent word
senses; thus, synonymy is WordNet’s most important relation. Several
additional relations are also used by this network: antonymy (opposing-name),
hyponymy (sub-name), meronymy (part-name), troponymy (manner-name) and
entailment relations. Resources equivalent to WordNet have been published for
other languages, such as EuroWordNet [29].
However, there are other semantic networks (considered linguistic knowledge
graphs) that do not appear in the LOD Cloud but are also worth mentioning.
This is the case of _ConceptNet_ [28], a semantic network designed to
represent common sense and support textual reasoning about documents in the
real world. It represents part of human experience and tries to share this
common-sense knowledge with machines. ConceptNet is often integrated with
natural language processing applications to speed up the enrichment of AI
systems with common sense [4].
### The Lynx Multilingual Legal Knowledge Graph
Building on these previous experiences, we are in a position to define the
Lynx Multilingual Legal Knowledge Graph.
The **Lynx Multilingual Legal Knowledge Graph (LKG)** is a knowledge graph
using W3C specifications and containing the information necessary to provide
multilingual compliance services. The Lynx LKG builds on previous initiatives,
reusing open data, and will evolve, adding new resources whenever needed to
provide compliance services. The LKG’s preferred form of publication is Linked
Data, although other access mechanisms will be provided.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0186_MERCES_689518.md
|
Results or Background (including thesis dissertations), and the use of names,
logos and trademarks, will be regulated by the MERCES Consortium Agreement,
which will ensure suitable protection of results and the legitimate interests
of all parties involved in the project.
To ensure the widest possible access to the science produced by MERCES, all
datasets and scientific publications will be deposited in the dedicated Zenodo
community, in the account already created for the MERCES project (i.e., the
“MERCES project” community). Zenodo has been selected as the repository
platform because it is free, it supports Horizon 2020 grant reporting (data
are automatically exported to OpenAIRE), and it provides Digital Object
Identifiers (DOIs) for both datasets and scientific publications, as well as
restricted access where embargo periods are needed (again for both datasets
and scientific publications).
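For illustration only, a deposition on Zenodo can be created programmatically through its REST API. The sketch below follows Zenodo's published deposition workflow, but the token, creator name and community identifier are placeholders/assumptions to be verified against the live API before use.

```python
# Illustrative sketch: creating a Zenodo deposition via the REST API.
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "<personal-access-token>"  # placeholder; never commit real tokens

# Step 1: create an empty deposition.
resp = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
resp.raise_for_status()
deposition_id = resp.json()["id"]

# Step 2: attach dataset metadata (community identifier is assumed).
metadata = {"metadata": {
    "title": "Example MERCES dataset",
    "upload_type": "dataset",
    "description": "Dataset deposited in the MERCES community.",
    "creators": [{"name": "Surname, Name"}],           # placeholder creator
    "communities": [{"identifier": "merces-project"}], # assumed community id
}}
requests.put(f"{ZENODO_API}/{deposition_id}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()
```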
All partners will strive to publish articles in open access. This will enhance
the transparency of MERCES research results, ensuring at the same time
immediate access to data and results by policy and/or business stakeholders,
end-users and scientists. It is also envisaged that the partners will produce
approximately 100 contributions to international scientific symposia and
business conferences. Moreover, MERCES is expected to coordinate at least 3
special sessions organised in the framework of international symposia.
**3. General characteristics and typology of datasets**
MERCES will collect and collate data spanning scientific, environmental and
socio-economic contexts, mainly in numerical and textual formats. All
datasets will be allocated to the internal (website private area) and public
(Zenodo) repositories in order to be:
1. discoverable (e.g. assigning each data set to a Digital Object Identifier, DOI);
2. accessible (e.g., providing each data set with clear information about their scope, licences, rule for commercial exploitation and re-usage);
3. measurable and intelligible (ready for scientific scrutiny and peer review);
4. useable beyond their original purpose (e.g. ensuring data preservation after the project’s lifetime);
5. interoperable to specific quality standards (e.g. adhering to standards for data annotation and exchange, compliant with multiple software applications, and allowing recombination with different datasets from different sources).
# 3.1 Datasets from literature
Datasets created from literature reviews (e.g. metadata on existing habitats,
a census of impacted deep-sea ecosystems), such as the datasets foreseen in
WP1, will be deposited on Zenodo and made available in open international
portals, under the responsibility of the WP1 leaders (HCMR and NUIG).
Appropriate metadata created for each dataset will be based on the
Infrastructure for Spatial Information in the European Community (INSPIRE)
specifications (listed at _http://inspire.jrc.ec.europa.eu/_ ,
_http://inspire-geoportal.ec.europa.eu/_ ) and will be compatible with the
requirements of the EMODNET/EuSeaMap/MARATLAS portals.
Where appropriate, the integration of mapping exercises and knowledge related
to changes in habitats and ecosystem services will be based on the MAES
process (Mapping and Assessment of Ecosystems and their Services,
_http://biodiversity.europa.eu/maes_ , of BISE: the Biodiversity Information
System for Europe). MERCES will interact with MAES for typology,
standardization, classification and visualization.
# 3.2 Datasets on environmental-, biological/communities-, ecosystem
services-, legal and policy-, socio-economic- and business-related data
Datasets created in WP1-8 will be deposited in Zenodo in accessible formats
(e.g., Excel, Access), depending on the typology of the data and the partner
responsible for each dataset. Once in Zenodo, each dataset will include all
raw data as well as all information related to the data (e.g., the area,
environment and ecosystem typology in which the data have been collected).
Moreover, each dataset will be accompanied by the name of the data owner, a
brief description and a DOI. Where an embargo period is necessary, the
dataset will be temporarily protected by a password; once the embargo ends,
the dataset will immediately be made available. These datasets will also be
made available for internal usage among MERCES partners by depositing them in
the restricted area of the MERCES website, with the associated link to the
Zenodo repository. This will enhance the transparency of MERCES research
results, ensuring at the same time immediate access to data and results by
policy and/or business stakeholders, end-users and scientists.
# 3.3 Scientific publications
MERCES foresees the publication of top-level and specialized papers in
excellent-quality journals (e.g. Restoration Ecology, Ecological Engineering)
highlighting the outputs of the project. All partners will strive to publish
articles in open access (“gold open access”). To ensure the widest possible
access to the science produced by MERCES, all scientific publications will be
deposited in the dedicated Zenodo community.
All papers, including the related datasets and associated metadata, will be
deposited in a research repository (i.e., Zenodo) and, where possible, a
Creative Commons licence (CC-BY or the CC0 tool) will be attached to the
papers. Where an embargo period is necessary, the papers and datasets will be
temporarily protected by a password; once the embargo ends, the papers will
immediately be made available. These materials will also be made available
for internal usage among MERCES partners by depositing them in the restricted
area of the MERCES website, with the associated link to the Zenodo repository.
Where the scientific papers do not have full open access (“gold open
access”) to the version of the paper edited by the publisher, several options
can be considered. Following the publishers’ indications, the papers could be
made available under “green open access” (see for example
_https://www.elsevier.com/about/open-science/open-access_ ), by self-archiving
(publishing online the PDF of the post-print of the article, including all the
information requested by the specific publisher; see for example
_http://olabout.wiley.com/WileyCDA/Section/id-406071.html_ ), or via the
personal web page of the main author. In each case, the options given by the
publisher will be followed. At the same time, the paper will be added to the
MERCES community in Zenodo, following one of the abovementioned options or a
combination of them. Once on Zenodo, the paper will be accompanied by all the
information requested by the Horizon 2020 funding programme, a Digital Object
Identifier (DOI), all the related metadata, and a link to the journal web page
or to the personal web page of the main author. The PDF file uploaded to
Zenodo will be openly available or password-protected, depending on the
options given by the publisher. If the publisher does not allow the PDF file
to be made available, a link to the personal web page of the main author will
be added instead, where the PDF file (post-print version, according to the
publisher’s options) will be uploaded. If the main author does not have a
personal web page, one will be created on the MERCES website.
A dedicated page will also be created on the MERCES website, with the entire
list of scientific publications, their DOIs and direct links to the Zenodo
products.
# Data sharing
All datasets and publications will be deposited in the MERCES community on
Zenodo; in this way, all products will be immediately identified as MERCES
outputs. The use of citable, permanent DOIs for all datasets and scientific
publications archived in Zenodo ensures the long-term availability of MERCES
data. If necessary, access to datasets and scientific publications will be
password-protected during an embargo period. Short embargo periods for
datasets or scientific publications may be required by the editorial policies
of scientific journals; once an embargo ends, open access will be immediately
ensured. For each product, a brief description and a DOI will be provided, in
order to make each dataset and publication identifiable and univocally
citable.
For the internal (among partners) and external (stakeholders, general public)
usage and sharing of datasets, specific sections for the allocation of
datasets have been created on the MERCES website. In particular, the
restricted area will contain the dataset files as well as their links to the
Zenodo repository, whereas the public area of the website will contain the
links to Zenodo. For each dataset, a brief description of the data and the
DOI will also be provided.
# Archiving and preservation (including storage and backup)
Long-term archiving (more than 5 years) and backup of these datasets will be
guaranteed by the institutes responsible for the Data Management Plan
(ECOREACH and UNIVPM). Each dataset will be identified by its own DOI.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0189_SerIoT_780139.md
|
# Executive Summary
This report describes the initial Data Management Plan for the SerIoT project,
funded by the EU’s Horizon 2020 Programme under Grant Agreement number 780139.
The purpose is to set out the main elements of the SerIoT consortium data
management policy for the datasets generated by the project.
The DMP presents the procedure for the management of datasets created during
the lifetime of the project and describes the key data management principles.
Specifically, the DMP describes the data management life cycle for all the
datasets to be collected, processed and/or generated by a research project,
including the following processes:
* handling of research data during and after the project implementation,
* what data will be collected, processed or generated,
* what methodology and standards will be applied,
* whether data will be shared/made open, and how data will be curated and preserved.
The methodology for the DMP is as follows:
1. Create a general data management policy (the strategy that will be used by the consortium to address all the datasets);
2. Create a DMP template and send it to the partners of the consortium, to be filled in with information for each relevant dataset;
3. Analyse the DMP templates completed by the project’s partners;
4. Create an updated version of the SerIoT project DMP.
The current document formulates the general data management policy. The
project’s partners have provided preliminary information, and the DMP template
has been created (see Appendix 1). As the detailed assumptions regarding the
Use Cases will be formulated by M12, the DMP template will then be sent out to
be completed and revised, and the DMP document will be updated accordingly.
The initial version of the SerIoT DMP was developed according to the European
Commission’s guidance for Horizon 2020 data management plans [1].
The structure of the document is as follows: Section 1 provides the initial
assumptions about the datasets generated during the lifetime of the project,
including the assumed types and formats of data, the expected size of the
datasets and the data utility. The specific description of how SerIoT will
make this research data findable, accessible, interoperable and reusable
(FAIR) is outlined in Section 2. Sections 3 to 6 outline the policy in
relation to data resources, security and ethics. Section 6 contains the
conclusions.
# Project Participants
<table>
<tr>
<th>
</th>
<th>
**Instytut Informatyki Teoretycznej i Stosowanej Polskiej Akademii Nauk**
**(IITiS, Coordinator, Poland)**
</th> </tr>
<tr>
<td>
</td>
<td>
**Centre for Research and Technology Hellas, Information Technologies
Institute**
**(CERTH, Quality Manager, Greece)**
</td> </tr>
<tr>
<td>
</td>
<td>
**Joint Research Centre – European Commission**
**(JRC, Belgium)**
</td> </tr>
<tr>
<td>
</td>
<td>
**Technische Universität Berlin**
**(TUB, Germany)**
</td> </tr>
<tr>
<td>
</td>
<td>
**Deutsche Telekom AG**
**(DT, Germany)**
</td> </tr>
<tr>
<td>
</td>
<td>
**Hispasec Sistemas S.L.**
**(HIS, Spain)**
</td> </tr>
<tr>
<td>
</td>
<td>
**HOP UBIQUITOUS SL**
**(HOPU, Spain)**
</td> </tr>
<tr>
<td>
</td>
<td>
**Organismos Astikon Sygkoinonion Athinon**
**(OASA, Greece)**
</td> </tr>
<tr>
<td>
</td>
<td>
**ATOS SPAIN S.A.**
**(ATOS, Spain)**
</td> </tr>
<tr>
<td>
</td>
<td>
**University of Essex**
**(UESSEX, UK)**
</td> </tr>
<tr>
<td>
</td>
<td>
**Institute of Communication and Computer Systems**
**(ICCS, Greece)**
</td> </tr>
<tr>
<td>
</td>
<td>
**Fundacion TECNALIA Research & Innovation**
**(TECNALIA, Spain)**
</td> </tr>
<tr>
<td>
</td>
<td>
**AUSTRIATECH - GESELLSCHAFT DES BUNDES FUR**
**TECHNOLOGIEPOLITISCHE MASSNAHMEN GMBH**
**(AUSTRIATECH, Austria)**
</td> </tr>
<tr>
<td>
</td>
<td>
**Grupo de Ventas Hortofrutícolas**
**(GRUVENTA, Spain)**
</td> </tr>
<tr>
<td>
</td>
<td>
**HIT Hypertech Innovations LTD**
**(HIT, Cyprus)**
</td> </tr> </table>
# Data Summary
## Purpose of the data collection and generation
The main purpose of data generation and collection is to introduce the
prototype implementation of an IoT platform across all IoT domains (e.g.,
embedded mobile devices, smart homes/cities, security & surveillance, etc.)
and to optimize information security in IoT networks in a holistic, cross-
layered manner (i.e., IoT platforms and devices, honeypots, fog networking
nodes, SDN routers and the operator’s controller).
SerIoT will produce a number of datasets/databases during the lifetime of the
project. The data will be analyzed using a range of methodological
perspectives, both for project development and for scientific purposes, and
will be available in a variety of accessible data formats.
The data sources are IoT devices and IoT systems deployed at the use-case
locations, created in cooperation with the project’s four industrial partners:
OASA, AustriaTech, HOPU and Tecnalia (see Fig. 1). Accordingly, at a general
level, four separate dataset categories will be created. For example, for
Intelligent Transport Systems in Smart Cities, partner AustriaTech will
provide data from the roadside stations within the development phase of
SerIoT’s monitoring; this will include several datasets with C-ITS messages.
**Fig. 1. SerIoT global acquisition architecture.**
The main goal of the work in the project is designing and implementing the
SerIoT system and its components. Specifically, the data generated within WP1
and WP2 will have an impact on architectural formal modelling, analysis and
synthesis, verification of the SDN controller and the Secure Router design,
as well as automated penetration testing.
Two types of data will be generated in WP2. The first type will be models, in
a selected model-checker language. The second type of data generated in this
WP will be the security, safety and quantitative properties of the IoT
communication architecture. The security properties are in accordance with
the rules of confidentiality, integrity, availability, authentication,
authorization and non-repudiation. Quantitative properties could be, for
example, "what is the probability of a failure causing the system to shut down
within 4 hours?" or "what is the probability of the protocol terminating in an
error state, over all possible initial configurations?".
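Such questions are typically formalised in a probabilistic temporal logic such as PCTL, as used by probabilistic model checkers. As an illustrative sketch only (the concrete property language will depend on the model checker selected in WP2), the two questions above could be written as:

```latex
P_{=?}\big[\, \mathrm{F}^{\le 4\,\mathrm{h}}\; \mathit{shutdown} \,\big]
\qquad\text{and}\qquad
P_{=?}\big[\, \mathrm{F}\; \mathit{error} \,\big]
```

where $P_{=?}$ asks for the probability of the enclosed path formula and $\mathrm{F}$ ("eventually") is time-bounded by 4 hours in the first property; the second property is evaluated over all possible initial configurations.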
Furthermore, for task T2.3, formal verification will work at the property
level (a group of output points makes up a property). A common misconception,
which should be avoided, is that formal verification ensures a 100% bug-free
design. Simulation-based evaluations are not as effective in detecting all the
potential issues within today’s complex chips, despite the big progress that
has been achieved in stimulus generation. Besides, extracted netlists of
modern designs are in most cases too large for (re-)simulation, which creates
a gap in the verification flow. On the other hand, given a property, formal
verification exhaustively searches all possible input and state conditions for
failures. Therefore, if the set of formally verified properties collectively
constitutes the specification, it can be assumed that the design meets its
specification. Formal verification can be further classified into two
categories: equivalence checking, which determines whether two implementations
are functionally equivalent, and property checking, which takes a design and a
property (a partial specification of the design) and proves or disproves that
the design has the property.
Based on the results of WP1 and WP2, the SDN network, which is the core of the
SerIoT network system, will be implemented. WP3 partners will develop and
implement the software algorithms and methods described in WP1 and WP2, as
well as algorithms and methods developed within the work of WP4 partners. The
outcome of the work will be the source code of prototype modules extending the
capabilities of the SDN switch and SDN controller, as well as the capabilities
of existing fog architecture elements.
The source code will be written in, e.g., C, C++, Java or Python, accompanied
by files enabling its compilation and deployment on the devices used for
testing (project files, makefiles, etc.) and split into programming projects.
Thus, team members will be able to download, compile and deploy the code for
testing, bug fixing or further development.
A safe and reliable way of depositing the source code is to use version
control system (VCS) repositories. VCS repositories may be stored locally on
partners' servers or on external servers. Some of the solutions will be shared
with the community as open-source projects; in that case, public repositories
such as GitHub or GitLab will be used.
The source code of software implementing the new methods developed in the
SerIoT project will be created mainly within WP3, WP4, WP5 and WP6. Code
development may also concern WP2 (e.g., extensions to model-verification
software), WP7 (e.g., software enabling integration of solutions developed by
partners) and WP8 (e.g., test scripts).
The largest volume of project data will be obtained from the test sites. The
corresponding WP4 (IoT Monitoring, Security and Mitigation) will deal with the
research and development of a cross-layer data collection infrastructure, as
well as with the actual data generated by IoT devices.
More specifically, the data will be delivered by the IoT data collection
infrastructure and will include key measurement mechanisms in order to deal
with the effective management of information related to IoT threat
landscapes. Moreover, a sophisticated multi-layer anomaly detection framework
will run on the datasets and will detect, at an early stage, malicious attacks
on peripheral devices, honeypots and core network nodes. Real-time data will
be processed in order to extract important IoT system features for anomaly
detection monitoring and to generate design-driven security monitors. The aim
is to detect non-consistent IoT behaviours using IoT design artefacts such as
requirements, architecture and behavioural models. Effective and resource-
efficient cross-layer mitigation techniques, deployed on the data, will tackle
emerging vulnerabilities in the IoT landscape. Finally, the data, processed
through a robust cross-layer Decision Support framework, will assist in the
identification of malicious activities and attacks and in root-cause analysis
related to the IoT-enabled ecosystem.
## Types and formats of datasets
The data produced by SerIoT include the following categories: experimental
data (related to data from the test sites), models, simulations and software.
At the current, early stage of the project, the final list of datasets,
formats and access rules cannot be predicted in detail. Most of the project
data will be related to testing at the test sites, i.e. real IoT environments
(according to the list of assumed use cases and scenarios). The general SerIoT
data types are presented in Table 1.
**Table 1. General SerIoT data types.**
<table>
<tr>
<th>
#
</th>
<th>
Datasets
</th>
<th>
Related WP
</th> </tr>
<tr>
<td>
1
</td>
<td>
Models, system design, specifications
</td>
<td>
WP1, WP2
</td> </tr>
<tr>
<td>
2
</td>
<td>
Repository of codes, code documentation
</td>
<td>
WP2, WP3, WP5, WP6
</td> </tr>
<tr>
<td>
3
</td>
<td>
UC1: Surveillance
</td>
<td>
WP7, WP8
</td> </tr>
<tr>
<td>
4
</td>
<td>
UC2: Intelligent Transport Systems in Smart Cities
</td>
<td>
WP7, WP8
</td> </tr>
<tr>
<td>
5
</td>
<td>
UC3: Flexible Manufacturing Systems
</td>
<td>
WP7, WP8
</td> </tr>
<tr>
<td>
6
</td>
<td>
UC4: Food Chain
</td>
<td>
WP7, WP8
</td> </tr> </table>
The types and formats of the anonymized data collected for SerIoT will differ
for each pilot use case. For example, HOPU will provide data regarding food-
chain supplies, such as temperature or humidity, while AustriaTech will
provide data regarding vehicle traffic or emergencies on the road. The related
datasets will consist of C-ITS messages captured by a test vehicle with the
use of a dedicated application. These datasets will be provided as Wireshark
PCAP files with the same capture protocol stack:
ETH/GN/BTP/<target>, “target” being <CAM|DENM|IVI|SPAT|MAP>, with the
GeoNetworking/BTP stack transport type Single-Hop Broadcasting (SHB).
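For illustration, such capture files can be inspected with standard tooling. The Python sketch below uses scapy, which has no built-in GeoNetworking/BTP dissectors, so it only counts Ethernet frames per EtherType as a first sanity check; the file name is a placeholder.

```python
# Illustrative sketch: first-pass inspection of a C-ITS PCAP with scapy.
from collections import Counter
from scapy.all import rdpcap

packets = rdpcap("cits_capture.pcap")  # placeholder file name
ethertypes = Counter(
    hex(pkt.type) for pkt in packets if hasattr(pkt, "type")
)
# GeoNetworking frames are expected under EtherType 0x8947.
print(len(packets), "frames:", ethertypes)
```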
All these data will be collected by the data acquisition platform (WAPI
server) in order to be processed by the different modules.
Gruventa will provide data collected from the vehicle (truck), containing the
truck’s information (vehicle ID, total km covered by the vehicle, partial km,
GPS status, GPS position, date, time, dashboard alerts, dashboard light
status, on-board temperature, outdoor temperature, inside temperature,
humidity, VOC level).
In order to perform the evaluation in the Automated Driving scenario
(TECNALIA), different performance indicators will be assessed using a range of
measures that will be monitored and logged. For this, both sensors and
questionnaires will be used, depending on the nature of the performance
indicator; here a sensor is understood as any method of obtaining relevant
data in the tests. The measures required to calculate and evaluate the
performance indicators will be defined in a validation matrix. Raw data will
be acquired through sensors and then post-processed to obtain derived
measures, and the data coming from different data loggers will be synchronized
in order to have coherent global registers from TECNALIA’s pilot site. The
data will then be logged to a local database following a data format and table
structure agreed by the project partners. TECNALIA will also store the logging
files on its own local server. After all logged data have been stored on the
local server, these files will be sent to the SerIoT Central Data Repository
using FTP. This repository is already available, and there is a directory for
each pilot site with enough space to store all the data to be logged for the
project. This will allow partners (within the evaluations in WP8) to compile
early reports, and it also provides a backup service for the pilot sites.
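As a hedged sketch of this transfer step (host, credentials and paths are placeholders, not project configuration), a pilot-site log file could be uploaded with Python's standard ftplib as follows:

```python
# Illustrative sketch: uploading a pilot-site log file over FTP.
from ftplib import FTP

with FTP("central-repository.example.org") as ftp:  # placeholder host
    ftp.login(user="pilot_user", passwd="***")      # placeholder credentials
    ftp.cwd("/tecnalia")                            # assumed per-pilot directory
    with open("log_2018-06-01.csv", "rb") as fh:
        ftp.storbinary("STOR log_2018-06-01.csv", fh)
```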
Detailed information on the use cases and application scenarios is currently
being formulated (more detailed assumptions will be made in the second version
of D1.2, to be issued by M12), and the detailed analysis will be finished with
the second issue of the D1.2 deliverable. Thus, this first version of the
document presents the basic assumptions.
## Origin of data
In SerIoT, the assumed origins of data are:
* Honeypots (WP5)
* SDN router packet inspection (WP3, WP5)
* SDN router high-level communications (WP3)
* Different IoT traffic (devices, vehicular IoT, etc.) (WP4, WP7, WP8)
The honeypots will provide data dynamically to the detector algorithms, in order to detect anomalies and malware installed on devices. Different IoT traffic is the traffic collected from the test sites (data collected from IoT devices, sensors, actuators and additional installed modules). SDN router packet inspection data results from the flows of data that are analyzed, collected and sent to the controllers by the network nodes. SDN router high-level communications are collected by the higher layers of the SerIoT framework, e.g. timely information to/from the analytics module, the root cause analysis & mitigation engine, and the multi-level visualization engine.
## Re-use of existing data
In specific cases, datasets already exist, e.g. obtained by industrial partners from existing IoT environments. For example, for the Smart City use case (with key partner AustriaTech), some C-ITS data already exist and will be used to develop the SerIoT monitoring application. These data can be used to improve the recognition of incorrect information and thereby to monitor both the incoming and outgoing communications of the Road Side Stations.
Those data were previously captured during testing and evaluation of C-ITS
projects which use the ETSI standardized C-ITS specifications.
## Expected size of the data
Dataset size might vary, depending on the pilot and the amount of information sent to the data collection infrastructure by each IoT sensor. The dataset size also corresponds to the number of messages collected during operation and to the needs of the monitoring device. The expected size of the produced use case datasets will be between 5 MB and 5 GB.
The other datasets (related to WP1-3) are code repositories, model descriptions, and modelling and simulation results. Their expected sizes will be relatively small, about 1 GB.
The information about expected and actual sizes of the data will be updated.
## Data utility
Apart from the internal need to use the datasets (in order to develop SerIoT components, e.g. the monitoring application for C-ITS Road Side Stations, and to test them), the data may be useful for research purposes in future projects with an interest in IoT devices and cyber-security. Moreover, the datasets will include data related to automated transport and will therefore be useful to researchers in automated transport focused on secure communications for safety.
# FAIR data
## Data management policy
In general, in line with the EU's guidelines regarding the DMP [1], each dataset collected, processed and/or generated in the project includes the following elements:
1. Dataset reference and name
2. Dataset description
3. Standards and metadata
4. Data sharing
5. Archiving and preservation
All datasets in the project repositories (public and confidential) will be supplemented with additional metadata, identifiers and keywords as described in the following subsection, where we provide a generic description of dataset elements in order to ensure their understanding by the partners of the consortium.
## Making data findable, including provisions for metadata
At first, databases are created and used by the corresponding WPs and maintained in local repositories of the responsible partners. At this stage, datasets will be confidential and only the members participating in the deployment of the WPs or the consortium members will have access to them. Selected data will then be made accessible through the data repository (see 2.3.1). A more detailed specification of the datasets that will be available to the public will be presented in the updated version of the DMP.
A DOI is assumed to be assigned to key datasets (at least for the central repository) for effective and persistent citation. The DOI will be assigned when a dataset is uploaded to the repository and can be used in any relevant publications to direct readers to the underlying dataset.
### Data identification
SerIoT will follow the minimum DataCite metadata standard [2] in order to make the data infrastructure easy to cite, as a key element in the process of research and academic discourse. The recommended DataCite format for data citation is relatively simple:
_Creator (PublicationYear). Title. Publisher. Identifier_
It may also be desirable to include information about two optional properties,
Version and Resource Type. If so, the recommended form is as follows:
_Creator (PublicationYear). Title. Version. Publisher. ResourceType.
Identifier_
E.g.
Organisation for Economic Co-operation and Development (OECD) (2018-04-06). Main Economic Indicators (MEI): Finance | Country: Argentina | Indicator ID: CCUS, 01/1959 - 12/2017. Data Planet™ Statistical Datasets: A SAGE Publishing Resource [Dataset]. Dataset-ID:
062-003-004. https://doi.org/10.6068/DP163F9ED671E6
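For illustration, the citation string can be assembled mechanically from the metadata fields; the following minimal Python sketch is not an official DataCite tool.

```python
# A minimal sketch of assembling the recommended DataCite citation string
# from metadata fields; not an official DataCite library.
def datacite_citation(creator, year, title, publisher, identifier,
                      version=None, resource_type=None):
    parts = [f"{creator} ({year}).", f"{title}."]
    if version:
        parts.append(f"{version}.")
    parts.append(f"{publisher}.")
    if resource_type:
        parts.append(f"{resource_type}.")
    parts.append(identifier)
    return " ".join(parts)

print(datacite_citation("OECD", 2018, "Main Economic Indicators (MEI)",
                        "Data Planet", "https://doi.org/10.6068/DP163F9ED671E6",
                        resource_type="Dataset"))
```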
### Naming convention
The SerIoT naming convention for project datasets comprises the following:
1. A prefix "SerIoT" indicating a SerIoT dataset.
2. A unique chronological number of the dataset
3. The title of the dataset
4. Each new version of a dataset will be allocated a version number which will, for example, start at v1_0.
5. A unique identification number linking e.g. with the dataset work package and/or deliverable/task, e.g., "WP4_D4.3".
E.g.
SerIoT.11234.serSDN_edge_node_traffic.v1_12.WP3_T2
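Such names can be validated automatically; the following Python sketch is a hedged interpretation of the convention, with a regular expression inferred from the example above.

```python
# A hedged regex check of the naming convention; the exact pattern is an
# assumption inferred from the example SerIoT.11234.serSDN_edge_node_traffic.v1_12.WP3_T2.
import re

NAME_RE = re.compile(
    r"^SerIoT\."                          # fixed prefix
    r"(?P<number>\d+)\."                  # chronological dataset number
    r"(?P<title>\w+)\."                   # dataset title
    r"v(?P<major>\d+)_(?P<minor>\d+)\."   # version, e.g. v1_12
    r"(?P<link>WP\d+(_[A-Za-z0-9.]+)?)$"  # WP/deliverable/task link, e.g. WP3_T2
)

m = NAME_RE.match("SerIoT.11234.serSDN_edge_node_traffic.v1_12.WP3_T2")
assert m and m.group("link") == "WP3_T2"
```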
### Version number
At the general level, simple version numbering is assumed. A version number consists of version/subversion (e.g. measurements_1.12). For specific cases, selected database versioning best practices are recommended and applied (e.g. for the integration of source code with external databases in [3]).
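For instance, such version strings can be split programmatically; a minimal sketch assuming the measurements-style naming above:

```python
# A hedged sketch for extracting version/subversion from names such as
# "measurements_1.12"; the naming pattern is the one assumed above.
import re

def parse_version(name: str) -> tuple:
    m = re.search(r"_(\d+)\.(\d+)$", name)
    if not m:
        raise ValueError(f"no version suffix in {name!r}")
    return int(m.group(1)), int(m.group(2))

assert parse_version("measurements_1.12") == (1, 12)
```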
For the WP2 purposes (IoT Architectural Analysis & Synthesis), two stages of formal modelling and analysis are assumed; therefore, there will be two version numbers. The first stage is preliminary, running from M1 to M12. Within this stage, the infrastructure will be set up and studied, and formal modelling at the architectural and high-performance levels will be conducted. The second stage, starting from M13, will perform formal modelling and verification at code level. More specifically, scripts will be run in order to observe whether particular parameters or constraints are verified.
For code versioning, the compilation number will be included in the version number. A code versioning system (e.g. SVN) will also be adopted. Tutorials and examples can be found, e.g., in the SVN Tutorial [5].
### Metadata
The specific metadata content is presented in the table below. The content (see Table 2) is preliminary, contains generic data and will be further defined in future versions of the DMP.
The assumed file format for metadata is XML. The detailed metadata structures describing specific content will be developed and presented in an updated version of the DMP. Additionally, content-specific metadata are linked from the generic description.
**Table 2. Dataset generic description fields.**
<table>
<tr>
<th>
**Dataset Name**
</th>
<th>
Name according to naming conventions
</th> </tr>
<tr>
<td>
**Title**
</td>
<td>
The specific title of the dataset
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
The keywords associated with the dataset
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
Related Work Package of the project
</td> </tr>
<tr>
<td>
**Dataset Description**
</td>
<td>
A brief description of the dataset
</td> </tr>
<tr>
<td>
**Dataset Benefit**
</td>
<td>
The benefits of the dataset
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Type of dataset (XML, JSON, XLSX, PDF, JPEG, TIFF, PPT)
</td> </tr>
<tr>
<td>
**Expected Size**
</td>
<td>
The approximate size of the dataset
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
How/why was the dataset generated
</td> </tr>
<tr>
<td>
**Repository**
</td>
<td>
Expected repository to be submitted
</td> </tr>
<tr>
<td>
**DOI (if set)**
</td>
<td>
The DOI assigned (if valid) when dataset has been deposited in the repository
</td> </tr>
<tr>
<td>
**Date of Submission**
</td>
<td>
The date of submission to the repository once it has been submitted
</td> </tr>
<tr>
<td>
**Date of Update**
</td>
<td>
The date of update
</td> </tr>
<tr>
<td>
**Publisher/Responsible partner**
</td>
<td>
Lead partners responsible for the creation of the dataset
</td> </tr>
<tr>
<td>
**Version**
</td>
<td>
Version/ subversion number (to keep track of changes to the datasets)
</td> </tr>
<tr>
<td>
**Link to additional metadata**
</td>
<td>
Link to content specific metadata (will be defined in next versions of DMP)
</td> </tr> </table>
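As an illustration of the assumed XML metadata format, the following Python sketch serialises a few of the Table 2 fields; the element names are assumptions, since the concrete schema will only be defined in an updated version of the DMP.

```python
# A minimal sketch of serialising Table 2 fields as an XML metadata record;
# element names are illustrative, the concrete schema is still to be defined.
import xml.etree.ElementTree as ET

fields = {
    "DatasetName": "SerIoT.11234.serSDN_edge_node_traffic.v1_12.WP3_T2",
    "Title": "SDN edge node traffic",
    "Keywords": "SDN, IoT, traffic",
    "WorkPackage": "WP3",
    "Type": "JSON",
    "ExpectedSize": "1GB",
    "Version": "1_12",
}

record = ET.Element("SerIoTDataset")
for name, value in fields.items():
    ET.SubElement(record, name).text = value
print(ET.tostring(record, encoding="unicode"))
```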
### Dataset description
No formal requirements for the dataset description have been formulated yet. In particular, the description will depend on the type of dataset. However, dataset publishers are recommended to provide the following information:
* The nature of the dataset (scope of data)
* The scale of the dataset (amount of data)
* To whom could the dataset be useful
* Whether the dataset underpins a scientific publication (and which publications)
* Information on the existence (or not) of similar datasets
* Possibilities for integration with other datasets and reuse
It is also possible that the description will have an additional internal structure (XML).
### Keywords
Search keywords will be provided when the dataset is uploaded to the
repository, which will optimize possibilities for reuse of the data. Keywords
will be part of a general metadata structure.
## Making data openly accessible
In general, research data is owned by the partner who generates it. Each partner has to disseminate its results as soon as possible, unless there is a legitimate interest in protecting them. WP leaders will propose the access procedure for their developed datasets and the conditions for making datasets public (if applicable), and will specify the embargo periods for all datasets that will be collected, generated, or processed in the project. In case a dataset cannot be shared, the reasons will be stated (e.g. ethical, personal-data rules, intellectual property, commercial, privacy-related, security-related, etc.).
A partner that intends to disseminate its scientific results has to give advance notice to the other partners (at least 45 days), together with sufficient information on the results it will disseminate (according to the Grant Agreement). Research data that underpins scientific publications should by default be deposited in the SerIoT data repository as soon as possible, unless a decision has been taken to protect the results. Specifically, research data needed to validate the results in scientific publications should be deposited in the data repository at the same time as the publication.
### Data repository
Data and the associated metadata, documentation and code will initially be deposited in the repositories created by each SerIoT pilot. For example, AustriaTech will provide credentials to WP4 partners that need access to its datasets, and HOPU will store data securely on a platform based on the FIWARE and MongoDB architecture. The datasets can then be transferred to the SerIoT data repository (https://opendata.iti.gr/seriot), which is established by partner CERTH (see Fig. 2).
**Fig. 2. SerIoT central repository (screenshot of main page).**
The datasets will be confidential and only the task/consortium members will have access to them. If a dataset, or specific portions of it (e.g. metadata, statistics, etc.), is decided to become openly accessible, it will be uploaded to the SerIoT open data platform. These data will be anonymized in order to avoid any potential ethical issues with their publication and dissemination.
### Methods or software tools needed to access the data
There are two ways of accessing the data repository. Firstly, (email and password) credentials are needed in order to have administrator privileges; such privileges include online editing of the datasets, adding new datasets and downloading public datasets. On the other hand, a single button tagged "ACCESS" gives anonymous access to the public datasets, solely for the purpose of downloading them.
Data stored in the form of XML or JSON files can be processed with widely available libraries. For some scientific data, MATLAB or Octave has to be used. To use and compile data in the code repositories, the related software platform has to be used (Linux with Java, Python, C++, etc.). Information about platforms will be included in the content-specific metadata.
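For illustration, a minimal Python sketch of such processing with standard-library tools; both file names are placeholders.

```python
# A tiny illustration of the point above: the standard library is enough for
# XML and JSON payloads; both file names are placeholders.
import json
import xml.etree.ElementTree as ET

with open("dataset.json") as fh:
    records = json.load(fh)                # list/dict of measurements

root = ET.parse("metadata.xml").getroot()  # the XML metadata record
print(len(records), root.tag)
```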
### Data access committee
The Grant Agreement (GA) does not describe the existence of a data access committee. The access policies will be determined by the owners of the data in agreement with the coordinator and the related WP leaders/partners.
### Identity of the person accessing the data
Confidential datasets are stored in each responsible partner's local repository and accessed using credentials. Only when the datasets are agreed to become public are the data uploaded to the SerIoT open access repository. Each pilot use case stores its datasets in local repositories at its premises. The partner provides credentials to other partners that need access to its confidential datasets, so the person accessing them is identified. Only when the pilots, together with the rest of the consortium, agree are the datasets anonymized and uploaded to the central open access SerIoT repository (see 2.3.1).
The public part is openly accessible and no identification of the person accessing the data is assumed. Valid credentials identifying the accessing person are required for editing or uploading new data.
## Making data interoperable
### Interoperability of data
The data produced in the SerIoT project are interoperable, allowing data exchange and re-use, initially only inside the SerIoT consortium. Since the SerIoT consortium is composed of fifteen partners from eight different countries, data exchange and re-use will thus already be accomplished across institutions and borders.
The use cases data will be first collected by the data acquisition platform,
which consists of a WAPI server. Then, the WAPI server will be responsible for
the data distribution amongst the different modules, such as the Analytics and
the Decision Support System (DSS) module or the Mitigation Engine module. This
way the data is made interoperable, allowing re-use between the SerIoT modules
(for the use cases data modules developed within WP4).
### Metadata vocabularies, standards or methodologies for interoperability
A metadata schema which defines constraints on metadata records is a fundamental resource for metadata interoperability. Existing metadata schemas are assumed to be used when developing a new schema, in order to minimize newly defined metadata vocabularies [6].
Key concepts considered are DSP, as a formal basis of the metadata schema, and LOD, as a framework to connect metadata schema resources. We assume to study and apply two approaches:
* search metadata terms and description set profiles using resources registered at schema registries and the like,
* search metadata terms using metadata instances included in a LOD dataset.
## Increase data re-use (through clarifying licenses)
### Licensing and data sharing policies
In general, the coordinator partner (IITIS), along with all work package leaders, will define how data will be shared. WP leaders will propose the access procedure for the developed datasets, set the conditions for making them public (if applicable), set the embargo periods, and specify the necessary accompanying software and other tools enabling re-use, for all datasets that will be collected, generated, or processed in the project. In case a dataset cannot be shared, the reasons will be stated (e.g. ethical, personal-data rules, intellectual property, commercial, privacy-related, security-related, etc.). The plan will be prepared in advance and presented in an updated version of the DMP.
Detailed data sharing policies have not been decided yet, but the European Union Public License (EUPL) v1.1 is considered [4], as a license created and approved by the European Commission.
### Data availability for re-use
The time for making the data available for re-use has not been decided yet, nor has it been decided for how long the data will remain re-usable.
# Allocation of resources
The data repository has been created by the responsible partner (CERTH) to the extent of making data 'FAIR'. Valid credentials are required to edit the public datasets stored in the repository or to upload new ones, whereas merely downloading the data does not require any credentials. Furthermore, the repository uses the HTTPS protocol, which supports authentication of the accessed repository and protects the privacy and integrity of the exchanged data in transit. The coordinator partner (IITIS) is responsible for the data management.
## Long term data preservation
Resources for long-term data preservation are intended to be discussed in
future meetings of the SerIoT project. The details will be presented in the
updated version of DMP.
The long-term preservation of datasets open to the public is assumed, by archiving them for at least 5 years after the end of the project. The partners will decide on and describe the procedures that will be used to ensure the long-term preservation of the remaining datasets.
# Data security
Pilot/use case data will, in the first period of the project, be stored in the use case partners' repositories. For WP4 data, the CIA triad principles will be followed. The CIA triad comprises confidentiality, integrity and availability, which are at the heart of information security. Confidentiality is the property that datasets are not made available or disclosed to unauthorized individuals, entities, or processes. Integrity stands for maintaining and assuring the accuracy and completeness of data over its entire lifecycle. Availability, lastly, means ensuring that the data is available whenever it is needed.
A central repository (with a valid HTTPS certificate) created by CERTH will be maintained for long-term preservation. The portion of the datasets that is not restricted by intellectual property rights will become openly accessible in this repository, whereas the rest will remain confidential and will not be uploaded. The repository will be backed up periodically.
# Ethical aspects
The SerIoT project has taken into account ethical and legal issues that can have an impact on data sharing, and has dedicated a WP (WP11: Ethics requirements) to ensuring the compatibility of the activities carried out with ethical standards and regulations. Under this WP, the relevant complex legal and ethics issues will be tackled. Moreover, deliverable D11.1 (titled “ _H - Requirement No. 1_ ”) points out ethical issues, including informed consent for data sharing and long-term preservation rules (included in questionnaires concerning personal data). In order to enable the widest possible re-use, the data will be anonymized to avoid any potential ethical issues with their further distribution. Since in most cases the datasets will not contain personal information (name, surname), data sharing can extend to third parties. In the case of confidential datasets containing sensitive information, re-use by third parties will be restricted in order to avoid any potential ethical issues.
# Other issues
## National and EU regulations
Regulations of the country of origin of the dataset, together with the regulations of the country where the data will be processed, will be followed. More specifically, D11.1 points out all the national and EU regulations to be followed.
# Conclusions
The purpose of this document was to provide the initial plan for managing the data generated and collected during the SerIoT project. Specifically, the DMP described the data management life cycle for all datasets to be collected, processed and/or generated by the project. Following the EU's guidelines regarding the DMP, this document will be updated. The current version was created at an early stage of the project (M6), and details regarding the data that will be produced by the use cases have not been formulated yet.
Datasets from the test sites will be supplemented with an ID, metadata and (if applicable) the related software and documentation. It is assumed that at least one dataset from each scenario will be provided to the public, available through the central SerIoT repository. Finally, datasets will be preserved on the web server after the end of the project.
# Data sharing
In general all data produced by LIBRA will be securely stored at the CRG by
the Bioinformatics Unit (Dr. Ponomarenko, Head of the Unit). Nevertheless,
depending on which type of data we process the sharing plans change.
Each LIBRA WP should have access to the data through IDs and passwords.
1. For SurveyMonkey staff data, CRG will securely store the data and give access through IDs and passwords, first to P7, P5 and P6 for them to analyse it. Once the data are analysed, they will be shared with all LIBRA partners, most likely via the LIBRA Professional Dropbox account.
2. For Project Implicit data, CRG will have access to the data recorded and stored by Project Implicit in the US using a password-protected web account. Project Implicit will analyse the data and give access to the CRG Project Manager. Project Implicit is only allowed to store and analyse data; further analysis and utilisation of these data is the responsibility of LIBRA. After a detailed second analysis, the data will be shared through IDs and passwords with the LIBRA partners.
3. Data from each LIBRA IO will be analysed by ASDO and then shared with all LIBRA WPs through IDs and passwords.
Data sets cannot be entirely shared with the public in their original format, so that no data can be traced back in any way (e.g. to a female PI researcher in an institute with only a few female PIs). Thus, data made public through publications, news on the website and the like will be in formats that protect anonymity. For example, research data made public can be visualised as histograms, informing the public while preserving anonymity.
# Data Archiving and Preservation
The raw data will be archived for 20 years and the intermediate data will be preserved for at least 2 more years after the end of the project in CRG's data repository. Raw and intermediate data stored in the UPF Open Access Repository will be selected beforehand depending on their output, always guaranteeing anonymity.
There are no associated costs for archiving the intermediate data on the infrastructure website of CRG, since the amount of data is not large. The costs for archiving data results in the open access repository are also included in the ordinary fees CRG pays to the UPF library. The costs for storing and managing data by Dr. Hermoso and Dr. Ponomarenko (Bioinformatics Unit) still have to be determined once the amount of data and the datasets we are dealing with are clear. These costs are all eligible since we are participating in the ORD pilot experiment.
2. **Initial Data Management Plan**
**Project Data Contact:** Amelie Driemel ([email protected])
**Specification of the expected datasets created during SponGES:**
**2.1 Baseline environmental data**
**Data set description:**
During various campaigns with the research vessel G.O. Sars (Norway), and probably other vessels, various baseline datasets will be assembled:
* CTD/Rosette (=>water chemistry profiles),
* Multibeam/Synthetic aperture sonars (=>bathymetry),
* Remotely Operated Vehicle (=>sea-bottom pictures) and
* Sediment cores (=> historic sponge spicule abundance, sediment chemistry, stratigraphic data)
These datasets will serve as a baseline set for all Work Packages and define
the environmental context of the project. Due to the standard instrumentation
used, the datasets will be directly comparable to datasets from other
regions/areas (e.g. _https://doi.pangaea.de/10.1594/PANGAEA.753658_).
**Standards and metadata:**
Standard oceanographic and geoscientific instrumentation will be used
(CTD/Rosettes, ROVs, gravity cores etc.) to obtain the data. The analysis of
ocean water and sediment will take place in the research institutes of the
consortium.
Raw data will be marked differently from quality-controlled data, and the latter will be submitted to PANGAEA, the data publisher for georeferenced data from earth system research (_www.pangaea.de_). The scientist will retain his work-file until he is confident that the data quality is sufficient for archiving. After that, the data will be submitted to PANGAEA and an independent quality check will be performed by the data curator of PANGAEA (units correct? parameter names unique? metadata sufficient?). The data and metadata will then be archived in PANGAEA's relational database. Metadata supplied to PANGAEA are: latitude/longitude, depth, date/time of sample, the PI, the authors of the dataset, dataset title, methodology, link to the article where the data have been used, and specific comments if needed.
PANGAEA supports several metadata standards such as ISO19xxx series, DIF,
Dublin Core, Darwin Core etc. and supports existing metadata portals (e.g.
OAIster, ScientificCommons) to disseminate its data/metadata by using the OAI-
PMH and other standards.
**Data sharing:**
The use of citable and permanent DOIs for all datasets archived in PANGAEA
ensures the longterm availability of SponGES data.
The data will be access restricted during a moratorium period (password
protected datasets), but will already be archived in PANGAEA and will have the
project label "SponGES" for the fast identification of and search for project
data. For the baseline datasets a project access will be created in PANGAEA,
so that all project members can access and use (and cite) the same version for
their own work. At the latest after the moratorium the data will be freely
accessible, citable (DOI) and directly downloadable in tab format (at
_www.pangaea.de_ ).
All data will be shared under the Creative Commons Attribution 3.0 Licence (CC-BY).
**Archiving and preservation (including storage and backup):**
Long-term archiving (>10 years) and a backup of these datasets (and all costs
hereof) are guaranteed by the institutes operating PANGAEA (Alfred-Wegener
Institut, Bremerhaven and Center for Marine Environmental Sciences, Bremen),
see also information above. Each dataset will have a unique and persistent
DOI.
**Sponge genetic, metabolic and microbiological data**
**Data set description:**
The genome of selected sponge species will be analyzed (=> gene code data).
Furthermore, an analysis of the sponge-associated microbes/symbionts (16S
amplicon sequence data) will be conducted. Other expected data include:
Secondary metabolite gene clusters (nucleotide sequences) of for example PKS,
NRPS, saccharides, terpenes, siderophores, lantipeptides other
biotechnologically relevant secondary metabolite gene clusters (by ways of
metagenomics, metatranscriptomics) and enzyme-encoding genes (nucleotide
sequences) of for example halogenases and bioluminescent enzymes and other
biotechnologically relevant enzymes (by ways of metagenomics,
metatranscriptomics).
**Standards and metadata:**
State-of-the-art high-throughput sequencing facilities and the expertise
available in the consortium will be used to obtain the data.
Gene codes will be archived in a standard gene code repository such as
GenBank. Other associated data will be submitted to PANGAEA, data publisher
for georeferenced data from earth system research ( _www.pangaea.de_ ). The
link to the respective GenBank entry will be added to the datasets stored in
PANGAEA (example: _https://doi.pangaea.de/10.1594/PANGAEA.855513_ ). Metadata
supplied to PANGAEA are: Latitude/longitude, water depth, date/time of sample,
the PI, the authors of the dataset, dataset title, methodology, link to
article where data has been used, specific comments if needed.
PANGAEA supports several metadata standards such as ISO19xxx series, DIF,
Dublin Core, Darwin Core etc. and supports existing metadata portals (e.g.
OAIster, ScientificCommons) to disseminate its data/metadata by using the OAI-
PMH and other standards.
**Data sharing:**
The use of citable and permanent DOIs for all datasets archived in PANGAEA
ensures the longterm availability of SponGES data.
The data will be access restricted during a moratorium period (password
protected datasets), but will already be archived in PANGAEA (or GenBank). At
the latest after the moratorium the data will be freely accessible, citable
and directly downloadable in tab format.
All data will be shared under the Creative Commons Attribution 3.0 Licence (CC-BY). (Exceptions could be datasets which are relevant to the biotechnological potential of sponges, in which case the legal grounds have to be determined by the institutes first.)
**Archiving and preservation (including storage and backup):**
Long-term archiving (>10 years) and a backup of PANGAEA datasets (and all
costs hereof) are guaranteed by the institutes operating PANGAEA (Alfred-
Wegener Institut, Bremerhaven and Center for Marine Environmental Sciences,
Bremen), see also information above. Each dataset will have a unique and
persistent DOI.
GenBank also is an open access database aimed at the long-term availability of
nucleotide sequence data ( _http://www.ncbi.nlm.nih.gov/genbank/_ ). Each
dataset here has a unique GenBank
Identifier.
**Element flux data**
**Data set description:**
In in-situ and ex-situ experiments the element fluxes and budgets
(sources/sinks) of carbon, nitrogen and silicon through deep-sea sponge
grounds will be determined.
**Standards and metadata:**
Standard instrumentation will be used (e.g. benthic flux chambers, VacuSip
system for in-situ sampling) to obtain the data. The analysis of the water and
sediment samples will take place in the research institutes of the consortium.
Raw data will be marked differently from quality-controlled data, and the latter will be submitted to PANGAEA, the data publisher for georeferenced data from earth system research. The scientist will retain his work-file until he is confident that the data quality is sufficient for archiving. After that, the data will be submitted to PANGAEA and an independent quality check will be performed by the data curator of PANGAEA (units correct? parameter names unique? metadata sufficient?). The data and metadata will then be archived in PANGAEA's relational database (_www.pangaea.de_). Metadata supplied to PANGAEA are (for ex-situ experiments some do not apply): latitude/longitude, water depth, date/time of sample, the PI, the authors of the dataset, dataset title, methodology, link to the article where the data have been used, and specific comments if needed.
PANGAEA supports several metadata standards such as ISO19xxx series, DIF,
Dublin Core, Darwin Core etc. and supports existing metadata portals (e.g.
OAIster, ScientificCommons) to disseminate its data/metadata by using the OAI-
PMH and other standards.
**Data sharing:**
The use of citable and permanent DOIs for all datasets archived in PANGAEA
ensures the longterm availability of SponGES data.
The data will be access restricted during a moratorium period (password
protected datasets), but will already be archived in PANGAEA and will have the
project label "SponGES" for the fast identification of and search for project
data. At the latest after the moratorium the data will be freely accessible,
citable (DOI) and directly downloadable in tab format (at www.pangaea.de). All data will be shared under the Creative Commons Attribution 3.0 Licence (CC-BY).
**Archiving and preservation (including storage and backup):**
Long-term archiving (>10 years) and a backup of these datasets (and all costs
hereof) are guaranteed by the institutes operating PANGAEA (Alfred-Wegener
Institut, Bremerhaven and Center for Marine Environmental Sciences, Bremen),
see also information above. Each dataset will have a unique and persistent
DOI.
**Model data**
**Data set description:**
Various environmental and ecological models (generic, dynamic) will be
developed and applied on SponGES baseline data and on data obtained from the
literature (e.g. bycatch statistics, historic distribution of sponges,
physiology/biomass information). Resulting datasets will be e.g.:
* Predictions of fishing impacts on sponge grounds
* Sponge recovery trajectories following significant disturbance scenarios
* Present and future species distribution models (maps of likely distribution)
* Dynamic food-web and biogeochemical model (to assess sponge-ecosystem functioning)

The datatypes will either be maps, raster/shape files, or complete sets (zip files) of model codes and underlying data tables (e.g. as in: _https://doi.pangaea.de/10.1594/PANGAEA.842757_).
**Standards and metadata:**
The documentation of the model used and the algorithms applied will be stored
together with the model output. Other metadata stored with the model data are
the PI, the authors of the dataset, dataset title, methodology, and the link
to the article where model data has been used (if applicable).
**Data sharing:**
The use of citable and permanent DOIs for all datasets archived in PANGAEA
ensures the longterm availability of SponGES data.
The model data will be access restricted during a moratorium period (password
protected datasets), but will already be archived in PANGAEA and will have the
project label "SponGES" for the fast identification of and search for project
data. For those model outputs needed for other work packages a project access
will be created in PANGAEA, so that all project members can access and use
(and cite) the same version for their work. At the latest after the moratorium
the data will be freely accessible, citable (DOI) and directly downloadable in
tab format (at _www.pangaea.de_). All data will be shared under the Creative Commons Attribution 3.0 Licence (CC-BY).
**Archiving and preservation (including storage and backup):**
Long-term archiving (>10 years) and a backup of these datasets (and all costs
hereof) are guaranteed by the institutes operating PANGAEA (Alfred-Wegener
Institut, Bremerhaven and Center for Marine Environmental Sciences, Bremen),
see also information above.
Each dataset will have a unique and persistent DOI.
# 1 INTRODUCTION
This document contains the initial version of the Data Management Plan (DMP).
The final version of this document will be available as “D2.4 Data Management
Plan” in M18. This document is complemented by “D7.2 IPR and Data Protection
Management”, to be delivered by M6 as well.
The Data Management Plan adheres to and complies with the _H2020 Data
Management Plan – General Definition_ given by the EC online, where the DMP is
described as follows:
“A DMP describes the data management life cycle for the data to be collected,
processed and/or generated by a Horizon 2020 project. As part of making
research data findable, accessible, interoperable and reusable (FAIR), a DMP
should include information on:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved (including after the end of the project)”
Section 2 follows the template proposed by the EC 1 . Lynx adopts policies
compliant with the official FAIR guidelines [1] (findable, accessible,
interoperable and re-usable).
Lynx participates in the Open Research Data Pilot (ORDP) and is obliged to deposit the produced research data in a research data repository. For this purpose, the Zenodo repository has been chosen, which exposes the data to OpenAIRE and guarantees its long-term preservation. The descriptions of the most relevant datasets for compliance have been published in a Lynx Data Portal, using CKAN technology. Metadata is provided for every relevant dataset, and data is selectively provided whenever it can be republished without license restrictions and its relevance for the project is high. This deliverable also describes a catalogue of relevant legal and regulatory data models and a strategy for the homogenisation of the data sources.
Finally, the document describes the concept of a Multilingual Legal Knowledge Graph for Compliance, or Legal Knowledge Graph for short (Section 5), which is the backbone on which the Lynx services will rest (Figure 1).
**Figure 1.** Schematic description of the Multilingual Legal Knowledge Graph for Compliance, combining European Directives (general legal goals for every European Member State), European Regulations (legislative acts binding in every European Member State), national legislation (every Member State has different national and regional legislation in force), industry standards (technical documents in occasions necessary to achieve certification) and case law (judgements, sentences).
# 2 DATA MANAGEMENT PLAN
This Section is the Initial Data Management Plan. It follows the template
proposed by the EC and is applicable to the data used in or generated by Lynx,
with the sole exception of pilot-specific data, whose management may be
further specified in per-pilot DMPs. If the implementation of the pilots
required a different DMP, either new DMP documents or new additions to this
document shall be defined by the pilot leaders and the resulting work included
in D2.4.
## 2.1 DATA SUMMARY
**Purpose** . The main objective of Lynx is “to create an ecosystem of smart
cloud services to better manage compliance, based on a legal knowledge graph
(LKG) which integrates and links heterogeneous compliance data sources
including legislation, case law, standards and other aspects”. In order to
deliver these smart services, data will be collected and integrated into a
Legal Knowledge Graph, to be described in more detail in Section 3.
**Formats**. The very nature of this project makes the number of formats too high to be foreseen in advance. However, the project will be keen on
gathering data in RDF format or producing RDF data itself. RDF will be the
format of choice for the meta model, using standard vocabularies and
ontologies as data models. More details on the initially considered data
models are given in Section 5.
**Data reuse** . The core part of the LKG will be created by reusing existing
datasets, either copying them into the consortium servers (only if strictly
needed) or using them directly from the sources.
**Data origin** . Although Lynx will be greedy in gathering and linking as
much compliance-related data as possible from any possible source, it can be
foreseen that the Eur-Lex portal will be the principal data source. Users of
the Pilots may contribute their own data (e.g. private contracts, paid
standards), which will be neither included into the LKG nor made publicly
available.
**Data size**. The strong reliance of Lynx on external open data sources minimizes the amount of data that Lynx will have to physically store. No massive data storage infrastructure is foreseen.
**Data utility** . Data will be useful for SMEs and EU citizens alike through
different portals.
## 2.2 FAIR DATA
### 2.2.1 Making data findable, including provisions for metadata
**Discoverability**. Data will be discoverable through a dedicated data portal (http://data.lynx-project.eu), further described in Section 4. Data assets will be identified with a harmonized policy to be defined in the forthcoming months.
**Naming convention** . A specific URI minting policy will be used to identify
data assets. The policy will be specified in the forthcoming months after the
publication of this deliverable.
**Search keywords** . Open datasets described in the Lynx data portal are
findable through standard forms including keyword search.
**Versioning** . Versioning is an intrinsic part of the URI strategy to be
devised.
**Metadata** . Metadata records describing each dataset will be downloadable
as DCAT-AP entries.
### 2.2.2 Making data openly accessible
**Open data** : **data in the LKG** .
The adopted approach is “as open as possible, as closed as necessary”. Data assets produced during the project will preferably be published as open data. Nevertheless, during the project some datasets will be created from existing private resources (e.g. dictionaries by KDictionaries), whose publication would irremediably damage their business model. These datasets will not be released as open data.
Datasets in the LKG will in any case be published along with a license. This license will be specified as a metadata record in the data catalog, which can also be exported as RDF using the appropriate vocabulary terms (dct:license) and eventually using machine-readable licenses.
**Open data: research data.**
In December 2013, the EC announced their commitment to open data through the
Pilot on Open Research Data, as part of the Horizon 2020 Research and
Innovation Programme. The Pilot’s aim is to “improve and maximise access to
and reuse of research data generated by projects for the benefit of society
and the economy”. In the frame of this Pilot on Open Research Data, results of
publicly-funded research should be disseminated more broadly and faster, for
the benefit of researchers, innovative industry and citizens.
The Lynx project chose to participate in the Open Research Data Pilot (ORDP).
Consequently, publishing as “open” the digital research data generated during the project is a contractual obligation (GA Art. 29.3). This provision does not cover data derived from the partners' private data, whose openness would endanger their economic viability and jeopardize the Lynx project itself (which is sufficient reason not to open the data, as per GA Art. 29.3).
Every Lynx partner will ensure Open Access to all peer-reviewed scientific
publications relating to its results. Lynx will use Zenodo as the online
repository (https://zenodo.org/communities/lynx/) to upload public
deliverables and possibly part of the scientific production. Zenodo is a
research data repository created by OpenAIRE to share data from research
projects. Records are indexed immediately in OpenAIRE, which is specifically
aimed to support the implementation of the EC and ERC Open Access policies.
Nevertheless, in order to avoid fragmentation, the Lynx webpage will act as
the central information node.
The following categories of outputs require Open Access, together with related datasets, to be provided free of charge by Lynx partners, in order to fulfil the H2020 requirements of making it possible for third parties to access, mine, exploit, reproduce and disseminate the results contained therein:
* _Public deliverables_ will be available both at Zenodo and the Lynx website at http://lynxproject.eu/publications/deliverables. See Figure 1 and Figure 2.
* _Conference and Workshop presentations_ may be published at Slideshare under the account https://www.slideshare.net/LynxProject.
* _Conference and Workshop papers and articles for specialist magazines_ may be also reproduced at: http://lynx-project.eu/publications/articles.
* _Research data and metadata_ are also available. Metadata and selected data are available in the CKAN data portal, http://data.lynx-project.eu, and produced research data at Zenodo.
Information will be also given about tools and instruments at the disposal of
the beneficiaries and necessary for validating the results.
**Figure 2.** Lynx public deliverable at Zenodo.
**Figure 3.** Deliverables on the Lynx website
**Accessibility**. Data descriptions (metadata) will be accessible through a dedicated data portal, hosted in Madrid and available under http://data.lynx-project.eu. Data from small datasets may also be made available from the web server, where _small_ means a file size that does not compromise the web server's availability. The metadata descriptions may eventually be uploaded into other repositories, such as Retele 2 (for resources in the Spanish language), ELRC-SHARE 3 in general, and others to be identified. In addition, cooperation with the CEF eTranslation 4 TermBank project will be considered, in view of sharing terminological domain-specific resources.
**Necessary methods and tools to access the data and its documentation**. Relevant datasets whose license is liberal will be available as downloadable files. Eventually, a SPARQL endpoint will be set in place for those datasets in RDF form. Also, the CKAN technology on which the portal is based offers an API using standard JSON structures to access the data. The CKAN platform provides documentation on how to use the API (http://docs.ckan.org/en/ckan-2.7.3/api/).
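For illustration, the following minimal Python sketch queries such a CKAN catalogue through its JSON API; package_list and package_show are standard CKAN v3 actions, and error handling is omitted for brevity.

```python
# A hedged example of querying a CKAN catalogue via its JSON API; the portal
# URL is the one cited above, error handling is omitted.
import requests

BASE = "http://data.lynx-project.eu/api/3/action"

names = requests.get(f"{BASE}/package_list").json()["result"]
if names:
    meta = requests.get(f"{BASE}/package_show", params={"id": names[0]}).json()
    print(meta["result"]["title"])
```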
**Publication of software** . Some of the software to be developed in Lynx is
expected to be published as Open Source. Other software to be developed in
Lynx will be derived from private or non-open source code and, thus, not be
made publicly accessible.
**Data and code repositories and arrangement**. Lynx uses a private source code repository (https://gitlab.com/superlynx). Open data will be deposited in the Lynx open data portal; consortium-internal data within the project intranet. The choice of Nextcloud is justified as the information resides within UPM's secured servers in Madrid, avoiding third parties and guaranteeing the privacy and confidentiality of the data. GitLab, as a major provider and host of code repositories, is a common choice among developers, but if necessary code might also be hosted at UPM.
**Data Access Committee** . As of today, there is no need for a Data Access
Committee 5 .
**Conditions for access**. Descriptions of data assets include a link to well-known licenses, for which machine-readable versions exist. Either Creative
Commons Attribution International 4.0 (CC-BY) or Creative Commons Attribution
Share-Alike International 4.0 (CC-BY-SA) will be the recommended licenses.
**Access control** . The Lynx intranet (Nextcloud) provides standard access
control functionalities. The servers are located in a secured data centre at
UPM. The access point is https://delicias.dia.fi.upm.es/lynx-nextcloud/.
Access is secured by asymmetric keys or passwords and communications use SSL.
### 2.2.3 Making data interoperable
**Interoperability**. The LKG's preferred format is RDF, ensuring interoperability between institutions, organisations and countries. This choice optimally facilitates re-combination with different datasets from different origins.
**Data and metadata vocabularies** . Specific data and metadata vocabularies
will be defined throughout the entire project. An initial collection has
already been edited and will be soon published at http://lynx-
project.eu/data/data-models (see also Figure 3).
**Standard vocabularies and inter-disciplinarity**. Standard vocabularies will be used as much as possible, like the ECLI ontology, the Ontolex model and other similarly widespread vocabularies. These choices enable inter-disciplinary collaboration. For example, Ontolex 6 is standard in the language resources and technologies communities, whereas the ELI ontology 7 (European Law Identifier) is standard in the European legal community.
**Mappings of vocabularies and ontologies developed by Lynx** . If
vocabularies or ontologies are further defined, they will be published online,
documented and mapped to other standard ontologies. Figure 3 illustrates a
possible visualization for the data models.
**Figure 4.** A catalogue of relevant ontologies and vocabularies
### 2.2.4 Increase data reuse
**Embargoes**. No data embargoes are foreseen. Public data will be published as soon as possible, but private data will remain private for as long as the interested parties, the rightsholders of the data, so decide.
**Data after the project** . Lynx aims at building a LKG towards compliance.
In the long term, the LKG may be repurposed and the data portal may become a
reference entry point to find open, linguistic legal information as RDF.
**Data validity after time** . Some of the datasets require maintenance (e.g.
legislation and case law must be kept up to date). Whereas a core of
information may still be of interest even with no maintenance, those datasets
directly used by services under exploitation will be maintained. In any case,
metadata records describing the datasets will include a field informing on the
last modification date.
**Data quality assurance**. Only formal aspects of data quality are expected to be assured. In particular, the 5-star model 8 will be considered, and the data portal will describe this quality level in due time.
## 2.3 ALLOCATION OF RESOURCES
**Costs** . The cost of publishing FAIR data includes (a) maintenance of the
physical servers; (b) time devoted to the data generation and (c) long term
preservation of the data.
**Coverage of the costs**. Resources to maintain and generate data are covered by the project. Long-term preservation of the data is free of charge, by uploading the research data to Zenodo.
**Responsibility of the Data Management**. UPM is responsible for managing data in the data portal and for managing private data in the intranet. UPM is not responsible for keeping the personal data collected to provide the pilot services; that is the responsibility of the directly involved partners (openlaws, Cuatrecasas, DNV GL).
**Long term preservation**. Public deliverables and research data will be uploaded to Zenodo, which guarantees their long-term preservation. A specific community has been created in Zenodo 9 . Alternatively, if difficulties are found with Zenodo, datasets may also be uploaded to Figshare 10 or B2Share 11, where a permanent DOI is retrieved. Other sites such as META-SHARE, ELRC-SHARE or the European Language Grid may be considered in addition, to guarantee long-term preservation and maximize impact and dissemination.
## 2.4 DATA SECURITY
### 2.4.1 Data Security
**Data security** . UPM is physically storing data on their servers: webpage,
files and data in the Nextcloud system, the CKAN data catalogue and mailing
lists. These pieces of data are both digitally and physically secured in a
data centre. Backups are made of these systems, to external hard disks or
other machines. In principle, no personal data will be kept at UPM, and the pilot leaders will define specific DMPs with specific data protection provisions and specific data security details.
**Long term preservation**. Relevant data which is open shall be uploaded to Zenodo. In addition, relevant language datasets produced in the course of Lynx will be uploaded to catalogues of language resources.
## 2.5 LEGAL, ETHICAL AND SOCIETAL ASPECTS
### 2.5.1 Legal framework
EU citizens are granted the rights of privacy and data protection by the Charter of Fundamental Rights of the EU. In particular, Art. 7 states that “ _everyone has the right to respect for his or her private and family life, home and communications_ ”, whereas Art. 8 regulates that “ _everyone has the right to the protection of personal data concerning him or her_ ” and that the processing of such data must be “ _on the basis of the consent of the person concerned or some other legitimate basis laid down by law_ ”.
These rights are developed in detail by the General Data Protection Regulation (GDPR), Regulation 2016/679/EC, which has been in force in every Member State since the 25th of May 2018. This regulation imposes obligations on the Lynx consortium, as is also recalled by Art. 39 of the Lynx Grant Agreement (GA): “ _the beneficiaries must process personal data under the Agreement in compliance with applicable EU and national law on data protection_ ”. The same GA also reminds beneficiaries that they “ _may grant their personnel access only to data that is strictly necessary for implementing, managing and monitoring the Agreement_ ” (GA Art. 39.2).
_Personal data_ is, according to GDPR art. 4.1 “ _any information relating to
an identified or identifiable natural person (‘data subject’); an identifiable
natural person is one who can be identified, directly or indirectly, in
particular by reference to an identifier such as a name, an identification
number, location data, an online identifier or to one or more factors specific
to the physical, physiological, genetic, mental, economic, cultural or social
identity of that natural person_ ”, whereas _data processing_ is (art. 4.2): “
_any operation or set of operations which is performed on personal data or on
sets of personal data, whether or not by automated means, such as collection,
recording, organisation, structuring, storage, adaptation or alteration,
retrieval, consultation, use, disclosure by transmission, dissemination or
otherwise making available, alignment or combination, restriction, erasure or
destruction_ ”. With these definitions, Pilot 1 (Compliance Assurance Services
in Data Protection) will most likely have to collect and process personal
data, and possibly other Pilots as well.
The purposes for which personal data will be collected are justified in compliance with Art. 5.b, and the processing of personal data is legitimate in compliance with Art. 6. Pilot 1, and any other pilots processing personal data, will have to implement the necessary legal provisions to respect the rights of the data subjects.
Several internal communication channels have been established for Lynx:
mailing lists, a website and an intranet. The three servers are hosted at UPM
and comply with the Spanish legislation.
The Lynx website (http://lynx-project.eu) is compliant, regarding the management of cookies, with _Ley 34/2002, de 11 de julio, de servicios de la sociedad de la información y de comercio electrónico_ (Spanish Law 34/2002, of 11 July, on information society services and electronic commerce). Lynx will most likely handle datasets with personal data (Pilot 1), as users will be registered on the Lynx platform to enjoy personalised services and to upload contracts containing personal data. The consortium will adopt all necessary measures to comply with the current legislation.
### 2.5.2 Ethical aspects
The ethical aspect of greatest interest is the processing of personal data.
The processing of personal data may become a possibility in the framework of
Pilot 1. GA Article 34 “Ethics and research integrity” is binding and shall be
respected. Ethical and privacy related concerns are fully addressed in Section
3.2 of Deliverable 7.2 “ _IPR and Data Protection management documents_ ”.
Besides, the ethics issues identified are already being handled by the pilot organisations during their daily operational activities, as they comply with national laws and EU directives regarding the use of information in their daily services; clearance for the processing, storage methods, data destruction, etc. has been provided to these organisations a priori and is not case-specific. The research to be done during Lynx does not raise any other issues, and the project will make sure that it follows the same patterns and rules used by the pilot organisations, which will guarantee the proper handling of ethical issues and adherence to national, EU-wide and international laws and directives, without violating the terms of the programme.
### 2.5.3 Societal impact
The societal impact of this project is expected to be positive, enhancing the
access of EU citizens to legislation and contributing towards a fairer Europe.
In addition to the best effort made by the project partners, members of the
Advisory Board may be requested to issue a statement on the ethical and
societal impact of the Lynx project.
# 3 CATALOGUE OF DATASETS
This section describes a catalogue of relevant legal, regulatory and
linguistic datasets. Datasets in the Legal Knowledge Graph are those necessary
to provide compliance-related services that also meet the requirement of being published as linked data. The purpose of Lynx Task 2.1 is twofold:
1. Identify as many open datasets as possible that are relevant to the problem in question (either in RDF or not)
2. Build the Legal Knowledge Graph by identifying existing linked data resources or by transforming existing datasets into linked data whenever necessary
Figure 5 represents the Legal Knowledge Graph as a collection of datasets published as linked data. The LKG lies amidst another cloud of datasets in various formats, either structured or not (such as PDF, XLS or XML). This section contains: (a) the methodology followed to describe datasets of interest; (b) the methodology to transform existing resources into LKG datasets; (c) a description of the Lynx data portal and the related technology; and (d) an initial list of relevant datasets.
**Figure 5.** Datasets in the LKG (published as RDF) and outside it (PDF, XLS, XML…)
## 3.1 METHODOLOGY FOR CATALOGUING DATASETS
Data assets potentially relevant to the Lynx project are those that might
help provide multilingual compliance services. They might be referenced by
datasets in the LKG as external references.
The identification and description of these datasets is being carried out
cooperatively during the entire project lifespan. The methodology has
consisted of the following steps:
1. _Identification of datasets of possible interest_
* Identification of relevant datasets by the partners;
* Discovery of relevant datasets by browsing data portals, reviewing literature and making general searches.
2. _Description of resources_
* Description of the resources identified in Step 1 using an agreed template (spreadsheet) with metadata records (see Section 3.1.1).
3. _Publication of dataset descriptions_
* Publication of the dataset descriptions in the CKAN Open Data Portal via the CKAN form;
* Transformation of the metadata records to RDF using the DCAT-AP vocabulary (to be an automated task from the spreadsheet; see the sketch below).
This process is being iteratively carried out throughout the project.
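The DCAT-AP transformation mentioned in Step 3 is straightforward to automate. The snippet below is a minimal sketch of such a step using Python and rdflib; the record fields and example values are illustrative assumptions taken from the template in Section 3.1.1, not the project's actual mapping code.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

# One hypothetical row of the shared spreadsheet (fields from Table 1).
record = {
    "title": "UNESCO Thesaurus",
    "uri": "http://data.lynx-project.eu/dataset/unesco-thesaurus",
    "description": "Controlled and structured list of terms.",
    "languages": ["en", "es", "fr", "ru"],
}

g = Graph()
dataset = URIRef(record["uri"])
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal(record["title"], lang="en")))
g.add((dataset, DCTERMS.description, Literal(record["description"], lang="en")))
for lang in record["languages"]:
    g.add((dataset, DCTERMS.language, Literal(lang)))

print(g.serialize(format="turtle"))
```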
### 3.1.1 Template for data description
Every partner of Lynx, within their domain of expertise, has described an
initial list of data sources of interest for the project. In order to
homogeneously describe the data assets, a template with metadata records has
been created with the due consensus among the partners.
The template for data description contains two main blocks: one with general
information about the dataset and another with information about the resource.
Within this context, “dataset” makes reference to the whole asset, while
“resource” defines each one of the different formats in which the dataset is
published. For instance, the UNESCO thesaurus is a single dataset which can be
found as two different resources: as a SPARQL Endpoint and as a downloadable
file in RDF.
Accordingly, the metadata records in Table 1 describe information about the
dataset as a whole.
<table>
<tr>
<th>
**Field**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Title
</td>
<td>
the name of the dataset given by the author or institution that publishes it.
</td> </tr>
<tr>
<td>
URI
</td>
<td>
identifier pointing to the dataset.
</td> </tr>
<tr>
<td>
Type in the LKG
</td>
<td>
type of dataset in the legal knowledge graph (language, data, etc.).
</td> </tr>
<tr>
<td>
Type
</td>
<td>
type of dataset (term bank, glossary, vocabulary, corpus, etc.).
</td> </tr>
<tr>
<td>
Domain
</td>
<td>
topic covered by the dataset (law, education, culture, government, etc.).
</td> </tr>
<tr>
<td>
Identifiers
</td>
<td>
other types of identifiers assigned to the dataset (ISRN, DOI, Standard ID,
etc.).
</td> </tr>
<tr>
<td>
Description
</td>
<td>
a brief description of the content of the dataset.
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
if the dataset is available online, upon request or not available.
</td> </tr>
<tr>
<td>
Languages
</td>
<td>
languages in which the content of the dataset is available.
</td> </tr>
<tr>
<td>
Creator
</td>
<td>
author or institution that created the dataset.
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
institution publishing the dataset.
</td> </tr>
<tr>
<td>
License
</td>
<td>
license of the dataset (Creative Commons, or others).
</td> </tr>
<tr>
<td>
Other rights
</td>
<td>
if the dataset contains personal information.
</td> </tr>
<tr>
<td>
Jurisdiction
</td>
<td>
jurisdiction where the dataset applies (if necessary).
</td> </tr>
<tr>
<td>
Date of this entry
</td>
<td>
date of registration of the dataset in the CKAN.
</td> </tr>
<tr>
<td>
Proposed by
</td>
<td>
Lynx partner or Lynx organisation proposing the dataset.
</td> </tr>
<tr>
<td>
Number of entries
</td>
<td>
number of terms, triples or entries that the dataset contains.
</td> </tr>
<tr>
<td>
Last update
</td>
<td>
date on which the last modification of the dataset took place.
</td> </tr>
<tr>
<td>
Dataset organisation
</td>
<td>
name of the Lynx organisation registering the dataset.
</td> </tr> </table>
**Table 1.** Fields describing a data asset
The second block of metadata (whose fields are listed in Table 2) gives
additional information about the resources through which the dataset can be
accessed. This block is repeated as many times as needed (depending on the
number of formats in which the dataset is published).
<table>
<tr>
<th>
**Field**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Description
</td>
<td>
description of the type of resource (i.e. downloadable file, SPARQL endpoint,
website search application, etc.).
</td> </tr>
<tr>
<td>
Data format
</td>
<td>
the format of the resource (RDF, XML, SKOS, CSV, etc.).
</td> </tr>
<tr>
<td>
Data access
</td>
<td>
technology used to expose the resource (relational database, API, linked data,
etc.).
</td> </tr>
<tr>
<td>
Open format
</td>
<td>
if the format of the resource is open or not.
</td> </tr>
<tr>
<td>
URI
</td>
<td>
the URI pointing to the different resources.
</td> </tr> </table>
**Table 2.** Fields describing a resource associated to a data asset
The template was materialized as a spreadsheet distributed among the partners.
### 3.1.2 Lynx Data Portal
With the aim of publishing the metadata of the harvested datasets, a data
portal has been made available under http://data.lynx-project.eu.
This data portal is built on CKAN. The Comprehensive Knowledge
Archive Network (CKAN) is a web-based management system for the storage and
distribution of open data. The system is open source 12 , and it has been
deployed on the UPM servers using containerization technologies: Rancher 13 ,
a leading solution to deploy Docker containers in a Platform as a Service
(PaaS).
The CKAN open data portal gives access to the resources gathered by all the
members of the Lynx project. In the same way, members are able to register and
describe their harvested resources to jointly create the Lynx Open Data
Portal. To correctly display the relevant information about the datasets, the
CKAN application uses the metadata described in Section 3.1.1. As a result,
each dataset is presented through the interface shown in Figure 6.
**Figure 6.** Screenshot of the Lynx Data Portal
The “Data and Resources” section corresponds to the “Resource information”
metadata block and “Additional Info” contains the metadata of the “Dataset
information” table.
The CKAN data portal allows faceted browsing, with filters such as language,
format and jurisdiction. At this moment, there are 26 datasets classified in
the CKAN, but this number will grow. For the metadata records to be correctly
displayed on the website, it was required to establish a correspondence
between the metadata in the spreadsheet and the structure in the JSON file
that gives shape to the CKAN platform.
In the Lynx Data Portal, each dataset can be accessed through its own URI,
which is built using the ID of each resource. Dataset IDs are shown in
Table 3, contained in the next section. As a result, dataset URIs look like
the example below, where the ID would be unesco-thesaurus:
http://data.lynx-project.eu/dataset/unesco-thesaurus
The CKAN API enables direct access to the metadata records. The API is
intended for developers who want to write code that interacts with CKAN sites
and their data, and it is documented online 14 . For example, the method:
http://data.lynx-project.eu/api/rest/dataset/unesco-thesaurus
will return the following answer:
{"license_title": null, "maintainer": null, "private": false,
"maintainer_email": null, "num_tags": 0, "id":
"efaf72c9-f8da-4257-b77e-c1f90952d71a", "metadata_created":
"2018-04-11T08:35:41.813169", "relationships": [],
"license": null, "metadata_modified": "2018-04-11T08:39:59.429186", "author":
null, "author_email": null,
"download_url": "http://skos.um.es/sparql/", "state": "active", "version":
null, "creator_user_id": "3b131ddc-4bbf-
42ff-9c33-ee1c4f7adb5c", "type": "dataset", "resources": [{"Distribuciones":
"SPARQL endpoint", "hash": "",
"description": "SPARQL endpoint", "format": "SKOS", "package_id":
"efaf72c9-f8da-4257-b77e-c1f90952d71a",
"mimetype_inner": null, "url_type": null, "formatoabierto": "", "id":
"2a610dc8-15cd-4f17-aee0-149201c427cd",
"size": null, "mimetype": null, "cache_url": null, "name": "SPARQL endpoint",
"created": "2018-04-
11T08:39:13.979840", "url": "http://skos.um.es/sparql/", "cache_last_updated":
null, "last_modified": null,
"position": 0, "resource_type": null}, {"Distribuciones": "Downloadable
files", "hash": "", "description":
"Downloadable files in RDF and Turtle.", "format": "RDF", "package_id":
"efaf72c9-f8da-4257-b77e-c1f90952d71a",
"mimetype_inner": null, "url_type": null, "formatoabierto": "", "id":
"81ddd071-4018-4850-b5d8-04b4f5badd7d",
"size": null, "mimetype": null, "cache_url": null, "name": "Downloadable
files", "created": "2018-04-
11T08:39:59.170137", "url": "http://skos.um.es/unescothes/downloads.php",
"cache_last_updated": null,
"last_modified": null, "position": 1, "resource_type": null}],
"num_resources": 2, "tags": [], "groups": [],
"license_id": null, "organization": {"description": "", "title": "OEG",
"created": "2018-04-05T08:10:35.821305",
"approval_status": "approved", "is_organization": true, "state": "active",
"image_url": "", "revision_id":
"66f3c9c3-9bdf-4ebe-8ed2-54b4aea30375", "type": "organization", "id":
"d4250a6e-d1d4-4a2d-8e40-b663271d8404", "name": "oeg"}, "name": "unesco-
thesaurus", "isopen": false, "notes_rendered": "<p>The UNESCO Thesaurus is a
controlled and structured list of terms used in subject analysis and retrieval
of documents and publications in several fields.</p>", "url": null,
"ckan_url": "http://data.lynx-project.eu/dataset/unesco-thesaurus", "notes":
"The UNESCO Thesaurus is a controlled and structured list of terms used in
subject analysis and retrieval of documents and publications in several
fields.\r\n", "owner_org": "d4250a6e-d1d4-4a2d-8e40-b663271d8404",
"ratings_average": null, "extras": {"lkg_type": "language", "domain":
"Education, Science, Culture, Politics, Countries, Information",
"total_number": "4408 (skos concepts)", "language": "en, es, fr, ru",
"creator": "Research group of Information
Technology (University of Murcia)", "publisher": "UNESCO", "jurisdiction": "",
"other_rights": "no", "last_update": "2015", "licence": "Creative Commons 3.0,
https://creativecommons.org/licenses/by-nc-sa/3.0/deed.es_ES", "date":
"11/04/18", "partner": "UPM", "identifier": "", "availability": "online"},
"ratings_count": 0, "title": "UNESCO Thesaurus", "revision_id":
"67553ea8-aa13-4dfe-905d-eb499d2d78e9"}
## 3.2 TRANSFORMATION OF RESOURCES
The minimum content of the LKG is the collection of datasets necessary for the
execution of the Lynx pilots that are published as linked data. Whereas the
transformation of resources to linked data is not a central activity of Lynx,
the project foresees that some resources will exist but not as linked data,
and a transformation process will be necessary.
The cycle of activities usually carried out when publishing linked data is
shown in Figure 7.
**Figure 7.** Usual activities for publishing linked data. Figure taken from
[25].
Whereas the specification is derived from the pilots and the use case needs,
the modelling process will lean on existing data models, to be harmonized as
described in Section 4.2. The generation of linked data will consist of the
transformation of existing resources. These transformations will differ
depending on the source format:
* From unstructured text, extraction tools (PoolParty, OpenCalais, SketchEngine, etc.) will be used before creating the entities.
* From relational databases, technologies such as R2RML exist, but no relational database is expected to be necessary.
* For tabular data, Open Refine and similar tools will be used (see the sketch after this list).
The publication means is still to be decided, but it will rely on either
PoolParty or OpenLink Virtuoso on local servers.
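As an illustration of the tabular path, the sketch below maps rows of a tiny CSV term list to SKOS concepts with rdflib; the column names and terms are invented for the example and do not reflect an agreed Lynx mapping.

```python
import csv
import io

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, SKOS

# Invented tabular input: one concept with labels in two languages.
table = io.StringIO("id,label,lang\n1,employer,en\n1,empleador,es\n")

base = Namespace("http://data.lynx-project.eu/term/")
g = Graph()
for row in csv.DictReader(table):
    concept = URIRef(base + row["id"])
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal(row["label"], lang=row["lang"])))

print(g.serialize(format="turtle"))
```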
## 3.3 INITIAL CATALOGUE OF DATASETS
This section contains only preliminary information, and it will be completed
by M18 with D2.4.
### 3.3.1 Datasets in the regulatory domain
These are the initially identified datasets in the regulatory domain:
* Eur-Lex: database of legal information containing EU law (EU treaties, directives, regulations, decisions, consolidated legislation, etc.), preparatory acts (legislative proposals, reports, green and white papers, etc.), EU case-law (judgments, orders, etc.), international agreements, etc. A huge database updated daily, with some texts dating back to 1951.
* Openlaws: Austrian laws (federal laws and those of the 9 regions) and rulings (from 10 different courts), German federal laws, European laws (regulations, directives) and rulings (General Court, European Court of Justice). It includes Eur-Lex, 11k national acts and 300k national cases in a neo4j graph.
* DNV-GL: standards, regulations and guidelines offered to the public, usually in PDF.
### 3.3.2 Datasets in the language domain
Using the methodology described in Section 3.1, several sites and repositories
have been surveyed. One of the sources of most interest for linguistic open
data is the Linked Open Data Cloud 15 or LOD cloud, due to its open nature
and its adequate format as linked data or RDF. In particular, the Linguistic
Linked Open Data Cloud 16 is a subset of the LOD cloud which provides
exclusively linguistic resources sorted by typology. The different types of
datasets in the Linguistic Linked Open Data Cloud are:
* Corpora
* Terminology, thesauri and knowledge bases
* Lexicons and dictionaries
* Linguistic resource metadata
* Linguistic data categories
* Typological databases
Within this project, the first three types of resources have been shortlisted
as the most useful.
Besides consuming linked data or RDF in general, other valuable non-RDF
resources can be included in the graph, possibly once converted to RDF. Many
non-RDF resources of interest in this context can be found in data portals
like the European Data Portal, the Library of Congress or the Termcoord public
portal, which is of particular interest for the multilingual glossaries in the
domain of law.
Due to the huge amount of information and open data available nowadays, it is
essential to establish these limits so as to gather only the relevant
resources. If more types of datasets are required, they will be harvested at
a later stage. The resources already published as linked data that have been
identified as of interest for Lynx are listed below:
* STW Thesaurus for Economics: a thesaurus that provides a vocabulary on any economic subject. It also contains terms used in law, sociology and politics (monolingual in English) [30].
* Copyright Termbank: a multilingual term bank of copyright-related terms that has been published connecting WIPO definitions, IATE terms and definitions from Creative Commons licenses (multilingual).
* EuroVoc: a multilingual and multidisciplinary thesaurus covering the activities of the EU. It is not specifically legal, but it contains pertinent information about the EU and its politics and law (multilingual).
* AGROVOC: a controlled vocabulary covering all the fields of the Food and Agriculture Organization (FAO) of the United Nations. It contains general information and it has been selected since it shares many structures with other important resources (multilingual).
* IATE: a terminological database developed by the EU which is constantly being updated by translators and terminologists. Amongst other domains, the terms are related to law and EU governments (multilingual). A transformation to RDF was made in 2015.
Resources published in other formats have been considered as well. Structured
formats include TBX (used for term bases), CSV and XLS. Exceptionally,
resources published in non-machine-readable formats might be considered.
Consequently, the following resources published by the EU have also been
listed as usable, although they are not included in the Linguistic Linked Open
Data Cloud:
* INSPIRE Glossary: a term base developed by the INSPIRE Knowledge Base of the European Union. Although this project is related to the field of spatial information, the glossary contains general terms and definitions that specify the common terminology used in the INSPIRE Directive and in the INSPIRE Implementing Regulations (monolingual, en).
* EUGO Glossary: a term base addressed to companies and entrepreneurs that need to comply with administrative or professional requirements to perform a remunerated economic activity in Spain. This glossary is part of a European project and contains terms about regulations that are valuable for Lynx's purposes (monolingual, es).
* GEMET: a general thesaurus, conceived to define a common general language to serve as the core of general terminology for the environment. This glossary is available in RDF and it shares terms and structures with EuroVoc (multilingual).
* Termcoord: a portal supported by the European Union that contains glossaries developed by the different institutions. These glossaries cover several fields including law, international relations and government. Although the resources are available in PDF, these documents could be processed and transformed into RDF if necessary (multilingual).
In the same way, the United Nations also maintains consolidated
terminological resources. Given their intergovernmental domain, the following
resources have been selected:
* UNESCO Thesaurus: a controlled list of terms intended for the subject analysis of texts and document retrieval. The thesaurus contains terms on several domains such as education, politics, culture and social sciences. It has been published as a SKOS thesaurus and can be accessed through a SPARQL endpoint (multilingual).
* InforMEA Glossary: a term bank developed by the United Nations and supported by the European Union with the aim of gathering terms on environmental law and agreements. It is available as RDF and it will be upgraded to a thesaurus during the following months (multilingual).
* International Monetary Fund Glossary: a terminology list containing terms on economics and public finances related to the European Union. It is available as a downloadable PDF file; however, it may be transformed in future work (multilingual).
On the other hand, other linguistic resources (supported by neither the EU
nor the UN) have been identified. Some of them are already converted into RDF:
* Termcat (Terminologia Oberta): a set of terminological databases supported by the government of Catalonia. They contain term equivalents in several languages. Part of these terminological databases were converted into RDF previously and are part of the TerminotecaRDF project. They can be accessed through a SPARQL endpoint (multilingual).
* German Labour Law Thesaurus: a thesaurus that covers all main areas of labour law, such as the roles of employee and employer and the legal aspects around labour contracts. It is available through a SPARQL endpoint and as downloadable RDF files (monolingual, de).
* Jurivoc: a juridical thesaurus developed by the Federal Supreme Court of Switzerland in cooperation with Swiss legal libraries. It contains juridical terms arranged in a monohierarchic structure (multilingual).
* SAIJ Thesaurus: a thesaurus that organises legal knowledge through a list of controlled terms which represent concepts. It is available in RDF and is intended to ease users' access to information related to the Argentine legal system that can be found in a file or in a documentation centre (monolingual, es).
* CaLaThe: a thesaurus for the domain of cadastre and land administration that provides a controlled vocabulary. It is interesting because it shares structures and terms with AGROVOC and the GEMET thesaurus, and it can be downloaded as an RDF file (monolingual, en).
* CDISC Glossary: a glossary that contains definitions of terms and abbreviations that can be relevant for medical laws and agreements. It is available in several formats, including OWL (monolingual, en).
Finally, one last resource, available only in PDF, has also been considered:
* Connecticut Glossary: a glossary of legal terms published by the Judicial Branch of the State of Connecticut. It can be transformed into a machine-readable format and from there into RDF, since it provides equivalences of legal terms from English into Spanish (bilingual).
Table 3 lists all the resources, summarising the information presented above.
In addition, the set of identified linguistic resources has also been
represented in an interactive graph, in which each dataset is coloured
according to the domain it covers (Figure 8). A second version of the graph
has also been created in order to distinguish between those datasets in RDF
(green) and those in other formats (grey) (Figure 9). The graph also
represents the relations between the assets, since most of those in RDF share
structures and terms.
<table>
<tr>
<th>
**ID**
</th>
<th>
**Name**
</th>
<th>
**Description**
</th>
<th>
**Language**
</th> </tr>
<tr>
<td>
**iate**
</td>
<td>
IATE
</td>
<td>
EU terminological database.
</td>
<td>
EU languages
</td> </tr>
<tr>
<td>
**eurovoc**
</td>
<td>
Eurovoc
</td>
<td>
EU multilingual thesaurus.
</td>
<td>
EU languages
</td> </tr>
<tr>
<td>
**eur-lex**
</td>
<td>
EUR-Lex
</td>
<td>
EU legal corpora portal.
</td>
<td>
EU languages
</td> </tr>
<tr>
<td>
**conneticutlegal-glossary**
</td>
<td>
Connecticut Legal
Glossary
</td>
<td>
Bilingual legal glossary.
</td>
<td>
en, es
</td> </tr>
<tr>
<td>
**unescothesaurus**
</td>
<td>
UNESCO Thesaurus
</td>
<td>
Multilingual multidisciplinary thesaurus.
</td>
<td>
en, es, fr, ru
</td> </tr>
<tr>
<td>
**library-ofcongress**
</td>
<td>
Library of Congress
</td>
<td>
Legal corpora portal.
</td>
<td>
en
</td> </tr>
<tr>
<td>
**imf**
</td>
<td>
International Monetary Fund
</td>
<td>
Economic multilingual terminology.
</td>
<td>
en, de, es
</td> </tr>
<tr>
<td>
**eugo-glossary**
</td>
<td>
EUGO Glossary
</td>
<td>
Business monolingual dictionary.
</td>
<td>
es
</td> </tr>
<tr>
<td>
**cdisc-glossary**
</td>
<td>
CDISC Glossary
</td>
<td>
Clinical monolingual
</td>
<td>
en
</td> </tr>
<tr>
<td>
**stw**
</td>
<td>
STW Thesaurus for Economics
</td>
<td>
Economic monolingual thesaurus.
</td>
<td>
en
</td> </tr>
<tr>
<td>
**edp**
</td>
<td>
European Data
Portal
</td>
<td>
EU datasets.
</td>
<td>
EU languages
</td> </tr>
<tr>
<td>
**inspire**
</td>
<td>
INSPIRE Glossary
(EU)
</td>
<td>
General terms and definitions in
English.
</td>
<td>
en
</td> </tr>
<tr>
<td>
**saij**
</td>
<td>
SAIJ Thesaurus
</td>
<td>
Controlled list of legal terms.
</td>
<td>
es
</td> </tr>
<tr>
<td>
**calathe**
</td>
<td>
CaLaThe
</td>
<td>
Cadastral vocabulary
</td>
<td>
en
</td> </tr>
<tr>
<td>
**gemet**
</td>
<td>
GEMET
</td>
<td>
General multilingual thesauri.
</td>
<td>
en, de, es, it
</td> </tr>
<tr>
<td>
**informea**
</td>
<td>
InforMEA Glossary (UNESCO)
</td>
<td>
Monolingual glossary on environmental law.
</td>
<td>
en
</td> </tr>
<tr>
<td>
**copyrighttermbank**
</td>
<td>
Copyright Termbank
</td>
<td>
Multi-lingual term bank of copyrightrelated terms
</td>
<td>
en, es, fr, pt
</td> </tr>
<tr>
<td>
**gllt**
</td>
<td>
German labour law thesaurus
</td>
<td>
Thesaurus with labour law terms.
</td>
<td>
de
</td> </tr>
<tr>
<td>
**jurivoc**
</td>
<td>
Jurivoc
</td>
<td>
Juridical terms from Switzerland.
</td>
<td>
de, it, fr
</td> </tr>
<tr>
<td>
**termcat**
</td>
<td>
Termcat
</td>
<td>
Terms from several fields including law.
</td>
<td>
ca, en, es, de,
fr, it
</td> </tr>
<tr>
<td>
**termcoord**
</td>
<td>
Termcoord
</td>
<td>
Glossaries from EU institutions and bodies.
</td>
<td>
EU languages
</td> </tr>
<tr>
<td>
**agrovoc**
</td>
<td>
Agrovoc
</td>
<td>
Controlled general vocabulary.
</td>
<td>
29 languages
</td> </tr> </table>
**Table 3.** Initial set of resources gathered.
**Figure 8.** Datasets represented by domain.
**Figure 9.** Datasets represented by format.
# 4 DATA MODELS
## 4.1 INTRODUCTION
### 4.1.1 Data models in the regulatory domain
A number of vocabularies and ontologies for documents in the legal domain
have been published in the last few years. Núria Casellas surveyed 52 legal
ontologies in 2011 [18], and in the meantime many other new ontologies have
appeared, but in practice only a few of them are of direct interest for the
LKG, as not every published legal ontology is created with the intention of
supporting data models. Some ontologies had the intent of formalizing abstract
conceptualizations. For example, ontology design patterns in the legal domain
have been explored [17], but these works are of little interest for supporting
data publication.
The XML schema Akoma Ntoso 17 was initially funded by the United Nations and
became, some years later, an OASIS specification, alongside Legal RuleML 18 . MetaLex
[12] was an XML vocabulary for the encoding of the structure and content of
legislative documents, which included in newer versions functionality related
to timekeeping and version management. The European Committee for
Standardization (CEN) adopted MetaLex and evolved the schema to an OWL
ontology. MetaLex was extended in the context of the FP6 ESTRELLA project
(2006-2008) which developed a network of ontologies known as Legal Knowledge
Interchange Format (LKIF). The LKIF ontologies are still available and a
reference in the area 19 [14]. Licenses used for the publication of
copyrighted work have been modelled with the ODRL (Open Digital Rights
Language) language [27].
The European Legislation Identifier (ELI) is a system to make legislation
available online in a standardised format, so that it can be accessed,
exchanged and reused across borders [13]. ELI describes a new common framework
to unify and link national legislation with European legislation. ELI, as a
framework, proposes a URI template for the identification of legal resources
on the web and it also provides an OWL ontology for supporting the
representation of metadata of legal events and documents. The European Case
Law Identifier (ECLI), much like ELI, was introduced recently for modelling
case law. The BO-ECLI project, funded under the Justice Programme of the
European Union (2015-2017), aimed to broaden the use of ECLI and to further
improve the accessibility of case law.
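To make the ELI URI template idea concrete, the snippet below builds the identifier of a well-known regulation; the pattern follows the identifiers published by the EU Publications Office for acts in the Official Journal, though the exact template parts vary per jurisdiction.

```python
# ELI identifies legislation through URI templates such as
# /eli/{type}/{year}/{number}/{version}. For EU acts published in the
# Official Journal, the GDPR resolves to the URI printed below.
eli_template = "http://data.europa.eu/eli/{doc_type}/{year}/{number}/oj"
print(eli_template.format(doc_type="reg", year=2016, number=679))
# -> http://data.europa.eu/eli/reg/2016/679/oj
```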
### 4.1.2 Data models in the linguistic domain
Similarly, a large number of language resources can already be found across
the Semantic Web. Such datasets are represented with various schemas,
depending on given factors such as the inner structure of the dataset,
language, content or the objective of its publication, to mention but a few.
_Simple Knowledge Organization System_ ( _SKOS_ ) aims to represent the
structure of organization systems such as thesauri and taxonomies, since they
share many similarities. It is widely used within the Semantic Web context,
since it provides an intuitive language and can be combined with formal
representation languages such as the Web Ontology Language (OWL). _SKOS XL_
works as an extension of SKOS to represent lexical information [23].
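To give a flavour of SKOS in practice, the sketch below builds a two-concept thesaurus fragment with multilingual labels and a broader/narrower link; the concepts are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Invented example: a narrower concept linked to its broader concept.
ex = Namespace("http://example.org/thesaurus/")
g = Graph()
g.add((ex.labourLaw, RDF.type, SKOS.Concept))
g.add((ex.labourLaw, SKOS.prefLabel, Literal("labour law", lang="en")))
g.add((ex.labourLaw, SKOS.prefLabel, Literal("Arbeitsrecht", lang="de")))
g.add((ex.dismissal, RDF.type, SKOS.Concept))
g.add((ex.dismissal, SKOS.broader, ex.labourLaw))
g.add((ex.dismissal, SKOS.prefLabel, Literal("dismissal", lang="en")))

print(g.serialize(format="turtle"))
```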
With regard to multilingualism in ontologies, _Linguistic Information
Repository_ ( _LIR_ ) was proposed as a model for ontology localisation: it
grants the localisation of the ontology terminological layer, without
modifying the ontology conceptualisation. LIR allows enriching ontology
entities with the linguistic information necessary for the localisation and
cultural adaptation of the ontology [24].
Another model intended for the representation of linguistic descriptions
associated with ontology concepts is _LexInfo_ [20]. It contains a complete
collection of linguistic categories. Currently, it is used in combination with
other models such as Ontolex (described in the next paragraph), to describe
the properties of the linguistic objects that describe ontology entities.
Other repositories of linguistic categories are ISOcat 20 , OLiA 21 or
GOLD 22 .
The _Lexicon Model for Ontologies_ or _lemon_ [26] was especially created to
represent lexical information in the Semantic Web, covering some needs that
previous models did not. This model has evolved in the context of a W3C
Community Group into _lemon-Ontolex_ first, now better known as _Ontolex_ 23
. In this model, linguistic descriptions are likewise separated from the
ontology and point to the corresponding concept in the ontology. The
structure of this model is divided into a core set of classes and different
modules containing various types of linguistic information that range from
morpho-syntactic properties of lexical entries, lexical and terminological
variation and translation, decomposition of phrase structures, syntactic
frames and mappings to the ontological predicates, and morphological
decomposition of lexical forms. Linguistic annotations such as data categories
and linguistic descriptors are not captured in the model but referred to by
pointing to models that contain them (see LexInfo model above).
## 4.2 STRATEGY FOR THE HARMONISATION OF DATA MODELS IN LYNX
The LKG needs a uniform collection of data models in order to integrate
heterogeneous resources. The definition of these data models will be provided
in Deliverable 2.4.
In order to select the data models, simultaneous top-down and bottom-up
approaches will be conducted, as illustrated by Figure 10. Work is carried
out in parallel: on the one hand, a top-down approach extracts a list of
formats, vocabularies and ontologies which can be chosen to satisfy the
functional requirements of the pilots; on the other hand, a bottom-up
approach explores every possible format, vocabulary or ontology of interest,
with special attention to the most widely spread ones.
**Figure 10.** Strategy for the selection of data models in Lynx. In the top-down approach, an analysis of the functional and technical requirements of the pilots determines the vocabularies and formats necessary; in the bottom-up approach, a survey identifies vocabularies and ontologies in the domain, generates minimal metadata descriptions and publishes them on the Lynx web as a catalogue of vocabularies. Both converge in the selection of vocabularies and ontologies.
# 5 THE MULTILINGUAL LEGAL KNOWLEDGE GRAPH
As stated in the introduction, a secondary goal of this document is to define
the Legal Knowledge Graph that will be developed during the Lynx project with
a linguistic regulatory Linked Open Data Cloud.
## 5.1 SCOPE OF THE LEGAL KNOWLEDGE GRAPH
The amount of legal data made accessible, either openly or under payment
modalities, by legal information providers can hardly be overstated. Lexis
Nexis claimed 24 to have 30 Terabytes of content, and WestLaw accounted for
more than 40,000 _databases_ . Their value can be roughly estimated: as of
2012, the four big players (WestLaw, Lexis Nexis, Wolters Kluwer and
Bloomberg Legal) totalled about $10,000M in revenues. Language data (i.e.
resources with any kind of linguistic information) belongs to a much smaller
domain, but is still unmanageable as a whole.
The Lynx project is interested in a small fraction of the information
belonging to these domains. In particular, Lynx is in principle interested
only in using the data necessary to provide the compliance services described
in the pilots. Data of interest is regulatory data (legal and standards-
related) and language data (to cover the multilingual aspects of the
services). The intersection of these domains is of the utmost interest and
Lynx will try to comprehensively identify every possible open dataset in this
core category. These ideas are represented in Figure 11.
**Figure 11.** Scope of the multilingual Legal Knowledge Graph. The Lynx core (linguistic legal data) lies at the intersection of language data (corpora, terminological databases, thesauri and glossaries, lexicons and dictionaries, linguistic resource metadata, typological databases) and legal data (law, case law, opinions and recommendations, doctrine, books and journals, standards and technical norms, sectorial good practices), restricted to the data needed for compliance in the Lynx pilots.
The definitions of both _language data_ and _regulatory data_ are indeed
fuzzy, but flexible enough to introduce data of many different kinds whenever
necessary (geographical data, user information, etc.). Because data in the
Semantic Web is indissociable from the data models, and data models are
accessed in the same manner as data, ontologies and vocabularies are part of
the LKG as well. Moreover, any kind of metadata (describing documents,
standards, etc.) is also part of the LKG, as is the description of the
entities producing the documents (courts, users, jurisdictions). In order to
provide the compliance services, and with different degrees of interest, both
primary and secondary law are of use, and any relevant document in a wide
sense may become part of the Legal Knowledge Graph. This is illustrated in
Figure 12.
**Figure 12.** Types of information in the Legal Knowledge Graph
## 5.2 KNOWLEDGE GRAPHS
In the realm of Artificial Intelligence, a knowledge graph is a data structure
to represent information, where entities are represented as nodes, their
attributes as node labels and the relationship between entities are
represented as edges. Knowledge graphs such as Google’s 25 , Freebase [2]
and WordNet [3] turn data into knowledge, and they have become important
resources for many AI and NLP applications such as information search, data
integration, data analytics, question answering or context-sensitive
recommendations.
Large knowledge graphs include millions of concepts and billions of
relationships. For example, DBpedia describes about 30M entities connected
through 10,000M relationships. Entities belong to classes described in
ontologies. There are different manners of representing knowledge graphs, not
the least important being the one using W3C specifications of the Semantic
Web: RDF, RDFS, OWL. RDF data is accessible online in different forms: as file
dumps, through SPARQL endpoints or dedicated APIs, or simply published online
as Linked Data [4].
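As a concrete example of the SPARQL access path, the sketch below queries a public endpoint with the SPARQLWrapper library; DBpedia is used only because it is a well-known, openly queryable graph.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Ask a public SPARQL endpoint for the English label of one entity.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/European_Union> rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["label"]["value"])
```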
### 5.2.1 Legal Knowledge Graphs
In the last few years, a number of Legal Knowledge Graphs have been created in
different applications. The MetaLex Document Server offers legal documents as
versioned Linked Data [10], including Dutch national regulations. Finnish [9]
and Greek [8] legislation are also offered as Linked Data.
The Publications Office of the EU maintains the central content and metadata
CELLAR repository for storing official publications and bibliographic
resources produced by the institutions of the EU [11]. The content of CELLAR,
which includes EU legislation, is made publicly available by the Eur-Lex
service, which also offers a SPARQL endpoint.
The FP7 EUCases project (2013-2015) offered European and national case law and
legislation linked in an open data stack (http://eucases.eu).
Finally, Openlaws offers a platform based on linked open data, open source
software and open innovation processes [5][6][7]. Lynx will benefit from the
expertise of Openlaws, which will be the preferred source for the data models,
methods and algorithms. New H2020 projects in the area of data protection are
also using semantic web technologies: the H2020 SPECIAL project 26 , devoted
to easing the collection of user consents and to representing policies as RDF;
the H2020 MIREL project 27 (2016-2019), with a network of experts to define a
formal framework and to develop tools for mining and reasoning with legal
texts; and e-Compliance, an FP7 project (2013-2016) focused on using semantic
web technologies for regulatory compliance in the maritime domain.
### 5.2.2 Linguistic Knowledge Graphs
In the last few years, the language technology community has shaped the
Linguistic Linked Open Data Cloud: the graph of those language resources
available in RDF and published as Linked Data [16]. The graph, represented in
Figure 13, resembles that of the Linked Data Cloud, but limited to the
language domain.
**Figure 13.** Linguistic Linked Open Data Cloud 28
A major resource contained in this graph is _DBpedia_ , a vast network that
structures data from Wikipedia and links them with other datasets available on
the Web [3]. The result is published as Open Data available for the
consumption of both humans and machines. Different versions of DBpedia exist
for different languages.
Another core resource in the LOD Cloud is _BabelNet_ [15], a huge multilingual
semantic network, generated automatically from various resources and
integrating the lexicographical information of _WordNet_ and the encyclopaedic
knowledge of Wikipedia. BabelNet also applies Machine Translation to get
information from several languages. As a result, BabelNet is considered an
encyclopaedic dictionary that contains concepts and named entities connected
by a great number of semantic relations.
_WordNet_ is one of the best known Linguistic Knowledge Graphs: a large
online lexical database that contains nouns, verbs, adjectives and adverbs in
English [3]. These words are organised in sets of synonyms that represent
concepts, known as _synsets_ . WordNet uses these synonyms to represent word
senses; thus, synonymy is WordNet’s most important relation. Additional
relations are also used by this network: antonymy (opposing-name), hyponymy
(sub-name), meronymy (part-name), troponymy (manner-name) and entailment.
Other resources equivalent to WordNet have been published for different
languages, such as EuroWordNet [29].
However, there are other semantic networks (considered linguistic knowledge
graphs) that do not appear in the LOD Cloud but are also worth mentioning.
This is the case of _ConceptNet_ [28], a semantic network designed to
represent common sense and support textual reasoning about documents in the
real world. It represents part of human experience and tries to share this
common-sense knowledge with machines. ConceptNet is often integrated with
natural language processing applications to speed up the enrichment of AI
systems with common sense [4].
### 5.2.3 The Lynx Multilingual Legal Knowledge Graph
Building on these previous experiences, we are in the position to define the
Lynx Multilingual Legal Knowledge Graph.
The **Lynx Multilingual Legal Knowledge Graph (LKG)** is a knowledge graph
using W3C specifications with the necessary information to provide
multilingual compliance services. The Lynx LKG builds on previous initiatives
reusing open data and will evolve adding new resources whenever needed to
provide compliance services. The LKG’s preferred form of publication is
Linked Data, although other access mechanisms will be provided.
# INTRODUCTION
## Project Overview
The MELOA project proposes to develop a low-cost, easy-to-handle, wave
resilient, multi-purpose, multi-sensor, extra light surface drifter for use in
all water environments, ranging from deep-sea to inland waters, including
coastal areas, river plumes and surf zones. The device will be developed as an
upgrade to the WAVY drifter conceived by the Faculty of Engineering of the
University of Porto, which was used to measure the surface circulation forced
by wave breaking, including detailed structure of rifts and the littoral drift
current.
The philosophy of the WAVY drifter will essentially be respected:
* a small-size sphere with just enough room to accommodate power source, GNSS receiver, communications modules, antennae, sensors and data processor;
* optimised buoyancy to prevent the drifter trajectory responding to the wind instead of the current, while providing just enough exposure of the antennae to ensure acquisition of the GNSS signal at the required rate and reliable near real-time communications.
Given the low influence of wind upon the drifters’ displacements, MELOA will
provide a cheap, effective way to monitor surface currents and surface dynamic
features anywhere in the World Ocean. By equipping the drifters with
thermistors at two different levels, the possibility is open for monitoring
“near-skin temperature” and near-surface vertical temperature gradients, which
will be invaluable for calibration/validation of satellite-derived SST fields.
<table>
<tr>
<th>
**General Information**
</th> </tr>
<tr>
<td>
Project Title
</td>
<td>
Multi-purpose/Multi-sensor Extra Light Oceanography Apparatus
</td> </tr>
<tr>
<td>
Starting Date
</td>
<td>
1st December 2017
</td> </tr>
<tr>
<td>
Duration in
Months
</td>
<td>
39
</td> </tr>
<tr>
<td>
Call (part)
Identifier
</td>
<td>
H2020-SC5-2017-OneStageB
</td> </tr>
<tr>
<td>
Topic
</td>
<td>
SC5-18-2017 Novel in-situ observation systems
</td> </tr>
<tr>
<td>
Fixed EC
Keywords
</td>
<td>
Market development, Earth Observation / Services and applications,
Technological innovation, In-Situ Instruments / sensors
</td> </tr>
<tr>
<td>
Free Keywords
</td>
<td>
Novel measurements; Cost reduction
</td> </tr> </table>
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 776825.
## Scope
The Data Management Plan (DMP) details what data the project will generate,
whether and how it will be exploited or made accessible for verification and
re-use, and how it will be curated and preserved. The purpose of the DMP is to
support the data management life cycle for all data that will be collected,
processed or generated by the project.
## Responsibilities
The table below provides information on who has contributed to this document
and to which sections.
Table 3 Document Responsibilities
<table>
<tr>
<th>
**Name**
</th>
<th>
**Institution**
</th>
<th>
**Responsibilities**
</th> </tr>
<tr>
<td>
Diego Lozano García
</td>
<td>
DMS
</td>
<td>
All sections
</td> </tr>
<tr>
<td>
Nuno Almeida
</td>
<td>
DME
</td>
<td>
Revision
</td> </tr>
<tr>
<td>
Félix Pedrera García
</td>
<td>
DMS
</td>
<td>
Revision
</td> </tr>
<tr>
<td>
Jorge Silva
</td>
<td>
IH
</td>
<td>
Revision
</td> </tr>
<tr>
<td>
Joaquin del Rio
</td>
<td>
UPC
</td>
<td>
Revision
</td> </tr> </table>
## Document Structure
This document is structured as follows:
* Section 1 provides a project overview and then goes on to describe the scope, responsibilities and structure of this deliverable.
* Section 2 describes the datasets generated in the project.
* Section 3 analyses each of the aspects of FAIR data: Findable, Accessible, Interoperable and Re-usable.
* Section 4 presents the allocation of resources.
* Section 5 deals with data security.
# Data Summary
MELOA will develop a family of five versions of a low-cost, light-weight
multiparameter drifter. These WAVY drifters will be very easy to carry around
and to deploy. In spite of their simplicity, the data they produce have
far-reaching applications, directly providing valuable information that will
help to derive answers to diverse scientific, environmental and societal needs
and achieving multiple objectives, from complementing observational gaps in
ocean observation, to delivering validation datasets to satellite ground-
truthing, along with the real possibility of their effective use by the common
citizen.
The data generated in the MELOA project will be acquired in the test campaign
and demonstrations of the WAVY drifters. A data set in MELOA is the collection
of data samples acquired by a WAVY during a campaign. The contents of the data
samples depend on the type of WAVY drifter. The common information for all the
types is the GNSS (Time, position, velocity and direction) and the battery
power. The table below presents the contents of the data samples for each type
of WAVY drifter and the approximate size.
Table 4 WAVY dataset contents
<table>
<tr>
<th>
**WAVY**
**type**
</th>
<th>
**Sensors**
</th>
<th>
**Data sample contents**
</th>
<th>
**Sample size**
**(approx. in**
**CSV)**
</th> </tr>
<tr>
<td>
WAVY
basic
</td>
<td>
GNSS (1Hz)
Thermistor (0.17Hz)
</td>
<td>
Timestamp, Position, velocity, direction, n.
satellites, HDOP (76 bytes) 1x temperature (7 bytes)
Battery power (7 bytes)
</td>
<td>
90 bytes
</td> </tr>
<tr>
<td>
WAVY
littoral
</td>
<td>
GNSS (1Hz)
IMU (20Hz)
</td>
<td>
Timestamp, Position, velocity, direction, n. satellites, HDOP (76 bytes) wave
parameters (Wavelength, Amplitude, Period & Speed) + 5 fourier coefficients
(120 bytes) battery power (7 bytes)
</td>
<td>
203 bytes
</td> </tr>
<tr>
<td>
WAVY ocean
</td>
<td>
GNSS (1Hz)
2xThermistors
(0.17Hz)
IMU (20Hz)
</td>
<td>
Timestamp, Position, velocity, direction, n. satellites, HDOP (76 bytes) wave
parameters (Wavelength, Amplitude, Period
& Speed) + 5 fourier coefficients (120 bytes)
2x temperatures (14 bytes) battery power (7 bytes)
</td>
<td>
217 bytes
</td> </tr>
<tr>
<td>
WAVY
ocean plus
</td>
<td>
GNSS (1Hz)
2xThermistors
(0.17Hz)
IMU (20Hz)
</td>
<td>
Timestamp, Position, velocity, direction, n. satellites, HDOP (76 bytes)
wave parameters (Wavelength, Amplitude, Period
& Speed) + 5 fourier coefficients (120 bytes)
2x temperatures (14 bytes) battery power (7 bytes)
</td>
<td>
217 bytes
</td> </tr>
<tr>
<td>
WAVY ocean atmo
</td>
<td>
GNSS (1Hz)
2xThermistors
(0.17Hz)
IMU (20Hz) 1xAir pressure gauge (0.17Hz)
2xThermistors
(0.17Hz)
</td>
<td>
Timestamp, Position, velocity, direction, n.
satellites, HDOP (76 bytes)
wave parameters (Wavelength, Amplitude, Period
& Speed) + 5 fourier coefficients (120 bytes)
2x temperatures (14 bytes)
1x air pressure value (7 bytes) 2x air temperatures (7 bytes) battery power (7
bytes)
</td>
<td>
238 bytes
</td> </tr> </table>
The format of the data set is equivalent to the L1 Product format, which is
defined in the deliverables WAVY L1 Product Specifications (V1 D4.04, V2
D4.12). In the current version, it basically consists of a CSV file containing
the data samples acquired by a WAVY and a metadata JSON file specifying the
WAVY id, campaign, time period and location. In further iterations during the
project, other data formats like O&M JSON and GeoJSON may be supported.
The size of a data set depends on the WAVY type, the sampling rate and the
duration of the WAVY activity during the campaign. The sampling rate varies
according to the transmission channel used to receive the WAVY data: some
minutes with ARGOS Satellite for WAVY ocean, a few seconds with GPRS for WAVY
littoral and 1 second with WIFI (sampling rate of the GNSS). For instance, the
size of a data set of a WAVY littoral during a day (assuming a sampling rate
of 1Hz) will be around 17 MB in CSV format. Note that the raw data from the
IMU (recorded at 20Hz) are used to calculate the wave parameters that are the
ones stored (with lower rate) in the data sets offered to the user.
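As a back-of-the-envelope check of that figure (our own calculation based on the approximate sample size in Table 4, not a project specification):

```python
# One day of WAVY littoral samples at 1 Hz, ~203 bytes per sample in CSV.
samples_per_day = 24 * 60 * 60           # 86,400 samples
bytes_per_day = samples_per_day * 203    # 17,539,200 bytes
print(bytes_per_day / 1024 / 1024)       # ~16.7 MiB, i.e. "around 17 MB"
```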
The number of datasets obtained in a campaign depends on the duration of the
campaign and the type and number of WAVYs. The data obtained from a WAVY could
be divided into several data sets for different time intervals.
The Field Test campaigns will be defined in the deliverable Field Tests
Campaigns Plan (V1 D6.01, V2 D6.02). The list of MELOA data sets cannot be
identified yet and will be described in the next version of this document,
after the validation campaigns. The Field Test Campaigns will be divided into
groups: open ocean, Argos V4 (Mediterranean Sea) and coast (Portuguese, Irish
and Spanish). Each of these groups will have a dedicated Test Report with two
versions, one for each test period of the project; these reports will
summarise the obtained results and the validity of the data sets for each
test campaign:
* Portuguese Coast Field Tests Report (D6.03 and D6.04)
* Irish Coast Field Tests Report (D6.05 and D6.06)
* Spanish Coast Field Tests Report (D6.07 and D6.08)
* Argos V4 Field Tests Report (D6.09 and D6.10)
* Open Ocean Field Tests Report (D6.11 and D6.12)
# FAIR data
## Making data findable, including provisions for metadata
The MELOA Catalogue solution is based on CKAN, a tool used by national and
local governments, research institutions, and other organisations who manage
and publish lots of data collections. Once the data sets are published, users
can use its faceted search features to browse and find the data set they need,
and preview it using maps, graphs and tables. Each data set is associated to a
campaign (CKAN group) and an organisation, so that the user can easily browse
among the data sets belonging to a campaign or an organisation.
The MELOA catalogue stores the WAVY data sets with metadata following the OGC
Observation&Measurement profile and formatted in JSON. Also, there will be a
dictionary of the metadata compliant with INSPIRE (ISO 19115). The metadata
shall include the following information (a sketch of a possible JSON shape
follows the list):
* a title and description for the data acquisition campaign
* the unique ID of the WAVY
* the location and the date of the launch of the campaign
* all the configurations of all the sensors in the WAVY
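A minimal sketch of what such a metadata file could look like is given below. The actual record follows the OGC Observation&Measurement profile described above and the WAVY L1 Product Specifications, so the keys used here are illustrative assumptions only.

```python
import json

# Hypothetical metadata record for one WAVY data set (illustrative keys).
metadata = {
    "title": "Demo campaign, Portuguese coast",
    "description": "Surface drift test in the surf zone",
    "wavy_id": "WAVY-LITTORAL-042",                  # unique ID of the WAVY
    "campaign": {
        "launch_date": "2019-06-01T09:00:00Z",       # date of the launch
        "launch_location": {"lat": 41.15, "lon": -8.68},
    },
    "sensors": [                                     # sensor configurations
        {"type": "GNSS", "rate_hz": 1.0},
        {"type": "thermistor", "rate_hz": 0.17},
    ],
}
print(json.dumps(metadata, indent=2))
```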
As introduced above, the format and naming convention of the data sets are
defined in the deliverables WAVY L1 Product Specifications (V1 D4.04, V2
D4.12). The filename of a data set includes a version number that makes the
data set product unique.
## Making data openly accessible
The data sets are mainly acquired in the Field Test Campaigns during the MELOA
project as described in the section 2. The raw data obtained by the WAVY
drifters in the campaigns are revised (e.g. cleaning some spurious samples)
and in some cases postprocessed (e.g. calculating wave parameters) in order to
obtain the final data sets that will be offered to the users. The MELOA data
sets will be openly accessible after their revision and, in some cases, after
the publication of the associated reports. So far, no access restrictions have
been identified for the data sets that will be generated in MELOA. Should
certain data sets require access restrictions, they will be clearly identified
and explained in the second version of the Data Management Plan.
To have access to the MELOA data sets, a user has to register in the MELOA web
portal. The registration will be free and it will allow the users to use the
MELOA web applications, in particular the Catalogue and the Geo portal. The
Catalogue allows users to browse and search the data sets collected by the
WAVYs, and to download or preview them using maps, graphs and tables.
It also provides APIs for access to the data sets by the MELOA Geo portal
and other tools such as federated Catalogues.
The Geo portal provides the capability of visualisation of the MELOA data sets
in a way that is easy to find and interpret by the general public. The Geo
portal retrieves the data sets from the MELOA Catalogue and offers links to
the Catalogue for downloading them. The SW user manuals, version 1 and 2,
correspond to the deliverables D4.01/D4.02 for the Catalogue and D4.06/D4.07
for the Geo portal. These manuals will be available online in the knowledge
base of the Helpdesk in the MELOA web portal and also as a link in the web
applications. The MELOA catalogue and Geo portal will be accessible online at
the respective URLs:
* _http://catalogue.ec-meloa.eu_
* _http://geoportal.ec-meloa.eu_
In order to get closer to relevant user communities, the metadata of the MELOA
data sets will be federated with data hubs such as GEOSS and Copernicus. There
will be a metadata link between the WAVY data catalogue and the nextGEOSS
catalogue. Furthermore, the link to the Copernicus programme will be assured
by linking to the CORDA portal and the EuroGOOS, with the provision of data
services based on WMS layers (to be provided by the geoportal in V2).
Also, WAVY data sets will be accessible in the FIWARE catalogue so that the
FIWARE community can use them in the scope of the FIWARE Lab to test
integration of the FIWARE SW components with devices such as the WAVYs. FIWARE
is an open source community that generates open data and open source code
[R-6]. Opening the WAVY data sets to such communities will open new
opportunities of data exploitation to the market.
For the time being, we are not planning to deposit the WAVY data sets in
other repositories. However, the possibility will be analysed further, taking
into account certified repositories from the Registry of Research Data
Repositories (https://www.re3data.org/), in particular the ones supported by
OpenAIRE.
## Making data interoperable
For WAVYs data sets, interoperability with other similar platforms will be
achieved by using standard implementations from the Open Geospatial Consortium
such as the Observation&Measurements profile, Web Map Service, Web Feature
Service and Sensor Observation Service. The metadata of the data sets will be
compliant with the OGC Observation&Measurements profile. The OGC WMS/WMTS/WFS
will be used to export data sets of L1 WAVYs datasets and added-value products
to Copernicus data hubs. The API to connect with the FIWARE platform will be
supported by the MELOA Data services. These standards and methodologies are
useful to federate the data sets with other data hubs and catalogues (FIWARE,
NextGEOSS) that are used by other user communities, organisations or
institutions.
The use of certified repositories supported by OpenAIRE will be considered;
in that case, their requirements shall be taken into account [R-1].
The actual data in the MELOA data sets are formatted in CSV files that are
easily readable by many standard tools (e.g. Open Office). The CSV format
makes it simple to share the MELOA data sets with other users and researchers.
Moreover, data from other sources can be translated to CSV files and combined
with the WAVY data using commonly used SW applications. Other formats will be
evaluated and eventually implemented during the project, such as O&M JSON or
GeoJSON.
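Because the data sets are plain CSV, combining or analysing them needs no special tooling. The fragment below reads a small, invented L1-style extract with pandas; the real column set is defined in the L1 Product Specifications, so the names here are assumptions.

```python
import io

import pandas as pd

# Invented WAVY L1-style CSV fragment (column names are assumptions).
csv_data = io.StringIO(
    "timestamp,lat,lon,velocity,direction,temperature\n"
    "2019-06-01T09:00:00Z,41.1500,-8.6800,0.42,275,17.9\n"
    "2019-06-01T09:00:01Z,41.1500,-8.6801,0.44,276,17.9\n"
)
df = pd.read_csv(csv_data, parse_dates=["timestamp"])
print(df["velocity"].mean())  # analyse or merge like any other table
```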
## Increase data re-use (through clarifying licences)
The MELOA data sets will have an open license that will require re-users to
give attribution to the source of the data (the MELOA project). It is still to be
decided if the open license will require that the derived data must be shared
with the same license (called share-alike). Instead of creating our own
license, we may select an existing open license, possible candidates are:
Creative Commons (CC-by [R-2], CC-by-sa [R-3]) or Open Data Commons (ODC-by
[R-4], ODbL [R-5]). The license will be indicated in the MELOA web portal.
The data sets offered to the users in the MELOA Catalogue are revised to
guarantee their quality and validity before they are published with open
access. In general, they will be analysed to check that they satisfy the test
objectives and are valid. The time to perform the analysis may include the
creation of the report for the associated Field test campaign in which the
conclusions of the test are agreed.
The MELOA web portal will be kept operational during the lifetime of the MELOA
project, and it will remain available afterwards as long as the EGI
infrastructure keeps the resources. After the project lifetime, the data sets downloaded from
the catalogue will remain reusable in accordance with the terms of MELOA open
license.
# Allocation of resources
The activities for making data FAIR are covered by the tasks in the WP4, in
particular by:
* T4.1 Development of catalogue and data storage component: it implements the discovery and access to the MELOA data sets as well as the connectors to FIWARE and the links to other data hubs (Copernicus and NextGEOSS).
* T4.2 Development data processing component (Level 1): this task defines the metadata and formats of the data sets, which are generated by the data processing component implemented in this task.
* T4.3 Development of Level 1 Data visualisation portal: the component developed in this task is provisioning of OGC WMS/WMTS/WFS layers for external applications.
In this way, the cost of making data FAIR in MELOA was already covered by the
estimations done for these tasks and it represents a small part of it.
The resources for the long term preservation have not been discussed yet in
the project.
# Data security
The EGI infrastructure provides redundancy of the HW storage with an
availability of 99.4%. A backup policy is defined to store all the server
data (including MELOA data sets and Catalogue databases) in an online secure
cloud storage (Amazon S3). In case of a dramatic loss of data in the MELOA web
portal, it will be restored from this backup. The backup may include VM images
and periodic snapshots to facilitate the recovery procedure.
For the long term, we have not planned yet to store the data sets in a
certified repository. They will be available in the MELOA web portal as long
as the EGI infrastructure keeps the resources for the project.
**END OF DOCUMENT**
# Data summary
_The template is a set of questions that you should answer with a level of
detail appropriate to the project._
_**It is not required to provide detailed answers to all the questions in the
first version of the DMP that needs to be submitted by month 6 of the
project.** Rather, the DMP is intended to be a living document in which
information can be made available on a finer level of granularity through
updates as the implementation of the project progresses and when significant
changes occur. Therefore, **DMPs should have a clear version number and
include a timetable for updates.** As a minimum, the DMP should be updated in
the context of the periodic evaluation/assessment of the project. If there are
no other periodic reviews envisaged within the grant agreement, an update
needs to be made in time for the final review at the latest. _
The Data Management Plan (DMP) describes the management life cycle for the
data to be collected, processed and generated by this Horizon 2020 project,
5G-Xcast. This DMP represents a first version of the final document. It is
intended to be updated, making it available on a finer level of granularity
through updates as the implementation of the project progresses and when
significant changes occur. The DMP will be updated in the context of the
periodic evaluation of the project. The next updates are planned to be
submitted to the official project website:
_Table 1. Timetable for updates of the data management plan._
<table>
<tr>
<th>
**Version**
</th>
<th>
**Date**
</th> </tr>
<tr>
<td>
First version
</td>
<td>
M3
</td> </tr>
<tr>
<td>
Revision for first periodic evaluation
</td>
<td>
M12
</td> </tr>
<tr>
<td>
Revision for final evaluation
</td>
<td>
M23
</td> </tr> </table>
## Purpose of data collection
_What is the purpose of the data collection/generation and its relation to the
objectives of the project?_
Throughout the project, partners of the consortium will naturally generate
data in the form of results and presentations whilst carrying out research
activities related to respective project objectives. The collection and
sharing of this information within the project is essential to allow the
effective coordination of research tasks among the task contributors. Data
will be shared internally through an internal repository, accessible only to
project partners.
In addition to the internal data sharing activities, a series of public
deliverables and presentations are planned for open publication. This will
make the key research discoveries available to the wider research
community and industry, including other 5G-PPP phase-2 projects. 5G-Xcast will
coordinate with other 5G projects related to broadcast and media to maximize
the exploitation of findings. This includes sharing project deliverables and
exploring possible exchanges of results with other projects; i.e. where
appropriate, 5G-Xcast results could be used in other projects and vice versa.
The sharing of and building upon knowledge is the foundation of research and
discovery. The project will ensure the technical and scientific relevance of
the results, as well as promote the image of the consortium by supervising the
look-and-feel quality of its output. The 5G-Xcast project aims to provide
its main results and ideas through the official website. This public website
will also be the central hub for the dissemination activities.
Open access to scientific publications will be ensured by publishing submitted
papers in compliance with IEEE rules. A periodic newsletter will include
information on the latest achievements of the project and links to recent
public deliverables and forthcoming events.
## Types of data
_What types and formats of data will the project generate/collect?_
During the project, different types and formats of **open access** data will
be generated and collected, to be shared on the public website of the project:
* **Deliverables**: during the project, a series of confidential and public open deliverables (D) will be developed, related to specific tasks. All documents will be shared with 5G-PPP projects for inter-project cooperation. Public deliverables will be released to the general public. To monitor the work progress of the tasks, a first draft version will be released several months before the final version.
* **Presentations**: main presentations summarising the global results of the different work packages (WP) will also be shared through the official website of the project. The organisation of workshops will be proposed to relevant conferences. In order to have the maximum impact, the presentations made at these events will also be accessible to the general public.
* **Results:** during the project, specific results are expected to be shared. Examples of these results would be Matlab data files, coverage maps in terms of signal strength level, or information from field trials such as Global Positioning System (GPS) position, signal level, interference, etc.
* **Standardization technical contributions** from project partners will also be shared on the official webpage. Examples of standardization forums are 3GPP and DVB.
Currently, project members do not expect to make public any code related to
simulation platforms or specific tools. All types of data mentioned above will
also be shared internally among the members of the consortium throughout the
drafting phase. By making use of the EBU repository and email lists dedicated
to this purpose, project partners can collaborate, jointly building the
deliverables with shared access to all data. In addition, project partners
will share **privately** (allowing access to project partners only) other
types of data, in order to ensure that the objectives of the project are
fulfilled:
* **Software:** during the project, some simulation platforms and software tools will be shared among the partners. Different repositories will be used, depending on the nature of the simulation tool. For instance, a common air interface simulator will be developed among several partners by making use of the Git system. Git is a free and open source distributed version control system designed to handle very large projects with speed and efficiency.
* **Preliminary results** in the form of presentations, spreadsheets, figures, etc., will be shared among partners through the internal repository.
* **Preliminary research ideas** presented in teleconferences will be shared among partners through the internal repository as well.
* **Standardization technical documents:** partners submitting contributions to standards developing organisations (SDOs) will make a contribution public if the SDO's documents are public (for example, 3GPP). For other SDOs whose documents are restricted to members (for example, DVB), partners will check with the chairman and aim to make the documents public whenever possible.
## Re-use of existing data
_Will you re-use any existing data and how? What is the origin of the existing
data?_
5G-Xcast will make use of existing data developed and validated by partners
outside the framework of this project. Public specifications coming from DTT
(Digital Terrestrial Television) committees such as DVB (Digital Video
Broadcasting) or ATSC (Advanced Television Systems Committee), as well as 3GPP
(3rd Generation Partnership Project) technical specifications (TS) and
technical reports (TR), will be used as a starting point for the further
development of the required data.
5G-Xcast will not develop technologies entirely from scratch; partners will
build upon concepts already developed in 3GPP and 5G-PPP phase-1 projects, for
unicast PTP transmissions. Baseline data such as simulators and tools based on
current specifications will be considered as a benchmark for the 5G-Xcast
technology solutions developed within the project.
Likewise, scientific journal and conference publications, technical reports,
white papers and workshops will also be considered for calibration and cross-
checking of the technological data deployed.
## Expected size
_What is the expected size of the project data?_
The size of the data will depend on the outcomes of the project research
tasks. The 5G-Xcast project is expected to provide:
* At least one presentation per WP summarizing the main findings of the work.
* Specific results such as field trials, figures or data files.
* Several specification technical contributions.
* Eight confidential deliverables.
* Nineteen open deliverables.
* Several videos about demonstrations and showcases could be produced and released. These could relate to one or more of the following events:
  * Demonstration at the International Broadcasting Convention (IBC) in 2018.
  * Demonstration at either the Mobile World Congress (MWC) or the European Conference on Networks and Communications (EuCNC) in 2019.
  * Showcase at the European Championships of 2018.
* Dissemination activities:
  * 40 journal papers, whitepapers and conference papers.
  * 10 filed patents.
  * 15 standard contributions.
  * 10 keynotes and panels.
  * 10 participations in 5G or broadcast events and forums.
  * 8 workshops in major IEEE conferences.
  * 4 summer schools/trainings.
## Data utility
_To whom might it be useful ('data utility')?_
5G-Xcast data will be available to the research and development industry, the
European creative media industry and the telecom industry. The project will
facilitate the exploitation of the outcomes in future products and services
and will provide useful knowledge for the faster deployment of 5G networks in
Europe. In addition, the different types of data provided will be useful not
only to industry, but also to universities, research institutes, scientific
magazines and the specialised press. Concerning the press, contacts will be
established with the relevant trade press in order to extend the utility and
reach of communication activities.
# Findable, accessible, interoperable and re-usable (FAIR) data
## Making data findable, including provisions for metadata
_Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?_
Internally, documents (presentations, figures, deliverables, etc.) will use a
specific format for tracking all internal sharing per WP, within the project
internal repository.
Externally, several metadata tracking methods will be used, depending on the
type of data. Publications derived from the project will follow the DOI
(Digital Object Identifier) mechanisms already established in the scientific
research community. These publications will also include keyword metadata as
well as descriptive titling. As such, they will become indexed and searchable
by any academic or research search tool (including IEEE Xplore and
ResearchGate, as well as most public search engines such as Google, Bing,
etc.).
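As an illustration of how a DOI makes a publication machine-findable, the sketch below uses the doi.org resolver's standard content negotiation to retrieve citation metadata; the DOI shown is a placeholder, not an actual 5G-Xcast publication:

```python
import requests  # pip install requests

# Sketch of DOI-based discoverability: the doi.org resolver supports content
# negotiation and can return citation metadata (CSL JSON) directly.
# The DOI below is a placeholder, not a real 5G-Xcast publication.
doi = "10.1234/example-doi"  # hypothetical DOI

resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
resp.raise_for_status()
meta = resp.json()
print(meta.get("title"), meta.get("author"), meta.get("issued"))
```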
Patents follow a well-established form of description via metadata and possess
unique identifiers, which vary depending on the country of filing and the
patent type. They are, once again, searchable via patent indexing services and
most search engines.
_What naming conventions do you follow? Do you provide clear version numbers?_
The naming conventions are explained in deliverable D1.1 [2] and are
summarized below. Deliverables have unique IDs and are always presented with
their full title. This makes them accessible via search engines and easily
findable through the project website.
Project partners will also use an internal versioning and naming convention.
The nomenclature fixed throughout the 5G-Xcast project is as follows (a small
validation sketch is given after the list):
1. Working documents in the repository will have names **5G-Xcast_DZ.T_Draft_vX.Y.docx** for the draft version. Once the document is reviewed and ready to be released, it will be made available to the EC and on the project website, and the naming will be changed.
2. Final versions of deliverables have the name **5G-Xcast_DZ.T_Title_vX.Y.docx**, where _DZ.T_ denotes the deliverable number and _vX.Y_ is the version number.
   * Version numbering shall only arrive at v1.0 once the document is ready to be sent to the EC.
   * Draft versions are differentiated by using v0.Y, with _Y_ being an integer between 0 and 9.
   * The version transmitted to the EC will be labelled v1. If the EC requests modifications, the updated version will be labelled v2; the intermediate versions will be labelled v1.1, v1.2, etc.
3. Internal documents will be stored in the repository and might be used as a basis for public deliverables. Internal documents of a WP will have names of the form **5G-Xcast_WPn_Title_vX.Y.ext** (the extension _ext_ depends on the type of document).
4. For the **Quarterly Reports (QR)**, the nomenclature will be:
   * A partner QR will be named **5G-Xcast_QRY_Partner.docx**, where _Partner_ is the acronym of the project partner and _Y_ is the quarter number (for example, _5GXcast_QR1_UPV.docx_).
   * A WP quarterly report will be prepared by the WP leader and should be named **5G-Xcast_WPX_QRY.docx**, where _X_ is the WP number (for example, _5GXcast_WP1_QR3.docx_).
   * Quarterly Management Reports (QMR) will be prepared by the project manager and should be named **5G-Xcast_QMRY.docx**.
5. For journals, articles, conference papers and standard contributions, the naming standard is **<Event>_<yyyy>_<Authors>_<Title>**:
   * _<Event>_: indicates the journal, conference or standardization body (e.g. VTC, IEEECommMag, 3GPP, etc.).
   * _<yyyy>_: year of the publication.
   * _<Authors>_: the first three letters of the last name of the author. In the case of several authors, only the first three letters of the last name of the main author will be indicated, appending '_etal_'.
   * _<Title>_: the title of the document. Only two meaningful words indicating the contents of the document will be used. Titles will be kept shorter than 10 letters by using abbreviations.
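The sketch below is a best-effort transcription of some of these rules into checkable patterns; it is illustrative only, not an official project tool, and the example file names are hypothetical:

```python
import re

# Minimal sketch checking file names against the conventions listed above.
# Best-effort transcription of the rules, not an official 5G-Xcast tool.
PATTERNS = {
    # e.g. 5G-Xcast_D1.1_ProjectHandbook_v1.0.docx (illustrative title)
    "deliverable": re.compile(
        r"^5G-Xcast_D\d+\.\d+_[A-Za-z0-9-]+_v\d+\.\d+\.docx$"),
    # e.g. 5G-Xcast_WP1_QR3.docx
    "wp_quarterly_report": re.compile(r"^5G-Xcast_WP\d+_QR\d+\.docx$"),
    # e.g. 5G-Xcast_QR1_UPV.docx
    "partner_quarterly_report": re.compile(
        r"^5G-Xcast_QR\d+_[A-Za-z0-9]+\.docx$"),
}

def classify(filename: str) -> str:
    """Return the document type a file name matches, or 'unrecognised'."""
    for kind, pattern in PATTERNS.items():
        if pattern.match(filename):
            return kind
    return "unrecognised"

assert classify("5G-Xcast_D1.1_ProjectHandbook_v1.0.docx") == "deliverable"
assert classify("5G-Xcast_WP1_QR3.docx") == "wp_quarterly_report"
```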
_Will search keywords be provided that optimize possibilities for re-use?_
Keywords are also provided in all public deliverables to optimize
possibilities for re-use.
## Making data openly accessible
_Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions._
As a result of the 5G-Xcast project, many results will be generated and
produced in order to fulfil the planned objectives and tasks. Some of the
results produced will be shared (possibly under certain restrictions) with the
general public. Currently, the particular data results to be shared are not
defined. The specific results to be released by each project partner will be
specified in future versions of this document.
Open deliverables produced and used within the project will be made openly
available as the default as well.
Data closed for legal and contractual reasons will not be shared. This type of
data, which may have a very sensitive commercial/technological value to some
partners, will be shared only to the level, and with the number of partners,
required for the execution of specific project tasks, such as demonstrations
and showcases.
_How will the data be made accessible (e.g. by deposition in a repository)?_
The collection and sharing of information within the project is essential to
allow the effective coordination of research tasks among the task
contributors. Data will be shared internally through a workspace powered by
the European Broadcasting Union (EBU), a current partner of the project
consortium. These data will be accessible only to project partners. The
5G-Xcast project will also provide its main results, thoughts and ideas
through the official project website (_http://5g-xcast.eu/_). The website is
open and accessible to the general public.
In order to ensure the largest possible exposure of the project, different
social media and networking tools are used (LinkedIn and Twitter). A YouTube
channel is used as well to capture presentations from e.g. industry forum
demonstrations, workshops, and test-bed trials.
_What methods or software tools are needed to access the data?_
Specific software tools will be required for the correct access to the data
generated. Open deliverables and presentations (dissemination activities,
workshops, WP summaries, etc.) will be uploaded using the Portable Document
Format (PDF). Microsoft Office or equivalents will also be required as a basic
tool to open DOC, XLS and PPT documents.
_Is documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?_
Currently, there is no information about software to be shared or published on
the website. The same applies to documentation related to tests and field
trials. In the future, as field trials become more specific, partners may wish
to share some of the data. This will be updated in an upcoming version.
_Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible._
Open access data, associated metadata and documentation will be deposited in
the File Download Area of the official project website
(_http://5g-xcast.eu/documents/_). Internally, all data and associated
metadata will be deposited in the repository folder created for the associated
work package.
_Have you explored appropriate arrangements with the identified repository?_
All project partners and contributors fulfil the appropriate arrangements with
the internal repository. Note that this repository is implemented within a
workspace that belongs to a consortium member, i.e. the European Broadcasting
Union. This repository is a customised implementation of the Atlassian
Confluence software. For more information, please see
_https://www.atlassian.com/software/confluence_ .
_If there are restrictions on use, how will access be provided? How will the
identity of the person accessing the data be ascertained?_
Individual member registration is required to access the internal repository
and to ascertain the identity of the person accessing the data. Although
anyone can sign up as a new user, specific content such as the 5G-Xcast
project repository is restricted to member organizations and individual
partners.
On the other hand, no registration is required to visit the different tags or
to access the data provided on the project webpage. Documents are open and
accessible to the general public without any restriction.
## Making data interoperable
_Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?_
Data produced in the project will be interoperable. The project will allow,
to the extent possible, data exchange and re-use of published results by the
general public, i.e. individual researchers, institutions, organisations, etc.
All reports, deliverables and public presentations will be presented in
English (or, where that is not possible, translations will be provided). Final
versions of such documents will also be provided in PDF format, offering wide
support and readability on a wide range of devices and platforms, with many
free and open-source readers available. Where this is not possible, the
project will endeavour to provide data in formats which are open, widely
accepted and accessible to the wider public through open-source utilities.
## Increase data re-use
_How will the data be licensed to permit the widest re-use possible?_
Public project outputs such as public deliverables, papers, presentations and
project results will be available on the project website and can be reused by
other projects.
Some of the contributions of the project will also be available from different
standards developing organisations (SDOs). Some of these organisations allow
open access to the general public (such as 3GPP), while others allow access
only to members (such as DVB, IEEE). Indirect access to some of the project
results will be possible via these standards and technical organisations.
However, specific rules apply for each organisation (e.g. 3GPP allows access
to results, but they cannot be reproduced or used without permission).
Specific confidential material will require direct licensing from the
originating company.
_How long is it intended that the data remains re-usable?_
Data produced within the 5G-Xcast framework and openly published on the
website will be usable by third parties, both during and after the end of the
project. On use, there is a requirement for appropriate attribution back to
the 5G-Xcast project. Any modifications to the original data or results must
be indicated clearly. Data will remain accessible for as long as the project
website is kept open. Data already obtained will remain usable indefinitely.
_When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible._
Note that this DMP represents a first version of the document, released in M3
of the project. Therefore, information about embargoes, or about when the data
will be made available for re-use, is not yet known. This information will be
specified in future versions of the deliverable.
_Are data quality assurance processes described?_
The data quality assurance processes, including the review process and risk
mitigation for all project outputs, are described in the project proposal [1]
and the project guidelines [2].
# Allocation of resources
_What are the costs for making data FAIR in your project?_
The project has an entire Work Package, WP7, dedicated to the dissemination,
standardisation and exploitation of the data and research produced within the
project. This acts as a focused resource whose key responsibility is ensuring
that data is findable, accessible, interoperable and reusable.
WP7 Objectives:
1. Promote the project, its objectives, results and the concepts developed.
2. Influence standardisation bodies to adopt the concepts developed in 5G-Xcast.
3. Coordinate with other 5G European projects for maximum synergy.
4. Build awareness of the use of broadcast, its different applications, and how 5G-Xcast helps in this regard.
5. Maximize the exploitation of the project results by consortium members.
6. Maximize the innovation impact of the project.
To best achieve this, WP7 will work closely with all other WP leaders to
ensure key research output and accompanying insights are shared to a wider
audience, promoting the project, expanding knowledge in the field and
promoting further research in the field of 5G broadcast.
WP7 has resources dedicated throughout the project, starting in month 1 and
ending in month 24. In total this amounts to 69 person months of the total
project budget, and a minimum of 1 person month has been assigned to each
project partner (with most partners getting 2 or more). The full breakdown is
provided in Table 2. This allows time for each partner to be dedicated to
interfacing with the external world and making their research available to
the wider academic, scientific and industrial community. This encourages the
establishment and continuation of communication with different parties, making
data available through the public project website and maximising scientific
visibility through publication in major conferences and high-impact journals
(IEEE, etc.).
_Table 2. WP7 Resource Allocation per Participant_
<table>
<tr>
<th>
**Participant number**
</th>
<th>
1
</th>
<th>
2/3
</th>
<th>
4
</th>
<th>
5
</th>
<th>
6
</th>
<th>
7
</th>
<th>
8
</th>
<th>
9
</th>
<th>
10
</th>
<th>
11
</th>
<th>
12
</th>
<th>
13
</th>
<th>
14
</th>
<th>
15
</th>
<th>
16
</th>
<th>
17
</th>
<th>
18
</th> </tr>
<tr>
<td>
**Short name of participant**
</td>
<td>
UPV
</td>
<td>
NOK
</td>
<td>
BBC
</td>
<td>
BT
</td>
<td>
BPK
</td>
<td>
BLB
</td>
<td>
EXP
</td>
<td>
FS
</td>
<td>
IRT
</td>
<td>
LU
</td>
<td>
NOM
</td>
<td>
O2M
</td>
<td>
SEUK
</td>
<td>
TIM
</td>
<td>
TUAS
</td>
<td>
EBU
</td>
<td>
UNIS
</td> </tr>
<tr>
<td>
**Person months per participant:**
</td>
<td>
12
</td>
<td>
8
</td>
<td>
1
</td>
<td>
2
</td>
<td>
2
</td>
<td>
2
</td>
<td>
3
</td>
<td>
4
</td>
<td>
2
</td>
<td>
3
</td>
<td>
4
</td>
<td>
1
</td>
<td>
13
</td>
<td>
2
</td>
<td>
2
</td>
<td>
6
</td>
<td>
2
</td> </tr> </table>
Note that, apart from this, some additional costs may be covered in this WP,
e.g. website costs. However, additional costs will depend on the data and
results shared during and after the project.
_Who will be responsible for data management in your project?_
The person responsible for data management within the 5G-Xcast project will be
Dr Belkacem Mouhouche, from Samsung Electronics R&D UK (SEUK). Dr Mouhouche is
the Innovation Manager of the project, and leader of this WP7.
_Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?_
Data is intended to be preserved long-term, after the end of the project.
Internal and confidential reports, as well as results, presentations and all
types of data, are expected to be available in the EBU internal repository as
a static copy for at least 5 years. UPV could keep the public website open for
a minimum of two years following the end date of the project. Note that this
is an early version of the DMP, and no commitment has been made in this
regard. More information will be given in future versions of this deliverable.
# Data security
_What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?_
## Shared project data
The sharing of all non-public data within the project is carried out through a
team collaboration platform provided by the EBU. Access to the platform
requires each individual to generate a personal username and password.
Passwords are encrypted and known only to the individual herself/himself,
i.e. neither the EBU nor the platform provider has access to passwords. Each
individual must then be associated with the project space by the EBU
administrators, in agreement with the project management team. Only once this
association has been made is access to the project space enabled for the user.
The platform provided by the EBU is part of the company's information
infrastructure and is protected by state-of-the-art security systems. It
employs enterprise-strength encryption and an enterprise-level backup plan in
the event of system failure (i.e. a daily backup of the entire content of the
repository).
## Data within each partner institution
The consortium is comprised of established and respected institutions, each of
which is expected to have measures in place to protect and preserve data, as
well as relevant policies to ensure compliance. Furthermore, each partner has
agreed to comply with the consortium agreement, requiring observation of
obligations under the EU Data Protection Directive 95/46/EC [1]. For the
duration of the project, the data generated by each partner whilst carrying
out their respective research activities is subject to their own internal
measures of safety and security. Partners are required to provide updates and
share the outcomes of this research on a regular basis through the project, at
which point documentation will be uploaded to the EBU collaboration platform
and be subject to the storage and security levels outlined above.
# 1. Introduction
## 1.1 Project Overview
The main objective of BETTER is to implement an EO Big Data intermediate
service layer devoted to harnessing the potential of the Copernicus and
Sentinel European EO data directly from the needs of the users.
BETTER aims to go beyond the implementation of generic EO data tools and to
combine those tools with user experience, expertise and resources to
deliver an integrated EO intermediate service layer. This layer will deliver
customized solutions, denominated Data Pipelines, for large-volume EO and
non-EO dataset access, retrieval, processing, analysis and visualisation. The
BETTER solutions will focus on addressing the full data lifecycle needs
associated with EO Big Data, to bring more downstream users to the EO market
and maximise exploitation of the current and future Copernicus data and
information services.
BETTER developments will be driven by a large number of Data Challenges put
forward by users deeply involved in addressing the Key Societal Challenges.
The World Food Programme, the European Union Satellite Centre and the Swiss
Federal Institute of Technology Zurich, working in the areas of Food Security,
Secure Societies and GeoHazards respectively, will be the challenge promoters.
During the project each promoter will introduce 9 challenges, 3 in each
project year, with an additional nine brought by the "Extending the market"
task, for a total of 36 challenges. The Data Pipelines will be deployed on top
of a mature EO data and service support ecosystem which has been under
consolidation through previous R&D activities.
The ecosystem, and its further development in the scope of BETTER, relies on
the experience and versatility of the consortium team responsible for
service/tool development from DEIMOS and Terradue. This is complemented by the
Fraunhofer Institute's experience in Big Data systems, which brings to the
consortium transversal knowledge-extraction technologies and tools that will
help bridge the current gap between the EO and ICT sectors.
<table>
<tr>
<th>
**General Information**
</th> </tr>
<tr>
<td>
Project Title
</td>
<td>
Big-data Earth observation Technology and Tools Enhancing Research and
development
</td> </tr>
<tr>
<td>
Starting Date
</td>
<td>
1st November 2017
</td> </tr>
<tr>
<td>
Duration in
Months
</td>
<td>
36
</td> </tr>
<tr>
<td>
Call (part) Identifier
</td>
<td>
H2020-EO-2017
</td> </tr>
<tr>
<td>
Topic
</td>
<td>
EO-2-2017
EO Big Data Shift
</td> </tr>
<tr>
<td>
Fixed EC Keywords
</td>
<td>
Visual techniques / Visual analytics / Intelligent data understanding, Earth
Observation /
Services and applications, Space data exploitation, Data mining and searching
techniques, Downstream industry
</td> </tr>
<tr>
<td>
Free Keywords
</td>
<td>
Data Challenges, Data Pipelines
</td> </tr> </table>
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 776280.
## 1.2 Scope
This document defines the Data Management approach for the datasets used and
generated in the project.
## 1.3 Responsibilities
The table below provides information on who contributed to this document and
to which sections.
Table 3 Document Responsibilities
<table>
<tr>
<th>
**Name**
</th>
<th>
**Institution**
</th>
<th>
**Responsibilities**
</th> </tr>
<tr>
<td>
Diego Lozano García
</td>
<td>
DMS
</td>
<td>
All sections
</td> </tr>
<tr>
<td>
Nuno Grosso
</td>
<td>
DME
</td>
<td>
Section 2.1 and Revision
</td> </tr>
<tr>
<td>
Fabrice Brito
</td>
<td>
TDUE
</td>
<td>
Section 2.2, 3.1, 3.2 and 3.3
</td> </tr>
<tr>
<td>
Pedro Gonçalves
</td>
<td>
TDUE
</td>
<td>
Revision
</td> </tr>
<tr>
<td>
Koushik Panda
</td>
<td>
DME
</td>
<td>
Revision
</td> </tr>
<tr>
<td>
Simon Scerri
</td>
<td>
IAIS
</td>
<td>
Revision
</td> </tr>
<tr>
<td>
Thomas Filippa
</td>
<td>
DMS
</td>
<td>
Section 2.1
</td> </tr> </table>
## 1.4 Document Structure
This document is structured as follows:
* Section One provides a project overview and then describes the scope, responsibilities and structure of this deliverable.
* Section Two describes the datasets used/generated by the data pipelines.
* Section Three analyses each of the aspects of FAIR data: Findable, Accessible, Interoperable and Re-usable.
* Section Four presents the resource allocation and deals with data security.
# Data Summary
## Datasets used/generated by the Data pipelines
The tables below provide the main information related to data set management:
the data format, preferential data source, access restrictions, area of
interest, expected data volume and the long-term data preservation policy. The
data sets generated by the BETTER data processing pipelines are raster files
in GeoTIFF format. Currently, the tables list the input and output datasets
used/generated by the Data Pipelines defined in the first and second cycles of
the project. They will be updated in the next version of the document to
include the datasets of the last cycle (not yet defined) and to complete some
fields that are currently only partially defined.
<table>
<tr>
<th>
**Input datasets included in all first cycle challenges**
</th> </tr>
<tr>
<td>
**Name**
</td>
<td>
**Challenges involved**
</td>
<td>
**Data format**
</td>
<td>
**Preferential Data source**
</td>
<td>
**Access restrictions**
</td>
<td>
**Area of interest**
</td>
<td>
**Expected data Volume**
</td>
<td>
**Long-term data preservation**
</td> </tr>
<tr>
<td>
Sentinel-1 GRD acquisitions of matching orbit directions
</td>
<td>
WFP-01-01 -
Hazards and Change detection using Sentinel-
1 SAR data
</td>
<td>
SAFE
</td>
<td>
Sentinel Open Data Hub
</td>
<td>
\-
</td>
<td>
AOI 1 NW - South Sudan
POLYGON((26.832 9.5136,
28.6843 9.5136, 28.6843
7.8009, 26.832 7.8009, 26.832
9.5136))
AOI 2 Renk - Blue Nile
POLYGON((32.0572 12.4549,
33.9087 12.4549, 33.9087
10.7344, 32.0572 10.7344,
32.0572 12.4549))
AOI 3 Niger Delta, Mali (updated after D2.2 delivered to EC) POLYGON((-5.5
17.26, -1.08
17.26, -1.08 13.5, -5.5 13.5, 5.17 17.26))
AOI 4 NE - Nigeria
POLYGON((12.9415 13.7579,
14.6731 13.7579, 14.6731
12.0093, 12.9415 12.0093,
12.9415 13.7579))
</td>
<td>
~ 40
Sentinel1A/B / 12 days ~ 63 GB / 12 days (1.58 GB / product)
</td>
<td>
\-
</td> </tr>
<tr>
<td>
SRTM 30m/90m
for Terrain Correction
</td>
<td>
Geotiff
</td>
<td>
NASA LTA
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
S1 precise orbit file(s)
</td>
<td>
SAFE
</td>
<td>
Sentinel Open Data Hub
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Sentinel-1 GRD
</td>
<td>
WFP-01-02 -
</td>
<td>
SAFE
</td>
<td>
Sentinel Open
</td>
<td>
\-
</td>
<td>
AOI 1 Niger
</td>
<td>
~ 40
</td>
<td>
\-
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Land cover changes and inter-annual vegetation performance
</th>
<th>
</th>
<th>
Data Hub
</th>
<th>
</th>
<th>
POLYGON((6.4788 14.5973,
7.5577 14.5973, 7.5577
13.6328, 6.4788 13.6328,
6.4788 14.5973))
AOI 2 Tajikistan
POLYGON((67.7116 37.9032,
68.791 37.9032, 68.791
36.9211, 67.7116 36.9211,
67.7116 37.9032))
AOI 3 Mali
POLYGON((-10.3668 15.3471, -
9.3518 15.3471, -9.3518
14.3406, -10.3668 14.3406, 10.3668 15.3471))
AOI 4 Afghanistan
POLYGON((67.6243 36.7228,
68.116 36.7228, 68.116
35.6923, 67.6243 35.6923,
67.6243 36.7228))
</th>
<th>
Sentinel1A/B / 12 days ~ 63 GB / 12 days (1.58 GB / product)
</th>
<th>
</th> </tr>
<tr>
<th>
Sentinel-2 L1C
</th>
<th>
WFP-01-02 - Land cover changes and inter-annual vegetation
performance
</th>
<th>
SAFE
</th>
<th>
Sentinel Open Data Hub
</th>
<th>
\-
</th>
<th>
~ 75
Sentinel2A/B / 10 days ~ 75 GB / 10 days (~1 GB / product)
</th>
<th>
\-
</th> </tr>
<tr>
<th>
Landsat 8 L1C Reflectances
</th>
<th>
WFP-01-02 - Land cover changes and inter-annual vegetation performance
</th>
<th>
Geotiff
</th>
<th>
EarthExplorer
</th>
<th>
\-
</th>
<th>
~ 15 Landsat
8 / 16 days ~ 15 GB / 16 days (~1 GB / product)
</th>
<th>
\-
</th> </tr>
<tr>
<td>
Copernicus (VGTProbaV) 1Km LAI time series. NRT product (RT0, RT2 and RT6)
</td>
<td>
WFP-01-03 EO indicators
for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</td>
<td>
netCDF
</td>
<td>
Copernicus Land
Monitoring
Service
</td>
<td>
\-
</td>
<td>
All WFP regions
</td>
<td>
1 product
LAI: ~310MB,
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Copernicus (VGTProbaV) 1Km fAPAR time
</td>
<td>
WFP-01-03 EO indicators for global Early
</td>
<td>
netCDF
</td>
<td>
Copernicus Land
Monitoring
Service
</td>
<td>
\-
</td>
<td>
All WFP regions
</td>
<td>
1 product
FAPAR: ~380
MB;
</td>
<td>
\-
</td> </tr> </table>
<table>
<tr>
<th>
series. NRT product (RT0, RT2 and RT6)
</th>
<th>
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
Time series of
MODIS MOD11C2
LST 2000-present
</td>
<td>
WFP-01-03 EO indicators
for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</td>
<td>
HDF
</td>
<td>
NASA LPDAAC
</td>
<td>
\-
</td>
<td>
All WFP regions
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
CHIRPS RFE 5Km resolution daily
data
</td>
<td>
WFP-01-03 EO indicators
for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</td>
<td>
Geotiff
</td>
<td>
Climate Hazards
Group InfraRed
Precipitation with
Station data
</td>
<td>
\-
</td>
<td>
All WFP regions
</td>
<td>
1.77 GB / 1 month (CHIRPS: ~58 MB per product)
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Sentinel-2 L1C
</td>
<td>
SATCEN-0101 - Thematic
Indexes for
Land Use
Indentification
</td>
<td>
SAFE
</td>
<td>
Sentinel Open Data Hub
</td>
<td>
\-
</td>
<td>
AOI 1 Colombia - Cauca-Narino
2°44'38.51"N 78°19'27.30"W, 2°
5'57.77"N 77°14'51.12"W,
0°55'38.70"N 78° 1'49.27"W,
1°34'56.24"N 79° 0'39.68"W
AOI 2 Albania border with
Greece
</td>
<td>
~ 220
Sentinel2A/B L2A / 10 days
~ 220 GB / 10 days ( ~1 GB / product)
</td>
<td>
\-
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
41° 0'57.81"N 20°44'47.45"E,
40°51'0.67"N 21°17'1.40"E,
39°34'1.65"N 20°22'47.57"E,
39°52'46.66"N 19°53'54.20"E
AOI 3 Afghanistan - Helmand - Nad-e-Ali
73°22'24.90"W
64°44'54.92"E,32° 0'17.86"N 65°
0'7.32"E, 30°25'32.61"N
63°57'57.16"E, 31°31'55.09"N
63°55'9.38"E
AOI 4 Serbia
45°56'55.10"N 18°42'39.05"E,
45°57'44.72"N 18°59'43.44"E,
45° 7'2.00"N 19°40'15.66"E,
44°48'47.50"N 18°55'57.11"E
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
Sentinel-1 A/B 1 SLC StripMap
</td>
<td>
SATCEN-0102 - Change Detection based on SAR single look complex (SLC) data
</td>
<td>
SAFE
</td>
<td>
Sentinel Open Data Hub
</td>
<td>
\-
</td>
<td>
AOI 1 - 15°18'50.37"N,
8°22'52.84"E, 15°28'49.28"N,
10° 2'15.33"E, 12°17'43.44"N,
10°38'54.40"E, 12° 9'54.62"N, 9°
2'22.33"E
AOI 2 - 23° 8'15.91"N,
12°55'57.41"E, 23°27'11.02"N,
14°28'12.27"E, 19° 6'4.65"N,
15°24'34.73"E, 18°52'42.78"N,
13°50'31.39"E
AOI 3 20° 6'22.43"N,
5°20'58.79"E, 20°13'57.72"N, 6°
3'52.23"E, 18°30'49.60"N,
6°25'48.99"E, 18°23'13.00"N,
5°42'55.45"E
</td>
<td>
~ 380
Sentinel1
A/B / YEAR
~ 3 TB / YEAR ( ~8 GB / product)
</td>
<td>
\-
</td> </tr> </table>
<table>
<tr>
<th>
Sentinel-2 L2A
</th>
<th>
SATCEN-01-
03 - Illicit Crop
Monitoring with Optical data
</th>
<th>
SAFE
</th>
<th>
Sentinel Open Data Hub
</th>
<th>
\-
</th>
<th>
AOI 1 Colombia - Cauca-Narino
2°44'38.51"N 78°19'27.30"W, 2°
5'57.77"N 77°14'51.12"W,
0°55'38.70"N 78° 1'49.27"W,
1°34'56.24"N 79° 0'39.68"W
AOI 2 Albania border with
Greece
41° 0'57.81"N 20°44'47.45"E,
40°51'0.67"N 21°17'1.40"E,
39°34'1.65"N 20°22'47.57"E,
39°52'46.66"N 19°53'54.20"E
AOI 3 Afghanistan - Helmand - Nad-e-Ali
73°22'24.90"W
64°44'54.92"E,32° 0'17.86"N 65°
0'7.32"E, 30°25'32.61"N
63°57'57.16"E, 31°31'55.09"N
63°55'9.38"E
</th>
<th>
~ 220
Sentinel2A/B L2A / 10 days
~ 220 GB / 10 days ( ~1 GB / product)
</th>
<th>
\-
</th> </tr>
<tr>
<td>
Envisat ASAR dataset, observation period 2002-2010
</td>
<td>
ETHZ-01-01 Global catalogue of co-seismic deformation
</td>
<td>
N1
</td>
<td>
EOLI
</td>
<td>
Depending on the processing level, ESA might need to approve the
pre-processing of images
</td>
<td>
Coordinates of the Earthquake epicenters with magnitude higher than 5 (10 km
box around them)
</td>
<td>
~ 4 ENVISAT
ASAR / event ~ 2.3 GB / days ( ~ 600 MB / product)
</td>
<td>
\-
</td> </tr>
<tr>
<td>
SRTM-1 (30m)
</td>
<td>
ETHZ-01-01 Global catalogue of co-seismic deformation
</td>
<td>
Geotiff
</td>
<td>
NASA LTA
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
DLR systematic
</td>
<td>
ETHZ-01-02 -
</td>
<td>
SAFE
</td>
<td>
DLR
</td>
<td>
\-
</td>
<td>
~ InSAR
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Sentinel-1 interferograms
</td>
<td>
Exploitation of differential SAR
interferograms
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Browse MedRes output / event ~ 200 MB / event (~ 33 MB / product)
</td>
<td>
</td> </tr>
<tr>
<td>
ShakeMap from USGS
</td>
<td>
ETHZ-01-02 -
Exploitation of
differential SAR
interferograms
</td>
<td>
Geotiff
</td>
<td>
USGS
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Sentinel 1 product for generating interferograms
</td>
<td>
ETHZ-01-03 Automated detection of changes due to earthquakes
</td>
<td>
SAFE
</td>
<td>
Sentinel Open Data Hub
</td>
<td>
\-
</td>
<td>
~ 4 Sentinel1
A/B / event ~ 22.8 GB / event (~ 5.7 GB / product)
</td>
<td>
\-
</td> </tr>
<tr>
<td>
USGS Earthquake catalogue
</td>
<td>
ETHZ-01-03 Automated detection of changes due to earthquakes
</td>
<td>
Geotiff
</td>
<td>
USGS
</td>
<td>
\-
</td>
<td>
\-
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
**Output datasets included in all first cycle challenges**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Name**
</td>
<td>
**Challenges involved**
</td>
<td>
**Data format**
</td>
<td>
**Preferential Data source**
</td>
<td>
**Access restrictions**
</td>
<td>
**Area of interest**
</td>
<td>
**Expected data Volume**
</td>
<td>
**Long-term data preservation**
</td> </tr>
<tr>
<td>
Sentinel-1 SLC image pair, repeat pass 6 or
</td>
<td>
WFP-01-01 -
Hazards and
Change
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be developed in
</td>
<td>
\-
</td>
<td>
AOI 1 NW - South Sudan
POLYGON((26.832 9.5136,
28.6843 9.5136, 28.6843
</td>
<td>
Sentinel-1
Backscatter
Time Series ~
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
12 days
</th>
<th>
detection using Sentinel1 SAR data
</th>
<th>
</th>
<th>
BETTER
</th>
<th>
</th>
<th>
7.8009, 26.832 7.8009, 26.832
9.5136))
AOI 2 Renk - Blue Nile
POLYGON((32.0572 12.4549,
33.9087 12.4549, 33.9087
10.7344, 32.0572 10.7344,
32.0572 12.4549))
AOI 3 Niger Delta, Mali (updated after D2.2 delivered to EC) POLYGON((-5.5
17.26, -1.08
17.26, -1.08 13.5, -5.5 13.5, 5.17 17.26))
AOI 4 NE - Nigeria
POLYGON((12.9415 13.7579,
14.6731 13.7579, 14.6731
12.0093, 12.9415 12.0093,
12.9415 13.7579))
</th>
<th>
210 GB /12 days (~5.27 GB / product) TOTAL SPACE FORESEEN:
~18.5 TB
Sentinel-1
Coherence Time
Series ~ 135 GB
/12 days (~3 GB
/ product)
TOTAL SPACE
FORESEEN ~12
TB
</th>
<th>
</th> </tr>
<tr>
<td>
Sentinel-2 L1C derived NDVI, NDWI, MNDWI,
NDBI Indices
</td>
<td>
WFP-01-02 - Land cover changes and inter-annual vegetation performance
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
AOI 1 Niger
POLYGON((6.4788 14.5973,
7.5577 14.5973, 7.5577
13.6328, 6.4788 13.6328,
6.4788 14.5973))
AOI 2 Tajikistan
POLYGON((67.7116 37.9032,
68.791 37.9032, 68.791
36.9211, 67.7116 36.9211,
67.7116 37.9032))
AOI 3 Mali
POLYGON((-10.3668 15.3471, -
9.3518 15.3471, -9.3518
14.3406, -10.3668 14.3406, -
</td>
<td>
~ 44 GB /10 days (~600 MB / product)
TOTAL SPACE
FORESEEN ~4.7
TB
</td>
<td>
</td> </tr>
<tr>
<td>
Sentinel 2 derived
NDVI, NDWI,
MNDWI, NDBI smoothed indices
</td>
<td>
WFP-01-02 - Land cover changes and inter-annual vegetation performance
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
Landsat-8 L1C derived NDVI,
</td>
<td>
WFP-01-02 - Land cover
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be
</td>
<td>
\-
</td>
<td>
~ 55 GB / 16 days (~3.66 GB /
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
NDWI, MNDWI,
NDBI Indices
</th>
<th>
changes and inter-annual vegetation performance
</th>
<th>
</th>
<th>
developed in BETTER
</th>
<th>
</th>
<th>
10.3668 15.3471))
AOI 4 Afghanistan
POLYGON((67.6243 36.7228,
68.116 36.7228, 68.116
35.6923, 67.6243 35.6923,
67.6243 36.7228))
</th>
<th>
product)
TOTAL SPACE
FORESEEN ~3.7
TB
</th>
<th>
</th> </tr>
<tr>
<th>
Landsat-8 derived
NDVI, NDWI,
MNDWI, NDBI smoothed indices
</th>
<th>
WFP-01-02 - Land cover changes and inter-annual vegetation performance
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<td>
Copernicus (VGTProbaV) 1Km LAI time series smoothed and
gap-filled
</td>
<td>
WFP-01-03 - EO indicators for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
All WFP regions
</td>
<td>
~268 MB / 1 month (~ 67 MB / product)
TOTAL SPACE
FORESEEN ~9.5
GB
</td>
<td>
</td> </tr>
<tr>
<td>
Copernicus (VGTProbaV) 1Km fAPAR time series
smoothed and
gap-filled
</td>
<td>
WFP-01-03 - EO indicators for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
Time series of
Copernicus (VGT-
</td>
<td>
WFP-01-03 -
EO indicators
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be
</td>
<td>
\-
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
ProbaV) LAI temporally aggregated
</th>
<th>
for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
</th>
<th>
developed in BETTER
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Time series of
Copernicus (VGTProbaV) fAPAR temporally aggregated
</th>
<th>
WFP-01-03 - EO indicators for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<th>
Long term averages of Copernicus (VGT-ProbaV) fAPAR (2 indicators * 2
aggregation functions (max and avg) * 9 aggregation time windows = 36 LTA
sets to be generated)
</th>
<th>
WFP-01-03 - EO indicators for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<th>
Time series of
Copernicus (VGT-
</th>
<th>
WFP-01-03 -
EO indicators
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be
</th>
<th>
\-
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
ProbaV) fAPAR anomalies at a
variety of temporal aggregations
</th>
<th>
for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
</th>
<th>
developed in BETTER
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Long term averages of Copernicus (VGT-ProbaV) LAI (2 indicators * 2
aggregation functions (max and avg) * 9 aggregation time windows = 36 LTA
sets to be generated)
</th>
<th>
WFP-01-03 - EO indicators for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<th>
Time series of
Copernicus (VGTProbaV) LAI anomalies at a
variety of temporal aggregations
</th>
<th>
WFP-01-03 - EO indicators for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<th>
Time series of
MODIS MOD11C2
LST temporally
</th>
<th>
WFP-01-03 - EO indicators for global Early
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
</th>
<th>
\-
</th>
<th>
~ 408 MB / 1 month for 3 years
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
aggregated values
(N = 1 (no aggregation), 3, 6,
9, 12, 15, 18, 27,
36)
</th>
<th>
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
</th>
<th>
BETTER
</th>
<th>
</th>
<th>
</th>
<th>
WFP-01-03-01 aggregations (~ 34 MB/ product) ~ 9 MB / 1 month for 3 years
WFP-01-03-02 aggregations (~
1 MB / product)
</th>
<th>
</th> </tr>
<tr>
<th>
Long term averages of MODIS MOD11C2 LST (2 indicators * 2 aggregation
functions (max and avg) * 9 aggregation time windows = 36 LTA
sets to be generated)
</th>
<th>
WFP-01-03 - EO indicators for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<th>
Time series of
MODIS MOD11C2
LST 2000-present anomalies
</th>
<th>
WFP-01-03 - EO indicators for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<th>
Time series of aggregated CHIRPS RFE 1981present (N = 10,
</th>
<th>
WFP-01-03 - EO indicators for global Early Warning,
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in BETTER
</th>
<th>
\-
</th>
<th>
~ 12 MB / 1 month (~2 MB / product)
TOTAL SPACE
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
30, 60, 90, 120,
150, 180, 270,
365 days)
</th>
<th>
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
FORESEEN ~432 MB
</th>
<th>
</th> </tr>
<tr>
<th>
Long term averages of CHIRPS RFE 1981present (2 indicators * 2 aggregation
functions (max and avg) * 9 aggregation time windows = 36 LTA
sets to be generated)
</th>
<th>
WFP-01-03 - EO indicators for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<th>
Time series of anomalies of aggregated CHIRPS RFE 1981present and current
anomalies
</th>
<th>
WFP-01-03 - EO indicators for global Early
Warning,
Seasonal Monitoring and
Climatology
Studies
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<td>
Sentinel-2 A/B
L1C NDVI and
NDWI indices
</td>
<td>
SATCEN-0101 - Thematic
Indexes for
Land Use
Identification
</td>
<td>
DIM
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
AOI 1 Colombia - Cauca-Narino
2°44'38.51"N 78°19'27.30"W, 2°
5'57.77"N 77°14'51.12"W,
0°55'38.70"N 78° 1'49.27"W,
1°34'56.24"N 79° 0'39.68"W
</td>
<td>
~ 6.8 GB / 12 days (~400 MB / product)
TOTAL SPACE
FORESEEN ~620
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
AOI 2 Albania border with
Greece
41° 0'57.81"N 20°44'47.45"E,
40°51'0.67"N 21°17'1.40"E,
39°34'1.65"N 20°22'47.57"E,
39°52'46.66"N 19°53'54.20"E
AOI 3 Afghanistan - Helmand - Nad-e-Ali
73°22'24.90"W
64°44'54.92"E,32° 0'17.86"N 65°
0'7.32"E, 30°25'32.61"N
63°57'57.16"E, 31°31'55.09"N
63°55'9.38"E
AOI 4 Serbia
45°56'55.10"N 18°42'39.05"E,
45°57'44.72"N 18°59'43.44"E,
45° 7'2.00"N 19°40'15.66"E,
44°48'47.50"N 18°55'57.11"E
</th>
<th>
GB
</th>
<th>
</th> </tr>
<tr>
<th>
Co-registered stack of Sentinel2 A/B L1C
</th>
<th>
SATCEN-0101 - Thematic
Indexes for
Land Use
Identification
</th>
<th>
DIM
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
~3.7 TB / YEAR (~10 GB / product)
TOTAL SPACE
FORESEEN ~11.1
TB
</th>
<th>
</th> </tr>
<tr>
<td>
Multi-temporal stack of Sentinel1 A/B 1 SLC
StripMap
</td>
<td>
SATCEN-0102 - Change Detection based on SAR single look complex (SLC) data
</td>
<td>
DIM
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
AOI 1 - 15°18'50.37"N,
8°22'52.84"E, 15°28'49.28"N,
10° 2'15.33"E, 12°17'43.44"N,
10°38'54.40"E, 12° 9'54.62"N, 9°
2'22.33"E
AOI 2 - 23° 8'15.91"N,
12°55'57.41"E, 23°27'11.02"N,
14°28'12.27"E, 19° 6'4.65"N,
15°24'34.73"E, 18°52'42.78"N,
13°50'31.39"E
AOI 3 20° 6'22.43"N,
5°20'58.79"E, 20°13'57.72"N, 6°
3'52.23"E, 18°30'49.60"N,
</td>
<td>
~9.3 TB / YEAR (~ 25 GB / product)
TOTAL SPACE
FORESEEN ~27.9
TB
</td>
<td>
</td> </tr>
<tr>
<td>
Multi-temporal stack of Coherence products derived from Multitemporal stack of
</td>
<td>
SATCEN-0102 - Change Detection based on SAR single look complex (SLC)
</td>
<td>
DIM
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
~3.7 TB / YEAR (~10 GB / product)
TOTAL SPACE
FORESEEN ~11.1
TB
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
Sentinel-1 A/B 1 SLC StripMap
</th>
<th>
data
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
6°25'48.99"E, 18°23'13.00"N,
5°42'55.45"E
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
Sentinel-2 A/B L1C DVI Maps
</td>
<td>
SATCEN-01-
03 - Illicit Crop
Monitoring with Optical data
</td>
<td>
Geotiff
DIM
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
AOI 1 Colombia - Cauca-Narino
2°44'38.51"N 78°19'27.30"W, 2°
5'57.77"N 77°14'51.12"W,
0°55'38.70"N 78° 1'49.27"W,
1°34'56.24"N 79° 0'39.68"W
AOI 2 Albania border with
Greece
41° 0'57.81"N 20°44'47.45"E,
40°51'0.67"N 21°17'1.40"E,
39°34'1.65"N 20°22'47.57"E,
39°52'46.66"N 19°53'54.20"E
AOI 3 Afghanistan - Helmand - Nad-e-Ali
73°22'24.90"W
64°44'54.92"E,32° 0'17.86"N 65°
0'7.32"E, 30°25'32.61"N
63°57'57.16"E, 31°31'55.09"N
63°55'9.38"E
</td>
<td>
~1 TB / 10 days (~5 GB / product) TOTAL SPACE
FORESEEN
~109.5 TB
</td>
<td>
</td> </tr>
<tr>
<td>
Co-registered stack of Sentinel2 A/B L1C
</td>
<td>
SATCEN-01-
03 - Illicit Crop
Monitoring with Optical data
</td>
<td>
Geotiff
DIM
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
Color composites of co-registered stack of Sentinel2 A/B L1C
</td>
<td>
SATCEN-01-
03 - Illicit Crop
Monitoring with Optical data
</td>
<td>
Geotiff
DIM
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
Co-seismic deformation maps from Envisat ASAR dataset
</td>
<td>
ETHZ-01-01 Global catalogue of co-seismic deformation
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
Coordinates of the Earthquake epicenters with magnitude higher than 5 (10 km
box around them)
</td>
<td>
~ 200 MB / event (~ 100 MB/ output set)
</td>
<td>
</td> </tr>
<tr>
<td>
Map of areas with SAR coherence decrease after earthquakes M>5 derived from
Envisat ASAR
</td>
<td>
ETHZ-01-01 Global catalogue of co-seismic deformation
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
dataset
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Composite map including areas with SAR coherence decrease larger than a
defined threshold & coseismic deformation larger than a defined threshold
after earthquakes with M>5 + USGS shakemap derived from Envisat ASAR dataset
</th>
<th>
ETHZ-01-01 Global catalogue of co-seismic deformation
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<th>
Filtered DInSAR interferograms derived from DLR systematic Sentinel-1
interferograms
</th>
<th>
ETHZ-01-02 -
Exploitation of
differential SAR
interferograms
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
~ 200 MB / event (~ 33 MB / product)
TOTAL SPACE
FORESEEN ~ 57
GB
( current 292 events )
</th>
<th>
</th> </tr>
<tr>
<th>
Co-seismic deformation maps derived from Sentinel-1
</th>
<th>
ETHZ-01-03 Automated detection of changes due to earthquakes
</th>
<th>
Geotiff
</th>
<th>
From Data
Pipeline to be developed in
BETTER
</th>
<th>
\-
</th>
<th>
~ 12 GB / event (~ 6 GB/ output set)
</th>
<th>
</th> </tr>
<tr>
<td>
Coherence changes map
(areas with SAR coherence decrease after earthquakes M>5) derived from
Sentinel-1
</td>
<td>
ETHZ-01-03 Automated detection of changes due to earthquakes
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Composite map including areas with SAR coherence decrease larger than a
defined threshold & coseismic deformation larger than a defined threshold
after earthquakes with M>5 + USGS shakemap derived from Sentinel-1
</td>
<td>
ETHZ-01-03 Automated detection of changes due to earthquakes
</td>
<td>
Geotiff
</td>
<td>
From Data
Pipeline to be developed in
BETTER
</td>
<td>
\-
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**Input Datasets included in all second cycle challenges**
</th> </tr>
<tr>
<td>
**Name**
</td>
<td>
**Challenges involved**
</td>
<td>
**Data format**
</td>
<td>
**Preferential Data source**
</td>
<td>
**Access restrictions**
</td>
<td>
**Area of interest**
</td>
<td>
**Expected data**
**Volume**
</td>
<td>
**Long-term data preservation**
</td> </tr>
<tr>
<td>
_LST MODIS MOD11C2 - 5.6 km resolution, 8-day data_
</td>
<td>
WFP-02-01 - MODIS TERRA
Land Surface
Temperature
(LST) -
Aggregations and Anomalies
</td>
<td>
GEOTIFF
</td>
<td>
</td>
<td>
\-
</td>
<td>
Global
**For validation:**
Years 2015, 2016 and 2017 **For production:**
Full archive processing of the data
(2000 - present)
</td>
<td>
\-
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
MODIS Aqua/Terra
Snow Cover 8-Day L3 Global 500m Grid, Version 6.
</th>
<th>
WFP-02-01 -
MODIS
TERRA/AQUA
Snow Cover - Aggregations and Anomalies
</th>
<th>
GEOTIFF
</th>
<th>
</th>
<th>
\-
</th>
<th>
Central Asia - POLYGON((25.0
48.0, 94.0 48.0, 94.0 23.1, 25.0 23.1, 25.0 48.0)) **For validation:**
Years 2015, 2016 and 2017 for winter season (August - July) **For
production:**
Full archive processing of the data
(2000 - present) for winter season
(August - July)
</th>
<th>
\-
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
_Sentinel-2 A/B L1C_
</th>
<th>
WFP-02-03 - Smooth & gap-filled Sentinel-2
</th>
<th>
GeoTiff
</th>
<th>
</th>
<th>
\-
</th>
<th>
**For validation:**
Gambella (S2 tiles: 36PXQ,
36PYQ, 36NXP, 36NYP) Marchfeld (S2 tiles: 33UXP) **For production:**
AOI defined by Analyst triggering the pipeline
</th>
<th>
\-
</th>
<th>
</th> </tr>
<tr>
<td>
Product 1:
* Two Radar images, Image
1 and Image 2 Sensor: Sentinel-1 A/B
* Processing level: L1 GRD
* Acquisition mode:
Interferometric
Wide Mode
* Pass: same pass
for all the stack (e.g.
DESCENDING)
* Orbit : same
orbit throughout the stack
</td>
<td>
SATCEN-02-01
\- Change
Detection and Characterizatio n 2 (SAR
Change
Detection with
GRD data)
</td>
<td>
GEOTIFF
</td>
<td>
</td>
<td>
\-
</td>
<td>
Peru's Madre de Dios region
UL 70.5659 W 12.4567 S
BR 69.1411 W 13.0922 S
**For validation**
Two images, covering 99 % of the area (S1 IW Descending mode) above the area on dates 2018-08-12 and 2018-09-05
Image 1: S1B_IW_GRDH_1SDV_20180812T101414_20180812T101439_012228_01687D_BCA9
Image 2: S1B_IW_GRDH_1SDV_20180905T101415_20180905T101440_012578_017358_3259
**For Production**
Images from April 2019 to October 2019
Frequency: when a new image, acquired in the same interferometric conditions, is available
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
Product 2:
* DEM Model
* DEM image, which should be used for the Terrain Correction (often automatically downloaded by specific software)
* Orbit file, which should be used for Orbital Correction (often automatically downloaded by specific software)
</td>
<td>
GEOTIFF
</td>
<td>
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<td>
Sensor: Sentinel-2 A/B
Processing Level: Level-2A (Level-1B for the test in 2018)
Bands: all 12 spectral bands (in fact, only bands 2, 4, 11 and 12 are required) and the cloud mask
</td>
<td>
SATCEN-02-02 - Thematic Indexes 2 (mineral indexes with Optical data)
</td>
<td>
GEOTIFF
</td>
<td>
</td>
<td>
\-
</td>
<td>
Peru's Madre de Dios region
UL 70.5659 W 12.4567 S BR
69.1411 W 13.0922 S
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
Sensor: Sentinel-2 A/B
Processing Level: Level-2A (Level-1B for the test in 2018)
Bands: all 12 spectral bands (in fact, only bands 11, 8, 4 and 2 are required) and the cloud mask
</td>
<td>
SATCEN-02-03 - Land Use/Land Cover (Illegal deforestation)
</td>
<td>
GEOTIFF
</td>
<td>
</td>
<td>
</td>
<td>
Peru's Madre de Dios region UL 70.5659 W 12.4567 S BR 69.1411 W 13.0922 S
</td>
<td>
\-
</td>
<td>
</td> </tr> </table>
<tr>
<td>
* Weekly report on active volcanoes: _https://volcano.si.edu/reports_weekly.cfm#_
* Sentinel-1 imagery
* SRTM-1 (30 m
</td>
<td>
ETHZ-02-01 - Radar interferometry in active volcanic regions
</td>
<td>
GEOTIFF
</td>
<td>
</td>
<td>
</td>
<td>
Volcanoes with ongoing activity and with new activity (from
WVAR)
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
S1 GRD high resolution imagery
</td>
<td>
ETHZ-02-02 - Large surface displacements measured with feature tracking
</td>
<td>
GEOTIFF
</td>
<td>
</td>
<td>
</td>
<td>
Galapagos Islands (Isabela and
Fernandina); Great Aletsch Area, Switzerland; Slumgullion landslide, Colorado.
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
Wrapped or unwrapped interferogram
</td>
<td>
ETHZ-02-03 - Systematic modeling of
</td>
<td>
GEOTIFF
</td>
<td>
</td>
<td>
</td>
<td>
All volcanoes systematically monitored by DLR with S1
</td>
<td>
\-
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
surface deformation at
active volcanoes
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
Sensor: Sentinel-1
A/B
* Processing level: L1 GRD
* Acquisition mode: IW
* Pass: same pass
for all the stack
(ASCENDING / DESCENDING)
* Orbit : same
orbit throughout the stack
</td>
<td>
EXT-02-01 - Change Detection in Rural Localities for Semi-Automated Border Surveillance Applied to Insecure Areas in Lake Chad Basin
</td>
<td>
GEOTIFF
</td>
<td>
</td>
<td>
</td>
<td>
(ideally areas where there is already ground truth)
**For Validation**
AOI2 (border with Burkina Faso) - 5m
Xmin=1.47, Xmax=2.17
Ymin=12.56, Ymax=13.65
AOI1 (Diffa/Geidam Region - Lake
Chad) - 10m
Xmin=11, Xmax=15
Ymin=12.5, Ymax=15
**For production**
AOI3 (border with Nigeria - region
of Zinder) - 5m
Xmin=8.39, Xmax=10.65
Ymin=12.17, Ymax= 15.49
Polygon corner coordinates
X1=8.94, Y1=12.64
X2=9.86, Y2=12.37
X3=10.54, Y3=12.73
X4=10.50, Y4=15.05
X5=10.06, Y5=15.43
X6=8.40, Y6=15.31
X7=8.94, Y7=12.64
AOI4 (Markoye/Teguey Region) - 10m
Xmin=-0.4, Xmax=1
Ymin=13.5, Ymax=15
AOI5 (Agadez Region) - 10 m
Xmin=4, Xmax=8.5
Ymin=16.8, Ymax=19.45
Polygon corner coordinates
X1=0.76, Y1=11.74
X2=2.46, Y2=12.24
X3=1.99, Y3=14.40
X4=1.02, Y4=13.97
X5=0.76, Y5=11.74
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
DEM Model. DEM image should be used for the Terrain Correction (often
automatically downloaded by specific software). Orbit file should be used for
Orbital Correction (often automatically downloaded by specific software)
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
<tr>
<td>
Sentinel 1 (A or B)
Mode: IW
(Interferometric
Wide Swath)
Product Level: GRD
Polarization: VV or
VV+VH
</td>
<td>
EXT-02-02 - Satellite observations of oil sheen of natural oil seepage in
Disko Bay,
West
Greenland
</td>
<td>
GEOTIFF
</td>
<td>
Sentinel Open Data Hub
</td>
<td>
</td>
<td>
Location of known seepage in
Disko – Nuussuq – Svartenhuk Halvø region along the West coast of Greenland.
Map from Bojesen-Koefoed et al., 2007. Petroleum seepages at Asuk, Disko, West Greenland: Implications for regional petroleum exploration. Journal of Petroleum Geology 30, 219–236. doi:10.1111/j.1747-5457.2007.00219.x
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Landsat-7 and -8 Level 2A
(atmospherically corrected, includes cloud and cloud shadow mask)
</td>
<td>
EXT-02-03 -
Crop Loss Detection using NDVI anomalies
</td>
<td>
GEOTIFF
</td>
<td>
EarthExplorer
</td>
<td>
</td>
<td>
**For validation:** one of the events identified by Fidelidade during the reference time period for the parcels affected by that event **For production:**
All events/parcel coordinates in
Portugal to be provided by
Fidelidade
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
L1T ASTER, L1T
LANDSAT8 (TIRS), L1 SLSTR Sentinel 3, L1C Sentinel2.
</th>
<th>
EXT-02-04 - Surface temperature map evolution
</th>
<th>
GEOTIFF
</th>
<th>
EarthExplorer
</th>
<th>
</th>
<th>
Etna, Vesuvio, Campi Flegrei, Stromboli
</th>
<th>
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
**Output Datasets included in all second cycle challenges**
</th> </tr>
<tr>
<td>
**Name**
</td>
<td>
**Challenges involved**
</td>
<td>
**Data format**
</td>
<td>
</td>
<td>
**Preferential Data source**
</td>
<td>
**Access restrictions**
</td>
<td>
**Area of interest**
</td>
<td>
**Expected data**
**Volume**
</td>
<td>
**long-term data preservation**
</td> </tr>
<tr>
<td>
Time series of
MODIS MOD11C2 LST 2000-present, gap-filled, smoothed and interpolated to 10 days
</td>
<td>
WFP-02-01 - MODIS TERRA
Land Surface
Temperature
(LST) -
Aggregations and Anomalies
</td>
<td>
Geotiff
</td>
<td>
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
Global
**For validation:**
Years 2015, 2016 and 2017 **For production:**
Full archive processing of the data
(2000 - present)
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
Time series of LST temporally aggregated values
</td>
<td>
Geotiff
</td>
<td>
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
Land Surface
Temperature Time Series products for a reference number of years
</td>
<td>
Geotiff
</td>
<td>
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Time series of LST anomalies
</td>
<td>
Geotiff
</td>
<td>
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<td>
Snow cover characterization time series (start date = 8 consecutive snow days; finish date = 8 consecutive no-snow days)
</td>
<td>
WFP-02-01 -
MODIS
TERRA/AQUA
Snow Cover - Aggregations and Anomalies
</td>
<td>
Geotiff
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
Central Asia - POLYGON((25.0 48.0, 94.0 48.0, 94.0 23.1, 25.0 23.1, 25.0 48.0)) **For validation:**
Years 2015, 2016 and 2017 for winter season (August - July) **For
production:**
Full archive processing of the data
(2000 - present) for winter season
(August - July)
</td>
<td>
\-
</td>
<td>
</td> </tr>
<tr>
<td>
Long Term snow season characterization
</td>
<td>
Geotiff
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Difference between annual snow cover characterization maps and the reference
snow cover climatology
</td>
<td>
Geotiff
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Time series of temporally smoothed and gap-filled Sentinel-2 NDVIs
</td>
<td>
WFP-02-03 - Smooth & gap-filled Sentinel-2
</td>
<td>
Geotiff
</td>
<td>
\-
</td>
<td>
</td>
<td>
**For validation:**
Gambella (S2 tiles: 36PXQ,
36PYQ, 36NXP, 36NYP) Marchfeld (S2 tiles: 33UXP) **For production:**
AOI defined by Analyst triggering the pipeline
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Time series of temporally smoothed and gap-filled Sentinel-2 reflectances
</td>
<td>
Geotiff
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Raster with three bands: the two intensities (after the pre-processing) and the map of changes.
</td>
<td>
SATCEN-02-01 - Change Detection and Characterization 2 (SAR Change Detection with GRD data)
</td>
<td>
Geotiff
</td>
<td>
\-
</td>
<td>
</td>
<td>
Peru's Madre de Dios region
UL 70.5659 W 12.4567 S
BR 69.1411 W 13.0922 S
**For validation**
Two images, covering 99 % of the area (S1 IW Descending mode) above the area on dates 2018-08-12 and 2018-09-05
Image 1: S1B_IW_GRDH_1SDV_20180812T101414_20180812T101439_012228_01687D_BCA9
Image 2: S1B_IW_GRDH_1SDV_20180905T101415_20180905T101440_012578_017358_3259
**For Production**
Images from April 2019 to October 2019
Frequency: when a new image, acquired in the same interferometric conditions, is available
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<td>
RGB time series of the indexes above and cloud mask
</td>
<td>
SATCEN-02-02 - Thematic Indexes 2 (mineral indexes with Optical data)
</td>
<td>
GEOTIFF
</td>
<td>
\-
</td>
<td>
</td>
<td>
Peru's Madre de Dios region
UL 70.5659 W 12.4567 S BR
69.1411 W 13.0922 S
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
NDVI, Bare Soil Index, vegetation mask and cloud mask
</td>
<td>
SATCEN-02-03 - Land Use/Land Cover (Illegal deforestation)
</td>
<td>
GEOTIFF
</td>
<td>
\-
</td>
<td>
</td>
<td>
Peru's Madre de Dios region UL 70.5659 W 12.4567 S BR 69.1411 W 13.0922 S
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<td>
Surface displacements
</td>
<td>
ETHZ-02-01 - Radar interferometry in active volcanic regions
</td>
<td>
GEOTIFF
</td>
<td>
\-
</td>
<td>
</td>
<td>
Volcanoes with ongoing activity
and with new activity (from
WVAR)
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Surface displacements
</td>
<td>
ETHZ-02-02 - Large surface displacements measured with feature tracking
</td>
<td>
GEOTIFF
</td>
<td>
\-
</td>
<td>
</td>
<td>
Galapagos Islands (Isabela and
Fernandina); Great Aletsch Area, Switzerland; Slumgullion landslide, Colorado.
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Surface displacement
(synthetic) in LOS
</td>
<td>
ETHZ-02-03 - Systematic modeling of surface deformation at
active volcanoes
</td>
<td>
GEOTIFF
</td>
<td>
\-
</td>
<td>
</td>
<td>
All volcanoes systematically monitored by DLR with S1
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Difference between surface displacement (synthetic) and measured (InSAR) in LOS
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Change Detection
Map at 5m (Vector with polygons covering the changes between two periods and one attribute: No change = 0; Change = 1)
</td>
<td>
</td>
<td>
</td>
<td>
\-
</td>
<td>
</td>
<td>
(ideally areas where there is already ground truth)
**For Validation**
AOI2 (border with Burkina Faso) - 5m
Xmin=1.47, Xmax=2.17
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
EXT-02-01 - Change Detection in Rural Localities for Semi-Automated Border Surveillance Applied to Insecure Areas in Lake Chad Basin
</th>
<th>
SHP
</th>
<th>
</th>
<th>
</th>
<th>
Ymin=12.56, Ymax=13.65
AOI1 (Diffa/Geidam Region - Lake
Chad) - 10m
Xmin=11, Xmax=15
Ymin=12.5, Ymax=15
**For production**
AOI3 (border with Nigeria - region
of Zinder) - 5m
Xmin=8.39, Xmax=10.65
Ymin=12.17, Ymax= 15.49
Polygon corner coordinates
X1=8.94, Y1=12.64
X2=9.86, Y2=12.37
X3=10.54, Y3=12.73
X4=10.50, Y4=15.05
X5=10.06, Y5=15.43
X6=8.40, Y6=15.31
X7=8.94, Y7=12.64
AOI4 (Markoye/Teguey Region) - 10m
Xmin=-0.4, Xmax=1 Ymin=13.5, Ymax=15.
AOI5 (Agadez Region) - 10 m
Xmin=4, Xmax=8.5
Ymin = 16.8, Ymax=19.45
Polygon corner coordinates
X1=0.76, Y1=11.74
X2=2.46, Y2=12.24
X3=1.99, Y3=14.40
X4=1.02, Y4=13.97
X5=0.76, Y5=11.74
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Change Detection
Map at 10 m (Vector with polygons covering the changes between two periods and
one attribute: No change = 0;
Change =1)
</th>
<th>
\-
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
Oil sheen location map
(0-1 data with
locations of oil sheen)
</th>
<th>
EXT-02-02 - Satellite observations of oil sheen of natural oil seepage in
Disko Bay,
West
Greenland
</th>
<th>
GEOTIFF
</th>
<th>
\-
</th>
<th>
</th>
<th>
Location of known seepage in
Disko – Nuussuq – Svartenhuk Halvø region along the West coast of Greenland.
Map from Bojesen-Koefoed et al., 2007. Petroleum seepages at Asuk, Disko, West Greenland: Implications for regional petroleum exploration. Journal of Petroleum Geology 30, 219–236. doi:10.1111/j.1747-5457.2007.00219.x
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
NDVI values
</td>
<td>
EXT-02-03 -
Crop Loss Detection using NDVI anomalies
</td>
<td>
GEOTIFF
</td>
<td>
\-
</td>
<td>
</td>
<td>
**For validation:** one of the events identified by Fidelidade during the reference time period for the parcels affected by that event **For production:** All events/parcel coordinates in Portugal to be provided by Fidelidade
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
NDVI statistics values for entire growing season for each year
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
NDVI statistics values for entire growing season for each year
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
NDVI statistics values for entire growing season for each year
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
NDVI statistics values for entire growing season for each year
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Long term averages of NDVI statistics values for entire growing season for each year
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
NDVI statistics values for entire growing season for each year
</th>
<th>
\-
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Long term averages of NDVI statistics values for entire growing season for
each year
</th>
<th>
\-
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
NDVI statistics values for entire growing season for each year
</th>
<th>
\-
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Long term averages of NDVI statistics values for entire growing season for
each year
</th>
<th>
\-
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
NDVI statistics values for entire growing season for each year
</th>
<th>
\-
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Long term averages of NDVI statistics values for entire growing season for each year
</th>
<th>
\-
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
NDVI statistics values for entire growing season for each year
</th>
<th>
\-
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
Delineation map
</td>
<td>
EXT-02-04 - Surface temperature map evolution
</td>
<td>
GEOTIFF
</td>
<td>
\-
</td>
<td>
</td>
<td>
Etna, Vesuvio, Campi Flegrei, Stromboli
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Temperature map
(in °C)
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## 2.2 Catalogue and Metadata of the BETTER data pipelines

The systematic processing of a BETTER data pipeline is supported by a tracking catalogue index and its associated tracking series. A BETTER pipeline has an index in the catalog. This concept, applied to the World Food Programme data processor for the WFP-01-01-01 Sentinel-1 backscatter timeseries, gives us the index _https://catalog.terradue.com/better-wfp-00001_.

This index has five series, accessible at the URL _https://catalog.terradue.com/better-wfp-00001/series/search_:

* source-queue: a series for tracking the identifiers in the queue
* source-in: a series for tracking the systematic processing where the identifiers being processed are inserted
* source-out: a series for tracking the systematic processing where the identifiers successfully processed are inserted
* source-err: a series for tracking the systematic processing where the identifiers not successfully processed are inserted
* results: a series for the thematic data where the generated products metadata and enclosures are stored

An entry in the tracking index (and associated series) is an OWS Context document that contains at least:

* An identifier (the identifier of the input product)
* A title
* A date/time coverage
* A spatial coverage
* A via link pointing to the input resource (required for recovery scenarios)
* A published date (the date of the source-in stage)
* A category expressing the processing stage (source-queue, source-in, source-out, source-err)
* The generator as version of the processing
* The output opensearch offerings limited by the geo:uid element
* The processing offering
* The URLs for the WPS GetCapabilities and DescribeProcess GET requests
* The URL for the WPS GetStatus GET request
* The URL for the WPS Execute POST request
* The WPS POST request

The model can be extended with elements from the OGC EO Metadata profile of O&M (a.k.a. EOP) [EOP_OM] or the ISO-19115 metadata model.
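To make the retrieval pattern concrete, the following minimal sketch lists the entries of one tracking series. It is illustrative only: the catalogue URL and series names come from the text above, while the `count` query parameter and the Atom layout of the response are assumptions based on the OpenSearch conventions the catalogue declares.

```python
# Illustrative sketch: list the entries of a tracking series of better-wfp-00001.
# Assumes the series search endpoint returns an Atom feed (OpenSearch).
import urllib.request
import xml.etree.ElementTree as ET

CATALOG = "https://catalog.terradue.com/better-wfp-00001"
ATOM = {"a": "http://www.w3.org/2005/Atom"}

def list_entries(stage="source-out", count=10):
    """Fetch one page of entries (OWS Context documents) for a given stage."""
    url = f"{CATALOG}/series/{stage}/search?count={count}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    # Each <entry> carries at least the identifier and title of one product.
    return [(entry.findtext("a:id", namespaces=ATOM),
             entry.findtext("a:title", namespaces=ATOM))
            for entry in feed.findall("a:entry", ATOM)]

for identifier, title in list_entries():
    print(identifier, "-", title)
```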
# FAIR data
## Making data findable, including provisions for metadata
As introduced in the section above, the catalogue containing the BETTER data
pipeline uses the OGC® OpenSearch Geo and Time extensions standard as a
possible metadata model to provide discovery functions to all the elements of
a data pipeline. The baseline of the standard is the OpenSearch v1.1
specification (_A9.com_, Inc, n.d.) [OS]. The Geo and Time extensions provide the main queryables to manage geospatial data products, and the specification is standardised at the Open Geospatial Consortium (OGC, n.d.).
The data generated by the BETTER data processing pipelines has an entry in a data catalogue that includes metadata with at least the geo and time attributes. This minimum set of metadata ensures the data can be discovered using geographical and time search criteria.
The discoverability of data generated by the BETTER data processing pipelines
is guaranteed with the data catalogue OpenSearch search engine that exposes an
OpenSearch Description document (OSDD) describing the search template and
returns several search output formats such as Atom or JSON. The response
includes the enclosure to the data file (or files).
The OSDD will have a DOI assigned using the Zenodo platform; the DOI is associated with the results of a data pipeline. The process requires uploading the OSDD XML document to Zenodo and adding all the content needed for the **dataset landing page**. The DOI will thus resolve to an item in Zenodo that contains the OSDD XML document and the description of the dataset.
The landing page will include the following:
* Information necessary to create citations in various citation standards (i.e. title, authors, date created, etc.)
* Provisions for machine readability
* Long-term preservation - if the data is no longer available, the landing page should remain to explain why
* Links to the data - where possible, these should be links to the data itself, but for extremely large datasets it may also be contact information to coordinate having the data sent on physical media
The landing page will be hosted by Zenodo.
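A minimal sketch of that deposition flow is given below, assuming the public Zenodo REST deposition API; the token, file name and metadata values are placeholders rather than project settings.

```python
# Sketch: deposit the OSDD XML on Zenodo so the minted DOI resolves to a
# landing page holding the document and the dataset description.
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = {"access_token": "<personal-access-token>"}  # placeholder

# 1. Create an empty deposition.
dep = requests.post(f"{ZENODO}/deposit/depositions", params=TOKEN, json={}).json()

# 2. Attach the OSDD XML document (file name is illustrative).
with open("description.xml", "rb") as fh:
    requests.post(f"{ZENODO}/deposit/depositions/{dep['id']}/files",
                  params=TOKEN, data={"name": "description.xml"},
                  files={"file": fh})

# 3. Add the landing-page metadata: title, authors, description, ...
meta = {"metadata": {"title": "BETTER data pipeline results (OSDD)",
                     "upload_type": "dataset",
                     "description": "OpenSearch Description document of the pipeline results.",
                     "creators": [{"name": "BETTER consortium"}]}}
requests.put(f"{ZENODO}/deposit/depositions/{dep['id']}", params=TOKEN, json=meta)

# 4. Publish: Zenodo mints the DOI shown on the landing page.
requests.post(f"{ZENODO}/deposit/depositions/{dep['id']}/actions/publish", params=TOKEN)
```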
According to the selected standard baseline and extensions, the data catalogue
queryable elements are discovered by requesting the OpenSearch description
document containing the URL templates and response formats.
For each of the associated data pipeline series, the OpenSearch description
document can be retrieved with the URL:
_https://catalog.terradue.com/<data pipeline identifier>/series/<stage>/description_
Where:
* <data pipeline identifier> is the data pipeline unique identifier that can be traced in the BETTER deliverable D3.1, e.g. better-wfp-00001 for the WFP-01-01-01 Sentinel-1 backscatter timeseries data pipeline.
* <stage> is one of:
  * source-queue: to search the messages in the data pipeline queue
  * source-in: to search the messages of the data pipeline being processed
  * source-out: to search the messages of the data pipeline successfully processed
  * source-err: to search the messages of the data pipeline not successfully processed
  * results: to search for the products generated by the data pipeline

The sketch below expands this URL template for each stage of the example pipeline.
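As a concrete illustration, the following lines expand the template for every stage; the pipeline identifier is the example cited from deliverable D3.1, and nothing beyond the template itself is assumed.

```python
# Expanding the OSDD URL template for each stage of better-wfp-00001.
STAGES = ("source-queue", "source-in", "source-out", "source-err", "results")

def osdd_url(pipeline_id: str, stage: str) -> str:
    # Template from this section: /<data pipeline identifier>/series/<stage>/description
    return f"https://catalog.terradue.com/{pipeline_id}/series/{stage}/description"

for stage in STAGES:
    print(osdd_url("better-wfp-00001", stage))
```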
The policy for the discovery functions is open.
The OpenSearch description document obtained from the previous URL contains the list of parameters for querying the metadata, such as:
* start: start of the temporal interval (RFC-3339)
* stop: stop of the temporal interval (RFC-3339)
* bbox: rectangular bounding box

A minimal geo-time query using these parameters is sketched below.
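The sketch builds a query from the three queryables just listed. The endpoint path and the comma-separated west,south,east,north bbox encoding are assumptions following the OpenSearch Geo and Time extensions; the AOI values are only an example taken from section 2.1.

```python
# A minimal geo-time query against the 'results' series of a pipeline.
import urllib.parse
import urllib.request

def search_results(pipeline_id: str, start: str, stop: str, bbox: tuple) -> bytes:
    params = urllib.parse.urlencode({
        "start": start,                    # RFC-3339 start of interval
        "stop": stop,                      # RFC-3339 stop of interval
        "bbox": ",".join(map(str, bbox)),  # west,south,east,north
    })
    url = f"https://catalog.terradue.com/{pipeline_id}/series/results/search?{params}"
    with urllib.request.urlopen(url) as resp:
        return resp.read()                 # Atom (or JSON) search response

# Example: products over Peru's Madre de Dios region.
feed = search_results("better-wfp-00001",
                      "2018-08-01T00:00:00Z", "2018-09-30T00:00:00Z",
                      (-70.5659, -13.0922, -69.1411, -12.4567))
```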
## Making data openly accessible
The Terradue Ellip Cloud Platform is the privileged storage and cataloguing resource for managing the datasets produced by the BETTER data pipelines and described above in the Data summary section. In principle, all these datasets will be made openly accessible unless access or size restrictions are declared. As mandated, the datasets used in publications will be openly accessible and available via OpenAIRE.
As declared in the previous section, the discovery of a given BETTER data pipeline is open and accessible via the OpenSearch mechanism (available on the Web). Data access requires software tools able to issue GET requests to the data enclosure pointing to the storage where the data is hosted; it can be done with the same set of tools as the data discovery.
The metadata associated with the data generated by the BETTER data
processing pipelines is deposited on the data catalog accessible at the URL
_https://catalog.terradue.com._ One must know the BETTER data processing
pipeline catalog index to access the OSDD URL.
The data generated by the BETTER data processing pipelines is hosted on the data storage accessible at the URL _https://store.terradue.com_. The metadata associated with the data generated by the BETTER data processing pipelines contains the enclosure; a download sketch following that enclosure link is given below.
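The sketch illustrates the two-step pattern just described: discover via the catalogue, then issue a plain GET on the enclosure link pointing into the store. The Atom layout of the response is an assumption; the store URL is the one given in the text.

```python
# Download the first enclosure found in an Atom search response.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def download_first_enclosure(atom_bytes: bytes, out_path: str):
    feed = ET.fromstring(atom_bytes)
    for link in feed.iter(f"{ATOM}link"):
        # The catalogue metadata carries the data file as rel="enclosure",
        # pointing into https://store.terradue.com.
        if link.get("rel") == "enclosure":
            with urllib.request.urlopen(link.get("href")) as src, \
                 open(out_path, "wb") as dst:
                dst.write(src.read())
            return link.get("href")
    return None  # no enclosure in this page of results
```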
Access to the results of a given BETTER data pipeline may:
* Be subject to an authentication/authorization process according to an agreed policy with the BETTER Challenge Promoter (e.g. embargo period before full public release).
* Be open for download without any authentication or authorization
Even when the results are subject to an authentication process, there will be a registration process for obtaining user credentials free of charge. If any access policy is applied to a dataset, it is described in the 'access restrictions' column of the table in section 2.1.
## Making data interoperable
As introduced above, the results produced by the BETTER data pipelines are discoverable using OpenSearch [OS]. The response to an OpenSearch query is a list of records, each containing an OWS Context document [OWS_C]. The contents of the OWS Context document are described in section 2.2 and can be extended with elements from the OGC EO Metadata profile of O&M [EOP_OM].
As can be seen in the Format column of the table in section 2.1, the results themselves are generally formatted in well-known formats: GeoTIFF, NetCDF, DIM or SAFE. Most of these formats are supported by standard tools, as illustrated below. Furthermore, the outputs of a data pipeline can be used as an input to another data pipeline.
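As a small example of that tool support, a GeoTIFF result opens directly in any GDAL-based package; the sketch below uses the rasterio package and a placeholder file name, neither of which is prescribed by the project.

```python
# Opening a GeoTIFF result with a standard geospatial tool (rasterio).
import rasterio

with rasterio.open("better_pipeline_result.tif") as src:  # placeholder name
    print(src.crs)     # coordinate reference system
    print(src.bounds)  # spatial coverage
    print(src.count)   # number of bands
```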
## Increase data re-use (through clarifying licences)
The data is made available and catalogued as soon as the data pipeline generates it.
The owner of the data will be the BETTER Challenge Promoter of the
corresponding challenge. They are responsible for setting the right policies
of use of the generated data. If a data usage license is applied to any data, it should be compatible with the FAIR data principles. At the moment, the BETTER Promoters have not defined any license for the re-use of the generated data. If licenses are identified in the future, they will be reported in the 'access restrictions' column in the updates of the table in section 2.1.
The data retention policy for the BETTER data pipeline results is to agree and define with the BETTER Challenge Promoter a period of time (retention time) during which the data is stored and replicated on the Ellip platform. After that period, the results are eliminated in a FIFO approach (a toy sketch of this rule is given below). At the moment, we have not yet defined measures for the long-term preservation of the data generated in the project, such as publishing the data on other platforms or repositories; the data on the Ellip platform may therefore not persist even for the duration of the project. The long-term preservation of the data will be addressed in the updates of the table in section 2.1.
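The following toy sketch, not project code, only restates that FIFO rule: results older than the agreed retention time leave the platform oldest-first. The 90-day value is a placeholder, since the actual retention time is agreed per Challenge Promoter.

```python
# Toy illustration of the FIFO retention policy described above.
from collections import deque
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # placeholder; agreed with each Challenge Promoter

def purge(results: deque, now: datetime) -> None:
    """results holds (published_date, product_ref) pairs in publication order."""
    while results and now - results[0][0] > RETENTION:
        results.popleft()  # oldest result is eliminated first

results = deque([(datetime(2019, 1, 1, tzinfo=timezone.utc), "product-A"),
                 (datetime(2019, 5, 1, tzinfo=timezone.utc), "product-B")])
purge(results, datetime(2019, 6, 1, tzinfo=timezone.utc))  # drops product-A
```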
# Other issues
## Allocation of resources
The cost of making data FAIR is covered by the tasks in WP3 of the project. The Ellip Platform, provided in Task 3.1, supports the OpenSearch catalogue for the data pipelines. The data ingestion modules of the data pipelines are developed in Task 3.3; they will format the input and output data, adding the associated metadata for cataloguing. Some effort is allocated to the Challenge Promoters to define the access and use policy of the data generated by the data pipelines.
The resources for long-term preservation have not been discussed yet; they will be addressed in the next version of this document.
## Data security
The Ellip platform ensures the safety of the data during the configured
retention time. The data is protected at different levels of the platform
storage system:
* At the service level, the storage is of a “distributed-replicated” type, meaning that volumes distribute files across replicated servers for the same volume. When replicating, servers are organised in pairs; it is therefore possible to lose up to one entire server in each pair of servers constituting the storage service.
* At the server level, the storage is configured with RAID 6 using a hardware RAID controller and 2 hot-spare disks. This configuration ensures double-parity RAID and is thus resilient to up to 4 disk failures within the RAID set before any data is lost (see the arithmetic sketch below).
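For context, the arithmetic behind that claim can be sketched as follows: RAID 6 stores two parity blocks per stripe, so the set survives any two concurrent disk failures, and the two hot-spare disks allow up to four failures in sequence provided each rebuild completes in time. The disk count and size below are placeholders, not the platform's actual configuration.

```python
# Back-of-the-envelope RAID 6 capacity and tolerance (placeholder values).
def raid6_usable_tb(disks: int, disk_tb: float) -> float:
    # Two disks' worth of capacity hold the double parity.
    return (disks - 2) * disk_tb

CONCURRENT_FAILURES = 2          # RAID 6 double parity
SEQUENTIAL_FAILURES = 2 + 2      # plus the 2 hot spares, rebuilds permitting

print(raid6_usable_tb(12, 8.0))  # e.g. 12 x 8 TB disks -> 80.0 TB usable
```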
For the long-term preservation of the data, as said in the previous sections, the measures to take with the data are still to be defined.
**END OF DOCUMENT**
# EXECUTIVE SUMMARY
This deliverable shows how quality, innovation and data management aspects are
considered in a variety of processes and activities within the SEMIoTICS
project.
The interrelated quality/innovation/data management processes namely quality
management, quality control and quality assurance, innovation management, and
data management have impact on the project work from the requirements &
architectural definition to the project’s implementation in 3 different usage
scenarios.
In chapter 3, Quality Management Plan refers to reporting procedures, the
definition of roles and responsibilities, quality control & assurance policies
for deliverables and publications with a well-defined internal review process
and implementation throughout the project’s duration.
In chapter 4, Innovation Management Plan provides detailed plan for activities
and processes for identifying internal and external opportunities for and
realising innovation.
In chapter 5, Data Management Plan provides details about how to manage the
data generated within technical work-packages within the consortium. The
processes and criteria described in this document may also be updated if
additional needs arise during the execution of the project in regular WP1
deliverables (Yearly project reports and project plan updates).
# INTRODUCTION
Global networks like IoT create an enormous potential for new generations of
IoT applications, by leveraging synergies arising through the convergence of
consumer, business and industrial Internet, and creating open, global networks
connecting people, data, and “things”. A series of innovations across the IoT
landscape have converged to make IoT products, platforms and devices
technically and economically feasible. However, despite these advancements the
realization of the IoT potential requires overcoming significant business and
technical hurdles, e.g. Dynamicity, Scalability, Heterogeneity, End-to-end
Security and Privacy.
To address these challenges, SEMIoTICS aims “ _To develop a pattern-driven
framework, built upon existing IoT platforms, to enable and guarantee secure
and dependable actuation and semi-autonomic behaviour in IoT/IIoT
applications. The SEMIoTICS framework will support cross-layer intelligent
dynamic adaptation, including heterogeneous smart objects, networks and
clouds. To address the complexity and scalability needs within horizontal and
vertical domains, SEMIoTICS will develop and integrate smart programmable
networking and semantic interoperability mechanisms. The above will be
validated by industry, using three diverse usage scenarios in the areas of
renewable energy, healthcare, and smart sensing and will be offered through an
open_
_API._ ”
An essential measure to reach the highly ambitious and challenging aim and
subsequent objectives of SEMIoTICS project is to set the appropriate quality
management processes and criteria. These processes and criteria are described
in this document in order to successfully ensure high quality project
outcomes.
This deliverable is addressed to any interested public reader. It will be
practically useful for the consortium members who can use it as a basis for
the general management of all project activities. Guidelines/criteria given in
this deliverable (Chapter 3) will ensure that all kinds of project reports and
publications will follow high quality standards and will give all consortium
members a comprehensive overview on the SEMIoTICS procedures for publication
and reporting. Chapter 4 provides details about Innovation Management plan and
project’s different tasks which will lead to innovation and its assessment.
Chapter 5 gives details about the project's Data Management Plan (DMP), which is for the consortium's internal usage.
This document refers to:
1. the Description of Action (part B) [1]
2. the Consortium Agreement [2]
3. the Grant Agreement [3]
# QUALITY MANAGEMENT PLAN (QMP)
## Roles and Responsibilities
Roles and responsibilities for maintaining and updating deliverables/plans are
linked to roles within SEMIoTICS. In case new personnel is assigned to a
relevant role, responsibilities with respect to previously assigned tasks are
also taken over.
The project management roles and the structure of SEMIoTICS are described, and
the following table gives a quick reference to the detailed description of
each role:
<table>
<tr>
<th>
Project Coordinator (PC)
</th>
<th>
The Coordinator will consolidate the input and will do the continuous
reporting (online).
</th> </tr>
<tr>
<td>
Technical Project Manager (TPM)
</td>
<td>
TPM, also known as Scientific, Technical and Innovation Project Manager, is responsible for the technical and innovation management, and transparently communicates innovation management related issues within PCC and PTC.
</td> </tr>
<tr>
<td>
Project Coordination Committee (PCC)
</td>
<td>
PCC approval is required for all disclosure of confidential project results outside the consortium. PCC and PTC are responsible for taking appropriate actions according to the rules on innovation management and intellectual property creation. The PCC comprises all the named personnel in the CA [2] of SEMIoTICS.
</td> </tr>
<tr>
<td>
Project Technical Committee (PTC)
</td>
<td>
PTC has a strong focus on the development and protection of intellectual property, and its efforts will be supported by the TPM and PCC in order to create a solid base for industrial and commercial exploitation. The PTC comprises all the WP leads of SEMIoTICS.
</td> </tr>
<tr>
<td>
Work Package Leaders
</td>
<td>
Work package leaders are responsible for quality control measures within their
work package and will monitor that this quality management plan is followed.
</td> </tr>
<tr>
<td>
Task Leaders
</td>
<td>
Task leaders have to give work package leaders support in effectively monitoring the QMP implementation. Work package leaders are responsible for reporting incidents of the QMP not being followed to the PTC.
</td> </tr>
<tr>
<td>
Reviewer
</td>
<td>
A reviewer is assigned by the WP Leader/PTC and reviews the deliverable/other material based on the respective quality criteria.
</td> </tr>
<tr>
<td>
Advisory Committee (AC)
</td>
<td>
The Advisory Committee (AC) consists of relevant external stakeholders from research, academia and industry. The AC will follow the project development and will provide the necessary feedback to ensure that the scientific and technological evolution of the project is in the direction of fulfilling its goals, providing an external global viewpoint.
</td> </tr> </table>
_**TABLE 1: PROJECT MANAGEMENT ROLES** _
## Management Bodies
SEMIoTICS has a lean management structure supporting an effective project
execution including the tasks and activities related to innovation and
innovation management.
A clear definition of roles is also given there and can be summarized as
follows:
* During project execution, the TPM will be responsible of driving the technical and innovation management and transparently communicate innovation management related issues within PCC and PTC.
* PCC and PTC are responsible for taking appropriate actions according to the rules on innovation management and intellectual property creation.
* Partners of SEMIoTICS have departments for maintaining the intellectual property portfolio and existing interfaces of consortium partners will be used to protect intellectual property developed within the project according to local law
* SEMIoTICS consortium partners have experience in the collaboration of their own legal / IP departments and projects and existing processes will support creation of intellectual property within the project (e.g., use of tools like internal/external patent databases).
* The PTC will have a strong focus on development and protection of intellectual property and its efforts will be supported by TPM and PCC in order to create a solid base for industrial and commercial exploitation.
* Different exploitation strategies were already discussed during the proposal phase, and Tasks 6.2 and 6.1 have allocated resources for work on exploitation and impact creation.
* PCC approval is required for all disclosure of confidential project results outside the consortium and decisions will be taken according to this deliverable.
* SEMIoTICS has an interface established to interact with other projects within IOT-EPI programme in order to drive the collaborative exercise of all partners.
* Consortium partners contribute expertise on business, technologies, application domains, and research, which enables innovation aligned to the business activities of the partners and thus will lead to either the development of a product, a service, or future research.
* Legal aspects of innovation, intellectual property being created within the project, joint ownership of results (if applicable), joint exploitation strategies, and all related confidentiality issues are clarified in the consortium agreement [2].
## Quality Management
### REPORTING PROCEDURES
All project reporting procedures have to follow the terms and conditions as
stated in the Grant Agreement. In general, SEMIoTICS facilitates quality
management by regular project reporting of all partners being used as input
for the project reports for the EC and the Project Officer. SEMIoTICS will use
continuous reporting to the EC via the web-based project management portal.
Therefore, WP leads have to give short reports on WP related activities and
achievements to the Coordinators at the end of each quarter (March, June,
September, and December). The Coordinator will consolidate the input and will
do the continuous reporting (online). Reporting includes progress report
(against baseline), achievements, resources, and risks. Content for the
reporting deliverables, namely yearly reports (D1.3, D1.4, and D1.5) is
created based on information from continuous reporting, as well as specific
information on closed, active and upcoming WPs directly given by corresponding
WP leads.
Detailed technical content and detailed progress information of each WP is
reported from Task leads towards WP lead and to Project Technical Committee
(PTC) via WP lead.
Monthly PTC meetings/calls are used to review progress, review/update the risk
register, updates of DMP, and dissemination plan, inter-WP collaboration, as
well as for reporting towards PCC. PTC call minutes will be forwarded to PCC
within 5 working days after each PTC call.
### QUALITY CONTROL – GENERAL
Work package leaders are responsible for quality control measures within their
work package and will monitor that this quality management plan is followed.
Task leaders have to give them support in effectively monitoring the QMP
implementation. Work package leaders are responsible for reporting incidents of the QMP not being followed to the PTC. The PTC will decide on mitigation actions,
if possible. In case mitigation is not possible through PTC, PTC will inform
PCC for further actions.
Detailed quality control measures for different types of results are described
in the following sections.
## Quality control for publications
### RULES FOR PUBLICATION
The following procedure ensures a high quality of joint publications related
to SEMIoTICS and takes care that the IPR of other parties is not infringed. Timing
is aligned so that each project partner can also complete mandatory internal
approval procedures.
<table>
<tr>
<th>
At least 4 weeks before deadline
</th>
<th>
Venue is registered in the Dissemination Plan. Author will send notification
to Task 6.1 lead and PTC will start tracking the status of that publication.
This includes planned authors, title, abstract, and planned venue.
</th> </tr>
<tr>
<td>
At least 3 weeks before deadline
</td>
<td>
Outline of presentation is ready, and authors start partner internal approval
processes.
</td> </tr>
<tr>
<td>
At least 1 week before deadline
</td>
<td>
Work package internal (authors / subject matter experts) review is started; authors are responsible for integrating / discussing comments with the reviewers.
</td> </tr>
<tr>
<td>
Before submission
</td>
<td>
Authors declare successful (internal) review and all authors agree to the
submission of that final version (e.g., via email). PTC will store that
information on the repository together with the submitted version.
</td> </tr>
<tr>
<td>
After submission / dissemination / acceptance of publication
</td>
<td>
Authors ensure that status of publication is tracked and updated in
Dissemination Plan (update notifications to Task 6.1 lead).
</td> </tr> </table>
#### TABLE 2A: PROCEDURE FOR JOINT PUBLICATION
For publications with only one partner being involved, the procedure is
simplified:
<table>
<tr>
<th>
At least 4 weeks before deadline
</th>
<th>
Venue is registered in the Dissemination Plan and PTC will start tracking the
status of publication and submission.
This includes planned authors, title, abstract/outline, and planned venue.
</th> </tr>
<tr>
<td>
After submission / dissemination / acceptance of publication
</td>
<td>
Authors ensure that status of publication is tracked and updated in
Dissemination Plan (update notifications to Task 6.1 lead).
</td> </tr> </table>
_**TABLE 2B: PROCEDURE FOR PUBLICATION OF ONE PARTNER** _
Objections will be handled according to the procedures given in the Grant
Agreement.
### ACKNOWLEDGEMENT
Acknowledgement to the EC for its funding must be clearly indicated on every
publication and presentation for which project funding will be claimed.
Typical text is as follows:
This [paper/presentation/...] has received funding from the European Union's
Horizon 2020 research and innovation programme
H2020-IOT-2016-2017/H2020-IOT-2017, under grant agreement No. 780315.
### DISCLAIMER
It is recommended to include a disclaimer on every publication and
presentation. Typical text is as follows:
This [paper/presentation/...] reflects only the authors' views and the
European Commission is not responsible for any use that may be made of the
information it contains.
## Quality control for deliverables
The project coordinator, together with PCC and Technical Manager will closely
coordinate technical quality checks for all deliverables. All deliverables
will be subject to a review within the work package before forwarding them to
PCC for final review and approval. Where necessary, the Project Coordinator
could request further work of the partners on a deliverable, to ensure that it
complies with the project’s contractual requirements. All deliverables will
include the names of the editor (responsible person), the authors of the
content, the reviewers, as well as the approvers. After PCC approval,
deliverables are submitted to the EC by the coordinator. Escalations in case
of quality concerns will follow the procedures given in the “Conflict
Resolution” of Grant Agreement [3].
To ensure that this process can be followed through, the following time plan
has been agreed:
<table>
<tr>
<th>
Start of tasks contributing to a deliverable
</th> </tr> </table>
## Quality control for other material
### DISSEMINATION ACTIVITIES (INCL. IOT-EPI PROGRAMME LEVEL TOPICS)
Detailed procedures on dissemination activities are given in the Dissemination
Plan (e.g., event description, activity description, and timing). Quality
control related actions for dissemination activities are:
* Dissemination activities have to be coordinated with the WP6 lead
* For “standard” dissemination activities, a SEMIoTICS project presentation slide deck is available in the project repository.
* For all additional technical content in dissemination activities a project internal peer review is foreseen, and PCC has to give approval for dissemination after peer review is done. Peer review is initiated by the author by sending a request to PTC. PTC will assign a reviewer. After review is finished PTC will give a recommendation for approval to PCC.
For publications to go public, prior written notice with an abstract of any planned publication (irrespective of whether for scientific journals, conferences, online publications, or the like) needs to be given to the consortium members at least forty-five (45) days before the planned publication date (i.e. the day the journal will be published, the day the conference is scheduled for, etc.). Any objection from PTC/PCC members, as specified in the consortium agreement, to the planned publication shall be made in writing to all the consortium members within thirty (30) days after receipt of the written notice. If no objection is made within the time limit stated above, the publication is permitted. The internal approval process is described earlier in Table 2A and Table 2B.
### DEMONSTRATION ACTIVITIES (INCL. IOT-EPI PROGRAMME LEVEL TOPICS)
Demonstration activities will follow the same procedure as dissemination
activities with technical content (e.g., peer review of technical content).
All demonstration activities have to be coordinated with PTC well in time -
due to extended visibility, partners might have to follow internal approval
procedures which need lead time. Demonstrations will be coordinated by
dedicated (internal) workgroups and these workgroups will directly report to
PTC. Quality assurance will follow the procedures given in the Grant Agreement
[3].
## Quality assurance
Quality assurance will follow the procedures given in the Grant Agreement,
namely: Quality assurance will be performed in all project phases through WP1
by PCC that will undertake to secure SEMIoTICS quality and relevant
documentation at all development stages of the project. SEMIoTICS is adopting
the Plan-Do-Check-Act (PDCA) principle to achieve proper monitoring of all
project activities. With PDCA all work done within the WPs and tasks will be
closely monitored on a continuous basis resulting in PCC/PTC initiated
corrective actions and changes to the project plan when necessary.
### QA FOR DELIVERABLES
QA recommends and expects certain Quality levels / Quality Assurance levels
for project deliverables, as well as the criteria and processes for assessing
them, including responsible project internal stakeholders (Task leader/Editor,
Contributor, Reviewer, WP Leader, PTC, TPM, PCC, PC).
The title page of the SEMIoTICS deliverables clearly identifies authors,
reviewers, and approvers. This gives a transparent view on the persons
involved in quality control and deliverables can only be released after the
quality assurance levels (e.g., internal reviews/processes) are successfully
passed.
Each Task leader/editor will follow the criteria such as:
1. The deliverable Table of Contents is prepared according to the PERT chart (see section 6.5) of the project.
2. Incorporating suggestions from PTC, TPM, PCC and PC for continuous improvement of the deliverable
3. Ensuring timely inputs from the contributors of the task.
4. Checking and managing the flow of information in the technical contents as per the task objectives, fulfilling the WP objectives.
Each contributor of the deliverable will follow the criteria such as:
1. Timely contribution of the technical contents to the Editor of the deliverable.
2. Checking the flow of information in the technical contents.
Each WP lead will follow the criteria such as:
1. Monitor technical progress w.r.t. WP objectives in comparison to resource consumption for each of its tasks within the WP.
2. Check each of the WP's deliverables as per the PERT chart (see section 6.5) of the project and ask for corrections from the Editor, if necessary.
3. Assign reviewer(s) who will check the consistency of the deliverable from the information flow point of view.
The deliverable reviewer assigned by WP lead will follow the criteria such as:
1. Check whether each deliverable starts from high-level concepts and then presents technologies and details in separate sections.
2. Check for English grammatical errors, broken links.
3. Check the layout of the deliverables w.r.t. the reporting template
After the deliverable is properly reviewed internally in the WP, the TPM sends it to the PTC for review and a check for any obvious artifacts that may affect other tasks in their respective WPs. After PTC approval, the TPM sends it to the PCC for final approval. After PCC approval, the PC uploads the deliverable to the EC portal.
### QA FOR PUBLICATIONS
All SEMIoTICS publication activities will be captured in the Dissemination Plans of WP6, where information on quality control gates is also recorded. This includes proper documentation that the quality control gates were successfully passed, as well as the status of external peer reviews (e.g., a publication submitted to double-blind peer review and accepted for publication at the conference).
### QA FOR OTHER MATERIAL
All internal reviews for other SEMIoTICS dissemination material will be
documented in the dissemination plan (per dissemination activity). This
includes information on authors, reviewers, and approvers.
## Risk Management
The overall risk of the project being technically too broad and ambitious is controlled by the excellence and experience of the consortium partners. It is important that the consortium identify the risks that may originate from the approaches used to achieve the project goals, as well as the measures that the consortium could take to minimize them. The potential project risks can be classified into the following groups:
i. Execution risks:
1. Partner related
2. Planning problems
3. Consortium collaboration issues
ii. Technological risks
iii. External risks
SEMIoTICS’s major risks, including description, possible impacts, and related
contingency plans are described in the Risk register which is regularly
updated in monthly PTC meetings.
In case a serious issue arises in any deliverable, the deliverable responsible person, reviewer, PTC, Technical Project Manager, PCC, or Project Coordinator can raise that issue with the involved persons in the project directly, or indirectly through the Technical Project Manager/Project Coordinator.
## Deliverable tracking process
Transparency of roles and responsibilities has a big impact on the project
success. Uncertainty can dramatically affect individual, organisational as
well as the consortium performance. Therefore, responsible persons for each
organisation and per WP were defined. The tables below show initial
assignments of the Deliverables, its reviewers and Milestones of the project;
and detailed tacking incl. reasons for delay (if any) will be done throughout
the WP1 activities. While Deliverable leading organisations were already
defined within the DoA [1], the concrete editor responsible for requesting and
guiding partner inputs towards a punctual and high-quality submission, were
named at the project start till Milestone 2 (MS2). The later assignments will
be mentioned in subsequent deliverables of WP1 (Yearly and half-yearly project
plan updates). In line with the quality control process for deliverables
(described in chapter 3.4) at least one specific internal reviewer for each
deliverable was defined and clear deadlines for the draft version, internal
review as well as for the pre-final version and final version submission were
established.
Note: For simplicity, Table 4A is filled till Milestone MS2 and Table 4B is
filled till Milestone MS1. In the forthcoming WP1 deliverables this table will
be updated continuously.
This table is linked to the file “Deliverables_tracking_Editors and reviewers.xlsx”:
_https://overseer1.erlm.siemens.de/repository/Document/downloadWithName/Deliverables_tracking_Editors%20and%20reviewers.xlsx?reqCode=downloadWithName&id=24675185_
<table>
<tr>
<th>
Deliverables for Project 780315
</th> </tr>
<tr>
<td>
Deliverables
</td> </tr>
<tr>
<td>
WP No
</td>
<td>
Del No
</td>
<td>
Title
</td>
<td>
Editor
</td>
<td>
Reviewer(s)
</td>
<td>
Comments
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.1
</td>
<td>
Project web site and internal communication platform
</td>
<td>
Andreas Miaoudakis (FORTH)
</td>
<td>
Nikolaos Petroulakis (FORTH), Vivek Kulkarni (SIEMENS)
</td>
<td>
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
D2.1
</td>
<td>
Analysis of IoT value drivers
</td>
<td>
Prof. Georgios Spanoudakis (Sphynx)
</td>
<td>
Vivek Kulkarni (SIEMENS)
</td>
<td>
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
D2.2
</td>
<td>
SEMIoTICS usage scenarios and requirements
</td>
<td>
Vivek Kulkarni (SIEMENS)
</td>
<td>
Use case wise peer-reviewers
</td>
<td>
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.2
</td>
<td>
Initial Quality, Innovation and Data Management Plan
</td>
<td>
Volkmar Döricht (SIEMENS)
</td>
<td>
Nikolaos Petroulakis (FORTH)
</td>
<td>
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.7
</td>
<td>
Periodic project plan updates (M6)
</td>
<td>
Nikolaos Petroulakis (FORTH)
</td>
<td>
Vivek Kulkarni (SIEMENS)
</td>
<td>
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
D2.3
</td>
<td>
Requirements specification of SEMIoTICS framework
</td>
<td>
Mirko Falchetto (ST-I)
</td>
<td>
Peer-reviewers of each section
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.1
</td>
<td>
Impact creation, dissemination and exploitation plan
</td>
<td>
Christos Verikoukis (CTTC)
</td>
<td>
Nikolaos Petroulakis (FORTH)
</td>
<td>
</td> </tr>
<tr>
<td>
MS1
</td>
<td>
</td>
<td>
Finalization of the Requirements
</td>
<td>
Danilo Pau (ST-I)
</td>
<td>
Nikolaos Petroulakis (FORTH)
</td>
<td>
An approximate 1-month delay in completion of the milestone was agreed with the PO at the time.
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.3
</td>
<td>
Year 1 project report and project plan updates
</td>
<td>
Vivek Kulkarni (SIEMENS)
</td>
<td>
Nikolaos Petroulakis (FORTH)
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.5
</td>
<td>
Field-level middleware & networking toolbox (first draft)
</td>
<td>
Prodromos Vasileios Mekikis (IQUADRAT)
</td>
<td>
Ermin Sakic (SIEMENS)
</td>
<td>
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
D2.4
</td>
<td>
SEMIoTICS high level architecture (first draft)
</td>
<td>
Mirko Falchetto (ST-I)
</td>
<td>
Łukasz Ciechomski (BlueSoft)
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.1
</td>
<td>
Software defined programmability for IoT devices (first draft)
</td>
<td>
Ermin Sakic (SIEMENS)
</td>
<td>
Nikolaos Petroulakis (FORTH)
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.2
</td>
<td>
Network Functions Virtualization for IoT (1st draft)
</td>
<td>
Luis Sanabria Russo (CTTC)
</td>
<td>
Ermin Sakic (SIEMENS)
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.3
</td>
<td>
Bootstrapping and interfacing SEMIoTICS field level devices (1st draft)
</td>
<td>
Darko Anicic (SIEMENS)
</td>
<td>
Kostas Ramantas (IQUADRAT)
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.4
</td>
<td>
Network-level Semantic Interoperability (first draft)
</td>
<td>
Iason Somarakis (Sphynx)
</td>
<td>
Ermin Sakic (SIEMENS)
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.1
</td>
<td>
SEMIoTICS SPDI Patterns (first draft)
</td>
<td>
Konstantinos Fysarakis (Sphynx)
</td>
<td>
Nikolaos Petroulakis (FORTH)
</td>
<td>
</td> </tr>
<tr>
<td>
MS2
</td>
<td>
</td>
<td>
First version of SEMIoTICS architecture, End of 1st Field and Network level
mechanisms development cycle
</td>
<td>
SIEMENS
</td>
<td>
Nikolaos Petroulakis (FORTH)
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.2
</td>
<td>
SEMIoTICS Monitoring, prediction and diagnosis mechanisms (first draft)
</td>
<td>
ENG
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.3
</td>
<td>
Embedded Intelligence and local analytics (first draft)
</td>
<td>
ST-I
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.4
</td>
<td>
Semantic interoperability mechanisms for IoT (first draft)
</td>
<td>
FORTH
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.5
</td>
<td>
SEMIoTICS Security and privacy mechanisms (first draft)
</td>
<td>
UNI PASSAU
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.6
</td>
<td>
Implementation of SEMIoTICS BackEnd API (Cycle 1)
</td>
<td>
BlueSoft
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MS3
</td>
<td>
</td>
<td>
End of 1st Pattern-driven smart behavior of IIoT mechanisms development cycle
and the 1st backend implementation cycle
</td>
<td>
Sphynx
</td>
<td>
Nikolaos Petroulakis (FORTH)
</td>
<td>
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.8
</td>
<td>
Periodic project plan updates (M18)
</td>
<td>
FORTH
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.2
</td>
<td>
Interim report on impact creation, dissemination activities
</td>
<td>
CTTC
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.3
</td>
<td>
Interim report on exploitation activities (M18)
</td>
<td>
ENG
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.4
</td>
<td>
Interim report on standardization activities
</td>
<td>
SIEMENS
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.6
</td>
<td>
Field-level middleware & networking toolbox (second draft)
</td>
<td>
IQUADRAT
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.1
</td>
<td>
SEMIoTICS KPIs and Evaluation Methodology
</td>
<td>
UNI PASSAU
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
D2.5
</td>
<td>
SEMIoTICS high level architecture (final)
</td>
<td>
BlueSoft
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.7
</td>
<td>
Implementation of SEMIoTICS BackEnd API (Cycle 2)
</td>
<td>
BlueSoft
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.2
</td>
<td>
Software system integration (Cycle 1)
</td>
<td>
BlueSoft
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.3
</td>
<td>
IIoT Infrastructure set-up and testing (Cycle 1)
</td>
<td>
IQUADRAT
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MS4
</td>
<td>
</td>
<td>
Final version of SEMIoTICS architecture, end of 2nd implementation cycle, end
of 1st setup and testing cycle, evaluation methodology defined
</td>
<td>
FORTH
</td>
<td>
Vivek Kulkarni (SIEMENS)
</td>
<td>
</td> </tr> </table>
### TABLE 4A: DELIVERABLE INITIAL ASSIGNMENTS (EDITOR/RESPONSIBLE PERSON,
REVIEWER)
<table>
<tr>
<th>
Deliverables for Project 780315
</th> </tr>
<tr>
<td>
Deliverables
</td> </tr>
<tr>
<td>
WP No
</td>
<td>
Del No
</td>
<td>
Title
</td>
<td>
Lead Beneficiary
</td>
<td>
Nature
</td>
<td>
Dissemination Level
</td>
<td>
Est. Del. Date (annex I)
</td>
<td>
Receipt Date
</td>
<td>
Status
</td>
<td>
Reasons for delay, if any
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.1
</td>
<td>
Project web site and internal communication platform
</td>
<td>
FORTH
</td>
<td>
Report
</td>
<td>
Confidential
</td>
<td>
30-Jan-18
</td>
<td>
28-May-18
</td>
<td>
Submitted
</td>
<td>
The project start happened one month earlier than the consortium's expected
start; the web site was already operational and internal communication was in
operation. The delay was agreed with the PO at that time.
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
D2.1
</td>
<td>
Analysis of IoT value drivers
</td>
<td>
Sphynx
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Mar-18
</td>
<td>
03-May-18
</td>
<td>
Submitted
</td>
<td>
The deliverable submission delay was agreed with the PO at that time.
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
D2.2
</td>
<td>
SEMIoTICS usage scenarios and requirements
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Apr-18
</td>
<td>
14-Jun-18
</td>
<td>
Submitted
</td>
<td>
The deliverable submission delay was agreed with the PO at that time.
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.2
</td>
<td>
Initial Quality, Innovation and Data Management Plan
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Jun-18
</td>
<td>
23-Jul-18
</td>
<td>
Submitted
</td>
<td>
The deliverable submission delay was agreed with the PO at that time.
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.7
</td>
<td>
Periodic project plan updates (M6)
</td>
<td>
FORTH
</td>
<td>
Report
</td>
<td>
Confidential
</td>
<td>
30-Jun-18
</td>
<td>
02-Aug-18
</td>
<td>
Submitted
</td>
<td>
The previously delayed deliverables brought additional delay.
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
D2.3
</td>
<td>
Requirements specification of SEMIoTICS framework
</td>
<td>
ST-I
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Jun-18
</td>
<td>
07-Aug-18
</td>
<td>
Submitted
</td>
<td>
The previously delayed deliverables brought additional delay.
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.1
</td>
<td>
Impact creation, dissemination and exploitation plan
</td>
<td>
CTTC
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Jun-18
</td>
<td>
03-Jul-18
</td>
<td>
Submitted
</td>
<td>
The previously delayed deliverables, esp. D2.1 and D2.2, brought additional
delay.
</td> </tr>
<tr>
<td>
MS1
</td>
<td>
</td>
<td>
Finalization of the Requirements
</td>
<td>
ST-I
</td>
<td>
</td>
<td>
</td>
<td>
30-Jun-18
</td>
<td>
07-Aug-18
</td>
<td>
Submitted
</td>
<td>
An approximate one-month delay in completion of the milestone was agreed with
the PO at that time.
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.3
</td>
<td>
Year 1 project report and project plan updates
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Dec-18
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.5
</td>
<td>
Field-level middleware & networking toolbox (first draft)
</td>
<td>
IQUADRAT
</td>
<td>
Other
</td>
<td>
Confidential
</td>
<td>
31-Dec-18
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
D2.4
</td>
<td>
SEMIoTICS high level architecture (first draft)
</td>
<td>
ST-I
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
28-Feb-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.1
</td>
<td>
Software defined programmability for IoT devices (first draft)
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
28-Feb-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.2
</td>
<td>
Network Functions Virtualization for IoT (1st draft)
</td>
<td>
CTTC
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
28-Feb-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.3
</td>
<td>
Bootstrapping and interfacing SEMIoTICS field level devices (1st draft)
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
28-Feb-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.4
</td>
<td>
Network-level Semantic Interoperability (first draft)
</td>
<td>
Sphynx
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
28-Feb-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.1
</td>
<td>
SEMIoTICS SPDI Patterns (first draft)
</td>
<td>
Sphynx
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
28-Feb-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MS2
</td>
<td>
</td>
<td>
First version of SEMIoTICS architecture, End of 1st Field and Network level
mechanisms development cycle
</td>
<td>
SIEMENS
</td>
<td>
</td>
<td>
</td>
<td>
28-Feb-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.2
</td>
<td>
SEMIoTICS Monitoring, prediction and diagnosis mechanisms (first draft)
</td>
<td>
ENG
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-May-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.3
</td>
<td>
Embedded Intelligence and local analytics (first draft)
</td>
<td>
ST-I
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-May-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.4
</td>
<td>
Semantic interoperability mechanisms for IoT (first draft)
</td>
<td>
FORTH
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-May-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.5
</td>
<td>
SEMIoTICS Security and privacy mechanisms (first draft)
</td>
<td>
UNI PASSAU
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-May-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.6
</td>
<td>
Implementation of SEMIoTICS BackEnd API (Cycle 1)
</td>
<td>
BlueSoft
</td>
<td>
Other
</td>
<td>
Confidential
</td>
<td>
31-May-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MS3
</td>
<td>
</td>
<td>
End of 1st Pattern-driven smart behavior of IIoT mechanisms development cycle
and the 1st backend implementation cycle
</td>
<td>
Sphynx
</td>
<td>
</td>
<td>
</td>
<td>
31-May-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.8
</td>
<td>
Periodic project plan updates (M18)
</td>
<td>
FORTH
</td>
<td>
Report
</td>
<td>
Confidential
</td>
<td>
30-Jun-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.2
</td>
<td>
Interim report on impact creation, dissemination activities
</td>
<td>
CTTC
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Jun-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.3
</td>
<td>
Interim report on exploitation activities (M18)
</td>
<td>
ENG
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Jun-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.4
</td>
<td>
Interim report on standardization activities
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Jun-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.6
</td>
<td>
Field-level middleware & networking toolbox (second draft)
</td>
<td>
IQUADRAT
</td>
<td>
Other
</td>
<td>
Confidential
</td>
<td>
30-Sep-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.1
</td>
<td>
SEMIoTICS KPIs and Evaluation Methodology
</td>
<td>
UNI PASSAU
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Oct-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
D2.5
</td>
<td>
SEMIoTICS high level architecture (final)
</td>
<td>
BlueSoft
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Nov-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.7
</td>
<td>
Implementation of SEMIoTICS BackEnd API (Cycle 2)
</td>
<td>
BlueSoft
</td>
<td>
Other
</td>
<td>
Confidential
</td>
<td>
30-Nov-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.2
</td>
<td>
Software system integration (Cycle 1)
</td>
<td>
BlueSoft
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Nov-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.3
</td>
<td>
IIoT Infrastructure set-up and testing (Cycle 1)
</td>
<td>
IQUADRAT
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Nov-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MS4
</td>
<td>
</td>
<td>
Final version of SEMIoTICS architecture, end of 2nd implementation cycle, end
of 1st setup and testing cycle, evaluation methodology defined
</td>
<td>
FORTH
</td>
<td>
</td>
<td>
</td>
<td>
30-Nov-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.4
</td>
<td>
Year 2 project report and project plan updates
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Dec-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.8
</td>
<td>
Interim report on exploitation activities (M24)
</td>
<td>
ENG
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Dec-19
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.7
</td>
<td>
Software defined programmability for IoT devices (final)
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
29-Feb-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.8
</td>
<td>
Network Functions Virtualization for IoT (final)
</td>
<td>
CTTC
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
29-Feb-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.9
</td>
<td>
Bootstrapping and interfacing SEMIoTICS field level devices (final)
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
29-Feb-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.10
</td>
<td>
Network-level Semantic Interoperability (final)
</td>
<td>
Sphynx
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
29-Feb-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
D3.11
</td>
<td>
Field-level middleware & networking toolbox (final)
</td>
<td>
IQUADRAT
</td>
<td>
Other
</td>
<td>
Confidential
</td>
<td>
30-Apr-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.8
</td>
<td>
SEMIoTICS SPDI Patterns (final)
</td>
<td>
Sphynx
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Apr-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.9
</td>
<td>
SEMIoTICS Monitoring, prediction and diagnosis mechanisms (final)
</td>
<td>
ENG
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Apr-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.10
</td>
<td>
Embedded Intelligence and local analytics (final)
</td>
<td>
ST-I
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Apr-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.11
</td>
<td>
Semantic interoperability mechanisms for IoT (final)
</td>
<td>
FORTH
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Apr-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.12
</td>
<td>
SEMIoTICS Security and privacy mechanisms (final)
</td>
<td>
UNI PASSAU
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Apr-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.9
</td>
<td>
Periodic project plan updates (M30)
</td>
<td>
FORTH
</td>
<td>
Report
</td>
<td>
Confidential
</td>
<td>
30-Jun-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
D4.13
</td>
<td>
Implementation of SEMIoTICS BackEnd API (Final Cycle)
</td>
<td>
BlueSoft
</td>
<td>
Other
</td>
<td>
Confidential
</td>
<td>
30-Jun-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.9
</td>
<td>
Interim report on exploitation activities (M30)
</td>
<td>
ENG
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
30-Jun-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MS5
</td>
<td>
</td>
<td>
End of all development and implementation
</td>
<td>
BlueSoft
</td>
<td>
</td>
<td>
</td>
<td>
30-Jun-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MS8
</td>
<td>
</td>
<td>
Feedback from local ethical board on ethical guidelines
</td>
<td>
ENG
</td>
<td>
</td>
<td>
</td>
<td>
30-Jun-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.4
</td>
<td>
Demonstration and validation of IWPC-Energy (Cycle 1)
</td>
<td>
SIEMENS
</td>
<td>
Demonstrator
</td>
<td>
Public
</td>
<td>
31-Jul-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.5
</td>
<td>
Demonstration and validation of SARA-Health (Cycle 1)
</td>
<td>
ENG
</td>
<td>
Demonstrator
</td>
<td>
Public
</td>
<td>
31-Jul-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.6
</td>
<td>
Demonstration and validation of IHES-Generic IoT (Cycle 1)
</td>
<td>
ST-I
</td>
<td>
Demonstrator
</td>
<td>
Public
</td>
<td>
31-Jul-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.7
</td>
<td>
Software system integration (Cycle 2)
</td>
<td>
BlueSoft
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Aug-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.8
</td>
<td>
IIoT Infrastructure set-up and testing (Cycle 2)
</td>
<td>
IQUADRAT
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Aug-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MS6
</td>
<td>
</td>
<td>
End of 2nd setup and testing cycle, end of 1st demonstration cycle
</td>
<td>
IQUADRAT
</td>
<td>
</td>
<td>
</td>
<td>
31-Aug-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.5
</td>
<td>
Year 3 project report and project plan updates
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Dec-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP1
</td>
<td>
D1.6
</td>
<td>
Final project report
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Confidential
</td>
<td>
31-Dec-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.9
</td>
<td>
Demonstration and validation of IWPC-Energy (Cycle 2)
</td>
<td>
SIEMENS
</td>
<td>
Demonstrator
</td>
<td>
Public
</td>
<td>
31-Dec-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.10
</td>
<td>
Demonstration and validation of SARA-Health (Cycle 2)
</td>
<td>
ENG
</td>
<td>
Demonstrator
</td>
<td>
Public
</td>
<td>
31-Dec-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
D5.11
</td>
<td>
Demonstration and validation of IHES-Generic IoT (Cycle 2)
</td>
<td>
ST-I
</td>
<td>
Demonstrator
</td>
<td>
Public
</td>
<td>
31-Dec-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.5
</td>
<td>
Final report on impact creation, dissemination activities
</td>
<td>
CTTC
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Dec-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.6
</td>
<td>
Final report on exploitation activities
</td>
<td>
ENG
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Dec-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
D6.7
</td>
<td>
Final report on standardization activities
</td>
<td>
SIEMENS
</td>
<td>
Report
</td>
<td>
Public
</td>
<td>
31-Dec-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MS7
</td>
<td>
</td>
<td>
Completion of Demonstration and Evaluation
</td>
<td>
SIEMENS
</td>
<td>
</td>
<td>
</td>
<td>
31-Dec-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MS9
</td>
<td>
</td>
<td>
Selection and consent of Users
</td>
<td>
ENG
</td>
<td>
</td>
<td>
</td>
<td>
30-Dec-20
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
**TABLE 4B: DELIVERABLE DETAILED TRACKING**
# INNOVATION MANAGEMENT PLAN (IMP)
## Innovation Management
SEMIoTICS has a consortium agreement [2] in place which covers ownership and
access/usage rights of foreground and background IP, during and after the
project.
High-profile academic partners, as well as the strong involvement of the IP
departments of industry partners, will ensure that IP is identified and
appropriately protected.
The Description of Action (Section 3.2.3) [1] also defines the implementation
of Innovation Management in SEMIoTICS as described below:
Beyond the advancements to the state of the art in individual technology and
scientific areas, the overarching groundbreaking objective of SEMIoTICS is to
deliver a framework, available through an open API that will enable and
guarantee secure and dependable actuation and intelligent semiautonomous
adaptation of IIoT and IoT applications involving heterogeneous smart objects.
The SEMIoTICS approach is characterized by three pillars:

i. SPDI patterns-based development of IIoT/IoT applications to ensure SPDI properties [WP4 deliverables];

ii. Semi-autonomic behaviour and evolution based on cross-layer embedded intelligence [WP3, WP4 and WP5 deliverables]; and

iii. Supporting interoperability across heterogeneous IIoT/IoT platforms and vertical domains [WP3 (esp. Task 3.3, Task 3.4), WP4 (esp. Task 4.4) and WP5 deliverables].

These pillars may, during the course of the project, lead to the following innovations:
1. Platform innovation stemming from security driven design foundations empowering SEMIoTICS to become a basis for building secure and trustworthy IoT ecosystems.
2. Application and Systems innovation stemming from the development of a SPDI-centric layered infrastructure involving network, software and device innovations integrated to form the SEMIoTICS framework.
3. Service innovation stemming from the deployment of the SEMIoTICS framework in lab trials, the procedures and applications developed through this process as well as lessons learned from the lab trials of SEMIoTICS.
Innovation Management will be performed by the leader of Task 6.2
(Exploitation of results) with the support of the Scientific and Technical
Project Manager (STIPM) and ultimately of the Project Coordination Committee
(PCC). The innovation collection process will be managed by the consortium in
the context of the relevant WPs and will be integrated in the overall project
results. The Task 6.2 leader will assist the WP leaders and the Technical
Project Manager in handling all matters concerning Intellectual Property
protection for the produced innovations, as well as their inclusion in the
project’s exploitation plan in D6.1.
### INNOVATION SUPPORTING TASKS AND PROCESSES
Driving research and innovation is an integral part of SEMIoTICS and deeply
anchored in the work structure. The following tasks and deliverables directly
support, enable, or drive innovation in SEMIoTICS (for detailed descriptions
of tasks and deliverables see [6]):
Task 6.1: Impact Creation and Dissemination
Task 6.2: Exploitation of results
Task 6.3: Standardization
<table>
<tr>
<th>
Related Task
</th>
<th>
Created Output
</th> </tr>
<tr>
<td>
**Task 2.2:** Specification of use case scenarios & applications and their
requirements
**Task 2.3:** Specification of infrastructure requirements
**Task 2.4:** SEMIoTICS architecture design
</td>
<td>
**D2.3:** Requirements specification of SEMIoTICS framework (M6)
**D2.5:** SEMIoTICS high level architecture (M26)
</td> </tr>
<tr>
<td>
**Task 3.3:** Semantics-based bootstrapping & interfacing
**Task 3.4:** Network-level semantic Interoperability **Task 3.5:**
Implementation of Field-level middleware & networking toolbox.
</td>
<td>
**D3.9:** Bootstrapping and interfacing SEMIoTICS field level devices (final)
(M26)
**D3.10:** Network-level Semantic Interoperability
(final) (M26)
**D3.11:** Field-level middleware & networking toolbox (final) (M28)
</td> </tr>
<tr>
<td>
**Task 4.1:** Architectural SPDI patterns
**Task 4.2:** Monitoring, prediction and diagnosis
**Task 4.3:** Embedded Intelligence and local analytics
**Task 4.4:** End-to-End Semantic Interoperability
**Task 4.5:** End-to-End Security and Privacy
**Task 4.6:** Implementation of SEMIoTICS backend
API
</td>
<td>
**D4.8:** SEMIoTICS SPDI Patterns (final) (M28) **D4.9:** SEMIoTICS
Monitoring, prediction and diagnosis mechanisms (final) (M28)
**D4.10:** Embedded Intelligence and local analytics (final) (M28)
**D4.11:** Semantic interoperability mechanisms for
IoT (final) (M28)
**D4.12:** SEMIoTICS Security and privacy mechanisms (final) (M28)
**D4.13:** Implementation of SEMIoTICS BackEnd API (final) (M30)
</td> </tr>
<tr>
<td>
**Task 5.3:** IIoT Infrastructure set-up and testing
**Task 5.4:** Demonstration and validation of IWPC- Energy scenario
**Task 5.5:** Demonstration and validation of SARAHealth scenario
**Task 5.6:** Demonstration and validation of IHES-
Generic IoT scenario
</td>
<td>
**D5.4/5.9; D5.5/5.10; D5.6/5.11:** Demonstration and
Validation of respective usage scenarios (M31/M36)
</td> </tr>
<tr>
<td>
**Task 6.1:** Impact Creation and Dissemination
**Task 6.2:** Exploitation of results
**Task 6.3:** Standardization
</td>
<td>
**D6.1:** Impact creation, dissemination and exploitation plan (M6)
**D6.3:** Interim report on exploitation activities (M18)
**D6.4:** Interim report on standardization activities (M18)
**D6.8:** Interim report on exploitation activities(M24)
**D6.9:** Interim report on exploitation activities (M30)
**D6.6:** Final report on exploitation activities (M36)
**D6.7:** Final report on standardization activities
(M36)
</td> </tr> </table>
#### TABLE 5: INNOVATION-RELATED TASKS AND OUTPUTS
After M28, SEMIoTICS will be in a position to understand both the market and
the technical problems at hand, with the goal of successfully implementing
appropriate creative ideas demonstrated by WP3 and WP4. The following
classification shows the different phases of innovation management as per
Specht (2002).
_Source: Classification of technology, R&D and innovation management (Specht,
2002)_
As a RIA project, SEMIoTICS focuses on technology management in the technical
work packages WP2, WP3, WP4 and WP5. In Task 6.2 of WP6, the SEMIoTICS
consortium plans to develop a lean business model canvas for SEMIoTICS,
depicting a new or improved product, service or process from each partner's
perspective. This may enable the SEMIoTICS consortium to respond to an
external opportunity (e.g. testing business models in different scenarios) or
an internal one (e.g. using SEMIoTICS for internal product development). This
will be checked and pursued by the consortium members from time to time during
the project or even after the project's end. The exact proceedings and
decisions will be taken directly in Task 6.2 of WP6.
## Exploitation Management
SEMIoTICS already presented a draft exploitation strategy at proposal stage
(see Section 2.2.3 in [1]); further details of the exploitation plan are
subject to WP6.
Each partner delivered a high-level exploitation plan (Section 2.2.3 of [1])
at proposal stage, and these exploitation plans will be detailed further
within the work of WP6.
Task 6.2 will address exploitation of the project results via e.g. commercial
and scientific exploitation strategies, plans and implementation. SEMIoTICS
will investigate different routes for exploitation (e.g. use for further
research, developing and selling own products/services, spin-off activities,
and standardization activities/new standards/ongoing procedures).
Exploitation of project results will be a topic on the agenda of consortium
meetings in order to support exploitation of results at consortium level.
Results are documented in Deliverables D6.3, D6.8, D6.9 and D6.6.
## Communication / Dissemination Management
All aspects of communication and dissemination management are captured in the
Dissemination Plan [5] and will not be covered here.
## Capturing and handling IPR
IPR handling is covered in SEMIoTICS consortium agreement [2] (CA) and allows
each partner the exploitation of their own results. Also joint IPR is covered
by the CA and will ensure proper exploitation of the project results.
The SEMIoTICS consortium is composed in a way that each partner has strong
expertise in a certain technical/business domain and is responsible for
driving this domain forward. This ensures that the essential elements of
successful Innovation management are contributed by experts in the field (good
research practice by high profile academic partners, technologies, industrial
research, business aspects, and industry domain knowledge by leading industry
partners).
Capturing of IPR is supported by close interaction of technical and
exploitation tasks throughout the project.
Awareness of proper IPR capturing and handling will be raised by regular
sessions on this topic at all SEMIoTICS consortium meetings and PTC meetings.
To avoid additional overhead in the project, the PTC meeting serves as the
Intellectual Property Rights Committee (IPRC) to deal with intellectual
property that is either introduced to the project by a partner or produced as
a work package outcome. The IPRC will be responsible for the definition of
access rights and licensing (if so required) of the project results, as guided
by Section 9.4 of the consortium agreement [2] (CA), which provides details on
access rights for exploitation, especially regarding exploitation of project
outcomes as a whole.
The table in the annex gives an overview of IPR filed within SEMIoTICS, and
each partner is responsible for keeping that list up to date (see also the
terms in [2]). Data recorded in that table will also be used to update
information for the periodic reports in the EC project management web portal.
## Standards & Regulations
SEMIoTICS has a dedicated task on standardization (Task 6.3), with resources
of leading industry partners allocated to it. This task will use the
“Standardization and Open Source Engagement” bodies, namely AIOTI, W3C, ETSI,
IEEE and ISO, as a baseline, and will continuously refine and update the
involvement of SEMIoTICS. Task 6.3 is also used to monitor progress and
activities related to these standardization bodies, as well as to create input
for standardization bodies where possible. More details and a summary of
related standardization activities will be given in deliverables D6.4 and D6.7.
## Innovation Assessment
Assessment of SEMIoTICS results will be done on three levels.
1. Project level objectives: SEMIoTICS has a clear definition of its objectives (see Section 1.1.2. in [1]). These definitions include a description of the objective and the corresponding measures of success.
2. SEMIoTICS, as a Research and Innovation Action (RIA), is clearly targeting only lab trials, which corresponds to TRL 4-6 (according to the definition by the EC) as stated in Section 1.3.2 of [1].
3. Lab trial evaluation on the technical level is foreseen in WP5 and uses the requirements and KPIs defined in deliverables D2.3 and D5.1.
In addition, SEMIoTICS establishes an advisory committee during the project
runtime in order to obtain external views and assessments of the achievements.
# DATA MANAGEMENT PLAN (DMP)
## Expected data
SEMIoTICS is a three-year project and will produce a number of technical
results relevant for IoT networks, specifically in WP5. This includes data
created in lab experiments and demos for the wind turbine, healthcare and
generic IoT use cases. Wind energy use case data used in SEMIoTICS relates to
critical infrastructure and will therefore not be publicly accessible. For the
healthcare use case, no person-related data will be generated, stored or
transferred. Moreover, based on GA Article 29.3, open access to research data
is "Not applicable" for SEMIoTICS. The following sections provide a detailed
description of the specific data sets in the three use cases, which will be
handled only within the consortium; the data will be classified per use case.
## Data formats and Metadata
### DATA FORMATS
The following table gives the initial version of the data formats in the
consortium-level DMP, which will be handled only within the consortium. There
are no real-world field trials in SEMIoTICS's three use cases; the data
generated in all three use cases will be produced only in a lab environment.
Detailed descriptions of the expected information of each cell are given at
the end of this section.
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
**Data set name**
</th>
<th>
Data Set Description
</th>
<th>
Standards and metadata
</th>
<th>
Data sharing
</th>
<th>
Archiving and preservation (including storage and backup)
</th>
<th>
Contact Person/ source of data
</th> </tr>
<tr>
<td>
SEMIoTICS_UC1
</td>
<td>
Test Data
</td>
<td>
Test Data
</td>
<td>
NA
</td>
<td>
confidential
</td>
<td>
_Link_
</td>
<td>
Ermin Sakic
</td> </tr>
<tr>
<td>
SEMIoTICS_UC2
</td>
<td>
Test Data
</td>
<td>
Test Data
</td>
<td>
NA
</td>
<td>
confidential
</td>
<td>
_Link_
</td>
<td>
Domenico Presenza
</td> </tr>
<tr>
<td>
SEMIoTICS_UC3
</td>
<td>
Test Data
</td>
<td>
Test Data
</td>
<td>
NA
</td>
<td>
confidential
</td>
<td>
_Link_
</td>
<td>
Mirko Falchetto
</td> </tr> </table>
#### TABLE 6: DATA FORMATS
Note:
This table is linked to the file “SEMIoTICS DMP.xlsx”:
**_https://overseer1.erlm.siemens.de/repository/Document/downloadWithName/SEMIoTICS%20DMP.xlsx?reqCode=downloadWithName&id=12740979_**
The following table gives a detailed description of the fields used in the
data formats table of Section 5.2.1.
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
Identifier for the data set to be produced
</th> </tr>
<tr>
<td>
Data set description
</td>
<td>
Origin (in case it is collected), scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration and reuse.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The dataset cannot be shared publicly as GA 29.3 - Open access to research
data is not applicable to SEMIoTICS
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td>
<td>
In general, the procedure described in Section 5 will be applied. This cell
gives a data-specific description of the procedures that will be put in place
for long-term preservation of the data (if required).
Indication of how long the data should be preserved, what its approximate end
volume is, what the associated costs are, and how these are planned to be
covered (if required).
</td> </tr> </table>
_**TABLE 7: FIELDS USED IN THE DATA FORMATS** _
### METADATA
SEMIoTICS plans to create and share data in relation to project deliverables
or publications. Deliverables and publications will give all relevant
information, including the meaning of data sets, the methods of data
acquisition/processing, as well as specific methods/algorithms for usage (if
required). Thus, deliverables and publications can be considered the main
pieces of metadata for all data sets created within the project.
## Data sharing and access
Project-related documents (release versions of documents, not raw formats)
with dissemination level “public” will be accessible via the project website
_http://www.SEMIoTICS-project.eu_ .
Registration (free of charge) is required to get access. The dissemination
level is initially proposed by the corresponding author and will be reviewed
and approved by the PTC and PCC (for details see Section 6.2).
As far as possible, depending on the publishers’ policies, pre-prints of the
publications will be made available open access via the project website, as
well as via arXiv/OpenAIRE or other means. In case embargo periods (e.g.,
given by publishers) have to be observed, open access will be given after the
embargo period expires. In particular, an open-access policy is foreseen for
all scientific publications in the context of SEMIoTICS, which is in line with
the EC open access policy (see: _https://www.openaire.eu/how-do-i-make-my-
publications-open-access-to-comply-with-the-ec-s-open-access-policy_ ) .
## Data archiving and preservation
All project related documents (raw formats), deliverables, reports,
publications, data, and other artifacts will be stored in a repository
accessible during project duration for all partners. This repository is hosted
(with backup) by the Coordinator and the link is/was distributed at the first
consortium meeting. Access to the repository is given to registered persons
from project partners only. The folder structure of the repository is managed
by the coordinator and changes of the structure need to be coordinated with
the Coordinator. Corresponding partners will keep the above-mentioned
repositories operational during the project lifetime. After project closure,
repositories will be maintained for at least one more year. After project
closure the administrating partner can change access policies (e.g.,
restricted access / access on demand) in order to keep maintenance costs at a
minimum.
The Data Management Plan is maintained by the Project Technical Committee
(PTC). Although SEMIoTICS is not subject to “Open Access to Research Data”, as
Article 29.3 is not applicable to the project, PTC reviews of the DMP are a
regular agenda item of PTC meetings and conference calls, and work package
(WP) results will be checked with respect to relevant information for the DMP.
The sole purpose of the DMP is to define how research data is handled within
the consortium.
WP leads (WP3, WP4 and WP5) are responsible for ensuring that the results of
tasks within their work package are aligned with the definitions in the DMP.
WP leads are also responsible for ensuring that the table in the DMP is
updated as soon as data is created within their WP (for details on the update
procedure see Section 7.2). Updates of the tables in the DMP are communicated
from the Project Technical Committee (PTC) to the PCC together with the
minutes of the monthly PTC calls (see also the section on update procedures).
In order to ensure that this DMP is implemented and followed, reviews (by the
PCC and/or PTC) of all kinds of project-related documents (e.g., reports,
deliverables, publications) will also include a check of the data used and of
its proper documentation and use in line with this DMP.
In case the contact person for a data set leaves the project, the affiliation
of the original contact person will take over the responsibility and assign a
new contact person.
# CONCLUSION
This deliverable presented the initial quality management plan of SEMIoTICS,
in which the different roles and bodies are presented. Moreover, the different
levels of quality control and assurance are described. In addition, the
different innovation plans regarding exploitation, standards and innovation
management are detailed. Furthermore, the data management plan is provided,
analyzing the expected data, data formats and sharing strategies. Finally, the
deliverable provides, in the Annex, links to the different procedures followed
and the IPR, as well as the PERT diagram with the interactions between the
tasks.
# 1 Summary
This document outlines the data management strategies that will be implemented
throughout the CHIC research data lifecycle. In particular, it describes (i)
the type, format and volume of the generated data, (ii) the metadata and
documentation provided to make it findable, interoperable and reusable, (iii)
the long-term preservation plan, (iv) how data will be shared and licensed for
re-use, (v) the resources that need to be allocated to data management, (vi)
data storage and back up policies during the active phase of the project, and
(vii) the handling of personal data.
As stipulated in the Guidelines on FAIR Data Management in Horizon 2020, this
DMP will be updated when important changes to the project occur and, at least,
as part of the periodic reviews and at the end of the project.
The data management plans of the CHIC project and of the NEWCOTIANA project
(grant agreement 760331) have been generated in close collaboration between
these two projects. Both projects will generate datasets on technical
performance, safety assessment, socio-economic and stakeholder interactions
related to the use of NPBT for the development of multipurpose crops for
molecular farming. Aligning and standardising the data management between
these projects will facilitate data reuse and data interoperability. In
addition, no reporting and metadata standards are currently available for
NPBTs. The CHIC and the NEWCOTIANA projects will together contribute to the
development of reporting requirements for datasets related to NPBTs.
# 2 CHIC Data Summary
CHIC aims to develop new chicory varieties with improved dietary fiber
characteristics and improved terpene composition. Additionally, we will
address the self-incompatibility which hampers the breeding efforts for this
crop. This goal will be achieved by new plant breeding techniques NPBTs. More
precisely in CHIC we will develop and apply gene editing approaches all based
on the CRISP/Cas technology. We will use stable agrobacteriummediated gene
editing, transient gene editing techniques and the application of
ribonucleoproteins to edit the genome DNA in the chicory protoplasts.
In the CHIC project, data to assess the technological performance of these
different methods will be collected. Additionally, data related to the risk
assessment of the different NPBT techniques used, such as off-target effects,
will be generated. In improved chicory lines, data about dietary fiber and
terpene composition and bioactivity will be evaluated. The economic
feasibility and socio-economic impact of the newly produced chicory varieties
will be evaluated. The data generated will contribute to evidence-based,
informed decisions on the legal status and regulation of NPBT crops.
To accomplish this a series of datasets will be generated:
* Improved genome assembly of _C. intybus_
* RNAseq data on _C. intybus_
* gRNA inventory and gene editing efficiencies
* genetic part and construct designs
* Dietary fiber characterisation
* Terpene characterisation
* Data on regulatory networks for secondary metabolite biosynthesis in _C. intybus_
* Bioactivity data for dietary fiber / terpenes
* Phenotypic and agricultural parameters of newly developed _C. intybus_ varieties
* Safety assessment, including untargeted effects, of different NPBT applications
* Socio-economic impacts
* Broader societal impacts - Stakeholder views
Table 1 provides a list of research data categories that will be produced in
CHIC and the expected data volume for each of them.
CHIC will re-use the constructs, RNAseq data and available genome data as well
as established protocols for tissue-culture cultivation, protoplast
transformation and regeneration of chicory that is available at different
partners to maximize the use of resources.
Stakeholder views on commercial cultivation and use of GE chicory will be
collected in order to clarify possible hurdles and facilitating factors for
chicory innovation using GE techniques. Stakeholder views will be collected in
the course of document reviews, interviews, questionnaires, workshops and
focus groups. Data will be gathered as audio recordings, transcripts,
interviews, and workshop notes. Only data gathered in the course of the CHIC
project will be used.
These data will not only serve to meet the objectives of the current project,
but will also be useful for stakeholders including the scientific community,
plant breeders, farmers, industry, legislators and regulators, and the general
public.
Thus, the scientific community will benefit from the development of the NPBT
techniques for chicory. The improved gene editing and knowledge of off-target
effects can be applied more broadly for gene editing of (asteraceous) crops.
The project will create added value for chicory farmers, by providing
improved dietary fibre yield and quality and terpene yields. Additionally, the
work on chicory incompatibility and the development of NPBTs will benefit
chicory breeders. Finally, the generated data on the utility, efficiency and
safety of NPBTs as well as the generated communication materials will help EU
and National legislators and regulators and the general public make informed
decisions on the regulation and public acceptance of NPBTs.
To enhance the usability of the data, open or otherwise widely-used file
formats will be the preferred option for data collection (see Table 1).
Formats that are open and/or in widespread use stand the best chance of
remaining readable in the future; on the contrary, proprietary formats used
only by a particular software are prone to becoming obsolete. In those cases
in which the laboratory instrument used to perform the measurement outputs the
data in an instrument-specific proprietary format, a converted version of the
output file in an open data format will be shared together with the original
file, thus fostering data interoperability (a minimal conversion sketch
follows Table 1 below).
Table 1. CHIC foreseen data types, size and selected file formats.
<table>
<tr>
<th>
Research data
</th>
<th>
</th> </tr>
<tr>
<td>
Data Type
</td>
<td>
File Format
</td> </tr>
<tr>
<td>
Genome sequence data (raw and
processed data)
</td>
<td>
bam, fastq
</td> </tr>
<tr>
<td>
DNA parts and constructs
</td>
<td>
genbank, fasta plain text, ASCII (.txt), Ab1 (.ab1),
</td> </tr>
<tr>
<td>
qPCR data
</td>
<td>
Raw data, comma-separated values (.csv), text (tab delimited) (*.txt)
</td> </tr>
<tr>
<td>
RNA-Seq data (raw and processed data)
</td>
<td>
.ffn
</td> </tr>
<tr>
<td>
Metabolomics data (raw and processed data)
</td>
<td>
mzML, netCDF (.cdf), comma-separated values (.csv), text (tab delimited)
(*.txt)
</td> </tr>
<tr>
<td>
Images (e.g. microscopy, immunoblots)
</td>
<td>
TIFF (.tiff), png (.png), jpeg (.jpg)
</td> </tr>
<tr>
<td>
Tabular data (e.g. ELISA tests, metabolite yield, purity and functionality)
</td>
<td>
comma-separated values (.csv), text (tab delimited) (*.txt), MS excel (.xlsx)
</td> </tr>
<tr>
<td>
Plant phenotypic data (contained and field conditions)
</td>
<td>
text (tab delimited) (*.txt), comma-separated values (.csv), MS excel (.xlsx)
</td> </tr>
<tr>
<td>
Plant genotypic descriptions
</td>
<td>
text (tab delimited) (*.txt), comma-separated values (.csv), MS excel (.xlsx)
</td> </tr>
<tr>
<td>
Stakeholder views : audio recordings, transcripts, interview and workshop
notes, questionnaires
</td>
<td>
audio recordings (mp3), MS Word (.docx), MS excel (.xlsx), comma-separated
values (.csv).
</td> </tr>
<tr>
<td>
Standard operating procedures, protocols
</td>
<td>
pdf (.pdf), MS word (.docx)
</td> </tr>
<tr>
<td>
Scientific publications
</td>
<td>
pdf (.pdf), MS word (.docx)
</td> </tr>
<tr>
<td>
Project reports
</td>
<td>
pdf (.pdf), MS word (.docx)
</td> </tr> </table>
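To illustrate the conversion policy mentioned above, the following is a
minimal, non-authoritative sketch of exporting an open-format copy of a
tabular file. It assumes the pandas library (with openpyxl installed for .xlsx
reading); the file name is a hypothetical example, and truly
instrument-specific proprietary formats would instead require a vendor reader
or export tool.

```python
# Illustrative sketch only: export an open .csv copy of an .xlsx table so
# both versions can be shared side by side. Assumes pandas + openpyxl;
# "elisa_results.xlsx" is a hypothetical example file.
import pandas as pd

def export_open_copy(xlsx_path: str) -> str:
    """Read an .xlsx table and write a .csv copy next to the original."""
    df = pd.read_excel(xlsx_path)
    csv_path = xlsx_path.rsplit(".", 1)[0] + ".csv"
    df.to_csv(csv_path, index=False)  # plain text, readable without MS Excel
    return csv_path

if __name__ == "__main__":
    print(export_open_copy("elisa_results.xlsx"))
```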
# 3\. FAIR Data
## 3.1. Making data findable, including provisions for metadata
The provision of adequate metadata (a description of the key attributes and
properties of each dataset) is fundamental to enable the finding,
understanding and reusability of the data, as well as the validation of
research results. Descriptive metadata in particular, aims to provide
searchable information that makes data discovery and identification possible.
CHIC will adopt the DataCite Metadata Schema, one of the broadest cross-domain
standards available, as the basis for dataset description. The minimum set of
descriptors established for a CHIC dataset includes the following (an
illustrative record sketch follows the list):
* Type: a description of the resource.
Recommended best practice: use of a controlled vocabulary such as the DCMI
Type Vocabulary.
* Identifier: a unique string that identifies a resource. Provided by repository where the dataset is stored.
Preferred option: digital object identifier (DOI); also accepted URL, URN,
Handle, PURL, ARK.
* Publication date: date when the data was or will be made publicly available.
Format: YYYY-MM-DD
* Title: a name by which a resource is known (free text).
* Authors: the main researcher(s) involved in producing the data, or the authors of the publication, in priority order and affiliation. Recommended inclusion of a name identifier (e.g. ORCID)
Personal name format: family, given. Affiliation format: free text
* Description: additional information that does not fit in any of the other categories. Example: publication abstract.
Format: open.
* Version: the version number of the resource.
Format: track major_version.minor_version. Examples: 1.0, 2.1
* Language: primary language of the resource
* Rights: information about rights held in and over the resource
Values: openAccess, embargoedAccess, restrictedAccess, closedAccess.
* Licence: information about the type of licence applying to the dataset
* Contributors: institution or person responsible for collecting, managing, distributing, or otherwise contributing to the development of the resource.
This property must also be used to allow unique and persistent identification
of the funder. Values: European Commission (EU), H2020, Research and
Innovation action, CHIC, Grant Agreement Number 760891.
* Subject: subject, keywords, classification code, or key phrase describing the resource (free text).
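As an illustration of how these descriptors come together, the following is a
hedged sketch of a dataset description expressed as a simple Python mapping.
All values are hypothetical examples rather than a real CHIC dataset;
repositories such as Zenodo map comparable fields onto the DataCite Metadata
Schema at deposit time.

```python
# Hypothetical illustration of the minimum CHIC descriptor set listed above.
dataset_record = {
    "type": "Dataset",                        # DCMI Type Vocabulary term
    "identifier": "10.5281/zenodo.0000000",   # placeholder DOI
    "publication_date": "2020-01-31",         # YYYY-MM-DD
    "title": "Terpene characterisation of gene-edited C. intybus lines",
    "authors": [{"name": "Doe, Jane", "affiliation": "Example University",
                 "orcid": "0000-0000-0000-0000"}],
    "description": "Publication abstract or other free-text information.",
    "version": "1.0",                         # major_version.minor_version
    "language": "en",
    "rights": "openAccess",
    "licence": "CC-BY-4.0",
    "contributors": "European Commission (EU), H2020, Research and "
                    "Innovation action, CHIC, Grant Agreement Number 760891",
    "subject": "chicory; NPBT; terpenes",     # free-text keywords
}
```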
Additionally, metadata elements and documentation providing specific
information about the data collection processes, methodology, data analysis
procedures, variable definitions, or relationships between the different files
of a dataset will be compiled to ensure data interpretability and reusability.
These metadata elements will be covered in section 3.3. The relevant metadata
categories mentioned above will also be applied for data related to
stakeholder interactions.
## 3.2. Making data openly accessible
CHIC project results will be made openly accessible provided that open
publication does not interfere with the obligation to protect and exploit the
results or the protection of personal data.
Regarding protection of results, to ensure that dissemination of the CHIC
research outputs does not jeopardize their exploitation potential, project
results will be subject to evaluation prior to any dissemination activity.
CHIC IPR management and dissemination strategies are described in document
D7.1 – PEDR. Results approved for dissemination will be made accessible
through a variety of channels including the project webpage
(www.chicproject.com), social media, scientific conferences, scientific
publications in peer-reviewed
journals, and data repositories, among others.
Regarding the protection of personal data, stakeholder views will be either
audio recorded or documented in writing as interview or workshop notes. A
restricted access policy will be implemented for stakeholder consultation data
in order to ensure confidentiality of personal data. These raw data will only
be handled and analysed by the teams conducting the respective research
tasks. Summaries of stakeholder views will be presented in project reports
which will be made publicly available on the project website and in open-
access repositories. In these reports stakeholder views will be presented in a
pseudonymised way. No reference will be made to individual stakeholder
representatives or individual stakeholder organisations.
Being part of the Open Research Data Pilot (ORDP), the CHIC consortium is
committed to provide Open Access (free-of-charge access) to all scientific
publications and associated research data. The Open Access policy
implementation is described in D7.1 – PEDR.
Open Access (OA) to CHIC peer reviewed scientific publications will be mostly
granted through "Gold" OA, although "Green" OA will also be considered if
"Gold" OA is not provided by the selected journal. Final versions of articles
accepted for publication and their associated metadata (see section 3.1 and
below) will be deposited in Zenodo, an interdisciplinary open data repository
service created through the European Commission’s OpenAIRE project and hosted
at CERN, and will be made openly accessible at the time of publication ("Gold"
OA) or with a maximum of 6 months embargo (for "Green" OA). Zenodo is
compliant with the FAIR principles: it assigns a DOI to each deposited object,
supports DOI versioning, is compliant with the DataCite Metadata Schema, is
searchable, provides clear and flexible licensing, and provides secure back-up
(see section 5).
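For deposits in Zenodo, records and files can also be created
programmatically. The following is a minimal, non-authoritative sketch using
the Zenodo REST deposit API (documented at developers.zenodo.org); the access
token, file name and metadata values are hypothetical placeholders, and the
current API documentation should be consulted before relying on this flow.

```python
# Hedged sketch of a Zenodo deposit; all concrete values are placeholders.
import requests

ZENODO = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "YOUR-ZENODO-TOKEN"}  # hypothetical credential

# 1. Create an empty deposition and obtain its file bucket.
dep = requests.post(ZENODO, params=TOKEN, json={}).json()

# 2. Upload the publication (or data) file into the deposition's bucket.
with open("article_accepted_version.pdf", "rb") as fh:   # hypothetical file
    requests.put(f"{dep['links']['bucket']}/article_accepted_version.pdf",
                 data=fh, params=TOKEN)

# 3. Attach minimal metadata; the record can then be published separately.
metadata = {"metadata": {
    "title": "Example CHIC publication",
    "upload_type": "publication",
    "publication_type": "article",
    "description": "Accepted version deposited for Green OA.",
    "creators": [{"name": "Doe, Jane", "affiliation": "Example University"}],
}}
requests.put(f"{ZENODO}/{dep['id']}", params=TOKEN, json=metadata)
```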
In addition to the scientific publication, OA will also be provided to the
research data required to validate the published results. Although Zenodo
allows the deposit of data as well as publications, the use of
discipline-specific repositories is often a more convenient option since (i)
they have been developed to cover the subject-specific needs and (ii) being
widely used by the community, they facilitate integration with other datasets.
At present, several discipline-specific repositories are under consideration
for the deposit of CHIC datasets. These include:
* Metabolights: a metabolomics cross-platform and cross-species repository maintained by the European Bioinformatics Institute (EMBL-EBI). Metabolights supports the Core Information for Metabolomics Reporting (CIMR) metadata standard and submission of datasets follows the ISA-Tab format, a general purpose framework with which to collect and communicate complex metadata used by a growing number of repositories and publishers.
* Gene Expression Omnibus (GEO), Sequence Read Archive (SRA): two public data repositories at the US National Center for Biotechnology Information (NCBI) suitable for the deposit of RNA-Seq data (GEO) and high-throughput sequencing data (SRA) which are compliant with the Minimum Information about a high-throughput SEQuencing Experiment (MINSEQE) standard.
All data deposited in a discipline-specific repository will also have a record
in Zenodo for the associated publication with a link to the externally
deposited data files. Additionally, Zenodo will be the repository of choice
for those data types for which a disciplinary repository is not available. The
deposited dataset will include all the information needed to interpret and re-
use the data following reporting standards when available (see section 3.3).
These will include: publication file, raw and processed data files (in open or
widely used formats), detailed protocols with information on instruments and
settings used, a codebook for the variables used, and a readme file describing
the files that compose the dataset and the relation between them.
As already mentioned, open or widely used file formats that can be accessed
with open software (or software that is in widespread use) will be the
preferred option for data collection. When the use of proprietary formats is
necessary, the name and version of the software used to generate the file will
be indicated in a readme.txt file included in the dataset.
All data deposited in a repository will be made openly accessible under no
access restrictions other than the embargo period for "Green" OA publications
mentioned above.
## 3.3. Making data interoperable
Promoting data exchange and integration to its full potential requires the use
of standardised data formats, metadata elements, and ontologies that ensure
the reusability of the underlying data. As discussed in section 2, open or
otherwise widely used file formats will be used to collect and share the data
derived from CHIC research activities, thus facilitating data retrieval and
analysis by other users.
With regard to metadata, as with discipline-specific repositories,
discipline-specific metadata schemes broadly accepted by the scientific
community should be the preferred alternative since they have been developed
to cover subject-specific needs. Accordingly, disciplinary repositories often
show compliance
with such specific metadata standards in combination with (recommended)
controlled vocabularies. Metadata standards and ontologies that will be used
to document datasets generated within the CHIC project include:
* Core Information for Metabolomics Reporting (CIMR) (metabolomics data)
* Minimum Information about a high-throughput SEQuencing Experiment (MINSEQE) (RNA-Seq and genome sequence data)
* Minimum Information about a Plant Phenotyping Experiment (MIAPPE) (plant phenotypic data)
* Minimum Information about a Proteomics Experiment (MIAPE) (protein mass spectrometry data)
* Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) (qPCR data)
* Data Documentation Initiative (DDI) (survey data)
* Plant Ontology
* Gene ontology
* OBI ontology
* NCBI taxonomy
There is currently no reporting standard available for CRISPR experiment
metadata. To cover this need, the NIST Genome Editing Consortium is working on
the development of suggested minimal information reporting for public studies
and on the generation of a common lexicon for genome editing. CHIC will follow
the progress of the Genome Editing Consortium on the development of standard
CRISPR metadata.
At the same time, NEWCOTIANA and CHIC partners have initiated a common
dialogue to define the metadata elements that should be collected for each
genome editing experiment in order to facilitate sharing, validation, and
interpretability of the results. A first draft metadata checklist (see Annex
1) covering the whole genome editing workflow has been assembled as a result
of this work. This draft will continue to be refined in future working
discussions involving both projects.
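Purely as a hypothetical illustration of what such a checklist might capture
per experiment (the actual Annex 1 draft is not reproduced here), a record
could look like the following; all field names and values are assumptions
made for illustration only.

```python
# Hypothetical genome editing experiment record; not the Annex 1 checklist.
editing_experiment = {
    "target_species": "Cichorium intybus",
    "target_gene": "hypothetical_gene_id",
    "guide_rna_sequence": "NNNNNNNNNNNNNNNNNNNN",   # placeholder 20-nt spacer
    "nuclease": "CRISPR/Cas9",
    "delivery_method": "protoplast ribonucleoprotein transfection",
    "editing_efficiency_percent": None,             # filled in per experiment
    "off_target_assessment": "pending",
    "protocol_reference": "DOI or SOP identifier",
}
```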
## 3.4. Increase data re-use (through clarifying licences)
Re-use is one of the pillars of FAIR data. Data re-use increases the impact
and visibility of research, maximises transparency and accountability,
promotes the improvement and validation of research methods, stimulates
innovation through new data uses, and saves resources avoiding unnecessary
replications. For data to be reusable it should be in an open or widely-used
file format, well described with rich metadata that meet domain-relevant
community standards, and released under a clear data usage licence. The way
CHIC will approach the first two points has already been discussed in section
2 (file formats) and sections 3.1 and 3.3 (metadata). Regarding licensing, as
a default standard CHIC will share scientific publications and the associated
research data under a Creative Commons Attribution Licence CC-BY whenever
possible. CC-BY does not impose any restriction on access and reuse of the
data; it allows users to copy, distribute, transmit, adapt and make commercial
use of the data with the sole condition that the creator is appropriately
credited. Most data repositories as well as most open access and hybrid
publishers support the use of CC-BY licence.
Data quality checking is the responsibility of the partners involved in generating the dataset and will be supported by a peer-review process at publication. Should errors be detected in already published data, they will be corrected and adequately documented in a new version of the dataset.
# 4\. Allocation of resources
Adequate data management is an integral part of good research practice and as
such it concerns every person involved in the research process. All CHIC
partners have agreed to the general guidelines set up in this DMP and it is
the responsibility of the group leaders to ensure that they are known and
implemented by all members of their research group. For each dataset, the
partner that generates the data is accountable for registering and storing all
data and metadata according to the guidelines of this DMP, applying adequate
back up policies, and sharing all public data through the selected open access
repository. The project coordinator is in addition responsible for the
maintenance of the project website and the Sharepoint hosting service (see
Section 4) for the sharing and storing of CHIC main documents during the
active phase of the project.
As indicated in section 2.2, "Gold" OA publication will be the preferred publication option. Article processing charges for OA publishing were budgeted at the proposal stage and will be covered by the main partner of the publication out of their allocated funds. The estimated cost of open access publication is €2,500. It is not possible at this stage to determine the number of publications that will be produced. Resources for data storage and back-up during the active phase of the project will be provided by the respective partners' institutions (costs included in standard indirect costs). No direct costs for data sharing and long-term preservation are anticipated, given that all the considered data repositories are free of charge.
# 5\. Data security
All CHIC partners have adequate storage capability and back up policies at
their respective institutions that guarantee the safe storage of the generated
research data during the active phase of the project. Additionally, a variety
of platforms are being used for internal data sharing, which also serve to the
purpose of backup storage. All project documents (grant and consortium
agreements, deliverables, meeting minutes, project reports and presentations,
scientific manuscripts) are stored in a shared folder in Sharepoint, a Wageningen University and Research-hosted sharing platform administered by WR that supports access control, back-up and file version control.
Sustainable long-term preservation of the data beyond project completion is
guaranteed by the use of trustworthy repositories such as Zenodo. Zenodo
accessibility principles guarantee that deposited data and metadata will be
retained for the lifetime of the repository, which is currently the lifetime
of the host laboratory CERN, with an experimental programme defined for the
next 20 years at least. Data files and metadata are backed up nightly and
replicated into multiple copies in the online system ensuring file
preservation. Finally, to preserve data authenticity and integrity, all files
are stored along with a MD5 checksum of the file content and are regularly
checked against their checksums to assure that file content remains constant.
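This integrity check can be reproduced locally by any partner. The following minimal sketch (the file name is illustrative) computes the MD5 checksum of a file at deposit time and later verifies that its content has not changed:

```python
import hashlib

def md5_checksum(path, chunk_size=8192):
    """Compute the MD5 checksum of a file, reading it in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

# At deposit time: record the checksum alongside the data file.
recorded = md5_checksum("stakeholder_survey_v1.csv")  # illustrative file name

# At a later audit: recompute and compare to detect silent corruption.
assert md5_checksum("stakeholder_survey_v1.csv") == recorded, "file content changed"
```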
Audio recordings and written notes of stakeholder views, as well as internal reports, will be stored on password-protected servers accessible only to the partner conducting the research tasks.
# 6\. Ethical aspects
In order to comply with the EU General Data Protection Regulation (GDPR)
stakeholders participating in the project will be informed about the purpose,
method, storage, processing, and publication of personal stakeholder data and
data containing stakeholder views and asked for their permission. Stakeholder
data collected in the course of the CHIC project will not include any
sensitive personal data in the meaning of the GDPR. In publicly available
reports, stakeholder views will be presented in an anonymized way.
There are no other ethical or legal issues relevant to the sharing of stakeholder data, nor to the ethics deliverables.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0209_CARBAFIN_761030.md
|
# 2.2 Making data openly accessible
Underlying data of scientific publications produced in the project will be made openly available by default. However, we retain the possibility to partially opt out for individual datasets. Datasets from life cycle assessment and economic analysis cannot be shared (or need to be shared under restrictions), as they have a major impact on the development of business plans for our industrial beneficiaries.
The underlying data of scientific publications and the associated metadata will be made accessible by deposition in the research data and publication repository Zenodo. Zenodo is a certified repository that supports open access but also enables closed access. Access to datasets shared under restriction will be discussed in more detail in the second version of the data management plan. Zenodo accepts data under a variety of licenses in order to be inclusive.
Software tools that can read CSV files (e.g. a spreadsheet application) and SCF files (e.g. a DNA sequence viewer) are needed to access our data. For gene and protein sequence formats, we follow the file format guide currently supported by the Sequence Read Archives (SRA) at NCBI, EBI and DDBJ. Therefore, no additional documentation about the software is needed to access the included data. A minimal sketch of reading such a CSV dataset is given below.
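As an illustration of how little tooling is required, the following minimal sketch (file and column names are hypothetical) reads such a CSV dataset using only the Python standard library:

```python
import csv

# Read a CSV dataset; file and column names are hypothetical.
with open("enzyme_assay_results.csv", newline="") as f:
    reader = csv.DictReader(f)
    rows = list(reader)

# Each row is a plain dict keyed by the column headers, e.g. rows[0]["sample_id"].
print(f"Loaded {len(rows)} records with columns: {reader.fieldnames}")
```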
# 2.3 Making data interoperable
The underlying data of scientific publications produced in our project will be
interoperable, that is allowing data exchange and re-use between researchers,
institutions, organisations, countries, etc. The data will adhere to standards
for formats, as much as possible compliant with available (open) software
applications.
According to the DCC homepage ( _http://www.dcc.ac.uk/resources/metadata-
standards_ ) we will follow data and metadata vocabularies, standards or
methodologies from Biology (in particular from Synthetic Biology, Molecular
Biology, Biochemistry, Biotechnology and Bioprocess engineering) to make our
data interoperable. We will use the STRENDA Guidelines, registered in
_FAIRsharing.org_ , as reference for metadata and standards within our
discipline ( _http://www.beilstein-institut.de/en/projects/strenda/guidelines_
). _FAIRsharing.org_ is a web portal that collects interrelated data
standards, databases, and policies in the life, environmental and biomedical
sciences.
We will be using standard vocabularies for all data types present in our datasets to allow interdisciplinary interoperability. Where it is unavoidable that we use uncommon vocabularies or generate project-specific ones, we will provide mappings to more commonly used ontologies, as sketched below.
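A minimal sketch of such a mapping follows; the project-specific terms are hypothetical, and the target identifiers are shown for illustration only and must be verified against the current ontology releases before use:

```python
# Hypothetical project-specific terms mapped to commonly used identifiers.
# Identifiers are illustrative and must be checked against the current
# ontology releases before use.
VOCABULARY_MAP = {
    "glc": "CHEBI:17234",                       # glucose (ChEBI)
    "cellobiose_phosphorylase": "EC 2.4.1.20",  # IUBMB enzyme nomenclature
}

def to_common_term(project_term):
    """Resolve a project-specific term to a common identifier,
    falling back to the original term if no mapping exists."""
    return VOCABULARY_MAP.get(project_term.lower(), project_term)

print(to_common_term("glc"))  # -> CHEBI:17234
```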
# 2.4 Increase data re-use (through clarifying licences)
To permit the widest re-use possible, we will license the data under the Creative Commons Attribution license CC-BY 4.0. CC-BY 4.0 permits unrestricted use, distribution and reproduction in any medium, provided that the original document is properly cited. It is a machine-readable license available free of charge from _creativecommons.org_.
The underlying data of scientific publications will be made available for re-use once the publication is accepted. Zenodo offers a "Reserve DOI" function, so we can already use the correct DOI when writing the publication. A text field displays the DOI that our record will have once it is published; this neither registers the DOI nor publishes our record. Next to open access publications, Zenodo offers the possibility to upload embargoed, restricted or closed-access publications. When we publish with "green" open access, we will self-archive the publication in Zenodo, taking into account any embargo periods imposed by the journal and linking the deposit to our publications. A sketch of scripting the "Reserve DOI" step via the Zenodo REST API is given below.
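The following minimal sketch, assuming a valid personal access token, creates an empty deposition via the Zenodo REST API and reads the pre-reserved DOI from the response; the exact field names should be checked against the current Zenodo API documentation:

```python
import requests

ACCESS_TOKEN = "..."  # personal access token created in the Zenodo account settings

# Create an empty deposition; Zenodo pre-reserves a DOI for it.
response = requests.post(
    "https://zenodo.org/api/depositions",
    params={"access_token": ACCESS_TOKEN},
    json={},
)
response.raise_for_status()
deposition = response.json()

# The reserved DOI can already be cited in the manuscript before publishing.
print("Reserved DOI:", deposition["metadata"]["prereserve_doi"]["doi"])
```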
According to Zenodo, the data remains re-usable indefinitely. Our data is stored in the CERN Data Centre. Both data files and metadata are kept in multiple online, independent replicas. CERN has considerable knowledge and experience in building and operating large-scale digital repositories, and a commitment to maintain this data centre to collect and store hundreds of petabytes of LHC data as it grows over the next 20 years. In the highly unlikely event that Zenodo has to close operations, they guarantee that they will migrate all content to other suitable repositories; since all uploads have DOIs, all citations and links to Zenodo resources (such as our data) will not be affected. Accordingly, the underlying data of scientific publications produced in our project will remain usable by third parties also after the end of our project under CC-BY 4.0.
# Allocation of resources
According to Zenodo's Terms of Use, content may be uploaded free of charge by those without ready access to an organised data centre. As we do not have an organised data centre available within our consortium, we assume that the costs for making data FAIR in our project are limited to the costs for open access publishing (gold open access).
In any case, costs for open access publishing, as well as costs related to open access to research data, are eligible for reimbursement during the duration of the project as part of the Horizon 2020 grant.
The project manager, together with the General Assembly members, will be responsible for data management in our project.
The resources for long-term preservation will be discussed for the second version of the data management plan. The discussion will include questions on costs and potential value, and on who decides how and what data will be kept and for how long.
# Data security
We will follow the provisions of _help.zenodo.org/features_ for data security (including data recovery as well as secure storage and transfer of sensitive data). At Zenodo, the research output is stored safely for the future in the same cloud infrastructure as research data from CERN's Large Hadron Collider. Zenodo uses CERN's battle-tested repository software Invenio, which is used by some of the world's largest repositories, such as INSPIRE-HEP and the CERN Document Server.
The underlying data of scientific publications, as well as the publications themselves, will be safely stored in the certified research data repository Zenodo for long-term preservation and curation.
# Ethical aspects
Concerning underlying data of scientific publications we do not see any
ethical or legal issues that can have an impact on data sharing. For ethics
reviews see Deliverables D8.1 and D8.2.
If we conduct questionnaires dealing with personal data, we will include informed consent for data sharing and long-term preservation.
# Other issues
We do not make use of other national/funder/sectorial/departmental procedures
for data management at the moment.
Literature
1. Gardossi, L., Poulsen, P.B., Ballesteros, A., Hult, K., Švedas, V.K., Vasić-Rački, Đ., Carrea, G., Magnusson, A., Schmid, A., Wohlgemuth, R., and Halling, P.J. (2010) Guidelines for reporting of biocatalytic reactions. _Trends in Biotechnology_ , 28 (4): 171-180.
2. Tipton, K.F., Armstrong, R.N., Bakker, B.M., Bairoch, A., Cornish-Bowden, A., Halling, P.J., Hofmeyr, J.-H., Leyh, T.S., Kettner, C., Raushel, F.M., Rohwer, J., Schomburg, D., and Steinbeck, C. (2014) Standards for Reporting Enzyme Data: The STRENDA Consortium: What it aims to do and why it should be helpful. _Perspectives in Science_ , 1 (1): 131-137.
3. _STRENDA GUIDELINES - LIST LEVEL 1A. Data required for a complete Description of an Experiment._ 2016, doi:10.3762/strenda.17
4. _STRENDA GUIDELINES LIST LEVEL 1B. Description of Enzyme Activity Data._ 2016, doi:10.3762/strenda.27
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0212_XILforEV_824333.md
|
# EXECUTIVE SUMMARY
This Deliverable describes the technical and organisational measures to be implemented within the XILforEV project for the management of the project results and assets. These results and assets are subject to both internal access by the consortium participants and, in selected cases, open access. The document introduces general information about relevant data, standards and quality assurance methods. Detailed data specifications are formulated for the four use cases being investigated in the XILforEV project. Particular attention is given to procedures for data archiving and preservation, as well as to the data repository.
_Attainment of the objectives and explanation of deviations:_ The project
works related to this Deliverable are being carried out in full compliance
with the XILforEV objectives. There are no deviations from the actions set in
the Grant Agreement.
**KEYWORDS:** Data management, Open Access, Research data.
# GENERAL INFORMATION
## Background
The Data Management Plan, established in the XILforEV project, is based on
internal practices of the consortium organisations and also uses outcomes of
relevant measures, which are efficiently realised by the consortium
coordinator (TUIL) in previous Horizon 2020 projects EVE 1 and ITEAM 2 .
The objectives of the data management are:
* Realise proper access to project results for both consortium participants and target audiences outside of the consortium;
* Ensure easy public search and access to publications, which are directly arising from the research funded by the EU;
* Allow reuse of research results produced in the project to enhance the value of the project outcomes to all potential stakeholders;
* Avoid unnecessary duplication of research activities;
* Guarantee transparency of research process.
## Data Categories
The research and development works of the project will produce three general
categories of analytical and experimental data:
* Publishable raw data;
* Publishable analysed data;
* Data not selected for publication.
### Publishable Raw Data
The digitised data of parameters, which are recorded during the experiments,
are allocated to the category of publishable raw data. The recorded parameters
can include:
* Signals of sensors installed on test rigs, driving simulators and vehicle systems used in experiments;
* Data describing the testing environments, i.e. ambient temperature and moisture;
* State signals from communication and measuring devices used on test equipment.
The publishable raw data are stored in a digital format determined by the corresponding data processing and acquisition systems; the most common formats in this case are MATLAB data files and Microsoft Excel spreadsheet files. The stored data will be supplemented with a "readme" file describing the raw content and the procedures used to collect the raw data (see the sketch below).
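As a minimal sketch of this convention (signal, file and parameter values are illustrative), recorded signals could be written to a MATLAB data file together with a plain-text "readme":

```python
import numpy as np
from scipy.io import savemat

# Illustrative recorded signals: time vector and one brake-pressure channel.
t = np.linspace(0.0, 10.0, 1000)                                     # time [s]
brake_pressure = np.random.default_rng(0).normal(50.0, 2.0, t.size)  # [bar]

# Store the raw data in a MATLAB data file, as produced by the acquisition system.
savemat("uc1_brake_test_001.mat", {"t": t, "brake_pressure": brake_pressure})

# Accompanying "readme" describing the content and the collection procedure.
with open("uc1_brake_test_001_readme.txt", "w") as f:
    f.write(
        "File: uc1_brake_test_001.mat\n"
        "Content: time vector t [s]; brake_pressure [bar], sampled at 100 Hz\n"
        "Procedure: HIL brake test rig, ambient temperature 21 degC\n"
    )
```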
### Publishable Analysed Data
The publishable analysed data may include figures, tables, charts, video
recordings and other relevant visual objects, which are created during the
processing and analysis of raw data. These data can be used for the project
reporting as well as for publications, presentations and other related
dissemination and communication activities.
### Data not Selected for Publication
Some raw and analysed data can be tagged by the consortium as unpublished. First of all, such tagging can be applied in cases predefined by IP/IPR management procedures. Another case is data of an intermediate character used for preliminary work. Nevertheless, data from this category will be screened for quality and made available upon request to potential external users.
## Relevant Regulatory Documents
The consortium uses instructions and recommendations from the corresponding regulatory documents to fulfil the data management cycle during the project lifetime. This concerns detailing the character of the data generated in the project and the linked metadata, as well as the exploitation, sharing, curation and preservation of these data. All these actions are being performed in strict compliance with the following documents:
* ISO/IEC JTC 1/SC 32 - Data management and interchange;
* ISO 9001:2008 - Quality management systems;
* ISO 27001:2013 - Information Security Management Systems;
* Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.
## Quality Assurance and Control
The quality assurance and control measures related to data management are the responsibility of all project beneficiaries and include supervision by the project coordinator. The relevant measures cover the three groups identified next.
_**Measures for quality assurance before data collection:** _
* Definition of the standard(s) for measurements and recording prior to the data collection;
* Definition of the digital format for the data to be collected;
* Specification of units of measurement;
* Definition of required metadata;
* Assignment of responsibility to a person over quality assurance for each test series;
* Design of Experiments (DoE) for each test series (a minimal sketch follows this list);
* Design of a data storage system with sufficient performance;
* Design of a purpose-built database structure for data organization.
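As a minimal sketch of the DoE point above (factor names and levels are illustrative), a full-factorial design over the controlled test parameters can be enumerated as follows:

```python
from itertools import product

# Illustrative test factors and levels for a full-factorial design.
factors = {
    "vehicle_speed_kmh": [30, 60, 90],
    "road_surface": ["dry", "wet"],
    "payload_kg": [0, 300],
}

# Enumerate every combination of factor levels (3 x 2 x 2 = 12 runs).
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for run_id, run in enumerate(runs, start=1):
    print(run_id, run)
```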
_**Measures for quality assurance and control during data collection and
entry:** _
* Calibration of sensors, measuring devices and other relevant instruments to check the precision, bias and scale of measurements;
* Taking multiple measurements and observations in accordance with the established DoE;
* Setting up validation rules and input masks in data entry software;
* Unambiguous labelling of variable and record names;
* Implementation of double entry rule – ensuring that two persons, performing the tests, can independently enter the data;
* Use of reference mechanisms (a relational database) to minimize the number of times the data need to be entered.
_**Measures for quality control during data checking:** _
* Documentation of any modifications to the dataset to avoid duplicate error checking;
* Checking the dataset for missing or irregular data entries to ensure the data completeness;
* Performing statistical summaries with checks for outliers using graphical methods such as probability plots, regression plots and scatterplots (see the sketch after this list);
* Verifying random samples of the digital data against the original data;
* Ensuring the data peer review both by scientific and technical criteria.
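A minimal sketch of the completeness and outlier checks above (file and column names are illustrative) could look as follows:

```python
import pandas as pd

# Load a recorded test series; file and column names are illustrative.
df = pd.read_csv("uc1_brake_test_001.csv")

# Completeness check: report missing entries per column.
print("Missing entries per column:")
print(df.isna().sum())

# Outlier screening with the interquartile-range rule on one signal.
q1, q3 = df["brake_pressure"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["brake_pressure"] < q1 - 1.5 * iqr) |
              (df["brake_pressure"] > q3 + 1.5 * iqr)]
print(f"{len(outliers)} potential outliers flagged for manual review")
```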
# CONTENT OF DATA SETS
The next sections specify the content of the expected data sets for each use case defined in the XILforEV project description.
## Use Case 1 – Brake Blending
Data sets:
* Documentation to testing environment, incl. specification of software, hardware and communication components;
* Documentation to the hardware-in-the-loop brake test rig, the powertrain test rig, the brake dynamometer, and the brake robot;
* Vehicle dynamics model incl. a full vehicle, multi-body dynamics model of the target vehicle and real-time versions of the models for the use in dSPACE environment;
* Programme code of brake blending controller and related control functions;
* Results of validation and testing of models and brake blending controller in XILforEV testing environment, incl. recorded signals of sensors and measurement techniques on test rigs / setups.
Features of data sets:
* Online storage on the project SharePoint server;
* Data collected by the operators of test rigs / setups;
* No personal data and no ethical issues are identified;
* Use in documentation, reports and publications;
* Selected parts of data sets can be shared with external partners or provided for open access;
* Preservation in the archive.
## Use Case 2 – Ride Blending
Data sets:
* Documentation to testing environment, incl. specification of software, hardware and communication components;
* Documentation to the driving simulator and suspension test rig;
* Vehicle dynamics model incl. a full vehicle, multi-body dynamics model of the target vehicle, tyre model, and real-time versions of the models for the use in dSPACE environment;
* Programme code of ride blending controller and related control functions;
* Results of validation and testing of models and ride blending controller in XILforEV testing environment, incl. recorded signals of sensors and measurement techniques on test rigs / setups.
Features of data sets:
* Online storage on the project SharePoint server;
* Data collected by the operators of test rigs / setups;
* Ethical issues can be related to the test persons operating the driving simulator that will be handled in accordance with the procedures stated in the project Deliverables 8.1-8.3;
* Use in documentation, reports and publications;
* Selected parts of data sets can be shared with external partners or provided for open access;
* Preservation in the archive.
## Use Case 3 – Integrated Chassis Control
Data sets:
* Documentation to testing environment, incl. specification of software, hardware and communication components;
* Documentation to the driving simulator, hardware-in-the-loop brake test rig, the powertrain test rig, and suspension test rig;
* Vehicle dynamics model incl. a full vehicle, multi-body dynamics model of the target vehicle, tyre model, and real-time versions of the models for the use in dSPACE environment;
* Programme code of integrated chassis controller and related control functions;
* Results of validation and testing of models and integrated chassis controller in XILforEV testing environment, incl. recorded signals of sensors and measurement techniques on test rigs / setups.
Features of data sets:
* Online storage on the project SharePoint server;
* Data collected by the operators of test rigs / setups;
* Ethical issues can be related to the test persons operating the driving simulator that will be handled in accordance with the procedures stated in the project Deliverables 8.1-8.3;
* Use in documentation, reports and publications;
* Selected parts of data sets can be shared with external partners or provided for open access;
* Preservation in the archive.
## Use Case 4 – Fail-safe and Robustness Study
Data sets:
* Documentation to testing environment, incl. specification of software, hardware and communication components;
* Documentation to the powertrain and chassis component test rigs;
* Models of vehicle subsystems and operational environments, and real-time versions of the models for the use in dSPACE environment;
* Programme code of fail-safe controllers of powertrain and chassis subsystems as well as related control functions;
* Results of validation and testing of models and fail-safe controllers in XILforEV testing environment, incl. recorded signals of sensors and measurement techniques on test rigs / setups.
Features of data sets:
* Online storage on the project SharePoint server;
* Data collected by the operators of test rigs / setups;
* No personal data and no ethical issues are identified;
* Use in documentation, reports and publications;
* Selected parts of data sets can be shared with external partners or provided for open access;
* Preservation in the archive.
# HANDLING OF DATA SETS
## Archiving and Preservation
All data sets will be centrally stored on the secured server established by
the project coordinator TUIL. This server is linked to the project webpage
under the address
_HTTPS://SHAREPOINT.TU-ILMENAU.DE/WEBSITES/KFT/PROJEKTE/XILFOREV/_ , Figure 1.
Figure 1 shows the corresponding folders "Use Case #" in which the respective data sets are stored.
**Figure 1 – Screenshot of the secured consortium area of the XILforEV
project.**
The access to this secure consortium area is organized for the persons
involved in the project through individual login names and passwords. The
login names and passwords are issued by the secure server administrator from
TUIL.
The data sets will be stored on this server at least for five years following
the project end date. The access for this time frame will be ensured for all
registered users as well.
The consortium places no restrictions on participating beneficiaries keeping archives of data sets on their institutional servers, provided that all required data management actions are properly handled.
## Data Sharing
The knowledge sharing outside of the consortium will be realized through two
main instruments:
* The consortium will define a set of documents and reports with the analysis of the project results and assets that will be available for open access on the project website. Most of the project presentations delivered on professional events will be also published on the website for free download.
* The consortium will aim at granting free access to all scientific publications prepared during the project activities. The planned publications will follow the "green" open access model. In addition, presentations of programme activities and related results will be published on the consortium website.
The publishable raw and analysed data can be reused upon request in exchange for authorship and/or the establishment of a formal collaboration.
**Figure 2 – Screenshot of the XILforEV webpage with the publishable project
results.**
The data with open access will be available through two channels: the project
webpage and the project area on the ResearchGate portal. The data on the
project webpage are available through the link < _HTTPS://XIL.CLOUD/RESULTS/_
> , Figure 2. The project area on the ResearchGate portal is created and
accessible through the link
< _HTTPS://WWW.RESEARCHGATE.NET/PROJECT/XILFOREV_ > , Figure 3.
**Figure 3 – Screenshot of the XILforEV project area on the ResearchGate
portal.**
Individual consortium participants can share any type of data linked to the project, such as articles, conference papers, presentations and posters. Specifically for the XILforEV project area on the ResearchGate portal, there is the possibility to preserve the data by means of DOI codification. In the "Digital Object Identifier" field it is possible either to assign a new DOI, automatically generated by the ResearchGate service, or to use the original code in order to allow other users to easily and unambiguously cite the uploaded file.
# CLOSING REMARKS
The data management plan presented in this document has a living character and will be regularly updated to address the emergence of new data sets, changes in regulatory guidelines and other relevant issues. In the case of changes, the content of the data management plan will be adapted, and the applied actions will be reported in the project dissemination reports (Deliverables 6.3 and 6.4).
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0214_SERENA_767561.md
|
# Executive Summary
The **SERENA** project participates in the ORDP. As such, the current document
describes the initial version of its Data Management Plan as this was
developed through the first period M01-M06 of the project.
The deliverable outlines the handling of research data that will be generated
during and after the lifetime of **SERENA** . The possible ways of archiving
and management of the data through available web-based platforms will be
investigated. Furthermore, online databases for storing research data have
been examined and the most suitable was selected to be used both by the
consortium partners as well as from interested people/organizations from
outside the project.
# 1 Introduction
Computer applications have multiple data sources defined depending on the
supported functionalities and their purpose. Source data constitute a valuable
source of information. Data sources can be a database, a dataset, a
spreadsheet or even hardcoded data.
Although raw data, often referred to as source data, have the potential to become information (i.e. useful digital information for a specific application and purpose), this requires selective extraction, organization, analysis and formatting for presentation. Once processed, data may reveal valuable information and characteristics of their origin, or even enable predictive analytics, forecasting, for example, future trends. Thus, it becomes clear that the acquisition, preservation and proper management of data may enable more efficient data-driven decision-making for companies, forecasting, analysis of their current practices and identification of potential bottlenecks, as well as the verification of published scientific and commercial research results.
**SERENA** tackles the acquisition of raw machine/sensor data and their analysis to enable predictive analytics aimed at forecasting potential equipment failures. Such identification may trigger appropriate predictive maintenance operations, while early failure identification may additionally result in more effective scheduling of the production operation with respect to a predictive maintenance plan, thus reducing the overall production cost.
During the lifetime of the **SERENA** project, various types of raw data will
be generated through the different pilot cases. These data will contain both
machine and sensor data. In addition, datasets will be generated through the
intermediate processing steps of the **SERENA** systems such as KPIs for
machine’s condition evaluation and/or training datasets for the machine
learning algorithms.
## 1.1 Purpose of the DMP
A DMP typically contains information on how data are created, outlining the steps for sharing and preserving them. In the context of H2020, a DMP details what kind of data the project will generate, whether and how they will be exploited or made accessible for verification and reuse, and how they will be managed and preserved [1].
This particular document has been created in order to present and analyze the first steps towards the creation of the **SERENA** project DMP. An investigation of the data needed for the various sub-systems developed within the project is ongoing, and their formats and prerequisites are under examination.
In addition, the deliverable focuses on the available web-based solutions for archiving, accessing and preserving the project's publicly available data. At this point, it should be stated that data to be made available to the public will first be examined for confidentiality issues and, if possible, made anonymous.
## 1.2 Objectives and tasks of WP7
WP7 aims at creating impact by disseminating the project results as widely as possible, making them known to all relevant stakeholders while maximizing the exploitation of the project's results to the benefit of the **SERENA** partners.
WP7 is appropriately structured into tasks that focus on achieving the above
objectives:
* Task 7.1 focuses on the establishment of the project's web portal intended for communication with the public, in order to effectively disseminate the project's results.
* Task 7.2 covers the activities concerning the dissemination of project results to the scientific community and industry.
* Task 7.3 focuses on the exploitation of the project's results with respect to the background and foreground IPR policies and the respective articles of the Grant Agreement.
The consortium of the **SERENA** project acknowledges that impact may be
created through knowledge circulation and innovation. Making data publicly
available is recognized by the members of the consortium as well as by the
European Commission as an effective approach towards innovation in the public
and private sectors. As a result, an approach for the DMP of the **SERENA** system that will be introduced and developed during the project is presented in the following sections. The confidentiality of the data will also be examined, as well as the prerequisites for archiving, anonymizing and preserving them.
## 1.3 Background of the DMP
The DMP specifications are governed by the “Open access to research data”
article (Article 29.2) of the AGA [2]. As such, it defines the guidelines and rules on open access to peer-reviewed scientific publications and research data that all beneficiaries have to follow in projects funded or co-funded under the Horizon 2020 programme.
In the context of research and innovation, OA means providing online access to scientific information that is free of charge and reusable [3]. Scientific information can be:
1. Research data, meaning data used in publications, curated data and/or raw machine/sensor data.
2. Peer-reviewed scientific articles which have been published in a journal.
1.3.1 Open access to peer-reviewed journals

Open access provided by journals is called "gold" open access, while open access delivered by repositories is called "green" open access. Both terms are used by the OA community and refer to how OA is implemented: Gold stands for publications made available directly by the publisher, while Green means that a version is available somewhere else, such as a repository. However, there are several dimensions to OA, including the following:

* Reader rights
* Reuse rights
* Copyrights
* Author posting rights
* Machine readability
* Publishing costs
* Peer review
In both cases, open access to publications and/or research data is a decision of the grant beneficiaries and not an obligation. The main approach towards ensuring OA to research data and publications in the context of the **SERENA** project is illustrated in Figure 1, which has been adapted from [3].

**Figure 1: SERENA OA approach for data sets and publications**
1.3.2 Open access to research data
Apart from publishing in an open access journal, self-archiving in an institutional repository such as INDIGO [4], a repository supported by the EC such as ZENODO [5], or another repository such as those listed in the re3data registry [6], is an option for making something publicly available. In fact, making data publicly available is closely related to making science open, which may enable the following benefits:

1. Effective scientific practices include a level of communicating the evidence and validating the results.
2. Open data practices have enabled breakthroughs in certain areas of research, such as crystallography, Earth observation, DNA sequencing and AI, especially where data could be reused.
3. As a result, open data may accelerate discovery through the reuse of data from the academic system and beyond.
# 2 Guiding principles
This deliverable is a living document, which will be updated regularly during
the lifetime of the project. The intention of the DMP is to describe numerical
models and/or datasets collected or created within **SERENA** during the
runtime of the project following the guiding principles of Annex 1 as well as
of the FAIR original policies [7].
Since the project started in October 2017, no dataset had been generated or collected at the time of compiling this deliverable. The datasets to be made publicly available will deliver information considering the following:
* **Dataset reference and name** : Identifier for the data set to be produced. In order to be able to identify and distinguish each data set, unique object identifiers will be assigned.
* **Dataset description** : Descriptions of the data that will be generated or collected, the description element includes its types (text, spreadsheets, software, models, images, movies, audio, etc.), source (human observation, laboratory, field instruments, experiments, simulations, compilations, etc.), volume (volume of data, number of files, etc.), data and file formats (non-proprietary formats, used within community).
* **Standards and metadata** : Reference to existing suitable standards of the discipline, such as Dublin Core. If these do not exist, an outline on how and what metadata will be created. Metadata helps to categorize, understand and interpret data and may provide details about experimental setup as well as facilitate identification and discovery of new data. Metadata also tunes the data that is suggested to users.
* **Data sharing** : Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be wide open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating, in particular, the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g.
ethical, rules of personal data, intellectual property, commercial, privacy-
related, security-related).
* **Archiving and preservation** (including storage and backup): Description of the procedures that will be put in place for the long-term preservation of the data. An indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered.
The information listed above reflects the current concept and design of the individual work packages. The information follows a specific template and will be updated by the project partners responsible for the different datasets to be created. With respect to the FAIR data principles [8], an initial version of a dataset template to be used for making data FAIR in an automated manner is included in Appendix A: XML template, while a description of each field is provided in the following section.
# 3 Data Management related to SERENA
The **SERENA** consortium, recognizing the importance of making research data available and easily reusable, is participating in the Open Research Data Pilot in Horizon 2020. As such, and with respect to section "2.2.5 Management of Data" of the DoA, observational data consisting of sensor and machine recordings have started to be collected, but are not available at the time of compiling this document. As a result, the data format is constrained to the raw data format of each source. After the collection of these datasets from their sources, the conversion to a suitable data format will be defined along with the appropriate sharing format. To facilitate the retrieval and reuse of the related datasets, appropriate metadata values will be defined and integrated, after resolving any confidentiality issues that may be raised by the data provider; anonymization approaches will be considered to this end. Datasets selected to become publicly available will follow the dataset format presented in section 3.2 of this document. Apart from sensor data, the consortium will evaluate, during the development stage of the several **SERENA** components, making additional experimental data publicly available through the channel described in the following section.
## 3.1 DMP Platforms introduction and documentation
For the **SERENA** project, the Zenodo platform has been selected for the data
which will be decided by the members of the consortium to become publicly
available. All research outputs from the entire scientific field can be stored
in the particular platform, such as publications, posters, presentations,
images and videos/audio.
A first trial account for **SERENA** project purposes was created in Zenodo. After the profile is registered and the account is activated, the user can easily upload and manage data files. The profile serves as an example in order to demonstrate how the platform can be set up for the needs of the project. A space, or community, for the **SERENA** project has been established, named **SERENA** Data, under the following link: _https://zenodo.org/communities/serena/edit/_.

One of the main features the platform offers is the creation of the aforementioned communities. A community provides dedicated storage space for a defined entity, which could be anything from a research project to any other scientific activity that requires data storage for archiving and reuse purposes (Figure 2).
**Figure 2: SERENA project community creation in ZENODO**
After the creation of the community, the creator or administrator may access
it and proceed to any of the following options:
1. view the uploaded contents,
2. manage them, and
3. export the datasets
Moreover, any user with access to the community link may either search and download content or upload new datasets. Uploading new datasets requires creating a new account or using an existing one from GitHub or ORCID. No registration is required to download pre-existing files.
Furthermore, when uploading, the user has the option to establish the access rights of the files. Four types of access rights can be selected, depending on the confidentiality of the data, as depicted in Figure 3. The license type can be configured in the relevant tab, and funding-related information can also be provided (Figure 3).

**Figure 3: Log in screen, access rights and license options**
Two example files have been uploaded to the **SERENA** Data community, for which the Creative Commons Attribution-ShareAlike 4.0 license has been selected. The license type can be reconfigured depending on the terms of each suggested license and the confidentiality level of the data (Figure 4).
**Figure 4: SERENA data community uploaded test files**
## 3.2 Dataset template description
As mentioned in section 2, an initial dataset template in the form of XML has been created for storing data. The suggested XML format could in the future automate the upload of data to ZENODO through a mechanism that consumes the XML and takes all the required information from its elements. Such a mechanism could make the upload and manipulation of data very efficient and will be investigated in the future. A short description of the main data field elements included in the template is provided in the table below; a sketch of generating such an instance programmatically follows the table.
**Table 1: XML elements**
<table>
<tr>
<th>
**SERENA_subject**
</th>
<th>
The root name of each datasets referring to the **SERENA** community
</th> </tr>
<tr>
<td>
**datasetID**
</td>
<td>
A unique identifier of the dataset
</td> </tr>
<tr>
<td>
**datasetDescription**
</td>
<td>
A textual description of the dataset
</td> </tr>
<tr>
<td>
**sharingOptions**
</td>
<td>
It included the sharing options of the **SERENA** subject, embargo periods,
licenses, etc.
</td> </tr>
<tr>
<td>
**origin**
</td>
<td>
It defines the main source of the dataset, such as machine name
</td> </tr>
<tr>
<td>
**volume**
</td>
<td>
It includes the size of the dataset in MBs or GBs
</td> </tr>
<tr>
<td>
**Date**
</td>
<td>
The date element includes the initial upload and any modification date.
Furthermore, it contains a reference element which can be further linked to
any other element of the XML.
</td> </tr>
<tr>
<td>
**contents**
</td>
<td>
Under the content element, multiple elements may be included such as images,
videos, documents (doc, docx, docm, pdf, ppt, etc.) as well as raw data either
as plain text or in another format such as odt.
</td> </tr>
<tr>
<td>
**standards**
</td>
<td>
This field defines any incorporated mechanism for encoding the specific
dataset, along with the organization and the description of the standard.
</td> </tr>
<tr>
<td>
**metadata**
</td>
<td>
The metadata element contains additional information over the dataset
including the total number of downloads, the times that the dataset has been
parsed, the ranking of the **SERENA** subject as well as the last time it was
updated.
</td> </tr> </table>
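To illustrate how a dataset record following this template could be produced programmatically, the minimal sketch below assembles an instance with Python's standard library; all element values, and the child elements of **Date**, are placeholders:

```python
import xml.etree.ElementTree as ET

# Build a minimal instance of the SERENA dataset template; all values are
# placeholders, and the child elements of "Date" are an assumption.
root = ET.Element("SERENA_subject")
ET.SubElement(root, "datasetID").text = "SERENA-DS-0001"
ET.SubElement(root, "datasetDescription").text = "Raw spindle vibration recordings"
ET.SubElement(root, "sharingOptions").text = "CC BY-SA 4.0, no embargo"
ET.SubElement(root, "origin").text = "Milling machine M1, accelerometer A1"
ET.SubElement(root, "volume").text = "350 MB"
date = ET.SubElement(root, "Date")
ET.SubElement(date, "uploaded").text = "2018-06-01"
ET.SubElement(date, "modified").text = "2018-06-01"
ET.SubElement(root, "contents").text = "vibration_2018-06-01.csv"
ET.SubElement(root, "standards").text = "ISO 13373 (condition monitoring)"
ET.SubElement(root, "metadata").text = "downloads=0; parsed=0"

ET.ElementTree(root).write("serena_dataset.xml", encoding="utf-8",
                           xml_declaration=True)
```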
# 4 Conclusions
In conclusion, the requirements imposed on **SERENA** with regard to granting OA to research data have been discussed. The online platform adopted for archiving and preserving research data under the guiding principles of Annex 1 has been described. Additionally, the first steps towards populating the newly created **SERENA** data community have been taken by means of two test files. The partner responsible for archiving the data in the proposed online platform will manage the timely update of the aforementioned tables so that the consortium is kept up to date on the project outcomes.

This DMP also includes an XML schema for uploading, in a formatted and structured manner, the datasets that are intended to become publicly available. Last but not least, OA regarding publications has been discussed; however, for publications, papers or deliverables, as well as for data that are not made anonymous or are confidential, the project portal will be used.

At this point, the **SERENA** consortium has initiated the process of collecting data from the pilot cases. However, this process, which must also consider the confidentiality policies and data sharing restrictions of each company, will require additional time. In the next stages of the project, and subject to the decision of the responsible companies, datasets consisting primarily of sensor data may be uploaded and managed by the responsible partners according to the guidelines described in this document.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0215_PROCESS_777533.md
|
### Executive Summary
This report has been created to support all PROCESS team members in their
daily work for PROCESS. The specific purpose of this document is to complement
quality related aspects of the key contractual documents of the project:
* The EC Grant Agreement, including the Annex 1 "Description of Action" (DoA)
* PROCESS Consortium Agreement (CA)
From the practical point of view, these documents primarily establish the contents of the project deliverables and the schedule for their delivery to the Commission (Annex 1), as well as the structure and the decision-making processes of the PROCESS project organisation. The quality plan complements these two documents by describing in more detail the processes used to ensure that the project outputs are of the highest possible quality. Quality in this context refers to the outputs being accurate and fit for the purposes stated explicitly in the DoA or implied in informal communications (such as presentations), and complying with certain technical and formal characteristics (e.g. using the document template). The quality assessment will thus provide the foundations for successful dissemination and exploitation activities.
This document contains the descriptions of the quality management processes
and responsibilities of the different project organisations that provide
additional guidance on the implementation of the clauses in the DoA and GA.
These include:
* Release procedure of deliverables and other project outputs carrying the PROCESS brand
* Communication practices and tools
* Software quality approach
* Management of IPR-issues
The Data Management Plan (DMP) will present the overall goals, background and constraints the project needs to take into account when generating and publishing reusable datasets. The multifaceted nature of the project and its stakeholders is discussed in some detail before presenting the potential data assets the project will generate or be responsible for. The use cases of the project are largely based on providing tools aimed at making order-of-magnitude improvements in the efficiency and convenience of extracting value from existing data assets. For this reason, the data management plan includes a relatively detailed description of the heuristics for determining whether the project should publish an actual derived dataset based on already published data sources or just publish the algorithm and tools for replicating the processing steps.

The DMP will also include a short summary of the potential sources of reusable datasets emerging from each of the use cases.
### 1 Quality Plan
#### 1.1 Summary of the quality-related processes in the DoA
The project management structure is based on the following three components:
* Strategic decisions: General Assembly (GA) with all partners represented
* Innovation management, with innovation manager monitoring any issues arising from work packages WP2 (Motivation, Future Requirements, and Emerging Opportunities) and WP9 (Dissemination, Engagement and Exploitation)
* Day-to-day coordination: Work package leaders coordinating the work within the work packages and exchanging information with the project coordinator and other WP leaders through the project executive committee. WP leaders will have autonomy in deciding on the approaches to be used within the work package, as long as they can support other WP leaders and the project coordinator in their tasks.
All of these groups will use the project Wiki (Confluence tool discussed in
the next chapter) for circulating agendas and to record the minutes of the
project meetings. The meetings of these three management structures have the
following cycle:
* GA meetings: three times per year at the minimum,
* Day-to-day coordination: weekly Project Executive Committee (PEC) teleconferences,
* Innovation management: status review in the PEC meetings, in-depth analysis on demand and ad hoc contacts between WP2 and WP9 leaders when a need arises.
#### 1.2 Summary of the quality-related processes in the Consortium Agreement
The PROCESS Consortium Agreement is based on the DESCA model and, as such, does not have a direct bearing on project quality management. It includes certain provisions related to the timing of meeting invitations and the finalisation of agendas, both for ordinary and extraordinary meetings. Any partner may call for an extraordinary meeting.
#### 1.3 Tools used by the project
##### 1.3.1 Collaborative online working spaces - Confluence and GitLab
The project uses a _Confluence collaboration tool_ installed at LRZ as an
online collaboration space. Confluence is a Wiki-style system that supports
collaborative editing of pages and their relationships with each other. It
provides fine-grained access control for the content and advanced
collaboration mechanisms that allow users to subscribe to notifications (for
example page edits), assign tasks and leave comments on the pages.
Confluence is used as the main hub collecting links to all project internal
tools, documentation and outputs. The deliverables are edited primarily using
the confluence system to minimise the barriers for contributions and the
additional workload needed for manual integration of contributions from
multiple sources.
The software developed by the project is stored in the GitLab installation provided by LRZ as a service for LMU ( _gitlab.lrz.de_ ). This mature, widely used version management system supports well-defined software development processes and facilitates uptake by other developer communities using GitLab by providing a familiar interface for the published software. The details of the software development process are described in some detail in the section _"Software Quality Assurance"_ in this document.
##### 1.3.2 Communication tools
The communication tools range from mailing lists (one for each work package, a WP leaders' list, and a list containing all members of any of the PROCESS lists) to conference call systems. The mailing lists are listed in the Confluence system, and new ones can be added if the need arises.

The primary conference calling system used by PROCESS is GoToMeeting ( _https://www.gotomeeting.com/_ ), which was deemed to provide the best balance of ease-of-use, support for multiple platforms and features (e.g. screen sharing and chat functionalities that are not available in traditional conference call systems) while complying with the IT security policies of all partners.
##### 1.3.3 Meeting practices
The basic schedule of the meetings is already defined in the Description of
Action (DoA), Annex 1 of the Grant Agreement. The minutes of the meetings are
stored on the confluence system, linked to a common "Meetings" page. This
provides a common repository and a way to track any corrections made to the
meeting minutes as changes are logged (timestamp and username).
#### 1.4 Deliverable process
The deliverable process was discussed in the kick-off meeting and resulted in
a simple, straightforward approach documented on the Confluence page (Quality
Assurance process). The process agreed is as follows:
* Every official piece of paper (e.g. deliverable) will be sent to the Executive Board two weeks before official deadline ( [email protected]_ )
* Executive Board has one week to suggest changes/improvements
* No reaction means silent consent
* One week left in order to include changes or improve on document for authors
* The final version has to be circulated to all project members after being published ( [email protected]_ )
This basic mechanism can be refined during the project lifetime if needed (e.g. to accommodate urgent requests from parties the project collaborates with). The main quality-related issue is the tacit approval of deliverables: in a multifaceted project consisting of platform development and highly autonomous use case pilots, waiting for explicit approval from all members would increase the risk of delays while not necessarily improving the coverage of the review process.
#### 1.5 Software quality assurance
The foundations of the software quality assurance are the guidelines and best practices documented by NLeSC in their internal software development guide (accessible at _https://nlesc.gitbooks.io/guide/content/_ ). This approach is a natural choice, since NLeSC is responsible for WP8 (Validation) and has extensive experience in applying the guide in other NLeSC projects. The guide is also a so-called "living document" that is continuously updated and refined, making it easy to incorporate lessons learned from the PROCESS work in a way that automatically benefits a larger group of projects.

The scope of the guide extends beyond software quality aspects into areas covered by other PROCESS documents (e.g. publishing of results) and covers some details that need to be adjusted (e.g. the exact repository used for the software). In the PROCESS context, other PROCESS documents naturally take precedence; where there is a danger of misunderstanding, the exact approach is documented in the project Wiki (described earlier in this section).
Some of the key principles stemming from the NLeSC guide are:
1. In case of doing proof-of-concept/prototyping work that doesn't comply with the software development process, state this explicitly in all the communications
2. Version control - apply consistent practices from the beginning of the project
3. Arrange formal code reviews as part of the development process
4. Automate testing as much as possible (a minimal sketch follows this list)
5. Apply standards and language-specific implementation guides where available
6. Do an in-depth assessment of the IPR-related issues at least in two stages:
1. Finalising the design of the software
2. Before making software publicly available
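As a minimal illustration of principle 4, the sketch below shows a pytest-style unit test for a hypothetical pipeline function; such tests can be executed automatically on every commit via the GitLab installation mentioned above:

```python
# test_pipeline.py -- executed automatically with `pytest`, e.g. on every commit.
def normalise(values):
    """Hypothetical pipeline step: scale a list of numbers to the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def test_normalise_bounds():
    result = normalise([2.0, 4.0, 6.0])
    assert min(result) == 0.0 and max(result) == 1.0

def test_normalise_preserves_order():
    assert normalise([1.0, 3.0, 2.0]) == [0.0, 1.0, 0.5]
```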
The implementation of these approaches will be reviewed in the executive board
meetings, based on information collected by the WP8 leader.
#### 1.6 Dissemination and exploitation quality issues
##### 1.6.1 DoA dissemination aspects
The DOA includes a list of potential dissemination channels and KPIs that
project partners identified at the time of writing the proposal. These will be
reviewed and complemented during the project lifetime, with the first update
documented in the deliverables D9.1 ("Initial DEP and market research
Report"). The dissemination-related Key Performance Indicators (KPIs) are
defined as follows:
_Table 1 Dissemination-related Key Performance Indicators_
<table>
<tr>
<th>
**Target area**
</th>
<th>
**Indicator**
</th>
<th>
**Expected progress**
**(cumulative numbers unless otherwise stated)**
</th> </tr>
<tr>
<th>
**After M12**
</th>
<th>
**After M24**
</th>
<th>
**After M36**
</th> </tr>
<tr>
<td>
Scientific
</td>
<td>
Number of publications, talks, presentations in conferences and workshops
</td>
<td>
4
</td>
<td>
12
</td>
<td>
20
</td> </tr>
<tr>
<td>
Scientific
</td>
<td>
Number of lectures, courses or training events (including extreme scaling
workshops)
</td>
<td>
2
</td>
<td>
8
</td>
<td>
16
</td> </tr>
<tr>
<td>
Other projects
</td>
<td>
Number of meetings with other project presence (either hosted or participated)
</td>
<td>
5
</td>
<td>
12
</td>
<td>
20
</td> </tr>
<tr>
<td>
Website
</td>
<td>
Unique monthly visitors (best threemonth average)
</td>
<td>
200
</td>
<td>
400
</td>
<td>
600
</td> </tr>
<tr>
<td>
Website
</td>
<td>
Returning monthly visitors (best threemonth average)
</td>
<td>
50
</td>
<td>
100
</td>
<td>
150
</td> </tr>
<tr>
<td>
Press
</td>
<td>
Number of mentions in paper press, online media, TV/Radio
</td>
<td>
4
</td>
<td>
12
</td>
<td>
30
</td> </tr>
<tr>
<td>
Social media
</td>
<td>
Number of followers/friends on social media networks (across all platforms)
</td>
<td>
80
</td>
<td>
150
</td>
<td>
450
</td> </tr>
<tr>
<td>
Developers
</td>
<td>
Monthly downloads of technical documentation: white papers, architecture
descriptions or software releases (best three-month average)
</td>
<td>
50
</td>
<td>
150
</td>
<td>
200
</td> </tr> </table>
From the quality perspective, the scientific goals require balancing the type
of output against its impact: for example, a talk or a presentation in a small
workshop that reaches an ideal niche audience can be of much higher value than
a publication that is perhaps more prestigious on the surface but does not
reach a similarly targeted audience. The concrete approaches to address this
challenge will be discussed in D9.1, and in case they require changes to
the overall quality processes of the project, this deliverable will be updated
to reflect the new practices.
##### 1.6.2 Internal guidelines
The project has an internal guideline document complementing the plans
presented in the DoA and providing rapid guidance to supplement the procedures
and methods presented in the previous chapters. The document contains
reflections, best practices and templates for identifying, refining and
promoting project success stories. This covers both direct activities of the
project team and activities that support dissemination and exploitation
indirectly, such as forming alliances with entities that have synergies with
PROCESS goals, building and managing communities and so on.
#### 1.7 IPR-related quality assurance issues
PROCESS needs to take IPR issues into account in several parts of its
activities:
* Publishing the software solutions through the GitLab installation
* Integrating software components with other open source solutions
* Publishing derived datasets (to ensure that the constraints of the licence under which the original dataset was published are respected)
* Submitting publications to scientific journals (to ensure at least green open access)
* Preparing presentation material (e.g. ensuring that photographs used as an illustration do not infringe licensing terms)
* Dealing with potential infringement of IPR generated by the project
Hence, the IPR issues form a part of the Software Quality Assurance and the
dissemination and exploitation activities, as well as an integral part of the
Data Management Plan of the project. Thus several groups and individuals deal
with these issues and need to act with a relatively high degree of autonomy.
The overall coordination of the IPR issues is the responsibility of the
Innovation Manager, who will ensure that the relevant IPR-related information
is made available to everyone involved in the day-to-day IPR management.
#### 1.8 Privacy issues
While the datasets used by the use cases and pilots are not planned to include
data that would have privacy issues, any changes to this practice need to be
reviewed by the Innovation Manager. The work package leader of WP9 will ensure
that any contact information collected will be used only for the purposes the
consent was obtained for (and in a way that is compliant with GDPR). WP9
leader will report on the privacy issues in the PEC meetings at the minimum
twice a year.
### 2 Data Management Plan
#### 2.1 Introduction
The requirements for the Data Management Plan (DMP) are laid out in grant
agreement (GA) and supporting documentation provided by the EC. The GA states
that:
_"Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:_
_(a) deposit in a research data repository and take measures to make it
possible for third parties to access, mine, exploit, reproduce and disseminate
— free of charge for any user — the following:_
1. _the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;_
2. _other data, including associated metadata, as specified and within the deadlines laid down in the data management plan"_
The prerequisites for complying with these requirements include:
* Identifying the generated data that could form the basis of a reusable data asset
* Identifying and securing access to optimal repositories for long-term preservation of the data
* Reviewing and refining the metadata so that it provides information that is relevant and understandable also when separated from the PROCESS project context. This is important already for project-internal use, as the data assets of the PROCESS project are multi-disciplinary in nature.
* Mapping the data to publications made by the project
* Having the necessary _due diligence_ processes in place to ensure that publication of data will not - directly or indirectly - raise any additional compliance issues.
Fulfilling these requirements in a multifaceted project such as PROCESS
requires a two-stage approach: creating an initial data management plan
describing the potential reusable data assets that the project can generate
during its lifetime, together with the common principles used to choose
optimal approaches for making them reusable in the longer term. The initial
data management plan will be refined during the project lifetime as the nature
of the data assets generated becomes clearer. However, it should be noted
that due to the interdisciplinary nature of the project, it is likely that the
project will generate several data management plans to match the specific
requirements and community conventions of each of the disciplines involved.
Maximising the potential for reuse will also depend on successful
identification of the potential secondary user communities, as this is a
prerequisite for successful reviewing and refining of metadata specifications
and identification of the optimal repositories that are to be used for storing
the data generated during the PROCESS lifetime.
One of the common characteristics of the PROCESS use cases is that they do not
do primary data collection themselves. Instead, they will generate data that
is either based on publicly available datasets or (especially in the case of
UC#4, see next section) consists of simulation results based on statistical
distributions of actual, private datasets that are used as background for the
project work.
#### 2.2 Background
PROCESS is a project delivering a comprehensive set of mature service
prototypes and tools specially developed to enable extreme scale data
processing in both scientific research and advanced industry settings. These
service prototypes are validated by representatives of the communities around
the five use cases:
* UC#1: Exascale learning on medical image data
* UC#2: Square Kilometre Array/LOFAR
* UC#3: Supporting innovation based on global disaster risk data/UNISDR
* UC#4: Ancillary pricing for airline revenue management
* UC#5: Agricultural analysis based on Copernicus data
From the data management perspective, each of these five use cases presents
challenges that are complementary to each other, and each has a different
potential for direct generation of exploitable data assets. This mapping is
presented in the table below:
_Table 2 Key challenges of use cases_
<table>
<tr>
<th>
**Use case**
</th>
<th>
**Key challenge**
</th>
<th>
**Type of reusable data asset**
</th> </tr>
<tr>
<td>
UC#1
</td>
<td>
Machine learning using massive, public datasets; exploitation requires high
degree of privacy
</td>
<td>
More challenging datasets based on the published ones (e.g. with noise or
artefacts simulating mistakes made during the scanning of a tissue slide,
rotation of regions of interest etc.)
</td> </tr>
<tr>
<td>
UC#2
</td>
<td>
Extreme volume of data (LOFAR reduced data set 5-7PB per year, SKA centrally
processed data rate: 160Gbps)
</td>
<td>
For the most part the data assets will remain in LOFAR LTA (long term
archive), however a disk copy of test observation could be useful for software
testing and validation
</td> </tr>
<tr>
<td>
UC#3
</td>
<td>
Usability of extreme scale tools to support emerging big data user
communities: The UNISDR Global Assessment Report (GAR) datasets have been made
publicly available for non-commercial use since early 2017. The process to be
used for the 2019 edition will be fundamentally different, with a considerably larger
group of experts with heterogeneous and evolving data curation practices
involved in the data production and curation.
</td>
<td>
The 2015 and 2017 GAR datasets and the results of the CIMA showcase.
</td> </tr>
<tr>
<td>
UC#4
</td>
<td>
Very large datasets, extreme responsiveness requirements, high financial
risks/potential rewards; exploitation requires demonstrating high degree of
security and auditability of the PROCESS solutions.
</td>
<td>
Tools, documentation and parameter files for generating simulated transaction
datasets
</td> </tr>
<tr>
<td>
UC#5
</td>
<td>
Support wide range of uses of a very large dataset of satellite images
(growing at the rate of 7.5PB per month)
</td>
<td>
Tools, documentation and parameter files for accessing Copernicus data,
possibly specialised derived sets of data (e.g. time series of specific
location)
</td> </tr> </table>
All these use cases have distinct communities, practices and
documentation/metadata conventions, thus any component that can be used as
part of all five demonstrators can be considered a proven, generalizable data
management component with very high exploitation and uptake potential.
#### 2.3 Potential Data assets
The potential reusable datasets will emerge from the following primary
sources:
* The work focused on the use cases and supporting the communities around it
* The work on the general purpose exascale data solution that supports the use cases.
The use case-related data assets will almost certainly represent the majority
of the assets that are used. The technical platform development may also
produce tools and technologies that are e.g. used for testing, benchmarking or
validation of the PROCESS solution. The primary data management approach will
be based on the software quality process, leveraging as much as possible the
metadata and repository structure used for the software releases, and linking
the services storing the physical datasets to the PROCESS software repository.
#### 2.4 Common approaches to data management
As the foundation of development in all five use cases is the use of existing
datasets, the decision to store and publish a derived set is based on an
assessment of the potential value this derived dataset might represent for
other users. The assessment is currently based on the following abstract
"checklist" that will be refined and formalised during the project lifetime
based on the experiences gained in its application:
1. Would publishing the dataset raise potential privacy issues (e.g. allowing deanonymising subjects)?
2. Does the license of the original dataset allow publishing a derived set (IPR)?
3. Does publishing the dataset lead to potential savings (in terms of time, computational resources etc.) when compared to re-generating the derived set?
4. Does the project need to keep a derived dataset already for its internal use (e.g. for testing, benchmarking, validation)?
a) Would these datasets be needed by third parties to fully validate correct
behaviour of PROCESS tools?
5. Can a suitable, managed repository (i.e. one that would facilitate discovery of the data asset by its intended users) be found that is willing to host the dataset?
6. If not, can the long-term commitments needed for formally publishing a dataset be met by the partners?
Regarding the last point, LMU has secured storage space for 20TB of raw data
at LRZ, with a minimum commitment of providing managed, high-availability
access for at least three years after the end of the project. It is assumed
that if the datasets are used, this time period can be extended, or the
dataset migrated to a community-specific data repository.
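To make the checklist above easier to apply consistently, it could be encoded as an explicit decision aid along the following lines. This is a sketch only: the field names and the simple pass/fail logic are assumptions about how the six questions combine, not a formalised project policy.

```python
# A sketch encoding the six checklist questions; illustrative, not project policy.
from dataclasses import dataclass

@dataclass
class DatasetAssessment:
    raises_privacy_issues: bool        # Q1: e.g. de-anonymisation risk
    license_allows_derivatives: bool   # Q2: IPR of the original dataset
    saves_regeneration_cost: bool      # Q3: cheaper to share than to re-generate
    needed_internally: bool            # Q4: testing/benchmarking/validation use
    suitable_repository_found: bool    # Q5: discoverable, managed repository
    partners_commit_long_term: bool    # Q6: fallback hosting commitment

def should_publish(a: DatasetAssessment) -> bool:
    # Questions 1 and 2 are treated as hard constraints here (an assumption).
    if a.raises_privacy_issues or not a.license_allows_derivatives:
        return False
    worth_publishing = a.saves_regeneration_cost or a.needed_internally
    can_be_hosted = a.suitable_repository_found or a.partners_commit_long_term
    return worth_publishing and can_be_hosted

# Example: a derived set that is cheap to share and has a repository lined up.
print(should_publish(DatasetAssessment(False, True, True, False, True, False)))  # True
```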
PROCESS will aim at complying with the FAIR principles 1 in all of its
data publication activities. Any deviations from these principles will be
documented, together with the reasons for them (e.g. constraints imposed by
the practices of the specific community).
#### 2.5 Use-case specific data management aspects
The following sections will present _a priori_ assessment of potential
reusable datasets generated in the context of each of the use cases. The
details of the data processing workflows and requirements are presented in the
deliverable D4.1, so this deliverable will present only very brief summary of
the potential reusable data assets and the possible approaches their use could
be supported. This section is expected to be refined considerably in the
future editions of the PROCESS DMP.
##### 2.5.1 UC#1
_Background datasets_
The UC#1 uses the following published datasets as background material, each of
them already published in repositories that are well known to the medical
informatics community.
_Table 3 Background datasets of use case 1_
<table>
<tr>
<th>
**Dataset Name**
</th>
<th>
**Estimated size**
</th>
<th>
**Description**
</th>
<th>
**Format**
</th>
<th>
**Annotations**
</th> </tr>
<tr>
<td>
Camelyon17
</td>
<td>
>3TB
</td>
<td>
1000 WSI, 100 patients
</td>
<td>
BIGTIFF
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
Camelyon16
</td>
<td>
>1TB
</td>
<td>
400 WSI
</td>
<td>
BIGTIFF
</td>
<td>
XML file + Binary Mask
</td> </tr>
<tr>
<td>
TUPAC16
</td>
<td>
>3TB
</td>
<td>
WSI
</td>
<td>
BIGTIFF
</td>
<td>
CSV file
</td> </tr>
<tr>
<td>
TCGA
</td>
<td>
>3TB
</td>
<td>
WSI
</td>
<td>
BIGTIFF
</td>
<td>
TXT file
</td> </tr>
<tr>
<td>
PubMed
Central
</td>
<td>
~5 million images
</td>
<td>
Low resolution
</td>
<td>
Multiple formats
</td>
<td>
NLP of image captions
</td> </tr>
<tr>
<td>
SKIPOGH
</td>
<td>
>30TB
</td>
<td>
WSI
</td>
<td>
BIGTIFF
</td>
<td>
</td> </tr> </table>
_Data generated_
The UC#1 will generate two types of data assets of potential interest:
* Derived datasets based on one of the published ones (Camelyon17, Camelyon16 ...) that support more comprehensive training of machine learning algorithms. The methods include rotation of images, adding noise or simulating processing artefacts (e.g. foreign bodies like hairs in the scanned tissue slide).
* Actual neural networks trained on the datasets.
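As an illustration of the kind of derivation described above, the following is a minimal sketch of rotating an image tile and adding noise. It operates on a small in-memory tile; real whole-slide images (multi-gigapixel BIGTIFF files) would be processed tile by tile with a dedicated WSI library, so this is indicative only.

```python
# A minimal sketch of derived-dataset generation: rotation plus Gaussian noise
# on a small RGB tile. Parameters and the stand-in tile are illustrative.
import numpy as np

rng = np.random.default_rng(seed=42)

def derive_tile(tile: np.ndarray, quarter_turns: int = 1, noise_std: float = 5.0) -> np.ndarray:
    """Rotate a tile by multiples of 90 degrees, then add Gaussian pixel noise."""
    rotated = np.rot90(tile, k=quarter_turns)
    noisy = rotated.astype(np.float64) + rng.normal(0.0, noise_std, rotated.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

tile = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)  # stand-in tile
derived = derive_tile(tile, quarter_turns=2, noise_std=8.0)
```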
_Publishing approach_
For the derived datasets, it is likely that in most cases it would be most
appropriate to publish the method for generating the derived dataset
(software and artefact images). A small sample of trained networks could also
be of interest.
In the former case publishing the "recipe" for a derived dataset would ideally
be done in the context of the original repository, whereas in the latter the
software repository might be the most natural location.
##### 2.5.2 UC#2
_Background datasets_
The work in the UC#2 will rely on accessing the data from the LOFAR Long Term
Archive (LTA - _https://lta.lofar.eu/_ ). The publishable data assets would
likely be a minimal set for validation testing of the software.
##### 2.5.3 UC#3
_Background datasets_
The work in the UC#3 is based on using the UNISDR community as a pilot
community for advanced PROCESS tools. The project will support the data
management of the community; however, for now the work does not result in the
generation of publishable datasets by the project.
##### 2.5.4 UC#4
_Background datasets_
The background datasets are the private transaction records kept by LSY that
are used as a basis for generating statistically similar simulated datasets
for testing the ancillary pricing mechanism.
_Data generated_
The simulated data needs to be evaluated based on the checklist presented in
the section "Common approaches to data management". It is likely that the
value of large datasets is relatively low compared to publishing the
generation algorithm.
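To illustrate the "algorithm plus parameter file" alternative mentioned above, the following sketch samples simulated transactions from fitted distributions. All distribution parameters and field names here are invented; in practice they would be fitted on the private LSY records and shipped as the parameter file.

```python
# Illustrative sketch: generate statistically similar simulated transactions
# from distribution parameters; every value below is invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)
params = {  # would normally be loaded from a released parameter file
    "products": ["seat", "bag", "meal"],
    "product_probs": [0.5, 0.3, 0.2],
    "price_mu": 2.5,      # log-normal parameters of the fitted price distribution
    "price_sigma": 0.6,
}

def simulate_transactions(n: int) -> list:
    """Draw n simulated transactions (product choice plus price)."""
    products = rng.choice(params["products"], size=n, p=params["product_probs"])
    prices = rng.lognormal(params["price_mu"], params["price_sigma"], size=n)
    return [{"product": p, "price": round(float(x), 2)} for p, x in zip(products, prices)]

sample = simulate_transactions(5)
```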
##### 2.5.5 UC#5
_Background datasets_
The datasets will be based on the Copernicus data service.
_Data generated_
Tools, documentation and parameter files for accessing Copernicus data,
possibly specialised derived sets of data (e.g. time series of specific
location).
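As a sketch of how such a location-specific time series could be derived, the snippet below assumes the pre-processed Copernicus data is available as a NetCDF file readable with the xarray library; the file name and the variable and coordinate names ("ndvi", "lat", "lon") are hypothetical.

```python
# Illustrative sketch: extract a time series for one location from a NetCDF
# file with xarray; file, variable and coordinate names are placeholders.
import xarray as xr

ds = xr.open_dataset("copernicus_preprocessed.nc")           # hypothetical file
series = ds["ndvi"].sel(lat=48.15, lon=11.58, method="nearest")
series.to_dataframe().to_csv("ndvi_timeseries.csv")          # shareable derived set
```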
_Publishing approach_
There are three potential channels that could be interested in the data assets
generated in the UC#5 context:
1. Users of the PROMET software
2. Broader agronomy research community interested in easy access to satellite data
3. Providers of generalised Copernicus access services
Evaluating (based on the experiences of the first pilot versions) which of
these channels will be the most promising for the project data assets will be
one of the key focus areas of the next update of this DMP.
# Executive Summary
This updated Data Management Plan (DMP) presents an overview of the evolution
of the goals, background and constraints the project needs to take into
account when generating and publishing reusable datasets. The initial
assessment, namely that the use cases are focused on providing tools aimed at
making order-of-magnitude improvements in the efficiency and convenience of
extracting value from existing data assets, has not changed; thus most of the
new material in this updated DMP stems from the experiences gained in the work
that is directly related to the use cases. For convenience, the detailed
background analysis of the use cases presented in the deliverable D1.1 has
been included in this document as an annex.
# Updated Data Management Plan
## Introduction
As stated in the Deliverable D1.1, the requirements of the PROCESS DMP stem
from the EC Grant Agreement. These requirements need to be applied in the
contexts of the five service pilots included in PROCESS, each of them with
unique opportunities and constraints related to the reuse of the data
generated.
These foundations of the DMP have not changed during the first 18 project
months, but for convenience the initial analysis of the situation from the
deliverable D1.1 is included in Annex 1 of this document.
## Potential Data assets
The deliverable D1.1 identified two groups of sources for reusable datasets:
* The work focused on the use cases and supporting the communities around it
* The work on the general purpose exascale data solution that supports the use cases.
The first group was assumed to contain the majority of the assets that are
used, and the experiences of the first 18 months tend to support this initial
assessment. It is possible that the validation step of the PROCESS solution
will generate datasets that could be used as basis for more general benchmarks
of extreme data applications, but this remains to be confirmed during the
second half of the project.
## Common approaches to data management
The initial assessment framework determining whether the project should
publish a dataset or not is still valid:
1. Would publishing the dataset raise potential privacy issues (e.g. allowing deanonymising subjects)?
2. Does the license of the original dataset allow publishing a derived set (IPR)?
3. Does publishing the dataset lead to potential savings (in terms of time, computational resources etc.) when compared to re-generating the derived set?
4. Does the project need to keep a derived dataset already for its internal use (e.g. for testing, benchmarking, validation)?
a) Would these datasets be needed by third parties to fully validate correct
behaviour of PROCESS tools?
5. Can a suitable, managed repository (i.e. one that would facilitate discovery of the data asset by its intended users) be found that is willing to host the dataset?
6. If not, can the long-term commitments needed for formally publishing a dataset be met by the partners?
Applying this framework has produced the following observations:
* There are potential derived datasets stemming from use cases (UC) #1 and #4. However, at least in the case of UC#4, it is likely that publishing the algorithm would be a more efficient way of allowing reuse. In the case of UC#1, the project is using derived datasets internally; however, these data assets are deemed to be very specific to the project and not of interest to third parties.
* The dataset used as a starting point for the UC#3 has certain licensing issues that need to be taken into account. However, the complementary datasets identified do not carry this limitation.
* There are some promising – albeit early stage – discussions that indicate that the resource requirements of long-term preservation of datasets can possibly be met through collaborative arrangements with other projects.
## Use-case specific data management aspects
The following sections will present an update of the potential reusable
datasets generated in the context of each of the use cases. The details of the
data processing workflows and requirements are presented in the deliverable
D4.1.
### UC#1
_Background datasets_
The ongoing UC#1 activities are focused on the Camelyon17 and Camelyon16
datasets, with the other background datasets kept as candidates for further
testing and validation at the end of the project. The project may also gain
access to datasets collected and used by the ExaMode project 1 , in which
case the use of other already published datasets will have a considerably
lower priority.
_Table 1 Updated background dataset summary of use case 1 – data sets under
active study highlighted_
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
**Estimated size**
</th>
<th>
**Description**
</th>
<th>
**Format**
</th>
<th>
**Annotations**
</th> </tr>
<tr>
<td>
Camelyon17
</td>
<td>
>3TB
</td>
<td>
1000 WSI,
100 patients
</td>
<td>
BIGTIFF
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
Camelyon16
</td>
<td>
>1TB
</td>
<td>
400 WSI
</td>
<td>
BIGTIFF
</td>
<td>
XML file + Binary Mask
</td> </tr>
<tr>
<td>
TUPAC16
</td>
<td>
>3TB
</td>
<td>
WSI
</td>
<td>
BIGTIFF
</td>
<td>
CSV file
</td> </tr>
<tr>
<td>
TCGA
</td>
<td>
>3TB
</td>
<td>
WSI
</td>
<td>
BIGTIFF
</td>
<td>
TXT file
</td> </tr>
<tr>
<td>
PubMed Central
</td>
<td>
~5 million images
</td>
<td>
Low
resolution
</td>
<td>
Multiple formats
</td>
<td>
NLP of image captions
</td> </tr>
<tr>
<td>
SKIPOGH
</td>
<td>
>30TB
</td>
<td>
WSI
</td>
<td>
BIGTIFF
</td>
<td>
</td> </tr>
<tr>
<td>
ExaMode
</td>
<td>
Tens of TB
</td>
<td>
_TBD_
</td>
<td>
_TBD_
</td>
<td>
_TBD_
</td> </tr> </table>
_Data generated_
As described in the original DMP, the UC#1 will generate two types of data
assets for the project internal use:
* Derived datasets based on one of the published ones (Camelyon17, Camelyon16 ...).
* Actual neural networks trained on the datasets.
_Publishing approach_
As in the original DMP, the project will focus on documenting the processes
used to develop derived datasets.
### UC#2
_Background datasets_
The work in the UC#2 will rely on accessing the data from the LOFAR Long Term
Archive (LTA) 2 and will produce tools that allow more efficient use of the
LTA contents.
Publishing datasets retrieved from the LTA is not deemed necessary at this
stage, as any actual analysis performed by third-party users would need to
access the official archive as the authoritative source of data. Furthermore,
it is possible to validate the UC#2-related service pilot by using files
filled with random values, as sketched below.
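A minimal sketch of producing such a validation file follows; the file name and size are illustrative, and os.urandom is used simply as a convenient source of random bytes.

```python
# Illustrative sketch: write a file of random bytes for pipeline validation,
# avoiding any redistribution of actual LTA content. Size/name are placeholders.
import os

def write_random_file(path: str, size_bytes: int, chunk: int = 1 << 20) -> None:
    """Write size_bytes of random data to path in 1 MiB chunks."""
    remaining = size_bytes
    with open(path, "wb") as f:
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))
            remaining -= n

write_random_file("fake_observation.dat", 100 * (1 << 20))  # 100 MiB test file
```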
### UC#3
_Background datasets_
The work in the UC#3 was based on using the UNISDR community as a pilot
community for advanced PROCESS tools. The original dataset consists of about
2TB of data and is openly accessible via http://unisdr.mnm-team.org. This
resource is described in more detail in Annex 2 of this deliverable.
As the new, community-based process used by UNISDR is still evolving, the
project is at the moment considering enhancing the original datasets with
other assets. The primary candidate for testing this approach is based on the
datasets produced by the CliMex project 3 . The project has generated fifty
climate simulation models for the time period of 1950 to 2100 covering Central
Europe and North-Eastern North America. This data is used as an input for
hydrological simulations to identify extreme flooding scenarios associated
with climate change. The CliMex project will aim at publishing a ~200TB
dataset during 2019, with a suitable open license (details of the exact license are still
under review). The integration work to make these two datasets available
through a single interface is ongoing.
In parallel to this technical work, the project is investigating ways to
benefit from the synergies with the LEXIS project 4 , which has two pilot
activities dealing with disaster risk modelling (Earthquake and Tsunami,
Weather and Climate).
### UC#4
_Background datasets_
The background datasets are the private transaction records kept by LSY that
are used as a basis for generating statistically similar simulated datasets
for testing the ancillary pricing mechanism.
_Data generated_
The simulated data needs to be evaluated based on the checklist presented in
the section "Common approaches to data management". It is likely that the
value of publishing large simulated datasets is relatively low, i.e. it would
add little value compared to publishing the generation algorithm.
### UC#5
_Background datasets_
The datasets used are based on pre-processed Copernicus Sentinel data.
_Data generated_
Tools, documentation and parameter files for the pre-processed Copernicus data
used in PROMET 5 and output generated with PROMET.
_Publishing approach_
There are three potential channels that could be interested in the data assets
generated in the UC#5 context:
1. Users of the PROMET software
2. Broader agronomy research community interested in easy access to satellite data
3. Providers of generalised Copernicus access services
Evaluating which of these channels are the best ones for promotion of third-
party reuse is still ongoing.
# Introduction and background of the PROCESS DMP
## Introduction
The requirements for the Data Management Plan (DMP) are laid out in grant
agreement (GA) and supporting documentation provided by the EC. The GA states
that:
_"Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:_
_(a) deposit in a research data repository and take measures to make it
possible for third parties to access, mine, exploit, reproduce and disseminate
— free of charge for any user — the following:_
1. _the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;_
2. _other data, including associated metadata, as specified and within the deadlines laid down in the data management plan"_
The prerequisites for complying with these requirements include:
* Identifying the generated data that could form the basis of a reusable data asset
* Identifying and securing access to optimal repositories for long-term preservation of the data
* Reviewing and refining the metadata so that it provides information that is relevant and understandable also when separated from the PROCESS project context. This is important already for project-internal use, as the data assets of the PROCESS project are multi-disciplinary in nature.
* Mapping the data to publications made by the project
* Having the necessary _due diligence_ processes in place to ensure that publication of data will not - directly or indirectly - raise any additional compliance issues.
Fulfilling these requirements in a multifaceted project such as PROCESS
requires a two-stage approach: creating an initial data management plan
describing the potential reusable data assets that the project can generate
during its lifetime, together with the common principles used to choose
optimal approaches for making them reusable in the longer term. The initial
data management plan will be refined during the project lifetime as the nature
of the data assets generated becomes clearer. However, it should be noted
that due to the interdisciplinary nature of the project, it is likely that the
project will generate several data management plans to match the specific
requirements and community conventions of each of the disciplines involved.
Maximising the potential for reuse will also depend on successful
identification of the potential secondary user communities, as this is a
prerequisite for successful reviewing and refining of metadata specifications
and identification of the optimal repositories that are to be used for storing
the data generated during the PROCESS lifetime.
One of the common characteristics of the PROCESS use cases is that they do not
do primary data collection themselves. Instead, they will generate data that
is either based on publicly available datasets or (especially in the case of
UC#4, see next section) consists of simulation results based on statistical
distributions of actual, private datasets that are used as background for the
project work.
## Background
PROCESS is a project delivering a comprehensive set of mature service
prototypes and tools specially developed to enable extreme scale data
processing in both scientific research and advanced industry settings. These
service prototypes are validated by representatives of the communities around
the five use cases:
* UC#1: Exascale learning on medical image data
* UC#2: Square Kilometre Array/LOFAR
* UC#3: Supporting innovation based on global disaster risk data/UNISDR
* UC#4: Ancillary pricing for airline revenue management
* UC#5: Agricultural analysis based on Copernicus data
From the data management perspective, each of these five use cases presents
challenges that are complementary to each other, and each has a different
potential for direct generation of exploitable data assets. This mapping is
presented in the table below:
_Table 2 Key challenges of use cases_
<table>
<tr>
<th>
**Use case**
</th>
<th>
**Key challenge**
</th>
<th>
**Type of reusable data asset**
</th> </tr>
<tr>
<td>
UC#1
</td>
<td>
Machine learning using massive, public datasets; exploitation requires high
degree of privacy
</td>
<td>
More challenging datasets based on the published ones (e.g. with noise or
artefacts simulating mistakes made during the scanning of a tissue slide,
rotation of regions of interest etc.)
</td> </tr>
<tr>
<td>
UC#2
</td>
<td>
Extreme volume of data (LOFAR reduced data set 5-7PB per year, SKA centrally
processed data rate: 160Gbps)
</td>
<td>
For the most part the data assets will remain in LOFAR LTA (long term
archive), however a disk copy of test observation could be useful for software
testing and validation
</td> </tr>
<tr>
<td>
UC#3
</td>
<td>
Usability of extreme scale tools to support emerging big data user
communities: The UNISDR Global Assessment Report (GAR) datasets have been made
publicly available for non-commercial use since early 2017. The process to be
used for the 2019 edition will be fundamentally different, with a considerably larger
group of experts with heterogeneous and evolving data curation practices
involved in the data production and curation.
</td>
<td>
The 2015 and 2017 GAR datasets and the results of the CIMA showcase.
</td> </tr>
<tr>
<td>
UC#4
</td>
<td>
Very large datasets, extreme responsiveness requirements, high financial
risks/potential rewards; exploitation requires demonstrating high degree of
security and auditability of the PROCESS solutions.
</td>
<td>
Tools, documentation and parameter files for generating simulated transaction
datasets
</td> </tr>
<tr>
<td>
UC#5
</td>
<td>
Support wide range of uses of a very large dataset of satellite images
(growing at the rate of 7.5PB per month)
</td>
<td>
Tools, documentation and parameter files for accessing Copernicus data,
possibly specialised derived sets of data (e.g. time series of specific
location)
</td> </tr> </table>
All these use cases have distinct communities, practices and
documentation/metadata conventions, thus any component that can be used as
part of all five demonstrators can be considered a proven, generalizable data
management component with very high exploitation and uptake potential.
# Executive summary
The overall objective of WP7 in SYLFEED is to “ _continuously monitor and
provide means for the SYLFEED partners to share their knowledge within the
consortium and to integrate the research activities as well as to exploit the
developments, and/or communicate and disseminate the results to the scientific
and industrial community and to the wider audience. The global objective is to
prepare and encourage the use and wide acceptance of project outputs_ ”, as
per the Description of Action (DoA). Additionally, internal communication is
an integral part of the success of SYLFEED, as described in WP8, specifically
Task 8.3.
Deliverable 7.6 contributes towards these goals and to the success of SYLFEED
by creating a "Data Management Plan (DMP)", as per the Horizon 2020 Open
Research Data Pilot. The current deliverable contains two SYLFEED DataSets
(SDS) detailing the content of datasets used within SYLFEED, how they will be
preserved, and steps taken to make data publicly available after the project
end. Additional SDS will be created as the project progresses.
This deliverable is set up by MATIS with the help of ARBIOM and both partners
will update and add to the DMP.
# Deliverable report
SYLFEED Data Management Plan (DMP) under the H2020 Open Research Data Pilot.
# Introduction
Over the course of a research project, considerable amounts of data are
gathered and generated. Often, these data are not preserved or made available
for reuse later on, causing time and effort to be spent in other projects
gathering similar data. The goal of the Horizon 2020 Open Research Data Pilot
is to remedy this issue, by ensuring that research data generated through a
project is made available for reuse after the project ends.
# H2020 Open Research Data Pilot
The H2020 Open Research Data Pilot is based on the principle of making data
**FAIR** :
* **Findable**
* **Accessible**
* **Interoperable**
* **Reusable**
# SYLFEED Data Management Plan
As a way of managing the data used during the project lifetime, a Data
Management Plan (DMP) must be created. The DMP forms include details on:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access
* how data will be curated and preserved (including after the end of the project)
* ethical issues related to the data
* estimated costs associated with data archiving/sharing
The creation of the DMP is included in deliverable 7.6 (D7.6). As per the DoA,
D7.6 will fulfill three requirements as a participant in the H2020 Open
Research Data Pilot:
* "Firstly, the collected research data should be deposited in data repository (the SYLFEED project will use the Zenodo repository),
* Secondly, the project will have to take measures to enable third parties to access, mine, exploit, reproduce and disseminate this research data,
* Finally, a Data Management Plan (DMP) has to be developed detailing what kind of data the project is expected to generate, whether and how it will be exploited or made accessible for verification and reuse, and how it will be curated and preserved".
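As an illustration of the first requirement, a deposit to the Zenodo repository can be scripted against its public REST API along the following lines. This is a minimal sketch: it assumes a valid personal access token, uses placeholder file and metadata values, and omits error handling.

```python
# Minimal sketch of a Zenodo deposition via its REST API; token, file name and
# metadata are placeholders, and responses should be checked in real use.
import requests

params = {"access_token": "..."}  # personal access token from zenodo.org

# 1. Create an empty deposition and note its id.
dep = requests.post("https://zenodo.org/api/deposit/depositions",
                    params=params, json={}).json()

# 2. Upload the data file.
with open("sylfeed_dataset.csv", "rb") as fp:
    requests.post(f"https://zenodo.org/api/deposit/depositions/{dep['id']}/files",
                  params=params, data={"name": "sylfeed_dataset.csv"},
                  files={"file": fp})

# 3. Attach minimal descriptive metadata, then publish.
metadata = {"metadata": {"title": "Example SYLFEED dataset",
                         "upload_type": "dataset",
                         "description": "Placeholder description.",
                         "creators": [{"name": "Doe, Jane"}]}}
requests.put(f"https://zenodo.org/api/deposit/depositions/{dep['id']}",
             params=params, json=metadata)
requests.post(f"https://zenodo.org/api/deposit/depositions/{dep['id']}/actions/publish",
              params=params)
```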
Two types of data will be generated during the project:
* _Raw data:_ This data will be stored on the partners' secured servers and is not intended to be shared with the consortium. Partners ensure that the server capacity is adequate to store the data of the SYLFEED project.
* _Consolidated data:_ This data will be shared within the consortium. It will be stored on Aymingsphere, the internal project website, further described below. In addition, consolidated data will at least be stored on the data owner's secured server.
The project data identified at this stage of the project, and data that will
be generated during the project lifetime, have been compiled in a table; this
table is confidential and is made available to the project partners on the
project collaborative platform Aymingsphere. This table will be regularly
updated during the project in order to include all new data that could be
identified at a later stage.
Data is classified by Work Package, with additional information on the
database where the data will be stored and on the author.
# The SYLFEED DataBase and DataSets
Currently the SYLFEED DataBase (SDB) includes several SYLFEED DataSets (SDS).
These can be found in the appendix at the end of this document. However,
during the later stages of the project and as the project progresses, relevant
SDS will be uploaded to the SDB. At or near the project end, datasets will be
uploaded from the SDB to OpenAIRE ( _openaire.eu/_ ) or other relevant and
appropriate locations as agreed upon by the SYLFEED consortium.
Part of the SDB are two webpages: the SYLFEED public page ( _www.sylfeed.eu_ )
and AYMINGSPHERE, the internal page for consortium communication.
<table>
<tr>
<th>
**Name of database**
</th>
<th>
SYLFEED website (WP7)
</th> </tr>
<tr>
<td>
Data summary
</td>
<td>
The SYLFEED project website has been functional since the end of November
2017. It highlights graphical elements such as the logo, spiral, footer, and a
common set of colors. The objective of the website is to present the SYLFEED
project, its objectives and work packages, and to inform interested parties
about the latest news of the project. All data displayed on the SYLFEED
website is open to the public. The website will be available throughout the
lifetime of the project at the current domain, www.sylfeed.eu, and the content
of the website will be available after the end of the project under the
website of the coordinator, ARBIOM, through a web page redirect from the
current address.
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
The data is available on the project website and is open source. All data can
be found through a regular web search and keyword search. Each website post or
page is labelled with relevant metadata for easier access.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
All website data is open source, hosted within the WordPress Content
Management system (CMS)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
N/A
</td> </tr> </table>
<table>
<tr>
<th>
**Name of database**
</th>
<th>
Website for internal matters (AYMINGSPHERE / SharePoint; WP8)
</th> </tr>
<tr>
<td>
Data summary
</td>
<td>
AYMINGSPHERE ensures that partners have access to adequate resources,
monitoring and planning procedures for an efficient management of the whole
technological work carried out in the framework of the SYLFEED project.
Partners can exchange and share documents, participate in discussion forums…
The structure is organized as follows:
* Project records: includes Communication documents, quality documents, management documents, meetings documents,
* Certified documents
* Steering tools: action list, roadmap, issues, decisions, risks
* Financial follow-up with subsections per partner
* Work packages with a separate folder per WP
* Useful links
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data on the AYMINGSPHERE website is for internal communication only and is not
available to the public, unless otherwise determined by the SYLFEED
consortium.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is not available to the public, unless otherwise determined by the
SYLFEED consortium.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable by the consortium only, unless otherwise
determined by the SYLFEED consortium.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data Security
</td>
<td>
Data is stored on a SharePoint Content Management System. Data resides on a
secure server and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
N/A
</td> </tr> </table>
# Conclusion
The Appendix describes the content of the different datasets, the ways in
which data will be stored and how/if it will be made available at the project
end. Due to the project being at an early stage, and because different work
packages are on different time schedules, not all forms share the same level
of detail or the same level of dissemination.
The DMP is intended to be a "living" document and will evolve as the project
progresses. Periodic revisions of the DMP are planned as described in the DoA.
Extra revisions might be scheduled should the need arise. The table on page 2 in
this document "Document history" provides a summary of revisions carried out
over the lifetime of this Data Management Plan. It provides a version number,
the date of the latest revision, the editor, and a comment describing the
change made.
# Appendix: DataSets
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Demo plant equipment specifications & data sheet (WP1)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Brochures
* Booklets
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Demo plant drawings (WP1)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Drawings
* Electronic documents
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Demo plant SOP (WP1)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Brochures
* Booklets
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Demo plant operational outputs (T, P, flow rate, conc, …) (WP1)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Matrix of data
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Demo plant sourcing
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Report
* LOIs
* Contract and other commercial documents
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom
and NSG.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Demo plant construction, commissioning and operation
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Reports
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom
and NSG.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Bioprocess data (WP2)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* R&D Bioprocess Data
* Reports on SCP production at small scale
* Reports on optimisation of SCP production at small scale
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom
and SPP.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Fish feed nutritional analysis (WP3)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Intermediary reports (3) on nutritional analysis, undesirable components and methods for optimization
* Final report on nutritional analysis, undesirable components and methods for optimization
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Matís
and Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Feed formulation (WP3)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Report on feed formulation of commercial diets for carnivorous fish
(Atlantic salmon) and omnivorous fish (Tilapia)
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Matís
and Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Small & large scale feed production (WP3)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Report on production methods and quality parameters for small scale feed production
* Report on production methods and quality parameters for large scale feed production
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Matís
and Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Small & large scale trials (WP3)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Report on small scale feeding trials describing test setup, fish’s growth rate or deformations
* Report on large scale feeding trials describing test setup, fish’s growth rate or deformations
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Matís
and Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Feed formulation from C5 and lignin streams (WP4)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Intermediary report on lignin incorporation in fish feed formulation
* Final report on lignin incorporation in fish feed formulation
* Intermediary report on C5 fermentation into SCP and for C5 use in fish feed formulation
* Final report on C5 fermentation into SCP and for C5 use in fish feed formulation
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Matís
and Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Cost of ownership of SCP (WP5)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Techno-economic evaluation of the cost of goods and cost of operations;
calculations and market data
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Business cases (WP5)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Presentations
* Report
* Market analysis and projections
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Report on LCA on production of proteins from lignocellulose based on ARBIOM’s
process (WP6)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Report
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom
and Ostfold.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
List of protein sources to be compared (WP6)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Report
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom
and Ostfold.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Report on existing LCA for other protein sources (WP6)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Report
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom
and Ostfold.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Report on LCA of other protein sources (WP6)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Report
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom
and Ostfold.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Report on LCA for all protein sources (WP6)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Report
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom
and Ostfold.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
LCA data sets (WP6)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Report
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is open access and is findable through a research data repository.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is open access.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom
and Ostfold.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Communication/dissemination tools (WP7)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Brochures
* Posters
* Infographics including animations
* Videos
* Newsletters
* Press releases
* Social media accounts
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
The data is available on the project website and is open source. All data can
be found through a regular web search and through keyword search. Each
website post or page is labelled with relevant metadata for easier access.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
All website data is open source, hosted within the WordPress Content
Management System (CMS)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
N/A
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Registration of SCP (Single Cell Protein) (WP7)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Document submitted to the EU to use our SCP as a fish feed ingredient /
Authorisation from the EU/FDA to use our product for feed applications
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Exploitation data (WP7)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Project business cases
* SCP based product competitive advantages
* SYLFEED demonstration impact analysis
* Update of business plans and identification of steps
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Meeting minutes & reports (WP8)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
* Minutes from all SYLFEED meetings
* Internal progress reports
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Published data will be re-usable. Confidential data will be assessed on a case
by case basis upon official request, and addressed by the consortium as a
whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at Arbiom.
</td> </tr> </table>
<table>
<tr>
<th>
**DataSet reference and name**
</th>
<th>
Quality & Risk Management Plan (WP8)
</th> </tr>
<tr>
<td>
DataSet summary
</td>
<td>
Reports
</td> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
(through clarifying licences)
</td>
<td>
Confidential data will be assessed on a case by case basis upon official
request, and addressed by the consortium as a whole.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Data resides on a secure server (SSL) and is backed up on a regular basis.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
Upon completion of the project, data will be stored on a secure server at
Arbiom.
</td> </tr> </table>
# Introduction
## Executive summary
The Project Coordinator (CRF) is responsible for technical coordination and
scientific quality assurance throughout the project. This task involves
monitoring of the technical progress, coordinating data management, the
various work packages and tasks, and risk monitoring. When necessary,
corrective actions will be taken, including potential work reallocation, which
will be coordinated by the Project Coordinator and agreed by the Executive
Board, consisting of all WP leaders. An approval procedure was defined during
the kick-off meeting, and it has been agreed that the Executive Board is the
body for quality assurance in line with the Consortium Agreement. D1.1, the
Quality Assurance Guideline, contains more information on the related internal
roles, rules and procedures.
This Deliverable report addresses both the risk and data management plans and
includes definitions and details of the processes being implemented with
respect to the GHOST project.
The Risk Management Plan (RMP) is an on-going process which is continuously
updated throughout the lifetime of the project to monitor known risks,
identify emerging risks, and where necessary to respond to them. As part of
the risk management plan, a risk register is compiled from risks identified in
the GHOST Grant Agreement, Deliverables to date, and from risks which are
identified during the course of the project. This register is monitored and
discussed during the Executive Board meetings and adjusted when necessary.
The Data Management Plan (DMP), which is also an on-going process, defines how
different project data will be handled during the project and aims to ensure
the appropriate level of access to and re-use of data generated by the GHOST
project in particular with respect to the public domain Deliverable reports.
The Project Coordinator (CRF) is responsible for the implementation of the DMP
and its coordination is assisted by the Executive Board.
Both the RMP and DMP should be considered to be complementary with respect to
the other
Deliverable reports and in particular Deliverable D1.1 “Quality Assurance
Guidelines” and D9.2
“Dissemination and communication plan”. Furthermore, all the partners have
signed a Consortium Agreement, in which all relevant issues necessary for the
proper execution of the project are described in detail including: the
responsibilities (General Assembly, Project Coordinator and individual
parties), liabilities, voting rules, intellectual property rights (IPR),
knowledge management, rules for publishing information, conflict resolution,
etc.
# Overview of the GHOST Project
## Objectives and Scope
The aim of the GHOST project is to develop an InteGrated and PHysically
Optimised Battery System for Plug-in Vehicles Technologies.
The overall objectives of the GHOST project, looking at both the existing
Li-ion battery technologies and the future commercial post-lithium-ion ones, are:
* Design of a novel and modular battery system with higher energy density (in weight) up to 20% based on the state-of-the-art of lithium-ion battery cell technologies through:
Implementation of advanced light and functionalized battery system (BS)
housing material;
Innovative, modular, energy and cost efficient thermal management
architectures & strategies;
Optimal selection of the right battery cell technology for different
applications and use-cases that will be demonstrated in the proposed project;
* Increase of the energy density of the battery system up to 30% based on novel Dual Battery System concept based on new emerging battery technologies and high power lithium-ion battery;
* Development of mass producible innovative and integrated design solutions to reduce the battery integration cost at least by 30% through smart design: starting from cell up to recycling, testing and modelling approaches;
* Definition of new test methodologies and procedures to evaluate reliability, safety and lifetime of different Battery Systems;
* Design of novel prototyping, manufacturing and dismantling techniques for next generation of lithium-ion BS;
Evaluation of 2nd life battery potential, applications and markets starting
from requirements and specifications;
Demonstration of GHOST solutions in two demonstrators (BEV bus with ultrafast
partial charge capability and P-HEV) and one lab demonstrator (module level)
for the post Lithium-Ion technology.
The aim is to achieve these key innovations at affordable cost in order to
strengthen Europe’s competitive position in terms of Battery System, a crucial
field for electrified vehicles.
Technologies developed in the frame of the project will aim for first market
introduction between 2023 and 2024.
Importantly, the technology devised will have a strong impact on the
electrically chargeable vehicles (BEVs and P-HEVs) performance increase
(including range and related battery lifetime and reliability).
<table>
<tr>
<th>
Aim
</th>
<th>
GHOST Objective
</th> </tr>
<tr>
<td>
Thermal, electrical and mechanical design of battery systems based on lithium
and post lithium cells aiming at highly increased energy density and
modularity
</td>
<td>
Mechanical design:
Replacement of bulky and heavy housings that are used in the current
conventional BS by reliable and functionalized lightweight materials,
contributing up to 30% weight reduction and provide improved safety by higher
specific energies;
Redesign of the BS by using novel materials at module level to reduce further
the weight up to 20% and to maintain the required mechanical stability and
safety.
[Figure: module concept showing PCM with Al-foam, cell, cooling system and Al plates]
Thermal design:
Development of a module level novel and modular thermal management through
advanced thermal concepts, which is independent of the cooling concept
and selected media;
Adaptive and smart active control of the cooling circuit pumps/fans;
Thermal management design for bus and future BEVs fast charging (up to 350-450
kW).
Electrical design:
Development of a modular battery module architecture (i.e. 48V), which can be
easily scaled up to 400-800 V with full commonalities in the field of used
mechanical connections between the cells, fuses, safety, and battery
controller unit concept;
Novel Dual-Cell-Battery architecture
</td> </tr>
<tr>
<td>
Battery cost reduction
</td>
<td>
Implementation of intelligent, more integrated and simplified harness sensing
and communications (i.e. for temperature
and state of health estimation) with high reliability;
Complete redesign of the BS by taking into account the dismantling and
recycling aspects
(reduction of integration cost up to 20% and reduction time up to 30-40%);
Standardized and innovative parameterization test protocols, models and state
functions which can speed up the battery module and system development process
by 20% compared to SoA contributing to reduce the integration cost as a
consequence;
Defining the needs to be taken into account to obtain a modular balancing
concept solution suitable for automotive and second life applications;
Improved modelling and simulation tools for BS improvement/development using
virtual modelling approach, which is mainly based on the concept of
Simulation-In-Loop (SIL), addressed through the application of the knowledge
that will be generated in the project on thoroughly insight ageing mechanisms,
SoH, SoC, SoF and electro-thermal modelling.
</td> </tr> </table>
<table>
<tr>
<th>
Aim
</th>
<th>
GHOST Objective
</th> </tr>
<tr>
<td>
Design for manufacturing, recycling and second use
</td>
<td>
In the GHOST Project, efficient manufacturing processes will be applied for
the BS that will be designed taking into account the best experiences in the
field.
In addition, the manufacturing will be carried out in such a way as to enable
cost-efficient recycling, thanks to the innovative solution of the physical
integration of battery modules.
</td> </tr>
<tr>
<td>
Prototyping and mass-production technologies for battery systems
</td>
<td>
New environmentally friendly design of prototyping and manufacturing processes
of BS for automotive will be considered and analysed;
Identification of the cost efficient BS solution for mass-production.
</td> </tr>
<tr>
<td>
Demonstration of performance, lifetime and safety behaviour including lab
testing and demonstration under real life conditions in vehicles
</td>
<td>
Demonstration of GHOST BS solutions at lab level (TRL 5) and within 2
demonstrators (PHEV
500X and BEV bus) under real life conditions based on performances and
operational/functional safety.
</td> </tr>
<tr>
<td>
Advanced physical integration technologies for high energy/power density
battery packs should take into account safety and modularity
</td>
<td>
Development of a Novel Dual-Cell-Battery architecture for next generation BS
that comprises high-power (HP) battery and next generation high-energy battery
technologies (HE) with highly integrated and efficient physical integration
thanks to advanced DC/DC converter based on newest semiconductors technologies
(Si and WBG technologies). This concept will be demonstrated at lab level (TRL
4).
The integration, manufacturing and safety methodology aspects considered for
lithium-ion technology will be transferred for the Dual-Cell-Battery
architecture.
</td> </tr>
<tr>
<td>
Demonstration of performance, lifetime and safety behaviour including bench
testing and demonstration under real life conditions in
</td>
<td>
Test methodologies and procedures to evaluate the functional safety and
lifetime of the battery from cells, modules to system levels:
Advanced and reliable standardized test procedure focusing on lifetime,
safety and reliability for Li-ion and post Li-ion as well. New technologies may
need new accurate and reliable testing methods and evaluation procedures to
reflect real-life scenarios, because different C-rates, operation
temperatures, safety limits, etc. are required to verify the operational BS.
More realistic test methods can decrease testing time and enhance safety and
competitiveness;
Moreover, the novel BS architecture will require devoted new procedures for
the experimental evaluation to be applied in the validation phase;
</td> </tr>
<tr>
<td>
Aim
</td>
<td>
GHOST Objective
</td> </tr>
<tr>
<td>
vehicles
</td>
<td>
Reliability enhancement due to:
BS with less external and internal connections (i.e. advanced harness sensing
for temperature), cabling, components and also for the vehicle protection
relays architecture; Innovative simplified connection methods between BS and
battery controller unit.
Safety improvement thanks to:
In GHOST project, novel thermal-management architectures and strategies will
be implemented to increase the safety and to guarantee insulation protection;
Battery state function will be implemented to detect possible failure
mechanisms;
The battery system will be investigated in depth based on the functional
safety during the verification, validation and integration within the vehicle.
</td> </tr> </table>
## Workplan Overview
* PHASE I: Define the specifications starting from the requirements and constraints (WP2);
* PHASE II: Define the modular novel BS architecture for lithium based technology (WP3):
New functionalized lightweight materials for reducing the weight/volume of the
BS;
Novel and cost efficient modular Thermal Management at battery module level
that can be used for different types of vehicles (i.e. 400 or 800V) through
refrigerant direct cooling or liquid cooling, depending of the application
requirements;
Advanced modular electric design which can guarantee higher level of safety
and simplification of the battery system;
Modular battery system approach for current and midterm lithium-ion battery
technology; Recycle and reuse design approach for cost reduction; Eco-design
analysis of the selected material.
* PHASE III: Advanced prototyping and manufacturing of processing of BS and verification (WP3 & WP4):
Integration of the developed state functions and harness sensing in the
battery controller unit; Prototyping battery systems towards manufacturing
processes;
* PHASE IV: Evaluation of the safety requirements of the battery system at controlled environment (WP5);
* PHASE V: Integration within the vehicle and demonstration in real environment (WP6);
* PHASE VI: Analysis of recycling and second use of batteries at end of first life in the vehicle (WP4, WP7); Implementation and exploitation of the results and disseminate the findings (WP9).
**Figure: GHOST Work Breakdown Structure**
## Expected Impacts of the GHOST Project
<table>
<tr>
<th>
Expected impact
</th>
<th>
GHOST
Contribution
</th>
<th>
Related Deliverables
</th> </tr>
<tr>
<td>
Battery integration costs (excluding cell cost) reduced by 20 to 30%
</td>
<td>
Implementation of intelligent harness sensing (i.e. for temperature and state
of health estimation) with high reliability and safety;
Complete redesign of the battery system by taking into account the dismantling
and recycling aspects (like reduction of integration cost up to 20% and
dismantling time by 30-40%);
Standardised and innovative parameterization test protocols and models that
work with existing and next generation cell technologies (such as lithium
sulphur or next generation) and which can speed up the battery model
development process by 20% compared to state of the art models. Thus, the
proposed solution will contribute to reduce the integration cost and the
development time;
Defining the needs to be taken into account in order to obtain a modular
balancing concept solution suitable for automotive and second life
applications;
Improved modelling and simulation tools for battery system improve-
ment/development thanks to the use of virtual modelling approach, which is
mainly based on the concept of Simulation-in-Loop (SiL), addressed through the
application of the knowledge that will be generated in the project on
thoroughly insight ageing mechanisms, SoH, SoC, SoF and electro-thermal
modelling.
</td>
<td>
D3.4
D3.1
D4.1
D4.5
D4.5
</td> </tr>
<tr>
<td>
Strengthening the EU value chain, from design and manufacturing to dismantling
and recycling.
</td>
<td>
Efficient manufacturing processes will be considered for the BS that will be
designed taking into account the best experiences in the field. In addition,
the manufacturing will be carried out in such a way as to enable cost-efficient
recycling, thanks to the innovative solution of the physical integration of
battery modules;
New design of prototyping and manufacturing processes of battery system for
automotive will be considered and analysed;
Identification of new cost-efficient BS for mass production.
</td>
<td>
D3.6
D3.7
D8.2
</td> </tr>
<tr>
<td>
Contributing to climate action and sustainable development objectives
</td>
<td>
A modular battery system will be designed that targets weight reduction. The
newly integrated battery system will not only be more efficient on pack level;
the increased total energy density will also yield a significant improvement
of the overall vehicular energy efficiency and a reduction of the upstream CO2
emissions linked to the generation of the electricity used during charging of the
vehicles.
</td>
<td>
D3.5
D7.1
D7.2
</td> </tr>
<tr>
<td>
Contributing to climate action and sustainable development objectives
</td>
<td>
The GHOST proposal will investigate how reusing and refurbishing second-life
batteries can be enabled (reducing barriers and gaining leverages) in order to
meet specific sustainability goals. The dismantling process of a module will
be simplified to decrease the handling cost for second life usage. The
manufacturing of components and the mining of materials of the battery system
have an important environmental and social impact. Optimizing the usage and
need of these materials and components with a circular approach has the
potential to reduce the environmental and social impacts significantly.
</td>
<td>
D3.5
D7.1
D7.2
</td> </tr> </table>
<table>
<tr>
<th>
Expected Impact
</th>
<th>
GHOST
Contribution
</th>
<th>
Related deliverables
</th> </tr>
<tr>
<td>
Energy density improvement of battery packs in the order of 15 to 20%
</td>
<td>
Mechanical design:
Replacement of the bulky metallic housings that are used in current or
conventional battery systems (e.g. BMW i3 or TESLA) by reliable and
lightweight materials, which will contribute up to 30% weight reduction;
Redesign of battery system by using novel materials such as Al-foam at module
level to reduce further the weight up to 20% and to maintain the required
mechanical stability and safety according to the standards of on-road
vehicles.
Thermal design:
Optimization of thermal management through efficient integration of phase
change materials (PCM) into Al-foam to increase the thermal buffer on one hand
and to simplify the thermal architecture on another hand. The incorporated PCM
inside Al-foam will be coupled with refrigerant or liquid cooling at the
bottom of the battery module to achieve an efficient cooling concept;
Adaptive and smart active control of the cooling circuit pumps/fans;
Design of thermal management applicable for fast charging opportunities.
Electrical design:
Development of a modular battery module architecture (i.e. 48V), which can be
easily scaled up to 400-800V or high capacity applications with full
commonalities in the field of used mechanical connections between the cells,
fuses, safety, and optimised electrical integration and control thanks to
battery control system (BCS);
Novel Modular Dual Battery System architecture composed by one high-power (Li-
Ion) and one high-energy (Li-Sulphur) battery modules, combined by a DC/DC
converter with integrated control unit.
</td>
<td>
D3.1 D3.2
D3.3 D3.4
D8.1 D8.2 D8.3
</td> </tr> </table>
# Risk Management
## Introduction
Risk is defined as an event that has a likelihood of occurring, and could have
a negative impact on a project. A risk may have one or more causes and, if it
occurs, one or more impacts. All projects assume some element of risk, and it
is through risk management that those events with the potential to impact the
outcome of a project are monitored and tracked.
Risk management has four stages: risk identification, analysis, evaluation and
mitigation underpinned by continuous monitoring and control. These stages are
described below in more detail as well as the roles and responsibilities
connected to the risk management.
Figure 1: Stages of risk management
Risk management is an on-going process that continues throughout the life of a
project. It includes processes for risk management planning, identification,
analysis, monitoring and control. It is the objective of risk management to
decrease the likelihood and impact of events averse to the project. On the
other hand, any event that could have a positive impact should be exploited.
## Types of Risk
Typically the risks in a technical research and innovation project of this
type include:
* Technological risks
* Partnership risks
* Market risks
* Legal risks
* Management risks
* Environmental/regulation/safety risks
The purpose of this document is to describe the procedures being implemented
within the GHOST project for identifying and handling risks. This risk
management plan applies to all partners in the project.
## Risk likelihood
The risk likelihood is the chance that a risk will occur in the life time of
the project. The following chart shows the risk likelihood definitions. For
each of the identified risks the potential likelihood that a given risk will
occur must be assessed, and appropriate risk likelihood is selected from the
chart below.
<table>
<tr>
<th>
**Likelihood Category**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Certain
</td>
<td>
Risk event expected to occur
</td> </tr>
<tr>
<td>
Likely
</td>
<td>
Risk event more likely than not to occur
</td> </tr>
<tr>
<td>
Moderate
</td>
<td>
Risk event may or may not occur
</td> </tr>
<tr>
<td>
Unlikely
</td>
<td>
Risk event less likely than not to occur
</td> </tr>
<tr>
<td>
Rare
</td>
<td>
Risk event not expected to occur
</td> </tr> </table>
**Table A: Risk Likelihood**
## Risk impact
The risk impact is the cause or effect of the risk in the project’s progress.
It is classified in five levels:
* Very serious: The risk would jeopardize the project's continuity or would significantly affect the project's outcomes. A very serious impact would be that the project needs to stop.
* Serious: The risk would jeopardize the project's continuity or would significantly affect the project's outcomes. Usually, when a serious impact risk occurs, there is a need to change the project contract, e.g. one of the partners abandons the project.
* Moderate: the risk has a significant impact on the project, but it is perceived that the objectives will still be achieved, e.g. difficulties in defining RTD specifications lead to a 6-month delay.
* Slight: the effect on the project is minor, e.g. a task leader drops out of the project.
* Low: the effect on the project is negligible, e.g. a shift of budget between partners.
The complexity, the technical challenges and the size of GHOST require an
adequate risk management. It is therefore necessary that potential risks are
clearly identified, assessed, and that possible recovery actions be prepared.
Potential risks can be related to delays, performance, collaboration and
management. Risk is by definition the product of Probability and Impact. A
preliminary analysis of GHOST risks associated with the work plan, using a
Risk Probability-Impact Matrix approach, is presented as a sequential step
approach:
* Identify potential impacts of risks;
* Evaluate Probability Impact Scores;
* Prioritize Risks for Management Action;
* Determine risk mitigation measures into actions.
<table>
<tr>
<th>PROBABILITY \ IMPACT</th>
<th>TRIVIAL</th>
<th>MINOR</th>
<th>MODERATE</th>
<th>MAJOR</th>
<th>EXTREME</th> </tr>
<tr>
<td>RARE</td>
<td>LOW</td>
<td>LOW</td>
<td>LOW</td>
<td>MEDIUM</td>
<td>MEDIUM</td> </tr>
<tr>
<td>UNLIKELY</td>
<td>LOW</td>
<td>LOW</td>
<td>MEDIUM</td>
<td>MEDIUM</td>
<td>MEDIUM</td> </tr>
<tr>
<td>MODERATE</td>
<td>LOW</td>
<td>MEDIUM</td>
<td>MEDIUM</td>
<td>MEDIUM</td>
<td>HIGH</td> </tr>
<tr>
<td>LIKELY</td>
<td>MEDIUM</td>
<td>MEDIUM</td>
<td>MEDIUM</td>
<td>HIGH</td>
<td>HIGH</td> </tr>
<tr>
<td>VERY LIKELY</td>
<td>MEDIUM</td>
<td>MEDIUM</td>
<td>HIGH</td>
<td>HIGH</td>
<td>HIGH</td> </tr> </table>
**Table B: Probability-impact matrix**
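
To make the lookup in Table B concrete, it can be encoded as a small script. The sketch below is purely illustrative: the category names come from Table B, but the data structure and function are hypothetical and not part of any GHOST tooling.

```python
# Minimal sketch of the Table B probability-impact lookup (illustrative only;
# category names follow Table B, everything else is hypothetical).
IMPACT = ["TRIVIAL", "MINOR", "MODERATE", "MAJOR", "EXTREME"]

RISK_MATRIX = {
    "RARE":        ["LOW",    "LOW",    "LOW",    "MEDIUM", "MEDIUM"],
    "UNLIKELY":    ["LOW",    "LOW",    "MEDIUM", "MEDIUM", "MEDIUM"],
    "MODERATE":    ["LOW",    "MEDIUM", "MEDIUM", "MEDIUM", "HIGH"],
    "LIKELY":      ["MEDIUM", "MEDIUM", "MEDIUM", "HIGH",   "HIGH"],
    "VERY LIKELY": ["MEDIUM", "MEDIUM", "HIGH",   "HIGH",   "HIGH"],
}

def risk_level(probability: str, impact: str) -> str:
    """Return the qualitative risk level for a probability/impact pair."""
    return RISK_MATRIX[probability][IMPACT.index(impact)]

# Example: a moderate-probability risk with major impact is rated MEDIUM.
assert risk_level("MODERATE", "MAJOR") == "MEDIUM"
```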
## Specific aspects of the GHOST Risk Management Plan
### Roles and responsibilities
All project partners should identify the project risks and give input to the
reports concerning those risks. The risks must be defined and reported through
the progress report template, indicating their likelihood, impact, contingency
plan, responsible partner, and the period of the project in which each risk is
valid and should be monitored.
The Project Coordinator (CRF) with the support of the Back Office (VUB) will
review and assess the identified risks. If necessary, feedback will be
provided to the identified responsible partner. If the risk exposure is
critical or there is a need for further discussion, the Project Coordinator is
responsible for raising the issue during the Executive Board and Consortium
meetings (i.e. General Assembly’s and/or Progress Meetings). In that case, the
mitigation plan must be established through a consensus decision process, and
it may require the involvement of the Project Officer and all the partners.
### Reporting and monitoring
The risks must be identified and reported as part of the progress reports
which are scheduled at nominally 6-month intervals in conjunction with the
General Assembly meetings. All project partners are required to participate in
the identification of the project risks and give input to the progress reports
regarding those risks.
<table>
<tr>
<th>
**Updates of the RMP**
</th>
<th>
</th> </tr>
<tr>
<td>
**Version number**
</td>
<td>
**Project month**
</td>
<td>
**Nature**
</td> </tr>
<tr>
<td>
**1**
</td>
<td>
M8
</td>
<td>
Original version
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
M12
</td>
<td>
Update
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
M18 – RP1
</td>
<td>
Update
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
M24
</td>
<td>
Update
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
M30
</td>
<td>
Update
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
M36 – RP2
</td>
<td>
Update
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
M42 – RP3
</td>
<td>
Update
</td> </tr> </table>
### The Risk Management register
The risks are registered in a repository that is presented below.
The Risk Management register is available on EMDESK, the online management
platform used for GHOST. Work Package leaders are expected to collect feedback
from the partners involved in their Work Packages, revise and if necessary
update the register at the end of each bi-annual internal reporting period.
**Figure 2: Risk register on EMDESK**
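
As an illustration of the register fields described above (likelihood, impact, contingency plan, responsible partner and validity period), one register entry could be modelled as in the following minimal sketch. All field names are our own, and the actual EMDESK register schema is not specified in this document.

```python
# Minimal sketch of one risk register entry with the fields described above.
# Field names are illustrative; the real EMDESK schema is not specified here.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: int
    description: str
    work_packages: list          # affected WPs, e.g. ["WP3", "WP7"]
    likelihood: str              # Table A category: Rare ... Certain
    impact: str                  # impact category: Trivial ... Extreme
    mitigation: str              # contingency / mitigation measure
    responsible_partner: str     # hypothetical; assigned per risk in practice
    valid_from_month: int        # project period in which the risk applies
    valid_to_month: int

# Example entry based on risk no. 3 of Table C (the partner and the validity
# months are hypothetical placeholders).
entry = RiskEntry(
    risk_id=3,
    description="Integration effort of the components higher than expected",
    work_packages=["WP3", "WP7"],
    likelihood="Moderate",
    impact="Moderate",
    mitigation="Upfront virtual design validation where possible",
    responsible_partner="CRF",
    valid_from_month=8,
    valid_to_month=42,
)
```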
### Foreseen risks
An overview on significant risks and associated contingency plans is given in
the Table below. The project management approach provides mechanisms to
identify and resolve potential risks, such as continuous controls of the
project plan with its milestones and key deliverables. The progress and
resource reporting will enable the Project Management team (i.e. Coordinator,
Back-office and Executive Board) to be continuously aware of potential
problems. Hence, the team can initiate countermeasures in a timely fashion
before a problem becomes critical, and fall-back solutions can be defined
and implemented in time.
<table>
<tr>
<th>
**No.**
</th>
<th>
**Description of risk**
</th>
<th>
**WP**
</th>
<th>
**Risk-mitigation measures**
</th>
<th>
**Probability**
</th>
<th>
**Effect**
</th> </tr>
<tr>
<td>
**Delays**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
1
</td>
<td>
Delays in providing the components in time for following WPs activities
</td>
<td>
All
</td>
<td>
Track development progress and focus efforts especially in the most sensible components.
</td>
<td>
Medium
</td>
<td>
High
</td> </tr>
<tr>
<td>
**Performance**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
2
</td>
<td>
Detail level of component models when relevant data is not being supplied by
the partners
</td>
<td>
All
</td>
<td>
Set up 1:1 confidentiality agreements where needed.
</td>
<td>
Medium
</td>
<td>
Medium
</td> </tr>
<tr>
<td>
3
</td>
<td>
Integration effort of the components higher than expected
</td>
<td>
WP3, WP7
</td>
<td>
Upfront virtual design validation needs to be applied where possible.
</td>
<td>
Medium
</td>
<td>
Medium
</td> </tr>
<tr>
<td>
4
</td>
<td>
Final assessment does not meet the expected target
</td>
<td>
WP3,
WP7,
WP8
</td>
<td>
Re-evaluation of specification and recommendations for future improvement.
</td>
<td>
Medium
</td>
<td>
Medium
</td> </tr>
<tr>
<td>
5
</td>
<td>
Component failure during testing
</td>
<td>
WP3, WP4, WP7, WP8
</td>
<td>
Since the proposal phase, the potential critical pathways for component failure have been identified, and the availability of a proper number of spare parts has been planned to avoid delays to the project timing.
</td>
<td>
Medium
</td>
<td>
Medium
</td> </tr>
<tr>
<td>
6
</td>
<td>
Bench devices failure during testing
</td>
<td>
WP3,
WP7,
WP8
</td>
<td>
Back-up plan for alternative test capabilities.
</td>
<td>
Low
</td>
<td>
Low
</td> </tr>
<tr>
<td>
7
</td>
<td>
Availability of high quality li-ion cells
</td>
<td>
WP3, WP4
</td>
<td>
Back-up suppliers involvement.
</td>
<td>
Low
</td>
<td>
High
</td> </tr>
<tr>
<td>
8
</td>
<td>
Availability of high quality prototype Li-S cells
</td>
<td>
WP8
</td>
<td>
Contact with different projects and organisations able to supply prototype
cells.
</td>
<td>
Medium
</td>
<td>
Medium
</td> </tr>
<tr>
<td>
**Collaboration**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
9
</td>
<td>
Poor cooperation between partners
</td>
<td>
ALL
</td>
<td>
In the monthly WP leader phone calls, the effectiveness of the partner interactions will be continuously monitored. If this problem happens, it will be managed by identifying the reasons and solving them at WP level or at Project level, depending on which partners are involved and the nature of the problem.
</td>
<td>
Low
</td>
<td>
High
</td> </tr>
<tr>
<td>
**Management**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
10
</td>
<td>
Partners leave or Partners become insolvent
</td>
<td>
ALL
</td>
<td>
Back-up partners list or inside Consortium solution.
</td>
<td>
Low
</td>
<td>
High
</td> </tr> </table>
**Table C: List of foreseen risks and associated mitigation measures**
### Preliminary Identification of Risks
As indicated in the Grant Agreement, the GHOST Project consortium has
identified at a preliminary stage some of the main barriers, obstacles and
framework conditions that may limit and/or reduce the level of achievement of
the previously described expected impacts.
The achievement of the potential impacts that will be demonstrated via the
GHOST project is dependent upon the market adoption of the technologies.
Therefore, anything putting at risk their cost effective realization can be
considered as potential barrier or obstacle. In particular there are foreseen
to be:
* The lack of industrial and international standards for the investigated architecture solution to set certain technical specifications, dimensions, and mechanical, electrical, and communication interfaces. Without a standard, OEMs and suppliers will be cautious in their development programs because of the financial risks associated with the risk not to be able to achieve economy of scale advantages;
* Uncertain deployment of the European Energy Taxation regulation (COM/2011/169 and COM/2011/168) and a modification of the directive 96/53/EC on weight and dimensions of commercial vehicles will result in planning uncertainty in the automotive supply chain for electrified commercial vehicles;
* The lack of homogeneous, and in some case adequate, government incentives in the different European Member States in order to stimulate the deployment of, in particular, electrified vehicles.
### International Standards
As is documented in the APPENDIX to this report, a preliminary survey of the
type approval regulatory body has been performed in order to identify any
possible hindrance to a future homologation of the developed enhanced vehicle,
taking into account those specifications and requirements which might be
affected by the introduced advanced technologies; special attention has been
paid to the hazardous aspects of the changes planned to the battery pack with
a particular focus on crashworthiness, and the protection of occupants and
vulnerable road users.
# Data Management
## Introduction
According to the H2020 Programme Guidelines on FAIR Data Management in Horizon
2020:
“Data Management Plans (DMPs) are a key element of good data management. A DMP
describes the data management life cycle for the data to be collected,
processed and/or generated by a Horizon 2020 project. As part of making
research data findable, accessible, interoperable and reusable (FAIR), a DMP
should include information on: the handling of research data during and after
the end of the project, what data will be collected, processed and/or
generated, which methodology and standards will be applied, whether data will
be shared/made open access and how data will be curated and preserved
(including after the end of the project).”
The DMP is a living document to be updated over the course of the project
whenever significant changes arise and as a minimum in time with the periodic
evaluation/assessment of the project. The consortium will, at the time of the
bi-annual internal progress reporting, assess whether the DMP needs to be
updated.
The aim of the initial version of the GHOST DMP is to provide a general
overview of data types collected and/or created by different project partners
and on the usage of such data in line with the project objectives. The Horizon
2020 FAIR DMP template has been used and as the project proceeds, more
questions will be addressed in detail, if needed.
<table>
<tr>
<th>
**Updates of the DMP**
</th>
<th>
</th> </tr>
<tr>
<td>
**Version number**
</td>
<td>
**Project month**
</td>
<td>
**Nature**
</td> </tr>
<tr>
<td>
**1**
</td>
<td>
M8
</td>
<td>
Original version
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
M12
</td>
<td>
Assessment whether update is needed
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
M18 – RP1
</td>
<td>
Update
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
M24
</td>
<td>
Assessment whether update is needed
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
M30
</td>
<td>
Assessment whether update is needed
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
M36 – RP2
</td>
<td>
Update
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
M42 – RP3
</td>
<td>
Update
</td> </tr> </table>
## The GHOST DMP
<table>
<tr>
<th>
**Component**
</th>
<th>
**Issues to be addressed**
</th>
<th>
**GHOST Project DMP**
</th> </tr>
<tr>
<td>
1\. Data summary
</td>
<td>
State the purpose of the data collection/generation
</td>
<td>
A wide variety of technical data will be collected, primarily via simulation
and experimental testing during the activities of the project, in order to
enable direct comparison, to validate the simulation models, and to evaluate
the performance of the technical solutions developed.
</td> </tr>
<tr>
<td>
Explain the relation to the objectives of the project
</td>
<td>
The generation of data through the testing and simulation activities will
focus on evaluating the performance of the innovative battery systems
developed during the project. In this context, the generation of data is
directly relevant to the objectives of the project.
</td> </tr>
<tr>
<td>
Specify the types and formats of data generated/collected
</td>
<td>
The exact types and formats of the data generated/collected, whether any
existing data is being re-used, the origin of the data, and the expected size
of specific databases, will be defined in due course as a result of the
testing and simulation activities which are planned to be conducted throughout
the project.
</td> </tr>
<tr>
<td>
Specify if existing data is being reused (if any)
</td> </tr>
<tr>
<td>
Specify the origin of the data
</td> </tr>
<tr>
<td>
State the expected size of the data (if known)
</td> </tr>
<tr>
<td>
Outline the data utility: to whom will it be useful
</td>
<td>
The data will be primarily of use to the partners within the GHOST consortium,
bearing in mind that the exchange of technical information is effectively one
of the fundamental and essential elements of a collaborative technical
research project.
Furthermore key data generated within the project, and reported within the
Public (PU) Domain Deliverable reports, are likely to be useful to other
stakeholders external to the GHOST consortium working in fields related to the
development and deployment of battery systems for
(primarily) automotive applications.
</td> </tr> </table>
<table>
<tr>
<th>
2\. FAIR Data
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
2.1 Making data findable, including provisions for metadata
</td>
<td>
Outline the discoverability of data (metadata provision)
</td>
<td>
</td> </tr>
<tr>
<td>
Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital
Object Identifiers?
</td>
<td>
With respect to the provision and sharing of data, particularly those data
relating to Deliverable reports classified as PU, every effort will be made to
ensure that the data are easily identifiable and that standard identification
mechanisms will be used.
</td> </tr>
<tr>
<td>
Outline naming conventions used
</td>
<td>
Together with the creation and making available of data generated within the
project, the naming conventions used will be specified. At this stage, the
approaches being adopted for keyword searching and versioning will be
specified if appropriate.
</td> </tr>
<tr>
<td>
Outline the approach towards search keyword
</td> </tr>
<tr>
<td>
Outline the approach for clear versioning
</td> </tr>
<tr>
<td>
Specify standards for metadata creation (if any). If there are no standards in
your discipline describe what type of metadata will be created and how
</td>
<td>
In this context it is assumed that the term metadata refers to “data
[information] that provides information about other data”. With respect to
this specific project, no specific existing standards are known. Nevertheless,
if and when metadata are created, the approach adopted will be specified.
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
Specify which data will be made openly available? If some data is kept closed
provide rationale for doing so
</td>
<td>
The data which will be made openly available will correspond to the
Deliverables that are classified as PU and hence will be in the Public Domain.
</td> </tr>
<tr>
<td>
Specify how the data will be made
</td>
<td>
The data will be made available using the EMDESK platform
( _https://www.emdesk.com/en/_ ) which is the instrument
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
available
</th>
<th>
selected for the storage and exchange of all key documents and information
within the project.
</th> </tr>
<tr>
<th>
Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?
</th>
<th>
The EMDESK platform which will be used for data storage and exchange is a
standard instrument which can be accessed directly. Should any specific
instructions be required to facilitate data access, such as information
relating to the nature and origin of the data, then these will be provided
when the data is deposited.
</th> </tr>
<tr>
<th>
Specify where the data and associated metadata, documentation and code are
deposited
</th>
<th>
All relevant documentation, codes, etc. relating to the data to be made openly
accessible will be made available also within the EMDESK platform via public
links that are accessible by those external users with whom the URL has been
shared.
</th> </tr>
<tr>
<th>
Specify how access will be provided in case there are any restrictions
</th>
<th>
The data selected for open access relating to public-domain Deliverable
reports will be made available without restrictions. Conversely data relating
to Deliverables which are classified at Consortium-level will be stored within
the EMDESK platform with access restricted to the partners in the consortium.
</th> </tr>
<tr>
<td>
2.3
Making data interoperable
</td>
<td>
Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you
will follow to facilitate interoperability.
</td>
<td>
Due to the nature of the data which will be generated, primarily from the
simulation and testing of the battery systems to be developed in the project,
the data will be strictly linked to the specific system or component under
investigation. Full details of the respective system/component and
test/simulation will be provided in the corresponding public-domain
Deliverable report relating to the data made available.
</td> </tr>
<tr>
<td>
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-
</td>
<td>
Standard and conventional vocabulary, nomenclature, measurement units will be
used throughout the project. A full glossary of abbreviations and acronyms
will also be provided if and when necessary.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
disciplinary interoperability? If not, will you provide mapping to more
commonly used ontologies?
</th>
<th>
</th> </tr>
<tr>
<td>
2.4 Increase data re-use (through clarifying licences)
</td>
<td>
Specify how the data will be licenced to permit the widest reuse possible
</td>
<td>
Currently there are no plans to license the use of key data related to public-
domain Deliverables, but instead to allow open access to permit wide re-use
during and after the project. Should it instead become necessary to alter this
policy, and license the use of data, this will be specified in a subsequent
version of the GHOST Data Management Plan.
</td> </tr>
<tr>
<td>
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data
embargo is needed
</td>
<td>
In general, the data for open access will be made available for re-use once
deposited on the EMDESK platform, the timing of which would nominally
correspond to the delivery date of the public-domain Deliverable reports to
which the data is related. Should it become necessary, during the course of
the project, to alter this process, e.g. to delay the release of key data with
respect to when the Deliverable report is deposited, then this will be
specified in a subsequent release of the GHOST DMP.
</td> </tr>
<tr>
<td>
Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why
</td>
<td>
The key data which will be made publicly available, and hence usable by third
parties during the course of the project, will remain available for between
12 and 36 months following project completion.
</td> </tr>
<tr>
<td>
Describe data quality assurance processes
</td>
<td>
The data quality assurance process is outlined below.
</td> </tr>
<tr>
<td>
Specify the length of time for which the data will remain reusable
</td>
<td>
Typically data will remain usable for 24 months after project completion.
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
Estimate the costs for making your data FAIR. Describe how you intend to cover
these costs
</td>
<td>
The task of making key data from the project FAIR was envisaged from the
outset, and the associated costs are therefore covered by the project budget.
</td>
<td>
</td> </tr>
<tr>
<td>
Clearly identify responsibilities for data management in your project
</td>
<td>
The responsibilities for data management are described below.
</td>
<td>
</td> </tr>
<tr>
<td>
Describe costs and potential value of long term preservation
</td>
<td>
The cost of long-term preservation (i.e. for longer than 36 months) will be
estimated if and when the need becomes apparent. Currently, long-term
preservation is not planned, since it is likely that data will become obsolete
within a 2-3 year timeframe following completion of the project due to new
technical developments and innovations.
</td>
<td>
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive data
</td>
<td>
It is foreseen that none of the data to be stored and made available will be
of a sensitive nature.
</td>
<td>
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td>
<td>
No aspects which may be considered to be sensitive from an ethical perspective
will be addressed during this project.
</td>
<td>
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td>
<td>
References to other national/funder/sectorial/departmental procedures for data
management will be made in due course if used during the project.
</td>
<td>
</td> </tr> </table>
**Table D: Overview of the GHOST DMP at project launch**
## Overview of individual WPs with respect to the generation of data
WP2 (Requirements & Specifications) collects requirements from the application
point of view, related to the vehicle demonstrators, to define the complete
Battery System specification for both PHEV and BEV applications. At the same
time, the available cell technologies will be investigated in order to target
the most suitable for the application and have the most promising tested
within WP4. In parallel, WP2 defines the test typologies needed to assess the
technology development of GHOST against the requirements previously listed.
In WP3 (Modular Battery Architecture, Design, Prototyping and Manufacturing),
first an investigation and an evaluation of the best suited battery module
architecture will be performed followed by a detailed design of the modular
battery modules for the targeted P-HEV and BEV battery systems. The outcome
will contain data relating to detailed mechanical, electrical and thermal
design of two battery systems based on the same flexible and scalable battery
modules that enable thermally optimized ultra-fast charging at up to 350kW
(for the BEV system).
WP4 (Battery Cell Characterization, Modelling and Testing for Automotive &
Second Life Batteries) focuses on developing a good understanding of the
behaviour of the battery cells considered in the project. In this work package
the cells will be characterized experimentally for the development of
electrical, thermal and lifetime models, hence generating data from
measurements. Furthermore, WP4 will provide the required information and
technical background regarding the thermal behaviour of the battery cells,
which in WP3 will be extended to module and battery system level. In addition,
in this Work Package dedicated analysis will be carried out on advanced
battery technologies e.g. Li-S and on considered lithium-ion battery cells
during second life.
In WP5 (System Safety Validation of the Advanced Battery System), a
methodology to build up a multi-aspect system assurance case of the novel
battery system will be developed in order to demonstrate system safety for
different system environments (e.g. P-HEV, BEV) and different use cases (e.g.
1st life as battery in the vehicle, 2nd life as storage system). A
multi-concern safety analysis approach will then be applied to perform the
battery system safety analysis; ISO 26262 will be considered as the initial
basis for the analysis, using methods suggested by ISO 26262 such as hazard
analysis and risk assessment. The outcomes of these analyses will provide the
baseline data for performing appropriate safety analyses, such as FMEA, in
order to identify the safety-critical elements of the general concepts. The
analysis will consider, in an integrated approach, the electrical, thermal and
functional safety aspects of the battery system. A comparative analysis and
impact evaluation of installing the battery system in different system
environments, with regard to achieving system safety in different contexts,
will also be performed. Once the multi-concern safety of the general concepts
has been analysed, the defined safety measures will be verified under lab
conditions but with realistic built-in and operational conditions.
WP6 (Vehicle Integration, Testing and Demonstration) focuses on the in-vehicle
integration of the previously developed and commissioned battery systems for
Li-ion applications. In WP6, actual vehicle testing activities in real
conditions, either on road or on rolling bench, will be performed and hence
measurement data will be generated.
Finally, a comprehensive assessment of the battery system versus the
specifications defined in WP2 will take place, considering all the tests
performed in WP4, WP5 and WP6 and also the outcomes of the second life testing
and analysis performed in WP7.
In WP7 (Dismantling and Second life use of batteries), dismantling studies on
existing batteries will provide data on how to improve the design of the new
battery pack, with the target of reducing effort and cost in EoL management,
including optimising the new packs for remanufacturing, repair, reuse and
second life. This will finally be demonstrated by a design approval based on
dismantling studies. Furthermore, a future-oriented concept for reuse and
2nd-life application will be developed and technically and commercially
evaluated to show beneficial solutions for the expanded usage of battery packs
and/or valuable components after the first life.
In WP8 (Dual Battery System Design, Prototyping and Bench Testing), a
high-power Li-ion and Li-S dual battery concept will be studied with the aim
of increasing the energy density and efficiency of current Li-ion-based
battery systems. The modular concept for Li-ion technology developed in WP3 is
used and adapted for the design of a dual battery system, together with the
development of the DC/DC converter that this novel system requires. The novel
dual battery system will be validated and assessed using a scaled-down
prototype, in order to carry out a quantitative comparison between the dual
battery system concept and the Li-ion modular one.
<table>
<tr>
<th>
Number
</th>
<th>
Title
</th>
<th>
Type
</th>
<th>
WP
</th>
<th>
Lead partner
</th>
<th>
Due
</th> </tr>
<tr>
<td>
D1.2
</td>
<td>
Risk and data management plan
</td>
<td>
Report
</td>
<td>
WP1
</td>
<td>
CRF
</td>
<td>
M6
</td> </tr>
<tr>
<td>
D2.2
</td>
<td>
Cells specification report for battery system prototype
</td>
<td>
Report
</td>
<td>
WP2
</td>
<td>
VUB
</td>
<td>
M6
</td> </tr>
<tr>
<td>
D2.3
</td>
<td>
Concept validation plan report
</td>
<td>
Report
</td>
<td>
WP2
</td>
<td>
CRF
</td>
<td>
M8
</td> </tr>
<tr>
<td>
D3.3
</td>
<td>
Prototyping, commissioning and functional verification of the designed battery
systems
</td>
<td>
Demonstrator
</td>
<td>
WP3
</td>
<td>
AVL
</td>
<td>
M28
</td> </tr>
<tr>
<td>
D4.1
</td>
<td>
Methodology test, characterisation test and electro-thermal battery model
report
</td>
<td>
Report
</td>
<td>
WP4
</td>
<td>
VUB
</td>
<td>
M12
</td> </tr>
<tr>
<td>
D8.2
</td>
<td>
Assembly, commissioning of the dual cell Li-ion, Li-S battery module and
validation & assessment of the dual battery system concept
</td>
<td>
Demonstrator
</td>
<td>
WP8
</td>
<td>
IKERLAN
</td>
<td>
M40
</td> </tr>
<tr>
<td>
D9.1
</td>
<td>
Public website
</td>
<td>
Website
</td>
<td>
WP9
</td>
<td>
VUB
</td>
<td>
M3
</td> </tr>
<tr>
<td>
D9.2
</td>
<td>
Dissemination and communication plan
</td>
<td>
Report
</td>
<td>
WP9
</td>
<td>
VUB
</td>
<td>
M6
</td> </tr>
<tr>
<td>
D9.4
</td>
<td>
Report on liaison with ongoing relevant EU projects in the field of battery
system development and testing
</td>
<td>
Report
</td>
<td>
WP9
</td>
<td>
VUB
</td>
<td>
M42
</td> </tr> </table>
**Table E: List of GHOST deliverables with Public dissemination level**
## Quality Assurance and Responsibilities
### Scientific and Technical Quality
The quality of the overall outcome of the project is primarily dependent upon
the quality of the execution of the innovation and demonstration activities.
Formally, the quality of the work is monitored throughout the project by the
General Assembly, the Executive Board and the Project Coordination team.
Informally, each and every project team member, including the WP leaders and
the Coordinator, has the responsibility to critically consider the quality of
the work and strive for the best possible results. Potential deviations from
the project plan must be anticipated and identified in a timely manner to
allow mitigating actions to be developed and planned. In this way, quality can
subsequently be maintained by taking suitable corrective actions to recover
the deficiency in output or time delay. In this process, particular attention
will be paid to monitoring, and supporting good communication and cooperation
between work packages in order to avoid a fragmentation of the activities,
which could lead to a mismatch between interrelated work packages.
### Quality of Results
The formal deliverables of the GHOST project are the output of the research
and innovation activities and, as such, should be high-quality representations
of the activities undertaken. The quality of the deliverables will be managed
through a straightforward review process, which was agreed during the Kick-off
meeting, in the Consortium Agreement and described in D1.1.
The quality assurance mainly aims at the quality of deliverables by ensuring
smooth cooperation within the consortium and defining the process of decision
making and knowledge sharing. The deliverables primarily shall deliver all the
initially agreed information which is required by partners to carry out their
own work and to fulfil their obligations in the project. Furthermore, the
deliverables must be in line with the project targets.
During the Kick-off Meeting it was agreed that the Executive Board is the main
body for quality assurance. The procedure rests on the premise that the author
of the deliverable is the technical expert on the topic of the deliverable and
as such is responsible for the technical content. The work package leader
should check the deliverable, with a focus on its consistency with the
objectives of the work package and its fit with the overall work within the
work package (consistency check). Furthermore, all deliverables are reviewed
by at least one, and ideally two, reviewers who are not directly involved in
the preparation of the deliverable. The Project Coordinator performs the final
review of the deliverable, focusing on its general fit with the project
objectives. If the deliverable serves as input for other work packages, focus
should be put on whether the deliverable serves this purpose, at both a
qualitative and a quantitative level. Subsequently, the Project
Coordinator is responsible for the delivery of the report to the EC.
Another important aspect of quality management is the administrative processes
of a project. The Project Coordinator, CRF, complies with the procedures
indicated by the ISO 9001 certified Quality System. The Project Coordinator,
strongly supported by VUB, will regularly monitor administrative processes in
such matters as finances and any legal issues which may arise. Key points to
be monitored, such that the administration of the project is traceable and
justifiable, include finances, risks and risk mitigation, changes which may
affect the Grant Agreement, and timing and reporting.
The Project Coordinator, CRF, with the support of the Back-Office, VUB,
coordinates the periodic management reports and the final report. In this
task, the H2020 Tool, which was specifically designed for the collection of
detailed use of resources and the technical status of projects, will be used.
The Coordinator will collect, check and send to the EC the required cost
statements, on the basis of the scheduled plan. The Project Coordinator with
the support of the Back-Office will monitor progress through the nominally
monthly Executive Board meetings, bi-annual Work Progress reports and annual
General Assembly meetings.
All finalised deliverables will be stored on the internal EMDESK platform for
partners. All deliverables with a public (PU) dissemination level will be made
available on EMDESK via public sharing links, also for external users, and
will additionally be placed on the GHOST public website. If, instead, a
deliverable has a confidential (CO) dissemination level, only the publishable
executive summary will be placed on the public website. Furthermore, the key
data relating to the public deliverables which will be made available
according to the GHOST DMP (see Table D) will be hosted on the EMDESK platform
and accessible to those who possess the public link.
EMDESK, the GHOST platform, serves as the central data storage and secure
repository for document management and sharing. It allows the project partners
to define specific permissions for folders or documents, to track document
versions, to share large documents or datasets, to share direct document links
e.g. via email, to make documents accessible to the public via public sharing
links, to assign documents to related project items for easy and quick
document retrieval, and to receive email digests on activities in the document
manager.
Within the project, document templates have been developed for deliverables.
These templates can be found on EMDESK. As outlined in D1.1, documents shall
be named in a way that makes them easily identifiable and findable. The
suggested pattern for titling project documents is: [Project] - [WP] -
[Document] - [Owner] - [Version].
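As a minimal illustration of this convention, the sketch below assembles a
document title from the five bracketed fields; the function name and the
example field values are hypothetical, only the pattern itself comes from
D1.1.

```python
# Sketch: assembling a document title per the D1.1 convention:
# [Project] - [WP] - [Document] - [Owner] - [Version]
def document_title(project: str, wp: str, document: str,
                   owner: str, version: str) -> str:
    """Join the five agreed fields with ' - ' separators."""
    return " - ".join([project, wp, document, owner, version])

# Hypothetical example values:
print(document_title("GHOST", "WP4", "CellTestReport", "VUB", "v1.0"))
# GHOST - WP4 - CellTestReport - VUB - v1.0
```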
For every deliverable the author will take the review comments into account. A
motivated explanation is required if it is not possible to process the
comments as requested by the reviewer, or if the author(s) disagree with the
comments received.
In case of remaining disagreement, the Project Coordinator will guide the
process and will ensure a convergence of the process towards a final result.
## IPR: handling of intellectual property rights and data access
All the partners have signed a Consortium Agreement. This agreement addresses
the exploitation of the results and the patent and licensing issues as well as
procedures with respect to the dissemination of results. A guiding rule is
that partners investing in research should have an advantage compared to those
who do not. This means that knowledge created during projects that offer
commercial interest must be safeguarded and protected for exploitation by the
owner. On the other hand, partners of this project need to come together in
order to collaborate and benefit from their respective resources and
competencies. Thus, added value through the sharing of knowledge and promoting
exploitation represents a clear objective and driving forces of this
collaborative project. This approach to knowledge management and IPR is
detailed and regulated in the Consortium Agreement, which has been signed by
all partners at the start of the project.
Some of the major aspects covered are indicated below:
* Background knowledge, specific limitations and/or conditions for implementation and for exploitation; Project Results, their (joint) ownership and the transfer of Results;
* Access rights to Background and Results, for consortium partners and their affiliates;
* Publications, procedures for dissemination of results and research data and open access hereto.
Background (= existing know how or pre-existing intellectual property) of a
specific partner shall be made available to the Partner (or Partners) within
the consortium that needs this information for the proper execution of their
tasks within the scope of the project. The use of Background is strictly
limited to the achievement of the project goals and to the duration of the
project. The receiving partner or partners will sign appropriate non-
disclosure agreements with the providing partner. An overview of the
Background was included as an annex to the consortium agreement. All partners
shall be entitled to license their Background. Licensing of Background to
third parties will be done on commercial conditions whereas licensing of
Background to partners of the consortium will be done on fair and reasonable
conditions. Results (e.g. results, including intellectual property generated
in the project) shall be owned by the partner or partners who developed the
results.
Each partner is responsible for taking the appropriate steps for securing
intellectual property of the knowledge or results created in the project (e.g.
filing of patent applications). Each partner is obligated to fully inform the
project coordinator of the filing of patent applications of knowledge or
results created in the field of the project within two weeks of the date of
filing. Each partner that owns a specific Result shall be free to exploit
their Result as it sees fit. Appropriate joint ownership agreements will be
drawn up. The participating research institutes/universities are entitled to
use knowledge or results from the project, which either have been published or
have been declassified, for research and teaching purposes. The project’s
website will also contain an overview/archive of all published information
and/or links hereto.
Access Rights to Background and Results shall be free of charge to partners of
the consortium for research and development purposes within the scope and the
duration of the project. Access Rights to Background and/or Results that are
owned by one or more of the partners shall be granted on fair and reasonable
conditions and to the extent necessary to enable these partners to exploit
their own results. For this purpose, the involved partners are entitled to
conclude appropriate (bi, tri or multilateral) license, supply, product and/or
service agreements.
# APPENDIX Overview of Relevant International Safety Standards
The international standards concerning type-approval requirements for the
general safety of motor vehicles have been reviewed and analyzed with
reference to the technical innovations in the framework of the GHOST project.
Within the European Union, two systems of type approval are in force. The
first is based on the European Commission directives and regulations, and
targets the entire vehicle system as well the subsystems and components. The
second is based on the UN regulation, and targets subsystems and components of
the vehicle, but not the whole vehicle.
The main directive for the vehicle type approval in the EU is the 2007/46/EC
of the European parliament and of the council. The directive establishes a
framework for the approval of motor vehicles and their trailers, and of
systems, components and separate technical units intended for such vehicles.
This framework poses requirements for the different subsystems constituting
the vehicle. Hence, to gain whole-vehicle approval, the various subsystems
shall be verified for compliance.
The above-mentioned directive provides a list of regulatory acts for EC type-
approval of vehicles produced in unlimited series and includes, for example:
* General Safety Regulation (EC) No 661/2009
* Electric safety: Regulation (EC) No 661/2009 and UNECE Regulation No 100
The main directive for the vehicle type approval with respect to the General
safety is the Regulation (EC) No 661/2009. It sets out requirements regarding
both the general safety of motor vehicles and the environmental performance of
tires.
The subsystems reported in Annex 1 of the regulation are in scope of the
requirements expressed in Article 5 (1) and (2), e.g.:
1. Manufacturers shall ensure that vehicles are designed, constructed and assembled so as to minimize the risk of injury to vehicle occupants and other road users,
2. Manufacturers shall ensure that vehicles, systems, components and separate technical units comply with the relevant requirements set out in this Regulation and its implementing measures, including among other the requirements relating to:
* vehicle structure integrity, including impact tests;
* systems to provide the driver with visibility and information on the state of the vehicle and the surrounding area, including glazing, mirrors and driver information systems;
* electromagnetic compatibility;
* heating system
* electrical safety
In general, pursuant to par. 7 of UNECE Regulation No. 94, any increase in the
mass of the vehicle greater than 8 per cent might imply a bigger testing
effort in order to demonstrate compliance with the regulation provisions,
depending on the judgement of the Type Approval Authority.
Product certification is a fundamental precondition for establishing a product
on the market. The certification comprises the tests regarding required standards
and robustness. Generally, depending on the mission profile the vehicle
manufacturer specifies the requirements on the electrical and electronic
equipment. The supplier of the Battery System is liable to observe these
requirements and to perform the appropriate tests to ensure functionality,
reliability and safety.
The Battery System, as an electric device, has to undergo various tests, and
likewise its electrical and electronic components have to meet various
requirements and standards. Such components are, for example, circuit boards,
sensors, actuators, integrated circuits (ICs), semiconductors, active and
passive components, etc.
An important standard for qualification of electrical and electronic equipment
in road vehicles is the ISO 16750, Road vehicles - Environmental conditions
and electrical testing for electrical and electronic equipment. The ISO 16750
[4-8] applies to electric and electronic systems and components for vehicles.
It describes the potential environmental stresses and specifies tests and
requirements recommended for the specific mounting location on or in the
vehicle. The ISO 16750 consists of the following 5 parts:
* ISO 16750-1: General
* ISO 16750-2: Electrical loads
* ISO 16750-3: Mechanical loads
* ISO 16750-4: Climatic loads
* ISO 16750-5: Chemical loads
Similarly to the device level, the electrical and electronic components that
make up the Battery System have to undergo tests before assembly. In general,
the responsible international standardization organisations for
electrotechnical and electronic applications are:
* IEC -International Electrotechnical Commission
* CENELEC -European Committee for Electrotechnical Standardization
* JEDEC Solid State Technology Association
In the automotive field, the Automotive Electronics Council (AEC) based in the
United States was established for the purpose of setting common part
qualification and quality-system standards for the supply of components in the
automotive electronics industry. The AEC Component Technical Committee is the
standardization body for establishing standards for reliable, high quality
electronic components. Components meeting these specifications are suitable
for use in the harsh automotive environment without additional component-level
qualification testing.
The quantity, value and complexity of electronics in passenger vehicles
continue to rise. It is therefore suggested that the vehicle manufacturer
ensure that the component supplier meets the relevant AEC standards, starting
with the AEC-Q100 norm, which in turn refers to many JEDEC standards.
JEDEC has been the global leader in developing open standards and publications
for the microelectronics industry. JEDEC brings manufacturers and suppliers
together to participate in more than 50 committees and subcommittees, with the
mission to create standards to meet the diverse technical and developmental
needs of the industry. JEDEC’s collaborative efforts ensure product
interoperability, benefiting the industry and ultimately consumers by
decreasing time-to-market and reducing product development costs. The JEDEC
Automotive Electronics Forum brings together experts from the worldwide
automotive electronics industry to evaluate current standardization efforts
and future industry needs.
* AEC - Q101 - Failure mechanism based stress test qualification for discrete semiconductors
* AEC - Q200 - Stress test qualification for passive components
The AEC standard is well-known to customers of electrical and electronic
components. The application and compliance with AEC standards, respectively,
create clarity in reliability and standardization issues and save time in
communication between supplier and customer.
Signal integrity is also an important issue to consider regarding electrical
requirements. Generally, signal integrity is closely related to EMC: if a
design is implemented with its physical requirements taken into account, both
fields achieve the best results. As far as the enquiry established, there is
no particular standard dedicated to signal integrity. Signal integrity is a
set of measures describing the quality of an electric signal. Some of the main
issues of concern for signal integrity are ringing, crosstalk, ground bounce,
distortion, signal loss, and power supply noise. Today's transfer rates
require a combination of simulation, modeling and measurement in order to
avoid signal integrity issues already at the design stage.
When integrating fast data lines, signal integrity is one of the most
important parameters at all levels of electronics packaging and assembly (from
internal connections of an IC, through the package, the printed circuit board
(PCB), the backplane, and inter-system connections) because various effects
can degrade the electrical signal to the point where errors occur and the
system or device fails.
The design of a new battery module with integrated thermal management requires
that attention is given to the safety requirements for vehicles equipped with
an electric power train in the event of a frontal or lateral collision,
pursuant to UNECE Regulations No. 94 and No. 95. Following the tests conducted
in accordance with the procedures defined in Annex 3 to Regulation No. 94 and
in Annex 4 to Regulation No. 95, the electrical power train operating on high
voltage, and the high-voltage components and systems which are galvanically
connected to the high-voltage bus of the electric power train, shall ensure
that the vehicle passengers are not exposed to voltages higher than 30 VAC or
60 VDC; alternatively, the total energy (TE) on the high-voltage buses shall
be within the limits established in Annex 11 of UNECE Regulation No. 94 or
Annex 9 of UNECE Regulation No. 95.
In the period until 30 minutes after the impact, no electrolyte from the
Rechargeable Energy Storage Systems (REESS) shall spill into the passenger
compartment, and no more than 7 per cent of the electrolyte shall spill from
the REESS.
REESS located inside the passenger compartment shall remain in the location in
which they are installed and REESS components shall remain inside REESS
boundaries.
No part of any REESS that is located outside the passenger compartment for
electric safety reasons shall enter the passenger compartment during or after
the impact test.
Complying also with Regulation No. 100, Uniform provisions concerning the
approval of vehicles with regard to specific requirements for the electric
power train, protection degree IPXXD against direct contact with high-voltage
live parts should be provided, and the resistance between all exposed
conductive parts and the electrical chassis shall be lower than 0.1 ohm when
there is a current flow of at least 0.2 ampere.
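As a minimal illustration of the numerical limits cited above, and ignoring
the total-energy alternative allowed by the regulations, the following sketch
checks measured post-test values against the voltage and bonding-resistance
thresholds; the function names and example measurements are assumptions.

```python
# Sketch: checking measured post-impact values against the limits cited in
# the text (30 VAC / 60 VDC exposure; < 0.1 ohm bonding at >= 0.2 A).
V_AC_LIMIT = 30.0            # volts AC
V_DC_LIMIT = 60.0            # volts DC
BOND_RESISTANCE_LIMIT = 0.1  # ohm
MIN_TEST_CURRENT = 0.2       # ampere

def passes_voltage_limits(v_ac: float, v_dc: float) -> bool:
    """True if occupants are not exposed to voltages above the limits."""
    return v_ac <= V_AC_LIMIT and v_dc <= V_DC_LIMIT

def passes_bonding_check(resistance_ohm: float, test_current_a: float) -> bool:
    """True if chassis bonding resistance is below 0.1 ohm, measured with a
    test current of at least 0.2 A."""
    return (test_current_a >= MIN_TEST_CURRENT
            and resistance_ohm < BOND_RESISTANCE_LIMIT)

# Hypothetical measurements:
print(passes_voltage_limits(12.0, 48.0))   # True
print(passes_bonding_check(0.05, 0.25))    # True
```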
Isolation resistance should meet the prescriptions of Regulation No. 100,
besides those set out in Regulation No. 94, where it is measured as indicated
in Annex 4A of that regulation.
The REESS shall pass the tests established in Annex 8 to Regulation No. 100,
such as the vibration test and the thermal shock and cycling test; moreover,
fire resistance shall also be ensured in compliance with Regulation No. 100.
Besides, since the employment of battery cells other than those used in the
current vehicle is planned for the prototype battery module, external
short-circuit protection, overcharge/overdischarge protection and
over-temperature protection shall also be ensured pursuant to Regulation
No. 100.
# 1 INTRODUCTION
A Data Management Plan (DMP) describes the data management life cycle for the
data to be collected, processed and/or generated by a Horizon 2020 project 1
. As part of making research data findable, accessible, interoperable and
reusable ('FAIR'), a DMP should include information on:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved (including after the end of the project).
The due date of the first version of the DMP is month 6 (i.e. 28 February
2017).
The first version of the DMP does not provide detailed answers to all the
questions in Annex 1 of the guideline. The DMP is intended to be _a living
document_ in which information can be made more detailed and specific through
updates as the implementation of the project progresses and when significant
changes occur. Thus, the DMP has a clear version number and includes a
timetable for updates.
The DMP must be updated, as a minimum, in time with the periodic
evaluation/assessment of the project, or whenever significant changes arise,
such as (but not limited to):
* new data
* changes in consortium policies (e.g. new innovation potential, decision to file for a patent)
* changes in consortium composition and external factors (e.g. new consortium members joining or old members leaving).
This first version of the DMP focuses mainly on data management in the four
lead partnerships, and presents the initial thoughts on data collection and
management in the 12 local partnerships. A more comprehensive description for
the local partnerships will be included in the second version of the DMP in
M18 (January 2018).
The second version of the DMP will also include aspects of the General Data
Protection Regulation 2 , which shall apply from 25 May 2018, and of the
Directive 3 , to be transposed by 6 May 2018.
# 2 DATA SUMMARY
## 2.1 WP3: PLASTIC WASTE
Plastic waste data collected in WP3 will support decision making regarding the
transition towards a more sustainable use of plastic materials. The aim is to
generate and collect data which provide a more accurate picture of the
recycling potentials and the environmental, social and economic impact of
different types of plastic waste, with an emphasis on flexible plastics.
The data generated in WP3 might be utilised to qualify the efficiency and
quality of the three new collection schemes, which will be set up as part of
task 3.1. Furthermore, it will be used to describe the challenges and
possibilities that the industrial partners experience when recycling household
plastic waste in new products as well as the specific requirements regarding
waste composition, cleanliness, and uniformity that they set for their
production processes. Thus, the data will be useful for waste handlers across
the value chain and production companies with an interest in utilising more
recycled plastics in their production. Furthermore, the data might be useful
for other municipalities/local authorities aiming to increase their recycling
rates of household plastic waste and improving their collaboration efforts
with the private industry.
Specifically, the following data collection is expected:
For task 3.1 data will be collected to monitor the efficiency of the three
collection schemes for household plastic waste, which will be set up. This
includes:
* Quantitative data on:
  * material quantities (kg)
  * time of collection (week no.)
  * composition (PET, HDPE, PP, flexible plastics, other plastics, contaminants)
  * NIR sorting efficiency (%)
  * colours
  * economic value (euro)
  * avoided CO2 emissions from activities (tonnes; see the sketch after this list)
* Qualitative data from:
  * citizens (e.g. focus group interviews)
  * retailers
The qualitative data will concern citizens' perception and use of the sorting
schemes as well as their suggestions for improvement.
* Other: data in the format of photos and videos of the waste materials and the collection and sorting equipment.
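To illustrate how the avoided-CO2 figure listed above could be derived from
the collected material quantities, the following sketch applies
material-specific conversion indicators; the factor values are placeholders
for illustration, not project data.

```python
# Sketch: estimating avoided CO2 emissions (tonnes) from collected plastic
# quantities (kg). The conversion factors are illustrative placeholders.
AVOIDED_CO2_T_PER_KG = {
    "PET": 0.0015,
    "HDPE": 0.0012,
    "PP": 0.0011,
    "flexible": 0.0009,
}

def avoided_co2_tonnes(quantities_kg: dict) -> float:
    """Sum avoided CO2 over the collected material fractions."""
    return sum(kg * AVOIDED_CO2_T_PER_KG.get(material, 0.0)
               for material, kg in quantities_kg.items())

print(avoided_co2_tonnes({"PET": 12000, "HDPE": 8000}))  # ~27.6
```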
We will reuse existing data from the City of Copenhagen for the drafting of
the baseline analysis (D3.1). This includes data regarding collection rates
and frequency, collection scheme coverage, and waste recycling. Furthermore,
it includes data from the annual citizen satisfaction survey. The baseline
analysis will also include reused data that originates from continuous surveys
conducted by the City of Copenhagen and partners regarding the efficiency of
existing waste collection schemes.
For task 3.2 it is necessary to collect a number of data in order to identify
20 promising applications. The data includes material demands for the
different products and data for quality of obtained polymeric materials from
processing of different types of collected and pre-sorted materials. For the
20 promising applications, sufficient data must be generated to select 10
product applications, which will be tested at production facilities and used
to prepare business cases.
The quantitative data will include specifications of amounts and
composition/quality characterisation of the materials. In connection with the
10 promising products, end-market potentials for three kinds of products will
be estimated. The data will be based on data generated in the project for
treatment of the collected materials, supplemented with existing knowledge
from the participating companies regarding their products and technologies.
It is difficult to assess the size of the database generated through WP3.
However, the aim is to generate data from the management of around 1,000
tonnes of plastic waste.
## Local partnerships on plastic waste
In the table below, we have presented the initial thoughts on data collection
and management for the three local partnerships on plastic waste.
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Activity**
</th>
<th>
**What**
</th>
<th>
**How**
</th>
<th>
**How to store the data**
</th> </tr>
<tr>
<td>
**SRH**
**Hamburg**
</td>
<td>
Provision of collection infrastructure (Purchase of 20 grid boxes for 10
receipt stations, 1 open container, 1 additional press container)
</td>
<td>
Number of filled grid boxes/time, ideally per recycling station; weight of
filled boxes (tbd)
</td>
<td>
Manual documentation at receipt stations (by staff); digital documentation:
unclear if data (weight) can be differentiated by receipt station
</td>
<td>
Internally:
Documentation in SAP ERP system / container delivery
(not single boxes)
Aggregate (anonymous) data on Confluence
</td> </tr>
<tr>
<td>
Waste composition
</td>
<td>
To be developed
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**City of Lisbon**
</td>
<td>
Event exhibition with 10 urban pieces
</td>
<td>
Amount of plastic (kilograms)
</td>
<td>
Weight
</td>
<td>
Municipal data server in data sheets
</td> </tr>
<tr>
<td>
CO 2 footprint
</td>
<td>
CO 2 calculation (conversion indicators)
</td>
<td>
Aggregated data in confluence
</td> </tr>
<tr>
<td>
Number of collaborating artists
</td>
<td>
Counting
</td>
<td>
</td> </tr>
<tr>
<td>
**City of**
**Genoa**
</td>
<td>
To be developed
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## 2.2 WP4: STRATEGIC METALS
To gather information about the supply of and demand for used electronics for
the analysis of the market for second-hand equipment, we will collect data
from online shops (e.g. ebay, rebuy, momox, Shpock and NGO advertisements) and
local advertisements in newspapers. This will be done with Big Data
technology.
Through a portal/app for collection points and repair services, we will gather
information about all registered repair facilities for electronic devices;
this data will be geo-referenced so that it can be searched with suitable
parameters and displayed on a public portal. All facilities will be able to
update their entries continuously, e.g. to add a link to their own websites.
The Stadtreinigung Hamburg provides information on recycling areas, storage
containers and other recycling possibilities. Websites which provide repair
instructions will also be connected.
The DST development in WP4 and WP7 is based on the same Big Data collection.
The designated users and their user interfaces will be quite different, as
will the corresponding responses to the users.
In WP4 and WP 7.5, two separate data streams are used in the context of the DST:
\- Day to Day DST (only developed and applied in the context of WP4) 2 :
* Data is collected in an ongoing process from internet sites such as Ebay. The regularly collected data is used for real-time analysis about specific EEE (e.g. used electronic goods) and their availability, selling prices and options for repair. The results of the analysis will be available for citizens and other actors via the App/Portal (to be developed in WP4). The App/Portal only provides information about the real-time situation. The Day to Day DST will not provide any data analysis over longer periods of time.
* The App/Portal will have areas of limited access (i.e. only registered users can access certain areas): Registered users can provide individual information on e.g.
disassembling of equipment, locations of repair shops and their offerings etc.
for publication. A database of personal repair instructions can be created.
* The end product (App/Portal) can be useful to everyone (users, repair cafés, recycling companies etc.) to inform decisions on how to reuse, repair and/or recycle EEE. Users can adjust their behaviour according to the real-time information provided.
\- DST (developed in WP7 for the waste streams EEE and wood): Provides
specific analysis based on the sampled data from the Day to Day DST within a
given timeframe, i.e. the realtime data gathered by the Day to Day DST is
aggregated, stored and analyzed for longer time periods. Changes in waste
generation, resale, reuse, repair and recycling can be analyzed over longer
time periods. This information can be used to inform decision makers from the
public and private sectors on how to steer processes in order to increase the
reuse, repair and/or recycling of EEE. Data generated by the analysis function
of DST can be used by other parties interested in such statistical information
(e.g. decision makers in waste collection companies).
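To illustrate the distinction, the sketch below aggregates the kind of
real-time samples gathered by the Day to Day DST into per-month statistics of
the sort the WP7 DST would analyse over longer periods; the record fields are
illustrative assumptions, not the actual DST schema.

```python
# Sketch: aggregating Day-to-Day samples into longer-period statistics.
from collections import defaultdict
from statistics import mean

def aggregate_by_month(listings):
    """Group listing records by (month, category) and summarise them."""
    groups = defaultdict(list)
    for item in listings:
        groups[(item["date"][:7], item["category"])].append(item["price_eur"])
    return {key: {"count": len(prices), "avg_price_eur": round(mean(prices), 2)}
            for key, prices in groups.items()}

samples = [
    {"date": "2017-03-02", "category": "smartphone", "price_eur": 80.0},
    {"date": "2017-03-15", "category": "smartphone", "price_eur": 100.0},
]
print(aggregate_by_month(samples))
# {('2017-03', 'smartphone'): {'count': 2, 'avg_price_eur': 90.0}}
```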
The origin of the continuously collected data is the internet (websites such
as Ebay). The offers and sales of specific EEE (or furniture) on internet-
based second-hand markets will be downloaded. Personal data, such as names or
e-mail addresses, will be excluded from sampling. All data will be stored in
MongoDB in XML and JSON formats.
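A minimal sketch of that storage step, assuming a local MongoDB instance and
stripping personal fields before insertion; the database, collection and field
names and the PII list are illustrative assumptions.

```python
# Sketch: storing a scraped second-hand listing in MongoDB with personal
# data excluded before insertion. Names are illustrative assumptions.
from pymongo import MongoClient

PERSONAL_FIELDS = {"seller_name", "email", "phone"}  # assumed PII fields

def store_listing(raw_listing: dict) -> None:
    """Insert a listing into the 'listings' collection without PII."""
    cleaned = {k: v for k, v in raw_listing.items()
               if k not in PERSONAL_FIELDS}
    client = MongoClient("mongodb://localhost:27017")
    client["force_dst"]["listings"].insert_one(cleaned)

store_listing({
    "source": "ebay",
    "category": "EEE",
    "title": "Used smartphone",
    "price_eur": 80.0,
    "seller_name": "Jane Doe",  # stripped before storage
})
```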
## Local partnerships on strategic metals
In the table below, we have presented the initial thoughts on data collection
and management for the three local partnerships on strategic metals.
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Activity**
</th>
<th>
**What**
</th>
<th>
**How**
</th>
<th>
**How to store the data**
</th> </tr>
<tr>
<td>
**City of**
**Copenhagen**
</td>
<td>
Measuring tests
results
</td>
<td>
Amount/volume
Value
Costs
</td>
<td>
Collection vehicle or manual weighing Assessment by partners
</td>
<td>
Placed in relevant City of Copenhagen set up for e-storage of documents
(folder, eDoc)
</td> </tr>
<tr>
<td>
Setting up the partnership
</td>
<td>
Contracting documents
</td>
<td>
Documents for partnership agreement developed by CPH
</td> </tr>
<tr>
<td>
**City of Lisbon**
</td>
<td>
Repair cafés
</td>
<td>
Amount of reused equipment (kilograms)
</td>
<td>
Weight
</td>
<td>
Municipal data server in data sheets
</td> </tr>
<tr>
<td>
Aggregated data in confluence
</td> </tr>
<tr>
<td>
**City of**
**Genoa**
</td>
<td>
To be developed
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
second-hand shops on the internet, paper-based advertisements and other data.
The outcome will be connected to an App/Portal open to all stakeholders (incl.
individuals) along the value chain.
## 2.3 WP5: FOOD WASTE PREVENTION AND BIOWASTE
Food waste data collected will support managing the transition process for
recovering food waste and produce a more accurate image of the economic,
social and environmental impact of food waste, through the cross-referencing
with data sources from the waste collection and treatment processes.
Data collected regarding food waste will comply with the _Food Loss and Waste
Accounting and Reporting Standard_ 3 in order to facilitate its validation
for future research purposes.
cross-referenced with food waste data will be defined at a later stage, as it
is still not known at this point, what datasets it will be possible to import
to the system.
The parameters regarding food waste that have to be collected are:
* Recovered food, measured in mass (kilograms)
* Typology of recovered food (categorical):
  * Soup
  * Complements
  * Main dish
  * Uncooked foodstuff
* Geographical location of waste production and recovery, namely regarding:
  * Coordinates according to the World Geodetic System (WGS 84)
  * Country
  * Municipality
  * Parish
* Date of food recovery (month and year)
* Destination of recovered food
* Number of beneficiaries of recovered food for social use
* Number of participants of the food recovery supply chain.
The known, generated data at this point are:
* Economic value generated (in euro)
* Avoided CO2 emissions (in tonnes)
* Avoided organic residue (in tonnes).
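To make the collected and generated parameters above concrete, the sketch
below bundles them into one record type; the class and field names are
illustrative assumptions, not the ICT tool's actual data model.

```python
# Sketch: a food-recovery record carrying the parameters listed above.
# Class and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FoodRecoveryRecord:
    recovered_kg: float      # recovered food, in kilograms
    typology: str            # soup / complements / main dish / uncooked
    lat: float               # WGS 84 coordinates
    lon: float
    country: str
    municipality: str
    parish: str
    recovery_month: str      # month and year, e.g. "2017-05"
    destination: str         # destination of the recovered food
    beneficiaries: int       # beneficiaries of recovered food for social use

record = FoodRecoveryRecord(
    recovered_kg=35.0, typology="main dish", lat=38.72, lon=-9.14,
    country="Portugal", municipality="Lisboa", parish="Arroios",
    recovery_month="2017-05", destination="social canteen", beneficiaries=40,
)
```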
As the tool manages the food recovery transition process, the data above are
recorded and generated through the transactions registered in the ICT tool
databases. The metrics might be changed as the process is adapted to the ICT.
Other data might be added to this list as it becomes clearer which datasets on
waste treatment and collection can be successfully incorporated. Besides
incorporating these datasets, the Zero Waste Network in Lisbon will migrate
its operations, together with the historical datasets of its existing ICT
platform, to the new platform.
The expected size of the data cannot be asserted at this point as the data
from waste treatment and collection has not been identified and categorized.
Both the data collected and generated might be useful for:
* Public administration as a monitoring tool for compliance of food waste prevention and waste management goals.
* Food waste producers’/donor entities, as a source of data for historical analysis of food waste both at individual level and at a sectorial level as well as a metric for the social, economic and environmental value generated from the recovered food.
* Receiving entities, for the social, economic and environmental value generated from the recovered food.
* Academic and scientific purposes.
## Local partnerships on food waste and biowaste
In the table below, we have presented the initial thoughts on data collection
and management for the three local partnerships on food waste and biowaste.
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Activity**
</th>
<th>
**What**
</th>
<th>
**How**
</th>
<th>
**How to store the data**
</th> </tr>
<tr>
<td>
**City of**
**Copenhagen**
</td>
<td>
Technology mapping for treatment of biowaste
</td>
<td>
Waste composition and of digestate
</td>
<td>
Collection of data from partners in waste management
</td>
<td>
Data management in CPH complies with EU regulation on data management.
</td> </tr>
<tr>
<td>
Treatment technology data
</td>
<td>
Collection of data from partners and previous works
</td> </tr>
<tr>
<td>
Survey in households about the collection of biowaste
</td>
<td>
Citizens' opinions about the waste management system
</td>
<td>
Anonymous surveys
</td> </tr>
<tr>
<td>
Citizens' suggestions for improvement of
the collection of waste
</td>
<td>
Anonymous surveys
</td> </tr>
<tr>
<td>
**SRH**
**Hamburg**
</td>
<td>
Implementation and test of 10 underground collection systems for biowaste
disposal (for residents of apartment buildings)
</td>
<td>
Perform surveys with residents in order to
understand challenges and obstacles concerning biowaste disposal
</td>
<td>
Survey with residents and stakeholders on acceptance of the
tested collection system Weighing and review of quality of collected biowaste
</td>
<td>
Stadtreinigung internal standard procedures in terms of data security and
safety are followed
</td> </tr>
<tr>
<td>
**City of**
**Genoa**
</td>
<td>
To be developed
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## 2.4 WP6: WOOD WASTE
The purposes of the data collection/generation are:
* to develop the Value Chain Based Partnership by identifying and involving all relevant stakeholders to re-engineer wood waste streams and collection schemes;
* to implement wood collection schemes within a specific Urban Lab applied in a city district;
* to promote research activities in order to develop technology solutions to close the wood chain loop;
* to analyse and test market applications in terms of business model sustainability.
Different types and formats of data will be generated/collected in WP6 over
the project lifecycle, as for instance:
* Quality and quantities of wood from contributors and re-manufacturers
* Number and surface of beach resorts
* Qualitative and quantitative data regarding market applications (e.g. market growth rate, size of the market, availability of raw materials, bargaining power, social acceptance for a product) and technology applications.
Existing data will be used mainly to develop the Value Chain Based
Partnership. To serve this purpose, it may be necessary to share some data, in
the foreseen newsletter, local press information, communications or meetings
and possibly through the partners’ communication channels (website,
newsletters, social media, etc.).
Other existing data include:
* Amiu company’s data referring to the city (citizens’ waste disposal information (when available), including data from Fabbrica del Riciclo and EcoVan App, etc.)
* Statistics at municipal level (population density and demographics)
* Municipality Open Data Catalogue
* Geoportal of the municipality (Genoa)
* Data of the administrative districts, such as lists of local associations
* GIS regional data on Fire Damaged Areas 2003 – 2013
* Regional data on forests.
Data come from institutional databases (region, metropolitan city,
municipality, submunicipalities) or from local partners, from scientific
publications, public and private statistics and outcomes of previous projects
(Silvamed, Robinwood).
It is not yet possible to evaluate the expected size of the data.
Data may be useful for:
* the local level to improve the efficiency of collection schemes
* the local and national level to share value chain data in terms of quantities and quality to promote market development
* the local and EU level to establish a new governance model
* EU, national and local level to assess new technology applications, feasibility and market attractiveness.
Big Data and DST (see also WP 7.5): Similarly to the collection of data about
used EEE in WP4, but only for a period of about six months, a Big Data
application will collect data about the offerings of used furniture on the
internet. The data analysis done with the DST may allow patterns in citizen
behaviour to be identified with regard to the generation, reselling, reuse,
recycling and repair of waste and furniture (what used furniture is offered
and where, what is bought at which price and where, etc.). This information
will be used to (re-)design approaches to citizen involvement and will be
considered when developing recommendations on citizen involvement (task 7.3).
## Local partnerships on wood waste
In the table below, we have presented the initial thoughts on data collection
and management for the three local partnerships on wood waste.
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Activity**
</th>
<th>
**What**
</th>
<th>
**How**
</th>
<th>
**How to store the data**
</th> </tr>
<tr>
<td>
**City of**
**Copenhagen**
</td>
<td>
Measuring tests
results
</td>
<td>
Amount/volume
Value
Costs
</td>
<td>
Collection vehicle or manual weighing Assessment by partners
</td>
<td>
Placed in relevant CPH set up for estorage of documents
(folder, eDoc)
</td> </tr>
<tr>
<td>
Setting up the partnership
</td>
<td>
Contracting documents
</td>
<td>
Partnership agreement developed by CPH
</td> </tr>
<tr>
<td>
**SRH**
**Hamburg**
</td>
<td>
Introduction of new wood waste services (shredding, chimney wood production)
</td>
<td>
Identification of suitable properties (property owners) for new „wood“
services => identification of properties with gardens, contact of property
owners to offer/disseminate services
</td>
<td>
Existing data from waste fee payers may be used to identify property owners
Aerial photographs may be used to identify properties with gardens/ trees
</td>
<td>
Stadtreinigung internal standard procedures in terms of data security and
safety are followed
Information about property owners will remain internal at SRH
</td> </tr>
<tr>
<td>
**City of Lisbon**
</td>
<td>
Sorting and storage of wood waste
</td>
<td>
Amount of wood reusable (kilograms)
</td>
<td>
Weight
</td>
<td>
Municipal data server in data sheets
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Aggregated data in confluence
</td> </tr> </table>
## 2.5 WP7: GOVERNANCE AND DECISION SUPPORT
The following sections present the data collection and use for tasks 7.1-7.5.
### 2.5.1 Development of governance models (tasks 7.1, 7.2 and 7.4)
Within the process of developing the governance models, we will gather data in
the four partner cities via expert interviews.
**Identification and selection of interviewees**
Interviews will be conducted with key stakeholders for the implementation of
eco-innovative solutions and governance arrangements in the respective local
contexts of the four cities. Key stakeholders will include project partners of
the value chain partnerships implemented in WP3-6 as well as non-project
partners. Interviewees will be identified based on the results of the analysis
of local framework conditions and the stakeholder analysis, in coordination
with the city cluster coordinators and the WP3-6 Leaders respectively. The
identified stakeholders (public institutions, enterprises, intermediate
organisations) will be contacted to identify interviewees within the
respective organisations.
**Processing and protection of data gathered in interviews**
We will contact interviewees to arrange face-to-face interviews (if possible).
Via interviews, we will gather qualitative data on the local situation,
framework conditions and cooperation processes. Before conducting the
interview, each interviewee will be informed about the purpose of the analysis
and the further use of the interview material 4 . The involvement of
research participants (interviewees) will follow the recommendations on good
research practices of the European Science Foundation 5 and the proposals
for safeguarding good scientific practice
of the Commission on Professional Self-Regulation in Science of Deutsche
Forschungsgemeinschaft 6 7 .
The interviews will be recorded and transcribed (if permission by the
interviewee is given). The interview transcripts form the basis for the
qualitative analysis of the interviews (tasks 7.1 and 7.2). In order to ensure
confidentiality, the full interview transcripts will not be published and not
be shared with other persons (nor within the FORCE consortium). The
interviewee will receive a brief summary of the interview for approval. During
the interview analysis, the gathered data will be anonymised and aggregated.
Only aggregated and anonymised analysis results will be published in project
reports and scientific papers.
### 2.5.2 Evaluating citizen involvement (task 7.3)
In the context of evaluating local citizen involvement tools, qualitative
and/or quantitative data may be gathered from local citizens in the four
cities. Details of how and to which extent this will be done during the
project will be developed together with the project partners.
Irrespective of the applied methods to gather information about the
perspectives and/or behaviour of local citizens, the collected data will be
anonymised before the start of the analysis. Participating citizens will be
informed about the FORCE project, the purpose of the interview/questionnaire
and the further use of their data 8 . The involvement of citizens will follow
the recommendations on good research practices of the European Science
Foundation and the proposals for safeguarding good scientific practice of the
Commission on Professional Self-Regulation in Science of Deutsche
Forschungsgemeinschaft. Only aggregated and anonymised analysis results will
be published in project reports and scientific papers.
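A minimal sketch of the anonymise-then-aggregate workflow described above,
assuming simple dictionary-based survey records; the field names and the set
of identifying fields are illustrative assumptions.

```python
# Sketch: anonymising responses before analysis and publishing only
# aggregated results. Field names are illustrative assumptions.
from collections import Counter

PII_FIELDS = {"name", "email", "address"}  # assumed identifying fields

def anonymise(responses):
    """Drop identifying fields from every response before analysis."""
    return [{k: v for k, v in r.items() if k not in PII_FIELDS}
            for r in responses]

def aggregate(responses):
    """Count answers per category; only these totals would be published."""
    return Counter(r["answer"] for r in responses)

raw = [{"name": "A. Citizen", "answer": "satisfied"},
       {"name": "B. Citizen", "answer": "unsatisfied"}]
print(aggregate(anonymise(raw)))
# Counter({'satisfied': 1, 'unsatisfied': 1})
```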
### 2.5.3 Development of decision support tool(s) (task 7.5)
See section 2.2.
## 2.6 WP9: EXPLOITATION, REPLICATION AND MARKET DEPLOYMENT ACTIVITIES
### 2.6.1 Exploitation Plan
The preparation of the FORCE exploitation plan, which is a strategy on how to
exploit the project results, is based on input provided in the Grant
Agreement, the work programme and on literature review / desk research.
Literature research has been done based on an internet research where open
accessible files available for free use where checked. All references are
marked in a reference list and citations in the text are marked recognizable
and according to scientific standards. Project partners were asked to check
and to contribute to the plan.
The exploitation plan (D.9.1) is confidential and only for internal use of the
FORCE consortium members including the Commission Services.
### 2.6.2 Stakeholder analysis
In order to assess the deployment perspectives for the results gathered in the
frame of the project, a stakeholder analysis will be carried out. It will focus
on the opportunities and obstacles with regard to the project results in the
participating cities and beyond.
As a first step, all FORCE city cluster coordinators will be asked to provide
organisation names, contact data and website links of relevant value chain
stakeholders on local, national and international level engaged in the four
waste streams. All contact data received will be treated strictly confidential
and will not be used for any other purposes other than the intended
(stakeholder analysis; invitations to business workshops). Based on this
information tailored questionnaires will be designed taking different waste
streams and stakeholder levels into consideration. Questionnaires will be sent
either by mail or personalized e-mail. Answers and information gathered in
the frame of the survey will be evaluated anonymously. Within the stakeholder
analysis report, compiled data and information will also be presented in a way
which ensures anonymity of personal information.
The stakeholder analysis (D.9.2) is confidential and only for internal use of
the FORCE consortium members including the Commission Services. It will serve
as a baseline and additional information for project partners to prepare their
project results according to stakeholder needs, including insights gathered
from relevant external stakeholders.
### 2.6.3 Exploitation and market deployment strategy
Project partners will be asked to identify their results and outcomes which
have potential for further exploitation and which have been referenced in
the exploitation plan. Based on the identification of project results, an
exploitation and market deployment strategy will be developed for the four
waste streams (D.9.3).
Relevant data for this strategy will be gathered through interviewing FORCE
consortium members. Insights gained will be backed by further empirical,
secondary data from additional scientific research. All research activities
will follow the recommendations on good research practices of the European
Science Foundation and the proposals for safeguarding good scientific
practice of the Commission on Professional Self-Regulation in Science of
Deutsche Forschungsgemeinschaft.
The exploitation and market deployment strategy (D.9.3) is confidential and
only for internal use of the FORCE consortium members including the Commission
Services. It will serve as a baseline for project partners to further prepare
and design the exploitation of their results.
### 2.6.4 Business modelling strategies
Four business modelling strategies (D.9.4) will be produced based on the
business cases, achievements and feasibility studies developed within the four
waste streams. Data relevant for this task will be gathered by interviewing /
using data provided by project partners. The level of detail of the partners'
project data will be agreed with the project partners beforehand in order to
protect IPR. The strategies will be backed by the latest market information,
including status quo analysis and prognosis (for the scientific guiding
principles, see above).
The business modelling strategies will be made available for public use via the
project website.
### 2.6.5 Workshops with business stakeholders
In order to raise awareness of the project and to enhance network activities
and collaboration between business stakeholders and project partners for
potential exploitation of project results, various workshops with business
stakeholders will be carried out during the project.
Stakeholders who will participate in the workshops will be invited by HAW and
contacted by project partners through their value chain networks. Additional
participants including corresponding contact details will be gathered by
internet research. Participant lists will be prepared (including names of
participants and organisations) and shared during the workshops. Data
protection standards will be respected. The same applies for photos to be
taken during the workshops, i.e. only with prior agreement may these be used
in the project context.
# 3 WP3: PLASTIC WASTE
## 3.1 FAIR DATA
### 3.1.1 Making data findable, including provisions for metadata
In order to ensure the comparability of data, standard naming conventions will
be used whenever suitable.
For the waste materials, the European list of waste 9 will be used as a
standard nomenclature. The plastic resins resulting from sorting and
reprocessing will be classified following the naming convention used in
PlastEurope’s Plastics Exchange 11 .
Due to the nature of the project, which focuses on development of new
prototypes of plastic products based on innovative, new plastic resin mixtures
and production processes, it is not expected that the use of standard naming
conventions will always be suitable. In these cases, a suitable categorisation
will be developed in collaboration with the experts involved in the project.
In addition, some plastic resin mixtures might be considered as a trade secret
by the involved companies. In this case, the project management will engage in
productive dialogue with the companies about which data can and cannot be
published.
### 3.1.2 Making data openly accessible
The project aims to make openly accessible and exploitable all data that
might be relevant to the key stakeholders identified in the stakeholder
analysis. However, it will not always be possible to publish data, due to
contractual reasons. Data which is found to be of a sensitive nature for the
involved stakeholders (e.g. trade secrets, process data) will not be made
public without prior consent of these stakeholders. This data is contractually
protected.
Qualitative data resulting from interviews with citizens, stakeholders etc.
will only be made public on an aggregated level, meaning that the overall
conclusions and summaries will be published; not the individual results.
### 3.1.3 Making data interoperable
The use of standardised nomenclature such as PlastEurope’s Plastics Exchange
and the European List of Waste will ensure the interoperability, reusability
and exchangeability of the data generated.
When possible, quantitative data will be collected, managed, and stored in
Excel format, ensuring the interoperability with other possible users.
### 3.1.4 Increase data re-use (through clarifying licenses)
Data licensing matters will be clarified by the project partners once the main
datasets have been developed. At this point, it is too early to define the
licensing issues.
The increase of data re-use is also dependent on the success of the
dissemination and exploitation activities as described in the Communication
and Dissemination Plan and the Exploitation Plan.
Data will be available for reuse on a continuous basis as soon as the data is
ready and cleared for publishing. All data which is made public during the
project period is usable by third parties also after the end of the project
period. Public data will be available via the project website, which will be
online until one year after the end of the project period. After this, the
public datasets will be available through direct contact with the project
partners.
Data quality assurance processes will be presented in the second version of
the DMP.
### 3.2 ALLOCATION OF RESOURCES
The City of Copenhagen is responsible for collection and storage of the data
generated in task 3.1, including data on waste sorting, quantities, polymer
types etc. The Danish Technological Institute is responsible for collection
and storage of data generated in task 3.2, including data related to
regranulation and production of ten prototypes.
Costs for making data FAIR are under consideration by the City of Copenhagen
and the Danish Technological Institute.
### 3.3 DATA SECURITY
All data is securely stored by the project partners, and will follow the
respective security measures deployed in their organisations. This includes
generation of data backups to ensure that all data can be recovered. Regarding
the transfer of sensitive data relating to the project, the project partners
will discuss how to best manage this issue.
### 3.4 ETHICAL ASPECTS
Templates of the informed consent form and an information sheet about the
FORCE project have been prepared as part of the deliverables in WP1 Ethics,
D1.4: H – Requirement no 4.
The informed consent form will be used when beneficiaries collect information
via interviews, questionnaires, workshops and similar activities in the five
work packages, WP3-WP7. The form includes a brief presentation of the project,
a description of how participants will be involved, and how data will be used
in the project, all in the native language.
### 3.5 OTHER
All procedures followed comply with national and international legislation.
# 4 WP4: STRATEGIC METALS
## 4.1 FAIR DATA
The Day to Day DST (WP4) only provides information and data if someone runs
it. Who will run it, and under what conditions, after the end of the project
is part of the exploitation plan in WP9.
WP7 Task 7.5 DST: The data collected from the internet in the MongoDB has to
stay with Consist, because the volume of data is too large to hand over and
the usage rights of sources such as eBay would not permit it either. However,
the DST provides analyses based on the collected data. These analyses (which
may themselves be large tables) are open to further use.
### 4.1.1 Making data findable, including provisions for metadata
EAN numbers will be used as metadata for the metals in the analysis of data
generated by the Day to Day DST. Naming conventions and categorisations from
the WEEE regulations will further be used for specifying keywords. Analyses
always refer to a given timeframe and region.
### 4.1.2 Making data openly accessible
The Day to Day DST will be re-usable and freely accessible to the public
beyond the end of the project if someone runs it. The analysis from the DST
will be made openly accessible. All project-related data, as well as the code
and the documentation, will be stored long-term on the Consist server.
### 4.1.3 Making data interoperable
The DST tool can be reused in other fields. The DST interface will be
developed based on the specific needs of the metal (and wood chains),
therefore certain adjustments might be needed for further usage.
### 4.1.4 Increase data re-use (through clarifying licences)
As noted in section 4.1.3, the DST tool can be reused in other fields, though
certain adjustments to the DST interface might be needed for further usage.
The statistical data generated by the DST in the case of a metal chain will
remain
useful for e.g. time series analysis as long as the categorization in the WEEE
has not changed.
The DST will be made freely accessible to the public beyond the end of the
project. This statement is also valid for the principal data categories and
structures used in the project. By that, it shall be possible to transfer,
adapt and reuse the Big Data application as a base component in other cities
or regions after the project has finished 10 .
Data quality assurance processes will be presented in the second version of
the DMP.
### 4.2 ALLOCATION OF RESOURCES
All data management will be done under the supervision of the data protection
officer of Consist and according to the data protection rules of Consist. All
employees have to agree to these rules and their statements are being
recorded. (The DST for the wood chain will be also developed by Consist ITU,
therefore the same regulations will be followed).
### 4.3 DATA SECURITY
At Consist ITU, there are standard procedures to ensure the data security of
the company's projects. All data is stored on separate servers in Kiel and
Hamburg, as well as on the backup servers provided by third parties. The Data
Protection Act of the Free and Hanseatic City of Hamburg (Hamburgisches
Datenschutzgesetz) is followed. (The DST for the wood chain will be also
developed by Consist ITU, therefore the same regulations will be followed).
### 4.4 ETHICAL ASPECTS
Day to Day DST: Registered users can provide their own information on e.g.
disassembly of equipment, repair shop offerings, collection points, etc. for
publication. In the context of registration, users will be informed about the
use of their data via the ‘Informed consent’ (in conformance to the rules of
the respective country), all in the native language.
The DST will not provide any personal information, therefore there is no need
to consider ethical aspects here.
### 4.5 OTHER
Consist ITU applies the following procedures for data management in WP4
(includes DST for WP6):
* Standard procedures for data security at Consist ITU
* Data Protection Act of the Free and Hanseatic City of Hamburg (Hamburgisches Datenschutzgesetz)
* German User data protection regulations.
# 5 WP5: FOOD AND BIOWASTE
## 5.1 FAIR DATA
### 5.1.1 Making data findable, including provisions for metadata
Regarding naming convention, the _Food Waste Loss Accounting and Reporting
Standard_ 11 will be followed when possible. If, and when, this standard
fails to encompass a given subject, further research will be done to find an
adequate standard, if one is available; if not, a convention will be defined
in collaboration with experts in the given field.
Matters related with metadata, identification mechanisms and versioning
numbers are under consideration with the developer, Addapt Creative.
### 5.1.2 Making data openly accessible
Raw data collected through the transactional processing of food waste
recovery cannot be made publicly available due to contractual reasons. Access
to an individual donor's or recipient's food waste data might enable insights
of operational, commercial or other value that must be protected to ensure
their engagement with the project. As such, this data is contractually
protected.
On the other hand, aggregated data (e.g. sectoral data, geographical data)
will be made public whenever the privacy of the donor or recipient is not at
stake. This will be done online, through a public access webpage accessible by
any browser.
Access to the disaggregated data might be possible for academic purposes under
a confidentiality agreement. Data can be supplied in CSV file format in order
to be universally accessible.
The need for a data access committee will be established once the datasets to
be supplied regarding the collection and treatment process are determined.
Until then it is difficult to analyse the full involvement of each partner in
the decision process. When such a committee is established, conditions for
access as well as the methodology to do so will be defined.
The data and associated documentation will be deposited on the ICT tool
itself.
### 5.1.3 Making data interoperable
The use of the _Food Waste Loss Accounting and Reporting Standard_ will
support the interoperability, reusability and exchangeability of the data
gathered and generated through the ICT tool.
Technical aspects of interoperability are under consideration by the
developer, Addapt Creative.
### 5.1.4 Increase data re-use (through clarifying licences)
Data licensing matters will be established once the datasets to be supplied
regarding the collection and treatment process are determined. Until then it
is difficult to analyse the full involvement of each partner in the decision
process. When these are established, conditions for access as well as the
methodology to do so will be defined.
Public data is made accessible as it is inserted in the system during the
transactional process management.
Disaggregated data will be accessible at the official launch of the ICT tool,
after the trial period, under the conditions described above.
The data will remain available after the end of the project, especially
because it is intended that the ICT tool remains active beyond 2020,
collecting data.
Currently, _Zero Desperdicio Network_ data quality assurance processes will be
adapted to the ICT tool and included in the user manual.
### 5.2 ALLOCATION OF RESOURCES
Data management is a shared responsibility of Addapt Creative, responsible
for the technical aspects, and DARiACORDAR, acting as data curator.
Long term preservation of data processes will be established once the datasets
to be supplied regarding the collection and treatment process are determined.
Until then it is difficult to analyse the full involvement of each partner in
the decision process. The preservation of this data is essential as, at this
time, there are no available time series for sectoral food waste recovery,
among others.
Costs for making data FAIR are under consideration by the developer, Addapt
Creative.
### 5.3 DATA SECURITY
Security provisions are under consideration by the developer, Addapt Creative.
### 5.4 ETHICAL ASPECTS
The data collected and generated by the ICT tool does not include personal
data. As such, there are no ethical issues that impact on data sharing beyond
the contractual limitations pointed out in section 2.3. Still, any concern
that might arise as well as all the foreseen data uses will be included in the
terms of use of the ICT tools.
For data collection via interviews, questionnaires, workshops and similar
activities, templates of the informed consent form and an information sheet
about the FORCE project have been prepared as part of the deliverables in WP1
Ethics, D1.4: H – Requirement no 4. The informed consent form includes a brief
presentation of the project, a description of how participants will be
involved, and how data will be used in the project, all in the native
language.
### 5.5 OTHER
All procedures followed comply with national and international legislation.
# 6 WP6: WOOD WASTE
## 6.1 FAIR DATA
### 6.1.1 Making data findable, including provisions for metadata
The data produced and used in the project will be discoverable with metadata,
identifiable and locatable by means of the standard identification mechanisms
in use in the Municipality of Genoa, which refer to different methodologies
(the RNDT methodology and the INSPIRE methodology). The software used for the
metadata catalogue will be GeoNetwork (open source).
The metadata available from the Geoportal are _INSPIRE compliant_ , i.e.
adhering to EU legislation 12 . The Municipality of Genoa will make
available all the considered/expected information collected by the GeoServer
open-source platform of the Geoportal through the WMS and WFS
interoperability services.
An example of a metadata set that will be created by the GeoNetwork
application is:
IDENTIFICATION INFORMATION
* Date
* Cited responsible party
* Point of contact
* Resource maintenance
* Resource constraints
* Equivalent scale
* Topic category
* Geographic bounding box
DISTRIBUTION INFORMATION
* Distributor
REFERENCE SYSTEM INFORMATION
DATA QUALITY INFO
* Date
METADATA
* File identifier
* Metadata language
* Character set
* Hierarchy level
* Date stamp
* Metadata standard name
* Metadata standard version
Data is already openly available via the Open Data portal of the Municipality
of Genoa at the following link: _http://dati.comune.genova.it/_ ; the licenses
chosen are Creative Commons 3.X and 4.X.
Issues regarding search keywords and version numbers will be decided upon
during the project.
### 6.1.2 Making data openly accessible
Aspects of data management will include open access and support the
protection of personal data to allow effective and secure exploitation. Public
interests and
the protection of intellectual property will be well-balanced (Grant Agreement
p. 174).
Shared data on existing platforms:
Municipality: Open Data catalogue:
_http://dati.comune.genova.it/search/type/dataset_
The public geographic databases are accessible at:
_http://mappe.comune.genova.it/geoserver/wms?service=wms&version=1.3.0&request=GetCapabilities_
Ticass and ActiveCells may manage some confidential information relating to
task 6.3 “Innovative applications” with the aim of protecting the IP necessary
to provide incentives for private investment and exploitation of the results
(according to the Grant Agreement, p. 174).
### 6.1.3 Making data interoperable
Open access to the GeoServer open-source platform of the Geoportal will be
provided through the WMS and WFS interoperability services (see above).
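As an illustration of these services, below is a minimal sketch of how a third party could request the WMS capabilities document from the public endpoint listed above; it uses the generic OGC WMS 1.3.0 protocol and the `requests` library, and is not part of the project's own tooling.

```python
# Minimal sketch: requesting the capabilities document of the Geoportal's
# public WMS endpoint (OGC WMS 1.3.0). The endpoint URL is the one listed
# above; the request parameters follow the generic OGC WMS protocol.
import requests

WMS_ENDPOINT = "http://mappe.comune.genova.it/geoserver/wms"

params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetCapabilities",
}

response = requests.get(WMS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()

# The capabilities document is XML describing the available layers,
# coordinate reference systems and supported output formats.
print(response.text[:500])
```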
### 6.1.4 Increase data re-use (through clarifying licences)
The data will be licensed to permit the widest re-use possible according to
the Exploitation Plan and Communication and Dissemination Plan. In the case of
WP6, guidelines for scalability and replication will be developed, in general
with licenses depending on the degree of openness of the product data. This
will be defined in more detail in the next version of the DMP.
The data produced and/or used in the project will be usable by third parties,
in particular after the end of the project, according to the WP6 Guidelines
for scalability and replication and the Italian Data Protection Code.
In general, licenses will depend on the degree of data access. This will be
defined later on.
For the time being, we expect data will remain re-usable during the project
plus one year.
Data quality assurance processes will be presented in the second version of
the DMP.
### 6.2 ALLOCATION OF RESOURCES
For data protection and system management the responsible office is the
Informative Systems of the Municipality of Genoa. For data collection and data
quality, the responsible office is the Environmental Department.
The DST for the wood chain will be also developed by Consist ITU, so all data
management will be done under the supervision of the data protection officer
of Consist and according to the data protection rules of Consist. All
employees have to agree to these rules and their statements are being
recorded.
Costs for making data FAIR are under consideration by the City of Genoa.
### 6.3 DATA SECURITY
Security provisions will be in accordance with Municipality of Genoa standards
(e.g. security protocol and policy of disaster recovery).
### 6.4 ETHICAL ASPECTS
D1.3: H – Requirement no 3: Details on the procedures and criteria that will
be used to identify/recruit research participants for the interviews in WP6.
For data collection via interviews, questionnaires, workshops and similar
activities, templates of the informed consent form and an information sheet
about the FORCE project have been prepared as part of the deliverables in WP1
Ethics, D1.4: H – Requirement no 4. The informed consent form includes a brief
presentation of the project, a description of how participants will be
involved, and how data will be used in the project, all in the native
language.
### 6.5 OTHER
We will use cross-application information based on the municipality's own
resources (the Geoportal) and regional resources (e.g. cartography at 1:5000
scale).
More details will follow in the second version of the DMP.
# 7 WP7: GOVERNANCE AND DECISION SUPPORT
This section describes data management in tasks 7.1-7.4. Task 7.5 is presented
together with strategic metals in sections 2.2 and 4.
## 7.1 FAIR DATA
### 7.1.1 Making data findable, including provisions for metadata
Not applicable to the qualitative data gathered for implementing of tasks (cf.
section 2.5).
### 7.1.2 Making data openly accessible
In order to ensure confidentiality, the full interview transcripts from the
expert interviews and the citizen involvement will not be published and not be
shared with other persons (also not within the FORCE consortium). This allows
research participants to freely express their opinions and thus improves the
research results of the project. During the interview analysis, the gathered
data will be anonymized and aggregated. Only aggregated and anonymised
analysis results will be published. The data will be published in project
reports and scientific journal articles, conference papers and (scientific)
conference presentations.
### 7.1.3 Making data interoperable
Not applicable to the qualitative data gathered for implementing tasks.
### 7.1.4 Increase data re-use (through clarifying licences)
Not applicable to the qualitative data gathered for implementing tasks.
### 7.2 ALLOCATION OF RESOURCES
At HCU, there are standard procedures in place for data management, which
ensure data security (including data recovery, secure data storage and
transfer) of data gathered and processed in research projects. These include
the storage of data on separate servers and networks with restricted access only
for selected research staff of HCU. Data security standards are overseen by
the responsible data protection officer. Moreover, the Data Protection Act of
the Free and Hanseatic City of Hamburg (Hamburgisches Datenschutzgesetz) is
followed.
Since standard procedures are followed which are already in place and
generally applied to all research data there will be no additional costs for
managing project data.
### 7.3 DATA SECURITY
At HCU, there are standard procedures in place to ensure data security
(including data recovery, secure data storage and transfer) of data gathered
and processed in research projects. These include data storage on separate
servers and networks with restricted access only for selected research staff
of HCU. Data security standards are overseen by the responsible data
protection officer. Moreover, the Data Protection Act of the Free and
Hanseatic City of Hamburg (Hamburgisches Datenschutzgesetz) is followed 13 .
### 7.4 ETHICAL ASPECTS
Since the gathered data is anonymised and aggregated before publication,
there are no ethical issues that impact data use or sharing (see above for
details).
### 7.5 OTHER
HCU applies the following procedures for the management of data gathered in its tasks:
* Recommendations on good research practices of the European Science Foundation
* Proposals for safeguarding good scientific practice of the Commission on Professional Self-Regulation in Science of Deutsche Forschungsgemeinschaft
* Data Protection Act of the Free and Hanseatic City of Hamburg (Hamburgisches Datenschutzgesetz)
* Standard procedures for data security at Hamburg’s universities.
# 8 WP9: EXPLOITATION, REPLICATION AND MARKET DEPLOYMENT STRATEGIES
## 8.1 FAIR DATA
### 8.1.1 Making data findable, including provisions for metadata
Not applicable to the data gathered for implementation of tasks in WP9.
### 8.1.2 Making data openly accessible
Most of the data collected and used in the frame of WP9 is used to draft
information in an aggregated, not individualised manner (exploitation plan,
stakeholder analysis, market deployment strategy). It is meant to serve
project partners as background and guiding information when planning the
exploitation of project results. According to the Grant Agreement, these
documents are for internal (FORCE consortium and EC services) use only and
will not be published openly. Nevertheless, data / information collected when
preparing the stakeholder analysis will be handled anonymously and the
corresponding reports for partners will not include any data that allows
reference to a particular entity or person. Only aggregated and anonymised
analysis results will be published. The business modelling strategies will
also be drafted to serve as guiding documents for project partners in order
to enable them to plan their business cases/models for exploiting their
results in a commercial way, but the strategies will be made available and
accessible via
the project’s website. No special software tools will be necessary to access
the data.
### 8.1.3 Making data interoperable
Not applicable to the data gathered for implementing tasks in WP9.
### 8.1.4 Increase data re-use (through clarifying licences)
Not applicable to the data gathered for implementing tasks in WP9.
### 8.2 ALLOCATION OF RESOURCES
HAW has the same procedures for managing data as the HCU (section 7.2).
Furthermore, HAW is also governed by public law, and therefore the Data
Protection Act of the City of Hamburg will also be followed.
### 8.3 DATA SECURITY
The same applies for HAW Hamburg as presented in section 7.3.
### 8.4 ETHICAL ASPECTS
The same applies for data gathered in WP9 as in section 7.4.
### 8.5 OTHER
HAW applies the following procedures for the management of data gathered for
tasks undertaken in WP9:
* Recommendations on good research practices of the European Science Foundation
* Proposals for safeguarding good scientific practice of the Commission on Professional Self-Regulation in Science of Deutsche Forschungsgemeinschaft
* Data Protection Act of the Free and Hanseatic City of Hamburg (Hamburgisches Datenschutzgesetz)
* Standard procedures for data security at Hamburg’s universities.
# Introduction
The current document constitutes the interim version of the Data Management
Plan (DMP) elaborated in the framework of INVITE, which has received funding
from the European Union’s Horizon 2020 Research and Innovation programme under
Grant Agreement No 763651.
INVITE is set on co-creating a well-connected European Open Innovation (OI)
ecosystem. It envisions an OI ecosystem in which knowledge meaningfully flows
across borders and is translated into marketable innovations, bringing
increased socio-economic benefits to EU citizens. To this end, INVITE will co-
design, pilot and demonstrate a pan-European service platform, the **Open
Innovation 2.0 Lab** , aiming to better link the currently fragmented
innovation systems of the EU by facilitating meaningful cross-border knowledge
flows; empower EU businesses with the skill-sets required to tap into Europe’s
knowledge-base and turn it into value; and increase the participation of
private investors in OI and collaborative innovation projects.
The Open Innovation 2.0 (OI2) Lab will experiment with novel bottom-up
collaborative models of innovation and leverage **OI support services** and
**ICT tools** to stimulate and support OI across Europe, building a vibrant
community of OI actors and stakeholders (including business, science and
research, public authorities and civil society) along the way. The valuable
knowledge, evidence and experiences gained through the experiments of the OI2
Lab will be diffused across the EU so as to fuel their replication and scale-
up for the benefit of the European economy and society as a whole.
To this end, INVITE brings together a well-balanced and complementary
**consortium** , that consists of **9 partners across 5 different European
countries** , as presented in the following table.
## _Table 1: INVITE partners_
<table>
<tr>
<th>
**No**
</th>
<th>
**Name**
</th>
<th>
**Short name**
</th>
<th>
**Country**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Q-PLAN INTERNATIONAL ADVISORS
</td>
<td>
Q-PLAN
</td>
<td>
Greece
</td> </tr>
<tr>
<td>
2
</td>
<td>
STEINBEIS INNOVATION
</td>
<td>
SEZ
</td>
<td>
Germany
</td> </tr>
<tr>
<td>
3
</td>
<td>
EUROPE UNLIMITED
</td>
<td>
E-UNLIMITED
</td>
<td>
Belgium
</td> </tr>
<tr>
<td>
4
</td>
<td>
RTC NORTH
</td>
<td>
RTC NORTH
</td>
<td>
United Kingdom
</td> </tr>
<tr>
<td>
5
</td>
<td>
NINESIGMA EUROPE
</td>
<td>
NINESIGMA
</td>
<td>
Belgium
</td> </tr>
<tr>
<td>
6
</td>
<td>
INTRASOFT INTERNATIONAL
</td>
<td>
INTRASOFT
</td>
<td>
Luxembourg
</td> </tr>
<tr>
<td>
7
</td>
<td>
CENTRE FOR RESEARCH AND TECHNOLOGY HELLAS
</td>
<td>
CERTH/ITI
</td>
<td>
Greece
</td> </tr>
<tr>
<td>
8
</td>
<td>
WIRTSCHAFTSFOERDERUNG REGION STUTTGART
</td>
<td>
WRS
</td>
<td>
Germany
</td> </tr>
<tr>
<td>
9
</td>
<td>
NORTH EAST LOCAL ENTERPRISE PARTNERSHIP
</td>
<td>
NELEP
</td>
<td>
United Kingdom
</td> </tr> </table>
In this context, **all partners of INVITE’s consortium adhere to sound data
management** in order to ensure that the meaningful data collected, processed
and/or generated throughout the duration of the project is well-managed,
archived and preserved, in line with the _Guidelines on Data Management in
Horizon 2020_ .
Along these lines, this **interim version of the DMP** builds upon and
significantly enriches the initial version of INVITE’s DMP in order to achieve
the following **objectives** :
* Describe the data management lifecycle for the data to be collected and/or generated in the framework of INVITE, serving as a key element of good data management.
* Outline the methodology employed to safeguard the sound management of the data collected, and/or generated as well as to make them Findable, Accessible, Interoperable and Re-usable (FAIR).
* Provide information on the data that will be collected and/or generated and the way in which it will be handled during and after the end of the project along with the standards applied to this end.
* Describe how the data will be made openly accessible and searchable to interested stakeholders as well as its curation and preservation.
* Present an estimation of the resources allocated to make data FAIR, while also identifying the responsibilities pertaining to data management and addressing data security.
In order to achieve its objectives, this interim version of the DMP takes into
account and builds upon common best practices and guidelines for sharing the
project’s data (such as the practices described by the _Consortium of
European Social Science Data Archives tour guide on data management,_ _UK
Data_ _Service_ , etc.) and facilitating open access (through the use of
_openly accessible repositories_ ) to the data collected/generated while
taking into account the recommendations provided by the European Commission
(hereinafter referred to as Commission) based on the evaluation of INVITE’s
initial DMP.
With the above in mind, this interim version of **the DMP is structured in 7
distinct chapters** , as follows:
* **Chapter 1** provides introductory information about the DMP, the context in which it has been elaborated as well as about its objectives and structure.
* **Chapter 2** presents a summary of the data to be collected/generated during the activities of INVITE including the purpose of its collection/generation as well as its types and formats. Additionally, it outlines its origin, expected volume and the stakeholders that may find it useful.
* **Chapter 3** describes the methodology that is applied in the framework of INVITE in order to safeguard the effective management of data across their entire lifecycle, making it FAIR.
* **Chapter 4** estimates the resources required for making the project’s data FAIR, while also identifying data management responsibilities.
* **Chapter 5** outlines the data security strategy applied within the context of INVITE along with the respective secure storage solutions employed.
* **Chapter 6** addresses ethical aspects as well as other relevant considerations pertaining to the data collected/generated during the implementation of the project.
* **Chapter 7** concludes on the next steps foreseen in the framework of the project with respect to its data management plan.
Finally, the **Annex** of this document includes the privacy policy adopted by
the project’s Web Portal as well as an initial draft version of the privacy
policy that will be employed in the framework of OI2 Lab. Moreover, templates
for the informed consent form and information sheet used in the implementation
of the project’s activities that collect/generate data are provided in the
Annex of this document.
**The DMP is not a fixed document** . It evolves during the lifespan of the
project. More specifically, the DMP will be **updated at least once more
during INVITE** **(i.e. as D7.4 at M36)** . Additional ad hoc updates may also
be realised (if necessary), to include new data, better detail and/or reflect
changes in the methodology or other aspects relevant to its management (such
as costs for making data FAIR, size of data, etc.), changes in consortium
policies and plans or other potential external factors. Q-PLAN is responsible
for elaborating the DMP and with the support of all partners will update and
enrich it when required.
# Data summary
INVITE will collect/generate meaningful non-sensitive data that do not fall
into any special categories 1 of personal data as those are described within
the General Data Protection Regulation 2 3 (GDPR). This data may be
quantitative, qualitative or a blend of those in nature and will be analysed
from a range of methodological perspectives with a view to producing insights
that will successfully feed INVITE’s activities, enable us to deliver
evidence-based results and ultimately achieve the objectives of the project.
With that in mind, the second chapter of the Data Management Plan (DMP) starts
by explaining the purpose for which this data will be collected/generated and
how it relates with INVITE. It proceeds by describing the different types and
formats of this data as well as its origin and expected volume, before
concluding with an overview of potential stakeholders for whom it may prove
useful for re-use.
## Purpose of data collection/generation and its relation to the project
In order to successfully meet its objectives and ensure the production of
evidence-based results, INVITE entails several activities during which data
will be collected/generated. The purpose for which this data is
collected/generated is interrelated with the objective of the activity during
which it is produced.
In particular, these activities along with their objectives in the framework
of INVITE are as follows:
* **Analysis of successful European Open Innovation (OI) support service providers and platforms** , which are interconnected with INVITE, in order to identify potential gaps and opportunities in the respective market targeted by the Open Innovation 2.0 (OI2) Lab and its value propositions.
* **Analysis of** **the needs and requirements of prospective users and stakeholders** , aimed at shedding ample light on their views on and experiences with the European OI ecosystem and ultimately at fuelling the demand-driven design and development of INVITE’s pilots and OI2 Lab.
* **Analysis of ideas and feedback collected during the INVITE Co-Creation Workshop** , with a view to informing the co-design of INVITE’s pilots and OI2 Lab as well as to elaborating recommendations on how they may be implemented and customised according to the needs of users and stakeholders.
* **Monitoring, co-evaluation and validation of INVITE’s pilots and OI2 Lab** , in order to improve and fine-tune their design and offers based on data and feedback collected/generated by users of the OI2 Lab and stakeholders participating in the project’s pilot activities.
* **Improvement and validation of the business models designed for the post project market rollout of the OI2 Lab** aimed at producing a set of commercially viable and sustainable business models for the OI2 Lab taking into account the needs of potential users/customers as well as the interests and visions of INVITE’s consortium partners.
* **Monitoring and evaluation of the results produced by the project’s stakeholder engagement activities** in order to effectively measure and report the progress of these activities towards building up a vibrant community of diverse OI stakeholders around the OI2 Lab.
* **Monitoring and assessment of the dissemination and communication results** of the project with a view to measuring the impact of the project’s relevant activities, accordingly fine-tuning INVITE’s strategy in this respect as well as fulfilling its reporting requirements towards the Commission.
On top of the aforementioned, data will be collected/generated by the
applicants of the Open Innovation Competitions (OICs) to be launched in the
framework of INVITE. The purpose of this data collection/generation is to
select the award participants to be provided with financial support in the
form of innovation vouchers for short-term virtual Human Capital Mobility.
Still, neither the platform that will host the OICs nor the exact data to be
collected/generated have yet been determined by the consortium partners. With
that in
mind, this category of data will be further elaborated in the final version of
the DMP (M36). Along these lines, consortium partners acknowledge the
importance of this data, and thus they will handle it according to the
national and European laws with respect to the processing of personal data.
The following section provides further details on the different types and
formats of data collected\generated during the project’s activities.
## Types and formats of collected/generated data
During the activities of INVITE, data of different nature will be
collected/generated. The types of this data can be described in many different
ways depending on the source and physical format of the data. Nevertheless,
data is often characterised by how it is created/captured 4 . Examples include
electronic text documents, spreadsheets, questionnaires and transcripts, among
others. Another way to think about data is the format in which different data
types (qualitative, quantitative, etc.) are stored. Along these lines,
INVITE’s data will be available in easily accessible formats, such as post
scripts (e.g. pdf, xps, etc.), machine readable formats (xml, html, etc.),
spreadsheets (e.g. xlsx, csv, etc.), text documents (e.g. docx, rtf, etc.),
compressed formats (e.g. rar, zip, etc.) or any other format required by the
objectives and methodology of the activity within the frame of which it is
produced.
In this respect, special attention will be paid in using **open formats** 5
(such as csv, pdf, zip, etc.) and/or **machine-readable formats** 6 (such as
xml, json, rdf, html, etc.) when possible in order to enhance the
**interoperability** and **re-use** of INVITE’s data. In doing so, we will be
providing data that is **easily readable** and **freely usable in any software
program** employed by third-parties interested in utilizing the data.
With that in mind, the type and formats of the data collected/generated in the
framework of INVITE can be divided into **3 distinct categories** , namely
**(i)** data collected/generated by direct input methods, **(ii)** data
collected/generated by users of the OI2 Lab, and **(iii)** data collected/generated from dissemination,
communication and stakeholder engagement activities, as further described in
the following subsections.
### _Data collected/generated through direct input methods_
In the framework of INVITE, direct input methods encompass methodologies for
collecting data through interactions between consortium partners and external
stakeholders, with the latter providing data to the former. Along these lines,
external stakeholders assume the role of a data subject that is a natural
person whose personal data is being processed 7 . In particular, the
identification and selection of suitable data subjects are based on purposeful
sampling according to which, external stakeholders are identified and selected
by consortium partners based on their role within the OI ecosystem and the
objectives of the respective activity for which data is collected 8 .
In this context, quantitative and qualitative data will be collected/generated
during the course of INVITE:
* **Quantitative data** is numerical and acquired through counting or measuring 9 . Examples of quantitative data are the yearly turnovers of a business, the hourly compensation of a worker, the number of SMEs in Europe, etc. This data may be represented by ordinal, interval or ratio scales and lend themselves to statistical manipulation.
* **Qualitative data,** sometimes referred to as categorical data, is data that can be arranged into categories based on physical traits, gender, colours or anything that does not have a number associated with it 10 . Moreover, written documents, interviews, and various forms of in-field observation are all sources of qualitative data. Examples of qualitative data are learning preferences, skillsets, country of origin, etc.
With that in mind, further details with respect to the different types and
formats of data that will be collected through direct input methods under the
frame of INVITE are provided in the remainder of this subsection.
#### Market gaps and opportunities
This data has been collected in two phases, with a view to supporting the
analysis of successful OI support service providers and platforms, which are
interconnected with INVITE. The first phase involved a desk review of
secondary data sources that are available in the public domain. The second
phase was implemented through a series of semi-structured in-depth interviews
with respondents who work for either the Enterprise Europe Network (EEN),
NineSigma or Steinbeis. The data collected consists of a combination of
information extracted from the secondary data sources and information provided
by the respondents during the in-depth interviews. In both cases, the data
collected are mainly of a qualitative nature and recorded in the form of
interview transcripts, containing information regarding how the OI
platforms/service providers under study currently operate in terms of their
value propositions, target audiences, service offerings (both online and
offline) and commercial models.
#### User needs and requirements
This data was collected during the qualitative interview-based survey that was
conducted in the framework of INVITE, which aimed at revealing the needs and
requirements of prospective users and stakeholders of the OI2 Lab. In
particular, a stratified purposeful sampling methodology was employed in order
to include diverse OI stakeholders of the quadruple helix (i.e. from
businesses and investors in the private sector over to academia and public
authorities as well as civil society) and across different regions in Europe.
Data in the form of interview transcripts was collected by means of semi-
structured interviews and encompass information on the different ways in which
OI actors are currently engaged (or not) in OI, various enablers / barriers
that appear to be fostering / hindering their participation in OI as well as
insights into the perceived knowledge, skills and support that they may need
in order to successfully adopt and apply OI.
#### Ideas and feedback collected during the INVITE Co-Creation Workshop
In the framework of the INVITE Co-Creation Workshop, a diverse group of OI
stakeholders engaged in a series of co-creative brainstorming and ideation
sessions to co-define demand-driven designs and features for INVITE’s pilots
and OI2 Lab. In particular, the World Café process 11 was used during the
workshop and six tables were set up to discuss six themes, each of which
addressed key aspects of the pilots, services and tools to be deployed through
the OI2 Lab. The discussions followed a semi-structured approach and were
guided by six key questions, unique to each theme. Data of both qualitative
and quantitative nature, as derived from the participants’ responses to each
of these six questions per theme along with any additional ideas, were written
up and collected by means of post-it notes 12 .
#### Pilot monitoring, co-evaluation and validation data
Users and stakeholders who will participate in the pilot activities of INVITE
and its OI2 Lab will be required to provide feedback as part of an ongoing
monitoring framework that will be established to keep track of, co-evaluate and
validate their performance, ultimately fuelling their demand-driven
improvement.
To this end, both qualitative and quantitative data will be collected from
pilot participants by means of questionnaire-based surveys aimed at capturing
customer experience metrics (e.g. a Net Promoter Score) as well as key general
information relating to how users found out about the OI2 Lab, the main
objectives that drove them to use it, the components and services used as well
as to the experiences and outcomes derived from their participation in the
pilots. Specific data on the impact of the pilots will also be collected,
revolving around themes such as the degree of integration of OI in the users’
business model, external knowledge search and acquisition, collaboration with
other stakeholders, occasional vs continuous engagement in OI activities,
disruptive vs incremental innovation, internal innovation capability, time-to-
market, level of proficiency gained in collaborative innovation, scale
achieved in terms of outreach (volume, sectoral and geographical), fundraising
capacity, staff impact, organizational impact and cost-benefit.
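As a concrete illustration of one such customer experience metric, the sketch below shows how a Net Promoter Score could be computed from 0-10 questionnaire ratings; the sample ratings are invented and the function is ours, not part of the project's tooling.

```python
# Minimal sketch: computing a Net Promoter Score (NPS) from 0-10 survey
# ratings, as could be captured by the pilot questionnaires. The ratings
# below are invented for illustration.
def net_promoter_score(ratings):
    """NPS = percentage of promoters (9-10) minus percentage of detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

sample_ratings = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]
print(f"NPS: {net_promoter_score(sample_ratings):+.0f}")  # -> NPS: +30
```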
#### Feedback on the OI2 Lab business models
This data will be collected through a questionnaire-based survey of pilot
participants as well as members of the project’s Advisory Board identified as
potential early adopters / lead users, with a view to gaining meaningful
feedback on how the business models of the OI2 Lab may be refined and further
shaped to fit the needs of potential users and stakeholders. The data will be
of both qualitative as well as quantitative nature addressing the
appropriateness and acceptance of different elements of the different Business
Model Canvases and Value Propositions (e.g. customer relationships,
collaborators, revenue streams, cost structure, etc.) that will constitute the
business models designed for the OI2 Lab.
Data collected/generated through direct input methods will be **stored in
standard .docx as well as .xlsx formats** . These kinds of formats allow the
documentation of information coming from various files and documents so that
it exists in a single location. By doing so, it is possible to circulate raw
data from transcripts, as well as text, images and other objects from other
files, to one document file or multiple tabs of a single spreadsheet.
Moreover, both formats can be immediately converted into open and
machine-readable formats (such as .xml and .csv), boosting the
interoperability and re-usability of the datasets produced during the
implementation of INVITE.
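For instance, a minimal sketch of such a conversion is shown below; the file names are placeholders, and the widely used pandas library stands in for whatever tooling the consortium actually adopts.

```python
# Minimal sketch: converting an .xlsx dataset into the open, machine-readable
# .csv format, as described above. File names are placeholders.
import pandas as pd

# Read one sheet of the (hypothetical) spreadsheet into a DataFrame.
df = pd.read_excel("invite_dataset.xlsx", sheet_name=0)

# Write it back out as CSV so that third parties can read it with any
# software program.
df.to_csv("invite_dataset.csv", index=False, encoding="utf-8")
```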
### _Data collected/generated by users of the Open Innovation 2.0 Lab_
The OI2 Lab platform aims to facilitate the needs of potential users and
stakeholders of an extended OI ecosystem. With that in mind, a suite of
tailored ICT tools will be leveraged with a view to structuring a vibrant web-
based community of OI actors and create value for all of them within the OI
ecosystem. The ICT tools include: **(i)** an **open multi-sided marketplace**
, **(ii)** an **online collaboration space** , **(iii)** an **e-learning
environmen** t as well as **(iv)** a **crowdfunding tool** . Within this
framework, users of various roles, such as SME representatives, advisors and
mentors with field expertise and account managers are expected to utilize the
functionalities offered by the ICT tools which in their turn will generate
valuable data for the consortium partners.
On another note, external data is expected to be sourced, reformatted and
presented accordingly from other open innovation initiatives such as, but not
limited to, the Enterprise Europe Network and NineSigma, as well as data
sourced by local actors such as consortium members active in the OI sphere in
their respective regions.
Along these lines, data collected by the users of the OI2 Lab incorporates:
* Data provided by OI2 Lab participants including personal details such as name, contact details, social media accounts, organisation, date of birth and other. Additionally, personal preferences will be collected with respect to fields of interest, expertise, activities as suggested directly by users that will help the OI2 Lab platform personalise the content delivered to users during the pilot phases.
* Data based on tracking the user’s activity across the OI2 Lab platform and will be utilised towards further enhancing the personalisation of the user experience throughout the platform. The overarching goal will be to support: (a) a data driven reiteration and adaptation development process for the consortium in order to identify processes that need enhancements and/or (b) functionalities that are of low or no interest and could be deprecated, as they provide no additional value to participants. Activity data will be collected for all roles and stand to not only streamline processes, but also allow the consortium members to identify the most prominent features required and utilised by OI participants, which will subsequently support finalisation of business modelling and commercialisation efforts.
Further details of data collected from the ICT tools are provided over the
following subsections.
#### Open multi-sided market place data
One of the main scopes of the overall architecture includes bringing together
all the ICT tools functionality under a well-designed Open multi-sided market
place. The main feature of this process is a global user registration and
profile creation, whereby potential users of the OI2 Lab will be able to
access the functional elements and subsequently the value that will be offered
by participating. As such, the registration process includes the following
data collection process:
* **Required Data:** This set of data will be required for registration and include personal data for profile creation including:
* Full Name
* City, country
* Date of birth
* Organisation/SME
* Job Function
* Notification settings (email, platform only or desktop, etc.)
* Privacy settings (publicly available or private profile)
* **Optional Data:** The second set of data to be collected will be optional but highly important in allowing the matchmaking algorithms to match users with relevant content throughout the platform and drive engagement and user experience. Such data will include among others:
* Basic interests (e.g. Technology offer/request, Competitions, etc.)
* Fields of interest/expertise (e.g. machine learning, renewable sources, e-health, etc.)
In addition to personal profile data, the Open multi-sided marketplace will
accommodate tracking of user activity to further inform the matchmaking engine
included in prioritizing notification settings and matching OI2 Lab users with
content or other users (via introductory notifications and search results
adaptive sorting) with common interests and activity. All personally
identifiable data will be tied with a unique userID per participant, which
will be utilised across the ICT tools and processes. In the case whereby
users wish to withdraw their participation and delete their account, all
personally identifiable data will be deleted and the userID will no longer be
associated with a particular profile. The overall data collected will be
stored in the OI2 Lab MySQL database under the required schema to facilitate
process automation.
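By way of illustration only, the sketch below outlines what such a schema could look like; the table and column names are hypothetical, and SQLite stands in for the OI2 Lab MySQL database purely so the snippet is self-contained.

```python
# Hypothetical sketch of a user-profile schema along the lines described
# above. Table and column names are invented; SQLite stands in for the
# OI2 Lab MySQL database so the example is self-contained and runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    user_id       INTEGER PRIMARY KEY,  -- unique userID tying activity together
    full_name     TEXT NOT NULL,
    city          TEXT,
    country       TEXT,
    date_of_birth TEXT,
    organisation  TEXT,
    job_function  TEXT,
    notifications TEXT,                 -- e.g. 'email', 'platform', 'desktop'
    is_public     INTEGER DEFAULT 0     -- privacy setting: public/private profile
);
CREATE TABLE user_interests (           -- optional data for the matchmaking engine
    user_id  INTEGER REFERENCES users(user_id),
    interest TEXT                       -- e.g. 'machine learning', 'e-health'
);
""")

# Account deletion: remove the personally identifiable record so the userID
# is no longer associated with a profile, as described above.
conn.execute("DELETE FROM users WHERE user_id = ?", (42,))
```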
#### Online collaboration space data
In order to facilitate user interaction and collaboration the OI2 Lab will
foster an online collaboration space. This will leverage user profiles and
allow participants to:
* Communicate directly with account managers assigned to them as part of the support process;
* Ask questions in a Q&A format;
* Create topics for discussion among platform users;
* Create work groups on collaborative projects.
As such, the platform will have to store messages, attachments and activity in
an identifiable manner according to userID in the OI2 Lab MySQL database.
Furthermore, any registered user can tag the collaborative spaces or
discussions they initiated as private (i.e. among collaborating parties) or
public allowing open participation. UserIDs will link back to user profiles
unless a user terminates their profile. As an extension to this, apart from
being disassociated from the content they contributed, users may also have the
option to delete the relevant content they contributed to any of the
communication and collaboration spaces in which they participated.
#### E-Learning environment data
One aspect that stands to accelerate open innovation participation and
efficiency is access to learning material relevant to OI. The E-learning
environment to be integrated to the OI2 Lab, will be based on the _Moodle_
open source e-learning environment adapted accordingly to facilitate the needs
of the platform. Course creators will be able to upload content, append fields
of relevance and all necessary details for final submission. Conversely, live
webinars will be created including date and participant participation data
that need to be circulated with the instructors and among other users
depending on the level of anonymity required. As such the e-learning
environment will not only include course or webinar specific data but also
personal data in some cases.
All data including personal preferences and activity will be stored in the OI2
Lab database and lend themselves for access by the matchmaking engine of the
Open multi-sided marketplace to facilitate notifications, content placement
and personalisation, ultimately supporting engagement and re-engagement of
platform participants. Similarly to the other use cases, personally
identifiable data will only be accessible as long as users remain registered
in the OI2 Lab platform environment.
#### Crowdfunding tool data
An important function of any open innovation platform in support of
participating SMEs is access to financing options. Besides state or EU driven
financing tools listed, the OI2 Lab will also provide a crowdfunding tool
whereby participants will have a chance to showcase their projects in search
of sourcing financial support from the general public or investors. In this
respect, crowdfunding campaign creators will upload their proposed projects
and relevant contents, stored in the platform's database, and decide whether
the campaign should be publicly accessible or only visible to registered
users. Following that, users (SMEs, individuals, investors) that want to
participate will be able to pledge their support in an anonymised manner,
whilst only the campaign creator will have access to their personal contact
info.
The matchmaking engine will have access to the campaign details as registered
in the OI2 Lab database and subsequently match them with users with expressed
or relevant interest with notifications and emails in order to drive
engagement. Campaign data will also be conveyed to account managers to enhance
support extending beyond the platform participants. Crowdfunding campaign
creators will have full control over the timeframe of their campaign up to a
designated time limit, inherently coded in the platform, and may proceed with
editing or deleting their campaign and relevant content if they so choose.
#### Web portal analytics
The OI2 Lab will be supported by platform exclusive tracking and analytics
software based on the _Matomo_ open source analytics platform. This
implementation will make sure that the project maintains 100% data ownership,
user privacy is protected and that user-centric insights can be generated and
leveraged across the board.
To track visitors, Matomo will be configured to use first-party cookies, set
on the OI2 Lab domain. Cookies created by Matomo start with: _pk_ref,
_pk_cvar, _pk_id, _pk_ses. Users that wish to be excluded from being tracked
via the cookie method will be allowed to opt out, which will create a cookie
piwik_ignore set on the domain of the Matomo server hosted in the OI2 Lab
server environment. In the case of account deletion, all first-party cookies
will be disabled from being set, for example for privacy reasons.
First-party cookies track, among others, the following data:
* User IP address
* Optional User ID
* Date and time of the request
* Title of the page being viewed (Page Title)
* URL of the page being viewed (Page URL)
* URL of the page that was viewed prior to the current page (Referrer URL)
* Files that were clicked and downloaded (Download)
* Links to an outside domain that were clicked (Outlink)
* Page generation time (the time it takes for webpages to be generated by the webserver and then downloaded by the user: Page speed)
* Location of the user: country, region, city, approximate latitude and longitude (Geolocation)
Other optional data or events may also be tracked in order to further enhance
a data driven implementation plan along future iterations of the platform.
These may include: (i) custom dimensions, (ii) custom variables, (iii)
campaigns, (iv) site search, (v) goals, (vi) events, (vii) e-commerce, (viii)
viewing and clicking on content.
### _Data collected/generated from dissemination, communication and stakeholder engagement activities_
#### Social media statistics (including Facebook, Twitter, LinkedIn, YouTube)
This data will be collected/generated through a periodic monitoring of the
project’s social media statistics (including Facebook, Twitter, LinkedIn and
YouTube) with a view to measuring and assessing the performance and results of
the project’s social media activity in terms of dissemination and
communication. With that in mind, the data will be both qualitative as well as
quantitative in nature, addressing the metrics reached on each channel (e.g.
followers and tweet impressions on Twitter, friends and likes on Facebook,
etc.). Additionally, this data will be followed by an analysis of the results
stemming from it and of possible ways to improve the results so as to reach
the project’s targets. All in all, the data will be stored in a Microsoft
Excel file (.xlsx), while the analysis of the results will be stored in a
standard Word document (.docx).
#### Data collected from project events (e.g. co-creation workshop, stakeholder engagement events, etc.)
This data will be collected in two ways during the implementation of the
project, that is:
* The stakeholder engagement events organised by INVITE (such as the co-creation workshop, regional stakeholder engagement events, etc.) consisting of the participants lists that will enclose demographic information about the participants;
* The participation of INVITE consortium partners in third party relevant events to reach out and engage stakeholders, thus including general information about the events attended and their outreach.
Along these lines, this data is collected so as to keep track of the results
of stakeholder engagement activities and to provide project partners with the
opportunity to report on these activities. Moreover, this data will be updated
every time a partner attends or organises an event. Finally, the data will be
both quantitative and qualitative in nature and will be stored in a standard
spreadsheet (.xlsx).
#### Newsletter subscriptions (e.g. contact details of subscribers)
A subscription form hosted in the project’s _web portal_ will aid the
collection of this data in which any interested stakeholder can freely provide
his/her contact details in a dedicated sign-up form so as to receive the most
up-to-date news and outcomes of the project. A newsletter will be sent to
subscribers once every four months, while a short version of it will be
distributed every month via an e-mail message. With that in mind, this data
will be collected so that interested stakeholders can be informed about the
INVITE project as well as the OI2 Lab. Along these lines, the data will
comprise a list of stakeholders along with their personal information. In this
context, the data collected includes the following information: (i) email
address, (ii) first and last name, (iii) country, (iv) type of organisation,
(v) region and (vi) gender. A copy of this contact list will be stored on
MailChimp’s server ( _http://mailchimp.com_ ), which is used for e-mail
campaigns and newsletter distribution. All personal information included in
this contact list will be used and protected according to MailChimp’s Privacy
Policy.
#### Data from dissemination and communication
This data will be collected through a periodic monitoring of the project’s
miscellaneous dissemination activities such as publications in relevant
journals, posts in the blogs, etc. The data will consist of a list of
publications and posts published by the consortium partners. The purpose of
collecting this data is to assess the outreach and efficiency of the
dissemination activities during the implementation of the project. For this
purpose, a template has been shared with all partners to recommend activities
to be performed and to log the activities they have performed. The template is
also provided online so that partners can directly update their input.
Finally, all the data will be integrated in a single Excel file (.xlsx).
#### Data from monitoring stakeholder engagement
This data is collected during the project’s stakeholder engagement activities
with a view to effectively measuring and reporting the progress of these
activities. With that in mind, a dedicated methodological tool has been
designed and is being employed throughout the duration of INVITE, namely the
Stakeholder Matrix. Each project partner sets up an internal Stakeholder
Matrix ensuring the confidentiality of the data included. In this respect,
this Stakeholder Matrix includes data about key stakeholder groups and
individual stakeholders spanning the quadruple helix innovation system.
These are classified by organisation name, contact person (incl. gender,
region/nation), contact details and activities in which they have been
involved. At least 120 stakeholders shall be identified by each partner
throughout the duration of the project in order to bring together at least
1000 members for the Open Innovation 2.0 Lab community. Finally, project
partners must only send an anonymised Stakeholder Matrix (with only data on
organisation type, gender and region/country) to the project’s Innovation
Manager, that is SEZ, for aggregating the data and updating the aggregated
Stakeholder Matrix of the project at least on a semester basis as well as ad
hoc when deemed necessary. The Stakeholder Matrix is stored in a standard
Excel file (.xlsx).
## Origin of data and re-use of pre-existing data
In the context of INVITE, **new data** will be collected/generated by
consortium partners as well as external stakeholders participating in the
activities of the project and/or using the OI2 Lab. With that in mind and
aside from consortium partners, **external groups of stakeholders from which new
data will originate include** :
* Innovative entrepreneurs (social or not) as well as CEOs and (OI) managers of SMEs (including microfirms) as well as OI managers and practitioners in large enterprises.
* Knowledge, technology and innovation solution providers (e.g. within academic institutions and their technology/knowledge transfer offices, non-university public research organisations, research and technology organisations, high-tech SMEs and large enterprises, etc.).
* Policy designers and implementers at regional, national and EU level (e.g. in regional/national/EU authorities, development agencies, etc.).
* Financers from the private funding sector in both mainstream finance markets (e.g. venture capital, business angels, etc.) as well as more alternative ones (e.g. award-based/equity-based crowdfunding, peer-to-peer consumer lending, etc.).
* Staff members of non-governmental organisations as well as representatives of civil society groups active within the European OI ecosystem and aiming to address social challenges and needs; and
* Other stakeholders (e.g. OI practitioners in cluster organisations and science parks, e-learning providers, etc.) including individual citizens (general public) that may be interested in the project’s results.
Moreover, specific **pre-existing data** may be utilised within the context of
the project as well. In particular, OI intermediaries, support networks and/or
relevant development agencies as well as online collaboration networks and
providers of OI, collective intelligence and knowledge platforms will be
provided with the opportunity to integrate with and enhance the accessibility
of their data-driven offers through the OI2 Lab (e.g. through its open multi-
sided marketplace or e-learning environment). A prime example of such
pre-existing datasets is the sourcing of the Enterprise Europe Network’s
profiles, including technology offers/requests, business offers/requests and
R&D requests. Other examples are the inclusion of NineSigma’s competitions or
challenges, textual content and redirects to local, national or Europe-wide
financing options, and lists of Venture Capital Funds throughout Europe. These
pre-existing datasets will foster an already populated environment with
engaging content that will resonate with the OI2 Lab’s targeted audience.
## Expected size of data
INVITE entails a series of activities aiming at setting the stage for and
ultimately facilitating the demand-driven, evidence-based development, piloting,
evaluation, validation and fine-tuning of its OI2 Lab and value propositions.
With that in mind, the table that follows presents the different activities
implemented during the course of the project in which data is
collected/generated, the types and formats of the data as well as the expected
size of the data.
### _Table 2: Expected size of data_
<table>
<tr>
<th>
**No**
</th>
<th>
**Name of activity**
</th>
<th>
**Data**
</th>
<th>
**Type of data**
</th>
<th>
**Format of data**
</th>
<th>
**Expected size of data**
**(KB)***
</th> </tr>
<tr>
<td>
1
</td>
<td>
Analysis of European OI support service providers/platforms
</td>
<td>
Market gaps and opportunities
</td>
<td>
Interview transcripts
</td>
<td>
.docx
</td>
<td>
207 **
</td> </tr>
<tr>
<td>
2
</td>
<td>
Analysis of needs and requirements of prospective users and stakeholders
</td>
<td>
User needs and requirements
</td>
<td>
Interview transcripts
</td>
<td>
.docx
</td>
<td>
2,516.5 **
</td> </tr>
<tr>
<td>
3
</td>
<td>
Analysis of ideas and feedback collected during
INVITE Co-Creation
Workshop
</td>
<td>
Ideas and feedback
collected during the INVITE
Co-creation Workshop
</td>
<td>
Post-it notes
</td>
<td>
.docx
</td>
<td>
107 **
</td> </tr>
<tr>
<td>
4
</td>
<td>
Monitoring, co-evaluation and validation of INVITE’s pilots and OI2 Lab
</td>
<td>
Data collected through direct input methods
</td>
<td>
Questionnaires
</td>
<td>
.docx
</td>
<td>
100,000 *
</td> </tr>
<tr>
<td>
Open multi-sided marketplace data
</td>
<td>
Machine & user generated
</td>
<td>
MySQL dB
</td>
<td>
80,000 *
</td> </tr>
<tr>
<td>
Online collaboration space data
</td>
<td>
Machine & user generated
</td>
<td>
MySQL dB
</td>
<td>
1,000,000 *
</td> </tr>
<tr>
<td>
E-learning environment data
</td>
<td>
Machine & user generated
</td>
<td>
MySQL dB
</td>
<td>
1,000,000 *
</td> </tr>
<tr>
<td>
Crowdfunding tool data
</td>
<td>
Machine & user generated
</td>
<td>
MySQL dB
</td>
<td>
100,000 *
</td> </tr>
<tr>
<td>
Web portal analytics
</td>
<td>
Machine generated
</td>
<td>
MySQL dB
</td>
<td>
20,000 *
</td> </tr>
<tr>
<td>
5
</td>
<td>
Improvement and validation
of the OI2 Lab’s Business Models
</td>
<td>
Feedback on the OI2 Lab business models
</td>
<td>
Questionnaires
</td>
<td>
.xlsx
</td>
<td>
250 *
</td> </tr>
<tr>
<td>
6
</td>
<td>
Monitoring and evaluation of the results produced by the project’s stakeholder
engagement activities
</td>
<td>
Data for monitoring stakeholder engagement
</td>
<td>
Stakeholder Matrix
</td>
<td>
.xlsx
</td>
<td>
61 *
</td> </tr>
<tr>
<td>
7
</td>
<td>
Monitoring and assessment
of the project’s dissemination and communication results
</td>
<td>
Social media statistics
</td>
<td>
Machine generated
</td>
<td>
.xlsx
</td>
<td>
150 *
</td> </tr>
<tr>
<td>
Project events data
</td>
<td>
Spreadsheets
</td>
<td>
.xlsx
</td>
<td>
150*
</td> </tr>
<tr>
<td>
Newsletter subscriptions
</td>
<td>
Spreadsheets
</td>
<td>
.xlsx
</td>
<td>
300*
</td> </tr>
<tr>
<td>
Data for dissemination and communication reporting
</td>
<td>
Spreadsheets
</td>
<td>
.xlsx
</td>
<td>
150*
</td> </tr> </table>
* The estimated expected size of the data is based on the adjusted size of data generated via similar activities of project partners in the past unless otherwise indicated.
** The collection/generation of this data has already been completed and the
size of the data represents actual values (not estimations).
## Data utility
The stakeholders that may find meaningful utility for the data to be
collected/generated by the project (both within as well as outside of INVITE’s
consortium) along with the benefits that could arise for them by utilising
this data, are concisely presented in the table that follows.
### _Table 3: Data utility_
<table>
<tr>
<th>
_**Stakeholder group** _
</th>
<th>
**_Data utility_ **
</th> </tr>
<tr>
<td>
**Researchers in the field of Open Innovation**
</td>
<td>
The field of OI, albeit having promising potential for generating sustainable
innovations with great market and social value, appears to still be relatively
under-researched and characterised by a limited evidence base of relevant
efforts worldwide 13 . Under this light, INVITE’s data can provide
researchers in the multi-disciplinary and cross-cutting field of OI with
valuable insights into how OI is currently taking place in Europe as well as
with empirical evidence generated from practical applications of OI and
collaborative models of innovation. Interested researchers may re-use the data
of INVITE as a basis to replicate similar studies within the same or different
contexts as well as to design and launch new studies, generating comparable
research findings to further advance the field and shed ample light on the
inner workings of OI within a quadruple helix innovation model.
</td> </tr>
<tr>
<td>
**Policy makers, implementers and funders**
</td>
<td>
Throughout its duration, INVITE is set on collecting and producing
quantifiable evidence on the effectiveness and impact of the support
mechanisms and measures to be piloted during the project (such as the
innovation voucher scheme or the e-learning interventions deployed through the
OI2 Lab), with a view to fostering their replication and scale-up beyond its
completion. Data generated to this end, may find great utility in the hands of
experts who design, implement and/or fund relevant innovation and business
support policies. Indeed, data on what really changed (or not), for whom and
why during the experiments conducted by the OI2 Lab, can provide them with
reliable input to analyse the potential successes (and failures) generated
under the pilot operation of the OI2 Lab. This can in turn help them gain a
better understanding of what could drive successful OI in their own context,
supporting them in facilitating knowledge flows from and to their respective
nations/regions, while also fostering OI, especially amongst SMEs.
</td> </tr>
<tr>
<td>
**Project partners**
</td>
<td>
The data collected/generated during INVITE is of paramount utility for project
partners in order to produce evidence-based results and ultimately achieve the
objectives of the project. Indeed, this data will enable the co-design,
development, validation and fine-tuning of the project’s pilots and OI2 Lab.
Moreover, the data will be used to design, improve and validate sustainable
business models for the rollout of the OI2 Lab, while also fostering the
replication and scale-up of its piloted solutions. At the same time, this data
may hold meaningful utility for project partners beyond the end of the project
as well, enabling them to build and capitalise upon interesting ideas and
opportunities that may emerge to ensure the long-term sustainability of the
OI2 Lab.
</td> </tr> </table>
# FAIR data
The _Guidelines on Data Management in Horizon 2020_ of the Commission
emphasise the importance of making the data produced by projects funded under
Horizon 2020 **Findable, Accessible, Interoperable as well as Reusable
(FAIR)** , with a view to ensuring its sound management. This means using
standards and metadata to make data discoverable, specifying data sharing
procedures and which data will be open, allowing data exchange via open
repositories as well as facilitating the reusability of the data. With that in
mind, the following sections of the DMP lay out the methodology followed in
the framework of INVITE with respect to making data findable, accessible and
interoperable as well as ensuring its preservation and open access, with a
view to increasing its re-use.
## Making data findable, including provisions for metadata
### _Data discoverability and identification mechanisms_
INVITE places special emphasis on enhancing the discoverability of the data
collected/generated during the course of its activities. To this end, the
project follows a metadata-driven approach so as to increase the searchability
of the data, while also facilitating its understanding and re-use. Metadata is
defined as “data about data” or “information about information” 14 . It is
usually structured textual information that describes something about the
creation, content, or context of a digital resource – be it a single file,
part of a single file, or a collection of many files. Metadata is the glue
which links information and data across the world wide web. It is the tool
that helps people to discover, manage, describe, preserve and build
relationships with and between digital resources 15 .
In particular, three distinct types of metadata exist 16 , as presented
below:
* **Descriptive metadata** , used to identify and describe collections and related information resources. Descriptive metadata at the local level helps with searching and retrieving. In an online environment, descriptive metadata helps to discover resources. Most of the time, it includes information such as the title, author, date, description, identifier, etc.
* **Administrative metadata** is used to facilitate the management of information resources. It is helpful for both short-term and long-term management and processing of data. This is information that will not usually be relevant to the public but will be essential for staff to manage collections internally. Such metadata may be location information, acquisition information, etc.
* **Structural metadata** enables navigation and presentation of electronic resources. It documents how the components of an item are organised. Examples of structural metadata could be the way in which pages are ordered to form chapters of a book, a photograph that is included in a manuscript or a scrapbook, or the JPEG and TIF files that were created from the original photograph negative, linked together.
With that in mind, **data produced/used during INVITE is discoverable with
metadata** suitable to its content and format. To this end, the project
employs **metadata standards** to produce rich and consistent metadata to
support the long-term discovery, use and integrity of its data (see Subsection
3.1.5 for more details on the metadata standards adopted by INVITE).
In parallel, to further increase data discoverability, the **data produced by
INVITE and deemed open for sharing and re-use, will be deposited to Zenodo** (
_www.zenodo.org_ ) , **an open data repository.** This data repository,
created by OpenAIRE and CERN, has been chosen to enable open access to the
project’s open data free of charge. In fact, Zenodo builds and operates a
simple service that enables researchers, scientists, EU projects and
institutions, among others, to share and showcase research results (including
data and publications) that are not part of the existing institutional or
subject-based repositories of the research communities. It accepts any file
format, promotes peer-reviewed openly accessible research, allows the creation
of own collections and it is available free of charge both for INVITE to
upload and share data as well as for other stakeholders to explore, download
and re-use this data.
Moreover, by employing this data repository, the **data produced during the
implementation of the project is locatable by means of a standard
identification mechanism.** Indeed, INVITE will be able to assign globally
resolvable **Persistent Identifiers (PIDs)** on any data uploaded to Zenodo.
An identifier is a unique identification code that is applied to a dataset, so
that it can be unambiguously referenced 17 . For example, a catalogue number
is an identifier for a particular specimen and an ISBN code is an identifier
for a particular book. PIDs are simply maintainable identifiers that allow for
permanent reference to a digital object. In other words, PIDs are a way of
giving digital resources, such as documents, images and data records, a unique
and persistent reference number.
Moreover, as a digital repository, Zenodo registers **Digital Object
Identifiers (DOIs)** for all submitted data through _DataCite_ , which is
the leading global non-profit organisation that provides PIDs (and
specifically DOIs) for research data, and preserves these submissions using
the safe and trusted foundation of CERN’s data centre, alongside the biggest
scientific dataset in the world, the LHC’s 100PB Big Data store 18 . This
means that the data preserved in Zenodo will be accessible for years to come,
and the DOIs will function as perpetual links to the resources. DOIs remain
valuable since they are future-proofed against Uniform Resource Locator (URL)
or even protocol changes, through resolvers (such as _doi.org_ ). With that in
mind, an example of a DOI retrieved from this open repository follows the
structure illustrated by Figure 1.
_**Figure 1: Typical DOI created by Zenodo**_
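By way of illustration, the following is a hedged sketch of how a dataset
could be deposited programmatically so that a DOI is registered on
publication. It assumes Zenodo’s public REST deposit API; the access token
and file name are placeholders, and the metadata values are illustrative
only.

```python
import requests

# Hedged sketch against Zenodo's REST deposit API; the token is a placeholder.
BASE = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "YOUR_ZENODO_TOKEN"}

# 1. Create a new, empty deposition.
dep = requests.post(BASE, params=TOKEN, json={}).json()

# 2. Upload the data file to the deposition (hypothetical file name).
with open("INVITE_dataset.xlsx", "rb") as fh:
    requests.post(f"{BASE}/{dep['id']}/files", params=TOKEN,
                  data={"name": "INVITE_dataset.xlsx"}, files={"file": fh})

# 3. Describe the deposition with minimal metadata.
metadata = {"metadata": {
    "title": "INVITE example dataset",
    "upload_type": "dataset",
    "description": "Anonymised and aggregated project data.",
    "creators": [{"name": "INVITE consortium"}],
}}
requests.put(f"{BASE}/{dep['id']}", params=TOKEN, json=metadata)

# 4. Publish; Zenodo then registers a DOI of the form 10.5281/zenodo.<id>
#    through DataCite.
record = requests.post(f"{BASE}/{dep['id']}/actions/publish",
                       params=TOKEN).json()
print(record.get("doi"))
```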
At the same time, **datasets not uploaded to Zenodo will be deposited in a
searchable resource (i.e. the web portal of the project) and utilise well-
tailored identification mechanisms** as well, in the form of standard naming
conventions that will safeguard their consistency and make them **easily
locatable** for project partners within the framework of the project. The
following subsection provides further details in this respect.
### _Naming conventions_
Following a consistent set of naming conventions in the development of the
project’s data files can greatly enhance their searchability. With that in
mind, INVITE creates consistent data file names that provide clues to their
content, status and versioning, while also increasing their discoverability.
In doing so, project partners as well as interested stakeholders can easily
identify, classify and sort data files.
According to the UK Data Archive ( _UK Data Service, 2017b_ ) , a best
practice in naming convention is to create brief yet meaningful names for data
files, that facilitate classification. The naming convention should avoid the
utilisation of spaces, dots and special characters (such as & or !), whereas
the use of underscores is endorsed, to separate elements in the data file name
and make them understandable. At the same time, versioning should be a part of
a naming convention to clearly identify the changes and edits in a file.
With that in mind and to facilitate the referencing of the datasets that will
be produced during its implementation, INVITE employs a **standard naming
convention** **that integrates versioning and takes into account the
possibility of creating multiple datasets** during an activity that entails
data collection/generation. Indeed, INVITE’s naming convention takes this
issue into account and addresses it by employing a unique element that
captures the number of datasets produced under the same activity.
In particular, the **naming convention employed by the project** **is
described below** .
**INVITE _ [Name of Study] _ [Number of dataset] _ [Issue Date] _ [Version
number]**
* **INVITE:** The name of the project.
* **Name of Study:** A short version of the name of the activity for which the dataset is created.
* **Number of dataset:** An indication of the number assigned to the dataset.
* **Issue Date:** The date on which the latest version of the dataset was modified (YYYY.MM.DD).
* **Version number:** The versioning number of a dataset.
With the above in mind, some **indicative examples** to showcase the naming
structure applied in the context of INVITE are provided below:
* **INVITE_NeedsAndRequirements_Dataset1_2017.10.31_v1 –** The first dataset generated within the framework of the survey conducted to identify the needs and requirements of diverse OI stakeholders. This is the first version of the dataset, which was last modified on the 31st of October 2017 (31/10/2017).
* **INVITE_BMValidation_Dataset2_2018.02.01_v2 –** The second dataset created in the process of validating and improving the business models developed for the OI2 Lab, with a view to feeding the elaboration of the business plan that will guide its market rollout beyond the end of the project. The last modification of this dataset, which in this case produced the second version of the dataset, was on the 1st of February 2018 (01/02/2018).
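As a minimal illustration, the convention above can also be generated
mechanically; the following Python sketch reproduces the first example listed:

```python
from datetime import date

def dataset_filename(study: str, dataset_no: int,
                     issued: date, version: int) -> str:
    """Build a file name following the convention:
    INVITE_[Name of Study]_[Number of dataset]_[Issue Date]_[Version number]."""
    return (
        f"INVITE_{study}_Dataset{dataset_no}_"
        f"{issued.strftime('%Y.%m.%d')}_v{version}"
    )

# Reproduces the first example above.
print(dataset_filename("NeedsAndRequirements", 1, date(2017, 10, 31), 1))
# -> INVITE_NeedsAndRequirements_Dataset1_2017.10.31_v1
```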
### _Search keywords_
The project’s data will be provided with search keywords with a view to
optimising its re-use by interested stakeholders during its entire lifetime.
With that in mind, the metadata standards employed by INVITE provide
opportunities for tagging the data collected/generated and its content with
keywords. In general, keywords are a subset of metadata and include words and
phrases used to name data. In the context of INVITE, keywords are used to add
valuable information to the data collected/generated as well as to facilitate
the description and interpretation of its content and value.
Along these lines, the project’s strategy on keywords is underpinned by the
following principles:
* The who, the what, the when, the where, and the why should be covered.
* Consistency among the different keyword tags needs to be ensured.
* Relevant, understandable and clear keywording ought to be sought.
In general, the keywords will comprise terms related to open innovation, co-
creation, the quadruple helix as well as SMEs. The keywords will accurately
reflect the content of the datasets and avoid words used only once or twice
within them.
### _Versioning_
Versioning of information makes a revision of datasets uniquely identifiable
and can be used to determine whether and how data changed over time and to
define specifically which version the creators/editors are working with.
Moreover, effective data versioning enables understanding if a newer version
of a dataset is available and which are the changes between the different
versions allowing for comparisons and preventing confusion. In this context,
**a clear version number indicator is used in the naming convention** of every
data file produced during the course of INVITE in order to facilitate the
identification of different versions.
### _Standards for metadata creation_
**INVITE employs standards for creating metadata** for the data
collected/generated by the project, with a view to describing it with **rich
metadata** and thus improving their discoverability and searchability. In
result, effective searching, improved digital curation and easy sharing will
be realized. In addition, the metadata standards applied enable the
integration of metadata from a variety of sources into other technical
systems.
With that in mind, **for INVITE’s openly available data the** **metadata
standards provided by Zenodo will be used** . Zenodo creates metadata to
accompany the datasets that are uploaded to its repository, extending their
reach to a wider audience of interested stakeholders. This metadata can be
exported in several standard formats, including open and machine-readable ones
(such as MARCXML, Dublin Core, and DataCite Metadata Schema), following the
guidelines of OpenAIRE and are stored by Zenodo in JSON-format according to a
defined JSON schema 19 .
Project **data not available for re-use will also be annotated with open and
machine-readable metadata** following the **Dublin Core Metadata standard** .
The Dublin Core Metadata element set (certified with the ISO Standard 15836)
is a standard which can be easily understood and implemented and as such, is
one of the best known metadata standards. It was originally developed as a
core set of elements for describing the content of web pages and enabling
their search and retrieval. Among the reasons for selecting this standard is
also the fact that **Zenodo is compatible with Dublin Core metadata formats**
and thus any initially closed data, that may become open at a later stage
(e.g. due to a change in the consortium’s policy), will not lose its metadata.
With that said, the Dublin Core metadata standard is a simple yet effective
set for creating rich metadata that can describe a wide range of resources.
The fifteen-element "Dublin Core" described in this standard is part of a
larger set of metadata vocabularies and technical specifications maintained by
the _Dublin Core Metadata Initiative (DCMI)_ . The full set of vocabularies
also includes sets of resource classes, vocabulary encoding schemes, and
syntax encoding schemes. **An online metadata generator will be used** to
produce the different metadata elements required (
_dublincoregenerator.com_ ).
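For illustration, a simple Dublin Core record of the kind such a generator
produces can also be assembled directly with Python’s standard library; the
element values below are illustrative only:

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dublin_core_record(fields: dict) -> bytes:
    """Serialise a dict of Dublin Core elements into a simple XML record."""
    root = ET.Element("metadata")
    for element, value in fields.items():
        ET.SubElement(root, f"{{{DC_NS}}}{element}").text = value
    return ET.tostring(root, encoding="utf-8")

# Illustrative element values only.
print(dublin_core_record({
    "title": "INVITE_NeedsAndRequirements_Dataset1_2017.10.31_v1",
    "creator": "INVITE consortium",
    "date": "2017-10-31",
    "type": "Dataset",
    "language": "en",
}).decode())
```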
## Making data openly accessible
### _Openly available and closed data_
INVITE is part of the H2020 Open Research Data Pilot (ORDP) that aims to “
_make the data collected/generated by selected projects openly available with
as few restrictions as possible, while at the same time protecting sensitive
data from inappropriate access_ ” 20 . Under this light, the project adopts
the good practice encouraged by the ORDP, namely that of making data as open
as possible and as closed as necessary 21 . This calls for project partners
to disseminate the project’s data that have the potential to offer long-term
value to external stakeholders and do not harm the confidentiality and privacy
of the stakeholders that contributed in the collection/generation of this
data, with a view to maximising the beneficial impact of INVITE.
**Only anonymised and aggregated data will be made open** to ensure that data
subjects cannot be identified in any reports, publications and/or datasets
resulting from the project. The project partner serving as **the data
controller** 22 **in each case will undertake all the necessary anonymisation
procedures** to anonymise the data in such a way that the data subject is no
longer identifiable (more details on data management responsibilities are
provided in Section 4.2).
To this end, it is important to keep in mind that during the process of data
anonymisation, data identifiers need to be removed, generalised, aggregated or
distorted. Moreover, **anonymisation is different from pseudonymisation** ,
which falls under a distinct category in the GDPR: anonymisation
theoretically destroys any way of identifying the data subject, while
pseudonymisation allows the data subject to be re-identified with additional
information. Along these lines, the table below provides a **list of good
practices** for the anonymisation of quantitative and qualitative data,
derived from the tour guide on data management of the Consortium of European
Social Science Data Archives (CESSDA).
#### Table 4: Good practices for data anonymisation
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Good practices**
</th> </tr>
<tr>
<td>
Quantitative data
</td>
<td>
* _Remove or aggregate variables or reduce the precision or detailed textual meaning of a variable._
* _Aggregate or reduce the precision of a variable such as age or place of residence. As a general rule, report the lowest level of geo-referencing that will not potentially breach respondent confidentiality._
* _Generalise the meaning of a detailed text variable by replacing potentially disclosive free-text responses with more general text._
* _Restrict the upper or lower ranges of a continuous variable to hide outliers if the values for certain individuals are unusual or atypical within the wider group researched._
</td> </tr>
<tr>
<td>
Qualitative data
</td>
<td>
* _Use pseudonyms or generic descriptors to edit identifying information, rather than blanking-out that information;_
* _Plan anonymisation at the time of transcription or initial write-up (longitudinal studies may be an exception if relationships between waves of interviews need special attention for harmonised editing);_
* _Use pseudonyms or replacements that are consistent within the research team and throughout the project, for example using the same pseudonyms in publications and follow-up research;_
* _Use 'search and replace' techniques carefully so that unintended changes are not made, and misspelt words are not missed;_
* _Identify replacements in text clearly, for example with [brackets] or using XML tags such as <seg>word to be anonymised</seg>;_
* _Create an anonymisation log (also known as a de-anonymisation key) of all replacements, aggregations or removals made and store such a log securely and separately from the anonymised data files._
</td> </tr> </table>
Source: Tour guide on data management of the CESSDA 23
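As a minimal sketch of the qualitative-data practices above (consistent
pseudonyms, careful search-and-replace, and a separately stored anonymisation
log), consider the following; the names in the replacement map are
fictitious:

```python
import csv
import re

# Consistent pseudonym replacements; the mapping doubles as the
# anonymisation log (de-anonymisation key) and must be stored securely,
# separately from the anonymised files. The names below are fictitious.
replacements = {
    "Maria Papadopoulou": "[Interviewee 1]",
    "Acme GmbH": "[SME, Germany]",
}

def anonymise(text: str) -> str:
    """Replace identifying strings with bracketed generic descriptors,
    matching whole words so unintended changes are not made."""
    for original, pseudonym in replacements.items():
        text = re.sub(rf"\b{re.escape(original)}\b", pseudonym, text)
    return text

def write_log(path: str) -> None:
    """Persist the anonymisation log as a small CSV file."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["original", "replacement"])
        writer.writerows(replacements.items())

print(anonymise("Interview with Maria Papadopoulou of Acme GmbH."))
```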
With that in mind, the following table presents the data collected/generated
during the course of the project that will be made openly available. In case
certain data cannot be shared (or need to be shared under restrictions), a
justification for that choice is provided.
#### Table 5: Data availability
<table>
<tr>
<th>
**No**
</th>
<th>
**Data**
</th>
<th>
**Availability**
</th>
<th>
**Notes**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Market gaps and opportunities
</td>
<td>
Open
</td>
<td>
\-
</td> </tr>
<tr>
<td>
2
</td>
<td>
User needs and requirements
</td>
<td>
Open
</td>
<td>
\-
</td> </tr>
<tr>
<td>
3
</td>
<td>
Ideas and feedback collected during the INVITE Co-creation
Workshop
</td>
<td>
Open
</td>
<td>
\-
</td> </tr>
<tr>
<td>
4
</td>
<td>
Pilot monitoring, co-evaluation and validation data collected through direct
input methods
</td>
<td>
Open
</td>
<td>
\-
</td> </tr>
<tr>
<td>
5
</td>
<td>
Open multi-sided marketplace data
</td>
<td>
Closed
</td>
<td>
Data provided and/or produced via the interaction of users with the OI2 Lab
will be closed and only accessible to platform account managers as they
include personally identifiable data. Furthermore, registered users will be
provided with options as to the privacy settings of their personal data.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Online collaboration space data
</td>
<td>
Closed
</td>
<td>
See the note for No 5.
</td> </tr>
<tr>
<td>
7
</td>
<td>
E-learning environment data
</td>
<td>
Closed
</td>
<td>
See the note for No 5.
</td> </tr>
<tr>
<td>
8
</td>
<td>
Crowdfunding tool data
</td>
<td>
Closed
</td>
<td>
See the note for No 5.
</td> </tr>
<tr>
<td>
9
</td>
<td>
Web portal analytics
</td>
<td>
Open
</td>
<td>
\-
</td> </tr>
<tr>
<td>
10
</td>
<td>
Feedback on the OI2 Lab business models
</td>
<td>
Closed
</td>
<td>
The data will remain closed (accessible only to members of the INVITE
consortium) so as to safeguard the commercial interests of project partners
with respect to the market rollout of the OI2 Lab.
</td> </tr>
<tr>
<td>
11
</td>
<td>
Data for monitoring stakeholder engagement
</td>
<td>
Open
</td>
<td>
\-
</td> </tr>
<tr>
<td>
12
</td>
<td>
Social media statistics
</td>
<td>
Open
</td>
<td>
\-
</td> </tr>
<tr>
<td>
13
</td>
<td>
Project events data
</td>
<td>
Closed
</td>
<td>
This data will remain closed (accessible only to consortium members) as it is
useful only for internal reporting purposes. On top of that, any anonymization
will leave no data within the dataset.
</td> </tr>
<tr>
<td>
14
</td>
<td>
Newsletter subscriptions
</td>
<td>
Closed
</td>
<td>
This data will remain closed (accessible only to consortium members) as it is
useful only for internal reporting purposes. On top of that, any anonymization
will leave no data within the dataset.
</td> </tr>
<tr>
<td>
15
</td>
<td>
Data for dissemination and communication reporting
</td>
<td>
Open
</td>
<td>
\-
</td> </tr> </table>
It is important to note that **all personal data collected/generated will be
considered as closed data prior to their anonymisation and aggregation** to
safeguard the confidentiality of the data subjects.
This data will be securely stored by the consortium partners that collected
them to be **preserved in their respective records** only for as long as
necessary for them to comply with their contractual obligations to INVITE’s
funding authority, namely the Research Executive Agency (REA) of the
Commission, and **no longer than 5 years from the project’s completion** .
During this period the personal data will be accessible only by authorised
individuals of INVITE’s consortium partner that collected this data and of the
REA. After this period the personal data will be deleted from the respective
consortium partner’s records.
### _Data accessibility and availability_
Public access to the open data will be made available through Zenodo, which
will automatically link to OpenAIRE. The data will be fully accessible thanks
to the included metadata and the search facility available on Zenodo. At the
same time, closed data will be stored and shared amongst authorised members of
the consortium through the web portal of the project. With that in mind, the
following table presents where data will be made accessible in the context of
INVITE.
#### Table 6: Data accessibility
<table>
<tr>
<th>
**No**
</th>
<th>
**Data**
</th>
<th>
**Accessibility**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Market gaps and opportunities
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
2
</td>
<td>
User needs and requirements
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
3
</td>
<td>
Ideas and feedback collected during the INVITE Co-creation Workshop
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
4
</td>
<td>
Pilot monitoring, co-evaluation and validation data collected through direct
input methods
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
5
</td>
<td>
Open multi-sided marketplace data
</td>
<td>
\-
</td> </tr>
<tr>
<td>
6
</td>
<td>
Online collaboration space data
</td>
<td>
\-
</td> </tr>
<tr>
<td>
7
</td>
<td>
E-learning environment data
</td>
<td>
\-
</td> </tr>
<tr>
<td>
8
</td>
<td>
Crowdfunding tool data
</td>
<td>
\-
</td> </tr>
<tr>
<td>
9
</td>
<td>
Web portal analytics
</td>
<td>
Zenodo/Matomo *
</td> </tr>
<tr>
<td>
10
</td>
<td>
Feedback on the OI2 Lab business models
</td>
<td>
\-
</td> </tr>
<tr>
<td>
11
</td>
<td>
Data for monitoring stakeholder engagement
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
12
</td>
<td>
Social media statistics
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
13
</td>
<td>
Project events data
</td>
<td>
\-
</td> </tr>
<tr>
<td>
14
</td>
<td>
Newsletter subscriptions
</td>
<td>
\-
</td> </tr>
<tr>
<td>
15
</td>
<td>
Data for dissemination and communication reporting
</td>
<td>
Zenodo
</td> </tr> </table>
*A subset of the web portal analytics relevant to the usage of the OI2 Lab tools and the overall user activity will be anonymised and (a) extracted in the form of .xlsx for upload to the Zenodo platform and (b) provided for open access via the Matomo analytics platform to interested stakeholders and the research community.
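As an illustrative sketch of point (b), aggregate (non-personal) usage
reports could be pulled from the Matomo Reporting API as follows; the
analytics host name is hypothetical, and token_auth="anonymous" assumes the
reports have been configured for public viewing:

```python
import requests

# Hypothetical Matomo host for the OI2 Lab's self-hosted analytics.
MATOMO_URL = "https://analytics.invite-project.eu/index.php"

params = {
    "module": "API",
    "method": "VisitsSummary.get",  # aggregate visit counts, no personal data
    "idSite": 1,
    "period": "month",
    "date": "last12",
    "format": "JSON",
    "token_auth": "anonymous",      # assumes reports are publicly viewable
}
summary = requests.get(MATOMO_URL, params=params).json()
print(summary)
```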
### _Methods, software tools and documentation to access the data_
INVITE emphasises the accessibility of the data collected/generated during the
course of the project. With that in mind, **no specialised method, software
tool and/or documentation are needed** , at the moment, in order to access the
data. Stakeholders can access the data by simply using a web browser (e.g.
Mozilla Firefox, Google Chrome, Internet Explorer, Safari, etc.) on their
computers (either desktop or laptop), smartphones and/or tablets. More
specifically, they first need to access Zenodo through its webpage (following
the link _https://zenodo.org/_ ) and utilise the search engine of the
repository to search for interesting data. By typing the name of the project
(or any other relevant keyword connected to the data), the search engine will
direct the user to the project’s data, ready to be explored and re-used.
Moreover, since the data will be available in open formats, it is ensured that
it can be correctly read by a range of different software programs that most
people use in their everyday lives.
Closed data can be accessed only by authorised project partners through the
respective member section of INVITE’s web portal. Again, no specialised
method, software tool and/or documentation are needed to this end.
Nevertheless, the member section of the web portal is accessible only upon
provision of a unique username and password combination. Therefore, partners
first need to log in to the INVITE web portal (following the link
_http://invite-project.eu/user_ ) through their digital device (e.g.
computers, smartphones, tablets, etc.) and provide their respective usernames
and passwords. Then, they can find the uploaded data categorised under the
file browser section.
### _Data, metadata, code and documentation repositories_
INVITE’s open data along with their linking metadata as well as any relevant
code and documentation (if applicable) required to access this data, will be
deposited to and securely stored by Zenodo. It is quite unlikely that Zenodo
will have to terminate its operation and stop providing its services, but in
such a case all data, metadata, code and documentation uploaded by INVITE will
be transferred to and hosted by other suitable repositories 24 without undue
delay. In this respect, it is important to note that, since all of INVITE’s
openly available data will make use of PIDs (i.e. DOIs), the links to the data
will not be affected. In parallel, INVITE’s data that will not be openly
available for sharing will be deposited, together with their accompanying
metadata, code and documentation (if necessary), to the web portal of the
project.
### _Restrictions_
By utilising Zenodo for sharing the project’s openly available data, INVITE
can apply **different levels of accessibility** for this data taking into
account any relevant issues (such as ethical, rules of personal data,
intellectual property, commercial, privacy-related, security-related, etc.).
More specifically, **Zenodo offers the following levels of data
accessibility** :
* **Open access** : Data remains available for re-use. Nevertheless, the extent to which this data can be re-used is also determined by its accompanying licence for re-use (see Subsection 3.4.1).
* **Embargoed status** : Access to the data will be restricted until the end of the embargo period, at which time the content will automatically become publicly available.
* **Restricted access** : The data will not be made publicly available and sharing will be made possible only upon the approval of the project partner that has responsibility for the data.
* **Closed access** : The data is protected against unauthorised access at all levels and only members of the consortium have the right to access it.
**Project partners will mainly use the open access level** to disseminate the
project’s data amongst the interested stakeholders. Nevertheless, in some
cases embargo periods or restricted access may be used as described in
Subsection 3.2.1. Data that will not be available for re-use will be
accessible only by authorised partners of INVITE’s consortium and/or
authorised personnel from the REA of the Commission.
Moreover, **INVITE will ensure open access to all peer-reviewed scientific
publications** that may be produced in the framework of the project. In
particular, according to the Grant Agreement, INVITE will:
* As soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications as well as deposit, at the same time, the research data needed to validate the results presented in the deposited scientific publications.
* Ensure open access to the deposited publication — via the repository — at the latest on publication, if an electronic version is available for free via the publisher, or within six months of publication.
* Safeguard open access — via the repository — to the bibliographic metadata that identify the deposited publication. The bibliographic metadata shall be in a standard format and include the terms “European Union (EU)” and “Horizon 2020”; the name of the action, acronym and grant number; the publication date, and length of the embargo period if applicable; and a PID.
Along these lines, this section has provided the methodology applied in the
framework of INVITE so as to ensure that its data is as openly accessible as
possible by any stakeholder that may find it interesting for reuse. In this
context, INVITE also focuses on providing metadata standards and appropriate
metadata vocabularies to increase its data interoperability. The following
section provides further details in this respect.
## Making data interoperable
Data interoperability refers to the ability of systems and services that
create, exchange and use data to have clear, shared expectations for the
contents, context and meaning of that data 25 . With that in mind, INVITE
has adopted in its data management methodology the use of metadata
vocabularies, standards and methods that will increase the interoperability of
the data collected/generated through its activities.
More specifically, **the interoperability of the data that will not be
publicly shared will be facilitated by the use of the Dublin Core Metadata
standard.** This standard is a small “metadata element set” which accounts for
issues that must be resolved in order to ensure that data meet traditional
standards for quality and consistency, while still remaining broadly
interoperable with other data sources in the linked data environment. The
fifteen elements of the standard provide a vocabulary of concepts with
natural-language definitions (e.g. title, creator, author, etc.) that are
instantly converted into open machine-readable formats (such as XML, HTML,
etc.), enabling machine-processability. Each element is optional and may be
repeated, while the standard itself offers ways for refining them,
encouraging the use of encoding and vocabulary schemes. The vocabulary of the
Dublin Core Metadata standard is presented in the following table 26 :
### _Table 7: Dublin Core Metadata standard vocabulary_
<table>
<tr>
<th>
**No**
</th>
<th>
**Element**
</th>
<th>
**Element definition**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Title
</td>
<td>
A name given to the resource.
</td> </tr>
<tr>
<td>
2
</td>
<td>
Creator
</td>
<td>
An entity primarily responsible for making the content of the resource.
</td> </tr>
<tr>
<td>
3
</td>
<td>
Subject
</td>
<td>
The topic of the content of the resource.
</td> </tr>
<tr>
<td>
4
</td>
<td>
Description
</td>
<td>
An account of the content of the resource.
</td> </tr>
<tr>
<td>
5
</td>
<td>
Publisher
</td>
<td>
An entity responsible for making the resource available.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Contributor
</td>
<td>
An entity responsible for making contributions to the content of the resource.
</td> </tr>
<tr>
<td>
7
</td>
<td>
Date
</td>
<td>
A date associated with an event in the life cycle of the resource.
</td> </tr>
<tr>
<td>
8
</td>
<td>
Type
</td>
<td>
The nature or genre of the content of the resource.
</td> </tr>
<tr>
<td>
9
</td>
<td>
Format
</td>
<td>
The physical or digital manifestation of the resource.
</td> </tr>
<tr>
<td>
10
</td>
<td>
Identifier
</td>
<td>
An unambiguous reference to the resource within a given context.
</td> </tr>
<tr>
<td>
11
</td>
<td>
Source
</td>
<td>
A reference to a resource from which the present resource is derived.
</td> </tr>
<tr>
<td>
12
</td>
<td>
Language
</td>
<td>
A language of the intellectual content of the resource.
</td> </tr>
<tr>
<td>
13
</td>
<td>
Relation
</td>
<td>
A reference to a related resource.
</td> </tr>
<tr>
<td>
14
</td>
<td>
Coverage
</td>
<td>
The extent or scope of the content of the resource.
</td> </tr>
<tr>
<td>
15
</td>
<td>
Rights
</td>
<td>
Information about rights held in and over the resource.
</td> </tr> </table>
Along similar lines, **the interoperability of openly available data will be
facilitated through Zenodo** , since its metadata will be stored internally in
JSON format according to a defined JSON schema. This encloses HTML microdata
that allows machine-readable data to be embedded in HTML documents in the form
of nested groups of name-value pairs. Moreover, the JSON schema provides a
collection of shared vocabularies in microdata format that can be used to
mark-up pages in ways that can be understood by the major search engines.
Moreover, all metadata linked to the open data is exported via the Open
Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) and can be
harvested. The OAI-PMH develops and promotes interoperability standards that
facilitate the efficient dissemination of data amongst diverse communities 27
.
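For example, any OAI-PMH client can harvest the project’s Dublin Core
metadata from Zenodo’s endpoint; a minimal sketch with the standard library
and requests follows (the set name is a hypothetical Zenodo community
identifier):

```python
import requests
import xml.etree.ElementTree as ET

# Zenodo's OAI-PMH endpoint; ListRecords with the oai_dc prefix returns
# Dublin Core metadata for harvesting.
resp = requests.get("https://zenodo.org/oai2d", params={
    "verb": "ListRecords",
    "metadataPrefix": "oai_dc",
    "set": "user-invite",  # hypothetical community set name
})
tree = ET.fromstring(resp.content)

DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(f"{DC}title"):
    print(title.text)
```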
## Increase data re-use
### _License schemes to permit the widest use possible_
The application of a licence to INVITE’s open data is a simple way to ensure
that any interested third-party can re-use it. In this context, licences are
the instrument which permit a third-party to copy, distribute, display and/or
modify the project’s data only for the purposes that are set by the licence.
Licences typically grant permissions on condition that certain terms are met.
While the precise details vary, three conditions are commonly found in
licences: attribution, no-derivatives, and non-commercial use.
Along these lines, INVITE publishes its openly available data under the
**Creative Commons licencing scheme** to foster their re-use and build an
equitable and accessible environment for them. In fact, Zenodo provides INVITE
the **opportunity to publish its open data under five Creative Commons
licences** as follows:
* **Creative Commons Attribution-Share Alike 4.0** (CC BY-SA 4.0), according to which any third party can freely copy, distribute, display and modify the datasets for any purpose. Remixed, transformed, or built-upon data must be distributed under the same licence as the original. Third parties must give appropriate credit, provide a link to the license, and indicate if changes were made.
_**Figure 2: CC BY-SA 4.0**_
_**Figure 3: CC BY 4.0**_
* **Creative Commons Attribution 4.0 International** (CC BY 4.0) according to which any third party can freely copy, distribute, display and modify the datasets for any purpose. Third parties must give appropriate credit, provide a link to the license, and indicate if changes were made.
* **Creative Commons Attribution-No Derivatives 4.0 International** (CC BY-ND 4.0), according to which any third party can freely copy, distribute, display and modify the datasets for any purpose. Remixed, transformed, or built-upon data, however, must not be distributed. Third parties must give appropriate credit, provide a link to the license, and indicate if changes were made.
_**Figure 4: CC BY-ND 4.0**_
* **Creative Commons Attribution-NonCommercial 4.0 International** (CC BY-NC 4.0), based on which third parties can copy, distribute, display and modify the datasets for any purpose other than commercial, unless they first get permission from project partners. Third parties must give appropriate credit, provide a link to the license, and indicate if changes were made.
_**Figure 6: CC BY-NC 4.0**_
* **Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International** (CC BY-NC-ND 4.0), according to which third parties can copy, distribute, display and modify the datasets for any purpose other than commercial, unless they first get permission from project partners. Remixed, transformed, or built-upon data, however, must not be distributed. Third parties must give appropriate credit, provide a link to the license, and indicate if changes were made.
_**Figure 5: CC BY-NC-ND 4.0**_
With that in mind, **the INVITE consortium considers that the CC BY-NC 4.0 is
an appropriate licensing scheme to ensure the widest re-use of the data** ,
while also taking into account the importance of recognising both the source
and the authority of the data as well as safeguarding the commercial interests
of the OI2 Lab.
Nevertheless, different licensing schemes may be selected to better fit the
needs of INVITE’s open data, ensuring not only its long-term preservation and
re-use but also the interests of the consortium along with the rights of the
individuals whom the data is about. In such a case, this subsection of the
DMP will be updated accordingly.
### _Availability for re-use_
The re-use of data is a key component of INVITE’s methodology for making data
FAIR. In fact, making data available for re-use ensures that interested
stakeholders, other than project partners, can benefit from this data,
contributing towards maximising the impact of the project. **Rich metadata**
created based on metadata standards that enable proper discovery as well as
**appropriate licensing schemes** **facilitate the re-use of INVITE’s open
data** , allowing them to find valuable utility.
In principle, it is expected that data will become available for re-use no
later than 120 days after the end of its processing in the framework of the
project (i.e. collection, anonymisation, aggregation, etc.) to ensure that any
additional data management activities required to this end do not compete with
the timely delivery of the project’s planned outputs. Nevertheless, the data
that has already been collected/generated will be uploaded to Zenodo
immediately after the submission of the interim version of the Data Management
Plan, that is on the 1st of September 2018. This data refers to the activities
performed during the course of Task 1.1, Task 1.2 and Task 1.3. The period for
which the data will remain available for re-use depends on the lifetime of
its repository. In the case of data deposited to Zenodo, this is the
lifetime of CERN’s relevant laboratory, which at the moment has an
experimental programme defined for the next 20 years. In case Zenodo
discontinues its service, the data will be transferred to and hosted by other
suitable repositories.
With that in mind, the expected time that INVITE’s data will be made openly
accessible and uploaded to Zenodo is indicatively provided in the following
table:
#### Table 8: Expected time that data will be made open through Zenodo 28
<table>
<tr>
<th>
**No**
</th>
<th>
**Name of activity**
</th>
<th>
**Expected time for making data open**
</th>
<th>
**Notes**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Market gaps and opportunities
</td>
<td>
01/09/2018
</td>
<td>
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
User needs and requirements
</td>
<td>
01/09/2018
</td>
<td>
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Ideas and feedback collected during the INVITE Co-creation Workshop
</td>
<td>
01/09/2018
</td>
<td>
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Pilot monitoring, co-evaluation and validation data collected through direct input methods
</td>
<td>
1st version: 31/01/2020
2nd version: 31/10/2020
</td>
<td>
This data will be collected/generated during the iterative implementation of the project’s pilots. In this context, the respective dataset will be updated twice during the course of the project, once per pilot round. Accordingly, the dataset will be made openly available no later than 120 days after the completion of each round.
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Web portal analytics
</td>
<td>
1st version: 31/01/2020
2nd version: 31/10/2020
</td>
<td>
As with the pilot data above, the dataset will be updated once per pilot round and made openly available no later than 120 days after the completion of each round.
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Data for monitoring stakeholder engagement
</td>
<td>
1st version: 31/12/2018
2nd version: 31/10/2020
</td>
<td>
The data will be updated regularly during the course of INVITE following the monitoring of the project’s stakeholder engagement. An up-to-date version of the respective dataset will be uploaded after the end of each of the 2 reporting periods of INVITE.
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
Social media statistics
</td>
<td>
1st version: 31/12/2018
2nd version: 31/10/2020
</td>
<td>
This data will be collected throughout the duration of INVITE as the dissemination and communication activities of the project run their course. An up-to-date version of the respective dataset will be uploaded after the end of each of the 2 reporting periods of INVITE.
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
Data for dissemination and communication reporting
</td>
<td>
1st version: 31/12/2018
2nd version: 31/10/2020
</td>
<td>
As with the social media statistics above, an up-to-date version of the respective dataset will be uploaded after the end of each of the 2 reporting periods of INVITE.
</td> </tr> </table>
### _Data quality assurance processes_
**Quality Assurance** (QA) and **Quality Control** (QC) activities are an
integral part of INVITE’s data management methodology and are implemented
prior to the publication of any data to Zenodo, safeguarding the transparency,
consistency, comparability, completeness and accuracy of the data.
**QA** is a planned system of review procedures conducted outside the framework of developing a dataset, by personnel not directly involved in the dataset development process 28. In the context of INVITE, it takes the form of **peer-reviews of methods and/or data summaries** to assess the quality of the dataset and identify any need for improvement, ensuring that the dataset correctly incorporates the scientific knowledge and data generated.
**QC** is defined as a system of checks to assess and maintain the quality of
the dataset being compiled 29 . The relevant procedures of INVITE are
designed to provide routine technical checks as they measure and control data
consistency, integrity, correctness and completeness as well as identify and
address errors and omissions. In this context, QC checks cover everything from data acquisition and handling to the application of approved procedures and methods, and documentation. Some of the general quality checks undertaken in the framework of the project include checking (i) for transcription errors in data input; (ii) that scale measures are within the range of acceptable values; and (iii) whether proper naming conventions are used.
28. 2006 IPCC Guidelines for National Greenhouse Gas Inventories, Vol. 1 General Guidance and Reporting, CHAPTER 6 Quality Assurance / Quality Control and Verification.
29. 2006 IPCC Guidelines for National Greenhouse Gas Inventories, Vol. 1 General Guidance and Reporting, CHAPTER 6 Quality Assurance / Quality Control and Verification.
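To make the routine QC checks listed above concrete, the following Python sketch (illustrative only, assuming a hypothetical CSV export of survey responses, an assumed file naming convention and an assumed 1–5 response scale) shows how range and naming checks could be automated:

```python
# A minimal QC sketch under the assumptions stated above: flag files that
# violate the (assumed) naming convention, empty cells, and scale answers
# outside the acceptable 1-5 range.
import csv
import re

NAMING_PATTERN = re.compile(r"^invite_[a-z0-9_]+_v\d+\.csv$")  # assumed convention

def check_dataset(path: str, scale_min: int = 1, scale_max: int = 5) -> list[str]:
    issues = []
    if not NAMING_PATTERN.match(path.rsplit("/", 1)[-1]):
        issues.append(f"file name '{path}' violates the naming convention")
    with open(path, newline="", encoding="utf-8") as fh:
        for i, row in enumerate(csv.DictReader(fh), start=2):  # header is line 1
            for column, value in row.items():
                if value == "":
                    issues.append(f"line {i}: missing value in '{column}'")
                elif column.startswith("q") and not (
                    value.isdigit() and scale_min <= int(value) <= scale_max
                ):
                    issues.append(
                        f"line {i}: '{column}'={value!r} outside {scale_min}-{scale_max}"
                    )
    return issues
```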
# Allocation of resources
## Estimated costs for making data FAIR
The costs required for making the data collected/generated during the course
of INVITE’s activities FAIR are integrated within the budget of the project.
With that in mind, the table which follows provides an overview of the
estimated costs of making data FAIR as well as their budget source within the
framework of INVITE.
### _Table 9: Estimated costs for making data FAIR_
<table>
<tr>
<th>
**No**
</th>
<th>
**Data Processing /**
**Management**
**Activity**
</th>
<th>
**Budget source**
</th>
<th>
**Total estimated effort in Person**
**Months** 29
</th>
<th>
**Total estimated cost in Euro** 30
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Collection
</td>
<td>
Budget allocated to the WP under which the respective data are processed
</td>
<td>
25.60
</td>
<td>
153,767.08
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Documentation
</td>
<td>
Budget allocated to the WP under which the respective data are processed
</td>
<td>
1.45
</td>
<td>
8,694.63
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Storage
</td>
<td>
Budget allocated to the WP under which the respective data are processed
</td>
<td>
0.88
</td>
<td>
5,261.79
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Access and security
</td>
<td>
Budget allocated to the WP under which the respective data are processed
</td>
<td>
0.88
</td>
<td>
5,261.79
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Preservation
</td>
<td>
Budget allocated to the WP under which the respective data are processed
</td>
<td>
1.63
</td>
<td>
9,816.50
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Availability and re-use
</td>
<td>
Budget allocated to the WP under which the respective data are processed
</td>
<td>
2.57
</td>
<td>
15,413.03
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
Overall data management
</td>
<td>
WP7
</td>
<td>
4.00
</td>
<td>
24,024.72
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
_**Total** _
</td>
<td>
_**222,239.54** _
</td> </tr> </table>
In order to produce the estimations of the costs for making data FAIR in the context of INVITE, a series of **assumptions** were made, taking into account the respective **guidelines** provided by the Research Data Management Support, a multidisciplinary network of data experts within Utrecht University 31 , as well as those of the UK Data Service and its data management costing tool 32 . With that in mind, the estimated costs for making INVITE’s data FAIR cover **data-related activities and resources across the data lifecycle**, spanning from collection and documentation through storage and preservation to sharing and re-use.
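As a quick arithmetic plausibility check of Table 9 (not part of the costing methodology itself), the snippet below verifies that the cost column sums to the stated total and shows that the effort and cost figures imply a roughly uniform rate of about 6,000 EUR per person-month:

```python
# Figures copied from Table 9: effort (person-months) and cost (EUR) per activity.
efforts = [25.60, 1.45, 0.88, 0.88, 1.63, 2.57, 4.00]
costs = [153767.08, 8694.63, 5261.79, 5261.79, 9816.50, 15413.03, 24024.72]

assert abs(sum(costs) - 222239.54) < 0.01  # matches the stated total
for pm, eur in zip(efforts, costs):
    print(f"{pm:5.2f} PM -> {eur:10,.2f} EUR ({eur / pm:,.0f} EUR/PM)")
```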
In particular, costs for **data collection** cover activities necessary for
acquiring external datasets (if required), gathering/generating new data,
transcribing (if applicable), formatting and organising this data as well as
acquiring informed consent from data subjects. This data processing activity
reflects the majority of the costs required for making data FAIR as the
majority of INVITE’s data constitutes new data collected/generated over the
course of the project. At the same time, **data documentation** costs address
the effort required for describing data (e.g. marking data with variable and
value labels, code descriptions, etc.) as well as creating well-defined
metadata along with a meaningful description of the context and methodology of
how data was collected/generated and processed (where necessary).
Costs for **data storage** include both the resources required for ensuring
adequate storage space for the data as well as the effort necessary for
conducting data back-ups, while **data access and security** costs encompass
costs related to ensuring access to the data as well as for protecting it from
unauthorised access, use or disclosure. Given that the storage of INVITE’s data will not require the procurement of additional space (other than what is already available to project partners) and that no special measures or software are required to access and secure the data (other than what is inherently built into the repositories of INVITE’s data), such costs are kept to a minimum.
**Data preservation** costs, on the other hand, are estimated to be relatively higher than data storage, access and security costs, as additional effort will be required in several cases in order to convert the collected/generated data from their original form (e.g. physical interview transcripts) to an open and/or machine-readable format suitable for long-term preservation (e.g. an .xlsx format). Adequate effort for **data availability and re-use** costs is also foreseen to safeguard the appropriate digitisation and anonymisation of the data as well as cover any resources required for data sharing and cleaning. Along the same lines, appropriate effort is foreseen for **overall data management** as well, in order to cover the effort related to the operationalisation of data management in the framework of INVITE.
Finally, costs for **long-term preservation** in the framework of INVITE are
assumed to be negligible, since the open data of the project will be hosted in
the repository of Zenodo free of charge.
## Data management responsibilities
For the effective, proper and secure handling of the data collected/generated
in the framework of INVITE, specific data management roles have been
established within the data management methodology and procedures of the
project. These responsibilities are outlined in this section of the DMP and
are as follows.
**Project Coordinator (PC)** : The PC, Q-PLAN, is responsible for overall data
management in the framework of INVITE, including the elaboration of the DMP
and its updates (when necessary and with support of all partners). At the same
time, the PC is responsible for the elaboration of proper templates for the
informed consent form and information sheet to be appropriately adjusted and
utilised by project partners during the relevant activities of the project.
Finally, the PC coordinates with Work Package and Task Leaders to determine
whether and how the data collected/generated by INVITE are shared and become
available for re-use, contributes to its quality assurance and uploads the
project’s openly available data to Zenodo.
**Work Package Leaders (WPL)** : The WPL is responsible for coordinating the
implementation of the data processing activities performed under the WPs they
are leading. Moreover, they align with the PC and the respective Work Task
Leader on whether and how the data gathered/produced under the tasks that fall
within the WP they are leading will be shared and/or re-used. This includes
the definition of access procedures as well as potential embargo periods along
with any necessary software and/or other tools which may be required for data
sharing and re-use. Finally, the WPL bear the main responsibility for assuring the quality of the data stemming from the activities of the WP they are leading, including assessing its quality and indicating any need for improvement to the respective Work Task Leaders.
**Work Task Leaders (WTL)** : The WTL act as **data controllers** 33 of the
data collected/generated in the frame of the tasks that fall under their
leadership, determining the purposes and means of processing this data as well
as safeguarding its appropriate and timely processing. Moreover, they are
responsible for properly adjusting the templates for the informed consent form
and information sheet to the needs and specificities of the activities carried
out in the task they are leading. Finally, they undertake any necessary
actions to prepare the data collected/ generated through the tasks they are
leading for sharing either within the consortium or openly (including the use
of proper naming conventions, application of suitable anonymisation
techniques, creation of appropriate metadata and documentation, etc.).
**Data processors** : Data processors are project partners that are tasked to
collect, digitise, anonymise, store, destroy and/or otherwise process data for
the specific purpose of the activity in which it has been collected/generated
within the framework of the project. They are responsible for appropriately
collecting the necessary consent for processing data as well as for ensuring
that the informed consent form and information sheet used to this end is
properly adjusted to the needs of the activity they are participating in and any
particularities applicable to their organisation. Moreover, they are also
responsible for managing the consents they have retrieved with a view to
demonstrating their compliance with the relevant applicable EU and national
regulation. Finally, they perform quality checks to assess and maintain the
quality of the dataset(s) held within their records.
**Data repositories** : Data repositories are tasked with the storage and
long-term preservation of the project’s data. In this respect, Zenodo
maintains and preserves the openly available data of INVITE, enabling its
sharing and re-use. To this end, Zenodo assigns metadata and DOIs to the data,
while also taking all the necessary measures to securely back-up the data and
be in a position to restore it, safeguarding its long-term preservation.
Accordingly, the Web Portal of INVITE shall securely store and preserve the
project’s data available for sharing amongst authorised consortium members in
the framework of the project.
In this context, the following table illustrates the allocation of data
management responsibilities amongst the members of the INVITE consortium per
data collected/generated under each WP.
_**Table 10: Data management responsibilities of INVITE partners per data
collected/generated under each WP** _
<table>
<tr>
<th>
**WP**
</th>
<th>
**WPL**
</th>
<th>
**Data**
</th>
<th>
**Tasks**
</th>
<th>
**WTL** _**Data Controllers** _
</th>
<th>
**Data Processors**
</th> </tr>
<tr>
<td>
WP1
</td>
<td>
CERTH/ITI
</td>
<td>
Market gaps and opportunities
</td>
<td>
Task 1.1
</td>
<td>
RTC NORTH
</td>
<td>
RTC NORTH
</td> </tr>
<tr>
<td>
User needs and requirements
</td>
<td>
Task 1.2
</td>
<td>
CERTH/ITI
</td>
<td>
All partners
</td> </tr>
<tr>
<td>
Ideas and feedback collected during the INVITE Co-creation
Workshop
</td>
<td>
Task 1.3
</td>
<td>
RTC NORTH
</td>
<td>
All partners
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
RTC NORTH
</td>
<td>
Pilot monitoring, co-evaluation and validation data collected through direct
input methods
</td>
<td>
Task 3.2 &
Task 3.3
</td>
<td>
RTC NORTH &
CERTH/ITI
</td>
<td>
All partners
</td> </tr>
<tr>
<td>
Open multi-side market place data
</td>
<td>
Task 3.2 &
Task 3.3
</td>
<td>
RTC NORTH &
CERTH/ITI
</td>
<td>
INTRASOFT
</td> </tr>
<tr>
<td>
Online collaboration space data
</td>
<td>
Task 3.2 &
Task 3.3
</td>
<td>
RTC NORTH &
CERTH/ITI
</td>
<td>
INTRASOFT
</td> </tr>
<tr>
<td>
E-learning environment data
</td>
<td>
Task 3.2 &
Task 3.3
</td>
<td>
RTC NORTH &
CERTH/ITI
</td>
<td>
INTRASOFT
</td> </tr>
<tr>
<td>
Crowdfunding tool data
</td>
<td>
Task 3.2 &
Task 3.3
</td>
<td>
RTC NORTH &
CERTH/ITI
</td>
<td>
INTRASOFT
</td> </tr>
<tr>
<td>
Web portal analytics
</td>
<td>
Task 3.2 &
Task 3.3
</td>
<td>
RTC NORTH &
CERTH/ITI
</td>
<td>
INTRASOFT
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
Q-PLAN
</td>
<td>
Feedback on the OI2 Lab business models
</td>
<td>
Task 4.2
</td>
<td>
Q-PLAN
</td>
<td>
All partners
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
SEZ
</td>
<td>
Data for monitoring stakeholder engagement
</td>
<td>
Task 5.1
</td>
<td>
SEZ
</td>
<td>
All partners
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
E-UNLIMITED
</td>
<td>
Social media statistics
</td>
<td>
Task 6.1
</td>
<td>
E-UNLIMITED
</td>
<td>
E-UNLIMITED
</td> </tr>
<tr>
<td>
Project events data
</td>
<td>
Task 6.1
</td>
<td>
E-UNLIMITED
</td>
<td>
E-UNLIMITED,
NINESIGMA,
WRS and NELEP
</td> </tr>
<tr>
<td>
Newsletter subscriptions
</td>
<td>
Task 6.1
</td>
<td>
E-UNLIMITED
</td>
<td>
E-UNLIMITED
</td> </tr>
<tr>
<td>
Data for dissemination and communication reporting
</td>
<td>
Task 6.1
</td>
<td>
E-UNLIMITED
</td>
<td>
E-UNLIMITED
</td> </tr> </table>
# Data security
INVITE will **securely handle any collected/generated data** throughout its
entire lifecycle as it is essential to safeguard this data against accidental
loss and/or unauthorised manipulation. Particularly, in case of personal data
collection/generation it is crucial that this **data is accessible only by those authorised to do so**. With that in mind, the project’s back-up and data recovery strategy aims at ensuring that no data loss will occur during the course of and after the completion of INVITE, whether from human error or hardware failure, as well as at inhibiting any unauthorised access.
In particular, all project partners responsible for processing 34 data within their private servers will ensure that this **data is protected** and any **necessary data security controls have been implemented**, so as to minimise the risk of information leakage and destruction. This case refers to the
data that will be closed and therefore will not be shared and/or re-used
within the framework of the project. In this case and to avoid data losses,
the data will be **backed up on a daily basis** and the **backed-up files will
be stored in external hard disk drives** so as to safeguard their
preservation, while also enabling their recovery at any time. Moreover,
**integrity checks** 35 will be carried out once a month (or more often, if
deemed necessary) ensuring that the stored data has not been changed or
corrupted. The tool that will support partners in undertaking integrity checks
is the **_MD5summer_ ** which generates and verifies MD5 checksums 36 .
Access to closed data will only be permitted to authorised project partners.
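For illustration, the following Python sketch reproduces the kind of check performed by MD5summer: generating an MD5 checksum per backed-up file and later verifying the files against a stored manifest. It is a minimal sketch, not the tool itself.

```python
# A minimal integrity-check sketch: compute MD5 digests for files and report
# any file whose current digest differs from the one recorded in a manifest.
import hashlib
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest: dict[str, str]) -> list[str]:
    """Return the files whose current checksum no longer matches the manifest."""
    return [name for name, expected in manifest.items()
            if md5sum(Path(name)) != expected]
```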
In case there is a personal data breach, project partners will notify, without
undue delay and, where feasible, not later than 72 hours after having become
aware of it, their competent national supervisory authorities (e.g. data
protection authorities) as well as the data subject(s) that may be affected by
the breach. Moreover, they will document any personal data breaches, including
information such as the facts relevant to the breach, its effects and the
remedial action(s) taken.
With that in mind, **identification and authentication access controls play an
important role** in the context of the project, as they help partners to
protect the data collected/generated during the course of INVITE and
especially personal data. To this end, each project partner is responsible for and committed to ensuring the application of appropriate access controls to the data they are processing within their organisation’s private servers. At the same time, **technical access controls are built into the
Web Portal and OI2 Lab of INVITE** , setting out clear roles with access
rights to the data stored there, so that only authorised personnel have
access. Each project partner has been provided with unique accounts with one
or more roles assigned to them enforcing role-based security when its staff
processes the project’s data. These accounts are username/password protected
maximising access control. Moreover, in order to safeguard the privacy of the
users of the project’s Web Portal and OI2 Lab, dedicated **privacy policies**
have been created that clearly state the way in which these online spaces
collect, process and use personal data, the security procedures followed, the
users’ rights as well as the cookies policy employed (see Annex I and II of
this document).
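The following minimal Python sketch (illustrative only, not the Web Portal's actual implementation) captures the role-based model described above: each account carries one or more roles, and an action is permitted only if one of those roles grants it.

```python
# A minimal role-based access control sketch; role and action names are
# illustrative assumptions, not the portal's actual configuration.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def is_allowed(user_roles: set[str], action: str) -> bool:
    """An action is permitted if any of the user's roles grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

assert is_allowed({"viewer"}, "read")
assert not is_allowed({"viewer"}, "delete")
```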
On another note, INVITE’s **openly available** data will be stored safely for long-term preservation on **Zenodo**, in the same cloud infrastructure as research data from CERN’s Large Hadron Collider, using CERN’s **battle-tested repository software** INVENIO, which is used by some of the world’s largest repositories (such as INSPIRE HEP and the CERN Document Server). Along these lines, data is stored and backed up in CERN’s EOS service in an 18-petabyte disk cluster. Both data files and metadata are kept in **multiple online and independent replicas**, ensuring their long-term preservation as well as their recovery when necessary. Moreover, for each file two independent MD5 checksums are stored: one checksum is stored by INVENIO and used to detect changes to files made from outside of it, whereas the other is stored by EOS and used for automatic detection and recovery of file corruption on disks. In this context, **access control is applied through the different levels of openness that Zenodo allows** (i.e. open, embargoed, restricted and closed).
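As an illustration of how these stored checksums can be used by data consumers, the sketch below (assuming Zenodo's public records API and a hypothetical record identifier; field names may differ between API versions) fetches the checksums Zenodo publishes for a record, so they can be compared against locally computed digests such as those from the md5sum() helper sketched earlier:

```python
# A minimal sketch, under the assumptions above, of listing the MD5 checksums
# Zenodo reports for each file of a published record.
import requests

record = requests.get("https://zenodo.org/api/records/123456").json()  # hypothetical ID
for f in record.get("files", []):
    # Zenodo reports checksums in the form "md5:<hex digest>"
    print(f.get("key"), f.get("checksum"))
```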
# Ethical aspects and other procedures
INVITE entails activities which involve the **processing of data that does not
fall into any special category of personal data** 37 (i.e. non-sensitive
data). The collection/generation of this data from individuals participating
in the project’s activities is based upon a **process of informed consent** .
In fact, any personal data collected/generated in the framework of INVITE is processed according to the principles laid out by **Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016** on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, which entered into force in May 2018 and aims to protect individuals’ rights and freedoms in relation to the processing of their personal data, while also facilitating the free flow of such data within the European Union. Along these lines, **data is collected/generated only for specified, explicit and legitimate purposes** relative to the project’s objectives. Moreover, all project partners tasked with processing data during the course of INVITE fully abide by their respective applicable national as well as EU regulations.
Under this light, further details about the **scope of the activities that
entail data collection/generation** in the frame of INVITE along with the
procedures for identifying/recruiting suitable stakeholders to take part in
them as well as for obtaining their informed consent are provided in “ **D8.1:
H - Requirement No. 1** ”. Moreover, **evidence that all data handling
procedures** carried out by project partners are **in line with relevant EU
and national regulations** are provided in “ **D8.2: H - Requirement No. 2**
”. The templates for the Information Sheet and the Informed Consent Form, used
in the implementation of the project’s activities, are compliant with the
General Data Protection Regulation and annexed to this DMP (see Annex II).
In this respect, it is important to highlight that **each project partner is
responsible for ensuring that the templates for the Information Sheet as well
as the Informed Consent Form are appropriately adjusted** according to (i) the
needs of the activity for which they are being used by them as well as to (ii)
the relevant regulations applicable to their respective countries and/or
organisation. Moreover, **all partners should keep records to demonstrate that
an individual has consented to processing of his / her personal data** and use
consent management mechanisms that make it easy for individuals to withdraw
their consent.
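As an illustration of such record keeping, the following sketch (with assumed field names; it is not a prescribed format) shows the minimal information a consent record could hold to demonstrate consent and support its withdrawal:

```python
# A minimal consent-record sketch with assumed, illustrative field names.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str              # pseudonymous identifier, not the person's name
    activity: str                # e.g. "Task 1.2 interview" (illustrative)
    consent_form_version: str    # which information sheet / consent form was used
    given_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        """Record the moment the individual withdrew their consent."""
        self.withdrawn_at = datetime.now(timezone.utc)
```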
Finally, no other national/funder/sectoral/departmental procedures for data
management are currently used in the framework of INVITE.
# Conclusions and way forward
The interim version of the DMP builds upon its initial version to further
elaborate on the methodology employed in the framework of INVITE. With that in
mind, it safeguards the sound management of the data collected/generated
during the course of the project’s activities across their entire lifecycle,
while also making them FAIR. Moreover, this version of the DMP provides an
estimation of the costs required for making data FAIR, outlines the provisions
pertaining to their security as well as addresses the ethical aspects
revolving around their collection/generation.
The DMP is considered to be a living document in the framework of INVITE and
is updated throughout the course of the project taking into account its latest
developments and available results. In fact, the interim version of the DMP
will be further elaborated and updated at least once more over the course of
INVITE, namely on M36 of the project. Ad hoc updates may also be made when deemed necessary, with a view to delivering an accurate, up-to-date and comprehensive DMP before the completion of the project.
0224_ExoplANETS A_776403.md
# INTRODUCTION
In the framework of the ExoplANETS-A project, archival data from ESA Space
Science archives (HST) combined with NASA Space Archives (Spitzer, Kepler)
will be exploited with novel data calibration and spectral extraction tools and novel retrieval tools, to produce a homogeneous and reliable catalogue of
exoplanet atmosphere properties. In parallel, a coherent and uniform database
of the relevant properties of host stars will be developed from ESA Space
Science archives (XMM, Gaia, Herschel), combined with data from international
space missions and ground-based telescopes. These exoplanet and host star
catalogues will be accompanied/interpreted with models to assess the
importance of star – planet interactions.
The project gathers the expertise of seven laboratories:
This document presents an initial version of the data management plan (DMP) of the project, deliverable number 3 of the management work package (WP1), due 6 months after the start of the project. It follows the template given in Reference Document 1 (see below). It is a living document which will be updated as the implementation of the project progresses.
# APPLICABLE DOCUMENTS (AD)
<table>
<tr>
<th>
AD-1
</th>
<th>
ExoplANETS-A Grant Agreement
</th>
<th>
N° 776403
</th> </tr>
<tr>
<td>
AD-2
</td>
<td>
ExoplANETS-A Consortium Agreement
</td>
<td>
Version 3, 2017-12-22; DRF 0647_X30423
</td> </tr> </table>
# REFERENCE DOCUMENTS (RD)
<table>
<tr>
<th>
RD-1
</th>
<th>
_http://ec.europa.eu/research/participants/data/ref/h2020/gm/reporting/h2020tpl-oa-data-mgt-plan_en.docx_
</th>
<th>
Version 1.0 ; 13 October 2016
</th> </tr> </table>
# DATA SUMMARY
**4.1 PURPOSE OF THE DATA COLLECTION/GENERATION AND ITS RELATION TO THE
OBJECTIVES OF THE PROJECT?**
The objectives of the project are:
* To establish new knowledge on the atmosphere of exoplanets by exploiting archived space data (HST, Spitzer, Kepler) using novel data reduction methods, as well as improved techniques to retrieve atmospheric parameters from data.
* To establish new insight on the influence of the star on the planet atmosphere by exploiting archived space data (GAIA, XMM, Chandra, Herschel, IUE, HST) on the host stars, as well as complementary ground-based data.
* To disseminate new knowledge.
Data are thus essential to the ExoplANETS-A project. Archival data are the starting point of the project, and new data sets with added scientific value are generated by the project and made available to the community via a knowledge server. The global concept is shown below.
Figure 1: starting from archival data on exoplanets and their host stars, the project will develop novel data reduction and retrieval techniques to obtain homogeneous catalogues of about one hundred exoplanets; the data will be available from a knowledge server.
**4.2 WHAT TYPES AND FORMATS OF DATA WILL THE PROJECT GENERATE/COLLECT?**
The science products of the project consist of spectra of exoplanet atmospheres (see Figure 2), exoplanet and host-star parameters (such as the molecular content of exoplanet atmospheres, see Figure 3), modelling and retrieval algorithms, tools to analyse the data, ab-initio models of sources…
Figure 2. Example of a transmission spectrum (left) and an emission spectrum (right) of an exoplanet (WASP-43 b), as observed with the WFC3 instrument on board HST (white circles); the dark blue lines show the best-fit models from the retrieval analysis. The feature observed around 1.4 microns is a water feature. The insert in the right image shows the photometric points from observations with the Spitzer Space Telescope. (L. Kreidberg et al. ApJL **793**, Issue 2, L27-32; arXiv:1410.2255)
Figure 3: A) Examples of Tau-REx results for three simulated super-Earths HD
219134 b, GJ 1132 b and Kepler 78 b
(planetary parameters from exoplanet.eu). Top: atmospheric spectra for varying
compositions at Hubble/WFC3 wavelengths. Expected error bars for observations
are also shown. Bottom: Tau-REx retrieved constraints of H2O abundance for the
spectra shown above. B) Posterior distributions of complex likelihood
functions encountered in spectral retrievals. Parameter spaces are often
highly dimensional (>20D) with non-linear inter-parameter
correlations. We will fully map these correlations and, using manifold
learning, identify model degeneracies. C)
Simulated observational data analysed by the Tau-REx framework. Multiple
atmospheric components are shown visually.
Our aim is to use, as much as possible, the Virtual Observatory standard or one of the standard formats of the astronomical community, i.e. the FITS format.
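For illustration, the following Python sketch shows how a FITS product could be read with astropy, a standard library in the astronomical community; the file name and column names are assumptions for illustration, not the project's actual products.

```python
# A minimal sketch, under the assumptions above, of reading a spectrum from a
# FITS file with astropy.
from astropy.io import fits

with fits.open("wasp43b_transmission_spectrum.fits") as hdul:  # hypothetical file
    hdul.info()                      # print a summary of the HDUs in the file
    data = hdul[1].data              # assume a binary table extension
    wavelength = data["WAVELENGTH"]  # assumed column names
    depth = data["TRANSIT_DEPTH"]
```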
**4.3 WHAT IS THE ORIGIN OF THE DATA?**
The data will be of various origins:
* We will use archival data from observations of exoplanet atmospheres with space observatories, as well as ground-based observatories.
* Thanks to the development of novel data reduction techniques, we will produce new calibrated data sets from the archival data.
* From this new set of data, and thanks to the development of new retrieval techniques, we will derive parameters of exoplanet atmospheres, such as their molecular content.
* Data will also be generated from modelling the atmospheres of exoplanets.
* We will use archival data from observations of exoplanet host stars with space observatories, as well as ground-based observatories. When needed, we will submit observing proposals to complete the information on some of the host stars of our target list.
* From those data, we will derive, either directly or through models, the star parameters: effective temperature, luminosity, gravity (also as an age estimate), metallicity, rotational period, variability, proper motion, multiplicity, magnetic field, topology of the field, wind …
* From these parameters, and thanks to star – planet interaction models, we will determine the importance of such interactions.
**4.4 WHAT IS THE EXPECTED SIZE OF THE DATA?**

To be determined precisely, but not big: in the gigabyte range. One of the end products will be a catalogue with the properties of the atmospheres of about 100 targets.
<table>
<tr>
<th>
Work Package number
</th>
<th>
Deliverable type
</th>
<th>
Data format
</th>
<th>
Data size
</th> </tr>
<tr>
<td>
WP1
</td>
<td>
Documents
</td>
<td>
PDF, Excel
</td>
<td>
At maximum in the 100-megabyte range
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
Calibrated spectra of about 100 exoplanets
Data reduction Codes
Documents, Scientific papers
</td>
<td>
FITS, PDF, Excel
</td>
<td>
At maximum in the 10-gigabyte range
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
Retrieved parameters for the atmosphere of about 100
exoplanets Retrieval codes
Documents, Scientific papers
</td>
<td>
FITS, PDF, Excel
</td>
<td>
At maximum in the 10-gigabyte range
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
Catalogues of host stars
Codes
Documents, Scientific papers
</td>
<td>
TBD, FITS, PDF, Excel
</td>
<td>
At maximum in the 10-gigabyte range
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
Models
Documents, Scientific papers
</td>
<td>
TBD, FITS, PDF, Excel
</td>
<td>
At maximum in the 10-gigabyte range
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
Science products
Web site
Videos
MOOC – SpoC
Documents
</td>
<td>
TBD, FITS, PDF, Excel, Virtual Observatory standards, MP4, TXT,
PNG, JPEG, EPS
</td>
<td>
In the few-hundred-gigabyte range
</td> </tr> </table>
**4.5 TO WHOM MIGHT IT BE USEFUL ('DATA UTILITY')?**

The data will be useful in the first place to the scientific community working on exoplanets. It will also be of interest to students, as well as the general public.
# MAKING DATA FINDABLE, ACCESSIBLE, INTEROPERABLE AND RE-USABLE (FAIR)
The dissemination of knowledge is a key aspect of the project and a
WorkPackage, WP6, is dedicated to this aspect (see Figure 4).
Figure 4: The data generated by the various WPs will be integrated into a knowledge server.
The Knowledge Management WP aims at
* Capturing knowledge produced by the other WPs (see Figure 4) within a **knowledge base** including all scientific products (data, models, tools and interpretation).
* Providing open access to them through a **knowledge displayer** with two interfaces, one for the scientific community with a direct access to the science products, the second for the general public with educational resources based on the science products.
* Getting feedback from the users
Figure 5 describes the various aspects of knowledge management.
Figure 5: Overview of the knowledge management of the science products
1. **MAKING DATA FINDABLE AND OPENLY ACCESSIBLE**
The definition, design and production of a Knowledge Base is scheduled from Month 6 to Month 36. The data produced in the framework of the project will be made findable by means of a standard identification mechanism. The data will be encapsulated with metadata. Metadata categories will be defined to follow the different types of science products (e.g. Observation, Code…) or educational resources (e.g. Video, Image…). For science products, we plan to follow the naming convention used for the Virtual Observatory.

The architecture of the knowledge base (KB) that will capture, record and format the science products for dissemination will be defined first. Once defined collectively with WorkPackages (WP) 2, 3, 4 and 5, the KB will be produced by deploying a server. The data will be stored in a relational database. Indexes, database (DB) engines and a cache mechanism will be set up to ensure the scalability and efficiency of the overall data access. Regarding binary files, an indexed and versioned file system will be deployed to keep track of modifications without losing content. The KB will be hosted on a server (NodeJS or PHP, according to the actual requirements identified during the specification phase) providing REST (REpresentational State Transfer) access to the data, i.e. a set of standardised Uniform Resource Identifiers (URIs) allowing the information to be obtained in JSON format. Specific routes will be defined to query the KB according to several criteria: hierarchy, semantics, history, syntax... This server will also provide a standardised export mechanism to export data to a user-friendly product (e.g. PDF files or JPG images). We expect about one hundred exoplanet atmospheres at the beginning, for the beta version, but the project will be dimensioned to handle thousands of entries.

The definition and production of a Knowledge Displayer (KD) will start on Month 12. Once recorded in the knowledge base, the scientific community and general public will be granted open access to our science products through two dedicated interfaces. The Knowledge Displayer will be in charge of displaying the KB to the end user by accessing the Knowledge Base server's REST APIs (Application Programming Interfaces). All the data will be made openly available at the latest at the end of the project.
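Although the KB server itself will be implemented in NodeJS or PHP, the REST idea can be illustrated with a short Python/Flask sketch: a standardised URI that returns a science product's metadata as JSON. The route and fields below are illustrative assumptions, not the project's actual API.

```python
# A minimal REST sketch under the assumptions above.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for the relational database behind the KB (illustrative content).
PRODUCTS = {
    "wasp43b-wfc3": {"type": "Observation", "instrument": "HST/WFC3", "format": "FITS"},
}

@app.route("/api/products/<product_id>")
def get_product(product_id: str):
    """Return one science product's metadata as JSON, or 404 if unknown."""
    product = PRODUCTS.get(product_id)
    if product is None:
        abort(404)
    return jsonify(product)

if __name__ == "__main__":
    app.run()  # serves e.g. GET /api/products/wasp43b-wfc3
```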
2. **MAKING DATA INTEROPERABLE**
The data produced in the project will be made interoperable through the Virtual Observatory standard for science products and the Learning Tools Interoperability (LTI) standard for educational resources.
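As an illustration of the Virtual Observatory side, the following sketch writes a small, made-up catalogue to the VO's standard VOTable format using astropy; the column names and values are placeholders, not project results.

```python
# A minimal VOTable export sketch with illustrative columns.
from astropy.table import Table

catalogue = Table(
    {"planet": ["WASP-43 b"], "t_eq_K": [1440.0], "h2o_log_abundance": [-3.0]}
)
catalogue.write("exoplanet_catalogue.xml", format="votable", overwrite=True)
```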
3. **INCREASE DATA RE-USE**
The data will be made available for re-use through the knowledge server following the deliverables plan of the project. There is no restriction on the re-use of the data generated by the project.

The idea is to keep feeding the database after the end of the ExoplANETS-A project, for example in the framework of ARIEL, the ESA space mission recently selected for the M4 slot of the ESA Cosmic Vision 2015 – 2025 programme, whose adoption is scheduled for the end of 2020. In any case, we can guarantee that the database will be designed to remain operational for at least 5 years after the project end (for example by depositing the data in the CERN-hosted Zenodo repository).
# ALLOCATION OF RESOURCES
Data management is one of the work packages of the project; its cost is included in the project cost and is covered partly by the EC and partly by the institutes participating in the project. 46 person-months have been attributed to the work package, with a lead from CEA and a co-lead from INTA.
# DATA SECURITY
The data security (including data recovery as well as secure storage) will be taken into account in the design of the database and of the knowledge server.
0225_TAKEDOWN_700688.md
# Executive Summary
Data management is a crucial issue for research and innovation projects and
many mistakes were made in the past, when no one was actually thinking about
what to do with the data and how to preserve them or make them available for
other researchers too. This first Data Management Plan (DMP) shows that there are eight main data sets that will be produced as part of the project activities and that are relevant for inclusion in the DMP. These data sets
cover the collected stakeholder contacts, public security services and digital
security solutions. Furthermore, the empirical research will generate data
from the quantitative survey, expert interviews, focus groups and workshops.
Additionally, also the validation of the TAKEDOWN solutions will generate
data.
Due to privacy and security concerns related to the sample size, the
qualitative research data will not be made openly accessible as primary data
but in a processed form. Due to the scope of the research and the intended
sample size, it is planned to make the data from the quantitative survey
openly accessible on the data repository Zenodo. Furthermore, reports working
with the qualitative data will also be accessible. The consortium will also
aim at open access when publishing papers and articles. However, these steps
will be done in accordance with the ethical guidelines elaborated in this
report as well as in D2.2.
The DMP is a living document and hence several issues will be updated and
further questions will be answered in the second version, which will be
finalized in month 18.
# 1\. Introduction
Research and innovation projects such as TAKEDOWN usually produce large sets
of data. Depending on the discipline, the data could come for example from
social science research, laboratory testing, field studies or observations.
However, it often remains unclear and uncertain what will happen with the data after they were analysed and the project was finished. Furthermore, a lot of data sets are potentially interesting for other researchers too, but because they are either stored on a local server or lack crucial metadata (or both), their potential value cannot be exploited. Hence, researchers need to think about the data that they will produce at the beginning of the research – and this is exactly the purpose of the Data Management Plan (DMP).
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used in the TAKEDOWN
project and by the consortium with regard to the project research data. The
DMP covers the complete research data life cycle. It describes the types of
research data that will be generated or collected during the project, the
standards that will be used, how the research data will be preserved and what
parts of the datasets will be shared for verification or reuse.
The DMP is a living document, which will evolve during the lifespan of the
project, particularly whenever significant changes arise such as dataset
updates or changes in Consortium policies. This document is the first version
of the DMP, delivered in Month 6 of the project. It includes an overview of
the datasets to be produced by the project, and the specific conditions that
are attached to them. Although this report already covers a broad range of
aspects related to the TAKEDOWN data management, the upcoming versions will
get into more detail on particular issues such as data interoperability and
practical data management procedures implemented by the TAKEDOWN project
consortium.
The following section of the DMP provides an overview of the data sets, which
will be produced in the TAKEDOWN project. It describes the origins of the data, the formats, and the allocation to the particular WPs. Furthermore, it
highlights the purpose of the collection as well as information on the
utility. Section 3 clearly points out, which data will be made openly
accessible and which won’t – including detailed justifications for the
reasons. This is especially relevant for the primary data that will be
collected as part of the empirical research. Furthermore, the section also
provides details on the data repositories or other locations where the data
will be stored. Section 4 highlights main aspects related to the costs of the
accessibility, whereas Section 5 discusses the main ethical issues. The final
section provides an outlook on the open issues and questions to be addressed
in the next DMP report.
# 2\. Data summary
In order to provide an overview of the different data sets that are currently being produced or will be produced in the TAKEDOWN project, the following table shows the
data type, the origin of the data, the related WP number and the format, in
which the data will be presumably stored.
<table>
<tr>
<th>
**#**
</th>
<th>
**Data type**
</th>
<th>
**Origin**
</th>
<th>
**WP#**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Stakeholder contacts collection
</td>
<td>
Publicly available data
</td>
<td>
2
</td>
<td>
.xls
</td> </tr>
<tr>
<td>
2
</td>
<td>
Public security services collection
</td>
<td>
Publicly available data
</td>
<td>
2
</td>
<td>
.xls
</td> </tr>
<tr>
<td>
3
</td>
<td>
Digital security solutions collection
</td>
<td>
Publicly available data
</td>
<td>
2
</td>
<td>
.xls
</td> </tr>
<tr>
<td>
4
</td>
<td>
Quantitative survey data
</td>
<td>
Primary data
</td>
<td>
3
</td>
<td>
.xls +.csv
</td> </tr>
<tr>
<td>
5
</td>
<td>
Expert interview data
</td>
<td>
Primary data
</td>
<td>
3
</td>
<td>
.mp3 + .docx + .txt
</td> </tr>
<tr>
<td>
6
</td>
<td>
Focus groups data
</td>
<td>
Primary data
</td>
<td>
3
</td>
<td>
.docx + .txt
</td> </tr>
<tr>
<td>
7
</td>
<td>
Workshops data
</td>
<td>
Primary data
</td>
<td>
3
</td>
<td>
.docx + .txt
</td> </tr>
<tr>
<td>
8
</td>
<td>
Validation cycles data
</td>
<td>
Primary data
</td>
<td>
7
</td>
<td>
.xls + .csv
</td> </tr> </table>
## Table 1: Data sets overview
Table 2 describes the data sets and the purpose of the data collection or data generation in relation to the objectives of the project. Additionally, it shows the data utility, clarifying to whom the data might be useful.
<table>
<tr>
<th>
**#**
</th>
<th>
**Data type**
</th>
<th>
**Description & Purpose **
</th>
<th>
**Utility**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Stakeholder contacts collection
</td>
<td>
**Description** The data contain information on the main stakeholders of
TAKEDOWN along the major stakeholder groups. They include researchers,
practitioners, policy makers, law enforcement agencies, NGOs and other
initiatives as well as security solutions providers. The contact information
that is collected includes the name, institutional affiliation, position,
email address, phone number and office address.
**Purpose** The collection will be used for contacting the respondents of the
empirical research as well as the validation of the project outcomes. It also
provides the basis for the dissemination of the project and for promoting the
TAKEDOWN solutions.
</td>
<td>
The data could be on the one hand useful for research, as they comprise a
large part of the ecosystem. Furthermore, the data might also be interesting
for the private sector as target groups of their products.
</td> </tr>
<tr>
<td>
2
</td>
<td>
Public security services collection
</td>
<td>
**Description** The data set is a collection of public security services such as helplines, online reporting platforms or information sites. It covers most of the European countries and the entries provide information on the name, the purpose or focus (OC or TN or both), the institution that is providing the service and the link or phone number etc.
**Purpose** The collection is on the one hand crucial to get an overview of already existing public services. On the other hand, the collection will be made accessible on the TAKEDOWN open information hub.
</td>
<td>
These data are on the one hand useful for initiatives working in the field of counter violent extremism (CVE) and against organized crime, because they allow them to get an overview of the available resources. On the other hand, also public authorities can make use of the data in order to see which countries have implemented which services.
</td> </tr>
<tr>
<td>
3
</td>
<td>
Digital security
solutions collection
</td>
<td>
**Description** The data set structures and clusters digital security software
and hardware from European companies along several main categories. They
include the name of the solution, the field where it can be applied, the
geographical scope and the status (laboratory, market-ready etc.).
Additionally it includes a brief description of the solution, the operational
language, the target group, the vendor, the country of the company and the
link to the solution.
**Purpose** The collection is used for getting an overview of existing
software and hardware tools, which aim at fighting organized crime and
terrorist networks. Additionally, it will be accessible for LEAs and
professionals on the TAKEDOWN
Solutions Platform.
</td>
<td>
These data can be useful on the one hand for LEAs, which are looking for
specific software or hardware
against particular phenomena related to OC or TN. Furthermore, they can also
be useful for private security companies working in one of these fields. The
data might also be useful for solution developer in order to get a market
overview as well as for investors, who are looking for future products to
invest in.
</td> </tr>
<tr>
<td>
4
</td>
<td>
Quantitative survey data
</td>
<td>
**Description** This data set contains the data from the quantitative survey,
which is conducted in the TAKEDOWN project. The target group of the survey is
first-line practitioners (such as teachers, social workers, street workers, community police officers etc.), who are working with people at risk of becoming involved in OC or TN. In addition to getting information on how they
are dealing with these issues in their daily practice, the survey strongly
aims at getting an understanding what they actually need in order to make
their work easier and how toolkits need to be shaped in order to be a real
support for them. The quantitative survey will be implemented as an online
survey and aims at a minimum of 1.000 recipients.
**Purpose** The outcomes of the survey will be used to develop the
practitioner toolkits, the policy recommendations and the digital Open
Information Hub.
</td>
<td>
The large-scale survey, which will be implemented in TAKEDOWN, is the first
one of this kind and of this scope.
On the one hand, the outcomes will be crucial for understanding the needs and
requirements of the first-line practitioners and for developing the toolkits
etc. On the other hand, the data will be interesting for other researchers
working in this field – either for (full or partial) secondary analysis, for a
comparative analysis with other data or for a panel (longitudinal) survey.
</td> </tr>
<tr>
<td>
5
</td>
<td>
Expert interview data
</td>
<td>
**Description** The data consist of recordings, transcriptions and notes from about 40 qualitative expert interviews with researchers and policy makers. The interviews will be either conducted personally, on the phone (or Skype), or in written form.
**Purpose** The aim of the qualitative interviews is to get further insights on the obstacles related to current policies that are in place. They will therefore support the development of recommendations for future policies related to both OC and TN.
</td>
<td>
The information provided in the data is not only crucial for TAKEDOWN, but it can also be useful for research as well as for policy making. Also practitioners and LEAs might benefit from it.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Focus groups
data
</td>
<td>
**Description** The dataset consists of protocols, written notes and summaries
from the five focus groups that are held in different countries and attended
by practitioner organizations and LEA representatives.
**Purpose** The focus groups aim at getting in-depth insights on the
challenges and obstacles that these stakeholders are facing related to OC and
TN. The acquired knowledge will help the consortium to shape the toolkits, the
open information hub and the digital solutions platform.
</td>
<td>
The data are not only crucial for TAKEDOWN, but they are also useful for OC or
TN research and policy making. Also practitioners and LEAs might benefit from
it.
</td> </tr>
<tr>
<td>
7
</td>
<td>
Workshops data
</td>
<td>
**Description** The data contain protocols, written notes and summaries produced at the three workshops, which are organized in different countries.
The workshops aim at developers and providers of technical solutions.
**Purpose** The information gathered at the workshops will support the
development of the TAKEDOWN Solutions Platform by being able to take into
account the requirements of the security industry.
</td>
<td>
The information provided in the data is not only important for TAKEDOWN, but
it can also be useful for research as well as for policy making. Also
practitioners and LEAs might benefit from it.
</td> </tr>
<tr>
<td>
8
</td>
<td>
Validation cycles data
</td>
<td>
**Description** The data from the evaluation of the non-digital and the digital solutions show how major stakeholder groups experience their usability and relevance.
**Purpose** The validation provides the basis for improving and releasing the
final solutions.
</td>
<td>
The data from the validation of the non-digital and the digital solutions do
mainly have an internal use for improving the solutions and for the lessons-
learned.
</td> </tr> </table>
**Table 2: Data sets description and utility**
# 3\. FAIR data
## Making data openly accessible
The following table highlights (A) which data are produced and used in the project and (B) which will be made openly available. It also explains why several datasets cannot be shared for particular reasons. For these cases, an alternative solution is provided.
<table>
<tr>
<th>
**#**
</th>
<th>
**Data type**
</th>
<th>
**Data openly available (y/n)**
</th>
<th>
**Justification**
</th>
<th>
**Alternative solution**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Stakeholder contacts collection
</td>
<td>
No
</td>
<td>
Although the contacts in the collection are professionals’ contacts that are publicly available, the consortium cannot publish them due to potential misuse caused by automated spam programs.
</td>
<td>
The statistical information on the stakeholder data (such as how many, from which countries, which professions etc.) will be integrated in the public report D2.6. In case an external institution is looking for contacts in a specific field and the coordinator doesn’t see any privacy concerns, relevant contacts might be forwarded.
</td> </tr>
<tr>
<td>
2
</td>
<td>
Public security services collection
</td>
<td>
Yes
</td>
<td>
(not relevant)
</td>
<td>
(not relevant)
</td> </tr>
<tr>
<td>
3
</td>
<td>
Digital security solutions collection
</td>
<td>
Yes
</td>
<td>
(not relevant)
</td>
<td>
(not relevant)
</td> </tr>
<tr>
<td>
4
</td>
<td>
Quantitative survey data
</td>
<td>
Yes
</td>
<td>
(not relevant)
</td>
<td>
(not relevant)
</td> </tr>
<tr>
<td>
5
</td>
<td>
Expert interview data
</td>
<td>
No
</td>
<td>
The data from the expert interviews (recordings, protocols and transcriptions) will not be published as primary data due to privacy and security concerns. Anonymization is not considered as an alternative, because the sample size allows drawing conclusions on the respondents.
</td>
<td>
The categorization, analysis and interpretation of the primary data will be accessible in the public report D3.6 (and others) that can be accessed on the TAKEDOWN project website. Furthermore, the outcomes will also be disseminated in scientific publications.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Focus groups data
</td>
<td>
No
</td>
<td>
The data from the focus groups (recordings, protocols and transcriptions) will not be published as primary data due to privacy and security concerns. Anonymization is not considered as an alternative, because the sample size allows drawing conclusions on the respondents.
</td>
<td>
As for the expert interview data: the analysis and interpretation of the primary data will be accessible in the public report D3.6 (and others) and disseminated in scientific publications.
</td> </tr>
<tr>
<td>
7
</td>
<td>
Workshops data
</td>
<td>
No
</td>
<td>
The data from the workshops (recordings, protocols and transcriptions) will not be published as primary data due to privacy and security concerns. Anonymization is not considered as an alternative, because the sample size allows drawing conclusions on the respondents.
</td>
<td>
As for the expert interview data: the analysis and interpretation of the primary data will be accessible in the public report D3.6 (and others) and disseminated in scientific publications.
</td> </tr>
<tr>
<td>
8
</td>
<td>
Validation cycles data
</td>
<td>
No
</td>
<td>
The data from the evaluation survey will not be published due to privacy and security concerns. Anonymization is not considered as an alternative, because the sample size allows drawing conclusions on the respondents.
</td>
<td>
The deliverable D7.3 will report on the validation and the development of the final non-digital and digital solutions based on the validation.
</td> </tr> </table>
### Table 3: Data sets accessibility
As it was indicated above, the following data sets will be made openly
accessible: Data type **#2** (Public security services collection), **#3**
(Digital security solutions collection) and **#4** (Quantitative survey data).
The following table describes the accessibility details of these particular
datasets.
<table>
<tr>
<th>
**#**
</th>
<th>
**Data type**
</th>
<th>
**Location**
</th>
<th>
**Level of accessibility**
</th>
<th>
**Type of availability and required software tools**
</th>
<th>
**Information on metadata and additional data information**
</th> </tr>
<tr>
<td>
2
</td>
<td>
Public
Security
Services
</td>
<td>
TD Solutions Platform
</td>
<td>
Public
</td>
<td>
Filterable and searchable database; can be accessed with a state-of-the-art web browser
</td>
<td>
No metadata needed; additional information will be provided on the platform
</td> </tr>
<tr>
<td>
3
</td>
<td>
Digital security solutions collection
</td>
<td>
TD Solutions Platform
</td>
<td>
Validated professionals
</td>
<td>
Filterable and searchable database; can be accessed with a state-of-the-art web browser
</td>
<td>
No metadata needed; additional information will be provided on the platform
</td> </tr>
<tr>
<td>
4
</td>
<td>
Quantitative survey data
</td>
<td>
https://zenodo.org/
</td>
<td>
Registered ZENODO
users
</td>
<td>
Cleaned primary data; can be accessed with SPSS, Excel or open source data analysis software (such as PSPP etc.)
</td>
<td>
Metadata as well as a codebook will be deposited in the data repository Zenodo
</td> </tr> </table>
### Table 4: Details on accessible data sets
As indicated in the table, dataset #4 (Quantitative survey data) in particular can be accessed with commercial statistical programs such as SPSS or with open-source programs such as PSPP. An account at the Zenodo repository was created by the TAKEDOWN coordinator and a TAKEDOWN community, where the dataset as well as papers, reports and presentations will be published, was set up. The consortium will follow the conditions, rules and regulations of the Zenodo repository – including the settings for accessing the dataset.
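For illustration, once the cleaned survey data is downloaded from Zenodo as a .csv file, it can also be explored with free tooling; the following sketch (with a hypothetical file name) uses the open-source pandas library:

```python
# A minimal sketch of loading the cleaned survey data; the file name is a
# hypothetical placeholder, not the actual Zenodo deposit.
import pandas as pd

survey = pd.read_csv("takedown_quantitative_survey.csv")
print(survey.shape)       # number of respondents and variables
print(survey.describe())  # summary statistics per variable
```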
# 4\. Allocation of resources and data security
The consortium will use the free-of-charge Zenodo repository for making the
dataset #4 (primary data from the quantitative survey) accessible.
Additionally, also the reports D2.6 and D3.6, which includes the analysis of
the expert interview, the workshops and the focus groups, will be published
there. This will ensure that the data are safely stored in this certified
repository for long term preservation and curation.
The handling of the Zenodo repository on behalf of TAKEDOWN as well as all
data management issues related to the project fall in the responsibility of
the coordinator.
As for the publications in which the analyses of the empirical research data will be presented, the consortium will publish them in scientific journals that allow open access; the costs related to open access will be claimed as part of the Horizon 2020 grant.
# 5\. Ethical aspects
In order to ensure that all ethical aspects are considered and that the
TAKEDOWN project is compliant with all legal requirements and ethical issues,
a general strategy has been designed by the Ethics leader (IDT-UAB). This
strategy involves an ad hoc monitoring process of the project development by
applying the privacy-by-design approach through a methodological design based
on a “Socio-legal Approach”. This is a risk-based approach to privacy and data
protection issues in line with the new General Data Protection Regulation.
The complete strategy is included in Deliverable 2.2.
This general strategy for the monitoring of the ethical and privacy
implications of the TAKEDOWN project consists of the following four steps.
* **Knowledge acquisition:** This task will include the study of the needs of the empirical research of the project. It will also include the study of all the stakeholders involved in the project and all their potential interactions with the Open Information Hub and the Solutions Platform.
* **Privacy impact assessment (PIA):** A PIA (Wright & de Hert 2012) will be conducted to study all the scenarios in which, during the project lifecycle, personal data rights can be at stake. Special attention will be paid to activities involving data collection from external participants.
* **Risk mitigation strategy:** Initial, mid-term and final recommendations, prepared by the Ethics leader (IDT-UAB), regarding compliance with the relevant ethical and legal provisions.
* **Ongoing monitoring:** In order to ensure that all data collection, storage, protection, retention and destruction during the project are carried out in full compliance with EU legislation and relevant national provisions, an Ethics Board has been included in the management structure.
At this stage, a set of initial recommendations have been generated by the
Ethics leader (IDT-UAB) for the three main domains of the project: (i)
empirical research, (ii) Open Information Hub and, (iii) Solutions Platform.
## Initial Recommendations in relation to the empirical research task within
the TAKEDOWN project
The set of recommendations presented here suggests procedures, measures or
strategies for conducting proper and responsible empirical research according
to the ethical and legal requirements previously identified. These
recommendations are the result of the potential risks detected through: i)
the EUROPRISE criteria, and ii) the application of the Privacy Impact
Assessment methodology. The structure of these recommendations reflects the
different domains of the TAKEDOWN research that are relevant for the ethical
and legal requirements and potential risks detected through the previously
stated methodologies.
<table>
<tr>
<th>
</th>
<th>
**Data formats and software**
</th> </tr>
<tr>
<td>
**1.R1**
</td>
<td>
**Online survey:** Detailed information on open-source software tools for
programming the European-wide online survey will be provided (SocialSci,
SoGoSurvey, LimeSurvey or SurveyMonkey).
</td> </tr>
<tr>
<td>
**1.R2**
</td>
<td>
**Expert interviews:** Specific guidance to researchers will be provided about
techniques and procedures to conduct expert interviews. Data formats and
software to manage the information gathered through the interviews will be
specified, especially taking into account the potential variety of research
material that can be generated (interview transcriptions, audio recordings,
images, ethnographic diaries, written texts).
</td> </tr> </table>
<table>
<tr>
<th>
**1.R3**
</th>
<th>
**Focus groups:** Specific guidelines on procedures and content of the focus
groups will be provided. Clear protocols on how to collect and store
information during the development of the focus groups will be specified.
Formats regarding the data gathered and different software to manage this
information will be provided too.
</th> </tr>
<tr>
<td>
</td>
<td>
**Processing quantitative data files**
</td> </tr>
<tr>
<td>
**2A.R1**
</td>
<td>
Questions regarding the recording of the data matrix, variable names and
labels, values and labels, recoding of variables, missing data, and weighting
will be specified in a codebook.
</td> </tr>
<tr>
<td>
</td>
<td>
**Processing qualitative data files**
</td> </tr>
<tr>
<td>
**2B.R1**
</td>
<td>
Provide guidance and methods for transcribing interviews
</td> </tr>
<tr>
<td>
**2B.R2**
</td>
<td>
Provide guidance and specification on procedures to organize files
</td> </tr>
<tr>
<td>
**2B.R3**
</td>
<td>
Provide guidance and specification on naming data files
</td> </tr>
<tr>
<td>
</td>
<td>
**Physical data storage**
</td> </tr>
<tr>
<td>
**3.R1**
</td>
<td>
The Consortium will evaluate a secure storage system in addition to the
repository.
</td> </tr>
<tr>
<td>
**3.R2**
</td>
<td>
Primary and secondary research data will be stored in a secure and accessible
form.
</td> </tr>
<tr>
<td>
**3.R3**
</td>
<td>
It is necessary to define who may access data (raw data/analysed data), when,
and under which conditions, including the definition of access rights for
folders and files, particularly when they are stored on a server instead of a
single computer.
</td> </tr>
<tr>
<td>
**3.R4**
</td>
<td>
Define procedures for backup and recovery of data (frequency and reliability).
</td> </tr>
<tr>
<td>
**3.R5**
</td>
<td>
Specification of data security measures (physical security, software updates,
virus protection).
</td> </tr>
<tr>
<td>
**3.R6**
</td>
<td>
Data disposal (erasure of data).
</td> </tr>
<tr>
<td>
**3.R7**
</td>
<td>
Specification of procedures for keeping data accessible in terms of migration
(conversion of data files from older formats to newer ones) and refreshing
(transfer of data from one storage tool to another).
</td> </tr>
<tr>
<td>
</td>
<td>
**Anonymisation, confidentiality and personal data**
</td> </tr>
<tr>
<td>
**4.R1**
</td>
<td>
**Online survey:** Ensuring anonymity and confidentiality.
</td> </tr>
<tr>
<td>
**4.R2**
</td>
<td>
**Interviews:** Anonymity and use of coded data: replace personal names with
pseudonyms or categories (example: replace “Maria” with “female subject” or
“woman”) and change or remove sensitive information (example: replace “I
studied in Oxford” with “I studied at a university”). A minimal illustration
of such pseudonymisation is given after Table 5.
</td> </tr>
<tr>
<td>
**4.R3**
</td>
<td>
**Interviews:** Each researcher is responsible for guaranteeing the
confidentiality of the information gathered from the data subject.
</td> </tr>
<tr>
<td>
**4.R4**
</td>
<td>
**Focus groups and Workshops:** Specific information must be provided to the
participants in relation to the type of information to be collected: video,
audio, transcripts.
</td> </tr>
<tr>
<td>
**4.R5**
</td>
<td>
**Storage of data collected:** Data collected will be stored locally by each
partner and protected with a password or equivalent measures. Each partner
will ensure that only authorised researchers, and only for the purposes of the
TAKEDOWN research, have access to the data. As these data are stored locally,
each partner can be held liable for the misuse of such data.
</td> </tr>
<tr>
<td>
**4.R6**
</td>
<td>
**Stakeholder’s database:** Information collected will be restricted to
professional information gathered from open sources. In case a researcher
holds private or privately obtained information, he/she should enter only the
phrase “available upon request”. Prior to the transmission of this privately
obtained information in response to a request by another researcher from the
Consortium, consent must be obtained from the data subject.
</td> </tr>
<tr>
<td>
**4.R7**
</td>
<td>
**Stakeholder’s database:** Professional information gathered from open
sources can be considered covered by the exemption from the obligation to
inform the data subject contained in article 14.5 (b) of the General Data
Protection Regulation. However, the safeguards referred to in article 89.1,
in relation to the data minimisation principle (as defined in article
5.1 (c)), will be respected.
</td> </tr>
<tr>
<td>
**4.R8**
</td>
<td>
**Summary of empirical research:** No personal data or sensitive information
will be included in the summary. In order to ensure this, the summaries will
be sent to the EAB leader for checking before being shared with the rest of
the Consortium.
</td> </tr>
<tr>
<td>
</td>
<td>
**Informing Research Participants**
</td> </tr>
<tr>
<td>
**5.R1**
</td>
<td>
The **participation information sheet** provided will include: contact
information, subject and objectives of the research, data collection methods,
voluntary nature of participation, confidentiality, and information about the
potential reuse of data.
</td> </tr>
<tr>
<td>
**5.R2**
</td>
<td>
The participant information sheet will be specific for each research activity:
online survey, interviews, focus groups and workshops.
</td> </tr>
<tr>
<td>
**5.R3**
</td>
<td>
The **informed consent form** has to include the information sheet and a
certificate of consent. A model is provided by the Ethics Board.
</td> </tr>
<tr>
<td>
**5.R4**
</td>
<td>
The informed consent form must be specific for each type of data that will be
collected, especially regarding video and audio recording.
</td> </tr>
<tr>
<td>
**5.R5**
</td>
<td>
Research participants must be informed that they may **withdraw** from the
project at any moment, without having to explain the reasons, and without any
repercussion.
</td> </tr> </table>
**Table 5: Initial Recommendations for the Empirical Research**
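As referenced in recommendation 4.R2 above, pseudonymisation of interview
material can be scripted. The following is a minimal sketch assuming a
partner-maintained mapping from names and sensitive details to neutral
categories; the mapping and the example transcript are illustrative only and
would be stored separately from the coded data:

```python
# Minimal sketch of pseudonymising interview text (cf. recommendation 4.R2).
# The mapping and the example transcript are illustrative; each partner would
# keep its own mapping in a secure location, separate from the coded data.
import re

PSEUDONYMS = {
    "Maria": "female subject",   # personal name -> category
    "Oxford": "a university",    # sensitive detail -> generic term
}

def pseudonymise(text: str, mapping: dict) -> str:
    """Replace whole-word occurrences of each key with its pseudonym."""
    for name, replacement in mapping.items():
        text = re.sub(rf"\b{re.escape(name)}\b", replacement, text)
    return text

transcript = "Maria said: I studied in Oxford."
print(pseudonymise(transcript, PSEUDONYMS))
# -> female subject said: I studied in a university.
```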
## Initial Recommendations in relation to the Open Information Hub within the
TAKEDOWN project
The set of recommendations presented here suggests procedures, measures or
strategies to be included in the design of the Open Information Hub. The
structure of these recommendations reflects the different domains of the
Open Information Hub that are relevant for the ethical and legal requirements
and potential risks detected through the previously stated methodologies.
<table>
<tr>
<th>
**Scope of information collected and purposes of collection**
</th> </tr>
<tr>
<td>
**1.R1** Data subjects will be informed according to articles 12 to 14 of
the General Data Protection Regulation.
</td> </tr>
<tr>
<td>
**Notice and rights of the individual**
</td> </tr>
<tr>
<td>
**2.R1**
</td>
<td>
When the information is obtained from the data subject, the Open Information
Hub will provide information according to article 13 and gather the subject’s
consent.
</td> </tr>
<tr>
<td>
**2.R2**
</td>
<td>
Regarding personal data in the Open Information Hub, only data obtained from
the data subject will be processed.
</td> </tr>
<tr>
<td>
**2.R3**
</td>
<td>
In case the Open Information Hub provides users with the possibility to
include data, a disclaimer should be included stating that the user
acknowledges that the inclusion of personal data from other subjects is not
allowed.
</td> </tr>
<tr>
<td>
**2.R4**
</td>
<td>
Any natural person whose data is available in the Open Information Hub will
have the right to access, modify and erase such data.
</td> </tr>
<tr>
<td>
**Uses of the Open Information Hub and information collected**
</td> </tr>
<tr>
<td>
**3.R1**
</td>
<td>
More information is needed on the digital reporting functionality.
</td> </tr>
<tr>
<td>
**3.R2**
</td>
<td>
In case the information is used for the reporting of malicious/suspicious
activities, the exceptions concerning the use of personal data by competent
authorities for the purposes of the prevention, investigation, detection or
prosecution of criminal offences or the execution of criminal penalties will
be taken into account.
</td> </tr>
<tr>
<td>
**3.R3**
</td>
<td>
In case the digital reporting tool allows personal data to be uploaded, only
the relevant competent authorities, according to national legislation, will
have access to these data.
</td> </tr>
<tr>
<td>
**3.R4**
</td>
<td>
Access roles and permissions will be defined in the first steps of the
designing and development process.
</td> </tr>
<tr>
<td>
</td>
<td>
**Retention**
</td> </tr>
<tr>
<td>
**4.R1**
</td>
<td>
The Consortium will take into account that, for the purposes of the TAKEDOWN
project, the retention period is the one used in the relevant field; by
analogy to the administrative and financial issues, this should be 5 years
(Grant Agreement, article 18).
</td> </tr>
<tr>
<td>
**4.R2**
</td>
<td>
The Consortium will take into account that, in case of a future exploitation
of the Open Information Hub, different retention periods may apply, depending
on the national legislations.
</td> </tr>
<tr>
<td>
</td>
<td>
**Technical aspects and security**
</td> </tr>
<tr>
<td>
**5.R1**
</td>
<td>
When defining roles and permissions special attention will be paid to the
possibility to track any interaction with the platform that entails access,
modification and deletion of personal data.
</td> </tr> </table>
**Table 6: Initial Recommendations for the Open Information Hub**
## Initial Recommendations in relation to the Solutions Platform within the
TAKEDOWN project
The set of recommendations presented here suggests procedures, measures or
strategies to be included in the design of the Solutions Platform. The
structure of these recommendations reflects the different domains of the
Solutions Platform that are relevant for the ethical and legal requirements
and potential risks detected through the previously stated methodologies.
<table>
<tr>
<th>
**Scope of information collected and purposes of collection**
</th> </tr>
<tr>
<td>
**1.R1** Data subjects will be informed according to articles 12 to 14 of
the General Data Protection Regulation.
</td> </tr>
<tr>
<td>
**Notice and rights of the individual**
</td> </tr>
<tr>
<td>
**2.R1**
</td>
<td>
During the registration process, in case personal data are collected, the
Solutions Platform will provide the information listed in articles 13 and 14
of the General Data Protection Regulation and collect the consent of the data
subjects.
</td> </tr>
<tr>
<td>
**2.R2**
</td>
<td>
Regarding personal data in the Solutions Platform, only data obtained from the
data subject will be processed.
</td> </tr>
<tr>
<td>
**2.R3**
</td>
<td>
In case the Solutions Platform provides users with the possibility to
include data from other subjects, a disclaimer will be included stating that
the user acknowledges that the inclusion is only allowed with the consent of
that subject.
</td> </tr>
<tr>
<td>
**2.R4**
</td>
<td>
Any natural person whose data are available in the Solutions Platform must
have the right to access, modify and erase such data.
</td> </tr>
<tr>
<td>
</td>
<td>
**Retention**
</td> </tr>
<tr>
<td>
**3.R1**
</td>
<td>
The Consortium will take into account that, for the purposes of the TAKEDOWN
project, the retention period is the one used in the relevant field; by
analogy to the administrative and financial issues, this will be 5 years
(Grant Agreement, article 18).
</td> </tr>
<tr>
<td>
**3.R2**
</td>
<td>
The Consortium will take into account that, in case of a future exploitation
of the Solutions Platform, different retention periods may apply, depending
on the national legislations.
</td> </tr>
<tr>
<td>
</td>
<td>
**Technical aspects and security**
</td> </tr>
<tr>
<td>
**4.R1**
</td>
<td>
More information will be provided concerning the auditing mechanisms foreseen
for the platform.
</td> </tr> </table>
### Table 7: Initial Recommendations for the Solutions Platform
In relation to the generation of primary empirical data, it needs to be
highlighted that, at this stage of the project, and recognising the
importance of the empirical research being conducted, a specific and
comprehensive set of guidelines has been provided to all partners in the
Consortium by the Ethics leader (IDT-UAB): the _Ethical Guidelines for the
processing of data in the context of the Empirical research for the TAKEDOWN
project_ (see Annex). This document aims at offering specific guidance to all
partners of the Consortium for the performance of the different tasks and
activities foreseen in WP3, concerning empirical research. In order to ensure
that all partners are compliant with the requirements related to research
ethics and, particularly, informed consent procedures, the Guidelines include:
* a general introduction containing an explanation on the concept and meaning of Informed consent, in the context of research ethics and empirical research.
* a set of guidelines for quantitative research (online survey)
* a set of guidelines for qualitative research (interviews, focus groups, workshops)
* legal notice to be included in the online survey
* written informed consent form
* oral consent script
# 6\. Outlook towards next DMP
The next DMP will be prepared for month 18, which is after the finalisation
of WP3 (Empirical research). As emphasised in the introduction, the DMP is a
living document, and several questions can only be answered at a later stage
of the project. Hence, the upcoming DMP will provide updates on the issues
raised above and more information on the following questions:
<table>
<tr>
<th>
**Category**
</th>
<th>
**Underlying questions**
</th> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
* Are the data produced in the project interoperable, that is allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)?
* What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?
* Will you be using standard vocabularies for all data types present in your data set, to allow inter-disciplinary interoperability?
</td> </tr>
<tr>
<td>
Increase data re-use (through clarifying licences)
</td>
<td>
* How will the data be licensed to permit the widest re-use possible?
* When will the data be made available for re-use? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible.
* Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why.
* How long is it intended that the data remains re-usable?
* Are data quality assurance processes described?
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
* Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)?
* Do you make use of other national/ funder/ sectorial/ departmental procedures for data management? If yes, which ones?
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
* What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)?
</td> </tr>
<tr>
<td>
Other aspects
</td>
<td>
* Do you make use of other national / funder / sectorial / departmental procedures for data management?
* If yes, which ones?
</td> </tr> </table>
**Table 8: Issues to be addressed in the next DMP**
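As a concrete starting point for the licensing and interoperability questions
above, each deposited dataset can be accompanied by a machine-readable
metadata record. The sketch below shows Zenodo-style deposit metadata for
dataset #4 under an open licence; the licence identifier, keywords and version
are placeholders to be decided in the next DMP:

```python
# Illustrative sketch only: Zenodo-style deposit metadata for dataset #4.
# Licence, keywords and version are placeholders to be fixed in the next DMP.
dataset_metadata = {"metadata": {
    "title": "TAKEDOWN quantitative survey data (cleaned)",
    "upload_type": "dataset",
    "description": "Cleaned primary data from the European-wide online survey.",
    "creators": [{"name": "TAKEDOWN Consortium"}],
    "license": "cc-by-4.0",   # permits wide re-use (placeholder choice)
    "keywords": ["organised crime", "terrorist networks", "survey data"],
    "version": "1.0.0",
}}
```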