**Executive Summary**
The PANOPTIS Data Management Plan is a living document that will be updated as
necessary. It describes how the data of the project are managed during and
beyond the project duration. The objective of the data management plan is to
ensure that all types of data useful to the project (and to other projects as
well) are clearly identified, FAIR (easily Findable, openly Accessible,
Interoperable and Re-usable), and that they raise no ethical or security
concerns.
This initial version identifies the topics that need to be addressed in the
data management plan; they will be detailed once the architecture and the
specifications of the system are elaborated (from Month 12).
# Data Summary
The project is built on three main pillars, namely:
* Elaboration of precise forecasts (essentially weather, but also other hazards when predictable),
* Elaboration of the vulnerabilities of the Road Infrastructure (RI) components,
* Monitoring of the RI status.
The data that will be collected, and generated after processing, fall within
these domains. An important aspect of PANOPTIS is the monitoring over time of
events and their effects on the Road Infrastructure (RI). So, both for deep
learning methods and for statistics, the data have to be kept for several
years. Typically, we need data from the last ten years and data over the whole
duration of the project (4 years).
The origin of the data is the sensors and processing systems that can provide
a description of the environment and detect events that can threaten the RI.
Among these sensors and processing systems, there are:
* Satellites: EO/IR images for macroscopic events (floods, landslides, etc.) and SAR for smaller events (slow, regular ground movements).
* UAVs: In PANOPTIS, the UAVs are equipped with various types of cameras depending on the defects that need to be detected (EO/IR, multi-spectral, hyperspectral) and with LIDARs to elaborate 3D maps. The database collected for the project will be quite large, comprising thousands of high-resolution pictures taken during the project, plus pictures from external databases used to train the detection algorithms.
* Weather data: again a large volume of data, as the base area over which each forecast is computed will be small.
* Hazard data: content and size depend on the hazards. In general, they take the form of hazard maps with different colours depending on the probability of occurrence and the resulting severity.
* Vulnerability data: these data combine the descriptive data for the road and its supporting infrastructure (bridges, tunnels, etc.). Defects (results of inspections and status assessments) will be superimposed on the 3D map. The volume of data is once again dependent on the type of infrastructure (from the simplest case, a road built directly on the terrain, to more complex structures such as bridges).
The project will create data:
* WP3 will compute weather and hazard forecasts, which will be stored as maps with additional free-text comments.
* WP4 will elaborate the vulnerability of the roads and their supports.
* WP5 will collect the data from the sensors and pre-process them.
* WP6 will fuse the data to produce a Common Operational Picture (maps with risk, events, objects) completed by HRAP for decision support.
As the system capabilities are optimized with the data and statistics from
previous events, the data have to stay in the archives for a very long period
of time (at least during the whole life of the components).
The data related to the Road Infrastructure belong to the management agencies,
namely ACCIONA and Egnatia Odos. Any additional use that could be done of
these data has to be approved by them.
The data collected and processed from external services (weather, environment)
will be protected as per the respective contract clauses with these external
services. The data cycle is the following one (EUDAT – OpenAIRE):
At each step of the cycle, the IPRs and contractual clauses need to be
respected. In particular: who owns these data? Is the processing applied to
them allowed? Where will the data be stored, and for how long? Who can access
them, and for what purpose?
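The per-dataset questions above (ownership, allowed processing, storage, access) can be captured as a small governance record. The following is an illustrative sketch only; the class and field names are assumptions, not part of the PANOPTIS system, and the example values paraphrase the SHM dataset table later in this plan.

```python
from dataclasses import dataclass

# Hypothetical sketch: one record per dataset, answering the IPR and
# contractual questions raised in the data cycle above.
@dataclass
class DatasetPolicy:
    name: str                      # dataset identifier
    owner: str                     # who owns these data
    allowed_processing: list[str]  # which processes may be applied
    storage_location: str          # where the data are stored
    retention_years: int           # for how long
    access: list[str]              # who can access the data

    def may_process(self, operation: str) -> bool:
        """Check whether an operation is allowed on this dataset."""
        return operation in self.allowed_processing

# Example values paraphrasing the SHM sensor dataset described in this plan.
shm = DatasetPolicy(
    name="Data from SHM sensors",
    owner="ACCIONA / EOAE",
    allowed_processing=["vulnerability analysis", "model calibration"],
    storage_location="ACCIONA and EOAE control centres",
    retention_years=4,
    access=["Consortium members", "Commission Services"],
)
assert shm.may_process("vulnerability analysis")
assert not shm.may_process("commercial redistribution")
```

Such a record makes the answers machine-checkable before any processing step is run, rather than buried in per-dataset tables.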
# FAIR data
## Making data findable, including provisions for metadata
The data produced in the project will be discoverable through metadata. The
majority of the data used and produced by the project will be time-stamped,
geo-referenced and classified (generally by type of defect). The following
scheme shows the types of data that will be collected by the system with the
in situ sensors. The rest of the collected data will be provided by the UAVs
and the satellites.
The UAVs are equipped with cameras (EO/IR), so the data are images with their
respective metadata. To create accurate 3D maps, the UAVs can also be equipped
with LIDARs, in which case the data will be a point cloud.
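A minimal record combining the three findability attributes named above (time-stamp, geo-reference, classification) could look like the sketch below. The field names and coordinate values are illustrative assumptions, not a specification of the PANOPTIS metadata scheme.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch: a minimal metadata record that makes one observation
# findable. Field names are assumed; WGS84 coordinates are assumed.
@dataclass
class ObservationMetadata:
    timestamp: datetime   # time-stamped
    lat: float            # geo-referenced
    lon: float
    classification: str   # e.g. type of defect detected
    sensor: str           # originating sensor (UAV camera, LIDAR, ...)

# Hypothetical UAV image record.
uav_image = ObservationMetadata(
    timestamp=datetime(2020, 5, 14, 10, 30, tzinfo=timezone.utc),
    lat=40.75, lon=22.92,
    classification="pavement crack",
    sensor="UAV EO camera",
)
assert uav_image.classification == "pavement crack"
```

The same record shape applies whether the payload is an image or a point cloud; only the `sensor` field and the referenced payload differ.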
In PANOPTIS, two types of satellite instruments will be used:
* Cameras (visible images), which will be processed like UAV images but to detect more macroscopic events (floods, landslides, collapses of bridges, mountain rubble, etc.). The images will be provided by SENTINEL 2 or SPOT 6/7.
* SAR (Synthetic Aperture Radar): radar images to detect small movements. The radar images will be provided by SENTINEL 3 (SENTINEL 1 does not have enough precision to identify the changes of interest for PANOPTIS).
The detailed list of the data used and processed in PANOPTIS is provided below.
<table>
<tr>
<th>
**DATASET NAME**
</th>
<th>
**Data from SHM sensors**
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Data from sensors installed in the demo sites for monitoring the structural
health of the different Road Infrastructures (RI): geotechnical sensors in the
Greek site (inclinometers, accelerometers, seismographs, etc.) and corrosion
sensors in Reinforced Concrete (RC) in the Spanish site.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Direct in situ measurements (Spanish and Greek demo sites). Accessible from
local legacy data acquisition systems
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA (Spanish demo site), EOAE (Greek demo site)
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA (Spanish demo site), EOAE (Greek demo site)
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
WP4 partners (IFS, NTUA, SOF, C4controls, AUTH, ITC)
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA and EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, (all tasks), WP7 (Task 7.5)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
* Geotechnical data: Angle of friction, Cohesion, Dry unit weight, Young's modulus, Void ratio, Soil permeability coefficient, Soil porosity, Soil bearing capacity.
* Corrosion data: The wireless sensors located on multiple monitoring points provide electrical parameters, such as corrosion current density (iCORR), electrical resistance of concrete (RS), and double layer capacity (CDL), to a single electronic system. The information directly stored by the electronic system consists of raw sensor data (electrical response). In order to transform these primary data into usable monitoring information, a specific computer tool based on the R software (R Development Core Team) is used. This application allows the data analysis process to be executed in a fast, automated way. As a result, a series of easily interpretable graphs are obtained. All the monitoring graphics are updated daily in an automated way and are available from any of the computers linked to the system.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
* **Geotechnical sensors:** Settlement cells, Vertical inclinometers, Horizontal inclinometers, Rod extensometers, Standpipe piezometers, Pneumatic piezometers
* **Corrosion sensors:** R extensions (.rda, .RData). Graphs updated every day during the demo period (foreseen period of 2 years)
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Feed geotechnical model of cut-slope located at active landslide region (Greek site)
* Feed structural models of bridges (Greek site)
* Feed corrosion model of reinforced concrete underpass (Spanish site)
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
* **Geotechnical sensors:** Settlement cells, Vertical inclinometers, Horizontal inclinometers, Rod extensometers, Standpipe piezometers, Pneumatic piezometers
* **Corrosion sensors:** During the project, any computer from the PANOPTIS partners involved can be linked to the local measurement system. The PANOPTIS system will also be connected to the local monitoring system.
These data shall not be disclosed, by any means whatsoever, in whole or in
part. However, publication and dissemination of these data is possible after
prior approval by ACCIONA/EOAE. Prior notice of any planned publication shall
be given to ACCIONA/EOAE at least 45 calendar days before the publication.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
no
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA and EOAE control centres, plus the PANOPTIS backup system. Information
generated during the project will be kept in the project repository for at
least 4 years after the end of the project.
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Data from weather stations and pavement sensors
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Local weather data coming from legacy weather stations (belonging to end-
users) and new PANOPTIS micro weather stations. Main parameters: Temperature,
relative humidity, pavement temperature, pavement humidity, wind speed, wind
direction, rain precipitations, presence of ice, chemical concentration,
freeze point of solution on the surface.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
In situ measurements from weather stations.
Accessible from local legacy data acquisition systems.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA and EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA and EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
FINT, AUTH, HYDS, FMI, IFS
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3 (Tasks 3.5, 3.6, 3.7), WP4 (Tasks 4.1, 4.2, 4.3,
4.4), WP7 (Task 7.5), WP2 (Task 2.4 and Task 2.5)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data is produced online, in real time, every 3 hours (although the frequency
can be adapted), and stored at ACCIONA/EOAE legacy data acquisition system.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Data can be downloaded from the end-users' legacy data management tool in the
form of .pdf, .xlsx or .doc files. Specific date ranges and parameters can be
selected. The size of the data depends on the date range and number of
parameters selected (various kB to MB per file).
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Providing real-time information of the weather conditions and forecasts for
the DSS.
</td> </tr>
<tr>
<td>
</td>
<td>
* Update climatic models
* Update risk models
* Update ice prone areas on the road surface for winter operations management
* Rainfall data are fed to geotechnical and erosion models of slopes
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
PANOPTIS partners can access the data via the ACCIONA and EOAE legacy data
acquisition systems during the project. At some point in the project, the
weather stations will transfer data online to the PANOPTIS system.
ACCIONA/EOAE must always authorise dissemination and publication of data
generated with legacy systems (existing weather stations): these are historic
data, not generated for the project. Publication and dissemination of data
from PANOPTIS micro weather stations must be approved by ACCIONA/EOAE. Prior
notice of any planned publication shall be given to ACCIONA/EOAE at least 45
calendar days before the publication. The use of data from PANOPTIS micro
weather stations for any other purposes shall be considered a breach of this
Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
no
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA and EOAE control centres. Data generated during the project must be
stored for at least 4 years.
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Thermal map of Spanish A2 Highway (pk 62-pk 139.5)
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Thermal profile of the road surface; thermal characteristics per georeferenced
zone along the road corridor
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA data base
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
IFS, FMI, HYDS, AUTH, ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3 (Tasks 3.5, 3.6, 3.7), WP2 (task 2.5), WP4 (Tasks 4.1, 4.3)
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Test performed upon request
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
.kmz, 138 kB
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Identify ice-prone areas on the road corridor (vulnerable RI). These areas
should be equipped with sensors to monitor ice formation.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
These data shall not be disclosed, by any means whatsoever, in whole or in
part. However, publication and dissemination of these data is possible after
prior approval by ACCIONA. Prior notice of any planned publication shall be
given to ACCIONA at least 45 calendar days before the publication.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
no
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA, control centre, for the duration of the concession contract
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
UAV data
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Data acquired in UAV missions, comprising all the datasets obtained with the
different kinds of sensors (RGB, LiDAR, IR, etc.) used in the project
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA acquisitions, ITC acquisitions
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ITC, ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ITC, NTUA
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA, EOAE
</td> </tr> </table>
<table>
<tr>
<th>
Related WP(s) and task(s)
</th>
<th>
WP5, WP4(4.5), WP7 (Task 7.5)
</th> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data are produced during scheduled missions and shared with end users and WP5
partners for processing.
Metadata should include:
* Date/time of data acquisition
* Coordinate system information
* Information on the UAV system (camera info, flight height, tilt/angle of camera)
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Depending on the sensor used:
* Optical: images/video (.JPEG, .MP4)
* Multispectral: images
* Thermal infrared: images/video (.JPEG, .TIFF, .MJPEG)
* Point cloud: ASCII
The estimated volume of images and videos depends on the number and size of
inspected road corridor elements, and could range from one to a few hundred GB.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Inspection and degradation assessment of road infrastructure, including: slope erosion; road pavement degradation; cracks in concrete bridges, underpasses and overpasses; degradation of road furniture; vegetation encroachment; corrosion of steel elements
* 3D modelling of road infrastructure
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
These data shall not be disclosed, by any means whatsoever, in whole or in
part. However, publication and dissemination of these data is possible after
prior approval by ACCIONA/EOAE. Prior notice of any planned publication shall
be given to ACCIONA/EOAE at least 45 calendar days before the publication.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
PANOPTIS backup system, for 4 years following the end of the project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
RGB camera data
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Imagery from a fixed camera monitoring soil erosion on the slope at pk 64 of
the A2 Highway (Spanish demo)
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA fixed camera (to be installed within the project). Accessible from the
local legacy data acquisition system, and to be accessible from the PANOPTIS
system (online).
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
NTUA
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
NTUA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Production of data in continuous data stream, data is sent online and stored
in PANOPTIS system and ACCIONA legacy data management system.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
High-quality JPEG images; continuous data stream
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
An empirical approach can be applied for erosion of slopes, comparing data on
local water precipitation (from micro weather stations) with volume of soil
erosion (from RGB camera).
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
These data shall not be disclosed, by any means whatsoever, in whole or in
part. However, publication and dissemination of these data is possible after
prior approval by ACCIONA. Prior notice of any planned publication shall be
given to ACCIONA at least 45 calendar days before the publication.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Yes, occasionally, when operations are carried out by the concessionaire's
staff. Consent will be obtained when necessary.
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data storage in PANOPTIS system for at least 4 years after the end of the
project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Videos of road surface and road assets
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Videos of road surface and road assets taken with a 360-degree camera (Garmin
VIRB) by ACCIONA
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ITC
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Videos are acquired by ACCIONA every month and shared with the involved
partners (ITC) for processing via a file-sharing service. Software for editing
VIRB 360 videos:
_https://www.youtube.com/watch?v=COItl8HDEko_
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
.mp4
Video raw mode: 5K (2 files at 2496 × 2496 px) or 5.7K (2 files at 2880 × 2880 px)
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Road surface image analysis for deterioration assessment
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA (they are
historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA at least 45 calendar days before the
publication. The use of Confidential Information for any other purposes shall
be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Yes, occasionally, when operations are carried out by the concessionaire's
staff. Consent will be obtained when necessary.
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data storage in PANOPTIS system for at least 4 years after the end of the
project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Data from the Laser Crack Measurement System (LCMS)
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
3D (point cloud) data of the road, labelled by the LCMS system; cracking test
results
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA database (inspection tests separate from the project)
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data are obtained during scheduled inspection missions and stored at the
ACCIONA control centre. ACCIONA shares results with the project's image
analysis experts via a file-sharing service.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Point cloud: ASCII (.ply, .las, .pts) with x, y, z coordinates.
Excel file summarising cracking results on the corridor.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
3D information of road surface distresses for deterioration assessment
(quantification of damage).
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA (they are
historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA at least 45 calendar days before the
publication. The use of Confidential Information for any other purposes shall
be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA database, for the duration of the highway concession contract
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
3D scan data using Terrestrial Laser Scanner system.
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
3D scan data (point cloud) of slopes on the Spanish A2 highway, acquired using
a Trimble SX10 scanning total station
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data acquired under scheduled mission by ACCIONA, stored in ACCIONA database
and shared with PANOPTIS image analysis experts via file sharing service
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Point cloud: ASCII, 1 to 5 GB per scan.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
3D model of slopes for high-precision monitoring of soil erosion and
landslides over time (evolution of the 3D models)
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
These data shall not be disclosed, by any means whatsoever, in whole or in
part. However, publication and dissemination of these data is possible after
prior approval by ACCIONA. Prior notice of any planned publication shall be
given to ACCIONA/EOAE at least 45 calendar days before the publication.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
no
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA control centre, until the end of the concession contract.
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Results of inspection tests on RI
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Results of inspection tests performed outside the scope of the project but
used in it. For instance, for the road surface: IRI results, slip resistance,
transverse evenness, strength properties, macrotexture; also results of bridge
inspections and slope inspections.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA/EOAE data base
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ITC, IFS
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5, WP4
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Inspection tests are performed according to a yearly plan. For instance, IRI
tests are performed twice per year, and the slip resistance of the road
surface is tested three times per year, plus an additional test every two
years. The data produced are stored in the ACCIONA/EOAE legacy data management
system and shared with the PANOPTIS partners involved, upon request.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Format and size are specific to each test. Results are presented in the form
of a report (.xlsx, .pdf).
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Vulnerability analysis
Input for deterioration analysis via image analysis
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purposes
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE legacy data management system, until at least the end of the
concession contract
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Historic inventories of events in the demosites
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Incidents, accidents, procedures applied, lessons learnt
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA and EOAE database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
IFS
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Inventory of historical data (interventions, accidents, incidents, etc.)
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Report in .xlsx or .pdf format
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Vulnerability analysis
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission
Services).
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purposes
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
no
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE database, at least until the end of the concession project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Data winter operations
</th> </tr> </table>
<table>
<tr>
<th>
Data Identification
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Preventive and curative protocols applied on the road surface (salt/brine use
per GPS location) for the last winter seasons
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA/ EOAE database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA/ EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA/ EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
IFS
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA/ EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, WP7 (Task 7.5)
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
An inventory of the winter operations carried out, including salt/brine
spreading and removal of snow from the road surface, is produced every day on
which any action is performed (i.e. when the anti-icing protocol is
activated). The inventory reports the area affected (km range) and the exact
time/date. All the information is stored in the data management tool of the
end-users and shared with the PANOPTIS partners involved, upon request.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Daily or yearly reports detailing daily actions are issued as .pdf or .xlsx
files (hundreds of kB).
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Relate the use of salt/brine for de-icing operations with pavement deterioration and with corrosion of the reinforcement in reinforced concrete
* Create models to optimise the use of de-icing agents in winter operations
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purposes
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE database, at least until the end of the concession project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Design details of the road corridor of Spanish A2 Highway and Greek Egnatia
Odos Highway
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Inventory, location and design of road infrastructure, slopes, ditches,
transverse drainage works, road sections, road signs. Drawings, geometry,
topography, DTM, DSM, geotechnical surveys of the RI.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
As-built project documentation, rehabilitation projects, database of the
Conservation Agency
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
IFS, AUTH, NTUA
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3, WP4
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Historic data of the end-users, stored in the control centres. It is shared
with PANOPTIS partners under request.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Format and size depend on the file. Some indicative information below:
* Designs in .dwg, various MB
* Topography in .dwg, various MB
* Geotechnical surveys (.pdf reports), various MB
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Models of the RI under study
Information for vulnerability and risk analysis
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purposes
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
CCTV
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Imagery of CCTV installed on the road corridor
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA/EOAE legacy data acquisition systems
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
NTUA, C4C
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, WP5, WP7 (Task 7.5)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Spanish A2T2: images are currently taken online every 5 minutes; the data are
accessible online in the legacy data management tool.
Egnatia Odos motorway images.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Accessible online via the end-users' legacy data management tool.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Model the road corridor
Vehicle information in real time (risk and impact analysis)
Feed for the DSS module
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purposes
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE database, at least until the end of the concession project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Traffic information
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Traffic intensity per hour, per vehicle class (light or heavy), per direction
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA/EOAE control centres (legacy data management tool)
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
NTUA, IFS, C4C
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2 (Task 2.5), WP4 ,WP7 (Task 7.5)
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Information is produced online in real time. PANOPTIS partners can access it
via the legacy data management tool.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Accessible online via the end-users' legacy data management tool.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Data used for vulnerability, risk and impact analysis
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purposes
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Yes, occasionally, when an operation is carried out by the concessionaire's
staff. Consent will be managed when necessary.
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE database, at least until the end of the concession project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Data on ACCIONA Smart Roads Management Tool (legacy data management system)
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Any information shared through the legacy ACCIONA Smart Road Tool
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA control centres (legacy data management tool)
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
NTUA, IFS, C4C, FINT, AUTH, ADS, ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2 (Task 2.5), WP3, WP4, WP5, WP6, WP7 (Task
7.5)
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
PANOPTIS partners can access all the data about the RI in the data management
system of ACCIONA (previously authorised by ACCIONA).
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Accessible online
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Data used for vulnerability, risk and impact analysis, feeding all the models
(weather,
corrosion), image analysis of cameras
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purposes
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA database, at least until the end of the concession project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Land use and cover
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Land use and land cover maps
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Open Access inventories of the Spanish
Administration: Ministry of Finance for land use
https://www.sedecatastro.gob.es/Accesos/SECAcc
DescargaDatos.aspx
SIOSE geoportal (Ministry of Public Works) and
CORINE Land Cover, for land cover data
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Open source data
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2 (Task 2.4), WP3
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data can be downloaded from the download services of the public agencies at
all three levels of the Spanish administration: national, regional and local.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
“.shp” or raster formats such as “.geotiff”
Various MB
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Feed for climatic and geo-hazards models
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Open-source inventory; can be published
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
no
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
no
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data storage in PANOPTIS Open source repository for 4 years after the end of
the project.
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Vegetation maps
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Vegetation maps of the areas surrounding the demosites
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Open Access inventories of the Spanish Ministry of Environment
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Open source
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data can be downloaded from the download services of the public agencies at
all three levels of the Spanish administration: national, regional and local.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Vegetation maps in shapefile format
LiDAR x,y,z data (.laz files, ASCII files, ESRI matrix (.asc))
(various MB)
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Improve simulations of the climate related hazards on the road
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Open-source inventory; can be published
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
no
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
no
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data storage in PANOPTIS Open source repository for 4 years after the end of
the project. Also in
National and Regional Open Source inventories.
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Hydrological data
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Hydrological maps, historic precipitation records, flood-prone areas
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Open Access inventories of the Spanish Ministry of Environment
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Open source data
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2 (Task 2.4), WP3
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data can be downloaded from the download services of the public agencies at
all three levels of the Spanish administration: national, regional and local.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
“.shp”, arpsis
Various MB
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Feed for climatic and geo-hazards models
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Open-source inventory; can be published
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
no
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
no
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data storage in PANOPTIS Open source repository for 4 years after the end of
the project. Also in National and Regional Open Source inventories.
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Satellite data
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Imagery (several processing levels available): JPEG 2000, GEOTIFF (Spot 6/7).
Images, metadata, quality indicators and auxiliary data in SENTINEL-SAFE
format (JPEG 2000, .XML, .XML/GML) (Sentinel-2). Images, metadata, quality
indicators and ground control points: .GEOTIFF, .ODL, .QB, .GCP (Landsat 7
ETM+)
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Spot 6/7, Sentinel-2, Landsat 7 ETM+
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ADS
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ADS
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ADS, ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
In ADS databases
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Satellite images are raster products made of pixels; the pixel size (ground
resolution) depends on the instrument. Images can be taken at various
wavelengths (multi-spectral, hyperspectral). For PANOPTIS, the number of
satellite images will be limited (due to the slow variation of the landscape
and the cost of images); the expected volume is around 20 images.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Identify the changes in the landscape and in the RI to detect possible
problems (landslides, rockslides, flows, etc.)
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
The images are exploited and only the results of exploitation will be
distributed.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ADS data bases for 10 years.
</td> </tr> </table>
## Making data openly accessible
At this stage of the project, we can hypothesise that the data will be stored:
* In the project website repository.
* At the end-user premises/maintenance systems.
* In the integration platform (system repository).
* At the partners' premises.
Some of the data will be collected from external (open) databases in order to
develop system capabilities. This is especially true for images of defects on
the RI, or images of weather/disaster effects on the RI. These images will be
used to calibrate the detection/analysis algorithms, as several modules will
use deep-learning techniques; the more images are available, the more accurate
the results should be.
Conversely, some data collected and processed in the project should be made
accessible to researchers outside the consortium so they can use them for
similar purposes. The WP leaders will therefore decide, after the trials,
which data should be made accessible from outside the consortium, in
accordance with the IPR and the data owners' decisions.
The repository that will be used for the open data will be accessible through
the project website hosted by NTUA.
## Making data interoperable
PANOPTIS deals with data that describe an environment which is the same all
over Europe (and over the world). Meteorological data are in general
standardised (WMO), but the interpretation made from them to produce alerts
can vary. The approach in PANOPTIS is to use existing standards as much as
possible, and to propose standardisation efforts in the domains where
standards are not widely used or do not yet exist.
For the vulnerability of infrastructures, although not completely
standardized, there are very similar approaches in Europe to define an ID card
of infrastructure hot spots (bridges, tunnels). In AEROBI project, a bridge
taxonomy has been proposed as well as a bridge ontology that enables a
standardization of names and attributes. The taxonomy and the ontology of
bridges from AEROBI will be re-used in PANOPTIS.
For the Command and Control system/COP, the objects displayed in the situation
will be exchanged using pre-standardised or widely spread formats: XML
document collections (NVG or TSO objects). Using these formats, the situation
elaborated in PANOPTIS can easily be exchanged with other parties having a
modern information system/control room/call centre (e.g. Civil Protection,
112, road police, etc.).
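As a minimal sketch of this kind of exchange, the snippet below serialises a situation object as an XML document using only the Python standard library. The element and attribute names (`situation`, `event`, `lat`, `lon`, `label`) are simplified assumptions for illustration, not the actual NVG or TSO vocabularies:

```python
import xml.etree.ElementTree as ET

def build_situation_xml(events):
    # Hypothetical, simplified situation document: one <event> element per
    # detected hazard, carrying a position and a label. Real NVG/TSO schemas
    # define their own, much richer vocabularies.
    root = ET.Element("situation")
    for ev in events:
        ET.SubElement(root, "event",
                      lat=str(ev["lat"]), lon=str(ev["lon"]),
                      label=ev["label"])
    return ET.tostring(root, encoding="unicode")

doc = build_situation_xml(
    [{"lat": 40.85, "lon": 22.87, "label": "landslide alert"}]
)
# A receiving control room can parse the same document back:
parsed = ET.fromstring(doc)
print(parsed[0].get("label"))  # landslide alert
```

The point of using an XML-based interchange format is exactly this symmetry: any party that can parse standard XML can consume the situation picture without depending on the PANOPTIS internal data model.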
## Increase data re-use (through clarifying licences)
The data will start to be available when the first version of the system is
integrated and validated (from Month 24).
Of all the data collected and processed by the system, the data related to the
Road Infrastructure can be confidential. They belong to the road operators
(respectively ACCIONA and Egnatia Odos), so if any third party outside the
consortium wants to use them, a case-by-case authorisation is needed from the
operators.
The data should remain accessible after the end of the project: the web site
of the project will be maintained for one year after the project, and the
Academic and Research partners of the project will continue to use the data
beyond its end.
# Allocation of resources
The costs for making data FAIR in PANOPTIS are related to Task 2.4, managed by
AUTH, with the support of FMI and the end-users (ACCIONA and Egnatia Odos).
The maintenance of these data after the project life-time will be decided
within this task after the system architecture (especially data models)
completion.
# Data security
The data security will be assured by:
* The project data repository (controlled access);
* The partners' secured accesses to their data bases.
PANOPTIS data are not sensitive. The infrastructure data owners (ACCIONA and
Egnatia Odos) essentially want to control the use of their data and be sure
that they are not used in improper ways.
The HRAP module will handle a large set of rules and procedures that will also
be used for operational decision support.
# Ethical aspects
PANOPTIS data concern natural phenomena and road infrastructure. No part of
PANOPTIS system manipulates personal data.
However, during the tests, trials or dissemination events, pictures of persons
can be taken, either by the system sensors (fixed cameras, UAV cameras) or by
individual cameras to illustrate reports or to put in the project galleries.
In addition, persons from or outside the consortium can be interviewed.
Any time there will be a collection of personal data (images, CVs, etc.), the
persons will sign a consent form under which they accept the use of these data
in the context of the project and provided that the use cannot go beyond what
is specified in the consent form.
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0399_PANTHEON_774571.md
# 1 Forewords
The project PANTHEON will offer the scientific community both technical data
to be used for further analyses and research, and scientific publications.
# 2 Technical Data
## 2.1 Purpose of technical data collection/generation and its relation to
project’s objectives
The vision of project PANTHEON is to develop the agricultural equivalent of an
industrial Supervisory Control And Data Acquisition (SCADA) system to be used
for the precision farming of hazelnut orchards.
To do so, PANTHEON will develop a system composed of fixed sensors (e.g.
meteorological stations and soil moisture sensors) and ground and aerial
robots that navigate the orchard to collect measurements using various kinds
of sensors (including high-level imaging sensors such as LiDAR and
multispectral cameras), achieving the resolution of the single tree.
The information will be sent to a central unit, which will store the data,
process them, and extract synthetic indicators describing for each tree:
* water stress;
* presence of pests and diseases;
* geometry of the tree, including the possible presence and dimension of suckers;
* estimated number of nuts on the tree.
Based on these synthetic indicators, the system will elaborate a synoptic
report for the agronomist in charge of the orchard, putting in evidence
possible situations that may deserve attention, providing suggestions of
intervention and, if requested, providing a historical view of the status of
the plant and of the treatments already performed.
For some interventions, PANTHEON envisions the design and implementation of
tailored algorithms based on these indicators to automatize farming operations
such as the control of the irrigation level and suckers’ elimination by
robots.
The collection of data is pivotal to ensure the design and implementation of
these techniques. Briefly, primary goals of data collection can be summarized
in the following two points:
1. Development, tuning, and validation of algorithms for the remote sensing of Hazelnut plantations. This includes the design of algorithms that will build the synthetic indicators on the basis of the data collected by the robots and by the fixed sensors. (WP4);
2. Development, tuning, and validation of the automatic feedback algorithms and of the expert system that will generate the synoptic reports (WP5 and WP6).
## 2.2 Origin of the Technical Data
All data generated by the sensors will be collected in the experimental
hazelnut plantation “ _Azienda Agricola Vignola_ ” which is located in the
municipality of Caprarola, in the province of Viterbo, Italy. In particular
the collected data will concern three specific plots of the plantation,
highlighted in Fig. 1.
**Fig. 1:** Fields for the Pantheon project data collection activity.
The current plan foresees both the collection of general data concerning the
entire areas (e.g. aerial images, soil analysis, weather conditions data,
etc.) and the continuous collection of data on a selected subset of trees over
the four years of the project.
At the current stage, we foresee that a total of ca. 48 trees will be selected
to collect different kinds of measurements over the four years of the project
PANTHEON. In particular, they will be organized as follows:
* _Water stress:_ ca. 10 trees selected in field 18 and ca. 10 trees selected in field 16;
* _Sucker detection and control:_ ca. 6 trees in field 18;
* _Fruit detection:_ ca. 6 of the trees selected in field 16;
* _Tree geometry reconstruction:_ ca. 6 trees selected in field 16;
* _Pest and disease detection:_ ca. 10 trees selected in field 21.
The selected trees will be continuously monitored manually by PANTHEON
agronomists tentatively every ten days and autonomously by the ground and
aerial robots tentatively once a month. Full details concerning the procedures
for the trees selection will be part of Deliverable D2.3 “Real-world (1:1
scale) hazelnut orchard for final demo”.
The data collected by the robots will be stored in a database. This data-set
will be used (especially in the first part of the project) to develop, train,
tune and validate the automatic analysis algorithms, while the data collected
manually by the agronomists will be used as _ground truth_ for benchmarking.
Furthermore, this dataset will be used (mostly in the second part of the
project) to validate the effectiveness of the expert system to identify needs
and propose the right corrective actions (WP5).
A more detailed overview of the data that will be collected is reported in the
next subsection.
## 2.3 Types and formats of technical data the project will generate/collect
As explained, different types of data will be collected/generated during the
project. They will contain evaluations and measurements performed with various
techniques and sensors both on single trees and on entire areas.
In principle, the collected data can be divided in the following classes:
1. _**General information on the orchard:**_ descriptive information of each area including:
   a. number of trees,
   b. agronomic age and history,
   c. type of irrigation,
   d. composition of the soil in each area,
   e. ID and geo-localization of each tree in the orchard,
   f. altimetric characterization of each point of the orchard,
   g. geo-localization of the irrigation installation.
Data will be provided in the following formats:
   1. A _**.json**_ file with a synthetic description of each area, its history, and including the ID of each tree, its age, an indication of the cultivar and its geo-localization.
   2. A more complete standard GIS format (e.g. the Geography Markup Language) containing the map of the orchard with all the relevant information (trees ID and position, irrigation lines, altimetry).
2. **_Agronomic Data collected manually_ : ** results of agronomical evaluations performed by PANTHEON agronomists on the selected trees. This includes:
1. the evaluation of the phenology,
2. the evaluation of the biometric variables,
3. the detection of pests and diseases,
4. the evaluation of suckers.
Further data that will be collected manually concerns the yearly hazelnut
yield of each plant under observation. It is expected that all the information
will be collected using standardized protocols. Details concerning the
protocols to be used will be part of Deliverable D2.3 “Real-world (1:1 scale)
hazelnut orchard for final demo”. The data will be stored in tables using
Excel _**.xlsx**_ files.
3. **_Raw Remote Sensing Data collected by the robots:_** data collected by the various sensors mounted on the ground and aerial robots of the project. More specifically it will consist of:
   1. images captured with RGB cameras,
   2. images captured with multispectral and thermal cameras,
   3. 3D measurements captured with Lidar,
   4. data relative to their triggering (RTK-GPS position, date and time, orientation of the gimbal, orientation and speed of the robot).
More specifically, the data collected by the Unmanned Aerial Robot will be:
1. Sony a5100: _**.raw**_ RGB images,
2. Tetracam MCAW: _**.raw**_ multispectral images,
3. Teax ThermalCapture 2.0: _**.raw**_ thermal images.
Each of these images will be associated with a JSON object containing the
description of the data, date and time of the capture, GPS positioning of the
image, and all the data concerning the telemetry of the UAV and the position
of the gimbal at the time of the trigger. The JSON objects will be collected
in a _**.json** _ file.
The data collected by the Ground Robot mostly consists of the three main
sensors:
1. Faro Focus S70 (laser scanner): _**.fls**_ files containing the 3D point cloud,
2. Sony a5100: _**.raw**_ RGB images,
3. MicaSense RedEdge-M: _**.raw**_ multispectral images.
Each of these images will be associated with a JSON object containing the
description of the data, date and time of the capture, GPS positioning of the
image, and all the data concerning the telemetry of the ground robot and the
position of the gimbals at the time of the trigger. The JSON objects will be
collected in a _**.json** _ file. It is also foreseen to store the data of the
extra navigation sensors (e.g. the navigation lidar) in **_.raw_ ** for
comparison purposes **.**
4. **_Elaborated Remote Sensing Data_ : ** processed data computed starting from the raw remote sensing data. These data include both data resulting from pre-processing (filtering, homogenization, etc.) and real derived data, such as: orthophotos of the orchards and of some of its parts, graph representation of the hazelnut tree structure, water stress maps, indicators on the presence of suckers, estimation of the state of health of the plants. At the current stage the format of these data has not been defined yet, however, whenever possible standard **XML** or **JSON** formats will be used.
5. _**Measurements collected by the fixed IoT infrastructure:** _ measurements collected on the field 24/7 by the fixed Internet of Things (IoT) infrastructure composed of a weather station and moisture sensors placed in different parts of the orchard. These data will be collected as ASCII files and possibly converted to Excel _**.xlsx** _ files.
6. _**History of the plants:**_ It represents the history of all the treatments sustained by the plants. This will be recorded in an Excel _**.xlsx**_ file.
At the current stage, we expect that all the data will be collected in a NoSQL
database for easy queries and that all the generated files will have an
associated JSON object containing all relevant information.
## 2.4 Re-use of existing data
No re-use of any existing data is foreseen at the present stage.
## 2.5 Expected size of the data
The expected total size of the generated data mostly depends on the remote
sensing activities and their subsequent analysis. Conversely, all the other
information that will be collected during the entire duration of the project
(information on the orchard, manual sampling and sampling from the
infrastructure) will amount to less than 200 MB.
Roughly, the data gathered by the remote sensing-based activities (in
particular the .raw and preprocessed images) will represent ~95% of the whole
technical data managed during the project.
2.5.1 UAV data
For what concerns the remote sensing performed through the UAV (water stress
and pest and disease detection) the raw file size for each capture is about
1. **28 MB** for the Sony a5100 RGB camera;
2. **15 MB** for the Tetracam MCAW multispectral camera;
3. **0.8 MB** for the Teax ThermalCapture 2.0 thermal camera.
These amount to approximately **44 MB** per capture. For each day of
measurement, we assume approximately 2000 captures, for a total of ca. **90
GB/day.** Assuming a minimum of 7 measurements per year (full details about
the calendar of automated sampling activities will be part of Deliverable D2.3
“Real-world (1:1 scale) hazelnut orchard for final demo”), a total of ca.
**0.63 TB/year** of raw image data from the UAV is reached, which will result
in ca. **2.5 TB** of **raw image data** from the UAV over the entire duration
of the project.
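The arithmetic behind these estimates can be reproduced in a few lines (values as quoted above; the per-day volume is rounded to 90 GB as in the text):

```python
# Cross-check of the UAV raw-data estimates quoted above.
per_capture_mb = 28 + 15 + 0.8   # Sony a5100 + Tetracam MCAW + Teax thermal, ~44 MB
daily_gb = 90                    # ca. 2000 captures/day, rounded as in the text
yearly_tb = daily_gb * 7 / 1000  # 7 measurement days/year -> 0.63 TB/year
project_tb = yearly_tb * 4       # 4-year project -> ~2.5 TB of raw UAV imagery
print(per_capture_mb, yearly_tb, round(project_tb, 1))   # 43.8 0.63 2.5
```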
To obtain the final multispectral orthoimages, which are needed to calculate
the spectral indices, post-processing is required. In the **first testing
phase** (years 1-2) and in the **development phase** (year 3), intermediate
files are generated to evaluate the correctness of the results and to further
develop the algorithms. More and more of these files can be deleted as the
project progresses.
Based on the current design of the processing chain, we assume that each
measurement day will generate post-processed data in a magnitude of:
* **390 GB** in the testing phase;
* **35 GB** in the development phase;
* **30 GB** for the final product.
Assuming 7 measurement days per year this results in a data volume of about
* **2.8 TB/year** in the testing phase;
* **0.3 TB/year** in the development phase and for the final product.
So about **6.2 TB post processed UAV remote sensing data** will be generated
during the entire duration of the project.
2.5.2 UGV data
To perform the remote sensing activities through the UGV (tree geometry
reconstruction, suckers detection and fruit detection) we plan to capture each
tree by 4 Lidar scans and by 16 photo shoots per camera. Based on the sensor
characteristics, it is foreseen that for each tree and day of measure, raw
sensor files are generated with a volume of at most:
1. **0.25 GB** for the Faro Focus S70 laser scanner (.fls);
2. **0.45 GB** for the Sony a5100 RGB camera (.raw);
3. **0.05 GB** for the MicaSense RedEdge-M multispectral camera (.raw).
For the UGV, the amount of data depends on the specific operation and on the
phase of development of the project. Assuming 48 trees measured with all
sensors and 12 trees measured with Lidar only (full details about the calendar
of automated sampling activities will be part of Deliverable D2.3 “Real-world
(1:1 scale) hazelnut orchard for final demo”), it is possible to estimate the
total amount of data generated every year. For the various activities we
foresee that every year we will measure:
* **60 trees** scanned by Lidar;
* **48 trees** captured by the cameras.
So, each **year** we will generate approximately **39 GB of raw UGV sensor
data** in the field. A data volume of **9 GB/day** is not exceeded.
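The yearly figure follows directly from the per-tree volumes quoted above (a quick sketch):

```python
# Cross-check of the UGV raw-data figures quoted above:
# 60 trees scanned by Lidar, 48 of them also captured by both cameras.
lidar_gb, rgb_gb, multispectral_gb = 0.25, 0.45, 0.05   # per tree and day
yearly_gb = 60 * lidar_gb + 48 * (rgb_gb + multispectral_gb)
print(round(yearly_gb))   # 39 (GB of raw UGV sensor data per year)
```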
To obtain the multispectral point clouds and image data used for further
analyses, the raw data has to be post-processed. In the **first testing
phase** (years 1-2) it is important to store more data (including more .raw
and intermediate-format data) to evaluate the correctness of all intermediate
processing steps. Based on the current design of the processing chain, for
each tree, post-processed data is generated with a data volume of
approximately:
* **2.7 GB** for the laser scanner;
* **8.1 GB** for the RGB and multispectral cameras.
For the **development phase** (year 3) and the **final product** (year 4) most
intermediate and temporary files can be deleted, and the amount of post
processed data for each tree will decrease to
* **0.75 GB** for the laser scanner;
* **3.6 GB** for the RGB and multispectral cameras.
Based on the planned data acquisition design we will generate approximately:
* **550 GB/year** in the testing phase;
* **260 GB/year** in the development phase and final product.
So, we will generate approximately **1.6 TB post processed UGV remote sensing
data** during the entire duration of the project.
2.5.3 Total data volume
**We estimate that** approximately **1.7 TB** will be generated during the
entire duration of the project coming from the main sensors of the ground
robots and **8.7 TB** coming from the sensors of the UAV. Considering all the
data acquired from all the various sources, it is reasonable **to estimate the
total amount of data that will be generated in the order of 10-15 TB.**
## 2.6 Third parties possibly interested in the data
The consortium believes that the third parties possibly interested in the data
are mostly research groups on remote sensing that may want to reuse the
collected data to test and validate new algorithms, and research groups
interested in hazelnut plantations that may be interested in validating
current best practices or formulating new paradigms for orchard management.
# 3 FAIR data
## 3.1 Making data findable, including provisions for metadata
3.1.1 Name Convention and Provision of Metadata
All data will be stored following the name convention:
_**TypeOfData-CalendarDay-SequentialNumber.extension**_
where:
* **Type of data:** represents a code for the type of data composed of four capital characters. The meaning of each code will be developed during the project.
* **Calendar Day:** follows the convention YYYY.MM.DD
* **Sequential Number:** is the progressive number for that specific kind of data generated on that day
* **Extension:** is the one proper for that type of data
This naming allows to easily find and order the data by type, date and
sequence; for instance _UAV1-2018.08.03-1.raw_ represents the first capture
from the first sensor of the UAV on the 3rd of August 2018, in .raw format.
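A small helper (illustrative only, not project code) can generate and parse names following this convention; the regular expression below assumes the four-character type codes may include digits, as in the `UAV1` example:

```python
import re
from datetime import date

# Illustrative helpers for the TypeOfData-CalendarDay-SequentialNumber.extension
# naming convention described above (assumed pattern, not official project code).
NAME_RE = re.compile(
    r"^(?P<type>[A-Z0-9]{4})-(?P<day>\d{4}\.\d{2}\.\d{2})-(?P<seq>\d+)\.(?P<ext>\w+)$"
)

def build_name(type_code: str, day: date, seq: int, ext: str) -> str:
    """Compose a file name following the project naming convention."""
    return f"{type_code}-{day.strftime('%Y.%m.%d')}-{seq}.{ext}"

def parse_name(name: str) -> dict:
    """Split a conforming file name into its four components."""
    match = NAME_RE.match(name)
    if match is None:
        raise ValueError(f"file name does not follow the convention: {name}")
    return match.groupdict()

print(build_name("UAV1", date(2018, 8, 3), 1, "raw"))   # UAV1-2018.08.03-1.raw
```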
Together with each generated file, there will always be an associated JSON
object stored in a _**.json**_ file containing the relevant metadata and extra
information that might be needed.
3.1.2 Structure of the metadata (including keywords and version numbers)
Each generated file will have an accompanying JSON object, stored in a
_**.json**_ file, which will be structured to include the following
information:
* **General information on the data:** it contains metadata such as the file name and its key, a description of the nature of the data (including versioning), keywords for easy searchability, and an indication of the license under which the data are distributed.
* **Accessibility information:** it contains information on how to read the data. It includes the format of the file (with possible versions), when relevant an indication of the way the data is structured (e.g. convention for tables), and suggestions on the software to open the data (including a URL to the software producer, when available).
* **Service information:** it contains the extra information on the data acquisition. It will always contain the timestamp and the GPS coordinates of the acquisition, together with any other information that can be useful for the elaboration of the data.
A tentative structure of a possible .json describing data is reported
hereafter
{
  "generalInfo" : {
    "filename" : "TypeOfData.extension",
    "key" : "CalendarDay-SequentialNumber",
    "description" : "Here a description of the file and its content",
    "keywords" : [ "Keyword1", "Keyword2", "Keyword3" ],
    "copyrightOwner" : "H2020 EU Project PANTHEON, www.project-pantheon.eu",
    "copyrightLicense" : "Type of licence with which data are released"
  },
  "dataInfo" : {
    "formatFile" : "format file",
    "structure" : "Possible description of the information file",
    "supportSoftware" : "Name of the software to open the data",
    "urlSoftware" : "if available, URL to a software to open the data"
  },
  "serviceInfo" : {
    "timeStamp" : "Timestamp in Unix Epoch format",
    "gps" : [ "Latitude", "Longitude", "Altitude" ],
    …
  }
}
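A minimal sketch (an assumed helper, not project code) of how such a metadata file could be checked for completeness against the three sections described above:

```python
import json

# Required sections and fields, mirroring the tentative structure above
# (an assumption for illustration; the real schema will be fixed later).
REQUIRED = {
    "generalInfo": ["filename", "key", "description", "keywords",
                    "copyrightOwner", "copyrightLicense"],
    "dataInfo": ["formatFile"],
    "serviceInfo": ["timeStamp", "gps"],
}

def missing_fields(metadata_text: str) -> list:
    """Return 'section.field' entries absent from a metadata .json document."""
    doc = json.loads(metadata_text)
    return [f"{section}.{field}"
            for section, fields in REQUIRED.items()
            for field in fields
            if field not in doc.get(section, {})]

sample = '{"generalInfo": {"filename": "UAV1-2018.08.03-1.raw"}, "dataInfo": {}, "serviceInfo": {}}'
print(missing_fields(sample))   # lists the eight missing 'section.field' entries
```

Such a check could run automatically when a new file and its `.json` companion are ingested into the database.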
## 3.2 Making data openly accessible
3.2.1 Default Open Access, Exceptions and Temporary Embargos
As a matter of principle, it is the intention of the consortium to make all
the collected data publicly available by default at the end of the project, so
that they can be re-used by the project partners and by third parties.
Exceptions to this general principle will be made on the basis of:
* Possible well-motivated objections raised either by one of the partners or by the owner of the hazelnut orchard “ _Azienda Agricola Vignola_ ” concerning the disclosure of sensitive information that might jeopardize the economic exploitation of the results of the project or legitimate economic/privacy interests of the involved organizations. The pertinence of the objections must be approved by the consortium boards.
* Technical difficulties in publicly sharing the data due to the size of the database and the associated bandwidth requirements. Should this be the case, a representative sample of the data will be selected and will be made publicly available on the internet without any access restriction. The consortium will grant access to the entire dataset upon request.
Furthermore, any consortium partner may request a temporary embargo on any
specific subset of the data up to the time that scientific publication,
patents, or products based on those data are published.
The means to make the data publicly available will be detailed in Section
3.2.3.
3.2.2 Software to access the data
As already detailed in Section 2, the technical data generated is either raw
data from the various sensors (which as such follows the specifications of the
sensors' manufacturers) or processed data provided in the most common storage
formats.
The **JSON** object accompanying every generated data file foresees a field
which describes the type of the data, its internal structure (when relevant),
and a suggestion on the software to be used. The JSON object will also contain
a link to the suggested software to access the data. Whenever possible, links
to downloadable open source software will be provided.
3.2.3 Repository and Access to the Data
All data will be stored in a NoSQL database (the same that will be used within
the central unit for the project). The database will run on the main
workstation of the project, installed at the University of Roma Tre.
To make the data accessible, a webpage connected to the project webpage will
be created as a frontend to the NoSQL database. The page will describe the
content of the database, and the instructions for accessing it.
The possibility to also upload the material on a public repository for
research data sharing (e.g. _https://zenodo.org/_ ) will be evaluated.
However, at the current stage this solution seems impracticable given the
very large size of the generated database. A possible solution could be to
select a representative subset of the data (e.g. all the measurements
concerning a very small number of trees) to be uploaded on a standard
repository for research data sharing, clearly stating in a disclaimer that the
full dataset is accessible at the project website upon request.
Access to the database will be through a login and password, obtainable
through the front-end upon registration of Name, Last Name and institutional
email. The user will have read-only privileges on the data and will not have
access to restricted or embargoed data. Access to restricted or embargoed data
will possibly be granted upon motivated request to the Consortium. The
personal data of the registered users (name, last name, and email) will be
accessible only to the system administrator.
3.2.4 Licenses
The data will be released under the **Creative Commons Attribution-
NonCommercial-ShareAlike** licence; for details on this licence please refer
to _https://creativecommons.org/licenses/by-nc-sa/3.0/legalcode_. The
information on the licenses will be reported in each **JSON** description as
well as on the front page of the repository.
Figure 2 – The data will be released under the Creative Commons Attribution-
NonCommercial-ShareAlike License
## 3.3 Making data interoperable
Since the developed data will be stored in the most common formats, it is
reasonable to expect that the data could be re-used with a good level of
interoperability. The use of the **_.json_** auxiliary file to make explicit
the data types and the possible internal structure of the data will facilitate
interoperability. Furthermore, as the data will be collected in a NoSQL
database, access to the elaborated data (and possible conversion to specific
reporting formats) will be easily achieved.
To make our data interoperable with other agricultural-related databases and
support interdisciplinary interoperability we will use metadata vocabularies
(based on RDFS) and standard ontologies (based on OWL) for agronomists, such
as AGRO (the AGRonomy Ontology) 1 , developed by The Open Biological and
Biomedical Ontology (OBO) Foundry.
## 3.4 Increase data re-use (through clarifying licences)
3.4.1 Licensing to increase re-use
The data will be publicly released under the **Creative Commons Attribution-
NonCommercial-ShareAlike** licence. The information on the licenses will be
reported in each _**.json**_ description as well as on the front page of the
repository.
Summarizing from the **Creative Commons** website
(https://creativecommons.org/licenses/by-nc-sa/3.0/), this license allows the
user to freely:
* **Share** – Copy and redistribute the data in any medium or format
* **Adapt** – Remix, transform and build upon the data
Under the following conditions:
* **Attribution** — The user must give appropriate credit to the licensor, provide a link to the license, and indicate if changes were made. The user may do so in any reasonable manner, but not in any way that suggests the licensor endorses the user or the use of the data.
* **NonCommercial** — The user may not use the material for commercial purposes. The PANTHEON consortium pledges to not consider publication of scientific papers in peer-reviewed journals a commercial purpose.
* **ShareAlike** — If the user remixes, transforms, or builds upon the data, they must distribute their contributions under the same license as the original.
3.4.2 Availability of the data
The consortium will ensure the public access to the generated database
starting from the beginning of the fourth year of the project, taking into
account the possible exceptions highlighted in Section 3.2.1. The consortium
will ensure the internet availability of the database for at least 2 years
after the end of the project.
3.4.3 Description of the data quality assurance process
The consortium will comply with high standards of data collection. Full
details concerning the methods and protocols for data collection will be part
of the Deliverable D2.3 “Real-world (1:1 scale) hazelnut orchard for final
demo”.
# 4 Data security
Data will be stored in a server physically located at Roma Tre University and
protected by a firewall. In particular, the server will be a cluster of
standard Linux-based workstations equipped with the latest versions of
open-source security tools. Regarding data reliability and fault-tolerance,
data will be replicated in the local server. In addition, whenever possible,
the other partners of the consortium will keep copies of the data sets to
ensure some redundancy against possible failures.
# 5 Scientific Publications
All scientific outcomes will be provided in open access mode. In particular,
the 'green' open access model will be used. Every scientific outcome generated
in the project will be self-archived in three locations: on the project
website, on arXiv, and on ResearchGate, to ensure maximal visibility. The
researchers will be instructed to publish only in journals and conferences
ensuring self-archiving (green publishers). Exceptions to this policy must be
authorized by the Project Management Committee. The authorization to publish
in journals/conferences not ensuring self-archiving will be granted only if
motivated by reasons of opportunity.
# 6 Ethical aspects
No ethical aspects concerning data sharing are expected. If any should arise
(e.g. images capturing neighboring fields or unexpected people passing by),
proper actions will be taken, e.g., data removal.
At the current stage it is foreseen that the database will not contain any
personal information except:
* Progressive IDs of the Agronomic Experts (for agronomical evaluations). As described in the Ethics deliverable D8.1, the real identity behind the evaluation number will be known only to the leader of WP5, who will store it in a non-digital register for his eyes only, together with the copies of the informed consent that the experts will sign (for a facsimile of the informed consent please refer to deliverable D8.1).
* Authors of the data. The author of the data will be given the possibility to appear in the database with their real name or with a standardized nickname. In both cases they will sign an informed consent that will be kept by the data management responsible.
* Name, Last Name and Email of each user of the database. This information will be restricted (only the system administrator will have access to it). All people signing up in the repository will have to agree on an informed consent form on the use of personal data complying with the Italian legislation.
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0403_WeGovNow_693514.md
Executive Summary
H2020 projects are to provide a first version of the Data Management Plan
(DMP) within the first six months of the project, with a view to being updated
during the project lifetime. The present document presents the initial DMP for
the WeGovNow project, thereby describing the project’s current view on the
data management life cycle for the datasets to be collected, processed or
generated for the purposes of the WeGovNow pilot evaluation. This refers to
the handling of evaluation data during and after the project. According to the
workplan, the current version of the DMP will be updated on the basis of a
dedicated evaluation framework to be developed until project month 12 (D4.1).
The final DMP will be presented in a dedicated deliverable (D6.3). The current
view can be summarized as follows:
* Data set references and names will be specified on the basis of the evaluation framework to become available by project month 12 (D4.1).
* Qualitative evaluation data on positive and/or negative impacts of utilising the WeGovNow platform and services, as perceived on the part of the participating municipalities, will be generated. These will be augmented by quantitative data to be collated (e.g. time spent on utilising the WeGovNow platforms by municipal staff).
* Qualitative evaluation data on positive and/or negative impacts of utilising the WeGovNow platform and services, as perceived on the part of civil society stakeholders (e.g. representatives of local NGOs participating at a given pilot site) and citizens, will be generated.
* Quantitative data on WeGovNow platform and service utilisation which can be automatically derived from the technical infrastructure to be piloted (e.g. platform utilisation statistics) will be aggregated.
* Currently it is envisaged that any aggregated data and case level data will be made available in an anonymised manner towards the end of the project for non-commercial, research purposes upon request.
* All data will be stored at the project coordinator’s corporate technical infrastructure, protected against unauthorised access and backed up at different levels, including regular off-premise backups.
# Introduction
According to available guidance, H2020 projects are to provide a first version
of the Data Management Plan (DMP) within the first six months of the project
1 . The initial DMP should be updated - if appropriate - during the project
lifetime. The present document is a first version of DMP for the WeGovNow
project. It describes the project’s current view on the data management life
cycle for the datasets to be collected, processed or generated for the
purposes of the evaluation of the three WeGovNow pilots. This refers to the
handling of evaluation data during and after the project. In particular, the
current document sets out an initial view on what data will be collected and
processed (Chapter 2). It also initially describes the methodology (Chapter 3)
and standards (Chapter 4) which will be applied. Furthermore, it is described
how data are expected to be shared with any external parties (Chapter 5) and
how these will be preserved (Chapter 6). According to the workplan, the
current version of the DMP will be updated on the basis of a dedicated
evaluation framework to be developed until project month 12 (D4.1). The final
DMP will be presented in a dedicated deliverable (D6.3).
# Data set reference and name
Based on the evaluation framework to become available by project month 12,
data set references and names will be specified.
# Data set description
In accordance with the project’s workplan, the WeGovNow platform and services
will be piloted under day-to-day conditions in three municipalities from
project month 18 onwards. All three trial sites will be evaluated according to
a common evaluation programme. According to the workplan the initially
described evaluation approach (Annex I) will be consolidated by project month
12 and reported in a dedicated deliverable (D4.1) respectively. The current
view is that different sets of evaluation data will be generated:
* Qualitative data on positive and/or negative impacts of utilising the WeGovNow platform and services, as perceived on the part of the participating municipalities, will be generated by means of key informant interviews (staff). It is anticipated that these will take the form of semi-structured interviews. Interviews will be undertaken in pairs to enable detailed note-taking. Interview notes will be typed up according to agreed formats and standards. The ultimate number of interviews will depend on the local context within which the WeGovNow platform and services are to be implemented at each pilot site. At the current stage, it is anticipated that 3 to 5 key informant interviews will be conducted per pilot site. These will be augmented by quantitative data (e.g. time spent on utilising the WeGovNow platform by municipal staff) which will be gathered either by means of retrospective interviews or staff diaries. The ultimate decision about the data collation techniques to be applied is expected to depend on the local circumstances prevailing at each of the pilot sites, e.g. when it comes to feasibility within the participating municipalities’ day-to-day operations.
* Qualitative data on positive and/or negative impacts of utilising the WeGovNow platform and services, as perceived on the part of civil society stakeholders, will be generated by means of key informant interviews (e.g. representatives of local NGOs participating at a given pilot site) and focus groups (e.g. citizens). The key informant interviews are expected to be conducted as described above. The ultimate number of interviews will again depend on local circumstances (e.g. the number of local NGOs utilising the WeGovNow platform in a given pilot site). It is currently expected that 4 to 8 key informant interviews will be conducted per pilot site. Focus groups will involve two evaluators and be conducted in the vernacular. Whether recorded or not, the event will be transcribed or documented using agreed formats and standards for handling the issue of multiple voices, interruptions, labelling of participatory and visual activities, and so on. The evaluators will be reasonably fluent in both English and the main language in which focus groups are conducted; transcriptions will be translated into English only where the researcher is fluent in both languages and better able to transcribe in English, or to enable analysis of particular sections of the text. This will help avoid unnecessary effort.
* Quantitative data on WeGovNow platform and service utilisation which can be automatically derived from the technical infrastructure to be piloted (e.g. platform utilisation statistics) is expected to be aggregated in anonymous form. During the development phase of the platform, it will be clarified what data can be expected to be made automatically available for this purpose. In any case, no personal data is expected to be derived from the platform for evaluation purposes.
As the pilot evaluation will refer to a newly developed platform, these data –
or similar data – are not available from existing sources. Any quantitative
data to be generated throughout the project’s piloting duration will be stored
and analysed with the help of Microsoft Excel-based tools. Qualitative data to
be generated will be stored in Microsoft Word format.
# Standards and metadata
During the evaluation plan development phase lasting until project month 12,
metadata, procedures and file formats for note-taking, recording,
transcribing, and anonymising semi-structured interview and focus group
discussion data will be developed and agreed. The same will be done for any
quantitative data to be generated throughout the WeGovNow pilot duration.
# Data sharing
Based on the evaluation framework which will be developed by the end of
project month 12, the project will formulate a strategy to grant open research
data access, in accordance with the rules of the Horizon 2020 programme.
Currently it is envisaged that any aggregated data and case level data will be
made available in an anonymised manner towards the end of the project. Access
will be provided free of charge for non-commercial research purposes upon
request.
Requests for data access can be made via the project website or direct contact
to the project co-ordinator or evaluation WP leader. Access to the dataset
will be granted after signature of a data access request form, regulating
inter alia proper mentioning of the data source. The dataset will be made
available for at least three years after the ending of the formal project
duration.
# Archiving and preservation
Based on the evaluation framework that will become available by project month
12, the procedures that will be put in place for preservation of the
evaluation data will be described and the current Data Management Plan will be
updated respectively. It is envisaged that the data will be stored in
the form of Microsoft Excel files and Microsoft Word files at the project
coordinator’s corporate technical infrastructure. These will be protected
against unauthorised access and backed up at different levels, including
regular off-premise backups. The evaluation data to be generated in the
framework of WeGovNow is currently envisaged to be preserved for a period of
at least three years after the ending of the project duration. The exact
volume of the data to be preserved cannot be determined at the current stage
of the project. It is however envisaged that the volume will be small enough
that no noteworthy additional cost for data storage and preservation will
arise.
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020

0405_SCENT_812391.md
# Introduction
The data management plan (DMP) is a living document and will be updated over
the course of the project whenever significant changes arise, such as new data
or changes in consortium policies. The DMP will be updated in month 24 and
month 36, and for the final review in month 48. The DMP will be written
according to the ‘H2020 templates: Data management plan v1.0 – 13.10.2016’.
# Data Summary
In order to facilitate the data collection, document sharing and
collaboration, the University of Nottingham will make use of a combination of
Microsoft SharePoint and a Git-based repository service (Figure 1).
The latter is managed internally by the George Green Institute for
Electromagnetics Research (GGIEMR) and the former is supplied as part of a
long-term business contract with Microsoft. This combination will guarantee
availability and data integrity throughout and beyond the span of the project.
Both services will be available to all project partners either by direct login
to the service, a private (to that individual) hyperlink or a public
hyperlink. Privileges to read, edit or contribute to the data and/or documents
can be controlled via each approach.
**Figure 1.** Data management and sharing services available at The George
Green Institute, University of Nottingham.
## Expected Data Formats
The anticipated experimental and simulation work will encompass a large range
of different model descriptions, experimental setup data, experimental
measurement data and documentation. The data stored will therefore need to be
able to reflect this multitude of sources. In order to give an accessible and
coherent representation of the data, all project partners will concentrate on
a few, but very common, data formats where possible, namely:
1. Measurement data, in the ASCII-based Touchstone format (file extension .s2p for two-port data). These files are directly written by measurement instrumentation and can be read by a multitude of software.
2. Numerical data stored in MATLAB compatible (binary format for large data sets) or comma separated variable (.csv) formats (for smaller data sets). The corresponding files should be accompanied by scripts that allow for the reading and visualisation of the data, or by an explicit description on how to read the data using other applications.
3. The data collected will, in the main, originate through experimental measurement. The data sets collected can range from a number of kilobytes (KB) through to many gigabytes (GB) in size
4. Documentation should be provided as editable MS Word files i.e. .docx. Final versions of documents for dissemination should also be stored as .pdf.
5. Simulation results arising from using commercial software should be made available along with the complete simulation package’s ‘project’ files used to generate the result(s).
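Format 2 above requires that numerical `.csv` datasets be accompanied by a script that can read (and visualise) them. A minimal sketch of such a companion script; the column layout (`frequency_hz`, `s21_db`) and sample values are invented for illustration, not a project-mandated schema:

```python
import csv
import io

# Illustrative .csv content: a small frequency sweep.
# Column names and values are assumptions for this sketch.
SAMPLE_CSV = """frequency_hz,s21_db
1.0e9,-3.2
2.0e9,-4.1
3.0e9,-6.8
"""

def read_sweep(text):
    """Read comma-separated sweep data into a list of (frequency, value) floats."""
    reader = csv.DictReader(io.StringIO(text))
    return [(float(row["frequency_hz"]), float(row["s21_db"])) for row in reader]

rows = read_sweep(SAMPLE_CSV)
print(len(rows))   # number of data points read
print(rows[0])     # first (frequency, value) pair
```

In practice the same reader would take a file path instead of an inline string, and a plotting call (e.g. with matplotlib) would provide the visualisation the format description asks for.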
# FAIR data
## Making data findable, including provisions for metadata
To facilitate the usage and re-use of the stored data, all stored data will be
accompanied by a text-based description of the data, i.e. its source and
format and how the data can be interacted with. The precise detail of the data
format should be contained within a folder containing the data and named using
a convention that readily identifies the source Institution and ESR.
Measurement datasets originating from electronic instruments will usually
themselves contain metadata describing the state of the instrument during data
acquisition. In addition, metadata in a human-readable format will be held
external to the data they describe. The metadata will follow the principles of
the Dublin Core Schema and for each of the datasets stored the following
metadata elements will be provided (Table 6.1):
<table>
<tr>
<th>
**Metadata Element**
</th>
<th>
**Use**
</th>
<th>
**Example value**
</th> </tr>
<tr>
<td>
**Title**
</td>
<td>
Name of dataset
</td>
<td>
</td> </tr>
<tr>
<td>
**Subject**
</td>
<td>
Specific research topic
</td>
<td>
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Description of dataset
</td>
<td>
</td> </tr>
<tr>
<td>
**Creator**
</td>
<td>
Primary person responsible for collecting dataset
</td>
<td>
Typically the ESR
</td> </tr>
<tr>
<td>
**Publisher**
</td>
<td>
Parent Institution of Creator
</td>
<td>
e.g. University of Nottingham
</td> </tr>
<tr>
<td>
**Contributor**
</td>
<td>
Other(s) involved in creating dataset
</td>
<td>
</td> </tr>
<tr>
<td>
**Date**
</td>
<td>
Date of creation
</td>
<td>
YYYY-MM-DD
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Nature of the dataset
</td>
<td>
e.g. vector network analyser
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
Format of the dataset and/or file identifier
</td>
<td>
Touchstone, .s2p
</td> </tr>
<tr>
<td>
**Language**
</td>
<td>
Language of the dataset
</td>
<td>
Use ISO Code 639-1 Code e.g. for English: en
</td> </tr>
<tr>
<td>
**Relation**
</td>
<td>
Any related resources
</td>
<td>
</td> </tr>
<tr>
<td>
**Rights**
</td>
<td>
Any rights related to the dataset
</td>
<td>
</td> </tr> </table>
**Table 6.1:** Metadata elements applicable to SCENT datasets.
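One simple way to hold the Table 6.1 elements externally in human-readable form is a plain-text sidecar file of `Element: value` lines next to each dataset. A minimal sketch; the example record values are invented:

```python
# Dublin Core elements from Table 6.1, emitted as a human-readable
# "Element: value" sidecar record stored next to the dataset it describes.
DC_ELEMENTS = [
    "Title", "Subject", "Description", "Creator", "Publisher",
    "Contributor", "Date", "Type", "Format", "Language",
    "Relation", "Rights",
]

def make_sidecar(record):
    """Render a metadata record as 'Element: value' lines in Table 6.1 order.

    Elements missing from the record are emitted with an empty value so the
    record stays complete and easy to check mechanically.
    """
    return "\n".join(f"{el}: {record.get(el, '')}" for el in DC_ELEMENTS)

# Invented example record for illustration only.
example = {
    "Title": "S-parameter sweep, shielded enclosure",
    "Creator": "ESR 3",
    "Publisher": "University of Nottingham",
    "Date": "2018-06-01",
    "Format": "Touchstone, .s2p",
    "Language": "en",
}
print(make_sidecar(example))
```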
The main entrance point for all the data will be via the following link:
_https://uniofnottm.sharepoint.com/sites/SCENT_ to which all members of the
project will have access.
The longevity of the data storage is ensured by a contract between the
University of Nottingham and Microsoft for their content management solution
SharePoint. The administrative accounts are spread out over several permanent
members of staff at the University of Nottingham to ensure responsiveness and
availability.
## Making data openly accessible
The Horizon 2020 strategic priority of Open Science (OS) will be fully
respected by all participants. OS describes the on-going evolution of doing
and organising science, as enabled by digital technologies and driven by the
globalisation of the scientific community. OS aims at promoting diversity in
science and opening it to the general public, in order to better address
societal challenges and ensure that science becomes more responsive both to
socioeconomic demands and to those of European citizens. OS also provides
significant new opportunities for researchers to disseminate, share, explore
and collaborate with other researchers. OS aims at improving and maximising
access to and re-use of research data generated by funded projects. Results
should be findable, accessible, interoperable and reusable (FAIR). Therefore
SCENT will deposit data and take measures to make it possible for third
parties to access, mine, exploit, reproduce and disseminate - free of charge
for any user - the data as well as the results from the funded project.
## Making data interoperable
* Data will be stored in formats as outlined in section 1.1 to allow the re-use of any data as appropriate.
* Research data management plans will ensure that research data are available for access and reuse where required by Horizon 2020 terms and conditions or where otherwise appropriate and under appropriate safeguards.
* ESRs are responsible for deciding, subject to legal, ethical and commercial constraints, which data sets are to be released to meet their obligations. Data shall be released for access and reuse as soon as practicable after research activity is completed and results published.
## Increase data re-use (through clarifying licences)
* The privacy and other legitimate interests of the subjects of research data must be protected.
* Research data of future historical interest, and all research data that represent records of the project’s partner Institutions, including data that substantiate research findings, will be offered and assessed for deposit and retention in an appropriate national or international data service or domain repository, or a University repository.
* Exclusive rights to reuse or publish research data should not be handed over to commercial publishers or agents without retaining the rights to make the data openly available for re-use, unless this is a condition of funding.
# Allocation of resources
Data storage will be managed internally by the George Green Institute for
Electromagnetics Research (GGIEMR), complemented by Microsoft SharePoint,
which is supplied as part of a long-term business contract with Microsoft.
This combination will guarantee availability and data integrity throughout and
beyond the span of the project. There are no additional costs to the project.
# Data security
All data is subject to local backup and backup provision through the cloud-
based services maintained at the University of Nottingham. All data stored on
University Microsoft Cloud-based services is encrypted and therefore secure.
Data is accessible through modern web browsers over Hypertext Transfer
Protocol Secure (HTTPS).
# Ethical aspects
No ethical aspects for the data are expected. Research data will be managed to
the highest standards throughout the research data lifecycle as part of the
University’s commitment to research excellence.
# Conclusion
To reflect the dynamic nature of the generation of data and its associated
type, this is a living document and will be updated at 12 monthly intervals to
include a summary of data stored.
0407_A-LEAF_732840.md
The purpose of the DMP is to provide an overview of the main elements of the
data management policy that will be used by the Consortium with regard to the
project research data. The DMP is not a fixed document but will evolve during
the lifespan of the project.
The DMP covers the complete research data life cycle. It describes the types
of research data that will be collected, processed or generated during the
project, how the research data will be preserved and what parts of the
datasets will be shared or kept confidential.
This document is the third version of the DMP, delivered in Month 13 of the
project. It includes an overview of the datasets to be produced by the
project, and the specific conditions that are attached to them. The next
versions of the DMP will be updated in Month 36 (D7.8) as the project
progresses.
This Data Management Plan describes the **A-LEAF** strategy and practices
regarding the provision of Open Access to scientific publications,
dissemination and outreach activities, public deliverables and research
datasets that will be produced.
Categories of outputs that **A-LEAF** will give Open Access (free of charge)
and have been agreed upon and approved by the Exploitation and Dissemination
Committee (EDC) include:
* Scientific publications (peer-reviewed articles, conference proceedings, workshops)
* Dissemination and Outreach material
* Deliverables (public)
<table>
<tr>
<th>
**A-LEAF public deliverables**
</th>
<th>
**Month**
</th> </tr>
<tr>
<td>
Kick off meeting agenda
</td>
<td>
1
</td> </tr>
<tr>
<td>
Project Management Book
</td>
<td>
3
</td> </tr>
<tr>
<td>
Project Report 1(Public version)
</td>
<td>
16
</td> </tr>
<tr>
<td>
Project Report 2 (Public version)
</td>
<td>
32
</td> </tr>
<tr>
<td>
Final Report
</td>
<td>
50
</td> </tr>
<tr>
<td>
A-LEAF DMP (and updates)
</td>
<td>
2, 12, 24, 36
</td> </tr>
<tr>
<td>
Web-page and logo
</td>
<td>
2
</td> </tr>
<tr>
<td>
A-LEAF Dissemination and Exploitation Plan (and updates)
</td>
<td>
3, 12, 24, 36
</td> </tr>
<tr>
<td>
A-LEAF Communication and Outreach Plan (and updates)
</td>
<td>
4, 12, 24, 36
</td> </tr> </table>
* Research Data
* Computational Data
Any dissemination data linked to exploitable results will not be put into the
open domain if doing so would compromise their commercialisation prospects or
if they have inadequate protection.
1.1. **A-LEAF** strategy and practices
The decision to be taken by the project on how to publish its documents and
data sets will come after the more general decision on whether to go for an
academic publication directly or to seek first protection by registering the
developed Intellectual Property (IP). Open Access must be granted to all
scientific publications resulting from Horizon 2020 actions. This will be done
in accordance with the Guidelines on Open Access to Scientific Publications
and Research Data in Horizon 2020 (15 February 2016) [1].
_**Concerning publications** _ , the consortium will provide open access
following the ‘Gold’ model: an article is immediately released in Open Access
mode by the scientific publisher. A copy of the publication will be deposited
in a public repository, OpenAIRE and ZENODO or those provided by the host
institutions, and available for downloading from the **A-LEAF** webpage. The
associated costs are covered by the author/s of the publication as agreed in
the dissemination and exploitation plan (eligible costs in Horizon 2020
projects).
_**Concerning research data** _ , the main obligations of participating in the
Open Research Data Pilot are:
1. To make it possible for third parties to _access_ , _mine_ , _exploit_ , _reproduce_ and _disseminate_ \- free of charge for any user - the following:
1. the published data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible
2. other data, including raw data and associated metadata, as specified and within the deadlines laid down in the data management plan; and
2. To provide information about _tools_ and _instruments_ at the disposal of the beneficiaries and necessary for validating the results.
**A-LEAF** follows the Guidelines on Data Management in Horizon 2020 (15
February 2016)
[2].
The consortium has chosen ZENODO [3] as the central scientific publication and
data repository for the project outcomes. This repository has been designed to
help researchers based at institutions of all sizes to share results in a wide
variety of formats across all fields of science. The online repository has
been created through the European Commission’s OpenAIREplus project and is
hosted at CERN.
ZENODO enables users to:
* easily share the long tail of small data sets in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science
* display and curate research results, get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the European Commission
* easily access and reuse shared research results
* define the different licenses and access levels that will be provided
Furthermore, ZENODO assigns a Digital Object Identifier (DOI) to all publicly
available uploads, in order to make content easily and uniquely citable.
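For illustration, a sketch of how a deposition's metadata payload might be assembled for ZENODO. The field names below follow Zenodo's published deposition REST API; the HTTP upload itself (which requires an access token and e.g. the `requests` library) is omitted, and the record content is an invented example, not A-LEAF data:

```python
# Build (and sanity-check) the metadata payload that Zenodo's deposition
# REST API accepts. Only a subset of Zenodo's documented upload types is
# listed here; this sketch stops short of performing the actual HTTP POST.
def make_deposition_payload(title, description, creators, upload_type="dataset"):
    allowed = {"publication", "poster", "presentation", "dataset",
               "image", "video", "software", "other"}
    if upload_type not in allowed:
        raise ValueError(f"unsupported upload_type: {upload_type}")
    return {
        "metadata": {
            "title": title,
            "upload_type": upload_type,
            "description": description,
            "creators": [{"name": n, "affiliation": a} for n, a in creators],
        }
    }

# Invented example record.
payload = make_deposition_payload(
    title="A-LEAF example dataset",
    description="Underlying data for an A-LEAF publication (illustrative).",
    creators=[("Doe, Jane", "ICIQ")],
)
```

After a successful deposition and publish step, Zenodo returns the DOI it has assigned to the uploaded record.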
# SCIENTIFIC PUBLICATIONS
2.1 Dataset Description
As described in the DoA (Description of Action), the consortium will produce a
number of publications in journals with the highest impact in
multidisciplinary science. As mentioned above, publications will follow the
“Gold Open Access” policy. The Open Access publications will be available for
downloading from the **A-LEAF** webpage ( _www.a-leaf.eu_ ) and also stored
in the ZENODO/OpenAIRE repository.
2.2 Data sharing
The Exploitation and Dissemination Committee (EDC) will be responsible for
monitoring and identifying the most relevant outcomes of the **A-LEAF**
project to be protected. Thus, the EDC (as described in the Dissemination and
Exploitation plan) will also decide whether results arising from the
**A-LEAF** project can pursue peer-review publication.
The publications will be stored at least in the following sites:
* The ZENODO repository
* The **A-LEAF** website
* OpenAIRE
3. DOI
The DOI (Digital Object Identifier) uniquely identifies a document. This
identifier will be assigned by the publisher in the case of publications.
4. Archiving and preservation
Open Access, through the **A-LEAF** public website, will be maintained for at
least 3 years after the project completion.
Items deposited in ZENODO, including all the scientific publications, will be
archived and retained for the lifetime of the repository, which is currently
the lifetime of the host laboratory CERN (at least for the next 20 years).
# DISSEMINATION / OUTREACH MATERIAL
3.1 Dataset Description
The dissemination and outreach material refers to the following items:
* Conferences: all academic partners of **A-LEAF** will attend the most relevant conferences and promote the results of the project through oral talks and/or posters.
* Workshops: two workshops will be organized in M28 and M48 to promote awareness of the **A-LEAF** objectives and results (data produced: presentations and posters).
* Dissemination material: flyers, videos, public presentations, **A-LEAF** newsletter, press releases, tutorials, etc.
* Communication material: website, social media, press desk, audiovisual material. Outreach activities for project’s promotion to the general public.
2. Data sharing
All the dissemination and communication material will be available (during and
after the project) on the **A-LEAF** website and ZENODO.
3. Archiving and preservation
Open Access, through the **A-LEAF** public website, will be maintained for at
least 3 years after the project completion. All the public dissemination and
outreach material will be archived and preserved on ZENODO and will be
retained for the lifetime of the repository.
# PUBLIC DELIVERABLES
4.1 Dataset Description
The documents associated to all the public deliverables defined in the Grant
Agreement, will be accessible through open access mode. The present document,
the **A-LEAF** Data Management Plan update, is one of the public deliverables
that after submission to the European Commission will be immediately released
in open access mode in the **A-LEAF** webpage, CORDIS website and ZENODO.
<table>
<tr>
<th>
**A-LEAF public deliverables**
</th> </tr>
<tr>
<td>
Kick off meeting agenda
</td> </tr>
<tr>
<td>
Project Management Book
</td> </tr>
<tr>
<td>
Project Report 1 (public version)
</td> </tr>
<tr>
<td>
Project Report 2 (public version)
</td> </tr>
<tr>
<td>
Final Report
</td> </tr>
<tr>
<td>
A-LEAF DMP (and updates)
</td> </tr>
<tr>
<td>
Web-page and logo
</td> </tr>
<tr>
<td>
A-LEAF Dissemination and Exploitation Plan (and updates)
</td> </tr>
<tr>
<td>
A-LEAF Communication and Outreach Plan (and updates)
</td> </tr> </table>
All other deliverables, marked as confidential in the Grant Agreement, will
only be accessible for the members of the consortium and the Commission
services. These will be stored in the **ALEAF** intranet with restricted
access to the consortium members. The Project Coordinator will also store a
copy of the confidential deliverables.
4.2 Data sharing
Open Access to **A-LEAF** public deliverables will be achieved by depositing
the data into an online repository. The public deliverables will be stored in
one or more of the following locations:
* The **A-LEAF** website, after approval by the Project Advisory Board (PAB) (if the document is subsequently updated, the original version will be replaced by the latest version)
* The CORDIS website, will host all public deliverables as submitted to the European Commission. The **A-LEAF** page on CORDIS is:
_http://cordis.europa.eu/project/rcn/206200_en.html_
* ZENODO repository
4.3 Archiving and preservation
Open Access, through the **A-LEAF** public website will be maintained for at
least 3 years after the project completion.
All public deliverables will be archived and preserved on ZENODO and will be
retained for the lifetime of the repository.
# RESEARCH DATA
5.1 Dataset Description
Besides the open access to the data described in the previous sections, the
Open Research Data Pilot also applies to two types of data:
* The data, including metadata, needed to validate the results presented in scientific publications (underlying data).
* Other data, including associated metadata. The PAB will be able to choose which data (besides the data associated to publications) they make available in open access mode.
All data collected and/or generated will be stored according to the following
format:
## A-LEAF_WPX_TaskX.Y/Title_Institution_Date
Should the data not be directly linkable or attributable to a specific Work
Package and/or task, a self-explanatory title for the data will be used
according to the following format:
_**A-LEAF_Title_Institution_Date** _
When the data is collected in a public deliverable this other format may also
be used:
_**D.X.Y A-LEAF_ Title of the Deliverable** _
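The naming rules above can be applied mechanically with a small helper. A sketch only, under the assumption that "WPX_TaskX.Y/Title" denotes the Work Package and task identifier followed by the title; the example values are invented:

```python
# Build file names following the A-LEAF naming convention quoted above:
#   A-LEAF_WPX_TaskX.Y_Title_Institution_Date   (WP/task known)
#   A-LEAF_Title_Institution_Date               (self-explanatory title)
# Dates are assumed to be YYYY-MM-DD strings.
def aleaf_name(title, institution, date, wp=None, task=None):
    parts = ["A-LEAF"]
    if wp is not None and task is not None:
        parts.append(f"WP{wp}_Task{wp}.{task}")
    parts += [title, institution, date]
    return "_".join(parts)

print(aleaf_name("XPS-survey", "ICIQ", "2018-03-01", wp=2, task=1))
print(aleaf_name("AnnualSummary", "ICIQ", "2018-12-01"))
```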
# COMPUTATIONAL DATA
The computational data resulting from the simulations will be stored,
following the same procedure as above, at the local nodes of ioChem-BD.org,
which allows the generation of DOIs for the particular datasets from the
calculations and ensures their reproducibility.
# RESPONSIBILITY FOR THE IMPLEMENTATION OF THE DMP
The consortium will make a selection of relevant information, disregarding
information that is not relevant for the validation of the published results.
Furthermore, following the procedure described in section 2.2, the data
generated will be carefully analysed before giving open access to it in order
to be aligned with the exploitation policy described in the Dissemination and
Exploitation Plan (D7.3).
Therefore, data sharing in open access mode can be restricted where there is a
legitimate reason to protect results expected to be commercially or
industrially exploited. Approaches to limit such restrictions will include
agreeing on a limited embargo period or publishing selected (non-confidential)
data.
The selected research data and/or data with an embargo period, produced in
**A-LEAF** will be deposited into an online research data repository (ZENODO)
and shared in open access mode.
Each partner of the consortium will be responsible for the storage and backup
of the data produced in their respective host institutions. Furthermore, each
partner is responsible for uploading all the research data produced during the
project to the **A-LEAF** intranet (restricted to the members of the
consortium) or for sending it to the coordinator, who will inform the rest of
the consortium once it is uploaded. The coordinator will be responsible for
collecting all the public data and uploading it in the **A-LEAF** public
website and ZENODO.
0409_Startup Lighthouse_780738.md
**Introduction**
This deliverable contains the current status for:
* Quality & Risk Management procedures
* Communication & Management Tools
* Data Management Plan
# Quality Management
Ricardo Silva (Vilabs) takes the role of Risk and Quality Manager (RQM) to
identify, assess and manage administrative and technical risks, as well as the
implementation of the quality procedures and the verification of the project
results.
Quality Management protocol
The RQM consults with the project partners as activities are designed,
implemented and evaluated. This becomes a _de facto_ responsibility of
the coordination team, providing a solid ground for successful, timely and
quality implementation of the project activities.
Deliverables
All project deliverables are approved in the following process:
Activity Leader -> RQM Review -> Coordination Team Review -> Consortium
Approval
A _first deliverable template_ has been made available to partners.

Risk Assessment and Management:
Risk management requires identification, control and recording of risks,
highlighting of the consequences and the appropriate management actions.
The RQM is responsible for ensuring that the activities are realised within
the proposed timeline and delays are kept to a minimum. Beyond the annual
milestones, the RQM will pay special attention to the interdependence between
tasks.
The RQM will monitor and evaluate the risk matrix (probability and impact
assessment) throughout the project lifetime, additionally undertaking steps to
decrease the probability of the most likely risks.
Each partner will have the responsibility to report immediately to the RQM any
risky situation that may arise and may affect the project objectives or their
successful completion. Any change in time schedule of deliverables or in the
allocated budget must be reported to the RQM. In case of problems or delays,
the Coordination Team will be consulted and may take the necessary actions. In
case no resolution is reached, the Consortium will be consulted and will
establish mitigation plans to reduce the impact of risk occurring.
The table below summarizes an indicative list of the risks identified by the
project consortium and their related contingency plans in brief.
<table>
<tr>
<th>
**#**
</th>
<th>
**Description of risk**
</th>
<th>
**Level of Likelihood**
</th>
<th>
**WP**
**Involved**
</th>
<th>
**Contingency plans**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Financial risk
</td>
<td>
Low
</td>
<td>
ALL
</td>
<td>
The implicit uncertainty related to the project may lead to a significant
variation of costs. For this reason, administrative/financial management will
not be limited to reporting but will also include monitoring, so as to
constantly assess the financial health of the project and identify early signs
of concern.
</td> </tr>
<tr>
<td>
2
</td>
<td>
Changes in the project team
</td>
<td>
High
</td>
<td>
ALL
</td>
<td>
Identify these changes as soon as possible. Require partners to provide
substitutes with equivalent (or higher) qualifications and experience. Inform
the substitutes in detail about the project, their role and responsibilities.
</td> </tr>
<tr>
<td>
3
</td>
<td>
Delay in the project timetable
</td>
<td>
Medium
</td>
<td>
ALL
</td>
<td>
The Coordinator agrees on: (i) re-allocation of resources; (ii) parallel
execution of tasks; (iii) rescheduling of activities; or a suitable
combination of these.
</td> </tr>
<tr>
<td>
4
</td>
<td>
Dissemination may not have sufficient impact
</td>
<td>
Low
</td>
<td>
ALL
</td>
<td>
The Dissemination Plan will set clear objectives and activities to raise the
importance of LIGHTHOUSE and the benefit to all stakeholders.
</td> </tr>
<tr>
<td>
5
</td>
<td>
Some of the partners or of the consortium leave
</td>
<td>
Low
</td>
<td>
ALL
</td>
<td>
All of the project partners have committed to this proposal. Should such a
scenario occur, the leaving partner will be replaced by another with a
similar profile. The wide network of contacts of the different partners
guarantees a high probability of a successful replacement.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Ongoing dissemination may take more effort and resources than planned
</td>
<td>
Low
</td>
<td>
WP6
</td>
<td>
(a) Continuous on-line liaison between the Partners on their use of resources,
(b) shared dissemination opportunities with other related projects, and (c)
previous relevant experience of the Partners, will ensure that this does not
occur.
</td> </tr>
<tr>
<td>
7
</td>
<td>
Quality of events is below expectations
</td>
<td>
Low
</td>
<td>
WP2,
WP3,
WP4
</td>
<td>
Coordinator will continuously evaluate the project processes and submit its
conclusions. The Coordinator, together with Activity Leaders, will analyse
them and take actions based on these conclusions, in order to continuously
improve the procedures.
</td> </tr>
<tr>
<td>
8
</td>
<td>
Release of deliverables is not on time
</td>
<td>
Low
</td>
<td>
ALL
</td>
<td>
Identify the causes and the partners responsible for missing the established
plan. Confront responsible partners with the situation and request formal
adequate commitment for future deliverables. Analyse the proposed time
schedule for the production of deliverables and consider if the introduction
of modifications will ease and improve the deliverable production process.
</td> </tr>
<tr>
<td>
9
</td>
<td>
Number of startups attending activities is below expectations
</td>
<td>
Low
</td>
<td>
WP2,
WP3,
WP4
</td>
<td>
LIGHTHOUSE launches a new round of the activity, after evaluation by the
Coordinator, and the Activity Leader contacts startups directly in order to
maximise conversion and understand what is attractive and unattractive about
the activities.
</td> </tr>
<tr>
<td>
10
</td>
<td>
One of the selected startups leaves an ongoing activity
</td>
<td>
Low
</td>
<td>
WP2,
WP3,
WP4
</td>
<td>
A waiting list will be created among the finalists of each activity, from
which a replacement will be selected.
</td> </tr>
<tr>
<td>
11
</td>
<td>
LIGHTHOUSE activities are not clearly understood by the public
</td>
<td>
Medium
</td>
<td>
WP2,
WP3,
WP4,
WP5,
WP6
</td>
<td>
Create a FAQ section and other types of online tools upon validation from user
testing with the target audience.
</td> </tr>
<tr>
<td>
12
</td>
<td>
Deliverables produced in low quality
</td>
<td>
Low
</td>
<td>
WP1
</td>
<td>
Proper internal quality procedures and criteria have been designed. Provide
enough resources (time and human) in all tasks to ensure the required quality.
</td> </tr>
<tr>
<td>
13
</td>
<td>
Overcrowding of similar activities
</td>
<td>
Medium
</td>
<td>
WP2,
WP3,
WP4
</td>
<td>
In case there is a possibility that organising LIGHTHOUSE activities saturates
the ecosystem, the consortium will instead co-organise and co-sponsor
activities to ensure maximum impact to startups and the ecosystem in general.
</td> </tr>
<tr>
<td>
14
</td>
<td>
Low visibility/impact of events in terms of number of attendees and press
coverage
</td>
<td>
Low
</td>
<td>
WP6
</td>
<td>
Analyse the media and marketing campaign developed, identify the causes and
explore new networks/contacts to reach the target. Deploy engaging tactics
and know-how to the next set of organised events to maximise their impact
and, therefore, the project’s impact.
</td> </tr>
<tr>
<td>
15
</td>
<td>
Low number of new business contacts among startups and
investors/corporates/public administrations
</td>
<td>
Low
</td>
<td>
WP2,
WP3,
WP4
</td>
<td>
Identify the causes and explore new networks/contacts to reach the target.
Organise new matchmaking events.
</td> </tr> </table>
# Communication & Management Tools
The main points of the communication framework agreed in the kick-off meeting
can be found below:
■ Physical and online meetings:
■ Regular Physical project meetings
■ Bi-weekly meetings using GoToMeeting
■ Meeting minutes, including Action Items of bi-weekly calls
<table>
<tr>
<th>
**Planned physical meeting**
</th>
<th>
**When**
</th>
<th>
**Where**
</th>
<th>
**Who**
</th> </tr>
<tr>
<td>
Kick-off meeting
</td>
<td>
M1
</td>
<td>
Portugal
</td>
<td>
FastTrack
</td> </tr>
<tr>
<td>
SE workshop
</td>
<td>
M3
</td>
<td>
Paris
</td>
<td>
SE Initiative
</td> </tr>
<tr>
<td>
Project meeting
</td>
<td>
M6
</td>
<td>
Vilnius
</td>
<td>
Startup Division
</td> </tr>
<tr>
<td>
SE workshop
</td>
<td>
M10
</td>
<td>
Sofia
</td>
<td>
SE Initiative
</td> </tr>
<tr>
<td>
Project meeting (Awards)
</td>
<td>
M11
</td>
<td>
Awards
</td>
<td>
F6S
</td> </tr>
<tr>
<td>
Project meeting and review
</td>
<td>
M14/15
</td>
<td>
TBC by the EC
</td>
<td>
TBC
</td> </tr>
<tr>
<td>
SE workshop/event
</td>
<td>
M21
</td>
<td>
TBC
</td>
<td>
SE Initiative
</td> </tr>
<tr>
<td>
Project meeting (Awards/Final Conference)
</td>
<td>
M23
</td>
<td>
Awards
</td>
<td>
F6S
</td> </tr>
<tr>
<td>
Final Review
</td>
<td>
M26
</td>
<td>
TBC by the EC
</td>
<td>
TBC
</td> </tr> </table>
■ Internal Communication
■ Communication tools:
<table>
<tr>
<th>
**Tool**
</th>
<th>
**Usage**
</th> </tr>
<tr>
<td>
Email ( _[email protected]_ )
Mailing list ( _[email protected]_ )
#Slack group
</td>
<td>
Communication among partners on a daily basis
</td> </tr>
<tr>
<td>
GoToMeeting
</td>
<td>
* Consortium conference calls bi-weekly
</td> </tr>
<tr>
<td>
Deadlines & Action points
Keeping record of important dates
</td>
<td>
* Startup Lighthouse Google Calendar
* Everybody has access
</td> </tr> </table>
■ Google Drive Repository:
■ All project related documents
■ _Centralised database_ for all project information available to all
partners
■ Project mailing lists with Skype and mobile numbers, as daily communication
tools
■ European Commission and Project Officer
The Project Coordinator is the main contact point to the EC and coordinates
the preparation of all required official reports, amendments and project
reviews for the EC summarizing progress on project tasks, deliverables and
budget usage and reporting any deviations and corrective actions put in place.
On the other hand, Activity Leaders respond to the EC via the coordinator on
any issues raised in periodic reports or with deliverables relating to
particular WPs, thus ensuring a satisfactory response is provided.
# Data Management Plan
STARTUP LIGHTHOUSE takes the protection of personal and private data
seriously, especially as it is a sensitive topic for many startups and
scaleups. All information potentially shared by scaleups (personal data and
intellectual property) is only used for the purposes of the project and all
rights (including Intellectual Property Rights) are kept exclusively by the
scaleups themselves. All data are stored in internal project databases
(spreadsheets stored in a shared drive exclusive to the project consortium) or
EU compliant platforms such as F6S.
A multi-dimensional consent mechanism will be implemented, where participants
will be invited to consent to their involvement in the project activities and
define their preferences on data disclosure, data storage, preservation,
opening and sharing of their own data and data created.
Consortium partners, in cooperation with the project participants, can opt not
to release specific data related to the financial planning, valuation or exit
strategy of participating startups.
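As an illustration only, the multi-dimensional consent preferences described above could be captured in a simple record such as the sketch below. The field names and the release rule are assumptions made for this example, not the project's actual consent form.

```python
# Illustrative sketch only: one possible way to record the multi-dimensional
# consent preferences described above. Field names are assumptions made for
# this example, not the project's actual consent form.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    participant: str
    involvement: bool   # consent to take part in project activities
    disclosure: bool    # may data be disclosed publicly?
    storage: bool       # may data be stored in project databases?
    preservation: bool  # may data be preserved after the project ends?
    sharing: bool       # may data be shared beyond the activity?

    def releasable(self) -> bool:
        """Data may only be released when every dimension has been granted."""
        return all([self.involvement, self.disclosure, self.storage,
                    self.preservation, self.sharing])

record = ConsentRecord("Startup A", involvement=True, disclosure=False,
                       storage=True, preservation=True, sharing=True)
print(record.releasable())  # False: public disclosure was not granted
```

Keeping each dimension explicit makes it straightforward to honour a participant who, for instance, consents to take part but declines public disclosure.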
STARTUP LIGHTHOUSE maintains the required protection of personal data and
full compliance with all Data Regulations in force in national and European
legislation on the protection of personal data, and has established all the
technical means within its reach to avoid the loss, misuse, alteration,
access by unauthorised persons and theft of the data provided to it,
notwithstanding that security measures on the Internet are not impregnable.
As data controller, the project coordination team, which is also responsible
for the Impact assessment and the conduct of research in the project, will
file a request to “ _The Hellenic Data Protection Authority_ ” describing
thoroughly the purposes of the research and the process of data gathering,
processing and analysis. This will be in line with the provisions of Greek
Laws 2472/1997 and 3471/2006 (the latter regarding electronic communications)
and will fully comply with the EU General Data Protection Regulation (GDPR),
which replaces the EU’s Data Protection Directive 95/46/EC, fully respecting
privacy and data protection rights and the Ethics guidelines on data storage
and treatment within H2020.
Regarding knowledge management and protection: Eventual production of reports
or insights from the data collected through the project will be published on
LIGHTHOUSE platforms free for anyone to access.
IPRs will be controlled in accordance with general EC policies concerning
ownership, exploitation rights, confidentiality, commercial utilisation of
results, availability of the information and deliverables to other EU funded
projects and disclaiming rules. Specific actions will be taken in order to
satisfy the basic intellectual property regime that publication rights will be
owned by those who produce the respective results (either employers or
employees depending on their country’s regime), whereas distribution within
the project should be granted for free (decision of non-disclosure should be
taken by the consortium with adequate compensation to the partners).
The basic principle is that foreground knowledge, i.e. knowledge created
within (or resulting from) the project, belongs to the project partner who
generated it.
If knowledge is generated jointly and separate parts cannot be distinguished,
it will be jointly owned, unless the contractors concerned agree on a
different solution. The granting of Access Rights to jointly owned foreground
will be royalty-free and the granting of Access Rights to own foreground will
either be on royalty-free or on the basis of fair and reasonable conditions.
Regarding background, the granting of Access Rights will be royalty-free for
the execution of work during the project, unless otherwise agreed before
signature of the Grant Agreement. For the purposes of policy development and
the further promotion of innovation, the European Community will be given a
non-exclusive royalty-free license to use the public knowledge generated in
the project, such as reports, methodologies or case material. Confidential
information relating to individuals or companies will be collected and
protected in strict accordance with EU and national regulations and best
practice regarding data confidentiality.
## 1\. DATA SUMMARY
All data collection by the project is related broadly to the following
purposes:
* Participant Selection
* Activity Logistics & Organisation
* Activity Evaluation/Feedback
* Impact Assessment
* Policy Recommendations
1. Participant Selection:
Startup information will be collected exclusively through the F6S platform,
which complies with the GDPR. This information is collected through an
application form, a general example being provided _here_ .
2. Activity Logistics & Organisation:
Selected startups and other consenting participants will provide basic
information related to the organisation of the activities - from
identification needed for security purposes to dietary requirements.
3. Activity Evaluation/Feedback:
Participants will be asked to evaluate their experience with the project with
the aim of improving activities and developing best practices.
4. Impact Assessment:
Information related to business performance will be collected from
participating companies to assess the impact of the project, comparing to the
project KPIs, which are simplified below.
5. Policy Recommendations:
A mixed strategy of surveys and interviews with selected participants will be
executed to develop policy recommendations, collecting their opinions on the
subject matter.
Overall, the project aims, at most, to collect data from 120 startups and, for
Policy Recommendations, to extend that survey to a community of over 3000
individuals across Europe.
All data is stored in project folders, only accessible to the project
consortium.
Data will become public to promote startups within the scope of project
activities (e.g.: pitch-deck to investors) or for the Impact Assessment
(aggregated and anonymous) and Policy Recommendations (aggregated and
anonymous, unless it’s an agreed testimonial/opinion).
All public documents will be double checked with the original sources of data
before publication.
## 2\. FAIR DATA
Most of the data collected is related to specific companies, so data is
identified by associating it with the company name.
Most of the data is also private, so re-use will be limited to ensure the
rights of the participants.
Activity Evaluation/Feedback, Impact Assessment and Policy Recommendations
data will be made public after being aggregated and anonymised. It will be
provided in Google Sheets format, open for anyone to access.
This data can be used by all other organisations looking to support businesses
across the world to understand the potential impact of specific activities.
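The aggregate-and-anonymise step described above could be sketched as follows. This is a hypothetical illustration: the field names ("company", "activity", "score") are assumptions, not the project's actual schema, and the real aggregation is done in the shared spreadsheets mentioned earlier.

```python
# Minimal sketch of the aggregate-and-anonymise step described above,
# assuming feedback arrives as per-company records. Field names are
# illustrative, not the project's actual schema.
from collections import defaultdict
from statistics import mean

def anonymise_and_aggregate(records):
    """Drop company identifiers and aggregate feedback scores per activity."""
    by_activity = defaultdict(list)
    for rec in records:
        # The company name is the identifying field and is discarded here;
        # only the activity and the score survive aggregation.
        by_activity[rec["activity"]].append(rec["score"])
    # Only aggregated, non-identifying figures are released.
    return {activity: {"responses": len(scores),
                       "mean_score": round(mean(scores), 2)}
            for activity, scores in by_activity.items()}

feedback = [
    {"company": "Startup A", "activity": "Deep Dive Week", "score": 4},
    {"company": "Startup B", "activity": "Deep Dive Week", "score": 5},
    {"company": "Startup C", "activity": "Among Investors", "score": 3},
]
print(anonymise_and_aggregate(feedback))
```

Because only counts and means per activity are published, no individual company can be identified from the released figures.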
The data should remain available indefinitely.
## 3\. ALLOCATION OF RESOURCES
The costs of data management are negligible, as the data can be stored using
Google Drive.
The project coordinator is responsible for ensuring proper data management in
this project.
## 4\. DATA SECURITY & DATA PRESERVATION
Data security and preservation follow the same provisions as the platforms
used: F6S and Google Drive.
STARTUP LIGHTHOUSE’s generated data about the development of these activities
will be archived for self-sustainability purposes in order to allow the
consortium to carry on the activities at a later stage or to provide this
information freely to any who would continue the work of the project after its
end.
Data owners retain the right to be forgotten by contacting Startup
Lighthouse through its established communication channels.
## 5\. ETHICAL ASPECTS
The main ethical considerations of the project and its data are related to
privacy. Each startup applicant will have to consent to the terms and
conditions made explicit here: _http://startuplighthouse.eu/startup-
lighthouse-terms-conditions/_
_“Startup Lighthouse takes the protection of personal and private data
seriously. All information shared by applicants (personal data and intellectual property)
is only used for the purposes of the project and all rights (including
Intellectual Property Rights) are kept exclusively by the applicants
themselves. All data are stored in internal databases (exclusive to the
organisers) or EU compliant platforms such as F6S._
_We will not disclose any information to any third parties not directly
involved in Startup Lighthouse activities that you are taking part in._
_Startup Lighthouse maintains the required protection of personal data and
full compliance to all the Data Regulations in force in national and European
legislation about the protection of personal data and has established all the
technical means in their reach to avoid the loss, misuse, alteration, access
to unauthorised persons and theft of the data provided to this entity,
notwithstanding that security measures on the Internet are not impregnable._
_You consent to your involvement in the project activities and accept these
principles on data disclosure, data storage, preservation, opening and sharing
of own data and data created._
_You agree that the Startup Lighthouse project has the right to the use of
your company’s image and profile in case you are selected, and that of your
team strictly for media publication as well as to inform you of future events
and activities, strictly related to Startup Lighthouse project.”_
The following deliverables will explore the Ethical Aspects in more detail.
D7.2 : GEN Requirement No. 2 / D1.6 "Ethical and Legal Issues"
An additional deliverable must be foreseen in WP1: D1.6 "Ethical and Legal
Issues". The deliverable must provide detailed information and explain how
H2020 ethical principles will be fully respected both as concerns the
involvement of humans and the processing of personal data. As to ethics issues
in general, the deliverable must include, but not be necessarily limited to,
the following: - before the beginning of an activity raising an ethical issue,
a copy of any ethics committee opinion required under national law must be
submitted; - the applicant must provide a thorough analysis of the ethics
issues raised by this project and the measures that will be taken to ensure
compliance with the ethical standards of H2020; - templates must be provided
for Informed Consent Forms and Information Sheets (in language and terms
understandable to participants).
D7.3 : H - Requirement No. 3 [6]
As concerns humans, the deliverable must include, but not be necessarily
limited to, the following: - details on the procedures and criteria that will
be used to identify/recruit research participants must be provided; - detailed
information must be provided on the informed consent procedures that will be
implemented for the participation of humans; - templates of the informed
consent forms and information sheet must be submitted; - the applicant must
provide details about the measures taken to prevent the risk of enhancing
vulnerability/stigmatisation of individuals/groups.
D7.4 : POPD - Requirement No. 4 [6]
As concerns data protection, the deliverable must include the following: -
detailed information on the procedures that will be implemented for personal
data collection, storage, protection, retention and destruction and on how
such acts of processing will fully comply with national and EU data protection
rules, with particular reference to the EU General Data Protection Regulation,
in compliance with the accountability principle; - detailed information on the
physical and logical security measures that will be adopted for the protection
of personal data, with particular reference to sensitive data, where
applicable; - detailed information on the informed consent procedures that
will be implemented in regard to the collection, storage and protection of
personal data; - justification in case of collection and/or processing of
personal sensitive data; - explicit confirmation that the data used are
publicly available; - in case of data not publicly available, the provision of
relevant authorisations; - detailed information on the use of secondary data
to demonstrate full compliance with ethical principles and applicable data
protection laws.
## 6\. OTHER ISSUES
All organisations related to the project are adapting their processes to GDPR,
which makes this Data Management Plan subject to changes. Next versions will
update the situation.
The Impact Assessment framework is, at the time of this writing, still being
developed, which will influence the data collection methodology. A document
circulated among the Consortium members will outline the evaluation strategy
of the Startup Lighthouse project that has the objective to assess the project
activities and results in different levels, including both the quantitative
and qualitative variables, while in parallel a policy related framework will
be formulated to assess the involved ecosystems.
Besides the literature research for identifying the proper measures and
standards for assessing each ecosystem, key players and participants will be
selected to participate in semi-structured interviews, surveys through
questionnaires and they will provide testimonials, upon their approval, to
analyse and identify the potential and existing barriers of each local
ecosystem. The following image summarises the key collection moments for each
performed activity, the different stakeholder categories and the method of
data collection. This is the initial plan:
Any changes related to the Impact Assessment and data gathering, analysing and
processing will be thoroughly described at D1.2 Annual progress report. The
overall framework and the results will be fully described in D1.3 Impact
Assessment and policy recommendations report, which will be submitted in M24.
Before the submission of the final deliverable, some preliminary results will
be publicly available on the project website and they will be disseminated to
any interested parties. The participants of this research will be fully
informed about their participation, the withholding of their data and their
right to withdraw their data by filing a request. All participants will be
asked to sign a written consent form before proceeding.
Responsibility for the data protection compliance remains within the Project
Coordination team.
<table>
<tr>
<th>
**KPI (original)**
</th>
<th>
**KPI (simple)**
</th>
<th>
**Category**
</th> </tr>
<tr>
<td>
Connect over 100 ecosystem builders in each Deep Dive Week
</td>
<td>
# ecosystem builders DDW
</td>
<td>
Attendance
</td> </tr>
<tr>
<td>
Attract over 20 investors to each Deep Dive Week, to a total of 160 investors
participating
</td>
<td>
# investors
DDW
</td>
<td>
Attendance
</td> </tr>
<tr>
<td>
Have more than 300 investors participating in on-site activities
</td>
<td>
# investors total
</td>
<td>
Attendance
</td> </tr>
<tr>
<td>
Organisations and relevant individuals participating as mentors / 50
</td>
<td>
# mentors
</td>
<td>
Attendance
</td> </tr>
<tr>
<td>
Attract more than 40 prospective investors to STARTUP LIGHTHOUSE’s Among
Investors events on digital investments
</td>
<td>
# investors
Among Investors
</td>
<td>
Attendance
</td> </tr>
<tr>
<td>
Showcase 60 of the best STARTUP LIGHTHOUSE startups to top EU tech events
</td>
<td>
# startups
Europass
</td>
<td>
Attendance
</td> </tr>
<tr>
<td>
Award 20 startups with the STARTUP LIGHTHOUSE award on 2 major tech events
</td>
<td>
# startups
Awards
</td>
<td>
Attendance
</td> </tr>
<tr>
<td>
Organise 3 scouting missions beyond Europe to 30 of the selected startups
</td>
<td>
# scouting missions
</td>
<td>
Coordination
</td> </tr>
<tr>
<td>
STARTUP LIGHTHOUSE expects that, out of its financial targets, 10% will be
achieved in collaboration with the European Structural & Investment Funds
(ESIF) or supported actions.
</td>
<td>
% investment raised from ESIF
</td>
<td>
Coordination
</td> </tr>
<tr>
<td>
Events co-organised / 10
</td>
<td>
# events within
DDWs
</td>
<td>
Coordination
</td> </tr>
<tr>
<td>
Build an online community with over 3000 members of ecosystem builders from
across Europe
</td>
<td>
# members
total
</td>
<td>
Hubs
</td> </tr>
<tr>
<td>
Build an online community with more than 500 potential startup
investors/customers
</td>
<td>
# members investors
</td>
<td>
Hubs
</td> </tr>
<tr>
<td>
Support selected startups obtain over 2000 investment, partnership or
customers leads
</td>
<td>
# leads total
</td>
<td>
Leads
</td> </tr>
<tr>
<td>
Support selected startups obtain over 500 new international customer leads
</td>
<td>
# leads customer
</td>
<td>
Leads
</td> </tr>
<tr>
<td>
Support selected startups obtain over 500 investment leads
</td>
<td>
# leads investor
</td>
<td>
Leads
</td> </tr>
<tr>
<td>
Set up over 100 meetings between startups and potential investors/customers
</td>
<td>
# meetings startups<>investors
</td>
<td>
Meetings
</td> </tr>
<tr>
<td>
Physical meetings with public authorities / 10
</td>
<td>
# meetings public authorities
</td>
<td>
Meetings
</td> </tr>
<tr>
<td>
Support selected startups to: Develop over 100 adapted products/services to
new markets
</td>
<td>
# new markets
</td>
<td>
Results
</td> </tr>
<tr>
<td>
Support selected startups to: Raise their turnover collectively over 50% by
the end of the project
</td>
<td>
% turnover increase
</td>
<td>
Results
</td> </tr>
<tr>
<td>
Support selected startups to: Create over 500 new jobs
</td>
<td>
# jobs created
</td>
<td>
Results
</td> </tr>
<tr>
<td>
Support selected startups raise over €50m in total investment
</td>
<td>
# investment raised
</td>
<td>
Results
</td> </tr>
<tr>
<td>
Identify and support the 120 best upcoming scale-ups in Europe
</td>
<td>
# startups selected
</td>
<td>
Selection
</td> </tr>
<tr>
<td>
Number of Applications to exceed 600
</td>
<td>
# applications
</td>
<td>
Selection
</td> </tr>
<tr>
<td>
Secure sponsorship to run at least 2 Deep Dive Weeks after the project’s end
</td>
<td>
# sponsorship
DDW
</td>
<td>
Sustainability
</td> </tr>
<tr>
<td>
Secure sponsorship to run at least 2 STARTUP LIGHTHOUSE activities (workshops,
matchmaking, pitching competition, mentoring, etc.) after the project’s end
</td>
<td>
# sponsorship activity
</td>
<td>
Sustainability
</td> </tr>
<tr>
<td>
Develop an outreach campaign that reaches 1,000,000 ecosystem players,
builders and EU citizens - showcasing the impact of STARTUP LIGHTHOUSE and
Startup Europe
</td>
<td>
# views/clicks
</td>
<td>
Visibility
</td> </tr>
<tr>
<td>
Mass media publications / 200
</td>
<td>
# media publications
</td>
<td>
Visibility
</td> </tr>
<tr>
<td>
Unique visitors / 1,000,000
</td>
<td>
# unique visitors
</td>
<td>
Visibility
</td> </tr>
<tr>
<td>
Social media interactions / 100,000
</td>
<td>
# SM
interactions
</td>
<td>
Visibility
</td> </tr> </table>
| https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0410_Urban_Wins_690047.md |
# INTRODUCTION
## Context and objectives
The Urban_Wins project (“Urban metabolism accounts for building Waste
management Innovative Networks and Strategies”) is financed by H2020 (project
no. 690047) and is implemented by the Municipality of Cremona, as
coordinator, in partnership with 26 Austrian, Italian, Portuguese, Romanian,
Spanish and Swedish waste stakeholder partners.
The Data Management Plan (DMP) describes the management of all the data and
data sets that will be collected, processed, generated, curated and preserved
during and after the project ends, as well as the openness of the data to the
general public.
Post-modern societies are characterized by an exponential increase in data,
whilst their use and re-use remain more or less stable due to the scarce
adoption of accepted and trusted standards. Yet those standards form a key
pillar of science, because they enable the recognition of suitable data. To
ensure this, agreements on standards (where applicable), quality levels and
data management best practices have to be negotiated and agreed within
research projects. Strategies have to be defined to preserve and store the
data over a defined period of time in order to ensure their availability and
re-usability, this being one key objective of the DMP.
Therefore, a particular focus of the DMP is the **research data** used and /
or generated in the project. In this sense, the DMP aims to describe research
data with the attached metadata to make them discoverable, accessible,
assessable, usable beyond the original purpose and exchangeable between
researchers.
According to the “Guidelines on Open Access to Scientific Publication and
Research Data in Horizon 2020” (2015): “ _Research data refers to information,
in particular facts or numbers, collected to be examined and considered as a
basis for reasoning, discussion, or calculation. In a research context,
examples of data include statistics, results of experiments, measurements,
observations resulting from fieldwork, survey results, interview recordings
and images. The focus is on research data that is available in digital form._
"
The project coordinator, Municipality of Cremona, has elaborated this DMP. It
will be implemented in partnership with the WP leaders and the rest of the
Consortium.
The project Grant Agreement, DoA and Consortium Agreement represent the
baseline to which all partners have to refer to when implementing the current
plan. In fact, some contents of the DMP have been extracted from the above
three reference documents whilst others have been specifically developed for
this deliverable in order to ensure compliance.
The DMP follows the H2020 recommendations regarding the data management plan
and policy for the projects participating in the Open Research Data Pilot even
if Urban_Wins is not participating in the respective programme.
The DMP is not a fixed document, but evolves during the lifespan of the
project. The DoA makes reference to a unique DMP deliverable available in M1
of project implementation (the first version of the document submitted in July
2016). During the project implementation, several amendments will be made to
the present DMP and shared with the EC. The present document represents the
2nd version of the DMP and it has been issued in August 2018.
## Audience
The DMP is mainly targeting the project WP and task leaders and should be
constantly consulted during the implementation of the actions and tasks in
order to properly manage the data management issues. Moreover, WP leaders are
expected to periodically revise the data management aspects related to their
WP, by consulting the task leaders.
Secondly, the plan is addressed to all the personnel of the partners involved
in Urban_Wins who should be aware of the data management principles and
procedures when implementing the project actions.
Indirectly, the DMP addresses the research institutions and other
organizations interested in using the data gathered / produced in Urban_Wins.
# DATA MANAGEMENT PROCEDURES
## General observations
Urban_Wins is a 36-month project involving 27 public and private partners from
6 European countries, and has a total budget of approx. 5 million EUR. The
financial and organizational complexity of the project is enhanced by its
ambitious objective: to develop and test methods for designing and
implementing innovative and sustainable Strategic Plans for Waste Prevention
and Management. In order to reach its objectives, the project will make use
and generate a large array of data (such as urban data, data coming from
surveys, personal data, etc.) whose management will be realized through a
large variety of procedures, tools and processes described in the present
document.
## Data management policy principles
The DMP is guided by the following general principles that shall be followed
in the implementation of the project actions:
* Data is a public good and should be made openly available;
* The partners will make use of the most appropriate community standards and best practices linked to data management;
* Data should be discoverable, accessible and interoperable to specific quality standards;
* Data should be assessable and intelligible;
* Quantitative and qualitative data obtained in the project will be made public, preserving the anonymity of the contributors, or centralized in final forms;
* MFA data will respect the secrecy issues of the issuing institutions;
* Data protection and privacy will be fully respected. The personal data that will be collected during the project will be shared only with the EC in order to fulfill the project obligations and will not be made public;
* Data of long-term value shall be carefully preserved;
* Metadata is strategic in order to ensure the discoverability of and access to data;
* The constraints (legal, ethical and commercial) on the data that is released shall be fully analyzed;
* Embargo periods delaying data release shall be considered each time it is necessary to protect the effort of the creators;
* Cost-effective use of public funds for R&I will be ensured.
## Data generation and collection
The DMP applies to two types of data:
1. the data, including associated metadata, needed to validate the results presented in scientific publications, to be made available as soon as possible (including personal data);
2. other data, including associated metadata according to the individual judgment of the project partners.
Based on a preliminary analysis realized during the project submission stage,
the project will generate / collect the following types of data:
### Table 1 – Data collected / generated by Urban_Wins
<table>
<tr>
<th>
**Description of data**
</th>
<th>
**Associated WPs**
</th> </tr>
<tr>
<td>
statistics on urban data such as water, soil and material consumption, waste
generation, air particulates etc., as well as other economic, environmental,
health and social data necessary for the analysis of urban metabolism in 24 EU
cities
</td>
<td>
WP1
WP2
</td> </tr>
<tr>
<td>
data on urban waste prevention and management strategies across 6 EU
countries
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
material flows and socio-economic indicators for the 8 pilot cities
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
qualitative data from the stakeholder members of the online and physical
agoras collected during the meetings and surveys
</td>
<td>
WP1 – WP6
</td> </tr>
<tr>
<td>
personal data (name, email, photograph, phone number) from the respondents
to online questionnaires and interviews
</td>
<td>
WP1, WP3
</td> </tr>
<tr>
<td>
personal data (name, email, photograph, phone number) from the members of
the agoras (online and physical) and of community activators
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
name and email address of the subscribers to the newsletters
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
personal data (name, email, photograph, phone number) from the participants
at the project public events (EU and national conferences, webinars etc.)
</td>
<td>
WP3 - WP8
</td> </tr> </table>
The datasets collected / generated by the project will be detailed by the WP
leaders in the first six months of the project implementation, following the
model presented in Annex 1 and below, and will be periodically updated:
### Data sets analysis per deliverable
<table>
<tr>
<th>
_WPs_
</th>
<th>
_1_
</th>
<th>
_2_
</th>
<th>
_3_
</th>
<th>
_4_
</th>
<th>
_5_
</th>
<th>
_6_
</th>
<th>
_7_
</th>
<th>
_8_
</th> </tr>
<tr>
<td>
**Data set reference and name**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Standards and**
**metadata**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and**
**preservation (including storage and backup)**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Indications for other types of data,**
**except data sets**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
The DMP includes specific requirements concerning the management of the
personal data of the individuals involved in the project actions, derived,
among others, from the EC Ethics requirements raised during the evaluation
process.
The DMP leaves open to the WP leaders the procedures for the handling, use,
accessibility and preservation of other types of data, including data sets.
These data, as well as the associated procedures, will be agreed with the WP
leaders during the periodical revisions of the information in Annex 1. The
revision of Annex 1 by the WP leaders will be realized, if applicable, during
the quarterly WP technical reports. The technical reports will address issues
linked to the realization of the relevant tasks and actions: state of the
activities, quality issues, communication and dissemination issues,
involvement of personnel, etc. The data management issues raised by the
reports will be analysed by the PC and discussed with the PTC and, where
necessary, the PEB, depending on the situation.
The WP leaders can update Annex 1 at any time.
## Data exploitation / sharing / accessibility and re-use aspects
The data collected and generated by the project will generally be made widely
open to the general public in order to be exploited, shared and re-used.
In order to enable the re-use of data, depending on the data collected, the
Consortium partners will use means easily accessible to the general public.
When defining and agreeing on the data management procedures, the WP and task
leaders will ensure that the project data is:
**Discoverable**
* are the data and associated software produced and/or used in the project discoverable (and readily located), identifiable by means of a standard identification mechanism (e.g. Digital Object Identifier)?

**Accessible**
* are the data and associated software produced and/or used in the project accessible, and in what modalities, scope and licenses (e.g. licencing framework for research and education)?

**Assessable and intelligible**
* are the data and associated software produced and/or used in the project assessable for and intelligible to third parties in contexts such as scientific scrutiny and peer review (e.g. are the minimal datasets handled together with scientific papers for the purpose of peer review; is data provided in a way that judgments can be made about their reliability and the competence of those who created them)?

**Usable beyond the original purpose for which it was collected**
* are the data and associated software produced and/or used in the project useable by third parties even a long time after the collection of the data (e.g. is the data safely stored in certified repositories for long-term preservation and curation; is it stored together with the minimum software, metadata and documentation to make it useful; is the data useful for the wider public needs and usable for the likely purposes of non-specialists)?

**Interoperable**
* are the data and associated software produced and/or used in the project interoperable, allowing data exchange between researchers, institutions, organisations, countries, etc. (e.g. adhering to standards for data annotation and data exchange, compliant with available software applications, and allowing recombinations with different datasets from different origins)?
Public data will be made available through the project website. For many
previous European projects, it has been difficult to reuse the findings
because the websites have closed down after the projects’ end dates.
The Urban_Wins website will be planned in such a way that, before the project
ends, a post-project phase version will be created on the Municipality of
Cremona website to facilitate access to data, unrestricted in time.
## Data preservation aspects
In the short term (within a maximum of 6 months after the kick-off of the
project), each partner will define internally:
* the back-up procedures (security / storage) for the data for which it is responsible (as Task / WP leader);
* the responsibilities for data management and curation within its research team;
* the data management support that it might require from the PC / WP leader.
All Consortium shared data will be stored in secure environments at the
premises of Consortium partners, with access privileges restricted to the
relevant project partners. Processing and use of data follows the General Data
Protection Regulation (GDPR), which entered into force in May 2018.
The data gathered by a Consortium member remains in the care of that member
and will not be distributed to any party outside the Consortium during the
lifespan of the project. At the end of the funding, all the data collected and
generated in the project will be stored in the institutional repository of the
Municipality of Cremona, which will be responsible for preserving it for at
least 5 years after the project ends.
The PC and WP leaders will take all the measures to make it possible for third
parties to access, mine, exploit, reproduce and disseminate — free of charge
for any user — the data, including associated metadata, needed to validate the
results presented in scientific publications as well as the data collected and
generated in the project.
## Roles and responsibilities
The following project actors are involved in the coordination and
implementation of the DMP:
1. **Project coordinator** (Mara Pesaro), supported by the **project assistant** (Daniele Gigni), is responsible for the overall administrative and financial management of the project and for reporting and communication to the European Commission and:
* ensures the coordination and carries responsibility for the data management issues;
* develops the DMP;
* periodically updates the DMP with the inputs from the WP leaders or with various issues raised by the other partners;
* represents the final decision body concerning the data management issues;
* supervises the storage of the project data in the institutional repository and preserves it for at least 5 years after the project ends.
2. **PEB**, consisting of one representative from each partner, ensures, in close partnership with the PC, the monitoring and assessment of the actual progress of the project and:
* provides feedback on the data management issues raised by the WP leaders, PC or any other partner;
* specifically addresses data management issues during the online and face-to-face PEB meetings.
3. **PTC** , composed of WP leaders, coordinates the technical progress in order to ensure WP goals are met on time and within the budget restrictions and:
* ensures the management of the data in partnership with the WP and task leaders;
* monitors the data management within each WP and proposes corrective measures;
* reports every 3 months on the research progress to the PEB, including data management aspects;
* defines, accompanies and reviews WP and tasks scope and execution according to project objectives and findings and provides support for the identification and management of data.
4. **WP leaders** (WP 1 – CTM, WP 2 – Chalmers, WP 3 – NOVA.ID.FCT, WP 4 – Ecosistemi, WP 5 – Iuav, WP 6 – Ecoteca, WP 7 – Cremona, WP 8 – ICLEI):
* manage the WP associated data in partnership with the task leaders;
* fill in the preliminary data in Annex 1 for the completion of the current plan and provide further details to complete the Annex within the first 6 months of project implementation;
* periodically update Annex 1 at the request of the PC or whenever considered appropriate (including during the project quarterly reports);
* report to the PTC and the PEB during the periodical meetings, including on data management issues;
* inform the PC about the progress of their work regularly, including on data management issues;
* can ask information from the project partners involved in the respective WP concerning data back-up procedures, roles and responsibilities;
* agree with the Task leaders upon the most suitable features for the open license of the deliverables;
* decide in consultation with the pertinent Task leaders the most suitable online repositories for the scientific publications.
5. **Project partners:**
* report data management issues to the WP leaders, PEB and PC whenever appropriate;
* ensure the back-up procedures (security / storage) for the data for which they are responsible;
* establish the responsibilities for data management and curation within their research teams;
* define the data management support that they might require from the PC / WP leaders.
## UPDATES
The DMP will be updated, if considered appropriate, during the project
lifetime. The updates will not take the form of deliverables, as this aspect
is not stipulated in the DoA. Updated versions of the DMP will however be sent
to the EC (project officer) for acknowledgement.
The DMP will be revised whenever significant changes arise in Urban_Wins, such
as:
* emergence of new data sets or of significant new type of collected / generated data;
* changes in Consortium policies;
* external factors.
Specific evaluations of the DMP will be realized before the midterm and the
final project reviews, so that potential modifications can be reflected in the
PPRs.
# TECHNICAL DATA MANAGEMENT
As a general principle, Urban_Wins will provide open access to the general
public to its deliverables, (peer-reviewed) scientific publications and
research data.
## DATA MANAGEMENT FOR DELIVERABLES
In principle, all the project deliverables will be licensed under a
_Creative Commons license_. The license will enable users to freely copy,
distribute and use the respective deliverable, provided its source is
mentioned.
Each Task leader, upon consultation of the WP leader, will decide on the most
appropriate Creative Commons features to be applied to the respective
deliverable. In general, the WP and task leaders are advised to use the
following features: _https://creativecommons.org/licenses/by-nd/4.0/_
All the project deliverables of interest to the project stakeholders and
general public (technical documents) will be hosted on Urban_Wins platform,
from where they can be downloaded.
The management and communication documents elaborated under WP7 and WP8 will
have the status of “public document” but they will be sent under request and
not made available on the project platform.
The table below summarizes the project deliverables, associated WPs,
deliverable leaders, and due dates. According to the DoA, all deliverables
have a “PUBLIC” dissemination level.
<table>
<tr>
<th>
**No**
</th>
<th>
**Name**
</th>
<th>
**Delivery date**
</th>
<th>
**Month of delivery**
</th>
<th>
**Lead partner**
</th>
<th>
**WP**
</th> </tr>
<tr>
<td>
D.7.1
</td>
<td>
Executive project plan and procedures
</td>
<td>
2016/06
</td>
<td>
M01
</td>
<td>
Cremona
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
D.7.2
</td>
<td>
Risk Assessment and Contingency Plan
</td>
<td>
2016/06
</td>
<td>
M01
</td>
<td>
Cremona
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
D.7.3
</td>
<td>
Quality Management Plan, including impact monitoring plan and indicators
</td>
<td>
2016/06
</td>
<td>
M01
</td>
<td>
Cremona
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
D.7.4
</td>
<td>
Data Management Plan
</td>
<td>
2016/06
</td>
<td>
M01
</td>
<td>
Cremona
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
D.7.5
</td>
<td>
Societal Responsibility Management Plan
</td>
<td>
2016/06
</td>
<td>
M01
</td>
<td>
Cremona
</td>
<td>
WP7
</td> </tr> </table>
<table>
<tr>
<th>
D8.1
</th>
<th>
Dissemination and communication strategy
</th>
<th>
2016/07
(1st version)
</th>
<th>
M02
</th>
<th>
ICLEI
</th>
<th>
WP8
</th> </tr>
<tr>
<td>
D.3.1
</td>
<td>
Thematic, actor and country-oriented waste stakeholder matrixes, having the
stakeholder’s categorized maps as annexes
</td>
<td>
2016/09
</td>
<td>
M04
</td>
<td>
Ecosistemi
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
D.3.3.1
</td>
<td>
Syllabus for local coordinators training sessions on Active Collaborative
Methodologies
</td>
<td>
2016/10
</td>
<td>
M05
</td>
<td>
NOVA.ID.FCT
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
D8.3
</td>
<td>
City Match activities planned and ongoing
</td>
<td>
2016/11
(1st version)
</td>
<td>
M06
</td>
<td>
ICLEI
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
D.3.2
</td>
<td>
Online agoras spaces that integrate the project platform including smart
phone/tablet application with additional existing tools favored by desired
participants
</td>
<td>
2016/12
</td>
<td>
M07
</td>
<td>
RoGBC
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
D8.2
</td>
<td>
Sector Watch developed, set up and launched
</td>
<td>
2017/02
</td>
<td>
M09
</td>
<td>
ICLEI
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
D.1.1
</td>
<td>
Report outlining a comprehensive assessment of the best WMS, policies,
regulations, and summary for each city and nation involved
</td>
<td>
2017/04
</td>
<td>
M11
</td>
<td>
CTM
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
D.2.1
</td>
<td>
Model architecture. Design of the conceptual model and the different
components that constitute it, including definition of data requirements
</td>
<td>
2017/05
</td>
<td>
M12
</td>
<td>
Chalmers
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
D8.1
</td>
<td>
Dissemination and communication strategy
</td>
<td>
2017/05 (2nd version)
</td>
<td>
M12
</td>
<td>
ICLEI
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
D.1.2
</td>
<td>
Report covering the conclusions from the analysis of urban metabolism
variables and preliminary indications for the definition of Urban Models for
Strategic Waste Planning in selected cities
</td>
<td>
2017/06
</td>
<td>
M13
</td>
<td>
Chalmers
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
D.4.1
</td>
<td>
Methodological guidelines for the construction of the Strategic Planning
frameworks based on urban metabolism approach
</td>
<td>
2017/08
</td>
<td>
M15
</td>
<td>
IUAV
</td>
<td>
WP4
</td> </tr> </table>
<table>
<tr>
<td>
D.2.2
</td>
<td>
Urban Metabolism guide. Report with procedures to implement Urban Metabolism
analytical tool in European cities
</td>
<td>
2017/08
(1st version)
</td>
<td>
M15
</td>
<td>
Chalmers
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
D.5.1.1
</td>
<td>
Collaborative Methodology to
personalize the Urban Strategic Plan for each city
</td>
<td>
2017/09
</td>
<td>
M16
</td>
<td>
NOVA.ID.FCT
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
D.5.1.2
</td>
<td>
Eight evaluation Plans (one for each pilot city in its own language)
</td>
<td>
2017/09
</td>
<td>
M16
</td>
<td>
NOVA.ID.FCT
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
D.4.2
</td>
<td>
Strategic Planning frameworks for the 8 pilot cities
</td>
<td>
2018/03
</td>
<td>
M22
</td>
<td>
IUAV
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
D.5.2.1
</td>
<td>
Eight Urban Strategic Plans at “ground level” (one for each pilot city in its
own language)
</td>
<td>
2018/04
</td>
<td>
M23
</td>
<td>
IUAV
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
D.6.1
</td>
<td>
Corpus of at least 50 best practices concerning waste prevention and
management strategies
</td>
<td>
2018/04
</td>
<td>
M23
</td>
<td>
Global
Innovation
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D.2.3
</td>
<td>
Urban Metabolism case studies. Reports for each of the 8 cities that will be
subject to detailed study with quantification and analysis of their
Urban Metabolism
</td>
<td>
2018/05
(1st version)
</td>
<td>
M24
</td>
<td>
Chalmers
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
D8.1
</td>
<td>
Dissemination and communication strategy
</td>
<td>
2018/05 (3rd version)
</td>
<td>
M24
</td>
<td>
ICLEI
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
D.6.2
</td>
<td>
Guidelines for the use of UM, MFA and LCA analysis results in waste decision
making
</td>
<td>
2018/12
</td>
<td>
M31
</td>
<td>
Ecoteca
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D.6.3
</td>
<td>
Guidelines for the selection and implementation of adequate stakeholder
engagement techniques
</td>
<td>
2019/02
</td>
<td>
M33
</td>
<td>
Ecoteca
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D.3.3.2
</td>
<td>
Report on impacts of the participatory decision-making process
</td>
<td>
2019/03
</td>
<td>
M34
</td>
<td>
NOVA.ID.FCT
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
D.3.3.3
</td>
<td>
Report on effective stakeholder engagement practices
</td>
<td>
2019/03
</td>
<td>
M34
</td>
<td>
NOVA.ID.FCT
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
D.5.4.1
</td>
<td>
One transnational report on pilot actions (in English)
</td>
<td>
2019/03
</td>
<td>
M34
</td>
<td>
IUAV
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
D.5.4.2
</td>
<td>
Eight Roadmaps (one for each pilot city in its own language)
</td>
<td>
2019/03
</td>
<td>
M34
</td>
<td>
IUAV
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
D.5.4.3
</td>
<td>
EU Roadmap to recommendations
</td>
<td>
2019/03
</td>
<td>
M34
</td>
<td>
IUAV
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
D.6.4
</td>
<td>
Final version of the toolkit uploaded on the Urban_Wins platform
</td>
<td>
2019/04
</td>
<td>
M35
</td>
<td>
Ecoteca
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
D.2.4
</td>
<td>
Database of Urban Metabolism Flows
</td>
<td>
2019/05
</td>
<td>
M36
</td>
<td>
CEIFACOOP
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
D8.4
</td>
<td>
Project results exploitation plan
</td>
<td>
2019/05
</td>
<td>
M36
</td>
<td>
Cremona
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
D.2.2
</td>
<td>
Urban Metabolism guide. Report with procedures to implement Urban Metabolism
analytical tool in European cities
</td>
<td>
2019/05 (2nd version)
</td>
<td>
M36
</td>
<td>
Chalmers
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
D.2.3
</td>
<td>
Urban Metabolism case studies. Reports for each of the 8 cities that will be
subject to detailed study with quantification and analysis of their
Urban Metabolism
</td>
<td>
2019/05 (2nd version)
</td>
<td>
M36
</td>
<td>
Chalmers
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
D8.3
</td>
<td>
City Match activities planned and ongoing
</td>
<td>
2019/05 (2nd version)
</td>
<td>
M36
</td>
<td>
ICLEI
</td>
<td>
WP8
</td> </tr> </table>
## DATA MANAGEMENT FOR PUBLICATIONS
Urban_Wins will provide open access to its scientific information, including
publications, meaning that online access to its results will be free of charge
to the end-user and that the results will be reusable.
In the context of Urban_Wins (and of scientific projects in general),
'scientific information' means:
-> peer-reviewed scientific research articles (published in scholarly journals);
-> research data (data underlying publications, curated data and/or raw data).
### Peer-reviewed scientific research articles
Concerning its publications, Urban_Wins will use “gold” open access
publishing, by ensuring their publication in open access journals or in
journals that enable an author to make an article openly accessible.
The deliverables subject to scientific publication will be uploaded as
“machine-readable” electronic copies to the online repository (selected from
the OpenAIRE, ROAR or OpenDOAR centralized repositories) that best suits the
topics. Whenever possible, WP leaders will also aim to deposit, at the same
time as the publication, the research data needed to validate the results
presented in the deposited scientific publications ('underlying data'),
ideally in a data repository.
It is the responsibility of the WP leader, in cooperation with the Task
leaders associated to the deliverable subject to scientific publication, to
decide on the most suitable online repositories.
### Research data
Open access to research data refers to the right to access and reuse digital
research data under the terms and conditions set out in the GA and CA.
'Research data' refers to the information, in particular facts or numbers,
collected to be examined and considered as a basis for reasoning, discussion,
or calculation within Urban_Wins. The focus is on research data that is
available in digital form.
Within Urban_Wins, users can normally access, mine, exploit, reproduce and
disseminate openly accessible research data free of charge.
Specific criteria will apply for input data for the different methods, tools
and models to be used in WP1 and WP2 depending on the existence of data
collected by statistical institutes or other relevant stakeholders. It is
foreseen that most data will come from existing standard datasets, for
example, International trade statistics, Industrial production, but also from
third-party databases, for example, LCI databases. This imposes restrictions
on the publication of input data: micro-data is subject to statistical secrecy
when only a few individuals or companies fall within the same category, and
commercial data cannot be reproduced or publicly displayed. Hence, such data
is only available for research purposes. Regarding output data, the datasets
produced by Urban_Wins will be made publicly available to the largest extent
possible, wherever no conflicts with the input datasets exist. The respective
input and
output data will be highlighted in the “Data sets per deliverable” template
that will be filled in by WP leaders.
Further information can be consulted in the _Guidelines on Open Access to
Scientific Publications and Research Data in Horizon 2020_.
## TECHNICAL DATA NAMING
Documents will be shared between Urban_Wins participants in an internal share
point / working space hosted by the Urban_Wins platform, made available within
the first 6 months of project implementation.
In order to ensure a reliable system for tracing documents and their different
versions, a document naming system is introduced.
In this sub-chapter, the naming procedures for deliverables and reports, as
well as for documents related to events taking place on specific dates are
described. This category applies to all the working documents that are to be
created during the Urban_Wins project, such as deliverables or internal
reports and working documents.
These documents will be named as indicated below:
_Urban_Wins_<document name>_<version>-<revision>.extension_
For deliverables, the deliverable number (DXX) will be used as the document
name.
The initial version of every document will be version 00 and revision 00. The
document will be processed and the changes will be saved as revision 01,
revision 02, etc.
Once the document is considered definitive, it will be saved as version 01
revision 00. In the case of those documents that have to be approved by the
PC, PEB, PTC, etc., e.g. deliverables, the version 01 revision 00 of it will
be sent to the respective bodies.
If the PC, PEB or PTC considers that any modification has to be done, the
changes will be saved as “version 01 revision 01”, revision 02, etc.
For those documents that do not have to be approved by the PC (e.g. internal
documents), the final version will also be saved as version 01 revision 00.
If, later on, the document has to be modified or updated, it will similarly be
saved as “version 01 revision 01”, revision 02, etc.
The name of the partner is optional and should be used for documents that are
handled by different partners. Example of naming (when Ecosistemi makes
changes to a draft of this document): _Urban_Wins_D11_00-03_Ecosistemi.doc_
Documents related to **events that take place on a given date**, e.g. minutes
of meetings, workshop agendas, etc., will be named as follows:
_Urban_Wins_<date>_<document name>_<version>-<revision>.extension_
The date will begin with the full year, followed by month and day, each
separated with dashes (e.g. 2016-03-26). This allows accurate chronological
ordering of documents, ascending or descending, in the file system of various
operating systems.
The version naming follows the same procedure as the one described above.
The name of the partner is optional and should be used for documents that are
handled by different partners. Example of naming:
_Urban_Wins_2016-07-07+08_Kick-off-meetingminutes_01-00.doc_
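The naming conventions above can be sketched as a small helper (a hypothetical illustration; the function names below are not part of the project toolchain, and the "+08" double-date suffix from the example above is omitted for simplicity):

```python
from datetime import date

PROJECT = "Urban_Wins"

def deliverable_name(doc: str, version: int, revision: int,
                     partner: str = "", ext: str = "doc") -> str:
    """Build a deliverable/working-document name; the optional partner
    suffix is used when a document is handled by different partners."""
    partner_part = f"_{partner}" if partner else ""
    return f"{PROJECT}_{doc}_{version:02d}-{revision:02d}{partner_part}.{ext}"

def event_document_name(event_date: date, doc: str,
                        version: int, revision: int, ext: str = "doc") -> str:
    """Build an event-document name; the ISO date prefix (YYYY-MM-DD) keeps
    files chronologically sortable in any file system listing."""
    return f"{PROJECT}_{event_date.isoformat()}_{doc}_{version:02d}-{revision:02d}.{ext}"

print(deliverable_name("D11", 0, 3, partner="Ecosistemi"))
# Urban_Wins_D11_00-03_Ecosistemi.doc
print(event_document_name(date(2016, 7, 7), "Kick-off-meeting-minutes", 1, 0))
# Urban_Wins_2016-07-07_Kick-off-meeting-minutes_01-00.doc
```

Zero-padding the version and revision to two digits, as in the convention, also keeps revisions of the same document sorted correctly by plain filename order.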
# PERSONAL DATA MANAGEMENT
Urban_Wins actions involve the collection and the management of personal data
from various individuals, as well as the analysis of behavioural data within
the urban metabolism context. This aspect constitutes the object of various
Ethics requirements that have been raised during the evaluation process and on
which the PC needs to provide clarifications within M2 and more generally
throughout the project.
The management of personal data will be realized by respecting the principles
of intelligible consent, confidentiality, anonymization and other ethical
considerations, where appropriate. No sensitive data will be collected.
The table below summarizes the type of personal data that will be collected,
the associated WPs and the partners involved:
<table>
<tr>
<th>
**Type of personal data collected**
</th>
<th>
**Associated**
**WP**
</th>
<th>
**Responsible partner**
</th> </tr>
<tr>
<td>
Name, email, photograph, institution for the members of the online agoras and
the online community in general
</td>
<td>
WP3
</td>
<td>
Cremona
Marraiafura
</td> </tr>
<tr>
<td>
Name, email, phone number, institution for the members of the physical agoras
</td>
<td>
WP3 – WP5
</td>
<td>
Municipalities of Leiria, Torino, Cremona, Sabadell, Manresa, Bucharest, Città
Metropolitana di Roma
</td> </tr>
<tr>
<td>
Name, email, phone number, institution for the participants at the project
communication and dissemination events (kick off conference, final national
conferences)
</td>
<td>
WP8
</td>
<td>
Cremona, Ecoteca,
Ceifacoop, CTM, SERI,
Chalmers
</td> </tr>
<tr>
<td>
Email (and eventually name, phone number and affiliation) of the persons
participating in project surveys
</td>
<td>
WP1
WP3
</td>
<td>
Coimbra Ecosistemi
</td> </tr>
<tr>
<td>
Name and email of the subscribers to the newsletters
</td>
<td>
WP8
</td>
<td>
ICLEI
</td> </tr>
<tr>
<td>
Urban dwellers behavioural data
</td>
<td>
WP1
WP2
</td>
<td>
Chalmers
</td> </tr> </table>
Each partner will realize the management of personal data in accordance
with the _EC General Data Protection Regulation (GDPR)_ and the applicable
national legislation. Specific attention should be paid to the following
aspects of the GDPR:
* Personal data definition: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/what-personal-data_en_
* Data processing: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/what-constitutes-data-processing_en_
* Data collection rules: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/principles-gdpr/what-data-can-we-process-and-under-which-conditions_en_
* Data to be provided to individuals: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/principles-gdpr/what-information-must-be-given-individuals-whose-data-collected_en_
* Before and after 25th May 2018: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/legal-grounds-processing-data/grounds-processing/does-consent-given-25-may-2018-continue-be-valid-once-gdpr-starts-apply-25-may-2018_en_
* Validity of consent: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/legal-grounds-processing-data/grounds-processing/when-consent-valid_en_
* Data storage duration: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/principles-gdpr/how-long-can-data-be-kept-and-it-necessary-update-it_en_
* Data processing halt: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/rights-citizens/my-rights/can-i-ask-company-organisation-stop-processing-my-personal-data_en_
* Public authorities: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/public-administrations-and-data-protection/what-are-main-aspects-general-data-protection-regulation-gdpr-public-administration-should-be-aware_en_
* Scientific research: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/legal-grounds-processing-data/grounds-processing/how-consent-processing-scientific-research-obtained_en_
* Data Protection Officer: _https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/obligations/data-protection-officers/does-my-company-organisation-need-have-data-protection-officer-dpo_en_
* Data Protection Authorities: _http://ec.europa.eu/justice/article-29/structure/data-protection-authorities/index_en.htm_
Each partner involved in the management of personal data will be required, in the first two months of project implementation, to provide a “Data management declaration” stating that the respective organisation “will handle personal data according to the national legislation, respecting international and EU laws and in compliance with ethical principles, and that no sensitive data are involved”.
## Informed consent from the participants
Before the start of any activity, Urban_Wins participants need to provide
their informed, intelligible consent concerning the applicable procedures for
the personal data collection, storage, protection, retention and destruction.
An “Informed consent form” that will be applied to all the human participants
in the project (both to the online and face-to-face activities) is provided as
Annex 2 in the current document.
The PC will be entitled to request at any time information and supporting
documents concerning the internal procedures for the storage, protection,
retention and destruction of the personal data of the Consortium partners
involved in the management of personal data.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0414_NewsEye_770299.md
|
# Data Summary
An important objective of the NewsEye project is to develop methods and tools
for the effective exploration and exploitation of European newspaper data (see
_https://www.newseye.eu/_ ). This will be done using new technologies and “big
data” approaches, combining “close” and “distant reading” methods from the
digital humanities. This aims to improve the way of studying the European
cultural heritage not only for researchers and experts but also for the
general public.
To this purpose, NewsEye will collect data and metadata related to newspapers
from 3 European libraries (the National Library of Finland, the National Library of France and the National Library of Austria). The collected data that will form the test cases represent about 1.5 million pages of newspapers in 4 different languages (Finnish, French, German and Swedish), covering the period 1850-1950. This dataset consists of 19 different newspaper titles.
## NewsEye workflow and input data
The NewsEye project follows a typical digitization workflow: starting from simple image files and some general metadata, the images are processed with layout analysis (LA), automated text recognition (ATR) 1 , article separation (AS) and named entity recognition (NER). Furthermore, named entities are linked with external sources (named entity linking) and enriched with properties such as ‘stance’. Finally, these data are used to set up powerful ways to access the collection, not only by searching but in an interactive and adaptive way which takes into account the full wealth of the data. The simple image, together with some metadata, is therefore the starting point of the whole workflow. In our case, text data from former ATR campaigns carried out by the participating libraries will also be available, both as a starting point and as a reference for comparing results achieved in the project with the state of the art. This section describes in more detail which data were available for the NewsEye project in the participating libraries and how we tackled the collection process itself.
## Data types and formats
The main objective of the NewsEye project is to make data openly accessible
via an open platform, which will be conforming to the linked data requirements
through the use of international standards (RDF, JSON-LD, IIIF, XML). This
data will correspond to the ground truth data and the results of the project.
As a matter of fact, the digital library community has been dominated over the last 15-20 years by the XML-based standards set up by the Library of Congress. The most important are:
* MODS: Metadata Object Description Schema
* METS: Metadata Encoding and Transmission Standard
* ALTO: Analyzed Layout and Text Object
These standards are used worldwide, including by the participating libraries in NewsEye, which preserve and manage their data using them.
However, in recent years a new development was initiated by some well-known libraries which gathered under the umbrella of the ‘International Image Interoperability Framework’ (IIIF). The main difference from the conventional XML schemas is the shift in perspective: instead of starting with the concept of ‘metadata’, as is natural for analogue libraries, the image itself, or in the notion of IIIF the ‘canvas’, is the main focus.
At the NewsEye kick-off meeting in La Rochelle it became very clear among the consortium members that the NewsEye project, especially in its role as a vanguard of future digital library applications, should move in this direction. This does not, of course, mean that METS/ALTO are outdated or no longer usable, but rather that the main distribution format with which we describe data and metadata within the project will be IIIF together with the web annotation framework from W3C. In this way, each of the tools, such as layout analysis, text recognition and named entity recognition, will provide an additional annotation layer on the source document (body/target).
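As an illustration of such an annotation layer, a W3C Web Annotation tagging a region of a IIIF canvas with a recognized named entity can be serialized as JSON-LD. The sketch below is a minimal example only; all URIs, identifiers and field values are hypothetical and do not reflect the actual NewsEye schema:

```python
import json

# Hypothetical example of one annotation layered on a IIIF canvas.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "https://example.org/annotations/1",
    "type": "Annotation",
    "motivation": "tagging",
    "body": {
        # The enrichment produced by a tool, e.g. a recognized place name.
        "type": "TextualBody",
        "value": "Wien",
        "purpose": "tagging",
    },
    "target": {
        # The IIIF canvas and the pixel region the entity was found in.
        "source": "https://example.org/iiif/newspaper-issue/canvas/p1",
        "selector": {
            "type": "FragmentSelector",
            "value": "xywh=120,340,80,24",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Each processing tool would emit its own set of such annotations, leaving the source image untouched.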
The NewsEye platform will then follow these recommendations in order to be
fully interoperable with existing tools and to ease the sharing of
information. The data and metadata produced by the NewsEye platform and which
will correspond to the results of the projects will be stored in an Apache
Solr platform built with Apache Lucene. Apache Lucene provides a Java-based
indexing and search technology, while the Solr tool provides a high-
performance search server with XML/HTTP and JSON/Python/Ruby APIs and many
other features.
Based on this architecture, the platform provides various API endpoints to extract data in all the formats used by the partners (RDF, JSON-LD, IIIF, XML), depending on the type of data and the international standards. Moreover, collaborations with the Impresso project are being conducted in order to propose a European standard for sharing the contents of historical newspapers.
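For instance, retrieving indexed articles from a Solr core goes through its standard HTTP select endpoint. The sketch below only constructs such a request URL; the host name, core name and field name are placeholders, not the actual NewsEye configuration:

```python
from urllib.parse import urlencode
# from urllib.request import urlopen  # would perform the actual request

def build_solr_query(base_url: str, core: str, query: str, rows: int = 10) -> str:
    """Build a Solr /select URL that returns JSON results."""
    params = urlencode({"q": query, "rows": rows, "wt": "json"})
    return f"{base_url}/solr/{core}/select?{params}"

# Hypothetical host, core and field names.
url = build_solr_query("https://example.org", "newseye", "fulltext:flood", rows=5)
print(url)
# A real client would then fetch and parse it, e.g. json.load(urlopen(url)).
```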
## Data storage
In principle, the NewsEye platform will only host metadata, computed from the
original data, and make results available and harvestable by the national
libraries. However, in case the owner of documents or data provides no
adequate API, the documents will be stored in the platform repository,
together with the metadata.
To this end, a centralized metadata repository (named the NewsEye
Demonstrator) will be built following current standards in metadata vocabulary
(data types and formats presented below). The internal data representation
within the NewsEye Demonstrator is based on Samvera/FEDORA.
After the first year of the project, datasets are provided by the libraries,
enriched by Transkribus and several other tools, analyzed with dynamic text
analysis methods and made available to users again via the NewsEye
Demonstrator and the Personal Research Assistant as a distinct part of the
final demonstrator.
Data will be available within the lifecycle of the project via public access
provided by the contents owner through APIs. One particular point of the
project is that it will mainly deal with public data, hosted by national
libraries or in public repositories.
The NewsEye project is partly connected to the READ Project 2 (more
specifically with the Transkribus tool and team) and some results may be
shared between them. However, the major part of data will correspond to
digitized newspapers hosted by the national libraries involved in the NewsEye
project.
The volume of data that needs to be stored on the demonstrator server varies between the partner libraries. When a library publishes images through an IIIF server, the only data that need to be stored and indexed are the text and the associated enrichments produced by the project (named entities, article separation, _etc._ ). The volume of the dataset used in the NewsEye project is about 1 TB. This figure includes the raw images, the raw ATR files, and the indexed data and metadata.
# FAIR data
In the NewsEye project, partners have agreed to aim at making their research
data findable, accessible, interoperable and reusable (FAIR). This is why the
project participates in the open research data pilot (ORDP) set by the
European Commission under the H2020 programme.
## Data and results accessible to NewsEye consortium partners
Making results and data accessible is one of the main objectives of the
NewsEye project. Bearing that in mind, many tools and guidelines have been
planned in accordance with the description of the action. First, a global
digital library application has been created in order to store metadata and
create standardized content between partners. This repository will directly be
integrated in the project demonstrator: the NewsEye digital library
demonstrator. It provides access to data via a web interface and a platform to
utilise and visualize the results of all NewsEye tools. It uses an open
architecture, extensible via APIs for applications and plugins. On top of
that, it works with any kind of storage facility, using international exchange
standards in order to possibly be harvested by the national libraries and
easing the access to such data/metadata for each partner in the project.
For existing data, not generated within NewsEye, such as the image data from
the national libraries, the data owner/data provider will specify how data
will be shared and made available. These aspects will be continuously
elaborated upon in deliverables of WP1. Besides, this work package will
describe the global data model and the way to access data.
## Making data findable, including provisions for metadata
The data produced during the NewsEye project will be metadata, which will be
made available via the centralised metadata repository, which will be built as
part of the NewsEye web portal platform. Each newspaper issue imported in the
NewsEye platform is identified by a unique and persistent identifier. The
metadata generated during the project will cite not only this “NewsEye ID”,
but also the original ID coming from the libraries. Solutions for
sustainability will be examined in detail as the core subject of task T7.4,
running from M13 to the end of the project. Moreover, a copy of all this data
will be accessible through the Zenodo platform at
_https://zenodo.org/communities/newseye/_ .
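A record carrying both the persistent project identifier and the library's original identifier could be as simple as the following sketch; the field names and values are illustrative only, not the project's actual data model:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class IssueRecord:
    """A newspaper issue with its project ID and the library's original ID."""
    newseye_id: str  # unique, persistent identifier minted by the platform
    source_id: str   # original identifier from the holding library
    library: str     # holding institution
    title: str       # newspaper title
    date: str        # ISO 8601 issue date

# All values below are hypothetical.
rec = IssueRecord(
    newseye_id="newseye:issue:000042",
    source_id="anno:19080101",
    library="ONB",
    title="Neue Freie Presse",
    date="1908-01-01",
)
print(asdict(rec))
```

Keeping both identifiers in every generated metadata record lets results be traced back to the source collection even if the platform is retired.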
Metadata will be collected from the results of WP1 through WP6. Then, it will
follow the model and the quality requirements defined in WP1 and WP8. Later,
the metadata will be included in the NewsEye platform regarding the specific
features of WP7. All these metadata will be provided under an interoperable
format, following international standards (such as JSON-LD, Linked Data, OAI-PMH, IIIF and XML). As stated in the previous section, metadata will be
produced under various formats depending on the need of each partner. In
practice, all data and metadata are stored in the Apache Solr server from the
NewsEye Platform. Apache Solr is highly reliable, scalable and fault tolerant,
providing distributed indexing, replication and load-balanced querying,
automated failover and recovery, centralized configuration and more.
Based on these international standards, the NewsEye platform will be fully
interoperable with all the partners of the project. This will make data
findable and easy to reach at an international level, allowing any institution
to harvest results from our project.
Finally, the source code of software and tools developed in the project will
be accessible through the different git repositories of the partners, to be
listed on the project Website.
## Making data openly accessible
We want to make data, produced in the NewsEye project life cycle, easy to
reach. A focus will be made on research data produced in the project, for
instance its datasets (especially including training data / ground truth for ATR, NER, event detection, etc.). As all research papers will be
built upon these data, and in order to share these data with research groups
outside the consortium, we intend to rely on the Zenodo platform for making
them available, both during and after the project. In accordance with the
consortium agreement, the choice of license to apply on each dataset will be
discussed by all partners linked to the dataset. Following the recommendations
of the first monitoring meeting we have already published 2 datasets on Zenodo
for the training of text recognition engines 3 , and trained word embeddings
of changing vocabulary in English, Dutch, Swedish and Finnish over 1750-1950
4 . Other datasets are under preparation.
Raw data are already openly accessible as they are available through the
online platforms of the national libraries. The associated metadata will be
available in a centralized portal embedded on the NewsEye website (
_https://newseye.eu/_ ). Data will be linked with associated metadata. The
adequate subset of data (depending on institutional policy) will be made
available via the NewsEye website and later through a long-term storage
platform which will be defined within task T7.4 on sustainability.
Software developed under NewsEye, e.g., software tools for processing data or
automatically interacting with data will be deposited on code repositories
(such as github/gitlab). Restricted-use data, software and code are recorded
in the NewsEye grant agreement, and may vary according to institutional and
national policies and legislation. In case of restricted use, metadata is still provided so that the data owner can be contacted. The access request will then depend on the data owner’s consideration, and full access to the data may be granted.
The raw data studied in this project (both raw images and ATR files) will be
accessible through the NewsEye demonstrator either through a web browser or
programmatically through an API. Users are requested to create an account in
order to access the original data along with additional metadata created
during the life cycle of the project (better ATR, article separation, named
entities, topics, _etc_ ). Part of the original data is under copyright and is
thus restricted to project members. However, produced metadata will still be
shared to the community as NewsEye participates in the open research data
pilot (ORDP) and will share what is produced within its framework.
## Making data interoperable
The NewsEye project aims to collect and document the data in a standardized
way. We must make sure that the datasets can be understood, interpreted and
shared in isolation alongside accompanying metadata and documentation. To this
purpose, the NewsEye Digital Library Demonstrator will contain all data
produced in the project and make them available in different ways and via
different channels to several user groups. Figure 1 details the data flow in
the project and illustrates how data are collected, exchanged and preserved
within NewsEye.
Figure 1: Data flow in NewsEye
The exact implementation of this system is expected for Y2 and Y3 where we
will be able to open up our workflows and tools towards interested libraries
or research groups also from outside the project. Some collaborations have already been established with the Impresso project in order to obtain uniform URLs and APIs at a European level. Such cooperation can also be intensified as
part of WP7 Demonstration, Dissemination, Outreach and Exploitation.
## Increasing data re-use (through clarifying licenses)
Most of the data used in the NewsEye project already belong to the project partners. Accordingly, all existing data will keep their existing license, or a license will be provided with such data.
Data collected under NewsEye will be made available for re-use upon completion
of the experiments. Data produced and made openly available under NewsEye will
be available for third parties, provided this does not contradict specific
rules, as specified in Section 2.3. In case of restricted use, metadata is still freely provided, enabling anyone to contact the data owner. The request will then be up for the data owner’s consideration, and depending on that decision full access to the data may be granted. The data will be available
for at least 5 years after the conclusion of the project. Data quality
assurance will be covered in deliverables of WP8.
All metadata produced in the framework of the NewsEye project will be made
public, using appropriate licenses. The choice of the license will be defined
after discussion between partners involved in the production and management of
such metadata. Special attention is given to the sustainability of the
produced data and metadata, which will be made available on the Zenodo
platform in order to ease its reusability.
In any case, the rules set out and agreed upon within the consortium agreement
shall prevail.
# Allocation of resources
On the one hand, the immediate costs anticipated to make the produced datasets
FAIR are related to hosting the NewsEye Demonstrator, which will be managed by
the University of La Rochelle (ULR) within task T7.1 of WP7. On the other
hand, a long-term deposit system for the datasets (data and metadata) will be
proposed within task T7.4 (for instance within the context of European
research infrastructures and/or through repositories such as Zenodo) for at
least 5 years after the conclusion of the project. It is noted that any
unforeseen costs related to open access research data in Horizon 2020 are
eligible for reimbursement during the length of the project under the
conditions defined in the Grant Agreement, in particular Article 6 and Article
6.2.D.3.
The costs for research data management are essentially covered by staff work within WP1 and WP7, estimated overall at 6 PM. The infrastructure
and hardware for running the NewsEye demonstrator are run and powered through
the IT services of ULR and thus not covered by direct costs of the project.
When it comes to the resources required for sustainability, full details will
be developed from M13 within task T7.4. The current plan is to make datasets
and code available through open, free and sustained repositories such as
Zenodo and github, while the platform would be taken over by a European
research infrastructure (advanced contacts have notably been made with
DARIAH), and ideally by future subsequent projects in the vivid application
domain of historical newspapers, some of which are currently being proposed by
NewsEye partners.
The University of La Rochelle is responsible for data management within the
NewsEye project and specifically for creating and updating the present data
management plan. The contact person is Mickaël Coustaty.
Each NewsEye partner must follow the policies set out in this DMP. Datasets
have to be created, managed and stored properly and in accordance with the
European and national legislation. Dataset approval, metadata registration and
data archival and sharing through repositories is the responsibility of the
partner that generates the data.
The PIs of each partner will have the responsibility of implementing the DMP
in their institution.
# Data security
The NewsEye project will be based on public archives hosted by the national libraries. No data generated within the project is therefore considered highly confidential, and data security regulations are not deemed critical in this project.
Following the completion of the project, all responsibilities concerning data
recovery and secure storage will be integrated with the dataset repository.
The centralised repository related to the demonstrator will be hosted by the University of La Rochelle, which will archive and preserve the data locally, using daily backup routines in operation under institutional policies. In detail, this server is managed by the IT services of ULR and follows the classic CIA triad (Confidentiality, Integrity, Availability):
* Confidentiality: only people from our project will have access to the data, and sensitive information (such as logs) is accessed only by authorized persons (i.e. the administrators of the server) and kept away from those not authorized to access it;
* Integrity: information will remain readable and correct. This will be implemented using hashing techniques to ensure that data remain the same compared to previous backups;
* Availability: information and resources will remain available to those who need them. This part is managed by the IT services of ULR, which provides 99% availability of the University's infrastructure through processes such as redundancy (RAID), an Intrusion Detection System and DDoS protection.
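The integrity check described above can be implemented by recording a cryptographic digest when a backup is written and re-computing it later. A minimal sketch, with a hypothetical file standing in for a real backup:

```python
import hashlib
import tempfile
from pathlib import Path

def file_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Record a digest at backup time; re-compute and compare on a later check.
with tempfile.TemporaryDirectory() as tmp:
    backup = Path(tmp) / "backup.bin"        # stand-in for a real backup file
    backup.write_bytes(b"solr index snapshot")
    stored = file_digest(backup)             # digest recorded at backup time
    assert file_digest(backup) == stored     # unchanged => integrity holds
```

A mismatch between the stored and the freshly computed digest signals that the backup was altered or corrupted.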
Partners are expected to adopt suitable tested backup strategies enabling full
data recovery in case of an unexpected event. The responsibility for data
security and long-term preservation lies within the institutions. The server
used for the NewsEye platform includes a backup strategy managed by the host
provider. Moreover, in order to ensure the improving quality of project
results, a backup of the Solr Index will be made before each major update of
the data / metadata.
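Such a pre-update snapshot can be triggered through Solr's replication handler, assuming it is enabled on the core. The sketch below only constructs the request URL; the host, core and snapshot names are placeholders:

```python
from urllib.parse import urlencode

def solr_backup_url(base_url: str, core: str, name: str) -> str:
    """URL asking Solr's replication handler to snapshot the index."""
    params = urlencode({"command": "backup", "name": name})
    return f"{base_url}/solr/{core}/replication?{params}"

# Hypothetical host, core and snapshot name.
url = solr_backup_url("https://example.org", "newseye", "pre-update-2020-01-01")
print(url)
# A real client would issue this GET before applying the major update.
```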
# Ethical aspects
The NewsEye project will mainly deal with the enrichment of public data. However, partners need to comply with the ethics and research integrity requirements described in Article 34 of the Grant Agreement.
Regarding the involvement of human participants, it will only occur for the
purpose of the demonstrator developed in task T7.1, where activity will be
logged to improve the performance of the personal research assistant. All
users will be informed of the data collection and its consequences. The
project strictly adheres to the General Data Protection Regulation (GDPR)
2016/679 of the European Parliament and of the Council of 27 April 2016, on
the protection of natural persons with regard to the processing of personal
data and on the free movement of such data 5 , as well as to national
regulations.
The full details on the implementations of ethics in NewsEye have been
delivered within the deliverables D9.1 to D9.5 of the Ethics work package
(WP9).
# Further work
Data management procedures are visible to NewsEye partners. In the near-
future, standardization of data management will be one important part of the
DMP through the provision of data models as developed in task T1.1 of WP1. The
other part will consist of setting guidelines, such as for the demonstrator
and the quality assurance plan as developed in WP7 (task T7.1) and WP8 (task
T8.3).
|
0417_INFERNET_734439.md
|
# INTRODUCTION
A new element in Horizon 2020 is the use of data management plans detailing what data a project generates and how these data are made accessible.
This document introduces the first version of the Data Management Plan (DMP)
of the INFERNET project funded by the European Union’s Horizon 2020 Program
under Grant Agreement #654206. The DMP describes the data management cycle for
datasets to be collected and/or generated by INFERNET. It covers:
1. What research data will be collected and/or generated by INFERNET;
2. The handling of research data;
3. The methodologies that will be applied;
4. Data-sharing policies;
5. Data curation and preservation policies.
Open data are becoming increasingly important for maximizing the excellence
and growth of the research activity in Europe. INFERNET is aligned with the
foundations of open data, namely
* building a software-defined toolkit in an open source project for inference in biological networks;
* building a permanent link to the open source community through case examples;
* sharing the data produced with the community.
Note that the primary data used in the scientific research of the INFERNET WPs come from public open databases such as UniProt and the Xfam suite provided by the European EMBL-EBI. This gives us the full possibility of redistributing processed data and results obtained on the basis of the primary data.
The INFERNET DMP primarily lists the different datasets that will be produced
by the project, the main exploitation perspectives for each of those datasets,
and the major data management principles. The purpose of the DMP is to provide
an analysis of the main elements of the data management policy that will be
used by the consortium. The DMP is however not a “crystallized” document and
it will evolve during the lifespan of the project.
The DMP regards all the data sets that will be collected, processed and/or
generated within INFERNET. To generate this DMP, the consortium created a data
management policy based on a) the elements that the EU proposes to address for
each data set and b) the specific capability of the consortium to address each
of those elements. The elements were then used to create a DMP template, which
was refined with all partners.
The structure of the document is the following:
* **Section 2** presents the General Principles to which the INFERNET consortium will adhere;
* **Section 3** details the types of data to be collected, processed and/or generated by the INFERNET consortium (which can be grouped as (a) research papers, (b) software codes and (c) data sets proper) and outlines the corresponding open data policies the consortium will follow;
* **Section 4** discusses the outreach strategies the consortium plans to implement;
* **Section 5** provides an overview of the content of the data collected and/or generated by the INFERNET consortium;
* **Section 6** draws conclusion and sets future goals.
The intended audience for this document is the INFERNET consortium and the
European Commission.
# GENERAL PRINCIPLES
## Aims of the Data Management Plan
This DMP aims at providing insight into the facilities and criteria employed
for the collection, generation, storage, dissemination and sharing of research
data related to the INFERNET project. In particular, the DMP will focus on
1. Embedding the INFERNET project in the EU policy on data management, which is increasingly geared towards providing open access to data that is gathered with funds from the EU;
2. Enabling verification of the research results of the INFERNET project;
3. Fostering the reuse of INFERNET data by other researchers;
4. Enabling the storage of INFERNET data in publicly accessible repositories;
The INFERNET project has a very broad understanding of the notion of “data”
(to be detailed in Section 3). In the following, we shall outline the basic
principles on which we designed the INFERNET DMP.
## Participation in the Pilot on Open Research Data
The INFERNET project participates in the Pilot on Open Research Data launched
by the European Commission along with the Horizon 2020 program.
The INFERNET consortium strongly believes that open access to research data
and publications is important within the context of responsible and
reproducible research and innovation, and agrees on the benefits that the
European innovation ecosystem and economy can draw from allowing reusing data
at a larger scale. Ensuring research data and publications can be openly and
freely accessed means that any relevant stakeholder can choose to cross-check
and validate whether research data are accurately and comprehensively reported
and analysed, and may also encourage the reuse of data. However, open access
to research data must comply with sound research ethics, ensuring for instance
that no directly or indirectly identifiable information is revealed.
## Intellectual Property Rights and Security
Project partners keep Intellectual Property Rights (IPR) on their technologies
and data. Accordingly, the INFERNET consortium will have to protect these data and consult the partner(s) concerned before publication.
IPR management is also concerned with preventing leaks or hacks of the data. Although we do not plan to collect human sample data, INFERNET will guarantee that, if the specific nature of a dataset so requires, secure protection will be applied to it.
A holistic security approach will be undertaken to protect the three main pillars of information security:
* Confidentiality,
* Integrity,
* Availability.
The security approach will consist of a methodical assessment of security
risks followed by an impact analysis. This analysis will be performed on the
personal information and data processed by the proposed model.
## Personal Data Protection
We are not planning to collect personal data such as full names, contact
details, background, etc. Should the development of the project require such
data, we will adhere to the EU's Data Protection Directive 95/46/EC 1 aiming
at protecting personal data. National legislations applicable to the project
will also be strictly followed, such as the Italian Personal Data Protection
Code 2. In such case, all data will be collected by the project after
providing subjects with full details on the experiments to be conducted, and
after obtaining signed informed consent forms from them.
# TYPES OF DATA HANDLED DURING THE PROJECT AND CORRESPONDING OPEN DATA
POLICIES IMPLEMENTED BY INFERNET
The INFERNET project will deal with three main sources of data that can be
subject to open data policies: _research papers_ , _software source code_ ,
_datasets_ . In the following we shall focus on how the INFERNET consortium
will handle each of these sources. It should however be emphasized that
INFERNET will also make use of secondary sources, including: literature
research, existing databases collecting experimental results, archives of
research papers (both preprints and published articles) and actively
maintained software repositories.
## Research papers
Research papers, published both in peer-reviewed journals and conference
proceedings, will be the main instrument to propagate our research
contributions to the appropriate audience. Where appropriate, INFERNET will
protect the intellectual property of the work prior to publication, but in
general, the consortium will privilege Open-Access journals. We will make
publications available through the project web portal and systematically use
other web resources such as preprint servers ( _e.g._ arXiv.org, bioRxiv.org), as is the tradition in physics research; this practice is gaining increasing acceptance in the editorial policies of specific journals.
Currently, there are two main strategies to implement open data access on
research papers: _gold_ and _green_ open data [1]. Following _gold_ open data,
researchers can publish in an Open Access (OA) free online access journal.
According to _green_ open data, instead, researchers can deposit a version of
their published works into a subject-based or institutional repository. As not
all journals today comply with gold open data standards, all INFERNET related
publications will be made publicly available following _green_ open data
standards. Our concrete strategy to comply with the _green_ open data standard
will be:
* **Self-Archiving** , i.e. the act of the author depositing a free copy of an electronic document online in order to provide open access to it. This is considered a reasonable route to make a research paper open data ( _green_ ). We have already deployed the INFERNET web site (http://www.infernet.eu), where published papers will be uploaded in compliance with the embargo period of the journal to which each article is submitted for publication.
* **Metadata:** Every INFERNET publication will be associated with metadata that describes the type and topic of the publication (abstract), as well as the original publisher, venue and Digital Object Identifier (DOI).
* **Public Archives:** Subject to the journal embargo period, we aim at disseminating INFERNET publications on open archives. In particular, arXiv ( _http://xxx.lanl.gov_ ) and bioRxiv ( _http://www.biorxiv.org_ ) will be the preferred online repositories.
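Such a publication metadata record can be kept as a simple structured document alongside the self-archived paper. The sketch below is illustrative only; every value, including the DOI, is hypothetical:

```python
import json

# Hypothetical metadata record for one INFERNET publication.
metadata = {
    "title": "Example inference method for biological networks",
    "abstract": "Short description of the type and topic of the publication.",
    "publisher": "Example Open Access Journal",
    "venue": "Example Conference 2018",
    "doi": "10.0000/example.0001",
}

# Serialize for upload next to the archived paper.
print(json.dumps(metadata, indent=2))
```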
## Software source code
All partners will be contributing to a public and centralized code management system. This makes the development of the project open and transparent to the public: not only will the results at the end of the project be released as open data, but the source code will also be open for the entire software life-cycle. In detail:
* **Centralized repository:** we will create a GitHub repository (_http://github.com_), linked from and to our project website. GitHub is currently the most popular public code management repository due to the large availability of options to fork/branch/merge versions of a software project, which enables third parties to easily extend the source code.
* **Long-term availability** will be guaranteed by the _cloud_ nature of the storage strategy implemented by the repository.
* **Licensing:** whenever possible we will license open source code under either Apache License 2.0 or GNU General Public License 3.0. Loosely speaking these licenses provide the user with the freedom to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software, under the terms of the license, without concern for royalties [2]. However, the intellectual property of the source code is kept: For instance, the Apache License requires preservation of the copyright notice and disclaimer, which are related to the project [3].
* **Code usability:** thanks to the helpful extensions of GitHub, the code will always be complemented by up-to-date documentation that will support the use of the code even beyond the lifetime of INFERNET.
## Data sets
In many cases, research publications will be associated with a dataset, either
as a source of information to extract novel observations or as a result of the
research process. Our aim is to provide in an open format all research data
needed to validate the results of the associated publications. Again, as in
the case of scientific publications, we will try to adhere as much as possible
to a _green_ open data standard.
* **Self-Archiving** : in analogy with what we already outlined for publications, datasets will be either directly uploaded, or referenced from the INFERNET website.
* **Availability**: The web site of the project will be maintained for 6 years beyond the natural duration of the project, guaranteeing long-term availability of the data.
* **Metadata and open formats:** Every INFERNET dataset will be organized with simple lightweight and well-established file format (such as CSV). We will avoid closed-source proprietary formats. Very relevant will be the use of metadata to understand the _topic_ , _purpose_ , _collection/generation methodology_ as well as an explanation of the different _fields_ of the dataset.
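As a minimal illustration of the open-format policy above, the sketch below writes a tiny dataset as CSV and pairs it with a metadata record covering topic, purpose, collection methodology and a description of each field. All names and values here are placeholders, not an actual INFERNET dataset.

```python
# Sketch (with placeholder data) of a CSV dataset plus its accompanying
# metadata, per the lightweight open-format policy described above.
import csv
import io
import json

fields = ["sample_id", "measurement", "unit"]
rows = [
    {"sample_id": "S1", "measurement": 0.42, "unit": "a.u."},
    {"sample_id": "S2", "measurement": 0.57, "unit": "a.u."},
]

# Write the dataset in plain CSV, the preferred lightweight format.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()

# Metadata describing topic, purpose, methodology and fields.
metadata = {
    "topic": "example measurements",
    "purpose": "validate results of the associated publication",
    "methodology": "how the data were collected/generated",
    "fields": {f: "description of this column" for f in fields},
}
metadata_json = json.dumps(metadata, indent=2)
```

Keeping the metadata in an equally open format (here JSON) ensures both the data and their documentation stay readable without proprietary tools.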
# OUTREACH STRATEGY AND DATA SHARING
To foster the re-use of INFERNET research data by third parties, consortium
members will be committed to implementing a strategy to disseminate results
for the benefit of the scientific community as well as for potentially
interested economic players.
The actions that will be undertaken to maximize the visibility of our results
will be:
* **Reference to dataset and software in the publication:** papers produced for the project will contain a clear reference to where the data and related software actually live to maximize the awareness of INFERNET results in the scientific community;
* **Advertise available data in conference and public events:** we will leverage the presence of members of INFERNET in international conference to present not only scientific results, but also the software and related datasets.
Data sharing will be achieved through publicly accessible web servers as
described in the previous section.
# DATA DESCRIPTION
In this section we will describe the different items that will be produced
during the entire project lifetime. As already stated above, DMP is an ongoing
process and will be updated in the course of the project. This is the first
release of the document, and given the very early stage of the project, we do
not have at present material to describe. With respect to the data format we
will adhere to the following rules:
* Research articles: PDF according to the guidelines outlined in section 3.1 of this document.
* Software codes: standard languages (C, C++, Julia, etc.)
* Data files: minimal machine readable formats (CSV, ASCII, TXT), suitable metadata, and manual and guidelines to use them.
# CONCLUSIONS
The purpose of this document was to provide the plan for managing the data
generated and collected during the project, i.e. the Data Management Plan
(DMP). Specifically, the DMP described the data management life cycle for all
datasets to be collected, processed and/or generated by a research project. It
covered the handling of research data during and after the project, including:
what data will be collected, processed or generated; what methodology and
standards will be applied; how data will be shared/made open and in what
formats; and how data will be curated and preserved.
Following the EU’s guidelines regarding the DMP, this document will be updated
during the project lifetime (in the form of deliverables).
# BIBLIOGRAPHY
1. European Commission, “Guidelines on open access to scientific publications and research data in Horizon 2020.” http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf, 2013.
2. Wikipedia, “Comparison of free and open-source software licenses.” https://en.wikipedia.org/wiki/Comparison_of_free_and_open-source_software_licenses.
3. Wikipedia, “Apache license.” https://en.wikipedia.org/?title=Apache_License.
---

https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0418_ChainReact_687967.md
# Executive Summary
In accordance with the European Commission Directorate-General for Research &
Innovation “Guidelines on Data Management in Horizon 2020” v.2.1, the
ChainReact Consortium partners collected, analysed and selected a series of
datasets that corresponded to their progress regarding ChainReact’s three main
struts: The Whistle, OpenCorporates, and WikiRate. Each Consortium partner
received a call-to-action to introduce the datasets most relevant to their
respective deliverables.
This document reflects on the current state of Consortium agreements on the
datasets that are produced and managed and outlines these sets of data in
detail in terms of their description, selection methodology, use, owner,
effect, data sharing principles and agreements, and lifecycle.
The data management plan will remain a living, evolving document throughout
the lifespan of the project. A second submission of the DMP will take effect
at month 12. The datasets may also be altered by converging factors such as
project maturity, shifts in consumer usage, the shift to the following working
phase, etc.
# Methodology
The methodology followed for drafting this initial DMP adheres to the European
Commission’s Guidelines 1 as interpreted in the online tool DMPonline 2 .
DMPonline, produced by the UK's Digital Curation Centre (DCC) 3 , helps
research teams address DMP requirements by answering a series of questions for
each dataset a project produces.
Accordingly, ChainReact’s Initial DMP addresses the fields below for each
dataset:
* Data set reference and name
* Data set description
* Standards and metadata
* Data sharing
* Archiving and preservation (including storage and backup).
## Dataset reference and name
This field is the identifier for the dataset to be produced. ChainReact
dataset identification follows the naming scheme
`Data_<WPno>_<serial number of dataset>_<dataset title>`.
Example: **Data_WP2_1_Wikirate_Site**.
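The naming convention above is regular enough to check mechanically. The helper below is an illustrative sketch, not part of the project's tooling; it simply parses names of the form `Data_WP<n>_<serial>_<title>`.

```python
# Illustrative validator/parser for the ChainReact dataset naming scheme
# Data_<WPno>_<serial number>_<dataset title>; not project tooling.
import re

NAME_PATTERN = re.compile(r"^Data_WP(\d+)_(\d+)_(\w+)$")

def parse_dataset_name(name: str):
    """Return (wp_number, serial, title) or None if the name is malformed."""
    m = NAME_PATTERN.match(name)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2)), m.group(3)
```

For example, `parse_dataset_name("Data_WP2_1_Wikirate_Site")` yields the work package number, serial number, and title, while a name that deviates from the scheme yields `None`.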
## Dataset description
In this field the data that will be generated or collected is described,
including references to their origin (in cases where data are collected),
nature, scale, to whom it could be useful, and whether it underpins a
scientific publication. Where applicable, information on the existence (or
non-existence) of similar data and the possibilities for their integration and
reuse are mentioned.
## Standards and metadata
This field examines existing suitable standards within relevant disciplines,
as well as an outline on how and what metadata will be created. The available
data standards (if any) accompany the description of the data that will be
collected and/or generated, including a description of how the data will be
organised during the project, mentioning for example naming conventions,
version control and folder structures.
The DCC provides the following questions to be considered as guidance on Data
Capture Methods:
* _How will the data be created?_
* _What standards or methodologies will you use?_
* _How will you structure and name your folders and files?_
* _How will you ensure that different versions of a dataset are easily identifiable?_
## Data sharing
In this field we describe how data will be shared, including access
procedures, and embargo periods (if any). We also outline the technical
mechanisms for dissemination, including necessary software and other tools for
enabling re-use; define the breadth of access.
In case the dataset cannot be shared, the reasons for this will be mentioned
(e.g. ethical, rules of personal data, intellectual property, commercial,
privacy-related, security-related).
## Archiving and preservation
Here the procedures that will be put in place for long-term preservation of
the data are described, along with an indication of how long the data should
be preserved and its approximate final volume, including a reference to the
associated costs (if any) and how these are planned to be covered. This point
emphasizes the long-term preservation and curation of data beyond the lifetime
of the project. Where dedicated resources are needed, these should be outlined
and justified, including any relevant technical expertise, support and
training that is likely to be required and how it will be acquired.
# ChainReact Datasets
## WP1 Datasets
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP1_1_ChainReact_Docs_Site**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
A restricted Wagn-based website at docs.chainreact.org used for internal
collaboration of all ChainReact partners. Will include the canonical versions
of reports, deliverables, proposals, and core results of huddles and other
meetings. Because of the flexibility of this platform, it will often be used
for creating structures for organizing other data collaborated on by many
partners.
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
Like all Wagn sites, the docs site is organized into “cards”. For every edit
of every card (including name, type, and content changes), Wagn stores:
* a userstamp,
* a timestamp, and
* an IP address.
When multiple cards are edited simultaneously, these independently tracked
“actions” are grouped into single “acts”. It is also possible to collect
additional metadata and standards-conforming data within cards.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
By default, cards on the docs site are restricted to viewing by partners,
though any individual card may be independently made publicly viewable if
deemed appropriate by its editors. Much of the site’s content is material
being prepared for publication but not appropriate for publication in raw
states. Other cards contain conversations, personal data, and proposals that
have been rejected or not yet agreed upon. It is, by and large, a site for
process rather than final products.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
The docs site is currently stored on the WikiRate production server and will
likely be moved to a smaller server when WikiRate.org moves to a multiserver
architecture. Full site backups are automatically generated daily, with one
copy stored locally and another transferred to our development server. Wagn
automatically handles card revisions, and the complete history of every card
is visible via the interface.
Decko Commons eV has accepted responsibility for continued hosting of, and
updates to, the website after the project’s completion. Should it be unable to
continue hosting at some point in the future, it will provide all partners
with an archive, which will be made conveniently usable with the installation
of the open-source Wagn/Decko platform.
</td> </tr> </table>
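The per-edit metadata and the grouping of simultaneous edits into "acts" described for the docs site above can be sketched as follows. This is an illustrative Python model with placeholder names and values, not Wagn's actual implementation.

```python
# Illustrative sketch (not Wagn's implementation) of the edit metadata
# described above: each card edit is an "action" carrying a userstamp,
# timestamp and IP address; simultaneous actions form a single "act".
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class CardAction:
    card_name: str
    change: str          # e.g. "name", "type", or "content"
    userstamp: str
    timestamp: datetime
    ip_address: str

@dataclass
class Act:
    actions: List[CardAction] = field(default_factory=list)

# Two cards edited simultaneously: independently tracked actions,
# grouped into one act.
now = datetime.now(timezone.utc)
act = Act(actions=[
    CardAction("Example Deliverable", "content", "editor42", now, "192.0.2.1"),
    CardAction("Example Deliverable status", "content", "editor42", now, "192.0.2.1"),
])
```

The grouping lets the full history of a multi-card edit be reviewed as a single event while each card's own revision history stays independently traceable.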
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP1_2_Contacts_Database**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
Lists of key Contacts at partners, hosted in the form of mailing lists
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
Standard form of name, and email address, organised into general and project
specific mailing lists (e.g. financial contacts, WP coordination)
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
These contact lists are viewable by the ChainReact project team and editable
by the administrators, at WikiRate e.V.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
The data is stored and maintained in ChainReact’s Google apps account
</td> </tr> </table>
## WP2 Datasets
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP2_1_Whistle_Research_Informing_Design_Data**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
This data-set includes all data collected in relation to research that informs
the design of The Whistle. The nature of this data will include audio-visual
recordings of interviews and user testing sessions, along with the
associated consent forms, transcriptions, interview/test plans and participant
recruitment lists/documents. This data-set will be stored in a google drive
folder, and relevant people from the project team will be granted access. This
data-set is likely to support scientific publications, in which case
transcripts or excerpts may be shared alongside these publications. This data-
set will not be particularly large, and should not exceed 1 gigabyte in size.
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
The top-level google drive folder will contain sub-folders for the following:
* Documents containing interview questions and related materials
* Interview recordings
* Interview transcripts
* Interview recruitment tracking
Files relating to interviews will be stored within sub-folders named for the
organisation they relate to with titles denoting the person who was
interviewed.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
This data-set will be shared with all relevant project team members through
their google accounts.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
As this data-set will be stored in a google drive folder, it will benefit from
a version history and there should be no issue with its preservation.
</td> </tr> </table>
## WP3 Datasets
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP3_1_Whistle_Reports**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
A restricted data-set encompassing the full detail of all incoming civilian
witness reports and attachments for The Whistle. The Whistle will run
reporting campaigns, in collaboration with NGOs, to collect reports from
civilian witnesses. When a civilian witness submits a report, this creates
a record on The Whistle’s secure server; for the purposes of the data
management plan, all such reports are treated as a single data-set. In
practice, only nominated representatives of the partner NGO for each campaign
will be allowed to access reports related to that campaign. The precise nature
and scale of this data-set will depend on the choice of reporting campaigns.
Ethics deliverable 9.2 contains further detail on how this data will be stored
and transmitted, and deliverable 2.1 contains detail on the ethical review of
prospective campaigns (which includes review of which data will be stored and
procedures for data collection). This data-set will contain sensitive
information, and therefore storing and transmitting it securely is a central
concern for the project. This data-set may be used in academic research, and
therefore may underpin a scientific publication.
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
As The Whistle is in the early stages of development the choice of a specific
standard for storage of this data is yet to be made. The data for incoming
reports will be similar to that produced by standard web forms that allow
attachments. The choice of a specific standard will be determined by security
considerations.
When a report is submitted, it will be stored along with meta-data such as the
time of creation and IP address of submitter. The Whistle will also allow
aspects of a report to be passed through relevant external APIs that could
facilitate work on verification of its authenticity. Results of these API
calls will also be stored as additional meta-data for a report.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
Due to the sensitive nature of this data-set, access will be tightly
restricted. Only nominated representatives of the partner NGO for a campaign,
and relevant people within the project team, will have access to this data. A
reporting campaign may also produce aggregated or de-personalised data that
can be published on sites like wikirate.org (thus forming part of the
Data_WP5_1_WikiRate_Site_Cards data-set). The manner in which public-facing
data is produced for a campaign will be considered as part of the ethical
review for a prospective reporting campaign.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
The full data for reports will be retained on the secure server until 3 months
after the reporting campaign ends - at which point it will be transferred to a
secure archive housed separately to the data for live campaigns. Data held in
this archive will only be used for research purposes. Preservation of this
archived data-set will be the responsibility of the research team at
Cambridge. At the point when this data-set serves no further research purpose,
or cannot be maintained securely, it will be destroyed.
</td> </tr> </table>
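A report record of the kind described above, with submission meta-data and the stored results of external verification API calls, might look like the following. This is a hedged sketch under stated assumptions: the field names and the verification step are illustrative, not The Whistle's actual design (which, as noted, is still at an early stage).

```python
# Hypothetical sketch of an incoming Whistle report stored with its
# meta-data and verification results; all names are illustrative
# assumptions, not the project's actual data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict

@dataclass
class WhistleReport:
    campaign: str
    body: str
    submitted_at: datetime   # time of creation
    submitter_ip: str        # IP address of submitter
    verification_results: Dict[str, str] = field(default_factory=dict)

report = WhistleReport(
    campaign="example-campaign",
    body="Description of the witnessed incident.",
    submitted_at=datetime.now(timezone.utc),
    submitter_ip="198.51.100.7",
)

# The result of a (hypothetical) external verification API call is kept
# as additional meta-data on the report, leaving the report itself intact.
report.verification_results["image-metadata-check"] = "no inconsistencies found"
```

Storing verification outcomes as meta-data rather than edits to the report preserves the original submission for later review.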
## WP4 Datasets
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP4_1_Possible_NGO_Partners**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
Contacts and engagement data-set to track charities that could partner with
The Whistle to run test reporting campaigns
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
Data is stored in a Google Sheet with columns representing:
* Charity name
* Location
* Website
* Contact Email
* Funding Band
* Purpose
* Digital Literacy
* Country Focus
* Population Focus
* Notes
* Interview Status
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
This data set will be shared with all relevant team members working on the
interview study and outreach with possible partners for The Whistle.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
As this data-set is stored in a google sheet it will benefit from a version
history and there should be no issue with its preservation.
</td> </tr> </table>
## WP5 Datasets
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP5_1_WikiRate_Site_Cards**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
The primary Wagn database for the WikiRate.org website.
(Note that the assets for this website are treated as a separate dataset,
because they will involve separate archiving and preservation.)
All of WikiRate’s core concepts – Companies, Metrics, Topics, Claims, Reviews,
Sources, and Projects – as well as more standard content like Users and simple
webpages, are organized as cards within a wagn website.
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
Like all Wagn sites, WikiRate.org is organized into “cards”, and all data are
stored in the same five tables (cards, card_acts, card_actions, card_changes,
and card_references). As noted in _Data_WP1_1_ChainReact_Docs_Site_ above, for
every edit of every card (including name, type, and content changes), Wagn
stores:
* a userstamp
* a timestamp, and
* an IP address.
Wagn also supports a REST API that allows this data to be made available in
many formats. Company data will be made available in many standard formats,
including JSON, XBRL, and simpler formats like CSV.
Many metrics themselves contain standardized data. Initially, standards
conformity will be enforced via community feedback and editing, though some
automation will likely be added in later stages.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
Account login information, including encrypted passwords, is protected and
made invisible to web users. All other information on WikiRate.org is
available for reading and download by the general public.
Some metric data providers have requested download limitations so that their
original datasets could not be reconstructed from WikiRate.org. We are
currently weighing the benefits of supporting such limitations (and thus
receiving permission to put more data on WikiRate.org) vs. the costs of having
to support more restrictions and communicate the nature of and rationale for
these restrictions to users.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
Development and promotion of this dataset is the core focus of WikiRate eV,
who intend to see it thrive and grow long after the end of the current
project, supported by broad fundraising and community-building strategies.
The entire database is archived nightly, with a full version tarred and copied
to a remote server. We also frequently make full and partial copies to various
servers for use in development and testing.
Some site copies are used for experimenting with data that we are not yet
ready to publish for technical or social reasons, most commonly permission not
yet granted.
Wagn automatically handles card revisions, and the complete history of every
card is visible via the interface.
</td> </tr> </table>
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP5_2_WikiRate_Site_Assets**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
Files uploaded to WikiRate.org, including images, structured and unstructured
source files, and optimized CSS and JavaScript.
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
Metadata for these files are stored as cards in the previous dataset,
_Data_WP5_1_WikiRate_Site_Cards._ Each asset is stored with a card_id and
action_id that allows it to be mapped to that dataset.
However, because our multi-server architecture calls for a canonical database
engine on one server and canonical file service elsewhere, these two datasets
will be tracked separately.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
All files are publicly available. Direct links to the data are provided on
WikiRate.org
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
At present, the data remain on our production server and, like the production
database, are archived and backed up nightly. Soon they will be moved to an
independent server or cloud service in support of WikiRate.org’s designed
multi-server architecture.
As with _Data_WP5_1_WikiRate_Site_Cards,_ maintenance and development of this
dataset is connected to the primary focus of WikiRate e.V. and will be central
to ongoing planning, fundraising, and promotion.
</td> </tr> </table>
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP5_3_CERTH_Companies**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
CERTH’s collection of company entities that has been, and will continue to
be, obtained by Web data extraction using easIE (an easy-to-use information
extraction framework).
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
A schema-free document-oriented database is used, which allows us to add or
remove fields from the collection without impacting database soundness.
Each company is described by the following:
* id
* company_name
* aliases
* website
* address
* country
* wikirate_id: this field is present only in companies that have been integrated to WikiRate platform.
* opencorporates_id: this field is present only if there is a matching entity in OpenCorporates database.
The company mapping task will result in the integration of companies between
OpenCorporates and WikiRate. Additional fields might be considered in order to
represent the relationships between companies in our dataset derived from
OpenCorporates corporate networks.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
A RESTful API will be available for anyone who wishes to have access to the
dataset. The data will be available in JSON format.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
Preservation will be ensured by backup of the original database.
</td> </tr> </table>
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP5_4_Metrics**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
CERTH’s metrics collections that have been, and will continue to be,
extracted from external Web sources using easIE (an easy-to-use information
extraction framework).
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
A schema-free document-oriented database is used, which allows us to add or
remove fields from the collection without impacting database soundness.
Each metric is described by the following:
* name
* value
* referred_company
* citeyear
* source
* source_name
* type
* currency
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
The collected metrics will be available through a RESTful API for anyone who
wishes to have access to the dataset. The data will be available in JSON
format. We encourage people and companies to reuse our data and contribute to
data collection task regarding companies’ CSR performance.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
Preservation will be ensured by backup of the original database.
</td> </tr> </table>
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP5_5_WikiRate_Usability**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
Results of user testing and design, including think aloud tests, analytics,
reading material, etc.
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
The top-level google drive folder contains sub-folders for the following:
* Lean UX Activities
* UX Design
* UX Research
Files will be named with descriptive titles coupled with date and version
information. Interview recordings and transcript file names will contain the
name of the organisation represented by the interviewee, a number denoting the
interview’s order and date information.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
This data-set will be shared with all relevant project team members through
their google accounts.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
As this data-set will be stored in a google drive folder, it will benefit from
a version history and there should be no issue with its preservation.
</td> </tr> </table>
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP5_6_OpenCorporates_Corporate_Relationship_Sources**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
This is the list of potential sources for relationship data, compiled for the
report as part of WP5.1. This dataset is not kept in a database, but in a
Google Doc, which is the master document for the report (rather than the
derived Word document supplied as a deliverable).
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
As this is kept in a Google Document, all changes to it are automatically
tracked.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
This is a list of “not yet published” data and is therefore private.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
As the dataset is in the cloud, there is automatic archiving. We also
periodically export the report into different forms (e.g. Word Docs).
</td> </tr> </table>
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP5_7_OpenCorporates_Companies**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
This dataset is the core dataset of over 100 million companies in
OpenCorporates, all obtained from primary public sources by OpenCorporates
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
The OpenCorporates company data has multiple fields and attributes, often
deeply nested and rich. The conceptual schema is described (using JSON Schema)
at **_https://github.com/openc/opencschema/blob/master/build/company-schema.json_** (this schema is open source).
All data is fully provenanced, describing both the source and the retrieval
timestamp.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
The data is available through the OpenCorporates enterprise-level API
(Application Programming Interface), which provides rich querying and
retrieval.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
The data lives on our production MySQL database, hosted on our multiserver
architecture (master + slave + backup slave), which is backed up daily, with
historical backups.
</td> </tr> </table>
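The published JSON Schema makes schema-driven validation of company records straightforward. The sketch below only illustrates the idea with a hand-rolled required-field check; the field names are assumptions for illustration, and the real schema linked above is far richer, including the provenance (source and retrieval timestamp) attached to each record.

```python
# Minimal sketch of schema-driven validation of a company record.
# Field names are assumed for illustration; the actual OpenCorporates
# JSON Schema (linked above) defines the authoritative structure.
REQUIRED = {"name", "jurisdiction_code", "company_number"}

def missing_fields(record: dict) -> set:
    """Return the set of required fields absent from the record."""
    return REQUIRED - record.keys()

record = {
    "name": "Example Ltd",
    "jurisdiction_code": "gb",
    "company_number": "00000001",
    # Provenance accompanies the data, per the description above.
    "provenance": {
        "source_url": "https://example.org/register",
        "retrieved_at": "2016-01-01T00:00:00Z",
    },
}
```

A full implementation would validate against the published schema itself (e.g. with a JSON Schema validator) rather than a hard-coded field list.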
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP5_8_OpenCorporates_Corporate_Structures**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
This dataset is the corporate structure information OpenCorporates has
extracted from official public sources (includes shareholding, subsidiary,
control relationships from company registers, SEC, other regulators)
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
The OpenCorporates corporate structure data is modelled using our own model,
which is open source (see
**_https://github.com/openc/opencschema/blob/master/build/_** for schemas),
and described in a series of blog posts. As the data
comes from multiple sources, with varying levels of details and subtle
differences in meaning (for example the way shareholding is represented), the
models need to be able to cope with this, in particular both high and low
granularity, significant ambiguities, and different natures of the
relationship (e.g. shareholding, subsidiaries, other control relationships).
All data is fully provenanced, describing both the source and retrieval
timestamp
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
The data is available through the OpenCorporates enterprise-level API
(Application Programming Interface), which provides rich querying and
retrieval. As part of this project we will be working with the partners to
enhance retrieval of corporate structure information via the API
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
The data lives on our production MySQL database, which lives on our replicated
multiserver cluster (master + slave + backup slave), which is backed up daily.
In addition, we use a replicated Neo4J cluster for storing the relationships
in a graph database. This is also backed up daily
</td> </tr> </table>
## WP6 Datasets
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP6_1_Corporate_Engagement**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
Contacts and engagement database to help us identify targets and progression
towards corporate engagement
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
Data will be collected through Google tracking sheets and where possible
tracked in Salesforce software.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
This data set will be shared with all relevant team members working on
outreach, partnerships and engagement. Additionally analysis of this data set
may be used at periodic project meetings to indicate progress and consider
direction
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
Salesforce is a dynamic database which will exist in perpetuity whilst
WikiRate e.V. benefits from the non-profit license. If we ever need to migrate
to other software, the entire database can be exported. Google Sheets will
also exist in perpetuity, and offers a layer of tracking and analysis which
Salesforce alone cannot capture.
</td> </tr> </table>
## WP7 Datasets
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP7_1_Collective_Awareness_Platforms_Research**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
This data set will be used to analyse the functioning of ChainReact as a prime
example of a Collective Awareness Platform. It will be the result of an
extract/transform process that retrieves data from various ChainReact
databases (especially the repositories of The Whistle and WikiRate), combines
them and transforms them into a form suitable for research.
The dataset will describe in detail the actions of ChainReact users – their
interactions with the platform, their uploads, their site navigation paths,
etc. It will be used to calculate various indicators describing the overall
functioning of ChainReact.
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
The specific technology and data model of the research dataset are contingent
on the final structure of the source databases and on the design of the
research the dataset will be used for. As both aspects are still under
development, a wide range of storage options is being considered, from
standard SQL schemas to XML/JSON containers and graph databases.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
The data set will be initially shared among ChainReact members. It will be
made available publicly as the background for research activities.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
The specific location of the dataset (and consequently its archiving and
preservation policies) is yet to be decided. Existing ChainReact
infrastructure (servers) could be used, or a specific cloud or local solution
chosen, depending on the dataset and research requirements that will be
decided in the course of the project.
</td> </tr> </table>
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP7_2_ChainReact_Evaluation**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
This data set comprises the various forms of data needed to evaluate
ChainReact in terms of progress towards the realisation of its goals and the
quality of its inner functioning. It includes progress reports and other
communication with consortium partners, audio recordings and transcripts of
interviews with ChainReact team members, participatory observation notes and
the results of desk research activities.
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
The data set will be stored as a Google Drive folder with a subfolder
structure reflecting the nature and structure of the research material.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
The data set will be shared mainly among the researchers performing the
evaluation. The less sensitive elements of the data set (e.g. progress
reports, desk research notes) will be made available for general reuse by the
Consortium, while one-to-one communication recordings will be treated as
confidential and shared only among researchers directly involved in evaluation
tasks. Access control will be realised by the Google Drive sharing mechanism,
with the possibility of encrypting particular file containers as an extra
security layer.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
Archiving of the data set will be realised by the Google Drive persistence and
versioning mechanisms. The data set will be stored for two years after the
final evaluation report, in accordance with evaluation standards, for the
purposes of verification and auditing. After two years the data set will be
discarded.
</td> </tr> </table>
## WP8 Datasets
<table>
<tr>
<th>
_Data set reference and name_
</th>
<th>
**Data_WP8_1_NGO Engagement**
</th> </tr>
<tr>
<td>
_Data set description_
</td>
<td>
Contacts and engagement database to help us identify targets and progression
towards NGO engagement
</td> </tr>
<tr>
<td>
_Standards and metadata_
</td>
<td>
Data will be collected through Google tracking sheets and where possible
tracked in Salesforce software.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
This data set will be shared with all relevant team members working on
outreach, partnerships and engagement. Additionally, analysis of this data set
may be used at periodic project meetings to indicate progress and inform
direction.
</td> </tr>
<tr>
<td>
_Archiving and preservation_
</td>
<td>
Salesforce is a dynamic database which will exist in perpetuity whilst
WikiRate e.V. benefits from the non-profit license. If we ever need to migrate
to other software, the entire database can be exported. Google Sheets will
also exist in perpetuity, and offers a layer of tracking and analysis which
Salesforce alone cannot capture.
</td> </tr> </table>
# Conclusion
This Data Management Plan identifies the datasets managed by the ChainReact
consortium, organised by work package. As detailed in section 3 of this
report, “ChainReact Datasets”, the nature of these datasets varies according
to each component's roles and responsibilities. For example, CERTH's company
metadata are collected and maintained through the easIE extraction framework
and preserved through regular backups of the database, whereas the outreach
plan set by WikiRate manages a database of contacts and leads that are
categorised according to their outreach status (connection established or not,
connection success or pending, etc.).
The ChainReact datasets are evolving. Therefore, the DMP is a living document
that will be updated throughout the lifetime of the project.
# Introduction
This document is Version 3 of the Data Management Plan (DMP), presenting an
overview of the data management processes agreed upon among the eConfidence
project's partners. Data management is defined in accordance with the Grant
Agreement and, in particular, with Articles 27 (Protection of results), 28
(Exploitation of results), 29 (Dissemination of results 1 ), 31 (Access rights
to results) and 39 (Personal Data Protection).
This DMP will first establish some general principles in terms of data
management and Open Access.
Subsequently, it will be structured as proposed by the European Commission in
_H2020 Programme – Guidelines on FAIR Data Management in Horizon 2020_ 2 ,
covering the following aspects:
* Data Summary
* FAIR data
* Allocation of resources
* Data security
* Ethical aspects
* Other issues
Data management processes covered in this plan relate in particular to the
following project outputs:
* _Consortium Agreement_ (access rights, Personal Data Protection and IPR management 3 )
* _Deliverable 1.1 – Quality plan_ (quality control for publications)
* _Deliverable 7.2 – Dissemination plan_ (publications and scientific results)
* _Deliverable 2.4 - Report with ethical and Legal Project compliance_ (ethics and data protection)
* Exploitation activities of WP6
**Review timetable**
The DMP is a “living” document outlining how the research data collected or
generated will be handled during and after the eConfidence project. The DMP is
updated over the course of the project whenever significant changes arise
(e.g. new data collected, changes in the consortium or Consortium Agreement,
revision of IPR management, revision of research protocol). Furthermore, its
development and implementation is carried out in accordance with the following
review timetable, as envisaged in the Description of Action.
**By July 2017 – Version 2 (M10)**
* General revision of the plan during the 2nd partners meeting.
* Revise open access strategy and participation in ORDP, if needed (in accordance with the definition of IPR management as per _Consortium Agreement_ ).
* Data collected: specification of types and formats of data generated/collected and the expected size.
* Findability: specification of naming conventions, search keywords identifications, versioning.
* Security: definition of procedures for data storage, data recovery and transfer of sensitive data.
* _Annex 1 – Data collected and processes_
**By February 2018 – Version 3 (M16-17)**
* General revision of the plan during the 3rd partners meeting.
* Interoperability: specification of metadata vocabularies, standards and methodologies for datasets and assessment of interoperability level.
* Ethical aspects: revision of plan and strategy upon ethical approval of intervention protocol for research
**By July 2018 – Version 4 (M22-23)**
* Accessibility: description of documentation of tools needed and/or available to access and validate the data, such as code, software, methods and protocols (in accordance with WP5 deliverables).
* Licensing: final definition.
* Resource: final definition based on collected data, analysed results and prospective publications.
# General principles for data management
## Data collected and personal data protection
Within the eConfidence project, partners collect and process research data and
data for general project management purposes, according to their respective
internal data management procedures and in compliance with applicable
regulations.
Data collected for general purposes may include contact details of the
partners, their employees, consultants and subcontractors and contact details
of third parties (both persons and organisations) for coordination,
evaluation, communication, and dissemination and exploitation activities.
Research data are collected and processed in relation with the research pilots
(WP2, WP4 and WP5).
During the project lifetime, data are kept on computers dedicated to this
purpose and securely located within the premises of the project partners. Data
archiving, preservation, storage and access are undertaken in accordance with
the needed ethical approval at the partner institution and at the institution
where the data is captured. The data is preserved for a minimum of 10 years
(unless otherwise specified). All data subject to data protection are
anonymised in a standard manner and stored securely (with password
protection). The costs for this are covered by the partner organisation
concerned.
Detailed information on the procedures that are implemented for data
collection, storage, protection, retention and destruction are provided in
_Annex 1 – Data collected and processes_ .
Confirmation that the above-mentioned processes comply with national and EU
legislation is provided by each partner and verified by the Data Controller 4
.
## Partners’ roles
For the overall data management flow, two main roles are identified (Data
Controller and Data Processor), as defined in the Consortium Agreement.
Table 1 contains the contacts of the institutional Data Protection Officers
responsible for data management and protection of personal data within each
partners’ organisation.
### **Table 1 – Data Protection Officers**
<table>
<tr>
<th>
**Organisation legal name**
</th>
<th>
**Legal address**
</th>
<th>
**Data Protection Officer**
</th> </tr>
<tr>
<td>
P1 – Instituto Tecnologico de Castilla y León
</td>
<td>
c/ Lopez Bravo 70 Burgos 09001 Spain
</td>
<td>
Amelia García
</td> </tr>
<tr>
<td>
P2 – EUN Partnership aisbl
</td>
<td>
Rue de Trèves 61
B-1040 Brussels, Belgium
</td>
<td>
John Stringer [email protected]
</td> </tr>
<tr>
<td>
P3 – Everis Spain SLU
</td>
<td>
Avd. Manoteras, 52
28050 Madrid (Spain)
</td>
<td>
Eduardo García Repiso
[email protected]_
</td> </tr>
<tr>
<td>
P4 – Nurogames GmbH
</td>
<td>
Schaafenstraße 25
50676 Cologne
</td>
<td>
Jens Piesk
[email protected]
</td> </tr>
<tr>
<td>
P5 – University of
Salamanca
</td>
<td>
</td>
<td>
D. JUAN MANUEL CORCHADO RODRÍGUEZ
Vicerrector de investigación y transferencia [email protected]_
</td> </tr>
<tr>
<td>
P6 – FHSS Rijeka
</td>
<td>
Sveučilišna avenija 4, HR-
51000 Rijeka, Croatia
</td>
<td>
Rajka Kolić, [email protected]
</td> </tr> </table>
# Research data and Open Access
The eConfidence project is part of the H2020 Open Research Data Pilot (ORDP),
and publication of the scientific results is chosen as a means of
dissemination. In this framework, open access is granted to publications and
research data (WP4 and WP5), and this process is carried out in line with the
_Guidelines on Open Access to Scientific Publications and Research Data in
Horizon 2020_ 5 (as outlined in the summary below) 6 .
The strategy to apply Open Access for the project’s scientific results is
revised, step by step, according to personal data protection regulations, the
results of the ethical approval process of the research protocols and the
provisions of the Consortium Agreement. If needed, it will be possible to “opt
out” from this open access strategy for specific and well-defined subsets of
data.
## Scientific publications
Open access is applicable to different types of scientific publications
related to the research results, including their bibliographic metadata, such
as:
* journal articles
* monographs and books
* conference proceedings, abstract and presentations
* grey literature (informally published written material)
Grey literature also includes the reports and deliverables of the project
related to the research, whose Dissemination level is marked as Public (WP2,
WP4, WP5).
Open access is granted as follows.
* Step 1 – Depositing machine readable electronic copy of version accepted for publication in repositories for scientific publications (before or upon publication)
* Step 2 – Providing open access to the publication via the chosen repository
For access to publications, a hybrid approach is considered (both green OA and
gold OA), depending on the item and the dissemination channels that will be
available.
* Green OA (self-archiving) – depositing the published article or the final peer-reviewed manuscript in repository of choice and ensure open access within at most 6 months (12 months for publications in the social sciences and humanities).
* Gold OA (open access publishing) – publishing directly in open access mode/journal
Any publication of the scientific results also needs to comply with the
process envisaged in _D1.1 – Quality plan – Section Quality control for
publication_ and in _Consortium Agreement Section 8.3 – Dissemination_ .
## Research data
In addition, open access is also granted to the underlying research data (the
data needed to validate the results presented in publications) and their
associated metadata, to any other data (not directly attributable to the
publication, and raw data), and to information on the tools needed to validate
the data and, if possible, to these tools themselves (code, software,
protocols, etc.).
Open access is granted as follows.
* Step 1 – Depositing the research data in a research data repository
* Step 2 – Enabling access and usage free of charge for any user (as far as possible)
## Other project’s outcomes
As for any other outcomes of the project, they are disseminated according to
the Dissemination level indicated in the Description of Action and are also
subject to protection in accordance with the Consortium Agreement and with
reference to Access Rights.
# FAIR Data management plan
## Data summary
The Data Summary provides an overview of the purpose and the nature of data
collection and generation, and its relation to the objective of the
eConfidence project.
### Objectives of the project and research
The eConfidence project aims to test a methodology that includes several
models, such as the Activity Theory-based Model of Serious Games (ATMSG) for
game development methodology combined with Applied Behaviour Analysis (ABA)
and Learning Analytics (LA), in order to design serious games able to promote
behavioural changes in the user.
eConfidence tests the methodology with two serious games in Spanish and
English speaking schools, to assess behavioural changes in children.
Within this research several types of data are collected.
Initially, theoretical and empirical data from previous research are collected
through literature review in order to suggest games’ scenarios and Applied
Behaviour Analysis (ABA) procedures, to determine KPIs and to select
measurement instruments.
Subsequently, data regarding target behaviours (safe use of internet and
bullying), key variables that affect those behaviours, as well as relevant
personal variables are collected in pre-test and post-test research phases by
using questionnaires. Also, data on in-game behaviours are collected during
the research participants’ gaming sessions and data on quality, usability and
experience with serious games are collected in post-test phase.
The purpose of collecting data in the pre-test and post-test phases, as well
as in gaming sessions, is to analyse the cognitive, emotional and behavioural
changes produced by playing the games, in order to evaluate the effectiveness
of the game mechanics in producing changes through ABA procedures.
The final results, obtained from statistical analysis of the data, could be
useful for different stakeholders, such as game developers, educational policy
makers, educational and mental health institutions.
### Data collected
The research data of the project are original and no existing data is being
reused for the research results.
The research data are collected through pilots in 10 schools (5 Spanish
speaking and 5 English speaking schools), through a process in three phases:
pre-test questionnaire, experimentation, post-test questionnaire. Participants
are 12-14 years old students. The description of the research protocol is
available in _D2.3 – Intervention protocol_ and the full description of the
indicators is available in _D2.2 – Dossier of measurements instruments to
apply in the pilot test._
Data collected were defined by the research partners (USAL and FHSS) for
research data (questionnaires) and by the technical partners that develop the
games, Nurogames and ITCL, for games metrics.
The data collected with the pre-test and post-test questionnaires, beside
users’ profile information (age, gender, language, parental educational and
employment status, gaming experience, and participation in prevention
programmes), focus on knowledge, behaviour, and variables derived from the
Theory of planned behaviour (TPB: attitudes, perceived behavioural control,
subjective norms and behavioural intentions) related to safe internet use and
bullying, as well as on personal variables (social skills, assertiveness,
empathy, and friendship). All TPB and personal variables are assessed by using
self-reported instruments that will be applied online.
During the gaming sessions, different behavioural indicators are recorded
(e.g. user choices in game scenarios) in order to track behaviour changes in
safe use of internet and bullying behaviour. The metrics collected during game
play (participants playing the two serious games on bullying and online
safety) include most of the relevant actions in the games: selections, number
of errors and attempts, response time, playing time per mini-game and for the
full game, etc. All these data are analysed in order to extract the player's
evolution during the game.
These data are also analysed with Big Data techniques, using supervised and
unsupervised learning, in order to identify gaming trends, gaming groups, etc.
Data on user satisfaction are also collected as a separate questionnaire at
the end of the experimentation phase.
#### Types of data, size and formats
Types and formats of the data generated through the research, how they were
collected and the expected size are described below.
The last column specifies which data were selected to be made openly
available, considering data protection obligations, ethical aspects and
relevance for further research.
##### Table 2 – Datasets summary
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Brief description**
</th>
<th>
**Types**
</th>
<th>
**Formats**
</th>
<th>
**Expected size**
</th>
<th>
**Open data y/n**
</th> </tr>
<tr>
<td>
Pre-test
</td>
<td>
Knowledge, Behaviour, TPB variables, demographics
</td>
<td>
File (data)
</td>
<td>
Excel .xls, SPSS .sav
</td>
<td>
1.5 MB
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Game play
</td>
<td>
Game metrics about playing the Bullying game and the Safe Use of Internet
game
</td>
<td>
TBD
</td>
<td>
TBD
</td>
<td>
TBD
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Post-test
</td>
<td>
Same variables as pre-test plus user satisfaction questions
</td>
<td>
TBD
</td>
<td>
Excel .xls or SPSS .sav
</td>
<td>
1.5 MB
</td>
<td>
Yes
</td> </tr> </table>
## FAIR Data
In general terms, research data generated in the eConfidence project are – as
far as possible – “FAIR”, that is findable, accessible, interoperable and
re-usable.
### Findability – Making data findable, including provisions for metadata
Publications are provided with bibliographic metadata (in accordance with the
guidelines).
Unique and persistent identifiers are used (such as Digital Object Identifiers
- DOI 7 ), when possible also applying existing standards (such as ORCID 8
for contributor identifiers).
As per the European Commission guidelines 9 , bibliographic metadata that
identify the deposited publication are in a standard format and include the
following:
* the terms ["European Union (EU)" & "Horizon 2020"]
* the name of the action, acronym and grant number
* the publication date, the length of the embargo period (if applicable) and a persistent identifier.
Datasets are provided with appropriate machine-readable metadata (see
Interoperability) and keywords are provided for all type of data.
#### Search keyword
The keywords relate to the variables assessed in the research. The custom
keywords identified are: eConfidence, bullying, safe use of internet, Theory
of planned behaviour, empathy, assertiveness, social skills and friendship.
#### Naming conventions and versioning
Files are named according to their content to ease their identification with
the project. The project name is at the beginning (eConfidence_pretest;
eConfidence_post-test). The date is formatted as filename_yymmdd.
* The name of the project: eConfidence
* Brief description of the content. i.e. Pretest
* Number of version of the file
* Date
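As an illustration, the convention above can be assembled programmatically. The sketch below is only a minimal example: the helper name and the use of underscores as separators are assumptions, not a project specification.

```python
from datetime import date

def dataset_filename(content: str, version: int, when: date) -> str:
    """Build a file name following the convention: project name,
    content description, version number, and date as yymmdd."""
    return f"eConfidence_{content}_v{version}_{when.strftime('%y%m%d')}"

# Example: the pre-test dataset, version 1, dated 15 February 2018.
print(dataset_filename("pretest", 1, date(2018, 2, 15)))
# eConfidence_pretest_v1_180215
```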
### Accessibility – Making data openly accessible
Data and related documentation are made available depositing them in the
repository of choice (Zenodo 10 ), together with the publications, and are
accessible free of charge for any user.
Zenodo is a repository built by CERN, within the OpenAIRE project, with the
aim of supporting the EC’s Open Data policy by providing a set of tools for
funded research 11 . Zenodo provides tools to deposit publications and
related data and to link them.
Any needed restriction in access to the data is evaluated before final
publication, in accordance with ethical aspects (conducting research with
humans and children) and with protection of personal data.
#### Methods and tools
Documentation on the tools needed to access and validate the data are also
provided (including protocols and methods).
If the code/software used to analyse the results is generated by the project’s
partners under an open license and using open source tools, this code is also
made available with the data.
Methods and tools will be finalized in Version 4 of this plan (Summer 2018).
### Interoperability - Making data interoperable
Metadata models were evaluated among the ones available in the Metadata
Standards Directory 12 .
Dublin Core standard 13 ( _Table 3 - DC Metadata Element Set_ ) was selected
to add metadata to each of the datasets identified in _Table 2 – Datasets
summary._
#### **Table 3 - DC Metadata Element Set**
<table>
<tr>
<th>
**Term Name: contributor**
</th> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/contributor_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Contributor
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
An entity responsible for making contributions to the resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Examples of a Contributor include a person, an organization, or a service.
Typically, the name of a Contributor should be used to indicate the entity.
</td> </tr>
<tr>
<td>
**Term Name: coverage**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/coverage_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Coverage
</td> </tr> </table>
<table>
<tr>
<th>
Definition:
</th>
<th>
The spatial or temporal topic of the resource, the spatial applicability of
the resource, or the jurisdiction under which the resource is relevant.
</th> </tr>
<tr>
<td>
Comment:
</td>
<td>
Spatial topic and spatial applicability may be a named place or a location
specified by its geographic coordinates. Temporal topic may be a named period,
date, or date range. A jurisdiction may be a named administrative entity or a
geographic place to which the resource applies. Recommended best practice is
to use a controlled vocabulary such as the Thesaurus of Geographic Names
[TGN]. Where appropriate, named places or time periods can be used in
preference to numeric identifiers such as sets of coordinates or date ranges.
</td> </tr>
<tr>
<td>
References:
</td>
<td>
[TGN] _http://www.getty.edu/research/tools/vocabulary/tgn/index.html_
</td> </tr>
<tr>
<td>
**Term Name: creator**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/creator_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Creator
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
An entity primarily responsible for making the resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Examples of a Creator include a person, an organization, or a service.
Typically, the name of a Creator should be used to indicate the entity.
</td> </tr>
<tr>
<td>
**Term Name: date**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/date_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Date
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
A point or period of time associated with an event in the lifecycle of the
resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Date may be used to express temporal information at any level of granularity.
Recommended best practice is to use an encoding scheme, such as the W3CDTF
profile of ISO 8601 [W3CDTF].
</td> </tr>
<tr>
<td>
References:
</td>
<td>
[W3CDTF] _http://www.w3.org/TR/NOTE-datetime_
</td> </tr>
<tr>
<td>
**Term Name: description**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/description_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Description
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
An account of the resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Description may include but is not limited to: an abstract, a table of
contents, a graphical representation, or a free-text account of the resource.
</td> </tr>
<tr>
<td>
**Term Name: format**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/format_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Format
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
The file format, physical medium, or dimensions of the resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Examples of dimensions include size and duration. Recommended best practice is
to use a controlled vocabulary such as the list of Internet Media Types
[MIME].
</td> </tr>
<tr>
<td>
References:
</td>
<td>
[MIME] _http://www.iana.org/assignments/media-types/_
</td> </tr> </table>
<table>
<tr>
<th>
**Term Name: identifier**
</th> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/identifier_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Identifier
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
An unambiguous reference to the resource within a given context.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Recommended best practice is to identify the resource by means of a string
conforming to a formal identification system.
</td> </tr>
<tr>
<td>
**Term Name: language**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/language_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Language
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
A language of the resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Recommended best practice is to use a controlled vocabulary such as RFC 4646
[RFC4646].
</td> </tr>
<tr>
<td>
References:
</td>
<td>
[RFC4646] _http://www.ietf.org/rfc/rfc4646.txt_
</td> </tr>
<tr>
<td>
**Term Name: publisher**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/publisher_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Publisher
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
An entity responsible for making the resource available.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Examples of a Publisher include a person, an organization, or a service.
Typically, the name of a Publisher should be used to indicate the entity.
</td> </tr>
<tr>
<td>
**Term Name: relation**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/relation_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Relation
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
A related resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Recommended best practice is to identify the related resource by means of a
string conforming to a formal identification system.
</td> </tr>
<tr>
<td>
**Term Name: rights**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/rights_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Rights
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
Information about rights held in and over the resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Typically, rights information includes a statement about various property
rights associated with the resource, including intellectual property rights.
</td> </tr>
<tr>
<td>
**Term Name: source**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/source_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Source
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
A related resource from which the described resource is derived.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
The described resource may be derived from the related resource in whole or in
part. Recommended best practice is to identify the related resource by means
of a string conforming to a formal identification system.
</td> </tr>
<tr>
<td>
**Term Name: subject**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/subject_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Subject
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
The topic of the resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Typically, the subject will be represented using keywords, key phrases, or
classification codes. Recommended best practice is to use a controlled
vocabulary.
</td> </tr>
<tr>
<td>
**Term Name: title**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/title_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Title
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
A name given to the resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Typically, a Title will be a name by which the resource is formally known.
</td> </tr>
<tr>
<td>
**Term Name: type**
</td> </tr>
<tr>
<td>
URI:
</td>
<td>
_http://purl.org/dc/elements/1.1/type_
</td> </tr>
<tr>
<td>
Label:
</td>
<td>
Type
</td> </tr>
<tr>
<td>
Definition:
</td>
<td>
The nature or genre of the resource.
</td> </tr>
<tr>
<td>
Comment:
</td>
<td>
Recommended best practice is to use a controlled vocabulary such as the DCMI
Type Vocabulary [DCMITYPE]. To describe the file format, physical medium, or
dimensions of the resource, use the Format element.
</td> </tr>
<tr>
<td>
References:
</td>
<td>
[DCMITYPE] _http://dublincore.org/documents/dcmi-type-vocabulary/_
</td> </tr> </table>
If relevant, additional metadata will be defined, for datasets specific to the
project, in accordance with the existing standards. In this case, the option
to provide a mapping to existing ontologies will be assessed during the
evaluation phase (WP5) in summer 2018.
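To make the mapping concrete, a Dublin Core record for one of the datasets could be serialised as in the sketch below. This is an illustrative example only; the field values are assumptions, not the project's final metadata.

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dc_record(fields: dict) -> str:
    """Serialise a flat dict of Dublin Core elements as XML."""
    root = ET.Element("metadata")
    for name, value in fields.items():
        ET.SubElement(root, f"{{{DC_NS}}}{name}").text = value
    return ET.tostring(root, encoding="unicode")

# Assumed example values for the pre-test dataset.
print(dc_record({
    "title": "eConfidence pre-test dataset",
    "creator": "eConfidence consortium",
    "subject": "bullying; safe use of internet",
    "format": "application/vnd.ms-excel",
    "language": "en",
    "type": "Dataset",
}))
```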
### Data re-use and licensing
To assure the quality of the retrieved data, the following guidelines are
followed. Once the datasets are downloaded from the Xtend platform in Excel
format, they are transformed into .sav format (SPSS) and basic quality
assurance measures are taken. After datasets from different schools are
merged, it is verified that all variables line up in their proper columns. If
omissions are found, the missing values are substituted with means, so that
results on the entire scale are not lost. Basic statistical analysis is then
performed to check for outliers or impossible answers against the expected
scale ranges. Several variables have to be recoded or transformed, so a clear
coding system was developed. Transformation is conducted separately by two
research teams (USAL and FHSS), and the results are compared in order to
ensure that no mistakes are made during transformation.
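As a concrete illustration of the mean-substitution and range-check steps above, the following stdlib-only Python sketch screens a few merged questionnaire records; the field names, IDs and scale bounds are assumptions for illustration, not the project's actual coding scheme.

```python
from statistics import mean

# Hypothetical merged questionnaire records; None marks an omitted answer.
SCALE_RANGE = (1, 5)  # assumed Likert-scale bounds
records = [
    {"id": "u01", "q1": 4, "q2": 5},
    {"id": "u02", "q1": None, "q2": 3},  # omission -> substituted with the item mean
    {"id": "u03", "q1": 2, "q2": 9},     # 9 lies outside the expected range
]

def impute_means(rows, items):
    """Replace missing item scores with the mean of the observed scores."""
    for item in items:
        observed = [r[item] for r in rows if r[item] is not None]
        m = mean(observed)
        for r in rows:
            if r[item] is None:
                r[item] = m
    return rows

def out_of_range(rows, items, lo, hi):
    """Flag impossible answers for manual review against the scale bounds."""
    return [(r["id"], item) for r in rows for item in items
            if not lo <= r[item] <= hi]

impute_means(records, ["q1", "q2"])
print(records[1]["q1"])                                   # the mean of 4 and 2
print(out_of_range(records, ["q1", "q2"], *SCALE_RANGE))  # [('u03', 'q2')]
```

Running the same checks in two independent teams, as described above, then reduces to comparing the flagged lists and the transformed datasets.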
Publications and the underlying data are made available at the end of the
evaluation phase, once all data are collected and analysed (summer 2018). All
the data indicated in section 4.1.2 as Open Data will be made available for
re-use after the end of the project. The licences for publications and related
data will be defined in Version 4 of this plan, based on the final data, in
order to verify compliance with personal data protection regulations and the
results of the ethical approval process. Creative Commons is the chosen
licensing system, and the licence for each item will be selected using the
EUDAT licence wizard tool 14 .
## Allocation of resources
In Horizon 2020, costs related to open access to research data are eligible
for reimbursement during the duration of the project, under the conditions
defined in the Grant Agreement (Article 6). The project uses this option for
publications, while related data will be deposited in open repositories, free
of charge.
Human resources required to implement this plan are considered in the relevant
partners’ staff budget, according to their tasks in the project’s activities
(ITCL, EUN, Everis, FHSS, USAL).
Roles and responsibilities for data management within the project are
described in sections _General principles for data management_ and _Annex 1 –
Data collected and processes – summary_ .
## Data security
The key procedures for data security in the eConfidence project are outlined
in the document _D2.4 – Report with ethical and Legal Project compliance_ and
summarized in the following.
### Data collection
The collection of research data is carried out entirely through the Xtend
platform 15 (an educational platform made available by the Data Controller,
EVERIS): pre-test questionnaire, game play and post-test questionnaire.
Each participant accesses the platform through an individual account (username
and password), created by EVERIS and provided directly to the research
coordinator of each school through password protected files. The research
coordinator provides the students involved in the research with their
credentials ensuring confidentiality.
#### Anonymization Process
Results stored in the platform are not associated with the user's identity.
The name of each research participant appears on the consent forms, of which
one digital copy is kept by everis. All data in the platform are anonymized by
assigning an anonymized user to each student. The anonymization algorithm is
known only to the Data Controller, in order to maintain the anonymity of the
results.
The association between platform users and the students of each centre/school
is transmitted to the research coordinator of that school, in Excel format and
containing only the data of that school. The Excel sheet is secured with
256-bit AES (Advanced Encryption Standard) encryption and a password. The
password is sent to the centre's research coordinator by SMS, so as not to use
the same communication channel as for the Excel sheet. The mapping between
students and platform users is stored by each school following the legal
requirements of the country and is to be destroyed at the end of the project.
All data collected during the study through the platform are associated with
the platform user. This means that reports, results, internal communications
and external publications do not contain any personal data of the students.
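A minimal sketch of the pseudonymization step described above might look as follows; the ID format, sheet layout and names are illustrative assumptions (the actual anonymization algorithm is known only to the Data Controller, and the real sheet is additionally AES-256 encrypted before transmission).

```python
import csv
import io
import secrets

def pseudonymize(students):
    """Assign each student an opaque platform user ID.

    Only the Data Controller keeps the mapping that links IDs back to
    identities; research data in the platform carries only the IDs.
    """
    mapping = {}
    for name in students:
        uid = "user-" + secrets.token_hex(4)  # anonymized platform account
        while uid in mapping.values():        # avoid (unlikely) collisions
            uid = "user-" + secrets.token_hex(4)
        mapping[name] = uid
    return mapping

def mapping_sheet(mapping):
    """Serialize the per-school mapping (the sheet that would then be
    encrypted and password-protected before being sent to the coordinator)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["student", "platform_user"])
    for student, uid in mapping.items():
        writer.writerow([student, uid])
    return buf.getvalue()

m = pseudonymize(["Alice Example", "Bob Example"])  # hypothetical names
sheet = mapping_sheet(m)
```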
### Data maintenance and storage
#### Data access in Xtend platform
Research data and the research-related personal data collected are stored only
in Xtend systems. Personal data are accessible only by the Data Controller.
Access is restricted to each participant, under their fictional
pseudo-identity, and to the members of the Data Controller organisation and
the eConfidence research team.
Each access to the research data is properly logged with the information of
the authorized user who requests access to the data.
Access is managed using cost-effective state of the art information security
techniques: i.e. mutual authentication of the experimental prototype and its
authorized users, restricted access for each user to functionality required to
fulfil their project role, and encryption of all messages passing between the
users and the experimental prototype.
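The role-restricted, logged access described above can be sketched roughly as follows; the role names mirror Table 4, but the permission sets and function names are illustrative assumptions, not the Xtend implementation.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("platform.access")

# Role -> permitted actions (illustrative; "*" means unrestricted).
PERMISSIONS = {
    "student": {"read_own_questionnaire", "run_game"},
    "coordinator": {"read_own_questionnaire", "run_game", "read_group"},
    "administrator": {"*"},
}

audit_trail = []  # every request is recorded, granted or not

def access(user, role, action):
    """Grant or deny an action and log who requested what, and when."""
    allowed = "*" in PERMISSIONS[role] or action in PERMISSIONS[role]
    audit_trail.append(
        (datetime.now(timezone.utc).isoformat(), user, action, allowed)
    )
    log.info("%s requested %s: %s", user, action,
             "granted" if allowed else "denied")
    return allowed

access("user-a1b2", "student", "run_game")    # granted
access("user-a1b2", "student", "read_group")  # denied: not in the student role
```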
In Xtend, three roles have been defined and associated with eConfidence
profiles. **Table 4 - Xtend platform roles**
<table>
<tr>
<th>
**e-confidence Profile**
</th>
<th>
**Xtend role**
</th>
<th>
**Groups inside the role**
</th> </tr>
<tr>
<td>
Student
</td>
<td>
Student
</td>
<td>
2 groups: control and experimental
</td> </tr>
<tr>
<td>
Research Coordinator
</td>
<td>
Coordinator
</td>
<td>
2 groups: school and school group
</td> </tr>
<tr>
<td>
Data Manager
</td>
<td>
Administrator
</td>
<td>
No group
</td> </tr> </table>
Functionalities and access defined for each role are explained in the table
below.
##### Table 5 - Xtend platform functionalities
<table>
<tr>
<td>
Student
</td>
<td>
Access to own questionnaire
</td>
<td>
Complete and send the completed questionnaire
</td>
<td>
To run the game
</td>
<td>
To see all students of his/her group
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Coordinator
</td>
<td>
Access to own questionnaire
</td>
<td>
Complete and send the completed questionnaire
</td>
<td>
To run the game
</td>
<td>
To see all students of his/her group
</td>
<td>
To see all Xtend profiles of his/her group
</td>
<td>
To send messages to all students of his/her group
</td>
<td>
To change his/her password
</td> </tr>
<tr>
<td>
Administrator
</td>
<td>
All Xtend functionality
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
#### Process of backups of Xtend platform
The Xtend platform is hosted on Amazon Web Services (AWS) infrastructure and
follows an automated procedure for daily backups of the managed data. A daily
backup is scheduled for all instances of the platform, using the AES-256
encryption algorithm; daily backups have a retention period of 30 days.
A second backup is scheduled once a month, with a retention period of 5 years.
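Under the stated retention rules (30 days for daily backups, 5 years for monthly ones), the pruning decision can be modelled as below; this is an illustrative sketch, not the actual AWS backup configuration.

```python
from datetime import date, timedelta

DAILY_RETENTION = timedelta(days=30)
MONTHLY_RETENTION = timedelta(days=5 * 365)

def backups_to_keep(daily, monthly, today):
    """Return the backup dates still within their retention window."""
    keep_daily = {d for d in daily if today - d <= DAILY_RETENTION}
    keep_monthly = {d for d in monthly if today - d <= MONTHLY_RETENTION}
    return keep_daily, keep_monthly

# Hypothetical backup inventory as of 1 June 2018.
today = date(2018, 6, 1)
daily = [today - timedelta(days=n) for n in range(60)]
monthly = [date(2018, m, 1) for m in range(1, 6)] + [date(2013, 1, 1)]

kd, km = backups_to_keep(daily, monthly, today)
print(len(kd))  # 31: today plus the previous 30 days
print(len(km))  # 5: the 2013 backup has aged out of the 5-year window
```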
## Ethical aspects
The ethical aspects of the research generating the scientific data of the
project are covered in the following deliverables, also taking into
consideration the European Commission Ethics Summary Report for the project.
* _D2.3 – Intervention protocol_
* _D2.4 – Report with ethical and Legal Project compliance_
As mentioned, these aspects are taken into consideration in the selection of
data to be made available for re-use (section 4.1.2) and for the security
procedures (section 4.5).
### Consent
Expressed written consent is collected for all participants (students 12-14
years old) and their parents, before the pre-test phase. Participants and
parents were also provided with Information sheets.
The request for consent makes clear that:
* anonymized or processed data could be used in future studies as well as for publication purposes
* personal privacy and data protection are guaranteed during these activities
* data from the tests (anonymized) may be reused by other researchers after the eConfidence project, for validation purposes or for new research.
The full process to manage the consent is outlined in D2.4.
Since the name of the parent/guardian as well as their respective child(ren)
constitute personal data, the consent forms are handled as follows:
* A digital copy of the consent form is made and kept on a secure computer at the Data Controller’s premises; only the data controller has access to these copies.
* The hardcopy is destroyed.
* An arbitrary index is assigned to each participant.
The correspondence between the arbitrary index and the softcopy consent forms
is held in a suitably encrypted table on a secure computer at the Data
Controller's premises. This table also contains a cross-reference to the data
processor(s) for the data associated with the indexes in the table. Any
datasets (video, audio, etc.) are associated with the generated index. Only
the Data Controller has access to the correspondence between consent forms and
indexes.
## Other issues
Table 6 contains other relevant national, sectorial and institutional
references and procedures for data management.
### **Table 6 – Other references for data management**
<table>
<tr>
<th>
**Organisation**
</th>
<th>
**National regulations**
</th>
<th>
**Other references**
</th> </tr>
<tr>
<td>
P1 – Instituto
Tecnologico de
Castilla y León
</td>
<td>
Organic Law 15/1999 from 13th of
December for personal data protection
</td>
<td>
</td> </tr>
<tr>
<td>
P2 – EUN
Partnership aisbl
</td>
<td>
GDPR/Privacy Act 8th December 1992 – protection of privacy in relation to the
processing of personal data
</td>
<td>
Belgian Data Protection Authority
([email protected])
</td> </tr>
<tr>
<td>
P5 – University of
Salamanca
</td>
<td>
</td>
<td>
Comité de Bioética of the Univ. of Salamanca
https://evaluaproyectos.usal.es/main_page.php
</td> </tr>
<tr>
<td>
P6 – FHSS RIJEKA
</td>
<td>
The Law on Protection of Personal Data (Republic of Croatia, Official Gazette
no. 103/03, 118/06, 41/08, 130/11, 106/12)
</td>
<td>
</td> </tr> </table>
Annex 1 – Data collected and processes – summary
<table>
<tr>
<th>
**Organisation**
</th>
<th>
**Dataset name**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th>
<th>
**Format**
</th>
<th>
**Collection process**
</th>
<th>
**Owner**
</th>
<th>
**Storage**
</th>
<th>
**Access/**
**Privacy level**
</th>
<th>
**Backup**
</th>
<th>
**Destruction at the end of the project**
</th>
<th>
**Retention in years**
</th> </tr>
<tr>
<td>
P1 - ITCL
</td>
<td>
Anonymized test
about aesthetic line
</td>
<td>
Survey containing several questions about gaming preferences, aesthetic lines,
colour palettes and game narrative, administered to the target group
</td>
<td>
Project data
</td>
<td>
pdf
</td>
<td>
Physical forms with digitisation
</td>
<td>
ITCL
</td>
<td>
ITCL local repository
</td>
<td>
ITCL staff
</td>
<td>
No backup
</td>
<td>
NO
</td>
<td>
5 (for
management and auditing requirements)
</td> </tr>
<tr>
<td>
P1 - ITCL
</td>
<td>
Anonymized test about Beta version of
School of Empathy
</td>
<td>
Survey containing several questions about the first beta version of School of
Empathy, administered to the target group
</td>
<td>
Project data
</td>
<td>
pdf
</td>
<td>
Physical forms with digitisation
</td>
<td>
ITCL
</td>
<td>
ITCL local repository
</td>
<td>
ITCL staff
</td>
<td>
No backup
</td>
<td>
NO
</td>
<td>
5 (for
management and auditing requirements)
</td> </tr>
<tr>
<td>
P2 – EUN
</td>
<td>
Call for schools
</td>
<td>
Organisation, contact persons, applications and selection process data, to
manage the selection and agreement with schools
</td>
<td>
Organisation and personal data
</td>
<td>
.xls
</td>
<td>
Through application form
</td>
<td>
</td>
<td>
EUN NAS
(server)
</td>
<td>
EUN Staff and experts
</td>
<td>
Once
</td>
<td>
No
</td>
<td>
5 (for
management and auditing requirements)
</td> </tr>
<tr>
<td>
P3 – EVERIS SPAIN SL
</td>
<td>
Experts list
</td>
<td>
List of experts related to the eConfidence project.
</td>
<td>
Public data
</td>
<td>
Pdf
</td>
<td>
In public web
</td>
<td>
everis
</td>
<td>
Everis client database
</td>
<td>
everis workers
</td>
<td>
No
</td>
<td>
No
</td>
<td>
Indefinitely
</td> </tr>
<tr>
<td>
P3 – EVERIS SPAIN SL
</td>
<td>
Consent forms in digital format
</td>
<td>
Form consent of students and parents for econfidence use.
</td>
<td>
Personal data
</td>
<td>
Pdf
</td>
<td>
Physical form with digitisation
</td>
<td>
Everis
</td>
<td>
everis local repository
</td>
<td>
256-bit AES &
Password protected/
Only Data Controller
</td>
<td>
No backup
</td>
<td>
No
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
P3 – EVERIS SPAIN SL
</td>
<td>
File with the association between student and Xtend User – digital format
</td>
<td>
Excel sheet with the association between Xtend platform user with the student
data
</td>
<td>
Personal and project data
</td>
<td>
xls
</td>
<td>
List form
</td>
<td>
everis
</td>
<td>
everis local repository
</td>
<td>
256-bit AES &
Password protected/ Only Data Controller
</td>
<td>
No backup
</td>
<td>
Yes
</td>
<td>
0
</td> </tr>
<tr>
<td>
P3 – EVERIS SPAIN SL
</td>
<td>
Platform Users
</td>
<td>
Users of the Xtend platform, associated with the profile that defines their
access to data
</td>
<td>
Project data
</td>
<td>
In platform Database
</td>
<td>
Database
list
</td>
<td>
everis
</td>
<td>
Xtend platform
</td>
<td>
Password protected. Only Data Manager
</td>
<td>
Xtend platform
</td>
<td>
No
</td>
<td>
5 _(for_
_management and auditing requirements)_
</td> </tr>
<tr>
<td>
P3 – EVERIS SPAIN SL
</td>
<td>
Results and reports of using platform (questionnaire and games)
</td>
<td>
Research and personal data about the use of Xtend platform
</td>
<td>
Results data
</td>
<td>
In platform Database
</td>
<td>
Database information or reports in Xtend platform
</td>
<td>
everis
</td>
<td>
Xtend platform
</td>
<td>
Xtend users considering profiles
</td>
<td>
Xtend platform
</td>
<td>
No
</td>
<td>
5 _(for_
_management and auditing requirements)_
</td> </tr> </table>
This project has received funding from the European Union's Horizon 2020
research and innovation programme under grant agreement - No 732420
This communication reflects only the author's view. It does not represent the
view of the European Commission, and the EC is not responsible for any use
that may be made of the information it contains.
**0422_PreCoM_768575.md** (source: https://phaidra.univie.ac.at/o:1140797, Horizon 2020)
# Executive Summary
**The PreCoM project**
Cheaper and more powerful sensors and predictive cognitive CBM systems,
together with big data analytics, offer an unprecedented opportunity to track
machine-tool performance and health condition. However, manufacturers spend
only 15% of their total maintenance costs on predictive (vs. reactive or
preventive) maintenance.
The PreCoM project will deploy and test a predictive cognitive maintenance
decision-support system able to identify and localize damage, assess damage
severity, predict damage evolution, assess remaining asset life, reduce the
probability of false alarms, provide more accurate failure detection, issue
notices to conduct preventive maintenance actions and ultimately increase
in-service efficiency of machines by at least 10%.
The platform includes 4 modules: 1) a data acquisition module leveraging
external sensors as well as sensors directly embedded in the machine tool
components, 2) an artificial intelligence module combining physical models,
statistical models and machine-learning algorithms able to track individual
health condition and supporting a large range of assets and dynamic operating
conditions, 3) a secure integration module connecting the platform to
production planning and maintenance systems via a private cloud and providing
additional safety, self-healing and self-learning capabilities and 4) a human
interface module including production dashboards and augmented reality
interfaces for facilitating maintenance tasks.
The consortium includes 3 end-user factories, 3 machine-tool suppliers, 1
leading component supplier, 4 innovative SMEs, 3 research organizations and 3
academic institutions. Together, we will validate the platform in a broad
spectrum of real-life industrial scenarios (low volume, high volume and
continuous manufacturing). We will also demonstrate the direct impact of the
platform on maintainability, availability, work safety and costs in order to
document the results in detailed business cases for widespread industry
dissemination and exploitation.
**Goal and structure of deliverable**
The present document _D9.5: Open Data Management Plan_ is a public deliverable
of the PreCoM project, developed within _WP9: Dissemination, Communication &
Ecosystem Development_ at month 6 (April 2018).
This deliverable is based on the Template for the Open Research Data
Management Plan (DMP) 1 recommended by the European Commission. The
following sections describe how PreCoM plans to make the project data
Findable, Accessible, Interoperable and Reusable (FAIR). This DMP constitutes
a preliminary version produced at month 6 which will be updated progressively
during the project course, when the specific types of data and open data will
be defined in detail, selected and planned for eventual publications. Partners
will check throughout the project whether the publication of some or all types
of data could be incompatible with the obligation and will to protect emerging
results that can reasonably be expected to be commercially or industrially
exploited.
**PreCoM Open Research Data Management Plan (DMP)**
# SUMMARY
_(dataset_ 2 _reference and name; origin and expected size of the data
generated/collected; data types and formats)_
<table>
<tr>
<th>
**Purpose of the data collection/generation**
* Analysing the condition monitoring, maintenance, production, quality and production cost information coming from the three use-cases
* Sharing information between partners
* External dissemination and communication (through e.g. publications and reports)
**Relation to the objectives of the project**
* The prediction models will be based on historical data as well as data sets recorded during the project
* To support the maintenance technicians and manager with information
* To continuously improve the accuracy of the models/modules included by the system
**Types and formats of data generated/collected**
* Office files (.docx, .pptx, .xlsx)
* Pdf-files
* 3D-Model-file (.vrml, .fbx)
* Image- and video-files (.png, .jpg, .mp4)
* Matlab files (.mat)
* csv-files
* Text files (.txt)
* Sensor (raw) data in time and frequency domain
* NC data
* Python script file (.py)
* R software files: R script file (.r), R objects file (.rds, .rda, .RData)
* Open document: text documents (.odt), spreadsheet documents (.ods), database documents (.odb), graphics documents (.odg) and formula documents (.odf).
</th> </tr> </table>
<table>
<tr>
<th>
* Compressed Files (.zip, .rar, .tar.gz)
* Database Files: SQL file (.sql), JSON file (.json)
* TeX files: LaTeX file (.tex), R Markdown file (.rmd) and R Knitr file (.rnw)
* XML viewer (.xml)
**Re-use of existing data:**
**Yes, it will be done, in particular:**
* Historical data from the Condition Monitoring Systems as well as other production software (also including excel-sheets) is re-used
* (Maintenance) documentation of the production machines
* Economic data concerning, for example production losses per time unit, maintenance and production costs.
* Quality data, for example defectives, quality rate and causes behind that.
* Existing images, manuals and video files to guide workers through maintenance processes
* Exiting 3D-Model files for worker guidance and machine status overview
**Origin of the data**
* Condition Monitoring Systems of the production machines including sensor platform, NC
Data, and external controllers
* Production Software (MES, PPS, etc.)
* Economic and quality systems
* Documentations/Manuals of Production Machines
**Expected size of the data**
* For each machine (on average):
− 600MB per month (information from CNC)
− 4MB per file diagnosis cycle (high sampling rate files).
* Internal Repository for sharing information between partners, publication and reports: < 1TB
* For each publicly-available file (e.g., publications, open dataset): < 10 MB
**Data Utility: to whom will it be useful?**
* all the partners involved in the project and the scientific community
</th> </tr> </table>
# MAKING DATA FINDABLE
_(dataset description: metadata, persistent and unique identifiers e.g., DOI)_
<table>
<tr>
<th>
**Discoverability of data (metadata provision)**
* The data from the Condition Monitoring Systems are stored in a cloud database (so-called SAVVY Cloud) for internal use, which includes metadata; all office documents include metadata as well.
* DOI when published (scientific articles)
**Identifiability of data and standard identification mechanisms. Do you make
use of persistent and unique identifiers such as Digital Object Identifiers?**
* Yes, in publications and data appearing in journals, magazines and other collections
(assigned by publisher)
**Naming conventions used**
* Date, purpose of the document, editors and version number
**Approach towards search keyword**
* A limited and appropriate set of keywords will be selected for each publication/dataset, as well as each deliverable. Publications should integrate the terms "European Union
(EU)", "Horizon 2020", "PreCoM" and the Grant agreement number
* No keywords in internal documents or condition monitoring data
**Approach for clear versioning**
* Office documents are named with the version number and the name of the editors
* Deliverables include a table, which lists the different versions together with the editors
* Condition monitoring data, such as vibration measurements, do not need versioning, as only one version exists and the data are distinguished by measurement date and time
**Standards for metadata creation (if any). If there are no standards in your
discipline describe what metadata will be created and how.**
* Descriptive and structural Metadata
</th> </tr> </table>
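The naming convention above (date, purpose, editors, version number) could be generated and validated as sketched below; the exact field order, separators and example values are assumptions, since the DMP only lists the fields.

```python
import re
from datetime import date

def document_name(purpose, editors, version, when=None, ext="docx"):
    """Build a file name from date, purpose, editors and version number.

    The ordering and the underscore separators are illustrative assumptions.
    """
    when = when or date.today()
    eds = "-".join(editors)
    return f"{when.isoformat()}_{purpose}_{eds}_v{version}.{ext}"

NAME_RE = re.compile(r"^(\d{4}-\d{2}-\d{2})_([^_]+)_([^_]+)_v([\d.]+)\.(\w+)$")

def parse_name(name):
    """Split a conforming file name back into its fields, or return None."""
    m = NAME_RE.match(name)
    return m.groups() if m else None

# Hypothetical editor name and version.
n = document_name("DMP", ["Smith"], "1.2", when=date(2018, 4, 30))
print(n)  # 2018-04-30_DMP_Smith_v1.2.docx
```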
# MAKING DATA OPENLY ACCESSIBLE
_(which data will be made openly available and if some datasets remain closed,
the reasons for not giving access; where the data and associated metadata,
documentation and code are deposited (repository?); how the data can be
accessed (are relevant software tools/methods provided?)_
<table>
<tr>
<th>
**Specify which data will be made openly available? If some data is kept
closed provide rationale for doing so**
* The publicly-available data will be published in public deliverables within the PreCoM project and scientific journals.
* The selection of open data will be done progressively during the project as far as the consortium defines and agrees in this respect.
* The potential exploitation of production and documentation data (data from the condition monitoring systems), as well as of other types of data, may lead to keeping some data closed, as they might contain intellectual property (e.g. NC code, CAD models) from partners or third parties.
**Specify how the data will be made available**
* PreCoM website
* Open Access journal publications
* Data repository (Zenodo)
**Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?**
* Depending on the data disclosed, specific software may be needed to open them, such as:
* Matlab
* Microsoft Office
* Statistical Software R (Open Source)
* Statistical Software SPSS
* SAVVY Cloud REST API (described in D2.3)
* Savvy interoperability modules (data accessible in the shopfloor)
* Python software (Open Source)
* Documentation of the software is provided by the software provider
</th> </tr>
<tr>
<td>
**Specify where the data and associated metadata, documentation and code are
deposited**
* Internal project repository (LRZ Sync&Share hosted by TUM) for Office documents
* SAVVY Cloud for Condition Monitoring data
* Data repository (Zenodo) for public deliverables and open access publications and data
**Specify how access will be provided in case there are any restrictions**
* A formal request should be sent via e-mail to the project coordinator (Basim Al-
Najjar, [email protected]_ ; Francesco Barbabella,
[email protected]_ ), which will evaluate the request together with
the relevant other partners and will eventually grant (partial or total)
access to restricted data.
</td> </tr> </table>
# MAKING DATA INTEROPERABLE
_(which standard or field-specific data and metadata vocabularies and methods
will be used)_
<table>
<tr>
<th>
**Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.**
* Use of common and standard file formats (.txt, .mat, .docx, .pptx, .csv, .xlsx)
* Further specific formats and eventual conversions have to be defined
**Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies?**
* Explicit and interoperable vocabulary can be used and eventually definitions can be provided to clearly define terms
</th> </tr> </table>
# INCREASE DATA RE-USE
_(what data will remain re-usable and for how long, is embargo foreseen; how
the data is licensed; data quality assurance procedures)_
<table>
<tr>
<th>
**Specify how the data will be licensed to permit the widest reuse possible**
* Open publications and data will be licensed under CC Attribution-NonCommercial 4.0 International license or similar ones (to be agreed on a case-by-case by the consortium).
* Further restrictions might be possible depending on the type of data and eventual emerging IPRs.
**Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed**
* Anonymized Condition Monitoring data used for validation in publications and public deliverables might be made available for re-use directly after the publication, if no issue emerges from IPR protection or other industrial needs
* In addition, specific sets of condition monitoring data can be anonymized and made available for re-use on request
* Methodologies and codes generated during the project might be disclosed only after eventual patents or other IPR protection measures will be fully granted in relevant countries
**Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why**
* Re-use of the processed data published in journals are restricted according to abovementioned conditions for granting open access data.
* In addition, specific sets of condition monitoring data can be anonymized and made available for re-use on request
**Describe data quality assurance processes**
* Data quality assurance processes will be defined in detail when the types of data produced during the project will be clarified.
**Specify the length of time for which the data will remain re-usable**
* For at least 3 years after the end of the project, open data could be re-used.
</th> </tr> </table>
# ALLOCATION OF RESOURCES and DATA SECURITY
_(estimated costs for making the project data open access and potential value
of long-term data preservation; procedures for data backup and recovery;
transfer of sensitive data and secure storage in repositories for long term
preservation and curation)_
<table>
<tr>
<th>
**Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs**
* Open access publication costs (including open access data) in journals may vary from 1,000 to 3,000 €
* A publication budget of 33,000 Euros has been split between Linnaeus University (Coordinator), eMaintenance, Paragon, ITMATI, Technical University Munich, Technical University Chemnitz for open (gold) access publication costs.
* The data repository (Zenodo) is free.
**Clearly identify responsibilities for data management in your project**
* Open Data Manager: to be appointed (TUM, WP9 Leader). He/she will coordinate the open data management activities and make proposals to the Executive Board on the definition of the data produced during the project, the selection of open data and their publication.
* Executive Board: 1 representative per WP leader, including: Linnaeus University (Coordinator and Chair), CEA, ITMATI, Technical University Munich, Technical University Chemnitz, eMaintenance, Ideko, Vertech Group. The Executive Board evaluates the proposals by the Open Data Manager and discusses eventual issues and implications for publications and data access, making binding decisions when relevant. Further project partners and/or third parties might be involved in the discussion when data are produced by or relate to other organizations.
* Responsible for the internal data repository (LRZ Sync&Share): Simon Zhai (TUM). He keeps the project intranet accessible and updated at a secure address.
* Responsible for the SAVVY Cloud for Condition Monitoring Data (information from machines): to be appointed (SAVVY). He/she will manage the cloud infrastructure enabling the collection and analysis of data from the demonstration companies.
* Responsible for the Data Repository (Zenodo): to be appointed (TUM). He/she will manage the publication of open access publications and data in online repositories.
**Describe costs and potential value of long term preservation**
* Data storage is managed at company scale. Therefore, storage for R&D projects is not really limited and represents negligible costs.
</th> </tr>
<tr>
<td>
**Address data recovery as well as secure storage and transfer of sensitive
data**
* Internal repository (LRZ Sync&Share): for each file, the last five versions are stored and can be restored
* SAVVY Cloud:
− Incremental backups are performed for machine information.
− Daily backups are performed for management information (metadata).
* Sensitive data are transferred through the password-secured internal repository (LRZ Sync&Share); SAVVY Cloud provides password-secured access (and TLS communication) to the project partners who need to work with the condition monitoring data
</td> </tr> </table>
**0424_VICINITY_688467.md**
# Executive Summary
_«The VICINITY project will build and demonstrate a bottom-up ecosystem of
decentralised interoperability of IoT infrastructures called virtual
neighborhood, where users can share the access to their smart objects without
losing the control over them.»_
The present document is a deliverable “D9.3 – Data Management Plan” of the
VICINITY project (Grant Agreement No.: 688467), funded by the European
Commission’s Directorate-General for Research and Innovation (DG RTD), under
its Horizon 2020 Research and Innovation Programme (H2020).
The VICINITY Consortium has identified several areas that need to be
addressed: protocol interoperability, identification tokens, encryption keys,
data formats and packet size. Several further issues relate to latency,
bandwidth and general architecture.
VICINITY's activities will involve human participants, as some of the pilots
will be conducted in real homes with actual residents.
residents. For some of the activities to be carried out by the project, it may
be necessary to collect basic personal data (e.g. name, background, contact
details), even though the project will avoid collecting such data unless
necessary. Such data will be protected in accordance with the EU's Data
Protection Directive 95/46/EC 1 of the European Parliament and of the
Council of 24 th of October 1995 on the protection of individuals with
regard to the processing of personal data and on the free movement of such
data. National and local legislations applicable to the project will also be
strictly applied (full list described in annex 2: ethics and security).
All personal data, or data directly related to the residents, will first be
collected when the project has received a signed informed consent form from
the subjects participating.
This is the second version of the project Data Management Plan (DMP). It
contains preliminary information about the data the project will generate,
whether and how it will be exploited or made accessible for verification and
re-use, and how it will be curated and preserved. The purpose of the Data
Management Plan is to provide an analysis of the main elements of the data
management policy that will be used by the consortium with regard to all the
datasets that will be generated by the project. The DMP is not a fixed
document, but will evolve during the lifespan of the project (Figure 1).
**Figure 1: Data Management Plan – deliverables 2016 – 2019**
_Note: In order to assist the official project review process by the
commission for the first project period (M1-M24), a preliminary version of the
updated DMP of D9.3 was delivered prior to M24 (December 2017), in order to
enable a better assessment of the progress of the Data Management in the
project by the reviewers._
The datasets referred to in this document were drafted during the first stages
of the project (completed 30th of June 2016). The document can only
reflect the intentions of the project partners toward developing the overall
project’s datasets. The second revision (D9.3) has been prepared for 31st
December 2017, and the third (D9.4) will be ready by 31st December 2019. This
follows the H2020 guidelines on Data Management Plans, and as stated in the
Grant Agreement 688467.
As the project progresses and results start to arrive, the datasets will be
elaborated on. Detailed descriptions of all the specific datasets that
have been collected will be provided and made available under the relevant
Data Management framework.
# Introduction
The purpose of the Data Management Plan (DMP) deliverable is to provide
relevant information concerning the data that will be collected and used by
the partners of the project VICINITY. The project aims to develop a solution
defined as “Interoperability as a Service” which will be a part of the
VICINITY open gateway (Figure 2). In order to achieve this, a platform for
harvesting, converting and sharing data from IoT units has to be implemented
on the service layer of the network.
**Figure 2: Domains and some of the functionalities the DMP has to cover**
This goal entails the need for good documentation and implementation of
descriptors, lookup-tables, privacy settings and intelligent conversion of
data formats. The strength of having a cloud-based gateway is that it should
be relatively simple to upgrade with new specifications and implement
conversion, distribution and privacy strategies. In particular, the privacy
part is considered an important aspect of the project, as VICINITY needs to
follow and adhere to strict privacy policies. It will also be necessary to
focus on possible ethical issues and access restrictions regarding personal
data so that no regulations on sensitive information are violated.
The datasets collected will belong to four main domains: smart energy,
mobility, smart home and eHealth (Figure 3: Example of potential data points
in use cases that generate data). Several standards and guidelines exist
that the project needs to be aware of within each of these fields. There
are a number of different vendors and disciplines involved – and much of the
information that is available only exists in proprietary data formats. For
this reason, VICINITY will target IoT units that follow the specifications
defined by oneM2M consortium, ETSI standardization group and international
groups and committees.
The DMP has undergone some changes, in particular with regard to privacy
concerns when collecting and distributing data. This version of the document is
based on the knowledge generated through discussions, demonstrations and
preparations for deployment at pilot sites.
**Figure 3: Example of potential data points in use cases that generate
data.**
# General Principles
## 3.1. Participation in the Pilot on Open Research Data
VICINITY participates in the Pilot on Open Research Data launched by the
European Commission along with the Horizon2020 programme. The consortium
believes firmly in the concepts of open science, and the large potential
benefits the European innovation and economy can draw from allowing reusing
data at a larger scale. Therefore, all data produced by the project may be
published with open access – though this objective will obviously need to be
balanced with the other principles described below.
## 3.2. IPR management and security
As a research and innovation action, VICINITY aims at developing an open
framework and gateway – but with support for value added services and business
models. The project consortium includes partners from private sector, public
sector and end-users (Figure 4). Some partners may have Intellectual Property
Rights on their technologies and data. Consequently, the VICINITY consortium
will protect that data and crosscheck with the concerned partners before data
publication.
**Figure 4: The VICINITY consortium includes partners from different sectors
with confidential data**
A holistic security approach will be followed, in order to protect the pillars
of information security (confidentiality, integrity, availability). The
security approach will consist of a methodical assessment of security risks
followed by their impact analysis. This analysis will be performed on the
personal information and data processed by the proposed system, their flows
and any risk associated to their processing.
Security measures will include secure protocols (HTTPS and SSL), login
procedures, as well as protection against bots and other malicious attacks
such as CAPTCHA technologies. Moreover, the industrial demo sites apply
monitored and controlled procedures related to the data collection, their
integrity and protection. The data protection and privacy of personal
information will include protective measures against infiltration as well as
physical protection of core parts of the systems and access control measures.
## 3.3. Personal Data Protection
The technical implementation of VICINITY does not itself expose, use or
analyze data, but some activities will involve human participants. The pilots will be
conducted in real apartments and cover real use scenarios related to health
monitoring, booking, home management, governance, energy consumption and other
various human activity and behavior analysis –related data gathering purposes.
Some of the activities to be carried out by the project may need to gather
some basic personal data (e.g. name, background, contact details, interest,
IoT units and assigned actions), even though the project will avoid collecting
such data unless data is really necessary for the application.
Such data will be protected in accordance with the EU's Data Protection
Directive 95/46/EC of the European Parliament and of the Council of 24
October 1995 “on the protection of individuals with regard to the processing
of personal data and on the free movement of such data” (Figure 5).
**Figure 5: VICINITY complies with European and national legislations**
WP3 and WP4 activities dealing with the implementation and deployment of core
components will be performed in Slovakia under leadership of local partners
(BVR and IS). For this reason, the solution will be reviewed for compliance
with Data Protection Act No. 122/2013 approved by National Council of the
Slovak Republic together with its amendment No. 84/2014 which already reflects
the EC directive proposal 2012/0011/COD.
WP7 and WP8 activities will be performed in Greece, Portugal and Norway under
the leadership of local partners. In the following the consortium outlines the
legislation for the countries involved in the Trial:
1. Greek Trial in Municipality of Pilea-Hortiatis, Thessaloniki, for Greece, legislation includes “Law 2472/1997 (and its amendment by Law 3471/2006) of the Hellenic Parliament”.
* Regulatory authorities and ethical committees: Hellenic Data Protection Authority http://www.dpa.gr/
2. Norwegian trials in Teaterkvarteret healthcare assisted living home in Tromsø and offices in Oslo Sciencepark, Oslo, have to comply with the national legislation “Personal Data Act of 14 April No. 31” relating to the processing of personal data.
* Each pilot demonstration has to notify regulatory body Datatilsynet pursuant to section 31 of the Personal Data Act and section 29 of the Personal Health Data Filing System Act.
3. Portuguese Trial in Martim Longo microgrid pilot site in the Algarve region, Portugal. The Portuguese renewable energy legislative base dates back to 1988, and has been upgraded and reviewed multiple times since then. The most important legislative diplomas are: DL 189/88, DL 168/99, DL 312/2001, DL 68/2002, DL 29/2006 and DL 153/2014. The last on the list also marks one of the most important legislative changes, being the legislative base for broad-based auto-consumption, with the possibility to inject excess energy into the grid under certain conditions.
* The collection and use of personal data in Portugal are regulated by the following two laws: “Law 41/2004” (and its amendment “Law 46/2012”), and “Law 32/2008”.
Further information on how personal data collection and handling should be
approached in the VICINITY project will be provided in other deliverables.
All personal data collection efforts of the project partners will be
established after giving subjects full details on the experiments to be
conducted, and obtaining from them a signed informed consent form (see Annex
2: VICINITY consent form template), following the respective guidelines set in
VICINITY and as described in section 3.4: Ethics and Security.
Beside this, certain guidelines will be implemented in order to limit the risk
of data leaks;
* Keep anonymised data and personal data of respondents separate;
* Encrypt data if it is deemed necessary by the local researchers;
* Store data in at least two separate locations to avoid loss of data;
* Limit the use of USB flash drives;
* Save digital files in one of the preferred formats (see Annex 1), and
* Label files in a systematically structured way in order to ensure the coherence of the final dataset
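As an illustration of the first guideline, the sketch below separates the personal and anonymised parts of a record into different storage locations, linked only by a random identifier. The field names (`name`, `contact_details`, `background`) and directory layout are assumptions for illustration; each pilot will define its own schema.

```python
import json
import secrets
from pathlib import Path

# Hypothetical split of fields; the actual fields depend on each pilot's dataset.
PERSONAL_FIELDS = {"name", "contact_details", "background"}

def split_record(record: dict, personal_dir: Path, anon_dir: Path) -> str:
    """Store the personal and anonymised parts of a record in separate
    locations, linked only by a random pseudonymous identifier."""
    link_id = secrets.token_hex(8)
    personal = {k: v for k, v in record.items() if k in PERSONAL_FIELDS}
    anonymised = {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}
    personal_dir.mkdir(parents=True, exist_ok=True)
    anon_dir.mkdir(parents=True, exist_ok=True)
    (personal_dir / f"{link_id}.json").write_text(json.dumps(personal))
    (anon_dir / f"{link_id}.json").write_text(json.dumps(anonymised))
    return link_id
```

Storing the two parts in separate locations means a breach of the anonymised store alone does not reveal who the measurements belong to.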
A more formal description of best practice principles can be found in Table 1:
Best practice for use of production data.
## 3.4. Production data
The consortium is aware that a number of privacy and data protection issues
could be raised by the activities (use case demonstration and evaluation in
WP7 and WP8) to be performed in the scope of the project. The project involves
the carrying out of data collection in all pilot applications on the virtual
neighborhood. For this reason, human participants will be involved in certain
aspects of the system development by contributing real life data. During the
development life cycle process, it will be necessary to operate on datasets.
Some of the datasets may be based on production data, while others may be
generated (synthetic).
The VICINITY architecture is decentralised by design (Figure 6). Production
data will be used for testing purposes. Certain functionality like the
discovery function and the related search criteria, raise the need for proper
implementation of Things Ecosystem Description (TED) – which describes IoT
assets that exists in the same environment.
**Figure 6: The VICINITY architecture is decentralised by design**
The public will have access to the VICINITY ontology alongside the VICINITY
discovery function at the conclusion of the project. However, all data
generated through the test phase and development process will be removed.
<table>
<tr>
<th>
**BEST PRACTICE – PRODUCTION DATA**
The consortium will follow what is considered best practice for handling both
copies of production data and live data.
* **Data Obfuscation and security safeguards**
Use obfuscation methods to remove/protect data or reduce the risk of personal
information being harvested on data breach, and encrypt data where
appropriate.
* **Data minimization**
Minimize the size of datasets and the amount of fields used.
* **Physical/environmental protection and access control**
Restrict and secure the environment where the data is used and stored and
limit the ability to remove live data in either physical or electronic format
from the environment. Also limit access to the data to authorized users with
business needs and who have received appropriate data protection training.
* **Retention limits and data removal**
Limit the time period for use of the data and dispose of live data at end of
use period. Destroy physical and electronic live data used for training,
testing, or research at the conclusion of the project.
* **Use Limits**
Limit through controls and education the likelihood that live data, whose
integrity is not reliable, is re-introduced into production systems or
transferred to others beyond its intended purpose.
* **Watermarking**
</th> </tr>
<tr>
<td>
</td>
<td>
Include warning information on live data where possible to ensure users do not
assume it is dummy data. This applies to all pilot sites where time critical
actions have to be taken, and where forecast analysis needs to be based on
accurate data.
</td> </tr>
<tr>
<td>
•
</td>
<td>
**Legal Controls**
Implement Confidentiality and Non-Disclosure Agreements if applicable. This
will apply to all operators responsible for living labs that address eHealth
and assisted living.
</td> </tr>
<tr>
<td>
•
</td>
<td>
**Responsibility for accountability, training and awareness**
Ensure that identified personnel (by role) are assigned responsibility for
compliance with any conditions of the approval for the use of live data. The
personnel responsible for the technical description of the dataset will also
serve as contact for the use of live data. This also applies to providing
safety and training sessions for all persons having access to live data. The
partners responsible for pilot sites handling real time data from living labs
will prepare information that is to be handed out to relevant stakeholders.
</td> </tr> </table>
**Table 1: Best practice for use of production data**
How these best practice principles are being implemented, are described in
more detail in section 3.5 Ethics and Security and 3.11 Data sharing
## 3.5. Ethics and security
The consortium is aware that a number of privacy and data protection issues
could be raised by the activities (use case demonstration and evaluation in
WP7 and WP8) to be performed in the scope of the project. The project involves
the carrying out of data collection in all pilot applications on the virtual
neighborhood. For this reason, human participants will be involved in certain
aspects of the project and data will be collected. This will be done in full
compliance with any European and national legislation and directives relevant
to the country where the data collections are taking place
(INTERNATIONAL/EUROPEAN):
* The Universal Declaration of Human Rights and the Convention 108 for the Protection of Individuals with Regard to Automatic Processing of Personal Data and
* Directive 95/46/EC & Directive 2002/58/EC of the European parliament regarding issues with privacy and protection of personal data and the free movement of such data.
In addition to this, to further ensure that the fundamental human rights and
privacy needs of participants are met whilst they take part in the project, in
the Evaluation Plans a dedicated section will be delivered for providing
ethical and privacy guidelines for the execution of the Industrial Trials. In
order to protect the privacy rights of participants, a number of best practice
principles will be followed. These include:
* no data will be collected without the explicit informed consent of the individuals under observation. This involves being open with participants about what they are involving themselves in and ensuring that they have agreed fully to the procedures/research being undertaken by giving their explicit consent.
* The owners of personal data are to be granted the right of inspection and the right to be removed from the registers.
* no data collected will be sold or used for any purposes other than the current project;
* a data minimisation policy will be adopted at all levels of the project and will be supervised by each Industrial Pilot Demonstration responsible. This will ensure that no data which is not strictly necessary to the completion of the current study will be collected;
* During the development life cycle process, it will be necessary to operate on datasets. Some of the datasets may be based on production data, while others may be generated (synthetic). These data will be removed by the end of the project.
* Any shadow (ancillary) personal data obtained during the course of the research will be immediately cancelled. However, the plan is to minimize this kind of ancillary data as much as possible. Special attention will also be paid to complying with the Council of Europe’s Recommendation R(87)15 on the processing of personal data for police purposes, Art.2 :
_“The collection of data on individuals solely on the basis that they have a
particular racial origin, particular religious convictions, sexual behavior or
political opinions or belong to particular movements or organisations which
are not proscribed by law should be prohibited. The collection of data
concerning these factors may only be carried out if absolutely necessary for
the purposes of a particular inquiry.”_
* compensation – if and when provided – will correspond to a simple reimbursement for working hours lost as a result of participating in the study; special attention will be paid to avoid any form of unfair inducement;
* if employees of partner organizations, are to be recruited, specific measures will be in place in order to protect them from a breach of privacy/confidentiality and any potential discrimination; In particular their names will not be made public and their participation will not be communicated to their managers.
* Data should be pseudonymised and anonymised to allow privacy to be upheld even if an attacker gains access to the system.
* Furthermore, if data has been compromised or tampering is detected, the involved parties are to be notified immediately in order to reduce risk of misuse of data gathered for research purposes.
The same concerns addressed here also apply to open calls (see section 3.8
Open Call).
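Pseudonymisation as required by the principles above can be sketched as a keyed hash: the same identifier always maps to the same pseudonym (so records remain linkable), while the original identifier cannot be recovered without the key. This is one illustrative approach under an assumed project-held `secret_key`, not the project's prescribed mechanism.

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA256).
    Destroying or rotating the key effectively anonymises the records."""
    digest = hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]
```

Because the pseudonym is deterministic per key, linking across datasets is still possible for authorised holders of the key, while the published data contains no direct identifiers.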
## 3.6. The VICINITY Data Management Portal
VICINITY will develop a data management portal as part of the project. This
portal will provide to the public, for each dataset that will become publicly
available, a description of the dataset along with a link to a download
section. The portal will be updated each time a new dataset has been collected
and is ready for public distribution. The portal will, however, not contain any
datasets that should not become publicly available.
The initial version of the portal became available during the 2nd year of the
project, in parallel to the establishment of the first versions of project
datasets that can be made publicly available. The VICINITY data management
portal will enable project partners to manage and distribute their public
datasets through a common infrastructure as described in Table 2.
<table>
<tr>
<th>
**One dataset for (I/II)**
</th>
<th>
**One dataset for (II/II)**
</th>
<th>
**Administrative tools**
</th> </tr>
<tr>
<td>
each IoT unit
</td>
<td>
Datasets from pilots (see section 3.5 for examples)
</td>
<td>
List of sensor / grouping
</td> </tr>
<tr>
<td>
personal information
</td>
<td>
groups of devices
</td>
<td>
List of actions / sequences
</td> </tr>
<tr>
<td>
energy related domains
</td>
<td>
each health device
</td>
<td>
List of users
</td> </tr>
<tr>
<td>
• each interface (energy)
</td>
<td>
node/object
</td>
<td>
List of contacts
</td> </tr>
<tr>
<td>
• each measuring device
(energy)
</td>
<td>
messaging
</td>
<td>
Balancing loads
</td> </tr>
<tr>
<td>
• each routing device
(energy)
</td>
<td>
sequences / actions (combination tokens / nodes)
</td>
<td>
Booking
</td> </tr>
<tr>
<td>
mobility related domains
</td>
<td>
biometric (fingerprint, retina)
</td>
<td>
Messaging
</td> </tr>
<tr>
<td>
• parking data (mobility)
</td>
<td>
camera
</td>
<td>
Criteria
</td> </tr>
<tr>
<td>
• booking (mobility)
</td>
<td>
access
</td>
<td>
Priorities
</td> </tr>
<tr>
<td>
• areas (mobility)
</td>
<td>
each smart home device
(temperature, smoke, motion, sound)
</td>
<td>
Evaluation / feedback
</td> </tr> </table>
**Table 2: datasets stored in the VICINITY management portal**
## 3.7. Format of datasets
For each dataset the following will be specified:
<table>
<tr>
<th>
**DS.ParticipantName.##.Logical_sensorname**
</th> </tr>
<tr>
<td>
**Data Identification**
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_Where are the sensor(s) installed? What are they monitoring/registering? What
is the dataset comprised of? Will it contain future sub-datasets?_
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_How will the dataset be collected? What kind of sensor is being used?_
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_What is the name of the owner of the device?_
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_What is the name of the partner in charge of the device? Are there several
partners that are cooperating? What are their names?_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_The name of the partner._
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_The name of the partner._
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WPxx and WPxx._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_What is the status with the metadata so far? Has it been defined? What is the
content of the metadata (e.g. datatypes like images portraying an action,
textual messages, sequences, timestamps etc.)_
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_Has the data format been decided on yet? What will it look like?_
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Example text:_
_Production process recognition and help during the different production
phases, avoiding mistakes_
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_Example text:_
_The full dataset will be confidential and only the members of the consortium
will have access on it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are decided to become of widely open access,
a data management portal will be created that should provide a description of
the dataset and link to a download section. Of course these data will be
anonymized, so as not to have any potential ethical issues with their
publication and dissemination_
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_Has the data sharing policies been decided yet? What requirements exist for
sharing data? How will the data be shared? Who will decide what to be shared?_
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Who will own the information that has been collected? How will it adhere to
partner policies? What kind of limitation is put on the archive?_
</td> </tr> </table>
**Table 3: Format of dataset description**
## 3.8. Open Call
**NB: The actual content of the Open Calls, such as documents and other
material, is at the moment of this deliverable, still being worked out by the
project participants. All descriptions and considerations related to open
calls must therefore be considered tentative, and this section can be thought
of as a tool for implementing best practice.**
The Open Call process of the VICINITY project will involve third parties.
System integrators (Figure 7) will be one of target groups for the calls.
These will be presented with opportunities to integrate IoT infrastructures
based on the VICINITY framework. Implementation/integration of Value-Added
Services will also most likely be part of the issues the Open Calls will tackle. The
calls should adhere to the principles which govern Commission calls. These
principles all include confidentiality: all proposals and related data,
knowledge and documents are treated in confidence.
**Figure 7: Involving 3rd parties through open calls will provide**
**VICINITY with valuable experience, and evolve interoperability**
The Project Coordinator will present a legal contract with the third parties
that are granted open calls. This contract will specify all the control
procedures and will be made compliant with the Grant Agreement and the Consortium
Agreement. This is done in order to assure that their contributions are in
line with the agreed upon work plan; that the third party allows the
Commission and the Court of Auditors to exercise their power of control on
documents and information stored on electronic media or on the final
recipient's premises.
Proposals for open calls and the deliverables that come as a result will
include sections that describe how the data management principles have been
implemented. It is expected the papers will follow the outlines that are
presented in the legal contract and adhere to GDPR. This also applies to
sharing ideas and intellectual properties. Furthermore, the deliverables will
present how the chosen architecture and methodologies will be handled by the
stakeholders, integrators and SME’s.
According to the VICINITY concept, the participants can decide with whom they
wish to cooperate and to what extent. Participants will be held responsible for
ensuring that the partners they team up with follow the same guidelines as the
main project and the open call project.
## 3.9. Description of methods for dataset description
Example test datasets will be generated by research teams from the participants
in the project. These test datasets will be prepared as XML files and will
also be made available in both XML and JSON format (Figure 8). The datasets will be
based on semantic analysis of data from test sensors and applied to an
ontology.
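A minimal sketch of how an XML test dataset could additionally be offered in JSON, assuming a hypothetical observation structure (real datasets will follow the VICINITY ontology and the oneM2M specifications):

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical minimal test dataset; the element and attribute names are
# illustrative, not the project's actual schema.
SAMPLE_XML = """
<dataset name="DS.AAU.01.GRID_Status">
  <observation sensor="grid_meter" timestamp="2017-11-02T10:00:00Z" value="230.1"/>
  <observation sensor="grid_meter" timestamp="2017-11-02T10:01:00Z" value="229.8"/>
</dataset>
"""

def xml_to_json(xml_text: str) -> str:
    """Convert an XML test dataset into an equivalent JSON representation."""
    root = ET.fromstring(xml_text)
    return json.dumps({
        "dataset": root.get("name"),
        "observations": [dict(obs.attrib) for obs in root.findall("observation")],
    }, indent=2)
```

Providing both serialisations from one source keeps the XML and JSON views of a dataset consistent by construction.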
The collected dataset will encompass different methodological approaches and
IoT standards defined by the global standard initiative oneM2M. The data will
run through different test environments like TDD (Test Driven Development),
ATDD (Acceptance Test Driven Development), PBT (Property Based Testing), BDD
(Behavior Driven Development). The project will focus on using model-based
test automation in processes with short release cycles.
**Figure 8: Datasets will be prepared and provided in XML and JSON format**
Apart from the research teams, these datasets will be useful for other
research groups, Standard Development Organisations (SDOs) and technical
integrators within the area of the Internet of Things (IoT).
No comparable data is available as of yet, but there are several descriptions
that will be used as basis for the test data.
All datasets are to be shared between the participants during the lifecycle of
the project. Feedback from other participants and test implementations will
decide when the dataset should be made publicly available. When the datasets
support the framework defined by the VICINITY ontology, they will be made
public and presented in open access publications.
The VICINITY partners can use a variety of methods for exploitation and
dissemination of the data including:
* Using them in further research activities (outside the action)
* Developing, creating or marketing a product or process
* Creating and providing a service, or
* Using the data in standardisation activities
Restrictions:
1. All national reports (which include data and information on the relevant topic) will be available to the public through the project web-site or a repository or any other option that the consortium decides, after verification by the partners so as to ensure their quality and credibility.
2. Reports will be made available after month 18, so that partners have the time to produce papers.
3. Open access to the research data itself is not applicable.
## 3.10. Standards and metadata
The data will be generated and tested through different test automation
technologies, e.g. TDL (Test description language), TTCN-3 (Test and Test
Control Notation), UTP (UML Testing Profile). The profile should mimic the
data communicated from IoT units following the oneM2M specifications.
The Systems Modeling Language (SysML) is used for the collection, analysis
and processing of requirements as well as for the specification of message
exchanges and overviews of architecture and behavior specifications (Figure
9).
**Figure 9: Example of SysML model of Virtual Oslo Science City**
The project intends to share the datasets in an internally accessible
disciplinary repository using descriptive metadata as required/provided by
that repository. Additional metadata to example test datasets will be offered
within separate XML-files.
They will also be made available in XML and JSON format. Keywords will be
added as notations in SysML and modelled on the specifications defined by
oneM2M. The content will be similar to relevant data from compatible IoT
devices and network protocols. No network protocols have been defined yet, but
several have been evaluated. Files and folders will be versioned and
structured by using a name convention consisting of project name, dataset
name, date, version and ID.
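The naming convention described above (project name, dataset name, date, version and ID) could be implemented as follows; the underscore separator, field order and `.xml` suffix are assumptions for illustration:

```python
from datetime import date

def dataset_filename(project: str, dataset: str, version: int,
                     file_id: str, day: date) -> str:
    """Build a versioned file name from project name, dataset name,
    date, version and ID, per the convention in section 3.10."""
    return f"{project}_{dataset}_{day:%Y%m%d}_v{version:02d}_{file_id}.xml"
```

For example, `dataset_filename("VICINITY", "DS.AAU.01.GRID_Status", 3, "a1b2", date(2017, 12, 31))` yields a name that sorts chronologically and by version within each dataset.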
## 3.11. Data sharing
The project aims to prepare the API for internal testing through the VICINITY
open gateway.
The VICINITY open gateway is defined as Interoperability as a Service. In
other words - it is a cloud based service that assumes the data has already
been gathered and transferred to the software running on the service layer.
These data will be made available for researchers in a controlled environment,
where login credentials are used to get access to the data in XML and JSON-
format (Figure 10).
**Figure 10: Data will only be provided to partners with proper login
credentials**
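A minimal sketch of credential-checked access to the JSON data, with a hypothetical in-memory token store standing in for the open gateway's real login procedures (which run over HTTPS):

```python
import hmac
import json

# Hypothetical credential store for illustration only; a real deployment
# would rely on the VICINITY open gateway's login procedures over HTTPS.
ACCESS_TOKENS = {"partner-aau": "s3cret-token"}

def fetch_dataset(partner: str, token: str, datasets: dict) -> str:
    """Return datasets as JSON only for callers with valid credentials."""
    expected = ACCESS_TOKENS.get(partner)
    # compare_digest avoids leaking the token length/content via timing.
    if expected is None or not hmac.compare_digest(expected, token):
        raise PermissionError("invalid credentials")
    return json.dumps(datasets)
```

The same gate would sit in front of the XML serialisation; only the response format differs.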
The project focuses on developing a framework that allows for a scalable and
future-proof platform upon which one can invest and develop IoT applications,
without fear of vendor lock-in or needing to commit to a single connectivity
technology.
The researchers must therefore be committed to the requirements, architecture,
application programming interface (API) specifications, security solutions and
mapping to common industry protocols such as CoAP, MQTT and HTTP. Further
analysis will be performed using freely available open source software tools.
The data will also be made available as separate files.
The goal is to ultimately support the Europe 2020 strategy by offering the
open data portal. The Digital Agenda proposes to better exploit the potential
of Information and Communication Technologies (ICTs) in order to foster
innovation, economic growth and progress. Thus VICINITY will support the EU's
efforts in exploiting the potential offered by using ICT in areas like climate
change, managing ageing population, and intelligent transport system to
mention a few examples.
## 3.12. Archiving and preservation (including storage and backup)
As specified by the "rules of good scientific practice" we aim to preserve
data for at least ten years. The approximate final volume of the example test
dataset is currently 10 GB, but this may change as the scope of the project
evolves.
Associated costs for dataset preparation for archiving will be covered by the
project itself, while long term preservation will be provided and associated
costs covered by a selected disciplinary repository. During the project data
will be stored on the VICINITY web cloud as well as being replicated to a
separate external server.
# Datasets for smart grid from Aalborg University (AAU)
AAU will mainly deal with control design, energy management systems
implementation and Information and Communication Technology (ICT) integration
in small-scale energy systems. AAU will scale up by using a hardware-in-the-loop
solution and will participate actively in the implementation at the Energy
sites proposed in VICINITY. AAU will act as interface between ICT experts and
Energy sites in the project, as well as test interactions between the
developed concepts on the ICT side and the control and management of electric
power networks. Implementation and experimental results will be an important
outcome for the project.
<table>
<tr>
<th>
**DS.AAU.01.GRID_Status**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_This dataset comprises different parameters characterising the electrical
grid from the generation to the distribution sections. The cost of the
electricity will also be considered in this dataset, so as to have full
information that enables micro-trading actions._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The sensors that feed this dataset are: energy generation and consumption on-
site from RES, and instant grid cost of energy consumed and purchased from the
grid._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The devices will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_AAU_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_AAU_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_AAU_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied with the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The data are received in JSON format. Regarding the volume of data, it
depends on the motion/activity levels of the engaged devices. However, it is
estimated to be 4 KB/transmission._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Due to privacy issues, the collected data are stored at a secured database
scheme at Aalborg University Facilities. Data exploitation is foreseen to be
achieved through testing value-added services, data analytics and statistical
analysis._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be confidential and only the authorized AAU personnel
will have access as defined. AAU could provide energy data to specific
consortium members under a detailed confidentiality framework._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared under a detailed confidentiality
framework by using open APIs through the middleware._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to privacy issues, the collected data are stored at a secured database
scheme at Aalborg University Facilities._
_A backup will be stored in an external storage device, kept by AAU in a
secured place. This backup will be available when required by the pilot
sites._
</td> </tr> </table>
**Table 4: Dataset description of the AAU GRID status**
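Table 4 states that grid-status records arrive in JSON at an estimated 4 KB per transmission. The sketch below shows what one such record might look like; the field names are assumptions derived from the indicative metadata in the table (device id, measurement date, device owner, state), not an agreed schema:

```python
import json

# Illustrative record for DS.AAU.01.GRID_Status; all field names are
# assumptions based on the indicative metadata listed in Table 4.
record = {
    "device_id": "aau-meter-07",
    "measurement_date": "2018-03-14T10:15:00Z",
    "device_owner": "AAU test site",
    "state": "active",
    "generation_kw": 12.4,       # on-site RES generation
    "consumption_kw": 9.1,       # on-site consumption
    "grid_price_eur_kwh": 0.21,  # instant grid cost of energy
}

encoded = json.dumps(record).encode("utf-8")
# A single record is far below the ~4 KB/transmission estimate, which
# leaves headroom for batching several measurements per transmission.
assert len(encoded) < 4 * 1024
```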
# Datasets for smart energy from Enercoutim (ENERC)
ENERC will participate providing the facilities and the experience in
implementing solar production integrated into municipality smart city efforts.
To this end, ENERC will actively participate in the deployment, management and
evaluation of the “Smart Energy Microgrid Neighbourhood” Use Case. Its
contribution will be focused on the energy resource potential demand studies
and economic sustainability. Its expertise will allow ICT integration with
smart city management focused on better serving its citizens.
The main aim of this project is the demonstration of a Solar Platform which
provides a set of shared infrastructures and reduces the total cost per MW as
well as improves the environmental impact compared to the stand-alone
implementation of these projects. As main responsibilities, ENERC will be in
charge of strategic technology planning and integration coordination,
designing potential models for municipal energy management, as well as
identifying the optimal ownership structure of the microgrid system with a
focus on delivering maximum social and economic benefit to the local
community.
<table>
<tr>
<th>
**DS.ENERC.01.METEO_Station**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The weather conditions will influence the energy production, so it becomes
critical to understand the current and forecast scenarios. It is fundamental
to constantly measure, with the meteo station equipment, the parameters that
can influence both energy production and consumption over time._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The sensors that feed this dataset are: temperature, humidity, wind speed and
wind direction, barometer, precipitation measurement and sun tracker._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The devices will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_ENERC_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_ENERC_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_ENERC_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied with the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The data are received in JSON format. Regarding the volume of data, it
depends on the motion/activity levels of the engaged devices. However, it is
estimated to be 4 KB/transmission._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Due to privacy issues, the collected data are stored at a secured database
scheme at SOLAR LAB Facilities, allowing access to registered users. Data
exploitation is foreseen to be extended through envisioned value-added
services, allowing full access to specific authorised users (e.g. facility
managers), and for a broader use in an anonymised/aggregated manner for data
analytics and statistical analysis._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be confidential and only the authorized ENERC personnel
and related end-users will have access as defined. Specific consortium members
involved in technical development and pilot deployment will further have
access under a detailed confidentiality framework._
_Furthermore, if it is decided that the dataset, in an anonymised/aggregated
form, should become widely open access, a data management portal will be
created providing a description of the dataset and a link to a download
section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the SOLAR LAB facilities, allowing only authorised access to external end-
users. A backup will be stored in an external storage device, kept by ENERC
in a secured place. Data will be kept indefinitely, allowing statistical
analysis._
</td> </tr> </table>
**Table 5: Dataset description of the ENERC METEO station**
<table>
<tr>
<th>
**DS.ENERC.02.BUILDING_Status**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The information associated with the energy consumption in buildings will allow
identifying the usage of resources for each measurement point._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The sensors that feed this dataset are: cooling energy demand, heating energy
demand, hot water demand and building equipment demand._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The devices will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_ENERC_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_ENERC_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_ENERC_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied with the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The data are received in JSON format. Regarding the volume of data, it
depends on the motion/activity levels of the engaged devices. However, it is
estimated to be 4 KB/transmission._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Due to privacy issues, the collected data are stored at a secured database
scheme at SOLAR LAB Facilities, allowing access to registered users. Data
exploitation is foreseen to be extended through envisioned value-added
services, allowing full access to specific authorised users (e.g. facility
managers), and for a broader use in an anonymised/aggregated manner for data
analytics and statistical analysis._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be confidential and only the authorized ENERC personnel
and related end-users will have access as defined. Specific consortium members
involved in technical development and pilot deployment will further have
access under a detailed confidentiality framework._
_Furthermore, if it is decided that the dataset, in an anonymised/aggregated
form, should become widely open access, a data management portal will be
created providing a description of the dataset and a link to a download
section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the
middleware_ _as well as a data management portal._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None._
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the SOLAR LAB facilities, allowing only authorised access to external end-
users. A backup will be stored in an external storage device, kept by ENERC
in a secured place. Data will be kept indefinitely, allowing statistical
analysis._
</td> </tr> </table>
**Table 6: Dataset description of the ENERC building status**
<table>
<tr>
<th>
**DS.ENERC.03.GRID_Status**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_This dataset comprises the different parameters that characterise the
electrical grid from the generation to the distribution sections. Moreover,
the cost of the electricity will be considered in this dataset so as to have
full information that enables micro-trading actions._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The sensors that feed this dataset are: electrical energy generated on-site
from RES, thermal energy generated on-site, thermal energy consumed, grid
electricity consumed, instant grid cost of energy consumed, and value of
energy purchased from the grid._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The devices will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_ENERC, AAU_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_ENERC, AAU_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_ENERC, AAU_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied with the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The data are received in JSON format. Regarding the volume of data, it
depends on the motion/activity levels of the engaged devices. However, it is
estimated to be 4 KB/transmission._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Due to privacy issues, the collected data are stored at a secured database
scheme at SOLAR LAB Facilities and AAU servers, allowing access to registered
users. Data exploitation is foreseen to be extended through envisioned value-
added services, allowing full access to specific authorised users (e.g.
facility managers), and for a broader use in an anonymised/aggregated manner
for data analytics and statistical analysis._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be confidential and only the authorized ENERC/AAU
personnel and related end-users will have access as defined. Specific
consortium members involved in technical development and pilot deployment will
further have access under a detailed confidentiality framework._
_Furthermore, if it is decided that the dataset, in an anonymised/aggregated
form, should become widely open access, a data management portal will be
created providing a description of the dataset and a link to a download
section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the SOLAR LAB facilities and AAU servers, allowing only authorised access
to external end-users. A backup will be stored in an external storage device,
kept by ENERC in a secured place. Data will be kept indefinitely, allowing
statistical analysis._
</td> </tr> </table>
**Table 7: Dataset description of the ENERC grid status**
# Datasets for eHealth from GNOMON Informatics SA (GNOMON)
GNOMON will provide its background knowledge in the specific field of assisted
living and tele care in the context of social workers. In addition, GNOMON
will actively contribute to the use case pilot setup, assessment and
benchmarking.
The company has developed and provided the integrated remote care and
monitoring system for people with health problems, as well as the software
applications that use information and communication technologies to support
and organise the business operation of the HELP AT HOME programme in the
Municipality of Pilea-Hortiatis. This infrastructure could be further
exploited and extended for the scope of the VICINITY project, specifically
for the realisation of the eHealth Use Case.
<table>
<tr>
<th>
**DS.GNOMON.01.Pressure_sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The sensors will be in the possession of patients in need of assisted living and
identified by the equivalent municipality (MPH) health care services to ensure
the validity of each case. The measurements are scheduled to be taken once a
day, requiring the patient to make use of the device placed within their
apartment. The main task of the sensor is to monitor pressure
(systolic/diastolic) and heart rate levels._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The dataset will be collected via a combination of connected devices
consisting of a Bluetooth Blood Pressure monitor and a Connectivity Gateway
based on Raspberry pi._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied with the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The data are received in XML format. In a later stage, they are converted to
JSON format and stored in a database. Regarding the volume of data, it depends
on the participation levels of the engaged patients. However, it is estimated
to be 16 KB/measurement._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Due to privacy issues, the collected data are stored at a secured database
scheme at MPH headquarters, allowing access to registered users (i.e. MPH
health care services personnel and eHealth call center). Data exploitation is
foreseen to be extended through envisioned value-added services, allowing full
access to specific authorised users (e.g. doctors), and for a broader use in
an anonymised/aggregated manner for creating behaviour profiles and clustering
patients to different medical groups._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be confidential and only the authorized MPH personnel
and related end-users will have access as defined. The latter authorized
groups of users will access data in a tamper-proof way with an audit mechanism
triggered simultaneously to guarantee the alignment with relevant requirements
coming from the recently introduced General Data Protection Regulation (GDPR).
Specific consortium members involved in technical development and pilot
deployment will further have access under a detailed confidentiality
framework._
_Furthermore, if it is decided that the dataset, in an anonymised/aggregated
form, should become widely open access, a data management portal will be
created providing a description of the dataset and a link to a download
section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal. Datasets from VICINITY could be used and
exploited, in anonymised form, by other European projects. Datasets from
health devices deployed at seniors’ houses will provide added value and be
the basis for other research projects (e.g. statistical data). VICINITY could
have an open portal / repository on its website, providing anonymised data
information such as timestamps and descriptions._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the headquarters of MPH, allowing only authorised access to external end-
users. A backup will be stored in an external storage device, kept by MPH in
a secured place. Data will be kept indefinitely, allowing statistical
analysis._
</td> </tr> </table>
**Table 8: Dataset description of the GNOMON pressure sensor**
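Table 8 notes that pressure-sensor data are received in XML and later converted to JSON before storage. A minimal sketch of that conversion is given below; the XML element and attribute names are assumed for illustration, since the actual device format is not specified in this plan:

```python
import json
import xml.etree.ElementTree as ET

# Assumed shape of one blood-pressure measurement as received in XML.
xml_measurement = """
<measurement device_id="bp-042" date="2018-03-14T08:00:00Z">
  <systolic unit="mmHg">128</systolic>
  <diastolic unit="mmHg">82</diastolic>
  <heart_rate unit="bpm">71</heart_rate>
</measurement>
"""

def xml_to_json(xml_text: str) -> str:
    """Convert one XML measurement into the JSON document stored in
    the database. Element and attribute names are illustrative."""
    root = ET.fromstring(xml_text)
    record = dict(root.attrib)          # device_id, date
    for child in root:                  # systolic, diastolic, heart_rate
        record[child.tag] = int(child.text)
    return json.dumps(record)

doc = xml_to_json(xml_measurement)
```

In a real deployment the converter would also validate units and value ranges before the record reaches the MPH database.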
<table>
<tr>
<th>
**DS.GNOMON.02.Weight_sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The sensors will be in the possession of patients in need of assisted living and
identified by the equivalent municipality (MPH) health care services to ensure
the validity of each case. The measurements are scheduled to be taken once a
day, requiring the patient to make use of the device placed within their
apartment. The main task of the sensor is to keep track of weight measurements
and mass index (given the fact that the patient provides an accurate value of
his/her height). A future subset may contain information about resting
metabolism, visceral fat level, skeletal muscle and body age._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The dataset will be collected via a combination of connected devices
consisting of a Bluetooth Body Composition monitor and a Connectivity Gateway
based on Raspberry pi._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied with the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The data are received in XML format. In a later stage, they are converted to
JSON format and stored in a database. Regarding the volume of data, it depends
on the participation levels of the engaged patients. However, it is estimated
to be 48 KB/measurement._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Due to privacy issues, the collected data are stored at a secured database
scheme at MPH headquarters, allowing access to registered users (i.e. MPH
health care services personnel and eHealth call center). Data exploitation is
foreseen to be extended through envisioned value-added services, allowing full
access to specific authorised users (e.g. doctors), and for a broader use in
an anonymised/aggregated manner for creating behaviour profiles and clustering
patients to different medical groups._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be confidential and only the authorized MPH personnel
and related end-users will have access as defined. The latter authorized
groups of users will access data in a tamper-proof way with an audit mechanism
triggered simultaneously to guarantee the alignment with relevant requirements
coming from the recently introduced General Data Protection Regulation (GDPR).
Specific consortium members involved in technical development and pilot
deployment will further have access under a detailed confidentiality
framework._
_Furthermore, if it is decided that the dataset, in an anonymised/aggregated
form, should become widely open access, a data management portal will be
created providing a description of the dataset and a link to a download
section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal. Datasets from VICINITY could be used and
exploited, in anonymised form, by other European projects. Datasets from
health devices deployed at seniors’ houses will provide added value and be
the basis for other research projects (e.g. statistical data). VICINITY could
have an open portal / repository on its website, providing anonymised data
information such as timestamps and descriptions._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the headquarters of MPH, allowing only authorised access to external end-
users. A backup will be stored in an external storage device, kept by MPH in
a secured place. Data will be kept indefinitely, allowing statistical
analysis._
</td> </tr> </table>
**Table 9: Dataset description of the GNOMON weight sensor**
<table>
<tr>
<th>
**DS.GNOMON.03.Fall_sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The fall sensor is a wearable sensor that will be in the possession of
patients in need of assisted living and identified by the equivalent
municipality (MPH) health care services to ensure the validity of each case.
The main goal of the sensor is to automatically detect when a patient falls,
either due to an accident or in the case of a medical incident. The event is
triggered automatically after a fall, but a similar event is also triggered
by pressing the equivalent panic button (wearable actuator). In both cases,
an automated emergency phone call is placed to the eHealth Call Center._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The dataset will be collected via a combination of devices consisting of a
hub (Lifeline Vi) and a fall detector that are wirelessly connected._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied with the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_An audit log containing alerts is stored. The volume is estimated at 50
alerts per month, including false alarms._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Due to privacy issues, the collected data are stored at a secured database
scheme at MPH headquarters, allowing access to registered users (i.e. MPH
health care services personnel and eHealth call centre). Data exploitation is
foreseen to be extended through envisioned value-added services, allowing full
access to specific authorised users (e.g. patient’s doctors), and for a
broader use in an anonymised/aggregated manner for data analytics and
statistical analysis._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be confidential and only the authorized MPH personnel
and related end-users will have access as defined. The latter authorized
groups of users will access data in a tamper-proof way with an audit mechanism
triggered simultaneously to guarantee the alignment with relevant requirements
coming from the recently introduced General Data Protection Regulation (GDPR).
Specific consortium members involved in technical development and pilot
deployment will further have access under a detailed confidentiality
framework._
_Furthermore, if it is decided to make the dataset widely accessible in an
anonymised/aggregated manner, a data management portal will be created that
should provide a description of the dataset and a link to a download section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal. Datasets from VICINITY could be used and
exploited, in anonymized form, by another European project. Data from fall
sensors at seniors’ houses will provide added value and serve as the basis for
other research projects (e.g. statistical data). VICINITY could have an open
portal / repository on its website, providing anonymized data information such
as timestamps and descriptions._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the headquarters of MPH, allowing only authorised access to external end-
users. A backup will be stored in an external storage device, kept by MPH in
a secured place. Data will be kept indefinitely, allowing statistical
analysis._
</td> </tr> </table>
**Table 10: Dataset description of the GNOMON fall sensor**
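The fall-sensor audit log above is small (about 50 alerts per month) and is only to be shared in anonymised/aggregated form. A minimal sketch of such anonymisation and aggregation, assuming a hypothetical record layout (device id, ISO timestamp, alert type, false-alarm flag) that is not the project's actual schema:

```python
# Hypothetical sketch: anonymising and aggregating an alert audit log
# before sharing. The record layout is an assumption, not the actual schema.
import hashlib
from collections import Counter
from datetime import datetime

def anonymise_device(device_id: str) -> str:
    """Replace a device id with a truncated one-way hash."""
    return hashlib.sha256(device_id.encode()).hexdigest()[:12]

def monthly_summary(alerts):
    """Aggregate alerts into per-month counts, incl. false alarms."""
    counts = Counter()
    for device_id, ts, alert_type, false_alarm in alerts:
        month = datetime.fromisoformat(ts).strftime("%Y-%m")
        counts[(month, alert_type, false_alarm)] += 1
    return dict(counts)

alerts = [
    ("fall-sensor-07", "2018-03-02T10:15:00", "fall", False),
    ("fall-sensor-07", "2018-03-09T18:40:00", "fall", True),
    ("fall-sensor-12", "2018-03-21T07:05:00", "fall", False),
]
summary = monthly_summary(alerts)
# The shared artefact would contain only hashed ids and aggregate counts.
```

Only the aggregate counts and hashed identifiers would leave the secured MPH database, in line with the access policy described in the table.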
<table>
<tr>
<th>
**DS.GNOMON.04.Wearable_Fitness_Tracker_Sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The fitness sensors are sensors embedded in wearable fitness trackers such as
activity wristbands. The latter equipment will be in the possession of
middle-aged citizens, with or without a chronic health issue (e.g. obesity),
identified by the equivalent municipality (MPH). The municipality will try to
promote fitness awareness and improve citizens’ health under the concept of a
municipal-scale competition that will be based on activity related data coming
from the sensors (e.g. step counting, hours of sleep, etc.)._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The data will be collected by wearable fitness trackers, mainly in the form
of activity wristbands (e.g. Xiaomi MiBand, FitBit, etc.)._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr> </table>
<table>
<tr>
<th>
Partner owner of the device
</th>
<th>
_The device will be the property of the test subject, in this case the
participating citizen._
</th> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_GNOMON, MPH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied by the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The collection of data from wearable fitness tracker sensors is event-driven.
New data are dispatched once they are produced._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Data exploitation is foreseen to be extended through envisioned value-added
services, allowing full access to specific authorised users (e.g. doctors),
and for a broader use in an anonymised/aggregated manner for data analytics
and statistical analysis. Additionally, as one of the value-added services
introduced is related to the concept of a municipal-scale competition, data
analysis will also serve the needs of calculating and providing a ranking
among the competitors._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be confidential and only the authorized MPH personnel
and related end-users will have access as defined. The latter authorized
groups of users will access data in a tamper-proof way with an audit mechanism
triggered simultaneously to guarantee the alignment with relevant requirements
coming from the recently introduced General Data Protection Regulation (GDPR).
Specific consortium members involved in technical development and pilot
deployment will further have access under a detailed confidentiality
framework._
_Furthermore, if it is decided to make the dataset widely accessible in an
anonymised/aggregated manner, a data management portal will be created that
should provide a description of the dataset and a link to a download section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal. Datasets from VICINITY could be used and
exploited, in anonymized form, by another European project. Data from wearable
fitness trackers will provide added value and serve as the basis for other
research projects (e.g. statistical data). VICINITY could have an open portal /
repository on its website, providing anonymized data information such as
timestamps and descriptions._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the headquarters of MPH, allowing only authorised access to external end-
users. A backup will be stored in an external storage device, kept by MPH in
a secured place. Data will be kept indefinitely, allowing statistical
analysis._
</td> </tr> </table>
**Table 11: Dataset description of the GNOMON Wearable Fitness Tracker
Sensor**
<table>
<tr>
<th>
**DS.GNOMON.05.Beacon_Sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The beacon sensors are sensors to be deployed in the municipality’s sport
facilities, e.g. gym, pool, etc., and also tested at CERTH/ITI’s Smart Home.
The municipality will try to promote fitness awareness and improve citizens’
health under the concept of a municipal-scale competition that will be based
on activity related data gathered by the sensors and processed accordingly
(e.g. translation of beacon signals to actual time spent in sport
facilities)._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The data will be collected by beacons deployed in the municipality’s sport
facilities and at CERTH/ITI’s Smart Home._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_GNOMON, MPH, CERTH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_GNOMON, MPH, CERTH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_GNOMON, MPH, CERTH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied by the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The collection of data from beacons is event-driven. New data are dispatched
once they are produced, for example when a middle-aged person visits a sport
centre._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Data exploitation is foreseen to be extended through envisioned value-added
services, allowing full access to specific authorised users (e.g. doctors),
and for a broader use in an anonymised/aggregated manner for data analytics
and statistical analysis. Additionally, as one of the value-added services
introduced is related to the concept of a municipal-scale competition, data
analysis will also serve the needs of calculating and providing a ranking
among the competitors._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset of beacons deployed in CERTH / ITI’s smart house, that is
not sensitive, will be accessible through a local experimental repository._
_The full dataset of beacons deployed in houses of elderly people is
sensitive and will therefore be confidential; only the authorized MPH
personnel and related end-users will have access as defined. The latter
authorized groups of users will access data in a tamper-proof way with an
audit mechanism triggered simultaneously to guarantee the alignment with
relevant requirements coming from the recently introduced General Data
Protection Regulation (GDPR). Specific consortium members involved in
technical development and pilot deployment will further have access under a
detailed confidentiality framework._
_Furthermore, if it is decided to make the dataset widely accessible in an
anonymised/aggregated manner, a data management portal will be created that
should provide a description of the dataset and a link to a download section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal. Datasets from VICINITY could be used and
exploited, in anonymized form, by another European project. Data from beacons
at sport centres will provide added value and serve as the basis for other
research projects (e.g. statistical data). VICINITY could have an open portal /
repository on its website, providing anonymized data information such as
timestamps and descriptions._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the headquarters of MPH, allowing only authorised access to external end-
users. A backup will be stored in an external storage device, kept by MPH in
a secured place. Data will be kept indefinitely, allowing statistical
analysis._
</td> </tr> </table>
**Table 12: Dataset description of the GNOMON Beacon Sensor**
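The table above mentions translating beacon signals into actual time spent in sport facilities. A minimal sketch of that translation, assuming a hypothetical event format (visitor id, enter/exit kind, ISO timestamp) rather than the project's actual beacon payload:

```python
# Hypothetical sketch of turning beacon enter/exit events into time
# spent at a sport facility. The event format is an assumption.
from datetime import datetime

def time_spent_minutes(events):
    """Pair enter/exit beacon events and sum dwell time per visitor."""
    totals, enters = {}, {}
    for visitor, kind, ts in events:
        t = datetime.fromisoformat(ts)
        if kind == "enter":
            enters[visitor] = t
        elif kind == "exit" and visitor in enters:
            dwell = (t - enters.pop(visitor)).total_seconds() / 60
            totals[visitor] = totals.get(visitor, 0) + dwell
    return totals

events = [
    ("citizen-42", "enter", "2018-03-02T10:00:00"),
    ("citizen-42", "exit",  "2018-03-02T11:30:00"),
]
minutes = time_spent_minutes(events)
```

The per-visitor totals, not the raw beacon events, would feed the municipal-scale competition ranking described in the exploitation cell.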
<table>
<tr>
<th>
**DS.GNOMON_CERTH.06.Gorenje_Smart_Appliances_Sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The sensors related to Gorenje smart appliances are sensors embedded in
specific house equipment such as ovens and fridges. The latter equipment will
be provided by Gorenje partner and will be in possession of patients in need
of assisted living and identified by the equivalent municipality (MPH) health
care services to ensure the validity of each case. Similar equipment will also
be deployed in CERTH / ITI’s facilities. The main goal of the sensors is to
automatically detect when a patient opens the fridge or uses the oven in order
to create behaviour profiles based on relevant criteria (e.g. frequency of
use, etc), trigger alerts in case of deviation from the normal standards of
use and inform the call centre._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The data will be collected by specific smart appliances (i.e. oven, fridge)
provided by the Gorenje partner and adjusted to VICINITY requirements._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_CERTH, GORENJE, GNOMON, MPH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_CERTH, GORENJE, GNOMON, MPH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_CERTH, GORENJE, GNOMON, MPH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP6, WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied by the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The collection of data from Gorenje devices is time-driven: data are
dispatched every 15 minutes, in a format that depends on the standards
provided by Gorenje._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Data exploitation is foreseen to be extended through envisioned value-added
services and for a broader use in an anonymised/aggregated manner for creating
behaviour profiles and clustering patients to different medical groups.
Significant deviation from the latter profiles is expected to trigger relevant
alerts._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset of Gorenje devices deployed in CERTH / ITI’s facilities,
which is not sensitive, will be accessible through Gorenje Cloud in a local
experimental repository._
_The full dataset from Gorenje devices deployed in elderly people’s houses
will be confidential and only the authorized MPH personnel and related end-
users will have access as defined through Gorenje Cloud. The latter authorized
groups of users will access data in a tamper-proof way with an audit mechanism
triggered simultaneously to guarantee the alignment with relevant requirements
coming from the recently introduced General Data Protection Regulation (GDPR).
Specific consortium members involved in technical development and pilot
deployment will further have access under a detailed confidentiality
framework._
_Furthermore, if it is decided to make the dataset widely accessible in an
anonymised/aggregated manner, a data management portal will be created that
should provide a description of the dataset and a link to a download section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal. Datasets from VICINITY could be used and
exploited, in anonymized form, by another European project. Data from Gorenje
devices deployed at seniors’ houses will provide added value and serve as the
basis for other research projects (e.g. statistical data). VICINITY could have
an open portal / repository on its website, providing anonymized data
information such as timestamps and descriptions._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the headquarters of MPH, allowing only authorised access to external end-
users. A backup will be stored in an external storage device, kept by MPH in
a secured place. Data will be kept indefinitely, allowing statistical
analysis._
</td> </tr> </table>
**Table 13: Dataset description of the GNOMON/CERTH Gorenje Smart Appliances
Sensor**
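The exploitation cells above describe building behaviour profiles from appliance use and triggering alerts on significant deviation. A minimal sketch of such deviation detection, with hypothetical thresholds and features (daily fridge-opening counts); the actual profiling criteria will be defined during system specification:

```python
# Hypothetical sketch of deviation-based alerting on appliance-use
# profiles. The feature (daily opening counts) and the k-sigma
# threshold are assumptions, not the project's actual method.
from statistics import mean, stdev

def deviates(history, today, k=2.0):
    """Flag a daily usage count deviating more than k standard
    deviations from the patient's historical profile."""
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) > k * sigma

fridge_openings = [12, 14, 11, 13, 12, 15, 13]  # openings/day, one week
alert = deviates(fridge_openings, today=2)      # unusually low activity
```

A raised flag would be forwarded to the call centre, as described in the occupancy and fall-sensor tables; the raw counts themselves stay in the secured database.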
# Datasets for eHealth from Centre for Research and Technology Hellas (CERTH)
CERTH / ITI will contribute to the use case pilot setup for houses at the
Municipality of Pilea-Hortiatis and provide its background knowledge in the
field of assisted living. It will also provide its Smart House infrastructure
for cross-domain implementation, including building sensors and devices
which have also been integrated in houses at MPH.
<table>
<tr>
<th>
**DS.CERTH.01.Occupancy_Sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_Occupancy sensors will be deployed, on the one hand, in houses of patients in
need of assisted living, identified by the equivalent municipality (MPH)
health care services to ensure the validity of each case, but also in CERTH’s
smart house facilities for testing reasons. The main task of the sensor is to
provide a 24/7 occupancy status for the area of its responsibility. Data
coming from this sensor will be used to create behaviour profiles based on
relevant criteria (e.g. occupancy level for a specific room, etc) and trigger
alerts in case of deviation from the normal standards._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The dataset will be collected via a combination of connected occupancy
sensors (e.g. Wi-Fi, ZigBee, etc.) and a Connectivity Gateway based on a
Raspberry Pi or a similar device from another vendor._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_GNOMON, MPH, CERTH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_GNOMON, MPH, CERTH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_GNOMON, MPH, CERTH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied by the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The collection of data from occupancy sensors is time-driven: data are
dispatched every 15 minutes (e.g. through REST services, in XML format,
etc.)._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Data exploitation is foreseen to be extended through envisioned value-added
services and for a broader use in an anonymised/aggregated manner for creating
behaviour profiles and clustering patients to different medical groups.
Significant deviation from the latter profiles is expected to trigger relevant
alerts which will be sent to the call centre._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset of occupancy sensors deployed in CERTH / ITI’s smart house,
which is not sensitive, will be accessible through a local experimental
repository._
_The full dataset of sensors deployed in houses of elderly people is
sensitive and will therefore be confidential; only the authorized MPH personnel
and related end-users will have access as defined. The latter authorized
groups of users will access data in a tamper-proof way with an audit mechanism
triggered simultaneously to guarantee the alignment with relevant requirements
coming from the recently introduced General Data Protection Regulation (GDPR).
Specific consortium members involved in technical development and pilot
deployment will further have access under a detailed confidentiality
framework._
_Furthermore, if it is decided to make the dataset widely accessible in an
anonymised/aggregated manner, a data management portal will be created that
should provide a description of the dataset and a link to a download section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal. Datasets from VICINITY could be used and
exploited, in anonymized form, by another European project. Data from sensors
deployed at seniors’ houses will provide added value and serve as the basis
for other research projects (e.g. statistical data). VICINITY could have an
open portal / repository on its website, providing anonymized data information
such as timestamps and descriptions._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the headquarters of MPH, allowing only authorised access to external end-
users. A backup will be stored in an external storage device, kept by MPH in
a secured place. Data will be kept indefinitely, allowing statistical
analysis._
</td> </tr> </table>
**Table 14: Dataset description of the CERTH Occupancy Sensor**
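The table above describes a 15-minute, time-driven dispatch of occupancy readings over REST in XML, carrying the indicative metadata (device id, measurement date, device owner, state). A minimal sketch of building and parsing such a payload; the element and attribute names are assumptions, since the actual schema will be fixed in the system specifications:

```python
# Hypothetical sketch of the 15-minute occupancy payload. Element and
# attribute names (reading, deviceId, owner, etc.) are assumptions.
import xml.etree.ElementTree as ET

def build_reading(device_id, owner, timestamp, occupied) -> str:
    """Serialise one occupancy reading as an XML string."""
    root = ET.Element("reading", deviceId=device_id, owner=owner)
    ET.SubElement(root, "measurementDate").text = timestamp
    ET.SubElement(root, "state").text = "occupied" if occupied else "empty"
    return ET.tostring(root, encoding="unicode")

def parse_reading(xml_text):
    """Recover the metadata fields from a serialised reading."""
    root = ET.fromstring(xml_text)
    return {
        "deviceId": root.get("deviceId"),
        "owner": root.get("owner"),
        "measurementDate": root.findtext("measurementDate"),
        "state": root.findtext("state"),
    }

payload = build_reading("occ-01", "MPH", "2018-03-02T10:15:00", True)
record = parse_reading(payload)
```

In the pilot, such a payload would be POSTed to the REST endpoint every 15 minutes and stored, together with its metadata, in the secured MPH database.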
<table>
<tr>
<th>
**DS.CERTH.02.Motion_Sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr> </table>
<table>
<tr>
<th>
Dataset description
</th>
<th>
_Motion sensors will be deployed, on the one hand, in houses of patients in
need of assisted living, identified by the equivalent municipality (MPH)
health care services to ensure the validity of each case, but also in CERTH’s
smart house facilities for testing reasons. The main task of the sensor is to
provide the 24/7 motion levels for the area of its responsibility. Data coming
from this sensor will be used to create behaviour profiles based on relevant
criteria (e.g. motions level for a specific room and time period, etc.) and
trigger alerts in case of deviation from the normal standards._
</th> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The dataset will be collected via a combination of connected motion sensors
(e.g. Wi-Fi, ZigBee, etc.) and a Connectivity Gateway based on a Raspberry Pi
or a similar device from another vendor._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_GNOMON, MPH, CERTH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_GNOMON, MPH, CERTH_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_GNOMON, MPH, CERTH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP6, WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied by the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The collection of data from motion sensors is time-driven: data are
dispatched every 15 minutes (e.g. through REST services, in XML format,
etc.)._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Data exploitation is foreseen to be extended through envisioned value-added
services and for a broader use in an anonymised/aggregated manner for creating
behaviour profiles and clustering patients to different medical groups.
Significant deviation from the latter profiles is expected to trigger relevant
alerts._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset of motion sensors deployed in CERTH / ITI’s smart house,
that is not sensitive, will be accessible through a local experimental
repository._
_The full dataset of sensors deployed in houses of elderly people is
sensitive and will therefore be confidential; only the authorized MPH personnel
and related end-users will have access as defined. The latter authorized
groups of users will access data in a tamper-proof way with an audit mechanism
triggered simultaneously to guarantee the alignment with relevant requirements
coming from the recently introduced General Data Protection Regulation (GDPR).
Specific consortium members involved in technical development and pilot
deployment will further have access under a detailed confidentiality
framework._
_Furthermore, if it is decided to make the dataset widely accessible in an
anonymised/aggregated manner, a data management portal will be created that
should provide a description of the dataset and a link to a download section._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal. Datasets from VICINITY could be used and
exploited, in anonymized form, by another European project. Data from sensors
deployed at seniors’ houses will provide added value and serve as the basis
for other research projects (e.g. statistical data). VICINITY could have an
open portal / repository on its website, providing anonymized data information
such as timestamps and descriptions._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Due to ethical and privacy issues, data will be stored in a database scheme
at the headquarters of MPH, allowing only authorised access to external end-
users. A backup will be stored in an external storage device, kept by MPH in
a secured place. Data will be kept indefinitely, allowing statistical
analysis._
</td> </tr> </table>
**Table 15: Dataset description of the CERTH Motion Sensor**
# Datasets for intelligent mobility from Hafenstrom AS (HITS)
HITS will provide the user requirements specifications and demonstration of
the transport domain use case, while actively participating in the
dissemination and exploitation activities of the project. By employing know-how
within standardization bodies, mobility and smart city governance, HITS will
allow municipalities and smart cities to better utilize internal resources and
improve the services offered to citizens and agencies alike.
Furthermore, HITS will be responsible for the Use cases “Virtual Neighbourhood
of Buildings for Assisted Living integrated in a Smart Grid Energy Ecosystem”
and “Virtual Neighbourhood of Intelligent (Transport) Parking Space”. Towards
this direction, it will be the main partner to bring/arrange the required
infrastructure, in collaboration with other Consortium partners (i.e., TINYM
partner), for the use case demonstration.
<table>
<tr>
<th>
**DS.HITS.01.Parkingsensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The sensors will be installed at a test site, and will register the proximity
of objects of a certain size. A future subset may contain information about
temperature, humidity, noise, light and other environmental, visual and
touch-related data. The sensor’s main task is to detect whether the space is
occupied. This information will later on be integrated with identification in
order to verify that the vehicle/unit occupying the space is licensed through
either a booking or a ticketing action._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The dataset will be collected through a sensor that is mounted at the parking
site._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed_
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied by the respective documentation of its
contents. Indicative metadata include: (a) description of the experimental
setup (e.g. process system, date, etc.) and procedure which is related to the
dataset (e.g. proactive maintenance action, unplanned event, nominal
operation, etc.), (b) scenario related procedures, state of the monitored
activity and involved workers, involved system, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The data will be stored in XML format and are estimated to be 50-300 MB per
month._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Registering parking activity based upon availability, vehicle,
ownership/licence, comparing with nearby infrastructure and surrounding ITS
technology._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be available to participants in the project. If it is
decided to make the dataset or specific portions of it (e.g. metadata,
statistics, etc.) widely accessible, a data management portal will be created
that should provide a description of the dataset and a link to a download
section. Of course these data will be anonymized, so as to avoid any potential
ethical issues with their publication and dissemination._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Data will be stored in the storage device of the developed system (computer).
A backup will be stored on an external storage device. Data will be kept
indefinitely to allow statistical analysis._
</td> </tr> </table>
**Table 16: Dataset description of the HITS parking sensor**
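The table above states only that the data are stored in XML, without fixing a schema. As a minimal illustration, the sketch below serialises and parses one hypothetical parking-sensor observation; all element and attribute names (`record`, `bay`, `timestamp`, `occupied`) are assumptions for illustration, not the project's actual format.

```python
# Illustrative sketch only: the DMP says the data are stored in XML but does
# not define a schema. The element names used here are hypothetical.
import xml.etree.ElementTree as ET

def make_record(bay_id: str, occupied: bool, timestamp: str) -> str:
    """Serialise one parking-sensor observation as an XML string."""
    record = ET.Element("record", attrib={"bay": bay_id})
    ET.SubElement(record, "timestamp").text = timestamp
    ET.SubElement(record, "occupied").text = str(occupied).lower()
    return ET.tostring(record, encoding="unicode")

def parse_record(xml_text: str) -> dict:
    """Recover the observation fields from the XML string."""
    root = ET.fromstring(xml_text)
    return {
        "bay": root.get("bay"),
        "timestamp": root.findtext("timestamp"),
        "occupied": root.findtext("occupied") == "true",
    }

xml_text = make_record("A-12", True, "2018-03-01T08:15:00Z")
print(parse_record(xml_text))
```

Round-tripping records through a small schema like this is one way such monthly XML volumes could be produced and later re-read for statistical analysis.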
<table>
<tr>
<th>
**DS.HITS.02.SmartLight**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_Smart lights will be installed at the lab, and will demonstrate how light and
colours can indicate the state of access and availability. Future subsets may
contain information about proximity, movement, heat sensing (infrared), sound
sensing and door contact sensors. The smart lights' main task is to visually
inform about the state of the parking space. This information may later be
integrated with indicators for occupancy, time to availability and validity._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The dataset will be received from a laptop in the lab._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed_
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied by documentation of its contents. Indicative
metadata include: (a) a description of the experimental setup (e.g. process
system, date, etc.) and the procedure related to the dataset (e.g. proactive
maintenance action, unplanned event, nominal operation, etc.), (b) scenario-
related procedures, the state of the monitored activity, the involved workers
and systems, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The data will be stored in XML format and are estimated to be 50-300 MB per
month._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Registering parking activity based upon availability, vehicle and
ownership/licence, and comparing with nearby infrastructure and surrounding
ITS technology._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be available to the members of the consortium. If the
dataset or specific portions of it (e.g. metadata, statistics, etc.) are to be
made widely open access, a data management portal will be created that
provides a description of the dataset and a link to a download section. These
data will be anonymised so as not to raise any ethical issues with their
publication and dissemination._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Data will be stored in the storage device of the developed system (computer).
A backup will be stored on an external storage device. Data will be kept
indefinitely to allow statistical analysis._
</td> </tr> </table>
**Table 17: Dataset description of the HITS Smart lighting**
<table>
<tr>
<th>
**DS.HITS.03.LaptopTeststation**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr> </table>
<table>
<tr>
<th>
Dataset description
</th>
<th>
_The laptop test station will be installed at the workbench where the operator
normally works, and will aggregate data and process information received
wirelessly from other devices delivering data of relevance to the mobility
domain, and parking in particular. Future subsets may contain information
about other domains, such as energy, and data packages from smart-home and
health devices. The test station's main task is to process data and to
trigger, activate and log actions accordingly._
</th> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The dataset will be collected wirelessly and via USB ports._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed_
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied by documentation of its contents. Indicative
metadata include: (a) a description of the experimental setup (e.g. process
system, date, etc.) and the procedure related to the dataset (e.g. proactive
maintenance action, unplanned event, nominal operation, etc.), (b) scenario-
related procedures, the state of the monitored activity, the involved workers
and systems, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The data will be stored in XML format and are estimated to be 50-300 MB per
month._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Registering parking activity based upon availability, vehicle and
ownership/licence, and comparing with nearby infrastructure and surrounding
ITS technology._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be available to the members of the consortium. If the
dataset or specific portions of it (e.g. metadata, statistics, etc.) are to be
made widely open access, a data management portal will be created that
provides a description of the dataset and a link to a download section. These
data will be anonymised so as not to raise any ethical issues with their
publication and dissemination._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_The created dataset could be shared by using open APIs through the middleware
as well as a data management portal._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Data will be stored in the storage device of the developed system (computer).
A backup will be stored on an external storage device. Data will be kept
indefinitely to allow statistical analysis._
</td> </tr> </table>
**Table 18: Dataset description of the HITS laptop test station**
<table>
<tr>
<th>
**DS.HITS.04.Sensio_sensors_temperature_motion_lock**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_Sensors for measuring temperature, detecting motion and identifying the
status of door/window locks will be installed in apartments that are managed
by caretakers employed by Tromsø municipality._
_The datasets will contain general information about activities, and offer
insight that the building manager, caretakers and medical staff can utilise to
offer better service and to trigger messages should deviating situations occur._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The dataset will be received from a Sensio gateway that stores the data on an
external server, and made available to a laptop at the pilot site through an
API._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed_
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will contain information on location, and will be accompanied by
documentation of its contents. Indicative metadata include: scenario-related
procedures, the state of the monitored activity, the involved workers and
systems, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_The data will be stored in XML format and are estimated to be 30-50 MB per
month._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Identifying usage history used for resource planning and detecting unexpected
activities based on activity or lack of activity, as well as measured values
versus expected data._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be available to the members of the consortium. Specific
portions will be accessible to building managers and medical staff. Parts of
the data will be anonymised, while others will be available through a two-pass
data management portal. For privacy reasons, data access will be limited, so
configuration will be made in close cooperation with the service provider._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_Due to confidentiality, the created dataset will only be made accessible
through a data management portal that is open to medical staff and managers._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Data will be stored in the storage device of the developed system (computer).
A back up will be stored in an external storage device. Data will be kept
indefinitely allowing statistical analysis._
</td> </tr> </table>
**Table 19: Dataset description of the Sensio sensors**
<table>
<tr>
<th>
**DS.HITS.05.Gorenje_Smart_Appliances_Sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The Gorenje smart appliances installed at the Tromsø pilot site include a
fridge and an oven. The appliances are managed by caretakers employed by
Tromsø municipality, the tenants themselves and the building manager. The
appliances contain sensors that, among other things, can record timestamps and
temperature._
_The data harvested will be used to identify usage history in order to offer
better service, identify abnormal behaviour, and otherwise generate logs that
can be used for statistical analysis._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The data will be collected by specific smart appliances (i.e. oven, fridge)
provided by Gorenje and adjusted to VICINITY requirements. The data will be
made available to a laptop at the pilot site through an API._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The device will be the property of the test site owners, where the data
collection is going to be performed._
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_HITS, GORENJE_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_HITS, GORENJE_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_HITS_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP6, WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The dataset will be accompanied with the respective documentation of its
contents. Indicative metadata may include device id, measurement date, device
owner, state of the monitored activity, etc._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_A collection of data is dispatched every 15 minutes. The format is based on
standards provided by Gorenje._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_Usage data used to identify behaviour patterns, and as a means of training
disabled users to be more self-sufficient, are examples of value-added
services that can be built on top of the platform. As the data pool increases,
more services are expected to be included._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset of Gorenje devices deployed at the Tromsø pilot site
“Teaterkvarteret 1. Akt”, will be stored at the Gorenje Cloud in a local
experimental repository._
_The full dataset will be available to selected members of the consortium.
Specific portions will be accessible to building managers and medical staff.
Parts of the data will be anonymised, while others will be available through a
two-pass data management portal. For privacy reasons, data access will be
limited, so configuration will be made in close cooperation with the service
provider._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_Anonymised parts of the dataset will be available for training and
statistical purposes. Aggregated data that could be used to identify the user,
or other privacy-related information, will be limited. Due to confidentiality,
the created dataset will only be made accessible through a data management
portal that is open to medical staff and managers._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_None_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Data will be stored in the storage device of the developed system (computer).
A backup will be stored on an external storage device. Data will be kept
indefinitely to allow statistical analysis._
</td> </tr> </table>
**Table 20: Dataset description of the Gorenje smart appliances sensor**
# Datasets for buildings from Tiny Mesh AS (TINYM)
The primary role of Tiny Mesh Company is as a developer and technology
provider, with the company's IoT solution as the main enabling technology. The
goal is to offer promising technology solutions through participation in use
cases. We focus on creating new products, services and business models as part
of the Internet-of-Everything (IoE). New potential arises when the IoE is used
for connecting, integrating and controlling all kinds of meters, street
lights, sensors, actuators, assets, tags and other devices.
TINYM will contribute to the practical implementation through its work on the
definition of use cases, and will take practical ownership of the various demo
sites in its role as leader of WP7.
<table>
<tr>
<th>
**DS. TinyMesh.01.Door_Sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The sensors will be installed in the door of a room where there is a need for
monitoring usage._
_Data packets contain sensor readings of door movement._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_Discrete digital input_
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The property owner Tiny-Mesh_
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_Tiny-Mesh_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_Tiny-Mesh_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_Tiny-Mesh_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_Metadata about location of the sensor, network topology and network status
will be available in Tiny-Mesh Workbench._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_Data is delivered as a discrete value indicating whether the door has been
opened or closed; the volume of data depends on usage._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_The purpose of this collection is to give input data for analysis of room
usage for analyses to the building owner and Facility manager._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_Data access is for the building manager and facility manager. Data will be
anonymised so as not to raise any ethical issues with their publication and
dissemination._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_Data access is confidential. Only members of the consortium, building manager
and facility manager will have access on it for privacy reasons._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Unless specified otherwise by client the data will be stored in a Value Added
Service._
</td> </tr> </table>
**Table 21: Dataset description of the Tiny-Mesh Door Sensor**
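The door sensor delivers only a discrete open/closed value, so the room-usage analysis mentioned in the exploitation row has to be derived from the event stream. The sketch below is a hypothetical illustration: the `(timestamp, state)` encoding is an assumption, not the actual Tinymesh payload format. It counts completed door uses by pairing open events with the following close events.

```python
# Hypothetical sketch: derive a simple room-usage count from a stream of
# discrete door events. The (timestamp, state) tuples are an assumed
# encoding; the real payload format is defined by the Tinymesh protocol.
def count_visits(events):
    """Count open -> closed transitions, i.e. completed door uses."""
    visits = 0
    prev = "closed"
    for _ts, state in events:
        if prev == "open" and state == "closed":
            visits += 1
        prev = state
    return visits

events = [
    ("08:01", "open"), ("08:02", "closed"),
    ("09:30", "open"), ("09:45", "closed"),
    ("11:00", "open"),  # door still open: not yet a completed visit
]
print(count_visits(events))  # 2
```

Aggregating such counts per day or per room would give the building owner and facility manager the usage statistics the table refers to.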
<table>
<tr>
<th>
**DS. TinyMesh.02.Energy_Water_Consumption_Sensor**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_The sensors will be installed to measure consumption of water and
electricity._
_Data packets contain metered consumption values._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_Data is retrieved through industry-standard meters and communicated through
Tiny-Mesh infrastructure before being made available to the consortium._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_The property owner Tiny-Mesh_
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_Tiny-Mesh_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_Tiny-Mesh_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_Tiny-Mesh_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_Metadata about location of the sensor, network topology and network status
will be available in Tiny-Mesh Workbench._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_Communication with the meter will use a proprietary interface according to
the meter vendor. Data will be delivered as kWh or l/h at a configurable
interval (default: 1 minute)._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_The purpose of this collection is to give input data for analysis of resource
usage to control peak electricity or alarm of abnormal use._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_Data access is restricted to the consortium, the building manager and the
facility manager. Data will be anonymised so as not to raise any ethical
issues with their publication and dissemination._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_Data access is confidential. Only members of the consortium, building manager
and facility manager will have access on it for privacy reasons._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Data will be stored in the metering devices as well as the TinyMesh provided
Value Added Service._
</td> </tr> </table>
**Table 22: Dataset description of the Tiny-Mesh consumption sensor for energy
and water.**
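The exploitation row above mentions controlling peak electricity and alarming on abnormal use. A minimal sketch of such a check, assuming per-interval kWh readings as described in the standards row; the threshold factor, window size and data layout are illustrative assumptions, not project specifications:

```python
# Illustrative sketch: flag intervals whose consumption is far above the
# recent trailing mean. Window and factor are assumed tuning parameters.
from statistics import mean

def flag_abnormal(readings_kwh, window=10, factor=3.0):
    """Return indices of readings more than `factor` times the trailing mean."""
    flagged = []
    for i, value in enumerate(readings_kwh):
        history = readings_kwh[max(0, i - window):i]
        if history and value > factor * mean(history):
            flagged.append(i)
    return flagged

readings = [0.5, 0.6, 0.5, 0.4, 0.5, 2.5, 0.5]
print(flag_abnormal(readings))  # [5]
```

In practice the flagged indices would feed the alarm service, while the raw interval series supports the peak-load analysis.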
<table>
<tr>
<th>
**DS. TinyMesh.03 Tinymesh_Gateway**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_Data packets from any Tinymesh network._
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_The Tiny-Mesh Gateway relays information from different TinyMesh devices to
upstream service._
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_Tiny-Mesh_
</td> </tr>
<tr>
<td>
Partner in charge of the data
collection (if different)
</td>
<td>
_Tiny-Mesh_
</td> </tr>
<tr>
<td>
Partner in charge of the data
analysis (if different)
</td>
<td>
_Tiny-Mesh_
</td> </tr>
<tr>
<td>
Partner in charge of the data
storage (if different)
</td>
<td>
_Tiny-Mesh_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WP7 and WP8._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_The Tiny-Mesh Gateway is a serial communication device that can transfer data
in two modes: transparent and packed._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_The Tiny-Mesh Gateway is a serial communication device that relays data to
upstream services._
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level (Confidential, only for members of the
Consortium and the Commission
Services) / Public
</td>
<td>
_The full dataset will be confidential and only the members of the consortium
will have access on it._
</td> </tr>
<tr>
<td>
Data sharing, re-use and
distribution (How?)
</td>
<td>
_Data and metadata will be accessible by an API in Tinymesh Cloud._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
_Data and metadata will be accessible by an API in Tinymesh Cloud._
</td> </tr> </table>
**Table 23: Dataset description of the Tiny-Mesh gateway**
# Conclusions
This document is the second version of the Data Management Plan. It is based
on knowledge harvested through describing requirements, preparing the VICINITY
architecture, and planning the pilot sites. The updated datasets have been
delivered by the participants responsible for the test labs and the living
labs, and describe procedures and infrastructure that have been defined at
this point in the project.
The work on semantics and privacy issues has continued. It is the process of
clarifying procedures that has led to many of the updates found in this
document. Certain areas still need attention. This will in particular matter
for the Open Calls, as these are still tentative and the documents and other
material are still being worked out by the VICINITY consortium. Activities for
a Data Management Portal have proceeded, and a demonstration has been held
twice, presenting how the VICINITY architecture works, how it integrates, and
how the concept of a virtual neighborhood functions in practical terms. More
updates are envisaged as the studies of the pilot sites proceed and the open
calls are presented. Future versions may also have updated consent forms,
since the upcoming GDPR may lead to changes in how privacy and ethics issues
are formulated.
A lesson learned from this report is that more IoT assets have been introduced
that will be integrated within the ecosystems to be tested. There has been a
fruitful discussion between project partners, which increases the quality of
this document. Ownership of data becomes more important, and will receive
special attention in the next part. The Data Management Portal is still under
development, and the need for each project partner to contribute means that
editing and access rights will need to be managed accordingly. It must also be
noted that the partners are unable to specify exactly what kinds of datasets
will be relevant as the project proceeds. This is what they expect to learn
from the pilot sites and other tests conducted at the workbench. It is
therefore expected that the datasets may change accordingly.
The VICINITY Data Management Plan still puts a strong emphasis on the
appropriate collection, and publication should the data be published, of
metadata, storing all the information necessary for the optimal use and reuse
of those datasets. This metadata will be managed by each data producer, and
will be integrated in the Data Management Portal. This is considered even more
important with the upcoming application of the General Data Protection
Regulation (GDPR).
The final version of the DMP is due in December 2019. It is expected to present
the final datasets and lessons learned, alongside plans for further management
of test data and production data. It will provide information on the existence
(or not) of similar data and the possibilities for integration and reuse. In
addition, issues like the period of data preservation, the approximated end
volume, the associated costs and how these are planned to be covered will be
tackled in order to make the Portal and other necessary management tools
operational and to provide a detailed Management Plan for each dataset.
---
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0426_WEAR_732098.md
---
# 1 INTRODUCTION
WEAR Sustain ( _Wearable_ _technologists Engage with Artists for Responsible
Innovation_ ) aims to engage people in the creative industries, particularly
art and design, to work more closely with technology and engineering
industries, to shift the development of the wearables, smart and e-textile
landscape towards a more sustainable and ethical approach. The project fosters
cross-disciplinary, cross-sectoral collaboration through the co-design and co-
development of ethical, critical and aesthetic wearable technologies and smart
textiles, with a strong focus on the use of personal data within the industry.
Wearable technologies aimed at private consumers constitute a nascent market
that is expected to grow very fast. Numerous technology companies and startups
are working to make the next wearable device or application for body-data
tracking. Currently, wearable technology collects users' personal
(physiological) data, most commonly through medical or fitness monitoring. The
wearable technology companies own the users' physiological data, mainly
collected via mobile apps and devices, with the ability to perform any kind of
operation on it, such as analysing, interpreting or selling it, without user
consent. These issues are rarely discussed beyond the fine print on these
devices, some vaguely described security policies, and long EULAs (End User
Licence Agreements).
WEAR Sustain aims to raise awareness of, and foster discussion around,
sustainability, ethics and personal-data issues. In addition, in January 2012
the European Commission proposed a comprehensive reform of data protection
rules 1 in the EU, with the Regulation 2 3 coming into force on 24th May 2016
and applying from 25 May 2018. The personal-data Regulation (EU) 2016/679 4 in
particular addresses issues around the processing of personal data and the
free movement of such data.
The project will develop a framework within which the future of ethics and
sustainability in wearables and electronic or smart textiles can be discussed
and prototyped, to become examples of what the next generation of developments
could or should be. The project will engage a wide variety of stakeholders 5
involved in the development and use of these technologies over the project's
two-year duration, between January 2017 and December 2018.
At the end of the project WEAR Sustain will highlight any new approaches to
design, production, manufacturing and business models, to enable
entrepreneurs, stakeholders and citizens to become more aware of the issues
involved in making and using wearable technologies. These findings will be
made available within a Sustainability Strategy and Toolkit in December 2018.
WEAR Sustain participates in the Horizon 2020 Open Research Data Pilot, which
aims to improve and maximise access to, and re-use of, the research data it
generates. This Data Management Plan has been developed to determine and
explain which research data will be made open, based on the Horizon 2020
Guidelines 4 on Data Management. It describes the data generated and the
processes and roles that will involve all consortium partners, and will ensure
their commitment by including appropriate terms in the Consortium Agreement.
WEAR adheres to Open Knowledge principles, guaranteeing stakeholders and users
open access to published research and information through open data
publishing. The reports and recommendations that will be produced by the
project will therefore be freely available to all users and the general public
via the WEAR Sustain website and freely disseminated locally via WEAR internal
and external dissemination channels.
The following WEAR Sustain Data Management Plan is a living document that will
be updated where needed, as the project progresses.
## 2\. DATA SUMMARY
### 2.1 PURPOSE OF DATA COLLECTION
WEAR Sustain will provide access to the facts and knowledge gleaned from the
project’s activities over a two-year period, to enable the project’s
stakeholder groups, including creative and technology innovators, researchers
and the public at large, to find and re-use its data, and to find and check
research results.
The project’s activities aim to generate knowledge, methodologies and
processes through fostering cross-disciplinary, cross-sectoral collaboration,
discussion, evaluation and the co-design/development of ethical, critical and
aesthetic wearable technologies and smart textiles. The data from these
activities will be collected at knowledge-exchange events, via the funded
sustainable innovation process, and online via the WEAR ecosystem, to evaluate
how future creators may develop wearables, smart and e-textiles that are
ethical and/or sustainable.
It is planned that knowledge generated throughout the project will lead
towards new approaches to design, production, manufacturing and business
models to help artists and designers, entrepreneurs, stakeholders, and
citizens become more aware of the issues involved in making and using ethical
and sustainable wearable technologies and to shift the development of the
wearables and e-textile landscape towards a more sustainable and ethical
approach.
WEAR will encourage all parties to contribute their knowledge openly, to use
and to share the project’s learning outcomes, and to help increase awareness
and adoption of ethics and sustainability in the wearables, smart and
e-textile fields and the technology industry at large.
Funded projects have the right to opt-out of sharing their data, but will need
to say why. Reasons for opting out may include privacy, intellectual property
rights or if sharing might jeopardise their project's main objective.
Knowledge, methodologies and processes documented will be used to create a
Sustainability Strategy and Toolkit at the end of the project, which will set
the benchmark for ethical and sustainable technology development. The
Sustainability Strategy and Toolkit in particular will summarise the outputs
of the WEAR ICT-36 Innovation Action, to inform stakeholders on the
innovations and processes available for adoption of sustainable business and
innovation practices, born out of the WEAR Sustain early stage funded
innovation activities. WEAR will report on the selection process, the
performance of the funded teams, their progress over the course of the project
and their potential to grow and find funding to continue their project. We
will make this information open and accessible for all.
### 2.2 DATA COLLECTION AND CREATION
#### WEAR Sustain collected and/or created data
WEAR Sustain will collect, generate and create data from its project
activities across four broad categories:
1. Data for evaluation;
2. Research Data and metadata;
3. Manuscripts;
4. Dissemination material.
The following interactions with project stakeholders will be used to collect
the data the project needs for evaluation;
* WP2 - Digital mapping of the WEAR Sustain ecosystem 7 ;
* WP3 - Online applications for the Open Calls 8 ;
* WP3 - Interaction with experts on review & selection panels 9 for funding evaluation;
* WP4 - Monitoring of the Sustainable Innovation process by the funded teams and their mentors and hub leaders, project support to funded team and interaction with support hubs;
* WP5 - Gathering of all other insights from events, funded teams and expert consultation available to feed into the Sustainability Strategy Toolkit;
* WP6 - Dissemination/engagement activities including presentations and discussions on project themes by experts and stakeholders at project and external events.
Data types may take the form of lists (of organisations, events, activities,
etc.), reports, papers, interviews, expert and organisational contact details,
field notes, videos, audio and presentations. Video and Presentations
dissemination material will be made accessible online via the WEAR Sustain
website and disseminated through the project’s media channels, EC associated
activities, press, conference presentations, demonstrations and other means,
using open publishing means and standards.
In the following we describe the types of data and the formats used. A list of
all data to be collected and created is shown in Table 1 and any additional
information will be explained in the following sections.
7 https://network.wearsustain.eu/
8 http://wearsustain.eu/open-calls/
9 Expert panels are managed by the consortium under WP3. The expert panel
will be published online upon selection.
#### Data for evaluation
Data for evaluation will consist of image, video, audio and manuscript
datasets, to be used for evaluation and for development of the Sustainability
Strategy and Toolkit. This data will be used by the consortium throughout the
project.
WEAR will take advantage of any pre-existing data that can be used in the
project. Datasets will include any material collected by partners in the
consortium, such as WP2 or WP5 state of the art research, and public images
owned by the consortium.
The project will generate new images, video, audio and documents data. Data
from events, with permission from any owners, will be evaluated, processed and
shared with all stakeholders and the general public online. The project will
also evaluate and share data collected via the WEAR Online Network to assess
the services offered by the hubs. Data collected via applications on the F6S
platform (WP3) will be evaluated and any data supplied by winning teams may be
processed and used for the team’s promotion. Data for evaluation will also
cover the project ethics and sustainability themes.
Audio files may be recorded at the knowledge exchange sessions at WEAR Sustain
public events via mobile phones and digital recorders. The common file formats
for these are WAV or AIFF (which are the highest quality), but files might be
published as MP3.
Images and video datasets will use common file formats. Images will be JPEG
and PNG files. Video will be MP4 for best quality, however videos may also be
in file type MOV, MPEG, AVI, 3GP, WMV, or FLV.
#### Research Data and metadata
This category uses data generated by user interaction via:
* The application and Sustainable Innovation process;
* WEAR events; and
* The WEAR Online Network and Ecosystem.
##### Manuscripts
Manuscripts will consist of all the reports generated during the project,
including all deliverables, publications and internal documents. Microsoft
Word (DOCX) and PDF will be used for final document versions.
##### Dissemination Material
WEAR Sustain will produce dissemination material in a variety of forms:
posters, public presentations, how-to/speaker videos and website. All
dissemination material will be shared via PDF, JPEG or PNG files unless
otherwise stated.
The expected size of all the data, as outlined in Table 1 is around 52 GB.
This will be updated as the project progresses.
### 2.3 DATA COLLECTION AND CREATION METHODOLOGIES
**Collection and creation of data**
In the following, details of the collection or creation of the data of the
different categories/types will be provided:
#### ● Data for evaluation
Online data of the WEAR network will be collected by the team in WP2 in the
form of digital mapping of the WEAR Sustain ecosystem.
During the competition application phases, each of the 48 applicant teams will
provide up to 10 pages of text and a 3-minute video pitch of a maximum
30 MB file size, submitted and held via an online application portal called
F6S. The pre-selection process will be documented through scoring sheets and
discussion, followed by a similar final selection process.
For dissemination, WEAR will generate datasets of images, videos, audio and
transcripts from knowledge exchange activities at events. There will be 10
project events and a final showcase over the two years of the project. It is
estimated that there may be a total of 40 video-recorded presentations at
project events, plus up to six recorded round-table discussions per event,
totalling 60 audio/transcript files. There will also be 48 funded-project
presentations contributing image and video data to the final showcase. In
addition, there may be external event data recording.
During the Sustainable Innovation process, WEAR will monitor the
methodologies, processes and support used by our 48 funded teams. Teams will
record their findings via an Offbot project reporting tool, which will
remain private, for use by the teams, mentors and WEAR consortium. The team
and mentors may also provide reports.
#### ● Research Data and metadata
WEAR Sustain will use quantitative and qualitative research methods for data
collection throughout the project. The former will rely on sampling and
structured data collection to produce results that are easy to summarise,
compare and generalise. Qualitative data collection will be used to
clarify quantitative evaluation findings. The project will gather valuable
knowledge and insights originating from the following project activities:
**○** Knowledge Exchange activities at events and webinars - Professionals and
SMEs round table discussions at our events and online. Registration and
attendance to Knowledge Exchange activities
**○** The Digital Platform for Ecosystem Visualisation - WEAR Sustain mapping
of the EU wearable technology and e-textiles network; the ‘What, Who, Where,
When, Why and How’ of wearable technologies and e-textiles materials, in terms
of where components are sourced, experimentation, design, prototyping, and how
they are tentatively transformed into new business models across Europe,
managed by Datascouts.
**○** The WEAR Sustain Website - The website allows users to subscribe to a
notifications list. Cookies are also used.
**○** The WEAR Sustain Mailing lists and databases - Databases of WEAR Sustain
hubs, experts, mentors and event and newsletter sign-ups, managed via
Mailchimp and Datascouts. Databases must include the source of the data (e.g.
event), list the person’s first name and surname, their location, the date,
their email and their affiliation.
**○** Surveys - Online and paper based surveys will be provided after each
WEAR activity, such as events, to glean as much data from the participant as
possible.
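The required database fields listed above can be captured as a simple CSV schema. A minimal sketch follows; the column order and the CSV format itself are assumptions, as the DMP only names the fields:

```python
import csv
import io

# Fields required by the DMP for the WEAR mailing lists and databases.
# The column order here is illustrative, not prescribed by the DMP.
FIELDS = ["source", "first_name", "surname", "location", "date", "email", "affiliation"]

def to_csv(records):
    """Serialise contact records to a CSV string with the required columns."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for rec in records:
        writer.writerow(rec)
    return buf.getvalue()

# Hypothetical example record showing the source of the data (e.g. an event).
example = {
    "source": "Knowledge Exchange event",
    "first_name": "Ada",
    "surname": "Example",
    "location": "Berlin, DE",
    "date": "2017-06-15",
    "email": "ada@example.org",
    "affiliation": "Example Studio",
}
print(to_csv([example]))
```

Keeping the source of each contact as a column makes it straightforward to restrict or delete records collected at a particular event later on.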
Following data collection, as part of our mandate (WP5) to
build a WEAR Sustain Sustainability Strategy, the project will organise the
collected data into a suitable format for analysis. During the analysis stage,
WEAR Sustain will examine the relationships, patterns and trends in the above
data collected, to develop conclusions for an open access Toolkit, which will
be published online and made freely available.
#### ● Manuscripts and Dissemination material
A total of 27 WEAR public deliverable reports will be created by the
consortium, stored on the WEAR website. In addition dissemination material
including a media kit, press releases, presentations, publications, images and
videos, as well as other resources to aid the project will be created by the
consortium over the duration of the project and made freely available for
dissemination on the WEAR Sustain website. Microsoft Office tools will be used
wherever possible as well as PDF and JPEG.
##### Structure, name and versioning of files
Regarding the structure of the WEAR Consortium shared drive, which is private,
there is currently no provision for metadata, but each document produced is
attached to a specific work package (e.g. WP1, WP2, WP3), and within each
work package standardised sub-folders exist which correspond to the
deliverables for that package.
In the case of video and image files, the project will keep the raw data
separate from the processed data. The consortium has chosen to use Flickr
which provides 1TB of free storage. The consortium will use this to store all
unprocessed video and image files to be marked as private. Raw audio and
transcript files will be stored privately on the shared drive.
Processed and edited videos will be uploaded to YouTube under the Science &
Technology category with a Standard YouTube Licence. Selected images of
the best quality will be made public on Flickr with All Rights Reserved.
Public Flickr files and YouTube videos will be made visible and accessible via
the WEAR Sustain Website. Public dissemination material will be stored on the
project website in the Share section, under the Media and Resources page,
including manuscripts and reports. We have enabled sharing of publicly
available resources across a variety of social media channels.
Documentation of public material will include, wherever possible, the publish
date, the event/methodology used, the aim, followed by our general funding
statement and the project URL. A feedback link to a survey form may also be
provided where appropriate. In the case of manuscripts, the same information
will apply unless the structure of the documentation inhibits it (e.g. a
journal/conference paper).
All WEAR public platforms and published material should state the purpose of
WEAR Sustain as an EU-wide wearables, smart and e-textiles project funded by
Horizon 2020 to confront ethics and sustainability through research and
innovation. It must state that the project has received funding from the
European Union’s Horizon 2020 research and innovation programme under grant
agreement No. 732098, that the sole responsibility of the publication lies
with the WEAR Sustain Consortium and that the European Commission is not
responsible for any use that may be made of the information contained therein.
As to the naming of files, in all cases, files will be named according to
their event or content and date. Versioning is not appropriate for much of the
data produced as part of WEAR Sustain. We enable version control on reports
and deliverables, and this will be managed on a case-by-case basis appropriate
to the task.
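As an illustration, a small helper could derive such file names automatically. The separator and ISO date format are assumptions; the DMP only fixes that names are built from event or content and date:

```python
from datetime import date
import re

def wear_filename(event_or_content: str, when: date, ext: str) -> str:
    """Build a file name from event/content and date.

    The slug style, underscore separator and ISO date are illustrative
    choices, not prescribed by the DMP.
    """
    # Reduce the event/content label to a lowercase, hyphen-separated slug.
    slug = re.sub(r"[^a-z0-9]+", "-", event_or_content.lower()).strip("-")
    return f"{slug}_{when.isoformat()}.{ext}"

print(wear_filename("Berlin Knowledge Exchange", date(2017, 6, 15), "mp4"))
# berlin-knowledge-exchange_2017-06-15.mp4
```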
Metadata information usually includes such basic elements as: title, who
published the dataset, when it was published, how often it is updated and what
license is associated with the dataset. The metadata will correlate with the
glossary of terms defined by the WEAR consortium, which is publicly
available on the website.
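A minimal metadata record covering these basic elements might look like the following sketch; the field names and example values are assumptions, with the agreed terminology defined in the project glossary:

```python
# Illustrative metadata record with the basic elements named above:
# title, publisher, publication date, update frequency and license.
dataset_metadata = {
    "title": "WEAR Sustain event attendance figures",   # hypothetical dataset
    "publisher": "WEAR Sustain consortium",
    "published": "2017-09-01",
    "update_frequency": "after each project event",
    "license": "CC-BY-4.0",
}

def is_complete(meta: dict) -> bool:
    """Check that all basic metadata elements are present and non-empty."""
    required = {"title", "publisher", "published", "update_frequency", "license"}
    return required <= meta.keys() and all(meta[k] for k in required)

print(is_complete(dataset_metadata))
```

A completeness check of this kind could be run before any dataset is published, so that no open dataset ships without its basic descriptive metadata.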
## 3\. FAIR DATA
WEAR Sustain will endeavour to make its research data ‘Findable, Accessible,
Interoperable and Reusable (F.A.I.R)’, leading to knowledge discovery and
innovation, and to subsequent data and knowledge integration and reuse.
The WEAR consortium is aware of the mandate for open access of publications in
the H2020 projects and participation of the project in the Open Research Data
Pilot. The consortium is yet to decide on a data repository for the project
and is in discussion on the best repository for the project outcomes. The
consortium, through WP5 and WP6, will ensure that scientific results produced
by funded teams that do not need to be protected and can be useful for the
research community will be duly and timely deposited, for access by any user.
As mentioned above these will be;
* Electronic copies of any final versions or final peer-reviewed manuscripts accepted for publication, made available with open access publishing;
* Public project deliverables and any necessary summaries of confidential project deliverables;
* Public presentations, and any other kind of dissemination material;
* Research data needed to validate the results presented in any deposited publications.
The standard file format for information will be in PDF. Unless documentation
is protected under copyright, all publications will be made freely available
under the Creative Commons Licence.
**Data within the WEAR DataScouts online platform** is searchable through a
variety of search interfaces that provide endpoints to search on name and
keywords. Current search filters include stakeholder type (Accelerator,
Creative Professional, Enabler, Technology Provider, Creative & Innovation
Hub, Investor, Academia, Government & Public Admin, Enterprise, Association)
and geography. Stakeholders may also search for a partner via their membership
to find a creative or technology collaborator for the project's Open Calls.

Keywords are normalised before being made available for search, in order to
optimise data quality and searchability. One of these normalisations is to
lowercase the tags that are attached to objects; another is to normalise
geographical data, for which we use the GeoNames data dump. There is currently
no publication of standardised metadata, such as DCAT, except when a CSV dump
is made within the DataScouts application. When requesting a CSV dump from
DataScouts, a separate RDF file will be provided containing DCAT-conformant
metadata, covering key aspects such as the publisher name, URI, etc. Internal
identifiers are kept but are not transparent to users of the application; a
translation to non-persistent identifiers makes the different objects
accessible. Data from the WEAR Online Network will be processed and made
available through DataScouts at the end of the project (or DataScouts SLA)
through a CSV dump.
The data processed in DataScouts will be downloadable without any license,
effectively waiving rights of any sort. Data quality is assured in an
automated way, meaning data coming from a variety of sources will be cleaned,
formatted and normalised appropriately. E.g. using GeoNames references for
addresses, normalizing social media handles to full social media URIs, and
normalizing URLs. All of the incoming data, cleaning, and normalizing
processes are being tracked throughout the lifecycle, meaning every change can
be traced back to its source, effectively implementing a provenance workflow.
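The cleaning steps described above (lower-casing tags, expanding social media handles to full URIs, normalising URLs, and retaining provenance) can be sketched as follows. The function names and record shape are illustrative, and the GeoNames address lookup is omitted:

```python
def normalise_tag(tag: str) -> str:
    """Tags attached to objects are lower-cased before indexing."""
    return tag.strip().lower()

def normalise_twitter_handle(handle: str) -> str:
    """Expand a bare handle like '@wearsustain' to a full social media URI."""
    return "https://twitter.com/" + handle.lstrip("@")

def normalise_url(url: str) -> str:
    """Give scheme-less URLs an explicit scheme."""
    return url if url.startswith(("http://", "https://")) else "https://" + url

def clean_record(record: dict) -> dict:
    """Apply the normalisations and keep the raw input alongside the result,
    so every change can be traced back to its source (provenance)."""
    return {
        "tags": [normalise_tag(t) for t in record.get("tags", [])],
        "twitter": normalise_twitter_handle(record["twitter"]),
        "website": normalise_url(record["website"]),
        "_provenance": {"raw": record},  # original values retained for tracing
    }

cleaned = clean_record({
    "tags": ["E-Textiles ", "Wearables"],
    "twitter": "@wearsustain",
    "website": "wearsustain.eu",
})
print(cleaned["tags"], cleaned["twitter"], cleaned["website"])
```

Storing the raw record next to the cleaned one is the simplest form of the provenance workflow mentioned above: a later audit can always compare output to input.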
**For event data** , sharing of registration and attendance details will be
limited to numbers, and broad demographic details. The remaining personal data
will be restricted as a way of protecting personal information. Speaker
presentations may be made available on Slideshare, if the speaker approves of
this. Events will be filmed, edited and uploaded to YouTube for public
consumption. Information generated through round-table discussions at events
will not be published in its raw form, but audio transcriptions will be
collected and used for analysis, anonymised and published via the WEAR Sustain
Sustainability Strategy and toolkit.
**A WEAR Sustain glossary** 16 has been developed by the consortium and
made available via the WEAR Sustain website for inter-disciplinary
interoperability, particularly to enable understanding between technical and
non-technical disciplines, and is a publicly available document. Some
terminology is particular to different industries, and we will endeavour to
avoid the use of highly technical terms and use language which is easily
understood by all. The Wearables and E-textiles page on the WEAR Sustain
website also publishes a diagram outlining the ‘Art, Design & Technology
disciplines that could be involved in the development of wearables, smart or
e-textiles’ 17 .
**The WEAR Sustain Website,** hosted by We Connect Data, will be used for the
dissemination of available research and WEAR Sustain activities. The
consortium is exploring ways of ensuring the website www.wearsustain.eu and
online community via datascouts will remain available for a longer period of
time (to be decided) beyond the project duration.
**WEAR will share published material on external online platforms** that hold
captive potential WEAR audiences, including Slideshare, YouTube, LinkedIn and
social media platforms such as Facebook, Instagram, and Twitter. We will be
adding metadata in line with the platforms’ most commonly searched for
keywords.
**All other research data** to be collected throughout the project lifecycle
will be shared via the Sustainability Strategy and Toolkit to be used by the
wearables, smart, e-textiles industries and the public at large. This toolkit
will be made available in Month 24 of the project.
**Projects funded via WEAR Sustain** will be required to maintain records
relating to their funding for a minimum of five years to comply with audits
conducted either by or on behalf of the European Commission. Partner
organisations will adhere to their own internal regulations for keeping
records.
16 The WEAR Glossary can be found at _http://wearsustain.eu/wp-content/uploads/2017/03/WEAR-Sustain-Glossary.pdf_
17 _http://wearsustain.eu/about/wearables-e-textiles/_
### 4\. DATA SECURITY
#### Data storage, Access and Backup
Storage and maintenance of WEAR Sustain data will be handled according to the
data category, privacy level, need to be shared among the consortium and its
size.
The WEAR Sustain project utilises a number of online platforms to collect and
store data. Although we are beginning with these tools, the
consortium is researching migration to platforms that have a strong
ethical standpoint, in line with our challenge to address data ethics in our
project.
**Application via F6S -** During the competition application phase the WEAR
project uses the F6S.com start-up platform to enable applicants to pitch their
ideas during the Open Call process. The platform is used to process the
applications and facilitate the review process. F6S is used regularly by EU
projects relying on open calls (e.g. FP7 and H2020). It will store all data
from this process and most questions related to data handling are dealt with
in their section on 'privacy policy'. F6S will collect and store the
information for each application, automatically collected by F6S to include
contact information, location, role, skills and experience. F6S will also
collect and store;
* Information about applicants’ proposed projects seeking funding:
* An executive summary of the project (max. 1 page);
* A presentation of the team, their expertise, previous realised projects on wearables including contact details and CV of the management team members (max. 1 page);
* A project pitch (max. 5 pages), where the project is proposed on the basis of the NABC method (Need, Approach, Benefit, Competition), including a description of the technology and design thinking methodologies used in the project;
* A prototype plan of how they will develop over the course of 6 months from where the project is located in the development process at the time of submission to a fully market ready prototype. The plan will describe the key milestones for the project, a brief description of the deliverables and the budget. (max. 2 pages);
* A concrete business case for the application of their idea. (max. 1 page);
* A video pitch of the project (max. 3 minutes), consisting of all the above topics.
All application documents will remain private, in line with the F6S privacy
policy, with the exception that all WEAR consortium members, reviewers, judges
and mentors will have access to these data over the course of the WEAR
project. Reviewers will have access to data regarding the proposals they have
to review during the selection period. F6S does not provide further data
regarding the format for data storage and where it is stored. WEAR will
download project applications and store the information on the shared drive.
Information of the selected teams will also be transferred from F6S to
DataScouts to allow for monitoring.
It is worth noting that WEAR may use another platform for round 2 that has
more transparency around its use of the data it collects. WEAR will request
that F6S delete all applicants’ data after the selection process.
**WEAR Online Network and Ecosystem** - The WEAR DataScouts platform enables
the analysis of the WEAR ecosystem data to uncover and analyse connections
between those registered in our network. The data processed via DataScouts
will be stored on Digital Ocean servers, which are located in Amsterdam and
are owned by TeleCityGroup. These servers are serviced as virtual private
servers and are virtually accessible only by DataScouts admins, which is the
only way to access the raw data itself. Physical access is entirely
restricted, except for specific Digital Ocean engineers. The
aforementioned data is also available through the DataScouts application. The
collected and processed data will be kept on these servers for the duration of
the project or for the duration of the DataScouts SLA. The data is both stored
in a MySQL compatible format, in MariaDB and is indexed in ElasticSearch, both
systems are hosted on Digital Ocean servers. DataScouts also hosts the WEAR
Sustain public website, accessible by the WP6 team.
**Offbot** - This online journal will be used by the funded teams to
supply the WEAR consortium with regular updates on the team’s progress. Every
day the Offbot will send an email to every member of the funded teams. It
logs all responses and the consortium will be able to access this for team
management and reporting purposes. The consortium and project mentors will
have access to the reports. And at the end of the project there will be a
journal for each project to review. All responses and journals can be
downloaded in PDF format. There is a hosted version and WEAR is exploring how
the project may host its own version.
**External dissemination software -** For dissemination activities WEAR uses
a number of readily available tools to support our activities. These include
the EventBrite event management software, for registration and attendance
details and Mailchimp email software for communications. These will be used
by the consortium for community engagement and dissemination. These online
platforms store the collected data for an indefinite period and their privacy
policies comply with the EU-U.S. Privacy Shield Framework. Eventbrite’s
partnership with MailChimp allows seamless integration. Databases may be
downloaded and stored on the WEAR consortium’s shared drive. Any data shared
with the general public will be limited to numbers and broad
demographic details via reports. The remainder of the data will be restricted
as a way of protecting personal information. All software is password-protected,
accessible only by the WEAR consortium and its project team.
As mentioned in section 2.2. raw WEAR Sustain video and image data will be
stored in Flickr. This online account allows members 1 TB of photo and video
storage. The consortium will have access to these, making only a selection of
the highest quality images shareable.
**All other electronic data** generated during research activities, such as
mailing lists and surveys will be stored on the consortium’s shared Google
Drive, backed up by google servers, or locally at partner’s workstations and
servers. Locally, consortium partners must have secure servers for any
information to be stored and server drives must be backed-up periodically. A
backed up copy is considered sufficient for these types of data. The project
will be working with a wide range of hubs and partners and the project will
encourage them to store shared documents via the consortium's servers. The
project is exploring ethical storage and is considering moving our storage
once a suitable solution is found.
### 5 RESPONSIBILITIES AND RESOURCES
#### Responsibilities
Imec, as the project coordinator, is responsible for implementing the Data
Management Plan (DMP).
In principle, all partners are responsible for data generation, metadata
production and data quality. Specific responsibilities are to be assigned
depending on the data and the internal organisation in the WPs and tasks where
data is created. Thus, for example, WP6 is responsible for coordinating
dissemination data, such as video material from events, and WP2 for
coordinating data in the DataScouts ecosystem. In the case of acquisition of
data the leader of WP5 will organise the responsibilities for all the partners
that will contribute to the Sustainability Strategy and Toolkit.
#### Resources
The cost for data management for the data processed within DataScouts and the
website for dissemination is already covered in WP2 Ecosystem Intelligence
Platform.
Dataset collection, storage, backup, archiving and sharing will be, in the
majority of cases, the responsibility of the partner who creates the data
and/or the servers in which they will be stored. imec, as the coordinator, will
be responsible for ensuring the backup of any shared drives and servers.
It is not yet known if any extra resources, such as physical storage and media,
are needed.

**Completion of research**
imec as the coordinator will choose the most suitable repository to deposit
data and publish results. The coordinator will also inform OpenAIRE, the EU-
funded Open Access portal.
#### Data Security
WEAR Sustain intends to make its data public at the point of use. To ensure
any individual’s personal privacy is protected during the sharing of data,
the consortium is reviewing a number of platforms for sharing of data. The
project currently uses a shared Google Drive and Dropbox for the transfer of
files but is considering other ethical platforms, such as the Signal app which
sends files as a secure and encrypted chat application.
The data processed within DataScouts is backed-up on dedicated servers on a
daily basis, over a secure SSL connection. When data recovery is needed, the
back-ups will be transferred over the same connection. Every server where data
is being kept is only accessible through an SSL connection with public key
authentication.
The information stored on Google Drive is not personal information; any
information related to individuals has already been made available to the
public by those individuals. This data is recoverable through back-ups
managed by Google.
F6S’s data policy is located at https://www.f6s.com/terms-and-conditions and
is regularly updated. F6S does not provide further details regarding the
format for data storage or where it is stored.
#### Data Sharing
WEAR Sustain has its lineage in another EU project called FET-Art and its
umbrella initiative ICT&Art Connect, which Camille Baker and Rachel Lasebikan
were involved in developing. Imec’s acquired company iMinds was a partner in a
follow-up study. The European Commission’s (ICT) department DG-Connect has
recently launched the STARTS initiative 22 to promote inclusion of artists
in innovation projects funded by H2020. WEAR is one of the STARTS projects
covered by the STARTS Initiative. WEAR will also work closely with other
STARTS-related projects VERTIGO 23 , STARTS Prize 24 , FEAT 25 and
BRAINHACK 26 to ensure the full exploitation of WEAR’s research and for
community building. Close interaction will support our activities in
Europe, and participants from the FET-Art projects have been invited to
participate in WEAR via the online network.
#### Ethics and Compliance
This section is to be covered in the context of the ethics review, ethics
section of DoA and ethics deliverables. Ethics is covered in a separate
deliverable D7.1, which describes the principles and procedures for
collection, storage, processing and destruction of personal data in WEAR
Sustain’s activity (and in D7.2 due in M7, which is more focused on the
selected teams' handling of data).
The consortium is exploring new ethical ways of sharing and storing data, and
ethical compliance within the project. This is being researched as part of the
project, to be managed by WP7.
#### Copyright and Intellectual Property Rights (IPR)
Sustainable Innovation teams funded by the WEAR Sustain project will own their
own IPR for their prototype developments. They do agree to share their
methodologies and processes for the duration of the WEAR Sustain project so
that we may obtain the research needed for the Sustainability Strategy and
Toolkit. Details of this are described in the Open Call section of the
website.
Table 1 below provides the details of the owners of each of the data to be
collected and produced by the WEAR Sustain project. As a general principle,
for collected data the owner will remain the same. For produced data the
producer of the data will own the data unless they have agreed to produce the
data on WEAR Sustain’s behalf.
The WEAR team has a separate consortium agreement in place to address any
copyright issues within the consortium.
22 https://ec.europa.eu/digital-single-market/en/ict-art-starts-platform
23 http://vertigo.starts.eu/vertigo-project/
24 https://starts-prize.aec.at/
25 http://featart.eu/
26 http://hackthebrain-hub.com/
**Table 1 - D6.11 Data management Plan:**
**Table 1 WEAR Sustain collected and produced Data**
This document lists all the data that WEAR Sustain is collecting or generating
during the lifetime of the project, how it will be exploited and if it will be
shared for verification or reuse. It also identifies which data will be kept
confidential, which will be made openly available and where it will be stored.
The spreadsheet can be viewed at:
_https://docs.google.com/spreadsheets/d/1Zw8mOl6VVetBUarQeSIgonc0M8H2x8tIAozaCTE151o/edit?usp=sharing_
---

**Source document:** 0427_INDEX_766466.md (Horizon 2020; https://phaidra.univie.ac.at/o:1140797)
# 1\. Data management plan in the context of H2020
## 1.1 Introduction
The European Commission (EC) is running a flexible pilot under Horizon 2020
called the Open Research Data Pilot (ORD pilot). This pilot is part of the
Open Access to Scientific Publications and Research Data Program in H2020. The
ORD pilot aims to improve and maximize access to and re-use of the research
data generated by Horizon 2020 projects and takes into account the need to
balance openness and protection of scientific information, possible
commercialization and Intellectual Property Rights (IPR) protection, privacy
concerns, security as well as data management and preservation issues.
According to the EC suggested guidelines, participating projects are required
to develop a Data Management Plan (DMP). The DMP describes the types of data
that will be generated or gathered during the project, the standards that will
be used to generate and store the data, the ways in which the data will be
exploited and shared for verification or reuse, and how the data will be
preserved. In addition, beneficiaries must ensure that their research data are
Findable, Accessible, Interoperable and Reusable (FAIR).
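Where FAIR data are required, a machine-readable metadata record deposited alongside the data files is the usual practical vehicle. The sketch below shows what such a record can look like; the field names are illustrative assumptions, not a mandated H2020 schema.

```python
import json

# Minimal, illustrative dataset metadata record supporting FAIR:
# a persistent identifier (Findable), an access URL and licence
# (Accessible/Reusable), and a standard MIME type (Interoperable).
# All field names and values here are assumptions for illustration.
dataset_record = {
    "identifier": "doi:10.5281/zenodo.0000000",  # placeholder DOI
    "title": "Example project dataset",
    "keywords": ["H2020", "open research data"],
    "access_url": "https://example.org/dataset",
    "license": "CC-BY-4.0",
    "format": "text/csv",      # MIME type of the data files
    "embargo_until": None,     # open on deposit
}

def to_json(record: dict) -> str:
    """Serialise the record for deposit alongside the data files."""
    return json.dumps(record, indent=2, sort_keys=True)

print(to_json(dataset_record))
```

A repository such as Zenodo generates much of this automatically on deposit; keeping a copy with the data makes the dataset findable even outside the repository.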
The DMP of the INDEX project is set up according to Article 29.3 of the Grant
Agreement, "Open Access to Research Data". Project participants can deposit
their data in a research data repository and take measures to make the data
available to third parties. The third parties should be able to access,
search, exploit, reproduce and disseminate the data. This should also help to
validate the results presented in scientific publications. In addition,
Article 29.3 suggests that participants will have to provide information, via
the repository, about tools and instruments needed for the validation of
project outcomes.
On the other hand, Article 29.3 incorporates the obligation of participants to
protect results, ensure security, and protect personal data and confidentiality
prior to any dissemination. Article 29.3 concludes: “ _As an exception, the
beneficiaries do not have to ensure open access to specific parts of their
research data if the achievement of the action's main objective, as described
in Annex I, would be jeopardized by making those specific parts of the
research data openly accessible. In this case, the data management plan must
contain the reasons for not giving access_ .”
In line with this, the INDEX consortium will decide what information will be
made public based on an analysis of aspects such as potential conflicts with
commercialization, IPR protection of the knowledge generated (by patents or
other forms of protection), risks to achieving the project
objectives/outcomes, etc.
## 1.2 Scope of the document

This document is a deliverable of the INDEX project, which is funded by the
European Union's Horizon 2020 Programme under Grant Agreement number 766466.
It describes what data the project will generate, how they will be produced
and analysed, and how the data related to the INDEX project will be
disseminated, shared and preserved. It covers:

1. the handling of research data during and after the project;
2. what data will be collected, processed or generated;
3. what methodology and standards will be applied;
4. whether data will be shared/made open and how;
5. how data will be curated and preserved.
The DMP is not a fixed document. On the contrary, it will have to evolve
during the lifespan of the project. This first version of the DMP includes an
overview of the datasets to be produced by the project, and the specific
conditions that are attached to them.
An updated version of the DMP will get into more detail and will describe the
practical data management procedures implemented by the INDEX project.
## 1.3 Dissemination policy
The DMP for INDEX focuses on the security and robustness of local data storage
and backup strategies, and on a plan for repository-based data sharing where
and when appropriate. It is based on the guidelines provided by the EU in the
DMP template document.
Effective exploitation of INDEX research results depends on the proper
management of intellectual property. A Consortium Agreement was signed by all
the parties in order to inter alia specify the terms and conditions pertaining
to ownership, access rights, exploitation of background and dissemination of
results, in compliance with the Grant Agreement. The Consortium Agreement is
based on the DESCA Horizon 2020 Model with the necessary adaptations
considering the specific context and the parties involved in the project.
Its basic principles are as follows:
### 1) Ownership of the results
Results are owned by the Party that generates them. Joint ownership is
governed by Grant Agreement Article 26.2 with the following additions:
Unless otherwise agreed:
* each of the joint owners shall be entitled to use their jointly owned results for non-commercial research activities on a royalty-free basis, and without requiring the prior consent of the other joint owner(s), and
* each of the joint owners shall be entitled to otherwise Exploit the jointly owned Results and to grant non-exclusive licenses to third parties (without any right to sub-license), if the other joint owners are given: (a) at least 45 calendar days' advance notice; and (b) Fair and Reasonable compensation.
### 2) Access rights
During the Project and for a period of 1 year after the end of the Project,
the dissemination of own Results by one or several Parties including but not
restricted to publications and presentations, shall be governed by the
procedure of Article 29.1 of the Grant Agreement subject to the following
provisions. Prior notice of any planned publication shall be given to the
other Parties at least 45 calendar days before the publication. Any objection
to the planned publication shall be made in accordance with the Grant
Agreement in writing to the Coordinator and to the Party or Parties proposing
the dissemination within 30 calendar days after receipt of the notice. If no
objection is made within the time limit stated above, the publication is
permitted.
It is noteworthy that the INDEX project will involve the use of easily
accessible human biological samples (biological fluids, primarily blood
derivatives). All personal data collection in INDEX will be done within the
remit of formal ethics clearances obtained from the Scientific Ethical
Committee of Central Denmark (M20090237) and the Danish Data Protection
Agency (2007-58-0015) and granted by the relevant university and/or local
health officials. Thus, any patient-related data, such as data from
pre-existing health records, will fall under the ethics clearance.
The legal basis for the personal data processing will be the participant’s
consent, obtained in accordance with the rules to which the collecting partner
is subject.
The most relevant standards regarding data handling, in this experimental
context with patients, concern the area of ethics, data protection and
privacy.
They are listed below:
* the Charter of Fundamental Rights of the European Union (signed in Nice, 7 December 2000, 2000/C 364/01), in particular Article 3 “Right to the integrity of a person” and Article 8 “Protection of Personal Data”;
* Decision 1982/2006/EC of the European Parliament and the Council concerning the Seventh Framework Programme of the European Community for research, technological, development and demonstration activities (2007-2013);
* Council Directive 83/570/EEC of 26;
* Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data;
* Directive 98/44/EC of the European Parliament and of the Council of 6 July 1998 on the legal protection of biotechnological inventions.
---

**Source document:** 0429_WellCO_769765.md (Horizon 2020; https://phaidra.univie.ac.at/o:1140797)
# Executive Summary
This document is the deliverable “D6.6 Data Management Plan” of the European
project “WellCo – Wellbeing and Health Virtual Coach” (hereinafter also
referred to as “WellCo”, project reference: 769765).
The **Data Management Plan (DMP)** describes the types of data that will be
produced, collected and/or processed within the project and how this data will
be handled during and after the project, i.e. the standards that will be used,
the ways in which data will be exploited and shared (for verification or
reuse), and in which way data will be preserved. This DMP has been prepared by
taking into account the template of the “Guidelines on Data Management in
Horizon 2020” [Version 3.0 of 26 July 2016]. The elaboration of the DMP will
allow WellCo partners to address all issues related with data protection,
including ethical concerns and security protection strategy. WellCo will take
part in the Open Research Data Pilot (ORD pilot); this pilot aims to improve
and maximise access to and re-use of research data generated by Horizon 2020
projects, such as the data generated by the WellCo platform during its
deployment and validation. Moreover, under Horizon 2020 each beneficiary must
ensure open access to all peer-reviewed scientific publications relating to
its results: these publications will be made also available through the public
section of the WellCo website. All these aspects have been taken into account
in the elaboration of the DMP.
This deliverable is a **living document** . At this stage in the research a
**lot of questions concerning the data are still open for discussion** .
Questions concerning opening up the data or answers to questions related to
the Findable, Accessible, Interoperable, Re-use (FAIR) principles will only
have a provisional answer in this DMP. We will add relevant information to the
DMP as soon as it is available. So far, at M6, we are at the beginning of the
project and have very little pseudo-anonymized data, collected within WP2
(“Co-design”) and stored at each trial site and in the joint repository
(Alfresco, by HIB). An update will be provided no later than in time for the
first review (M12). Other updates will be provided at M24 and M36 detailing
which/how the data will be made available to others within the Pilot on Open
Research Data (ORD).
Starting from a brief illustration of the WellCo project and of the ethical
concerns that could affect it, together with their link to the new General
Data Protection Regulation that comes into force this month (resulting in
some guidelines for data protection and security in WellCo), this report
describes the procedures for data collection, storage and processing at M6 of
the project.
# 1 Introduction
The research activities undertaken in the WellCo project have important data
protection aspects, in particular due to the sensitive and personal data it
collects, processes and stores. This deliverable analyses the **data
management implications** of the activities undertaken in the project, and
describes the guidelines and procedures put in place in order to ensure
compliance with data management requirements.
The structure of the document is as follows:
Initially, section 2 provides a data summary for the WellCo project. In order
to link the purpose for the generation and processing of data with the
project, **background information of the WellCo project as well as the main
objectives for the project** are explained **.**
Then, as many different actors are involved as active participants (elderly
people, their informal caregivers and professionals), one of the major
concerns of the consortium is the protection of their privacy while
collecting, analysing and storing sensitive data. Thus, **section 3** of this
deliverable focuses on the ethics measures that will be taken in each of the
countries producing, collecting and/or processing data, according to the new
European Regulation on Privacy, the General Data Protection Regulation
(GDPR), which came into force in May 2018. Although **ethics measures** were
already **defined in D2.2** **for the countries producing data**, i.e. the
**countries where trial sites are run**, Denmark (DK), Italy (IT) and Spain
(ES), this document expands these measures to cover also the **collecting,
storing, processing and re-use of this data** by technical partners during
the implementation of the modules envisaged for WellCo. At the end of this
section, some **guidelines for data protection and security** are proposed.
The aim is to assure maximum privacy for all the personal and sensitive (e.g.,
ethnicity, health/wellbeing) data within the project as well as after the
project end, when this research data will be made as openly accessible as
possible.
The final section gathers the FAIR principles with the aim of providing a
data management plan that **maximizes access to and re-use of research
data**, while also ensuring **open access to all peer-reviewed scientific
publications and agreed datasets during and after the project.** A **detailed
description** of the datasets to be handled in each WP of the project,
according to the requirements set out in Annex 1 – Data Management Plan
template of the “Guidelines on Data Management in Horizon 2020” [1], is given
at the end of this section (Section 5.2). This covers: (a) the handling of
research data during and after the project; (b) what data will be generated,
collected and processed; (c) what methodology and standards will be applied;
(d) whether data will be shared/made open access and how; (e) how data will be
curated and preserved.
# 2 WellCo: Data Summary
This section reviews the scope of the project (purpose and objectives) in
order to clarify the relation between it and the data generation, collection
and processing envisaged in the project.
## 2.1 The purpose of WellCo
The aim of the WellCo project is to develop and test in a **user-centric** ,
iterative approach a “ **Well-being and Health Virtual Coach for behaviour
change** ”. WellCo, thereby, seeks to deliver a radical new Information and
Communication Technologies
(ICT)-based solution in the **provision of personalized advice, guidance and
follow-up** of users for the **adoption of healthier behaviour choices** that
help them **to maintain or improve** their **physical, cognitive, mental and
social well-being** for as long as possible. The whole service is also
followed-up and **continuously supported by a multidisciplinary team of
experts** , **as well as users’ close caregivers** that provide their clinical
evidence and knowledge about the user to ensure effectiveness and accuracy of
the change interventions.
## 2.2 Objectives of WellCo
As gathered at proposal stage, the main objectives of the WellCo project and
those that explain the purpose of data collection/generation in the scope of
the project are listed below:
* **_Objective 1 (O1)_ . Develop novel ICT based concepts and approaches for useful and effective personalised recommendations and follow up in terms of preserving physical, cognitive, mental and social well-being for as long as possible. **
WellCo provides an innovative technology framework, based on **last mile AI
technologies** , that establishes a solid ground for a highly personalised
environment where WellCo will be incorporated in a seamless way in the user’s
daily activities by means of **dynamic profiles** that take into consideration
all the **context around the user** (from **user reported outcomes** , to
**profile information** , **Life Plan** or **data derived from the monitoring
of the user** ). This personalization will allow the platform to provide
adapted goals and recommendations to users with the aim of leading to a
behavioural change on a healthy lifestyle. This change process will be
followed-up and continuously supported by a multidisciplinary team of
professionals and users’ relatives or informal caregivers as main supporters.
* **_Objective 2 (O2)_ . Validate non-obtrusive technologies for physical, cognitive, social and mental wellbeing. **
WellCo aims to fuse **data that can come from multiple sources: static data** such
as Profile, life goals (defined along e.g., Life Plan method), etc. and
**dynamic** data **derived from the monitoring of the user:** data from
**wearable bracelets** , **smartphone sensor data** and the implementation of
**deep learning techniques to extract sentiment features of the user** based
on his/her speech and body gestures. The aim is to infer not only the
individual behaviour but also the social, cognitive and environmental context
surrounding him/her in order to provide highly adapted and personalised
guidelines and recommendations that could be adapted to individuals' daily
routine.
WellCo as a non-obtrusive solution will result in a higher amount of data and
quality since users will be more likely to engage longer with our solution.
The “observer effect” will be minimized resulting in data quality that will
closely match the natural behaviour of the subjects.
* **_Objective 3 (O3)_ . Evidence of user-centred design and innovation, new intuitive ways of human-computer interaction and user acceptance. **
**WellCo key activities to optimize engagement and adoption are focused on the
personalisation and affective awareness** ; so the **solution is strictly
aligned with the user Life Plan** . WellCo addresses behavioural aspects
including hesitation, engagement and discouragement in the adaptation of the
interactive interface. Furthermore, WellCo **includes user’s emotional state
into the adaptation** of the interactive interface, which is essential in
considering the user needs for engagement, thereby furthering adaptive user
interface knowledge. User centred design is specifically addressed in T3.3 of
the project with the **personalization of the interactive user interfaces** to
the needs and preferences of individual users **based on context-of-use**
using user profiles, context models and heuristics context aware models, e.g.
rules or decision trees. In order to provide an intuitive user interaction
with the application, WellCo provides **speech interaction by means of an
affective aware virtual coach** that is always active in the device (that
could be de-activated on the settings) and **Natural Language technologies** ,
so WellCo will be able to understand user’s daily-life conversation in
different languages and guide the user through advice and recommendations (de-
activation is always possible, and instead normal interaction through touch
screen could be used).
Regarding user acceptance, to ensure the usability and personalisation of the
platform, **WellCo design will be developed jointly with technical, business
and end user partners through all the project life** (starting from the needs
identification prior to the proposal phase). In task T2.4, WellCo **co-design
will be carried out, and mock-ups** are expected **to be shared and designed
together with the set of users, who are also involved in the requirements
phase.**
* **_Objective 4 (O4)_ . Cost-effective analysis to maximize the quality and length of life ** in terms of activity and independence for people in need of guidance and care due to age related conditions because of self-care, lifestyle and care management.
Evidence suggests that **self-management** , especially for people with long-
term conditions **, can be effective through behavioural change and self-
efficacy** (for example for diabetes patients) and may reduce drug and
treatment costs and hospital utilisation, which is translated on savings for
the National Health Systems. **WellCo** will aim to support this evidence by
**sharing project results and ensuring open access to all peer-reviewed
scientific publications** as well as **research data supporting them** , as
long as it respects the **balance among openness and protection of scientific
information, commercialization and IPR, privacy concerns, security and data
management and preservation questions** .
## 2.3 Types of data generated, collected and processed in WellCo
As described in the previous sections, different types of data coming from
multiple data sources will be available in WellCo. Mainly, these data will
consist of:

* **Static Data (O1 & O3)**, needed to perform a static modelling of the user:
  * **User profile information**
  * **Life Plan information** – different areas of a user's life, such as health, work, community involvement, relationships with friends and families, etc.
* **Dynamic Data (O2 & O3)**, needed to dynamically model the user and adapt the recommendations to the social, cognitive and environmental context surrounding him/her:
  * **Wearable Bracelet**
    * **TicWatch S** 1 – heart rate, steps, distance, calories, sleep quality, GPS and accelerometer.
    * **Nokia Steel HR** 2 – heart rate, steps, distance, calories, sleep quality.
  * **Personal Smartphone/Tablet Device**
    * Record of visible WiFi access points;
    * Localisation via GPS;
    * Count of SMS and phone calls sent/received/missed (no actual content of SMS or phone calls will be stored);
    * Patterns of use of specific app categories (e.g. social media, browsing, email, photography, etc.); WellCo will never track individual apps, to ensure preservation of privacy;
    * Screen on/off events, which could provide interesting input for assessing mental state (e.g. anxiety, stress, sleep quality);
    * Ambient sound recording (features extracted in real time, no storage) – Affective Computing;
    * Video recording (features extracted in real time, no storage) – Affective Computing.
  * **Patient Reported Outcomes**
    * Self-reported nutrition, physical activity, sleep, stress, etc.
  * **Expert and Informal Caregiver Reported Outcomes**
These **data will be originated by the target users involved in each of the
trial sites defined in WellCo**, in Denmark (DK), Italy (IT) and Spain (ES),
on the part of SDU (DK), FBK (IT) and GSS (ES). For more information about
the sample size and enrolment procedures of these users, please see D2.1 User
Involvement Plan.
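To make the dynamic-data stream concrete, one possible representation of a single wearable-bracelet sample is sketched below. The field names and types are assumptions for illustration only; the actual WellCo schema is defined during the WP3 architecture work.

```python
from dataclasses import dataclass, asdict

@dataclass
class WearableSample:
    """One illustrative monitoring sample from a wearable bracelet.

    Field names are assumptions, not the WellCo schema. The
    participant is referenced only by a pseudonym, in line with the
    project's pseudo-anonymization approach.
    """
    pseudonym: str        # pseudo-anonymized participant ID
    timestamp: str        # ISO 8601, e.g. "2018-05-30T10:15:00Z"
    heart_rate_bpm: int
    steps: int
    distance_m: float
    sleep_quality: float  # 0.0 (poor) .. 1.0 (good)

# Example sample as it might be stored at a trial site.
sample = WearableSample(
    pseudonym="a1b2c3",
    timestamp="2018-05-30T10:15:00Z",
    heart_rate_bpm=72,
    steps=4200,
    distance_m=3100.0,
    sleep_quality=0.8,
)
print(asdict(sample))
```

Keeping samples in a typed record like this makes later consistency checks (Section 3.3.2) straightforward.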
The data originated in trial sites will be **collected, processed and stored
according to three phases** that define the core of WellCo: co-design,
implementation and validation. These phases extend the initial phases
described in D2.2 Ethics, Gender and Data Protection Compliance Protocol,
which only covered the collecting, processing and storing of data by the
beneficiaries in charge of trial sites, i.e. FBK, GSS and SDU. A new phase
has been added to cover the management of data by technical partners, who
handle data as part of the work they perform for the implementation of the
algorithms and technologies that ensure the provision of effective
personalized recommendations.
<table>
<tr>
<th>**#**</th>
<th>**Phase**</th>
<th>**Description**</th>
<th>**Partners involved**</th>
</tr>
<tr>
<td>**1**</td>
<td>Co-Design</td>
<td>The _first phase_ consists of requirements gathering and concept development of WellCo. The data from participants will be captured, stored and processed by the personnel involved in trial sites according to the ethics measures defined in D2.2.</td>
<td>FBK, GSS, SDU</td>
</tr>
<tr>
<td>**2**</td>
<td>Implementation</td>
<td>The _second phase_ will consist of the collection and processing of the data derived from the profile and monitoring of each of the users involved in trial sites in Spain, Italy and Denmark. This processing will allow the development of the modules described in WP4 and WP5 (see Figure 1).</td>
<td>FBK, JSI, UCPH, MONSENSO, HIB, CON</td>
</tr>
<tr>
<td>**3**</td>
<td>Validation</td>
<td>The _third phase_ will be the validation of each of the three prototypes envisaged in WellCo in the different trial sites defined in the project. The data from this validation will be captured, stored and processed by the personnel involved in these trials and provided to the technical partners with the aim of improving the next prototype.</td>
<td>FBK, GSS, SDU</td>
</tr> </table>
_**Table 1.- Data Collecting, Processing and Storage phases.** _
In order to clarify phase 2 of the table above, and to show the relation
between the processing of the data and the achievement of the different goals
expected in WellCo, the initial architecture design is included in the figure
below, offering a clearer view of how the data available in WellCo will feed
each of the modules composing the architecture.
_**Figure 1.- WellCo Platform conceptual architecture and main components.** _
As already mentioned, this is a living document so it is expected that the
figure above changes along the project lifetime since the first version of
this document has been delivered in M6, i.e. when WP3 WellCo Prototyping and
Architecture has just started.
# 3 WellCo: Ethical Issues
As part of the engagement on ethics, the WellCo consortium has been committed
to ensure that ethical principles and legislation are applied in the scope of
the activities performed in the project from the beginning to the end. For
this reason, the consortium has identified relevant ethical concerns already
during the preparation of the project proposal and, then, during the
preparation of the Grant Agreement. During this phase, ethics issues have been
already covered as part of D2.2 Ethics, Gender and Data Protection Compliance
Protocol and later, in D7.1 POPD Requirement No.2 and D7.2 H – Requirement
No.3.
In the context of this deliverable, the ethical issue with the greatest
potential impact on data handling and sharing during and after the project is
privacy and data protection. This is especially relevant because of the entry
into force this month of the new General Data Protection Regulation, which
establishes a common framework for data protection in Europe.
The following section aims to describe how the founding principles of the new
European Regulation on Privacy, the General Data Protection Regulation,
(GDPR), will be followed in the WellCo consortium. Then, these principles will
be used in the coming section to set out specific guidelines for accurate and
compliant use of personal data within the boundaries of the GDPR. It is
important to mention that this deliverable is a living document and as far as
GDPR-related developments are clearer, further details will be included in it.
Additionally, it is important to note that some of the details of the data
management implementation are also mentioned within deliverable D2.2 “Ethics,
Gender and Data Protection Compliance Protocol”.
## 3.1 Alignment with the GDPR
This deliverable describes how the data will be handled during and after the
project. As of May 2018 the GDPR applies, which means all partners within the
consortium have to follow the same new rules and principles. On the one hand,
this makes it easier for the project management to set up guidelines for the
accurate and compliant use of personal data. On the other hand, it means that
in some cases tools and partner-specific guidelines are not yet available; as
GDPR-related developments become clearer, further details will be included in
this living document.

In this chapter we describe how the founding principles of the GDPR will be
followed in the WellCo consortium. Then we set out specific guidelines for
accurate and compliant use of personal data within the boundaries of the
GDPR.
## 3.2 Lawfulness, fairness and transparency
**_Personal data shall be processed lawfully, fairly and in a transparent
manner in relation to the data subject._ **
All data gathering from individuals will require informed consent of the test
subjects, participants, or other individuals who are engaged in the project.
Informed consent requests will consist of an information letter and a consent
form (generic template in an appendix of D2.2 “Ethics, Gender and Data
Protection Compliance Protocol”). This will state the specific causes for the
experiment (or other activity), how the data will be handled, safely stored,
and if/how shared. The request will also inform individuals of their rights to
have data updated or removed, and the project’s policies on how these rights
are managed.
Throughout the project we will anonymize the personal data as far as
possible; however, we foresee this will not be possible in all instances.
Some data will be pseudo-anonymized: the identity of the participants will
not be known to researchers, but based on the collected data content one
could trace back and discover this identity.
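One common way to realise such pseudo-anonymization is keyed hashing, where a secret key held only by the data controller is what links pseudonyms back to identities. The sketch below is illustrative only; the key handling and function names are assumptions, not the WellCo implementation.

```python
import hmac
import hashlib

# Secret held only by the data controller. Hard-coded here purely
# for the sketch; in practice it would live in a protected key store.
CONTROLLER_KEY = b"replace-with-a-randomly-generated-secret"

def pseudonymize(participant_id: str, key: bytes = CONTROLLER_KEY) -> str:
    """Return a stable, opaque pseudonym for a participant.

    The same participant always maps to the same pseudonym, so
    longitudinal analysis still works, but without the key the
    mapping cannot be rebuilt, because HMAC-SHA256 is keyed.
    """
    digest = hmac.new(key, participant_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

# Example: the raw ID never leaves the trial site; only the
# pseudonym is shared with technical partners.
print(pseudonymize("patient-es-0042"))
```

Because the mapping is deterministic per key, destroying the key at the end of the retention period effectively anonymizes the dataset.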
A specific consent will be acquired to use the cumulative data for open
research purposes; including presentations at conferences, publications in
journals as well as, once accurately anonymized, depositing a bulk data set in
an open repository at the end of the project. This clause is included in the
informed consent form.
The consortium is going to be as transparent as possible in the collection of
personal data. This means that when collecting the data, the information
leaflet and consent form will describe the kind of information, the manner in
which it will be collected and processed, if, how, and for which purpose it
will be disseminated, and if/how it will be made open access. Furthermore,
the subjects will have the possibility to request what kind of information
has been stored about them, and they can request their data to be removed
from the cumulative results.
## 3.3 Purpose limitation
**_Personal data shall be collected for specified, explicit and legitimate
purposes and not further processed in a manner that is incompatible with those
purposes_ **
The WellCo project will not collect any data that is outside the scope of the
project. Each researcher will only collect data necessary within their
specific work package and task activity (see Section 4.2).
### 3.3.1 Data minimisation
_**Personal data shall be adequate, relevant and limited to what is necessary
in relation to the purposes for which they are processed** _
Only data that is relevant for the project’s research questions and the
required state assessment and coaching activities will be collected. However
since participants are free in their answers, both when using the WellCo
coaching or in answering open ended research questions, this could result in
them sharing personal information that has not been asked for by the project.
This is normal in any coaching relationship and we therefore chose not to
limit the participants in their answer possibilities; rather, we will limit
the scope of the data being processed to the minimum necessary for coaching
to work.
### 3.3.2 Accuracy
_**Personal data shall be accurate and, where necessary, kept up to date.** _
All the collected data will be checked for consistency and will be stored
with metadata indicating the timeframe to which the data apply; for example,
“age” could be stored as “age in 2018” and, once captured, would be updated
automatically. However, since some of the datasets contain self-reported data
from the participants, we cannot check those data for accuracy.
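The “age in 2018” idea can be kept up to date automatically by storing the reference year with the value and deriving the current figure on read. A minimal sketch, with assumed field semantics:

```python
from datetime import date
from typing import Optional

def current_age(recorded_age: int, recorded_year: int,
                today: Optional[date] = None) -> int:
    """Derive an up-to-date age from an age recorded in a known year.

    Storing ("age", reference_year) instead of a bare number keeps
    the datum accurate without re-contacting the participant. This
    is a sketch: the real rule might also account for birthdays
    within the year.
    """
    today = today or date.today()
    return recorded_age + (today.year - recorded_year)

# A participant recorded as 70 years old in 2018 is reported as 73
# when the data are read in 2021.
print(current_age(70, 2018, today=date(2021, 6, 1)))  # -> 73
```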
### 3.3.3 Storage limitation
_**Personal data shall be kept in a form, which permits identification of data
subjects for no longer than is necessary for the purposes for which the
personal data are processed** _
All personal data that is no longer needed for project purposes will be
deleted as soon as possible, and all personal and sensitive data will be
anonymized as soon as possible. At the end of the project, data that has been
accurately anonymized will be stored in an open repository. Data that cannot be
anonymized will be kept as pseudo-anonymized datasets for at most the period
allowed by each partner's institutional archiving rules. For example, a
complete data set will be archived at the UCPH for 10 years, according to its
data policy; each partner has its own data policy.
### 3.3.4 Integrity and confidentiality
_**Personal data shall be processed in a manner that ensures appropriate
security of the personal data, including protection against unauthorised or
unlawful processing and against accidental loss, destruction or damage, using
appropriate technical or organisational measures** _
All personal data will be handled with appropriate security measures applied.
This means:
* During phases 1 and 3 of the project, data sets with personal data will be stored on dedicated servers at the trial sites (DK, IT and ES), complying with all GDPR requirements. Decisions with respect to data storage for the project’s phase 2 (and beyond) will be made accordingly.
* Access to these servers will be managed by the project controller and will be given only to authorized individuals who need to access the data to accomplish their tasks within WellCo. Access can be revoked if necessary.
* In some cases pseudo-anonymized data sets can further be shared through the WellCo Alfresco platform and code repository by HIB, only if the datasets are sufficiently encrypted. The key to the encryption will be handed out by the collaborating parties and will be changed when access needs to be revoked.
* All WellCo collaborators with access to the identifiable, non-anonymized personal data will need to sign a confidentiality agreement, i.e., the “Contract for Data Controller” as defined in D2.2.
* None of the WellCo datasets can be copied outside of the secure servers, unless stored encrypted on a password-protected storage device. In case of theft or loss, these files will be protected by the encryption. These copies must be deleted as soon as possible and cannot be shared with anyone outside the consortium, or within the consortium, without appropriate and compliant authorization.
In exceptional cases where the dataset is too large, or it cannot be
transferred securely, each partner can share their own datasets through
channels that comply with the GDPR.
### 3.3.5 Accountability
_**The controller shall be responsible for, and be able to demonstrate
compliance with the GDPR.** _
No single person is responsible for data management in the project; instead, we
assume a separate Data Controller role at each of the trial sites, controlling
the same data types across the trial sites (D2.2). Furthermore, at project level,
the project management is responsible for the accurate data management within
the project. In the next section, guidelines will be described for each
partner to follow in case of datasets with personal and sensitive data. The
project management will regularly check whether the partners follow these
guidelines. For each data set, a responsible data Controller has to be
appointed at the partner level. This person is held accountable for this
specific data set.
# 4 Guidelines for Data Protection and Security
As part of the above principles, guidelines for data protection and security
have been established in this section, with the aim of ensuring that all
researchers uphold the principles of lawful and ethical data management
throughout the whole project duration and beyond. The guidelines established in
this DMP are embraced within the consortium, and the project management will
ensure these principles are followed.
It is important to highlight that, because the first version of this DMP is
published at M6, when there are still many uncertainties about the data
collected in the project, additional details will be added here as the project
progresses, as well as within D2.2.
## 4.1 Purpose limitation and data minimisation
Researchers will apply the principles of purpose limitation and data
minimisation to the different types of data defined in section 2.3. Each
researcher will take care not to collect any data that is outside the scope of
his/her research and will not collect additional information not directly
related to the goal of his/her research.
## 4.2 Personal information
As soon as the parameters in the data-sets defined in section 2.3 are
identified, the researchers need to indicate whether the data set will contain
personal information.
In cases where the parameters themselves contain no personal information, but
the various parameters can be merged to show a distinct pattern that can be
linked to a specific person, the data set is so-called pseudo-anonymized and
will also be classified as containing personal information.
When the dataset contains personal information or otherwise information that
needs to be kept confidential, the following privacy principles should be
taken into account:
* Sensitive data should be stored at either the dedicated trial site server, or encrypted on Alfresco and/or in a common code repository.
* In the case of personal data collected in physical form (e.g. on paper), it shall be stored in a restricted-access area (e.g. locked drawer) to which only WellCo authorized staff has access. This applies to informed consent collected in paper form or documents generated along the user requirements phase (e.g., results of the brainstorm with the users). Once the data has been digitised, the physical copies shall be securely destroyed.
## 4.3 Anonymisation and pseudo-anonymisation
The data controller will make sure the personal data is anonymized as quickly
as possible after its collection. When the data cannot be anonymized
completely, it will be pseudo-anonymized as much as possible, with the personal
identifier stored separately. The authorized personnel, the data controllers,
will keep the key linking the pseudo-anonymized files to the list of
participants. The key will be stored in a separate physical location from the
original files. We keep in mind that the research subjects should be able to
withdraw their data completely from WellCo at any point in time, hence the key
must be stored securely yet remain accessible.
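The separation described above (pseudo-anonymized records in one place, the key table in another, with withdrawal handled via the key) can be sketched as follows. This is illustrative only; the field names `name` and `answers` and the helper names are hypothetical:

```python
import secrets

def pseudonymize(records):
    """Replace direct identifiers with random pseudonyms.

    Returns the pseudo-anonymized records and the key table linking
    pseudonyms back to participants. The key table must be stored in a
    separate location from the records themselves.
    """
    key_table = {}        # pseudonym -> participant name (store separately)
    pseudo_records = []
    for rec in records:
        pseudonym = secrets.token_hex(8)  # random, unique label
        key_table[pseudonym] = rec["name"]
        pseudo_records.append({"id": pseudonym, "answers": rec["answers"]})
    return pseudo_records, key_table

def withdraw(pseudo_records, key_table, name):
    """Honour a withdrawal request: drop the participant's records and
    their entry in the key table."""
    to_drop = {p for p, n in key_table.items() if n == name}
    for p in to_drop:
        del key_table[p]
    return [r for r in pseudo_records if r["id"] not in to_drop]
```

Without the key table, the records alone cannot be linked back to a participant; with it, a withdrawal request can still be honoured.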
Part of the WellCo platform relies on client-server technology. Both the
client and the server should incorporate the privacy rules as set out in the
GDPR as of May 2018. At the moment (M6) this is undecided: we are looking into
the possibility of hosting a server at each trial site (DK, IT, ES), as well as
ensuring that each technical partner (MONSENSO, UCPH, FBK, HIB, CON, JSI) has
its own GDPR-compliant server. As for the client-side technology, we are
looking into possibilities for pseudo-anonymizing the client side; e.g., the
tablet or smartphone on which the app runs may serve as a random, yet unique,
identifier for the project. However, the implications of the privacy-by-design
provisions in the GDPR cannot be settled up front and will be added to this
document as research and development proceed in WP3 and WP4.
## 4.4 Informed consent
When collecting personal information, researchers are required to get informed
consent from the study participants. In D2.2 we provided a standardized EU
informed consent template, which can always be supplemented with additional
consent requests, depending on the project stage, time involvement, as well as
risks and benefits of the involvement.
Consent should cover all processing activities carried out for the same
purpose or purposes. When the processing has multiple purposes, consent should
be given for all of them.
## 4.5 End users’ rights
The user can submit a request to see which information about him/her is being
kept on our files through the contact person on the consent form. He/she can
request to delete his information up to 48 hours after the experiment has
taken place. Furthermore he/she can request that no additional data collection
will take place starting immediately from the time of request.
## 4.6 Storage and researchers’ access to data
Personal and sensitive user data will be stored safely and in a secure
environment; potentially at each trial site. Backups are an important aspect
of server management and shall also be GDPR compliant. For example, backups of
the secure servers at UCPH are made every 24 hours by the system itself. A
common security protocol will be established once the project reaches the
maturity level, for all the partners storing personal data (defining
authentication, authorization and encryption; protection against unauthorized
access, internal threats and human errors, etc.).
Access to this secure environment can be granted or revoked by either the
researchers responsible for the data, or the project management on a case to
case basis and will not be given out by default to all researchers
contributing to WellCo activities. All users that are granted access to the
datasets will need to sign a Data Protection Contract (see Appendixes of
D2.2). Access can be restricted or revoked, when researchers are not complying
with the guidelines or when their contract is terminated.
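A grant/revoke scheme of this kind can be sketched in miniature as follows. This is illustrative only; WellCo's actual access control lives in the secure server environment, and the class and method names here are our own:

```python
class AccessRegistry:
    """Toy registry for granting and revoking dataset access."""

    def __init__(self):
        self._grants = {}  # dataset name -> set of researcher ids

    def grant(self, dataset, researcher, signed_contract=False):
        # Access requires a signed Data Protection Contract first.
        if not signed_contract:
            raise PermissionError("Data Protection Contract not signed")
        self._grants.setdefault(dataset, set()).add(researcher)

    def revoke(self, dataset, researcher):
        # E.g. on non-compliance or contract termination.
        self._grants.get(dataset, set()).discard(researcher)

    def has_access(self, dataset, researcher):
        return researcher in self._grants.get(dataset, set())
```

The design point is that access is never granted by default: every grant is an explicit, revocable decision tied to a signed contract.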
## 4.7 Encryption
When researchers want to share personal data files through Alfresco and/or the
code repository, the data files will need to be encrypted. Each researcher is
free to use their own preferred encryption tools, to make the process as easy
as possible for the participating parties, while remaining as secure as needed.
Possibilities for encryption include the protection built into Word and Excel,
or PGP keys (a more advanced option).
If a researcher keeps data files with personally identifiable data on their
own personal computer or on a separate hard drive for data analysis purposes,
he/she has to use BitLocker or FileVault for the encryption of the hard drive.
## 4.8 Open data and FAIR principles
Within the WellCo project, we endorse the European Commission’s motto: “ _to
make the data as open as possible, but as closed as necessary”_ . We are
committed to protect the privacy of the participants involved, and the
confidentiality of specific results or agreements. In these cases the data
will not be made available for public use.
In all other cases, we will do our best to make the research data as broadly
available as possible. This means the FAIR principles (keeping the research
data findable, accessible, interoperable and reusable) will be upheld, but at
the moment it is not possible for us to give definitive answers on how. We
intend to discuss these in more detail, also in this document, once more
information on the data sets comes to light. So far, we discuss the FAIR
principles along each WP's activities and tasks (cf. Section 3).
## 4.9 Privacy statements
WellCo will actively communicate the privacy and security measures it takes
through all media channels (from consent forms to websites) with a privacy
statement. We will adjust the statement to fit the target group, purpose, and
level of privacy.
## 4.10 Update of the DMP
The DMP deliverable is a living document. The fact that at the moment there
are still many uncertainties about the data does not release us of the
obligation to ethically and lawfully collect, process, and store this data.
All researchers have the responsibility to keep the DMP up to date, so the DMP
will reflect the latest developments in data collection.
# 5 WellCo Data Management Plan Details
Within this section the work package leaders describe, as far as possible, the
different data sets that will be used within their WPs. For the description of
the work packages, the standard European Commission template for a data
management plan has been used. However, many questions concerning the FAIR
principles cannot be answered at this moment. Therefore we have specified
provisional guidelines concerning these principles below. Unless otherwise
specified in a work package description, these provisional guidelines will for
now apply to the data set. Deviations from these intentions will be mentioned
in the description of the relevant work package.
It is important to note that, as far as is possible from a privacy point of
view, it is our intention to make all the below-mentioned written data openly
available, on a voluntary basis, in order to validate the data presented in
scientific publications. Only those parts of the data that pertain to practices
and technologies covered by secrecy clauses in the consortium agreement, or in
exploitation agreements reached within the consortium or between the consortium
and external parties, will be excluded.
## 5.1 Provisional FAIR Guidelines for WellCo Data Sets
### 5.1.1 Findable
Digital Object Identifier ( **DOI** ) is a unique alphanumeric string assigned
by a registration agency (the International **DOI** Foundation) to identify
content and provide a persistent link to its location on the Internet. The
publisher assigns a **DOI** when an article is published and made available
electronically.
As already specified within the GA Article 29.2, with respect to the open
access for the peer reviewed publications, the bibliographic metadata must be
in a standard format and must include all of the following: the terms
“European Union (EU)” and “Horizon 2020”; the name of the action, acronym and
grant number; the publication date and the length of the embargo period, if
applicable; and a persistent identifier, e.g., a DOI.
Each dataset within the WellCo project will get a unique Digital Object
Identifier (DOI). If/when the data set is stored in a trusted repository, the
name might be adapted in order to make it more findable. To construct the
name, we may follow the pattern
_UserModel.WellCo-data-_
_set.datasetID.version.WellCo_controller_ , where _UserModel_ is the logical
name associated with the user state assessment component (e.g.,
physical or mental health), _WellCo-data-set_ is the data set name, and
_datasetID_ and _version_ are assigned by the _WellCo_controller_ (i.e., a
specific project partner).
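Assembling a name along this pattern is straightforward; the helper function and the example values below are illustrative only:

```python
def dataset_name(user_model, dataset_id, version, controller):
    """Assemble a dataset name following the pattern
    UserModel.WellCo-data-set.datasetID.version.WellCo_controller.

    All argument values are illustrative; datasetID and version are
    assigned by the WellCo controller (a specific project partner).
    """
    return f"{user_model}.WellCo-data-set.{dataset_id}.{version}.{controller}"
```

For instance, `dataset_name("mental", "017", "v2", "UCPH")` yields `mental.WellCo-data-set.017.v2.UCPH`.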
Keywords will be added in line with the content of the publications and
datasets and with terminology used in the specific scientific fields, to make
these easily findable for different researchers.
### 5.1.2 Accessible
As described before, our intention is to open up as many WellCo data as
possible. However, if we cannot guarantee the privacy of the participants by
accurate anonymization of the data or the IPR of the owner beneficiary are
under risk, the data set might be opened up under a very restricted license or
it will remain completely closed. This document will be updated along the
project development with which data will be made accessible and which not as
well as the reasons for opting out.
For those project results to be made openly available, WellCo will adhere to
the pilot for open access to research data (ORD pilot) adopting an open access
policy of all projects results, guidelines and reports, providing on-line
access to scientific information that is free of charge to the reader. Open
access will be provided in two categories: **scientific publications** (e.g.
peer-reviewed scientific research articles, primarily published in academic
journals) and **research data** (Subsections below).
#### Open access to scientific publications
According to the European Commission, “under Horizon 2020, each beneficiary
must ensure open access to all peer-reviewed scientific publications relating
to its results” (see also Article 29.2 of the GA). The WellCo Consortium
adheres to the EU open access to publications policy, choosing as the most
appropriate route towards open access **self-archiving** (also known as
“**Green Open Access**”), namely: “a published article or the final peer-
reviewed manuscript is archived (deposited) in an online repository before,
alongside or after its publication. Repository software usually allows authors
to delay access to the article (an ‘embargo period’)”. The Consortium will
ensure open access to each publication within a maximum of six months.
The dissemination of WellCo results will occur by means of activities
identified in the initial plan for exploitation and dissemination of results
(PEDR), such as the creation of the project web page, public workshops,
press releases, participation in international events, etc. In compliance with
the Grant Agreement, **free-online access will be privileged for scientific
publication** , following the above-mentioned rules of “green” open access.
All relevant information and the platform textual material (papers, leaflets,
public deliverables, etc.) will be **also freely available on the project
website.** In order to guarantee security, this textual material will be
available in **protected PDF** files. In specific cases and according to the
rules of open access, the dissemination of research results will be managed by
**adopting precautionary IPR protection protocols,** not to obstacle the
possibility of protecting the achieved foreground with preventive disclosures.
#### Open access to research data (Open Research Data Pilot)
According to the European Commission, “research data is information
(particularly facts or numbers) collected to be examined and considered, and
to serve as basis for reasoning, discussion, calculation”. Open access to
research data is **the right to access and reuse digital research data** under
the terms and conditions set out in the Grant Agreement.
Regarding the digital research data generated in the action, according to the
Article 29.3 of the GA, the WellCo Consortium will:
_**Deposit in a research data repository** and take measures to make it
possible for third parties to access, mine, exploit, reproduce and disseminate
– free of charge for any user – the following: _
1. _The data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;_
2. _Other data, including associated metadata, as specified and within the deadlines laid down in this data management plan;_
3. _Provide information – via the repository- about tools and instruments at the disposal of the beneficiaries and necessary for validating the results._
The WellCo Consortium will make every effort, **whenever possible**, to make
this research data available **as open data or through open services**. It is
important to note that, because of the early stage of this document and some
existing uncertainties about the data collected in the project, additional
details will be inserted here as the project progresses.
### 5.1.3 Interoperable
We are considering generating project-specific ontologies in order to
normalize data from different sources and make them interoperable. Additionally,
we are considering suitable metadata standards, for example DataCite. Depending
on the scientific field from which a data set originates, additional metadata
standards might be used.
### 5.1.4 Reusable
When possible, the data set will be licensed under an Open Access license.
However, this will depend on the level of privacy, and the Intellectual
Property Rights (IPR) involved in the data set or the scientific publication.
An embargo period will only be applied if a data set contains specific IPR or
other exploitable results that justify it. The length of the embargo will be
negotiated on an individual basis.
Our intention is to make as much data as possible re-usable for third
parties. Restrictions will only apply when privacy, IPR, or other exploitation
grounds are in play.
All data sets will be cleared of bad records, with clear naming conventions
and appropriate metadata conventions applied (see section 5.1.1).
The length of time the data sets will be stored depends on their content. For
example, if a data set contains practices that we foresee will soon be
replaced, that set will not be stored indefinitely. Furthermore, data sets
collected with specific technological solutions might become outdated, which
will also limit their period of reusability.
## 5.2 DMP within WellCo Work Packages
The following section represents a work in progress where a FAIR approach,
allocation of resources, data security, ethical aspects and other issues will
be detailed along each work package tasks and activities.
### WP2: Co-design (GSS, M1-M36)
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
**1 Data Summary**
</td>
<td>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</td> </tr>
<tr>
<td>
**Answers**
In T2.2 and T2.3: datasets on user requirements to serve a proper development
of WellCo (technical and functional requirements), and additionally to
elaborate on WellCo personas (thus, lifestyle and lifestyle patterns, main
concerns in well-being, and personal goals) and to describe and validate
scenarios, wireframes and user journeys.
In T2.4.: datasets on validation and feedback of users regarding a clickable
mock-up, prototype 1, prototype 2 and prototype 3 (final version of WellCo)
and feedback of users in order to measure the success of the project.
Specific datasets are:
* Notes and minutes of brainstorming, workshops, focus groups (.DOCX)
* Recordings and notes from interviews with stakeholders (.DOCX)
* Cultural probes: data from the users’ filled diaries, WhatsApp messages sent, and personal interviews about users’ feelings during the cultural probes process.
* Reports after individual interviews on a questionnaire for technical and functional requirements.
* Reports after individual interviews to offer feedback on wireframes and user journeys
* Reports for personal feedback on the clickable mock-up and prototypes 1, 2 and 3
* Reports on monitoring through wearable devices
* Reports for personal feedback on success of the project
* Ex-ante and Ex-post evaluations referred to the participants in the test trials
Files are pseudo-anonymized and stored, for example, in .DOCX, .PDF and .XLSX
formats. Size: ±100 MB so far.
</td> </tr>
<tr>
<td>
**2.1 FAIR: Findable**
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
**Answers**
The metadata associated with each dataset:
* Organization name, contact person
* Type of activity where data was collected, date
</td> </tr> </table>
<table>
<tr>
<th>
Further metadata might be added at the end of the project in line with
metadata conventions.
No deviations from the intended FAIR principles are foreseen at this point.
</th> </tr>
<tr>
<td>
**2.2 FAIR: Accessible**
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata,
documentation and code are deposited
* Specify how access will be provided in case there are any
restrictions
</td> </tr>
<tr>
<td>
**Answers**
No data is going to be publicly available at this point.
No deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
**2.3 FAIR: Interoperable**
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
**Answers**
Data is stored in interoperable format (DOCX) that can be opened by anyone
authorized to do so.
No deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
**2.4 FAIR: Reusable**
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain reusable
</td> </tr>
<tr>
<td>
**Answers**
N/A at this stage
</td> </tr>
<tr>
<td>
**3\. Allocation of resources**
</td>
<td>
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
**Answers**
The work to be done in making the data FAIR will be covered by the assigned
budget for producing the deliverables.
</td> </tr>
<tr>
<td>
**4\. Data Security**
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
**Answers**
The original data is stored in a dedicated trial site server. Namely,
handwritten notes will be stored under lock in the offices of the trial site
owner (FBK, GSS and SDU) in a physical storage space separate from the
participant lists of workshops and interviewees. The pseudo-anonymized data
(interview summaries, co-design reports) are shared on Alfresco (managed by
HIB).
Audio recordings and handwritten notes (e.g. Post-its) will be destroyed once
they have been added to the machine-written notes from the workshops or
interviews. In cases where audio recordings or handwritten notes are never
added to the machine-written notes, they will be destroyed in any case no
later than the end of the WellCo project.
Machine-written notes (i.e. data files in .DOCX and .XLSX format) will be
stored in the Alfresco space provided by HIB. Access is granted in line with
the project’s procedures.
All the data will be destroyed once the research, and thus the project, has
ended. Once the data is destroyed, the data processor must certify its
destruction in writing and deliver the certificate to the data controller.
Additionally, GSS has to follow the procedure described in the document
_Report On The Security Measures To Be Adopted For The File_ " _UNIQUE RECORD
OF USERS OF THE SOCIAL RESPONSIBILITY SYSTEM_ ". This document states, for instance:
“Personal data will be cancelled when they are no longer necessary for the
purpose for which they were collected or registered. However, they may be kept
for as long as any type of responsibility can be demanded, but in any case it
must be determined”.
</td> </tr>
<tr>
<td>
**5\. Ethical Aspects**
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td> </tr>
<tr>
<td>
**Answers**
Ethical consent has been acquired from the participants involved so far.
Ethical approval for the studies is under evaluation (at UCPH to date). GSS
does not require ethical approval.
Additionally, HIB, as leader of WP7, will guarantee the compliance of the
Ethical deliverables from within WP2.
</td> </tr>
<tr>
<td>
**6\. Other**
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
**Answers**
No other procedures need to be put in place for project management data.
</td> </tr> </table>
### WP3: Prototyping And Architecture (HIB, M6-M30)
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
**1 Data Summary**
</td>
<td>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</td> </tr>
<tr>
<td>
**Answers**
This WP will collect, pre-process and store all the research data derived from
the monitoring of the user in a centralized server, where it will be anonymized
as far as possible. Collecting this data allows an initial pre-processing whose
aim is to normalize the data so that it is interoperable with the rest of the
modules in WP4 and WP5, where complete processing will be performed. This WP
will also:
* Generate and store deliverables D3.1, D3.2, D3.3, D3.4 and D3.5 in the common repository in Alfresco. D3.2 is a public document so it will be also available in the project webpage;
* Code for prototypes as well as system logs will be shared in the common code repository;
* Intermediate documents for requirements and architecture design will be shared through the common repository in Alfresco.
The previous collection/generation of research data will allow the re-use of
this normalized data to, on the one hand, help develop WellCo as a novel
ICT-based platform for useful and effective personalised recommendations and
follow-up in terms of preserving or improving wellbeing (O1, as in Section
2.2) and, on the other hand, contribute to the validation of non-obtrusive
technologies for physical, cognitive, social and mental wellbeing (O3).
Deliverables will be in .DOCX and .PDF. The format for the research data has
not been decided yet.
Data in this module will be re-used by modules in WP4 and WP5. Also, after
being normalized and anonymized, and as soon as it does not affect data
protection or IPR, this data will be made open in ORD.
This research data will originate from the smartphones/tablets and wearable
devices worn by the users participating in trials in Spain, Denmark and Italy.
Deliverables and code will be originated by the beneficiaries participating in
this WP, i.e. HIB, FBK, UCPH, JSI, CON, MONSENSO.
The expected size of the data is not known yet.
As already explained, this data, once normalized, will serve as input for the
modules implemented in WP4 and WP5.
</td> </tr>
<tr>
<td>
**2.1 FAIR: Findable**
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
**Answers**
All WellCo datasets will use a standard format for metadata, as described in
section
</td> </tr> </table>
<table>
<tr>
<th>
5.1.1. Further metadata might be added at the end of the project in line with
metadata conventions. Each dataset within the WellCo project will get a
unique Digital Object Identifier (DOI). If/when a dataset is stored in a
trusted repository, its name might be adapted to make it more findable.
The identifiability of data is already explained above.
The naming conventions for deliverables are described in the project
handbook. The naming conventions for datasets are explained in section
5.1.1.
Keywords will be added in line with the content of the datasets and with
terminology used in the specific scientific fields to make the datasets
findable for different researchers.
The version will be included as part of the naming conventions.
</th> </tr>
<tr>
<td>
**2.2 FAIR: Accessible**
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata,
documentation and code are deposited
* Specify how access will be provided in case there are any
restrictions
</td> </tr>
<tr>
<td>
**Answers**
Due to the initial stage of the project, there is still some uncertainty
about the specific data to be handled. Deliverables will be shared among
consortium partners in the Alfresco repository, and the public ones will be
available on the project webpage. Regarding datasets, as stated in the GA and
throughout this document, they will be made open as long as they do not
represent a risk for IPR or data protection.
For those project results to be made openly available, WellCo will adhere to
the pilot for open access to research data (ORD pilot).
The methods or software tools needed to access the data are not known yet.
The consortium will decide and specify where the data and associated
metadata, documentation and code will be deposited, and how access will be
provided in case there are any restrictions.
</td> </tr>
<tr>
<td>
**2.3 FAIR: Interoperable**
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
**Answers**
Deliverables will be provided in .PDF format to ensure that the format is
always preserved. Regarding datasets, an ontology will be designed as part of
this WP with the aim of performing an initial pre-processing that enables the
normalization of this research data.
The use of standard vocabulary for all data types present in our dataset, to
allow inter-disciplinary interoperability, is addressed above.
</td> </tr>
<tr>
<td>
**2.4 FAIR: Reusable**
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain reusable
</th> </tr>
<tr>
<td>
**Answers**
Whenever possible, the datasets will be licensed under an Open Access license.
The datasets will be made available for re-use twelve months after the
project end, or on a partner-by-partner basis, as agreed with all the project
partners. In the case of WellCo deliverables for this WP, they will be stored
in Alfresco and published on the project web page as soon as they are
delivered in the EC Portal (without any embargo period).
As explained in section 4, the intention is to make as much data as possible
re-usable for third parties. Restrictions will only apply when privacy, IPR,
or other exploitation grounds are at stake.
All datasets will be cleared of bad records, with clear naming conventions
and appropriate metadata conventions applied. HIB, as the partner responsible
for this WP, will perform quality control of the datasets processed in this
work package through editing and moderation, cleaning, pre-processing, adding
metadata, transforming to a more convenient format, or providing easier
access.
The datasets will remain available for reuse until the quality assurance
tasks performed on each dataset determine that it is outdated.
</td> </tr>
<tr>
<td>
**3\. Allocation of resources**
</td>
<td>
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
**Answers**
The work to be done in making the data FAIR will be covered by the assigned
budget for producing the deliverables.
</td> </tr>
<tr>
<td>
**4\. Data Security**
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
**Answers**
HTTPS will be used as the application protocol for WellCo. HTTPS is an
extension of HTTP for secure communication over a computer network; the
traffic is encrypted with Transport Layer Security (TLS) or its predecessor,
Secure Sockets Layer (SSL). Moreover, the system also includes:
* Authorization and authentication processes;
* Periodic backups of the databases and the code;
* Firewall inspection through white lists;
* Intrusion detection and prevention mechanisms.
</td> </tr>
<tr>
<td>
**5\. Ethical Aspects**
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td> </tr>
<tr>
<td>
**Answers**
The guidelines for data protection and security defined in Section 3 will be
followed for the data available in this WP. Some of the aspects that will be
covered are: data minimization; protection of personal information through
anonymisation and pseudonymisation; and the user's rights to give consent and
to request access to, rectification, removal and portability of his/her data.
</td> </tr>
<tr>
<td>
**6\. Other**
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr> </table>
**Answers**
No other procedures need to be put in place for project management data.
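The pseudonymisation mentioned under Ethical Aspects above is not specified further at this stage. As a purely illustrative sketch (not the project's actual implementation), one common approach is to replace direct identifiers with a keyed hash, so records can still be linked across datasets without the raw identity ever being stored; the key name below is a placeholder:

```python
# Illustrative pseudonymisation sketch using the Python standard library.
# SECRET_KEY is a hypothetical placeholder; in practice key custody would
# sit with the data controller, not in source code.
import hmac
import hashlib

SECRET_KEY = b"held-by-data-controller-only"

def pseudonymise(user_id: str) -> str:
    """Return a stable pseudonym for a user identifier (keyed SHA-256 hash)."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

# The same input always maps to the same pseudonym, so longitudinal
# analysis remains possible while the raw identifier is never stored.
record = {"user": pseudonymise("maria.lopez@example.org"), "steps": 8421}
assert pseudonymise("maria.lopez@example.org") == record["user"]
```

With this scheme, mapping a pseudonym back to an identity is only possible for whoever holds the key, which is why key management would rest with the data controller.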
### WP4: Physical, Cognitive And Mental User Assessment (UCPH, M1-M21)
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
**1 Data Summary**
</td>
<td>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</td> </tr>
<tr>
<td>
**Answers**
To design, implement and evaluate the WellCo user assessment services, we
will collect the following data:
* User state assessment model specification (.DOCX, .XLSX, .PDF) and implementation
* Self-assessed variables or data collected via a wearables dataset (Heart rate, Steps, Distance, Calories, Sleep quality, Accelerometer, Gyroscope and Magnetometer); estimated size: 10 MB/day
* Potentially smartphone datasets (WiFi usage patterns, application usage, GPS, ON-OFF status and ambient sound measurements); estimated size: 10 MB/day
* Behavioural features derived from the above sources, such as "step counts" and "hours of sleep"; estimated size: a few kB to 1 MB/day
* API designs for wearables and smartphones dataset (.DOCX, .PPT)
* Data will be transmitted over HTTPS in the form of data objects (e.g., JSON) to a secure server, where it is persisted in a relational database management system (e.g., MySQL).
* System logs (performance, debugging, benchmarking of service quality)
* Stored on device: SQLite format
* Source code (Java, Python, PHP, .APK, etc.)
</td> </tr>
<tr>
<td>
**2.1 FAIR: Findable**
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
**Answers**
No deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
**2.2 FAIR: Accessible**
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
* Specify where the data and associated metadata,
documentation and code are deposited
* Specify how access will be provided in case there are any
restrictions
</th> </tr>
<tr>
<td>
**Answers**
To access the data we will likely leverage MySQL technology. Some examples of
open-source client options are DBeaver, Sqlectron and Sequel Pro.
Access to the data for others will only be provided if we can ensure that the
data is anonymized and that it is not possible to identify or retrace a
person based on it (for instance through location tracking). No deviations
from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
**2.3 FAIR: Interoperable**
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
**Answers**
We use standard models for encoding the data (e.g., JSON, CSV). No use of
specific ontologies is planned so far.
No deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
**2.4 FAIR: Reusable**
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain reusable
</td> </tr>
<tr>
<td>
Depending on the content of the data set and whether it contains personal
information, re-use by third parties could be possible.
No deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
**3\. Allocation of resources**
</td>
<td>
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables.
</td> </tr>
<tr>
<td>
**4\. Data Security**
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
**Answers**
An anonymized universal unique identifier will be used to identify the data
collected from each user; it will not be possible to reveal the identity of
the user solely based on this ID. However, certain combinations of data might
still make it possible to identify a person, for example 24-hour location
tracking.
</td> </tr>
<tr>
<td>
The raw sensor data will be transmitted over HTTPS in the form of data objects
(e.g., JSON) to a secure server, where it is persisted in a relational
database management system (e.g., MySQL). Any further information on the
server is not yet available at the moment of writing.
In all cases data will be stored according to the project’s guidelines on
personal data.
</td> </tr>
<tr>
<td>
**5\. Ethical Aspects**
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td> </tr>
<tr>
<td>
**Answers**
The end user will receive an information leaflet and will sign a consent form.
This way we ensure the individual is fully informed about the nature of the
research and the data collection that takes place and they give their (full)
consent for the research.
</td> </tr>
<tr>
<td>
**6\. Other**
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
No other procedures need to be put in place for project management data.
</td> </tr> </table>
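The WP4 answers above describe the transport path only in prose: readings tagged with an anonymized UUID, shipped as JSON objects over HTTPS, and persisted server-side in a relational database. A minimal sketch of that flow, assuming a hypothetical endpoint URL and illustrative field names:

```python
# Sketch of the WP4 data flow (assumed names; the endpoint is hypothetical).
import json
import uuid
import urllib.request

# Anonymized universal unique identifier: identifies the data stream,
# never the user's identity.
DEVICE_PSEUDONYM = str(uuid.uuid4())

def build_payload(heart_rate: int, steps: int) -> bytes:
    """Encode a wearable reading as a JSON data object."""
    reading = {
        "device_id": DEVICE_PSEUDONYM,
        "heart_rate": heart_rate,
        "steps": steps,
    }
    return json.dumps(reading).encode("utf-8")

def send(payload: bytes) -> None:
    """POST the reading over HTTPS; TLS protects it in transit, and the
    server would persist it in a relational database such as MySQL."""
    req = urllib.request.Request(
        "https://example.org/wellco/readings",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # not executed in this sketch

payload = build_payload(heart_rate=72, steps=8421)
```

On the device side, the same reading could be cached in SQLite (as the Data Summary notes) until the HTTPS upload succeeds.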
### WP5: Behaviour Modelling And Lifestyle Coach (JSI, M8-M29)
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
**1 Data Summary**
</td>
<td>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</td> </tr>
<tr>
<td>
**Answers**
To design, implement and evaluate the WellCo Behaviour Modelling and Lifestyle
Coach, the following data will be collected:
* Features extracted from an initial pre-processing of video in real time. Video is never stored. These data will only be shared if needed to support peer-reviewed scientific reports
* Data for speech emotion analysis (affective computing): recorded sound, either saved or processed in real time
* Data from WP4 (physical activity, nutrition specifications, mental assessment, behavioural features) and sentiment analysis, used for dynamic user modelling
* Data directly gathered from the wearable bracelet.
* Sensors and data embedded in the smartphone or tablet device of the user.
* Static data of the user such as Profile Information, Life Plan and Reported Outcomes and Expert/Informal caregiver reports. These data will be shared after anonymisation.
* The above-mentioned data will be used to provide personalized recommendations to the user through the virtual coach, in order to ensure the adoption and maintenance of healthier behaviour-change habits as stated in Objective 1 (O1, Section 2.1) of WellCo.
Although the format and synchronization of these data are still to be
decided, we are considering the possibility of using specific ontologies in
order to normalize data formats and make them interoperable among the
different modules of WP5.
Regarding the re-use of the data in this WP, we plan to make it as open as
possible. Therefore, as the project matures and we have more certainty about
the data, we will define measures to ensure that IPR and data privacy are
taken into consideration by design, as well as which data can be made open
without prejudice to the foregoing. This uncertainty makes it difficult to
determine the expected size of the data, as well as to whom it will be
useful.
</td> </tr>
<tr>
<td>
**2.1 FAIR: Findable**
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
**Answers**
In case we decide to publish the anonymized dataset for speech sentiment
analysis, the data will be provided as audio recordings or as files
containing the extracted features (CSV or ARFF file formats). Annotations
will be provided as CSV files. This coincides with standard practice for the
publication of recorded speech datasets.
However, due to the uncertainty about the data to be shared in this module,
there is no final decision yet on how we plan to make these data findable. In
any case, we will use a standard format for metadata and naming, as already
described in section 4. Further metadata might be added at the end of the
project in line with these metadata conventions.
</td> </tr> </table>
<table>
<tr>
<th>
**2.2 FAIR: Accessible**
</th>
<th>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata,
documentation and code are deposited
* Specify how access will be provided in case there are any
restrictions
</th> </tr>
<tr>
<td>
**Answers**
As described throughout this document, given the initial stage of the project
there is still some uncertainty about the specific data to be handled.
Datasets will be made open as long as they serve as support for scientific
publications in the project, on an anonymized basis, and provided that
neither the IPR nor the data privacy of the users from whom the data
originated is at risk.
</td> </tr>
<tr>
<td>
**2.3 FAIR: Interoperable**
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
**Answers**
Data will be stored in standard formats, such as WAV files for recorded audio
and CSV and ARFF for metadata and annotations; some data may be stored in a
database such as MySQL.
The interoperability of data will be made possible thanks to the use of
ontologies that will ensure that data is converted to common formats that
enable interoperability both among the different modules in this WP and the
scientific community when making them open.
</td> </tr>
<tr>
<td>
**2.4 FAIR: Reusable**
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain reusable
</td> </tr>
<tr>
<td>
**Answers**
As already mentioned, whenever possible the datasets will be licensed under
an Open Access license. Once we decide which data in this WP will be reused,
we will establish quality assurance measures, and assign responsibility for
them, to ensure that all datasets in this WP are cleared of bad records,
follow clear naming conventions, and have appropriate metadata conventions
applied.
</td> </tr>
<tr>
<td>
**3\. Allocation of resources**
</td>
<td>
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
**Answers**
</td> </tr>
<tr>
<td>
The work to be done in making the data FAIR will be covered by the assigned
budget for producing the different modules collecting and processing these
data.
</td> </tr>
<tr>
<td>
**4\. Data Security**
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
**Answers**
N/A at this moment.
</td> </tr>
<tr>
<td>
**5\. Ethical Aspects**
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td> </tr>
<tr>
<td>
**Answers**
We will ensure transparency by making data subjects aware of the types of
data collected and processed in WellCo, as well as which of these datasets
will be shared, always after a complete anonymisation process, in Open
Research repositories.
Informed consent will always be required before performing any of these
actions. These measures are particularly relevant when recording speech data
from the user's interaction with the virtual coach in early prototypes: in
the user's normal environment, unlike in a laboratory, the collected data may
include personal information.
</td> </tr>
<tr>
<td>
**6\. Other**
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
**Answers**
No other procedures need to be put in place for project management data.
</td> </tr> </table>
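The WP5 answers mention distributing extracted speech features and their annotations as CSV files. A small illustrative sketch of such an export, with hypothetical feature names (not the project's actual feature set), using only the Python standard library:

```python
# Export extracted speech features plus annotation labels as CSV,
# the format mentioned for the anonymized speech sentiment datasets.
# Feature names and values here are illustrative placeholders.
import csv
import io

features = [
    {"clip_id": "clip_001", "pitch_mean_hz": 182.4, "energy": 0.61, "label": "positive"},
    {"clip_id": "clip_002", "pitch_mean_hz": 143.9, "energy": 0.34, "label": "neutral"},
]

buffer = io.StringIO()  # in a real export this would be a file on disk
writer = csv.DictWriter(
    buffer, fieldnames=["clip_id", "pitch_mean_hz", "energy", "label"]
)
writer.writeheader()
writer.writerows(features)
csv_text = buffer.getvalue()
```

Because only derived features and labels are written, not the audio itself, a file like this can be shared once the clip identifiers are pseudonymous.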
### WP6: Dissemination and Exploitation (CON, M2-M36)
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
**1 Data Summary**
</td>
<td>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</td> </tr>
<tr>
<td>
**Answers**
The following data is going to be considered:
* Conference/journal publications (.PDF)
* Exploitation plan (.DOCX, .PDF)
* Standardization activities (.DOCX, .XLSX)
* Dissemination materials (.PPT, .PDF, .JPG, videos) including website (.html) with embedded content, as well as connected to Google Analytics to evaluate its reach
</td> </tr>
<tr>
<td>
**2.1 FAIR: Findable**
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
**Answers**
Data related to dissemination and exploitation will be findable, to the best
of the consortium's capacity, by utilizing digital communications best
practices, e.g. hashtags, metadata and keywords. On social media, WellCo
posts will be findable and discoverable by name, while posts to other media
(e.g. third-party blogs) will refer to the project website.
At this moment we foresee no separate datasets to be posted in repositories
at the end of the project.
</td> </tr>
<tr>
<td>
**2.2 FAIR: Accessible**
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata,
documentation and code are deposited
* Specify how access will be provided in case there are any
restrictions
</td> </tr>
<tr>
<td>
**Answers**
Most of this data will be made public, although an exception might be made
for data concerning the project's exploitation. We foresee that most data
will be published online, though not in online repositories, since it does
not contain specific research data.
</td> </tr> </table>
<table>
<tr>
<th>
</th> </tr>
<tr>
<td>
**2.3 FAIR: Interoperable**
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
**Answers**
This is not applicable for data related to dissemination and exploitation.
</td> </tr>
<tr>
<td>
**2.4 FAIR: Reusable**
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain reusable
</td> </tr>
<tr>
<td>
**Answers**
The data related to dissemination and exploitation will be reusable. The
reference to original materials will be kept.
</td> </tr>
<tr>
<td>
**3\. Allocation of resources**
</td>
<td>
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
**Answers**
The work to be done in making the data FAIR will be covered by the assigned
budget for producing the deliverables.
</td> </tr>
<tr>
<td>
**4\. Data Security**
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
**Answers**
This is not applicable for data related to dissemination, which contains only
cumulative, anonymized data representations.
A privacy statement will be provided for WellCo website visitors.
</td> </tr>
<tr>
<td>
**5\. Ethical Aspects**
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td> </tr>
<tr>
<td>
**Answers**
All participants in the consortium have agreed with posting their pictures
online for dissemination items and project updates.
</td> </tr>
<tr>
<td>
**6\. Other**
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr> </table>
**Answers**
No other procedures need to be put in place for project management data.
<table>
<tr>
<th>
6
</th>
<th>
Conclusive Remarks
</th> </tr> </table>
This deliverable provides a description of the data management strategies
taken into account during the project. It describes and outlines the existing
regulations with which WellCo must comply, and defines how data will be
collected, stored, shared and, most importantly, protected. Important
measures are mentioned regarding the protection of the data, which should be
taken into account during the project. This is a “living document” and an
update will be provided no later than in time for the first review (M12).
Further updates will be provided at M24 and M36, detailing which data will be
made available to others, and how, within the Pilot on Open Research Data
(ORD).
Source: https://phaidra.univie.ac.at/o:1140797 (Horizon 2020)
0430_CYBECO_740920.md
|
# Introduction
## Objective and Scope
The objective of this deliverable is to establish a Data Management Plan (DMP)
for the CYBECO project compliant with the _Horizon 2020 FAIR Data Management
Principles_ . These principles establish that, in general terms, the project’s
research data should be findable, accessible, interoperable and reusable. The
partners will bring specific datasets towards the project, and there will be
many more data exchanges during the actual project. It is key that this data
is treated correctly, to prevent the leaking of IP or commercially sensitive
information from one of the partners by another partner.
This plan is based on the _Guidelines on FAIR Data Management in Horizon 2020_
[1] and its annex the _Horizon 2020 FAIR DMP template_ [1]. It further details
the data management contents of the _CYBECO Proposal_ [2]. The DMP is intended
to be a living document, and it will be updated with further detail as the
project progresses and when significant changes occur. Therefore, it will have
several versions and includes an update procedure.
The current report is the first version (i.e., DMP v.1), which is also
deliverable D2.2 that shall be submitted to the European Commission in the
third month of the project (M3).
## Document Structure
The document is structured as follows:
* Sect. 2 describes the procedure for implementing and updating the DMP.
* Sect. 3 presents the general aspects of the DMP, covering how the project as a whole would manage repositories, open access, data security and data from research with human participants.
* Sect. 4 synthesises the adherence of CYBECO datasets to the FAIR principles on making the research data findable, accessible, interoperable and reusable. It also discusses the adherence to other supporting principles, namely, resource allocation, data security and ethical aspects. The specific information for the different datasets is provided in their Dataset Record, in the annex.
* Sect. 5 provides a list with the different datasets of the project. Additionally, the data management information for each dataset is detailed in its Dataset Record (provided as annexes of this document). The current version of the DMP, the initial plan, only details one dataset: the internal repository that will centralise the project documents and most of its datasets.
# Implementation and update procedure
The DMP plan is implemented by evaluating the different datasets created by
the project regarding the FAIR principles, as well as an evaluation of the
overall data management within the project.
Specifically, the implementation of the DMP will consist of the following
steps:
1. Creation of a _**Data Repository** _ in a private part of the CYBECO website. Unless there would be a possible conflict with confidentiality, security or commercial sensitivity, all data needed to validate the results as presented in any of the publications will be made available through an _**open research data repository** _ in the CYBECO website as soon as possible. The URL for the website is _www.cybeco.eu_ , whereas the URL to access the private part is _www.cybeco.eu/private-area/repository_ .
2. Each partner should fill in a _**Dataset Record**_ for each of the datasets they create. We define a dataset as any collection of research data that is particular or special from the data management perspective. This means that data about different topics might be grouped in a dataset if no particular aspect makes its management different (e.g., confidentiality, security, intellectual property). Annex 1 provides a template of the Dataset Record.
3. Once the Dataset Record is filled in, each partner should store it in the Data Repository alongside the actual dataset.
4. Some datasets might have specific data management policies or procedures (e.g., the experiments). If possible, each partner should upload those policies and procedures too.
5. On a regular basis, CSIC, with the support of TREK, will review these records to update the DMP accordingly and ask the partners for additional feedback. As a minimum, the DMP should be updated in the context of the periodic evaluation/assessment of the project. If no other periodic reviews are envisaged within the grant agreement, an update needs to be made in time for the final review at the latest.
Additionally, the consortium will agree and specify, in the next project
meeting (second half of 2017), the following data management aspects:
* How many years the data will be preserved after the end of the project and how the data will be curated for that long-term preservation.
* Identification of the relevant datasets that the project will generate. With special emphasis on protection (e.g., privacy and intellectual property) and public sharing (for both scientific and general usage).
# Summary of the Data Management Plan
The DMP details the following aspects: datasets, standards and metadata, data
sharing, identification of repositories, long-term preservation and associated
costs. The project datasets will be kept in a repository in a private part of
the CYBECO website (hereafter the CYBECO Repository), which will allow for all
data to be identifiable, but also include information about whether this data
is commercially sensitive and so on to allow for proper sharing of the
information amongst the partners and the public at large. This central
repository will ensure long-term preservation of the data, and will also be
secured using relevant methods for access protection and backup.
The main references for the DMP are the _Guidelines on FAIR Data Management in
Horizon 2020_ [1] and their annex, the _Horizon 2020 FAIR DMP template_ [1].
These provide a sufficiently high-level procedure for data management.
However, the approach of the plan is bottom-up: each dataset will be evaluated
and prepared based on its security, privacy, technical and dissemination
needs. Datasets without critical aspects will follow the H2020 FAIR
guidelines, while some datasets may require additional, specific data
management procedures. Two such specificities are privacy and ethical aspects:
datasets generated from research with human participants will follow the
stringent procedures specified in Sect. 3.2. Another factor favouring an
individual data management procedure per dataset is that each domain has
different publication and metadata standards and procedures. Thus, to maximise
the access to and utility of our public datasets, it is important to follow
these domain-specific procedures.
## Open access to research data
Unless there is a possible conflict with confidentiality, security or
commercial sensitivity, all data needed to validate the results as presented
in any of the publications will be made available through an open research
data repository on the CYBECO website as soon as possible. Likewise, other
elements, such as software tools or equipment, will also be provided in the
same repository.
Sensitive data may be masked and anonymised to protect their sensitivity
while still allowing them to be used by other projects in the future. Such
sensitive data could also fall under an embargo period, the length of which
will be determined by the potential commercial development based on the data
(e.g., IP protection).
Some data may not be made available at all to the public due to its
commercially sensitive or security nature, to ensure that the project delivers
long-term profitable development for the commercial partners. The same applies
to IP brought into and developed during the project. If during the project
certain IP would restrict the intended availability of some of the outputs,
then a sample code approach will be used to overcome this problem. Such sample
code, as used e.g. in the standardisation of the MPEG format, allows for
a functional model to be presented, while the freely available code would not
contain all possible optimisations. Hence, commercially and security sensitive
information can be retained and secured accordingly while the open source
tools would still be functional.
Following the dissemination plan of CYBECO (D8.1), datasets associated with
scientific publications are especially relevant. Peer-reviewed scientific
publications must be made openly available online, free of charge, for any
user. Therefore, the datasets and tools needed to make each paper reproducible
will be provided.
## Data management of research with human participants
As declared in the ethical self-assessment, we shall perform research with
human participants, specifically with volunteers for social and human sciences
research and that personal data collection/processing will be involved. These
activities will be performed by DEVSTAT and UNN, which have experience in
performing this type of research with the highest ethical standards. Their
research protocols will fully comply with the principles of the _Declaration
of Helsinki (1989)_ , the _Universal Declaration of Human Rights (UNESCO,
1948)_ and the _Agreement for the Human Rights Protection in Biology and
Biomedicine (Oviedo, 1997)_ , and the CYBECO charter of ethics for experiments
[2].
### Behavioural economic studies
Several considerations will be made to minimise confidentiality issues with
the participants in the behavioural economics studies. First, the amount of
personal information required will be limited to the absolute minimum. Second,
personal information will be collected without unique identifiers attached to
the data, or known to the researcher. Although consent forms will include the
participant name, these personal identifiers will not be linked to the
recorded data. Third, each participant's data will be associated with an
alphanumeric code so that participants remain anonymous at all stages of the
research protocol.
The identifying list will be stored in a safe and separate area from the study
data.
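The coding scheme described above can be sketched in a few lines. This is only an illustration: the 8-character code length and the example names are assumptions, not the project's actual protocol.

```python
import secrets
import string

def assign_codes(participant_names):
    """Assign a random alphanumeric code to each participant.

    Returns two separate structures: the anonymous code list to be kept
    with the study data, and the identifying list (name -> code) to be
    stored in a safe, separate area.
    """
    alphabet = string.ascii_uppercase + string.digits
    identifying_list = {}
    for name in participant_names:
        # 8-character random code, regenerated on the (unlikely) collision
        code = "".join(secrets.choice(alphabet) for _ in range(8))
        while code in identifying_list.values():
            code = "".join(secrets.choice(alphabet) for _ in range(8))
        identifying_list[name] = code
    return list(identifying_list.values()), identifying_list

codes, identifying_list = assign_codes(["Alice", "Bob"])
```

Only `codes` would accompany the recorded study data; `identifying_list` stays in the separate, secured location.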
Security measures for storage and handling of subject data will be carefully
considered: experimental data will be originally recorded in a computer
without Internet access and with restricted access to researchers involved in
the project. Access passwords are necessary to log in the experimental
computerised setup. Participant recordings will be removed from the computer
once each participant's session has finished, to increase the security of this
data set.
### Psychology-led studies
Similar considerations will be in place for the psychology-led studies, which
adhere, in addition, to the ethical code of practice laid down by the _British
Psychological Society_. Because the stored interview data may hold information
unique to particular participants, strict consent protocols will be devised
around the storage and use of this data and the right to withdraw.
Participants will be free to withdraw from the experiment at any time. It will
be made clear that participation is voluntary and that quitting the study
affects only the amount of payment; a minimum wage is guaranteed at the end of
the first session.
## Data management security
Information will be handled as confidentially as possible, in accordance with
applicable national regulations on data protection, with _Directive
95/46/EC_ of the European Parliament and of the Council of 24 October 1995 on
the protection of individuals with regard to the processing of personal data
and on the free movement of such data (OJ 23 November 1995, No L. 281 pp.
0031-0050), and with _Directive 2001/20/EC_ of the European Parliament and of
the Council of 4 April 2001 on the approximation of the laws.
By its very nature, security is a key issue within CYBECO. As stated, the
Executive Board of the project will also act as the Security Scrutiny
Committee, identifying issues that should remain at a confidential
level. In particular, some details in connection with the experiments and the
product will remain at such level for security reasons, as stated in the work
plan. Besides, the CSIC team includes a specialist in data protection, J.A.
Rubio, who will take care of data protection issues.
# Adherence to the FAIR principles
This section synthesises the adherence of the CYBECO datasets to the FAIR
principles on making the research data findable, accessible, interoperable and
reusable. The specific information for the different datasets is provided in
their Dataset Record, in the annexes.
This section will evolve as the CYBECO project grows.
## Making data findable
* CYBECO will create a repository for the project partners and an open research data repository for the public.
* Datasets and documents will contain version numbers, metadata and keywords for identification, following, internally, the structure of the project organisation. The publicly available datasets will additionally use identifiers such as DOIs and metadata that facilitate clear identification and citation by external users.
## Making data openly accessible
* All data needed to validate the results of CYBECO will be made openly available unless there would be a possible conflict with confidentiality, security or commercial or intellectual property aspects.
* Sensitive datasets may be masked, made anonymous, or presented as a sample to protect the sensitivity of data, while still allowing this to be used by the public.
## Making data interoperable
* The publicly available datasets will follow standardised formats that facilitate their interoperability and reusability: first, by using highly interoperable formats such as .sql, .csv, or .xml; second, by producing “tidy data” [3] so that the datasets are easy to edit and visualise.
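As an illustration of the “tidy data” convention cited above [3] (one variable per column, one observation per row), a hypothetical wide table can be reshaped and written as a comma-delimited .csv file. The variable names here are invented for the example.

```python
import csv
import io

# Wide format: one row per site, one column per year (not tidy)
wide = [
    {"site": "A", "2016": "1.2", "2017": "1.5"},
    {"site": "B", "2016": "0.8", "2017": "0.9"},
]

# Tidy format: one observation (site, year, value) per row
tidy = [
    {"site": row["site"], "year": year, "value": row[year]}
    for row in wide
    for year in ("2016", "2017")
]

# Write as comma-delimited .csv, one of the interoperable formats listed above
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["site", "year", "value"])
writer.writeheader()
writer.writerows(tidy)
```

The tidy form is easier to filter, merge and visualise with standard tools than the wide form, at the cost of some repetition.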
## Making data reusable
* Sensitive data may fall under an embargo period for determining whether and how this data will be made public.
## Additional aspects of data management
### Resource allocation
* Long-term preservation of the CYBECO website and, thus, the repositories.
* Preservation of back-ups of the datasets.
### Data security
* Datasets in the CYBECO repository will include information about whether the data is sensitive and the type of sensitive information (e.g., personal data, intellectual property, commercial).
* Website hosted in European servers.
* Use of secure methods for access and backups of the CYBECO repositories.
* The CSIC team includes a specialist in data protection, J.A. Rubio, who will take care of data protection issues.
### Ethical aspects
* Following the ethical self-assessment, we have declared that we shall perform research with human participants and, thus, personal data collection and processing.
* The data management of research with human participants will be performed by DEVSTAT and UNN, which have experience in performing this type of research with the highest ethical standards. Further details of the data management of this research is provided in Sect. 3.2.
# Datasets
The initial DMP identifies one special dataset: the internal repository.
During the life of the project this list will grow to include other datasets.
Each dataset is further detailed in its corresponding annex.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0431_SALBAGE_766581.md
# 1\. Introduction
According to the EU regulations, projects participating in the core areas of
Horizon 2020 starting from 2017 must participate in the Open Research Data
Pilot (ORD Pilot). This includes Future and Emerging Technologies (FET)
projects. Thus, the SALBAGE project, as an H2020-FET funded project, is bound
to participate in the ORD pilot action on open access to research data.
Open access implies unrestricted online access to research outputs such as
journal articles, without access fees. The goal of the EU with this programme
is to foster access to and re-use of data generated by EU-funded projects, in
order to maximise the value of European public financial resources and avoid
duplication of efforts. According to the EU guidelines 1 , the ORD pilot applies
primarily to the data needed to validate the results presented in scientific
publications. More specifically, projects participating in the Pilot are
required to deposit and make public, as soon as possible, the research data
described below:
* The data, including associated metadata, needed to validate the results presented in scientific publications
* Other data, including associated metadata, as specified and within the deadlines laid down in a data management plan (DMP).
# 2\. Plan Description
The Data Management Plan (DMP) of the SALBAGE project describes the data
management life cycle including specific standards of the databases in terms
of formats, metadata, sharing, archiving and preservation.
The DMP will be developed during the project life and periodically updated.
This document represents the initial version of the data management life cycle
for all datasets to be collected, processed or generated by the project
partners.
The present document has been prepared with aid of the DMP Online tool.
# 3\. Data summary
SALBAGE project intends to explore the feasibility of using Aluminium-Sulfur
batteries with polymerized electrolytes based on ionic liquids and deep
eutectic solvents.
The project is structured in 6 work packages. Three main WPs will be devoted
to the study of the material properties and electrochemical reactions of the
main components of a battery, namely the anode, cathode and electrolyte. Thus,
WP2 focuses on the study of the electrolyte, WP3 on the aluminium anode and
WP4 on the sulphur cathode. On top of that, data resulting from the
combination of these elements will be generated. Most of these data will come
from a combination of computational simulations (DFT) in WP5, confirmed by
experimental results from different electrochemical and testing techniques in
WP6.
Therefore, in SALBAGE project, data coming from the above mentioned WP and its
corresponding tasks will be generated and collected. In this first approach,
three data types can be distinguished and foreseen:
3.1. Experimental data
WP2, WP3 and WP4 are devoted to the study of the chemical, electrochemical and
material properties of the materials composing the basic cell of the battery.
For the correct development of the tasks included in these WP, a variety of
electrochemical and surface science techniques will be used. The data will be
used to assess the performance of the proposed materials in the proposed
battery. In deeper detail:
* WP2 will gather data regarding the capability of a set of proposed ILs and DES to be incorporated into polymer gels or blends. Their further application as electrolytes will be studied also for which conductivity measurements will be performed. Data obtained in the characterization of the electrolyte will be shared with the other partners, especially those involved in WP3, 4 and 5.
* WP3 will study the stripping and electrodeposition of Al from the proposed electrolytes on different aluminium anodes and alloys, including the formation of dendrites on the surface. Different techniques will be used, such as cyclic voltammetry, impedance spectroscopy and SEM imaging. Results will provide insights on the performance of the proposed electrolytes to be coupled with an aluminium anode, allowing us to determine which might be employed and which might not. Outputs will be provided internally to the other partners, namely those involved in WP2, WP5 and WP6. The most successful results will be retained for their use in the battery, and promising results beyond the state of the art will be published.
* WP4 is devoted to the study of the sulfur electrode. The use of sulfur as cathode in a battery is not straightforward due to the variety of species that sulfur can form. In order to improve and boost its performance, the use of redox mediators is foreseen in the project. Thus, electrochemical studies will be carried out regarding the performance of sulfur modified with different species (redox mediators) as cathode, and results will be provided to the partners involved in WP2, WP3 and WP5.
3.2. Simulation data
WP5 involves all the simulation activities that will allow reducing the number
of species to be tested experimentally in WP3 and WP4. The stability of
different molecules in the given conditions will be examined by means of DFT
simulations in order to tell which would be the most stable and probable.
Outputs of this WP will allow WP4 and WP3 to reduce the number of experimental
tests to the most stable species, reducing efforts and optimising resources.
Likewise, a continuous feedback loop between WP4, WP3 and WP5 will be
established in order to refine results.
Reports and deliverables of WP5 will be made public. Additionally, results
obtained beyond the state of the art will be published.
3.3. Testing data
The information gathered from the outputs of WP2, WP3 and WP4 on the
performance of the individual elements of the battery will be combined: the
elements will be assembled in a battery cell and tested as a whole. Results on
the performance of this cell will give information about the real performance
and capabilities of an aluminium/sulfur battery. Tests will be carried out in
relevant conditions, and the results will provide the basis to determine the
viability and possibilities of this sort of battery beyond the state of the
art. Results from this WP will be provided to the partners involved in WP2,
WP3 and WP4 in order to improve the materials combination. In addition, a
potential market analysis depending on the battery performance will be
prepared and made public.
In all cases, details of the equipment used, such as the make and model of the
instrument, the settings used and information on how it was calibrated will be
provided along with each set of data.
The techniques used for the characterisation of materials may involve specific
software, but the data created by the acquisition devices will be transformed
into figures and tables in order to be shared more easily with the other
partners and beyond. Thus, data will be presented as text including images
and/or figures. Other formats that might be used for documents other than text
are the following: Mendeley database (.ris); ASCII or MS Excel spreadsheets
(.xlsx and comma-delimited .csv); MS Word for text-based documents (.docx);
MP3 or WAV for audio files; JPG, at the maximum quality available, for images;
and Windows Media Video for video files. Quantitative data analysis will be
stored in the SAV file format (used by SPSS), from which data can be extracted
using the open-source spss read Perl script.
These file formats have been chosen because they are accepted standards and in
widespread use.
These results will be useful to material scientist and battery development
industry.
It is not envisaged that there will be any privacy issues with respect to the
data as there aren’t personal data involved.
# 4\. FAIR data
In accordance with the EU Guidelines, data produced in the present project
should be FAIR, that is: Findable, Accessible, Interoperable and Reusable.
4.1. Making data findable, including provisions for metadata:
In order to make the documents **findable** within the repositories, metadata
will be inserted along with each document. For that, relevant and sufficient
keywords will be used; some examples could be the words Battery, Aluminium,
Sulfur/Sulphur, Ionic Liquids, Polymerization, Deep Eutectic Solvents, and any
other more specific keyword relevant to the content of the publication, as
well as appropriate and relevant titles.
All data and metadata will be stored using English as language in order to
make them more easily findable for the scientific community. Besides, IUPAC
standards will be used for units and chemical names.
For identification purposes, the repositories offer the assignation of
persistent and unique identifiers, such as Digital Object Identifier **(DOI)**
numbers, to clearly and univocally identify documents. Zenodo additionally
supports DOI versioning of a document for further editions.
In the case of the project deliverables (some of which will be public), they
will be identified with number and version, date and type of document,
following the rules below:
* Type: DEC/R/DEM, according to the description presented in deliverable table 3.1 of the proposal.
* Dissemination level: PU/CO (public/confidential), according to the deliverable list table of the proposal.
* Name: same as in table 3.1.
* Document ID: D.X.x-TYPE-deliverable number-year, where the deliverable number is the order on the list (it also appears in the Grant Agreement data).
* Date: Day/Month/Year.
Some examples:
* D1. Deliverable D.1.1, launch of website: the ID would be D1.1-PU-01-2017
* D16. Deliverable D3.3, effect of inorganic additives on the anode performance: the ID would be D3.3-CO-16-2018
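The ID convention above can be captured in a small helper, checked against the two examples given (in which the dissemination level occupies the TYPE field). This is only an illustration of the naming rule, not project tooling.

```python
def deliverable_id(wp, number, level, order, year):
    """Build a deliverable ID of the form
    D<WP>.<number>-<dissemination level>-<two-digit list order>-<year>.
    """
    return f"D{wp}.{number}-{level}-{order:02d}-{year}"

# The two examples from the text:
print(deliverable_id(1, 1, "PU", 1, 2017))   # D1.1-PU-01-2017
print(deliverable_id(3, 3, "CO", 16, 2018))  # D3.3-CO-16-2018
```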
4.2. Making data openly accessible:
The most effective way to spread the data generated by the SALBAGE project is
by means of scientific publications. In accordance with the OPEN Pilot plan,
research data results must be granted Open Access. This means that scientific
publications of the research findings directly coming from the project must be
made openly and publically available by the partners involved and its
institutions, at least in its almost-final version. In any case the principal
investigators on the project and their institutions will hold the
**intellectual property rights** for the research data they generate but they
will grant redistribution rights to repository for purposes of data sharing.
In order to make data publically available, paper will be uploaded to
repositories as PDF file to public internet sites. Each partner will be
responsible of making its data resulting from the SALBAGE project open
according to the H2020 FAIR guidelines. In order to do that, data will be
stored in either the institution's repositories or in ZENODO (www.zenodo.org).
ZENODO is an open repository from OpenAIRE H2020 project and CERN. Data
uploaded to ZENODO is linked to OpenAIRE and the EC portal what guarantees its
**accessibility** to all public.
In addition to those repositories, copies can be uploaded to social networks,
either scientific platforms, such as ResearchGate.net, or professional ones,
such as LinkedIn, as well as to the project website hosted at
**www.salbageproject.eu.**
In the case of SALBAGE project, a combination of the above mentioned forms
will be used.
The procedure will be as follows:
* As soon as results from the project are published, PDF copies along with any complementary data will be uploaded to the selected repository and to ResearchGate.
* In parallel, they will be announced in the website including links to the publication location.
* In addition, project results will also be disseminated by other means such as newsletters, conferences etc., as well as by the corresponding LinkedIn and twitter profiles in order to make the data reach the widest possible audience.
On top of that, some of the project deliverables are public, such as those
coming from the simulation activities. In these reports the most stable
species for the given conditions will be presented for all the public to know.
The report will include the list of possible species that might form as a
result of the redox processes when the battery is charged and discharged and
which of them are the most probable according to the simulation data.
Complementary experimental data supporting the results will also be provided.
For preservation, we will supply periodic copies of the data and public
deliverables to the Zenodo repository, which will be the ultimate home for the
data generated during the project life and beyond.
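As a sketch of how a dataset could be prepared for such a deposit, the following builds a deposition payload following Zenodo's public REST API metadata schema. The titles, names and keywords are illustrative assumptions, and an actual upload would additionally require an access token and an HTTP request to the Zenodo API.

```python
def zenodo_deposit_metadata(title, description, creators, keywords,
                            access_right="open", embargo_date=None):
    """Build the JSON payload for a Zenodo deposition.

    Field names follow Zenodo's deposition metadata schema. For an
    embargoed dataset, access_right is "embargoed" together with the
    date on which the embargo lifts.
    """
    metadata = {
        "upload_type": "dataset",
        "title": title,
        "description": description,
        "creators": creators,   # e.g. [{"name": "Surname, Name", "affiliation": "..."}]
        "keywords": keywords,
        "access_right": access_right,
    }
    if access_right == "embargoed" and embargo_date:
        metadata["embargo_date"] = embargo_date  # ISO date, e.g. "2020-01-01"
    return {"metadata": metadata}

# Illustrative (hypothetical) dataset description:
payload = zenodo_deposit_metadata(
    "Electrolyte conductivity measurements",
    "Conductivity of candidate IL/DES gel electrolytes.",
    [{"name": "Doe, Jane", "affiliation": "Albufera"}],
    ["Battery", "Aluminium", "Sulfur", "Ionic Liquids"],
)
```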
4.3. Making data interoperable:
In order to make the data **interoperable**, data stored in public
repositories will include a description of the equipment, conditions and
settings used to acquire the data, as well as a comprehensive explanation of
the experimental procedures followed to obtain them, wherever this applies. In
the case of DFT data, all the boundary conditions and assumptions will be
provided with the data.
In order to make it possible to reproduce experiments, publications might
include additional supporting information with complementary data that helps
verify the results presented, so that the data are fully reproducible in other
laboratories.
IUPAC nomenclature will be used as well as International Standards and metric
units in order to facilitate interoperability.
Public press releases and Social Media news in LinkedIn and Twitter will use
common language for the general public to understand.
4.4. Increase data re-use (through clarifying licenses):
Data presented in the public repositories might be used by third parties for
research purposes as state-of art, in order to avoid duplication of efforts
and as the basis for future investigations and research on the topic.
The generated data can be re-used in similar configurations whenever the
aluminium anode, the sulphur cathode or the polymeric electrolyte is used as
part of an electrochemical setting (battery, supercapacitor), in combination
with each other or not. For instance, data regarding the stability and species
formed at the cathode can be extrapolated for use in Li-S batteries.
Nevertheless, the **commercial** use of the data generated by the project
might be restricted if any patent or exploitation agreement has been filed or
signed by the consortium members, in which case information about the patent
will also be provided through the channels foreseen by the project.
With regard to **quality assurance,** the research groups and institutions
participating in this project are top-level, with a great reputation and
trajectory within their respective fields, which assures the reliability and
quality of their findings and results. In addition, the strict procedures that
researchers must follow in order to publish results in a peer-reviewed journal
guarantee their quality.
# 5\. Allocation of resources
Responsibility for data preservation lies with the partner(s) generating the
data. For the compilation of the documents, the coordinator is responsible for
gathering them and reporting to the EU. In addition, dissemination of the
results generated will be made by the means foreseen in the Dissemination Plan
(deliverable 2.2 of the project).
Each partner is responsible for making its data and results open and for
uploading the results to their repositories, the cost of this being eligible
for reimbursement during the duration of the project.
The coordinator is responsible for creating and updating the DMP. The cost of
documentation preparation and uploading is included in the WP1 management
tasks, eligible for reimbursement in accordance with EU rules.
In a first approach, only free repositories, such as those provided by the
institutions and Zenodo, will be used. In future versions of the DMP a
revision of costs will be made.
# 6\. Data security
The research data from this project will be deposited in the institutional
repositories on the partners' official pages, to ensure that the research
community has long-term access to the data.
The data files from this study will be managed, processed and stored in a
secure environment (e.g., lockable computer systems with passwords, a firewall
system in place, power surge protection, and virus/malicious intruder
protection) and by controlling access to digital files with password
protection. The universities involved have self-storage mechanisms intended to
preserve data; the SMEs also have backup systems that preserve their
information.
In a deeper detail:
* **Albufera:** Computers are password protected and equipped with all the due virus and firewall protections. Computers for collection of data in measurement equipment such as potentiostats or battery cyclers are connected to UPS in order to avoid the loss of data due to an unpredicted electrical failure. User’s data are backed up locally in hard copy once a week. A remote copy is also kept in a cloud based storage system and regularly backed up and stored in a different place.
* **DTU:** Computers and clusters are protected by password, antivirus and firewall. The data are produced using the Niflheim cluster hosted at DTU. Niflheim is currently assuring for the standards required by the Danish research council and DTU in terms of preservation of data (from daily backups to long-term storage of the data). All the post-processing scripting will be run and saved in the project folder of the same cluster. Periodic local updates (on removal disks) will also be performed. When the person responsible for the project will move, the data will be transferred to the PI of the project (Tejs Vegge, DTU Energy, Section for Atomic Scale Modelling and Materials). The final data, protected by a DOI, will also be stored in the computational materials repository (CMR - https://cmr.fysik.dtu.dk/) which is hosted at DTU Physics and has been active for more than 8 years. The properties collected in a database will be accompanied by ReadMe files to understand how the data was obtained and what exactly is included. Code will be commented in the python script, as well as additional ReadMe instructions will be attached describing how to use and run the script.
* **TU Graz:** All computers are protected by password, antivirus and firewall. These are regularly updated. User data is stored on several computers and backed up regularly. A remote copy in a cloud and object based TU Graz internal storage system is used for data exchange between project members within the TU Graz. The storage nodes and the server that monitor and balance the system are located at three sites within the TU Graz. The system is capable of autocorrection in case of failure of single disks or whole storage nodes. For a disaster recovery, data are synchronized in a separate data storage unit on a daily basis.
* **Univ. of Southampton:** Computers are password protected and equipped with virus and firewall protections. Computers for collection of data in measurement equipment such as potentiostats or battery cyclers are connected to UPS to avoid the loss of data due to an unpredicted electrical failure. User’s data are stored in several computers. Remote copies of the files are also kept in the University storage system and regularly backed up.
* **Scionix:** All client computers and servers are protected by a strong-password methodology. All computers have a virus and firewall installed and set up. These are cloud controlled and updates threats and suspected activities are managed centrally all updates and changes are automatically pushed to the clients. All data is maintained and collected on servers at the central site and data is managed, protected and backed up locally and remotely. All data is stored and managed in compliance with current regulation and policies
* **Univ. of Leicester:** Computers are protected by a strong-password methodology for which there is a compulsory 90-day replacement cycle. All computers (managed desktop and stand-alone) are equipped with virus and firewall protections; these are regularly and automatically updated. Computers used for collection of data attached to measurement equipment such as potentiostats, microscopes or battery cyclers are administered through central desktop management consistent with the University Data Management policies (https://www2.le.ac.uk/services/researchdata). This means that all data are backed up centrally and therefore protected against unscheduled local or regional power failures. Additionally, user data are stored on several redundant hardware-encrypted remote backup devices. All data are stored and managed in compliance with new regulation and policies governing the secure storage of research and personal data (General Data Protection Regulations).
* **ICTP-CSIC:** At CSIC all computers are password protected and equipped with virus and firewall protections according to CSIC protocols. Computers for collection of data in measurement equipment are connected to UPS in order to avoid data loss caused by unpredicted electrical failure. User’s data are stored in several computers and backed up regularly. Researchers from CSIC have access to the data management services provided by DIGITAL.CSIC which includes data storage and open access data publication, repositories and DOI assignation ( _https://www.re3data.org/search?query=DIGITAL.CSIC_ ) . DIGITAL.CSIC meets the quality criteria of the global directory of repositories and has the Data Seal of Approval Certificate.
# 7\. Ethical aspects
This project does not involve ethical issues to be managed.
# 8\. Other
Each institution has implemented procedures to guarantee the preservation and
curation of data which are in good alignment with the EU guidelines.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0432_MH-MD_732907.md
|
3\. a well-designed privacy preserving and security layer that combines a
multi-level anonymisation engine to support data privacy preserving data
publishing to external parties and a privacy preserving complex data flow
execution engine (i.e., differential privacy, Secure Multi-Party Computation
(SMPC), homomorphic encryption) to support privacy preserving data mining and
analytics within MHMD platform.
_Figure 1: MHMD Architecture_
Hence, relying on a federated data management infostructure where no central
authority holds the entirety of data and a blockchain platform as a
distributed, public and transparent ledger that orchestrates and monitors data
sharing, MHMD will decentralize data storage and it will enable not only the
project stakeholders but also data subjects to witness data sharing activities
at any time. This key architectural specificity will allow MHMD to bring trust
within a network of possibly highly heterogeneous and unsecure appliances.
Transactions will be automated thanks to the provision of custom-tailored
smart contracts.
# 2.3 Data description
MHMD will generate and integrate three main types of data.
1. **Pseudonymised (de-identified) clinical (routine) data** extracted from medical information systems (e.g., phenotype / demographic data, genomic data, medical images and signals, lab tests). Such data will be stored in a federated data storage platform where each hospital will have its own node.
2. **Individual personal data including machine-generated data from Internet of Things (IoT)** connected devices (wearables, smartphones): taking stock of MHMD’s partner Digi.me (https://get.digi.me), MHMD will aggregate personal data from disparate sources (i.e., social media accounts, clinical data repositories, personal drives) and data derived from commonly used wearables, or personal monitoring devices, as they are stored on smartphones. Such data will be stored in a centralised, user-owned account.
3. **Derived data related to the usage and the processing of the data** : such data could be related to the different types of data profiles, pre-processing and mining data flows, analytics, biomedical and statistical simulation models, user profiles for app personalisation and privacy preservation, blockchain and security transactions.
# 2.4 Data Sourcing
The data sources to be explored are, in priority and chronological order:
1. **Hospital pseudonymised datasets** : already consented and available pseudonymised data from clinical partners having taken part in the MD-Paedigree (md-paedigree.eu) and Cardioproof (cardioproof.eu) E.U. funded projects (UCL, DHZB, OPBG);
2. **Individual user data** : individual digi.me users who will download the application and start sharing their data;
3. **Hospitals bringing additional data** : bringing in other individual users among their patients or involving other third parties (clinicians, hospitals, patients’ associations).
# 2.5 Data extraction and data storage
## 2.5.1 Clinical data extracted from Healthcare Information Systems
The MHMD project will build upon and extend the already existing distributed
data management and storage platform that interconnects several clinical
centres in EU FP7 MD-Paedigree and FP7 CARDIOPROOF projects and the related
biomedical data extraction, pre-processing and data integration flow. Based on
this flow, routine clinical data are extracted from local Healthcare
Information Systems within hospitals and are properly pseudo-anonymized (de-
identified), normalized, curated, transformed and stored on a local node
within the hospital. This architecture allows sourcing and preparing sensitive
data at the hospital level and applying proper anonymisation onsite under the
strict supervision of local IT and data controllers, who can quality check,
quarantine, or even stop the sharing at any time. The verified data are then
uploaded to a local (within hospital) node, which federates contents with the
other connected centres. Beyond the data sourcing process, this architecture
also makes it possible to deeply penetrate the local Healthcare Information
System, by connecting it to the ETL routing system or proprietary RIS, PIS or
PACS databases. As such, 3 of the participating hospitals in MHMD have
integrated the solution to their routine systems.
This integrative architecture is a competitive and unique advantage for the
project as it enforces privacy-by-design starting immediately from the data
source and leaves full control to the data controllers over time. It also
makes it possible to establish a 2-phase development strategy for the market
place, starting from synthetic test data and then moving to exploitation with
routine data. Besides, having real clinical centres involved, they will
conform with their respective national laws as to the conservation of their
respective medical sensitive data over time.
## 2.5.2 Individual Personal Data
The basic data management resides on the Personal Data Account (PDA)
application of the DIGI.me that will retrieve in the background personal data
to an encrypted local library, which the users can then add to a personal
cloud of their choice (e.g. Dropbox, Google Drive, Microsoft OneDrive, or a
home based personal cloud such as Western Digital MyCloud) to sync across all
their devices. Hence, through the adoption of the digi.me app, MH-MD will
gather personal data from sparse data sources, from actual biomedical data to
data shared through social networks, from biometric data coming from wearable
and mobile devices to privacy preferences gathered with specific
questionnaires, etc.
A key benefit is that locally stored data do not interact or come into contact
with any other interface servers or third-party storage houses. The User
Interface (UI) will be engaging, while providing the users with an incentive
to appreciate and benefit from their data. Hence, the MHMD architecture is
such that no third party, nor MHMD itself, can directly access any user data
held in the personal MHMD encrypted library. Data subjects can permission
access to portions of that data to apps websites/businesses using a Permission
Access Certificate (PAC) that is designed to ensure explicit and informed
consent together with a clear requirement for “Right to Forget” and a protocol
to activate that Right at a later date.
# 2.6 Data usage and utility
The ultimate goal of MHMD is to extract valuable and accurate information from
clinical routine data targeting specific similarity analysis and knowledge
discovery use cases related to precision medicine and biomedical research.
Individual personal data will be used in conjunction with those coming from
clinical data repositories, and contribute to the overall data pool,
supporting cross-domain knowledge discovery analyses. For instance,
geolocation and physical activity data, as well as purchases and social media
activities, can provide valuable indicators to classify medical risk profiles.
Finally, the proposed platform will allow patients to share their data with
medical institutions and other organizations while still enjoying very strong
privacy safeguards.
# MAKING DATA FINDABLE, ACCESSIBLE, INTEROPERABLE AND REUSABLE [FAIR DATA]
## Data Modelling, Harmonisation and Integration
For the purpose of research and business, distributed biomedical and personal
data need to be normalized. The already existing MD-Paedigree/Cardioproof
Infostructure has been designed and implemented with this specific purpose in
mind and is currently deployed to serve both projects’ needs. This
infrastructure will be extended to ingest and semantically integrate
additional, non-medical data sources.
At the heart of the system, a patient centric data model will be developed
capturing and integrating all biomedical data following a dynamic Subjective-
Objective-Assessment-Plan (SOAP) model of an Electronic Medical Record
supporting vertical integration and temporal evolution. Whenever possible,
well-established biomedical onto-terminological resources such as ATC, SNOMED
CT, ICD-10, MESH, etc. will be incorporated either directly or as semantic
annotations. In addition, efficient data storage and handling of non-
traditional data types such as geolocation data, images and streams will be
supported, e.g., data from wearable devices.
For personal data, MHMD will further extend the already existing digi.me
Personal Data Account semantic modelling scheme taking into consideration
possible overlaps on biomedical data modelling. In addition to the patient
specific data, application specific data will be modelled and integrated.
## Data Cataloguing and Persistent Identifiers
MHMD will develop a catalogue service indexing available data in the centres
with Persistent Identifiers. The data model will be used to populate and
browse the MHMD global data catalogue, and it will be mapped to the Persistent
IDentifiers (PIDs), to create non-repudiable, persistent, unique and standard
identifiers for selected data points. The resulting data catalogue will be
browsable by advanced semantic-enabled engines and interfaces, making it
possible to segment, group, and thus create specific cohorts of data. PIDs will be used
in transactions in lieu of the actual data and will thus ensure that no
sensitive data is compromised nor exposed at any time in the transaction
processes.
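The PID mechanism described above can be sketched minimally as follows. This is an illustrative toy only (the class, method names and the use of UUIDs are our assumptions; real persistent identifiers would typically be minted by a dedicated PID service such as a Handle or DOI system): a catalogue maps opaque identifiers to data-point locators, so that transactions carry PIDs instead of the sensitive data itself.

```python
import uuid

class PidCatalogue:
    """Toy catalogue mapping persistent identifiers to data-point
    locators, so transactions can reference PIDs instead of data."""

    def __init__(self):
        self._index = {}

    def register(self, centre, dataset, record_id):
        # uuid4 gives a unique, non-guessable identifier for the sketch;
        # a production PID would come from a registration service.
        pid = str(uuid.uuid4())
        self._index[pid] = (centre, dataset, record_id)
        return pid

    def resolve(self, pid):
        # Only the catalogue service can map a PID back to its locator.
        return self._index[pid]

cat = PidCatalogue()
pid = cat.register("OPBG", "cardiology-2017", "rec-0042")
assert cat.resolve(pid) == ("OPBG", "cardiology-2017", "rec-0042")
```

Because only the catalogue can resolve a PID, an intercepted transaction exposes nothing about the underlying data point.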
## Accessibility and Data sharing
MHMD’s goal is to create the first open biomedical information network centred
on the connection between organisations and the individual, aiming at
encouraging hospitals to start making pseudo-anonymized or anonymised data
available for open research, while prompting citizens to become the ultimate
owners and controllers of their health data.
Regarding personal data, the GDPR legislation identifies two alternatives
regarding the application of the EU regulation:
* Anonymised (irreversibly de-identified or “sanitized”) data, for which re-identification is made impossible with current “state of the art” technology. For these types of data, the GDPR does not apply, so long as the data subject cannot be re-identified, even by matching his/her data with other information held by third parties. Data security, however, is not defined by the legal authority.
* Pseudonymised (partially de-identified) data: they constitute the basic privacy-preserving level allowing for some data sharing, and represent data where direct identifiers (e.g. Names, SSN) or quasi-identifiers (e.g. unique combinations of date and zip codes) are removed and data is mismatched with a substitution algorithm, impeding correlation of readily associated data to the individual’s identity. For such data, GDPR applies and appropriate compliance must be ensured.
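The pseudonymisation option can be illustrated with a minimal sketch (the field names and the keyed-hash substitution are illustrative assumptions, not the MHMD implementation): direct identifiers are replaced with values that only the key-holding data controller can reproduce, which is precisely why the GDPR continues to apply to such data.

```python
import hmac, hashlib

# Illustrative secret held by the data controller; never shared with processors.
PSEUDONYM_KEY = b"controller-held secret"

def pseudonymise(record, direct_identifiers=("name", "ssn")):
    """Replace direct identifiers with a keyed hash: reproducible for the
    key holder (so records can still be linked), opaque to everyone else."""
    out = dict(record)
    for field in direct_identifiers:
        if field in out:
            digest = hmac.new(PSEUDONYM_KEY, str(out[field]).encode(),
                              hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
    return out

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "I10"}
print(pseudonymise(patient)["diagnosis"])  # clinical payload is untouched
```

Note that the same input always maps to the same pseudonym, so the controller can re-link records over time while third parties cannot recover the identity without the key.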
In the context of MHMD both options will be considered and addressed through
well-defined data sharing flows as follows:
**Accessing Anonymized Data:** MHMD will consider possible re-use, sharing and
correct citation/crediting of specific subsets of Anonymised datasets in an
Open Science environment ensuring compliance with the European efforts and
policies related to Open Access and Open Data. In more detail, MH-MD will
consider the adoption of the appropriate policies in the entire data flow and
under specific consent will provide access to experimental anonymised datasets
through research data repositories and horizontal infrastructures (e.g.,
OpenAIRE, ZENODO). Such datasets could be either related to a small number of
variables targeting specific clinical research use cases or contain aggregated
/ statistical information (e.g., for an epidemiological research). Well
established anonymisation techniques will be incorporated ensuring specific
privacy guarantees (e.g., k-Anonymity) while optimizing data utility.
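As a minimal illustration of the k-anonymity guarantee mentioned above (column names and values are illustrative; MHMD's actual anonymisation tooling is far more sophisticated), a table is k-anonymous when every combination of quasi-identifier values is shared by at least k records:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs >= k times,
    so no record can be singled out within its equivalence class."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

rows = [
    {"zip": "751**", "age_band": "30-39", "diagnosis": "I10"},
    {"zip": "751**", "age_band": "30-39", "diagnosis": "E11"},
    {"zip": "751**", "age_band": "40-49", "diagnosis": "I10"},
    {"zip": "751**", "age_band": "40-49", "diagnosis": "J45"},
]
print(is_k_anonymous(rows, ["zip", "age_band"], k=2))  # True
print(is_k_anonymous(rows, ["zip", "age_band"], k=3))  # False
```

Generalisation (masked zip codes, age bands) is what creates the equivalence classes; the trade-off against data utility is exactly what the anonymisation engine optimises.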
**Accessing Pseudonymised (partially de-identified) data:** all clinical data
stored in the system will be pseudonymised and will be only accessible within
MHMD data management and data processing platform through specific privacy
preserving APIs. MHMD relies on a decentralized, blockchain-based
infrastructure that monitors and orchestrates data sharing transactions and a
multi-level privacy preserving and security layer that provides secure access
with specific privacy guarantees on the data. This way it ensures that data
will only be accessed and used from specific stakeholders and applications
(data processors) and for well-defined and specific purposes in alignment with
the data subject’s ‘dynamic’ consent. Dynamic Consent makes it possible to
extend traditional consents, combining them into a novel user workflow in which
patients may or may not allow access to their data based on a range of key
parameters:
* What will data be used for
* What will be done with the data
* What data will be retained
* What data will be shared with 3rd parties and for what purpose
* How will the right to be forgotten be implemented
Hence, MHMD will give the opportunity and assurance to the data subjects
(e.g., patients, hospitals, individuals) that they are able to control their
data in a flexible and agile manner, being enabled to monitor and re-evaluate
the clauses included in the initial agreement / consent.
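The consent parameters listed above can be captured in a machine-readable record. The following sketch is a hypothetical structure (not the MHMD smart-contract format): a revocable, purpose-bound consent object that gates every data access decision.

```python
from dataclasses import dataclass

@dataclass
class DynamicConsent:
    """Toy machine-readable consent mirroring the parameters above."""
    purposes: set        # what the data will be used for
    operations: set      # what will be done with the data
    retention_days: int  # how long retained data may be kept
    third_parties: dict  # third party -> permitted purpose
    revoked: bool = False  # right to be forgotten, exercisable at any time

    def permits(self, purpose, operation):
        return (not self.revoked
                and purpose in self.purposes
                and operation in self.operations)

consent = DynamicConsent({"research"}, {"statistics"}, 365, {})
print(consent.permits("research", "statistics"))  # True
consent.revoked = True                            # data subject withdraws
print(consent.permits("research", "statistics"))  # False
```

Because the record is data rather than a signed paper form, the subject can re-evaluate and amend individual clauses at any time, which is the essence of the dynamic-consent workflow.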
## Data Profiling and Data Quality
MHMD will incorporate the already existing DCV Data profiling and Data
Cleaning engine provided by ATHENA RC to assess and ensure the quality of the
data. DCV is able to analyse the content, structure, and relationships within
data to uncover patterns, inconsistencies, anomalies, and redundancies and
automate the curation processes using a variety of advanced data cleaning
methods. MHMD will work on expanding already existing data profiling
capabilities, defining a formal methodology to support classification of
medical data and correspondent security and privacy provisions suggested in
each category. The MHMD methodology will be framed by regulatory analysis and
yield indication for policies in those areas where current regulations are not
addressing fine grained operational constraints.
## Allocation of resources and responsibilities
* **Federated data management:** Gnúbila (a data privacy solution designer and independent software vendor) will develop, deploy and maintain the federated data management MHMD Infostructure for the clinical centres. Extending its FedEHR federated platform and its FedEHR Anonymizer product, that have already been deployed at the participating hospitals of MD-Paedigree and Cardioproof projects, Gnùbila will provide solutions to extract, de-identify, demilitarise and share medical sensitive data cross-enterprise and transnational.
* **Clinical Data modelling and data integration:** HES-SO (University of Applied Sciences Western Switzerland) (leader of WP4) will be responsible for the clinical data sourcing and preparation, the construction of a clinical data catalogue and the normalization of the clinical data with reference terminologies.
* **Personal data management:** Digi.me will provide and extend the already existing digi.me software and platform that will gather personal data from sparse data sources, from actual biomedical data to data shared through social networks and from biometric data coming from wearable and mobile devices to privacy preferences gathered with specific questionnaires. Digi.me will also provide expertise and knowledge as required concerning personal data, data normalisation, and health value exchange.
* **Data Profiling & Data Quality Assurance:** ATHENA RC will provide the necessary tools, techniques and methodologies for data profiling (including data sensitivity and privacy profiling) and data curation, extending the already existing DCV data profiling and data cleaning web based tool (deployed in MD-Paedigree project).
* **Privacy Preserving solutions and data security:** ATHENA RC (leader of the related WP5) will provide the anonymisation tool (AMNESIA) and the related techniques for privacy preserving data publication as well as a privacy preserving complex data flow execution engine (EXAREME) targeting privacy preserving data mining within MHMD. In addition, ATHENA will provide the required API for privacy preserving data access.
* **Blockchain Infrastructure and Smart Contracts:** Gnùbila (leader of the related WP6) will provide, integrate and deploy the blockchain platform which will handle consent and data transactions between the concerned centres. ATHENA RC will participate at the specification of the blockchain related policies, requirements and guidelines. Lynkeus will participate at the Smart Contracts specification.
# DATA PROTECTION, PRIVACY PRESERVATION AND DATA SECURITY
MHMD is dealing with highly sensitive biomedical and personal data hence data
security and privacy preservation will be addressed in every step of the data
processing flow, from harvesting and curation to sharing and analysis.
Following and implementing privacy-by-design and privacy-by-default
guidelines, MHMD will develop an innovative architecture for data storage,
access, and sharing, having recourse to federated data management and
blockchain / smart contracts technology, and combining it with multi-level
anonymisation and encryption techniques, whose efficiency and usability will
be quantitatively measured during the project’s duration. In addition, a
complete methodology for re-identification and penetration threats modelling
and test will be developed and the resulting system will be openly challenged,
to spot possible breaches.
## Privacy preserving data sharing and decentralized monitoring and
orchestration
As described in section 3.3, MHMD will combine and support two specific data
access / sharing flows:
* Privacy preserving data publishing where specific anonymized subsets of data will be exposed to external parties
* Privacy preserving complex data flow execution within MHMD platform, where specific applications will be able to process and analyse the pseudo-anonymized data through a well-defined secure API that implements multi-level privacy preservation techniques (including Secure Multi-Party Computation (SMPC), differential privacy and homomorphic encryption) targeting data mining and analytics.
A key novelty of MHMD will be the incorporation of these mechanisms in its
overall privacy policy in conjunction with cryptographic and data fishing
prevention techniques.
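Of the privacy-preservation techniques listed above, differential privacy is the simplest to illustrate. The sketch below is an assumption-laden toy (not the MHMD execution engine): a count query is answered with Laplace noise whose scale is calibrated to sensitivity/epsilon, so adding or removing any one patient changes the answer distribution only slightly.

```python
import math, random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse-CDF from one uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon, rng, sensitivity=1.0):
    """epsilon-differentially-private count query: one patient changes
    the true count by at most `sensitivity`, so noise with scale
    sensitivity/epsilon masks any individual's presence."""
    return true_count + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
print(dp_count(128, epsilon=0.5, rng=rng))  # noisy answer near 128
```

Smaller epsilon means more noise and stronger privacy; the noisy answers remain unbiased, so aggregate analytics over many queries stay useful.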
The entire platform will rely on a blockchain infrastructure to orchestrate
and monitor data sharing transactions (where transactions will be made of
anonymous consent(s) and their related PID(s)). Relying on the blockchain as a
distributed, public and transparent ledger will enable not only the project
stakeholders but also data subjects to witness data sharing activities at any
time, while decentralizing decision making on the actual transactions.
Transactions will be automated thanks to the provision of custom-tailored
smart contracts. This way, MHMD will promote decentralised privacy preserving
data sharing and analytics, increasing transparency and strengthening
individuals’ right to control and be aware of the processing of their data.
## Sensitivity and security data profiling
MHMD will provide a formal methodology to support privacy related profiling of
medical and personal data and adjust correspondent security and privacy
provisions. Such methodology will be framed by regulatory analysis and yield
indication for policies in those areas where current regulations are not
addressing fine grained operational constraints. Hence, MHMD will classify
data types and assign them to different security and privacy preserving
modules, based on their relevance, sensitivity, risk for the individual, and
practical value, and will also craft recommended best practices for the
protection of each data type. MHMD’s privacy profiling methodology and related
privacy preserving execution flow will impact both the way that privacy
related options are communicated to data subjects (providing a clear, easily
understandable privacy preservation scale per type and method) and the way
that privacy preservation techniques are applied (ensuring that engineers can
easily understand how to build privacy-friendly applications implementing the
concepts of Privacy by design and Privacy by default principles in practice).
## Software development
All software modules will encapsulate state-of-the art security,
authentication and authorization mechanisms. The robustness of such modules is
ensured by years of developments in the field (the basic building-blocks stem
from previously funded EU projects or from already functioning commercial
solutions) and will be tested through dedicated penetration / hacking tests
and challenges. In addition, data protection methods will be made available
through a set of secure APIs and Smart Contracts.
## Fingerprinting and watermarking
MHMD’s internal monitoring functions will be paired with scanning and tracking
functionalities, capable of identifying data that were leaked or fraudulently
acquired, by making use of fingerprinting and watermarking as a reactive
method, i.e. as a means to discover and attribute data leakages. Watermarks
embed a unique identification feature in the dataset, making it possible to
determine data identity and provenance. Fingerprinting is similar to
watermarking, but is further personalised to a specific user of a dataset,
thus making it possible to identify the specific source a dataset has been
obtained from.
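The distinction can be sketched with a toy fingerprinting scheme (field names, the secret key and the perturbation are illustrative assumptions, not the MHMD mechanism): each recipient's copy is nudged at a recipient-specific set of rows, so a leaked copy can later be attributed by checking whose marks it carries.

```python
import hmac, hashlib

SECRET = b"owner-held fingerprinting key"  # illustrative key

def marked_rows(recipient, n_rows, fraction=0.1):
    """Deterministically pick which rows carry this recipient's mark."""
    picked = set()
    for i in range(n_rows):
        tag = hmac.new(SECRET, f"{recipient}:{i}".encode(),
                       hashlib.sha256).digest()
        if tag[0] < 256 * fraction:
            picked.add(i)
    return picked

def fingerprint(dataset, recipient):
    """Embed a recipient-specific mark by nudging low-significance values."""
    rows = [dict(r) for r in dataset]
    for i in marked_rows(recipient, len(rows)):
        rows[i]["value"] = round(rows[i]["value"] + 0.001, 3)
    return rows

def attribute_leak(leaked, original, recipients):
    """Attribute a leaked copy to the recipient whose marks it carries."""
    def score(rec):
        marks = marked_rows(rec, len(original))
        return sum(leaked[i]["value"] != original[i]["value"] for i in marks)
    return max(recipients, key=score)

dataset = [{"value": i / 10} for i in range(200)]
leaked = fingerprint(dataset, "labB")  # the copy handed to "labB" leaks
print(attribute_leak(leaked, dataset, ["labA", "labB", "labC"]))
```

A plain watermark would embed the same mark in every copy (proving provenance); the per-recipient key material is what turns it into a fingerprint that also identifies the leaking party.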
## Penetration/hacking challenges
MHMD will organize penetration/hacking challenges, open to the participation
of external competitors. Self-hacking tests are also foreseen. For these
penetration challenges only synthetic datasets will be used. Both penetration
tests and patient re-identification scenarios will be executed to thoroughly
stress test the infrastructure, software and platform functions.
# ETHICAL ASPECTS
_To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables._
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0433_GOEASY_776261.md
|
**Introduction**
The purpose of this document is to present the initial Data Management Plan
(DMP) of the GOEASY project and to provide the guidelines for maintaining the
DMP during the project.
The Data Management Plan methodology approach adopted for the compilation of
D6.1 has been based on the updated version of the “Guidelines on FAIR Data
Management in Horizon 2020 version 3.0 released on 26 July 2016 by the
European Commission Directorate – General for Research & Innovation” 1 . All
GOEASY data will be handled according to EU Data protection and Privacy
regulation and the upcoming General Data Protection Regulation (GDPR) † .
The GOEASY DMP addresses the following issues:
* Data Summary
* FAIR data
* Making data findable, including provisions for metadata
* Making data openly accessible
* Making data interoperable
* Increase data re-use
* Allocation of resources
* Data security
* Ethical aspects
* Other issues
According to EU’s guidelines regarding the DMP, the document will be updated -
if appropriate - during the project lifetime (in the form of deliverables).
GOEASY will be deployed in two pilot sites in different countries: (I)
Stockholm, Sweden and (II) Turin, Italy. Currently, the deployment and usage
of the GOEASY functionalities are not yet defined. Therefore, we will need to
update the DMP with the data being collected/created at each pilot site
according to its usage and to whether it can be published as Open Data.
**Scope**
The scope of the DMP is to describe the data management life cycle for all
data sets to be collected, processed or generated in all Work Packages during
the 36 months of the GOEASY project. FAIR Data Management is highly promoted
by the Commission and since GOEASY deals with several kind of data, relevant
attention has been given to this task. However, the DMP is a living document
in which information will be made available on a more detailed level through
updates and additions as the GOEASY project progresses.
**Methodology**
The DMP concerns all the data sets that will be collected, processed and/or
generated within the project. The methodology the consortium follows to create
and maintain the project DMP is hereafter outlined:
1. Create a data management policy.
1. Using the elements that the EC guidelines 1 proposes to address for each data set.
2. Adding the strategy that the consortium uses to address each of the elements.
2. Create a DMP template that will be used in the project for each of the collected data sets, see Appendix 1 GOEASY Template for DMP.
3. Creating and maintaining DMPs
1. If a data set is collected, processed and/or generated within a work package, a DMP should be filled in. For instance, training data sets, example collections etc.
2. For each of the pilots, when it is known which data will be collected, the DMP for that pilot should be filled in.
4. The filled DMPs should be added to this document as updates in section 3.
1. This document is the living document describing which data is collected within the project as well as how it is managed.
5. Towards the end of the project, an assessment will be made about which data is valuable to be kept as Open Data after the end of the project.
   1. For the data that is considered to be valuable, an assessment of how the data can be maintained and the cost involved will be made. We expect that in the GOEASY project, the partners can share most of such data under an Open Data Commons Open Database License (ODbL).
**Related documents**
<table>
<tr>
<th>
**ID**
</th>
<th>
**Title**
</th>
<th>
**Reference**
</th>
<th>
**Version**
</th>
<th>
**Date**
</th> </tr>
<tr>
<td>
[RD.1]
</td>
<td>
Description of Action/ Grant Agreement
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**2**
</th>
<th>
**GOEASY Data Management Policy**
</th> </tr> </table>
The responsible party for creating and maintaining the DMP for a data set is
the partner that creates/collects such data. If a data set is collected,
processed and/or generated within a work package, a DMP should be created.
Before each pilot execution, it should be clear which data set is
collected/created in the pilot and how the data will be managed, i.e. the DMPs
for the pilot data must be ready and accepted. This will be done individually
for each of the pilots because of the difference between the pilots being in
different countries and of different types of events, i.e. closed, open etc.
**Naming and identification of the Data set**
To have a mechanism for easily identifying the different collected/generated
data, we will use a naming scheme. The naming scheme for GOEASY datasets will
be a simple hierarchical scheme including country, pilot, creating or
collecting partner and a describing data set name. This name should be used as
the identification of the data set when it is published as Open Data in
different open data portals. The structure of the naming of the dataset will
be as follows:
GOEASY_{Country or WP}_{Pilot Site or WP}_{Responsible
Partner}_{Description}_{Data Set Sub Index}
Figure 1: GOEASY Data Set Naming Scheme
The parts are defined as follows:
* GOEASY: Static for all data sets and is used for identifying the project.
* Country: The two letter ISO 3166-1 country code for the pilot where data has been collected or generated.
* WP: the work package together with work package number, e.g., WP6.
* Pilot Site: The name of the pilot site where the data was collected, without spaces with CamelCaps in case of multiple words, e.g., AsthmaWatch etc.
* Responsible Partner: The partner that is responsible for managing the collected data, i.e. creates and maintains the Open Data Management plan for the data set. Using the acronyms from D1.1.
* Description: Short name for the data set, without spaces with CamelCaps in case of multiple words, e.g., Location, Pollution level etc.
* Data Set Sub Index: Optional numerical index starting from 1. The purpose of the dataset sub index is that data sets created/collected at different times can be distinguished and have their individual meta data.
GOEASY_IT_Turin_GAPES_Location_1
Figure 2: GOEASY Data Set Naming Example
In the example shown in Figure 2, the Data set is created within GOEASY
project in Italy at Turin pilot site. GAPES is responsible for Open Data
Management plan for the dataset. The dataset contains location data and it is
the first of a series of data sets collected at different times.
There can be situations where the data needs to be anonymised with regards to
the location the data has been collected, for instance at some pilots it might
not be allowed to publish people count data with the actual event location for
security reasons. In these cases, the Country and Pilot Site will be replaced
by string UNKNOWN when it is made available as Open Data.
For data sets that are not connected to a specific pilot site the Pilot Site
should be replaced with the prefix
WP followed by the Work Package number that creates and maintains the Open
Data Management plan for the dataset, e.g., WP6. The same applies to the
Country part which also should be replaced with the prefix WP followed by the
Work Package number in the cases where the data set is not geographically
dependent, such as pure simulations or statistics.
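The naming scheme above can be made executable. The following sketch builds and parses data set names per Figure 1, including the UNKNOWN and WP substitutions; the regular expression is our illustrative reading of the scheme (it assumes CamelCaps alphanumeric parts with no internal underscores), not an official GOEASY artefact.

```python
import re

# Pattern following Figure 1; the data set sub index is optional.
NAME_RE = re.compile(
    r"^GOEASY"
    r"_(?P<country>[A-Z]{2}|WP\d+|UNKNOWN)"
    r"_(?P<site>[A-Za-z0-9]+)"
    r"_(?P<partner>[A-Za-z0-9]+)"
    r"_(?P<description>[A-Za-z0-9]+)"
    r"(?:_(?P<index>\d+))?$"
)

def build_name(country, site, partner, description, index=None):
    parts = ["GOEASY", country, site, partner, description]
    if index is not None:
        parts.append(str(index))
    return "_".join(parts)

def parse_name(name):
    m = NAME_RE.match(name)
    if not m:
        raise ValueError(f"not a GOEASY data set name: {name}")
    return m.groupdict()

name = build_name("IT", "Turin", "GAPES", "Location", 1)
print(name)                          # GOEASY_IT_Turin_GAPES_Location_1
print(parse_name(name)["partner"])   # GAPES
```

Validating names against one pattern at creation time keeps the data sets uniformly discoverable when they are later published to open data portals.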
**Data Summary / Data set description**
The data collected/created needs to be described including the following
information:
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
  * Provide the identification of the re-used data, i.e. GOEASY identifier or pointer to external data, if possible.
* Specify the origin of the data
* State the expected data size (if known)
* Outline the data utility: to whom will it be useful

**Fair Data**
FAIR data management means in general terms, that research data should be
“FAIR” ( **F** indable, **A** ccessible, **I** nteroperable and **R**
e-usable). These principles precede implementation choices and do not
necessarily suggest any specific technology, standard, or implementation
solution.
**2.3.1 Making data findable, including provisions for metadata**
This point addresses the following issues:
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism.
* Outline the naming conventions used.
* Outline the approach towards search keywords.
* Outline the approach for clear versioning.
* Specify standards for metadata creation (if any).
As far as the metadata are concerned, the way the consortium will capture and
store information should be described. For instance, for data records stored
in a database with links to each item, metadata can pinpoint their description
and location.
There are various disciplinary metadata standards; the GOEASY consortium has,
however, identified a number of available best practices and guidelines for
working with Open Data, mostly from organisations or institutions that support
and promote Open Data initiatives, which will be taken into account.
These include:
* Open Data Foundation
* Open Knowledge Foundation
* Open Government Standards
Furthermore, data should be interoperable, adhering to standards for data
annotation and data exchange, and compliant with available software
applications related to LBS.
**2.3.2 Making data openly accessible**
The objectives of this aspect address the following issues:
* Specify which data will be made openly available and, in case some data is kept closed, explain the reason why.
* Specify how data will be made available.
* Will the data be added to any Open Data registries?
* Specify what methods or software tools are needed to access such data, if a documentation is necessary about the software and if it is possible to include the relevant software (e.g. in open source code).
* Specify where data and associated metadata, documentation and code are deposited.
* Data that will be considered safe in terms of privacy, and useful for release, will be made available for download under the ODbL License.
* Specify how access will be provided in case there are restrictions.
**2.3.3 Making data interoperable**
This aspect refers to the assessment of the data interoperability specifying
which data and metadata vocabularies, standards or methodologies will be
followed in order to facilitate interoperability. Moreover, it will address
whether standard vocabulary will be used for all data types present in the
data set in order to allow inter-disciplinary interoperability.
In the framework of the GOEASY project, we will deal with many different types
of data coming from very different sources; in order to promote
interoperability we will use the following guidelines:
* OGC SensorThings API model for time series data 2 , such as environmental readings etc.
* If the data is part of a domain with well-known open formats that are in common use, this should be selected.
* If the data does not fall in the previous categories, an open and easily machine-readable format should be selected.
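To make the first guideline concrete, the sketch below shows the shape of an OGC SensorThings API Observation, the entity used to record one time-series reading such as an environmental measurement. The Datastream id (42) and the reading value are invented for illustration; a real deployment would POST such a payload to its SensorThings service endpoint.

```python
import json
from datetime import datetime, timezone

# Minimal OGC SensorThings API Observation payload for one time-series reading.
# The numeric values here are placeholders for illustration only.
observation = {
    "phenomenonTime": datetime(2018, 3, 15, 12, 0, tzinfo=timezone.utc).isoformat(),
    "result": 17.5,                    # e.g. an environmental reading
    "Datastream": {"@iot.id": 42},     # links the reading to its time series
}

payload = json.dumps(observation)
```

Because the payload is plain JSON following a published OGC model, it satisfies both the "well-known open format" and "machine-readable" guidelines above.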
**2.3.4 Increase Data Re-use**
This aspect addresses the following issues:
* Specify how the data will be licensed to permit the widest reuse possible.
  * Tool to help selecting license: _https://www.europeandataportal.eu/en/content/show-license_
  * If a restrictive license has been selected, explain the reasons behind it.
* Specify when data will be made available for re-use.
* Specify if the data produced and/or used in the project is useable by third parties, especially, after the end of the project.
* Provide a data quality assurance process description, if any.
* Specify the length of time for which the data will remain re-usable.
**Allocation of Resources**
This aspect addresses the following issues:
* Estimate the costs for making the data FAIR and describe the method of covering these costs.
  * This includes, if applicable, the cost for anonymising data.
* Identify responsibilities for data management in the project.
* Describe costs and potential value of long-term preservation.
**Data security**
The provisions for data security and recovery are taken care by the partners
running their respective databases. The responsible partners will use the
feedback generated by the security related tasks, when setting up their back
ends. The security related feedback comes from T3.1, T3.3 and T4.5.
The security and integrity of the data transfer is guaranteed by the applied
state-of-the-art software frameworks or libraries. At the end of the project,
the consortium will decide if a long term preservation of the data is needed.
EU's OpenAIRE suggestions for selecting a proper repository will be taken into
account.
**Ethical aspects**
The informed consent covers the intended use of the data, including long term
preservation, in accordance with EU regulation (see Appendix 2). Further,
Deliverable 8.1 “Ethics requirements” (including the updated Data Management
Plan) refers to ethical aspects, with special focus on POPD (Protection of
Personal Data).
**Other issues**
Other issues will refer to other national/ funder/ sectorial/ departmental
procedures for data management that are used.
<table>
<tr>
<th>
**3**
</th>
<th>
**Initial DMP Components in GOEASY**
</th> </tr> </table>
During the third and fourth quarters of the project, each work package will
analyse which DMP components are relevant for its activities. When the pilot
definitions are ready with regard to which data is collected and how data is
used, DMPs for the pilots will be created, following the template in Appendix
1. Below we present a first set of initial generic DMP components.
**WP2 – User Scenarios (ApesMobility)**
# Table 1: DMP for WP2 – User Scenarios (ApesMobility)
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
GOEASY_Italy_WP2_FIT_UserScenarios_ ApesMobility
</td> </tr>
<tr>
<td>
DMP Responsible
Partner
</td>
<td>
FIT
</td> </tr> </table>
**Date Partner Name Description of change**
Revision History **2018-03-15** FIT Yannick Bachteler Created initial DMP
<table>
<tr>
<th>
Data Summary
</th>
<th>
Definition of scenarios for scoping of the initial requirements (D2.1 Initial
Visions, Scenarios and Use Cases; updated in D2.4 Updated Visions, Scenarios,
Use Cases and Innovation; and in D2.6 Final Visions, Scenarios, Use Cases and
Innovations) is based on brainstorming, focus group and discussions with pilot
partners. Talking to and gathering data from end users is an integral part of
the GOEASY project and will help to ensure that a useful product is created.
Evaluated data is presented in a graphical way within the deliverables, e.g.
as mind maps.
</th> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
It will become both discoverable and accessible to the public when the
consortium decides to do so. D2.1 and its updated (D2.4) and final version
(D2.6) contain a table stating all versions of the document, along with who
contributed to each version, what the changes were, as well as the date the
new version was created.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is available in deliverables (D2.1, D2.4, D2.6). The dissemination level
of D2.1 is public. It is available through a document sharing system (BSCW)
for the members of the consortium. As soon as the deliverables are publicized,
they will be uploaded along with the other public deliverables to the project
website or anywhere else the consortium decides.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Raw data (e.g. audio recording of focus group) cannot be made freely available
because it contains sensitive information.
</td> </tr>
<tr>
<td>
Increase Data Re-use
</td>
<td>
Engineers, who want to build similar systems, could use it as a foundation.
</td> </tr>
<tr>
<td>
Allocation of Resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Documentation of scenarios will be securely saved on the FITs premises and
will be shared with the rest of the partners through the GOEASY wiki
(Confluence) and document sharing system.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other Issues
</td>
<td>
N/A
</td> </tr> </table>
**WP2 – User Scenarios (AsthmaWatch)**
# Table 2: DMP for WP2 – User Scenarios (AsthmaWatch)
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
GOEASY_Sweden_WP2_FIT_UserScenarios_AsthmaWatch
</td> </tr>
<tr>
<td>
DMP Responsible
Partner
</td>
<td>
FIT
</td> </tr> </table>
**Date Partner Name Description of change**
Revision History **2018-03-15** FIT Yannick Bachteler Created initial DMP
<table>
<tr>
<th>
Data Summary
</th>
<th>
Definition of user scenarios for scoping of the initial requirements (D2.1
Initial
Visions, Scenarios and Use Cases; updated in D2.4 Updated Visions, Scenarios,
Use Cases and Innovation; and in D2.6 Final Visions, Scenarios, Use Cases and
Innovations) is based on brainstorming, interviews and discussions with pilot
partners. Talking to and gathering data from end users is an integral part of
the GOEASY project and will help to ensure that a useful product is created.
Evaluated data is presented in a graphical way within the deliverables, e.g.
as mind maps.
</th> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
It will become both discoverable and accessible to the public when the
consortium decides to do so. The D2.1 and its updated (D2.4) and final version
(D2.6) contain a table stating all versions of the document, along with who
contributed to each version, what the changes were, as well as the date the
new version was created.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is available in deliverables (D2.1, D2.4, D2.6). The dissemination level
of D2.1 is public. It is available through the document sharing system (BSCW)
for the members of the consortium. As soon as the deliverables are publicized,
they will be uploaded along with the other public deliverables to the project
website or anywhere else the consortium decides.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Raw data (e.g. interview protocol) cannot be made freely available because it
contains sensitive information.
</td> </tr>
<tr>
<td>
Increase Data Re-use
</td>
<td>
Engineers who want to build similar systems, could use it as a foundation.
</td> </tr>
<tr>
<td>
Allocation of Resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Documentation of scenarios will be securely saved on the Fraunhofer premises
and will be shared with the rest of the partners through the GOEASY wiki and
document sharing system.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other Issues
</td>
<td>
N/A
</td> </tr> </table>
**WP2 – User Requirements**
# Table 3: DMP for WP2 – User Requirements
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
GOEASY_WP2_WP2_FIT_UserRequirements_1
</td> </tr>
<tr>
<td>
DMP Responsible
Partner
</td>
<td>
FIT
</td> </tr> </table>
**Date Partner Name Description of change**
Revision History **2018-03-15** FIT Yannick Bachteler Created initial DMP
<table>
<tr>
<th>
Data Summary
</th>
<th>
Analysis and definition of user requirements for scoping of the initial
requirements (D2.1 Initial Visions, Scenarios and Use Cases; and updated
versions) are based on brainstorming, interviews, focus group and discussions
with pilot partners (see previous DMP). The data is essential for the
technical team to develop the GOEASY platform; other partner teams throughout
the project, as well as the wider research community will benefit when results
are published.
</th> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
It will become both discoverable and accessible to the public when the
consortium decides to do so. The D2.1 and its updated (D2.4) and final version
(D2.6) contain a table stating all versions of the document, along with who
contributed to each version, what the changes were, as well as the date the
new version was created.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data is/ will be available in deliverables (D2.1, D2.4, D2.6). The
dissemination level of D2.1 is public. It is available through the document
sharing system (BSCW) for the members of the consortium. As soon as the
deliverables are publicized, they will be uploaded along with the other public
deliverables to the project website or anywhere else the consortium decides.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Raw data is recorded and formatted as user stories in the JIRA Issue tracker
hosted at Fraunhofer premises.
</td> </tr>
<tr>
<td>
Increase Data Re-use
</td>
<td>
Engineers who want to build similar systems, could use this as an example.
</td> </tr>
<tr>
<td>
Allocation of Resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Documentation of requirements will be securely saved on the Fraunhofer
premises and will be shared with the rest of the partners through the GOEASY
wiki and document sharing system.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other Issues
</td>
<td>
N/A
</td> </tr> </table>
**WP5 – Scalability and e-Security stress-tests**
# Table 4: DMP-template for WP5 – Scalability and e-Security stress-tests
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
GOEASY_WP5_WP5_ISMB_Stresstesting
</td> </tr>
<tr>
<td>
DMP Responsible
Partner
</td>
<td>
ISMB
</td> </tr> </table>
**Date Partner Name Description of change**
Revision History **2018-xx-xx** ISMB Created initial DMP
<table>
<tr>
<th>
Data Summary
</th>
<th>
</th> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
</td> </tr>
<tr>
<td>
Increase Data Re-use
</td>
<td>
</td> </tr>
<tr>
<td>
Allocation of Resources
</td>
<td>
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
</td> </tr>
<tr>
<td>
Other Issues
</td>
<td>
</td> </tr> </table>
**WP6 – Citizens Engagement, Recruitment and Support**
# Table 5: DMP-template for WP6 – Citizens Engagement, Recruitment and
Support
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
GOEASY_WP6_WP6_COT_CitizensEngagement
</td> </tr>
<tr>
<td>
DMP Responsible
Partner
</td>
<td>
COT
</td> </tr> </table>
**Date Partner Name Description of change**
Revision History **2018-xx-xx** COT Created initial DMP
<table>
<tr>
<th>
Data Summary
</th>
<th>
</th> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
</td> </tr>
<tr>
<td>
Increase Data Re-use
</td>
<td>
</td> </tr>
<tr>
<td>
Allocation of Resources
</td>
<td>
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
</td> </tr>
<tr>
<td>
Other Issues
</td>
<td>
</td> </tr> </table>
**WP6 – ApesMobility Pilot**
# Table 6: DMP-template for WP6 – ApesMobility Pilot
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
GOEASY_WP6_WP6_GAPES_ApesMobility
</td> </tr>
<tr>
<td>
DMP Responsible
Partner
</td>
<td>
ISMB
</td> </tr> </table>
**Date Partner Name Description of change**
Revision History **2018-xx-xx** GAPES Created initial DMP
<table>
<tr>
<th>
Data Summary
</th>
<th>
</th> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
</td> </tr>
<tr>
<td>
Increase Data Re-use
</td>
<td>
</td> </tr>
<tr>
<td>
Allocation of Resources
</td>
<td>
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
</td> </tr>
<tr>
<td>
Other Issues
</td>
<td>
</td> </tr> </table>
**WP6 – AsthmaWatch Pilot**
# Table 7: DMP-template for WP6 – AsthmaWatch Pilot
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
GOEASY_WP6_WP6_CNET_AsthmaWatchPilot
</td> </tr>
<tr>
<td>
DMP Responsible
Partner
</td>
<td>
CNET
</td> </tr> </table>
**Date Partner Name Description of change**
Revision History **2018-xx-xx** CNET Created initial DMP
<table>
<tr>
<th>
Data Summary
</th>
<th>
</th> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
</td> </tr>
<tr>
<td>
Increase Data Re-use
</td>
<td>
</td> </tr>
<tr>
<td>
Allocation of Resources
</td>
<td>
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
</td> </tr>
<tr>
<td>
Other Issues
</td>
<td>
</td> </tr> </table>
**WP6 – Holistic GOEASY Platforms and Applications Evaluation**
# Table 8: DMP-template for WP6 – Holistic GOEASY Platforms and Applications
Evaluation
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
GOEASY_WP6_WP6_FIT_PlatformEvaluation
</td> </tr>
<tr>
<td>
DMP Responsible
Partner
</td>
<td>
FIT
</td> </tr> </table>
**Date Partner Name Description of change**
Revision History **2018-xx-xx** FIT Created initial DMP
<table>
<tr>
<th>
Data Summary
</th>
<th>
</th> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
</td> </tr>
<tr>
<td>
Increase Data Re-use
</td>
<td>
</td> </tr>
<tr>
<td>
Allocation of Resources
</td>
<td>
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
</td> </tr>
<tr>
<td>
Other Issues
</td>
<td>
</td> </tr> </table>
**WP7 – Dissemination**
# Table 9: DMP-template for WP7 – Dissemination
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Identifier
</td>
<td>
GOEASY_WP7_WP7_GAPES_Dissemination
</td> </tr>
<tr>
<td>
DMP Responsible
Partner
</td>
<td>
GAPES
</td> </tr> </table>
**Date Partner Name Description of change**
Revision History **2018-xx-xx** GAPES Created initial DMP
<table>
<tr>
<th>
Data Summary
</th>
<th>
</th> </tr>
<tr>
<td>
Making data findable,
including provisions for metadata
</td>
<td>
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
</td> </tr>
<tr>
<td>
Increase Data Re-use
</td>
<td>
</td> </tr>
<tr>
<td>
Allocation of Resources
</td>
<td>
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
</td> </tr>
<tr>
<td>
Other Issues
</td>
<td>
</td> </tr> </table>
**Abbreviation**
<table>
<tr>
<th>
**Abbreviation**
</th>
<th>
**Explanation**
</th> </tr>
<tr>
<td>
DMP
</td>
<td>
Data Management Plan
</td> </tr>
<tr>
<td>
WP
</td>
<td>
Work Package
</td> </tr>
<tr>
<td>
IoT
</td>
<td>
Internet of Things
</td> </tr>
<tr>
<td>
LBS
</td>
<td>
Location-based Service
</td> </tr>
<tr>
<td>
WBS
</td>
<td>
Work Breakdown Structure
</td> </tr>
<tr>
<td>
GNSS
</td>
<td>
Global Navigation Satellite System
</td> </tr>
<tr>
<td>
API
</td>
<td>
Application Programming Interface
</td> </tr>
<tr>
<td>
OGC
</td>
<td>
Open Geospatial Consortium
</td> </tr>
<tr>
<td>
ICT
</td>
<td>
Information and Communication Technology
</td> </tr>
<tr>
<td>
FAIR data
</td>
<td>
Findable, accessible, interoperable and re-usable data
</td> </tr>
<tr>
<td>
GDPR
</td>
<td>
General Data Protection Regulation
</td> </tr>
<tr>
<td>
GAPES
</td>
<td>
greenApes Srl SB
</td> </tr>
<tr>
<td>
FIT
</td>
<td>
Fraunhofer Institute for Applied Information Technology
</td> </tr>
<tr>
<td>
CNET
</td>
<td>
CNet Svenska AB
</td> </tr>
<tr>
<td>
COT
</td>
<td>
Città di Torino
</td> </tr>
<tr>
<td>
ISMB
</td>
<td>
Istituto Superiore Mario Boella sulle Tecnologie dell’ Informazione e delle
Telecomunicazioni
</td> </tr>
<tr>
<td>
OGC
</td>
<td>
Open Geospatial Consortium
</td> </tr>
<tr>
<td>
POPD
</td>
<td>
Protection of Personal Data
</td> </tr> </table>
**References**
1. Guidelines on Fair Data Management in Horizon 2020, Version 3.0 26 July 2016; _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020hi-oa-data-mgt_en.pdf_ (Accessed 15 February 2018)
2. Official PDF of the Regulation (EU) 2016/679 (General Data Protection Regulation), _https://gdpr-info.eu/_ (Accessed 22 March 2018)
3. 2018 reform of EU data protection rules, _https://ec.europa.eu/commission/priorities/justiceand-fundamental-rights/data-protection/2018-reform-eu-data-protection-rules_en_ , (Accessed 22 March 2018)
4. Open Geospatial Consortium (OGC) SensorThings API.
_https://github.com/opengeospatial/sensorthings_ (Accessed 5 August 2017)
**Appendix 1 GALILEO Template for DMP**
<table>
<tr>
<th>
**DMP Element**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Identifier
</td>
<td>
The identifier of the data set following the GOEASY naming principles, see 2.1
</td> </tr>
<tr>
<td>
DMP Responsible Partner
</td>
<td>
The Partner that is responsible for creating and maintaining the DMP
</td> </tr>
<tr>
<td>
Revision History
</td>
<td>
**Date Partner**
</td>
<td>
**Name**
</td>
<td>
**Description of change**
</td> </tr>
<tr>
<td>
**2018-xx-xx** xxx
</td>
<td>
xxx
</td>
<td>
Created initial DMP
</td> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Guidelines in 2.2
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Guidelines in 2.3.1
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Guidelines in 2.3.2
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Guidelines in 2.3.3
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Increase Data Re-use
</td>
<td>
Guidelines in 2.3.4
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Allocation of Resources
</td>
<td>
Guidelines in 2.4
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Guidelines in 2.5
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
Guidelines in 2.6
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Other Issues
</td>
<td>
Guidelines in 2.7
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
**Appendix 2 GALILEO Informed Consent Form**
Users will be required to read, understand and sign the Informed Consent
Sheet.
_Generic Consent Form Template_ 3
I understand that my participation in the GOEASY project will involve [provide
brief description of what is required, e.g. ...completing two questionnaires
about my attitudes toward controversial issues which will require
approximately 20 minutes of my time.].
I understand that participation in this project is entirely voluntary and that
I can withdraw from the project at any time without giving a reason.
I understand that I am free to ask any questions at any time. I am free to
withdraw or discuss my concerns with [name].
I understand that the information provided by me will be held totally
anonymously, so that it is impossible to trace this information back to me
individually. I understand that this information may be retained indefinitely.
I also understand that at the end of the project I will be provided with
additional information and feedback about the purpose of the project.
I, ___________________________________(NAME) consent to participate in the
project conducted by [name]
Signed:
Date:
# Introduction
This document defines the specific means taken in the CARTRE project to manage
the datasets created in CARTRE, as well as efficient working procedures to be
used by the CARTRE consortium partners for collaboration.
The purpose of a Data Management Plan (DMP) is to describe the data management
life cycle for all datasets to be collected, processed or generated by a
research project. It covers the handling of research data during & after the
project; what data will be collected, processed or generated; what methodology
& standards will be applied; whether data will be shared / made open access &
how; how data will be curated & preserved (see reference in section 1.2).
This document is targeted at:
* The CARTRE consortium and associated partners: for laying out data management procedures
* Future researchers, in European projects or otherwise, that wish to reuse and use the CARTRE data sets to understand what is accessible and what the access procedures are
* Policy makers and stakeholders of Automated Road Transport (ART) for overviewing the positions, consensus and divergence in the ART domain
* The EC for assessing CARTRE’s approaches for data management.
## CARTRE Contractual References
CARTRE, Coordination of Automated Road Transport Deployment for Europe, is a
support action.
The Grant Agreement number is 724086 and the project duration is 24 months,
effective from 1 October 2016 until 30 September 2018. The EC Project Officer
is Mr. Ludger Rogge and the project coordinator is Dr. Maxime Flament, ERTICO.
## Authoritative documents
1. Grant agreement H2020-ART-2016 724086 CARTRE, Coordination of Automated Road Transport Deployment for Europe – CARTRE, 27 September 2016
2. CARTRE Project (724086) consortium agreement
3. Guidelines on Data Management in Horizon 2020 _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf_
4. Guidelines on the Implementation of Open Access to Scientific Publications and Research Data in Projects supported by the European Research Council under Horizon 2020 _https://erc.europa.eu/sites/default/files/document/file/ERC_Guidelines_Implementation_Open_Access.pdf_
# Data summary
## Purpose of the data collection/generation
The data that is collected or generated in CARTRE, contributes to building
position papers. These papers serve to feed the public discussion and policy
building and thus accelerate development and deployment of automated road
transport. The creation process of the position papers will increase
cooperation between key stakeholders from different sectors. By creating
common views, the CARTRE project encourages testing and sharing best
practices.
The position papers will be based on a total of 9 datasets, containing
information about the themes of interests, their challenges and statements
regarding the themes.
## CARTRE datasets
Table 1 gives an overview of all the datasets, their specific purpose and
metadata of the information in the dataset. Each dataset will be presented in
the form of a single document.
**Table 1 CARTRE datasets**
<table>
<tr>
<th>
**Data set number**
</th>
<th>
**Title**
</th>
<th>
**Purpose**
</th>
<th>
**Meta data**
</th>
<th>
**Dissemination Level**
</th> </tr>
<tr>
<td>
DS1
</td>
<td>
Contact details of participants
</td>
<td>
Allow networking and organising meetings
</td>
<td>
Contact name,
affiliation, partner type [beneficiary, associated partner, network],
organization type, contact type [partner, project, team], phone number, e-mail
address, address, VAT number, company
description, CVs of key personnel, relevant projects, relevant publications,
available infrastructure
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS2
</td>
<td>
Themes of
interest
</td>
<td>
To scope the project and to focus meetings and organisation
</td>
<td>
Id, title, theme description, 1st moderator, 2nd moderator, priority
</td>
<td>
Public
</td> </tr>
<tr>
<td>
DS3
</td>
<td>
Thematic
interests of partners
</td>
<td>
To identify what the partner wants to focus on in CARTRE.
</td>
<td>
Company name, membership
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS4
</td>
<td>
Challenges for
Automated
Road Traffic
</td>
<td>
To identify what needs to be solved to accelerate ART
</td>
<td>
Title, theme, background/context
</td>
<td>
Public
</td> </tr>
<tr>
<td>
DS5
</td>
<td>
Statements on
</td>
<td>
A confident and
</td>
<td>
Title, theme,
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**Data set number**
</td>
<td>
**Title**
</td>
<td>
**Purpose**
</td>
<td>
**Meta data**
</td>
<td>
**Dissemination Level**
</td> </tr>
<tr>
<td>
</td>
<td>
thematic interests or challenges
</td>
<td>
forceful statement of fact or belief. To provoke discussion or to formulate
consensus
</td>
<td>
challenge, background/context, date
</td>
<td>
</td> </tr>
<tr>
<td>
DS6
</td>
<td>
Votes on statements
</td>
<td>
Personal agreement or disagreement on a statement
</td>
<td>
Statement, vote (6-point scale [don’t agree at all to strongly agree]),
comments
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS7
</td>
<td>
Position papers
</td>
<td>
To formulate a draft or final position of the consortium and its network on
the Themes of Interest. To publish a clear position on the themes and what
needs to be addressed to accelerate ART.
</td>
<td>
Theme, Keywords,
Challenges, Statements, Voting statistics, Research topics
</td>
<td>
Public
</td> </tr>
<tr>
<td>
DS8
</td>
<td>
FOT-Net Catalogue entries
</td>
<td>
CARTRE will build on FOT-Net and VRA catalogues of ongoing tests and available
data
</td>
<td>
Information in FOT-Net wiki and Tools catalogues provided by Wiki users
</td>
<td>
Public
</td> </tr>
<tr>
<td>
DS9
</td>
<td>
VRA-net Catalogue entries
</td>
<td>
CARTRE will build on FOT-Net and VRA catalogues of ongoing tests and available
data
</td>
<td>
Information in VRAwiki
Data and Tools catalogues provided by Wiki users
</td>
<td>
Public
</td> </tr>
<tr>
<td>
DS10
</td>
<td>
Questionnaire data
</td>
<td>
CARTRE will use specific questionnaires to
collect input from its network
</td>
<td>
Votes on high-priority topics and free-form comments. The results will be
reflected in CARTRE’s work and deliverables.
</td>
<td>
Confidential unless included in position papers (DS7) or deliverables
(DS11)
</td> </tr>
<tr>
<td>
DS11
</td>
<td>
Formal deliverables
</td>
<td>
To provide the formal deliverables of the projects which document the results
of the project and provide evidence for achieving the project objectives.
</td>
<td>
Title, project title, project grant number, deliverable number, key words
</td>
<td>
Mostly public, following Grant agreement
</td> </tr> </table>
## Origin of data
The input for the position papers will originate from the CARTRE network and
from the consortium participants of CARTRE, who provide questionnaire data on
challenges, statements and votes. Inputs from related CSA projects are used,
in particular the Vehicle and Road Automation project ( _http://vra-net.eu/,_
currently closing), SCOUT (running in parallel), Mobility4EU (
_http://www.mobility4eu.eu/_ 2016–2018) and FOT-Net Data ( _www.fotnet.eu_ ,
ending December 2016). This is in line with the CARTRE coordination action
principle of avoiding reinventing the wheel by exchanging experience and
knowledge from existing research.
CARTRE will build on FOT-Net’s and VRA’s wiki catalogues (DS8, DS9) on
automated driving tests and other Field Operational Tests (FOTs):
_http://wiki.fot-net.eu/_ and _http://vranet.eu/wiki_ . These catalogues
provide information on past and ongoing test campaigns worldwide. Their
content is based on input received from the FOT and VRA communities. Besides
the tests, the catalogues provide further details on available data (FOT Data
Catalogue) and tools that have been used (FOT Tools Catalogue).
CARTRE will set out public questionnaires (DS10) e.g. about high-priority
research questions for upcoming tests of automated driving. Such data will be
used in guiding the project work and collaboration topics, as well as in
publications.
# General CARTRE approach to data management
The mission of CARTRE is to accelerate development and deployment of automated
road transport by increasing market and policy certainties. This is further
operationalised in objectives on 1. Public–private collaboration; 2.
International cooperation within and beyond Europe and 3\. Strategic alignment
of national action plans.
Since CARTRE is a coordination and support action, the project does not aim to
develop protectable Intellectual Property. However, new results will be
created that are valuable for various stakeholders. This document describes
how the project will handle the information, with a particular focus on open
research data.
Several fundaments of the CARTRE data management plan have been identified:
1. Within the partners and associated partners, the new information developed is open to all project partners after registration on a project intranet site, as set out in the Consortium Agreement (section 9.3).
2. Before publication of the results to the public, all partners have a right to review the material. The consortium agreement arranges further details on notification, protection of partner interests and objections (section 8.4).
3. The identity of the author is kept confidential for DS4 ‘challenges’, DS5 ‘statements’ and DS6 ‘votes’ (see Table 1). This allows for free discussion. Only the intranet site administrators can view the identity of the contributing partner.
4. The consortium agreement details provisions on intellectual property of the content.
5. By joining the project, the partners are deemed to have consented to creating the data collected, as this is the purpose of the project. This is again confirmed in the Consortium Agreement (Section 2, Purpose). Therefore, no informed consent forms are used within the consortium.
6. For questionnaire respondents outside of the project consortium, an information sheet and informed consent form will be used, as shown in Appendix 1. This may be done on paper or in electronic form as part of a questionnaire.
7. All deliverables of the project are considered open to the public with the exception of 7 confidential deliverables, as specified in the Grant Agreement.
8. Interested network members can become associates to the CARTRE project. On signing the Associated Partnership Letter of Intent, they can have access to the CARTRE Sharepoint site to contribute to the content creation process.
# Making data findable, including provisions for metadata
The collected and generated data as listed in Table 1 will be made available
to the participants of the CARTRE project via the Microsoft SharePoint
document management and storage system. This environment will also be
accessible to the associated partners of the CARTRE project. It is
administered and provided by TNO.
Microsoft SharePoint provides functionality for identification mechanisms,
such as:
* Unique identifiers for files
* Adding keywords for findability in the environment
* Clear and automatic version history of each document
* Adding metadata to documents.
From the Grant agreement article 29.2, the metadata requirements below are
applied. We quote:
_“The bibliographic metadata must be in a standard format and must include all
of the following:_
* _the terms “European Union (EU)” and “Horizon 2020”;_
* _the name of the action, acronym and grant number;_
* _the publication date, and length of embargo period if applicable, and_
* _a persistent identifier.”_
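A minimal sketch of how a record meeting these requirements could be represented in code. The field names and example values are illustrative assumptions, not a prescribed schema; only the required terms, the CARTRE acronym and grant number 724086 come from this document.

```python
# Hypothetical sketch of a bibliographic metadata record satisfying the four
# requirements quoted from Grant Agreement article 29.2 above.
REQUIRED_FIELDS = ("funding", "action", "publication_date", "persistent_id")

def make_metadata(title, publication_date, persistent_id, embargo_months=0):
    """Build a metadata record for a project publication (field names are illustrative)."""
    record = {
        "title": title,
        # Required terms "European Union (EU)" and "Horizon 2020":
        "funding": "European Union (EU), Horizon 2020",
        # Name of the action, acronym and grant number:
        "action": "CARTRE, grant number 724086",
        "publication_date": publication_date,
        "embargo_months": embargo_months,  # length of embargo period, if applicable
        "persistent_id": persistent_id,    # e.g. a DOI or handle
    }
    missing = [field for field in REQUIRED_FIELDS if not record.get(field)]
    if missing:
        raise ValueError(f"metadata incomplete, missing: {missing}")
    return record
```

Such a check could, for instance, be run before a document is published to the public website.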
# Data sharing within consortium
The data sharing procedures apply to all data types and are in accordance with
the Grant Agreement. Table 2 outlines the project access procedures and rights
in relation to the data gathered throughout the CARTRE project.
The data will be stored in a data management and storage system Microsoft
SharePoint. This environment provides access control and traceability of the
stored data.
An administrator of the Microsoft SharePoint environment can provide
restrictions on groups of files and individual files. Authorization and access
control will be handled by the coordination team of the CARTRE project. Each
participant of the CARTRE project or an associated partner who has signed the
Letter of Intent has access to the SharePoint environment for data access. At
the end of the project, the public deliverables will be made available through
the CARTRE public website.
To track the changes made by each participant or network member, each
authorized user must be registered in Microsoft SharePoint. Changes by a user
are tracked via the metadata of the data in the SharePoint environment.
However, content for the statements, challenges and votes is added anonymously
by the users.
**Table 2 Data access procedure**
<table>
<tr>
<th>
**Activity**
</th>
<th>
**Access procedures and rights**
</th> </tr>
<tr>
<td>
Registration to Project Intranet
</td>
<td>
Interested network members can apply to become an associated member. The
coordination team decides whether the network member is an appropriate
organisation for the CARTRE project. If so, the network member can sign the
Associated Partner Letter of Intent. After signature, access is granted to
staff of the new Associated Member.
</td> </tr>
<tr>
<td>
Access rights
</td>
<td>
The CARTRE consortium agreement specifies access rights to Results and
background (Section 9). This includes open access for implementation of the
CARTRE Grant and conditional access for exploitation of the results.
</td> </tr>
<tr>
<td>
Compilation of challenges and votes into a position paper
</td>
<td>
The compilation of a position paper is done in various teleconferences and
live meetings among the CARTRE contacts that are interested in the Theme
topic. Full attendance is not needed. Each Theme of Interest has a moderator
and possibly a second moderator. The theme moderator performs a role as editor
of the position paper to prepare it for review.
</td> </tr>
<tr>
<td>
Publication of a project result
</td>
<td>
The consortium agreement specifies a period for prior notice of a planned
publication and procedures for objections of the other partners. It also
refers to article 29.1 of the Grant Agreement which specifies that results
should be published as soon as possible within the constraints of IP
protection, partner interests and security. These documents are authoritative
(this summary is not).
</td> </tr> </table>
# Making data openly accessible
## Dissemination of data
The Dissemination Strategic Plan has been described in deliverable D6.1. The
data will be promoted within the constraints of the Grant agreement.
For future research, the catalogues, themes, challenges and position papers
are available. The statements and votes are not published. See Table 1 on page
5 for the full list.
The data is published via two channels: a website as a public channel and via
the Microsoft SharePoint environment. Information and data on the public
website can be used by all visitors anonymously. The audience requiring
specific data can request access to the Microsoft SharePoint environment as an
associated partner (see Table 2).
The reasons for not disclosing certain datasets to the public or research
community are as follows:
**Table 3 Grounds for not disclosing certain datasets**
<table>
<tr>
<th>
**Data set**
**number**
</th>
<th>
**Data set**
</th>
<th>
**Ground for non disclosure**
</th> </tr>
<tr>
<td>
DS1
</td>
<td>
Contact details
</td>
<td>
Privacy, prevention of spam messages or advertising
</td> </tr>
<tr>
<td>
DS4
</td>
<td>
Thematic interests of partners
</td>
<td>
May reveal business interests
</td> </tr>
<tr>
<td>
DS5
</td>
<td>
Statements on themes of interests
</td>
<td>
Publication of the author would lead to heavy self-censoring by the authors.
By allowing anonymous statements, participants feel freer to create content.
Also, the raw statements are not subjected to review and may contain
politically incorrect or commercially sensitive content. The editing and
review process of the position papers assures the support of the consortium
members for the subset of statements included in the position papers.
</td> </tr>
<tr>
<td>
DS6
</td>
<td>
Votes on statements
</td>
<td>
Privacy and ethical grounds for stimulating independent voting.
</td> </tr> </table>
Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their company-specific data closed if relevant
provisions are made.
## Public website
The dissemination strategy for the website as a public deliverable has been
described in deliverable D6.1. It will be a shared website with the SCOUT
project using the URL: _http://www.connectedautomateddriving.eu/_
The public website has no access control (except for website administration)
and is strictly separated from the project intranet using SharePoint where all
confidential information is handled.
## Project intranet website
Authorization and data access for the SharePoint environment is managed by the
coordination team. Daily administration is handled by TNO. The environment
allows users to view the data via an internet browser. The data can be edited
using Microsoft Office or equivalent (open-source) software. Furthermore,
the data can also be accessed via database protocols, such as SQL or Microsoft
Access, or the SharePoint environment can be mapped as a network drive.
SharePoint supports all of these options.
Similar to the SharePoint environment for the consortium, this environment
will have options for metadata. External users will also need to be registered
as an authorized user to access the data and require extended authorization to
modify data.
# Making data interoperable
Data interoperability allows researchers, institutions, organisations,
countries, etc. to reuse existing data by adhering to (existing) standards in
the field and by using available (open) software applications where possible.
Data in the project’s wiki catalogues is structured and uses the metadata
definitions developed for FOTs in the FOT-Net Data project.
Documents, spreadsheets and presentations are stored in Portable Document
Format (pdf) as well as accepted XML formats. This is supported by software
such as Acrobat Reader, Microsoft Office and open source software such as
LibreOffice.
Questionnaire data can be exported from the electronic databases to various
formats on request.
## Increase data re-use through clarifying licenses
The following licenses to the datasets have been identified.
**Table 4 License types for data sets**
<table>
<tr>
<th>
**Dataset number**
</th>
<th>
**Title**
</th>
<th>
**License**
</th> </tr>
<tr>
<td>
DS1
</td>
<td>
Contact details of participants
</td>
<td>
No license, confidential
</td> </tr>
<tr>
<td>
DS2
</td>
<td>
Themes of interest
</td>
<td>
Copyright
</td> </tr>
<tr>
<td>
DS3
</td>
<td>
Thematic interests of partners
</td>
<td>
No license, confidential
</td> </tr>
<tr>
<td>
DS4
</td>
<td>
Challenges for Automated Road
Traffic
</td>
<td>
Copyright
</td> </tr>
<tr>
<td>
DS5
</td>
<td>
Statements on thematic interests or challenges
</td>
<td>
No license, confidential
</td> </tr>
<tr>
<td>
DS6
</td>
<td>
Votes on statements
</td>
<td>
No license, confidential
</td> </tr>
<tr>
<td>
DS7
</td>
<td>
Position papers
</td>
<td>
Copyright
</td> </tr>
<tr>
<td>
DS8
</td>
<td>
FOT-net catalogue entries
</td>
<td>
Open data
</td> </tr>
<tr>
<td>
DS9
</td>
<td>
VRA-net catalogue entries
</td>
<td>
Open data
</td> </tr>
<tr>
<td>
DS10
</td>
<td>
Questionnaires
</td>
<td>
Copyright
</td> </tr>
<tr>
<td>
DS11
</td>
<td>
Formal deliverables
</td>
<td>
Public deliverables: copyright
Confidential deliverables: copyright
</td> </tr> </table>
For the copyrighted data sets, a Creative Commons CC BY license will be
explored. This gives permission to share (copy and redistribute the material
in any medium or format), adapt (remix, transform, and build upon the material
for any purpose, even commercially), provided appropriate credit is given (see
_https://creativecommons.org/licenses/by/3.0/_ ). This will be explored by the
coordination team and needs support of the General Assembly.
The deliverables (DS11) are available only for partners and associated
partners until positive review by the project officer.
The public project results (see Table 1) can be used by third parties upon
release of the results. The reuse is under the condition of appropriate credit
to the CARTRE project.
The data will be reusable for four years. However, since position papers
become outdated quickly in the rapidly changing world of automated road
traffic, the value of the data will drop markedly after two years.
## Data quality
Data quality assurance processes related to position papers are based on
internal review processes. The quality of public wiki catalogues or
questionnaire data is only checked by administrators against expected data
types.
## Data security
The secured server ecity.tno.nl requires a username and password. Users of a
site are invited by the project manager, and are given access only to specific
projects. The server received an A rating in the Qualys SSL Labs SSL report
(https://www.ssllabs.com/ssltest), and uses only TLS 1.2, 1.1 and 1.0.
Project sites on this server are being migrated to Office 365 SharePoint
Online. After migration to this environment, use of multi-factor
authentication to access the sites will be compulsory.
Backups are automatically run using a fixed schedule.
# Ethical aspects
The main ethical aspect of the datasets creation and data usage remains in the
privacy of the authors and the partner companies. This is addressed by
anonymous creation of content for challenges, statements and votes. The ethics
are further addressed in WP7 with deliverables D7.1, D7.2, D7.3 and D7.4.
Commission regulations for research projects specify the relevant ethical
topics: human participants in the research, protection of personal data, and
third countries.
For human participants in questionnaires, in particular network members and
public outside of the project consortium, an informed consent form will be
used as shown in Appendix 1.
# Glossary: Acronyms and definitions
<table>
<tr>
<th>
**Term**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
ART
</td>
<td>
Automated Road Transport
</td> </tr>
<tr>
<td>
Background
</td>
<td>
Background IPR as defined in Article 24 of the Grant Agreement
</td> </tr>
<tr>
<td>
CARTRE
</td>
<td>
EU H2020 ART06 CSA project CARTRE, GA number 724086
</td> </tr>
<tr>
<td>
Consent Form
</td>
<td>
A form signed by a participant to confirm that he or she agrees to participate
in the research and is aware of any risks that might be involved.
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Metadata is data that describes other data. Meta is a prefix that in most
information technology usages means "an underlying definition or description."
Metadata summarizes basic information about data, which can make finding and
working with particular instances of data easier.
_http://whatis.techtarget.com/definition/metadata_
</td> </tr>
<tr>
<td>
Participant
Information
Sheet
</td>
<td>
An information sheet is an important part of recruiting research participants.
It ensures that the potential participants have sufficient information to make
an informed decision about whether to take part in your research or not.
</td> </tr>
<tr>
<td>
Project intranet web site
</td>
<td>
A closed website for interaction between registered partners and associated
partners. In this case a Microsoft SharePoint site
_https://ecity.tno.nl/sites/eu-cartre_ or _https://partners.tno.nl/sites/eu-
cartre_
</td> </tr>
<tr>
<td>
Public CARTRE web site
</td>
<td>
Joint CARTRE-SCOUT website with the URL
_http://www.connectedautomateddriving.eu/_
</td> </tr>
<tr>
<td>
Repository
</td>
<td>
A digital repository is a mechanism for managing and storing digital content.
</td> </tr>
<tr>
<td>
Statements
</td>
<td>
A confident and forceful statement of fact or belief, used to provoke
discussion or to formulate consensus.
</td> </tr>
<tr>
<td>
Votes
</td>
<td>
Personal agreement or disagreement on a CARTRE statement
</td> </tr> </table>
# Appendix 1. Information sheet and Informed consent template
This template outlines information to be given in CARTRE questionnaires and an
informed consent form regarding the use of the collected data. The consent can
be collected on paper or digitally as part of an introduction to an
electronically conducted questionnaire.
Thank you for your willingness to fill in a CARTRE questionnaire.
The CARTRE project is set up to accelerate European automated road transport.
It does so by collaboration between partners, international cooperation and
joint agenda setting for research and policy. It results in a number of
position thematic papers (among others).
Your input is used to determine what the challenges, opportunities and
opinions are on these themes. We look for both consensus and contrasting
opinions.
Your input will be used in an anonymised way. We may quote you anonymously.
Your votes on statements will be shown together with other anonymous replies.
We ensure that readers cannot derive your identity.
If you have further questions, you can contact the CARTRE member who gave you
this questionnaire, [email protected] or ERTICO
(http://ertico.com/ or phone +32 2 4000 700).
Please answer the following questions about your approval:
I agree that my answers to the questionnaire are used anonymously in CARTRE
publications.
I agree I do not agree
I agree that some of my answers may be quoted anonymously.
I agree I do not agree
# 1\. Introduction
The Data Management Plan (DMP) consists of a description of the data
management life cycle for the data to be produced, collected, and processed,
and will include information on the handling of data during and after the end
of the project, i.e. what data will be produced, collected, and processed,
which methodology and standards will be applied, whether data will be shared
and/or made open access, and how data will be curated and preserved (including
after the end of the project).
## 1.4 Scope, objectives and expected impact
The scope of this document is to provide the procedure to be adopted by the
project partners and subcontractors to produce, collect and process the
research data from the IRIS demonstration activities. The adopted procedure
follows the guidelines provided by the European Commission in the document
_Guidelines on FAIR Data Management in Horizon 2020_ .
This document has been built based on the Horizon 2020 FAIR DMP template
(Version: 26 July 2016), which provides a set of questions that the partners
should answer with a level of detail appropriate to the project. It is not
required to provide detailed answers to all the questions in this report.
The DMP is intended to be a living document in which
updates as the implementation of the project progresses and when significant
changes occur. As a minimum, the DMP shall be updated in the context of the
periodic evaluation/assessment of the project.
This second report on the DMP, submitted at M12 (30th September 2018),
describes a preliminary plan for data production, collection and processing,
and will be continuously updated until the end of the project, as part of WP9
activities. Update D9.9 (second update on the Data management plan) which will
be delivered in M30, includes a final revision of the information presented in
D9.8. This revision will be mainly based on the feedback on D9.8 which will be
given in the plenary meeting in Nice (October 2018) and new insights that will
arise when data is actually being managed. Further on D9.9 will be a version
that includes the templates of D9.8 filled in with information about the data
that is being aggregated from the various demonstrator projects within the
IRIS framework.
Finally, D9.10 (M42) and D9.11 (M60) are mainly continuations of D9.9,
including the details of all new datasets gathered or aggregated in the period
between D9.9 and the moment each update is delivered.
The availability and sharing of project data will raise the impact of IRIS
activities, allowing for access to a large number of stakeholders. The DMP
considers (see Figure 1):
* Data Types, Formats, Standards and Capture Methods
* Ethics and Intellectual Property
* Access, Data Sharing and Reuse
* Resourcing
* Deposit and Long-Term Preservation
* Short-Term Storage and Data Management
_Figure 1. Aspects considered in the data management plan._
## 1.5 Contributions of partners
The main project partners in T9.2 are UU, RISE and CERTH. UU, as the leader in
T9.2, is responsible for coordinating the activities related to the definition
of the data model and the DMP for performance and impact measurement. RISE as
the WP9 leader ensures that all activities are in line with other related WPs
by establishing communication with the respective WP leaders. Part of this
work entails cooperation with ongoing projects, initiatives and communities in
WP2, such as the H2020-SCC CITYKEYS project for smart city performance
indicators, and facilitation for all performance data to be incorporated into
the database of the EU Smart City Innovation System (SCIS). Furthermore, RISE
as the leader in T9.1 ensures that all relevant data are addressed in D9.1,
based on the initial definition of the KPIs included in T9.1, as well as that
any new KPIs, being introduced if the need arises to modify them after review,
are addressed in D9.9 (Second update on the DMP which is due to be published
in M30).
RISE organised WP9 workshops in March and April 2018 in all the Lighthouse
Cities (LH), i.e. Gothenburg, Nice and Utrecht, to discuss LH solutions and
possible monitoring strategies for technologies, indicators and data
collection. Different than projected in D9.1 these workshops where more about
the LH solutions and the KPI’s itself. It was too early to define detailed
monitoring strategies in these sessions. Therefore, another session on this
topic is planned in the 3 rd IRIS Consortium plenary Board meeting in Nice.
This meeting will take place on October 16, 17 and 18 of 2018. In the
interactive program that will be created for all consortium partner contacts a
workshop on the Data Management Plan will be organized.
Instead of directly supplying data collection sheets, all the LH will be
invited to provide input on relevant data to be collected, discuss the purpose
of utilisation of collected data and the project goals together with IMCG
representing WP3 Business models. These workshops will establish a harmonised
approach among the LH with respect to the DMP development and the Pilot on
Open Research Data 1 .
CERTH as the leader in T9.3 ensures that the development of the first report
of the DMP and T9.2 activities are in line with T9.3 activities and the
development of the City Innovation Platform (CIP).
In the course of the project, the project partners will be guided by the T9.2
leader and the WP9 leader on how to provide input and report on data to be
generated or collected during the project by using the templates listed in
this second report on the DMP.
## 1.6 Relation to other activities
In Figure 2, the timeline for the DMP development within the IRIS project is
illustrated, pointing out interactions with other tasks and WPs. Next to this
document, the DMP will be further updated in M30 (D9.9: Second update on the
Data management plan), in M42 (D9.10: Third update on the Data management
plan), and in M60 (D9.11: Fourth and final update on the Data management
plan).
WP9 and WP4 activities are connected (including the linkage to activities in
T4.3 ‘Data Governance Plan’ which is meant to facilitate a smooth, secure and
reliable flow of data, including the description of supporting processes and
assets, and also addressing privacy and ethical issues). The work in T9.2 will
be performed in close and continuous collaboration with WP 5-7 to ensure that
the DMP addresses data and relevant developments from the IRIS demonstration
activities in the LH. Furthermore, with respect to ethical aspects each LH and
FC will have its own Ethics Committee and one person will be nominated per
site as responsible for following the project’s recommendations and the
National and European legislations (See Section 6.1.2), thus linking WP9 to WP
5-7 and to WP8 (Replication by Lighthouse regions, Follower cities, European
market uptake). Finally, T9.2 will also ensure privacy and security of
sensitive information, for legal or ethical reasons, for issues pertaining to
personal privacy, or for proprietary concerns linking to WP3.
At first glance, the data management plan might seem to have large
similarities with D9.3 (Data model and management plan for integrated
solutions). The main difference is that data management plan D9.1 focuses
primarily on the definition of datasets, whereas D9.3 defines the variables
within these sets and how these variables determine the KPIs.
_Figure 2. Timeline for the DMP development within the project duration,
indicating interactions with other work tasks and packages._
## 1.7 Structure of the deliverable
This document has been built based on the Horizon 2020 FAIR DMP template
(Version: 26 July 2016). Accordingly, the document is structured as follows:
**Section 2 Data Summary** : This section provides Table 1, which summarizes
the data to be generated/collected during the project. This table includes
standardised items, whose contents are described in the section.
**Section 3 FAIR data** : Besides this data summary, more information about
the data is required to meet the demands of FAIR- data. Section 3 shortly
describes what this means. It introduces another table with items that should
be added in the data management plan, together with a description.
**Section 4 Allocation of Resources:** Section 4 is about the costs of making
FAIR data.
**Section 5 Data security:** Refers to how each partner will make sure it
keeps its data secure.
**Section 6 Ethical aspects:** Refers to the ethical aspects that arise during
the production and utilization data in the IRIS project **.**
**Section 7 Other issues:** In this section, the project partners will report
the use of any other national/funder/sectorial/departmental procedures for
data management.
# 2 Data Summary
In Table 1 a summary is provided of the data to be generated or collected
during the project. This table includes standardised items and lists as
described below.
At this stage of the project it is still not possible to list the exact
data that will be generated/collected during the project, since relevant
activities in T9.1 ‘Specification of the monitoring and evaluation methodology
and KPIs definition’ are running in parallel. A full overview of the data will
be possible after the completion of T9.1 and the submission of D9.2 ‘Report on
monitoring and evaluation schemes for integrated solutions’ in M12.
Apart from some minor modifications, the main difference between the tables in
this document and the ones in D9.1 is the appearance. To facilitate the
collection of data, a large Excel table has been created (Annex 4), including
all the different tables in this document and additional space for the data
mentioned in chapters 4 and 5.
A workshop in the plenary meeting in Nice (October 2018) will be organized to
improve, initiate and inform about the data collection process.
## 2.1 Explanation for the input of table 1
In **Column 1 ‘** Title of data set’: Each dataset should be named according
to the following model:
IRIS_XX_YYY_NAME
Where:
* XX corresponds to the abbreviation of the lighthouse or follower city providing the data set as defined in the table below:
<table>
<tr>
<th>
**Lighthouse city**
</th>
<th>
**Abbreviation**
</th>
<th>
**Follower City**
</th>
<th>
**Abbreviation**
</th> </tr>
<tr>
<td>
Gothenburg
</td>
<td>
GO
</td>
<td>
Alexandropoulis
</td>
<td>
AL
</td> </tr>
<tr>
<td>
Nice
</td>
<td>
NI
</td>
<td>
Focsani
</td>
<td>
FO
</td> </tr>
<tr>
<td>
Utrecht
</td>
<td>
UT
</td>
<td>
Santa Cruz de Tenerife
</td>
<td>
SC
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Vaasa
</td>
<td>
VA
</td> </tr> </table>
* YYY is an abbreviation for the demonstrator project or integrated solution of which the dataset is part of (can be defined by the project leader)
* NAME specifies a name or a short title for the corresponding data set. The name/title shall be self-explanatory regarding the nature/purpose of the data set.
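The naming model above could be enforced with a small helper. This is a sketch only: the regular expression and function names are illustrative assumptions, not part of the IRIS specification; the city codes are taken from the table above.

```python
import re

# City abbreviations from the table above (Lighthouse and Follower cities).
CITY_CODES = {"GO", "NI", "UT", "AL", "FO", "SC", "VA"}

# IRIS_XX_YYY_NAME: city code, demonstrator/solution abbreviation, data set name.
NAME_PATTERN = re.compile(r"^IRIS_([A-Z]{2})_([A-Za-z0-9]+)_(\w+)$")

def dataset_title(city, project, name):
    """Compose a data set title following the IRIS_XX_YYY_NAME model."""
    if city not in CITY_CODES:
        raise ValueError(f"unknown city abbreviation: {city}")
    return f"IRIS_{city}_{project}_{name}"

def is_valid_title(title):
    """Check that a title matches the model and uses a known city code."""
    match = NAME_PATTERN.match(title)
    return bool(match) and match.group(1) in CITY_CODES
```

For example, `dataset_title("UT", "SEM", "HeatPumpPower")` (a hypothetical Utrecht demonstrator abbreviation) yields `IRIS_UT_SEM_HeatPumpPower`.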
In **Column 2** ‘New dataset?’, for administrative purposes, specify whether a
dataset is:
* New: No similar dataset has been generated before
* Edit: The dataset is an edited version of a previously generated set
* Addition: The dataset is a previously generated set with added data
In **Column 3** ‘Relation to project objective’ select the objective of the
project (1-8) that relates to the purpose of the data to be generated or
collected:
* **Objective 1:** Demonstrate solutions at district scale integrating smart homes and buildings, smart renewables and closed-loop energy positive districts
* **Objective 2:** Demonstrate smart energy management and storage solutions targeting Grid flexibility
* **Objective 3:** Demonstrate integrated urban mobility solutions increasing the use of environmentally-friendly, alternative fuels, creating new opportunities for collective mobility and lead to a decreased environmental impact
* **Objective 4:** Demonstrate the integration of the latest generation ICT solutions with existing city platforms over open and standardised interfaces enabling the exchange of data for the development of new innovative services
* **Objective 5:** Demonstrate active citizen engagement solutions providing an enabling environment for citizens to participate in co-creation, decision making, planning and problem solving within the Smart Cities
* **Objective 6:** Put in practice bankable business models over proposed integrated solutions, tested to reduce technical and financial risks for investors guaranteeing replicability at EU scale
* **Objective 7:** Strengthening the links and active cooperation between cities in a large number of Member States with a large coverage of cities with different size, geography, climatic zones and economical situations
* **Objective 8:** Measure and validate the demonstration results after a 3-years large-scale demonstration at district scale within 3 highly innovative EU cities
In **Column 4** ‘Data type’ select the type of data to be generated or
collected:
* **integers**
* **booleans**
* **characters**
* **floating-point numbers**
* **alphanumeric strings**
* **Other (please specify)**
* **Not known yet**
In **Column 5** ‘Data format’ select the format of data to be
generated/collected:
* **ASCII text-formatted data (TXT)**
* **CAD data (DWG)**
* **Comma-separated values (CSV)**
* **dBase (DBF)**
* **eXtensible Mark-up Language (XML)**
* **Tab-delimited file (TAB)**
* **Geospatial open data based upon JavaScript Object Notation (GeoJSON)**
* **Geo-referenced TIFF (TIF, TFW)**
* **Hypertext Markup Language (HTML)**
* **Keyhole Markup Language (KML)**
* **MS Word (DOC/DOCX)**
* **MS Excel (XLS/XLSX)**
* **MS Access (MDB/ACCDB)**
* **OpenDocument Spreadsheet (ODS)**
* **Open Document Text (ODT)**
* **Rich Text Format (RTF)**
* **SPSS portable format (POR)**
* **Other (please specify)**
* **Not known yet**
**Note:** When choosing the right **format** for **open data** 2 it is
recommended to start with comma separated values (CSV) files. CSV is perfect
for tabular data and can be easily loaded into and saved from applications
like Excel, making it accessible to users. For geospatial open data formats,
formats to be considered are geoJSON (based upon JavaScript Object Notation -
JSON) and Keyhole Markup Language (KML) which is based upon Extensible Markup
Language – XML. These formats are specifically designed with usability in mind
and can easily be imported and exported from specialist mapping tools like
Open Street Map and CartoDB.
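As an illustration of the recommendation above, the same records can be written to both recommended open formats using only standard-library tools. The measurement records below are hypothetical and serve only to show the two formats side by side.

```python
import csv
import io
import json

# Hypothetical measurement records used only to illustrate the two formats.
rows = [
    {"station": "UT-01", "lon": 5.1214, "lat": 52.0907, "power_kw": 12.5},
    {"station": "GO-01", "lon": 11.9746, "lat": 57.7089, "power_kw": 8.1},
]

def to_csv(records):
    """Export tabular records as CSV text."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()

def to_geojson(records):
    """Export the same records as a GeoJSON FeatureCollection."""
    features = [
        {
            "type": "Feature",
            # GeoJSON coordinates are ordered [longitude, latitude].
            "geometry": {"type": "Point", "coordinates": [r["lon"], r["lat"]]},
            "properties": {"station": r["station"], "power_kw": r["power_kw"]},
        }
        for r in records
    ]
    return json.dumps({"type": "FeatureCollection", "features": features})
```

The CSV output loads directly into Excel or LibreOffice, while the GeoJSON output can be imported into mapping tools such as those mentioned above.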
In **Column 6** ‘Re-use of existing data’ select one of the following options
(in the case of re-use of existing data, please specify in plain text how to
re-use):
* **Re-use of existing data (specify how)**
* **Non re-use of existing data**
* **Not known yet**
In **Column 7** ‘Origin of the data’ please specify in plain text the origin
of the data.
In **Column 8** ‘Expected size of the data’ please specify the expected size
of the data and add the appropriate units: Kilobytes (KB), Megabytes (MB),
Gigabytes (GB), and Terabytes (TB).
In **Column 9** ‘Data utility’ please specify to whom the data might be useful
in terms of Work Package (WP) and/or Task (T).
In **Column 10** ‘Other info’ please specify, if applicable, the **data
units** , **time resolution** and **the time period** that the data set covers
in DD/MM/YEAR, or any other relevant information that was not addressed in
columns 1-8. For example, for time-series of power measurement data mention
the units, time resolution and the time period that the data set covers (e.g.
measurements in kW with 15 minutes resolution from 01/01/2018 to 01/02/2018).
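Declarations like the one in the example above can also be sanity-checked: the declared time resolution and time period imply an expected number of samples in the data set. A minimal sketch, reusing the dates and resolution from the example:

```python
from datetime import datetime, timedelta

# Declared coverage from Column 10: 15-minute resolution
# from 01/01/2018 to 01/02/2018 (DD/MM/YEAR).
start = datetime.strptime("01/01/2018", "%d/%m/%Y")
end = datetime.strptime("01/02/2018", "%d/%m/%Y")
resolution = timedelta(minutes=15)

# 31 days * 96 samples/day = 2976 expected measurements.
expected_samples = int((end - start) / resolution)
print(expected_samples)
```

A data set whose row count deviates from this figure has gaps or duplicates that should be documented under ‘Other info’.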
In **Column 11** ‘City’ please specify the relevant city (Lighthouse or
Follower) for the corresponding data set.
In **Column 12** ‘Contact person(s)’ please specify the name and e-mail of the
responsible contact person(s) for the corresponding data set.
H2020: First update of the Data Management Plan – 28-09-2018
_Table 1 Data Summary_
<table>
<tr>
<th colspan="2">
**Admin**
</th>
<th colspan="10">
**Data Summary**
</th> </tr>
<tr>
<td>
Title of data set
</td>
<td>
New dataset?
</td>
<td>
Relation to project objective
</td>
<td>
Data type
</td>
<td>
Data format
</td>
<td>
Re-use of existing data
</td>
<td>
Origin of the data
</td>
<td>
Expected size of the data
</td>
<td>
Data utility (WP and Task)
</td>
<td>
Other info
</td>
<td>
City
</td>
<td>
Contact person(s)
(name / email)
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
**D 9.8** Dissemination Level: **Public** Page **17** of **33**
# 3 FAIR data
The IRIS project partners will ensure that the project research data will be
'FAIR', that is findable, accessible, interoperable and re-usable.
For all the data produced and/or used in the project, the project partners
will put effort in:
* Making data findable, including provisions for metadata
* Making data openly accessible
* Making data interoperable
* Increasing data re-use (through clarifying licences)
More information about FAIR can be accessed through the FORCE11 community [1],
and the FAIR principles published as an article in Nature [2].
As a first step in making the project research data 'FAIR', the projects
partners involved in the LH demonstration activities will be asked after M12
to fill in the template with the data set description (See Table 2). This
template will be filled in for each dataset summarised in Table 1. These
dataset descriptions will be incorporated in the next update of the DMP (D9.9
in M30).
## 3.1 Data identification
#### 3.1.1 Title of dataset
The title of the dataset is the same as in Table 1.
#### 3.1.2 Dataset description
Give a short description of the dataset, using keywords: _What is monitored?
What is the purpose of the dataset? What kind of sensor is being used?_
## 3.2 Partners, services and responsibilities
Specify in this part of the table the partner who
* Owns the device
* Collects the data
* Analyses the data
* Stores the data
Also specify to which work package and task the dataset is related (as in
Table 1).
## 3.3 Standards
#### 3.3.1 Info about metadata and documentation
What kind of metadata is being provided with the data?
* Has the metadata been defined?
* What is the status of the metadata so far?
* What is the content of the metadata (data types such as images portraying an action, textual messages, sequences, timestamps, etc.)?
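As an illustration only (the field names below are an assumption loosely modelled on common metadata elements, not a project-mandated schema), a metadata record accompanying a data set could be serialised as JSON alongside the data:

```python
import json

# Hypothetical metadata record for one data set; every field name here is
# illustrative and would be replaced by the schema agreed in the project.
metadata = {
    "title": "District power measurements",
    "description": "15-minute electrical power readings from smart meters",
    "data_types": ["timestamps", "floating-point numbers"],
    "format": "CSV",
    "units": "kW",
    "time_resolution": "15 minutes",
    "coverage": {"from": "01/01/2018", "to": "01/02/2018"},
    "contact": "name@example.org",
}
print(json.dumps(metadata, indent=2))
```

Storing such a record next to each data set makes the data findable and self-describing even when detached from the DMP tables.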
#### 3.3.2 Data standards and formats
These columns specify data standards and formats as in Table 1.
## 3.4 Data exploitation and sharing
#### 3.4.1 Data exploitation
What will the data be used for? What is the purpose of the data analysis?
For example: _Production process recognition and help during the different
production phases, avoiding mistakes._
#### 3.4.2 Data access policy / Dissemination level
What policies apply to the data? Is the data public, or is it confidential and
only to be shared amongst the consortium members and the Commission services?
In case of public data, make sure that no potential ethical issues will arise
from its publication and dissemination.
Example text: _The full dataset will be confidential and only the members of
the consortium will have access to it. Furthermore, if the dataset or specific
portions of it (e.g. metadata, statistics, etc.) are to become openly
accessible, a data management portal will be created that provides a
description of the dataset and a link to a download section. Of course, these
data will be anonymised so that no potential ethical issues arise from their
publication and dissemination._
#### 3.4.3 Data sharing, reuse and distribution
Have the data sharing policies been decided yet? What requirements exist for
sharing data? How will the data be shared? Who will decide what is to be
shared?
#### 3.4.4 Embargo periods
In case there is any embargo period related to the data, it can be specified
here.
## 3.5 Archiving and preservation
Specify in these columns where and until when the data (and its backups) are
stored.
_Table 2 Format with the dataset description_
<table>
<tr>
<th colspan="2">
**Data identification**
</th>
<th colspan="6">
**Partners, services and responsibilities**
</th>
<th colspan="4">
**Standards**
</th>
<th colspan="4">
**Data exploitation and sharing**
</th>
<th colspan="3">
**Archiving and preservation**
</th> </tr>
<tr>
<td>
**Title of data set**
</td>
<td>
**Data set description**
</td>
<td>
**Partner owner of the device**
</td>
<td>
**Partner in charge of the data collection**
</td>
<td>
**Partner in charge of the data analysis**
</td>
<td>
**Partner in charge of the data storage**
</td>
<td>
**WPs and Tasks**
</td>
<td>
</td>
<td>
**Info about metadata**
</td>
<td>
**Data type**
</td>
<td>
</td>
<td>
**Data format**
</td>
<td>
**Data exploitation**
</td>
<td>
**Data access policy / Dissemination level**
</td>
<td>
**Data sharing, reuse and distribution**
</td>
<td>
**Embargo periods (if any)**
</td>
<td>
**Location of Data**
</td>
<td>
**Location of Backup**
</td>
<td>
**Expiry date**
</td> </tr>
<tr>
<td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr>
<td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr>
<td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </table>
# 4 Allocation of resources
Further to the FAIR principles, the DMP will also address the allocation of
resources. All the data produced and/or used in the project, will be described
by using the template included in Table 1. For each described dataset the
partners will report on the costs for making data FAIR in the IRIS project.
This information will be incorporated in the next update of the DMP (D9.9 in
M30).
# 5 Data security
For all the data produced and/or used in the project, the project partners
will ensure data security. For each described dataset (based on the template
in Table 1) the partners will state the provisions taken for data security.
This includes data recovery as well as secure storage and transfer of
sensitive data. Further on, it defines how long-term preservation and curation
in certified repositories will take place. This information will be
incorporated in the next update of the DMP (D9.9 in M30).
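One generic provision for data recovery, sketched below under the assumption that a checksum is stored alongside each archived data set, is to verify restored backups against the stored digest. This is an illustration, not the project's prescribed procedure:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return the SHA-256 hex digest of a data set's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Digest recorded at archiving time (file contents are invented examples).
original = b"timestamp,power_kw\n01/01/2018 00:00,12.5\n"
stored_digest = checksum(original)

# After recovery from backup, the digest must match the stored one;
# any mismatch indicates corruption or tampering.
restored = b"timestamp,power_kw\n01/01/2018 00:00,12.5\n"
print(checksum(restored) == stored_digest)
```

For sensitive data, such integrity checks complement, but do not replace, encrypted storage and transfer.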
# 6 Ethical aspects
For all the data produced and/or used in the project, the project partners
will take into account ethical aspects. Specifically, the project partners
will address all obligations as described in the Description of the Action
(DoA) 3 , in ARTICLE 34 ‘ETHICS AND RESEARCH INTEGRITY’. Thus, the IRIS
project will assure the investigation, management and monitoring of ethical
and privacy issues that could be relevant to its envisaged technological
solution and will establish a close-cooperation with the Ethics Helpdesk of
the European Commission. The consortium is aware that a number of privacy and
data protection issues could be raised by the activities (in WP5, WP6 and WP7)
to be performed in the scope of the project. The project involves the carrying
out of data collection in all LH and FC in order to assess the effectiveness
of the proposed solutions. For this reason, human participants will be
involved in certain aspects of the project and data will be collected. This
will be done in full compliance with any European and national legislation and
directives relevant to the country where the data collections are taking
place, as well as with the EU General Data Protection Regulation (GDPR) 4 ,
which replaces the Directive 95/46/EC, with enforcement date the 25 th May
2018.
### 6.1.1 IRIS Ethical Policy
IRIS will follow the opinions of various expert committees in the field (e.g.
the European Group on Ethics in Science and New Technologies to the European
Commission). In addition, all national legal and ethical requirements of the
Member States where the research is performed will be fulfilled. Any data
collection involving humans will be strictly held confidential at any time of
the research. This means in detail that:
* All the test subjects will be informed and given the opportunity to provide their consent to any monitoring and data acquisition process; all subjects will be strictly volunteers and will receive detailed oral information.
* No personal or sensitive data will be centrally stored. In addition, data will be scrambled where possible and abstracted in a way that will not affect the final project outcome.
In addition, they will receive in their own language:
* A commonly understandable written description of the project and its goals.
* The planned project progress and the related testing and evaluation procedures.
* Advice on unrestricted disclaimer rights on their agreement.
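The scrambling and abstraction of personal data mentioned above can take many forms; one generic approach, shown purely as an illustration (the salt value and record fields are invented, not a project-specified method), is to replace direct identifiers with keyed one-way hashes before storage:

```python
import hashlib
import hmac

# Secret salt; in practice this would be stored securely and never
# published with the data (the value here is purely illustrative).
SALT = b"illustrative-secret-salt"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed one-way hash."""
    digest = hmac.new(SALT, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# A hypothetical monitoring record before central storage.
record = {"participant": "jane.doe@example.org", "consumption_kwh": 4.2}
record["participant"] = pseudonymise(record["participant"])
print(record)
```

The same identifier always maps to the same pseudonym, so time series for one participant remain linkable without revealing who the participant is.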
On the other hand, an Ethics Helpdesk will scrutinise the research, to
guarantee that no undue risk for the user, neither technically nor related to
the breach of privacy, is possible. Thus, the Consortium shall implement the
research project in full respect of the legal and ethical national
requirements and code of practice. Whenever authorisations have to be obtained
from national bodies, those authorisations shall be considered as documents
relevant to the project. Copies of all relevant authorisations shall be
submitted to the Commission prior to commencement of the relevant part of the
research project.
### 6.1.2 IRIS Ethics Helpdesk
All used assessment tools and protocols within IRIS LH and FC will be verified
beforehand by its Ethics helpdesk regarding their impact to business actors
and end users before being applied to the sites. The helpdesk takes
responsibility for implementing and managing the ethical and legal issues of
all procedures in the project, ensuring that each of the partners provides the
necessary participation in IRIS and its code of conduct towards the
participants. Each LH and FC will have its own Ethics Committee and one person
will be nominated per site as responsible for following the project’s
recommendations and the National and European legislations.
### 6.1.3 Data to be collected within IRIS LH and FC
Data will be both manually and automatically collected by smart sensors and
other proprietary equipment installed at selected areas during the execution
of the demonstration activities and will be further investigated in (WP5, WP6
and WP7). In most cases the collected data will be data needed for monitoring
the contextual conditions of the pilot areas (energy consumption, energy
production, temperature, humidity, weather etc.). Since some of the collected
data in the latter case may involve sensitive personal data, all provisions
for data management will be made in compliance with national and EU
legislation, including the European Network and Information Security
Agency 5 security measures to minimise the risk to data protection arising
from smart metering, and the British Sociological Association's Statement of
Ethical Practice, as described in the following paragraphs.
The project research data will be collected in two phases:
* Before the implementation of the demonstration activities in the LH (for baselines, references and design data).
* After the implementation of the demonstration activities in the LH (for evaluation purposes).
The consent procedure for the pilot use case realisation at each of the
selected pilot sites will make use of a template of a consent form, to be
adopted as required per pilot use case. Such a template is included in Annex 3
- Consent form template
# 7 Other issues
In this section, the project partners will report the use of any other
national/funder/sectorial/departmental procedures for data management. This
information will be incorporated in the next update of the DMP (D9.9 in M30).
# 8 Conclusions
The Data Management Plan is a working document that is updated regularly
during the IRIS project. The plan provides templates that will be used by the
partners in the project when data is being generated or gathered. To make
proper use of these templates, the document explains what is expected in each
column of the template. By managing all this data in a structured way, the
FAIR principles will be maintained. This first update is the result of a
revision of the content of the first version of the DMP (D9.1): some changes
have been made to the content of the tables, and parts of the text have been
revised to make the information clearer and more concise. The next update of
the DMP will consist of a revision of the templates presented in this plan,
but more significantly it will present the utilisation of the templates
themselves, showing all required details of the data being managed during the
IRIS project up to the time of Deliverable 9.9 (M30).
---
_Source: https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0436_IRIS_774199.md_

---
# 1\. Introduction
The Data Management Plan (DMP) consists of a description of the data
management life cycle for the data to be produced, collected, and processed,
and will include information on the handling of data during and after the end
of the project, i.e. what data will be produced, collected, and processed,
which methodology and standards will be applied, whether data will be shared
and/or made open access, and how data will be curated and preserved (including
after the end of the project).
## 1.1. Scope, objectives and expected impact
The scope of this document is to provide the procedure to be adopted by the
project partners and subcontractors to produce, collect and process the
research data from the IRIS demonstration activities. The adopted procedure
follows the guidelines provided by the European Commission in the document
_Guidelines on FAIR Data Management in Horizon 2020_ .
This document has been built based on the Horizon 2020 FAIR DMP template
(Version: 26 July 2016), which provides a set of questions that the
partners should answer with a level of detail appropriate to the project. It
is not required to provide detailed answers to all the questions in this first
report on DMP. The DMP is intended to be a living document in which
information can be made available on a finer level of granularity through
updates as the implementation of the project progresses and when significant
changes occur. As a minimum, the DMP shall be updated in the context of the
periodic evaluation/assessment of the project.
This first report on DMP, submitted at M6 (31st March 2018), describes a
preliminary plan for data production, collection and processing, and will be
continuously updated until the end of the project, as part of WP9 activities.
Specifically, the DMP will be updated in M12 (D9.8: First update on the Data
management plan), in M30 (D9.9: Second update on the Data management plan), in
M42 (D9.10: Third update on the Data management plan), and in M60 (D9.11:
Fourth and final update on the Data management plan).
The availability and sharing of project data will raise the impact of IRIS
activities, allowing for access to a large number of stakeholders. The DMP
considers (see Figure 1):
* Data Types, Formats, Standards and Capture Methods
* Ethics and Intellectual Property
* Access, Data Sharing and Reuse
* Resourcing
* Deposit and Long-Term Preservation
* Short-Term Storage and Data Management
_Figure 1. Aspects considered in the data management plan._
## 1.2. Contributions of partners and relation to other activities
The main project partners in T9.2 are UU, RISE and CERTH. UU, as the leader in
T9.2, is responsible for coordinating the activities related to the definition
of the data model and the DMP for performance and impact measurement. RISE as
the WP9 leader ensures that all activities are in line with other related WPs
by establishing communication with the respective WP leaders. Part of this
work entails cooperation with ongoing projects, initiatives and communities in
WP2, such as the H2020-SCC CITYKEYS project for smart city performance
indicators, and facilitation for all performance data to be incorporated into
the database of the EU Smart City Innovation System (SCIS). Furthermore, RISE
as the leader in T9.1 ensures that all relevant data are addressed in D9.1,
based on the initial definition of the KPIs included in T9.1, as well as that
any new KPIs, being introduced if the need arises to modify them after review,
are addressed in D9.8 (First update on the DMP which is due to be published in
M12).
In Figure 2, the timeline for the DMP development within the IRIS project is
illustrated, pointing out interactions with other tasks and WPs. Next to the
D9.8 (First update on the DMP which is due to be published in M12), the DMP
will be further updated in M30 (D9.9: Second update on the Data management
plan), in M42 (D9.10: Third update on the Data management plan), and in M60
(D9.11: Fourth and final update on the Data management plan).
RISE will organise WP9 workshops in March and April 2018 in all the Lighthouse
Cities (LH), i.e. Gothenburg, Nice and Utrecht, to discuss LH solutions and
possible monitoring strategies for technologies, indicators and data
collection. Instead of directly supplying data collection sheets, all the LH
will be invited to provide input on relevant data to be collected, discuss the
purpose of utilisation of collected data and the project goals together with
IMCG representing WP3 Business models. These workshops will establish a
harmonised approach among the LH with respect to the DMP development and the
Pilot on Open Research Data 1 . CERTH as the leader in T9.3 ensures that the
development of the first report of the DMP and T9.2 activities are in line
with T9.3 activities and the development of the City Innovation Platform
(CIP), and thus connect WP9 with WP4 activities (including the linkage to
activities in T4.3 ‘Data Governance Plan’ which is meant to facilitate a
smooth, secure and reliable flow of data, including the description of
supporting processes and assets, and also addressing privacy and ethical
issues). The work in T9.2 will be performed in close and continuous
collaboration with WP 5-7 to ensure that the DMP addresses data and relevant
developments from the IRIS demonstration activities in the LH. Furthermore,
with respect to ethical aspects each LH and FC will have its own Ethics
Committee and one person will be nominated per site as responsible for
following the project’s recommendations and the National and European
legislations (See Section 6.1.2), thus linking WP9 to WP 5-7 and to WP8
(Replication by Lighthouse regions, Follower cities, European market uptake).
Finally, T9.2 will also ensure privacy and security of sensitive information,
for legal or ethical reasons, for issues pertaining to personal privacy, or
for proprietary concerns linking to WP3.
In the course of the project, the project partners will be guided by the T9.2
leader and the WP9 leader on how to provide input and report on data to be
generated/collected during the project by using the templates listed in this
first report on the DMP.
[Figure 2: a timeline chart from M6 to M60 showing D9.1 First report on the
DMP (M6), D9.8 First update (M12), D9.9 Second update (M30), D9.10 Third
update (M42) and D9.11 Fourth and final update (M60), together with the IRIS
WP9 workshops of March-April 2018 on monitoring strategy, T2.3 CITYKEYS and
SCIS (M1-M60), T9.1 Specification of the monitoring and evaluation
methodology and KPIs definition (M1-M12), T9.2 Defining the data model and
the data management plan for performance and impact measurement (M4-M60),
T9.3 Establishment of a unified framework for harmonized data gathering,
analysis and reporting (M9-M24), and the WP5 Utrecht, WP6 Nice and WP7
Gothenburg Lighthouse City demonstration activities.]

_Figure 2. Timeline for the DMP development within the project duration,
indicating interactions with other work tasks and packages._
## 1.3. Structure of the deliverable
This document has been built based on the Horizon 2020 FAIR DMP template
(Version: 26 July 2016). Accordingly, the document is structured as follows:
* **Section 2:** Data Summary
* **Section 3:** FAIR data
* **Section 4:** Allocation of resources
* **Section 5:** Data security
* **Section 6:** Ethical aspects
* **Section 7:** Other issues
* **Section 8:** Further support in developing your DMP
# 2\. Data Summary
In Table 1, a summary is provided of the data to be generated/collected during
the project. This table includes standardised items and lists as described
below.
At this stage of the project it is still not possible to list the exact
data that will be generated/collected during the project, since relevant
activities in T9.1 ‘Specification of the monitoring and evaluation methodology
and KPIs definition’ are running in parallel. A full overview of the data will
be possible after the completion of T9.1 and the submission of D9.2 ‘Report on
monitoring and evaluation schemes for integrated solutions’.
In **Column 1** ‘Title of data set’ please specify a name or a short title for
the corresponding data set. The name/title shall be self-explanatory regarding
the nature/purpose of the data set.
In **Column 2** ‘Relation to project objective’ select the objective of the
project (1-8) that relates to the purpose of the data to be
generated/collected:
* **Objective 1:** Demonstrate solutions at district scale integrating smart homes and buildings, smart renewables and closed-loop energy positive districts
* **Objective 2:** Demonstrate smart energy management and storage solutions targeting Grid flexibility
* **Objective 3:** Demonstrate integrated urban mobility solutions increasing the use of environmentally-friendly, alternative fuels, creating new opportunities for collective mobility and lead to a decreased environmental impact
* **Objective 4:** Demonstrate the integration of the latest generation ICT solutions with existing city platforms over open and standardised interfaces enabling the exchange of data for the development of new innovative services
* **Objective 5:** Demonstrate active citizen engagement solutions providing an enabling environment for citizens to participate in co-creation, decision making, planning and problem solving within the Smart Cities
* **Objective 6:** Put in practice bankable business models over proposed integrated solutions, tested to reduce technical and financial risks for investors guaranteeing replicability at EU scale
* **Objective 7:** Strengthening the links and active cooperation between cities in a large number of Member States with a large coverage of cities with different size, geography, climatic zones and economical situations
* **Objective 8:** Measure and validate the demonstration results after a 3-year large-scale demonstration at district scale within 3 highly innovative EU cities
In **Column 3** ‘Data type’ select the type of data to be generated/collected:
* **integers**
* **booleans**
* **characters**
* **floating-point numbers**
* **alphanumeric strings**
* **Other (please specify)**
* **Not known yet**
In **Column 4** ‘Data format’ select the format of data to be
generated/collected:
* **ASCII text-formatted data (TXT)**
* **CAD data (DWG)**
* **Comma-separated values (CSV)**
* **dBase (DBF)**
* **eXtensible Mark-up Language (XML)**
* **Tab-delimited file (TAB)**
* **Geospatial open data based upon JavaScript Object Notation (GeoJSON)**
* **Geo-referenced TIFF (TIF, TFW)**
* **Hypertext Markup Language (HTML)**
* **Keyhole Markup Language (KML)**
* **MS Word (DOC/DOCX)**
* **MS Excel (XLS/XLSX)**
* **MS Access (MDB/ACCDB)**
* **OpenDocument Spreadsheet (ODS)**
* **Open Document Text (ODT)**
* **Rich Text Format (RTF)**
* **SPSS portable format (POR)**
* **Other (please specify)**
* **Not known yet**
**Note:** When choosing the right **format** for **open data** 2 it is
recommended to start with comma separated values (CSV) files. CSV is perfect
for tabular data and can be easily loaded into and saved from applications
like Excel, making it accessible to users. For geospatial open data formats,
formats to be considered are geoJSON (based upon JavaScript Object Notation -
JSON) and Keyhole Markup Language (KML) which is based upon Extensible Markup
Language – XML. These formats are specifically designed with usability in mind
and can easily be imported and exported from specialist mapping tools like
Open Street Map and CartoDB.
In **Column 5** ‘Re-use of existing data’ select one of the following options
(in the case of re-use of existing data, please specify in plain text how to
re-use):
* **Re-use of existing data (specify how)**
* **Non re-use of existing data**
* **Not known yet**
In **Column 6** ‘Origin of the data’ please specify in plain text the origin
of the data.
In **Column 7** ‘Expected size of the data’ please specify the expected size
of the data and add the appropriate units: Kilobytes (KB), Megabytes (MB),
Gigabytes (GB), and Terabytes (TB).
In **Column 8** ‘Data utility’ please specify to whom the data might be useful
in terms of Work Package (WP) and/or Task (T).
In **Column 9** ‘Other info’ please specify, if applicable, the **data units**
, **time resolution** and **the time period** that the data set covers in
DD/MM/YEAR, or any other relevant information that was not addressed in
columns 1-8. For example, for time-series of power measurement data mention
the units, time resolution and the time period that the data set covers (e.g.
measurements in kW with 15 minutes resolution from 01/01/2018 to 01/02/2018).
In **Column 10** ‘City’ please specify the relevant city (Lighthouse or
Follower) for the corresponding data set.
In **Column 11** ‘Contact person(s)’ please specify the name and e-mail of the
responsible contact person(s) for the corresponding data set.
IRIS: Data Management Plan v1.0 – 30.03.2018
_Table 1. Data Summary._
<table>
<tr>
<th>
Title of data set
</th>
<th>
Relation to project objective
</th>
<th>
Data type
</th>
<th>
Data format
</th>
<th>
Re-use of existing data
</th>
<th>
Origin of the data
</th>
<th>
Expected size of the data
</th>
<th>
Data utility
</th>
<th>
Other info
</th>
<th>
City
</th>
<th>
Contact person(s)
(name / email)
</th> </tr>
<tr>
<td>
See explanation in pg. 11
</td>
<td>
See explanation in pg. 11
</td>
<td>
See explanation in pg. 11
</td>
<td>
See explanation in pg. 12
</td>
<td>
See explanation in pg. 13
</td>
<td>
See explanation in pg. 13
</td>
<td>
See explanation in pg. 13
</td>
<td>
See explanation in pg. 13
</td>
<td>
See explanation in pg. 13
</td>
<td>
See explanation in pg. 13
</td>
<td>
See explanation in pg. 13
</td> </tr>
<tr>
<td>
</td>
<td>
Choose
an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Choose an item.
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Choose
an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Choose an item.
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Choose
an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Choose an item.
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Choose
an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Choose an item.
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Choose
an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Choose an item.
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Choose
an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
Choose an item.
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Choose an item.
</td>
<td>
</td> </tr> </table>
**If necessary, please add lines to Table 1 by copy-pasting the following
line:**
<table>
<tr>
<th>
</th>
<th>
Choose an item.
</th>
<th>
Choose an item.
</th>
<th>
Choose an item.
</th>
<th>
Choose an item.
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
Choose an item.
</th>
<th>
</th> </tr> </table>
D 9.1 Dissemination Level: **Public** Page **14** of **25**
# 3\. FAIR data
The IRIS project partners will ensure that the project research data will be
'FAIR', that is findable, accessible, interoperable and re-usable.
For all the data produced and/or used in the project, the project partners
will put effort in:
* Making data findable, including provisions for metadata
* Making data openly accessible
* Making data interoperable
* Increasing data re-use (through clarifying licences)
More information about FAIR can be accessed through the FORCE11 community [1],
and the FAIR principles published as an article in Nature [2].
As a first step in making the project research data 'FAIR', the project
partners involved in the LH demonstration activities will be asked during
M6-12 to fill in the template with the data set description (see Table 2).
This template will be filled in for each dataset summarised in Table 1. These
dataset descriptions will be incorporated in the first update of the DMP (D9.8
in M12).
_Table 2. Format of the data set description._
<table>
<tr>
<th>
**Data Identification**
</th> </tr>
<tr>
<td>
Data set description
</td>
<td>
_Where are the sensor(s) installed? What are they monitoring/registering? What
is the dataset comprised of? Will it contain future sub-datasets?_
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
_How will the dataset be collected? What kind of sensor is being used?_
</td> </tr>
<tr>
<td>
**Partners services and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
_What is the name of the owner of the device?_
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
_What is the name of the partner in charge of the device? Are there several
partners that are cooperating? What are their names?_
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
_The name of the partner._
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
_The name of the partner._
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_The data are going to be collected within activities of WPxx and WPxx._
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_What is the status of the metadata so far? Has it been defined? What is the
content of the metadata (e.g. datatypes like images portraying an action,
textual messages, sequences, timestamps, etc.)?_
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_Has the data format been decided on yet? What will it look like?_
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation
(purpose/use of the data analysis)
</td>
<td>
_Example text:_
_Production process recognition and help during the different production
phases, avoiding mistakes_
</td> </tr>
<tr>
<td>
Data access policy /
Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
_Example text:_
_The full dataset will be confidential and only the members of the consortium
will have access to it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to become widely open access,
a data management portal will be created that provides a description of
the dataset and a link to a download section. These data will of course be
anonymized, so as to avoid any potential ethical issues with their
publication and dissemination_
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
_Have the data sharing policies been decided yet? What requirements exist for
sharing data? How will the data be shared? Who will decide what is to be shared?_
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
\-
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
_Who will own the information that has been collected? How will it adhere to
partner policies? What kind of limitations are put on the archive?_
</td> </tr> </table>
# 4\. Allocation of resources
Further to the FAIR principles, the DMP will also address the allocation of
resources. All the data produced and/or used in the project, will be described
by using the template included in Table 2. For each described dataset the
partners will report on the costs for making data FAIR in the IRIS project.
This information will be incorporated in the first update of the DMP (D9.8 in
M12).
# 5\. Data security
For all the data produced and/or used in the project, the project partners
will ensure data security. For each described dataset (based on the template
in Table 2), the partners will state the provisions taken for data security
(including data recovery as well as secure storage and transfer of sensitive
data), as well as for long term preservation and curation in certified
repositories. This information will be incorporated in the first update of the
DMP (D9.8 in M12).
# 6\. Ethical aspects
For all the data produced and/or used in the project, the project partners
will take into account ethical aspects. Specifically, the project partners
will address all obligations as described in the Description of the Action
(DoA), in ARTICLE 34 ‘ETHICS AND RESEARCH INTEGRITY’. Thus, the IRIS
project will assure the investigation, management and monitoring of ethical
and privacy issues that could be relevant to its envisaged technological
solution and will establish a close-cooperation with the Ethics Helpdesk of
the European Commission. The consortium is aware that a number of privacy and
data protection issues could be raised by the activities (in WP5, WP6 and WP7)
to be performed in the scope of the project. The project involves the carrying
out of data collection in all LH and FC in order to assess the effectiveness
of the proposed solutions. For this reason, human participants will be
involved in certain aspects of the project and data will be collected. This
will be done in full compliance with any European and national legislation and
directives relevant to the country where the data collections are taking
place, as well as with the EU General Data Protection Regulation (GDPR),
which replaces Directive 95/46/EC, with enforcement date 25 May 2018.
### 6.1.1 IRIS Ethical Policy
IRIS will follow the opinions of various expert committees in the field (e.g.
the European Group on Ethics in Science and New Technologies to the European
Commission). In addition, all national legal and ethical requirements of the
Member States where the research is performed will be fulfilled. Any data
collection involving humans will be strictly held confidential at any time of
the research. This means in detail that:
* All test subjects will be informed and given the opportunity to provide their consent to any monitoring and data acquisition process; all subjects will be strictly volunteers, and all test volunteers will receive detailed oral information.
* No personal or sensitive data will be centrally stored. In addition, data will be scrambled where possible and abstracted in a way that will not affect the final project outcome.
In addition, they will receive in their own language:
* A commonly understandable written description of the project and its goals.
* The planned project progress and the related testing and evaluation procedures.
* Advice on unrestricted disclaimer rights on their agreement.
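A minimal sketch of the identifier 'scrambling' mentioned above, assuming a site-local secret key: a keyed hash replaces each participant identifier, so no personal data needs to be stored centrally. The key value and the `volunteer-042` identifier are illustrative assumptions, not project specifics.

```python
import hashlib
import hmac

# Illustrative site-local secret; in practice it would be generated at the
# pilot site, stored securely there, and never shared centrally.
SECRET_KEY = b"site-local-secret-never-shared"

def pseudonymize(participant_id: str) -> str:
    """Return a stable pseudonym; the ID cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, participant_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

alias = pseudonymize("volunteer-042")  # hypothetical identifier
print(len(alias))  # 64 hex characters
```

The same identifier always maps to the same pseudonym, so longitudinal analysis still works, while different identifiers remain unlinkable without the key.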
On the other hand, an Ethics Helpdesk will scrutinise the research, to
guarantee that no undue risk for the user, neither technically nor related to
the breach of privacy, is possible. Thus, the Consortium shall implement the
research project in full respect of the legal and ethical national
requirements and code of practice. Whenever authorisations have to be obtained
from national bodies, those authorisations shall be considered as documents
relevant to the project. Copies of all relevant authorisations shall be
submitted to the Commission prior to commencement of the relevant part of the
research project.
### 6.1.2 IRIS Ethics Helpdesk
All used assessment tools and protocols within IRIS LH and FC will be verified
beforehand by its Ethics helpdesk regarding their impact to business actors
and end users before being applied to the sites. The helpdesk takes
responsibility for implementing and managing the ethical and legal issues of
all procedures in the project, ensuring that each of the partners provides the
necessary participation in IRIS and its code of conduct towards the
participants. Each LH and FC will have its own Ethics Committee and one person
will be nominated per site as responsible for following the project’s
recommendations and the National and European legislations.
### 6.1.3 Data to be collected within IRIS LH and FC
Data will be both manually and automatically collected by smart sensors and
other proprietary equipment installed at selected areas during the execution
of the demonstration activities and will be further investigated in (WP5, WP6
and WP7). In most cases the collected data will be data needed for monitoring
the contextual conditions of the pilot areas (energy consumption, energy
production, temperature, humidity, weather etc.). Since some of the collected
data in the latter case may involve sensitive personal data, all provisions
for data management will be made in compliance with national and EU
legislation, including the European Network and Information Security Agency
(ENISA) security measures to minimise the risk to data protection arising
from smart metering, and the British Sociological Association's Statement of
Ethical Practice, as described in the following paragraphs.
The project research data will be collected in two phases:
* Before the implementation of the demonstration activities in the LH (for baselines, references and design data).
* After the implementation of the demonstration activities in the LH (for evaluation purposes).
The consent procedure for the pilot use case realisation at each of the
selected pilot sites will make use of a template of a consent form, to be
adopted as required per pilot use case. Such a template is included in Annex
3: Consent form template.
# 7\. Other issues
In this section, the project partners will report the use of any other
national/funder/sectorial/ departmental procedures for data management. This
information will be incorporated in the first update of the DMP (D9.8 in M12).
#### I. Data Summary
The aim of the Data Summary (DS) is to organise data management during the
PEARLS Project. The DS addresses the following points or questions:
* **What is the purpose of the data collection/generation in relation to the objectives of the project?**
* **What types and formats of data will the project generate/collect?**
* **Will you re-use any existing data and how?**
* **What is the origin of the data?**
* **What is the expected size of the data?**
* **To whom might it be useful ('data utility')?**
The purpose of data collection/generation in the PEARLS Project is:
* To develop applied knowledge about how to increase public engagement on behalf of a sustainable renewable energy system through planning processes.
* To investigate national legal bases, develop methodologies on social innovation, and explore tools from the multidisciplinary perspective of the Social Sciences in different European regions, using secondments, staff exchange and collaborative research.
* To establish international, cross-cutting and multidisciplinary collaboration as the nexus of a five-country holistic pool of universities and research centres in close cooperation with non-academic sectors.
All project activity is structured into work packages. The types of data that
will be collected during the PEARLS Project are listed in Tables 1 to 7.
Table 1: Origin of Data in the PEARLS Project. WP 1
<table>
<tr>
<th>
**WP 1**
</th>
<th>
**Participants**
</th>
<th>
**Purpose (in relation to the project objectives)**
</th>
<th>
**Data type**
</th>
<th>
**Format**
</th>
<th>
**Data utility**
**(public or not)**
</th> </tr>
<tr>
<th>
All Consortium
</th>
<th>
External
communication and dissemination strategies
development. Project IP treatment.
Expert recruitment.
</th>
<th>
Website,
patents filling, report.
</th>
<th>
MS Office /
Open
Office documents
</th>
<th>
Public
</th> </tr> </table>
Table 2. WP 2
<table>
<tr>
<th>
**WP 2**
</th>
<th>
**Participants**
</th>
<th>
**Purpose (in relation to the project objectives)**
</th>
<th>
**Data type**
</th>
<th>
**Format**
</th>
<th>
**Data utility**
**(public or not)**
</th> </tr>
<tr>
<th>
1 – USE
2- CLANER
3 – Territoria
5- ENERCOUTIM
8. AUTH
9. GSH
10 – AKKT
12. – UH
13. –SP Interface
</th>
<th>
Examine and compare national energy policy, land use planning and landscape
practice schemes. Fieldwork.
</th>
<th>
Research reports, interviews and research seminar.
</th>
<th>
MS Office
/ Open Office
documents
Audio or video
(.mp3, .aif, .aiff, .wav, .avi, .mp4)
</th>
<th>
Both confidential and public
</th> </tr> </table>
Table 3: Origin of Data in the PEARLS Project. WP 3
<table>
<tr>
<th>
**WP 3**
</th>
<th>
**Participants**
</th>
<th>
**Purpose (in relation to the project objectives)**
</th>
<th>
**Data type**
</th>
<th>
**Format**
</th>
<th>
**Data utility**
**(public or not)**
</th> </tr>
<tr>
<th>
1 – USE
2-CLANER
3 – Territoria
5 – ENERCOUTIM
7 – UNITN
9. – GSH
10. – AKKT
12. – UH
13. – SP Interface
</th>
<th>
Identify focus groups and behaviour – consumptions patterns. Determine factors
that prevent engagement with renewable energies and efficiency.
Preliminary agreements.
</th>
<th>
Confidential report about market segmentation, key actor maps and indicators
analysis. Statement supporting renewable energy efficiency.
Crowdsourcing working schemes.
</th>
<th>
MS Office
/ Open Office
documents
</th>
<th>
Both, confidential and public
</th> </tr> </table>
Table 4. WP 4
<table>
<tr>
<th>
**WP 4**
</th>
<th>
**Participants**
</th>
<th>
**Purpose (in relation to the project objectives)**
</th>
<th>
**Data type**
</th>
<th>
**Format**
</th>
<th>
**Data**
**utility**
**(public or not)**
</th> </tr>
<tr>
<th>
1 – USE
3- Territoria
7. – UNITN
8. – AUTH
9. – GSH
11 – TSAKOUMIS
13 – SP Interface
</th>
<th>
Knowledge transfer
and skills enhancement.
Development of advanced
methodologies and tools. Website design.
</th>
<th>
Technical report Scientific report on advanced methodologies
Web-GIS Platform
</th>
<th>
MS Office /
Open
Office
documents
GIS format
files
</th>
<th>
Public
</th> </tr> </table>
Table 5: Origin of Data in the PEARLS Project. WP 5
<table>
<tr>
<th>
**WP 5**
</th>
<th>
**Participants**
</th>
<th>
**Purpose (in relation to the project objectives)**
</th>
<th>
**Data type**
</th>
<th>
**Format**
</th>
<th>
**Data utility**
**(public or not)**
</th> </tr>
<tr>
<th>
1. – USE
2. – CLANER
3. – Territoria
4. – ICSUL
5. – ENERCOUTIM
6. – COOPERNICO
7. – AUTH
9- GSH
12 - UH
</th>
<th>
Identification and replication of social innovations in renewable energies.
Innovative practices in public engagement. To strengthen cultural dimension of
renewable energy. Methodologies training and dissemination.
</th>
<th>
Case Study
Training
</th>
<th>
MS Office
/ Open Office
documents
</th>
<th>
Public
</th> </tr> </table>
Table 6. WP 6
<table>
<tr>
<th>
**WP 6**
</th>
<th>
**Participants**
</th>
<th>
**Purpose (in relation to the project objectives)**
</th>
<th>
**Data type**
</th>
<th>
**Format**
</th>
<th>
**Data utility**
**(public or not)**
</th> </tr>
<tr>
<th>
All Consortium
</th>
<th>
Financial and administrative monitoring. Intellectual property management.
Communication with the Advisory Board.
</th>
<th>
Internal
Communication website, patents filling, etc.
Data Management
Plan –ORDP: Open
Research Data Pilot. Reports.
</th>
<th>
MS Office
/ Open Office
documents
</th>
<th>
Both, confidential and Public
</th> </tr> </table>
Table 7: Origin of Data in the PEARLS Project. WP 7
<table>
<tr>
<th>
**WP 7**
</th>
<th>
**Participants**
</th>
<th>
**Purpose (in relation to the project objectives)**
</th>
<th>
**Data type**
</th>
<th>
**Format**
</th>
<th>
**Data utility (public or not)**
</th> </tr>
<tr>
<th>
1 - USE
</th>
<th>
Compliance with the
“Ethics Requirements”
</th>
<th>
Informed consent forms and information sheet –template. Copies of ethics
approvals for the research with humans.
Copies of opinion or confirmation by Institutional Data Protection Officer.
</th>
<th>
MS Office /
Open
Office documents
</th>
<th>
Both confidential and public
</th> </tr> </table>
The data generated by ESRs will strongly depend on the individual doctoral
projects and the tools and research methods used within them. Whenever
possible, datasets will be made available online using the following
formats.
Table 8: File formats
<table>
<tr>
<th>
**Text format**
</th>
<th>
**File extension**
</th> </tr>
<tr>
<td>
Acrobat PDF/A
</td>
<td>
.pdf
</td> </tr>
<tr>
<td>
Comma-Separated Values
</td>
<td>
.csv
</td> </tr>
<tr>
<td>
Open Office Formats
</td>
<td>
.odt, .ods, .odp
</td> </tr>
<tr>
<td>
Plain Text (US-ASCII, UTF-8)
</td>
<td>
.txt
</td> </tr>
<tr>
<td>
XML
</td>
<td>
.xml
</td> </tr>
<tr>
<td>
**Image / Graphic formats**
</td>
<td>
**File extension**
</td> </tr>
<tr>
<td>
JPEG
</td>
<td>
.jpg
</td> </tr>
<tr>
<td>
JPEG2000
</td>
<td>
.jp2
</td> </tr>
<tr>
<td>
PNG
</td>
<td>
.png
</td> </tr>
<tr>
<td>
SVG 1.1 (no Java binding)
</td>
<td>
.svg
</td> </tr>
<tr>
<td>
TIFF
</td>
<td>
.tif, .tiff
</td> </tr>
<tr>
<td>
**Audio formats**
</td>
<td>
**File extension**
</td> </tr>
<tr>
<td>
AIFF
</td>
<td>
.aif, .aiff
</td> </tr>
<tr>
<td>
WAVE
</td>
<td>
.wav
</td> </tr>
<tr>
<td>
**Motion formats**
</td>
<td>
**File extension**
</td> </tr>
<tr>
<td>
AVI (uncompressed)
</td>
<td>
.avi
</td> </tr>
<tr>
<td>
Motion JPEG2000
</td>
<td>
.mj2, .mjp2
</td> </tr>
<tr>
<td>
ArcGIS
</td>
<td>
.shp, .txt, .xls, .csv, .dgn, .dwg, .dxf, .img, .dt, HDF, .sid, .ntf, .tif,
SDC, SDE, TIN, VPF, ADS, AGF, DFAD, DIME, DLG, ETAK, GIRAS, IGDS, IGES, MIF,
MOSS, SDTS
</td> </tr> </table>
Beneficiaries are encouraged to make existing data available for research
within the Project. WP6 and WP1 will provide data templates in order to
harmonize the different datasets that are provided. The data will originate
from the beneficiaries throughout the whole project and are needed to
implement the action or exploit the results. The expected size of the data
will be evaluated during the course of the project, as it depends on the
extent and nature of the available data. The data might be useful to the
final development of the Project for:
* European Commission. Research Executive Agency.
* The Framework Programme for Research and Innovation Horizon 2020\.
* Open access to disseminate results.
* Open access to scientific publications.
* Open access to research data.
* Transfer of beneficiaries’ results.
### II. FAIR data
##### 1\. Making data findable, including provisions for metadata
The following points or questions are addressed here:
**Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?**
**What naming conventions do you follow?**
**Will search keywords be provided that optimize possibilities for re-use?**
**Do you provide clear version numbers?**
**What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and
how.**
The data produced and collected by each member have to be carefully stored and
managed at the facilities of the Project Coordinator (University of Seville,
European Social Research Lab).
IT services will ensure regular file backups, and additional archiving will
be performed.
Best practices will be followed for data management. To facilitate document
evaluation and review, participants create all deliverables and other official
documents in agreement with established templates.
In addition, each dataset is provided with its corresponding metadata in order
to keep the data findable. The PEARLS Project favours the metadata standard
recommended by the EU: the Common European Research Information Format (CERIF)
standard. Persistent identifiers such as Digital Object Identifiers (DOIs)
will also be used for publications.
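A minimal illustration of such a descriptive record with a persistent identifier follows; the field names are a simplification for illustration (not the CERIF schema itself) and the DOI is a placeholder, not a real deposit:

```python
# Field names are an illustrative simplification, not the CERIF schema; the
# DOI is a placeholder, not a real deposit.
record = {
    "doi": "10.5281/zenodo.0000000",
    "title": "Example PEARLS dataset",
    "creators": ["European Social Research Lab, University of Seville"],
    "keywords": ["renewable energy", "public engagement"],
    "version": "1.0",
}

# A DOI always begins with the "10." directory indicator.
assert record["doi"].startswith("10.")
print(record["version"])  # 1.0
```

Keywords and explicit version numbers are the fields that most directly support findability and re-use.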
##### 2\. Making data openly accessible
To make data openly accessible, the following points are addressed:
**Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.**
**Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their data closed if relevant provisions are made in the
consortium agreement and are in line with the reasons for opting out.**
**How will the data be made accessible (e.g. by deposition in a repository)?**
**What methods or software tools are needed to access the data?**
**Is documentation about the software needed to access the data included?**
**Is it possible to include the relevant software (e.g. in open source
code)?**
**Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.**
**Have you explored appropriate arrangements with the identified repository?**
**If there are restrictions on use, how will access be provided?**
**Is there a need for a data access committee?**
**Are there well described conditions for access (i.e. a machine readable
license)?**
**How will the identity of the person accessing the data be ascertained?**
Data related to social media, courses, open access publications, results
and deliverables will be openly accessible. Some data will also be
communicated via the PEARLS Project social channels, such as Twitter and RSS
feeds. According to the PEARLS Project Agreement, project results will be made
accessible by appropriate means, such as scientific publications.
Beneficiaries will be able to access, mine, exploit, reproduce and disseminate
those data. However, the beneficiaries do not have to ensure open access to
specific parts of their research data; in that case, the reasons for not
giving access must be contained in the data management plan. Furthermore,
beneficiaries must give each other access to background data that are
necessary to implement the Project and exploit the results, except where
limits or legal restrictions apply. Affiliated entities must make
a written request to the beneficiaries. There is no data access committee
foreseen for the PEARLS Project. Participants have their own user
identities for Intranet access, where private data will be deposited.
##### 3\. Making data interoperable
To make data interoperable, the following points are addressed:
**Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different
origins)?**
**What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?**
**Will you be using standard vocabularies for all data types present in your
data set, to allow interdisciplinary interoperability?**
**In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?**
Where possible, data will be made available in a consultable, rendered
format. A standard vocabulary will be used for all data types; this
vocabulary will allow inter-disciplinary interoperability.
##### 4\. Increase data re-use (through clarifying licences)
To increase data re-use, the following points are addressed:
**How will the data be licensed to permit the widest re-use possible?**
**When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.**
**Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the re-use of some data is
restricted, explain why.**
**How long is it intended that the data remains re-usable?**
**Are data quality assurance processes described?**
Some data and results may be transferred in order to make the Project data
usable by third parties. However, re-use of some data can be restricted if a
Party's interests in relation to the results would be harmed; in that case, a
request for the necessary modifications is required. Data are licensed under
Creative Commons and remain re-usable for the duration of the Project.
Validation of data quality is a milestone of Work Package 2, as included in
the PEARLS Grant Agreement.
## III. Allocation of resources
For the allocation of resources, the following points are addressed:
**What are the costs for making data FAIR in your project?**
**How will these be covered? Note that costs related to open access to
research data are eligible as part of the Horizon 2020 grant (if compliant
with the Grant Agreement conditions).**
**Who will be responsible for data management in your project?**
**Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?**
Any costs of making data FAIR in the Project will be covered as Eligible
Costs under the PEARLS Grant Agreement; such expenditure may also be decided
solely by each beneficiary, according to the eligible costs per beneficiary
in the Consortium Agreement. The following are indicatively considered
Eligible Costs for Research, Training and Networking, and Management and
Indirect Costs:
* **Research Costs:**
  * Databases, software and the Web-GIS platform.
  * Interviews and on-line questionnaires.
  * Case studies and fieldwork.
  * Research reports.
  * Scientific paper review by experts.
  * Maps, statements and advanced methodological reports.
  * Health insurance.
  * Participation in congresses, workshops, conferences and other scientific meetings.
  * Translation and revision of scientific production.
  * Other expenditures decided solely by each Beneficiary for ensuring the successful and eligible implementation of the project.
* **Training and networking costs:**
  * Research Seminar for PhD students.
  * Seminar on Social Analysis Innovation.
  * Methodological Course.
  * Training through online courses.
  * Local workshop participation and other communication activities at the host organisation.
  * Papers and publications in other divulgation formats of network material.
  * Other expenditures.
* **Management and Indirect Costs:**
  * Project website and social media content updates and supporting information.
  * Periodic management reports.
  * Gender balance and ethics requirements.
  * Other expenditures decided solely by each Beneficiary for ensuring the successful and eligible implementation of the project.
A party shall be funded only for its tasks carried out in accordance with the
Consortium Plan. All participants will be responsible for data management in
the Project. Data collection will take place in relation to the research
activities of the Project. The Project will not collect personal data, but it
may collect basic biographical data of people who participate in the
research; those data will be collected and stored anonymously. Data will be
collected in a way that ensures the responsible party does not impose any
bias on the data itself, and they will be kept for the duration of the
Project. It will not be necessary to create databases about individuals. Once
all Project activity has finished, the data will be destroyed six months
after the termination of the project: paper data will be physically
destroyed, and digital data will be overwritten to ensure that they are
effectively scrambled and remain inaccessible.
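A best-effort sketch of the overwriting step for digital data, assuming ordinary files on local storage (note that on SSDs and journaling filesystems, overwriting in place is not a guaranteed erase, so it complements rather than replaces controlled storage):

```python
import os
import tempfile

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before deleting it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random data
            f.flush()
            os.fsync(f.fileno())       # push each pass to disk
    os.remove(path)

# Demonstration on a throwaway temporary file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"participant answers")
    name = tmp.name
overwrite_and_delete(name)
print(os.path.exists(name))  # False
```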
## IV. Data security
For data security, the following points are addressed:
**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?**
**Is the data safely stored in certified repositories for long term
preservation and curation?**
All data collected throughout the Project will be securely stored and, where
necessary, transferred among partners following all security protocols. Data
will be stored throughout the execution of the PEARLS project and will be
destroyed six months after its conclusion.
### V. Ethical aspects
For ethical aspects, the following points are addressed:
**Are there any ethical or legal issues that can have an impact on data
sharing? These can also be discussed in the context of the ethics review. If
relevant, include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).**
**Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?**
According to the rules for Research and Innovation activities in civil
applications carried out under Horizon 2020 and the PEARLS Project ethical
issues table, volunteers participate in the social or human sciences
research.
The PEARLS Project does not access private data, such as names or personal
identification numbers. The research does not include any human unable to
give informed consent. Researchers and other participants only work with
averaged and aggregated data, which guarantees the reliability of the
research without access to private data. The project requires the use of
interviews, surveys and focus groups, and fieldwork photographs and videos
taken with non-invasive equipment.
The most important ethical issues for the PEARLS project are:
* Respect current European and National regulations.
* Fully and responsibly inform any participant of the purpose of the research and of the ways in which their data and the information will be used.
* Take care of a correct and rightful use of the results of the research.
All data collected will be subject to the usual data protection rules with
respect to data confidentiality, anonymity and privacy. Ethical and legal
issues are covered in Deliverables 7.1 - 7.5, which are related to ethics and
data management and were submitted to the EC in November 2018.
An information sheet and a consent form related to the Project are provided to
each participant in the different activities. Participants will be informed
that:
* Any data, video or audio recording portraying or featuring him or her is treated as confidential.
* Any recording and data are securely stored and used only for the purpose of the present research.
* None of the participants’ personal details will be published and/or made available to the public without their explicit consent.
## VI. Other issues
**Do you make use of other national/funder/sectorial/departmental procedures
for data management?**
**If yes, which ones?**
No other procedures for data management are used. However, participants will
submit the Data Management Plan to the competent National Authority for Data
Protection, if necessary.
## VII. Further support in developing your DMP
The Research Data Alliance provides a Metadata Standards Directory that can
be searched for discipline-specific standards and associated tools.
The EUDAT B2SHARE tool includes a built-in license wizard that facilitates
the selection of an adequate license for research data.
Useful listings of repositories include:
Registry of Research Data Repositories
Some repositories, like Zenodo (an OpenAIRE and CERN collaboration), allow
researchers to deposit both publications and data, while providing tools to
link them.
Other useful tools include DMP online and platforms for making individual
scientific observations available such as ScienceMatters.
**HISTORY OF CHANGES**
<table>
<tr>
<th>
**Version**
</th>
<th>
**Publication date**
</th>
<th>
**Change**
</th> </tr>
<tr>
<td>
1.0
</td>
<td>
31.01.2019
</td>
<td>
▪ First version
</td> </tr> </table>
**VIII. PEARLS Consortium**
<table>
<tr>
<th>
1
</th>
<th>
</th>
<th>
**USE**
C/ S Fernando 4, Sevilla 41004 Spain
</th>
<th>
Contact:
María-José Prados
</th> </tr>
<tr>
<td>
2
</td>
<td>
</td>
<td>
**CLANER**
C/ Pierre Laffitte nº6 Edificio CITTIC TECNOLÓGICO DE AN, Málaga 29590 Spain
</td>
<td>
Contact:
Carlos Rojo Jiménez
</td> </tr>
<tr>
<td>
3
</td>
<td>
</td>
<td>
**Territoria**
C/ Cruz Roja nº10 piso 1 pta b Sevilla 41008 Spain
</td>
<td>
Contact:
Michela Ghislanzoni
</td> </tr>
<tr>
<td>
4
</td>
<td>
</td>
<td>
**ICSUL**
Avda Prof Anibal de Bettencourt 9, Lisboa 1600 189, Portugal
</td>
<td>
Contact:
Ana Delicado
</td> </tr>
<tr>
<td>
5
</td>
<td>
</td>
<td>
**ENERCOUTIM**
Centro de Artes e Oficios, Rua Das Tinas 1 esq, Alcoutim 8970 064, Portugal
</td>
<td>
Contact:
Marc Rechtel
</td> </tr>
<tr>
<td>
6
</td>
<td>
</td>
<td>
**COOPERNICO**
Praca Duque de Terceira 24 4 Andar 24 Lisboa 1200 161 Portugal
</td>
<td>
Contact:
Ana Rita Antunes
</td> </tr>
<tr>
<td>
7
</td>
<td>
</td>
<td>
**UNITN**
Via Calepina 14, Trento 38122, Italy
</td>
<td>
Contact:
Rossano Albatici
</td> </tr>
<tr>
<td>
8
</td>
<td>
</td>
<td>
**AUTH**
University Campus Administration Bureau, Thessaloniki 54124 Greece
</td>
<td>
Contact:
Eva Loukogeorgaki
</td> </tr>
<tr>
<td>
9
</td>
<td>
</td>
<td>
**GSH**
Gkonosati 88A, Metamorfosi, Athina 14452 Greece
</td>
<td>
Contact:
Vasiliki
Charalampopoulou
</td> </tr>
<tr>
<td>
10
</td>
<td>
</td>
<td>
**CONSORTIS**
Vasileos Georgiou, 15 Thessaloniki 54640 Greece
</td>
<td>
Contact:
Ahí Mantouza
</td> </tr>
<tr>
<td>
11
</td>
<td>
</td>
<td>
**CONSORTIS Geospatial**
Vasileos Georgiou 15, Thessaloniki 54640 Greece
</td>
<td>
Contact:
Georgios Tsakoumis
</td> </tr>
<tr>
<td>
12
</td>
<td>
</td>
<td>
**Ben-Gurion University of the Negev**
P.O.B. 653 Beer-Sheva 8410501 Israel
</td>
<td>
Contact:
Na’ama Teschner
</td> </tr>
<tr>
<td>
13
</td>
<td>
</td>
<td>
**SP Interface**
8 Nave Matz St, Rehovot 7624416 Israel
</td>
<td>
Contact:
Daniel Madar
</td> </tr> </table>
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0446_SHAPE-ID_822705.md
1. Introduction
1.1 Scope of this document
This document is the first draft of the SHAPE-ID Data Management Plan (DMP),
presenting the project’s plans for collecting and processing data to fulfil
its commitments under the Horizon 2020 Open Research Data Pilot. The DMP
describes the data generated by the project, how it will be gathered or
created, curated and preserved in compliance with responsible research and
innovation (RRI) guidelines, legal obligations under the General Data
Protection Regulations (GDPR) 1 , and the FAIR Data Principles of ensuring
all data that can be shared is made available in a manner that is easily
findable, accessible, interoperable and permits reuse. The DMP will be
reviewed and if necessary revised during the project in response to any data
processing activities not currently anticipated, or in response to new legal
or ethical guidelines or requirements should they arise in the course of the
project.
1.2 Objectives of the SHAPE-ID Data Management Plan
The primary objective of the SHAPE-ID Data Management Plan is to ensure clear
procedures are in place for how the project will handle any data collected or
generated as a result of project activities, in compliance with legal and
ethical requirements and FAIR Data Principles. The specific objectives of this
document are as follows:
* Draw up an initial inventory of data the project is expected to create or collect, including the purpose of the data collection, data origins, type, format and utility;
* Perform an initial assessment of what data the project can share openly and what must be restricted, ensuring project data is made ‘as open as possible, as closed as necessary’;
* Define standard measures for making published data Findable, Accessible, Interoperable and Reusable (FAIR);
* Describe any ethical issues raised by the project’s data processing activities and the procedures for addressing these.
1.3 Roles and Responsibilities
All project partners are responsible for ensuring their own data processing
activities comply with GDPR and relevant national law and with RRI principles
for the ethical conduct of research. As Coordinator, TCD will provide
oversight and will have recourse to the Data Protection Office at TCD for
legal advice where necessary. Any perceived risks concerning the compliance of
proposed activities with legal and ethical requirements should be notified to
the Coordinator as soon as possible.
2. Data summary
2.1 Purpose of Data Collection/Generation
SHAPE-ID will collect and generate data for the purpose of carrying out
project activities and producing contractual project deliverables for
submission to the European Commission under the terms of the SHAPE-ID Grant
Agreement No 822705. This includes conducting a systematic literature review
and survey (Work Package 2), organising six learning case workshops in
different European countries (Work Package 3), developing and validating a
knowledge framework based on these evidence-gathering activities (Work Package
4), producing a toolkit and recommendations for stakeholders to improve
pathways for Arts, Humanities and Social Sciences (AHSS) integration (Work
Package 5) and disseminating project results to stakeholders, including
creating a Stakeholder Contact Database for this purpose (Work Package 6).
2.2 Types and Formats of Data Collected and Relationship to SHAPE-ID Objectives
<table>
<tr>
<th>
**SHAPE-ID Objectives**
</th>
<th>
**Data Collection/Generation [with formats]**
</th> </tr>
<tr>
<td>
* O2.1: to disentangle the different understandings of interdisciplinary research (IDR)
* O2.2: to identify the factors that hinder or help interdisciplinary collaboration
* O2.3: to clarify which understandings of IDR and which factors of success and failure are specifically relevant for integrating AHSS in IDR
</td>
<td>
Literature review data: bibliographic metadata (including abstracts) of articles, book chapters, books, reports and other texts on interdisciplinarity (‘grey literature’) [csv, xlsx, xml, BibTeX], and full texts of books, papers, reports, funding calls, etc. [pdf, txt], for the purpose of conducting a systematic literature review using qualitative and quantitative methods.
CORDIS projects data: metadata for FP7 and H2020 funded projects with interdisciplinary aspects [csv, xlsx] for reviewing funded EC projects for the purpose of selecting projects for the survey.
Survey data: qualitative and quantitative data from a survey conducted with 50-60 participants with experience in IDR projects [csv, xlsx].
Interview data: audio recordings [mp3] and transcripts [docx, txt] from test interviews with 5-10 participants in IDR projects to contribute to survey development.
Project outputs: public deliverables, reports, conference presentations and journal publications incorporating analysis of this data [pdf].
</td> </tr>
<tr>
<td>
* O3.1: to test and validate the findings of the literature and survey exercise in interactive thematic workshops related to key societal challenges involving different stakeholders
* O3.2: to enable comparisons of IDR practices and results with regard to key societal challenges and other emerging missions that Europe faces in the future
* O3.3: to elicit insights from IDR project representatives and stakeholders and co-produce recommendations on the funding mechanisms and implementation of IDR in practice to provide effective responses to societal challenges
* O3.4: to identify adequate and meaningful criteria and indicators to assess IDR, both ex ante (i.e. funding) and ex post (i.e. impacts on society with reference to societal challenges)
* O3.5: to facilitate exchange of best practices for IDR among existing projects and their practitioners and experts, as well as to share common challenges and barriers, developing a network of existing IDR projects and their teams within and beyond H2020
</td>
<td>
Workshop data: the project will organise six learning case workshops, with approximately 20 invited participants attending each. Participants will have experience in IDR projects, university administration, research funding or research policy-making, and the purpose of the workshops is to learn from their experiences and expertise as per the project objectives. Data from the workshops will be collected and analysed, including: written notes, audio recordings [mp3] and evaluation reports [docx] from the workshop; written and/or visual outputs produced by workshop participants (e.g. notes, diagrams, drawings).
Project outputs: public deliverables, reports, conference presentations and journal publications incorporating analysis of this data.
</td> </tr>
<tr>
<td>
* O4.1: to establish a working system of taxonomic categories for AHSS integration modalities providing a shared language of assessment
* O4.3: to organise a consensus meeting for the panel of experts to validate the findings of the project as reflected in the draft taxonomy
</td>
<td>
Research data: it is anticipated that the development of the taxonomy or knowledge framework will involve working primarily with data gathered during earlier project activities and other bibliographic metadata for publications and research projects [csv, xlsx, BibTeX]; it is not yet known for certain what additional data may need to be collected or produced. FAIR data principles and standard data management procedures as described in this document will be applied and more detail will be provided in future versions of the DMP.
Meeting data: an Expert Panel meeting will be organised to validate the knowledge framework. Data from the meeting will be collected and analysed, including: written notes and audio recordings [mp3]; panel members’ notes and evaluation data from panel members and observers/organisers [txt, docx]. This data will be confidential and used to prepare a report on the meeting’s outcomes.
Project outputs: public deliverables, reports, conference presentations and journal publications incorporating analysis of this data.
</td> </tr>
<tr>
<td>
* O5.2: to prepare an agreed set of heuristics, in the form of a multi-faceted decision-making toolkit, to guide applicants and funders in achieving successful pathways to integration
</td>
<td>
Research data: it is anticipated that developing the toolkit will involve working primarily with data gathered during earlier project activities; it is not yet known what additional data may need to be collected or produced. FAIR data principles and standard data management procedures as described in this document will be applied and more detail will be provided in future versions of the DMP.
Project outputs: the final toolkit will be a public project deliverable targeted at stakeholder groups and will be widely publicised and disseminated.
</td> </tr>
<tr>
<td>
* O6.2: to oversee and coordinate the dissemination of the results emerging from the project to the 4 stakeholder groups to ensure best take-up of the project’s recommendations and toolkit in different stakeholder settings
</td>
<td>
Contact data: contact data will be collected from stakeholder organisations’ websites or from partners’ personal recommendations to add to a Stakeholder Contact Database [xlsx, csv, pdf] to be submitted as a public project deliverable and maintained as a live resource during the project. Individuals’ contact data will also be collected on a voluntary basis through a subscription form (hosted by Mailchimp) on the project website. Data from subscribers will not be shared. All contact data will be used to disseminate project information to stakeholders.
</td> </tr> </table>
2.3 Re-Use of Existing Data
* The systematic literature review makes extensive use of existing data in the form of bibliographic records and metadata, published abstracts and full texts of published journal articles, books and reports, which form the basis for its analysis. Data published openly by the European Commission in its CORDIS database of funded projects and calls is also re-used.
* The Stakeholder Contact Database is compiled from existing data published on organisations’ website, namely, organisation name, acronym, address, contact email address, description of activities or remit and, where possible, contact person name.
2.4 Origin of Data
* Bibliographic metadata will be harvested from scholarly communication platforms such as Web of Science, Scopus, JSTOR and OpenAIRE, as well as from partners’ or collaborators’ existing bibliographic libraries where these are shared. Project metadata will be harvested from the European Commission’s CORDIS database. Other repositories and websites will be used as needed.
* Survey and interview data will be gathered through interviews and surveys with participants who will be invited to participate in these data gathering activities on a voluntary basis. Further data will be produced by analysing this data.
* Workshop data will be gathered through observation, recording and evaluation of workshop activities and produced by participants of the workshops. Further data will be produced by analysing this data.
* Contact data will be gathered through organisations’ public websites. Additional information may be provided by contacts on request by email.
2.5 Expected Size of Data
The exact size of the data generated by SHAPE-ID is unknown but no large-scale
datasets are anticipated and the overall scale is expected to be modest. Most
data will be generated in the course of research activities or as project
outputs following data analysis and taking the form of reports, policy briefs
and other publications.
2.6 Data Utility
All data collected or generated during SHAPE-ID will be used directly for the
purpose of carrying out project activities, including various forms of
qualitative and quantitative analysis and interpretation of the data.
Research data: some research data generated by the project may be of use to
other researchers, such as metadata libraries on literature or funded projects
engaging in interdisciplinary research. Such data incorporates existing data
and will be made openly available where permitted by the licenses governing
the use of the original data.
Project outputs: project reports, policy briefs, toolkit and other published
outputs will be of use to the project’s stakeholder groups in enabling better
understanding of and supporting successful IDR between AHSS disciplines or
AHSS and STEM disciplines. SHAPE-ID’s four stakeholder groups are:
* European Research Area (ERA) funders and policy-makers;
* Research Performing Organisations (RPOs);
* Researchers in all disciplines;
* Research users or co-creators in industry, the cultural sector and civil society.
3. FAIR Data Procedures
3.1 Overview
SHAPE-ID is committed to the Open Research Data principle that all data should
be made ‘as open as possible, as closed as necessary’ 2 and with the FAIR
Data Principles that ensure openly published research data is Findable,
Accessible, Interoperable and Reusable. An analysis of the data SHAPE-ID will
collect or produce has been conducted, to determine what level of openness is
possible for each data type.
<table>
<tr>
<th>
Work
Package
</th>
<th>
Data Produced/Collected
</th>
<th>
Data Sharing
</th> </tr>
<tr>
<td>
2
</td>
<td>
Literature Review
* EndNote library of literature
* Metadata of bibliographic sources and funded projects related to inter- or transdisciplinary research
* NVivo codebook with nodes and
categories of analysis
* Results of data analysis
</td>
<td>
Research data such as Endnote libraries and metadata libraries will be made
available if permitted by the licensing terms of the data re-used in these
datasets. Prior to sharing, Endnote libraries will be exported to a
nonproprietary format such as csv to facilitate reuse.
NVivo codebooks and other internal research data is for internal project use
and is not considered of use or interest to the wider community.
The results of all data analysis will be published in the form of project
deliverables and other publications (see considerations for Project
Deliverables and Project Publications below).
</td> </tr> </table>
<table>
<tr>
<th>
2
</th>
<th>
CORDIS projects data
</th>
<th>
Datasets derived from the CORDIS FP7 and H2020 projects dataset published by
the EC will be made openly available.
</th> </tr>
<tr>
<td>
2
</td>
<td>
Interview data
* Audio recordings
* Transcripts
</td>
<td>
Interview data will not be made openly available as it is gathered as an
internal aid to survey development.
</td> </tr>
<tr>
<td>
2
</td>
<td>
Survey data
* Qualitative survey data
* Quantitative survey data
* Results of data analysis
</td>
<td>
The survey is under development at the time the first draft of this DMP is
being prepared and details will be updated in the revised DMP. The survey is
expected to produce both quantitative and qualitative data. It is anticipated
that qualitative survey data will not be shared as it will not be easy to
fully anonymise data when participants may be asked questions about their
experiences with specific projects, institutions or funding schemes.
Quantitative survey data may be published in anonymised form if considered of
sufficient value to the research community. All survey data will be analysed
and results of the data analysis will be published as part of a public project
deliverable
</td> </tr>
<tr>
<td>
3
</td>
<td>
Workshop data
* Observation notes
* Recordings and transcripts
* Evaluation data
* Participants’ written and visual outputs
* Results of data analysis
</td>
<td>
Workshop data will not be shared as it will not be possible to fully anonymise
data when participants are asked to openly discuss both positive and negative
experiences with specific projects, institutions or funding schemes. Results
from the workshops incorporating analysis of data collected during the
workshop will be published as part of a public project deliverable.
</td> </tr> </table>
<table>
<tr>
<th>
4
</th>
<th>
•
•
</th>
<th>
Knowledge Framework
Research Data (TBC)
Expert Panel Meeting data
</th>
<th>
The outputs of the development of the knowledge framework are not yet known
but it is anticipated that they will be made as widely available as possible.
Should preparation of the framework yield any additional research data, full
consideration will be given to making it available to the research community
if it is of potential value and if there are no practical, legal or ethical
restrictions to doing so. Data from the Expert Panel Meeting will be
confidential within the consortium and panel. All results and a report of the
meeting will be published as a public project deliverable.
</th> </tr>
<tr>
<td>
5
</td>
<td>
Toolkit
* Research data (TBC)
* Final toolkit and recommendations
</td>
<td>
The toolkit and associated guidelines will be produced for use by stakeholders
and made widely available. Should preparation of the toolkit yield any
additional research data, full consideration will be given to making it
available to the research community if it is of potential value and if there
are no practical, legal or ethical restrictions to doing so.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Contact data
* Stakeholder Contact Database
* Individual contact data supplied through subscription form
</td>
<td>
The Stakeholder Contact Database is a public deliverable and will be made
publicly available on the project website and linked to from the associated
published project deliverable.
Contact data provided by individuals through the subscription form on the
project website will remain private in accordance with GDPR.
</td> </tr>
<tr>
<td>
All
</td>
<td>
Project Deliverables: the project will produce 24 contractual deliverables, 20
of which are classed as Public (PU) deliverables and will be published on
acceptance.
</td>
<td>
All public project deliverables will be available through the CORDIS database
and on the project website.
Reports and Policy Briefs will be assigned a DOI and deposited in
institutional repositories for long-term storage, access and impact tracking.
</td> </tr>
<tr>
<td>
All
</td>
<td>
Project Publications: project partners will publish results in conferences and
peer-reviewed journals as soon as feasible after generating results.
</td>
<td>
All project publications will be published through Open Access where possible
and will be deposited in institutional repositories such as TCD’s TARA and
Zenodo (other institutional repositories are listed in Section 3.3.3 below).
</td> </tr> </table>
3.2 Making Data Findable
#### 3.2.1 Discoverability of Data
All published data will be provided with metadata prepared according to
relevant standards to increase findability. Data deposited in TARA and Zenodo
are described according to qualified Dublin Core metadata which complies with
the OpenAIRE Guidelines for Data Archives 3 and uses the DataCite metadata
standard. It includes the funder name, programme name and grant number as well
as links to associated publications and related sources. These metadata are
optimised by their repositories for exposure on the internet for discovery and
harvesting purposes. In addition, the metadata of SHAPE-ID project outputs
will be included in institutional Current Research Information Systems (CRIS)
(e.g. TCD’s Research Support System) which are capable of data exchange using
the Common European Research Information Format (CERIF).
#### 3.2.2 Identifiability of Data
Data will be stored in TARA and Zenodo, both of which automatically assign
persistent identifiers (PIDs) in the form of handles and (in the case of
Zenodo) digital object identifiers (DOIs). For any research outputs not stored
in Zenodo, DOIs will be assigned by TCD Library following its forthcoming
membership of DataCite, a non-profit organisation that provides persistent
identifiers (DOIs) for research data and other research outputs and enables
member organisations to do the same. 4
#### 3.2.3 Naming Conventions
Data will be organised using a standardised naming convention, in files within
folders on the project’s shared drive (mirrored by the structure on the
project researchers’ laptops). Version control will be managed within this
system.
The following standard naming convention will be adopted for all published
project data: 5
ProjectAcronym_GrantAgreementNo_WPnumber_Keyword_Version (e.g. SHAPE-ID_822705_WP2_journalMetadata_1)
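As an illustration, the convention above can be checked mechanically. The helper names and the exact pattern (for instance, which characters a keyword may contain) are assumptions made for this sketch, not project-defined rules.

```python
import re

# Hypothetical pattern for ProjectAcronym_GrantAgreementNo_WPnumber_Keyword_Version;
# the allowed character classes are assumptions, not project rules.
NAME_PATTERN = re.compile(
    r"^(?P<acronym>[A-Za-z-]+)_(?P<grant>\d+)_WP(?P<wp>\d+)_"
    r"(?P<keyword>[A-Za-z0-9]+)_(?P<version>\d+)$"
)

def build_name(acronym: str, grant: int, wp: int, keyword: str, version: int) -> str:
    """Assemble a file name following the project naming convention."""
    return f"{acronym}_{grant}_WP{wp}_{keyword}_{version}"

def is_valid_name(name: str) -> bool:
    """Check that a file name matches the convention."""
    return NAME_PATTERN.match(name) is not None
```

A check like this could run before files are uploaded to the shared drive, so that non-conforming names are caught early.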
#### 3.2.4 Approach to Search Keywords
Data stored in the designated repositories will have keywords assigned in
those repositories (which also support full text indexing of all terms within
files and thus full text searching). Data will be tagged with standardised
terms to facilitate specific searches, including (where relevant) but not
restricted to, OECD Fields of Science, EC research areas, themes and missions,
and any terms developed by the project as part of the taxonomy or knowledge
framework that Work Package 4 will produce. These terms will be included in
the subject metadata describing the project’s datasets in the
development/analytical spreadsheets and accompanying the datasets as and when
they are archived in the repositories.
#### 3.2.5 Approach to Clear Versioning
All versions will be clearly labelled within the standardised naming
convention outlined in Section 3.2.3 (above) and a version history will be
available.
#### 3.2.6 Metadata Creation
Metadata for the project data will be captured and recorded at the point of
the data gathering/creation, parts of which (as appropriate) will be mapped to
OpenAIRE/DataCite compliant Qualified Dublin Core for entry into the
designated repositories (accompanying the corresponding data) for access,
archiving and preservation purposes.
Metadata and related vocabularies used by the project will either comply with,
or be mappable to, existing metadata schemas and standard international and EU
vocabularies 6 .
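For illustration, a minimal Dublin Core description of a dataset can be assembled with standard tooling. The field values and the flat `record` wrapper below are invented placeholders; a real OpenAIRE/DataCite-compliant record would carry additional qualified fields (funder, grant number, related identifiers).

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"  # Dublin Core elements namespace

def dc_record(title, creator, identifier, subject_terms):
    """Build a minimal Dublin Core description; fields are illustrative only."""
    ET.register_namespace("dc", DC)
    root = ET.Element("record")  # placeholder wrapper, not a standard element
    for tag, value in [("title", title), ("creator", creator), ("identifier", identifier)]:
        ET.SubElement(root, f"{{{DC}}}{tag}").text = value
    for term in subject_terms:
        ET.SubElement(root, f"{{{DC}}}subject").text = term
    return ET.tostring(root, encoding="unicode")
```

In practice the repositories (TARA, Zenodo) generate such metadata from their deposit forms, so a sketch like this would mainly be useful for bulk preparation or validation.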
3.3 Making Data Openly Accessible
#### 3.3.1 Data to be Made Openly Available
Most project contractual deliverables are classed as public deliverables and
will be made available through the European Commission’s CORDIS database and
the SHAPE-ID project website as well as through the designated repositories.
Data collection and analysis methodologies and the results of the data
analysis undertaken during project activities will also be shared.
#### 3.3.2 Data to be Restricted
* Qualitative research data such as interviews, survey results and workshop or meeting data will not be shared as it would be impossible to fully anonymise results when participants are being asked to comment in detail on specific projects or other initiatives they have participated in. However, for all of this data, the methodology and results of the data analysis will be published in the form of project deliverables, reports and other publications as appropriate.
* Private contact data supplied by individual subscribers to the SHAPE-ID mailing list will be restricted in accordance with GDPR.
#### 3.3.3 Process for Making Data Available
Following a process of data cleansing and checking, open or temporarily-
embargoed data outputs will be uploaded from the project’s shared folder to
suitable institutionally-based or external trusted repositories such as Zenodo
and linked with associated outputs using standardised classification/s and
persistent identifiers (PIDs) such as handles and Digital Object Identifiers
(DOIs). Upon publication of reports and other outputs, supporting data
(anonymised, if necessary, using Amnesia 7 ) will be made openly accessible
in this way as a standard practice.
Documents and published outputs will meet international standards for OA
metadata, licensing and interoperability through Zenodo and through the well-
established, OpenDOAR-registered institutional repositories in each of the
partner institutions:
* _TARA_ (Trinity's Access to Research Archive)
* _Edinburgh Research Explorer_ ( using PURE the University’s CRIS as the back engine for the public repository) and/or _Edinburgh DataShare_ .
* _RCIN (Digital Repository of Scientific Institutes)_
* _Research Collection_ (ETH Zürich)
These repositories provide a mechanism for the community to store and share
(through optimised online exposure) educational resources, documents, data and
institutional content, supporting harvesting and aggregation. Open by default,
their content is licensed for re-use via Creative Commons. Embargoes may be
applied if necessary but will be strictly limited. The SHAPE-ID toolkit and
associated policy brief will be openly accessible through this infrastructure
and promoted as such to the stakeholders, as well as being made available on
the project website.
Audio-visual and other non-text-based material will be treated as data and
will be stored, managed, curated, licensed and made accessible under similar
terms to the FAIR principles. Where these and other outputs are designed as
teaching and learning materials they will be made openly available and
discoverable online as Open Educational Resources (OER) using best practice
standards and established OER repositories/portals.
#### 3.3.4 How to Access the Data
It is not anticipated that any special software will be needed to access the
data. However, should specific software be required, every effort will be made
to ensure that it is open source and easily accessible, e.g. when/if Protégé
8 (the free, open-source ontology editor and framework for building
intelligent systems) is used in Work Package 4, it will facilitate the further
development of the outputs by the project and (subsequently) by other
interested parties.
Most of the SHAPE-ID data will be accessible using commonly available desktop
software used on all platforms (word editor, spreadsheet application, browser,
multimedia player). In some cases, e.g. Endnote libraries, data will be output
to csv (comma-separated values) to support accessibility, interoperability and
reusability.
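By way of example, exporting bibliographic records to csv with standard tooling looks like the following sketch; the field names are invented for illustration and are not the project's actual EndNote export schema.

```python
import csv
import io

# Illustrative field names only - not the project's actual export schema.
FIELDS = ["author", "year", "title", "source", "doi"]

def export_records(records, stream):
    """Write bibliographic records as csv so they can be reused without EndNote."""
    writer = csv.DictWriter(stream, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(records)

buf = io.StringIO()
export_records(
    [{"author": "Doe, J.", "year": "2019", "title": "On IDR", "source": "An Example Journal", "doi": ""}],
    buf,
)
```

Writing csv with a header row like this keeps the export self-describing, so downstream users need no knowledge of the originating reference manager.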
#### 3.3.5 Where to Access the Data
As detailed above, all publicly available project data will be deposited in
institutional repositories such as Trinity’s Open Access repository TARA and
Zenodo. Resources will be linked to through the project website. There will be
no restrictions placed on any data that the project makes open.
## 3.4 Making Data Interoperable
As described above, common bibliographic standards (BibTeX, DataCite) will be
used to ensure the interoperability of bibliographic metadata used to describe
SHAPE-ID datasets. The project data will be made available in common formats
such as csv, txt, xlsx, docx, mp3 and pdf, which are either nonproprietary
formats or can be easily accessed and used with open source software.
As indicated in Section 3.2.6 above, metadata and related vocabularies it uses
will either comply with, or be mappable to, existing metadata schemas and
standard international and EU vocabularies 9 e.g. mapping to Schema.org 10
(the collaborative, community initiative with a mission to create, maintain,
and promote schemas for structured data on the Internet) will be provided, if
required.
The final form of the taxonomy or knowledge framework to be developed in Work
Package 4 has not yet been defined but it will, in so far as possible and
appropriate, be based upon existing data thesauri. The use of SKOS (Simple
Knowledge Organisation System) 11 for the structure of this knowledge
framework is currently under investigation. Should an ontology for AHSS
integration modalities be created as part of Work Package 4, it will be built
using OWL (Web Ontology Language) 12 .
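To illustrate what such a structure might look like, the sketch below assembles a minimal SKOS fragment in Turtle using plain Python strings. The concept names (`integration`, `co-creation`) and the `ex:` namespace are purely hypothetical, since the WP4 knowledge framework is not yet defined:

```python
# Hypothetical concepts for illustration only; the actual WP4 framework
# has not yet been defined.
PREFIX = ("@prefix skos: <http://www.w3.org/2004/02/skos/core#> .\n"
          "@prefix ex: <http://example.org/shapeid/> .\n")

def skos_concept(slug, label, broader=None):
    """Render one SKOS concept as a Turtle snippet."""
    lines = [f"ex:{slug} a skos:Concept ;",
             f'    skos:prefLabel "{label}"@en']
    if broader:
        lines.append(f"    ; skos:broader ex:{broader}")
    return "\n".join(lines) + " .\n"

turtle = PREFIX + skos_concept("integration", "Integration modality") \
                + skos_concept("co-creation", "Co-creation", broader="integration")
print(turtle)
```

The `skos:broader` relation is what gives the framework its hierarchy; a tool such as Protégé could then open, extend or map the resulting file.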
## 3.5 Increasing Data Re-use
### 3.5.1 Licensing Data for Re-Use
All publicly available data will be licensed by default under a Creative
Commons CC-BY license.
### 3.5.2 Time Frame for Data Availability
Data will be made available during the project lifetime once it has been
published in project deliverables or other publications, as deemed appropriate
by partners whose work it concerns. Data may be embargoed until publication of
the results of the data analysis if it is considered necessary. All project
deliverables must be approved by the SHAPE-ID Project Officer before
publication.
### 3.5.3 Accessing Restricted Data or Accessing Data after the Project
All openly available data will be reusable by all interested parties under the
terms of the specified license. Any requests for access to unpublished data
will be reviewed by the project team on a case-by-case basis.
### 3.5.4 Data Quality Assurance
All data collected or generated during the project will be reviewed by the
project team and checked for duplicates and inconsistencies before being made
publicly available. It is anticipated that data utilised and generated in Work
Package 4 will be cleaned using OpenRefine 13 prior to analysis and sharing.
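The duplicate check described above can be sketched with OpenRefine's "fingerprint" keying idea (trim, lowercase, strip punctuation, sort unique tokens) in stdlib Python; accent folding is omitted and the sample values are hypothetical:

```python
import string

def fingerprint(value):
    """Normalise a cell value in the style of OpenRefine's fingerprint
    keying: trim, lowercase, strip punctuation, sort the unique tokens."""
    value = value.strip().lower()
    value = value.translate(str.maketrans("", "", string.punctuation))
    return " ".join(sorted(set(value.split())))

def find_duplicates(values):
    """Group raw values sharing a fingerprint, i.e. likely duplicates."""
    groups = {}
    for v in values:
        groups.setdefault(fingerprint(v), []).append(v)
    return [g for g in groups.values() if len(g) > 1]

# Illustrative entries only:
names = ["Trinity College Dublin", "Dublin, Trinity College.", "ETH Zürich"]
print(find_duplicates(names))  # the first two entries cluster together
```

Values that cluster under one key are flagged for human review rather than merged automatically, which matches the manual review step described above.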
# Allocation of resources
The costs of making the data FAIR are minimal and activities necessary for
doing so, as described in this DMP, will be carried out as part of the
standard duties of personnel employed by the project, with no additional
costs.
Publicly available data will be stored in institutional repositories (as
detailed in Section 3.3.3 above) and Zenodo with no additional costs to the
project.
Data stored on partners’ local servers does not incur additional costs as the
institutional subscriptions already include sufficient storage. Should
unanticipated storage needs arise in the course of the project this will be
reviewed within the consortium to determine the availability of budget to meet
these needs.
# Data Security
## Data Storage
Data collected during the project is stored on secure local servers managed by
partners’ IT Services or IT support teams. Daily backups of partners’ servers
are managed by their institutions’ IT Services in accordance with local data
protection protocols (see Section 7 below). Data used by multiple partners is
stored in a shared Office 365 SharePoint site managed securely by TCD’s IT
Services. Back-ups of the project’s SharePoint resource are routinely and
regularly managed by TCD IT Services. This installation of SharePoint has been
pre-vetted by TCD IT Services to ensure compliance with institutional IT
security policies and with the relevant legislation.
Personal data collected for the Stakeholder Contact Database is stored in the first instance on this SharePoint site, and also backed up on a secure network drive at TCD, also managed by TCD IT Services.
Contact information for the Stakeholder Contact Database and individuals who subscribe to the SHAPE-ID mailing list is also stored on Mailchimp servers, using a paid subscription approved by the TCD Data Protection Office.
Free public cloud services such as Dropbox, Box or Google Drive will not be
used to store data for this project as they do not comply with local or
international policies and/or legislation, with the exception of corporate
services provided by those platforms (e.g. G-Suite), which are compliant with
the relevant data protection legislation.
Laptops and PCs used for data processing are password protected and configured
for security with verified antivirus software. Digital audio recorders will be
used to record audio for interviews and workshops in mp3 format. This data
will be immediately transferred to researchers’ laptops or PCs, stored on
secure local servers and network drives as with other project data, and the
mp3 files deleted from the recording device for additional security and to
protect data subjects’ privacy. Transcripts of these recordings will be made
as soon as feasible as a further backup measure. All paper notes from
workshops or meetings will be stored in locked offices and drawers and
digitised as soon as feasible through transcription or digital photography /
scanning as appropriate.
## Data Recovery
Where partners are storing data locally for the purpose of carrying out
project activities, nightly backups of all research data collected will be
made by those project members to a separate local drive in each member
institution, or to the SharePoint site. All data stored on the SharePoint site
or TCD network drive is automatically backed up nightly by TCD IT Services.
Additionally, another off-site drive shall be maintained and backed-up
periodically by the partner members responsible for the data in question, e.g.
an external hard-drive device stored in another location. Servers, laptops and
PCs used for processing data during the project are backed up on a daily
basis.
All data collected and managed by IBL PAN are stored on an IBL PAN
institutional cloud drive (G-suite). This data is stored while being
collected, cleaned and prepared for use and eventual publication where
possible. The data does not include any sensitive personal data or other data
with ethical implications. All data is also backed up, encrypted (AES-256) and
stored on the institutional QNAP server allowing for data mirroring and
version history. Final datasets will be uploaded to SharePoint for further
archiving.
## Long Term Storage and Data Preservation
Data outputs from this project will be permanently archived in TARA and/or
Zenodo as well as in other appropriate institutional facilities such as
Edinburgh DataShare 14 . TARA uses DSpace-generated preservation metadata
and checksum reporting. TARA is backed up on a nightly basis using standard
database maintenance and backup processes and procedures and institutional
security protocols. Permission is granted via the deposit agreement for the
migration of file formats should this become necessary in the future.
Appropriate additional trusted subject repositories shall be explored in order
to deposit in multiple locations and comply with the LOCKSS (Lots of Copies
Keep Stuff Safe) principle.
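The checksum reporting mentioned above rests on simple fixity checking, which can be sketched with Python's standard library (the file name and manifest are illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Compute a SHA-256 checksum, reading in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest):
    """Compare stored checksums against recomputed ones; return mismatches."""
    return [p for p, digest in manifest.items() if sha256_of(p) != digest]

# Example: record a checksum at deposit time, re-check it later.
sample = Path("deposit_example.txt")
sample.write_bytes(b"archived dataset")
manifest = {sample: sha256_of(sample)}
print(verify(manifest))  # an empty list means the copy is intact
```

Re-running `verify` on each backup copy is one way to confirm that the LOCKSS-style replicas have not silently diverged from the deposited original.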
## Contact Data
As described in 5.1 above, SHAPE-ID uses Mailchimp as its mailing list
platform through a paid account operated by the Trinity Long Room Hub at TCD.
All contact data for mailing list subscribers and those added to the
Stakeholder Contact Database will therefore be stored on Mailchimp servers for
the purpose of email communication with contacts. Mailchimp servers are based
in the United States but their practices comply with GDPR through their use of
Privacy Shield. 15 The TCD Data Protection Office has advised that a paid
Mailchimp account may be used for this purpose.
# Ethical Aspects
Ethical issues arise in SHAPE-ID in several contexts where human subjects will
participate in the research as interview, survey or workshop participants, or
where personal data will be collected for use in project dissemination or as a
project requirement. Where ethical issues have been identified, project
partners are committed to following their own institutional guidelines on the
ethical conduct of research involving human subjects, as detailed in Section
6.4 below.
## Interview and survey participants
Participation will be voluntary and by invitation. Interview participants will
be experienced researchers or other project stakeholders such as policy makers
and funders. Participants are not expected to be vulnerable individuals or
minors. Participants will be advised on the purpose and scope of the data
collection, and on how survey data will be stored, processed and shared.
Informed consent will be sought in advance of participation. Participants’
names, email addresses and other potentially identifying information such as
details of projects they are involved in or institutional roles may be
collected directly or may be gathered inadvertently through survey responses.
All data will be stored securely, and no identifiable details will be included
in published work using the survey results without explicit informed consent.
A list of projects included in the survey may need to be published as part of
the project deliverable with explicit informed consent of participants.
## Workshop participants
Participation will be voluntary and by invitation. Workshop participants will
be advised on data collection methods used during the workshop and how data
will be stored, processed and shared. Informed consent will be sought in
advance of participation. Participants’ names, email addresses and other
potentially identifying information such as details of projects they are
involved in or institutional roles may be collected directly or may be
gathered inadvertently. A participant list may be published as part of the
project deliverable with explicit informed consent.
## Stakeholder Contact Database
Because the Stakeholder Contact Database is a public project deliverable and
will be made available through the project website once approved, it was
decided to separate this database from the mailing list that individuals may
subscribe to through the project website. For individual subscriptions, all
data will remain private.
As the Stakeholder Contact Database includes personal data in the form of
contact names and email addresses, a Data Protection Risk Assessment was
conducted. This was reviewed by the TCD Data Protection Office, who approved
the proposal to gather contact details from partners and from organisations’
websites for compiling the Stakeholder Contact Database and advised that the
legal basis for such processing was GDPR Article 6 (1)(e): "processing is
necessary for the performance of a task carried out in the public interest or
in the exercise of official authority vested in the controller". 16
The following recommendations were made by the DPO and have been implemented:
### Privacy Notice
A privacy notice was added to the project website, describing how the project
collects and uses data either provided through the mailing list subscription
form or gathered for the purpose of compiling the Stakeholder Contact
Database. 17 This includes information on the purpose and legal basis for
data collection, how the project stores and shares data, and data subjects’
rights, including the rights of access, rectification, restriction, erasure
and objection to processing.
### Notification of Contacts
It was recommended that data subjects be made aware that their details had
been gathered and added to the SHAPE-ID Stakeholder Contact Database as soon
as possible, with an explanation of the reasons for this and information on
how to opt out.
An email introducing SHAPE-ID, explaining that the project had added the
organisation and/or individual contact name and address to a Stakeholder
Contact Database, how this data will be used, and how to opt out if desired,
was prepared and is sent to all contacts prior to the data being made public,
using Mailchimp to issue emails. A link to the SHAPE-ID Privacy Notice, with
contact details for the Project Manager and the TCD Data Protection Office, is
also included in the email. Where contacts request removal or amendment of
their data this is done promptly.
Contacts may also opt out easily by clicking in the footer of any subsequent
email they receive from SHAPE-ID through Mailchimp.
## Partner Institutional Guidelines on Research Ethics
Each partner will adhere to their own organisation’s guidelines and practices
concerning the ethical conduct of research and will act in compliance with the
relevant national laws transcribing the General Data Protection Regulations.
Specific guidelines are detailed below where applicable.
### TCD
TCD complies with the requirements of the GDPR and the Irish Data Protection
Act 2018. As Coordinator, TCD will consult its own Data Protection Office for
guidance on any issues concerning compliance with these legal requirements.
Research in TCD is conducted in accordance with the University’s Policy on
Good Research Practice 17 and the TCD Ethics Policy. 18 Researchers in TCD
are required to seek ethical approval from a School or Faculty Ethics
Committee prior to commencing any research involving human participants. TCD
will seek approval as required from the Faculty of Arts, Humanities and Social
Sciences Ethics Committee.
### ISINNOVA
ISINNOVA comply with GDPR requirements in all their practices and all data
collection and processing will be carried out in accordance with these
requirements. Furthermore, where derogations are evident, these will be
carried out in accordance with the Italian Data Protection Code (Legislative
decree no. 196/2003, Data Protection Code or DPC) of 2018.
### ETH Zürich
Ethical research practice at ETH Zürich is guided by the ETH Zürich Compliance
Guide for Integrity and ethics in research. 19 Details of ethics approval
procedures are available at: _https://ethz.ch/en/research/ethics-and-animal-welfare/research-ethics.html_ .
### UEDIN
Ethics approval for UEDIN’s elements of this research has been obtained from
the University of Edinburgh School of Social and Political Science. The
applicable guidelines are available at
_http://www.sps.ed.ac.uk/research/research_ethics/ethical_review_process_for_staff._
UEDIN complies fully with the requirements of the GDPR and the UK Data
Protection Act 2018.
### IBL
IBL complies fully with the requirements of the GDPR and the Polish Personal
Data Protection Act of 10 May 2018, as described in local law and
institutional guidelines.
# Institutional Data Management Practices
In addition to the principles and practices outlined above, a number of
partners are required to comply with their own institutions’ relevant policies
and guidelines on data management.
## TCD
* TCD Data Protection Policy: _https://www.tcd.ie/info_compliance/data-protection/policy/_
* TCD Open Access Publications Policy: _http://www.tara.tcd.ie/bitstream/handle/2262/80574/TCD%20Open%20Access%20Policy%281%29%281%29.pdf_
TCD is also working towards implementing the LERU Open Science Roadmap 20
and Ireland’s recently launched ‘National Framework for the Transition to an
Open Research Environment’ 21 .
## ETH Zürich
* Directive on “Information Security at ETH Zurich”:
_https://rechtssammlung.sp.ethz.ch/Dokumente/203.25en.pdf_
* ETH Zürich Information Security Guidelines: _https://ethz.ch/services/en/it-services/itsecurity/guidelines.html_
## UEDIN
* University of Edinburgh Data Protection Policy (and handbook): _https://www.ed.ac.uk/records-management/policy/data-protection_
* University of Edinburgh Information Security Policy: _https://www.ed.ac.uk/informationservices/about/policies-and-regulations/security-policies/security-policy_
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0448_FotoInMotion_780612.md
# Introduction
The amount of digital content available to creative industries is growing
exponentially, driven by the ubiquitous use of smartphones and the
proliferation of social media:
* there is a tremendous increase in the amount of photographic content (more than 1.8 billion photos are uploaded to social media platforms each day and more than 700 million monthly users on Instagram);
* the ongoing transformation of factual, entertainment and social media publishers and platforms from textual and photo-centric format to video-driven format (more than 400 hours of video are being uploaded to YouTube each minute and 1 billion hours of video watched every day);
* the increasing impact of 3D and virtual reality for providing immersive storytelling experiences, offering new ways of audience engagement and monetization for content creators in the upcoming years.
Acknowledging the above, the following critical questions become pressing in
both content production and dissemination contexts: how to repurpose this
massive amount of content; what kind of innovative tools are most suitable for
this process; and finally, how these tools can offer new monetization
possibilities for creative industries' professionals.
FotoInMotion sets out to solve these critical questions and to provide an
innovative solution for repurposing content, offering automated tools for
contextual data extraction, object recognition, creative transformation,
editing and text animation, as well as state-of-the-art 3D conversion options
that allow content creators to transform their photos into highly engaging
spatial and three-dimensional video experiences.
FotoInMotion will focus on three major creative industries sectors:
photojournalism to develop interactive photo driven stories; fashion, by
opening up new forms of marketing, product placement and event coverage; and
festivals, by enabling PR and publicity managers to communicate the festival
experiences and engage audiences through immersive communication and
repurposing festival archives.
FotoInMotion aims to build a web and mobile video-editing tool designed to
transform single photographs into high-quality, low-cost videos for creative
industries’ professional and social usage. FotoInMotion will allow both
professional content producers, as well as creative citizens, to automatically
embed contextual information into a single photo or a set of photos, and
produce videos with rich semi-automated editing functions and dynamic effects
that can be easily shared on social media, as well as on professional digital
content delivery platforms, engaging them into new forms of immersive and high
impact storytelling in professional and social media utilizing video.
FotoInMotion videos could be used i) by news organisations to report on an
event or create high-impact video editorials, ii) by creative industries
professionals and companies to promote products or services through social and
digital media marketing campaigns, iii) by festivals and cultural events to
provide on-site coverage and engage audiences, or generally iv) by individuals
who want to give new power to their photographs and explain the contexts and
settings in which they were taken.
## Document Scope
The FotoInMotion consortium consists of representative creative-industry
organisations that want to enhance their legacy data with rich and attractive
multimedia content capable of strengthening the messages they want to promote
throughout society.
Given the volume of data to be processed in the FotoInMotion project, together
with the data to be generated using the FotoInMotion technological tools, it
is important at this stage to formulate a framework that facilitates the
usability and re-usability of the data.
The technological framework and the services offered by the FotoInMotion
project must by design be compatible with the FAIR principles: FotoInMotion
data needs to be findable, accessible, interoperable and re-usable.
All the regulations and ethical restrictions which apply in the FotoInMotion
project framework will be thoroughly defined and incorporated in the
FotoInMotion processes without limiting the quality and the accuracy of the
provided services.
It is also very important to provide secure and controlled data storage that
allows easy data retrieval while protecting the integrity and quality of the
stored datasets from loss and damage.
Since the project is still in the user-requirements gathering and architecture
design phase, this version of the DMP is preliminary and will be updated in
Month 18 (June 2019) with more concrete information as the FotoInMotion
project evolves.
## Document Structure
This document comprises the following chapters:
**Chapter 1** presents an introduction to the project and the document.
**Chapter 2** presents the data summary including the purpose of data
collection, data size, type and format, historical data reuse and data
beneficiaries.
**Chapter 3** presents FotoInMotion FAIR data strategies.
**Chapter 4** describes data security.
**Chapter 5** presents FotoInMotion concerns related to personal data
protection.
# Data Summary
As described in the introduction of the DMP, the FotoInMotion project aims to
enhance the content produced by the creative industries, particularly in the
domains of photojournalism, film festivals and the fashion industry.
Accomplishing the objectives of the project requires the support and guidance
of important representatives of these domains: NOOR and Worldcrunch for the
photojournalism case study, Tallinn Black Nights Film Festival (TBNFF) for the
film festival case study and Marni for the fashion case study.
Thus the data used to explore and validate the FotoInMotion technological
modules, which will be developed by the project's technological partners, will
be provided by the project partners from the photojournalism, film festival
and fashion domains.
Photographs delivered by photojournalists are provided to the project in two
formats. JPEG is the common format, but for quality, and to guarantee the work
has not been altered, photographers also use RAW files. NOOR's clients may
request the RAW files before publication in order to verify that no
manipulation has been done.
The photos are produced by professional photographers with their cameras, and
the files are always digital; analogue originals are digitised by the
photographers to create digital files. The archive of photos available for the
FotoInMotion project consists of approximately 56,000 items. Potential target
groups for the data generated by the Photojournalism Pilot include news
agencies, media companies, photographers, editorial clients and image buyers.
The data that will be provided by TBNFF for the film festivals case study are:
* Photos – portraits of filmmaker and event photos (landscapes, portraits, closeups) made at parties, film screenings, galas, cinemas, industry panels, workshops, concerts.
* Marketing and merchandising photos, “behind the scenes” photos.
* Film still photos for catalogue, website, in venue, social marketing
* Film related metadata: title, original title, synopsis, cast and crew, credits, sales company, festival program, screening data
* Festival guest related metadata: name, position, company, participation in
specific festival program (jury, screenings, industry, VIP), short biography
* Event related metadata: name of the event, type of the event (screening, concert, gala, special event, workshop, conference, panel)

The data provided by TBNFF derives mainly from:
* Photos made by hired photographers and festival social media/marketing team.
* Film stills provided by film distribution companies licensed for festival related use.
* Film, guest metadata provided, submitted and verified by the particular person through festival management database solution Eventival (eventival.com) and processed by the festival staff for festival related use.
The data archives available for the Film Festival Pilot consist of:
* 8054 photos per year (2017). Photo sizes depend on the level of the photographers and range from 2 MB to 22 MB.
* Approximately 1500 persons related entries per year
* Approximately 3000 entries for films per year
The target groups for the data generated by the Film Festival Pilot might be
the public, press, news agencies, photo banks, industry professionals, PR,
distribution and sales companies, festival guests (depicted in the photos) and
the festival audience (general public).
The data that will be provided by MARNI for the fashion case study mainly
consist of photo datasets with clothing collections, fashion garments and
accessories, and photo collections from MARNI’s fashion events. All the
datasets to be used for the Fashion Pilot are the property of MARNI and were
created by professional photographers; they comprise thousands of photos that
will be made available to the FotoInMotion project. The primary target groups
for the data generated by the Fashion Pilot are MARNI itself and the fashion
media and press.
The content that will be produced from the three pilots will be mainly short
videos with enriched audiovisual content, such as 3D effects embedded with
structured metadata. The volume of the data to be produced during the project
lifespan is expected to be significant taking into account the nature of the
generated content.
# FAIR data
## Making data findable, including provisions for metadata
The naming conventions used by the FotoInMotion content providers can be
described as follows:
NOOR’s photo datasets follow a naming convention that provides detailed
information about the photo collection and the attributes of each image,
including context, place, date, photographer’s name and other related
information. The images provided by NOOR are accompanied by IPTC metadata.
TBNFF photos also follow a concrete naming convention: the file name includes
the festival abbreviation, the event name (or, for portraits, the name of the
person depicted) and the date.
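A hypothetical sketch of how such a file name might be assembled programmatically; the separator, slug style and date format are assumptions, since the convention above only lists the elements involved:

```python
from datetime import date

def tbnff_filename(abbrev, subject, shot_date, ext="jpg"):
    """Build a file name from a festival abbreviation, an event name or
    portrait subject, and a date. Separator and date format are assumed;
    the stated convention only specifies which elements appear."""
    slug = "-".join(subject.lower().split())
    return f"{abbrev}_{slug}_{shot_date.isoformat()}.{ext}"

# Illustrative call only (abbreviation and event are examples):
print(tbnff_filename("POFF", "Opening Gala", date(2017, 11, 17)))
```

Encoding the convention in one function keeps names consistent across photographers and makes later parsing of archives straightforward.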
MARNI’s filing system assigns each fashion photo a dedicated inventory number
composed of digits and letters.
The data provided for the festival and fashion pilots do not use any
particular metadata schema; FotoInMotion will therefore provide a mechanism to
import the data into the project’s repository with concrete metadata that
allows the content to be reused.
FotoInMotion will produce metadata describing the content and the context of
acquisition. Content metadata will be made available by content owners in the
project (archive content), produced by the image analysis tools, or
contributed manually. FotoInMotion will also acquire context metadata captured
by sensors, which provides additional information about the context in which
the photo was taken. This metadata may include i) low-level metadata and ii)
high-level/textual metadata. Examples of i) are numerical features extracted
from images, such as colour histograms, and numerical values extracted from
the smartphone sensors, such as sets of acceleration or rotation values.
Examples of ii) are keywords inserted by the user or derived by the AI level
of the image analysis tools, such as “person”, “bag” or “car”, and audio
recordings from the smartphone sensor with a tag indicating “metallic noise”
or “human voice”. To ensure portability, text/xml format will be used as the
output of each of the modules.
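As an illustration of the text/xml output format, the sketch below serialises a small content-plus-context metadata record with Python's standard library; all element and attribute names are hypothetical, since the FotoInMotion schema is still to be defined:

```python
import xml.etree.ElementTree as ET

# Hypothetical element names: the FotoInMotion schema is not yet defined.
photo = ET.Element("photo", id="example-001")
content = ET.SubElement(photo, "contentMetadata")
ET.SubElement(content, "keyword").text = "person"     # from image analysis
ET.SubElement(content, "keyword").text = "bag"
context = ET.SubElement(photo, "contextMetadata")
ET.SubElement(context, "rotation", axis="z").text = "0.42"  # sensor value
ET.SubElement(context, "audioTag").text = "human voice"

xml_out = ET.tostring(photo, encoding="unicode")
print(xml_out)
```

Because the output is plain text/xml, each module can emit and consume it independently, which is the portability the paragraph above aims for.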
The project will define its own metadata schema for the representation of
digital events. The aim is to define a model that will enable to coherently
gather and express all the contextual multimedia data related to the picture,
including annotations, obtained in the three tasks of WP2. We will investigate
the suitability of some standard metadata schemas to cover parts of this model
and adapt them to the project requirements. Examples of possible schemas to be
considered include Dublin Core, IPTC and MPEG-7 and MPEG-21 standards.
## Making data openly accessible
The datasets used and generated in the FotoInMotion project will be licensed
under the strict copyright of their providers/owners. Nevertheless, the
FotoInMotion project will not restrict the generation and usage of
non-licensed content (open data).
The data provided by the FotoInMotion pilot partners (NOOR, Worldcrunch, TBNFF
and MARNI) will be made available in the FotoInMotion cloud-based data
repository, together with the data generated by the FotoInMotion
technological outcomes. The videos of the narration tool will be produced on
the server of QdepQ and sent back through the API to the FotoInMotion
repository. The main objective of FotoInMotion is to create highly
discoverable data that can be easily shared and reused by the users.
All the documentation related to FotoInMotion content generation mechanisms
will be accessible and provided to interested parties.
## Making data interoperable
Two types of metadata will be produced: context metadata and content metadata.
Context metadata shall be acquired by the ECAT and Content Metadata will
either be manually produced, acquired from external archives or result from
image analysis. One of the objectives of the project is to define a metadata
model able to integrate all the different types of information. It may refer
to existing standard solutions whenever possible. Examples include IPTC,
Dublin Core and MPEG.
Mechanisms for exchange and re-use of data will be considered. Due to the lack
of one single standard supported by content owners and external archives,
import and export tools shall be added as required to increase the number of
supported external systems. The project will use, whenever possible, standard
concepts and approaches for content description. Data models are still to be
defined and have a strong dependency on the user requirements still being
refined. Whenever possible standard approaches will be followed.
## Increase data re-use (through clarifying licences)
The datasets used and generated in the FotoInMotion project will be licensed
under the strict copyright of their providers/owners, while the generation and
usage of non-licensed content (open data) will not be restricted.
The FotoInMotion generated content is intended to be shared and available
(apart from the FotoInMotion repository) through Social Networks and other web
communication channels. During the Pilot phase period users from the three
pilot domains will be invited to use and validate the FotoInMotion content
generation mechanisms and evaluate the generated content quality.
The content used and generated by the FotoInMotion project will be available
for at least 3 years. The time period for data preservation will be defined at
the end of the project, based on the Consortium’s decision.
# Data security
Storage safety and the prevention of data harm and loss are among the main
concerns of the FotoInMotion project. The project’s technological partners,
and particularly ATC, the FotoInMotion integrator, will undertake all the
necessary measures to avoid any undesirable situation related to data
security. The FotoInMotion repository will be cloud based; the project
consortium will therefore select an appropriate cloud service provider able to
provide all the necessary safeguards (regular backups, data recovery etc.) to
secure the integrity of the FotoInMotion data. See the WP4 technical
deliverables (e.g. D4.1) for the measures taken to assure security.
# Ethical aspects
Personal data protection and respect for privacy is another important issue
that will be lawfully and accurately handled by the FotoInMotion project. Any
personal data processed by the FotoInMotion project will be handled in
accordance with the restrictions and obligations of the General Data
Protection Regulation, particularly Articles 5 and 6, with respect to the
rights of the data subjects (GDPR, Chapter III).
0452_A-LEAF_732840.md
|
# 1 INTRODUCTION
The purpose of the DMP is to provide an overview of the main elements of the
data management policy that will be used by the Consortium with regard to the
project research data. The DMP is not a fixed document but will evolve during
the lifespan of the project.
The DMP covers the complete research data life cycle. It describes the types
of research data that will be collected, processed or generated during the
project, how the research data will be preserved and what parts of the
datasets will be shared or kept confidential.
This document is the first version of the DMP, delivered in Month 3 of the
project. It includes an overview of the datasets to be produced by the
project, and the specific conditions that are attached to them. The next
versions of the DMP will be updated in Month 12 (D7.6), Month 24 (D7.7) and
Month 36 (D7.8) respectively as the project progresses.
This Data Management Plan describes the **A-LEAF** strategy and practices
regarding the provision of Open Access to scientific publications,
dissemination and outreach activities, public deliverables and research
datasets that will be produced.
Categories of outputs to which **A-LEAF** will give Open Access (free of
charge), to be agreed upon and approved by the Exploitation and Dissemination
Committee (EDC), include:
* Scientific publications (peer-reviewed articles, conference proceedings, workshops)
* Dissemination and Outreach material
* Deliverables (public)
<table>
<tr>
<th>
**A-LEAF public deliverables**
</th>
<th>
**Month**
</th> </tr>
<tr>
<td>
Kick off meeting agenda
</td>
<td>
1
</td> </tr>
<tr>
<td>
Project Management Book
</td>
<td>
3
</td> </tr>
<tr>
<td>
Project Report 1(Public version)
</td>
<td>
16
</td> </tr>
<tr>
<td>
Project Report 2 (Public version)
</td>
<td>
32
</td> </tr>
<tr>
<td>
Final Report
</td>
<td>
50
</td> </tr>
<tr>
<td>
A-LEAF DMP (and updates)
</td>
<td>
2, 12, 24, 36
</td> </tr>
<tr>
<td>
Web-page and logo
</td>
<td>
2
</td> </tr>
<tr>
<td>
A-LEAF Dissemination and Exploitation Plan (and updates)
</td>
<td>
3, 12, 24, 36
</td> </tr>
<tr>
<td>
A-LEAF Communication and Outreach Plan (and updates)
</td>
<td>
4, 12, 24, 36
</td> </tr> </table>
* Research Data
* Computational Data
Any dissemination data linked to exploitable results will not be put into the
open domain if this would compromise their commercialisation prospects or if
they have inadequate protection.
1.1. **A-LEAF** strategy and practices
The decision to be taken by the project on how to publish its documents and
data sets will come after the more general decision on whether to go for an
academic publication directly or to seek first protection by registering the
developed Intellectual Property. Open Access must be granted to all scientific
publications resulting from Horizon 2020 actions. This will be done in
accordance with the Guidelines on Open Access to Scientific Publications and
Research Data in Horizon 2020 (15 February 2016) [1].
_**Concerning publications** _ , the consortium will provide open access
following the ‘Gold’ model: an article is immediately released in Open Access
mode by the scientific publisher. A copy of the publication will be deposited
in a public repository, OpenAIRE and ZENODO or those provided by the host
institutions, and available for downloading from the **A-LEAF** webpage. The
associated costs are covered by the author/s of the publication as agreed in
the dissemination and exploitation plan (eligible costs in Horizon 2020
projects).
_**Concerning research data** _ , the main obligations of participating in the
Open Research Data Pilot are:
1. To make it possible for third parties to _access_ , _mine_ , _exploit_ , _reproduce_ and _disseminate_ \- free of charge for any user - the following:
1. the published data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible
2. other data, including raw data and associated metadata, as specified and within the deadlines laid down in the data management plan; and
2. To provide information about _tools_ and _instruments_ at the disposal of the beneficiaries and necessary for validating the results.
**A-LEAF** follows the Guidelines on Data Management in Horizon 2020 (15
February 2016) [2].
The consortium has chosen ZENODO [3] as the central scientific publication and
data repository for the project outcomes. The repository has been designed to
help researchers based at institutions of all sizes to share results in a wide
variety of formats across all fields of science. The online repository has
been created through the European Commission’s OpenAIREplus project and is
hosted at CERN.
ZENODO enables users to:
* easily share the long tail of small data sets in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science
* display and curate research results, get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the European Commission
* easily access and reuse shared research results
* define the different licenses and access levels that will be provided
Furthermore, ZENODO assigns a Digital Object Identifier (DOI) to all publicly
available uploads, in order to make content easily and uniquely citable.
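The deposit workflow can be scripted. The sketch below builds the metadata payload for a new deposition following Zenodo's public REST deposit API; the endpoint and field names follow that API's documentation, while the title, creators and access token shown are purely hypothetical.

```python
import json
from urllib import request

# Public REST endpoint documented by Zenodo (assumption: current API version).
ZENODO_API = "https://zenodo.org/api/deposit/depositions"


def make_deposition_metadata(title, creators, description, upload_type="dataset"):
    """Build the metadata payload for a new Zenodo deposition.

    `creators` is a list of {"name": "Family, Given"} dicts. Once the
    deposition is published, Zenodo mints a DOI for it automatically.
    """
    return {
        "metadata": {
            "title": title,
            "upload_type": upload_type,
            "description": description,
            "creators": creators,
            "access_right": "open",
        }
    }


# Hypothetical usage -- requires a real personal access token:
# payload = make_deposition_metadata("A-LEAF dataset", [{"name": "Doe, Jane"}],
#                                    "Underlying data for a publication")
# req = request.Request(ZENODO_API + "?access_token=<TOKEN>",
#                       data=json.dumps(payload).encode("utf-8"),
#                       headers={"Content-Type": "application/json"})
# request.urlopen(req)  # returns the new deposition record
```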
# 2 SCIENTIFIC PUBLICATIONS
2.1 Dataset Description
As described in the DoA (Description of Action), the consortium will produce a
number of publications in journals with the highest impact in
multidisciplinary science. As mentioned above, publications will follow the
“Gold Open Access” policy. The Open Access publications will be available for
downloading from the **A-LEAF** webpage ( _www.a-leaf.eu_ ) and also stored
in the ZENODO/OpenAIRE repository.
2.2 Data sharing
The Exploitation and Dissemination Committee (EDC) will be responsible for
monitoring and identifying the most relevant outcomes of the **A-LEAF**
project to be protected. Thus, the EDC (as described in the Dissemination and
Exploitation plan) will also decide whether results arising from the
**A-LEAF** project can pursue peer-review publication.
The publications will be stored at least in the following sites:
* The ZENODO repository
* The **A-LEAF** website
* OpenAIRE
2.3 DOI
The DOI (Digital Object Identifier) uniquely identifies a document. This
identifier will be assigned by the publisher in the case of publications.
2.4 Archiving and preservation
Open Access, through the **A-LEAF** public website, will be maintained for at
least 3 years after the project completion.
Items deposited in ZENODO, including all the scientific publications, will be
archived and retained for the lifetime of the repository, which is currently
the lifetime of the host laboratory CERN (at least for the next 20 years).
# 3 DISSEMINATION / OUTREACH MATERIAL
3.1 Dataset Description
The dissemination and outreach material refers to the following items:
* Conferences: all academic partners of **A-LEAF** will attend the most relevant conferences and promote the results of the project through oral talks and/or posters.
* Workshops: two workshops will be organized in M24 and M48 to promote awareness of the **A-LEAF** objectives and results (data produced: presentations and posters).
* Dissemination material: flyers, videos, public presentations, **A-LEAF** newsletter, press releases, tutorials, etc.
* Communication material: website, social media, press desk, audiovisual material. Outreach activities for project’s promotion to the general public.
3.2 Data sharing
All the dissemination and communication material will be available (during and
after the project) on the **A-LEAF** website and ZENODO.
3.3 Archiving and preservation
Open Access, through the **A-LEAF** public website, will be maintained for at
least 3 years after the project completion. All the public dissemination and
outreach material will be archived and preserved on ZENODO and will be
retained for the lifetime of the repository.
# 4 PUBLIC DELIVERABLES
4.1 Dataset Description
The documents associated with all the public deliverables defined in the Grant
Agreement will be accessible in open access mode. The present document, the
**A-LEAF** Data Management Plan, is one of the public deliverables that, after
submission to the European Commission, will be immediately released in open
access mode on the **A-LEAF** webpage, the CORDIS website and ZENODO.
<table>
<tr>
<th>
**A-LEAF public deliverables**
</th> </tr>
<tr>
<td>
Kick off meeting agenda
</td> </tr>
<tr>
<td>
Project Management Book
</td> </tr>
<tr>
<td>
Project Report 1 (public version)
</td> </tr>
<tr>
<td>
Project Report 2 (public version)
</td> </tr>
<tr>
<td>
Final Report
</td> </tr>
<tr>
<td>
A-LEAF DMP (and updates)
</td> </tr>
<tr>
<td>
Web-page and logo
</td> </tr>
<tr>
<td>
A-LEAF Dissemination and Exploitation Plan (and updates)
</td> </tr>
<tr>
<td>
A-LEAF Communication and Outreach Plan (and updates)
</td> </tr> </table>
All other deliverables, marked as confidential in the Grant Agreement, will be
accessible only to the members of the consortium and the Commission services.
These will be stored in the **A-LEAF** intranet with restricted access to the
consortium members. The Project Coordinator will also store a copy of the
confidential deliverables.
4.2 Data sharing
Open Access to **A-LEAF** public deliverables will be achieved by depositing
the data into an online repository. The public deliverables will be stored in
one or more of the following locations:
* The **A-LEAF** website, after approval by the Project Advisory Board (PAB) (if the document is subsequently updated, the original version will be replaced by the latest version)
* The CORDIS website will host all public deliverables as submitted to the European Commission. The **A-LEAF** page on CORDIS is:
_http://cordis.europa.eu/project/rcn/206200_en.html_
* ZENODO repository
4.3 Archiving and preservation
Open Access, through the **A-LEAF** public website will be maintained for at
least 3 years after the project completion.
All public deliverables will be archived and preserved on ZENODO and will be
retained for the lifetime of the repository.
# 5 RESEARCH DATA
5.1 Dataset Description
Besides the open access to the data described in the previous sections, the
Open Research Data Pilot also applies to two types of data:
* The data, including metadata, needed to validate the results presented in scientific publications (underlying data).
* Other data, including associated metadata. The PAB will be able to choose which data (besides the data associated to publications) they make available in open access mode.
All data collected and/or generated will be stored according to the following
format:
## A-LEAF_WPX_TaskX.Y/Title_Institution_Date
If the data cannot be directly linked or associated to a specific Work
Package and/or task, a self-explanatory title for the data will be used
according to the following format:
_**A-LEAF_Title_Institution_Date** _
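As an illustration only, the naming convention above can be encoded in a small helper. The function name and the ISO date form are assumptions, since the DMP does not fix a date notation.

```python
from datetime import date


def dataset_name(title, institution, when, wp=None, task=None):
    """Compose a dataset name following the A-LEAF convention.

    With WP and task:  A-LEAF_WPX_TaskX.Y/Title_Institution_Date
    Otherwise:         A-LEAF_Title_Institution_Date
    The ISO date form (YYYY-MM-DD) is an assumed choice.
    """
    suffix = "{}_{}_{}".format(title, institution, when.isoformat())
    if wp is not None and task is not None:
        return "A-LEAF_WP{}_Task{}/{}".format(wp, task, suffix)
    return "A-LEAF_{}".format(suffix)
```

Applying a helper like this consistently makes the stored datasets trivially findable by work package, partner and date.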
# 6 COMPUTATIONAL DATA
The computational data produced by the simulations will be stored, following
the same procedure as before, at the local nodes of ioChem-BD.org, which
allows DOIs to be generated for the individual datasets from the calculations
and ensures their reproducibility.
# 7 RESPONSIBILITY FOR THE IMPLEMENTATION OF THE DMP
The consortium will make a selection of relevant information, disregarding
anything not relevant for the validation of the published results.
Furthermore, following the procedure described in section 2.2, the data
generated will be carefully analysed before giving open access to it in order
to be aligned with the exploitation policy described in the Dissemination and
Exploitation Plan (D7.3).
Therefore, data sharing in open access mode can be restricted where there is a
legitimate reason to protect results expected to be commercially or
industrially exploited. Approaches to limit such restrictions will include
agreeing on a limited embargo period or publishing selected (non-confidential) data.
The selected research data and/or data with an embargo period, produced in
**A-LEAF** will be deposited into an online research data repository (ZENODO)
and shared in open access mode.
Each partner of the consortium will be responsible for the storage and backup
of the data produced in their respective host institutions. Furthermore, each
partner is responsible for uploading all relevant data produced during the
project to the **A-LEAF** intranet (restricted to the members of the
consortium) and for informing the rest of the consortium once it is uploaded.
The coordinator will be responsible for collecting all the public data and
uploading it to the **A-LEAF** public website and ZENODO.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0453_A-LEAF_732840.md
|
# INTRODUCTION
The purpose of the DMP is to provide an overview of the main elements of the
data management policy that will be used by the Consortium with regard to the
project research data. The DMP is not a fixed document but will evolve during
the lifespan of the project.
The DMP covers the complete research data life cycle. It describes the types
of research data that will be collected, processed or generated during the
project, how the research data will be preserved and what parts of the
datasets will be shared or kept confidential.
This document is the second version of the DMP, delivered in Month 13 of the
project. It includes an overview of the datasets to be produced by the
project, and the specific conditions that are attached to them. The next
versions of the DMP will be updated in Month 24 (D7.7) and Month 36 (D7.8)
respectively as the project progresses.
This Data Management Plan describes the **A-LEAF** strategy and practices
regarding the provision of Open Access to scientific publications,
dissemination and outreach activities, public deliverables and research
datasets that will be produced.
Categories of outputs to which **A-LEAF** will give Open Access (free of
charge), to be agreed upon and approved by the Exploitation and Dissemination
Committee (EDC), include:
* Scientific publications (peer-reviewed articles, conference proceedings, workshops)
* Dissemination and Outreach material
* Deliverables (public)
<table>
<tr>
<th>
**A-LEAF public deliverables**
</th>
<th>
**Month**
</th> </tr>
<tr>
<td>
Kick off meeting agenda
</td>
<td>
1
</td> </tr>
<tr>
<td>
Project Management Book
</td>
<td>
3
</td> </tr>
<tr>
<td>
Project Report 1(Public version)
</td>
<td>
16
</td> </tr>
<tr>
<td>
Project Report 2 (Public version)
</td>
<td>
32
</td> </tr>
<tr>
<td>
Final Report
</td>
<td>
50
</td> </tr>
<tr>
<td>
A-LEAF DMP (and updates)
</td>
<td>
2, 12, 24, 36
</td> </tr>
<tr>
<td>
Web-page and logo
</td>
<td>
2
</td> </tr>
<tr>
<td>
A-LEAF Dissemination and Exploitation Plan (and updates)
</td>
<td>
3, 12, 24, 36
</td> </tr>
<tr>
<td>
A-LEAF Communication and Outreach Plan (and updates)
</td>
<td>
4, 12, 24, 36
</td> </tr> </table>
* Research Data
* Computational Data
Any dissemination data linked to exploitable results will not be put into the
open domain if this would compromise their commercialisation prospects or if
they have inadequate protection.
1.1. **A-LEAF** strategy and practices
The decision to be taken by the project on how to publish its documents and
data sets will come after the more general decision on whether to go for an
academic publication directly or to seek first protection by registering the
developed Intellectual Property (IP). Open Access must be granted to all
scientific publications resulting from Horizon 2020 actions. This will be done
in accordance with the Guidelines on Open Access to Scientific Publications
and Research Data in Horizon 2020 (15 February 2016) [1].
_**Concerning publications** _ , the consortium will provide open access
following the ‘Gold’ model: an article is immediately released in Open Access
mode by the scientific publisher. A copy of the publication will be deposited
in a public repository, OpenAIRE and ZENODO or those provided by the host
institutions, and available for downloading from the **A-LEAF** webpage. The
associated costs are covered by the author/s of the publication as agreed in
the dissemination and exploitation plan (eligible costs in Horizon 2020
projects).
_**Concerning research data** _ , the main obligations of participating in the
Open Research Data Pilot are:
1. To make it possible for third parties to _access_ , _mine_ , _exploit_ , _reproduce_ and _disseminate_ \- free of charge for any user - the following:
1. the published data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible
2. other data, including raw data and associated metadata, as specified and within the deadlines laid down in the data management plan; and
2. To provide information about _tools_ and _instruments_ at the disposal of the beneficiaries and necessary for validating the results.
**A-LEAF** follows the Guidelines on Data Management in Horizon 2020 (15
February 2016) [2].
The consortium has chosen ZENODO [3] as the central scientific publication and
data repository for the project outcomes. This repository has been designed to
help researchers based at institutions of all sizes to share results in a wide
variety of formats across all fields of science. The online repository has
been created through the European Commission’s OpenAIREplus project and is
hosted at CERN.
ZENODO enables users to:
* easily share the long tail of small data sets in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science
* display and curate research results, get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the European Commission
* easily access and reuse shared research results
* define the different licenses and access levels that will be provided
Furthermore, ZENODO assigns a Digital Object Identifier (DOI) to all publicly
available uploads, in order to make content easily and uniquely citable.
# SCIENTIFIC PUBLICATIONS
2.1 Dataset Description
As described in the DoA (Description of Action), the consortium will produce a
number of publications in journals with the highest impact in
multidisciplinary science. As mentioned above, publications will follow the
“Gold Open Access” policy. The Open Access publications will be available for
downloading from the **A-LEAF** webpage ( _www.a-leaf.eu_ ) and also stored
in the ZENODO/OpenAIRE repository.
2.2 Data sharing
The Exploitation and Dissemination Committee (EDC) will be responsible for
monitoring and identifying the most relevant outcomes of the **A-LEAF**
project to be protected. Thus, the EDC (as described in the Dissemination and
Exploitation plan) will also decide whether results arising from the
**A-LEAF** project can pursue peer-review publication.
The publications will be stored at least in the following sites:
* The ZENODO repository
* The **A-LEAF** website
* OpenAIRE
2.3 DOI
The DOI (Digital Object Identifier) uniquely identifies a document. This
identifier will be assigned by the publisher in the case of publications.
2.4 Archiving and preservation
Open Access, through the **A-LEAF** public website, will be maintained for at
least 3 years after the project completion.
Items deposited in ZENODO, including all the scientific publications, will be
archived and retained for the lifetime of the repository, which is currently
the lifetime of the host laboratory CERN (at least for the next 20 years).
# DISSEMINATION / OUTREACH MATERIAL
3.1 Dataset Description
The dissemination and outreach material refers to the following items:
* Conferences: all academic partners of **A-LEAF** will attend the most relevant conferences and promote the results of the project through oral talks and/or posters.
* Workshops: two workshops will be organized in M24 and M48 to promote awareness of the **A-LEAF** objectives and results (data produced: presentations and posters).
* Dissemination material: flyers, videos, public presentations, **A-LEAF** newsletter, press releases, tutorials, etc.
* Communication material: website, social media, press desk, audiovisual material. Outreach activities for project’s promotion to the general public.
3.2 Data sharing
All the dissemination and communication material will be available (during and
after the project) on the **A-LEAF** website and ZENODO.
3.3 Archiving and preservation
Open Access, through the **A-LEAF** public website, will be maintained for at
least 3 years after the project completion. All the public dissemination and
outreach material will be archived and preserved on ZENODO and will be
retained for the lifetime of the repository.
# PUBLIC DELIVERABLES
4.1 Dataset Description
The documents associated with all the public deliverables defined in the Grant
Agreement will be accessible in open access mode. The present document, the
**A-LEAF** Data Management Plan update, is one of the public deliverables
that, after submission to the European Commission, will be immediately
released in open access mode on the **A-LEAF** webpage, the CORDIS website and ZENODO.
<table>
<tr>
<th>
**A-LEAF public deliverables**
</th> </tr>
<tr>
<td>
Kick off meeting agenda
</td> </tr>
<tr>
<td>
Project Management Book
</td> </tr>
<tr>
<td>
Project Report 1 (public version)
</td> </tr>
<tr>
<td>
Project Report 2 (public version)
</td> </tr>
<tr>
<td>
Final Report
</td> </tr>
<tr>
<td>
A-LEAF DMP (and updates)
</td> </tr>
<tr>
<td>
Web-page and logo
</td> </tr>
<tr>
<td>
A-LEAF Dissemination and Exploitation Plan (and updates)
</td> </tr>
<tr>
<td>
A-LEAF Communication and Outreach Plan (and updates)
</td> </tr> </table>
All other deliverables, marked as confidential in the Grant Agreement, will be
accessible only to the members of the consortium and the Commission services.
These will be stored in the **A-LEAF** intranet with restricted access to the
consortium members. The Project Coordinator will also store a copy of the
confidential deliverables.
4.2 Data sharing
Open Access to **A-LEAF** public deliverables will be achieved by depositing
the data into an online repository. The public deliverables will be stored in
one or more of the following locations:
* The **A-LEAF** website, after approval by the Project Advisory Board (PAB) (if the document is subsequently updated, the original version will be replaced by the latest version)
* The CORDIS website will host all public deliverables as submitted to the European Commission. The **A-LEAF** page on CORDIS is:
_http://cordis.europa.eu/project/rcn/206200_en.html_
* ZENODO repository
4.3 Archiving and preservation
Open Access, through the **A-LEAF** public website will be maintained for at
least 3 years after the project completion.
All public deliverables will be archived and preserved on ZENODO and will be
retained for the lifetime of the repository.
# RESEARCH DATA
5.1 Dataset Description
Besides the open access to the data described in the previous sections, the
Open Research Data Pilot also applies to two types of data:
* The data, including metadata, needed to validate the results presented in scientific publications (underlying data).
* Other data, including associated metadata. The PAB will be able to choose which data (besides the data associated to publications) they make available in open access mode.
All data collected and/or generated will be stored according to the following
format:
## **A-LEAF_WPX_TaskX.Y/Title_Institution_Date**
If the data cannot be directly linked or associated to a specific Work
Package and/or task, a self-explanatory title for the data will be used
according to the following format:
## **A-LEAF_Title_Institution_Date**
When the data is collected in a public deliverable this other format may also
be used:
_**D.X.Y A-LEAF_ Title of the Deliverable** _
# COMPUTATIONAL DATA
The computational data produced by the simulations will be stored, following
the same procedure as before, at the local nodes of ioChem-BD.org, which
allows DOIs to be generated for the individual datasets from the calculations
and ensures their reproducibility.
# RESPONSIBILITY FOR THE IMPLEMENTATION OF THE DMP
The consortium will make a selection of relevant information, disregarding
anything not relevant for the validation of the published results.
Furthermore, following the procedure described in section 2.2, the data
generated will be carefully analysed before giving open access to it in order
to be aligned with the exploitation policy described in the Dissemination and
Exploitation Plan (D7.3).
Therefore, data sharing in open access mode can be restricted where there is a
legitimate reason to protect results expected to be commercially or
industrially exploited. Approaches to limit such restrictions will include
agreeing on a limited embargo period or publishing selected (non-confidential) data.
The selected research data and/or data with an embargo period, produced in
**A-LEAF** will be deposited into an online research data repository (ZENODO)
and shared in open access mode.
Each partner of the consortium will be responsible for the storage and backup
of the data produced in their respective host institutions. Furthermore, each
partner is responsible for uploading all the research data produced during the
project to the **A-LEAF** intranet (restricted to the members of the
consortium) or for sending it to the coordinator, who will inform the rest of
the consortium once it is uploaded. The coordinator will be responsible for
collecting all the public data and uploading it to the **A-LEAF** public
website and ZENODO.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0457_EW-Shopp_732590.md
|
**Executive summary**
EW-Shopp aims at supporting companies operating in the fragmented European
ecosystem of the eCommerce, Retail and Marketing industries to increase their
efficiency and competitiveness by leveraging deep customer insights that are
too challenging for them to obtain today. The integration of public and
private data collected by different business partners will ensure to cover
customer interactions and activities across different channels, providing
insights on rich customer journeys. These integrated data will be further
enriched with information about weather and events, two crucial factors
impacting consumer choices. To realize these objectives, a platform, also
referred to as EW-Shopp platform, will be built.
The Data Management Plan (DMP) reports on the data that EW-Shopp project will
use and generate during its life, from the set up of the EW-Shopp Platform to
the business exploitation of its services.
The deliverable, following the Horizon 2020 guidelines 1 , defines the
general approach that will be adopted in the context of EW-Shopp project in
terms of data management policies. In accordance with these Guidelines, this
deliverable will include information about the handling of data during and
after the end of the project, reserving a particular attention to the
methodology and standards to be applied.
In addition to the guidelines provided by the European Commission, this
document also refers to the plan to address the legal and ethical issues
related to data that will be collected, in close collaboration with the
activities undertaken by the EW-Shopp Ethics Advisory Board and the main
outcomes from WP7.
The deliverable describes the approach established in EW-Shopp to ensure the
life-cycle management of the public and proprietary datasets provided by the
consortium members to the project, as well as other datasets produced by the
Consortium during the project execution.
In particular, this report describes rules, best practices and standards used
with regard to make the data findable, accessible, interoperable and reusable
(FAIR data) and the process to collect and manage data in compliance with
ethical and legal requirements. The deliverable includes a high-level
description of the four business cases (BC1: Bing Bang, Ceneje, and Browsetel;
BC2: GfK, BC3: Measurence; BC4: Jot Internet Media) and descriptions of the
datasets provided for EW-Shopp project, which aim to detail identification,
origin, format, access, security of the data and to take into account legal
and ethics requirements.
**Chapter 1**
**Introduction**
According to the Guidelines on FAIR Data Management in Horizon 2020, Data
Management Plan (DMP) is a key element of good data management. A DMP
describes the data management life cycle for the data to be collected,
processed and/or generated by a Horizon 2020 project.
This document sets up a DMP in accordance with the H2020 Guidelines, including
information and suggestions about the handling of data during and after the
end of the project, what data will be collected, processed and/or generated,
which methodology and standards will be applied, whether data will be
shared/made open access and how data will be curated and preserved (including
after the end of the project).
In addition to the guidelines provided by the European Commission, this
document also refers to the plan to address the legal and ethical issues
related to data that will be collected.
The deliverable describes the approach established in EW-Shopp to ensure the
life-cycle management of the public and proprietary datasets provided by the
consortium members to the project, as well as other datasets produced by the
Consortium during the project execution, as defined at M6.
In chapter 1 the document defines the principles underlying the EW-Shopp
DMP, the approach followed to generate the structure, the main contents of the
document and links to the other deliverables and documents. In chapter 2, the
document introduces the EW-Shopp project, its purpose, the kinds of datasets
involved in the project, the audience and the responsibilities defined around
the DMP. Chapter 3 introduces core concepts and fundamental legal principles,
outlines an ethical assessment for data owners and, concerning legal
requirements, provides detailed guidelines about the obligations that data
owners need to comply with. In Chapter 4, a high-level description of the four
business cases is reported in order to give an overall view of the project
scope. In Chapter 5, relevant information regarding the datasets is explained
and the process to collect all the information among data owners is described.
Chapter 6 shows, for each dataset, all the information required for dataset
identification, origin, format, access, security and with respect to ethical
and legal requirements. Data storage policies, data archiving, security,
permission, data access, re-use and licensing are discussed in chapter 7.
Finally, the survey that was submitted to all dataset providers is reported in
Annex A.
**1.1 Principles underlying EW-Shopp DMP**
The EW-Shopp project aims at deploying and hosting a platform to ease data
integration tasks, by embedding shared data models, robust data management
techniques and semantic reconciliation methods. This platform will offer a
framework for unification of fragmented business data and its integration with
external event and weather data, which will support data analytics services
that offer key competitive advantages in the modern commerce space.
In general, research data should be 'FAIR', that is findable, accessible,
interoperable and re-usable. These principles precede implementation choices
and do not necessarily suggest any specific technology, standard, or
implementation-solution.
In this context, the Data Management Plan is a key activity and it will deepen
the general principles underlying EW-Shopp Data Management Plan (from [DoA]):
* **EW-Shopp Privacy Policy:** We will set up and explicitly define a Privacy Policy adopted in the EW-Shopp project, with which all partners and data processing activities carried out in the project must comply. […] In case some PD is used in some intermediate data processing step, this information will be properly anonymized and used only upon consent to secondary use collected from the users. The EW-Shopp Privacy Policy will assure that data processing activities in EW-Shopp comply with national and EU legislation, including legislation on personal data protection.
* **Statistical data not containing PD:** The majority of datasets consist of statistical data (all dataset classified as not containing personal data in the data description tables). These data do not contain PD but only information treated at an aggregate level that cannot be linked back to single individuals. Therefore, the specific data subjects will be not visible/ recognizable in such sets of data. These data have been collected by business partners in their daily operations in compliance with national regulations, both in relation to privacy protection and informed consent to data processing.
* **Anonymization of data containing PD:** Other datasets are classified as containing personal data in the data description tables. These data will be anonymized before being used in the project so as to comply with the privacy protection policy and national and EU legislation. Among these datasets, we consider three notable cases, for which we specify how we plan to ensure privacy protection constraints.
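The aggregate-level treatment described above can be illustrated with a minimal suppression-threshold sketch. All field names and the threshold `k` are illustrative assumptions, not the project's actual anonymization procedure: individual-level records are rolled up to group counts, and groups too small to be safely released are withheld.

```python
from collections import defaultdict

def aggregate_without_pd(records, group_fields, k=5):
    """Roll individual-level records up to group counts and suppress
    any group smaller than k, so no single individual is recognizable
    in the released statistics (illustrative sketch only)."""
    counts = defaultdict(int)
    for rec in records:
        counts[tuple(rec[f] for f in group_fields)] += 1
    # Groups below the threshold are withheld rather than released.
    return {group: n for group, n in counts.items() if n >= k}

# Illustrative purchase records (no real data)
purchases = (
    [{"city": "Ljubljana", "category": "TV"}] * 7
    + [{"city": "Milan", "category": "TV"}] * 2
)
released = aggregate_without_pd(purchases, ("city", "category"))
# Only the Ljubljana group (7 >= 5) survives suppression.
```

With this kind of release, the specific data subjects are not visible or recognizable in the output, consistent with the statistical datasets described above.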
**1.2 General Approach**
The EW-Shopp DMP will be developed by taking into account the DMP template
that matches the demands and suggestions of the Guidelines on Data Management
in Horizon 2020, and that is available through the DMPonline platform 2 .
The principal contents indicated in the template are listed below:
* Dataset Description
* Fair data (making data findable, accessible, interoperable and reusable)
* Data security
* Data archiving and preservation
* Ethical aspects
These contents were used as a guide, and the document was then customized
according to the specific requirements of the project.
**1.3 Applicable documents and references**
The following documents are applicable to the subject discussed in this
deliverable, and will be referenced as indicated in round brackets:
1. EW-Shopp – Grant Agreement number 732590 ( [GA] )
2. [GA] Annex 1 – Description of the Action ( [DoA] )
3. EW-Shopp – Consortium Agreement ( [CA] )
4. D7.2 POPD-Requirement No.2 ( [D7.2] )
Short references may be used to refer to project beneficiaries, also
frequently referred to as _partners_ . References are listed in Table 2.
# Table 2. Short references for project partners
| No. | Beneficiary (partner) name as in [GA] | Short reference |
|-----|---------------------------------------|-----------------|
| 1 | UNIVERSITA’ DEGLI STUDI DI MILANO-BICOCCA | UNIMIB |
| 2 | CENEJE DRUZBA ZA TRGOVINO IN POSLOVNO SVETOVANJE DOO | CE |
| 3 | BROWSETEL (UK) LIMITED | BT |
| 4 | GFK EURISKO SRL. | GFK |
| 5 | BIG BANG, TRGOVINA IN STORITVE, DOO | BB |
| 6 | MEASURENCE LIMITED | ME |
| 7 | JOT INTERNET MEDIA ESPAÑA SL | JOT |
| 8 | ENGINEERING – INGEGNERIA INFORMATICA SPA | ENG |
| 9 | STIFTELSEN SINTEF | SINTEF |
| 10 | INSTITUT JOZEF STEFAN | JSI |

**1.4 Updates of this deliverable**
This deliverable will be updated over the course of the project whenever
significant changes arise, to ensure compliance with Horizon 2020 guidelines.
Such changes include the addition of new datasets, changes in consortium
policies, changes in consortium composition, and external factors.
**Chapter 2 Project Data Management**
**2.1 Project purposes**
EW-Shopp aims at supporting companies operating in the fragmented European
ecosystem of the eCommerce, Retail and Marketing industries to increase their
efficiency and competitiveness by leveraging deep customer insights that are
too challenging for them to obtain today.
Improved insights will result from the analysis of large amounts of data,
acquired from different sources and sectors, and in multiple languages. The
integration of consumer and market data collected by different business
partners will cover customer interactions and activities across different
channels, providing insights on rich customer journeys. These integrated data
will be further enriched with information about weather and events, two
crucial factors impacting consumer choices.
By increasing the analytical power gained from the integration of cross-
sectorial, cross-language and new data sources, companies will deploy
real-time responsive services for digital marketing, reporting-style services
for market research, advanced data and resource management services for
Retail & eCommerce companies and their technology providers, and enhanced
location intelligence services. For example, by using a predictive model built
on top of integrated data about the click-through rate of products, weather
and events, we will develop a service that is able to increase advertising of
top-gear sport equipment on a sunny weekend afternoon during the Tour de
France.
To realize these objectives, a platform, also referred to as EW-Shopp
platform, will be built. The platform will support:
* The integration of consumer and market data, covering customer interactions across different channels and in different languages, and providing insights on rich customer journeys
* The enrichment of the integrated data with information about weather and events
* The analysis of the enriched data using visual, descriptive and predictive analytics.
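As a minimal sketch of the enrichment step above, business records can be left-joined with weather context on a shared day/location key. All dataset fields here are hypothetical illustrations, not the project's agreed data models:

```python
from datetime import date

# Hypothetical click-through records from a business partner
clicks = [
    {"day": date(2017, 7, 1), "city": "Ljubljana", "product": "road bike", "ctr": 0.031},
    {"day": date(2017, 7, 2), "city": "Ljubljana", "product": "road bike", "ctr": 0.018},
]

# Hypothetical weather observations keyed by (day, city)
weather = {
    (date(2017, 7, 1), "Ljubljana"): {"condition": "sunny", "temp_c": 29},
    (date(2017, 7, 2), "Ljubljana"): {"condition": "rain", "temp_c": 18},
}

def enrich(records, weather_by_key):
    """Left-join business records with weather context; records with
    no matching observation are kept unchanged."""
    return [{**r, **weather_by_key.get((r["day"], r["city"]), {})} for r in records]

enriched = enrich(clicks, weather)
```

The enriched rows then carry both the business measure (click-through rate) and the contextual features (weather condition, temperature) needed by the descriptive and predictive analytics layer.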
**2.2 Project data**
EW-Shopp makes use of a mix of public and proprietary datasets. The broad
classes of data include the following:
* Market data – data extracted from marketing research and commercial activity
* Consumer data – profiles from marketing research, e-commerce, digital advertising, and IoT devices
* Category/product data – data coming from commercial activities
* Events reported in media – popular online media data
* Weather data and forecasts
The EW-Shopp platform will provide data services and tools to process and
harmonise data. It will produce a set of agreed data models, including a
shared system of entity identifiers to represent the aforementioned datasets.
The data will furthermore be represented in a way that provides support for
multiple input languages.
**2.3 Audience**
Project data are oriented to:
* The consortium partners;
* All stakeholders involved in the project;
* The European Commission.
Because of the sensitivity of the business data used in the EW-Shopp
innovation action, no commitment is made in [DoA] to publish datasets provided
by business partners as open data. For this reason, we do not include
_external stakeholders_ in the audience for project data. By _external
stakeholder_ we refer to a party that is not a beneficiary, not a linked third
party in EW-Shopp, and not the European Commission. Although we do not expect
to make datasets openly accessible to external stakeholders, models and
methodologies developed in the project to support interoperability between
different parties will be disseminated to a larger audience of stakeholders.
**2.4 Roles and responsibilities**
Table 3 describes the main roles of beneficiaries in the consortium and their
responsibilities with regard to data and services developed in the business
cases. In the table, we refer to Business Cases by their number; these are
further explained in Chapter 4.
In the table, we distinguish between two main _roles of beneficiaries in the
consortium_:
* **Business Partners:** partners that develop services within the project, by exploiting the technology developed in the project, i.e., the EW-Shopp platform, on their own data sets and/or with the help of data sets provided by other partners in the project. These partners will also contribute indirectly to the technology by driving its development with the specification coming from their business cases.
* **Technology partners:** partners whose main role in the project is to develop the technology that will support the EW-Shopp platform. These partners will also contribute indirectly to the business cases by performing the following activities:
* Providing or supporting access to core data sets, i.e., data sets such as product data, locations, weather and events, used to integrate and enrich business data.
* Supporting the development of pilots and services by helping business partners integrate or analyze the data.
# Table 3. Roles and Responsibilities of Beneficiaries
| Partner | Business | Tech. | Owner | Facilitator | Service | Data | Tech. Support (Integration) | Tech. Support (Analytics) |
|---------|----------|-------|-------|-------------|---------|------|-----------------------------|---------------------------|
| UNIMIB | | X | | X | | | BC2, BC3 | |
| CE | X | | X | | BC1 | BC1 | | |
| BT | X | | X | X | BC1 | BC1 | | |
| GFK | X | | X | X | BC2 | BC1, BC2 | | |
| BB | X | | X | | BC1 | BC1 | | |
| ME | X | | X | | BC3 | BC3 | | |
| JOT | X | | X | | BC4 | BC4 | | |
| ENG | | X | | | | | BC4 | BCALL |
| SINTEF | | X | | | | | BC1 | |
| JSI | | X | X | X | | | | BCALL |

(Column groups: "Business"/"Tech." give the Partner Role; "Owner"/"Facilitator" give the responsibility with respect to Data; the remaining columns give the responsibility with respect to Business Cases.)
At a general level, _responsibilities with respect to data_ managed in the
project can be summarized as follows:
* **Data owner**: a partner that provides to the consortium data that it owns.
* **Data facilitator**: a partner that eases access to data that are:
  * provided by beneficiaries (e.g., UNIMIB will support access to product data owned by GFK);
  * provided by linked third parties (e.g., JSI will provide access to weather data provided by ECMWF);
  * available as open data (e.g., UNIMIB will provide access to relevant data about locations available in sources such as DBpedia 3 ).
Partners may thus have different responsibilities with respect to development
of business cases and pilots (see Table 3 for the specification of the
responsibilities of individual beneficiaries in each business case):
* **Service developer** (referred to as “Service” in the table) is a beneficiary that is responsible for developing a service within a business case.
* **Data provider** (referred to as “Data” in the table) is a beneficiary that is responsible for providing its data to support a business case.
* **Technical support (integration)** is a technical partner that is responsible for providing support in a business case by helping business partners in the data integration process.
* **Technical support (analytics)** is a technical partner that is responsible for providing support in a business case by helping business partners in the data analytic process.
The assignment of business cases to technology partners may be subject to
change in the course of the project; Table 3 reports assignments that have
been used to collect requirements included in this document.
In addition to EW-Shopp beneficiaries, the project also includes two parties
having a role in the project:
* **European Centre for Medium-Range Weather Forecasts** (ECMWF) is an independent intergovernmental organisation founded in 1975 and supported by 34 states ( _http://www.ecmwf.int_ ). Data from ECMWF are provided to the EW-Shopp project to be used by every partner. ECMWF will contribute to EW-Shopp by making available, for the scope of the project, its meteorological archive of forecasts (MARS) of the past 35 years and sets of reanalysis forecasts.
* **CDE** is a Slovene Ltd IT company providing IT solutions for communication and customer relation management linked to Browsetel (BT). CDE will act as a data and infrastructure provider and software developer in the context of BC1 in WP4, while BT will focus on business development. Responsibilities of CDE in EW-Shopp are included in the responsibilities of BT in Table 3.
**Chapter 3 Ethics and Legal Compliance**
**3.1 Legal requirements regarding personal data**
The EW-Shopp project must comply with all EU laws regarding data protection.
The purpose of this section is to explain core principles and concepts of the
right to **protection of personal data in scientific research** . 4
In the 1990s, the European Union started a process of codification of data
protection and privacy rights in order to harmonise different national
legislation. Directive 95/46/EC 5 (“Data Protection Directive”) and
Directive 2002/58/EC 6 (“E-Privacy Directive”) are the main legal provisions
that define the legal framework, considering also the EU Charter of
Fundamental Rights 7 and the national legislation that transposed these EU
directives.
This multilevel legal environment is going to change in 2018, when in May a
new European Regulation comes into force. 8 Indeed, the General Data
Protection Regulation (GDPR) (Regulation (EU) 2016/679) 9 was approved by
the EU Parliament on 14 April 2016. It will enter into force 20 days after its
publication in the EU Official Journal and will be directly applicable in all
member states two years after that date. It is designed to harmonize data
privacy laws across Europe, to protect and empower all EU citizens' data
privacy and to reshape the way organizations across the region approach data
privacy.
Although the new Regulation confirms the main principles of both the above-
cited Directives, it will substitute them and all national legislation on data
protection and privacy rights.
**3.1.1 Core concepts**
European Data Protection legislation is based on some core concepts concerning
the subjects who are going to acquire, collect, process, profile, and use
data; the different types of data; and notification procedures. Below are
listed the most important definitions for scientific research activities.
These definitions have been extrapolated from EU legislation, EU and Member
State (MS) official documents, or other legal documents.
All text in italics relates to the new 2018 European Regulation and its
additional requirements.
# Table 4 Core concepts - European Data Protection legislation
**SUBJECTS IN DATA PROCESS**

* **Data Controller** 10 : The natural or legal person, which alone or jointly with others determines the purposes and means of the processing of personal data.
* **Data Processor** 11 : A natural or legal person, which processes personal data on behalf of the controller.

**DIFFERENT TYPES OF DATA**

* **Personal Data** 12 : Any information relating to an identified or identifiable natural person (“data subject”); an identifiable person is one who can be identified, directly or indirectly, in particular, by reference to an identification number, _location data, an online identifier_ or to one or more factors specific to his physical, physiological, _genetic, mental_, _economic_, _cultural_ or _social identity of that natural person_. Personal data may be processed only if the data subject has unambiguously given his consent (“prior consent”). **NB: Anonymised data are no longer personal data. See below.**
* **Sensitive (Personal) Data** 11 : Personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and the processing of _genetic data, biometric data for the purpose of uniquely identifying a natural person,_ data concerning health _or data concerning a natural person’s sex life or sexual orientation._ Sensitive data may be processed only if the data subject has given his explicit consent to the processing of those data (“prior written consent”). **NB: Anonymised data are no longer personal data. See below.**
* **Genetic Data** 14 : personal data relating to the inherited or acquired genetic characteristics of a natural person which give unique information about the physiology or the health of that natural person and which result, in particular, from an analysis of a biological sample from the natural person in question. **NB: Anonymised data are no longer personal data. See below.**
* **Biometric Data** 12 : personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person. **NB: Anonymised data are no longer personal data. See below.**
* **Anonymization (Anonymised Data)** 13 : Processing of data with the aim of removal of information that could lead to an individual being identified. Data can be considered anonymised when it does not allow identification of the individuals to whom it relates, and it is not possible that any individual could be identified from the data by any further processing of that data or by processing it together with other information which is available or likely to be available. Use of anonymised data does not require the consent of the “data subject.”
* **Simulated Data** : Imitation or creation of data that closely matches real-world data, but is not real-world data. For these data, consent is not necessary since it is not possible to identify the “data subject.”
* **Pseudonymization** 14 : The processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.
* **Big Data** 15 : High-volume, high-velocity, high-value and high-variety information (4Vs) assets that demand innovative forms of information processing.
* **Open Data** 16 : Data that can be freely used, re-used, and redistributed by anyone – subject only, at most, to the requirement to attribute and share-alike.

**PROCESSES**

* **Processing of Personal Data** 17 : Any operation (or set of operations) that is performed upon personal data _or on sets of personal data_, whether or not by automated means, such as collection, recording, organization, _structuring,_ storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, _restriction_, erasure, or destruction.
* **Profiling** 18 : Any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular, to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location, or movements.

**NOTIFICATION**

* **Notification** : According to different national legislation, data controllers have to notify their National Data Protection Authority (DPA) of their intention to use data before starting to process data. Requirements, notification processes, and conditions vary across national DPAs.
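The distinction between anonymization and pseudonymization above can be illustrated with a keyed-token sketch. The key name and HMAC scheme are illustrative assumptions, not a prescribed project mechanism: a pseudonymized identifier can be re-linked only with the separately stored key, whereas truly anonymized data cannot be re-linked at all.

```python
import hashlib
import hmac

# Per the pseudonymization definition, the "additional information"
# (this key) must be kept separately from the data it protects and
# under technical and organisational safeguards. Value is illustrative.
SECRET_KEY = b"stored-separately-under-access-control"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed token; without the key,
    the token cannot be attributed back to the data subject."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

token = pseudonymize("customer-42")
```

The same input always maps to the same token, so pseudonymized records remain linkable for analytics, while the original identifier never appears in the processed dataset.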
**3.1.2 Fundamental Principles**
European Data Protection legislation provides that personal data must be
collected, used, and processed fairly, stored safely, and not disclosed to any
other person unlawfully. From this perspective, we can outline the following
fundamental principles regarding personal data use 19 :
1. Personal data must be obtained and processed **fairly, lawfully, and _in a transparent way_ ** 20 : according to EU and MS’s national legislation the data controller has to respect certain conditions, for example completing the notification process before starting to collect personal data, or obtaining prior consent from the natural person (the “data subject”) before collecting his/her personal data;
2. Personal data should only be collected for **specified, explicit, and legitimate purposes** and not further processed in any way incompatible with those purposes: personal data must be collected for specific, clear, and lawfully stated purposes, which the data controller has to specify to the “data subject” and to the national Data Protection Authority (DPA);
3. Personal data should be used in an **adequate, relevant, and not excessive way** in relation to the purposes for which they are collected and/or further processed: processing of personal data should be compatible with the specified purposes for which it was obtained;
4. Keep personal data **accurate, complete** , and, where necessary, **up-to-date** ;
5. Keep personal data **safe and secure** : the data controller must assure adequate technical, organisational, and security measures to prevent unauthorised or unlawful processing, alteration, or loss of personal data;
6. **Retain** personal data for **no longer** than is necessary: personal data should not be kept for longer than is necessary for the purposes for which it was obtained;
7. **No transfer of personal data overseas** : it is prohibited to transfer personal data to any country outside of the European Union and European Economic Area.
The new European Regulation has also added some other principles to correctly
manage privacy and data protection rights. These new principles provide as
follows:
* Data Controller **accountability** : taking into account the nature, scope, context, purposes, and risks of processing, the Data Controller has to implement **appropriate technical and organisational measures** . 21
* **Principles of data protection by design and by default** 25 must be applied:
* **Privacy by design** 22 : The Data Controller, before starting collection and processing of personal data as well as during the processing itself (“the whole life cycle of data”), has to implement **appropriate technical and organisational measures** , such as pseudonymization, which are designed to implement data protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing. In other words, before starting “working” with personal data, _the entire process from the start has to be designed_ in compliance with the required technical and legal safeguards of data protection regulations (e.g. adequate security);
* **Privacy by default** : The Data Controller has to implement appropriate technical and organisational measures for **ensuring that, by default, only personal data that are necessary for _each specific purpose of the processing_ are processed** . 23

More specifically, “Privacy by Design’s” (PbD) core concepts 28 are:
1. Being **proactive not reactive** , preventative not remedial: The “PbD approach is characterized by proactive rather than reactive measures. It anticipates and prevents privacy invasive events before they happen. PbD does not wait for privacy risks to materialize, nor does it offer remedies for resolving privacy infractions once they have occurred — it aims to prevent them from occurring. In short, Privacy by Design comes before-the-fact, not after”;
2. Having **privacy as the default** setting: “PbD seeks to deliver the maximum degree of privacy by ensuring that personal data are automatically protected in any given IT system or business practice. If an individual does nothing, their privacy still remains intact. No action is required on the part of the individual to protect their privacy — it is built into the system, by default”;
3. Having **privacy embedded into design** : “PbD is embedded into the design and architecture of IT systems and business practices. It is not bolted on as an add-on, after the fact. The result is that privacy becomes an essential component of the core functionality being delivered. Privacy is integral to the system, without diminishing functionality”;
4. Avoiding the **pretence of false dichotomies** , such as privacy vs. security: “PbD seeks to accommodate all legitimate interests and objectives in a positive-sum win-win manner, not through a dated, zero-sum approach, where unnecessary trade-offs are made. PbD avoids the pretence of false dichotomies, such as privacy vs. security – demonstrating that it is possible to have both”;
5. Providing **full life-cycle management of data** : “PbD, having been embedded into the system prior to the first element of information being collected, extends securely throughout the entire lifecycle of the data involved — strong security measures are essential to privacy, from start to finish. This ensures that all data are securely retained, and then securely destroyed at the end of the process, in a timely fashion. Thus, PbD ensures cradle to grave, secure lifecycle management of information, end-to-end”;
6. Ensuring **visibility and transparency of data** : “PbD seeks to assure all stakeholders that whatever the business practice or technology involved, it is in fact, operating according to the stated promises and objectives, subject to independent verification. Its component parts and operations remain visible and transparent, to users and providers alike. Remember, trust but verify”;
7. Being **user-centric and respecting user privacy** : “PbD requires architects and operators to protect the interests of the individual by offering such measures as strong privacy defaults, appropriate notice, and empowering user-friendly options. Keep it user-centric”.
**3.1.3 Notification process and data protection impact assessment**
Generally, every data controller has to notify its national Data Protection
Authority (DPA) of its decision to start collection of personal data before
starting this process. This notification aims at communicating in advance the
creation of a new “database,” explaining the reasons for and purposes of this,
and the technical and organisational safeguards in place to protect the
personal data. Consequently, DPAs are enabled to verify the legal and
technical safeguards required by EU legislation. However, the conditions
attaching to and the procedures for submitting such a notification differ from
EU state to EU state, with the strongest protections in place in Germany and
the Netherlands and the least in Ireland and the UK.
The **new European Regulation** will introduce a different way to manage data
protection issues, following PbD principles. Each Data Controller has to carry
out an assessment of the impact of processing operations on the protection of
personal data, before starting the processing itself, to evaluate the origin,
nature, particularity, and severity of the risk 24 attaching to the proposed
processing. Such Data Protection/Privacy Impact Assessments (DPIA) can then be
utilised to define appropriate measures to assure data protection and
compliance with EU legislation.
A DPIA is required in case of:
* Systematic and extensive evaluation of personal aspects in automated processing (e.g. profiling);
* Processing on a large scale of sensitive data or of personal data relating to criminal convictions and offences;
* Systematic monitoring of a publicly accessible area on a large scale.
The main aspects of DPIAs are:
1. Systematic description of processing operations and the purposes of the processing;
2. Assessment of the necessity and proportionality of the processing operations in relation to the purposes;
3. Assessment of the risks to the rights and freedoms of data subjects;
4. Measures to deal with the risks, including safeguards, security measures, and mechanisms to ensure data protection and to demonstrate compliance with EU legislation.
In the event that a DPIA indicates a high risk in terms of data protection and
privacy rights, the Data Controller must consult the National Data Protection
Authority prior to the processing. 30
**3.1.4 Notification process in EW-Shopp project**
The use of datasets within the EW-Shopp project has to comply with applicable
international, EU and national law (in particular, EU Directive 95/46/EC).
To this aim, data owners have been asked to evaluate each of their datasets in
order to confirm the nature and sensitivity of the data to be used within the
EW-Shopp project.
To make this evaluation, dataset owners have to clarify, for each dataset,
whether it contains PD. If the dataset contains PD, they have to provide
_notification_ and _informed consent for secondary use_ .
If a dataset to be used in the EW-Shopp project does not contain PD, it must
be clarified whether it is derived from a dataset which contains PD. If so,
the data owner should prepare a statement explaining that data produced in the
project will not be used to enrich the PD-containing source dataset, and
should also provide the notification to the EC regarding the original
PD-containing dataset, to be included in deliverable [D7.2].
If the dataset neither contains PD nor derives from a dataset containing PD,
the data owner should provide a statement detailing that the dataset does not
contain PD (explaining the implemented procedures, etc.).
All notifications and copies of opinions produced by owners of datasets
containing PD will be collected in deliverable [D7.2].
**3.2 Ethics requirements regarding the involvement of human rights**
The EW-Shopp project is implemented in accordance with fundamental ethical
standards, to ensure quality and excellence during and after the life of the
project. Horizon 2020 specifies that ethical research conduct implies the
application of fundamental ethical principles and legislation to scientific
research in all possible domains of research. According to the procedure
established in Horizon 2020 in terms of ethics, in order to engage the
scientific research with the ethical dimension, each BC owner in the EW-Shopp
project has been asked to answer the following questions:
* Are there any ethical issues that can have an impact on data sharing?
* Have you taken the necessary measures to protect the humans’ rights and freedoms?
* How did/could these measures impact the BC?
* Do you assess the risks linked to the specific type of data your organization provides?
**3.3 Intellectual Property Rights**
In the context of the EW-Shopp project, IPR ownership is fundamentally
regulated by the underlying principles of two main official documents (namely
[CA] and [GA]); further considerations will be detailed within the WP5 frame
and provided in its outcome “D5.4 – Update of Exploitation and Dissemination
Strategy (M24)”.
Two main concerns on IPR management could impact the current deliverable:
* Existing or developed datasets will be available to the whole Consortium during the project timespan, but any further use in exploitation activities must follow specific limitations and/or conditions (as stated in Article 25.3 of the [GA] and described in its Attachment 1).
* All the identified datasets will be available to all Beneficiaries in order to develop the business cases used to validate the project results, as explicitly mentioned in the description tables contained in “Chapter 6 - Dataset description” (see Dataset ACCESS section).
**Chapter 4 Business Case high-level description**
The main business objective of EW-Shopp is to **develop a cross-domain data integration platform that enables the fragmented European business ecosystem to increase its efficiency and competitiveness by building relevant custom insights and business knowledge**. This platform will help European businesses regain positions lost to global internet service giants, which have built their growth and sector transformation on the intensive exploitation of the integrated big data generated on their proprietary platforms.
<table>
<tr>
<th>
**4.1**
</th>
<th>
**Big Bang, CENEJE (BC1)**
</th> </tr> </table>
The goal of this business case is to follow the user experience based on real-time, cross-channel data integration. The business case will develop analytical predictive models for managing marketing activities, sales resources, operations, data quality and content management that will increase partner efficiency and sales. It will furthermore enable the development of market data enrichment services and their subsequent monetization. This will be done by integrating cross-channel intent, research, interest, interaction and purchase data with point-of-sale solutions.
The data that will be integrated are:
* Purchase intent: A collection of user journey data – pageviews, search terms, redirects to sellers and similar.
* Product attributes: A collection of product attributes (varying from generic such as name, EAN, brand, categorization and color to more specific as dimensions or technical specifications).
* Products price history: A collection of seller quotes for products.
* Customer purchase history: Sell out data matched with customer baskets in a defined timeframe.
* Consumer intent and interaction: A collection of user journey data from Google Analytics - pageviews, page events, search terms, redirects to channels, etc.
* Contact and Consumer interaction history: calls (outbound, inbound and simulated calls), other contacts events (email, SMS, click-through, fax, scan, or any other document) and other events.
To achieve the business case goals, in EW-Shopp we will set up a virtual lab in a data cloud environment, where we will create a set of scenarios by integrating partner datasets of anonymized user paths-to-purchase, which should include all possible engagements, decisions and purchase information. The data will be used in order to:
* develop models of purchase behavior;
* cluster similar behaviors to optimize operations;
* enable user experience advertising;
* develop efficient sales promotions;
* provide efficient marketing and communication tools;
* build segmented mailing groups for efficient automatization of e-mail marketing;
* increase efficiency in above-the-line (mass media) and below-the-line (one to one) activities;
* create efficient POS solutions for sales.
<table>
<tr>
<th>
**4.2**
</th>
<th>
**GfK (BC2)**
</th> </tr> </table>
The goal of business case two is to identify the external variables, and their weights, that predict product sales and success. Besides integrating the two datasets provided by GfK, this business case also aims at integrating external data, such as event and weather data, in order to improve predictability.
GfK provides two services, the Retail Sales Data Reporting System and Echo: the former helps maximize sales and profit in order to keep customers coming back, while the latter tracks and improves the experiences of customers in real time. The predictive model learned from the integrated data about customer feedback, as well as from third-party data, will identify which actions drive growth.
The data that will be integrated are:
* Market data: Sales data (tech goods), Product Attributes and Prices Data (tech goods), and Purchase Data
* Consumers data: Demographics, TV Behaviour & Exposure Data (passive / survey), Online Behavior & Exposure Data, Individual Purchase Data (passive / survey), and Mobile Usage & Exposure Data
* Event data, including Sport Events (World cup, Champion, Olympic games, etc.), Social Events (strikes, terrorism, epidemics, etc.), Political Events (elections, relevant laws, etc.), Natural Events (earthquake, floods, etc.)
* Historical Weather Data: relevant weather information across different countries
* Social media data: measures of customer engagement across different platforms (e.g., email marketing, search)
* Purchase intent and search data: data about purchase research and intent by category and search behaviour based on keyword interaction through advertising.
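As a sketch of the integration this business case targets, sales records can be joined with weather and event data on date and country to form a feature table for prediction; all names and values below are hypothetical:

```python
from datetime import date

# Hypothetical daily sales records (BC2 market data)
sales = [
    {"date": date(2017, 6, 1), "country": "DE", "units": 120},
    {"date": date(2017, 6, 2), "country": "DE", "units": 95},
]

# Hypothetical external variables, keyed by (date, country)
weather = {
    (date(2017, 6, 1), "DE"): {"temp_c": 28.0, "rain_mm": 0.0},
    (date(2017, 6, 2), "DE"): {"temp_c": 17.5, "rain_mm": 12.3},
}
events = {(date(2017, 6, 1), "DE"): ["football_final"]}

# Join sales with the external variables to build a feature table for modelling
features = []
for row in sales:
    key = (row["date"], row["country"])
    features.append({**row, **weather.get(key, {}), "events": events.get(key, [])})
```

A predictive model would then be trained on such a feature table to weigh the external variables against observed sales.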
<table>
<tr>
<th>
**4.3**
</th>
<th>
**Measurence (BC3)**
</th> </tr> </table>
The goal of this business case is to improve Measurence Scout, a location scouting solution that helps businesses choose the best location. It optimizes real estate investments by analyzing the traffic around the locations of interest.
The traffic data, after being anonymized, are collected by Measurence WiFi technology at a high level of granularity. Moreover, in order to better understand a potential location, Measurence also needs external data such as weather data, event data, geographic data, sales data of nearby businesses, etc.
The data that are planned to be integrated are:
* Weather data at a high level of granularity
* Events data around a location: we need to be able to filter these events based on their venue and, ideally, on the number of people expected to join the events
* Geographical data: Businesses in the area (shopping, restaurants etc.), schools, tourist attractions, nightlife, etc.
* Sales data: business volume of businesses in the area aggregated by kind of activity (e.g. restaurants, clothes shop, etc.)
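The core traffic measurement behind this business case — counting devices around a location — can be sketched as an hourly count of distinct anonymized devices; the identifiers and timestamps below are invented:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical anonymized WiFi sightings: (hashed device id, timestamp)
sightings = [
    ("dev_a1", datetime(2017, 3, 4, 10, 15)),
    ("dev_b2", datetime(2017, 3, 4, 10, 40)),
    ("dev_a1", datetime(2017, 3, 4, 10, 55)),  # same device seen twice in one hour
    ("dev_c3", datetime(2017, 3, 4, 11, 5)),
]

# Hourly number of distinct devices passing through the covered area
per_hour = defaultdict(set)
for device, ts in sightings:
    per_hour[ts.replace(minute=0, second=0, microsecond=0)].add(device)

hourly_counts = {hour: len(devices) for hour, devices in sorted(per_hour.items())}
```

Counting distinct identifiers per hour, rather than raw sightings, avoids double-counting a device that lingers near the sensors.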
<table>
<tr>
<th>
**4.4**
</th>
<th>
**JOT (BC4)**
</th> </tr> </table>
The goal of the JOT business case is to use big data technology and to integrate cross-domain purchase intention data, at the level of search and of communication and content interactions, in order to enable JOT to increase its clients' communication efficiency and marketing effort allocation. Current methods for online marketing prediction have failed simply because there is no single rule that can be universally applied to all markets, products and sectors. The only way to find an effective online marketing method is to analyse user behaviour and traffic sources, taking into account the different external environmental and behavioural variables that impact them.
By analysing marketing campaign performance, JOT can obtain behaviour patterns that can be used to establish a behavioural baseline. Thanks to this, JOT will be able to predict the likely pattern for certain days or time zones with similar characteristics. Behaviour analysis can be obtained by cross-referencing geographical data with peak times, baseline traffic, daily impression trends, real-time conversion and bounce rates, to name just a few metrics. Furthermore, in order to achieve accurate results, a vast amount of data will have to be collected so as to ensure the accuracy of the data sample.
JOT had planned to provide three different datasets within the project (two are proprietary and meaningful mainly in their own business case):
* Traffic sources (Bing): Historical marketing campaign performance statistics of search data in Bing advertising platforms.
* Traffic sources (Google): Historical marketing campaign performance statistics of data in Google platform.
* Twitter trends: Trending topics as available through Twitter APIs.
With respect to the [DoA], JOT has consolidated its datasets to simplify the usage of its data within the EW-Shopp project, without impacting the support of the services foreseen in this business case:

* the original Pixel Dataset has been unified with Traffic source Google and Traffic source Bing;
* the Email marketing campaign dataset can no longer be provided, as the company Impacting is no longer able to supply it. JOT confirmed that this dataset does not affect the goal of the project, being just a complement to the Traffic Source ones, so its absence will not interfere with the success of the business case. Moreover, this has removed, at the source, the problem related to IP addresses and geo-localisation.
Other datasets will be added to the above-mentioned ones in order to realize
the JOT business case:
* Events: A dataset covering different kinds of events (sporting, large-scale concerts, congresses, elections) for the different countries that wish to take part in the use case will be needed. This kind of dataset is provided through Event Registry dataset.
* Weather history: This dataset will contain historical data on the weather that JOT will utilize for the project. It will show the real weather conditions, even down to a specific hour / minute, during the time period chosen for the study. This dataset is provided through MARS (historical data) dataset.
* Weather forecast: Same time period as for the previous dataset but just that the information will be the weather forecasted or predicted for the given times, not necessarily the actual climatic conditions.
The purpose of this business case is to carry out systematic analyses to predict the effect of different variables, such as weather and other events, on the performance of marketing campaigns. These analyses will lead to the development of different business services:
1. Event and weather-aware campaign scheduling. This service will be used by JOT to predict the very best moment to launch or run a marketing campaign based on weather conditions and events.
2. Event-based customer engagement analysis. This service supports the analysis of the possible impact of events on Online Shopping.
3. Event-based digital marketing management. This service supports intelligent bidding on digital marketing platforms, programmed based on events.
4. Weather-responsive digital marketing. This service offers intelligent bidding on digital marketing platforms, based on real-time weather conditions.
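Service 1 (event- and weather-aware campaign scheduling) can be pictured as a rule that gates a campaign launch on forecast conditions; a deliberately simplified sketch with invented thresholds and campaign names:

```python
def should_launch(campaign: str, forecast: dict) -> bool:
    """Decide whether to run a campaign given a weather forecast.

    Purely illustrative rules: push umbrella ads when rain is expected,
    ice-cream ads on hot, dry days.
    """
    rules = {
        "umbrellas": lambda f: f["rain_mm"] > 1.0,
        "ice_cream": lambda f: f["temp_c"] > 25.0 and f["rain_mm"] < 0.5,
    }
    rule = rules.get(campaign)
    return bool(rule and rule(forecast))

rainy_day = {"temp_c": 14.0, "rain_mm": 6.2}
hot_day = {"temp_c": 29.0, "rain_mm": 0.0}
```

In practice the hand-written rules would be replaced by the predictive models learned from the integrated campaign, weather and event data.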
**Chapter 5 Methodology for EW-Shopp DMP**
The aim of this chapter is to explain all the information required from data owners in order to make data findable, accessible, interoperable and re-usable (FAIR), and to describe the process followed in EW-Shopp to collect this information.
**5.1 Elements of EW-Shopp Data Management Plan**
The DMP addresses some important points on a dataset-by-dataset basis and reflects the current status of reflection within the consortium about the data that will be produced. As a key element of good data management, the DMP has to describe the life-cycle management applied to the data collected, processed and/or generated by a Horizon 2020 project.
In order to make data findable, accessible, interoperable and re-usable
(FAIR), a DMP should include:
* **Dataset Identification** : specifying what data will be collected, processed and/or generated.
* **Dataset Origin** : specifying if existing data is being re-used (if any), the origin of the data and the expected size of the data (if known).
* **Dataset Format** : describing the structure and type of the data, time and spatial coverage and language and naming conventions.
* **Data Access** : specifying whether data will be shared/made open access. In particular:
* **Making data accessible** : specifying if and which data produced and/or used in the project will be made openly available, moreover _explaining why certain datasets_ _cannot be shared_ (or need to be shared under restrictions), separating legal and contractual reasons from voluntary restrictions.
* **Making data interoperable** : specifying if the data produced in the project is interoperable, that is allowing data exchange and re-use. Moreover, specifying what data and metadata vocabularies, standards or methodologies it is meant to follow to make data interoperable.
* **Data Security** : specifying which provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data). Furthermore, specifying Personal Data presence and, in that case, privacy management procedures put in practice.
The following paragraphs aim to give more details, in terms of the class of
attributes listed above, and will be used as a guide to describe datasets
provided for EW-Shopp purpose, in accordance with the Guidelines on Data
Management in Horizon 2020.
**5.1.1 Dataset IDENTIFICATION**
First of all, it is necessary to identify the dataset to be produced and provide its details, in terms of a description of the data that will be generated or collected.
Following the H2020 guidelines, a set of relevant information has been defined to help identify the dataset:
* Category: Dataset typology (Market, Consumer, Products, Weather, Media).
* Data name: Name of the dataset that should be a self-explaining name.
* Description: Description of the dataset in order to provide more details.
* Provider: Name of the beneficiary providing the dataset (or being in charge of bringing it into the project).
* Contact Person: Name of the person to be contacted for further details about the dataset.
* Business Cases number: BC involved (i.e., BCx)
**5.1.2 Dataset ORIGIN**
Following the H2020 guidelines, a set of relevant information has been defined to help describe the dataset origin:
* Available at (M): Project month in which the dataset will be available.
* Core Data (Y|N): Indication of whether the dataset is mandatory and will be part of the data shared along the different UCs, or discretionary with only limited usage.
* Size: A rough order of magnitude (ROM) estimation in terms of MB/GB/TB.
* Growth : A dynamic rough order of magnitude (ROM) estimate by selecting the most appropriate frequency in terms of MB/GB/TB per hour/day/week/months/other.
* Type and format: Dataset format, specifying if it is using, for example, CSV, Excel spreadsheet, XML, JSON, etc.
* Existing data (Y|N): The data already exist or are generated for the project’s purpose.
* Data origin: How the data in the dataset is being collected/generated (i.e. SQL table, Google
API, etc.)
**5.1.3 Dataset FORMAT**
Following the H2020 guidelines, a set of relevant information has been defined to help describe the dataset format:
* Dataset structure: description of the structure and type of the data. (i.e. the header columns, the JSON schema, REST response fields, etc.).
* Dataset format: definition of the dataset format (i.e. specifying if it is using CSV, Excel spreadsheet, XML, JSON, GeoJSON, Shapefile, HTTP stream, etc.).
* Time coverage: if the dataset has a time dimension, indication of what period it covers.
* Spatial coverage: if the dataset relates to a spatial region, indication of what is its coverage.
* Languages: languages of metadata, attributes, code lists, descriptions.
* Identifiability of data: reference to identifiability of data and standard identification mechanism.
* Naming convention: description about how the dataset can be identified if updated or after a versioning task has been performed, if the dataset is not static.
* Versioning: reference to how often is the data updated (i.e. No planned updating, Annually, Quarterly, Monthly, Weekly, Daily, Hourly, Every few minutes, Every few seconds, Real-time) and how the versioning is managed (i.e. if daily, every day a new dataset is generated with the newly created data or every day a new dataset overrides the old one containing all the data generated from the beginning of the collection, …)
* Metadata standards: specification of standards for metadata creation (if any). If there are no standards description of what metadata will be created and how.
**5.1.4 Dataset ACCESS**
Following the H2020 guidelines, a set of relevant information has been defined to help describe dataset access, with the aim of making data accessible and interoperable:
* Dataset license: if the dataset is released as open data, indication of the license used: CC0, CC-BY, CC-BY-SA, CC-BY-ND, CC-BY-NC, CC-BY-NC-SA, CC-BY-NC-ND, PDDL, ODC-By, ODbL, other or proprietary (with link if possible). Otherwise, specification of who has access to the dataset (for example, all partners in the consortium, some partners for the purpose of tool development, only a sample will be disclosed, etc.)
* Availability (public | private): the dataset is public or private.
* Availability to EW-Shopp partners (Y|N): the dataset is available to EW-Shopp partners.
* Availability method: specification of how the data will be made available (i.e. web page in the browser, web service (REST/SOAP APIs), query endpoint, file download, DB dump, directly shared by the responsible organization, etc.).
* Tools to access: specification of what methods or software tools are needed to access the data.
* Dataset source URL: specification of where the data and associated metadata, documentation and code are deposited (i.e. dataset source URL, etc.)
* Access restrictions: specification of how access will be provided in case there are any restrictions.
* Keyword/Tags: categorization of the dataset through some relevant keywords/tags (i.e. product categories, price, etc.)
* Archiving and preservation: description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximate end volume is, what the associated costs are and how these are planned to be covered.
* Data interoperability: specification of what data and metadata vocabularies, standards or methodologies will be followed to facilitate interoperability.
* Standard vocabulary: specification of what standard vocabulary, to allow inter-disciplinary interoperability, will be used for all data types present in the dataset. If not, a mapping to more commonly used ontologies has to be provided.
We provide some clarifications about the approach used to describe the Data interoperability and Standard vocabulary dimensions in EW-Shopp. Because of the sensitiveness of the business data used in the EW-Shopp innovation action, no commitment to publish the datasets provided by business partners as open data is made in the [DoA]. Thus, the primary focus concerning interoperability in EW-Shopp is on supporting data integration tasks, rather than on supporting the discoverability of datasets by third parties.
For this reason, under Data interoperability, we focus on the methodologies that will be adopted to support interoperability between the described dataset and other datasets. Here we shortly describe the interoperability methodologies that we plan to use; more details will be provided in D3.1 – Interoperability Requirements, which will be published at M8.
* **Publication as linked data (RDF-ization).** Linked data represented with the RDF language support data interoperability by: i) representing information with graph-based abstractions, often referred to as Knowledge Graphs (descriptions of typed entities, their properties and mutual relations), ii) using global identifiers (URIs) for the entities described in a dataset, iii) using terms (classes, properties, data types) from shared vocabularies and ontologies. Publishing a source dataset using linked data principles makes it easy to access and use the data in future integration tasks. This methodology is used in particular for EW-Shopp core data, i.e., data that are used as joints to integrate different information sources, such as product data or product classification schemes, which are not already available as linked data.
* **N/A (Linked Open Data).** For data that are already available as linked data, we consider the interoperability methodology not applicable.
* **Semantic data enrichment.** This is a key pillar of the EW-Shopp approach to interoperability. Given an input dataset provided in a format different from RDF, and after applying suitable transformations if needed, the dataset will be semantically annotated using semantic labelling techniques. We assume that the input dataset is transformed into a table in CSV format; then, i) the headers of the table columns will be aligned with shared vocabularies (e.g., XSD, used to define the data types, or predicates of Schema.org, used to describe offers in eCommerce portals), while ii) the values will be linked to shared systems of identifiers (e.g., location identifiers from DBpedia). The annotations will support i) the enrichment of the data, using the shared systems of identifiers as joints, and ii) the publication of the data as Knowledge Graphs represented in RDF (if useful). For example, after linking a column of product names to EAN codes, we can retrieve the brand of each product from a linked product data source, thus enriching the original dataset. Semantic data enrichment also provides a methodology to publish data that come in tabular format as linked data; however, such publication is not a mandatory step.
* **References to shared systems of identifiers and standard data types.** A data source is made interoperable by using shared systems of identifiers, without requiring a full RDF-ization. For example, we may want to invoke weather data APIs using DBpedia identifiers for locations.
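The semantic data enrichment step described above — aligning CSV headers with shared vocabularies and linking values to shared identifiers — can be sketched as follows; the specific annotations and DBpedia links are illustrative assumptions, not project mappings:

```python
import csv
import io

# A small input table, as assumed by the methodology (CSV with a header row)
raw = "product,price,city\nPhone X,299,Berlin\nLaptop Y,899,Ljubljana\n"

# Illustrative schema-level annotations: column header -> shared-vocabulary term
header_annotations = {
    "product": "schema:name",
    "price": "schema:price",
    "city": "schema:addressLocality",
}

# Illustrative value links: cell value -> shared identifier (DBpedia-style URI)
value_links = {
    "Berlin": "http://dbpedia.org/resource/Berlin",
    "Ljubljana": "http://dbpedia.org/resource/Ljubljana",
}

rows = list(csv.DictReader(io.StringIO(raw)))
# Rename columns to vocabulary terms and replace known values with identifiers
enriched = [
    {header_annotations[col]: value_links.get(val, val) for col, val in row.items()}
    for row in rows
]
```

Once the city column is linked to shared location identifiers, the table can be joined with any other dataset that uses the same identifiers, which is exactly the role of "joints" described above.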
For Standard vocabulary, we refer to shared vocabularies, where "shared" refers to adoption by a community of users. Among shared vocabularies we consider ISO standards, e.g., ISO 8601 date formats, languages and vocabularies recommended by the W3C, e.g., RDF or Time OWL 2, but also vocabularies and systems of identifiers that are becoming de-facto standards through usage, e.g., Schema.org, DBpedia, Wikipedia. We will consider the following shared vocabularies, which will be used in the project to support interoperability:
* Terminologies from language specifications
  * Predicates, classes and data types specified in languages recommended by the W3C (i.e., XSD Data Types, RDF, SKOS, RDFS, OWL); these terms are used throughout the project, thus they will not be added to the descriptions of individual datasets.
* Classifications
  * **Interlinked product classifications.** This classification will be built in EW-Shopp by linking Google Categories (from the Google product taxonomy), the Global Product Classification by GS1 and GFK product categories, i.e., the categories used in the GFK Product Catalog (GS1 categories are derived from GFK categories and the two classifications are aligned).
* Domain ontologies and shared systems of identifiers
  * **Linked product data.**
    * Schema-level terminology (e.g., Schema.org, GoodRelations)
    * Schema-level terminology and identifiers (GfK Product Catalog for retail, with internal identifiers and partially aligned to EAN codes)
  * **Temporal ontologies.** Standard vocabularies and other vocabularies and ontologies recommended by the W3C to represent temporal information (e.g., ISO 8601, XSD Date and Time Data Types, Time OWL 2).
  * **Spatial ontologies and locations.** Ontologies covering spatial schema-level terminology as well as identifiers of locations and administrative units across Europe (e.g., Basic Geo WGS84, DBpedia Ontology, Schema.org, Geonames Ontology, Linked GeoData, Linked Open Street Maps).
  * **Wikipedia entities.** Wikipedia provides identifiers for a very large number and variety of entities, adopted by a very large community of data providers and consumers. By Wikipedia entities we also refer to identifiers used in data sources derived from Wikipedia (e.g., DBpedia) or linked to Wikipedia identifiers (e.g., WikiData). While identifiers of locations play a prominent role in EW-Shopp and are covered by spatial ontologies and locations, here we refer to entities of other types, used, e.g., to annotate events.
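Among the temporal vocabularies above, ISO 8601 fixes an unambiguous lexical form for dates and times, which Python's standard library emits and parses directly; a small illustration:

```python
from datetime import datetime, timezone

# An ISO 8601 / XSD dateTime-compatible timestamp with an explicit UTC offset
ts = datetime(2017, 3, 4, 10, 15, 0, tzinfo=timezone.utc)
iso = ts.isoformat()

# The same lexical form parses back to the same instant
parsed = datetime.strptime(iso, "%Y-%m-%dT%H:%M:%S%z")
```

Using this single lexical form across datasets removes the ambiguity of locale-dependent date notations (e.g., 03/04/2017) when data are joined on time.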
**5.1.5 Data SECURITY**
Following the H2020 guidelines, a set of relevant information has been defined to help describe dataset security:
* Personal Data (Y|N): Confirmation about personal data presence in the dataset.
* Anonymized (Y|N|NA): confirmation if personal data is anonymized.
* Data recovery and secure storage: Information about how data recovery and secure storage are managed.
* Privacy management procedures: Specification of the procedures adopted to manage privacy.
* PD At The Source (Y|N): Confirmation about Personal data absence at the source.
* PD - Anonymised during project (Y|N): Confirmation about Personal data anonymised during the project.
* PD - Anonymised before project (Y|N): Confirmation about Personal data anonymised before the project.
* Level of Aggregation (for PD anonymized by aggregation): Indication about which is the level of aggregation to allow Personal data anonymization.
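Taken together, Sections 5.1.1–5.1.5 define a per-dataset record that can also be kept machine-readable. A minimal sketch, with field names drawn from those sections and values borrowed from the Purchase Intent dataset described in Chapter 6 (purely illustrative):

```python
import json

# Field names follow Sections 5.1.1-5.1.5; values are illustrative only
dataset_record = {
    "identification": {
        "category": "Consumer data",
        "data_name": "Purchase intent",
        "provider": "Ceneje",
        "business_case": "BC1",
    },
    "origin": {"core_data": False, "existing_data": True,
               "type_and_format": "structured documents, TSV"},
    "format": {"time_coverage": "since 2015", "spatial_coverage": "Country",
               "versioning": "Daily"},
    "access": {"availability": "private", "available_to_partners": True},
    "security": {"personal_data": False, "anonymized": True},
}

# A machine-readable DMP record can be exchanged and validated automatically
serialized = json.dumps(dataset_record, sort_keys=True)
restored = json.loads(serialized)
```

Keeping the record in such a structured form makes it straightforward to merge the per-dataset tables into the integrated spreadsheet mentioned in Section 5.2.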
**5.2 Process to collect dataset details**
All the information described in the previous paragraphs has been collected, for each EW-Shopp dataset, through the process described below.
The first step was to set up a table with the main sections of the dataset description: Dataset identification, Dataset origin, Dataset format, Dataset access and Dataset security. Each of these sections was further decomposed to contain all the information described in the related paragraphs of this chapter.
The second step consisted in preparing a survey in the form of a textual description (see Annex A – DMP Survey), with the aim of giving a clear understanding of all the required information and easing the completion of the table.
The third step was the collection process itself: each business case owner had to fill in the table and was then interviewed by a technical partner in order to discuss the information provided.
At the end of the process, all the information collected was merged in an
integrated spreadsheet. The same information will be discussed, in the
following chapter, using a table format in order to ease the understanding of
each dataset description.
**Chapter 6 Dataset description**
The aim of this chapter is to provide, for each dataset, a description covering all the information listed in Chapter 5, in accordance with the Guidelines on FAIR Data Management in Horizon 2020 and with the ethics and legal requirements. As the following paragraphs show, "dataset" refers both to individual datasets and to families of datasets with the same structure, created at different moments in time or under other discriminating conditions.
**6.1 CE Dataset - Consumer Data: Purchase Intent**
**6.1.1 Dataset IDENTIFICATION**
The dataset “Purchase Intent” is proprietary and contains user journey metrics
and logs.
# Table 5. DATASET IDENTIFICATION – Purchase Intent
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Purchase intent
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A collection of user journey data – pageviews, search terms, redirects to
sellers and similar. Data is logged to local databases (SQL and NoSQL) and we
provide data from 1 January 2015.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Ceneje
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
David Creslovnik, Uros Mevc
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1
</td> </tr> </table>
**6.1.2 Dataset ORIGIN**
This dataset has been available since January 2017 and is not defined as "core
data". The dataset already existed before the project.
# Table 6. DATASET ORIGIN – Purchase Intent
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
15000 searches per day
25000 redirects per day
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured documents, TSV
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
SQL tables
NoSQL documents
</td> </tr> </table>
**6.1.3 Dataset FORMAT**
The dataset has a TSV (SQL) or JSON (NoSQL) format; the data structure is
illustrated in the following table. The data are not language specific, are
collected since 2015, and cover information at the country level. The dataset
is updated daily, meaning that every day it contains only the newly generated
data.
# Table 7 DATASET FORMAT – Purchase Intent
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
*SQL tables*
Product pageviews
* IdProduct (INT)
* NameProduct (STRING)
* L1 (STRING): Level 1 category
* L2 (STRING): Level 2 category
* L3 (STRING): Level 3 category
* IdUsers (INT)
* Date (DATETIME)
Product deeplinks (redirects to sellers)
* IdProduct (INT)
* NameProduct (STRING)
* L1 (STRING): Level 1 category
* L2 (STRING): Level 2 category
* L3 (STRING): Level 3 category
* IdUsers (INT)
* IdSeller (INT)
* Date (DATETIME)
*NoSQL documents*
Page search
{
"_id" : (ObjectId),
"IdUsers" : (INT),
"TimeStamp" : (ISODate),
"Search" : {
"NumberOfResults" : (INT),
"Query" : (STRING)
}
}
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
SQL: tsv
NoSQL: json
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
since 2015
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Country
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
not language specific
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
/{country}/YYYY/MM/DD.tsv
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily (every day the dataset contains only the data newly generated)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr> </table>
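The naming convention in the table above, /{country}/YYYY/MM/DD.tsv, determines the daily file for a given country; a small helper, for illustration:

```python
from datetime import date

def dataset_path(country: str, day: date) -> str:
    """Build the daily file path following the /{country}/YYYY/MM/DD.tsv convention."""
    return f"/{country}/{day.year:04d}/{day.month:02d}/{day.day:02d}.tsv"
```

Zero-padding the month and day keeps lexicographic order equal to chronological order, which simplifies listing and range selection over daily files.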
**6.1.4 Dataset ACCESS**
The dataset is private, but it is accessible to all consortium members. The
data will be made available via file download (WGET/Curl). The dataset will be
deposited on AWS or on the Ceneje static content server, and access is granted
by credentials.
# Table 8 MAKING DATA ACCESSIBLE – Purchase Intent
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: Ceneje
Access: All members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
File download (zip)
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
WGET/Curl
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
AWS or Ceneje static content server
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
NO (can be generated on demand)
</td> </tr> </table>
# Table 9 MAKING DATA INTEROPERABLE – Purchase Intent
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
* Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
* Interlinked product classification
* Linked product data
* Temporal ontologies
</td> </tr> </table>
**6.1.5 Dataset SECURITY**
The dataset does not contain personal data, because personal data were
anonymized before being used in the project. Secure storage and regular
backups are in place.
# Table 10 DATASET SECURITY - Purchase Intent
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Secure storage, regular backups
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
User Id level (anonymous)
</td> </tr> </table>
**6.1.6 Ethics and Legal requirements**
The source of the data contains PD, but the data are anonymized before the
project and shared within the project without PD. Since Ceneje has already
notified their Data Protection Officer (DPO) that no PD will be shared, no
additional opinion needs to be obtained. The notification to the Data
Protection Officer is included in deliverable [D7.2].
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.2 ME Dataset - Consumer Data: Location analytics data (Hourly)**
**6.2.1 Dataset IDENTIFICATION**
The dataset “Location analytics data”, provided by Measurence, focuses on
Hourly number of devices with WiFi enabled that pass through an area covered
by Measurence WiFi sensors.
# Table 11\. DATASET IDENTIFICATION – Location analytics data
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer Data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Location analytics data
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Hourly number of devices with WiFi enabled that pass through an area covered
by Measurence WiFi sensors
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Measurence
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Olga Melnyk
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC3
</td> </tr> </table>
**6.2.2 Dataset ORIGIN**
This dataset is available from January 2017 and it cannot be defined as “core
data”. It is delivered through APIs in JSON format, has a size of ~600GB and
grows by ~5GB per location per month. The dataset already existed before the
project.
# Table 12. DATASET ORIGIN – Location analytics data
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
~600GB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
~5GB / location / month
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
APIs - JSON format
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Proprietary sensors
</td> </tr> </table>
**6.2.3 Dataset FORMAT**
The dataset is in JSON and CSV format. It collects numerical data gathered
since 2015 and covers information related to zip code, coordinates, address,
county, city and country. The data are updated daily, meaning that each day
the dataset contains only the newly generated data.
# Table 13 DATASET FORMAT – Location analytics data
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
N/A because there is no access to the data through URL
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
JSON and CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
starting from 2015
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
zip code, coordinates, address, county, city, country
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
EN (numerical data)
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
No. Raw data contains a hashed version of the real MAC address, which is
anonymized at the source
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
/location_id/YYYY/MM/DD/HH
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily (every day the dataset contains only the data newly generated)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr> </table>
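The `/location_id/YYYY/MM/DD/HH` naming convention in Table 13 can be reproduced programmatically; a minimal sketch (the location identifier is hypothetical):

```python
from datetime import datetime

def hourly_key(location_id: str, ts: datetime) -> str:
    """Build a dataset key following the /location_id/YYYY/MM/DD/HH
    naming convention of the hourly location analytics data."""
    return f"/{location_id}/{ts:%Y/%m/%d/%H}"

key = hourly_key("loc-001", datetime(2017, 1, 15, 9))
# key == "/loc-001/2017/01/15/09"
```

Because the key is zero-padded and ordered from coarse to fine (year, month, day, hour), lexicographic sorting of keys matches chronological order.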
**6.2.4 Dataset ACCESS**
The dataset is private, but it is accessible to all the consortium members.
The data will be made available through an API over an authenticated,
encrypted channel.
# Table 14 MAKING DATA ACCESSIBLE – Location analytics data
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: ME. Access: members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
API
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
Authenticated encrypted channel
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
API endpoint
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials / API keys
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
presence data, location intelligence
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Lifetime archive of raw data. The APIs always use the last version of the
algorithm
</td> </tr> </table>
# Table 15 MAKING DATA INTEROPERABLE – Location analytics data
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr> </table>
**6.2.5 Dataset SECURITY**
The dataset does not contain personal data, because the data were anonymized
at the source. Data recovery and secure storage are foreseen.
# Table 16 DATASET SECURITY - Location analytics data
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
Y, prior to storing data in a database (No PD is stored in any database)
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
All the data are anonymized before storage (see paragraph 6.2.6)
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr> </table>
**6.2.6 Ethics and Legal requirements**
The MAC addresses that Measurence's sensors collect (which can be unique
identifiers of WiFi transmitters) are hashed with SHA-256, a cryptographic
hash function from the SHA-2 family designed by the United States National
Security Agency (NSA). Measurence follows a privacy-by-design approach: the
hashing is performed on the sensor, only the hashed MAC address is sent to
Measurence's servers, and the original MAC address is discarded directly by
the sensor, so the real MAC address is never stored on the servers. Given a
hashed MAC address, there is no way to reconstruct the corresponding original
MAC address other than attempting a brute-force attack (which applies to any
cryptographic hash function). Based on the above description, this dataset
does not contain personal data; therefore the national and European legal
frameworks that regulate the use of personal data do not apply, and no copy of
an opinion is required to be collected.
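The hashing step described above can be illustrated in a few lines of Python (a sketch only; Measurence's actual implementation runs on the sensor itself):

```python
import hashlib

def anonymise_mac(mac: str) -> str:
    """Hash a MAC address with SHA-256 (SHA-2 family); only this digest
    ever leaves the sensor, never the original address."""
    return hashlib.sha256(mac.encode("ascii")).hexdigest()

digest = anonymise_mac("00:1a:2b:3c:4d:5e")  # hypothetical MAC address
# The 64-hex-character digest is stable (the same device always maps to
# the same value, which allows counting) but cannot be inverted to the
# original address short of a brute-force attack.
```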
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.3 ME Dataset - Consumer Data: Location analytics data (Daily)**
**6.3.1 Dataset IDENTIFICATION**
The dataset “Location analytics data”, provided by Measurence, focuses on
daily number of devices with WiFi enabled that pass through an area covered by
Measurence WiFi sensors.
# Table 17\. DATASET IDENTIFICATION – Location analytics data
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer Data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Location analytics data
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Daily number of devices with WiFi enabled that pass through an area covered by
Measurence WiFi sensors
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Measurence
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Olga Melnyk
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC3
</td> </tr> </table>
**6.3.2 Dataset ORIGIN**
This dataset is available from January 2017 and it cannot be defined as “core
data”. It is delivered through APIs in JSON format, has a size of ~600GB and
grows by ~5GB per location per month. The dataset already existed before the
project.
# Table 18. DATASET ORIGIN – Location analytics data
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
~600GB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
~5GB / location / month
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
APIs - JSON format
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Proprietary sensors
</td> </tr> </table>
**6.3.3 Dataset FORMAT**
The dataset is in JSON and CSV format. It collects numerical data gathered
since 2015 and covers information related to zip code, coordinates, address,
county, city and country. The data are updated daily, meaning that each day
the dataset contains only the newly generated data.
# Table 19 DATASET FORMAT – Location analytics data
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
N/A because there is no access to the data through URL
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
JSON and CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
starting from 2015
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
zip code, coordinates, address, county, city, country
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
EN (numerical data)
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
No. Raw data contains a hashed version of the real MAC address, which is
anonymized at the source
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
/location_id/YYYY/MM/DD/
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily (every day the dataset contains only the data newly generated)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr> </table>
**6.3.4 Dataset ACCESS**
The dataset is private, but it is accessible to all the consortium members.
The data will be made available through an API over an authenticated,
encrypted channel.
# Table 20 MAKING DATA ACCESSIBLE – Location analytics data
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: ME. Access: members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
Private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
API
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
Authenticated encrypted channel
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
TBD / API endpoint
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials / API keys
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
presence data, location intelligence
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Lifetime archive of raw data. The APIs always use the last version of the
algorithm
</td> </tr> </table>
# Table 21 MAKING DATA INTEROPERABLE – Location analytics data
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr> </table>
**6.3.5 Dataset SECURITY**
The dataset does not contain personal data, because the data were anonymized
at the source. Data recovery and secure storage are foreseen.
# Table 22 DATASET SECURITY - Location analytics data
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
Y, prior to storing data in a database (No PD is stored in any database)
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
All the data are anonymized before storage (see paragraph 6.3.6)
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr> </table>
**6.3.6 Ethics and Legal requirements**
The MAC addresses that Measurence's sensors collect (which can be unique
identifiers of WiFi transmitters) are hashed with SHA-256, a cryptographic
hash function from the SHA-2 family designed by the United States National
Security Agency (NSA). Measurence follows a privacy-by-design approach: the
hashing is performed on the sensor, only the hashed MAC address is sent to
Measurence's servers, and the original MAC address is discarded directly by
the sensor, so the real MAC address is never stored on the servers. Given a
hashed MAC address, there is no way to reconstruct the corresponding original
MAC address other than attempting a brute-force attack (which applies to any
cryptographic hash function). Based on the above description, this dataset
does not contain personal data; therefore the national and European legal
frameworks that regulate the use of personal data do not apply, and no copy of
an opinion is required to be collected.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.4 BB Dataset - Consumer Data: Customer Purchase History**
**6.4.1 Dataset IDENTIFICATION**
The dataset “Customer purchase history” is proprietary and contains data on
customers and their purchases.
# Table 23\. DATASET IDENTIFICATION – Customer Purchase History
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Customer purchase history
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Sell out data matched with customer baskets in a defined timeframe.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Big Bang
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Matija Torlak
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1
</td> </tr> </table>
**6.4.2 Dataset ORIGIN**
This dataset is available from January 2017 and it cannot be defined as “core
data”. It has a size of 29,000 products and grows by 2,000 new products per
year. The dataset already existed before the project.
# Table 24 DATASET ORIGIN – Customer Purchase History
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
29000 products
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
2000 new products per year
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured tabular data
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Google Analytics, DWH (SQL tables), Excel structured data
</td> </tr> </table>
**6.4.3 Dataset FORMAT**
The dataset has a CSV/XLS format. It collects data gathered since 2013 and
covers information in total or per store location (18 stores + web). The data
are updated daily and contain both the newly generated data and the history.
# Table 25 DATASET FORMAT – Customer Purchase History
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
BB Classification - can be matched with GPC Classification; purchase data
table structured (SQL)
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
CSV/XLS
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
since 2013
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Total or per store location (18 stores + web)
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
Slovenian
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
/{country}/companyname/purchaseid.json
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
daily (new + history)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
Google Analytics
</td> </tr> </table>
**6.4.4 Dataset ACCESS**
The dataset is public, but access is restricted by username and password. The
data will be made available through download.
# Table 26 MAKING DATA ACCESSIBLE – Customer Purchase History
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Admin - Full User (Owner)
Access for all members via username and password
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
public (password, username restricted)
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
Download, view, edit (based on license)
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
Accessible on web
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
BB virtual server
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
OrderId, ProductId, StoreId, etc. (same as the sample)
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Can be generated on demand
</td> </tr> </table>
# Table 27 MAKING DATA INTEROPERABLE – Customer Purchase History
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Publication as linked data (RDF-ization)
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Semantic data enrichment
</td> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Interlinked product classification
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Linked product data
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr> </table>
**6.4.5 Dataset SECURITY**
The dataset does not contain personal data, because the data were anonymized
at the source. Secure storage and constant download options are foreseen.
# Table 28 DATASET SECURITY – Customer Purchase History
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Secure storage, constant download options
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
Personal data will not be processed during the project. All data are returned
by analytics engine that will not provide PD.
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD**
**anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr> </table>
**6.4.6 Ethics and Legal requirements**
The source of the data contains PD, but the data are anonymized before the
project and shared within the project without PD. Since Big Bang has already
notified their Data Protection Officer (DPO) that no PD will be shared, no
additional opinion needs to be obtained. The notification to the Data
Protection Officer is included in deliverable [D7.2].
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.5 BB Dataset - Consumer Data: Consumer Intent and Interaction**
**6.5.1 Dataset IDENTIFICATION**
The dataset “Consumer intent and interaction” is proprietary and contains data
on customer journeys recorded using Google Analytics.
# Table 29\. DATASET IDENTIFICATION – Consumer Intent and Interaction
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Consumer intent and interaction
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A collection of user journey data from Google Analytics - pageviews, page
events, search terms, redirects to channels, etc. Data is recorded since
December 2012.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Big Bang
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Matija Torlak
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1
</td> </tr> </table>
**6.5.2 Dataset ORIGIN**
This dataset is available from January 2017 and it cannot be defined as “core
data”. The dataset already existed before the project.
# Table 30 DATASET ORIGIN - Consumer Intent and Interaction
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
130 million pageviews,
20 million sessions,
8 million users,
70000 transactions (since December 2012)
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
10,000 users per day
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
numeric
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Google Analytics
</td> </tr> </table>
**6.5.3 Dataset FORMAT**
The dataset has a CSV format. It collects data gathered since 2013, with
global coverage. The data are updated daily and contain both the newly
generated data and the history.
# Table 31 DATASET FORMAT – Consumer Intent and Interaction
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
Google Analytics specified
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
CSV, XLS
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
since 2013
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Global
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
not language specific
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
daily (new + history)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
Google Analytics
</td> </tr> </table>
**6.5.4 Dataset ACCESS**
The dataset is public, but access is restricted by username and password. The
data will be made available through download.
# Table 32 MAKING DATA ACCESSIBLE – Consumer Intent and Interaction
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Admin - Full User (Owner)
Access for all members via username and password
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
public (password, username restricted)
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
Download, view, edit (based on license)
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
BB virtual server
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
Google search tags
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
N/A because the data is used only for analytical purposes
</td> </tr> </table>
# Table 33 MAKING DATA INTEROPERABLE – Consumer Intent and Interaction
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Interlinked product classification
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Linked product data
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr> </table>
**6.5.5 Dataset SECURITY**
The dataset does not contain personal data. Secure storage and constant
download options are foreseen.
# Table 34 DATASET SECURITY – Consumer Intent and Interaction
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Secure storage, back up
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
Google Analytics data only, so no PD included. In this case data is on the
level of product / categories / page.
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD- anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr> </table>
**6.5.6 Ethics and Legal requirements**
Based on the above dataset description, the dataset does not contain personal
data, therefore the national and European legal framework that regulates the
use of personal data does not apply and copy of opinion is not required to be
collected.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.6 ME Dataset - Consumer Data: Location analytics data (Weekly)**
**6.6.1 Dataset IDENTIFICATION**
The dataset “Location analytics data”, provided by Measurence, focuses on
weekly number of devices with WiFi enabled that pass through an area covered
by Measurence WiFi sensors.
# Table 35\. DATASET IDENTIFICATION – Location analytics data
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer Data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Location analytics data
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Weekly number of devices with WiFi enabled that pass through an area covered
by Measurence WiFi sensors
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Measurence
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Olga Melnyk
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC3
</td> </tr> </table>
**6.6.2 Dataset ORIGIN**
This dataset is available from January 2017 and it cannot be defined as “core
data”. It is delivered through APIs in JSON format, has a size of ~600GB and
grows by ~5GB per location per month. The dataset already existed before the
project.
# Table 36. Dataset ORIGIN – Location analytics data
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
~600GB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
~5GB / location / month
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
APIs - JSON format
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Proprietary sensors
</td> </tr> </table>
**6.6.3 Dataset FORMAT**
The dataset is in JSON and CSV format. It collects numerical data gathered
since 2015 and covers information related to zip code, coordinates, address,
county, city and country. The data are updated daily, meaning that each day
the dataset contains only the newly generated data.
# Table 37 DATASET FORMAT – Location analytics data
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
N/A because there is no access to the data through URL
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
JSON and CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
starting from 2015
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
zip code, coordinates, address, county, city, country
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
EN (numerical data)
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
No. Raw data contains a hashed version of the real mac address which is
anonymized at the source
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
/location_id/YYYY/weeknum
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily (every day the dataset contains only the data newly generated)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr> </table>
**6.6.4 Dataset ACCESS**
The dataset is private, but it is accessible to all the consortium members.
The data will be made available through an API over an authenticated,
encrypted channel.
# Table 38 MAKING DATA ACCESSIBLE – Location analytics data
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: ME. Access: members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
Private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
API
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
Authenticated encrypted channel
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
TBD / API endpoint
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials / API keys
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
presence data, location intelligence
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Lifetime archive of raw data. The APIs always use the last version of the
algorithm
</td> </tr> </table>
# Table 39 MAKING DATA INTEROPERABLE – Location analytics data
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr> </table>
**6.6.5 Dataset SECURITY**
The dataset does not contain personal data, because the data were anonymized
at the source. Data recovery and secure storage are foreseen.
# Table 40 DATASET SECURITY - Location analytics data
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
Y, prior to storing data in a database (No PD is stored in any database)
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
All the data are anonymised before storage (see paragraph 6.6.6)
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr> </table>
**6.6.6 Ethics and Legal requirements**
The MAC addresses that Measurence's sensors collect (which can be unique
identifiers of WiFi transmitters) are hashed with SHA-256, a cryptographic
hash function from the SHA-2 family designed by the United States National
Security Agency (NSA). Measurence follows a privacy-by-design approach: the
hashing is performed on the sensor, only the hashed MAC address is sent to
Measurence's servers, and the original MAC address is discarded directly by
the sensor, so the real MAC address is never stored on the servers. Given a
hashed MAC address, there is no way to reconstruct the corresponding original
MAC address other than attempting a brute-force attack (which applies to any
cryptographic hash function). Based on the above description, this dataset
does not contain personal data; therefore the national and European legal
frameworks that regulate the use of personal data do not apply, and no copy of
an opinion is required to be collected.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.7 BT Dataset - Customer Communication Data: Contact and Consumer
Interaction History**
**6.7.1 Dataset IDENTIFICATION**
The dataset “Contact and Consumer Interaction history” is proprietary and
contains data on communications with customers.
# Table 41\. DATASET IDENTIFICATION – Contact and Consumer Interaction History
<table>
<tr>
<th>
**Category**
</th>
<th>
Customer Communication Data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Contact and Consumer Interaction History
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
The dataset contains the following data:
* calls
  * every outbound call, successful or not (every attempt counts)
  * every inbound call, successful or not
  * every simulated call
* other contact events
  * every inbound email, SMS, click-through, fax, scan, or any other document
  * every outbound email, SMS, fax, or any other sent document
* other events
  * a record of the agent's time spent waiting for a contact
  * a record of every time an agent logs in or out
  * a record of every time an agent joins or leaves a campaign
  * a record of every CCServer (CDE COCOS CEP Contact Center Server) start-up or shutdown

Using this data, it is possible to create statistics and reports regarding
telephony and the performance of single agents, groups of agents, campaigns
and the call center. Nearly all the reports provided by CCServer are made from
this table.
Although this table is not meant to serve as a basis for content-related
reports (i.e., interview statistics), some fields in the table may be used for
this kind of report as well.
</td> </tr>
<tr>
<td>
</td>
<td>
Dataset data are either generated by the CCServer system or collected from
the contact signalling (protocol).
The data are intended for handling Customer Engagement Platform (CEP)
campaigns; they are already used for this purpose and will continue to be
used for it in the future.
The existing data carry all information about realised connection types and
services and will be reused and extended with new communication channels,
trends and services.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Browsetel / CDE
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Matej Žvan, Aleš Štor
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1
</td> </tr> </table>
**6.7.2 Dataset ORIGIN**
This dataset is available from March 2017 and it can be defined as “core
data”. Its size is of 5-20 GB with a growth of 5-20 GB / year. The dataset
already existed before the project.
## Table 42 DATASET ORIGIN – Contact and Consumer Interaction History
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M3
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
5-20 GB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
5-20 GB / year
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
Current format is SQL, target format CSV UTF-8 Text file (compressed)
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Contact center and Customer Interaction Management data
</td> </tr> </table>
**6.7.3 Dataset FORMAT**
The dataset has CSV UTF-8 format. It covers the area of Slovenia, in English.
The data is updated monthly.
## Table 43 DATASET FORMAT – Contact and Consumer Interaction History
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
RAW data.
Optimized Data from the system “Call History” table and history from Customer
Interaction Management. Records describing contacts can be described by
additional information records.
EVENTID
CAMPAIGNRESULT_CCS
</th> </tr>
<tr>
<td>
</td>
<td>
RESULT_CODE
CALL_PRIORITY
ATTEMPT_NR
MANUAL_MODE
CCS_ENDSTATE
COST
CONTACT_COUNT
FOR_APPOINTMENT
CALL_TYPE
CALL_DIRECTION
DISC_CAUSE
DISC_CAUSE_DESC
QUEUE_SIZE
ALL_QUEUE_SIZE
DISC_BY_CUSTOMER
CUSTOM_DATA
CALLED_NUMBER
VRU_NUMBER
TRANSFERS
REJECTS IGNORES...
CALL_REASON
EVENT_SERVICE_ORIGIN
EVENT_ORIGIN
EVENT_TYPE
EVENT_DATE
EVENT_LOCATION
MEDIA_TYPE
TOTAL_TIME
CONVERSATION_TIME
</td>
<td>
</td> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
CSV UTF8
</td>
<td>
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
1 year (at the start), updated during the project duration
</td>
<td>
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Slovenia
</td>
<td>
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
English
</td>
<td>
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Persistent and unique identifiers are used, e.g. EVENT_ID, CAMPAIGN_ID, CHANNEL_ID…
</td>
<td>
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
Not used
</td>
<td>
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Monthly
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
Proprietary solution in form of relational tables
</td>
<td>
</td> </tr> </table>
**6.7.4 Dataset ACCESS**
The dataset is private, but it is accessible to all the consortium members.
The data will be made available from Secure FTP in compressed CSV UTF-8.
## Table 44 MAKING DATA ACCESSIBLE – Contact and Consumer Interaction History
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
No licencing for the time of EW Shopp project duration. Access via ACL is
enabled for all partners in the consortium
</th> </tr>
<tr>
<td>
**Availability (public |**
**private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
Data available from Secure FTP in compressed CSV UTF-8.
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
Secure FTP Client
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
Browsetel, secure file server
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
Contacts
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Data will be preserved for the time of EW Shopp project duration. End volume
is approximated to be 20 GB.
</td> </tr> </table>
## Table 45 MAKING DATA INTEROPERABLE – Contact and Consumer Interaction
History
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr> </table>
**6.7.5 Dataset SECURITY**
The dataset does not contain PD because PD was removed at the source.
## Table 46 DATASET SECURITY – Contact and Consumer Interaction History
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
Caller number is ignored and not recorded (not needed in analytical
processing)
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr> </table>
**6.7.6 Ethics and Legal requirements**
Based on the above dataset description, the dataset does not contain personal
data, therefore the national and European legal framework that regulates the
use of personal data does not apply and copy of opinion is not required to be
collected.
There are no ethical issues that can have an impact on sharing this dataset.
All data are returned by analytics engine that provides only aggregated data
about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation to free
association, etc.
**6.8 ECMWF Dataset - Weather: MARS Historical Data**
**6.8.1 Dataset IDENTIFICATION**
The dataset “MARS Historical Data” is proprietary and contains meteorological
data.
# Table 47. DATASET IDENTIFICATION – MARS Historical Data
<table>
<tr>
<th>
**Category**
</th>
<th>
Weather
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Meteorological Archival and Retrieval System
(MARS) Historical Data
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Meteorological archive of forecasts of the past 35 years and sets of
reanalysis forecasts.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
European Centre for Medium-Range Weather Forecasts (ECMWF)
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Aljaž Košmerlj
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1, BC2, BC3, BC4
</td> </tr> </table>
**6.8.2 Dataset ORIGIN**
This dataset is available from April 2017 and it can be defined as “core
data”. Its size is >85 PB. The dataset already existed before the project.
# Table 48 DATASET ORIGIN – MARS Historical Data
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M4
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
>85 PB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
Complete status of atmosphere twice a day
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured, CSV
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
ECMWF MARS API
</td> </tr> </table>
**6.8.3 Dataset FORMAT**
The dataset has CSV format. It covers the whole Earth, in English. The data is
updated in real time.
# Table 49 DATASET FORMAT – MARS Historical Data
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
N/A
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
past 35 years
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Global
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
English
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
/{country}/YYYY/MM/DD.CSV
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Real-time
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr> </table>
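The naming convention in Table 49, `/{country}/YYYY/MM/DD.CSV`, can be generated programmatically when requesting or archiving daily extracts. A small sketch (the country code and date are illustrative, and the helper name is our own):

```python
from datetime import date

def mars_csv_path(country: str, day: date) -> str:
    """Build a file path following the /{country}/YYYY/MM/DD.CSV convention."""
    return f"/{country}/{day.year:04d}/{day.month:02d}/{day.day:02d}.CSV"

print(mars_csv_path("SI", date(2017, 4, 1)))  # → /SI/2017/04/01.CSV
```

Zero-padding the month and day keeps paths lexicographically sortable, so a plain directory listing enumerates extracts in chronological order.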
**6.8.4 Dataset ACCESS**
The dataset is private, but it is accessible to all the consortium members.
The data will be made available by API access.
# Table 50 MAKING DATA ACCESSIBLE – MARS Historical Data
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: ECMWF. Access: All members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
API access
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
REST API, Python API
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
http://apps.ecmwf.int/mars-catalogue/
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
weather, climate
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
ECMWF maintained archive
</td> </tr> </table>
# Table 51 MAKING DATA INTEROPERABLE – MARS Historical Data
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
References to shared systems of identifiers and standard data types
</td> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Wikipedia entities
</td> </tr> </table>
**6.8.5 Dataset SECURITY**
The dataset does not contain PD.
# Table 52 DATASET SECURITY – MARS Historical Data
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
yes, both managed by ECMWF
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr> </table>
**6.8.6 Ethics and Legal requirements**
Based on the above dataset description, the dataset “MARS Historical Data”
does not contain personal data, therefore the national and European legal
framework that regulates the use of personal data does not apply and copy of
opinion is not required to be collected.
There are no ethical issues that can have an impact on sharing this dataset.
**6.9 CE Dataset - Products and Categories: Product Attributes**
**6.9.1 Dataset IDENTIFICATION**
The dataset “Product attributes” is proprietary and contains information about
individual attributes for various products.
# Table 53. DATASET IDENTIFICATION – Product Attributes
<table>
<tr>
<th>
**Category**
</th>
<th>
Products and categories
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Product attributes
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A collection of product attributes (varying from generic such as name, EAN,
brand, categorization and color to more specific as dimensions or technical
specifications). Data is collected from more than one thousand online stores
in 5 countries and then automatically and manually merged into an organized
dataset.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Ceneje
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
David Creslovnik, Uros Mevc
</td> </tr> </table>
<table>
<tr>
<th>
**Business Cases number**
</th>
<th>
BC1
</th> </tr> </table>
**6.9.2 Dataset ORIGIN**
This dataset is available from January 2017 and it can be defined as “core
data”. The dataset already existed before the project.
# Table 54 DATASET ORIGIN - Product Attributes
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
12 million products
10 million product specifications
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
10000 new products per day
7000 product specifications per day
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured tabular data
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
SQL tables
</td> </tr> </table>
**6.9.3 Dataset FORMAT**
The dataset collects data starting from 2016, at country level, in the
Slovenian, Croatian and Serbian languages. The data is updated daily.
# Table 55 DATASET FORMAT – Product Attributes
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
Product attributes
* IdProduct (INT)
* NameProduct (STRING)
* L1 (STRING)
* L2 (STRING)
* L3 (STRING)
* AttName (STRING)
* AttValue (STRING)
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
SQL: tabular
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
since 2016
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Country
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
Slovenian, Croatian, Serbian
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
/{country}/product_attributes.tsv
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily (every day the dataset contains full generated data)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr> </table>
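The structure in Table 55 stores one attribute per row (IdProduct, AttName, AttValue, plus category levels L1–L3), so consumers typically regroup rows into one attribute map per product. A minimal sketch with made-up example rows:

```python
from collections import defaultdict

# One row per (product, attribute), as in the dataset structure above.
# The product names and attribute values are invented for illustration.
rows = [
    {"IdProduct": 1, "NameProduct": "Phone X", "L1": "Telephony",
     "AttName": "color", "AttValue": "black"},
    {"IdProduct": 1, "NameProduct": "Phone X", "L1": "Telephony",
     "AttName": "EAN", "AttValue": "3800123456789"},
    {"IdProduct": 2, "NameProduct": "Kettle Y", "L1": "Appliances",
     "AttName": "color", "AttValue": "white"},
]

# Regroup the long-format rows into one attribute dict per product.
products = defaultdict(dict)
for row in rows:
    products[row["IdProduct"]][row["AttName"]] = row["AttValue"]

print(products[1]["color"])  # → black
```

The long format lets products carry arbitrarily different attribute sets (generic ones such as EAN or brand alongside category-specific technical specifications) without schema changes.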
**6.9.4 Dataset ACCESS**
The dataset is private, but it is accessible to all the consortium members.
The data will be made available through File download.
# Table 56 MAKING DATA ACCESSIBLE – Product Attributes
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: Ceneje.
Access: All members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
File download (zip)
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
WGET/Curl
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
AWS or Ceneje static content server
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
NO (can be generated on demand)
</td> </tr> </table>
# Table 57 MAKING DATA INTEROPERABLE – Product Attributes
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Publication as linked data (RDF-ization)
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Semantic data enrichment
</td> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Interlinked product classification
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Linked product data
</td> </tr> </table>
**6.9.5 Dataset SECURITY**
The dataset does not contain PD.
# Table 58 DATASET SECURITY – Product Attributes
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Secure storage, regular backups
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
Product level
</td> </tr> </table>
**6.9.6 Ethics and Legal requirements**
Based on the above dataset description, the dataset “Product Attributes” does
not contain personal data, therefore the national and European legal framework
that regulates the use of personal data does not apply and copy of opinion is
not required to be collected.
There are no ethical issues that can have an impact on sharing this dataset.
All data are returned by analytics engine that provides only aggregated data
about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation to free
association, etc.
**6.10 JSI Dataset - Media: Event Registry**
**6.10.1 Dataset IDENTIFICATION**
The dataset “Event Registry” is proprietary and contains clustered information
about events based on news articles online.
# Table 59. DATASET IDENTIFICATION – Event Registry
<table>
<tr>
<th>
**Category**
</th>
<th>
Dataset Media
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Event Registry
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A registry of news articles which are automatically clustered into events -
sets of articles about the same real-world event. The articles are collected
from over 150 thousand sources from all over the world and in 21 languages.
Article text is processed and annotated using a linguistic and semantic
analysis pipeline. The articles and events are linked based on content
similarity. These links are made automatically and across different languages.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
JSI
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Aljaž Košmerlj
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1, BC2, BC3, BC4
</td> </tr>
</table>
**6.10.2 Dataset ORIGIN**
This dataset is available from January 2017 and it can be defined as “core
data”. The dataset already existed before the project.
# Table 60 DATASET ORIGIN – Event Registry
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
136 million articles and 4.8 million events
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
150 thousand articles and 400 events added per day
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
text + metadata
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
online news sites, Event Registry API
</td> </tr>
</table>
**6.10.3 Dataset FORMAT**
The dataset collects data starting from December 2013, covering the whole
Earth in many languages. The data is updated in real time.
# Table 61 DATASET FORMAT – Event Registry
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
Full documentation available at:
_https://github.com/EventRegistry/eventregistry-python/wiki/Data-models_
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
JSON
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
since December 2013
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Whole Earth
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
English, German, Spanish, Catalan, Portuguese, Italian, French, Russian,
Chinese, Slovene, Croatian, Serbian, Arabic, Turkish, Persian, Armenian,
Kurdish, Lithuanian, Somali, Urdu, Uzbek
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
Wikipedia URIs
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Real-time
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr>
</table>
**6.10.4 Dataset ACCESS**
The dataset is private, but it is accessible to all the consortium members.
The data will be made available through API access.
# Table 62 MAKING DATA ACCESSIBLE – Event Registry
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: JSI
Access: All members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
limited open and private (subscription-based); full access will be available
to project members
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
API access
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
REST, Python API
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
_http://eventregistry.org/_
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
news, articles, events
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
long-term database storage
</td> </tr> </table>
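As described in Table 59, Event Registry delivers articles already linked into events (sets of articles about the same real-world event, across languages). Downstream, regrouping retrieved articles by event is a simple keyed aggregation. A toy sketch with invented article records; the real JSON data model is documented at the link in Table 61:

```python
from collections import defaultdict

# Invented article records; in the real data model each article carries
# an identifier of the event it was clustered into.
articles = [
    {"uri": "a1", "lang": "eng", "eventUri": "evt-100"},
    {"uri": "a2", "lang": "deu", "eventUri": "evt-100"},
    {"uri": "a3", "lang": "slv", "eventUri": "evt-200"},
]

# Group article URIs by their event, regardless of language.
events = defaultdict(list)
for article in articles:
    events[article["eventUri"]].append(article["uri"])

print(sorted(events["evt-100"]))  # → ['a1', 'a2']
```

Because the event links are made across languages, a single event key can aggregate coverage of the same story from English, German and Slovene sources alike.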
# Table 63 MAKING DATA INTEROPERABLE – Event Registry
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
References to shared systems of identifiers and standard data types
</td> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Wikipedia entities
</td> </tr>
</table>
**6.10.5 Dataset SECURITY**
The dataset does not include PD collected directly from its users. The dataset
contains only publicly available PD (mentions of natural persons in news
articles) as part of its news archive. PD can be removed upon request by any
individual.
# Table 64 DATASET SECURITY – Event Registry
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Secure storage, no sensitive data, local backups
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
"Right to be forgotten" guaranteed
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr>
</table>
**6.10.6 Ethics and Legal requirements**
JSI has already obtained an opinion of the Slovenian Information Commissioner
regarding use of Event Registry data in another EU project. (H2020 project
RENOIR, grant agreement No 691152). A copy of this opinion and an explanation
why it is applicable also for the EW-Shopp project are included in deliverable
[D7.2]. The opinion states that even though Event Registry collects and
indexes news data which is publicly available, this may still constitute
processing of personal data, and some users may want to have their data
removed from the index. This is the so-called “right to be forgotten”, which
must also be offered by web search engines such as Google. It can be defined as “the
right to silence on past events in life that are no longer occurring” and
allows individuals to have information about themselves deleted from certain
internet records so that they cannot be found by search engines. To comply
with this, Event Registry supports the option to request a removal of personal
links from its index. The Information Commissioner does not foresee any other
necessary privacy protection measures.
There are no ethical issues that can have an impact on sharing this dataset.
All data are returned by analytics engine that provides only aggregated data
about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation to free
association, etc.
**6.11 GfK Dataset - Consumer data: Consumer data**
**6.11.1 Dataset IDENTIFICATION**
The dataset “Consumer data” is proprietary and contains anonymised consumer
panel data: TV and online behaviour, purchases, mobile usage, and household
and individual demographics.
# Table 65. DATASET IDENTIFICATION – Consumer data
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Consumer data
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
TV Behavior & Exposure, Online Behavior & Exposure, HH
& Individual Purchase Level, Mobile Usage, Household & Individual Demographic
and Segmentation Information in Italy, Germany, Poland and the Netherlands.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
GfK
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Stefano Albano
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC2
</td> </tr>
</table>
**6.11.2 Dataset ORIGIN**
The dataset is available from May 2017 and it can’t be defined as “core data”.
Its size is of 80GB with a growth of 40GB per year. The dataset already
existed before the project.
# Table 66 DATASET ORIGIN – Consumer data
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M5
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
80GB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
40GB per year
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured tabular data, CSV
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
GfK receives the data directly from the panelists, who are connected to GfK
via GPRS technology with an ad hoc tablet, via the web with a PC/laptop, or
via smartphone. Data are collected actively (with questionnaires) or passively
(via installed apps). Data are anonymised and stored in GfK's storage systems.
</td> </tr>
</table>
**6.11.3 Dataset FORMAT**
The dataset has a CSV format. It collects numerical data since 2016 and covers
Italy, Germany, Poland and the Netherlands. The data is updated monthly.
# Table 67 DATASET FORMAT – Consumer data
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
Data are stored in data warehouse and can be extracted or visualized through a
software.
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
structured tabular data, CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
Monthly / daily data since 2016
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Italy, Germany, Poland, Netherlands
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
EN (numerical data)
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
Static DB
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Monthly
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr>
</table>
**6.11.4 Dataset ACCESS**
The dataset is private and it is not available to consortium members.
# Table 68 MAKING DATA ACCESSIBLE – Consumer data
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Available only for GfK
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
N/A
</td> </tr> </table>
# Table 69 MAKING DATA INTEROPERABLE – Consumer data
<table>
<tr>
<th>
**Data**
**interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
• •
</td>
<td>
Interlinked product classification Linked product data
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
</table>
**6.11.5 Dataset SECURITY**
The dataset does not contain personal data because such data were removed at the source.
# Table 70 DATASET SECURITY – Consumer data
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
See 6.11.6
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
Data are not aggregated
</td> </tr>
</table>
**6.11.6 Ethics and Legal requirements**
GfK collects the data in accordance with current privacy law, asking each
panelist for consent to transfer the data to GfK for data analysis. GfK has
notified the National Data Protection Authority (notification attached in [D7.2]).
There are no ethical issues that can have an impact on sharing this dataset.
All data are returned by analytics engine that provides only aggregated data
about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation to free
association, etc.
**6.12 GfK Dataset - Market data: Sales data**
**6.12.1 Dataset IDENTIFICATION**
The dataset “Sales data” contains monthly data (in value / number) on Consumer
Electronics, Information Technology, Telecommunication, Major Domestic
Appliances and Small Domestic Appliances products.
# Table 71. Dataset IDENTIFICATION – Sales data
<table>
<tr>
<th>
**Category**
</th>
<th>
Market data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Sales data
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Monthly data (in value / number) of Consumer
Electronics, Information Technology,
Telecommunication, Major Domestic Appliances and Small Domestic Appliances
products.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
GfK
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Alessandro De Fazio
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1, BC2, BC3
</td> </tr>
</table>
**6.12.2 Dataset ORIGIN**
The dataset is available from January 2017 and it can’t be defined as “core
data”. Its size is of 80GB with a growth of 5GB per country per year.
# Table 72 DATASET ORIGIN – Sales data
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
80GB per country
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
5GB per country per year
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured tabular data, CSV
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
GfK receives from the points of sale (POS) sales data split per product, in
different formats (electronic and manual). Data are checked, verified and
uploaded into a tool where they are connected to the product sheet. The data
are collected from a representative sample of POS and extrapolated to the
universe.
</td> </tr>
</table>
**6.12.3 Dataset FORMAT**
The dataset has a CSV format. It collects data since 2004 for all European
countries (except Albania, Kosovo, Macedonia and Montenegro). The data is
updated monthly.
# Table 73 DATASET FORMAT – Sales data
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
Data are stored in a global data warehouse accessible online. The inputs are
four dimensions (Product, Time, Facts, Channels) that can be processed like an
Excel pivot table.
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
structured tabular data, CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
Monthly data since 2004
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
All European countries (except Albania, Kosovo, Macedonia and Montenegro)
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
English
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
Static DB
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Monthly
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr>
</table>

**6.12.4 Dataset ACCESS**
The dataset is private and available only to Università Bicocca. The data are
available through FTP; a username and password are required.
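The four-dimension warehouse structure described in Table 73 (Product, Time, Facts, Channels) can be processed like a pivot table. The sketch below illustrates this with the Python standard library; the column names and sample rows are illustrative assumptions, not the real GfK layout.

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample rows mimicking the warehouse dimensions described
# in Table 73; column names and values are made up for illustration.
sample = """product,month,channel,sales_value
TV-A,2017-01,retail,1200
TV-A,2017-01,online,300
TV-A,2017-02,retail,900
PC-B,2017-01,retail,700
"""

# Pivot: rows = product, columns = month, cells = summed sales value
pivot = defaultdict(lambda: defaultdict(float))
for row in csv.DictReader(io.StringIO(sample)):
    pivot[row["product"]][row["month"]] += float(row["sales_value"])

print(pivot["TV-A"]["2017-01"])  # 1500.0 (retail + online summed)
```

The same aggregation could equally be done in a spreadsheet pivot table or with a dataframe library; the point is that the Product/Time/Facts/Channels dimensions map directly onto pivot rows, columns and values.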
# Table 74 MAKING DATA ACCESSIBLE – Sales data
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
The data will be transferred to Università Bicocca for data analysis while the
analysis (not the data) will be transferred by Università Bicocca to the
consortium.
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
Private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
CSV files via ftp
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
FTP
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
username and password needed to access ftp
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
sales data
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
N/A
</td> </tr> </table>
# Table 75 MAKING DATA INTEROPERABLE – Sales data
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Publication as linked data (RDF-ization)
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Semantic data enrichment
</td> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Interlinked product classification
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Linked product data
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
</table>

**6.12.5 Dataset SECURITY**
The dataset does not contain PD.
# Table 76 DATASET SECURITY – Sales data
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr>
</table>

**6.12.6 Ethics and Legal requirements**
Based on the above dataset description, the dataset “Sales data” does not
contain personal data; therefore the national and European legal framework
that regulates the use of personal data does not apply, and no copy of opinion
needs to be collected.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.13 GfK Dataset – Products & Categories: Product attributes **
**6.13.1 Dataset IDENTIFICATION**
The dataset “Product attributes” contains Technical Product Data Sheets for
all the products of the Consumer Electronics, IT, Telecommunication, Major
Domestic Appliances and Small Domestic Appliances sectors.
# Table 77. DATASET IDENTIFICATION – Product attributes
<table>
<tr>
<th>
**Category**
</th>
<th>
Products & Categories
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Product attributes
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Technical Product Data Sheets of all the products of the Consumer Electronics,
IT, Telecommunication, Major Domestic Appliances and Small Domestic Appliances
sectors. Product sheets are defined within the GfK
categorization and include: Brand, Product name, Model, ID, data, EAN code (on
80% of the products) and Technical features.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
GfK
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Marco Tobaldo
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1, BC2, BC4
</td> </tr> </table>
**6.13.2 Dataset ORIGIN**
The dataset is available from February 2017 and it can be defined as “core
data”. Its size is 2GB per country (Germany, UK, Italy), with a growth of 2%
per year.
# Table 78 DATASET ORIGIN – Product attributes
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M2
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
2GB per country (Germany, UK, Italy)
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
2% per year
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
Relational
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
GfK receives the data of all products sold at the POS. When there is a new
product, GfK creates its sheet using the product features obtained from the
manufacturer. All sheets are created manually, according to the GfK data plan,
in the country where the new product has been sold.
</td> </tr>
</table>

**6.13.3 Dataset FORMAT**
The dataset has a CSV or XML format. It collects product data since 1982 and
has a European coverage. The dataset is updated daily (each day it contains
only the newly generated data).
# Table 79 DATASET FORMAT – Product attributes
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
We describe here the main structure of the relational database (RDB), by
describing the five CSV files that we extract from it and share in EW-Shopp:
Country_EWS_2017_12_31_Feature_Data.txt (Value of the technical features of
the products)
Country_EWS_2017_12_31_Feature_List.txt (name of the features of the products)
Country_EWS_2017_12_31_Feature_Value_List.txt (code frame of the features)
Country_EWS_2017_12_31_Master_Data.txt (main information about the products)
Country_EWS_2017_12_31_Productgroup_Feature_List.txt (list of the technical
features available for each product)
Each file contains several columns, thus for the complete structure we refer
to documentation in "Spex_retail_CSVrelationalidbased.pdf" shared with the
consortium.
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
structured (R-DB), CSV or XML
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
The dataset includes product data since 1982 and it is daily updated
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
European coverage: Austria, Belgium, Denmark, Finland, France, Germany, UK,
Greece, Italy, Luxembourg, Netherlands, Poland, Portugal, Czech Republic,
Slovakia, Sweden, Norway, Hungary. Catalogue not available in Ireland,
Slovenia, Croatia, Bulgaria, Cyprus, Estonia, Latvia, Lithuania, Malta,
Romania.
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
Arabic, Czech, Chinese, Korean, Danish, French, Greek, English, Italian,
Dutch, Polish, Portuguese, Russian, Slovak, Spanish, Swedish, German, Turkish,
Hungarian
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
Country_EWS_2017_12_31_Feature_Data.txt
Country_EWS_2017_12_31_Feature_List.txt
Country_EWS_2017_12_31_Feature_Value_List.txt
Country_EWS_2017_12_31_Master_Data.txt
Country_EWS_2017_12_31_Productgroup_Feature_List.txt
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily (each day the dataset contains only the newly generated data; old data
are overwritten).
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr>
</table>

**6.13.4 Dataset ACCESS**
The dataset is private but available to all consortium members. The data are
available through FTP.
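The five relational CSV files listed in Table 79 link feature values to feature names through shared identifiers; the full column layout is documented in "Spex_retail_CSVrelationalidbased.pdf". As a minimal sketch, the join between a Feature_List file and a Feature_Data file might look as follows (the column names and IDs below are assumptions for illustration only):

```python
import csv
import io

# Toy stand-ins for Country_EWS_..._Feature_List.txt and
# Country_EWS_..._Feature_Data.txt; real columns are documented in
# "Spex_retail_CSVrelationalidbased.pdf" and may differ.
feature_list = """feature_id\tfeature_name
10\tScreen size
11\tColour
"""
feature_data = """product_id\tfeature_id\tvalue
P1\t10\t55 inch
P1\t11\tBlack
"""

# Look-up table: feature id -> human-readable feature name
names = {r["feature_id"]: r["feature_name"]
         for r in csv.DictReader(io.StringIO(feature_list), delimiter="\t")}

# Join feature values to their names via the shared feature_id key
joined = [(r["product_id"], names[r["feature_id"]], r["value"])
          for r in csv.DictReader(io.StringIO(feature_data), delimiter="\t")]

print(joined[0])  # ('P1', 'Screen size', '55 inch')
```

The same id-based join pattern applies to the other files (Feature_Value_List, Master_Data, Productgroup_Feature_List), each acting as a look-up table keyed on its identifier column.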
# Table 80 MAKING DATA ACCESSIBLE – Product attributes
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Private license: The data will be transferred to Università Bicocca for data
analysis while the analysis (not the data) will be transferred by Università
Bicocca to the consortium
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
Private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
ftp
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
No tools
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
It will be created when needed
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
username and password needed to access ftp
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
product categories / product features / value
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Regular disaster recovery / backup on original data
</td> </tr> </table>
# Table 81 MAKING DATA INTEROPERABLE – Product attributes
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Publication as linked data (RDF-ization)
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Semantic data enrichment
</td> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Interlinked product classification
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Linked product data
</td> </tr>
</table>

**6.13.5 Dataset SECURITY**
The dataset does not contain PD.
# Table 82 DATASET SECURITY – Product attributes
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr>
</table>

**6.13.6 Ethics and Legal requirements**
Based on the above dataset description, the dataset “Product attributes” does
not contain personal data; therefore the national and European legal framework
that regulates the use of personal data does not apply, and no copy of opinion
needs to be collected.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.14 ME Dataset - Consumer Data: Door counter data**
**6.14.1 Dataset IDENTIFICATION**
The dataset “Door counter data” contains data from customers' door counters.
# Table 83. DATASET IDENTIFICATION – Door counter data
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer Data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Door counter data
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Data from customers' door counters
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Measurence
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Olga Melnyk
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC3
</td> </tr>
</table>

**6.14.2 Dataset ORIGIN**
The dataset is available from January 2017 and it can’t be defined as “core
data”. Its size is 2 MB, with a growth of 60 kB/month/location. The dataset
already existed.
# Table 84 DATASET ORIGIN – Door counter data
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
2 MB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
60 kB/month/location
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured data
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Measurence's customers own data
</td> </tr> </table>
**6.14.3 Dataset FORMAT**
The dataset has a CSV format. It collects numerical data since 2016 related to
the Milan area. The dataset is updated daily.
# Table 85 DATASET FORMAT – Door counter data
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
N/A because there is no access to the data through URL
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
2016
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Milan
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
EN (numerical data)
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
/location_id/YYYY/MM/week
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr> </table>
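The naming convention above can be generated programmatically. A minimal sketch, assuming a /location_id/YYYY/MM/week layout where the week field is the ISO week number (the exact week definition is not specified in the plan):

```python
from datetime import date

def counter_path(location_id: str, day: date) -> str:
    """Build a dataset path following the assumed
    /location_id/YYYY/MM/week naming convention; the week number
    is taken to be the ISO week of the given date."""
    week = day.isocalendar()[1]
    return f"/{location_id}/{day.year}/{day.month:02d}/{week}"

print(counter_path("store42", date(2017, 3, 15)))  # /store42/2017/03/11
```

Such a helper makes the convention reproducible on both the producing and consuming side, so weekly files can be located deterministically from a location id and a date.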
**6.14.4 Dataset ACCESS**
The dataset is private and it is not available to all consortium members.
# Table 86 MAKING DATA ACCESSIBLE – Door counter data
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: ME.
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
Private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
text editor/spreadsheet
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
door counters
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
cloud
</td> </tr> </table>
# Table 87 MAKING DATA INTEROPERABLE – Door counter data
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
</table>

**6.14.5 Dataset SECURITY**
The dataset does not contain PD.
## Table 88 DATASET SECURITY – Door counter data
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N
</td> </tr>
</table>

**6.14.6 Ethics and Legal requirements**
This dataset does not contain personal data; therefore the national and
European legal framework that regulates the use of personal data does not
apply, and no copy of opinion needs to be collected.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.15 BB Dataset - Products and Categories: Product Attributes**
**6.15.1 Dataset IDENTIFICATION**
The dataset “Product attributes” is proprietary and contains data on product
specifications.
## Table 89. DATASET IDENTIFICATION – Product Attributes
<table>
<tr>
<th>
**Category**
</th>
<th>
Products and categories
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Product attributes
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Detailed product specifications for products which are included in Big Bang's
selling portfolio (from generic to specific technical details)
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Big Bang
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Matija Torlak
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1
</td> </tr>
</table>

**6.15.2 Dataset ORIGIN**
The dataset is available from January 2017 and it can be defined as “core
data”. Its size is 20,000 products, with a growth of 1,000 new products per
year.
## Table 90 DATASET ORIGIN – Product Attributes
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
20,000 products
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
1,000 new products per year
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
character and numeric
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
DWH
</td> </tr> </table>
**6.15.3 Dataset FORMAT**
The dataset has an XLS format. It collects data related to Slovenia, in
Slovenian and English.
The dataset is updated daily.
## Table 91 DATASET FORMAT – Product Attributes
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
BB Classification - can be mostly matched with GS1 Classification
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
XLS, SQL, CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
All Time
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Slovenia for all Products
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
Slovenian, English
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
BB_productCategoriesYYYY/MM/dd
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
daily (new + history)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr>
</table>

**6.15.4 Dataset ACCESS**
The dataset is private but available to all consortium members. The data are
available for download via VPN.
## Table 92 MAKING DATA ACCESSIBLE – Product Attributes
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: Big Bang. Access: All members
</th> </tr>
<tr>
<td>
**Availability (public |**
**private)**
</td>
<td>
Public, restricted with credentials
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
Download, view
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
URL with Credentials
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
URL link secured with Credentials
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
Database Keywords
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Secure Storage, Back up
</td> </tr> </table>
## Table 93 MAKING DATA INTEROPERABLE – Product Attributes
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Publication as linked data (RDF-ization)
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Semantic data enrichment
</td> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Interlinked product classification
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Linked product data
</td> </tr>
</table>

**6.15.5 Dataset SECURITY**
The dataset does not contain PD.
## Table 94 DATASET SECURITY – Product Attributes
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Secure storage, daily backup
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
Data only on the level of product / category
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr>
</table>

**6.15.6 Ethics and Legal requirements**
Based on the above dataset description, the dataset does not contain personal
data; therefore the national and European legal framework that regulates the
use of personal data does not apply, and no copy of opinion needs to be
collected.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.16 CE Dataset - Market data: Products price history**
**6.16.1 Dataset IDENTIFICATION**
The dataset “Products price history” is proprietary and contains quotes for
various products.
# Table 95. DATASET IDENTIFICATION – Products price history
<table>
<tr>
<th>
**Category**
</th>
<th>
Market data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Products price history
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A collection of seller quotes for products. Prices for all of Ceneje's
organized products have been recorded and regularly archived since 2016.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Ceneje
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
David Creslovnik / Uros Mevc
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1
</td> </tr>
</table>

**6.16.2 Dataset ORIGIN**
The dataset is available from January 2017 and it can’t be defined as “core
data”. Its size is about 3 billion quotes with a growth of 2 million per day.
# Table 96 DATASET ORIGIN - Products price history
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
about 3 billion quotes
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
2 million per day
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured tabular data
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
SQL tables
</td> </tr> </table>
**6.16.3 Dataset FORMAT**
The dataset collects country-level data since 2016. The dataset is updated
daily.
# Table 97 DATASET FORMAT – Products price history
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
History
* IdProduct (INT)
* NameProduct (STRING)
* L1 (STRING)
* L2 (STRING)
* L3 (STRING)
* IdSeller (INT)
* Price (MONEY)
* Timestamp (Slovenian time GMT+1)
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
SQL: tabular (tsv)
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
since 2016
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Country
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
not language specific
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
{country}/YYYY/mm/DD/history.tsv
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily (every day the dataset contains only the data newly generated)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr>
</table>

**6.16.4 Dataset ACCESS**
The dataset is private but it is available to all consortium members. The data
is available through file download.
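The downloaded history files follow the tabular schema in Table 97 (IdProduct, NameProduct, L1, L2, L3, IdSeller, Price, Timestamp). A minimal parsing sketch with the Python standard library; the sample rows are invented, and the assumption that the TSV files carry a header row matching those column names may not hold for the real export:

```python
import csv
import io
from collections import defaultdict

# Toy rows following the Table 97 column list; header names are an
# assumption (the real history.tsv files may have no header row).
history = """IdProduct\tNameProduct\tL1\tL2\tL3\tIdSeller\tPrice\tTimestamp
1\tPhone X\tElectronics\tPhones\tSmartphones\t7\t399.90\t2017-05-01 09:00
1\tPhone X\tElectronics\tPhones\tSmartphones\t9\t389.00\t2017-05-01 09:00
2\tKettle Y\tHome\tKitchen\tKettles\t7\t24.50\t2017-05-01 09:00
"""

# Cheapest quote per product across all sellers
best = defaultdict(lambda: float("inf"))
for row in csv.DictReader(io.StringIO(history), delimiter="\t"):
    best[row["IdProduct"]] = min(best[row["IdProduct"]], float(row["Price"]))

print(best["1"])  # 389.0
```

Because the daily files contain only newly generated quotes, analyses over longer periods would iterate the {country}/YYYY/mm/DD/history.tsv files for the dates of interest and accumulate in the same way.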
# Table 98 MAKING DATA ACCESSIBLE – Products price history
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: Ceneje
Access: All members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
File download (zip)
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
WGET/Curl
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
N/A (can be generated on demand)
</td> </tr> </table>
# Table 99 MAKING DATA INTEROPERABLE – Products price history
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Publication as linked data (RDF-ization)
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Semantic data enrichment
</td> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Interlinked product classification
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Linked product data
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
</table>

**6.16.5 Dataset SECURITY**
The dataset does not contain PD.
# Table 100 DATASET SECURITY – Products price history
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Secure storage, regular backups
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
Product|Seller level
</td> </tr> </table>
**6.16.6 Ethics and Legal requirements**
Based on the above dataset description, the dataset “Products price history”
does not contain personal data; therefore the national and European legal
framework that regulates the use of personal data does not apply, and no copy
of opinion needs to be collected.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.17 ME Dataset - Consumer Data: Sales data**
**6.17.1 Dataset IDENTIFICATION**
The dataset “Sales data” contains the number of receipts obtained from customers.
# Table 101. DATASET IDENTIFICATION – Sales data
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer Data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Sales data
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Number of receipts obtained from Measurence's customers
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Measurence
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Olga Melnyk
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC2
</td> </tr>
</table>

**6.17.2 Dataset ORIGIN**
The dataset is available from January 2017 and it can’t be defined as “core
data”. Its size is about 2 MB, with a growth of 60 kB/month/location.
# Table 102 DATASET ORIGIN - Sales data
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
2 MB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
60 kB/month/location
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured data
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Measurence customers' own data
</td> </tr> </table>
**6.17.3 Dataset FORMAT**
The dataset collects data related to the Milan area since 2016. The dataset is
updated weekly.
# Table 103 DATASET FORMAT – Sales data
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
N/A because there is no access to the data through URL
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
2016
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Milan
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
EN (numerical data)
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
/location_id/YYYY/MM/week
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
weekly
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr>
</table>

**6.17.4 Dataset ACCESS**
The dataset is private and it is not available to consortium members.
# Table 104 MAKING DATA ACCESSIBLE – Sales data
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: ME
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
text editor/spreadsheet
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
N/A because company’s dataset is not available through URL
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
receipts
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Cloud
</td> </tr> </table>
# Table 105 MAKING DATA INTEROPERABLE – Sales data
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr> </table>
**6.17.5 Dataset SECURITY**
The dataset does not contain PD.
# Table 106 DATASET SECURITY – Sales data
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N
</td> </tr>
</table>

**6.17.6 Ethics and Legal requirements**
This dataset does not contain personal data; therefore the national and
European legal framework that regulates the use of personal data does not
apply, and no copy of opinion needs to be collected.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.18 JOT Dataset - Consumer data: Traffic source (Bing)**
**6.18.1 Dataset IDENTIFICATION**
The dataset “Traffic sources (Bing)”, provided by JOT, focuses on historical
campaign performance statistics of search data in Bing advertising platforms.
# Table 107 DATASET IDENTIFICATION – Traffic source (Bing)
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer Data
</th>
<th>
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Traffic sources (Bing)
</td>
<td>
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Historical campaign performance statistics of search data in Bing advertising
platforms
</td>
<td>
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
JOT
</td>
<td>
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Ignacio Martínez / Elías Badenes
</td>
<td>
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC4
</td>
<td>
</td> </tr>
<tr>
<td>
**6.18.2**
</td>
<td>
**Dataset ORIGIN**
</td> </tr> </table>
This dataset has been available since February 2017 and is not defined as
“core data”. It has a structured format with a size of 1 TB and a growth of
1.5 GB daily. The dataset is generated expressly for the project’s purpose in
CSV format.
# Table 108 DATASET ORIGIN - Traffic source (Bing)
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M2
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
1 TB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
1.5 GB daily
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured, CSV
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
BING API
</td> </tr>
<tr>
<td>
**6.18.3**
</td>
<td>
**Dataset FORMAT**
</td> </tr> </table>
The dataset “Traffic source (Bing)” has a CSV format; its data structure is
illustrated in the following table. It collects data gathered from different
European countries, in different languages (German, Spanish, French, English),
since 2016, and it covers information at the City/Region/Country level. The
data is updated daily, meaning each day’s dataset contains only the newly
generated data.
# Table 109 DATASET FORMAT – Traffic source (Bing)
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
Country: Country where the campaign is oriented.
Language: Language of the keywords and ads.
Category: Topic of the keyword. There are 22 categories, such as Travel,
Finance, Vehicles and so forth.
Campaign Name: An account is formed by campaigns. The name of each campaign
contains information such as the language or the category.
AdgroupId: Number given by Bing that identifies an ad group. A campaign is
formed by ad groups.
AdNetworkType2: The network where keywords appear. It can be Bing search (the
typical Bing search engine at www.bing.com) or the partner network (other
webpages with the Bing search box).
Clicks: Number of times a user clicks the ad.
Impressions: Each time the ad is served and appears on the web.
Date: Date (YYYY/MM/DD) when the ad appears.
DayOfWeek: Day of the week when the ad appears.
Device: The device (PC, Tablet, Mobile) where the ad appears.
</th> </tr>
<tr>
<td>
</td>
<td>
MonthOfYear: Month of the year when the ad appears.
Keyword: The search query that the user types.
Bing_posicion_anuncio (Bing_Ad_Position): Position of the ad in the browser.
Location: City/Region/Country.
Concordancia (Match type): Match type of the keyword; it indicates how closely
a user’s query must match the keyword for the ad to be shown.
</td> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
since 2016
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
City/Region/Country
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
German, Spanish, French, English
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
BING_YYMMDD_XX
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily (every day the dataset contains only the data newly generated)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**6.18.4**
</td>
<td>
**Dataset ACCESS**
</td> </tr> </table>
The dataset is private, but it is accessible to all consortium members. The
data will be made available as file downloads by means of an FTP client.
Datasets are deposited on the Azure platform, and access is granted via
credentials.
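To illustrate the file-download access path, the sketch below builds the daily file name following the `BING_YYMMDD_XX` naming convention and fetches it over FTP with the Python standard library. The host, credentials, and file extension are placeholders (the real endpoint and credentials are provided by JOT), so this is a sketch rather than project code.

```python
from datetime import date
from ftplib import FTP  # standard-library FTP client


def daily_filename(day: date, part: int = 1) -> str:
    """Build a file name following the BING_YYMMDD_XX naming convention.

    The .csv extension is an assumption based on the dataset format."""
    return f"BING_{day.strftime('%y%m%d')}_{part:02d}.csv"


def fetch_daily_export(host: str, user: str, password: str, day: date) -> bytes:
    """Download one daily export over FTP.

    host/user/password are placeholders; the real values come from JOT."""
    chunks = []
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        ftp.retrbinary(f"RETR {daily_filename(day)}", chunks.append)
    return b"".join(chunks)


print(daily_filename(date(2018, 3, 5)))  # BING_180305_01.csv
```

The same pattern applies to the other JOT datasets by swapping the `BING_` prefix for `GOOGLE_` or `TWITTER_`.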
# Table 110 MAKING DATA ACCESSIBLE – Traffic source (Bing)
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: JOT. Access: All members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp**
**partners (Y|N)**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
File-download
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
FTP Client (Open Source) or Web Page
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
Azure platform. The URL will be created when needed.
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
Online Searches (Keywords)
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
5 years after project end
</td> </tr> </table>
# Table 111 MAKING DATA INTEROPERABLE – Traffic source (Bing)
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
<tr>
<td>
**6.18.5**
</td>
<td>
**Dataset SECURITY**
</td> </tr> </table>
The dataset “Traffic source (Bing)” does not contain personal data. Secure
storage and data recovery are provided by JOT.
# Table 112 DATASET SECURITY –Traffic source (Bing)
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Secure storage, no sensitive data, JOT data recovery
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**6.18.6**
</td>
<td>
**Ethics and Legal requirements**
</td> </tr> </table>
All the data that JOT Internet generates, shares and processes for the
purposes of the EW-Shopp project (in compliance with Spanish Organic Law
15/1999 on personal data protection, ISO/IEC 2382-1 and the General Data
Protection Regulation (GDPR)) do not include personal data. For that reason,
JOT considers that the data managed in the project do not include any personal
data and that no further action is needed.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.19 JOT Dataset - Consumer data: Traffic source (Google)**
**6.19.1 Dataset IDENTIFICATION**
The dataset “Traffic sources (Google)”, provided by JOT, focuses on historical
campaign performance statistics of search data in Google platforms.
# Table 113 DATASET IDENTIFICATION – Traffic source (Google)
<table>
<tr>
<th>
**Category**
</th>
<th>
Consumer Data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Traffic sources (Google)
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Historical campaign performance statistics of search data in Google
advertising platforms.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
JOT
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Ignacio Martínez/ Elías Badenes
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC4
</td> </tr>
<tr>
<td>
**6.19.2**
</td>
<td>
**Dataset ORIGIN**
</td> </tr> </table>
The dataset has been available since February 2017 and is defined as “core
data”. It has a structured format (CSV) with a size of over 3 TB and a growth
of 4 GB daily. The dataset is generated expressly for the project’s purpose.
# Table 114 DATASET ORIGIN - Traffic source (Google)
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M2
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
> 3TB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
4 GB daily
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured, CSV
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
GOOGLE API
</td> </tr>
<tr>
<td>
**6.19.3**
</td>
<td>
**Dataset FORMAT**
</td> </tr> </table>
The dataset “Traffic source (Google)” has a CSV format. It collects data
gathered from different countries, in different languages (German, Spanish,
Italian, Dutch, French, English, Portuguese, Russian), since 2016, and it
covers information at the City/Region/Country level. The data is updated
daily, meaning each day’s dataset contains only the newly generated data. The
data structure is illustrated in the following table.
# Table 115 DATASET FORMAT – Traffic source (Google)
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
Country: Country where the campaign is oriented.
Language: Language of the keywords and ads.
Category: Topic of the keyword. There are 22 categories, such as Travel,
Finance, Vehicles and so forth.
Campaign Name: An account is formed by campaigns. The name of each campaign
contains information such as the language or the category.
AdgroupId: Number given by Google that identifies an ad group. A campaign is
formed by ad groups.
AdNetworkType2: The network where keywords appear. It can be Google search
(the typical Google search engine at www.google.com) or the partner
</th> </tr>
<tr>
<td>
</td>
<td>
network (other webpages with the Google search box).
Clicks: Number of times a user clicks the ad.
Impressions: Each time the ad is served and appears on the web.
Date: Date (YYYY/MM/DD) when the ad appears.
DayOfWeek: Day of the week when the ad appears.
Device: The device (PC, Tablet, Mobile) where the ad appears.
MonthOfYear: Month of the year when the ad appears.
Keyword: The search query that the user types.
Google_posicion_anuncio (Google_Ad_Position): Position of the ad in the
browser.
Location: City/Region/Country.
Concordancia (Match type): Match type of the keyword; it indicates how closely
a user’s query must match the keyword for the ad to be shown.
</td> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
since 2016
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
City/Region/Country
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
German, Spanish, Italian, Dutch, French, English, Portuguese, Russian
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
GOOGLE_YYMMDD_XX
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily (every day the dataset contains only the data newly generated)
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**6.19.4**
</td>
<td>
**Dataset ACCESS**
</td> </tr> </table>
The dataset is private, but it is accessible to all consortium members. The
data will be made available as file downloads by means of an FTP client.
Datasets are deposited on the Azure platform, and access is granted via
credentials.
# Table 116 MAKING DATA ACCESSIBLE – Traffic source (Google)
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: JOT. Access: All members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp**
**partners (Y|N)**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
File-download
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
FTP Client (Open Source) or Web Page
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
Azure platform. The URL will be created when needed.
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
Online Searches (Keywords)
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
5 years after project end
</td> </tr> </table>
# Table 117 MAKING DATA INTEROPERABLE – Traffic source (Google)
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
<tr>
<td>
**6.19.5**
</td>
<td>
**Dataset SECURITY**
</td> </tr> </table>
The dataset “Traffic source (Google)” does not contain personal data. Secure
storage and data recovery are provided by JOT.
# Table 118 DATASET SECURITY –Traffic source (Google)
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Secure storage, no sensitive data, JOT data recovery
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**6.19.6**
</td>
<td>
**Ethics and Legal requirements**
</td> </tr> </table>
All the data that JOT Internet generates, shares and processes for the
purposes of the EW-Shopp project (in compliance with Spanish Organic Law
15/1999 on personal data protection, ISO/IEC 2382-1 and the General Data
Protection Regulation (GDPR)) do not include personal data. For that reason,
JOT considers that the data managed in the project do not include any personal
data and that no further action is needed.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.20 JOT Dataset - Market data: Twitter trends**
**6.20.1 Dataset IDENTIFICATION**
The dataset “Twitter Trends” is Open data and focuses on trending topics as
available through Twitter APIs.
# Table 119 DATASET IDENTIFICATION – Twitter trends
<table>
<tr>
<th>
**Category**
</th>
<th>
Market Data
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Twitter Trends
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Trending topics as available through Twitter APIs
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
Open Data
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Ignacio Martínez/ Elías Badenes
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC4
</td> </tr>
<tr>
<td>
**6.20.2**
</td>
<td>
**Dataset ORIGIN**
</td> </tr> </table>
The dataset “Twitter Trends” has been available since May 2017 and is not
defined as “core data”. It has a structured format with a growth of 10 MB
daily. The dataset is generated expressly for the project’s purpose.
# Table 120 DATASET ORIGIN –Twitter trends
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M5
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
50 trending topic / every 15min / country (10MB daily)
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
structured, CSV
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Twitter API
</td> </tr>
<tr>
<td>
**6.20.3**
</td>
<td>
**Dataset FORMAT**
</td> </tr> </table>
The dataset “Twitter trends” has a CSV format; its data structure is
illustrated in the following table. The dataset does not depend on language.
Its spatial coverage is at country level, and it collects data since May 2017.
The data is updated daily.
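Assuming column names that match the structure described in the following table, a daily CSV export can be processed with the Python standard library. The two sample rows below are hypothetical, not taken from the actual dataset.

```python
import csv
import io

# Hypothetical sample following the documented column structure; real files
# follow the TWITTER_YYMMDD_XX naming convention and are fetched via FTP.
SAMPLE = """Location,Date,Hashtag,Promoted_Content,Tweets_Volume,Relevance
ES,2017-05-10,#Eurovision,false,125000,1
ES,2017-05-10,#Madrid,false,18000,2
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))

# Keep only non-promoted hashtags, ordered by their trending position.
trending = sorted(
    (r for r in rows if r["Promoted_Content"] == "false"),
    key=lambda r: int(r["Relevance"]),
)
print([r["Hashtag"] for r in trending])  # ['#Eurovision', '#Madrid']
```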
# Table 121 DATASET FORMAT – Twitter trends
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
Location: Country of the hashtag.
Date: Day of the list.
</th> </tr>
<tr>
<td>
</td>
<td>
Hashtag: Name of the hashtag.
Promoted_Content: Shows whether a hashtag is promoted or not.
Tweets_Volume: Number of tweets of a hashtag.
Relevance: Hashtag's position.
</td> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
since May 2017 (M5)
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Country
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
TWITTER_YYMMDD_XX
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Daily
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**6.20.4**
</td>
<td>
**Dataset ACCESS**
</td> </tr> </table>
The dataset is private, but it is accessible to all consortium members. The
data will be made available as file downloads by means of an FTP client.
Datasets are deposited on the Azure platform, and access is granted via
credentials.
# Table 122 MAKING DATA ACCESSIBLE – Twitter trends
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
Owner: JOT. Access: All members
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
private
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp**
**partners (Y|N)**
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
File-download
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
FTP Client (Open Source) or Web Page
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
Azure platform. The URL will be created when needed.
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
Credentials
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
Hashtags
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
5 years after project end
</td> </tr> </table>
A standard vocabulary or taxonomy is not available for the “Twitter trends”
dataset.
# Table 123 MAKING DATA INTEROPERABLE –Twitter trends
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
Semantic data enrichment
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Wikipedia entities
</td> </tr>
<tr>
<td>
**6.20.5**
</td>
<td>
**Dataset SECURITY**
</td> </tr> </table>
The dataset “Twitter trends” does not contain personal data. Secure storage
and data recovery are provided by JOT.
# Table 124 DATASET SECURITY –Twitter trends
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
Secure storage, no sensitive data, JOT data recovery
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**6.20.6**
</td>
<td>
**Ethics and Legal requirements**
</td> </tr> </table>
All the data that JOT Internet generates, shares and processes for the
purposes of the EW-Shopp project (in compliance with Spanish Organic Law
15/1999 on personal data protection, ISO/IEC 2382-1 and the General Data
Protection Regulation (GDPR)) do not include personal data. For that reason,
JOT considers that the data managed in the project do not include any personal
data and that no further action is needed.
There are no ethical issues that could have an impact on sharing this dataset.
All data are returned by an analytics engine that provides only aggregated
data about users grouped by specific characteristics, taking all the necessary
measures to avoid discrimination, stigmatization, limitation of free
association, etc.
**6.21 LOD Dataset - Geographic: DBpedia**
**6.21.1 Dataset IDENTIFICATION**
The dataset “DBpedia” is publicly available and contains factual information
from different areas of human knowledge extracted from Wikipedia pages.
# Table 125. DATASET IDENTIFICATION – DBpedia
<table>
<tr>
<th>
**Category**
</th>
<th>
Geographic Dataset
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
DBpedia
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
DBpedia is a crowd-sourced community effort to extract structured information
from Wikipedia and make this
</td> </tr>
<tr>
<td>
</td>
<td>
information available on the Web. The English version of the DBpedia knowledge
base describes 4.58 million things, out of which 4.22 million are classified
in a consistent ontology, including 1,445,000 persons, 735,000 places
(including 478,000 populated places), 411,000 creative works (including
123,000 music albums, 87,000 films and 19,000 video games), 241,000
organizations (including 58,000 companies and 49,000 educational
institutions), 251,000 species and 6,000 diseases
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
LOD - Access facilitated by UNIMIB
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Andrea Maurino
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1, BC2, BC3, BC4
</td> </tr>
<tr>
<td>
**6.21.2**
</td>
<td>
**Dataset ORIGIN**
</td> </tr> </table>
The dataset has been available since January 2017 and is not defined as “core
data”.
# Table 126 DATASET ORIGIN – DBpedia
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
735,000 places (including 478,000 populated places)
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
Not a fixed number; e.g., DBpedia 3.8: 2.8 GB, DBpedia 3.9: 2.4 GB, DBpedia
2015-04: 4.7 GB. More info: http://wiki.dbpedia.org/downloads-2016-04
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
RDF, triples
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
_http://wiki.dbpedia.org/datasets_
</td> </tr>
<tr>
<td>
**6.21.3**
</td>
<td>
**Dataset FORMAT**
</td> </tr> </table>
The dataset has worldwide coverage and contains data up to October 2016 in 125
languages.
# Table 127 DATASET FORMAT – DBpedia
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
provides data in n-triple format (<subject> <predicate> <object> .)
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
.ttl, .tql
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
up to 10/2016
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Global
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
Localized versions of DBpedia in 125 languages. English, German, Spanish,
Catalan, Portuguese, Italian, French, Russian, Chinese, Slovenian, Croatian,
Serbian, Arabic, Turkish, etc.
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
dbpedia_version/year
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
Yes: DBO, FOAF, SCHEMA.ORG, SKOS, etc.
</td> </tr>
<tr>
<td>
**6.21.4**
</td>
<td>
**Dataset ACCESS**
</td> </tr> </table>
The dataset is public and it is accessible to all the consortium members.
# Table 128 MAKING DATA ACCESSIBLE – DBpedia
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
GNU Free Documentation License.
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
Public
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
SPARQL ENDPOINT, DUMP
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
web service (REST/SOAP APIs), query endpoint
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
_http://wiki.dbpedia.org/datasets_
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
No access restriction
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
cross-domain: places, person, films, food, music, history etc.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
N/A
</td> </tr> </table>
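As the table above notes, DBpedia is accessible through a public SPARQL endpoint as well as dumps. The sketch below builds the GET request a SPARQL client would send to the endpoint; the query itself (retrieving a few populated places) is an illustrative example, not project code.

```python
from urllib.parse import urlencode

# Public DBpedia SPARQL endpoint.
ENDPOINT = "https://dbpedia.org/sparql"

# Illustrative query: ten populated places with their English labels.
QUERY = """\
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?place ?name WHERE {
  ?place a dbo:PopulatedPlace ;
         rdfs:label ?name .
  FILTER (lang(?name) = "en")
} LIMIT 10
"""


def request_url(query: str) -> str:
    """Build the GET URL a SPARQL client would send to the endpoint,
    asking for JSON results."""
    params = {"query": query, "format": "application/sparql-results+json"}
    return ENDPOINT + "?" + urlencode(params)


url = request_url(QUERY)
print(url.startswith("https://dbpedia.org/sparql?query="))  # True
```

The URL can then be fetched with any HTTP client; most SPARQL libraries wrap exactly this request.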
# Table 129 MAKING DATA INTEROPERABLE – DBpedia
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
N/A (Linked Open Data)
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Wikipedia entities
</td> </tr>
<tr>
<td>
**6.21.5**
</td>
<td>
**Dataset SECURITY**
</td> </tr> </table>
The dataset does not contain personal data.
# Table 130 DATASET SECURITY – DBpedia
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**6.21.6**
</td>
<td>
**Ethics and Legal requirements**
</td> </tr> </table>
Based on the above dataset description, the dataset “DBpedia” does not contain
personal data; therefore, the national and European legal framework that
regulates the use of personal data does not apply, and no copy of an ethics
opinion needs to be collected.
There are no ethical issues that can have an impact on sharing this dataset.
**6.22 LOD Dataset - Geographic: Linked Open Street Maps**
**6.22.1 Dataset IDENTIFICATION**
The dataset “Linked Open Street Maps” is publicly available and contains an
editable map of the whole world.
# Table 131. DATASET IDENTIFICATION – Linked Open Street Maps
<table>
<tr>
<th>
**Category**
</th>
<th>
Geographic Dataset
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Linked Open Street Maps
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
OpenStreetMap is built by a community of mappers that contribute and maintain
data about roads, trails, cafés, railway stations, and much more, all over the
world.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
LOD - Access facilitated by UNIMIB
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Andrea Maurino
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1, BC2, BC3, BC4
</td> </tr> </table>
<table>
<tr>
<th>
**6.22.2**
</th>
<th>
**Dataset ORIGIN**
</th> </tr> </table>
The dataset has been available since January 2017 and is not defined as “core
data”.
# Table 132 DATASET ORIGIN – Linked Open Street Maps
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
5,027,330,590 GPS points
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
Not a fixed number
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
Data normally comes in the form of XML formatted OSM files
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
_http://planet.openstreetmap.org/planet/planetlatest.osm.bz2_
</td> </tr>
<tr>
<td>
**6.22.3**
</td>
<td>
**Dataset FORMAT**
</td> </tr> </table>
The dataset has worldwide coverage and collects data in all languages.
# Table 133 DATASET FORMAT – Linked Open Street Maps
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
XML
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
The two main formats used are PBF or compressed OSM XML. PBF is a binary
format that is smaller to download and much faster to process and should be
used when possible. Most common tools using OSM data support PBF.
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
up to date
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
Worldwide. All the nodes, ways and relations that make up the map
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
All languages
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
Each week, a new and complete copy of all data in OpenStreetMap is made
available as both a compressed XML file and a custom PBF format file. Also
available is the 'history' file, which contains not only up-to-date data but
also older versions of data and deleted data items.
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
Yes: DBO, FOAF, SCHEMA.ORG, SKOS, etc.
</td> </tr> </table>
<table>
<tr>
<th>
**6.22.4**
</th>
<th>
**Dataset ACCESS**
</th> </tr> </table>
The dataset is public and it is accessible to all the consortium members.
# Table 134 MAKING DATA ACCESSIBLE – Linked Open Street Maps
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
</th>
<th>
OpenStreetMap is _open data_ , licensed under the Open Data Commons Open
Database License (ODbL) by the OpenStreetMap Foundation (OSMF).
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
</td>
<td>
Public
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
</td>
<td>
dump, keyword based
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
</td>
<td>
API / dump, SPARQL wrapper
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
</td>
<td>
_http://wiki.openstreetmap.org/wiki/Use_OpenStreetMap_
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
</td>
<td>
No access restriction
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
</td>
<td>
cities, towns, places, municipalities, etc.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
</td>
<td>
N/A
</td> </tr> </table>
# Table 135 MAKING DATA INTEROPERABLE – Linked Open Street Maps
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
N/A (Linked Open Data)
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Wikipedia entities
</td> </tr>
<tr>
<td>
**6.22.5**
</td>
<td>
**Dataset SECURITY**
</td> </tr> </table>
The dataset does not contain personal data.
## Table 136 DATASET SECURITY – Linked Open Street Maps
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr> </table>
<table>
<tr>
<th>
**6.22.6**
</th>
<th>
**Ethics and Legal requirements**
</th> </tr> </table>
Based on the above dataset description, the dataset “Linked Open Street Maps”
does not contain personal data; therefore, the national and European legal
framework that regulates the use of personal data does not apply, and no copy
of an ethics opinion needs to be collected.
There are no ethical issues that can have an impact on sharing this dataset.
**6.23 LOD Dataset - Geographic: Linked Geo Data**
**6.23.1 Dataset IDENTIFICATION**
The dataset “Linked Geo Data” is publicly available and contains geographic
information for places, cities, countries, etc.
## Table 137. DATASET IDENTIFICATION – Linked Geo Data
<table>
<tr>
<th>
**Category**
</th>
<th>
Geographic Dataset
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
Linked Geo Data
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
LinkedGeoData is an effort to add a spatial dimension to the Web of Data /
Semantic Web. LinkedGeoData uses the information collected by the
OpenStreetMap project and makes it available as an RDF knowledge base
according to the Linked Data principles. It interlinks this data with other
knowledge bases in the Linking Open
Data initiative.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
LOD - Access facilitated by UNIMIB
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Andrea Maurino
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1, BC2, BC3, BC4
</td> </tr> </table>
<table>
<tr>
<th>
**6.23.2**
</th>
<th>
**Dataset ORIGIN**
</th> </tr> </table>
The dataset has been available since January 2017 and is not considered “core
data”.
## Table 138 DATASET ORIGIN – Linked Geo Data
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
8.3 GB
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
Not a fixed number
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
.nt
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
_http://downloads.linkedgeodata.org/releases/_
</td> </tr>
<tr>
<td>
**6.23.3**
</td>
<td>
**Dataset FORMAT**
</td> </tr> </table>
The dataset contains data collected up to November 2015, in English.
## Table 139 DATASET FORMAT – Linked Geo Data
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
N-triples
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
.nt
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
Up to November 2015
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
It consists of more than 3 billion nodes and 300 million ways and the
resulting RDF data comprises approximately 20 billion triples. The data is
available according to the Linked Data principles and interlinked with DBpedia
and Geo Names.
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
English
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
No versioning
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
Linked open geo vocabulary
</td> </tr> </table>
<table>
<tr>
<th>
**6.23.4**
</th>
<th>
**Dataset ACCESS**
</th> </tr> </table>
The dataset is public and accessible to all consortium members through a data
dump.
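Since the dump is distributed as N-Triples (see Table 139), partners can inspect it with standard text tooling. The sketch below is purely illustrative (the example triple is invented, and a real pipeline would use an RDF library rather than a toy regex); it shows the one-statement-per-line structure of the format:

```python
import re

# One N-Triples statement per line: <subject> <predicate> <object> .
# This toy pattern handles IRIs and plain literals only.
NT_LINE = re.compile(r'^(<[^>]*>)\s+(<[^>]*>)\s+(<[^>]*>|"[^"]*")\s*\.$')

def parse_nt(lines):
    """Yield (subject, predicate, object) triples from N-Triples lines."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        m = NT_LINE.match(line)
        if m:
            yield m.groups()

sample = [
    '<http://linkedgeodata.org/triplify/node1> '
    '<http://www.w3.org/2000/01/rdf-schema#label> "Milano" .',
]
triples = list(parse_nt(sample))
print(triples[0][2])  # prints "Milano" (including the quotes)
```

Each line is an independent statement, which makes .nt dumps easy to stream and to split for parallel processing.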
## Table 140 MAKING DATA ACCESSIBLE – Linked Geo Data
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
ODbL
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
Public
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
dump
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
dump
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
_http://downloads.linkedgeodata.org/releases/_
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
No access restriction
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
cities, towns, places, municipalities, etc
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
N/A
</td> </tr> </table>
## Table 141 MAKING DATA INTEROPERABLE – Linked Geo Data
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
N/A (Linked Open Data)
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Wikipedia entities
</td> </tr>
<tr>
<td>
**6.23.5**
</td>
<td>
**Dataset SECURITY**
</td> </tr> </table>
The dataset does not contain personal data.
## Table 142 DATASET SECURITY – Linked Geo Data
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**6.23.6**
</td>
<td>
**Ethics and Legal requirements**
</td> </tr> </table>
Based on the above dataset description, the dataset “Linked Geo Data” does not
contain personal data; therefore, the national and European legal framework
that regulates the use of personal data does not apply, and a copy of an
ethics opinion is not required.
There are no ethical issues that can have an impact on sharing this dataset.
**6.24 LOD Dataset - Geographic: GeoNames**
**6.24.1 Dataset IDENTIFICATION**
The dataset “GeoNames” is publicly available and contains geographic
information for places, cities, countries, etc.
# Table 143. DATASET IDENTIFICATION – GeoNames
<table>
<tr>
<th>
**Category**
</th>
<th>
Geographic Dataset
</th> </tr>
<tr>
<td>
**Data name**
</td>
<td>
GeoNames
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
The GeoNames geographical database is available for download free of charge
under a Creative Commons Attribution license. It contains over 10 million
geographical names and consists of over 9 million unique features, whereof 2.8
million populated places and 5.5 million alternate names. All features are
categorized into one out of nine feature classes and further subcategorized
into one out of 645 feature codes.
</td> </tr>
<tr>
<td>
**Provider**
</td>
<td>
LOD - Access facilitated by UNIMIB
</td> </tr>
<tr>
<td>
**Contact Person**
</td>
<td>
Andrea Maurino
</td> </tr>
<tr>
<td>
**Business Cases number**
</td>
<td>
BC1, BC2, BC3, BC4
</td> </tr> </table>
<table>
<tr>
<th>
**6.24.2**
</th>
<th>
**Dataset ORIGIN**
</th> </tr> </table>
The dataset has been available since January 2017 and is not considered “core
data”.
# Table 144 DATASET ORIGIN – GeoNames
<table>
<tr>
<th>
**Available at (M)**
</th>
<th>
M1
</th> </tr>
<tr>
<td>
**Core Data (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
10.6 GB zipped
</td> </tr>
<tr>
<td>
**Growth**
</td>
<td>
Not a fixed number
</td> </tr>
<tr>
<td>
**Type and format**
</td>
<td>
RDF
</td> </tr>
<tr>
<td>
**Existing data (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
_https://drive.google.com/file/d/0B1tUDhWNTjOWEZZb2VwOG5vZkU/edit?usp=sharing/_
</td> </tr>
<tr>
<td>
**6.24.3**
</td>
<td>
**Dataset FORMAT**
</td> </tr> </table>
The dataset collects data related to all countries.
# Table 145 DATASET FORMAT – GeoNames
<table>
<tr>
<th>
**Dataset structure**
</th>
<th>
RDF
</th> </tr>
<tr>
<td>
**Dataset format**
</td>
<td>
RDF
</td> </tr>
<tr>
<td>
**Time coverage**
</td>
<td>
up to date
</td> </tr>
<tr>
<td>
**Spatial coverage**
</td>
<td>
All countries and points in degree (long & lat)
</td> </tr>
<tr>
<td>
**Languages**
</td>
<td>
English
</td> </tr>
<tr>
<td>
**Identifiability of data**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Versioning**
</td>
<td>
daily dump
</td> </tr>
<tr>
<td>
**Metadata standards**
</td>
<td>
geonames vocab
</td> </tr> </table>
<table>
<tr>
<th>
**6.24.4**
</th>
<th>
**Dataset ACCESS**
</th> </tr> </table>
The dataset is public and accessible to all consortium members through a data
dump.
# Table 146 MAKING DATA ACCESSIBLE – GeoNames
<table>
<tr>
<th>
**Dataset license**
</th>
<th>
CC-BY 3.0
</th> </tr>
<tr>
<td>
**Availability (public | private)**
</td>
<td>
Public
</td> </tr>
<tr>
<td>
**Availability to EW-Shopp partners (Y|N)**
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Availability method**
</td>
<td>
dump
</td> </tr>
<tr>
<td>
**Tools to access**
</td>
<td>
dump
</td> </tr>
<tr>
<td>
**Dataset source URL**
</td>
<td>
_http://download.geonames.org/export/dump/_
</td> </tr>
<tr>
<td>
**Access restrictions**
</td>
<td>
No access restriction
</td> </tr>
<tr>
<td>
**Keyword/Tags**
</td>
<td>
cities, towns, places, municipalities, etc.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
N/A
</td> </tr> </table>
# Table 147 MAKING DATA INTEROPERABLE – GeoNames
<table>
<tr>
<th>
**Data interoperability**
</th>
<th>
•
</th>
<th>
N/A (Linked Open Data)
</th> </tr>
<tr>
<td>
**Standard vocabulary**
</td>
<td>
•
</td>
<td>
Temporal ontologies
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Spatial ontologies and locations
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Wikipedia entities
</td> </tr>
<tr>
<td>
**6.24.5**
</td>
<td>
**Dataset SECURITY**
</td> </tr> </table>
The dataset does not contain personal data.
# Table 148 DATASET SECURITY – GeoNames
<table>
<tr>
<th>
**Personal Data (Y|N)**
</th>
<th>
N
</th> </tr>
<tr>
<td>
**Anonymized (Y|N|NA)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data recovery and secure storage**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Privacy management procedures**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**PD at the source (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised during project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**PD - anonymised before project (Y|N)**
</td>
<td>
N
</td> </tr>
<tr>
<td>
**Level of Aggregation (for PD anonymized by aggregation)**
</td>
<td>
N/A
</td> </tr> </table>
<table>
<tr>
<th>
**6.24.6**
</th>
<th>
**Ethics and Legal requirements**
</th> </tr> </table>
Based on the above dataset description, the dataset “GeoNames” does not
contain personal data; therefore, the national and European legal framework
that regulates the use of personal data does not apply, and a copy of an
ethics opinion is not required.
There are no ethical issues that can have an impact on sharing this dataset.
**6.25 Mapping between Dataset and Business case**
The following table shows which datasets are used in each business case.
# Table 149 Mapping Dataset and Business case
<table>
<tr>
<th>
**id**
</th>
<th>
**Dataset name**
</th>
<th>
**Provider**
</th>
<th>
**BC1**
</th>
<th>
**BC2**
</th>
<th>
**BC3**
</th>
<th>
**BC4**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Purchase intent
</td>
<td>
Ceneje
</td>
<td>
X
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
2
</td>
<td>
Location analytics data (hourly)
</td>
<td>
Measurence
</td>
<td>
</td>
<td>
</td>
<td>
X
</td>
<td>
</td> </tr>
<tr>
<td>
3
</td>
<td>
Location analytics data (daily)
</td>
<td>
Measurence
</td>
<td>
</td>
<td>
</td>
<td>
X
</td>
<td>
</td> </tr>
<tr>
<td>
4
</td>
<td>
Customer Purchase History
</td>
<td>
Big Bang
</td>
<td>
X
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
5
</td>
<td>
Consumer Intent and Interaction
</td>
<td>
Big Bang
</td>
<td>
X
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
6
</td>
<td>
Location analytics data (Weekly)
</td>
<td>
Measurence
</td>
<td>
</td>
<td>
</td>
<td>
X
</td>
<td>
</td> </tr>
<tr>
<td>
7
</td>
<td>
Contact and Consumer Interaction History
</td>
<td>
Browsetel
</td>
<td>
X
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
8
</td>
<td>
MARS (historical data)
</td>
<td>
ECMWF
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
9
</td>
<td>
Product attributes
</td>
<td>
Ceneje
</td>
<td>
X
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
10
</td>
<td>
Event Registry
</td>
<td>
JSI
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
11
</td>
<td>
Consumer data
</td>
<td>
GfK
</td>
<td>
</td>
<td>
X
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
12
</td>
<td>
Sales data
</td>
<td>
GfK
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td>
<td>
</td> </tr>
<tr>
<td>
13
</td>
<td>
Product attributes
</td>
<td>
GfK
</td>
<td>
X
</td>
<td>
X
</td>
<td>
</td>
<td>
X
</td> </tr>
<tr>
<td>
14
</td>
<td>
Door counter data
</td>
<td>
Measurence
</td>
<td>
</td>
<td>
</td>
<td>
X
</td>
<td>
</td> </tr>
<tr>
<td>
15
</td>
<td>
Product attributes
</td>
<td>
Big Bang
</td>
<td>
X
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
16
</td>
<td>
Products price history
</td>
<td>
Ceneje
</td>
<td>
X
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
17
</td>
<td>
Sales data
</td>
<td>
Measurence
</td>
<td>
</td>
<td>
X
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
18
</td>
<td>
Traffic sources (Bing)
</td>
<td>
JOT
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
X
</td> </tr>
<tr>
<td>
19
</td>
<td>
Traffic sources (Google)
</td>
<td>
JOT
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
X
</td> </tr>
<tr>
<td>
20
</td>
<td>
Twitter Trends
</td>
<td>
JOT
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
X
</td> </tr>
<tr>
<td>
21
</td>
<td>
Dbpedia
</td>
<td>
LOD
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
22
</td>
<td>
Linked Open Street Maps
</td>
<td>
LOD
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
23
</td>
<td>
Linked Geo Data
</td>
<td>
LOD
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
24
</td>
<td>
GeoNames
</td>
<td>
LOD
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr> </table>
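The same mapping can also be held in code, e.g. to answer "which datasets feed business case X?". A sketch using only a small excerpt of Table 149 (names copied from the table; the dictionary covers just four of the 24 rows):

```python
# Excerpt of Table 149: dataset name -> business cases it supports
DATASET_BCS = {
    "Purchase intent": {"BC1"},
    "Location analytics data (hourly)": {"BC3"},
    "MARS (historical data)": {"BC1", "BC2", "BC3", "BC4"},
    "GeoNames": {"BC1", "BC2", "BC3", "BC4"},
}

def datasets_for(bc):
    """Return the datasets mapped to a given business case, sorted by name."""
    return sorted(name for name, bcs in DATASET_BCS.items() if bc in bcs)

print(datasets_for("BC3"))
```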
**Chapter 7 Storage and Re-use**
**7.1 Storage**
Data in EW-Shopp will be exchanged and made available through a two-tier
storage policy consisting of:
* Tier 1: a shared data space for exchanging raw input data between Consortium partners.
* Tier 2: structured data storage with integrated data based on the DataGraft platform, which will be used to produce the integrated data according to a shared data model.
Tier 1 will be implemented using a file or data sharing solution. It will use
cloud hosting infrastructure services to enable easy access over the web. Data
will be stored using a data hosting service and secure data sharing protocols
to ensure that data are not compromised.
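One simple way to verify that tier-1 files are not compromised in transit (sketched here as an illustration, not as the project's mandated mechanism) is to publish a SHA-256 checksum alongside each shared file, so the recipient can recompute and compare:

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Illustrative round trip: the "sender" writes a file and records its digest,
# the "recipient" recomputes the digest after transfer and compares.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"raw input data shared between partners")
    path = f.name
published_digest = sha256_of(path)
assert sha256_of(path) == published_digest  # transfer left the file intact
os.unlink(path)
```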
Tier 2 will be implemented based on the DataGraft platform where the shared
data model will be published and the output data will be imported in a
database management system and registered in the catalogue, taking into
account the user access restrictions for each dataset.
<table>
<tr>
<th>
**7.2**
</th>
<th>
**Backup and Recovery**
</th> </tr> </table>
Back-up and recovery mechanisms will be implemented on a case-by-case basis
for each output dataset. Input datasets already have back-up and recovery
mechanisms in place (where needed), managed directly by the data providers;
therefore, no back-up and/or recovery mechanism for input datasets falls
within the scope of the EW-Shopp platform.
The concrete data back-up and recovery mechanisms to be adopted at EW-Shopp
platform level will be discussed in the future versions of the Data Management
Plan as they evolve throughout the project, or in other deliverables dealing
with technical aspects (such as the detailed design of the platform or the
business cases implementation plans).
<table>
<tr>
<th>
**7.3**
</th>
<th>
**Data Archiving**
</th> </tr> </table>
The data used and produced during the project will be updated each time they
change during the project lifetime. For each dataset update, a reference
document will also be produced, reporting the changes of the dataset with
respect to the previous version.
EW-Shopp datasets used in the demonstrator will be maintained for at least
five years after project termination. Sensitive data preservation will follow
the guidelines that the EW-Shopp consortium will provide during the project.
<table>
<tr>
<th>
**7.4**
</th>
<th>
**Security**
</th> </tr> </table>
The EW-Shopp framework will ensure the secure storage and exchange of data in
the project to protect against compromising of sensitive data. One of the main
components that will be used for the EW-Shopp framework and set up of data is
the DataGraft platform (tier 2). DataGraft security is implemented on several
layers as follows:
1. User login – Account information is protected by a password, which is encrypted and DataGraft does not store the non-encrypted version. Furthermore, current deployments of DataGraft use SSL certificates enabled through the CloudFront CDN on AWS. Other configurations of SSL are also possible if necessary;
2. OAuth2 – DataGraft uses a standard implementation of RFC 6749 – token-based authorisation layer for control of client access to resources;
3. API keys for database – The public API of the back-end database of DataGraft (Ontotext S4) is accessible through an API key, which can be created and managed by registered users of the platform; and
4. Encrypted cookies – Front-end cookies containing session information are exchanged between the web UI and the back-end. This cookie stores a session identifier and encrypted session data when users are logged in to the DataGraft Portal.
Security will additionally be considered for the purposes of data exchange
between partners (tier 1) and sharing before the final data
integration/publication. The particular security measures will be taken on a
case-by-case basis, depending on the medium for data exchange and the precise
needs of each data provider. They will include the following:
1. Setting up security policies on cloud service providers
2. Setting up secure FTP server for file transfer of any files over the Internet
3. Setting up secret SSH keys for accessing servers/clusters of servers with running databases that host any shared dataset
<table>
<tr>
<th>
**7.5**
</th>
<th>
**Permission**
</th> </tr> </table>
Permission policies will be provided to make EW-Shopp compliant with
privacy-preserving data management. The platform will provide authentication
mechanisms that ensure data security, as stated in Section 7.4 (supported by
the chosen data exchange medium in tier 1 and the DataGraft platform), in
order to restrict access to data files to the research personnel involved in
EW-Shopp development.
<table>
<tr>
<th>
**7.6**
</th>
<th>
**Access, Re-use and Licensing**
</th> </tr> </table>
The sharing conditions for each individual input dataset can be found in
Chapter 6 under "Dataset ACCESS", together with its individual license.
Access will be provided to the whole EW-Shopp Consortium and exclusively for
the project objectives.
Datasets produced as a result of the project work will be shared within the
Consortium; external sharing will only be allowed with consensual Consortium
approval of the relevant stakeholders, by accepting the terms and conditions
of use, as appropriate. The license for the access, sharing and re-use of
EW-Shopp material and output datasets will be defined by the Consortium on a
case-by-case basis.
The research data will be presented in scientific publications that the
consortium will write and publish during the funding period. Materials
generated under the Project will be disseminated in accordance with the
Consortium Agreement.
# Executive Summary
This Document represents the Data Management Plan (DMP) of the H2020 TechTIDE
project. It describes which data is going to be used and produced during
TechTIDE, how it will be accessible and the data management life cycle for the
TechTIDE data.
# 1 Introduction
## 1.1 Objectives of TechTIDE
In the frame of the Horizon 2020 (H2020) call of the European Commission (EC),
the project
“Warning and Mitigation Technologies for Travelling Ionospheric Disturbances
Effects” (TechTIDE) develops a system for the detection, monitoring and
alerting of Travelling Ionospheric Disturbances (TIDs). TIDs constitute a
threat for operational systems using HF or trans-ionospheric radio
propagation. TIDs can impose disturbances with an amplitude of 20% of the
ambient electron density and a Doppler shift on the level of 0.5 Hz.
Consequently, the direct and timely identification of TIDs is a clear
customer requirement for the Space Weather segment of the ESA SSA Programme.
The objective of this project is to address this need by setting up an
operational system for the identification and tracking of TIDs, the
estimation of their effects in the bottomside and topside ionosphere, and the
issuing of warnings to the users with estimated parameters of TID
characteristics. Based on the information released by this warning system,
the users, depending on their applications, will develop mitigation
procedures.
## 1.2 Scope of the Data Management Plan
As described in [REF-1], Data Management Plans (DMPs) are a key element of
good data management. This DMP describes the data management life cycle for
the data to be collected, processed and/or generated by the Horizon 2020
project TechTIDE. As part of making research data findable, accessible,
interoperable and re-usable (FAIR), the TechTIDE DMP includes information on:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved (including after the end of the project).
A DMP is required for all projects participating in the extended ORD pilot,
unless they opt out of the ORD pilot. However, projects that opt out are still
encouraged to submit a DMP on a voluntary basis.
This is the initial TechTIDE DMP, submitted six months after the kick-off of
the H2020 project TechTIDE. This DMP will be updated over the course of the
project whenever significant changes arise, such as (but not limited to):
* new data
* changes in consortium policies (e.g. new innovation potential, decision to file for a patent)
* changes in consortium composition and external factors (e.g. new consortium members joining or old members leaving).
The DMP will be updated in time with the final evaluation/assessment of the
project.
## 1.3 Preparation of the DMP
This DMP is based on the Horizon 2020 DMP template [REF-2] provided by the EC.
The template has been designed to be applicable to any Horizon 2020 project
that produces, collects or processes research data. The TechTIDE DMP covers
its overall approach and if applicable, specific issues for individual
datasets (e.g. regarding openness), are addressed in the DMP.
# 2 Data Summary
## 2.1 Purpose of the data collection/generation
The objective of TechTIDE is to set up an operational system for the
identification and tracking of TIDs, the estimation of their effects in the
bottomside and topside ionosphere and for issuing warnings to the users with
estimated parameters of TID characteristics. Hence, an extensive set of data
will be collected, processed and generated in TechTIDE, in order to feed the
operational system.
## 2.2 Types and formats of data
Within TechTIDE, measurement data from different sensors will be used:
* Digisonde measurements
* Global Navigation Satellite System (GNSS) measurements
* Doppler shift measurements
Additionally, existing data/products will be used as input for the generation
of TechTIDE products:
* Total Electron Content (TEC) maps provided by DLR
  * For the European region
  * Global
* Geomagnetic and Solar Indices from NOAA Space Weather and Prediction Center (SWPC)
* Digisonde parameters from the GIRO quick chart
* Tropospheric - Stratospheric events & data
  * Atmospheric pressure time series with header
  * Infrasound detection bulletins
* Juliusruh K-Index
The project team will develop several methods to process these measurements
and allow the detection and characterization of TIDs:
* 3D electron density (EDD) products
* HF interferometry products
* TEC Gradient products
* Along Arc TEC Rate (AATR) product
* MSTID detection based on GNSS data
* Height Time Intensity product
* Continuous Doppler shifts of fixed sounding radio frequencies (CDSS)
These products will be provided in form of ASCII files and images. Most
products are provided along with metadata files.
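Writing a product as an ASCII file with a metadata sidecar can be sketched as follows; the product name, fields and values here are invented for illustration and do not anticipate the formats TechTIDE will define:

```python
import json
import os
import tempfile

def write_product(directory, name, rows, metadata):
    """Write an ASCII product file plus a JSON metadata sidecar."""
    data_path = os.path.join(directory, name + ".txt")
    meta_path = os.path.join(directory, name + ".meta.json")
    with open(data_path, "w") as f:
        f.write("\n".join(rows) + "\n")
    with open(meta_path, "w") as f:
        json.dump(metadata, f, indent=2)
    return data_path, meta_path

with tempfile.TemporaryDirectory() as d:
    data_path, meta_path = write_product(
        d,
        "aatr_20180329",  # hypothetical product name
        ["2018-03-29T00:00Z 0.12", "2018-03-29T00:05Z 0.15"],
        {"product": "AATR", "cadence": "5 min"},
    )
    print(os.path.basename(meta_path))  # aatr_20180329.meta.json
```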
## 2.3 Origin of the data
The data used for the generation of the TechTIDE products originates
partially within the TechTIDE consortium and partially from external sources.
A full assessment of the data used is provided in the TechTIDE knowledge
database. A summary is given in the table below.
_Table 2-1: List of data used or generated in TechTIDE and its origin_
<table>
<tr>
<th>
**ID**
</th>
<th>
**Data**
</th>
<th>
**Existing/ new**
</th>
<th>
**Origin**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Geomagnetic and Solar Indices**
</td>
<td>
Existing
</td>
<td>
NOAA SWPC
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**TEC maps**
</td>
<td>
Existing
</td>
<td>
DLR
</td> </tr> </table>
<table>
<tr>
<th>
**ID**
</th>
<th>
**Data**
</th>
<th>
**Existing/ new**
</th>
<th>
**Origin**
</th> </tr>
<tr>
<td>
**3**
</td>
<td>
**Digisonde parameters**
</td>
<td>
Existing
</td>
<td>
GIRO quick
chart
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Juliusruh K-Index**
</td>
<td>
Existing
</td>
<td>
L-IAP
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Tropospheric - Stratospheric events & data **
</td>
<td>
Existing
</td>
<td>
IAP (from
ARISE project)
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
**electron densities above 14 stations**
</td>
<td>
New
</td>
<td>
NOA
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
**Electron density map**
</td>
<td>
New
</td>
<td>
NOA
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
**TID Situation Map**
</td>
<td>
New
</td>
<td>
NOA/ BGD
</td> </tr>
<tr>
<td>
**9**
</td>
<td>
**TID detection support data per link**
</td>
<td>
New
</td>
<td>
NOA/ BGD
</td> </tr>
<tr>
<td>
**10**
</td>
<td>
**TID Alerts**
</td>
<td>
New
</td>
<td>
NOA/ BGD
</td> </tr>
<tr>
<td>
**11**
</td>
<td>
**TID Detections**
</td>
<td>
New
</td>
<td>
NOA/ BGD
</td> </tr>
<tr>
<td>
**12**
</td>
<td>
**Support data**
</td>
<td>
New
</td>
<td>
NOA/ BGD
</td> </tr>
<tr>
<td>
**13**
</td>
<td>
**TID database**
</td>
<td>
New
</td>
<td>
NOA/ BGD
</td> </tr>
<tr>
<td>
**14**
</td>
<td>
**TID Explorer visualizations**
</td>
<td>
New
</td>
<td>
NOA/ BGD
</td> </tr>
<tr>
<td>
**15**
</td>
<td>
**MUF(3000)F2 above 14 stations**
</td>
<td>
Existing
</td>
<td>
EO
</td> </tr>
<tr>
<td>
**16**
</td>
<td>
**TID Detection above 14 stations**
</td>
<td>
New
</td>
<td>
EO
</td> </tr>
<tr>
<td>
**17**
</td>
<td>
**MSTID detector for around 250 receivers worldwide (120 in Europe)**
</td>
<td>
New
</td>
<td>
UPC
</td> </tr>
<tr>
<td>
**18**
</td>
<td>
**TEC Gradient for Europe**
</td>
<td>
New
</td>
<td>
DLR
</td> </tr>
<tr>
<td>
**19**
</td>
<td>
**HTI plots above 14 stations**
</td>
<td>
New
</td>
<td>
FU
</td> </tr>
<tr>
<td>
**20**
</td>
<td>
**Doppler shift spectrograms**
</td>
<td>
New
</td>
<td>
IAP
</td> </tr>
<tr>
<td>
**21**
</td>
<td>
**CDSS TID detection and analysis**
</td>
<td>
New
</td>
<td>
IAP
</td> </tr>
<tr>
<td>
**22**
</td>
<td>
**NRT AATR values for around 250 receivers worldwide (120 in Europe)**
</td>
<td>
New
</td>
<td>
UPC
</td> </tr>
<tr>
<td>
**23**
</td>
<td>
**Clean data for 4 parameters foF2, hmF2, Hm, MUF from 14 stations**
</td>
<td>
New
</td>
<td>
NOA
</td> </tr>
<tr>
<td>
**24**
</td>
<td>
**Running median and DIFF (difference from observed values) for 4 parameters
foF2, hmF2, Hm, MUF from 14 stations**
</td>
<td>
New
</td>
<td>
NOA
</td> </tr>
<tr>
<td>
**25**
</td>
<td>
**De-trended values and DIFF (difference from observed values) for 4
parameters foF2, hmF2, Hm, MUF from 14 stations**
</td>
<td>
New
</td>
<td>
NOA
</td> </tr>
<tr>
<td>
**ID**
</td>
<td>
**Data**
</td>
<td>
**Existing/ new**
</td>
<td>
**Origin**
</td> </tr>
<tr>
<td>
**26**
</td>
<td>
**Maps of Running median and de-trended values for foF2 and hmF2, two areas
(Europe and Africa), i.e. 4 maps**
</td>
<td>
New
</td>
<td>
NOA
</td> </tr> </table>
## 2.4 Data size
The expected files and their size are documented in the TechTIDE wiki (
_https://techtidewiki.space.noa.gr/wiki/WikiPages/DB-Requirements2_ ). The
status as of 29 March 2018 is documented in the table below.
_Table 2-2: expected size of the TechTIDE data_
<table>
<tr>
<th>
**ID**
</th>
<th>
**Data**
</th>
<th>
**Size**
</th>
<th>
**cadence**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Geomagnetic and Solar Indices**
</td>
<td>
5 kB
</td>
<td>
1 day
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**TEC maps**
</td>
<td>
2 x 1 MB
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Digisonde parameters**
</td>
<td>
25 kB
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Juliusruh K-Index**
</td>
<td>
50 kB
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Tropospheric - Stratospheric events & data **
</td>
<td>
28 MB
</td>
<td>
1 day
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
**electron densities above 14 stations**
</td>
<td>
100x14 KB
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
**Electron density map**
</td>
<td>
150 KB
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
**TID Situation Map**
</td>
<td>
200 kB x 2
</td>
<td>
1 min
</td> </tr>
<tr>
<td>
**9**
</td>
<td>
**TID detection support data per link**
</td>
<td>
2 kB x 6
</td>
<td>
as requested
</td> </tr>
<tr>
<td>
**10**
</td>
<td>
**TID Alerts**
</td>
<td>
TBD kB
</td>
<td>
On event
</td> </tr>
<tr>
<td>
**11**
</td>
<td>
**TID Detections**
</td>
<td>
1 kB per
link
</td>
<td>
1 min
</td> </tr>
<tr>
<td>
**12**
</td>
<td>
**Support data**
</td>
<td>
1 kB per
link
</td>
<td>
1 min
</td> </tr>
<tr>
<td>
**13**
</td>
<td>
**TID database**
</td>
<td>
2 kB per
record
</td>
<td>
2.5 min
</td> </tr>
<tr>
<td>
**14**
</td>
<td>
**TID Explorer visualizations**
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
**15**
</td>
<td>
**MUF(3000)F2 above 14 stations**
</td>
<td>
14 x 7 kB
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**16**
</td>
<td>
**TID Detection above 14 stations**
</td>
<td>
14 x 1 kB
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**17**
</td>
<td>
**MSTID detector for around 250 receivers worldwide (120 in Europe)**
</td>
<td>
1MB per
daily file
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**ID**
</td>
<td>
**Data**
</td>
<td>
**Size**
</td>
<td>
**cadence**
</td> </tr>
<tr>
<td>
**18**
</td>
<td>
**TEC Gradient for Europe**
</td>
<td>
1 MB
</td>
<td>
15 min
</td> </tr>
<tr>
<td>
**19**
</td>
<td>
**HTI plots above 14 stations**
</td>
<td>
tbd
</td>
<td>
15 min
</td> </tr>
<tr>
<td>
**20**
</td>
<td>
**Doppler shift spectrograms**
</td>
<td>
60-110 kB
per file
</td>
<td>
2/8 hour
</td> </tr>
<tr>
<td>
**21**
</td>
<td>
**CDSS TID detection and analysis**
</td>
<td>
60-110 kB
per file
</td>
<td>
15 min
</td> </tr>
<tr>
<td>
**22**
</td>
<td>
**AATR values for around 250 receivers worldwide (120 in Europe)**
</td>
<td>
2MB per daily file for all
receivers
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**23**
</td>
<td>
**Clean data for 4 parameters foF2, hmF2, Hm, MUF from 14 stations**
</td>
<td>
1 kB per record per station
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**24**
</td>
<td>
**Running median and DIFF (difference from observed values) for 4 parameters
foF2, hmF2, Hm, MUF from 14 stations**
</td>
<td>
1 kB per record per station
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**25**
</td>
<td>
**De-trended values and DIFF (difference from observed values) for 4
parameters foF2, hmF2, Hm, MUF from 14 stations**
</td>
<td>
1 kB per record per station
</td>
<td>
5 min
</td> </tr>
<tr>
<td>
**26**
</td>
<td>
**Maps of Running median and de-trended values for foF2 and hmF2, two areas
(Europe and Africa), i.e. 4 maps**
</td>
<td>
150 kB per map
</td>
<td>
5 min
</td> </tr> </table>
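Figures like those in Table 2-2 translate directly into storage estimates. For example, for a product of fixed size generated at a fixed cadence:

```python
def daily_volume_mb(size_mb_per_update, cadence_min):
    """Data volume per day for a product generated at a fixed cadence."""
    updates_per_day = 24 * 60 / cadence_min
    return size_mb_per_update * updates_per_day

# TEC maps: 2 files of 1 MB every 5 minutes (values from Table 2-2)
print(daily_volume_mb(2 * 1, 5))  # 576.0 MB/day
```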
## 2.5 Data utility
The external data serves as input for different processors in the TechTIDE
system; it is not intended to be provided to users.
As an indication of the utility of the TechTIDE products, the TechTIDE
consortium maintains close communication with users. The main users are
network real-time kinematic (NRTK) service providers and HF users. First, a
comprehensive investigation of user requirements has been carried out; the
TechTIDE system will be constructed according to these requirements. Then,
user workshops will be organized, where the TechTIDE consortium demonstrates
the TechTIDE system to users and shows the utility of the products. Users
will give feedback, which will be used to adjust the presentation of products
if necessary.
# 3 FAIR data
## 3.1 Making data findable, including provisions for metadata
### 3.1.1 Metadata
Each product will be generated along with metadata. Due to the large number of
project partners providing different kinds of products, a harmonization of
metadata within TechTIDE is necessary. At the current state of the project
(requirements definition phase), there is no agreement on a metadata standard.
This topic will be addressed in the design phase in the deliverable D4.1.
### 3.1.2 Naming convention
At the current state of the project (requirements definition phase), there is
no agreement on a naming convention. This topic will be addressed in the
design phase in the deliverable D4.1.
### 3.1.3 Search keywords
Search keywords are considered a useful parameter in the TechTIDE project.
TechTIDE is going to review the user requirements to check what users need. At
the current state of the project, we expect search keywords to form part of
the metadata. However, the definitive handling of search keywords will be
defined in the design document D4.1.
### 3.1.4 Versioning
Versioning of products and code is going to be implemented in TechTIDE. It can
be part of the metadata or the naming conventions. A definition of the
handling of versioning is going to be described in the design document D4.1.
## 3.2 Making data openly accessible
### 3.2.1 Openly available data
All new products listed in Table 2-1 will be made openly available through
the TechTIDE system. The TechTIDE system will be accessible through a
dedicated website. Each product will be presented on this website with a
dedicated description and user guideline. Data access is also provided through
the website.
The implementation of the TechTIDE data storage depends on different criteria
such as download speed and storage capacity. An initial thought is to store
the online data on a webserver. This data can be accessed via HTTP queries.
The website guides the user to the relevant data. Metadata will be stored
along with each product's data. A definitive design of the data storage will
be given in D4.1.
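As an illustration of the HTTP-based access described above, the following sketch builds a query URL for one product file. The host name, path, and parameter names are purely hypothetical, since the definitive interface will only be fixed in D4.1.

```python
from urllib.parse import urlencode, urlunsplit

def build_product_query(product_id, date, version="1.0"):
    """Build an HTTP query URL for one TechTIDE product file.

    The host, path, and parameter names below are hypothetical;
    the real interface will be defined in deliverable D4.1.
    """
    params = urlencode({"product": product_id, "date": date, "version": version})
    return urlunsplit(("https", "techtide.example.org", "/data", params, ""))

url = build_product_query(23, "2019-03-01")
# e.g. "https://techtide.example.org/data?product=23&date=2019-03-01&version=1.0"
```

Because the data is served over plain HTTP(S) in self-describing formats, no special client software is needed, in line with the accessibility goal stated above.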
Additionally, off-line data storage with redundancy will be implemented for
the TechTIDE data. The project coordinator and host of the TechTIDE core
system is a partner of the Greek Research Technology Network (GRNET), which is
part of GEANT, and is going to use their storage facility (if appropriate).
TechTIDE will benefit from the redundancy and capacity of the GRNET system.
On request, users can obtain individual data sets from the off-line storage.
The TechTIDE system implements a distributed processing system. The individual
products listed in Table 2-1 are processed/generated in different institutes
participating in the TechTIDE project. Each of these institutes maintains an
additional local archive of their products and input data. The institutes can
provide data from their repositories on request.
The data access and the data format is designed such that no special software
is needed to access or read the data.
### 3.2.2 Closed and restricted data
Within TechTIDE, DLR is providing TEC data with a 5-minute temporal resolution
to NOA. This data exchange is internal to the project. These TEC maps are the
property of DLR, which has been declared as background IPR in the grant
agreement. DLR and NOA have agreed to keep the data closed to the project. The
data will be used by TechTIDE processors to generate dedicated TID products.
The agreed terms of usage are documented in the TechTIDE knowledge database.
DLR will push the data to a dedicated NOA server.
Since the number of restricted/closed datasets is small and the terms of usage
have been described in the knowledge database, there is no need to establish a
data access committee.
## 3.3 Making data interoperable
The data produced in the TechTIDE project is meant to be interoperable, to
allow data exchange and re-use between researchers, institutions and
organisations. Standard formats like JSON are generated where applicable, and
a number of open-source software packages can read the JSON format. All data
formats are human-readable and contain format information. Within the project
itself, different datasets from different origins are already combined; this
expertise will also be applied to the TechTIDE products.
Metadata files are provided along with the TechTIDE products. Some products
use standard metadata vocabularies, while others generate individual,
human-readable metadata files that are easy to convert to any standard. The
handling and definition of metadata will be considered in D4.1.
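Since JSON is named above as a candidate standard format, a product record delivered together with its metadata could look like the sketch below. All field names and values are assumptions pending the metadata harmonization in D4.1.

```python
import json

# Hypothetical metadata record for one 5-minute product file; the field
# names and values are placeholders until D4.1 fixes the metadata standard.
record = {
    "product": "foF2_running_median",
    "station": "AT138",                      # assumed station code
    "timestamp": "2019-03-01T12:05:00Z",
    "cadence_minutes": 5,
    "version": "1.0",
    "values": {"foF2": 6.2, "hmF2": 310.0},  # illustrative numbers
}

# JSON is both human-readable and trivially machine-readable.
text = json.dumps(record, indent=2, sort_keys=True)
parsed = json.loads(text)
```

Embedding the version and timestamp directly in each record would also address the versioning and findability points of sections 3.1.1 and 3.1.4.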
## 3.4 Increase data re-use (through clarifying licences)
The data will be openly accessible by the time of the first release of the
operational TechTIDE system. All open TechTIDE data will be accessible by the
time of the final release of the TechTIDE system. No embargo will be put on
the product re-use.
The open TechTIDE data can be used by third parties. TechTIDE data is planned
to be provided with a Creative Commons license for free scientific use and
restricted commercial use. The applicable license will be discussed in the
project. If commercial users are interested in using TechTIDE data, individual
agreements will be made between product provider and user.
After the end of the TechTIDE project, the TechTIDE system will continue to
provide its products. However, continuity of the product generation cannot be
guaranteed, because operation will run on a best-effort basis. Likewise, the
maintenance of the online hardware and software cannot be guaranteed for more
than one year after project completion. However, the off-line data
repositories will store the TechTIDE data for at least 5 years. Data can be
provided on request.
Data quality assurance processes are going to be discussed in the design of
the TechTIDE system. A possible approach is the definition of quality metrics
which are provided along with the products, but the feasibility needs to be
assessed in the system design.
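One simple metric of the kind mentioned above would be daily completeness, i.e. the fraction of the expected 5-minute records of a day that actually arrived. The sketch below is only an illustration of the approach, not an agreed TechTIDE metric.

```python
# Completeness of one day's 5-minute product stream: 288 records expected.
# This metric is only an illustration; actual quality metrics will be
# defined during the TechTIDE system design.
EXPECTED_PER_DAY = 24 * 60 // 5  # 288 five-minute slots per day

def daily_completeness(received_timestamps):
    """Return the fraction of expected 5-minute slots that were received."""
    return len(set(received_timestamps)) / EXPECTED_PER_DAY

# A day where only every second slot arrived yields 50% completeness.
half_day = [f"{h:02d}:{m:02d}" for h in range(24) for m in range(0, 60, 10)]
score = daily_completeness(half_day)  # 0.5
```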
# 4 Allocation of resources
N.a.
# 5 Data security
TechTIDE data is going to be safely stored in the GRNET facility. Data
recovery and secure storage are provided by this certified repository
facility. GRNET is also capable of long-term preservation. Sensitive data is
not intended to be used in TechTIDE.
# 6 Ethical aspects
No personal data related to user questionnaires will be stored.
There is no ethical issue with any TechTIDE data.
The handling of personal data generated from user registration in the TechTIDE
portal will comply with the EU data protection law, which enters into force in
May 2018.
# 7 Other issues
For its institutional data repository, DLR makes use of its Data and
Information Management System (DIMS). It maintains institutional procedures
for data management.
IAP saves raw data and some other information from the Digisonde and Doppler
sounder on its server at the Institute.
**Executive Summary**
This is the first version of the Data Management Plan, which explains the
methods used to manage the data generated within the ChipScope project and the
criteria and methods agreed between partners to make them openly accessible,
in fulfillment of the Open Data obligations defined in the Grant Agreement.
The document will be updated every 12 months until the end of the project.
1. **Data summary**
1. **State the purpose of the data collection/generation**
The research activities of the ChipScope project will generate data of two
types:
**Type I: Design and fabrication details:** This data relates to the
fabrication of the microscope prototypes and their parts. It includes, but is
not restricted to:
* _Engineering drawings_
* _Chip layouts_
* _Semiconductor processing specifications_
* _Flow diagrams_
* _Programming code_
* _User protocols and manuals_
**Type II: Measurements and simulation data:** This data relates to the
experiments carried out during the project, which include the characterization
and simulation of the microscope parts, and the application of the microscopes
to observe samples. These experiments will generate, among other, datasets of:
* _Optoelectronic and spectroscopic measurements_
* _Images of different kinds_
* _Numeric simulation results_
* _New theoretical models_
2. **Explain the relation to the objectives of the project**
The data generated in the research activities of ChipScope will serve to
achieve objectives #1 and #2, which relate to the design, fabrication, and
experimental proof-of-concept of the microscope prototypes.
Objective #3 relates to the dissemination, communication, and exploitation
of the project results. This states the obligation to do our best to exploit,
and not jeopardize, the technological assets developed in ChipScope.
Therefore, the achievement of objective #3 has strong implications on the
level of open availability of the data generated.
3. **Specify the types and formats of data generated/collected Type I: Design and fabrication details**
* _Engineering drawings:_ CAD files, with defined schemas, shapes and dimensions of the prototypes’ parts (e.g. mechanical holders, stages, microfluidic system, wiring, etc.) and their integration.
* _Chip layouts:_ mask designs to be used in the production steps of the nanoLED and of the CMOS ASICS. Typically, in gds format.
* _Semiconductor processing specifications:_ detailed list of steps, conditions, and materials’ qualities to be used in the production of chip devices. Typically, in Office file (or equivalent) format.
* _Flow diagrams:_ detailed graphical descriptions of the algorithms and procedures to be implemented in software. Typically, in Office file (or equivalent) format.
* _Programming code:_ source code of the programs running in the microscope prototypes, both in the computer side and in the embedded side, as well as source codes of simulation software. Typically, in ASCII files encoding different programing languages.
* _User protocols and manuals:_ detailed written and graphical descriptions on how to operate the different parts of microscope that accompany each prototype when being transferred among partners. Typically, in Office file (or equivalent) format.
**Type II: Measurements and simulation data**
* _Optoelectronic and spectroscopic measurements:_ electrical records, impedance spectra, digital bit stream records, electroluminescence and photoluminescence spectra. Typically, in CSV, Origin or Excel files.
* _Images:_ photo/micro/nanographs taken by camera, optical microscope, scanning electron microscope, and transmission electron microscope of the microscope prototypes, reference metrological samples and living tissues. These involve image files (e.g. BMP, TIFF, JPG ...) and video files (e.g. MP4, AVI, MOV...)
* _Numeric simulation results:_ calculated physical quantities associated with spatial coordinates (spatial data), like particle densities, recombination rates, energy levels, and electromagnetic field strength (associated with the mesh (grid) of the spatial discretization); and global quantities obtained from the simulations, like eigenmode frequencies, contact currents, emission powers, and emission spectra (not associated with a spatial discretization). Depending on the specific software, the data produced might be stored in VTK format or ASCII files (for mesh-dependent data).
* _New theoretical models:_ description of new formulas and algorithms. Typically, in Office file (or equivalent) format.
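For the mesh-dependent simulation output mentioned above, the legacy ASCII variant of the VTK format is a plain-text option that needs no special software to read. The sketch below writes a small scalar field on a structured grid; the grid size, field name and values are invented for illustration.

```python
def write_vtk_scalars(path, name, values, dims):
    """Write a scalar field on a structured-points grid in legacy ASCII VTK.

    `values` must contain dims[0]*dims[1]*dims[2] numbers; the field name
    and the numbers used below are illustrative only.
    """
    nx, ny, nz = dims
    lines = [
        "# vtk DataFile Version 3.0",
        f"{name} field",
        "ASCII",
        "DATASET STRUCTURED_POINTS",
        f"DIMENSIONS {nx} {ny} {nz}",
        "ORIGIN 0 0 0",
        "SPACING 1 1 1",
        f"POINT_DATA {nx * ny * nz}",
        f"SCALARS {name} float 1",
        "LOOKUP_TABLE default",
    ] + [str(v) for v in values]
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

# A 2x2x1 grid of invented particle densities:
write_vtk_scalars("density.vtk", "density", [0.1, 0.2, 0.3, 0.4], (2, 2, 1))
```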
4. **Specify if existing data is being re-used (if any)**
No data, other than expertise from partners’ background (e.g. layouts of
similar devices or software produced in the past), is being re-used.
5. **Specify the origin of the data**
All the data generated will be the product of the research carried out by the
partners in the framework of the ChipScope project.
6. **State the expected size of the data (if known) Type I: Design and fabrication details**
* _Engineering drawings:_ 1MB – 100 MB per design
* _Chip layouts:_ 10MB – 100 MB per chip
* _Semiconductor processing specifications:_ 10kB – 10MB per process
* _Flow diagrams:_ 10kB – 10 MB per diagram
* _Programming code:_ 1MB – 10 MB per program
* _User protocols and manuals:_ 1MB – 100MB per document.
**Type II: Measurements**
* _Optoelectronic and spectroscopic measurements:_ 1kB – 100 MB per file.
* _Images of different kinds:_ 1MB – 1GB per image/video
* _Numeric simulation results:_ 1 MB – 1GB per simulation
7. **Outline the data utility: to whom will it be useful**
The data will be used for internal validation of the processes, benchmarking
of the performances of the prototypes, and research on metrology and medical
applications.
It may also be useful for research institutions and companies working in the
field of digital imaging, metrology, multiscale simulations, and medical
diagnostics as well; either for a better understanding of the development and
its performances or for benchmarking and reproduction of the results.
2. **FAIR data**
1. **Making data findable, including provisions for metadata:**
1. **Outline the discoverability of data (metadata provision)**
Usually, the data will be self-documenting. When uploaded to public
repositories (e.g. the European OpenAIRE repository), metadata may accompany
it, to be defined in further versions of the DMP.
2. **Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
To be defined in further versions of the DMP, when the public repository
system will be fully defined.
3. **Outline naming conventions used.**
To be defined in further versions of the DMP, when the public repository
system will be fully defined. As a general rule, it should include information
related to the project, partner generating the data, serial number or date and
description of the dataset.
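Following the general rule above (project, partner, serial number or date, and description), a candidate file name could be assembled as in this sketch. The exact layout is hypothetical and remains to be fixed in a later version of the DMP.

```python
from datetime import date

def dataset_filename(partner, description, serial, when=None, ext="zip"):
    """Assemble a dataset file name from project, partner, date and description.

    The layout below is only a candidate; the definitive naming convention
    will be agreed in a later version of the DMP.
    """
    when = when or date.today()
    desc = description.lower().replace(" ", "-")
    return f"chipscope_{partner.lower()}_{when:%Y%m%d}_{serial:03d}_{desc}.{ext}"

name = dataset_filename("UB", "EL spectra", 7, when=date(2018, 5, 3))
# "chipscope_ub_20180503_007_el-spectra.zip"
```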
4. **Outline the approach towards search keyword.**
To be defined in further versions of the DMP, when the public repository
system will be fully defined.
5. **Outline the approach for clear versioning.**
Version control mechanisms should be established and documented before any
data are made openly public. During generation and collection, each partner
will follow its own internal procedures.
6. **Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**
To be defined in further versions of the DMP, when the public repository
system will be fully defined. Metadata will be created manually by depositors
in the deposit form at the repository.
2. **Making data openly accessible:**
1. **Specify which data will be made openly available? If some data is kept closed provide rationale for doing so.**
**Type I data will NOT be made openly available.** This is necessary to
protect the technological asset developed in the project, and comply with the
project objective #3. Any public disclosure of the fabrication details would
jeopardize the chances of exploiting the technology, among the project
partners, with members of the Industry Advisory Board, or with third parties.
**Type II data will be made openly available only in part.** In fulfillment
of project objective #3, the consortium oversees any disclosure of scientific
and technical data made by the partners, in the form of summaries, conference
contributions, paper publications, online communications, etc. The content of
the approved communications is considered non-confidential, and its
communication is deemed beneficial for the achievement of the project
objectives. Consistent with this communication protocol, the consortium will
make public all the original datasets of Type II data used to prepare these
public communications.
_In brief, only the data relative to experimental measurements (Type II) used
to prepare publications disclosed in open access will be made openly
available._
2. **Specify how the data will be made available.**
Data will be made openly available in relation to an associated open access
publication. For each publication, the associated Type II data will be filed
together in a container format (e.g. zip, or tar). Information to relate each
data set with the corresponding figure, table or results presented in the
publication will be provided.
Data will be made openly available following the same time rules that apply to
the associated open access publication, e.g. in terms of timeliness, and
embargo.
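The container format described above can be produced with standard tools. This sketch bundles hypothetical Type II data files together with a small manifest relating each file to a figure of the associated publication; all file names and figure labels are invented.

```python
import io
import json
import zipfile

# Manifest mapping each Type II data file to the figure it underpins.
# File names and figure labels are purely illustrative.
manifest = {
    "fig1a_iv_curve.csv": "Figure 1a",
    "fig2_el_spectrum.csv": "Figure 2",
}

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("manifest.json", json.dumps(manifest, indent=2))
    for fname in manifest:
        zf.writestr(fname, "placeholder,data\n")  # real measurement data here

with zipfile.ZipFile(buffer) as zf:
    names = set(zf.namelist())  # manifest.json plus the two data files
```

Shipping the mapping as a machine-readable manifest inside the container keeps each published dataset self-describing, which supports the re-use goals of section 2.4.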
3. **Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
Data will be made available in standard file formats that can be accessed
with common software tools. This will include ASCII or Office files for
numeric datasets, standard picture formats for images, and open output
formats like VTK or HDF5 for mesh-related simulation data.
4. **Specify where the data and associated metadata, documentation and code are deposited.**
Details about the public repository system to be used will be fully defined in
further versions of the DMP. In deciding where to store project data, the
following options will be considered, in order of priority:
* An institutional research data repository, if available
* An external data archive or repository already established in the project research domain (to preserve the data according to recognized standards)
* The European sponsored repository: Zenodo ( _http://zenodo.org_ )
* Other data repositories (searchable here: re3data _http://www.re3data.org/_ ) , if the previous ones are ineligible
### 2.2.5 Specify how access will be provided in case there are any
restrictions.
Data availability is categorized at this stage in one of two ways:
* Openly Accessible Data (Type II associated to open access publication): open data that is shared for re-use that underpins a scientific publication.
* Consortium Confidential data (Type I and the rest of Type II data): accessible to all partners within the conditions established in the Consortium Agreement.
## 2.3 Making data interoperable:
### 2.3.1 Assess the interoperability of your data. Specify what data and
metadata vocabularies, standards or methodologies you will follow to
facilitate interoperability.
Does not apply for the moment.
### 2.3.2 Specify whether you will be using standard vocabulary for all data
types present in your data set, to allow inter-disciplinary interoperability?
If not, will you provide mapping to more commonly used ontologies?
Does not apply for the moment.
## 2.4 Increase data re-use (through clarifying licenses):
### 2.4.1 Specify how the data will be licensed to permit the widest reuse
possible
The Openly Accessible Datasets will be licensed, when deposited to the
repository, under an Attribution-NonCommercial license (by-nc).
### 2.4.2 Specify when the data will be made available for re-use. If
applicable, specify why and for what period a data embargo is needed
The Openly Accessible Datasets can be re-used from the moment of the open
publication.
### 2.4.3 Specify whether the data produced and/or used in the project is
useable by third parties, in particular after the end of the project? If the
re-use of some data is restricted, explain why.
Each archived Openly Accessible Dataset will have its own permanent repository
ID, will be easily accessible, and can be used by any third party under the
by-nc license.
### 2.4.4 Describe data quality assurance processes.
The repository platform functioning guarantees the quality of the dataset.
### 2.4.5 Specify the length of time for which the data will remain re-
usable.
Openly Accessible Datasets will remain re-usable after the end of the project
by anyone interested in it. Accessibility may depend on the functioning of the
repository platform, and the project partners do not assume any responsibility
after the end of the project.
# 3 Allocation of resources
## 3.1 Estimate the costs for making your data FAIR. Describe how you intend
to cover these costs.
There are no costs associated with the described mechanisms to make the
datasets FAIR and preserve them in the long term.
## 3.2 Clearly identify responsibilities for data management in your project.
The project coordinator has the ultimate responsibility for the data
management in the Project. Each partner is requested to provide the necessary
information to compose the Openly Accessible Datasets in compliance of the
terms defined in the DMP agreed by the consortium.
## 3.3 Describe costs and potential value of long term preservation.
Does not apply for the moment.
# 4 Data security
## 4.1 Address data recovery as well as secure storage and transfer of
sensitive data.
Data security will be provided in the standard terms and conditions available
in the selected repository platform.
# 5 Ethical aspects
## 5.1 To be covered in the context of the ethics review, ethics section of
DoA and ethics deliverables. Include references and related technical aspects
if not covered by the former.
Concerning the use of the prototypes for medical research applications, all
patient-centered data will be kept exclusively within MUW. Only
project-relevant data (e.g. histological diagnosis) will be forwarded to the
technical partner (AIT), in strictly anonymized form.
# 6 Other
## 6.1 Refer to other national/funder/sectorial/departmental procedures for
data management that you are using (if any)
The project data and documentation is also stored in the project intranet,
which is accessible to all project partners.
This DMP has been created with the tool “Pla de Gestió de Dades de Recerca”
( _https://dmp.csuc.cat/_ ) .
# 7 List of References
Does not apply.
# OVERARCHING DATA MANAGEMENT PLAN
The One-Health European Joint Program (OH-EJP) aims at integrating the
complementary expertise of partners across Europe in order to prepare common
action against infectious health threats. Those threats include zoonotic
infections both in animals and humans, and infections or toxin contamination
in feed and food. To reach the objective, the OH-EJP consortium will develop a
sustainable framework for an integrated community of research groups. Research
groups are represented by reference laboratories in fields of human and
veterinary medicine, food and environmental sciences. The OH-EJP will place
its emphasis on food-borne microbial infections and intoxications, in the
scope of a One-Health (OH) perspective.
To achieve those objectives, a significant amount of data will be collected,
processed and generated, such as OH-EJP deliverables, scientific publications
(e.g. peer-reviewed research articles) and research data. According to the
European Commission (EC), “_research data_ is _information (particularly
facts or numbers) collected to be examined and considered, and to serve as a
basis for reasoning, discussion, or calculation_”. In general terms, OH-EJP
data will follow the “_FAIR_” principles, meaning “_Findable, Accessible,
Interoperable and Re-usable_”. The FAIR principles will ensure
soundly managed data, leading to knowledge discovery and innovation, and to
subsequent data and knowledge integration and reuse. The data will be made
findable and accessible within the Consortium, and to the broader research
community, stakeholders and policy makers. Also, data has to be compliant with
national and European ethical-legal frameworks, such as the General Data
Protection Regulation (GDPR, Regulation (EU) 2016/679), which is applicable
since May 2018.

Data management plans (DMPs) describe the data management life
cycle for all data to be collected, processed and/or generated by a Horizon
2020 project. It should include information on the handling of research data
both during and after the end of the project; the nature of the data, the
methodology and standards applied, whether data will be shared or made open
access, and how the data will be curated and preserved. The present document
provides information on the general OH-EJP strategy regarding data management
in the form of an overarching data management plan. It defines the strategy on
how OH-EJP data are managed under conditions that conform with the
requirements of Horizon 2020. Adherence to the overarching DMP will be
governed by the Consortium Agreement. Due to the heterogeneity of the data
that will be collected, processed or generated within OH-EJP, and due to the
level of detail needed, each joint research project (JRP) and joint
integrative project (JIP) will also have to develop project-specific DMPs,
using the present overarching DMP as a baseline. The first versions of the
project DMPs are due by month 11 (November 2018), and their development will
be guided
by the DMP focal point of OH-EJP, i.e. the Belgian partner Sciensano.
As the OH-EJP is a co-funded program, agreements between partners and
stakeholders are required to collect/process/use data. It must be acknowledged
that the source of co-funding may have priority in some decisions regarding
data management, i.e. that it may dictate where and how the programme output,
including data, should be deposited and named. A guiding principle is also to
avoid duplication of effort, i.e. data and publications should not be
deposited twice. Consequently, the principles provided by the OH-EJP
overarching DMP are meant to complement any requirements from individual
funders, while still ensuring that the data are FAIR, as far as possible.
The DMP is intended to be a living document, and can be further modified or
detailed during the OH-EJP. The information can be made available on a finer
level of granularity through updates as the implementation of the project
progresses and when significant changes occur. Those changes might include new
data, changes in consortium policies (e.g. new innovation potential, decision
to file for a patent) or changes in composition and external factors (e.g. new
consortium members joining). At minimum, the DMP will be updated in the
context of the periodic evaluation/assessment of the program, but it is
foreseen that the implementation of the DMP at project level will also be part
of the annual reporting.
It is also foreseen that the expectations from the OH-EJP on FAIR data
management will be of value for institutional development and maturation
with regard to proper data management, thereby contributing to the
overarching goals of alignment and integration at the EU level. To support
the development of good research data practice among partner institutes,
guidelines and training will be provided by the joint integrative research
work package to develop DMP competences.
The overarching DMP is structured according to the H2020 template _Data
management plan v1.0 – 13.10.2016_. It includes 6 components, summarized in
Table 1:
1. Data Summary
2. FAIR data
3. Allocation of resources
4. Data security
5. Ethical aspects
6. Other issues
A last section provides an action plan table (Table 1), which presents
important topics requiring progress and/or updates in future versions of the
DMP.
# DATA SUMMARY
## Explain the relation of the data to the objectives of the project
The overall goal of OH-EJP is to combine different expertise of partners
across Europe in order to better address threats related to zoonotic diseases
in humans and animals, and infections or toxin contamination in feed and food.
This will allow for coordination and preparation of joint public and animal
health action plans. Each OH-EJP project (JRP and JIP) is collecting or
processing, and/or generating data with its own purpose and specificities to
serve the common goal of integrated expertise and capacity building.
At the start of the EJP, the consortium manages 13 projects: 2 integrative
projects and 11 research projects. As a step in the development of the
overarching DMP, a questionnaire was distributed to project leaders to capture
the current state of the art with regards to data management, and to identify
needs for further development and training. Below, the relation of the data of
each project with their specific objective is presented.
* Integrative projects
* The ORION project aims at establishing and strengthening inter-institutional collaboration and transdisciplinary knowledge transfer in the area of surveillance data integration and interpretation, in line with the OH objective of improving health and well-being. Data collected and/or generated serve the objective of providing a prototypic implementation of an integrated strategy for long-term consolidation and harmonization of OH surveillance solutions.
* The COHESIVE project will collect different kinds of data to support discussion to develop guidelines for national One Health structures. This type of information will be retrieved through questionnaires to make a blueprint of human-veterinary collaboration and acquire better knowledge of the present situation. Data on existing risk assessment tools will be collected in order to make a decision tree on when to use which tool. Some COHESIVE partners will be permitted to access the data with the aim of setting up their own Information System with databases harboring WGS/NGS data, metadata and epi data.
* Research projects
* The NOVA project aims at developing epidemiological methods of investigations of potential new sources for the surveillance of foodborne diseases.
* The ListAdapt project will explore the diversity of strains in different compartments of the farm to fork chain to better explain the adaptation capacity of _L. monocytogenes_ .
* The Metastava project will harmonize and optimize the use of metagenomics across Med-Vet partners and share methodologies.
* The project AIR-SAMPLE will develop methods for a standardized protocol for air sampling in poultry flocks.
* The project MoMIR-PPC aims at creating a network that will focus on the prevention of foodborne pathogens in the food chain in order to control zoonotic food-borne infections, optimize husbandry and feeding practices, and decrease the use of antimicrobials in farm industries and hospitals. Based on data obtained from animal infections and human carriers, new approaches will be developed to predict, identify, prevent and control the appearance of animal and human super-shedders based on immune response and gut microbiota composition. The data on the dynamics of super-shedders and the analysis in farm conditions will result in a new mathematical model, which provides essential information to producers to support and strengthen biosecurity measures in a cost-effective manner. This project will also lead to improved diets or additives (pre- and probiotics, nutraceuticals) that better protect humans and livestock. Taken together, this will make it possible to reduce antimicrobial usage. The results will be disseminated and communicated to both the public and the medical-veterinary community, to decrease the import of such bacteria in the future. Results will be disseminated through a variety of written and oral media. Primary data manuscripts will be published in peer-reviewed journals, and we anticipate that these will include manuscripts in high-impact scientific journals as well as those that specialize in veterinary or animal disease. All of the scientists will regularly attend and contribute to international and national scientific meetings as well as industry-oriented meetings.
* The objective of the project MedVetKlebs is to develop, evaluate and harmonize methods for sampling, detection, strain typing and genome-based genotyping of _Klebsiella pneumoniae_, and share these methodologies across institutions and with the scientific community in order to optimize current practices. The purpose is to enlarge and promote a scientific network during the lifetime of the project in order to involve more countries concerned by the subject and gain additional expertise. This will make it possible to identify gaps where further investigations are needed to inform current policy questions and design novel approaches. The research findings will be disseminated and knowledge transferred to the diverse target audiences through training/exchange activities at national and international level.
* The project IMPART will harmonize methods for detection of resistant bacteria (resistance to colistin and carbapenem, and resistance of _Clostridium difficile_ ) and subsequent susceptibility testing. Phenotypic data (MIC values) will be generated to enable EUCAST to set epidemiological cut-off values for interpreting future susceptibility tests of veterinary pathogens.
* The data generated by the ARDIG project will help examine the dynamics of antimicrobial resistance (AMR) in different epidemiological units (human, animal, food and environment) from countries that differ significantly in their usage of antimicrobial agents and AMR prevalence, both in the human and veterinary sectors, as well as in climate, management systems and the potential for transmission of resistance. It will also help in understanding differences and similarities between methodologies used by national institutes in different countries.
* The project RaDAR will help develop common modelling methodologies.
* The MAD-VIR project aims at harmonizing and optimizing the practices for identifying all viruses, including emerging threats and food-borne zoonoses, in key institutions/laboratories throughout EU countries.
* The project TOX-detect focuses on the development and harmonization of innovative methods for comprehensive analysis of food-borne toxigenic bacteria, i.e. staphylococci, Bacillus cereus and Clostridium perfringens.
## Specify the types and formats of data collected/generated
Different data types will be collected/generated, such as publications and
research data, related to foodborne surveillance, AMR and emerging threats.
Other types of data include questionnaire data (e.g. paper-based/online
questionnaires), clinical data, biological data (e.g. measurements in
biological matrices/tissues), molecular data (including data on part of or
whole genomes), and modelling data (e.g. estimated exposure and/or effect
parameters). A list of the different deliverables has been established,
and this list will be further detailed to specify the types of data generated.
Additionally, a comprehensive list of data collected and generated will be
gathered from the different projects as the projects progress.
Data formats should be selected with a view to facilitating data storage and
transfer. Therefore, data will be in machine-readable formats, preferably
formats intended for computers (e.g. RDF, XML and JSON), but also in human-
readable formats marked up to be understood by computers (e.g. microformats,
RDFa). Additionally, it is recommended to use non-proprietary formats where
possible.
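To illustrate the preference for machine-readable, non-proprietary formats, a small sketch of serializing a surveillance record to JSON is shown below; the record and its field names are purely hypothetical, chosen only to show the structure, not an OH-EJP schema:

```python
import json

# Hypothetical surveillance record; field names are illustrative only.
record = {
    "dataset": "amr-surveillance",
    "pathogen": "Klebsiella pneumoniae",
    "matrix": "food",
    "collected": "2019-05-14",   # ISO 8601 date, machine-readable
    "mic_mg_per_l": 0.25,
}

# Non-proprietary, machine-readable serialization.
serialized = json.dumps(record, indent=2, sort_keys=True)
print(serialized)

# Round-trip check: JSON preserves the structure without loss.
assert json.loads(serialized) == record
```

A format like JSON can be parsed by any programming language, which is what makes it suitable for storage and transfer between consortium partners.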
## Specify if existing data is being re-used (if any)
The Project Management Team (PMT) of the OH-EJP program encourages partners to
make existing data available for research within the EJP Consortium. To
support such data re-use, lists of datasets collected and generated during the
course of the program will be made available on the OHEJP website, and access
procedures drafted for those data. If relevant in their research task, the
consortium partners should be able to make use of these existing data.
## State the expected size of the data (if known); handling/storage of “big
data”
The expected size depends on the extent and the nature of the data that are
made available, and will be evaluated during the course of the project by the
Consortium partners. Big data handling and storage is expected for some
projects, and adapted procedures will be described in the appropriate project
DMPs.
## Outline the data utility: to whom will it be useful
Depending on the domain of expertise, data generated within the OH-EJP program /
projects can be useful to:
* Other partners belonging to the OH-EJP Consortium (EJP beneficiaries);
* European Commission services and European Agencies, such as EFSA, ECDC, DG-SANCO, DGHEALTH;
* International agencies, such as OIE, WHO;
* National authorities involved in animal and public health;
* European scientific community, such as European and national reference laboratories, scientists from medical and veterinary research institutions;
* Industries involved in animal management and extension services;
* General (scientific) public.
It is the objective of the Consortium to make most deliverables available to the
widest possible public; however, restrictions on the use of data might also
apply. If so, the rationale for such restrictions should be provided.
# FAIR DATA
Through the life cycle of the OH-EJP data, the FAIR principles will be
followed as far as possible, while ensuring compliance with national and
European ethic-legal framework. The FAIR component of the DMP still comprises
points to clarify, which will be addressed during the course of the programme.
Points addressed:

* Making data findable, including provisions for metadata
* Making data accessible
* Making data interoperable
* Making data re-usable

## Making data findable, including provisions for metadata
### Outline the discoverability of data (metadata provision)
Because of the co-funding setup of OH-EJP, with Programme Managers receiving
their mandate from Programme Owners, agreements have to be made between OH-EJP
partners and relevant national data owners/providers to ensure data
discoverability and identifiability. During the course of the program, the
relevance and opportunity to make those co-funded data findable and accessible
to other OH-EJP partners will be assessed case by case. Different
considerations will be taken into account to support the decision of making
those data findable, such as scientific relevance of data for other OHEJP
partners, technical feasibility, formal agreement with the data
owners/providers, and compliance with national and EU ethic-legal framework.
Data discoverability can be obtained by different means, which include:
* Providing data documentation in a machine-readable format;
* Using metadata standards or metadata models;
* Providing open access (e.g. open data repository);
* Providing access through application;
* Providing online data visualisation/analysis tool for the data, to help researchers to explore data in order to determine its appropriateness for their purposes;
* Providing online links between research data and related publications or other related data;
* Providing data visibility through a communication system (e.g. social media, website).
All deliverables will be listed on the OH-EJP website (www.onehealthejp.eu),
and the ways by which OH-EJP output can be accessed will be communicated via
social media and other suitable channels to increase visibility of OH-EJP
work. For public deliverables, a link will be available between the OH-EJP
website and the appropriate open repositories where the data is submitted.
Some repositories, such as Zenodo, also provide social media links.
According to the EC, _metadata_ is a systematic method for describing
resources and thereby improving access to them; in other words, it is data
about data. Metadata provides information that makes it possible to make sense
of data (e.g. documents, images, datasets), concepts (e.g. classification
schemes) and real-world entities (e.g. organisations, places). Different
types of metadata exist for different purposes, such as descriptive metadata
(i.e. describing a resource for purposes of discovery and identification),
structural metadata (i.e. providing data models and reference data) and
administrative metadata (i.e. providing information to help manage a
resource). In our case, we are mainly interested in describing a resource for
purposes of discovery and identification.
Each OH-EJP partner will use metadata standards or metadata models appropriate
to their own data, which will be described in the individual project DMP. The
DMP team will provide an inventory of metadata standards or metadata models
related to OH-EJP data. The first-call integrative projects, ORION and
COHESIVE, have already identified gaps in metadata standards in their domains
of expertise, and it will be part of their objectives to develop new metadata
frameworks. Research projects for which appropriate metadata standards do not
exist will take advantage of existing metadata frameworks, adapting them to
describe their data according to their needs.
To provide metadata on the web, two approaches/syntaxes exist for representing
data and resources, i.e. XML (tree/container approach) and RDF (triple-based
approach). Different metadata schemes exist for both the XML and RDF approaches. A
metadata scheme is a labelling, tagging or coding system used for recording
catalogue information or for structuring descriptive records. A metadata
scheme establishes and defines data elements and the rules governing the use
of data elements to describe a resource.
### Specify standards for metadata creation (if any)
Because of the lack of appropriate metadata standards, it is expected that the
OH-EJP integrative projects will need to develop metadata frameworks in the
course of their project. For the on-going first call projects, the following
approaches were reported:
* ORION project will explore how metadata standards provided by the UNECE High-Level Group for the Modernisation of Official Statistics, like the Generic Statistical Information Model (GSIM; see https://statswiki.unece.org/display/gsim/Generic+Statistical+Information+Model) or the Generic Statistical Business Process Model (GSBPM; see https://statswiki.unece.org/display/GSBPM), can be used to create a mapping between metadata standards established in the different OH sub-domains.
* COHESIVE will develop a metadata structure based on the framework of EpiJSON (_Epidemiological JavaScript Object Notation_). The framework provides a unified data format to facilitate the use and structured interchange of epidemiological information in an unambiguous way, linking genomic data to information on the type of disease, the sample collection (who, where, when), the source of the sample (patient, food item, animal, and their identification and biological details), the connections between the various sources of the samples to define the outbreak, and the inter-relations between the various components of the outbreak.
Some criteria will be ascertained to ensure best practice in metadata
management:
* Availability: metadata need to be stored where it can be accessed and indexed so it can be found;
* Quality: metadata need to be of consistent quality so users know that they can be trusted;
* Persistence: metadata need to be kept over time;
* Open License: metadata should be available under a public domain license to enable their reuse.
### Outline the identifiability of data and refer to standard identification
mechanism
The assignment and management of persistent identifiers to the data will be
assessed in the course of the project and will be described in the project
DMPs. It is recommended to use Uniform Resource Identifiers (URIs) to facilitate
links between different data. Most repositories automatically provide
persistent identifiers such as DOIs, e.g. the functionality provided by the
Zenodo platform.
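A DOI, once assigned, can be expressed as a resolvable URI by prefixing the standard doi.org resolver. The sketch below illustrates this normalization; the DOI shown is a placeholder, not a real OH-EJP identifier:

```python
def doi_to_uri(doi: str) -> str:
    """Turn a bare DOI into a resolvable HTTPS URI via the doi.org resolver."""
    doi = doi.strip()
    # Accept DOIs already written as URIs or with a "doi:" scheme, and normalize.
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
    return "https://doi.org/" + doi

# Placeholder DOI for illustration only.
print(doi_to_uri("doi:10.5281/zenodo.0000000"))
# -> https://doi.org/10.5281/zenodo.0000000
```

Expressing identifiers as URIs in this way is what allows different datasets and publications to be linked to one another, as recommended above.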
### Outline the approach towards search keyword
To facilitate queries by keywords, metadata elements need to be aligned
across the OH-EJP. Therefore, the metadata elements must include the term
“OHEJP”, to facilitate finding of OH-EJP data. The repository selected for
OH-EJP deliverables and data should provide a filtering system based on the
metadata elements, e.g. SPARQL, a standardised language for querying RDF data
that can also query linked data.
### Naming conventions and clear versioning
The naming convention for deliverables was stated in the OH-EJP Grant
Agreement of September 2017, which is in the format: “D Name of deliverables”.
For other data generated by the OH-EJP Consortium, the recommended naming
convention consists of three mandatory parts separated by underscores:
* A prefix with a short and meaningful name of data
* A root composed of:
* the acronym of the project
* the acronym of the program “OHEJP”
* A suffix indicating the date of the last upload into the repository in YYYYMMDD format.
Because of the co-funding setup of the programme, and because some repositories
have their own naming conventions, the above naming convention should be
regarded as a recommendation rather than compulsory.
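The recommended convention can be sketched as a small helper; the ordering of the parts (prefix, project acronym, programme acronym, date suffix) follows the description above, and the example values are hypothetical:

```python
from datetime import date

def dataset_name(short_name: str, project: str, upload_date: date) -> str:
    """Build a dataset name following the recommended OH-EJP convention:
    <prefix>_<project acronym>_OHEJP_<YYYYMMDD>.
    The exact ordering of the parts is an assumption based on the DMP text."""
    return "_".join([short_name, project, "OHEJP", upload_date.strftime("%Y%m%d")])

# Hypothetical dataset from the ORION project, uploaded 15 September 2018.
print(dataset_name("amrsurvey", "ORION", date(2018, 9, 15)))
# -> amrsurvey_ORION_OHEJP_20180915
```

Encoding the convention in a helper like this makes names consistent across uploads and keeps the date suffix in the required YYYYMMDD format automatically.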
## Making data openly accessible
The data and metadata of OH-EJP should by default be made openly available to
European Commission services and European Agencies; EU National Bodies; OH-EJP
consortium; and the general public. According to the _H2020 online manual_,
open access refers to the practice of providing online access to scientific
information that is free of charge to the end-user and reusable. In the
context of research and innovation, 'scientific information' can mean: peer-
reviewed scientific research articles (published in scholarly journals), or
research data (data underlying publications, curated data and/or raw data).
Open access to scientific publications means free online access for any user.
The costs of open access publishing are eligible, as stated in the Grant
Agreement. Open access to research data refers to the right to access and
reuse digital research data under the terms and conditions set out in the
Grant Agreement. Users should normally be able to access, mine, exploit,
reproduce and disseminate openly accessible research data free of charge.
### Specify which data will be made openly available; if some data is kept
closed provide rationale for doing so
Data, including deliverables, produced in the course of the project should be
made openly available as the default, while respecting compliance with
European and national ethic-legal framework on personal data protection.
Depending on the deliverables, restrictions might apply for specific reasons
that will be stated in the overarching DMP and in project DMPs for each
research or integrative project. Similarly, restrictions can be foreseen for
other scientific data used/generated during the projects and will be described
in specific project DMPs. The rationale for keeping data closed might include:
* Open access is incompatible with rules on protecting personal data: protection of the personal right needs to be ascertained either by avoiding open access to sensitive and personal data, or by anonymizing the data if relevant and feasible.
* Open access is incompatible with the obligation to protect results that can reasonably be expected to be commercially or industrially exploited: In general, open access does not affect the decision to exploit research results commercially, e.g. through patenting. The decision on whether to publish through open access must come after the more general decision on whether to publish directly or to first seek protection.
* Open access is incompatible with the need for confidentiality in connection with data from external owners/providers: Because of the co-funding setup of OH-EJP, partners might use data collected or generated by or with co-funders. If relevant for other research partners, agreements with co-funders will be discussed to make those data accessible to other OH-EJP partners, while respecting compliance with European and national ethic-legal framework.
* Open access is incompatible with the need for confidentiality in connection with security issues.
* Open access would mean that the project's main aim might not be achieved.
To help partners in their decision to use open access, restricted access or
keeping data closed, the DMP team will provide a decision tree. Access to
publications or research data will thus be data-specific. The decision to
select a specific type of access (open, restricted or closed) will be the
responsibility of the individual project partners that
collected/processed/generated the data, and the rationale for keeping data
closed will be described in the project DMPs.
### Specify how the data will be made available
Deliverables will be made findable and accessible through the OH-EJP platform.
Some deliverables will be kept confidential, but most will be made publicly
available. Public deliverables will be linked to the open repository where
they were deposited in machine-readable format. For example, data in machine-
readable format (e.g. JSON) will be uploaded in the _sub-community One-Health
EJP on_ _OpenAIRE platform_ h osted by Zenodo, and data can be found through
a web browser and downloaded by a potential interested user. Regarding peer-
reviewed publications, the OH-EJP Grant Agreement provides a gold open access
opportunity. Similar accessibility processes are available for other research
data collected or generated during the program.
### Specify what methods, codes or software tools are needed to access the
data
For most data, only standard software, e.g. web browsers, PDF readers and
text readers, will be needed. However, certain data, such as genomic data,
might require specialised tools and languages for access. Specialised
tools, such as FoodChain-Lab, might also be required to generate data.
Additionally, one of the goals of the COHESIVE project is to develop a new
web-based data collection and analysis tool; documentation for newly developed
tools will be provided. Where non-standard tools are used, procedures to
access the data will be documented in the project DMPs.
### Specify where the data and associated metadata, documentation and code
are deposited
Data should be submitted to an appropriate repository, i.e. a place where
digital information (publications, reports, data, metadata) can be stored.
The partners of the OH-EJP consortium consider this the best means of making
these data FAIR. The DMP team recommends submitting data to discipline-specific
or community-recognized data repositories where possible, and otherwise to a
generalist repository (such as Dryad Digital Repository, figshare, Harvard
Dataverse, Open Science Framework, GitHub). Besides making data FAIR, criteria
to select appropriate repositories include:
* Be broadly supported and recognized within the scientific community
* Ensure long-term persistence and preservation of datasets
* Provide expert curation
* Provide stable identifiers for submitted datasets
* Allow public access to data without unnecessary restrictions
Recommended data repositories can be filtered and accessed through the
_OpenAIRE portal_ and the _Scientific Data FAIRsharing_ collection. The OpenAIRE
services provide tools to validate repositories/journals and register them in
the OpenAIRE network. However, the filtering system provided by OpenAIRE is
limited to data source type (such as publication repository, institutional
repository), compatibility, and country, but so far it is not possible to
filter by topic. In areas where well-established subject or data-type specific
repositories exist, partners should submit their data to the appropriate
resources. To facilitate the selection of the repositories, the DMP team will
develop a list of repositories in collaboration with partner experts in the
different fields. This list will be evaluated with regard to the criteria
above and to FAIR requirements, and will have a filtering tool on topics
relevant for OH-EJP data. A preliminary example is shown below:
* Biological sciences: nucleic acid sequences (e.g. European Nucleotide Archive (ENA), GenBank), functional genomics bridging disparate research disciplines (European Genome-Phenome Archive (EGA)), metabolomics (MetaboLights), proteomics (PRIDE);
* Modelling: mathematical and modelling resources (BioModels Database, Kinetic Models of Biological Systems (KiMoSys), Network Data Exchange (NDEx));
* Health sciences: immunology (ImmPort), pathogen-focused resources (Eukaryotic Pathogen Database Resources (EuPathDB), VectorBase), repositories suitable for restricted data access (Research Domain Criteria Database (RDoCdb)).
Additionally, the DMP team has set up a _sub-community One-Health EJP on
OpenAIRE platform_. Some projects (e.g. ORION) have developed their own
systems, such as the Virtual Research Environment (VRE), which is hosted on
D4Science.org.
### Specify how access will be provided in case there are restrictions
OH-EJP deliverables and data can either be public or confidential. Some
results might be restricted in their use. Sensitive and personal data can be
made accessible only following the GDPR requirements.
The aim is to reach the highest level of GDPR compliance, amongst others by:
* Relying on the EU authentication platform and security protocols for data sharing.
* Applying a strict policy in granting and revoking access to the data.
* Logging of user identity during data access, download, and upload, including version control.

As several repositories will be used to store data, the policy on how to grant access to restricted results will be developed over the course of the project and described in project DMPs.
By default, data generated with OH-EJP co-fund and accompanying metadata are
directly accessible for use within OH-EJP. For sensitive data, the data
owner/data provider shall agree to the transfer of the data, at a high level of
granularity, to an OH-EJP defined repository, using appropriate measures to
anonymise the data. Prior to generation of the data, the data owner/data
provider shall confirm the ethico-legal compliance of the study in which new
data are generated.
For existing data, not generated with OH-EJP co-funding, the data owner/data
provider specifies the level of granularity at which data will be stored and/or
transferred: anonymised single-measurement data, pseudonymised single-
measurement data, or aggregated data. The data owner/data provider indicates
for each level of granularity whether the data are directly accessible for use
within the OH-EJP. In case the data owner/data provider indicates that the data
are not directly accessible for use within OH-EJP, the data owner/data
provider will be asked for approval when consortium members request access to
the data to meet the goals of a particular objective.
## Making data interoperable
To generate interoperable data, the OH-EJP consortium will liaise with the
_Joinup platform_. Joinup is a collaborative platform created by the
European Commission and funded by the European Union via the Interoperability
solutions for public administrations, businesses and citizens (ISA²)
programme. It offers several services that aim to help e-Government
professionals share their experience with each other, and it also offers
support to find, choose, re-use, develop and implement interoperability
solutions.
### Specify what data and metadata vocabularies, standards or methodologies
you will follow to facilitate interoperability
At present, no specific data and metadata vocabularies are available for the
One-Health surveillance domain. A common vocabulary, code lists and mapping of
pre-defined values for harmonising the descriptions of metadata and data will
be defined in the course of the program, specifically through the on-going
integrative projects, i.e. ORION and COHESIVE, in collaboration with all OH-
EJP partners.
In brief, the steps to obtain interoperable data that will be evaluated during
the project include:
* Harvesting metadata standards from different Open Data portals. Different metadata standards exist, such as:
  * DOI for published material (text, images),
  * DataCite for data archives,
  * CERIF for scientific data sets,
  * FGDC/CSDGM for biological profiles,
  * Genome Metadata, ISA-Tab, or GEO for genome data,
  * INSPIRE for geographical data,
  * _FOAF_ for people and organisations,
  * _SKOS_ for concept collections,
  * _ADMS_ for interoperability assets,
  * Data Catalog Vocabulary (_DCAT_).
The metadata standard DOI will be available for the OH-EJP sub-community
platform and will therefore be the default metadata standard for OH-EJP
publications. A comprehensive list of metadata standards useful to OH-EJP will
be developed to help consortium partners select appropriate metadata for their
specific needs. Most repositories provide an interface to enter metadata.
* The metadata will be transformed to an appropriate syntax, such as the Resource Description Framework (RDF). RDF is a syntax for representing data and resources on the web; it breaks every piece of information down into triples: subject, predicate and object.
* Harmonise the RDF metadata produced in the previous steps with _DCAT-AP_.
* To allow exchange between systems, metadata should be mapped to a common model so that the sender and the recipient share a common understanding of the meaning of the metadata. On the scheme level, metadata coming from different sources can be based on different metadata schemes, e.g. DCAT, schema.org, CERIF, or an internal model. On the data (value) level, the metadata properties should be assigned values from different controlled vocabularies or syntaxes, e.g. dates in ISO 8601 (“20130101”) versus W3C DTF (“2013-01-01”). With Zenodo, it is possible to specify subjects from a taxonomy or controlled vocabulary, i.e. to link terms to appropriate ontologies (e.g. _GACS_).
* The last step is to publish the descriptive metadata as Linked Open Data. Data should be published on a repository offering a data catalogue with filtering functionality based on metadata elements. It is also recommended to create linked data: linking data to other data provides further context. Data can be linked to URIs from other data sources, using open standards such as RDF (without necessarily being publicly available under an open licence). The foundations of linked data are Uniform Resource Identifiers (URIs) for naming things, the Resource Description Framework (RDF) for representing data and resources, and SPARQL, a standardised language for querying RDF data, for querying linked data. Some examples of SPARQL initiatives at EU level are the EU Open Data Portal SPARQL endpoint and the DG SANTE SPARQL endpoint.
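The triple-based approach described in the steps above can be illustrated without any specialized library: each statement is a (subject, predicate, object) triple, here serialized in the simple N-Triples syntax. The predicates are standard Dublin Core/DCAT terms, but the dataset URI and values are placeholders, not real OH-EJP records:

```python
# Standard vocabulary namespaces (Dublin Core Terms, DCAT, RDF).
DCT = "http://purl.org/dc/terms/"
DCAT = "http://www.w3.org/ns/dcat#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

subject = "https://example.org/dataset/ohejp-demo"  # placeholder dataset URI

triples = [
    (subject, RDF + "type", DCAT + "Dataset"),
    (subject, DCT + "title", '"OH-EJP demo dataset"'),
    (subject, DCT + "issued", '"2019-01-01"'),   # W3C DTF / ISO 8601 date value
    (subject, DCAT + "keyword", '"OHEJP"'),      # keyword agreed across the programme
]

def to_ntriples(triples):
    """Serialize (subject, predicate, object) triples as N-Triples lines."""
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith('"') else "<%s>" % o  # literal vs URI object
        lines.append("<%s> <%s> %s ." % (s, p, obj))
    return "\n".join(lines)

print(to_ntriples(triples))
```

Note how the "OHEJP" keyword triple implements the search-keyword recommendation from section 2.1, and how the date literal uses the W3C DTF form mentioned above.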
### Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability. If
not, will you provide mapping to more commonly used ontologies?
As mentioned in the previous section, there is a lack of metadata standards
for the One-Health surveillance domain. A common vocabulary, code lists and
mapping of pre-defined values for harmonising the descriptions of metadata and
data will be defined in the course of the program.
If there is a lack of metadata standards, the consortium will reuse existing
controlled vocabularies for providing metadata to resources as far as
possible. A controlled vocabulary is a predefined list of values to be used as
values for a specific property in your metadata schema. In addition to careful
design of schemas, the value spaces of metadata properties are important for
the exchange of information, and thus interoperability. Controlled
vocabularies for reuse can be found on the Joinup (http://joinup.ec.europa.eu)
and Linked Open Vocabularies (http://lov.okfn.org) platforms.
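In practice, a controlled vocabulary is simply a predefined set of allowed values for a metadata property, which can be enforced with a straightforward check. The properties and value lists below are illustrative assumptions, not an official OH-EJP code list:

```python
# Illustrative controlled vocabularies for hypothetical metadata properties;
# the allowed values are assumptions, not an official OH-EJP code list.
CONTROLLED_VOCABULARIES = {
    "sector": {"human", "animal", "food", "environment"},
    "access": {"open", "restricted", "closed"},
}

def validate(metadata: dict) -> list:
    """Return (property, value) pairs that violate the controlled vocabularies."""
    errors = []
    for prop, allowed in CONTROLLED_VOCABULARIES.items():
        if prop in metadata and metadata[prop] not in allowed:
            errors.append((prop, metadata[prop]))
    return errors

print(validate({"sector": "food", "access": "open"}))  # -> []
print(validate({"sector": "plant"}))                   # -> [('sector', 'plant')]
```

Constraining value spaces in this way is exactly what makes metadata from different partners comparable, and thus interoperable.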
If there is no suitable authoritative reusable vocabulary for describing data,
conventions will be used for describing the vocabulary: RDF Schema (RDFS)
and/or Web Ontology Language (OWL). The best practice when new terms are
required is to define their range and domain. A range states that the values
of a property are instances of one or more classes. A domain states on which
classes a given property can be used. The new vocabulary should be published
within a stable environment designed to be persistent. Existing resources from
previous EU projects, EFSA and ECDC will serve as the basis for this work.
The ORION project will create a data and metadata knowledge model for
surveillance data, in the form of the _Animal Health Surveillance Ontology_.
This will aggregate existing ontological models, and further model concepts
needed to connect the multi-disciplinary sources of information needed in
disease epidemiology and surveillance. An example of another interesting
ontology is
the Global Agricultural Concept Scheme (_GACS_), which is multilingual
and includes in its pool of interoperable concepts the identities related to
agriculture from AGROVOC, CAB and NAL Thesauri, which are maintained,
respectively, by FAO of the United Nations, Centre for Agriculture and
Biosciences International (CABI) and US National Agricultural Library (NAL).
## Increase data re-use (through clarifying licences)
### Specify how the data will be licenced to permit the widest reuse possible
For public data, the reuse of the data will be possible through the open
repositories where they will be stored. In addition, the integrative project
COHESIVE will develop tools and software, which will be distributed as open
source software ensuring their widest reuse.
### Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed
The specific decision on an embargo for research data will be taken by the
responsible OH-EJP partners. Scientific research articles should be openly
accessible at the latest upon publication if published in an Open Access
journal, or within six months of publication. For research data, open access
should by default be provided when the associated research paper is available
in open access.
### Specify whether the data produced and/or used in the project is useable
by third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why
Public data will be available from open repositories, and therefore reusable
by third parties, even after the end of the project. For confidential data,
access to personal data will be compliant with the GDPR, while data involving
intellectual property will be discussed between the relevant partners, and
decisions will be taken according to European and national rules. This
section will be further detailed in the project DMPs.
### Specify the length of time for which the data will remain re-usable
Regarding data stored on the _sub-community One-Health EJP on OpenAIRE
platform_, all files stored within the repository shall be retained after the
project to meet the requirements of good scientific practice. A strategy for
the storage of the files after the project will be included in the DMP in the
course of the program.
For data stored on other repositories, researchers, institutions, journals and
data repositories have a shared responsibility to ensure long-term data
preservation. Partners must commit to preserving their datasets, on their own
institutional servers, for at least five years after publication. If, during
that time, the repository to which the data were originally submitted
disappears or experiences data loss, the partners will be required to upload
the data to another repository and publish a correction or update to the
original persistent identifier if required.
### Describe data quality assurance processes
For the OH-EJP consortium, it is essential to provide good-quality data. This
will be ensured through various methods. Firstly, some partner institutes have
existing data quality assurance processes, which can be described in their
quality manuals. Secondly, publications will be disseminated through peer-
reviewed journals and, similarly, research data will be deposited in
repositories providing a curation system appropriate to the data. The
development of a curation system for the _sub-community One-Health EJP on
OpenAIRE platform_ will be discussed by the PMT.
Additionally, it is part of some projects' objectives to develop guidance
documents to assess data quality. These guidelines will be tested and
optimised over the course of these specific projects, and will be validated
using appropriate approaches. For example, the OH Surveillance Codex, which is
developed by the ORION project, intends to serve as quality assurance tool for
One Health data in the future, and this codex will be validated through pilot
studies.
### Specify the data update approach (section not present in H2020 template)
Important datasets often grow and evolve, and we need to ensure that datasets
can be updated while also maintaining a stable version of the data as
published. If no versioning mechanism is available in the data repository, it
might be appropriate to deposit a static version of the data to an appropriate
repository, while hosting in parallel a dynamic version in a project-specific
resource. Both versions of the dataset should be findable.
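As a minimal sketch of this approach, assuming a simple `_v<N>` suffix convention (which is illustrative, not prescribed by this DMP), the next static version name can be derived from the versions already deposited:

```python
def next_static_version(existing: list, dataset: str) -> str:
    """Return the next immutable version name for `dataset`, given the
    names already deposited (e.g. 'survey_v1', 'survey_v2').
    The '_v<N>' suffix convention is illustrative, not prescribed."""
    versions = [
        int(name.rsplit("_v", 1)[1])
        for name in existing
        if name.startswith(dataset + "_v") and name.rsplit("_v", 1)[1].isdigit()
    ]
    # A new dynamic dataset starts at version 1.
    return f"{dataset}_v{max(versions, default=0) + 1}"
```

Each static snapshot keeps its own persistent name, while the dynamic version continues to evolve in the project-specific resource.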
# ALLOCATION OF RESOURCES
## Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs
Costs related to open access to research data are eligible as part of the
Horizon 2020 grant if compliant with the Grant Agreement conditions.
## Clearly identify responsibilities for data management in your project
To ensure best practices and FAIR principles in the data management of each
project, specific project DMPs will complement the present overarching DMP.
For the overarching OH-EJP DMP, Sciensano ( [email protected]_ ) is the
focal point regarding DMP and will liaise with the project management team and
integrative and research projects. Each partner institute will be responsible
for managing the data that they use, process or generate in the project.
Additionally, each partner institution will transmit to the OH-EJP DMP team
the names of a task leader and a deputy task leader from their IT and/or
epidemiology departments. Those designated leaders will be responsible for
the development of the DMPs in which their institution is involved. Guidelines
and training will be provided by the joint integrative research work package
to develop DMP competences within OH-EJP partners.
Currently, the DMP team is responsible for assessing a sustainable strategy
and planning for the development of the most appropriate OH-EJP repository.
Once it is clearly identified, responsibilities for data management with
regard to the OH-EJP repository will be defined using the RACI model
(Responsible, Accountable, Consulted, Informed). The responsibilities might
encompass the initial set-up of the data repository, its maintenance, security
assessment, creation of the repository structure (folders/sub-folders for each
user group), development of instructions and support to OH-EJP partners
regarding the repository structure, creation and management of the users and
user groups database, assignment of access, upload and download rights for
each user group, ensuring compliance with personal data protection rules, and
timely communication with OH-EJP partners about any possible compliance issue.
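One way to record such assignments, once defined, is a simple RACI matrix; the tasks and role names below are placeholders for illustration, not actual OH-EJP decisions:

```python
# Hypothetical RACI matrix for repository-related tasks; all entries are
# placeholders and do not reflect actual OH-EJP role assignments.
raci = {
    "repository set-up":     {"R": "IT task leader", "A": "DMP team", "C": "Partners", "I": "PTM"},
    "security assessment":   {"R": "IT task leader", "A": "DMP team", "C": "PTM",      "I": "Partners"},
    "user group management": {"R": "DMP team",       "A": "DMP team", "C": "Partners", "I": "PTM"},
}

def accountable_for(task: str) -> str:
    """Look up who is Accountable (the 'A' in RACI) for a given task."""
    return raci[task]["A"]
```

Such a matrix makes it easy to verify that each task has exactly one Accountable party.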
## Describe costs and potential value of long term preservation
Currently, no need for additional resources is envisaged beyond the duration
of the project to handle data. However, different strategies for data storage
are under investigation and will be included in the DMP later.
# DATA SECURITY
**Point addressed:**
**Address data recovery as well as secure storage and transfer of sensitive
data**
To be fully compliant with the GDPR and any additional national legislation,
the OH-EJP will develop an appropriate security protection strategy as the
project progresses. For instance, data confidentiality and integrity will be
protected during storage and transfer by means of tamper-proof logging
mechanisms and/or pseudonymisation techniques, and by means of secure data
transfer mechanisms, such as TLS or SFTP. Apart from the GDPR, the
consortium partners regard privacy and data protection as a fundamental
principle and hence apply a strict policy on this matter.
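For illustration only, a TLS client context with certificate verification can be created with the Python standard library; this is a generic sketch of secure transfer, not the consortium's chosen mechanism:

```python
import ssl

def secure_client_context() -> ssl.SSLContext:
    """Create a TLS client context with certificate verification and a
    modern minimum protocol version (generic sketch, not an OH-EJP
    implementation decision)."""
    ctx = ssl.create_default_context()  # verifies peer certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return ctx
```

The resulting context can then be passed to any socket or HTTP client that supports TLS.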
# ETHICAL ASPECTS
**Point addressed:**
**Ethical or legal issues that can have an impact on data sharing and that
were not covered in the ethics review**
Ethical aspects are largely covered in the context of the ethics review, the
ethics section of the Description of the Action and the ethics deliverables.
The storage and transfer of data on human subjects to the repositories used by
the consortium are only considered in case of informed consents, ethics
approval, compliance with GDPR and – when applicable - approval by local data
protection authorities.
Partners are expected to describe in detail any controls or limitations on
access to or usage of human data in the ethics section of the Project DMP. The
process by which researchers may apply for access to the data, and the
conditions under which such access may be granted, should similarly be
described. The ethics self-assessment for each JRP and JIP has been evaluated
by ethics advisors. Partners will follow recommendations received from the
ethics advisors, as described in the Description of the Action.
# OTHER
## Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
Some partner institutes might have existing data management processes, that
will be followed to ensure OH-EJP data quality and security. Additionally,
each OH-EJP project will develop its own DMP that will complement the present
overarching DMP, and will provide further details regarding specific data
collected and/or generated in the course of the project. The development of
the project DMPs will support the development of good research data practice
among partner institutes.
# ACTION PLAN
Table 1 provides a summary of the actions to be performed to address
unresolved issues of the present DMP.
**ACTION TABLE 1**
**FAIR Data Management at a glance: issues to cover in Horizon 2020 DMP and
related actions to perform**
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Issues to be addressed**
</th>
<th>
</th>
<th>
**Actions**
</th> </tr>
<tr>
<td>
**1\. Data summary**
</td>
<td>
1. Explain the relation to the objectives of the project
2. Specify the types and formats of data generated/collected
</td>
<td>
</td>
<td>
Detailed data type in the list of deliverables
List of data
collected/generated
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
3\.
</th>
<th>
Specify if existing data is being re-used (if any)
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Specify the origin of the data
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
State the expected size of the data (if known)
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
Outline the data utility: to whom will it be useful
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
2. **FAIR Data**
2.1. Making data findable, including provisions for metadata
</td>
<td>
1\.
2\.
3\.
4\.
5\.
6\.
</td>
<td>
Outline the discoverability of data (metadata provision)
Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?
Outline naming conventions used
Outline the approach towards search keyword
Outline the approach for clear versioning
Specify standards for metadata creation (if any). If there are no standards in
your discipline describe what type of metadata will be created and how
</td>
<td>
</td>
<td>
URL of OH-EJP website
Inventory of relevant metadata standards and models
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
</td>
<td>
1\.
2\.
3\.
4\.
5\.
</td>
<td>
Specify which data will be made openly available? If some data is kept closed
provide rationale for doing so
Specify how the data will be made available
Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?
Specify where the data and associated metadata, documentation and code are
deposited
Specify how access will be provided in case there are any restrictions
</td>
<td>
</td>
<td>
Adding to the deliverables and data tables two fields: one public/confidential
flag and one rationale for confidentiality
Developing a decision tree to choose between open access, restricted access,
or keeping data closed
List of repositories with a filtering system based on topics
</td> </tr>
<tr>
<td>
2.3. Making data
interoperable
</td>
<td>
1\.
</td>
<td>
Assess the interoperability of your data.
Specify what data and metadata vocabularies, standards or methodologies you
will follow to facilitate interoperability.
</td>
<td>
</td>
<td>
Liaise with appropriate support to ensure sustainability
List of metadata standards useful to OH-EJP
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow interdisciplinary interoperability? If not,
will you provide mapping to more commonly used ontologies?
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
</td>
<td>
1\.
2\.
3\.
4\.
5\.
</td>
<td>
Specify how the data will be licenced to permit the widest reuse possible
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed
Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why
Describe data quality assurance processes
Specify the length of time for which the data will remain re-usable
</td>
<td>
</td>
<td>
Set up a curation system for the _sub-community OneHealth EJP on OpenAIRE_
_platform_
</td> </tr>
<tr>
<td>
**3\. Allocation of resources**
</td>
<td>
</td>
<td>
1\.
2\.
3\.
</td>
<td>
Estimate the costs for making your data FAIR. Describe how you intend to cover
these costs
Clearly identify responsibilities for data management in your project
Describe costs and potential value of long term preservation
</td>
<td>
</td>
<td>
List of managers for project DMPs and institutional
DMPs
</td> </tr>
<tr>
<td>
**4\. Data security**
</td>
<td>
</td>
<td>
1\.
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive data
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**5\. Ethical aspects**
</td>
<td>
</td>
<td>
1\.
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**6\. Other**
</td>
<td>
1\.
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td>
<td>
</td>
<td>
</td> </tr> </table>
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0468_SPICE_713481.md
# Data summary
The main objective of the SPICE project is to realize a novel integration
platform that combines photonic, magnetic and electronic components. To align
with the objective of the project, all Partners have been asked to provide
their inputs to this DMP document on what data are going to be collected, in
which format, how they are going to be stored, how they are going to be
deposited after the project, and, finally, what is the estimated size.
Data management is essential for SPICE due to the synergistic approach taken
in this project. In a hierarchical manner, data from each Partner and/or WP
will be required by another Partner and/or WP to build on. For example,
material characterization data from WP1 will be used in the magnetic tunnel
junction design in WP2. These data will also be used in the development of
theoretical models and simulation tools in WP5. All these data will be
required to support the development of an architecture-level simulation and
assessment, and an experimental demonstrator in WP4. Since the various WPs are
managed by various Partners, interaction and data exchange is of key
importance.
The following main data types and formats are identified, alongside their
origin, expected size and usefulness:
* Laboratory experimental characterization data will typically be stored in ascii or binary format, in a (multidimensional) array. These include the characterization of magneto-optic materials, magnetic tunnel junction (MTJ) elements, photonic circuits, and the demonstrator. Data _originate_ from laboratory instrumentation, including lasers, optical spectrum analyzers, electrical source meters, thermo-electric control elements, power meters, etc. Data _size_ depends on the resolution, the number of devices measured, etc., but typically does not exceed the ~1MB level per dataset and the ~TB level overall. The _usefulness_ is the validation and quantification of performance, which in turn can validate models.
* Simulation data will be stored in simulation-tool-specific formats. This includes the QW Atomistix tool, the Verilog tool and the Lumerical tool, for example. Some tools use an open file format, others are proprietary. In all cases, final simulation results can be exported to ascii or binary, if required for communication and documentation. The data _originate_ from running the simulation algorithms, with appropriate design and material parameters. Data _size_ again depends on the resolution of parameter sweeps and varies widely, although it is overall not expected to exceed the ~TB level. The _usefulness_ is to provide a quantified background for the design of materials, devices, and circuits, as well as helping with the interpretation and validation of experimental results.
* Process flows are used to describe the fabrication process in detail, of either material growth/deposition, MTJ fabrication and/or PIC fabrication. These are foundry- and tool-specific and are stored in either a text document, e.g., “doc(x)” or similar, or a laboratory management tool. These typically _originate_ from a set of process steps, which are tool-specific, e.g., dry etching, wet etching, metal sputtering or evaporation, oxide deposition, etc., and are compiled by process operators and process flow designers. The _size_ is limited to a list of process steps in text, possibly extended with pictures to illustrate the cross-sections, i.e., not exceeding ~10MB per file. The _usefulness_ is to store process knowledge and to identify possible issues when experimental data indicate malfunction. Existing knowledge in processing, including process flows, will be _reused_ .
* Mask design data are stored in design-tool specific format, but are eventually exported to an open format like “gds”. Their _origin_ depends on how these masks are designed. These can be drawn directly by the designer, or the designer can use a process-design kit (PDK) to use pre-defined building blocks. Data _size_ depends on mask complexity, but typically does not exceed ~100MB per mask set. The _usefulness_ is the identification of structures on a mask, during experimental characterization, also by other Partners and in other WPs, as well as – obviously – providing the necessary input for lithography tools. Together with a mask design, a design report, showing details on the structures and designs and a split chart, should be included. This should also refer to the used process flow. The format is typically text based, e.g., “doc(x)”, and its size does not exceed 10MB.
* Dissemination and communication data take the form of reports, publications, websites and video, using the typical open formats, like “pdf” and “mpeg”. The _origin_ is the effort of the management and dissemination WPs, i.e., these are written or taped by the consortium Partners. The _usefulness_ is the communication between Partners, between the Consortium and the EC, and with the various target audiences outside the Consortium, including students, peers and general public.
# FAIR data
## Making data findable, including provisions for metadata
Most of the SPICE datasets outlined above are not useful by themselves; they
depend on context, i.e., the metadata have to be provided to interpret these data,
possibly by connecting these to other datasets. This is typically done using
logbooks or equivalent. This is necessary for experimental datasets, obtained
in the laboratory. For simulation data, obtained with commercial simulation
tools, the metadata are typically part of the data file, although not directly
visible, unless the file is opened. So, also in that case, a logbook is
required. In general, the SPICE consortium aims to provide accessible
logbooks, design reports or equivalent as a means to make datasets findable
_within_ the Consortium. These logbooks will list all relevant datasets.
Datasets and logbooks will be stored on shared folders (on a server), if
relevant for other Partners. Logbooks will have a version number to allow for
adding datasets.
A typical example is a chip design report, which will include a reference to
the process flow (including version number) and a reference to the mask file,
including a detailed description of the designs, as well as an overview of the
simulations, including, e.g., design curves, and with reference to all
simulation datasets.
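A logbook entry of this kind can be as simple as one structured record per dataset; the field names below are illustrative, not a Consortium metadata standard:

```python
import json

# Illustrative logbook record linking a dataset file to its context;
# the field names are hypothetical, not a SPICE metadata standard.
entry = {
    "dataset": "RU_WP1_2_Magneto_Optic_Interaction_v1",
    "instrument": "optical spectrum analyzer",
    "related_simulations": ["QW_WP5_1_Simulation_and_Design_Tools_v1"],
    "logbook_version": 2,
}
# Serialising with sorted keys keeps diffs stable across logbook versions.
record = json.dumps(entry, sort_keys=True)
```

Appending one such line per dataset yields a machine-readable logbook that can be searched across shared folders.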
To make the datasets SPICE _findable_ , we use the following naming convention
for all the datasets produced within SPICE: the naming starts with the WP
number, then the WT number within the WP and finally the dataset title is
added. These are all separated by underscore, i.e.,
<Beneficiary>_<WP#>_<WT#>_<dataset_title>). For example, if the data is
related to the dataset of WP1 (i.e. Magneto-Optic Interaction) with the WT
number of 2, with the dataset_title of “Magneto-Optic_Interaction” from the
beneficiary RU, then the naming will be “RU_WP1_2_Magneto_Optic_Interaction”.
A version number will be added to the end of the title if required.
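The convention above can be sketched as a small helper function (a non-normative illustration of the stated scheme):

```python
def dataset_name(beneficiary, wp, wt, title, version=None):
    """Build a SPICE dataset name following the stated convention:
    <Beneficiary>_WP<WP#>_<WT#>_<dataset_title>, with an optional
    version number appended at the end."""
    name = f"{beneficiary}_WP{wp}_{wt}_{title}"
    if version is not None:
        name = f"{name}_v{version}"
    return name
```

For example, `dataset_name("RU", 1, 2, "Magneto_Optic_Interaction")` reproduces the name given in the text above.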
The Consortium recognizes that some data are confidential and cannot be shared
even within the Consortium. This should not prevent communication and
dissemination, though, and measures should be taken to allow for maximum
information flow, while protecting sensitive information. If, for example, the
exact process details of a component on a chip are confidential, some critical
gds layers can be removed from the shared dataset and/or a so-called ‘black
box’ can replace such components. The gds file can then still fulfill its main
purpose, namely the identification of relevant structures on a chip during
experiments.
The main means of communicating datasets _outside_ the Consortium is through
publications, which have a level of completeness as required by typical peer-
reviewed journals. These publications will be findable through the keywords
provided and the publication can be tracked through a digital object
identifier (DOI). If applicable and/or required, full or partial datasets will
be published alongside, as per the journal’s policy.
Specific datasets that will be shared publicly, outside the Consortium, will
have targeted approaches to make these _findable_ . For example, Verilog/spice
models, developed within SPICE, will be uploaded on, e.g., Nano-Engineered
Electronic Device Simulation Node (NEEDS) from nanohub.org, to be found and
used by others. An extensive set of magneto-optic material parameters will be
made available through the SPICE website, including context and introduction.
## Making data openly accessible
The goal of SPICE is to make as many data and results public as possible.
However, the competitive interests of all Partners need to be taken into
account. The data that will be made _openly available_ are:
* Reports, studies, slidesets and roadmaps indicated as ‘public’ in the GA. These will be made available through the EC website and the SPICE website, typically in pdf format. Additional dissemination is expected through social media, like LinkedIN, to further attract readership. These documents will be written in such a way that these are ‘self-explanatory’ and can be read as a separate document, i.e., including all relevant details and references.
* Verilog/spice models of the MTJs can be made available, for example, on NEEDS, including a “readme” file on how to use the models. These models can be used by commercial tools from Cadence/Synopsys, which are available to most of the universities and industry, e.g., through Europractice in Europe. Furthermore, there is a possibility to develop tools running on the nanohub.org server for the provided models.
* Novel simulation algorithms for the Atomistix toolkit of QW will be made available to the market, through this commercially available toolkit.
* Scientific results of the project, i.e., in a final stage, will be published through scientific journals and conferences. The format is typically pdf, and an open access publication format will be chosen, i.e., publications will be freely available from either the publisher’s website (Gold model) or from the SPICE and university websites (Green model).
The data that will remain _closed_ are:
* Simulation and characterization data sets, that are generated in order to obtain major publishable results and deliverables, will remain closed for as long as the major results and deliverables have not been published. This is to protect the Partners and the Consortium from getting scooped.
* Detailed process flows and full mask sets will not be disclosed to protect the proprietary and existing fabrication IP of, most notably, partners IMEC and CEA. If successful, SPICE technology can be made available in line with these Partners’ existing business models. IMEC, for example, offers access to its silicon photonics technology through Europractice.
* Source code of simulation tools developed for the Atomistix toolkit. This is key IP for partner QW, as it brings these tools to the market.
* Final scientific results that have been submitted to scientific journals, but not yet accepted and/or published. This is a requirement of many journals.
These _closed_ datasets will be kept on secure local servers.
No agreement has been made yet for open repositories of data, documentation or
code. This will be decided in our first annual meeting to be held end of 2017.
## Making data interoperable
Open data formats like pdf and doc(x) (reports), gds (mask layout), ascii and
binary (experimental data) will be used as much as possible, which allows for
sharing data with other Partners. Freely available software can be used to
read such files. Design software like Atomistix, Cadence Virtuoso, PhoeniX,
Lumerical and Luceda have proprietary data formats, and it will be
investigated how these can most easily be exported to open formats, in case
there is a need for this.
## Increase data re-use (through clarifying licences)
Experimental and simulation data sets will in principle not be re-usable by
themselves, unless otherwise decided. Re-use of these data sets will be
facilitated through scientific publications, which also provide the necessary
context. Conditions for re-use are then set by the publishers’ policies. The
peer-review process, as well as adhering to academic standards, _ensures the
quality_ . These publications will remain re-usable for an indefinite time.
The underlying experimental and simulation data sets will be stored for a time
as prescribed by national and EU laws, though at least 5 years after the SPICE
project ends.
Process flows can potentially be re-used through the specific foundry
facilities, for example as a fabrication service or through a multi-project
wafer run, e.g., through Europractice. Process flows itself will not be
disclosed and cannot be re-used. This is partially to protect the foundry IP,
and partially because process flows are foundry-specific anyway. The
Consortium will discuss a policy for this when the SPICE technology is up and
running. Quality assurance will be aligned with the foundries’ existing
standards for performance, specifications, yield and reproducibility. **No
decisions on the re-use of processes have been made yet.**
Mask designs, or component designs, can only be re-used when the underlying
fabrication process is made available. In that case, designs can be made part
of a PDK. Support and quality assurance, however, will be an open issue. The
Consortium will discuss this when the SPICE technology is up and running. **No
decisions on the re-use of designs have been made yet.**
Simulation tools based on the Atomistix toolkit will be marketed by QW to
ensure the widest possible re-use, under the assumption that there is enough
market potential. Licenses can be obtained on a commercial base by third
parties. QW will remain responsible for their toolkit development, quality and
support and has a team in place to ensure that. The duration and scope of a
license and support will be determined between QW and their potential users at
a later stage. Simulation tools based on Verilog will be publicly shared for
widest re-use. No support is envisioned beyond the duration of SPICE, though,
so quality assurance is an open issue for the moment.
# Allocation of resources
In the SPICE project, data management is arranged under WP6 (Dissemination and
Exploitation) and any cost related to the FAIR data management during the
project will be covered by the project budget. For depositing the data
on a not-yet-specified server, a total budget of 2000 Euro for 5 years is
estimated.
The consortium will decide whether a specific data manager is required for
SPICE at the upcoming meeting among consortium members. If not, and in the
meantime, this will be managed from WP6. Any other cost regarding the
preservation of the data for a long period will be discussed within the
Consortium as well.
# Data security
All data sets are backed up routinely onto the Partners’ servers, via local
network drives. Data sets are backed up on a periodic basis, typically on a
daily basis. In addition, all processed data will be version controlled, which
is updated with similar frequency. No backups are stored on laptops, or
external media, nor do we use external services for backup.
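Routine backups can be spot-checked with content digests; the following is a minimal sketch (the use of SHA-256 here is an assumption for illustration, not a stated Partner practice):

```python
import hashlib

def file_digest(data: bytes) -> str:
    """SHA-256 digest of a dataset's bytes, usable to verify that a
    backup copy matches the original (illustrative sketch)."""
    return hashlib.sha256(data).hexdigest()
```

Comparing the digest of the original dataset with that of its backup copy detects silent corruption between backup cycles.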
# Ethical aspects
No ethical aspects have been identified yet.
# Other issues
An open issue is the set of local, national and EU policies with respect to
data management, of which the Consortium does not have a complete overview. It
will be investigated for the next update of the DMP to which extent the
current DMP is in agreement and/or in conflict with these policies.
# Appendix – partner input
<table>
<tr>
<th>
**WP / Task**
</th>
<th>
**Responsibl e partner**
</th>
<th>
**Dataset name**
**(for WT of X)**
</th>
<th>
**File types**
</th>
<th>
**Findable**
**(e.g. for WT of 1 for each WP)**
</th>
<th>
**Accessible**
</th>
<th>
**Inter oper**
**able**
</th>
<th>
**Reusable**
</th>
<th>
**Size**
</th>
<th>
**Security**
</th> </tr>
<tr>
<td>
1/X
</td>
<td>
RU
</td>
<td>
RU_WP1_X_Mag neto_Optic_Intera ction_v1
</td>
<td>
*.xlsx , *.doc, *.pdf, *.dat,
*.jpeg
</td>
<td>
All the produced data will be available in the dataset with following the
naming of
RU_WP1_1_Magn eto_Optic_Interacti
on_v1 (No meta
data)
</td>
<td>
Available through scientific reports and publications
</td>
<td>
N/A
</td>
<td>
On a
depository
server for 5 years after the project
</td>
<td>
1 TB
</td>
<td>
Confidential data will be stored and backed up continuously on a secured
server from RU and confidential reports and presentations will be uploaded on
the secured area of the website. Some reports and data will be shared on
Dropbox.
</td> </tr>
<tr>
<td>
2/X
</td>
<td>
SPINTEC
</td>
<td>
SPINTEC_WP2_
X_Spintronic - Photonic integration_v1
</td>
<td>
SEM and
TEM images
(*.jpeg), electrical data (*.xlsx, *.dat, etc.)
</td>
<td>
SPINTEC_WP2_1_
Spintronic -
Photonic integration_v1 (No meta data)
</td>
<td>
available through scientific reports and publications
</td>
<td>
NA
</td>
<td>
On a
depository server (TBD) for 5 years after the
project
</td>
<td>
500
GB
</td>
<td>
Confidential data will be stored and backed up continuously on a secured
server at SPINTEC and confidential reports and presentations will be uploaded
on the secured area of the website. Some reports and data will be shared on
Dropbox.
</td> </tr>
<tr>
<td>
3/X
</td>
<td>
IMEC
</td>
<td>
IMEC_WP3_X_
Photonic_Distribut ion_Layer_v1
</td>
<td>
*.dat, *.docx,
*.pdf
</td>
<td>
IMEC_WP3_1_
Photonic_Distributi on_Layer_v1
(No meta data)
</td>
<td>
available through scientific reports and publications
</td>
<td>
?
</td>
<td>
On a
depository server (TBD) for 5 years after the
project
</td>
<td>
500
GB
</td>
<td>
Confidential data will be stored and backed up continuously on a secured
server at AU and IMEC, and confidential reports and presentations will be
uploaded on the secured area of the website. Some reports and data will be
shared on Dropbox.
</td> </tr>
<tr>
<td>
4/X
</td>
<td>
AU
</td>
<td>
AU_WP4_X_
Architecture_and_
Demonstrator_v1
</td>
<td>
*.dat, *.docx,
*.pdf, *.m
</td>
<td>
AU_WP4_1_
Architecture_and_
Demonstrator_v1
(No meta data)
</td>
<td>
available through scientific reports and publications
</td>
<td>
</td>
<td>
On a
depository server (TBD) for 5 years after the project
</td>
<td>
1 TB
</td>
<td>
Confidential data will be stored and backed up continuously on a secured
server at AU and confidential reports and presentations will be uploaded on
the secured area of the website. Some reports and data will be shared on
Dropbox. The Verilog/spice data will be shared on some gateways to be used
by other people.
</td> </tr>
<tr>
<td>
5/X
</td>
<td>
QW
</td>
<td>
QW_WP5_X_Sim
ulation_and_Desi gn_Tools_v1
</td>
<td>
</td>
<td>
QW_WP5_1_Simul ation_and_Design_ Tools_v1 (No meta
data)
</td>
<td>
available through scientific reports and publications
</td>
<td>
</td>
<td>
On a
depository server (TBD) for 5 years after the
project
</td>
<td>
10GB
</td>
<td>
Confidential data will be stored and backed up continuously on a secured
server at QW and confidential reports and presentations will be uploaded on
the secured area of the website. Some reports and data will be shared on
Dropbox.
</td> </tr>
<tr>
<td>
6/X
</td>
<td>
AU
</td>
<td>
AU_WP6_X_Diss emination_and_E xploitation_Tools_ v1
</td>
<td>
</td>
<td>
AU_WP6_X_Disse mination_and_Expl oitation_Tools_v1 (No meta data)
</td>
<td>
Available on the AU website
</td>
<td>
</td>
<td>
On a
depository server (TBD) for 5 years after the
project
</td>
<td>
5 GB
</td>
<td>
The dissemination reports will be kept on a secured server at AU and also
uploaded on SyGMa as well as publicly available on the SPICE website.
</td> </tr>
<tr>
<td>
7/X
</td>
<td>
AU
</td>
<td>
AU_WP7_X_Man
agement _v1
</td>
<td>
*.xlsx , *.doc, *.pdf, *.jpeg,
*.mp3,
*.mpeg
</td>
<td>
AU_WP7_1_Mana
gement _v1 (No
meta data)
</td>
<td>
The confidential data will not be accessible to the public. The public data,
reports, presentations will be available on AU website.
</td>
<td>
</td>
<td>
On a
depository server (TBD) for 5 years after the
project
</td>
<td>
100
MB
</td>
<td>
The annual reports will be confidential and so will not be available for
public. Some minutes, presentations, press release etc. will be available for
public through website.
</td> </tr> </table>
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0473_PERICLES_770504.md
2. **Be open and honest.** We will be clear and open regarding the purpose, methods, and outcomes of our work. Transparency, like informed consent, is a process that involves both making principled decisions prior to beginning the research and encouraging participation and engagement throughout its course. In our capacity as researchers, project partners are subject to the ethical principles guiding all scientific and scholarly conduct. We must not plagiarize, nor fabricate or falsify evidence, or knowingly misrepresent information or its source.
3. **Maintain respectful and ethical professional relationships.** There is an ethical dimension to all professional relationships. Whether working in academic or applied settings, researchers have a responsibility to maintain respectful relationships with others.
# 3 Purpose of data collection
The primary rationale for collecting and generating new data is to meet the
overall goal of the project: the sustainable governance of maritime cultural
heritage. Specific objectives include:
* develop an in-depth, situated understanding of the CH of marine and coastal land/seascapes, including knowledge across local, spatial, environmental, social and economic aspects;
* develop practical tools, based on stakeholder involvement and participatory governance, for mapping, assessing and mitigating risks to CH and to enhance sustainable growth and increase employment by harnessing CH assets;
* provide policy advice to improve integration of CH in key marine and environmental policies and the implementation of associated EU directives; and develop effective knowledge exchange networks.
PERICLES partners have well developed and quality assured processes for
managing data, in line with best practice within their field of research and
in compliance with national funders’ policies, to which all researchers will
adhere. The partners involved carry the necessary and appropriate levels of
indemnity for research involving human participants, giving cover for both
negligent and non-negligent harm. They have local enforced policies and
procedures that govern the collection, storage, quality assurance and security
of data. The study will involve analyses of data from semi-structured
interviews, surveys, visual documentation, focus groups and policy documents.
The research team has extensive experience of gathering and managing data of
this nature.
## 3.1 The relation of Data Collection to the objectives of the project
Data collection is planned specifically around the above four objectives and
the tasks associated with meeting these objectives. This means that secondary
qualitative and quantitative data will be reviewed; and primary data
collection will take place.
Developing both (a) an in-depth situated understanding of maritime CH
including knowledge across local, spatial, environmental, social and economic
aspects, and (b) practical tools for mapping, assessing and mitigating risks
to CH and to enhance sustainable growth, requires primary data collection.
Stakeholder involvement activities, participatory governance, and the
development of effective knowledge exchange networks likewise require the
collection of contact and informational data.
## 3.2 Types and formats of primary data to be generated/collected
The project will collect a wide range of quantitative and qualitative data.
Quantitative data will include economic and market research and quantitative
social questionnaires. Qualitative data will include perceptions, opinions and
experiences of individuals collected through a wide range of methods.
Biological data will include DNA samples/analysis of fish bones. Data will
also be mapped in a GIS portal. This includes a wide range of data layers that
may harbour their own usage restrictions. Some basic demographic data (e.g. age)
will be collected from study participants but will be separated and results
will be reported anonymously as per standard social research ethics guidelines
enforced through ethics committees of the academic partners involved.
Data and type of data will be collected/gathered in the following WPs:
1. WP2: Qualitative interview data
2. WP3: Data from the evaluation of tools
3. WP3: Data displayed on the mapping portal
4. WP3: Output of the data review
5. WP3: Uploaded material from citizens
6. WP4 and WP5: Qualitative interview data; guide, video, transcription, annotation
7. WP7: Website for dissemination
Data collection crosscuts all work package work, with the data coming from
research and fieldwork within demos in each Case Region. Case study data set
overviews are presented in Annex 1. A summary is presented below:
<table>
<tr>
<th>
</th>
<th>
**Data type**
</th>
<th>
**Origin**
</th>
<th>
**WP#**
</th>
<th>
**Case region**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Stakeholder contacts
</td>
<td>
Publicly available data
</td>
<td>
WP6
</td>
<td>
Estonia, Denmark, Wadden Sea, Scotland-Ireland, Brittany, Aveiro, Malta, Aegean
Sea
</td>
<td>
.xlsx
</td> </tr>
<tr>
<td>
2
</td>
<td>
Qualitative interview data
</td>
<td>
Primary data
</td>
<td>
WP3,4,5,6
</td>
<td>
Denmark, Wadden Sea,
Brittany
</td>
<td>
mp3, .doc, .xlsx, .pdf
</td> </tr>
<tr>
<td>
3
</td>
<td>
Data from participative observation
</td>
<td>
Primary data
</td>
<td>
WP3,4,5,6
</td>
<td>
Malta, Wadden Sea
</td>
<td>
mp3,
.doc, .jpg
</td> </tr>
<tr>
<td>
4
</td>
<td>
Survey data
</td>
<td>
Primary data
</td>
<td>
WP3,4,5,6
</td>
<td>
Malta, Wadden Sea
</td>
<td>
.doc,
.xlsx, .dat
</td> </tr>
<tr>
<td>
5
</td>
<td>
Photographic, video and/or audio records
(general)
</td>
<td>
Primary data
</td>
<td>
WP3,4,5,6,7
</td>
<td>
Denmark, Malta,
Wadden Sea
</td>
<td>
.jpg, .tif,
.mp3
</td> </tr>
<tr>
<td>
6
</td>
<td>
Data related to visual methodologies (VPA, ethnographic documentary):
Photographic/video/audio records
</td>
<td>
Primary data
</td>
<td>
WP3,7
</td>
<td>
Malta, Wadden Sea
</td>
<td>
.mov,
.mp3
</td> </tr>
<tr>
<td>
7
</td>
<td>
Published data (incl.
spatial data)
</td>
<td>
Publicly available data
</td>
<td>
WP2,3
</td>
<td>
Estonia, Denmark, Wadden Sea, Scotland-Ireland, Brittany,
Aveiro, Malta, Aegean
</td>
<td>
.xlsx, .doc, .pdf, .GeoJSON
</td> </tr>
<tr>
<td>
8
</td>
<td>
Processed data from reviewed academic and policy literature, and online
sources
</td>
<td>
Primary data
</td>
<td>
WP2,3,4,5
</td>
<td>
Estonia, Denmark,
Wadden Sea, Scotland-
Ireland,
Brittany, Aveiro, Malta,
Aegean
</td>
<td>
.jpg, .pdf,
.doc
</td> </tr>
<tr>
<td>
9
</td>
<td>
Quantitative survey data
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
The partners involved have robust processes for the oversight and governance
of research, in particular, research involving human participants. All data
used as part of this research will comply with all relevant legal requirements
and codes of good practice. Confidentiality and disclosure risk are controlled
through the application of information security and data handling policies
contained in relevant partner policies. Where necessary, data will be
anonymized and participants’ confidentiality maintained throughout.
Participants will be pseudonymized through allocation of a unique ID that will
be used to identify all their paper and electronic records.
The Lead Researchers will be responsible for maintaining separate,
confidential registers, which will match each participant’s unique ID with
their name. These will be stored securely and separately from other data, with
access limited to designated persons. All databases will be designed to ensure
completeness, accuracy, reliability and consistency of data. The policies and
procedures ensure that there is no deletion of entered data; a list is
maintained of those individuals authorised to make data changes, and all data
changes are documented. Quality control measures will be applied to each step
in the data management process to assure that the necessary level of data
quality is maintained throughout. Requests for data by outside parties will be
handled on a case-by-case basis.
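The pseudonymisation approach described above (a unique ID per participant, with a separate, confidential register mapping IDs back to names) can be sketched as follows. This is an illustrative sketch only: the HMAC-based ID derivation, the secret key and the participant names are assumptions, not the project's actual procedure.

```python
import hmac
import hashlib

def pseudonym_id(name: str, secret_key: bytes, length: int = 8) -> str:
    """Derive a stable, non-reversible participant ID from a name."""
    digest = hmac.new(secret_key, name.encode("utf-8"), hashlib.sha256)
    return "P-" + digest.hexdigest()[:length].upper()

def build_register(names: list[str], secret_key: bytes) -> dict[str, str]:
    """Confidential register (ID -> name), stored separately from the data."""
    return {pseudonym_id(n, secret_key): n for n in names}

# Example-only key and names; research records would carry only the ID,
# and the register resolves it when needed by designated persons.
key = b"example-only-secret"
register = build_register(["Alice Example", "Bob Example"], key)
for pid in register:
    assert pid.startswith("P-") and len(pid) == 10
```

Keeping the key and the register on a separate, access-restricted store is what makes the IDs pseudonymous rather than merely obfuscated.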
Where appropriate, some data (e.g., images) may be available under a Creative
Commons license. Where submissions are made to the online portal through
citizen science, users will also be asked to submit data under a CC license,
most likely CC BY-NC-SA, which requires credit to the authors, sharing of the
work under equal terms, and non-commercial use.
For those research activities undertaken dealing with visual research methods
such as Visual Problem Appraisal and ethnographic documentaries, we will
adhere and further develop guidelines for ethical visual research methods such
as documented by Cox et al (2014). 1 The Pericles project will follow and
further develop the practice of Visual Informed Consent such as documented in
the publication by Lie & Witteveen (2017) 2 .
The process of visual documentation of stakeholders will also adhere to
aesthetical standards of professional filmmaking and photography to prevent
awkwardness resulting from low aesthetical quality of video or audio material,
which may induce requests for non-use of documented visual data.
Where data cannot be appropriately anonymised to maintain confidentiality and
protect the rights of the research participants, they will not be made
publicly available.
The project will comply with the partners’ policies on management of physical
research data and on working with electronic data. Any data held on portable
equipment such as laptops, memory sticks or portable hard drives will be risk
assessed and securely encrypted, taking into account the sensitivity of the
information. All data will be transferred to partner
data repositories where they will be stored on a secure server, which is
protected against unauthorised access by user authentication and a firewall.
All identifiable data will be stored in an encrypted format. Access to the
room where the servers are kept is restricted to designated IT staff. Daily
backup procedures are in place and copies of the data are held in separate
locations.
A specified group of research staff will have read-only access to the data
files containing confidential information; only database officers can alter
the confidential personal data files. Paper records of contact sheets,
registration documents, and consent forms will be archived in separate
locations to the electronic data.
Anonymised data sets will be made publicly available through appropriate
repositories as part of an Open Data Policy that will be further developed in
our Data Management Plan.
We strive to ensure that data will be collected in – or converted to –
long-term preservation-friendly formats, keeping in mind that they must also
be the formats best suited for reuse, keeping data interoperable. Audio files
will be stored in MP3 or WAV format. Digital images will be stored as JPEG or
PNG. Microsoft Word will be used for text-based documents. .sav will be used
for SPSS files. The file formats have been selected as they are accepted
standards and used widely. At the end of the project, the Word documents will
be converted to PDF/A. Long term preservation of the data from statistical
analysis packages such as SPSS will be carried out in accordance with the
advice from the Council of European Social Science Data Archives.
## 3.3 Naming of data
A common approach to the naming of documentation and data sets will be
employed.
Files will be named according to the following scheme:
Partner/section PERICLES_Deliverable_version number, with "TC" appended when
the file is submitted with Track Changes.
This is seen through the following examples:
For documents and deliverables:
PERICLES_D1.3_V0.2.doc (the document from Alyne)
PERICLES_D1.3_V0.2 TC-WU.doc (our TC identifiable contributions to the
documents)
For Datasets:
PERICLESTaskNumber.Partner.DataType e.g. T5.1.QUB.Interviews
Partners may apply a local version control system or build in mechanisms in
their local storage solution.
## 3.4 Re-use of data
Some secondary, qualitative data and mapping/GIS data will be accessed via
publicly available channels (e.g., Member State archives and mapping sites).
The nature of these portals makes it difficult to establish with certainty
what content will be available from them at specific times. Since the project
has no influence on these portals, we can only work from what is available at
the time of development/use of the data. However, we will use as much relevant
background data as possible from EMODnet, since the purpose of this service is
to re-use data from older or existing projects. Wherever possible, we make
sure that we use data that comply with the INSPIRE Directive (2007/2/EC).
<table>
<tr>
<th>
**Data type**
</th>
<th>
**Source/Owner**
</th>
<th>
**Used for**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
Stakeholder contacts
</td>
<td>
</td>
<td>
Stakeholder register
</td>
<td>
.xlsx
</td> </tr>
<tr>
<td>
Published data (incl. spatial data)
</td>
<td>
</td>
<td>
</td>
<td>
.xlsx, .doc
.pdf .shp .GeoJSON
</td> </tr>
<tr>
<td>
Background data arrays, e.g.
Maritime museums, Shipwrecks, Geology,
protected areas etc
</td>
<td>
EMODnet
(www.emodnet.eu)
</td>
<td>
Background data for the Portal
</td>
<td>
Web Map Service/Web
Features service (.WMS and
.WFS)
</td> </tr> </table>
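Background layers such as the EMODnet data arrays above are served through OGC Web Map Service / Web Feature Service requests. The sketch below builds a standard WMS 1.3.0 GetMap URL; the endpoint and layer name are placeholders, not confirmed EMODnet values.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url: str, layer: str,
                   bbox: tuple[float, float, float, float],
                   width: int = 800, height: int = 600) -> str:
    """Build an OGC WMS 1.3.0 GetMap request URL for one layer."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Placeholder endpoint and layer; a real request would use the service's
# GetCapabilities response to discover valid layer names.
url = wms_getmap_url("https://example.org/wms", "shipwrecks",
                     (54.0, 8.0, 56.0, 10.0))
assert "REQUEST=GetMap" in url and "VERSION=1.3.0" in url
```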
All collected data sets will be held at the partners together with own
produced data sets. All partners will pay close attention to the reuse of
data and possible licensing issues with blended datasets. For this reason, all
collected datasets will be clearly marked with origin and usage license
options, and options for future reference from own published datasets.
Primary data sources consist of online documentation, policy documents and
peer-reviewed academic articles, including previously conducted research by
the involved partners.
## 3.5 Origin of the data
These existing data come from previously conducted research by scholars,
researchers, and Member State staff. Some of the data layers in the mapping
portal originate from national or regional data sets, in many cases included
in existing marine as well as national spatial data infrastructures
(MSDI/NSDI).
## 3.6 Expected size of the data
The size of the data handled by PERICLES is generally quite small. For most of
the data types in the project, e.g. doc, xlsx, and picture/image formats the
size would be in the megabyte range. For video and sound formats, file size is
within the gigabyte range.
Data uploaded to, and stored on, the Portal server will be in the terabyte
range, as it includes many data arrays, user text, pictures and videos.
## 3.7 Data utility
The data will be useful for PERICLES partners, associated and affiliated
partners, as well as researchers, planners and policy makers, non-governmental
organizations (including businesses) and citizens interested in maritime
cultural heritage.
# 4 FAIR data
Research data should follow the principles of 'FAIR': making data Findable,
Accessible, Interoperable and Re-usable. Making data findable includes
provisions for metadata.
## 4.1 Making data findable, including provisions for metadata
The findability of the datasets will depend on the selected repository. The
most relevant repository identified is EMODnet, with Zenodo for data that
cannot be added to EMODnet due to their subject specificity. The datasets will
preferably be deposited in repositories with the Core Trust Seal and those
harvested by aggregators. However, for some datasets it might be preferable to
upload the data to repositories with better support for the specific kind of
data, e.g. indexing on non-standard values (e.g. specimens).
Metadata on the datasets will be added according to the specification of the
selected repository and the available options for adding keywords. For
EMODnet, metadata is added to data packages in two steps: the data submitter
fills out the required metadata, such as Organizations, Dataset
Identification, Data Types, Location & Dates and Data Links, after which the
EMODnet Data Centre reviews the data submission and completes it with
additional metadata.
Zenodo supports the FAIR principles and all data are assigned a globally
unique and persistent identifier (a DOI is issued to every record), and each
record contains a minimum of DataCite's mandatory terms. For internal data,
datasets will follow the project's internal naming conventions, possibly
adapted if needed for enhanced findability. All data/documents to be
identified via metadata should include, but not necessarily be limited to:
Revision, Type, Status, Confidential, Revision date and Created by.
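The internal metadata fields listed above (Revision, Type, Status, Confidential, Revision date and Created by) could be represented as a simple record; the field types and the example values below are assumptions for illustration, not prescribed by the project.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class DocumentMetadata:
    """Minimal internal metadata record for a project data/document item."""
    revision: str
    type: str
    status: str
    confidential: bool
    revision_date: date
    created_by: str

# Example values are invented for illustration.
meta = DocumentMetadata(
    revision="v0.2",
    type="Deliverable",
    status="Draft",
    confidential=False,
    revision_date=date(2019, 5, 1),
    created_by="AAU",
)
assert asdict(meta)["created_by"] == "AAU"
```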
All data produced will strive to match best practice within the field,
including the recommended formats listed at
_http://rd-alliance.github.io/metadata-directory/_
For publication of datasets, a DOI will be assigned by the repository, where
possible.
## 4.2 Making data openly accessible
Qualitative interview data, following standard, social scientific research
conventions, by which personal data are protected and anonymized, are not
publicly accessible.
For datasets without personal data, the consortium will strive to release data
with an open, machine-readable license like Creative Commons, which both
Zenodo and EMODnet support.
For data that relates to public data sets, where these cannot be re-published
by the consortia, pointers to the original datasets will be included as part
of metadata for the datasets.
Data in the PERICLES project will, as presented in table 1, have different
file formats, but all in formats that can be opened/used without the need for
additional software.
## 4.3 Making data interoperable
All openly accessible data will be uploaded in a commonly accessible format.
Furthermore, the associated metadata, described above, will facilitate data
interoperability. Zenodo uses the JSON Schema for metadata and offers export
to other formats to promote interoperability of the (meta)data.
As described above, the openly accessible data will not require any additional
software for it to be used.
## 4.4 Making data re-useable
Openly accessible data collected under PERICLES will be made available for
re-use at the earliest convenient moment, taking the publication of articles
into consideration. Where possible, the project will strive to use an open,
machine-readable license like Creative Commons; however, this is limited by
the licenses available for the selected repository, e.g. EMODnet allows only a
limited number of licenses
(_https://www.emodnetingestion.eu/media/emodnet_ingestion/org/documents/helpguide_ds_22sept_2017.pdf_).
All data collected in the project will be based on the protocol for each
respective case study, with clearly defined procedures for this. All studies
will result in reports wherein the data, methods and results will be
presented. Each researcher will be responsible for the quality of the data
that he/she collects/stores and will have the data checked/validated by a
colleague. Data will be compared across case regions, and potential
outliers/obvious errors will be handled either by removing the data point or
by returning to the origin of the data and asking for verification or
clarification.
All studies will report statistics on the data collected (e.g. number of
participants, number of responders/ non-responders) and all raw data will be
stored to allow for later check for data correctness or re-use of data.
Data collected in non-English speaking countries will be presented in English
to the consortium to allow for data use and validation of the data.
For data collected as part of a scientific article, the method for data
collection, analysis and interpretation will be explained in the article.
Data on ecology usually have long-term reusability. For this reason, we
strive to use only repositories that are evaluated for sustainability (like
the Core Trust Seal), or repositories that will provide the necessary curation
for the data, ensuring continued findability, accessibility, interoperability
and reuse options.
Data uploaded to Zenodo will remain re-usable until Zenodo discontinues the
dataset(s) (i.e. warranted for a minimum of 20 years).
Data that will remain re-usable within and across different scientific areas
include all case study data, which will be available through the Portal for
all interested visitors.
# 5 Allocation of resources
## 5.1 Costs for making data FAIR
FAIR data will be part of the everyday work of each partner, e.g. ensuring
interoperability and proper metadata for the documentation of the datasets.
The project coordinator, AAU, is estimated to use ½ PM to ensure proper focus
on FAIR and to resolve issues around data management related to making data
public. We do not foresee fees for the publication of data, as it will be
within the scope and limits for free use of Zenodo and EMODnet. The costs for
the Portal are included as part of the PERICLES budget.
Websites and the mapping portal will be available for at least 5 years beyond
the project without any costs.
## 5.2 Responsibility for data management
The Steering Committee retains responsibility for data management. Potential
issues and general discussions regarding the management of PERICLES data will
be discussed at SC meetings throughout the project.
## 5.3 Long term data preservation
The majority of partners are academic universities who are bound to ‘normal’
academic procedures for long-term data preservation. Within PERICLES, those
partners who are not academic institutions also follow academic conventions.
The data stored on Zenodo will remain re-usable until Zenodo discontinues the
dataset(s). All data available through the Portal will be preserved for at
least 5 years after the end of the project, stored on a university server
(UoY), and available through an AAU domain, together with the website, which
also will be available for at least 5 years after the project.
# 6 Data security
Access control will be in line with the procedures at each partner institution
holding the data.
All raw and processed data will be stored in the secured university networks,
which are backed up regularly. Both raw and processed data will be shared with
other project members. Signed consent forms and completed hard copies of
survey forms (if no electronic surveys are used) will be kept at the partner
institution directly engaged with collecting consent and carrying out the
surveys. Storage is in a cabinet in an office with restricted access. All
researchers involved in the collection/processing of data are aware of
security issues and these protocols.
Transfer of data between institutions will be done via the project
collaborative platform, SharePoint, taking the classification of the data into
consideration. The project SharePoint provides access to the data to relevant
partners only, through use of an e-mail address and password.
Data will be kept secure for a period of at least 5 years (longer is possible
if required by the individual partner institutions). After 5 years, the
necessity of data storage is assessed. If data are still deemed to be useful,
the data will be kept for another 5-year period, after which the need for
storage is again assessed. If data at that point are no longer deemed useful,
data will be erased.
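The retention cycle described above (keep for at least 5 years, then reassess the need for storage every 5 years until the data are no longer useful) can be sketched as follows; the function name and dates are illustrative only.

```python
from datetime import date

def review_dates(project_end: date, cycles: int) -> list[date]:
    """Dates at which the need for continued storage is reassessed,
    one reassessment every 5 years after the end of the project."""
    return [project_end.replace(year=project_end.year + 5 * (i + 1))
            for i in range(cycles)]

# Invented project end date, for illustration.
dates = review_dates(date(2022, 4, 30), 2)
assert dates == [date(2027, 4, 30), date(2032, 4, 30)]
```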
For openly accessible data, the public repositories (described earlier in this
document) will ensure long-term preservation until the data are discontinued
by the respective repositories.
# 7 Ethical aspects
The ethical aspects of data management and data collection have been covered in
the ethics deliverables of PERICLES submitted in M2 (D8.1, D8.2, D8.3, D8.4,
D8.5, D8.6 and D8.7) and will not be explored further here.
# 8 Other
PERICLES partners have well developed and quality assured processes for
managing data, in compliance with national funders’ policies, to which all
researchers will adhere. The partners involved carry the necessary and
appropriate levels of indemnity for research involving human participants,
giving cover for both negligent and non-negligent harm. They have policies and
procedures that govern the collection, storage, quality assurance and security
of data.
The Data Management Plan (Task 1.5/Deliverable 1.3) will be designed and
uploaded on the Partner’s area of the website, with Individual Data Plans
(listed below) submitted by each partner. In the individual plans, each
partner will assume responsibility for data integrity and quality.
**Partner 1, AAU,** follows the professional policies and standards of
disciplinary associations within which the researchers are affiliated.
**Partner 2, WU** , **follows the Netherlands Code of Conduct for Academic
Practice Principles of Good Academic Teaching and Research** , which is fully
applicable to all research at Wageningen University and Research. The code of
conduct elaborates on recognised principles such as Honesty and
scrupulousness, Reliability, Verifiability, Impartiality, Independence and
Responsibility. 3 In addition, legal regulations for privacy will be adhered
to. Moreover, the Data Management Policy as stipulated by the Environmental
Policy Group (ENP) serves as a guideline for the WU researchers involved in
PERICLES. According to this policy, all data used in publications, as well as
data that needs to be stored according to requirements from the consortium in
which the researcher takes part, have to be stored in an individual repository
on the secured university drive for the
Management Plan (this document); empirical tools (e.g. questionnaires,
interview guidelines, models); processed data (e.g. excel sheets,
transcripts); documentation of how data has been processed (i.e. coding form,
list relating anonymous data to resource persons) and of the programmes used
to analyse the data; and signed prior informed consent forms.
Data collected/used for research in PERICLES include primary and secondary
data. More specifically, WU researchers collect and use the following data for
the research tasks in which they participate: T2.5: peer-reviewed academic
articles, including previously conducted research by the involved partners,
processed data (excel sheet) and coding forms; T2.7: processed data, derived
from research conducted in T2.3, T2.4 and T2.5, and output (final draft
journal article); T3.2: secondary data (existing spatial data), collated in a
table (word document/excel sheet); T3.3: online documentation, academic
articles and empirical tool (survey), processed data and coding form; T3.4:
semi-structured interviews, observation data, meeting notes, video images and
recorded interviews (VPA), empirical tools (interview guidelines, meeting work
plans, filming guidelines/script), processed data (transcripts, meeting
reports, selected images and footage), documentation of how data has been
processed (i.e. coding form, list relating anonymous data to resource persons,
if applicable) and of the programmes used to analyse the data, and signed
prior informed consent forms. T4.3: data collected in task 4.1, also peer-
reviewed academic articles and semi-structured interviews, interview
guideline, transcripts, list relating anonymous data to resource persons (if
applicable), and signed prior informed consent form; T5.1: peer-reviewed
academic articles, including previously conducted research by the involved
partners, policy documents, processed data and coding forms; T5.2: semi-
structured interviews, interview guideline, transcripts, list relating
anonymous data to resource persons (if applicable), and signed prior informed
consent form; T5.3: data collected in task 5.1 and 5.2, models, processed
data, coding form, and documentation of the programmes used to analyse the
data; T6.1: observation data, meeting notes, meeting work plans, processed
data (meeting reports), and signed prior informed consent forms; T6.2: meeting
notes, meeting work plans, webinar scripts, processed data (meeting reports,
(short) reports capturing feedback from participant); T7.: meeting notes;
T7.4: this task mainly uses data collected in other tasks, processed data, and
final output (e-booklets); T7.6: conference notes (if relevant), PowerPoint
presentations; T7.9: video images and recorded interviews, filming
guidelines/script, processed data (selected images and footage), documentation
of how data has been processed (i.e. coding form, list relating anonymous data
to resource persons, if applicable) and of the programmes used to analyse the
data, and signed prior informed consent forms.
Regarding (co-) ownership, the ENP Data Collection Policy states that all data
collected by WU researchers is at least co-owned by ENP. In addition, if
(processed) data is not archived with WU but with a partner institute, access
to this data (in the form of processed data) has to be warranted by the means
of a data sharing agreement. In that case, WU researchers have to set up a
data sharing agreement. For PERICLES, this Data Management Plan serves as such
agreement, as in section 4 it has been highlighted that “both raw and
processed data may be shared with other project members”. When specific
conditions (e.g. time; authorship; anonymity) have to be considered, a Data
sharing agreement has to be drafted and signed for any research where data is
used or (co-)produced by researchers outside of the ENP group.
**Partner 3, UBO** – is working in accordance with the European Union
Regulation No 2016/679 of the European Parliament and of the Council of 27
April 2016 on the protection of individuals with regard to the processing of
personal data and the French law n°78-17 of 6 January 1978 relating to
information technology, files and freedoms, in its latest version. The
processing of personal data presented to the DPO, based on the interviews
realised by the AMURE/IUEM laboratory of UBO within the frame of the PERICLES
project, complies with the legal framework mentioned above.
**Partner 4, UHI** – follows the University Research Data Management Policy
for management of data generated as a result of research projects.
(https://www.uhi.ac.uk/en/t4-media/oneweb/university/research/resource/docs/UHI-
RDM-Policy-and-guidelines-2018.pdf)
**Partner 5, QUB** follows the **Economic and Social Research Council Research
Data Policy** and QUB’s policies on management of physical research data and
on working with electronic data. ( _https://esrc.ukri.org/funding/guidance-
for-grant-holders/research-data-policy/_ )
**Partner 6, UAVR** follows the European Code of Conduct for Research
Integrity (ESF/ALLEA;
https://ec.europa.eu/research/participants/data/ref/h2020/other/hi/h2020-ethics_code-of-conduct_en.pdf)
policies and guidelines, and complies with the relevant data
protection laws, in particular the European Data Protection Regulation (GDPR)
and with the national laws on that matter in practice at this University
(namely the Regulamento Geral sobre a Proteção de Dados - RGPD).
According to these guidelines, PERICLES’ researchers at UAVR will ensure
appropriate stewardship and curation of all data and research materials,
including unpublished ones, with secure preservation for a reasonable period.
UAVR’s researchers will provide transparency about how to access or make use
of their data and research materials. Research participants that take part in
PERICLES activities are engaged through informed consent procedures, which
follow European best practices. Moreover, we will ensure access to data is as
open as possible, as closed as necessary, and where appropriate in line with
the FAIR Principles (Findable, Accessible, Interoperable and Re-usable) for
data management (as described in section 4. of this document).
Further details on the types of data that are being (or will be) collected,
their purpose and utility, as well as their accessibility are described in
Annex I.
**Partner 7, SAMS** – SAMS is no longer directly collecting any data for
PERICLES – the data collection for our survey on people’s attitudes to local
fisheries is being co-ordinated by York University. The data and ethical
framework for this are therefore being handled by York University in line with
their data handling management procedures.
**Partner 8, MKA** – Currently, a Data Management Plan for MKA is under
preparation. Until the document is approved, MKA are working in accordance
with EU data protection regulation.
**Partner 9, PNRGM** follows the European RGPD (Data Protection Regulation),
which can be downloaded from: https://pages.checkpoint.com/fr-gdpr.html
**Partner 10, FRI** – The FRI follows mandates of “The Ethics and Research
Ethics Committee” of the Hellenic Agricultural Organization "DEMETER" (NAGREF-
DEMETER) which are in accordance with the national legislation n. 4521/2018
(Government Gazette A’38/2-3-2018). The purpose of the Committee is to
guarantee, on a moral and ethical level, the credibility of all research
projects carried out by all NAGREF-DEMETER Research Institutes. Additionally,
the Committee monitors the compliance with the research integrity principles,
and the criteria of good scientific practice.
It is the responsibility of the Committee to ascertain whether a particular
research project carried out by NAGREF-DEMETER contravenes the legislation in
force and whether it complies with generally accepted ethical rules of
research, in both its content and its conduct. The Committee evaluates each
research proposal on research-ethics issues and is responsible for its
approval, or for recommending its revision when and if ethical impediments
arise. For more details see
https://www.elgo.gr/index.php?option=com_content&view=category&layout=blog&id=282&Itemid=2109
(available in Greek).
FRI’s sub-contractor, the University of Crete, is itself working in accordance
with the Principles of Ethical Conduct of the University, as these are
guaranteed and approved by the University Code of Ethics & Research Ethics
Committee; see http://en.uoc.gr/research-at-uni/eth/ethi.html .
The Greek part of the PERICLES research programme and its actions have been
approved by both these institutions.
**Partner 11, UoY** – UoY has an elaborate data management policy to which the
York PERICLES team will adhere (https://www.york.ac.uk/about/departments/support-and-
admin/information-services/informationpolicy/index/research-data-management-
policy/). This policy serves to ensure that researchers manage their data
effectively, enabling them to:
* Demonstrate the integrity of their research
* Preserve eligible data for reuse within the university and without (as appropriate)
* Comply with ethical, legal, funder and other requirements in relation to data and data management
This policy states that research data must be:
* Accurate, complete, authentic and reliable;
* Identifiable, retrievable, and available when needed;
* Secure and safe with appropriate measures taken in handling sensitive, classified and confidential data;
* Kept in a manner that is compliant with legal obligations, University policy and, where applicable, the requirements of funding bodies; and
* Preserved for its life-cycle with appropriate high-quality metadata.
The policy also states that retained data must be deposited in an appropriate
national or international data service (as discussed above). Data should be
transferred to the University Research Data York service when suitable data
services are not available.
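As a purely illustrative aside, the requirement that retained data carry high-quality metadata can be made concrete with a minimal machine-readable record. The field names below loosely follow Dublin Core and are an assumption of this sketch, not something the UoY policy prescribes; the record contents are likewise hypothetical.

```python
# Illustrative sketch only: a minimal machine-readable metadata record of the
# kind a deposited dataset might carry. Field names loosely follow Dublin Core;
# they are NOT prescribed by the UoY policy quoted above.

REQUIRED_FIELDS = {"title", "creator", "date", "rights", "identifier", "format"}

def missing_fields(record: dict) -> list:
    """Return a sorted list of required metadata fields absent from `record`."""
    return sorted(REQUIRED_FIELDS - record.keys())

record = {
    "title": "Survey responses on attitudes to local fisheries",
    "creator": "PERICLES consortium",
    "date": "2019-06-01",                 # illustrative date
    "rights": "CC-BY-4.0",
    "identifier": "doi:10.xxxx/example",  # placeholder, not a real DOI
    "format": "text/csv",
}

print(missing_fields(record))  # -> [] when the record is complete
```

A check like this could run at deposit time, so that incomplete records are flagged before data are transferred to a repository.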
Additionally, UoY endorses the Research Councils UK Common Principles on Data
Policy (http://www.rcuk.ac.uk/research/datapolicy).
Source: https://phaidra.univie.ac.at/o:1140797 (Horizon 2020)

# 0475_DRIVE_645991.md
# 1 Introduction
## 1.1 Scope of the document
This document describes the DRIVE dissemination plan, defining a clear
strategy in terms of responsibility, timing, dissemination tools and
dissemination channels. The purpose of the plan is to ensure that information
is shared with appropriate audiences on a timely basis and by the most
effective means. The overall objective is to inform the scientific community
and the broader public of the existence of the project and its future value,
and to distribute and share the information and knowledge gained from the
project.
Therefore this document aims to:
* Develop a common understanding of the objectives of the DRIVE dissemination activities
* Establish mechanisms for effective and timely communication of the project objectives and its evolution
* Monitor and evaluate the effects of the activity and modify the dissemination as necessary to improve the effectiveness
* Identify the target audiences
* Identify the appropriate and relevant key messages and channels for communicating them to the appropriate target audiences
* Exploit the results of the project after its lifetime.
## 1.2 Dissemination objectives
The overall objective of the DRIVE dissemination activities is to increase the
visibility and impact of the DRIVE research community at European, national
and local levels by informing the scientific community and society of the
existence of the project, its emerging results and its future benefits to the
health community in general.
Achieving these objectives will:
1. Increase awareness about the technical results of the project among the scientific community, providing the ground for appraisal of the results.
2. Promote the real benefits of the DRIVE outputs on patients suffering from diabetes
3. Reinforce the future potential penetration of the products within the market
4. Promote the value of the European Commission’s research investment and the beneficial impact that the project’s results will have for the European community of citizens.
The dissemination objectives will be reached by working in several directions
simultaneously:
* Dissemination of the scientific and technical results. The main instruments will be the scientific publications at conferences and journals, organization and attendance to workshops, conferences, and trade fairs. Each WP will have specific dissemination activities and WP8 will integrate all of them by using the project website and the social media (Twitter and Facebook) as main vehicles for dissemination.
* Technology transfer to the industry by establishing synergies with the industrial and clinical communities
* Training activities to support and strengthen the dissemination objective
* Patients and citizens panel to facilitate the science-society dialogues on chances, risks and ethical aspects of DRIVE project
## 1.3 Dissemination approach and phases
The DRIVE dissemination strategy is based on progressively increasing
dissemination efforts as project results are obtained, spreading the concept
behind the DRIVE project as widely as possible and ensuring favourable
conditions for exploitation after the end of the project. The strategy is
intended to optimise the dissemination of project knowledge and results to
companies and organisations which share an interest in the scientific results
and the applications produced during the project.
The dissemination strategy and timeline are outlined in Table 1:
<table>
<tr>
<th>
**Year**
</th>
<th>
**Objective**
</th>
<th>
**Methods**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Define a “corporate brand” for the project.
Create awareness of the DRIVE project.
</td>
<td>
Design of a project logo, of a public web site and of dedicated pages on the
main social networks (LinkedIn, Twitter and Facebook) and on YouTube.
Publication of high-quality graphic materials (leaflets, brochures and
posters).
Attendance at seminars, conferences and congresses.
</td> </tr>
<tr>
<td>
**2 and 3**
</td>
<td>
Dissemination in strategic boards of participants.
Increase awareness and acceptance of the technologies developed.
Engage with potential industry and associations.
</td>
<td>
Attendance at seminars, conferences and congresses.
Aligning events with similar or complementary EU or national projects.
Seminars focused on disseminating project results and applications to
stakeholders.
Web site and social network pages enrichment.
Newsletters to potential industries and associations.
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Solicit first commercial interest for further development/optimisation of the
technologies developed.
</td>
<td>
Attendance at seminars, conferences and congresses.
Organisation of seminars focused on business opportunities involving top and
middle managers of industrial organisations.
Preparation of a pre-commercial brochure.
Newsletter to potential industry associations.
</td> </tr>
<tr>
<td>
**Beyond 4**
</td>
<td>
Promote the commercial exploitation of DRIVE results.
</td>
<td>
Preparation of a commercial brochure.
Promotion at commercial fairs.
Newsletter to targeted industry associations.
Business meetings with top and middle managers of industrial organisations.
</td> </tr> </table>
**Table 1: DRIVE’s dissemination strategy and timeline**
The dissemination effort for the project began with the establishment of the
project logo, the project web site and dedicated pages on major social media
such as Facebook and Twitter. The members of the project will also write
academic and technical papers and scientific posters, to be presented at
conferences and published in leading academic and technical journals.
# 2 Dissemination strategy
## 2.1 Identification of target stakeholders
In order to create an impact that will last beyond the end of the project by
disseminating the research results to those who could benefit from them, the
Consortium has identified different stakeholders who would be the first to
implement and benefit from its outputs. The following have been selected as
the main group of target stakeholders:
1. Research and wider scientific community, in particular the one dealing with biomaterials development, diabetes treatment, islet transplantation, stem cells & regenerative medicine, nano-biotechnology
2. Prospective customer base: regenerative medicine/cell therapy companies, biotechnology companies, diabetes hospitals, clinical centres and Research Institutes, transplant surgeons
3. Wider community of potential end users: type 1 (and in the future potentially type 2) diabetes patients
Due to the diversified target audiences, the communication strategy envisions
tailoring key messages so that they are transmitted in an appropriate way to
the different target groups, focusing on the positive achievements of the
project and the benefits they could bring. This requires clear agreement and
careful coordination among all the partners who may act as speakers or
information sources for a particular project or network. Key messages will be
defined bearing in mind the potential impact of the project results for a
target audience and the appropriate modes of communication.
The programme will then use a wide range of dissemination channels for
reaching these target audiences, including:
* Free access to the DRIVE public website (target: all groups)
* Events and exhibitions (main target: groups 1 and 2)
* Advertisements and notices in specialised journals and newspapers (main target: groups 1 and 2)
* Newsletters, leaflets and brochures (target: all groups)
* Participation at sector-relevant exhibitions and conferences (main target: groups 1 and 2)
* Participation at EC events (main target: groups 1 and 2)
* Scientific papers, journal articles, press releases (main target: groups 1 and 2)
* The display of notices and issue of publicity materials to their public contacts by the partners (target: all groups)
* Mail-shots (target: all groups)
* Patients and citizens panel to facilitate the science-society dialogues on chances, risks and ethical aspects of the DRIVE project (main target: group 3)
## 2.2 Dissemination responsibilities
DRIVE is a project whose outcomes could dramatically improve the quality of
life of patients suffering from diabetes (in particular from T1D) and
significantly reduce the social costs of this disease worldwide. According to
the American Diabetes Association (2013), the total costs of diagnosed
diabetes rose to $245 billion in 2012 from $174 billion in 2007, when the cost
was last examined. This figure represents a 41 percent increase over a
five-year period. Most of the cost for diabetes care in the U.S., 62.4%, is
covered by government insurance (including Medicare, Medicaid, and the
military). The rest is paid for by private insurance (34.4%) or by the
uninsured (3.2%).
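The quoted growth figure can be verified directly from the two cost estimates:

```python
# Check of the quoted increase in U.S. diabetes costs (ADA figures).
cost_2007 = 174  # billion USD, 2007
cost_2012 = 245  # billion USD, 2012
increase_pct = (cost_2012 - cost_2007) / cost_2007 * 100
print(round(increase_pct))  # -> 41 (40.8% before rounding)
```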
For this reason, the whole Consortium needs to ensure appropriate and
effective dissemination of non-confidential information about DRIVE’s aims,
preliminary results, and clinical perspectives.
The dissemination activities have been structured in a way to actively involve
all the partners to effectively disseminate project results to the widest
possible audience, in order to create a critical mass of interest around the
project at national, European and worldwide level.
The dissemination partner leader (INNOVA) will work to ensure proper
information dissemination to support the full communication of the project
results. Partners are involved to provide a structured and dynamic approach to
the dissemination of project results.
## 2.3 Dissemination tools
Different dissemination materials will be designed and shaped during the
entire life of the project, following its evolution and according to the
different communication needs, event typologies and stakeholder groups. In
particular:
# The visual identity (logo) of the project
It represents the first milestone in the dissemination strategy, being the
basis of the project’s visibility. An attractive and effective graphical
representation helps convey the message that the project delivers. The logo
has been designed by a professional graphic designer to consistently
communicate the main project concept: using cells and biomaterials to
guarantee sufficient production of insulin by pancreatic islets. The logo is
shown in Figure 1:
**Figure 1: logo of the DRIVE project**
# Templates
Templates for PowerPoint presentations have been prepared and made accessible
to all members of the project. The templates are important to give a uniform
image of the project and to establish a visual language that immediately links
the presented information to the DRIVE project.
# Digital artwork
**High resolution three dimensional images depicting DRIVE’s novel
technologies have been commissioned from a graphic design company specialising
in life sciences. These images will be used in DRIVE dissemination outputs
throughout the project to draw attention to the expected results.**
# Web site
The DRIVE website ( _www.DRIVE-project.eu_ ) is the main communication tool
to disseminate project results and achievements. The web site will be the main
source of information on the project, on its initiatives (events, conferences,
workshops, etc.) and trainings. The website will contain dissemination items
such as press releases, brochures, newsletters and links to new articles.
# Poster
Posters describing DRIVE’s approach and the project’s aims have already been
developed, presented at conferences (e.g. IPITA 2015) and outreach events
(Discover Research Dublin 2015), and uploaded to the Content Management System
for use by DRIVE partners. As the project evolves, different types of posters
will be designed, according to need, to demonstrate and disseminate the
project’s objectives and achieved results to diverse target audiences.
# Brochures
The project brochure will be designed between the 1st and 2nd year of
implementation to provide general information regarding the DRIVE project,
its objectives and achievements. It will be designed for a standard paper
size, allowing interested partners to easily download it from the project
website and print it for their own dissemination purposes.
# Newsletter
The Consortium will produce periodic newsletters highlighting key results and
achievements of the project. Each issue will be published on the project
website and distributed via email to a list of stakeholder contacts.
# Publications
DRIVE partners will prepare and submit articles to open-access, peer-reviewed,
high-level journals and conference proceedings, as well as to daily newspapers
and magazines addressing a broad public. The results of the scientific
research work will be submitted for publication to international,
peer-reviewed, high-level scientific journals relevant to DRIVE (e.g.
Diabetologia, Diabetes, Journal of Controlled Release, Nature Materials,
Biomaterials, Tissue Engineering) and, where appropriate, to broad-subject
journals (for information to scientists and private institutions in other
related fields), following open-access principles.
# Press release
Press releases aim to attract attention to major project developments and
achievements. An initial press release has been prepared by the Project
Coordinator to generate initial interest in the project among the general
public. During the project’s life there will be at least one press release per
year, each focusing on the completion of a major milestone rather than on
general project progress.
# Panels (patient and citizen)
The Consortium will organize targeted panels in order to involve patients and
citizens in a two-way dialogue with scientists and medical doctors. The focus
will be on the social, ethical, cultural, economic and legal aspects of
diabetes treatments underlying the innovation and effectiveness of the DRIVE
approach. A pilot panel will be held in Dublin, followed by additional panels
in Italy and Germany which will benefit from the evaluation of the pilot
event.
# Presentations at external events and conferences
The partners will prepare and deliver papers, communications and lectures at
seminars, relevant conferences and workshops at national and international
level. A list of conferences will be developed through the course of the
project with the aim of increasing visibility and sharing of the achieved
results.
# Social media: Twitter and Facebook
DRIVE will use social networks such as LinkedIn, Twitter and Facebook as
dissemination tools and channels. In particular, the project will take
advantage of the well-established LinkedIn connections of each partner and
will create a LinkedIn group to promote and facilitate a dialogue around the
project activities. The Twitter account and the Facebook page of the project
have already been created and will be continuously updated with forthcoming
news and events related to the DRIVE project.
## 2.4 Relationships with other relevant initiatives
The DRIVE project will also continue to link to other relevant international
activities and existing research initiatives in the same field. The partners
will establish links to other European research initiatives related to the
topics of DRIVE where they are currently involved, such as ETPNANOMEDICINE
(RCSI member), the FP7 “NEXT” project (where EXPLORA is one of the main
partners), REDDSTAR (DRIVE clinical collaborators). In addition the project
will create a relationship with the following relevant initiatives:
<table>
<tr>
<th>
**Resource**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
DIABETES
RESEARCH
INSTITUTE
FOUNDATION
</td>
<td>
The Diabetes Research Institute leads the world in _cure-focused research_ .
As the largest and most comprehensive research center dedicated to curing
diabetes, the DRI is aggressively working to develop a biological cure by
restoring natural insulin production and normalizing blood sugar levels
without imposing other risks.
</td> </tr>
<tr>
<td>
JDRF
</td>
<td>
JDRF is the leading global organization funding type 1 diabetes (T1D)
research. Type 1 diabetes is an autoimmune disease that strikes both children
and adults suddenly. JDRF works every day to change the reality of this
disease for millions of people—and to prevent anyone else from ever knowing
it—by funding research, advocating for government support of research and new
therapies, ensuring new therapies come to market and connecting and engaging
the T1D community.
</td> </tr>
<tr>
<td>
EASD
</td>
<td>
The European Association for the Study of Diabetes (EASD) is based on
individual membership and embraces scientists, physicians, laboratory workers,
nurses and students from all over the world who are interested in diabetes and
related subjects. Members are entitled to vote at the General Assembly, which
is held during the Annual Meeting and are eligible for election to the Council
and to the Executive Committee. Membership also provides the possibility of
attending the Annual Meetings of the Association at a considerably reduced
registration fee. Active members receive monthly the official journal of the
Association, Diabetologia, which publishes articles on clinical and
experimental diabetes and metabolism.
</td> </tr>
<tr>
<td>
IDF
</td>
<td>
IDF Europe is the European chapter of the International Diabetes Federation
(IDF). IDF is a diverse and inclusive multicultural network of national
diabetes associations, representing both people living with diabetes and
healthcare professionals. Through its activities, IDF aims to influence
policy, increase public awareness, encourage health improvement, and promote
the exchange of best practice and high-quality information about diabetes
throughout the European region.
</td> </tr>
<tr>
<td>
AMERICAN
DIABETES
ASSOCIATION
</td>
<td>
The American Diabetes Association leads the fight against the deadly
consequences of diabetes and fights for those affected by it. The Association:
* funds research to prevent, cure and manage diabetes;
* delivers services to hundreds of communities;
* provides objective and credible information;
* gives voice to those denied their rights because of diabetes.
</td> </tr>
<tr>
<td>
HUMEN PROJECT
</td>
<td>
The HumEn project brings together six leading European stem cell-research
groups and three industrial partners in a coordinated and collaborative effort
aimed at developing glucose-responsive, insulin-producing beta cells for
future cell-replacement therapy in diabetes.
</td> </tr>
<tr>
<td>
SEMMA
THERAPEUTICS
</td>
<td>
Semma Therapeutics was founded to develop transformative therapies for
patients who currently depend on insulin injections. Recent work led to the
discovery of a method to generate billions of functional, insulin-producing
beta cells in the laboratory. This breakthrough technology has been
exclusively licensed to Semma Therapeutics for the development of a cell-based
therapy for diabetes. Semma Therapeutics is working to bring this new
therapeutic option to the clinic and improve the lives of patients with
diabetes
</td> </tr>
<tr>
<td>
NIDDK
</td>
<td>
The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)
conducts, supports, and coordinates research on many of the most serious
diseases affecting public health. The Institute supports clinical research on
the diseases of internal medicine and related subspecialty fields, as well as
many basic science disciplines.
</td> </tr> </table>
## 2.5 Activities to reach the general public
In addition to the more specific audiences above, the DRIVE project is also
committed to disseminating information about the project and its potential
benefits to the wider general public. To achieve this objective, a specific
programme of public awareness activities has been developed and is presented
below.
<table>
<tr>
<th>
**Activity**
</th>
<th>
**Timetable**
</th>
<th>
**Objectives**
</th>
<th>
**Expected Impact**
</th> </tr>
<tr>
<td>
Press releases to key general press for broad public awareness raising
</td>
<td>
From
Month 6
</td>
<td>
Awareness of DRIVE in selected general press publications
</td>
<td>
Early public awareness raising
</td> </tr>
<tr>
<td>
Identification of key possible target audiences for public awareness raising
and suitable channels of communication
</td>
<td>
From
Month 12
</td>
<td>
To raise visibility and impact of DRIVE activities and results beyond the
research community
</td>
<td>
Greater awareness of DRIVE opportunities and benefits across the broadest
possible range of communities
</td> </tr>
<tr>
<td>
Development of DRIVE dissemination materials for nonspecialist public
audiences
</td>
<td>
From
Month 12
</td>
<td>
To raise visibility and impact of DRIVE activities and results beyond the
research community
</td>
<td>
Pan-European public awareness of DRIVE and effective handling of enquiries
</td> </tr>
<tr>
<td>
Organization of patients and citizen panels
</td>
<td>
From
Month 18
</td>
<td>
Promote the wider public understanding of the full range of DRIVE benefits as
the project progresses
</td>
<td>
Increase in awareness and support for building the future user base.
</td> </tr> </table>
# 3 Completed activities
## 3.1 Logo and website
The creation of the project website included agreement on and introduction of
the project logo, as described above. The website (www.DRIVE-project.eu) has
been established by INNOVA with the assistance of all partners and includes
public partner profiles and the logo; as work proceeds, the Consortium will
supply INNOVA with relevant images and results suitable for public viewing.
**Figure 2: screenshot of the DRIVE homepage**
**Figure 3: screenshot of the DRIVE “the challenge” section**
**Figure 4: screenshot of the DRIVE “our excellence” section**
## 3.2 Social media dedicated pages
The Twitter account and the Facebook page of the DRIVE project have already
been created and will be continuously updated with forthcoming news and events
related to the project.
**Figure 5: the twitter account of DRIVE (@DRIVE4diabetes)**
**Figure 6: the Facebook page of DRIVE**
** ( _https://www.facebook.com/DRIVEforDiabetes_ ) **
## 3.3 DRIVE Outreach Events
From the 15th to the 19th November 2015, DRIVE researchers took part in the
IPITA joint conference in Melbourne ( _http://melbourne2015.org/_ ), one of
the most important congresses in the world on pancreas and islet
transplantation. Garry Duffy (RCSI, DRIVE's Coordinator) and Eoin O'Cearbhaill
(UCD, DRIVE PI) presented on behalf of DRIVE at the conference. DRIVE's Prof
Paul Johnson (Oxford Consortium for Islet Transplant) was there to give a
number of talks on his group’s latest islet transplantation research.
**Figure 7: DRIVE’s coordinator, Garry Duffy (RCSI), presenting DRIVE at IPITA
joint conference in Melbourne, November 2015**
On the evening of the 25th September 2015, DRIVE researchers took part in
Discover Research Dublin 2015 ( _www.discoverresearchdublin.com_ ), an
interactive night of free public engagement events on a wide variety of
research themes. The initiative was funded by the European Commission's
Research and Innovation Framework Programme H2020 (2014-2020) through the
Marie Skłodowska-Curie actions and was hosted by Trinity College Dublin (TCD).
**Figure 8: DRIVE researchers meeting the public at Discover Research Dublin, 25th September 2015**
**Figure 9: DRIVE researchers meeting the public at Discover Research Dublin, 25th September 2015**
**Figure 10: DRIVE researchers meeting the public at Discover Research Dublin, 25th September 2015**
On 21st June 2015, Dr. Liam Burke (DRIVE’s programme manager) lent the support
of the DRIVE Consortium to a fundraising event ( _Lap the Lake_ ) of Diabetes
Ireland, the only national charity in Ireland dedicated to helping people with
diabetes.
**Figure 11: DRIVE’s program manager Liam Burke (RCSI) at the Lap the Lake
charity run organised by Diabetes Ireland**
# 4 Future Plans
The future dissemination activities already planned for DRIVE are set out in
the following table. In particular, the project intends to target the
following international initiatives:
<table>
<tr>
<th>
**Event name**
</th>
<th>
**Date & Place **
</th>
<th>
**Type of**
**Event***
</th>
<th>
**Short description and website (if available)**
</th> </tr>
<tr>
<td>
Controlled Release
Society (CRS)
Annual Meeting
</td>
<td>
Seattle,
Washington,
USA
July 17-20
2016
</td>
<td>
CO
</td>
<td>
With the theme "Advancing Delivery Science & Technology Innovation," this
high-quality CRS event will bring together an international audience of nearly
1,450 attendees from over 50 countries. A dynamic programme committee headed
by Kinam Park promises attendees cutting-edge research, innovation, and
collaboration.
_http://www.controlledreleasesociety.org/meetings/annual/Pages/_
_default.aspx_
</td> </tr>
<tr>
<td>
European Chapter Meeting of the
Tissue Engineering and Regenerative
Medicine
International
Society (TERMIS)
2016
</td>
<td>
Uppsala,
Sweden, June 28th-July 1st
2016
</td>
<td>
CO
</td>
<td>
The theme of the 2016 TERMIS-EU conference in Uppsala, Sweden is "Towards
Future Regenerative Therapies". The goal of the conference is to bring
together the leading experts within the tissue engineering and regenerative
medicine community to present and discuss their latest scientific and clinical
developments. These last years of research, and especially the increased
collaborations between various specialties, have led to tangible improvements
that are now starting to benefit patients. This meeting will not only serve as
an important teaching platform, but will also give young scientists the
opportunity to present innovative studies. The human networking aspect of such
a meeting and encouraging the exchange of ideas and knowledge are equally
important, not only between
scientists, but also with our industrial partners to allow translation to many
patients. _http://www.termis.org/eu2016/_
</td> </tr>
<tr>
<td>
52nd European
Association for the
Study of Diabetes
(EASD) Annual
Meeting
</td>
<td>
Munich,
Germany, 12-16th Sept 2016
</td>
<td>
CO
</td>
<td>
The EASD Annual Meeting has become the world's leading international forum for
diabetes research and medicine. It is held in a different European city each
year.
During the Scientific Programme, all relevant companies involved in diabetes
care and treatment present tomorrow's products and services in the industry
exhibition area.
For the first time at this year's EASD, not only the Industry Symposia on
Monday 12 September but also the new Evening Symposia on Wednesday 14
September and Thursday 15 September offer excellent opportunities to gain
insights into the latest innovations and cutting-edge products in the field of
diabetes.
_http://www.easd-industry.com/_
</td> </tr>
<tr>
<td>
Discover Research
Dublin 2016
</td>
<td>
Dublin, Ireland
30th
September
2016
</td>
<td>
EX
</td>
<td>
Discover Research Dublin is an event funded by the EU under the Horizon 2020
framework as part of European Researchers' Night, which takes place on the
last Friday of every September.
DRIVE researchers participated in DRD 2015, where they interacted with the
general public through talks, demos and chats. The public will again have the
chance to meet DRIVE researchers and to hear about the progress of the
project. The aim of the event is outreach: to demonstrate that research isn’t
an ivory-tower pursuit but has real impacts on everyone’s daily lives.
_http://discoverresearchdublin.com/_
</td> </tr>
<tr>
<td>
28th European
Society for
Biomaterials
Annual Congress
</td>
<td>
Athens,
Greece, 4-8th
September
2017
</td>
<td>
CO
</td>
<td>
The European Society for Biomaterials is a non-profit organization at the
forefront of the scientific community determined to tackle unmet clinical
needs by means of advanced materials for medical devices and regenerative
medicine.
The annual congress is a place where scientists, clinicians, industry and
regulatory affairs experts can network to maximise R&D and commercial
opportunities to the benefit of patients. Our interactive website favours
social networking and is a showcase for the "innovation" created by our
members.
_http://www.esbiomaterials.eu/Cms/Events_
</td> </tr>
<tr>
<td>
16th World
Congress of the
International
Pancreas and Islet Transplant
Association (IPITA)
</td>
<td>
Place TBC
2017
</td>
<td>
CO
</td>
<td>
This is a highly specialised biennial conference that brings together leading
academic, clinical and industrial stakeholders in the field of pancreatic
islet transplantation. The DRIVE Project was introduced at the recent conference
in 2015, and by 2017 the consortium plans to have exciting results to share with
the islet transplant community. 2015 conference: _http://melbourne2015.org/_
</td> </tr>
<tr>
<td>
DRIVE Citizens and
Patients Panels
</td>
<td>
Ireland, Italy
and Germany,
2016-2017
</td>
<td>
CO
</td>
<td>
The use of stem cells and nanotechnology has evoked a public debate about
their ethical dimension. In order to link DRIVE to society through science-
society dialogues on the chances, risks and ethical aspects of DRIVE, **patients
and citizen panels will be organised in the framework of WP8** to discuss
these issues together with their potential benefits and risks. This activity is
expected to help overcome the classical one-way model of communication, in
which scientists act as experts providing information and the public acts as
lay-people receiving it. Engaging in a two-way dialogue between scientists and
patients/public is DRIVE's goal; in this dialogue, both scientists and
non-scientists learn from each other. In addition, political, administrative
and industrial bodies will benefit from the participants' assessments, as
their judgments and associations indicate the level of acceptability for
decision makers.
</td> </tr> </table>
*CO: conference; EX: Exhibition
# 5 Data Management Plan description
The data management plan concerns the datasets generated by the project with
respect to four key attributes: i) a description of the datasets; ii) a
description of the standards and metadata associated with these datasets; iii)
the method that will be employed for sharing these datasets; and iv) a plan
for the long-term archiving of these data.
This Data Management plan aims at providing a timely insight into facilities
and expertise necessary for data management both during and after the DRIVE
research, to be used by all DRIVE researchers and their environment.
Long term archiving of the acquired datasets is very important both in terms
of visibility after the end of the project, as well as for a greater
proliferation in the research community.
## 5.1 Data set
Results generated by the participants during the course of, and as a result of,
the DRIVE project will be owned by the participant(s) generating them and will
be made available, for non-commercial use and only during the project, to all
beneficiaries that ensure their confidentiality, as foreseen in the DRIVE
Consortium Agreement and in the Grant Agreement signed with the EC. When a
result is generated jointly, it will be jointly owned (unless the participants
concerned agree on a different solution ahead of invention).
An internal content management system (CMS, DRIVE deliverable 1.1) has been
developed by Innova. The access is allowed to project partners only through
personalised login data and it will be used as a secure system to share
confidential data between DRIVE partners.
## 5.2 Standard and metadata
The DRIVE partners commit to making their best effort to deposit, at the same
time, the research data needed to validate the results presented in deposited
scientific publications into an open access online repository.
**In compliance with Horizon 2020 rules, the results obtained will be
published only after proper IP protection with the written approval of all
partners who have contributed to the achievement of the results.**
We intend to share our datasets in a publicly accessible disciplinary
repository, 'e-publications@RCSI', using the descriptive metadata required and
provided by that repository. This ensures the availability, dissemination and
preservation of publications open to all. The repository, managed by the RCSI
Library, provides a robust and stable archive of RCSI's scholarly output. All
archive content is freely available on the web, is discoverable by a wide
range of search engines, and optimizes worldwide access to published work.
## 5.3 Data sharing
Any publishable scientific and technical result arising from the scope of
DRIVE will be subject to a double Open Access strategy. Initially, the
published article or the final peer-reviewed manuscript will be archived by
depositing it in an online repository after or alongside its publication,
according to the requirements of "green" open access. However, if the embargo
period requested by the scientific publisher exceeds the 6-month limit set by
the EC, the publication will be moved to "gold" open access, with immediate
open access granted by the scientific publisher.
## 5.4 Archiving and preservation
To ensure long-term access to the data and results obtained by the DRIVE
consortium, the internal content management system (CMS) of DRIVE mentioned in
section 5.1 will be used. A "guide for private documents management" has been
set up by Innova and distributed to all partners.
# Preamble
The iPSpine data management plan (DMP) provides an initial overview of the
data and information collected within and throughout the iPSpine project. The
DMP shows the interrelation of the data collecting activities within and
between work packages. Furthermore, the DMP also links these activities to the
iPSpine project partners and describes their responsibilities with respect to
data handling.
The DMP is intended to be a ‘living document’, which will be updated over the
course of the project when appropriate and at least at every reporting period
of the project.
This is the first version of the DMP which is part of work package 9 ‘Project
Management’.
This document made use of the HORIZON 2020 FAIR DATA MANAGEMENT PLAN TEMPLATE
and was written with reference to the Guidelines to FAIR data management in
Horizon 2020 [1] and the GDPR (Regulation (EU) 2016/679).
# 1\. Data Summary
## 1.1 Purpose of data collection/generation
iPSpine will generate data in a broad range of R&D activities in order to
achieve its objectives within the project. Research data will be generated by
the project partners. These include a large amount of data, Standard Operating
Procedures (SOPs) and guidelines. Table 1 summarizes the type of data and data
sets that are being generated in the project. Data will be made available
through publications and via two interlinked platforms: the iPSpine ‘Open-
access knowledge platform’ (WP3) and ‘smart digital ATMP management platform’
(WP4) (depending on the type of data). The table will be updated throughout
later versions of this data management plan.
<table>
<tr>
<th>
</th>
<th>
**Types of data generated in iPSpine**
</th> </tr>
<tr>
<td>
</td>
<td>
Data description
</td>
<td>
Main Partners
</td>
<td>
WP
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Without personal data
</td>
<td>
Including personal data
</td> </tr>
<tr>
<td>
**1**
</td>
<td>
Research data from _in vitro_ and _ex vivo_ experiments
</td>
<td>
All partners involved in these
WPs
</td>
<td>
1-5
</td>
<td>
\-
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Clinical research data from _in vivo_ experiments
</td>
<td>
UU, UN, UdM
</td>
<td>
6
</td>
<td>
6 #
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Interviews with professionals, patients and other users
</td>
<td>
UMCU
</td>
<td>
\-
</td>
<td>
7
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Interviews with iPSpine researchers on ATMP
development
</td>
<td>
TU/e
</td>
<td>
\-
</td>
<td>
4
</td> </tr> </table>
# Applicable only to Partners UU and UN. It refers to personal data of the
clients participating in the clinical studies described in WP6.
**Specify if existing data is being re-used (if any), and to whom this might
be useful**
Applicable to the following partners:
**ARI** : In addition, ARI will review and re-use existing data from
previous spine-related research projects (i.e. TargetCaRe and projects funded
by the AO Foundation and AOSpine), as well as data available in data archives
and digital repositories. These previous data can be combined with new iPSpine
data to provide comparison parameters.
## 1.2 Types and formats of data that will be generated throughout the
project
This is an early stage identification of standards; the consortium will define
at a later stage which formats of the raw data and the final data are most
appropriate for sharing through the aforementioned paths:
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Text based documents**
**(e.g. reports, manuscripts, deliverables, interviews, informed consents)**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
All
</td> </tr>
<tr>
<td>
Format
</td>
<td>
.doc, .docx, and .txt file formats
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
Gigabytes
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
UU: university network drives (U-drive/O-drive) and local storage when
employing a laptop. The U-drive is used by personnel for the tasks/activities
of the specific person. The O-drive is used for documents that are commonly
shared between the members of the UU team.
Data from local storage will be updated on a regular basis; based on
discussions with the Data Management team of the UU, it will be decided which
storage mode is best to use (U-/O-drive, OneDrive (cloud), or Yoda).
Research data (including finalized protocols and raw research data from in
vitro & in vivo data) is stored at the E-lab.
TU/e: Local drive, central server, mirrored backup NAS, cloud-based disk
(collective univ.-based)
UMCU: Storage on local UMC Utrecht G drive.
NUIG: OneDrive for Business NUIG and M:drive
ARI: the data will be directly collected on computers and stored in project
folders in the local "I" drive at AO Foundation Davos.
PharmaLex: Storage on Server
</td> </tr>
<tr>
<td>
Comments
</td>
<td>
UU: To minimize data size, the UU iPSpine group will work with OneNote for
minutes of the group meetings and share files via Sharepoint.
UMCU: Considers both primary and secondary data (transcripts of interviews as
well as their interpretation after use of the N-Vivo program)
ARI: Each person has a user name and a password to enter the local drive.
TU/e: All encrypted
PharmaLex: Considers only secondary data of in vitro and in vivo study reports
(such as for review or for interpretation for regulatory purposes)
NUIG: Local drive at the Genomic and Screening Core Facility NCBES, Biomedical
Science Building, M:drive and OneDrive for Business
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Microsoft Office research data: raw data based on continuous and binary
data**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
All, except for Catalyze and ReumaNL
</td> </tr>
<tr>
<td>
Format
</td>
<td>
.xls, .xlsx, .csv
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
Idem as text based documents
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Idem as text based documents
</td> </tr>
<tr>
<td>
Comments
</td>
<td>
Idem as text based documents
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Presentations**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
All
</td> </tr>
<tr>
<td>
Format
</td>
<td>
.ppt, .pptx.
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
Idem as text based documents
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Idem as text based documents
</td> </tr>
<tr>
<td>
Comments
</td>
<td>
Idem as text based documents
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Illustrations and graphic design**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
All
</td> </tr>
<tr>
<td>
Format
</td>
<td>
Microsoft Visio (format: .vsd), GraphPad Prism (format: .pzf, .pzfx) and
Photoshop (various formats possible, mostly .png); these will be made available
as .jpg, .psd, .tiff, .png and/or .ai files. PDFs, PIDs and layouts will
preferentially use inkscape.org, an open-source vector graphics editor
(format: .svg), and will be made available as .png, .jpg and .pdf files.
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
Idem as text based documents
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Idem as text based documents
</td> </tr>
<tr>
<td>
Comments
</td>
<td>
Idem as text based documents
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Audio files**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
TU/e, UMCU
</td> </tr>
<tr>
<td>
Format
</td>
<td>
MP3 or WAV
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
GBs
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
UMCU: Storage on local UMC Utrecht G drive.
</td> </tr>
<tr>
<td>
Comments
</td>
<td>
UMCU: Concerns primary data
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Magnetic Resonance Imaging**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
UU
</td> </tr>
<tr>
<td>
Format
</td>
<td>
DICOM files
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
0.5-2 MB (~20-150 kB per MR slice, depending on the size of the object; ~500 kB per CT slice)
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Stored online on a server (Xero platform) which can be accessed by a UU app to
visualize and analyse data.
</td> </tr>
<tr>
<td>
Comments
</td>
<td>
If and when files are shared with Partners, they will be anonymized and
zipped to minimize the size of the data transfer. It is anticipated that these
data will be shared at least with UUlm and SpineServe
</td> </tr> </table>
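The anonymise-and-zip step mentioned in the table above could be sketched as follows. This is a stdlib-only illustration with assumed file naming; genuine DICOM de-identification would additionally scrub header tags (e.g. with a dedicated tool such as pydicom):

```python
import hashlib
import zipfile
from pathlib import Path


def zip_anonymized(files: list, archive: Path, salt: str) -> list:
    """Copy imaging files into a compressed zip archive under pseudonymous
    names so that identifying file names never leave the originating site.
    NOTE: this renames files only; DICOM headers must be de-identified
    separately before sharing. Returns the pseudonymous names used."""
    names = []
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            # Salted hash -> stable pseudonym without exposing the name.
            pseudonym = hashlib.sha256((salt + f.name).encode()).hexdigest()[:16]
            zf.write(f, arcname=pseudonym + f.suffix)
            names.append(pseudonym + f.suffix)
    return names
```

The salt keeps the mapping from pseudonym back to patient file reproducible at the originating site only, while the recipient sees opaque names.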
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Video files**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
UN, TU/e, UBern
</td> </tr>
<tr>
<td>
Format
</td>
<td>
Quicktime Movie or Windows Media Video
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
TBs
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Idem as text based documents
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Mass-spectrometry (LC-MS/MS)**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
NUIG
</td> </tr>
<tr>
<td>
Format
</td>
<td>
.RAW, .csv
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
20GB
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Local data server at Conway Core Facility University College Dublin, M:drive
and OneDrive for Business at NUIG
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Mass-spectrometry (UPLC)**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
NUIG
</td> </tr>
<tr>
<td>
Format
</td>
<td>
.DAT, .EXP, .CKS, .csv, .pdf
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
20GB
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Local drive at NIBRT, Dublin, M:drive and OneDrive for Business at NUIG
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Confocal imaging**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
NUIG
</td> </tr>
<tr>
<td>
Format
</td>
<td>
.OIF, .OIB, .tif, .avi
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
50GB
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Local shared M:drive and OneDrive for Business at NUIG
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**qPCR**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
NUIG, ARI, UU
</td> </tr>
<tr>
<td>
Format
</td>
<td>
.eds, .xls, .csv
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
50MB
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Idem as text based documents
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Flow-cytometry**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
NUIG
</td> </tr>
<tr>
<td>
Format
</td>
<td>
.fcs
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
50MB
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Local drive at Flow Cytometry Core Facility NCBES, Biomedical Science
Building, M:drive and OneDrive for Business at NUIG
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**ELISA**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
NMI-RI
</td> </tr>
<tr>
<td>
Format
</td>
<td>
.exp, .csv
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
50MB
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Local drive at Genomic and Screening Core Facility NCBES, Biomedical Science
Building, M:drive and OneDrive for Business at NUIG
</td> </tr> </table>
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Genomic data**
</th> </tr>
<tr>
<td>
Project Partners
</td>
<td>
UU
</td> </tr>
<tr>
<td>
Format
</td>
<td>
Read data (general): CRAM, BAM, Fastq
Assembled and annotated sequence data: flat file formats (FASTA, XML),
Multiple Sequence Alignment (MSA) formats
Quantitative tabular data with minimal metadata: .csv; .tab; .xls; .xlsx;
.txt; .mdb; .accdb; .dbf; .ods
Quantitative tabular data with extensive metadata: .por; SPSS .sav; .dta
Qualitative data: .xml; .rtf; .txt; .html; .doc; .docx
</td> </tr>
<tr>
<td>
Size of data
(approximately)
</td>
<td>
Sequencing data ±10GB/sample, Other files: MBs
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
Idem as text documents
</td> </tr>
<tr>
<td>
Comments
</td>
<td>
UU: e.g. sequencing (DNA, RNA), annotation of features, protein structural
information, gene expression profiles, alignment data, chromosomal mapping,
phylogenetic trees, Single Nucleotide Polymorphisms (SNPs), functional
genomics, proteomics
</td> </tr> </table>
These file formats have been chosen because they are accepted standards and in
widespread use. Files will be converted to open file formats where possible
for long-term storage.
# 2\. FAIR data
## 2\. 1. Making data findable, including provisions for metadata
For sharing of the output of finalized research data during the Project (e.g.
deliverables, publications, other dissemination activities) the consortium
uses Microsoft Sharepoint Teamsite which is hosted from Utrecht University and
is fully compliant with regulations for data security and privacy.
All data files used in the Sharepoint Teamsite are related to project
management activities and include the term "iPSpine", followed by a file name
which briefly describes the content, followed by a version number (or the term
"FINAL"), followed by the short name of the organisation which prepared the
document (if relevant). An example of the Teamsite is provided below.
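As an illustration, the naming convention above can be expressed as a small helper. The exact separator and version pattern below are assumptions made for this sketch, not the consortium's official rule:

```python
import re

# Assumed pattern (illustrative only):
# iPSpine_<description>_<vN or FINAL>[_<organisation short name>]
NAME_RE = re.compile(
    r"^iPSpine_(?P<desc>[A-Za-z0-9-]+)_(?P<version>v\d+|FINAL)(?:_(?P<org>[A-Za-z/]+))?$"
)


def make_name(description: str, version: str, org: str = "") -> str:
    """Compose a Teamsite file name following the convention sketched above."""
    parts = ["iPSpine", description, version]
    if org:
        parts.append(org)
    return "_".join(parts)
```

A pattern like this lets a project office lint uploaded file names automatically instead of relying on manual review.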
UU: A folder structure is created that is guided by the work plan description
of the iPSpine Action. The folder structure is as follows: work packages;
within the work packages, the different tasks in which the UU is involved;
within the tasks, separate folders containing the different experiments
conducted. Raw and analyzed data will be stored separately per task as defined
within iPSpine. Raw data will be stored in a separate folder marked as "read
only"; these data will be stored in E-lab. Master copies are maintained at
one location; for this purpose the team stores data on E-lab. Back-up will be
organized with Yoda.
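The folder layout described above can be sketched as follows; the work package, task and experiment names are placeholders, not the actual iPSpine work plan:

```python
import stat
from pathlib import Path

# Placeholder layout (illustrative): work package -> task -> experiments.
LAYOUT = {
    "WP1": {"Task1.1": ["Experiment_A", "Experiment_B"]},
    "WP2": {"Task2.3": ["Experiment_C"]},
}


def create_tree(root: Path) -> None:
    """Create WP/task/experiment folders with separate 'raw' (read-only)
    and 'analyzed' sub-folders, mirroring the structure described above."""
    for wp, tasks in LAYOUT.items():
        for task, experiments in tasks.items():
            for exp in experiments:
                base = root / wp / task / exp
                (base / "analyzed").mkdir(parents=True, exist_ok=True)
                raw = base / "raw"
                raw.mkdir(parents=True, exist_ok=True)
                # Drop the owner's write bit so raw data stays read-only.
                raw.chmod(raw.stat().st_mode & ~stat.S_IWUSR)
```

Making the raw-data folder read-only at the file-system level is one simple way to enforce the "raw data is read only" policy outside a dedicated ELN.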
**Outline the discoverability of data (metadata provision)**

This will be updated later in the project.
**Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?**
UU: Data that is stored at YODA will receive a DOI and become open access upon
publication of the respective manuscript. Specifically RNAseq and Chip-seq
data will be submitted to ArrayExpress or GEO. Regardless of the conditions
all data will comply with the MISEQ standards.
**Outline naming conventions used**
All Partners should use the same approach for naming conventions. This is
currently being discussed at the ESC level and will be communicated to the
consortium.
**Outline the approach for clear versioning**
All Partners should use the same approach for versioning. This is currently
being discussed at the ESC level and will be communicated to the consortium.
**Specify standards for metadata creation (if any). If there are no standards
in your discipline describe what metadata will be created**
At the consortium level, the following will be organized for proper data
management:

At least the first and the corresponding author of any publication should have
complete access to the data they are reporting on, and anyone who wants a
replication package should only need to contact the corresponding author of
the publication.

Upon data creation, every institution is responsible for storing and
eventually archiving that data. If data from various institutions need to be
put together to create a publication, there will inevitably be one
corresponding (or last) author who leads that particular publication. For this
to happen, the data need to be transferred and congregated in one institution
(or in as many of the institutions as work together on that publication) to
properly interpret and analyze the data and to write the article. The
corresponding author's institution will be responsible for these data and will
archive, within their own institution, all the data that was required to
create the publication, so that if someone contacts them to gain access,
access can be granted easily.

All Partners should use the same approach for metadata. This is currently
being discussed at the ESC level and will be communicated to the consortium.
### 2.2. Making data openly accessible
Data will be made "as open as possible, as closed as necessary". In this
respect, the consortium aims to make research data publicly available where
possible and make sure that data is closed where necessary for protection of
the results of the project, as described in article 23 of the grant agreement.
Research data which is created within the project is owned by the partner who
generates it (Art. 26 of the grant agreement). Each partner must disseminate
its results as soon as possible unless there is a legitimate reason to protect
the results.
Where possible all data will be licenced using CC BY 4.0 or later (Creative
Commons Corporation). Where mandated by the publisher, the individual journal
licence where the data is published, will be utilised. More restrictive,
custom licenses will only be used for commercially sensitive data. The
researchers will decide at a later stage for which data an embargo period will
apply as well as for the duration of such embargo, especially after the
project has finished.
_**Figure 1.** Open access to scientific publications and research data in the
wider context of dissemination and exploitation. From Guidelines to the Rules
on Open Access to Scientific publications and Open Access to Research Data in
Horizon 2020) _
Each beneficiary must ensure open access (free of charge online access for any
user) to all peer-reviewed scientific publications (Article 29 of the Grant
Agreement). Research data needed to validate the results in the scientific
publications coming from the project must be deposited in a publicly
accessible data repository, as depicted in figure 1.
Data must be made available to project partners upon request, including in the
context of checks, reviews, audits or investigations, following the
regulations described in Art. 25 of the Grant Agreement. In the case of
personal data, data will be anonymized before it is made available to others.
Data will be made accessible and available for re-use and secondary analysis.
The smart digital ATMP management platform will be based on the latest
information technology developments from the fields of business process
management and adaptive case management. The platforms will be open to the
public only after patent filing, publication and/or completion of the project.
### 2.3. Making data interoperable
The iPSpine project aims to collect and document the data in a standardised
way to ensure that the datasets can be understood, interpreted and shared
alongside accompanying metadata and documentation. Generated data will be
preserved on institutional intranet platforms until the end of the Project.
In addition to these datasets that are not intended for interoperability and
belong to and are hosted with individual partners, the consortium uses two
platforms to collaborate on specific datasets.
1. **Open-access knowledge-sharing platform for high-quality data on the epigenetic, genetic, phenotypic, transcriptional and proteomic profiles**
An open-access knowledge-sharing platform will be generated within WP3 to
collect and share high-quality data on the epigenetic, genetic, phenotypic,
transcriptional and proteomic profiles of cells cultured with or without the
biomaterials. Firstly, a prototype platform will be developed by partner NMI-RI
and its linked third party, the bioinformatics group core facility of the
University of Tuebingen (QBIC). This platform will be accessible to the
iPSpine partners for depositing and accessing data, protocols and tools. The
iPSpine partners will as such perform the usability and functionality testing
of the platform, to further optimize the platform towards a first-viable
product with open-access at project end. In this platform data will be
formatted and stored in such a way that it allows integrative bioinformatics
analysis of big data towards pattern identification, pathway and network
analysis.
The suggested data management setup follows the FAIR guidelines. iPSpine data
from qPortal is disseminated to public repositories using an automated
interface. Furthermore, to make sure that the data that results from raw data
processing pipelines is as “findable” as the actual data, DOIs (digital object
identifiers) will be utilised. Adoption of open data standards is crucial for
the data interoperability. To improve data ‘Interoperability’, where
applicable, we will adopt open standard data formats (mzML, mzIdentML, mzTab,
etc) for iPSpine datasets. qPortal already uses a variety of public metadata
ontologies such as the vocabulary taken from the NCBI taxonomy database. For
mass spectrometry, metadata following the Proteomics Standards Initiative
(PSI) vocabularies is automatically extracted. In addition, qPortal supports
open standards for data sharing like ISA-Tab. We are working on export
functionality to other format standards like GEO to disseminate data and
metadata to public repositories. The Quantitative Biology Center (QBiC) will
enhance the qPortal infrastructure should important metadata vocabularies for
standard data types, as they are used in the project, be missing. If it is
necessary to produce uncommon data or metadata, mappings to more commonly used
ontologies will be provided. We will also promote that the software produced
supports these standards. We follow FAIR guidelines and provide tested and
versioned software by utilising proven tools like Maven, Travis and GitHub.
The reproducibility of data analysis will be guaranteed by developing state-
of-the-art processing pipelines for 'omics' data. Most data analysis
procedures are not performed using monolithic software, but by deploying
complex pipelines. Tuebingen University recently joined the nf-core community
(https://github.com/nf-core), which aims at collecting high-quality scientific
workflows based on Nextflow as a workflow engine. As part of the open software
practices, the most frequently used metabolomics workflows will be ported to
Nextflow and made available through nf-core. Furthermore,
data will be formatted and stored in a manner allowing integrative
bioinformatics analysis of big data towards pattern identification, pathway
and network analysis. In applications where reference data is needed for
analysis, such as genetic or proteome analyses, standard reference genomes
from EnsEMBL, UCSC and/or NCBI can be used. All parameters used in runs of
different pipelines are stored as metadata with the results, facilitating
reproducibility.
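Recording run parameters as metadata next to the results, as described above, might look like the following generic stdlib sketch; the file names and fields are illustrative, not the actual qPortal format:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def save_with_metadata(results: bytes, params: dict, out_dir: Path) -> Path:
    """Write pipeline results alongside a JSON metadata file recording the
    exact parameters used, so the run can be reproduced later."""
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "results.bin").write_bytes(results)
    metadata = {
        "parameters": params,
        # Checksum ties the metadata record to this exact result file.
        "result_sha256": hashlib.sha256(results).hexdigest(),
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    meta_file = out_dir / "metadata.json"
    meta_file.write_text(json.dumps(metadata, indent=2, sort_keys=True))
    return meta_file
```

Keeping a checksum and the full parameter set beside every result makes it possible to verify later that a stored output really came from a given pipeline configuration.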
2. **Open digital platform to guide the design of in vitro/ex vivo Proof-of-Concept demonstration for advanced therapies**
Furthermore, within iPSpine an open digital platform will be developed to
guide design of in vitro/ex vivo Proof-of-Concept demonstration for advanced
therapies, complying with the 3Rs principles. Based on guideline requirements
(Task 4.1) and regulatory requirements (Task 7.6-8) a smart digital platform
will be designed and developed for more efficiently managing the innovative
preclinical translation process of ATMPs and biomaterials. In-depth interviews
with experts in ATMPs/biomaterials (consortium partners and advisors) will be
performed to extract knowledge on the general structure and bottlenecks of the
translation process, including the testing procedures and the resulting
decisions that the translation requires for a specific ATMP/biomaterial. Based
on these interviews, both a template ATMP/biomaterial translation process with
decision points as well as different template testing procedures for
ATMPs/biomaterials will be designed. The template process and procedures will
be best practices that can be easily adjusted to meet the needs for a specific
ATMP/biomaterials translation process. The platform will use the templates to
provide advanced automated support for the flexible design and execution of
the complete translation process. Innovative information technology from the
fields of business process management and adaptive case management will serve
as foundation for the platform to streamline the translation process and
remove inefficiencies such as rework and other development bottlenecks. In
addition, the platform will also keep track of different regulatory
requirements to significantly improve the quality and efficiency of the
translation process. The platform will support the smart instantiation and
execution of translation processes for new ATMPs/biomaterial. This includes
the instantiation and execution of related testing procedures and their
follow-up decisions. Decision points within the translation process, and the
data sources upon which decisions are based, are also registered in the
platform, which will help to speed up the translation processes significantly
and make them more efficient. The platform will also check, register and
report on the compliance of executed translation processes, and of the
performed testing procedures and decisions, with pre-identified regulatory
requirements.
Development of the platform and data for it will be generated from the
consortium (WP1-7) using the iPS-NLC:biomaterials ATMP as a show case. To
validate the platform and demonstrate its potential, the platform will be used
retrospectively near the end of the program to determine how the process could
have been done more efficiently. Although the platform will initially be
specific to the ATMP/biomaterials developed in this program, its architecture
and processes may be reused and translated thereafter to include other
ATMPs/biomaterials and targets. Thus, the smart ATMP/biomaterial translation
process management platform will become an innovative solution that enables a
speed-up in the effective development of new ATMPs/biomaterials in line with
the 3Rs philosophy.
The platform collects data generated by iPSpine researchers in experiments. We
aim to store this data using ontologies, e.g. OSCI (
_http://www.ontobee.org/ontology/OSCI_ ). However, stem cell ontologies are
still under development; there is not yet a well-accepted standard. The
choice of ontology will be made in consultation with the researchers.
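A minimal sketch of what ontology-based annotation of an experiment record could look like. The term ID and label below are invented placeholders, since the ontology has not yet been chosen; the actual terms would come from the selected stem cell ontology.

```python
# Illustrative sketch: pair free-text experimental fields with ontology terms
# so records become machine-findable. Term IDs/labels are placeholders.

record = {
    "experiment": "differentiation run 12",
    "annotations": [
        # each annotation links a field value to a (hypothetical) ontology term
        {"field": "cell_type", "value": "iPS-derived cell",
         "term": {"id": "ONT:0000001", "label": "induced pluripotent stem cell"}},
    ],
}

def terms_for(record, field_name):
    """Collect the ontology term IDs attached to a given field."""
    return [a["term"]["id"] for a in record["annotations"] if a["field"] == field_name]

print(terms_for(record, "cell_type"))  # ['ONT:0000001']
```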
**2.4. Increase data re-use (through clarifying licences)**
Currently the topic of data re-use through clarifying licenses does not apply
to the iPSpine project. If this becomes applicable later in the project, the
DMP will be updated.
# 3\. Allocation of resources
Data management of the iPSpine project will be done as part of WP9, and UU, as
project coordinator, will lead the data management efforts in the project. UU,
as well as all other partners, have allocated a part of the overall budget
(including person months) to WP9 to cover these activities. Costs related to
open access of scientific publications are eligible for funding as part of the
H2020 grant, and are covered by the budget of the individual partners. Within
WP3 and WP4, two platforms will be developed by designated Partners, and the
costs for data storage in these platforms are covered by the budget of the
individual Partners (i.e. NMI-IT and TU/e). NMI-RI has a linked third party
involved for this specific task in the iPSpine project: QBiC has received
start-up funding from the German Research Foundation to build up a
bioinformatics and data management support infrastructure.
# 4\. Data security
For the duration of the project, all research data will be stored at the
individual partner’s storage system. Each partner is responsible for ensuring
that the data is stored safely and securely and in full compliance with EU
data protection legislation. For data that is transferred to a data repository
during or after the project, all responsibilities concerning data security and
recovery will shift to the repository chosen for storing the dataset. A
periodic evaluation of privacy risks related to the data processing activities
of the Project will be conducted and reported in line with the reporting
periods of the Project.
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Provisions in place for data security**
</th> </tr>
<tr>
<td>
**UU**
</td>
<td>
Data stored on OneDrive/Surfdrive by the individual scientists; final
versions of the documents and the raw data are placed in YODA as repository.
</td> </tr>
<tr>
<td>
**UN**
</td>
<td>
UN-cloud University of Nantes. Personal hard copy securely stored at
University of Nantes
</td> </tr>
<tr>
<td>
**TU/e**
</td>
<td>
Data will be stored in a datalab environment such as DataVerse or iRODS; a
data archive is available via 4TU.ResearchData. To safeguard the privacy of
patients while retaining the ability to trace them if needed, we request
that research organizations providing experimental data for the smart
digital ATMP platform pseudonymise their data, i.e., replace each patient ID
by a unique number that TU/e cannot link to the patient, and anonymise all
other private data of the patient.
</td> </tr>
<tr>
<td>
**UMCU**
</td>
<td>
G drive of the UMC Utrecht Julius Center (which is the secured drive where all
research is stored)
</td> </tr>
<tr>
<td>
**NUIG**
</td>
<td>
The raw data are stored on an institutional core facility data server or hard
drive. These data are backed up and securely stored, together with all analysed
data and research files, on the NUIG network (M:drive and OneDrive for
Business) for long-term storage. M:drive is used to store and collaboratively
share the research data among iPSpine researchers in NUIG.
</td> </tr>
<tr>
<td>
**UULM**
</td>
<td>
All data are stored on the institute server with daily backups to the
university backup system
</td> </tr>
<tr>
<td>
**UBERN**
</td>
<td>
Secured institutional storage. Further information: Informatikdienste Bern,
[email protected]
</td> </tr>
<tr>
<td>
**INSERM**
</td>
<td>
INSERM data are handled in accordance with the General Data Protection
Regulation (GDPR) cited above.
</td> </tr>
<tr>
<td>
**NMI-RI**
**QBiC**
</td>
<td>
Project related data and metadata at QBiC are stored on a password-secured,
geographically redundant storage system which is backed-up continuously. Data
integrity is guaranteed by a RAID system. Access via the web interface of
qPortal is safeguarded in two layers. User credentials allow the use of the
general portal functionality. Data of a project is stored in one or more
workspaces of our data management system openBIS. Multiple users can be
assigned to a workspace. Users can only create, access or download project
data and metadata if they are assigned to the respective workspace. Single
sign-on (SSO) access control achieved via the Lightweight Directory Access
Protocol (LDAP) is used to connect the two layers. The same credentials can
be used to download data via the command line, if a user has access to the
respective workspace.
</td> </tr>
<tr>
<td>
**ARI**
</td>
<td>
All project related research data will be stored in the password-secured
storage drive of the AO-IT (Information Technology Group in the Support Units
Department of the AO Center). The access, handling, storage and backups of the
data will be planned and controlled by the AO-IT department according to
internal guidelines, which include daily backups and mirroring the server to
an offsite location.
</td> </tr>
<tr>
<td>
**SHU**
</td>
<td>
All laboratory procedures will be recorded in laboratory
notebooks recording methodology and results, all data files containing results
from experimental analysis will be cross referenced to enable exact
experimental procedures to be linked to results. All data files will contain
clear descriptions of variables under investigation. All lab books will be
checked and countersigned by the line managers to confirm the recordings are
correct and that all documentation is clear during regular briefings.
All primary data generated by the group will be stored locally and backed up
immediately onto the University research storage facility (Q:drive). A shared
folder will be provided for the project. Access to the folder is restricted to
researchers working on the project. The primary copy of the data is stored on
a storage array located in one of the university's data centres. As data is
written it is replicated over a secure private network to a storage array
located in the other data centre. This provides an up to date second copy of
the data providing excellent disaster recovery capabilities. Access to the
Q:drive over the network is secured by a number of methods. Users are required
to enter a valid username and password before access is permitted. The service
is protected from malicious attack by firewalls and anti-virus software.
Systems are patched on a regular basis to protect against known
vulnerabilities. All data transfers over the internet are encrypted.
_http://research.shu.ac.uk/rdm/research-store.html_
At completion of the study, all data will be archived in the SHU Research
Data Archive (SHURDA, http://shurda.shu.ac.uk). The University retention
schedule stipulates that data will be stored for 10 years after the last time
a third party requested access to it.
</td> </tr>
<tr>
<td>
**UCBM**
</td>
<td>
All data will be stored in a dedicated folder on the UCBM server and saved
regularly (back-up every night). The access to the folder will be given only
to our staff and closed to third parties.
</td> </tr>
<tr>
<td>
**NTrans**
</td>
<td>
Printed data is securely stored within the NTrans facility. Furthermore,
printed data is scanned and stored in digital format in a password protected
Cloud-based database. This Cloud-based storage ensures instant storage and
back-up of all data.
A back-up of the digital database is locked in a safe to ensure long term
preservation.
Storage and sharing of personal data of NTrans personnel (date of birth,
social security number, employment contracts) is done within the OwnCloud
database. With respect to laws on private information, NTrans personnel have
signed a consent form to allow keeping personal records in its administration
and to allow sharing of this data with regulatory bodies such as accountants
and grant administration offices.
</td> </tr>
<tr>
<td>
**UdM**
</td>
<td>
Experimental data are stored on institutional secure storage backup and on
secure Cloud Octopus Backup (rented to and hosted by Computer Services )
</td> </tr>
<tr>
<td>
**MU**
</td>
<td>
Under UM policy, data is securely stored, managed and accessed using BOX, a
highly encrypted, secure, online (cloud-based) environment (password-protected
links using a double-authentication system; folders can restrict permissions
and set expiration dates).
</td> </tr>
<tr>
<td>
**SpineServ**
</td>
<td>
All data are stored on our local server with daily backups and yearly backup
in different places
</td> </tr>
<tr>
<td>
**HKU**
</td>
<td>
All project related information will be stored on a password-secured storage
system which is backed up regularly within the investigator’s lab. HKU also
has a data repository facility for the storage of data with restricted access
and data sharing.
</td> </tr>
<tr>
<td>
**PharmaLex**
</td>
<td>
All project related information is stored on a password-secured storage system
which is backed-up continuously. PharmaLex does not create or store any
research data.
</td> </tr>
<tr>
<td>
**Catalyze**
</td>
<td>
All project related information is stored on a password-secured storage system
which is backed-up continuously. Catalyze does not create or store any
research data.
</td> </tr>
<tr>
<td>
**ReumaNL**
</td>
<td>
ReumaNL does not create or store any research data.
</td> </tr> </table>
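The pseudonymisation requested of data providers in the table above (replacing each patient ID with a unique number that the receiving platform cannot link back to the patient) could be sketched as follows. Function and field names are illustrative assumptions; the key table linking codes to patients stays with the providing organization and is never transferred.

```python
import secrets

# Sketch of provider-side pseudonymisation: each patient ID is replaced by a
# random code before data leave the organization. Only the provider keeps
# key_table, which maps original IDs to codes. Names are illustrative.

def pseudonymise(records, key_table):
    """Replace 'patient_id' in each record with a stable random code."""
    out = []
    for rec in records:
        pid = rec["patient_id"]
        if pid not in key_table:            # reuse the code for repeat records
            key_table[pid] = secrets.token_hex(8)
        out.append({**rec, "patient_id": key_table[pid]})
    return out

key_table = {}                              # retained by the data provider only
shared = pseudonymise([{"patient_id": "P-001", "assay": "MRI"}], key_table)
# 'shared' can be transferred; re-identification requires access to key_table
```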
# 5\. Ethical aspects
This section deals with ethical and legal compliance. Data protection and good
research ethics are major topics for the iPSpine consortium.
iPSpine partners have to comply with the ethical principles set out in Article
34 of the Grant Agreement. This article states that all activities must be
carried out in compliance with:
* ethical principles (including the highest standards of research integrity)
* applicable international, EU and national law.
There will be regular ethics checks for all ethical aspects concerning human
participants, human cells/tissues and animals, for all EU and non-EU partners
involved in the iPSpine project. To enable structured ethics checks, an
overview table will be generated and uploaded to the team site. Each Partner
is responsible for uploading the ethics-related documents to the designated
file and informing the coordinator of ethical approval.
## 5.1 Informed Consent
Informed consent forms will be provided to any individual participating in
iPSpine interviews, workshops or other research activities which may lead to
the collection of data that will ultimately be used in the project. An example
of an Informed Consent Form is provided in the Annex of this document. Signed
informed consent forms are collected by the Partner leading the activity and
stored appropriately to meet the GDPR.
<table>
<tr>
<th>
**Partner**
</th>
<th>
**WP**
</th>
<th>
**Collecting informed consent forms?**
</th> </tr>
<tr>
<td>
**UU**
</td>
<td>
6
</td>
<td>
Yes, client owned dogs that will participate in the clinical trial described
in WP6 will be informed and provided with an informed consent form. This is
due in year 4 of the Project
</td> </tr>
<tr>
<td>
**UN**
</td>
<td>
6
</td>
<td>
No
</td> </tr>
<tr>
<td>
**UMCU**
</td>
<td>
1, 7
</td>
<td>
Yes, informed consent will be obtained in the interviews
</td> </tr>
<tr>
<td>
**UBern**
</td>
<td>
1
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**SHU**
</td>
<td>
1, 3
</td>
<td>
Yes, informed consent is obtained from patients/relatives for disc tissue
collection. Furthermore, informed consent is obtained from patient users who
join the local patient user groups.
</td> </tr> </table>
## 5.2 Confidentiality
iPSpine partners must treat any data, documents or other material as
confidential during the implementation of the project. Article 36 of the Grant
Agreement describes further details on confidentiality, along with Article 27,
which describes the obligation to protect results. Awareness of
confidentiality will be maintained by making it a regular agenda point at
each Project Steering Committee meeting and at meetings of the Scientific
advisory board and the Patient advisory board. The members of both advisory
boards will be asked to sign a confidentiality agreement prior to committing
to this task.
## 5.3 Involvement of non-EU countries
iPSpine non-EU partners (UBERN, UM, HKU) have confirmed that the ethical
standards and guidelines of Horizon 2020 will be applied, regardless of the
country where the research activities are carried out. Activities carried out
outside the EU will be executed in compliance with the legal obligations of
the country where they take place, with the additional condition that the
activities must also be allowed in at least one EU Member State. Each Party
has agreed that personal data will not be transferred from the EU to a non-EU
country. However, should this become necessary, such transfers will be made in
accordance with Chapter V of the General Data Protection Regulation 2016/679.
# 6\. Outlook towards the next version of the data management plan
The next version of the data management plan will be prepared by month 18 at
the latest, since an update of this plan will be part of the periodic
reporting of reporting period 1. As emphasized in the introduction of this
document, the DMP is a living document, which will be updated over the course
of the project. Hence, the next version of the DMP will revisit the issues
raised above.
***

# 0483_REMEB_641998.md

Horizon 2020 (source: https://phaidra.univie.ac.at/o:1140797)
**1\. EXECUTIVE SUMMARY**
The data management plan (DMP) is a written document that describes the data
expected to be acquired or generated during the course of the REMEB project by
the consortium, under _Article 29_ of the Grant Agreement Number 641998.
According to this Grant Agreement, open access to scientific publications is
mandatory ( _Article_ _29.2_ ), with the exemption described in _Article 29.3_
[1].
The DMP is a living document that will evolve during the course of the
project. It defines how the data will be managed, described, treated and
stored, during and after the end of the project. In addition, it describes the
mechanisms that will be used at the end of the project to share and preserve
the data.
A description of the existing data relevant to the project and a discussion
of their integration will be provided, together with a description of the
metadata related to the subject of the project.
The document will describe how the results will be shared, including access
procedures, embargo periods and technical mechanisms for dissemination. It
will also indicate whether access will be provided through the two main
routes of open access to publications: self-archiving and open access
publishing.
Finally, the document will show the procedures for archiving and preservation
of the data, including the procedures expected once the project has finished.
The application of this document will be the responsibility of all REMEB
project partners. It will be updated throughout the lifecycle of the REMEB
project, extending the information given now or including new issues or
changes in the project procedures. The DMP will be updated as a deliverable
whenever significant changes arise (new data sets, changes in consortium
policies or external factors) [1]. As a minimum, the DMP will be updated and
sent as part of the mid-term report and final report. Every time the document
is updated, the draft version will be sent to all project partners for
review. Once approved, the definitive version will be sent to the consortium.
**2\. DATA SET REFERENCE, NAME AND DESCRIPTION**
This section shows a description of the information to be gathered, the nature
and the scale of the data generated or collected during the project. These
data are listed below:
* Membrane composition: during the execution of the project, different membrane compositions will be tested in order to find the one with the best permeability.
* Manufacturing process of the membrane: membranes will be manufactured by extrusion at pilot and industrial scale.
* Membrane module configuration: the design parameters of the module where the membranes will be placed.
* MBR operating parameters such as sludge retention time, F/M ratio, solid concentration, etc.
It is foreseen to protect some of these results through a patent. These issues
will be addressed in the following updated versions of this document.
**3\. STANDARDS AND METADATA**
Open Access will be implemented for peer-reviewed publications (scientific
research articles published in academic journals), conference proceedings and
workshop presentations produced during and after the end of the project. In
addition, non-confidential PhD or Master theses and presentations will be
disseminated in OA.
The publications issued during the project will include the Grant Number,
acronym and a reference to the H2020 Programme funding, including the
following sentence:
“REMEB project has received funding from the European Union´s Horizon 2020
research and innovation programme under grant agreement No 641998”.
In addition, all the documents generated during the project should indicate
the project reference in the metadata: REMEB H2020 641998.
Each paper must include the terms Horizon 2020, European Union (EU), the name
of the action, acronym and the grant number, the publication date, the
duration of embargo period (if applicable) and a persistent identifier (e.g.
DOI).
The purpose of the requirement on metadata is to maximise the discoverability
of publications and to ensure the acknowledgment of EU funding. Bibliographic
data mining is more efficient than mining of full text versions. The inclusion
of information relating to EU funding as part of the bibliographic metadata is
necessary for adequate monitoring, production of statistics, and assessment of
the impact of Horizon 2020 [2].
**4\. DATA SHARING**
All the publications of a Horizon 2020 project are automatically aggregated to
the OpenAIRE portal (provided they reside in a compliant repository). Each
project has its own page on OpenAIRE ( _Figure 1_ ) featuring project
information, related project publications and datasets and a statistics
section.
The consortium will ensure that all publications issued from the REMEB project
are available as soon as possible, taking into account embargo periods (where
they exist).
_Figure 1_ : REMEB information in OpenAIRE web (www.openaire.eu)
It is important that the partners involved check periodically whether the list
of publications is complete. If any articles are not listed, the portal must
be notified.
The steps to follow to publish an article and the subsequent OA process are:
* A partner prepares a publication and sends it to the project coordinator and other partners involved.
* Once approved, the partner submits the article to the selected journal.
* The final peer-reviewed manuscript is added to an OA repository.
* The reference and the link to the publication should be included in the publication list of the progress Report.
When the publication is ready, the author has to send it to the coordinator,
who will report to the EC through the publication list included in the
progress reports. Once the EC has been notified by the coordinator about the
new publication, the EC will automatically aggregate it at the OpenAIRE
portal.
**5\. ARCHIVING AND PRESERVATION**
In order to achieve an efficient access to research data and publications in
REMEB project, Open Access (OA) model will be applied. Open access can be
defined as the practice of providing on-line access to scientific information
that is free of charge to the end-user. As it has been stated, OA will be
implemented in peer-review publications (scientific research articles
published in academic journals), conference proceedings and workshop
presentations carried out during and after the end of the project. In
addition, non-confidential PhD or Master Thesis and presentations will be
disseminated in OA.
Open access is not a requirement to publish, as researchers will be free to
publish their results or not. This model will not interfere with the decision
to exploit research results commercially e.g. through patenting [3].
The publications made during REMEB project will be deposited in an open access
repository (including the ones that are not intended to be published in a
peer-review scientific journal). The repositories used by project partners
will be:
* ZENODO will be used by the partners that do not have a repository.
* The University Jaume I uploads all its publications to its own repository (web link: _http://repositori.uji.es/xmlui/_ ) . ITC, as a member of the university, follows the same policy.
As stated in the Grant Agreement (Article 29.3): _“As an exception, the
beneficiaries do not have to ensure open access to specific parts of their
research data if the achievement of the action´s main objective, as described
in Annex I, would be jeopardized by making those specific parts of the
research data openly accessible. In this case, the data management plan must
contain the reasons for not giving access”._
This rule will be followed only in specific cases, where it is necessary to
preserve the main objective of the project.
According to the “Guidelines on Open Access to Scientific Publications and
Research Data in Horizon 2020” [2], there are two main routes of open access
to publications:
* **Self-archiving (also referred to as “green open access”):** in this type of publication, the published article or the final peer-reviewed manuscript is archived (deposited) by the author \- or a representative - in an online repository before, alongside or after its publication. Some publishers request that open access be granted only after an embargo period has elapsed.
* **Open access publishing (also referred to as “gold open access”):** in this case, the article is immediately provided in open access mode as published. In this model, the payment of publication costs is shifted away from readers paying via subscriptions. The business model most often encountered is based on one-off payments by authors. These costs (often referred to as Article Processing Charges, APCs) can usually be borne by the university or research institute to which the researcher is affiliated, or to the funding agency supporting the research.
In conclusion, the process involves two steps: first, the consortium will
deposit the publications in the repositories, and then it will provide open
access to them. Depending on the open access route selected, self-archiving
(Green OA) or open access publishing (Gold OA), these two steps will take
place at the same time or not. In the case of the self-archiving model, the
embargo period (if any) will have to be taken into account.
**5.1. Green Open Access (self-archiving)**
This model implies that researchers deposit the peer-reviewed manuscript in a
repository of their choice (e.g. ZENODO).
Depending on the journal selected, the publisher may require an embargo period
between 6 and 12 months.
The process to follow for REMEB project is:
1. The partner prepares a publication for a peer-review journal.
2. After the publication has been accepted for publishing, the partner will send the publication to the project coordinator.
3. The coordinator will notify the publication details to the EC, through the publication list of the progress report. Then, the publication details will be updated in OpenAIRE.
4. The publication may be stored in a repository (with restricted access) for a period between 6 and 12 months (embargo period) as a requirement of the publisher.
5. Once the embargo period has expired, the journal gives Open Access to the publication and the partner can give Open Access in the repository.
**5.2. Gold Open Access (open access publishing)**
In this model, the costs of publishing are not assumed by readers but are
paid by the authors. This means that these costs will be borne by the
university or research institute to which the researcher is affiliated, or by
the funding agency supporting the research. These costs are eligible during
the execution of the project.
The process foreseen in REMEB project is:
1. The partner prepares a publication for a peer-reviewed journal.
2. When the publication has been accepted for publishing, the partner sends the publication to the project coordinator.
3. The coordinator will notify the publication details to the EC, through the publication list of the progress report. Then, the publication details will be updated in OpenAIRE.
4. The partner pays the correspondent fee to the journal and gives Open Access to the publication. This publication will be stored in an Open Access repository.
**6\. BIBLIOGRAPHY**
1. European Commission, "Guidelines on Data Management in Horizon 2020. Version 2.1", 15 February 2016.
2. European Commission, "Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020. Version 2.0", 30 October 2015.
3. European Commission, "Fact sheet: Open Access in Horizon 2020", 9 December 2013.
***

# 0484_SOLUS_731877.md
# FAIR DATA
3.1. Making data findable, including provisions for metadata:
* **Outline the discoverability of data (metadata provision)**
* **Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
* **Outline naming conventions used**
* **Outline the approach towards search keyword**
* **Outline the approach for clear versioning**
* **Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**
The two datasets will be identified using two unique identifiers (DOI) by
uploading them onto a public repository. Data discoverability will be
facilitated by adding a data description with keywords related to potential
users (e.g. developers of new analysis tools), as described above.
For the Phantom dataset, several updated measurement sessions are possible,
depending on updated versions of the prototype. Conversely, for the Clinical
dataset a single measurement session is foreseen, since there is no provision
to recall the same patient. Different versions of the analysis are possible,
depending on updates to the analysis tools. Therefore, the versioning will
use a first number for the raw data acquisition (only for phantoms) and a
second number for the analysis.
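The two-number versioning scheme could, for example, be encoded in dataset labels as sketched below. The label format itself is purely an assumption, since the actual naming conventions will only be fixed in the month-24 DMP.

```python
# Hypothetical label format for the two-number versioning scheme: the first
# number tracks the raw-data acquisition session (phantoms only), the second
# the analysis version. Clinical data has a single acquisition, so it carries
# only an analysis number.

def version_label(dataset, acquisition=None, analysis=1):
    parts = [dataset]
    if acquisition is not None:
        parts.append(f"acq{acquisition:02d}")
    parts.append(f"ana{analysis:02d}")
    return "_".join(parts)

print(version_label("phantom", acquisition=2, analysis=3))  # phantom_acq02_ana03
print(version_label("clinical", analysis=1))                # clinical_ana01
```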
Naming conventions will be specified in a more advanced version of the DMP
foreseen at month 24 of the SOLUS Project, and still before the actual data
collection (starting after month 24).
Apart from clinical images (e.g. US images) for which the DICOM standard is
usually adopted, there are no specific standards for optical data. In general,
we will create metadata files in XML, embedding large binary data in XML with
Base91 encoding.
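A minimal sketch of such an XML metadata file with embedded binary data, under two stated assumptions: the element names are invented for illustration, and since the Python standard library offers no Base91 codec, base64 stands in here for the Base91 encoding mentioned above (a real implementation would use a third-party Base91 package).

```python
import base64
import xml.etree.ElementTree as ET

# Sketch: build an XML metadata file that embeds binary measurement data as
# encoded text. Element names are assumptions; base64 stands in for Base91.

def build_metadata(description, keywords, payload: bytes) -> str:
    root = ET.Element("measurement")
    ET.SubElement(root, "description").text = description
    ET.SubElement(root, "keywords").text = ", ".join(keywords)
    data = ET.SubElement(root, "data", encoding="base64")
    data.text = base64.b64encode(payload).decode("ascii")
    return ET.tostring(root, encoding="unicode")

xml_doc = build_metadata("phantom session", ["diffuse optics"], b"\x00\x01\x02")
# round-trip: the embedded payload can be recovered from the XML text
decoded = base64.b64decode(ET.fromstring(xml_doc).find("data").text)
assert decoded == b"\x00\x01\x02"
```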
3.2. Making data openly accessible:
* **Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**
* **Specify how the data will be made available**
* **Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
* **Specify where the data and associated metadata, documentation and code are deposited**
* **Specify how access will be provided in case there are any restrictions**
Data will be made "as open as possible, as closed as necessary". In this
respect, all data described above will be made open apart from:
* algorithms for data analysis which could be considered for IP protection
* personal data subject to privacy protection as foreseen in the clinical protocol (Deliverable D5.1) and ethical provisions.
Final decisions on these two aspects and specific identification of closed
data, or data subject to specific embargo related to IP policies will be taken
in the updated DMP at month 24. Related access policies will be defined at due
time.
All specifications required to access the data will be inserted in the data
repository. The segmentation of US images, and in general the extraction of
optical properties for suspect lesions/inhomogeneities require advanced
analysis tools, generally pertaining to the methods of inverse problems in
diffuse optics. If already published or not involved in IP protections, the
algorithms will be described in detail to permit replications. Inclusion of
software tools for data processing will be considered if not causing
significant overburden distracting important energies from the fulfilment of
the project aims.
A three-phase process for data storage is foreseen. Initially, data will be
collected by the SOLUS prototype and stored locally on the instruments, while
other information will be gathered by clinicians and recorded on paper (as
described in Deliverable D5.1). In the second phase, all collected data will
be stored at POLIMI data warehouse, apart from protected clinical information
which will be retained at Ospedale San Raffaele. This will permit construction
of the database and initial tests on analysis. In the third phase, when data
acquisition is complete, data will be uploaded to an open repository. At
present, the choice is Zenodo, because it perfectly matches the requirements
and enjoys increasing interest in the international community. Still, the
final decision will be taken close to the actual deposition (not earlier than
M36), to take into account the updated status of public repositories.
3.3. Making data interoperable:
* **Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
* **Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**
The realm of clinical optical data is at present not covered by standards or
specific vocabularies. The sample size of the clinical study limits its
potential use mainly to researchers and operators within the field. The
definition of metadata, and in particular of the fields in the XML, will
match the vocabularies most often used in scientific publications in diffuse
optics.
3.4. Increase data re-use (through clarifying licenses):
* **Specify how the data will be licenced to permit the widest reuse possible**
* **Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
* **Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why**
* **Describe data quality assurance processes**
* **Specify the length of time for which the data will remain re-usable**
Licensing policies will be defined later (around M24) when the general
dissemination, IP protection and exploitation policies are more clearly drawn.
Typically, a 6-12 months embargo after acceptance of relevant publications can
be considered.
Data will be made available and reusable through open data repositories for
periods of 10 years.
# ALLOCATION OF RESOURCES
**Explain the allocation of resources, addressing the following issues:**
* **Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**
* **Clearly identify responsibilities for data management in your project**
* **Describe costs and potential value of long term preservation**
Since data deposit in a local data warehouse and an external repository will
not start earlier than 2 years from now, the cost estimate will be performed
in due time, since policies and costs are changing rapidly under great
internal and external pressure on data preservation and sharing. In general
terms, it is highly probable that no extra costs will be incurred for the
storage of data, since the overall volume of data can be handled by standard
POLIMI data facilities and fits within the free allowances of the Zenodo
repository. Dr Andrea Farina is responsible for the coordination of the
overall data management.
# DATA SECURITY
**Address data recovery as well as secure storage and transfer of sensitive
data**
The second phase of data storage will be performed internally in a data
warehouse at POLIMI and at Ospedale San Raffaele for protected clinical
information. No access from outside the consortium will be possible.
The data repository currently in force for the research group at POLIMI is
stored on secure hard drives in a redundant system (RAID 5) that is backed up
every week by an incremental back-up script (rsbackup) to other external
servers. The data servers are located in the basement of the Physics
Department of Politecnico di Milano, in a restricted-access area. Access to
the data servers is controlled by passwords, and they are part of a VLAN with
no access from outside the POLIMI institution. The VLAN to which not only the
data servers but also all the PCs used for this project are connected is part
of an institutional network protected by a firewall. We note that the POLIMI
group has a proven track record in long-term data storage and access going
back to the 1980s.
In the final phase, the public repository will be chosen to meet the
requirements of long-term secure storage. The most probable choice - Zenodo -
already fulfils all requirements.
Sensitive data - mostly personal data of the clinical study - will not be
shared and will be stored only at Ospedale San Raffaele to comply with the
privacy policies foreseen in the clinical protocol.
# ETHICAL ASPECTS
**To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former**
The Clinical Protocol (Deliverable 5.1 - Definition of the clinical protocol -
produced at M3) and the ethical requirements in terms of protection of
personal data (Deliverable D5.2 - Approval of clinical protocol by ethical
committee - due at M36) set specific requirements for the anonymization of
data and the protection of patients' personal data. These requirements will be
strictly followed and will prevent the sharing of some parts of the
information.
All data stored at POLIMI data warehouse and deployed at public repository
will be completely anonymized.
The patient information and consent will follow the guidelines set forth in
ISO 14155 for patient information and informed consent, and will also allow
sharing of data, excluding sensitive data.
# OTHER
**Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)**
At present, the main local procedures for data management are related to the
requirements of sensitive data protection described in the clinical protocol
(Deliverable D5.1) and operated by Ospedale San Raffaele. No other
prescriptive procedures are identified so far. However, since local policies
are rapidly evolving to cope with the increased demand for Open Data and Data
Management, this section will be updated in a future release (M24) to describe
the actual situation.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0485_MiLEDI_779373.md
|
# 1\. General information about the Project
This Data Management Plan (DMP) will be divided in five sections following the
guidelines suggested by the H2020 and by Digital Curation Centre 1 :
1. General information about the project;
2. Dataset description;
3. Data, Metadata and standards;
4. Policy for access and sharing;
5. Plan of Archiving, Preservation and Responsibilities.
The general information of the MILEDI project is reported in Table 1\.
## Table 1
<table>
<tr>
<td>
**Title and acronym**
</td>
<td>
**MI** cro QD- **LE** D/OLED **DI** rect patterning (MILEDI) **MILEDI**
</td> </tr>
<tr>
<td>
**Grant number (H2020)**
</td>
<td>
779373
</td> </tr>
<tr>
<td>
**Project Coordinator (Name, Family name)**
</td>
<td>
Francesco Antolini
</td> </tr>
<tr>
<td>
**Contacts (e-mail and phone)**
</td>
<td>
e-mail [email protected], phone +39 06 94005059
</td> </tr> </table>
### 1.1 Brief description of the project
The project MILEDI aims to realise micro-Light Emitting Diodes (mQDL) and
micro Organic Light Emitting Diodes (mQDO) using direct laser or electron beam
patterning of nanometer-scale Quantum Dots (QDs) to write the Red-Green-Blue
(RGB) arrays for display manufacturing.
The main idea sustaining the project is to form the coloured green-red light-
emitting QDs directly over a matrix of blue emitting micro QDL/QDO arrays, so
that the QDs act as frequency down-converters and constitute a RGB micro-
display.
Both direct-writing technologies will be thoroughly developed to optimize the
QD light emission spectrum of the display and its stability. They are expected
to provide patterning resolution at micrometric scales, depending on the laser
spot areas and particle beam dimensions and operation.
These techniques, together with the direct formation of QDs, assure highly
flexible and simple manufacturing processes, in few steps and with low
chemical impact. The MILEDI approach to both micro QDL and QDO RGB displays
manufactured by direct laser/electron beam patterning of QDs is validated by
the production of a final prototype of a rear-projection display through the
project's existing supply chain.
# 2\. Dataset description
The MILEDI project will develop materials, techniques for characterizing
them, methodologies for patterning them, and micro-display manufacturing
processes.
The data that will be produced during the research will be of different types
and range from chemical and physical to engineering science. The chemistry
teams will produce protocols (texts) and characterization data (optical,
structural, images), the physical and engineering groups will manage data from
optical characterization of materials, laser source manufacturing, laser
patterning machine and devices manufacturing (micro-display specification).
All these data, in their different forms, will be managed depending upon
their nature and importance for the project. Indeed, part of them will be:
1. protected by patents (IPR policy; see the dissemination and exploitation plan report);
2. published in open access journals;
3. withheld for internal use.
Each Partner of the project will identify the type of data that it will
produce during the research and will prepare a table indicating the main
characteristics of the dataset.
Table 2 below shows an example of the dataset description that will be
generated during the life of the project.
## Table 2
<table>
<tr>
<th>
**DATASET DESCRIPTION**
</th>
<th>
**Element description**
</th> </tr>
<tr>
<td>
**Dataset name**
</td>
<td>
Dataset name
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
Description of the type of data reported in this dataset
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
Describes the data format, for example ascii, csv, pdf, doc, txt, xml, etc
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Describes the type and structure of metadata associated to the data (see
paragraph 3)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Indicates which repository is selected for this dataset and the type of
software used to open the dataset
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Indicates which repository will be selected for the dataset storage
</td> </tr>
</table>
# 3\. Data, Metadata and Standards
The scientific and technical results of MILEDI project are based on data and
associated metadata needed to validate the results presented in scientific
publications.
The term “metadata” refers to “data about data”, i.e. all the information
that accompanies the data, or all the contextual documentation that clarifies
the data itself. The metadata must allow the proper organisation, search and
access of the generated information and can be used to identify and locate
the data.
The metadata that best describe the data depend on the nature of the data.
For MILEDI, and for research data in general, it is difficult to establish a
global criterion for metadata because of the different datasets that will be
identified; however, a general scheme of metadata can be proposed (Table 3
2 ).
## Table 3 Data and metadata standards
<table>
<tr>
<th>
**DATA, METADATA AND STANDARDS**
</th>
<th>
**Type of metadata**
</th>
<th>
**Description of metadata**
</th> </tr>
<tr>
<td>
**Methodology for**
**data**
**collection/generation**
</td>
<td>
**Title**
</td>
<td>
Free text
</td> </tr>
<tr>
<td>
</td>
<td>
**Creator/Owner**
</td>
<td>
Last name, First name
</td> </tr>
<tr>
<td>
</td>
<td>
**Date**
</td>
<td>
Date of creation dd/mm/yyyy
</td> </tr>
<tr>
<td>
</td>
<td>
**Contributor**
</td>
<td>
Information about the project and its funding
</td> </tr>
<tr>
<td>
**Data quality and standards**
</td>
<td>
**Subject**
</td>
<td>
Series of key words
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Free text explaining the content of the data and the contextual information
needed for the correct interpretation of the data
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
Details of the file format
</td> </tr>
<tr>
<td>
**Resource type**
</td>
<td>
Data set, image, audio
</td> </tr>
<tr>
<td>
**Identifier**
</td>
<td>
DOI
</td> </tr>
<tr>
<td>
**Privacy level**
</td>
<td>
Partner, Consortium, Public
</td> </tr> </table>
The data will be acquired by experienced scientists, taking into account all
the parameters that influence the measurements and ensuring that the
experimental setup is in condition to give reproducible measurements.
The metadata will be stored together with the generated data in an “xml” file
containing all the information reported in Table 3.
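The plan does not prescribe a concrete XML schema. As an illustration only, a
metadata file carrying the Table 3 fields could be produced as follows; the
element names and all the values are hypothetical, not a MILEDI standard:

```python
# Illustrative sketch only: serializes a flat record of Table-3 metadata
# fields to an XML file. Element names and values are placeholders.
import xml.etree.ElementTree as ET

def write_metadata(path, **fields):
    """Write a flat dict of metadata fields as child elements of <metadata>."""
    root = ET.Element("metadata")
    for name, value in fields.items():
        ET.SubElement(root, name).text = str(value)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

write_metadata(
    "dataset_001.xml",
    title="QD photoluminescence spectra",      # hypothetical dataset
    creator="Surname, Name",
    date="01/01/2019",
    subject="quantum dots; laser patterning",
    format="csv",
    resource_type="Data set",
    identifier="10.5281/zenodo.0000000",       # placeholder DOI
    privacy_level="Consortium",
)
```

Storing one such file next to each dataset keeps the metadata machine-readable
while remaining editable by hand.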
# 4\. Policy for Access and Sharing
The data will be shared in three different ways: i) filing patents, ii)
publishing in journals (see section 2), iii) withholding for internal use.
The manuscripts can be deposited in the ENEA institutional repository for
public access.
The Partners will decide which data will be shared. In any case, the data
which underpin the patents and publications will be made accessible only
after the patents have been filed or the papers published.
The Partners will select a research data repository both for sharing and
storage (see section 5).
# 5 Plans for Archiving, Preservation and Responsibilities
Any data from this project that underpin or contribute to patent application
or subsequent research publication will be retained and preserved by the
Partner who obtained the data.
## 5.1 Short term data storage
During the project, data will be stored on the hard drive of the Principal
Investigator (PI) of each Partner that will produce the data. The PI will
perform backup on a regular schedule (each month) by using external hard
drives other media or cloud computing solutions. The files will be encrypted
so only the researcher and PI can access the data.
## 5.2 Long term data preservation
The data from this project that underpin or contribute to patent application
or subsequent research publication will be considered to be a long-term value
and will be retained and preserved.
The data Partners will evaluate which database will fit better for data
preservation and assess its cost, if any.
## 5.3 Responsibility
Each Partner of the project is responsible for the policy and management of
the data it obtains, both during and after data collection.
## 5.4 Ethical issues
MILEDI does not handle personal data and does not work with human cells or
embryos.
0486_CResPace_732170.md
1. INTRODUCTION
2. CResPace DATA REPOSITORIES
3. DATA SUMMARY
4. FAIR DATA
5. DATA SECURITY
6. ETHICS
7. CONCLUSION
## INTRODUCTION
This DMP has been developed with the input from all partners that have
been/will be producing data throughout the project. Accordingly, all
researchers working on this project will manage their data per this plan. This
DMP will be updated whenever necessary throughout the project.
## CResPace DATA REPOSITORIES
UBAH is responsible for setting up, backing up and maintaining both public and
private data repositories of CResPace project. The digital data generated
throughout CResPace project will be archived in the University of Bath
Archives and Research Collections described on
_http://researchdata.bath.ac.uk/guide/archiving-data/_
_2.1 Data repository with restricted access_
UBAH set up a project folder in M1 (January 2017) on a dedicated secure drive
on university servers, which only project partners can access with their
credentials.
_2.2 Data repository with public access_
Data and publications to which open access is provided will be stored on
Zenodo.
## DATA SUMMARY
CResPace project is in the process of generating mathematical models, computer
programs, VLSI circuit designs and experimental data resulting from both
physical and pre-clinical trials.
Data are being collected/generated for reports of project deliverables,
scientific publications and also to support patent applications. Relevant
publications and patent applications will describe the development of the
technology underpinning the neural pacemaker to be developed.
Data will be useful for the scientific community, relevant directorates and
agencies of the European Commission, industry and the society.
All data produced as an outcome of this project are new. The data generated
will amount to about 2 TB.
Details of the data that the various partners plan to produce are listed as
follows:
**Partner 2 BRISTOL, UK** will generate the electrophysiological data on
medullary neurons and networks stimulated by tailored current protocols. The
Bristol team will in the process develop pharmacological and multi-electrode
recording procedures which will be published and output in text format.
**Partner 5 MEDTRONIC, NL** will be generating sensor data as a natural part
of the sensor development: first, early assay characterization and
optimization using standard assay formats, analysed using commercially
available scientific instruments; subsequently, data generated by the
developed CResPace sensors in the process of optimization and simulated-use
tests in a lab. Data will be output from **(1)** various scientific
instruments, in formats set by the instrument manufacturers as well as in
text file format for further data processing; **(2)** sensors developed
throughout the project, which will output analog data. In general, data will
be sampled using a commercially available data acquisition unit, in custom
formats as well as text file formats for further data processing.
Data will in general be very specific to the sensor development. However,
partners involved in the sensor interface and in silico neurons will need
access to sensor output data for the optimized sensors.
**Partner 6 MUW, AT** will be generating **(1)** a large amount of data in the
course of the in vivo studies planned within work package 9, Task 9.2. These
data will comprise pictures and datasets from cMRI+LE (DICOM), Angiography
(DICOM), ECG (print and scan) and NOGA mapping; **(2)** pictures (tif and
vsi) taken on an Olympus microscope, partly fluorescence pictures, as an
output of histological evaluation.
**Partner 7 UMC UTRECHT, NL** will be collecting numerous physiological
parameters, under both physiological and pathological circumstances, in the
dog under different provocations. Within the scope of the project,
physiological data sets such as pO2, pCO2, inhalation frequency and blood
pressure will be generated.
## FAIR DATA
FAIR Data refers to research data generated within the project being findable,
accessible, interoperable and re-usable according to the guidelines of EC Data
Management in H2020. CResPace partners will do the following to commit to
that:
**Making data findable:**
Data generated throughout the project will be discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism.
A unique Digital Object Identifier (DOI) will be assigned by UBAH to all data
produced in the project. Search keywords will be provided to optimize
possibilities for re-use.
All data will have clear version numbers.
Standards and metadata will be applied that are relevant to the data origin,
i.e. input data, experimental results, publications, etc., via repositories.
Initially, the project will make use of Zenodo for both publications and data.
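To make the FAIR commitments above concrete, the snippet below sketches the
kind of descriptive record (keywords for findability, explicit version
number, open access right) that could accompany a Zenodo deposit. The field
names follow Zenodo's deposit-metadata conventions; the concrete title,
creator and keyword values are placeholders, not actual project records:

```python
# Sketch of a Zenodo-style deposit metadata record. Values are
# illustrative placeholders, not real CResPace data.
import json

record = {
    "metadata": {
        "title": "CResPace electrophysiological recordings (example)",
        "upload_type": "dataset",
        "description": "Multi-electrode recordings of medullary neurons.",
        "creators": [{"name": "Surname, Name", "affiliation": "BRISTOL"}],
        "keywords": ["neural pacemaker", "electrophysiology", "H2020"],
        "version": "1.0",          # clear version number, as required above
        "access_right": "open",
    }
}

# The JSON payload that would be sent to the repository.
payload = json.dumps(record, indent=2)
```

Keeping such a record alongside each dataset means the search keywords and
version history survive even if the hosting repository changes.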
**Making data openly accessible:**
Publications preprints, conference proceedings and presentations will be made
openly available. Data will be made available via Zenodo, project webpage and
conference webpages.
**Making data interoperable:**
Metadata archiving is via cross-referencing of published material, i.e.
PubMed, arXiv, etc.
**Increasing data re-use:**
Open data will be stored on Zenodo database.
Restricted data will be licensed before publication in line with the
consortium agreement. Manuscripts are embargoed by default until the
publication date. Manuscripts on commercially sensitive topics will not be
submitted for publication until after a patent has been filed.
Open access data may be used by third parties.
All the research output will be published in peer reviewed publications which
assure the quality of data generated within the project. Data will remain re-
usable forever.
## DATA SECURITY
UBAH is responsible for setting up, backing up and maintaining both public and
private data repositories of CResPace project. The digital data generated
throughout CResPace project will be stored, backed up and archived in the
University of Bath servers. The data will be archived in the University of
Bath Archives and Research Collections described on
_http://researchdata.bath.ac.uk/guide/archiving-data/._
UBAH set up a project folder in M1 (January 2017) on a dedicated secure drive
which only project partners can access with their credentials.
## ETHICAL ASPECTS
Relevant reports for ethics D11.1 (Ethics - Requirement) and D11.2 (Ethics -
GEN licenses) were submitted in M2 (February 2017) of the project.
0488_Quaco_689359.md
# Executive summary
_The QUACO Data Management Plan (DMP) describes the tools and the life cycle
of data generated and used during and after the project. All data generated
in this project (both on the PCP and on the technical development) will be
centrally stored at CERN, which will act as the central repository for
archiving also after the end of the project. The DMP also outlines the
accessibility of the data, which is in line with the project's Open Access
policy._
# INTRODUCTION
The main deliverable of QUACO PCP is a potential component of the HL-LHC
project. For this reason, the HL-LHC Data management Plan concepts are the
basis for this document.
Data Management is a process that provides an efficient way of sharing
knowledge, information and thinking among the project's participants and
stakeholders.
In this document, we will define:
* The type of Data that will be generated and managed;
* The life cycle of the data and in particular the data control process;
* The tools used to manage the data.

The process of communication and dissemination of the data is part of
Deliverable 8.1.
# DATA HANDLED BY THE PROJECT
## OVERVIEW
The QUACO project will produce two different types of documents and data: on
one side, the information exchanged in the PCP process; on the other, the
information related to the design and fabrication of the MQYY first-of-a-kind
magnet.
The two types of documents and data, and the way they will be treated, will
be very different.
## DATA LINKED TO THE PCP PROCESS
QUACO is a collaborative acquisition process. There will be documents and
data related to the exchange of information with/among QUACO partners and the
interaction with:

* Partner labs interested in the use of the PCP instrument in the future;
* Industrial suppliers;
* Public;
* Stakeholders.

There will also be data created by the analysis of the interaction of these
four groups with QUACO.
## DATA LINKED TO THE PRODUCTION OF THE FIRST OF A KIND MAGNET
The QUACO technical specification [1] gives a detailed list of technical and
managerial documents and data that will be produced during the three phases.
Among them:
* Technical Documents such as the Conceptual Design Report of the magnet;
* Managerial Documents such as the Development and Manufacturing plan;
* Acquisition Documents such as the technical specifications for the tendering of tooling and components;
* 2D and 3D models such as the As-built 2D and 3D CAD manufacturing drawings;
* Data such as the control parameters during the winding process or the dimensional checks.
* Contract Follow up Documents such as minutes and visit reports.
There will be also data created by the internal exchange among QUACO partners
on the progress done by the suppliers.
# LIFE CYCLE AND DATA CONTROL PROCESS
## OVERVIEW
The PCP Project has a Data Management Plan because it needs to ensure that:
* Data required for the project is identified, traced and stored;
* Documents are approved for adequacy prior to issue;
* Documents are reviewed and updated as necessary;
* Changes to Data and Documents are identified;
* Relevant versions of applicable documents are available at points of use;
* Documents remain legible and readily identifiable;
* Documents of external origin are identified and their distribution controlled.
To manage and control a document we shall establish several sub processes:
1. **Identification** : what kind of document shall be managed.
2. **Labeling** : how the document shall be named.
3. **Lifecycle** : how the adequacy of the document shall be ensured before distribution.
4. **Availability** : how it shall be ensured that the document reaches the right person, who can access it as long as required.
5. **Traceability** : how the changes and location of the document are recorded.
Except for the labelling process, the same sub-processes are applicable to
the data that will be handled by the project.
## IDENTIFICATION OF DATA MANAGED
The QUACO project shall manage and control all documents required to
guarantee the full life cycle of the Project.
Points 2.2 and 2.3 list the type of data and documents identified to follow
the PCP process and the production of the first of a kind MQYY.
Among those documents we can distinguish two classes:
* Baseline documents: documents that will have to be stored after the end of QUACO.
* Non-baseline documents: documents that are required for the proper functioning of the project but whose storage will not be considered critical after Phase 3.
The management and control sub processes of these two types of documents will
be handled differently.
The HL-LHC Configuration, Quality and Resource Officer ensures the training of
the different QUACO partners [2] on the identification of the data to be
managed.
## LABELING
Baseline and Non-Baseline Documents shall follow the HL-LHC Quality plan
naming convention [3] and shall be labelled with an EDMS number.
## LIFECYCLE
The lifecycle of a document includes publishing (proofreading, peer review,
authorization, printing), versioning, and the workflow involved in these two
processes.
Concerning peer reviewing as a general rule:
* Baseline documents shall be peer reviewed (verification process) by a group of people knowledgeable on the subject and by those interfacing with the system/process described in the document. By default, the peer review is done by the QUACO PMT or one of its members.
* The peer review process for Non-Baseline documents is generally managed by the author.
In particular for the Tender Documents (Baseline Documents)
* The JTEC reviews the Prior Information Notice, the draft Invitation to Tender and the draft Subcontracts, as prepared by the Lead Procurer in accordance with the laws applicable to it and the Specific PCP Requirements, and submits all the above documents to the STC for approval.
Concerning Authorization the process is adapted to the type of document. As a
general rule:
* For Baseline documents, the STC gives the final approval of the technical and contractual specifications for the PCP tender, the approval of tender selection, and the management of knowledge (IPR), dissemination & exploitation documents. The Project Coordinator always approves all Baseline Documents.
* For Non-Baseline documents, the process depends mainly on the type of document; they are mainly approved by the WP Leader.
Every time there is a change in the lifecycle of a document, a new version of
the document shall be created. Changes are traced by the revision index. The
revision index is increased by 0.1 for minor changes; in case of major
changes, the first digit is moved to the next integer.
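The revision rule is simple enough to state as code. The helper below is only
an illustrative sketch of the rule (the HL-LHC quality plan, not this code, is
authoritative), and its behaviour past x.9 is an assumption:

```python
# Sketch of the QUACO revision rule: minor changes add 0.1 to the
# revision index; major changes move the first digit to the next integer.
# Assumption: past x.9 we simply keep counting tenths (1.9 -> 1.10),
# since the plan does not specify this case.
def bump_revision(rev: str, major: bool = False) -> str:
    whole, frac = rev.split(".")
    if major:
        return f"{int(whole) + 1}.0"   # e.g. 1.3 -> 2.0
    return f"{whole}.{int(frac) + 1}"  # e.g. 1.1 -> 1.2
```

For example, `bump_revision("1.1")` gives `"1.2"`, while
`bump_revision("1.3", major=True)` gives `"2.0"`.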
## AVAILABILITY
Table 1 and Table 2 give the general guidelines for the visibility and
storage of documents in the project. The process of communication and
dissemination of the data is part of Deliverable 8.1.
_Table 1: Visibility guidelines_
<table>
<tr>
<th>
**Document class**
</th>
<th>
**Visibility**
</th> </tr>
<tr>
<td>
Baseline documents
</td>
<td>
Financial, resource oriented and with sensitive information
</td>
<td>
QUACO Partners and EU Stakeholders
</td> </tr>
<tr>
<td>
Commercial
</td>
<td>
QUACO Partners, EU Stakeholders and
Partner labs interested in the use of the PCP instrument in the future
</td> </tr>
<tr>
<td>
Technical
</td>
<td>
QUACO Partners, EU Stakeholders and
Partner labs
In some cases Industrial partners*
</td> </tr>
<tr>
<td>
Non Baseline documents
</td>
<td>
Technical documents
</td>
<td>
QUACO Partners, EU Stakeholders and
Partner labs
In some cases Industrial partners*
</td> </tr>
<tr>
<td>
Scientific publications
</td>
<td>
Worldwide
</td> </tr>
<tr>
<td>
Outreach
</td>
<td>
Worldwide
</td> </tr> </table>
(*) Follows IP rules described in the Tendering Documentation and on the Grant
Agreement
_Table 2: Storage time and format requirements_
<table>
<tr>
<th>
**Document class**
</th>
<th>
**Storage time**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
Baseline documents
</td>
<td>
Forever
</td>
<td>
Native format and at least a long term readable format
</td> </tr>
<tr>
<td>
Non Baseline documents
</td>
<td>
Limited time
</td>
<td>
Native format or long term readable format
</td> </tr> </table>
## TRACEABILITY
Traceability includes the record of the lifecycle of the document and the
metadata that describe the document. A document is fully traced if we can
retrieve:
* Label including version number,
* Properties (Author, creation date, title, description)
* Life cycle information,
* Storage location,
* List of actions and comments with their author linked to changes in the life cycle.
Baseline documents shall be fully traced. For Non-Baseline documents,
complete traceability of the actions and comments, with their author, linked
to changes in the life cycle is not required.
# TOOLS
CERN has two documentation management systems, EDMS and CDS. EDMS is the tool
used for the control of engineering documents and presentations; CDS is the
tool used for the control of scientific documents, meeting documentation and
graphic records.
To ensure the long term storage of Baseline documents they shall be stored in
EDMS. Non Baseline documents can be stored in another documentation management
system that can ensure the correct level of approval, availability and
traceability.
_Table 3: Recommended tools_
<table>
<tr>
<th>
**Document class**
</th>
<th>
**Tool**
</th> </tr>
<tr>
<td>
Baseline documents
</td>
<td>
EDMS
</td> </tr>
<tr>
<td>
Non Baseline documents
</td>
<td>
Meetings: Indico, EDMS, SharePoint
Technical: EDMS (requiring approval process), SharePoint
Scientific: CDS
Commercial: CFU, EDMS
Outreach: WWW
</td> </tr>
<tr>
<td>
Data
</td>
<td>
Technical: MTF
Non-Technical: SharePoint
Outreach: Twitter, LinkedIn
</td> </tr> </table>
# LINKS TO THE TOOLS
CDS: https://cds.cern.ch/
EDMS: https://edms.cern.ch/project/CERN-0000154893
Indico: https://indico.cern.ch/category/7138/
LinkedIn: _https://www.linkedin.com/in/quaco/en_
MTF: https://edms5.cern.ch/asbuilt/plsql/mtf_eqp_sel.adv_list_top
Twitter: https://twitter.com/HL_LHC_QUACO
SharePoint: https://espace.cern.ch/project-HL-LHC-Technical-coordination/QUACO/
WWW: https://quaco.web.cern.ch/
# TEMPLATES
The HL-LHC Quality support Unit maintains a series of templates that are
accessible in the EDMS [4].
# CONCLUSIONS
The QUACO Project has identified the different types of documents and data
that have to be managed to ensure its full life cycle. The different
sub-processes, such as labelling, publishing and traceability, have been
analyzed and adapted to the project. Finally, different tools have been
identified and deployed to support these sub-processes.
0489_Co4Robots_731869.md
D1.2 H2020-ICT-731869 Co4Robots June 28, 2017
the project’s efforts in this area. At any time, the Data Management Plan will
reflect the current state of the consortium’s agreements regarding data
management, exploitation and protection of rights and results.
# 1.4 Outline
For each partner involved in the collection or generation of research data a
short technical description is given stating the context in which the data has
been created. The different data-sets are identified by project wide unique
identifiers and categorized through additional meta-data such as, for example,
the sharing policy attached to it. The considered storage facilities are
outlined and tutorials are provided for their use (submitting and retrieving
the research data).
# Chapter 2 Data Sharing, Access and Preservation
The digital data created by the project will be curated differently depending
on the sharing policies attached to it. For both open and non-open data, the
aim is to preserve the data and make it readily available to the interested
parties for the whole duration of the project and beyond.
## 2.1 Non-Open research data
The non-open research data will be archived and stored long-term in the
**Alfresco** and **GitLab** portal administered by PAL.
* The **Alfresco** platform is currently being employed to coordinate the project’s activities and tasks, as shown in Figure 2.1.
Figure 2.1: Alfresco interface to manage project tasks, data and assignment.
* The **GitLab** platform is mainly used to develop and store all the digital material such as sensory data, source code and simulation/experiment videos, connected to Co4Robots, as shown in Figure 2.2.
## 2.2 Open research data
The open research data will be archived on different platforms depending on
the preferences of each partner.
Figure 2.2: GitLab interface to manage software collaboration and
distribution.
* Scientific reports and publications will be uploaded to various open-access archive sites such as the DiVA portal (http://www.diva-portal.org) and arXiv (https://arxiv.org/).
* Software, experiment data and source codes that can be open to public will be published via **GitLab** as open repositories.
Finally, each uploaded data-set is assigned a unique URL or DOI making the
data uniquely identifiable and thus traceable and referenceable.
# Chapter 3 Description for Alfresco
Alfresco is an Enterprise Content Management (ECM) system for organizing and
storing an organization’s documents and other content related to the
organization’s processes. In our case the organization is Co4Robots. There
are mainly 3 kinds of functional lists:
* Workflow. Any user can assign tasks to any project mate, or to themselves as a reminder of something to do. Other users can review and approve them (see chapter 3 of the manual for details).
* Sites. (see chapter 4 of the manual for details).
* Project Consortium. This site provides information about the project partners, their logos, links to their web pages, a calendar of tasks and all the users enrolled in the project.
* Discussions. This is the communication site. Here you can open a new topic to discuss with your team mates.
* Wiki. This site is configured for creating wiki entries. Here you can create all the pages you want; it is focused on documenting technical issues.
* Document management and file sharing, with version control and share links.
Private areas are also available for documents you do not want to share,
alongside the project area (shared files). In the project area you can find
all the work-package folders, as well as the milestone demos and the Project
Proposal files. You can create documents or upload new ones (see section 2.3
of the manual for details).
# Chapter 4 Annex: Tutorial on Alfresco
**How to use C4R share & communication platform **
**PAL ROBOTICS S.L. Author: Jesús Planas Sentís**
Pujades 77-79, 4º 4ª Tel: +34 934 145 347 [email protected]
08005 Barcelona, Spain Fax: +34 932 091 109 www.pal-robotics.com
## Index
1. Basic configuration
1. How to access
2. Account configuration
3. Your own Dashboard
2. Documents management
1. My files
2. Shared files
3. Create new files from Google Apps
4. Modify documents and upload new versions
5. Share links
3. Workflow
1. Create a new workflow
2. My workflows
4. Sites
1. Project Consortium
2. Discussions
3. Wiki
5. Other settings
### 1\. Basic configuration
#### 1.1. How to access
You should have received an email with your login credentials. Once you have
them, you can access the platform as follows:
URL: _https://c4r.pal-robotics.com:8080_
Username: namesurname
Password: (email)
#### 1.2. Account configuration
The first thing you should do when you access the platform is configure your
profile. To do this, please follow these steps:
1. Click on your name at the top right of the webpage and click “My Profile”:
2. You will be redirected to your profile settings. Click the “Edit profile” button on the right side of this page.
3. Fill in all the fields you would like to set up for your profile.
#### 1.3. Your own Dashboard
At your “Home” page you can see your dashboard. This contains the information
about your activities, your recent files, your assigned tasks and your
favourite sites.
If you click the gear icon at the top right of this page, you can configure
how the display is laid out; you can also add dashlets to optimize your
dashboard. It is up to you whether to show other information.
Also, at the top of this page you will see a “Get started” guide covering the
functions available in this Alfresco platform; if you wish, you can watch the
videos to learn more about the platform.
### 2\. Documents management
There are two zones where you can place documents:
#### 2.1. My Files
This zone is your private data storage. Here you can place documents or any
kind of data that you do not want to share with anybody.
Here you can create folders and upload documents. You can drag and drop
documents and folders directly from your computer to the platform.
#### 2.2. Shared files
This is the project’s data-storage zone. Here you can find all the
work-package folders, as well as the milestone demos and the Project Proposal
files. This is where you should place all files related to the Co4Robots
European project.
#### 2.3. Create new files from Google Apps
The platform has an API that can communicate with your Google account, if you
have one. If you do not, skip this step.
You can create new documents such as spreadsheets, text documents and
presentations with the Google Docs applications. Simply navigate to the folder
where you want to create the new document and click the “Create” button.
If you click on any of the Google Docs applications, you will be prompted to
sync your account with the platform. Simply accept and allow the prompted
windows.
Your browser may block pop-ups from the server; please configure the pop-up
settings for this site and allow pop-ups so that the Alfresco platform can
communicate with your Google account.
#### 2.4. Modify documents and upload new versions
There are several ways to modify a document stored on the platform.
**2.4.1. Modify documents with Google Docs API**
If you completed step 2.3 of this manual, you can modify any document with the
Google Docs API; if you did not, skip this step.
If you would like to edit a document with Google Docs, simply click on the
document, open the preview and, in the right-hand menu, click the “Edit in
Google Docs” button.
After that you will be redirected to the Google Docs window. Here you can edit
the document; when you finish, close it and all the changes will be stored in
your Google Drive account.
In your Drive account you will see a new folder called “Alfresco working
directory” containing the document. To save the changes to the Co4Robots
Alfresco platform, you must go back to the Alfresco platform and click “Check
in Google Docs”.
When you click the “Check in Google Docs” button, all the changes are saved to
the Co4Robots Alfresco platform and the document disappears from your Google
Drive account.
If you would like to continue editing with Google Docs, click “Resume editing
in Google Docs”; if you want to discard all the changes made to the document,
click the “Cancel editing in Google Docs” button.
**It is really important that every time you finish modifying a document, you
click the “Check in Google Docs” button to save the changes properly to the
Co4Robots Alfresco platform.**
Note that modifying a document created with another text editor, such as
OpenOffice, LibreOffice or MS Office, may change some aspects of the
document’s formatting. To avoid these potential problems, use the offline
editing described in the next step.
**2.4.2. Modify documents offline**
This may be the best option for editing any kind of document saved on the
Co4Robots Alfresco platform. You can download the file you need to modify and
open it with your default text editor. To upload the new version, follow the
instructions in the next step of this manual.
**2.4.3. Version control and New versions upload**
**2.4.3.1. Version control**
When you click on a document and open it in the preview, you can see all the
document options on the right side. Scrolling down through these options, you
will find all the versions of the document in the “Version History” section.
You can replace the latest version with an older one if needed, download older
versions to modify on your computer, and upload a new version.
When you revert to an older version, you do not lose the latest version: the
revert is stored as a new version of the document, and a window prompts you to
specify whether it is a major or minor change. You can also add a comment
describing the revert.
**2.4.3.2. Upload a new document version**
After editing a document offline with your default text editor, you can upload
a new version of the original document on the Co4Robots Alfresco platform.
Simply click the “Upload a New Version” button on the right side of the
Document Actions menu and follow the steps shown in the prompted window.
This is the window that appears when you click the “Upload a New Version”
button:
Click “Select files to Upload” and select the file. Indicate whether it is a
minor or major version and, if you wish, add a comment.
#### 2.5. Share links
Every document and folder has an auto-generated link displayed in the
“Document Actions” or “Folder Actions” menu on the right side, under the
“Share” section of that menu.
You can copy this link and share it with your project mates whenever they need
quick access to the file or folder.
Document link:
Folder link:
You can send it by email, or place it in a Discussion topic or on the wiki
site if needed.
### 3\. Workflow
#### 3.1. Create a new workflow
Here you can assign tasks to any project mate, or to yourself as a reminder of
something to do.
You can assign a task with or without a document attachment. To create a new
workflow, follow these steps:
* Click on “Tasks” at the top menu
* Click on “My tasks”
* At the top of this page, you can see the “Start Workflow” button
* Follow the steps of the wizard
* Select which kind of task you would like to create
* Complete the fields you need and click on “Start Workflow”
* Take care that the “Other options” checkbox is selected: if it is not selected when you assign the task to a team mate, they will not be notified by email.
* Any task assigned to you will appear in the “My tasks” section.
#### 3.2. My workflows
If you click the “Tasks” button in the top menu again, you will see “Workflows
I’ve started”, listing all the workflows you have started. You can sort them
using the items displayed in the left menu. You can also start another
workflow from here.
### 4\. Sites
Apart from file sharing, the platform provides discussion and wiki sites
configured to host any kind of topic you would like to share with all the
partners of the project. This section explains how they work.
You can access the sites by clicking the “Sites” button in the top menu. There
you will see different options: My sites, Site finder, Create site and
Favourites. If you click “My sites”, a list of the sites you are enrolled in
appears.
The first time you access, the sites are displayed in this list; afterwards,
all the sites you visit appear on your dashboard as a shortcut list.
#### 4.1. Project Consortium
This site provides information about the project partners, their logos, links
to their web pages, a calendar of tasks and all the users enrolled in the
project.
#### 4.2. Discussions
This is the communication site. Here you can open a new topic to discuss with
your team mates. To create a new topic, simply click the “New topic” button at
the top of this page.
After that, you will be redirected to the new-topic creation page, like this
one:
Fill in the required fields and click save.
To reply to a newly created topic, simply click the reply button in the topic
box:
#### 4.3. Wiki
This site is configured for creating wiki entries. Here you can create all the
pages you want; it is focused on documenting technical issues. To create new
wiki pages, click the “New Page” button at the top of this page.
**C4R Alfresco user manual**
### 5\. Other settings
* **Trashcan:** if you click your profile name at the top right of any page on
the platform, you will see “My Profile”. Clicking it redirects you to your
profile settings, where you will find an item called “Trashcan”. This is your
recycle bin: if you delete a document and want to recover it, simply select
the file and click recover (or delete it permanently).
# Introduction
This report presents the data management plan for the co-RRI competence cells
established during the FoTRRIS project. It should be seen as an addendum to
the first Data Management Plan (D5.2) submitted in the first reporting period
of the project and the updated Data Management Plan written at the end of the
second reporting period (D5.6). The plan contains contributions of each of the
project’s partners and gives more information about the future data management
in each of the co-RRI competence cells established during the FoTRRIS project.
# Data Summary
## Description of data sets
Throughout the project’s different work packages the following data were
collected:
### WP1: Data resulting from in-depth interviews and online surveys
These data were collected to complement the insights on the functioning of
contemporary research and innovation systems gained through desk research of
academic and other relevant literature (see also D1.1). It concerns survey and
interview data. The surveys were addressing a wide variety of knowledge actors
from public and private research performing and funding organisations. The
interviews, on the other hand, were done with a selection of key-persons (key-
informants) from local research and innovation communities.
### WP2: Data related to the competence cells’ activity and governance
models, interview data and data related to the design of the web-based
platform
Part of the work done during the FoTRRIS project consisted of developing a
governance and an activity model for the co-RRI competence cells. These tasks
therefore required data on, for instance, personnel costs, estimated revenues,
in-kind contributions, divisions of responsibilities among actors involved,
and other data needed to come to a comprehensive view on the (core)
activities, potential organisational structures and steering models for these
competence cells (see also D2.3 and D2.5).
In addition, 8 interviews were conducted in WP2, which provided
information for Task 2.4 ‘Activity model for the competence cells, and
alternative funding and evaluation methods for RRI projects and solutions’,
and for the guidance book ‘How to set up a competence cell’, which became part
of D4.3 ‘Materials for uptake’. It concerns data on the structure and
functioning of existing organisations (mission, business models, history,
etc.) that could function as an example for the competence cells. These data
came from the interviewees themselves and can therefore be considered primary
data. These data were complemented with some personal details such as the name
and surname of the interviewees, email, phone number and position in the
organisation. Yet, after the interviews all data have been anonymised, and at
the end of the project all personal data have been deleted from LGI’s secured
server and computers.
Finally, WP2 also delivered the code of the FoTRRIS web-based platform. This
code has been published as open source in GitHub (see below), so other groups
and interested people can install their own instance of the platform, as well
as collaborate to improve the software. Its success will depend on the ability
to create an active community around this open source project. This approach
will also allow the open community to improve the platform and adapt it to
future needs or restrictions. This code is delivered as open source in the
GitHub repository of the UCM-GRASIA research group:
* Web Module: web application ( _fotrrisweb_ ): Repository at _https://github.com/grasia/_
* Collaborative Module: collaborative content server ( _fotrrisserver_ ) : Repository at _https://github.com/grasia/_
### WP3: Workshop data
In each of the partner countries and regions contributing to the project, that
is Austria, Flanders, Hungary, Italy and Spain, a series of four (or more)
workshops was organised. In these workshops participated a selection of
stakeholders familiar with one of the challenges tackled in it, such as
‘sustainable use of building materials’, ‘a sustainable energy system in the
Madonie region’ or ‘a sustainable food system in the Graz region’ (see also
D3.1, D3.2 and D3.3). Several spread sheets were compiled in relation to these
activities, containing the contact details of actors active in the respective
thematic field and other (potential) relevant stakeholders.
The ‘data output’ of these series of workshops can be categorised as products
reporting on the content of these workshops, on the one hand, and products
providing more information process-wise. The first category comprises, amongst
other things, analyses of glocal problems, an overview of barriers and
leverages related to systemic solutions for these glocal problems and concepts
of projects that could contribute to sustainable solutions for these problems.
The latter category contains, for instance, the scenarios used during these
workshops and reflections of the workshops’ participants on certain aspects
related to these workshops.
### WP4: Workshop data
By means of a back-casting exercise, the FoTRRIS project created an
opportunity to, on the one hand, complement the insights about the functioning
of research and innovation systems at the regional and national level, but
also to go beyond these levels and to search for communalities in the visions
on future European research and innovation. This back-casting exercise was
organized as a joint European workshop to which experts in the field of
research and innovation were invited from European and global networks.
Similar to the workshop data collected in WP3, the output of this activity
contains spread sheets with the contact details of groups of interested
stakeholders, and files reporting on the content and the process of this
workshop.
## Data utility
These data might be interesting to the following users:
### WP1: Data resulting from in-depth interviews and online surveys
The survey data were anonymized so that personal identification is not
possible. The primary data from this survey will not be made openly
accessible, as we indicated in the accompanying information that data will
only be used for FoTRRIS. Thus the survey data are stored at the IFZ password
secured server, and will be shared on request with FoTRRIS consortium members
for FoTRRIS related publications only.
However, the summary of data and analysis was published in the project
evaluation report D3.2, and can be used for further purposes by anybody
interested in it.
The interviews made for WP1 were not made openly accessible because it would
have been impossible to guarantee the interviewees’ anonymity based on full
transcripts, which also contained information about the interviewees’
institutions, their work and their positions. Thus the interview primary data,
audio files and interview transcripts, are password saved on partners’
servers, only accessible for the team in charge of further elaborating on the
material. A detailed, but fully anonymized summary of each of the in-depth
interviews was provided to the task leaders in charge of further analysis for
the report D1.1. As we guaranteed confidentiality to our interviewees, no
primary data were shared with any other FoTRRIS consortium members. Access to
these summaries, which were collected by IFZ, and are password saved on their
server, was only granted to IFZ-FoTRRIS team members.
It might, however, be possible that future research of one of the research
teams participating in FoTRRIS invites them to go back to these interviews and
to re-examine and reframe the resulting analyses. Thus, on request, FoTRRIS
partners also can get access to these summaries, but only for FoTRRIS
publication purposes. They may therefore have some future utility for the
FoTRRIS partners and the co-RRI competence cells.
Still, no access to any interview data can be granted to third persons,
neither within the partners’ institutions nor to other institutions, as the
informed consent not only guaranteed full anonymity but also restricted data
use to FoTRRIS. Further use of the data beyond FoTRRIS-related work would
need an extra agreement from the interviewees. Finally, the interviews were
performed in the national languages, which would have made these transcripts
difficult to understand for a broad scientific readership anyway.
### WP2: Data related to the competence cells’ activity and governance
models, interview data and data related to the design of the web-based
platform
Data related to the competence cell’s activity and governance models: All data
that were meaningful and that could be made publicly accessible can be found
in D2.3 and D2.5. These reports are interesting in the first place for the
FoTRRIS partners investing in the development of a competence cell as they
allow to compare the structural embedding of each of the cells. In addition to
this readership, also RRI researchers interested in the development of RRI
practices and organisations may be interested in the content of these reports.
The same applies to the interview data from WP2. All these data can be found
in the WP2 deliverables (cf. above). To re-use these data, one only needs to
quote the deliverable. We assume that also these data are interesting in the
first place for people involved in the establishment and the development of
the competence cells and other RRI researchers interested in RRI practices and
organisations.
The code of the FoTRRIS web-based platform, on the other hand, can, given its
open source nature, be modified and be the basis for new software products
(e.g. using a fork of the current project in GitHub), or taken as it is to
install a customized platform, following the instructions that are provided in
deliverable D2.1 and D2.2. Therefore, there are two public targets:
* Entities that want to setup a Competence Cell with their own instance of the web based platform, which can be customized. They should look at the instructions for installation that are provided in Deliverable 2.1 _Design and specs of the CO-RRI web-based platform_ . The platform provides also a set of easy to configure customization parameters, and how to do it is described in detail in Deliverable 2.2: _User Manual of FoTRRIS CO-RRI web Platform._
* Software developers, who want to contribute to the evolution of the platform or create new products that are based on this code (forks of the original project). They will find the information on the software architecture and the distribution of the code in the Deliverable 2.1 _Design and specs of the CO-RRI web-based platform._ This requires some basic knowledge on the use of GitHub, but this is common for most software engineers today.
### WP3: Workshop data
All detailed and in-depth data linked to the regional/national workshops
performed in WP3 are in the national languages. Together with the non-codified
knowledge that is needed to interpret these data, that is all information
needed to contextualise and interpret these data, this automatically restricts
their utility to the persons involved in processing all data flows related to
these workshops. The only exception are the spread sheets with personal data
of interested groups of stakeholders. These data cannot be re-used, however,
without the informed consent of the stakeholders involved, and therefore
currently have no further utility for third parties.
### WP4: Workshop data
The same factors apply in the case of the back-casting exercise performed in
WP4: The ‘raw data’ resulting from this workshop can only be correctly
interpreted by the persons involved in processing these data, and therefore
only have further utility to these persons. The spread sheets with personal
data of interested groups of stakeholders cannot be re-used without the
informed consent of the stakeholders involved, and hence cannot be shared with
third parties.
# FAIR data
## Making data findable
All annotated data and information resulting from the research activities
performed during the FoTRRIS project can be found in the project’s
deliverables. These deliverables were placed, amongst others, on the project’s
website ( _http://www.fotrris-h2020.eu_ ) . Worth mentioning, is that the
websites of the co-RRI competence cells will have a link to the FoTRRIS
website and that the deliverables will therefore also be accessible via this
channel.
In addition to these deliverables, also the papers, book chapters and other
publications published during and after the project contain relevant data and
findings. D5.5 presents an overview of these publications and mentions, when
applicable, their DOI or ISSN. All these publications and other published
results are freely accessible via the project’s website (
_http://fotrris-h2020.eu/resources/_ ) .
Yet, these annotated data are based on the raw data gathered throughout the
FoTRRIS project. These data are in the national languages of the project
partners, that is in Spanish, Italian, Hungarian, German and Dutch, and are
stored by the respective partner institutions. Part of these raw data could be
digitalized and were made openly accessible via Zenodo (see also the next
chapter). However, this was not possible for all data. It is therefore
recommended that people interested in the whole primary output of one of the
experiments contact the projects’ partner institutions.
* For the Austrian experiment, contact IFZ via Sandra Karner ( [email protected]_ )
* For the Flemish experiment, contact VITO via Nele D’Haese ( [email protected]_ )
* For the Hungarian experiment, contact ESSRG via György Pataki ( [email protected]_ )
* For the Italian experiment, contact CESIE via Jelena Mazaj ( [email protected]_ ; [email protected]_ )
* For the Spanish experiments, contact UCM via Juan Pavon ( [email protected]_ )
Also the FoTRRIS co-creation platform (
_http://ingenias.fdi.ucm.es/fotrris/home.php_ ) can be used to get in touch
with the project teams involved in the experiments. Interested actors can
always ask to join one or more of the project groups.
## Making data openly accessible
One of the products of the FoTRRIS project is a web-based online collaboration
platform (see also _http://ingenias.fdi.ucm.es/)_ . After the project, it
will be supported and maintained by the RRIIA association in collaboration
with the UCM-GRASIA research group. However, given its open source nature, it
can also be deployed on other servers. The UCM group has resources to support
the platform for at least two years after the project. Later on, the use
and evolution of the platform will determine how to maintain it.
The software has been published as open source in _GitHub_ (
_https://github.com/_ ) , which is the largest open software repository in
the world. This is based on the _git_ tool ( _https://git-scm.com/_ ) , a
distributed version control system. This repository facilitates access to the
code and the ability to download it, control modifications of it, create new
projects from it (through fork), so that other groups and interested people
can install their own instance of the platform, as well as collaborate to
improve the software. Its success will depend on the ability to create an
active community around this open source project. This approach will also
allow the open community to improve the platform and adapt it to future needs
or restrictions.
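For readers who want to script this fork-and-modify workflow, GitHub also exposes it through its REST API (a fork is created by POSTing to `/repos/{owner}/{repo}/forks` with an authenticated request). The sketch below only builds the relevant URLs and performs no network call; the repository name `fotrrisweb` is taken from the WP2 description above, but its exact path under the `grasia` organization is an assumption, since the deliverable lists only the organization URL.

```python
API_ROOT = "https://api.github.com/repos"

def fork_endpoint(owner: str, repo: str) -> str:
    """REST endpoint that creates a fork when POSTed to with an
    authenticated request (POST /repos/{owner}/{repo}/forks)."""
    return f"{API_ROOT}/{owner}/{repo}/forks"

def clone_url(owner: str, repo: str) -> str:
    """HTTPS URL usable with `git clone` to obtain a local copy."""
    return f"https://github.com/{owner}/{repo}.git"

# Assumed repository path (organization URL from the deliverable,
# repository name from the WP2 section):
print(fork_endpoint("grasia", "fotrrisweb"))
print(clone_url("grasia", "fotrrisweb"))
```

A plain `git clone` of the resulting URL is enough for a private copy; a fork is only needed when changes are to be contributed back or published as a derived project.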
In order to install the platform, instructions are provided in Deliverable 2.1
_Design and specs of the CO-RRI web-based platform_ . The platform provides
also a set of easy to configure customization parameters, and how to do it is
described in detail in Deliverable 2.2: _User Manual of FoTRRIS CO-RRI web
Platform._
All other data resulting from the project’s workshops and other activities
that could be made openly accessible, can be consulted via Zenodo. The
following list gives an overview of the items that can be accessed via this
repository.
<table>
<tr>
<th>
TITLE
</th>
<th>
DOI
</th>
<th>
ZENODO LINK
</th> </tr>
<tr>
<td>
Hungarian transition experiment:
Examples of workshop outputs
</td>
<td>
10.5281/zenodo.1465851
</td>
<td>
_https://zenodo.org/record/1465851#.W8injWgzbcc_
</td> </tr>
<tr>
<td>
UCM Experiment on Women with disabilities: Examples of workshop output
</td>
<td>
10.5281/zenodo.1465861
</td>
<td>
_https://zenodo.org/record/1465861#.W8iozmgzbcc_
</td> </tr>
<tr>
<td>
UCM Experiment on Refugees: Examples of workshop output
</td>
<td>
10.5281/zenodo.1465843
</td>
<td>
_https://zenodo.org/record/1465843#.W8ipkmgzbcc_
</td> </tr>
<tr>
<td>
Flemish experiment: Examples of workshop output
</td>
<td>
10.5281/zenodo.1466001
</td>
<td>
_https://zenodo.org/record/1466001#.W8l-I2gzbcc_
</td> </tr>
<tr>
<td>
Italian transition experiment: Examples of workshop output
</td>
<td>
10.5281/zenodo.1465837
</td>
<td>
_https://zenodo.org/record/1465837#.W8isjWgzbcc_
</td> </tr>
<tr>
<td>
Austrian experiment: Examples of workshop output
</td>
<td>
10.5281/zenodo.1466379
</td>
<td>
_https://zenodo.org/record/1466379#.W8mB0Wgzbcc_
</td> </tr> </table>
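Zenodo DOIs of the form `10.5281/zenodo.<id>` embed the record identifier, so each record in the table above can also be fetched as JSON from Zenodo’s public REST API at `https://zenodo.org/api/records/<id>`. A minimal sketch that derives this API URL from a DOI, without performing the network request:

```python
import re

def zenodo_api_url(doi: str) -> str:
    """Derive the Zenodo REST API record URL from a Zenodo DOI."""
    match = re.fullmatch(r"10\.5281/zenodo\.(\d+)", doi)
    if match is None:
        raise ValueError(f"not a Zenodo DOI: {doi!r}")
    return f"https://zenodo.org/api/records/{match.group(1)}"

# First record from the table (Hungarian transition experiment):
print(zenodo_api_url("10.5281/zenodo.1465851"))
# → https://zenodo.org/api/records/1465851
```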
Other data cannot be consulted without involvement of one of the project
partners due to the following main reasons:
* FoTRRIS guaranteed anonymity to the persons collaborating with the consortium. Making certain data openly accessible, such as interview transcripts, would reveal their identity.
* The data could only be used for the FoTRRIS project unless the persons involved explicitly agree. This means that these data cannot be used in the context of other projects.
* A correct interpretation of the data is not possible because of a lack of contextual information for persons not involved in the research activities. For instance, certain data resulting from the workshops are difficult to correctly interpret for persons not having the necessary theoretical, practical and background knowledge about these workshops.
* Many of the workshops involved individual and group exercises that made it difficult to digitalize the output. For example, group discussions, quick drawing exercises, schemes that had to be quickly filled in individually before starting up a group discussion, or wall posters of which the content continually changed by means of post-its, cannot be easily digitalized. Furthermore, the content of these exercises should be evaluated in its particular context, which is very difficult to make publicly accessible.
* Certain information and data were used, or will be used, to create new project proposals. As also third parties are involved in the development of these proposals, certain confidentiality restrictions are applicable, which make that these data cannot be made openly accessible.
## Making data interoperable
Future exchange of FoTRRIS data between the partner institutions collaborating
in the FoTRRIS project and the competence cells resulting from the project
will be possible, because these partners share a common background that allows
them to correctly interpret these data. As already mentioned in the previous
paragraphs, for third parties this would be very difficult.
## Increase data re-use
Only for the data and code related to the online collaboration platform was it
meaningful to take measures to increase re-use. The consortium offers these
data and code under a Creative Commons CC BY 3.0 licence. Anyone interested in
re-using them can therefore:
* Share — copy and redistribute the material in any medium or format
* Adapt — remix, transform, and build upon the material for any purpose, even commercially.
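In practice, CC BY re-use requires crediting the source and linking the licence. A minimal helper for composing such an attribution line might look as follows; the title and author strings below are placeholders for illustration, not wording mandated by the project:

```python
CC_BY_3_URL = "https://creativecommons.org/licenses/by/3.0/"

def cc_by_attribution(title: str, author: str) -> str:
    """Compose a simple CC BY 3.0 attribution line for re-used material."""
    return f'"{title}" by {author} is licensed under CC BY 3.0 ({CC_BY_3_URL}).'

# Placeholder title/author, for illustration only:
print(cc_by_attribution("FoTRRIS co-RRI web platform", "the FoTRRIS consortium"))
```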
The platform will be maintained by the RRIIA association (a spin-off of the
project) for at least three years after the end of the project. Given the open source
nature of the code of the platform, and its availability in the GitHub
repository, it is easy to get the code and reuse it for other projects, as it
has been mentioned in previous sections.
# Allocation of resources
This chapter focuses on the following two elements essential to FAIR data
management:
* Who will be responsible for data management in the co-RRI competence cells?
* Are the resources for long-term data preservation discussed?
## Austrian competence cell
Good and FAIR data management will be guaranteed in the Austrian competence
cell, as it is for any other activity carried out at IFZ. This means
that the responsibility for data management lies with the respective project
manager, who will take care that collected scientific data and produced
results are Findable, Accessible, Interoperable, and Re-usable according
to the rules agreed on within the respective project teams engaged in the
R&I activities.
Access to data that are supposed to be open access will be granted via the
FoTRRIS online platform.
For the long-term storage of data, IFZ will provide server space, or, in the
case of collaborative projects, data storage will be the responsibility of the
data-producing institutions. If this is not an option, procedures and
resources for long-term storage will be anchored in a contractual covenant
with the other project partners.
## Flemish competence cell
Good data management is essential to any research and service unit of a
research performing organization and can therefore be seen as one of the core
activities of the co-RRI competence cell. Managing data will, in some way or
another, be part of almost all activities performed by the cell. As a result,
the responsibility for data management will be shared among all staff
members and will be based on a set of rules agreed on by all of them.
The competence cell will make use of a website and an online collaborative
space, linked to this website, for the exchange and (long-term) storage of
(project) data. The discussion about the resources needed for long-term data
preservation therefore was, and still is, part of the discussions and
investigations covering the design of the website and this online
collaborative space.
## Hungarian competence cell
Good data management will be guaranteed by the Hungarian competence cell in
line with any other research projects carried out by ESSRG. The responsibility
lies with the respective project manager, who will take care that collected
scientific data and produced results will be handled according to the rules as
agreed on within the respective project teams engaged in the collaborative R&I
activities. Access to data, which are supposed to be open access, will be
granted by using the FoTRRIS online platform.
## Italian competence cell
One of the main objectives of CESIE’s work is to respect fundamental rights
and data protection. Based on this, the organization has developed different
core regulations for the implementation of activities in Italy, Europe and
internationally. These can be found in documents such as ‘Privacy Policy in
accordance with EU Regulation 2016/679’ and ‘Child Protection Policy’. Being a
part of the CESIE structure, the Competence Cell guarantees that all data will
be collected and shared according to these and other ethical requirements.
Data are saved, monitored and used via an internal enterprise-level cloud
system. Access to the data and all files is controlled by security measures,
and user access is protected by personal passwords, which guarantees that
personal/private data cannot be used by third parties. CESIE is responsible
for data management. Data sharing is possible based on a permission of the
Higher Education and Research Unit’s Coordinator (who leads the Competence
Cell), if this sharing respects all internal rules, policies and regulations.
All data are stored for a period of ten years.
## Spanish competence cell (RRIIA)
The maintenance and evolution of the FoTRRIS online platform is defined as one
of the tasks of the RRIIA association. This will require the development of a
work plan and a data management strategy, in accordance with RRI principles.
Both the work plan and the data management strategy are still under
construction.
# Data security
This chapter covers the following 2 questions:
* What provisions are in place in each of the competence cells for data security (including data recovery as well as secure storage and transfer of sensitive data)?
* Is the data safely stored in certified repositories for long-term preservation and curation?
## Austrian competence cell
Data storage will be password secured on servers, either on the IFZ server or
a partner organisation’s server, which will be decided case by case for each
project separately. Rights to access data will be granted by the responsible
administrator for each project separately.
Personal data will be handled in line with the applicable data protection
laws, e.g. for Austria the Datenschutz-Grundverordnung – DSGVO (25/05/2018).
IFZ established the function of a data security officer in June 2018, who
will consult and support the Austrian CC with regard to ensuring that personal
data are handled in line with the new law.
## Flemish competence cell
The online collaborative space that will be used for (long-term) data storage
by the Flemish competence cell will be a password secured environment. The
cell’s staff will act as the administrator of this online work environment and
only registered users invited by the administrator will be able to add and/or
consult data. This means that all data on this platform will be stored
electronically on one of the secured VITO servers.
In relation to sharing, editing and storing confidential data the following
measures are taken. The person who puts data on the online platform will be
the only one who decides on the security level that is applicable to these
data. This person will decide on the accessibility for other users of the
files he or she entered in the collaborative space, and whether these data can
be edited or not. He or she will also be able, at any time, to withdraw this
information.
Finally, all personal data managed by the co-RRI competence cell will be
processed in compliance with the applicable personal data protection laws.
VITO’s data protection officer supervises all practices related to the
implementation of CRM-like systems within VITO.
## Hungarian competence cell
Data storage will be password secured on a hosting server of a provider
contracted by ESSRG. Rights to access data will be granted by the responsible
project manager of ESSRG. Personal data will be handled in compliance with the
applicable data protection laws (under Hungarian legislation and EU common
regulations).
## Italian competence cell
Data storage will be password secured on servers, either on the CESIE internal
server or on a hosting server supplied by an external provider. Rights to
access data will be granted by CESIE. Personal data will be handled in line
with the applicable data protection laws (under Italian legislation and EU
common regulations).
## Spanish competence cell (RRIIA)
The online collaborative space that will be used for (long-term) data storage
by RRIIA association will be a password secured environment. The association
staff will act as the administrator of this online work environment and only
registered users invited by the administrator will be able to add and/or
consult data. This means that all data on this platform will be stored
electronically on one of the secured RRIIA servers.
Given the design of the online platform, the administrator will decide which
services of the platform other users can access and in which projects they can
collaborate, i.e. which pads and chat rooms they can edit. The administrator
will have the right, at any time, to withdraw these rights upon detection of
misbehaviour.
Finally, all personal data managed by the RRIIA association will be processed
in compliance with the applicable personal data protection laws.
# Ethical aspects
As already mentioned in the previous chapters of this data management plan,
not all data collected throughout the FoTRRIS project can be re-used. In most
cases the reason for this is that FoTRRIS guarantees full anonymity to the
persons who contributed with sensitive information and/or personal opinions.
These data cannot be used for other purposes and in other projects unless the
persons involved explicitly agree. Obviously, the co-RRI competence cells will
follow the same policy and will only make use of these data when they have
obtained an informed consent of each of the relevant persons.
More specific details about the ethical requirements that were taken into
account in relation to data collection during the FoTRRIS project can be found
in D7.3 ‘Ethical requirements’. The co-RRI competence cells are committed to
implementing the same ethical principles as explained in this report.
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0491_FoTRRIS_665906.md
# Executive Summary
This Data Management Plan (DMP) provides information about the main FoTRRIS
policies regarding the management of data during the lifespan of the project.
It indicates which data will be collected, describes how the data will be
collected and processed, what methodology and standards will be applied, and
whether the data will be shared and preserved.
## 1\. Open access to publications
FoTRRIS will make all its peer-reviewed publications Open Access by depositing
its articles in a repository for scientific publications: Zenodo. Moreover,
all deliverables will be published on the project website:
_http://www.fotrris-h2020.eu/_ .
## 2\. Data set reference and name
The final DMP will include here the persistent identifier (a DOI) that the
data repository will issue once the dataset is deposited.
## 3\. Data set description
The aim of the project is to develop and introduce new governance practices to
adopt Responsible Research and Innovation (RRI) policies and methods in
Research and Innovation (R&I) systems. In order to develop the method and
institutional structure, interviews will be performed with ‘key knowledge
actors’, and online surveys addressing a wide variety of members of the R&I
community will be launched.
The research data will be collected through desk research of academic and
other relevant literature. In order to complement the information gained
through desk research, supplementary empirical data will be collected (surveys
with knowledge actors from public and private research performing and funding
organisations and further in-depth interviews with key-persons from the local
research and innovation community will be performed).
To identify and recruit research participants the Consortium will follow the
procedures below:
IFZ
A list of potentially relevant key persons for the interviews and the survey
was compiled based on a screening of funding agencies, research funding
programmes and research projects which are related to, or might become
relevant for, RRI. The screening was based on the team’s knowledge of the
Austrian R&I landscape, information gained through team members’ involvement
in the Austrian RRI platform, publications, personal contacts and previous
co-operations, and an additional web search.
Persons have been determined as relevant due to their engagement in RRI-
related funding programmes (programme managers, administrators), RRI-related
R&I projects (coordinators, researchers from public and private research
organisations, non-academic research participants), or due to their engagement
in activities linking/integrating science and society (e.g. intermediaries,
brokers – e.g. science centres, science shops, knowledge transfer centres).
<table>
<tr>
<th>
For the interviewee recruitment we compiled a priority list of 15 persons
according to:
* their potential role for a transition towards more RRI (strategic considerations)
* their (anticipated) knowledge about the R&I system in Austria and awareness about RRI
* thematic focus related to Food & Agriculture
* diversity criteria: gender balance, anticipated viewpoint (less & more critical), representatives of various actor groups within the R&I system
Interviewees are personally addressed by invitation letters sent via e-mail.
For the online survey the list of potential participants is compiled the same
way. Participants will then be recruited by e-mail including a request to
forward the survey link to other people. The invitation will include
information about the FoTRRIS project, the purpose of the survey, the informed
consent, how anonymity and data privacy will be preserved, and how long the
survey will take.
Participants from the online survey as well as interviewees will be asked
whether they want to be included into the project’s “participant pool”
(centrally administered by ERRIN): either to be contacted for further
engagement or to be kept updated on project news.
For the WP3 transition experiments we take a selective approach: we will
launch an open call for participants online (webpage, newsletter) and will
then select participants from the applicants who respond to this call by
assessing their relevance for the experiments, considering their field of
activity and expertise. Diversity criteria will also be considered in the
selection. Single key persons whom we consider particularly relevant for
implementing the experiments will be contacted personally.
The invitation to and participation in the knowledge arenas will be open and
fully inclusive; however, gatekeeper permission will need to be requested to
access ‘private spaces’ within the online platform.
REMARK: All contact details included in the contact list(s) are publicly
accessible online or are provided by the persons themselves (e.g. in the
context of responding to open calls).
VITO
For VITO, I started to compile a first list of potentially relevant key
persons for task 1.2 (Knowledge actors’ perspectives on RRI) and a second list
of potentially relevant key persons for WP3 (test of the multi-actor
experiment in the domain of materials scarcity). The first list is based on my
familiarity with the Flemish R&I system. I invited persons from funding
agencies, from public authorities in the domain of economics, science and
innovation, and from research performing organisations (universities and
university colleges). I also invited non-academic researchers working for
NGOs, and knowledge actors familiar with predecessors of RRI (such as STS and
TA) but not necessarily with the RRI concept itself. I compiled a priority
list of more or less 15 persons, based on diversity criteria such as gender
balance and anticipated
viewpoint (less & more critical). All contact details of our interviewees are
publicly accessible on the internet.
I invited the interviewees personally (via an invitation letter, accompanied
by a participant information sheet and a description of FoTRRIS’ data privacy
policy). When invited persons consent to be interviewed, they are asked when,
how (in person, via a phone call or via Skype) and where they want to be
interviewed, and whether they want to be included in our project’s
“participant pool” (centrally administered by ERRIN): either to be contacted
for further engagement or to be kept updated on project news.
For the second list, research participants for the WP3 transition experiments,
I compiled a priority list of persons according to their
* potential role for a transition in the domain of materials scarcities (strategic considerations)
* thematic focus related to materials scarcities
* diversity criteria: gender balance, anticipated viewpoint (less & more critical), representatives of various actor groups within the domain of materials scarcities, both knowledge actors from universities and university colleges, from NGOs, from public administrations, from sector organisations, from industry.
The second list is based on my familiarity, and that of my VITO colleagues,
with the Flemish materials system. Moreover, an open call for participants
will be launched online (webpage, newsletter), and we will then select
participants from the applicants who respond to this call by assessing their
relevance for the experiments, considering their field of activity and
expertise. Diversity criteria will also be considered in the selection. Single
key persons whom we consider particularly relevant for implementing the
experiments will be contacted personally.
The invitation to and participation in the knowledge arenas will be open and
fully inclusive; however, gatekeeper permission will need to be requested to
access ‘private spaces’ within the online platform. REMARK: All contact
details of our interviewees are publicly accessible on the internet.
For the online survey the list of potential participants is compiled the same
way. Participants will then be recruited by e-mail including a request to
forward the survey link to other people. The invitation will include
information about the FoTRRIS project, the purpose of the survey, the informed
consent, how anonymity and data privacy will be preserved, and how long the
survey will take.
Participants from the online survey as well as interviewees will be asked
whether they want to be included into the project’s “participant pool”
(centrally administered by ERRIN): either to be contacted for further
engagement or to be kept updated on project news.
CESIE
Implementation of T1.2: aiming to organise in-depth interviews with key
persons from the local research and innovation community, and surveys with
knowledge actors from academia, business, policy and civil society, CESIE
compiled a list of potentially relevant stakeholders based on:
* Knowledge about responsible research and innovation (RRI) in the field of renewable energy;
* Academic and non-academic research in the mentioned field;
* Activities linked with science and society.
Personal contacts and web searches were involved in this process; however, all
contact information of the invited interviewees is publicly available online.
A first contact with interviewees was made via an introductory call, after all
information about the project and the interview had been sent by e-mail.
A list of fifteen key persons from the local research and innovation community
is structured according to:
* Expertise in renewable energy and RRI;
* Gender balance and balance between participants from different work/action fields (academia, business, policy, CSO);
* Potential participation in future activities of the project.
ESSRG
For the interviews and the survey we compiled a list of potentially relevant
key actors of the Hungarian research and innovation system
(http://nkfih.gov.hu/innovacio/hazai-innovacios). This list is based on the
open online database of the National Research Development and Innovation
Office. This database embraces actors from different fields, such as: research
institutions, universities, technology transfer organizations, advocacy
organizations, innovative enterprises, financial intermediaries, research
infrastructure, clusters and national technology platforms. We complemented
this database by adding organizations that have been consortium partners in
RRI-related FP7 or H2020 projects, and the Hungarian beneficiaries of the
H2020 SME instrument (https://ec.europa.eu/easme/en/sme-instrumentbeneficiaries).
For the contact details of these organizations we used data openly accessible
through the internet.
For the interviewee recruitment we compiled a priority list of 30 persons (who
are in key positions of the relevant organizations; 15 primary and 15
subsidiary contacts) according to:
* their key role in the present research and innovation system;
* their potential role for a transition towards more RRI (strategic considerations);
* their experience with regard to RRI or similar activities; and
* diversity criteria: gender balance, anticipated viewpoint (less & more critical), representatives of various actor groups within the R&I system
</th> </tr> </table>
Interviewees are personally addressed by invitation letters sent via e-mail.
UCM
In order to compile a list of relevant key persons for task 1.2, we used our
personal contacts from local research projects that are related to, or might
become relevant for, RRI, together with information gained through web
searches.
For the interviewees’ recruitment, we created a priority list of more or less
15 persons, based on diversity criteria such as gender balance and expertise
in disability and RRI. We focus on:
* persons with knowledge about responsible research and innovation (RRI) inside and outside the field of people with disability;
* key people in the domain of academic and non-academic research with experience in the mentioned field;
* people involved in activities linking science and inclusion in society.
We invited the interviewees via e-mail with an invitation letter, accompanied
by a participant information sheet and a description of FoTRRIS’ data privacy
policy. When invited persons consented to be interviewed, they were asked when
and how (in person or via Skype) they wanted to be interviewed, and a meeting
was fixed.
All contact details of our interviewees are publicly accessible on the
internet.
* to identify potentially relevant key persons for RRI according to their experience in RRI and their engagement in RRI-related funding programmes, RRI-related R&I projects, and activities linking/integrating science and society;
* to contact them via e-mail, phone or face to face and receive a written agreement for participation in different project activities;
* to create a database of contacts regarding future cooperation.
### 3.1 FoTRRIS Survey Data
The survey data will be anonymized so that personal identification will not be
possible. Findings of surveys and interviews will be synthesised and
integrated with the results of the literature research in a report
(Deliverable D1.1 – month 9). Data from surveys will not be openly accessible.
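As a minimal illustration of the anonymisation step described above (the field names and records below are assumptions for the sketch, not the project's actual survey schema), direct identifiers can be stripped from survey records before the findings are synthesised:

```python
# Illustrative sketch only: the field names ("name", "email", ...) are
# assumptions, not the actual FoTRRIS survey schema.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "organisation"}

def anonymise(record):
    """Return a copy of a survey record without direct identifiers."""
    return {key: value for key, value in record.items()
            if key not in DIRECT_IDENTIFIERS}

responses = [
    {"name": "A. Kovacs", "email": "a.kovacs@example.org",
     "sex": "f", "answers": [3, 4, 2]},
]
clean = [anonymise(r) for r in responses]
```

Removing direct identifiers in this way is only a first step; indirect identifiers (e.g. rare combinations of role and institution) would still need to be checked before any wider sharing.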
### 3.2 FoTRRIS interview transcripts or analyses
Interviews will be audio recorded and transcribed, and a content analysis will
be performed.
Interview transcripts will not be made open access, because we cannot
guarantee anonymity to our interviewees if full transcripts are published.
Since interviews will be carried out in national languages, it will be rather
easy to identify the national background of the persons interviewed and,
possibly, for those who happen to know the national R&I community, to identify
the person herself or himself. This risk is plausible, since the persons
interviewed are experts in a certain field, so interviews could at least be
traced back to certain institutions. Under such conditions, interviewees would
likely refuse to talk openly to us, which would certainly affect our research.
### 3.3 FoTRRIS workshop data
Further, workshops will be organized to create, together with the workshop
participants, a common problem definition regarding a problem of resource
scarcity and a common definition of a potential solution to the problem
defined. Workshop group reflections will be recorded as a matter of
convenience for analysis. Audio recordings will not be made open access. The
problem definitions and definitions of potential solutions will be made open
access and uploaded to Zenodo as PDF files. No extra costs are involved in
archiving our data in Zenodo. Zenodo guarantees the long-term preservation of
data.
### 3.4 Processing operations
The contact details of research participants will be processed and be used to
organise interviews, focus group discussions and workshops and to document
their participation in those events. Contact details will be used to invite
research participants to participate in surveys.
Research partners will process the research participants' contact details in
compliance with the applicable personal data protection laws.
Research participants will take part in:
* one-on-one interviews, which may be recorded;
* focus group discussion;
* collaborative workshops;
* and possibly surveys, which may be conducted online.
The collected personal data will only consist of: names, contact details, sex,
personal insights and opinions, images, voice recordings, content analyses or
transcripts of interviews, and outcomes of surveys.
The transcripts and content analyses will be stored electronically on secure
servers, in a data repository which will only be accessible to project team
members directly engaged in the corresponding research work.
Furthermore, the servers will be password secured, and the passwords will be
changed every six months to ensure security.
Interviewees’ names will be encrypted in the transcription files.
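As a minimal sketch of what encrypting names in transcripts could look like in practice (the salted-hash scheme and the names below are illustrative assumptions, not the project's actual procedure), each interviewee name can be replaced by a stable pseudonym, with the name-to-pseudonym mapping kept separately from the transcripts:

```python
import hashlib

# Illustrative sketch only: the salted-hash pseudonymisation and all names
# are assumptions, not the FoTRRIS project's actual encryption procedure.
SALT = "project-secret"  # assumption: stored securely, apart from transcripts

def pseudonym(name):
    """Derive a stable pseudonym for an interviewee name."""
    digest = hashlib.sha256((SALT + name).encode("utf-8")).hexdigest()
    return "Interviewee_" + digest[:8]

def pseudonymise(transcript, names):
    """Replace each listed name in the transcript; return text and mapping."""
    mapping = {}
    for name in names:
        mapping[name] = pseudonym(name)
        transcript = transcript.replace(name, mapping[name])
    return transcript, mapping

text, mapping = pseudonymise(
    "Anna Kovacs argued that the funding scheme lacks transparency.",
    ["Anna Kovacs"],
)
```

Because the pseudonym is derived deterministically from the salted name, the same interviewee receives the same alias across transcripts, while only holders of the mapping (or the salt) can re-identify anyone.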
Contact details will only be used for this research project, and will not be
used further for other purposes, unless participants explicitly agree.
Personal data will be used for scientific analysis and the production of
research reports. They may also be used in newsletters, the project website
and other media reporting on the project’s activity. These reports,
newsletters and website may identify individuals that participated in
interviews, workshops, group discussions and survey, but will not attribute
particular opinions of individual participants, unless they explicitly
consent.
## 4\. Data Sharing
Survey participants may withdraw from the study at any time, and the
information they provided will be deleted upon request. This also applies
to the use of personal data. Data that have already been processed and
published can still be used for the project.
The files will be deleted at the latest 3 years after the project ends.
Research participants have the right to request access to their personal data
and to have these data rectified. They also have the right to refuse the use
of their personal data. However, personal data that have already been
processed for this research project can be used further within the project. If
participants wish to exercise their rights, they should contact the FoTRRIS
scientific representatives.
## 5\. Archiving and preservation
The personal data of research participants will be maintained securely on the
servers of the organisations participating in FoTRRIS, which are only
accessible to FoTRRIS researchers.
Moreover, the personal views of research participants will be deposited in a
data repository with restricted access, thus allowing for long-term use and
preservation. The personal data concerned and the recordings will only be
accessible to the research partners directly involved in this study.
Research partners do not intend to rely on third party service providers for
the processing of research participants' personal data.
## Conclusion
This document is the first draft of the data management plan, which will be
updated during the lifecycle of the project. The updated version of the
document will present detailed information regarding data collection and use,
for example: in T1.2.
## Bibliography
European Commission, Directorate-General for Research & Innovation (2016).
Guidelines on Data Management in Horizon 2020. Version 2.1. Available online:
_https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf_
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0492_PRINTEGER_665926.md
<table>
<tr>
<th>
</th>
<th>
Immediate contributions of PRINTEGER will include raising attention to
realistic and effective integrity measures through dissemination, including a
large conference, and the immediate trial and use of much-improved educational
resources for teaching research ethics to future and young scientists.
**Website:** Printeger.eu
</th> </tr>
<tr>
<td>
**The main data activities of**
**PRINTEGER**
</td>
<td>
Besides the normal data activities of an EU-funded research project (life
cycle), the DMP of PRINTEGER will focus on four activities:
1. Web-based questionnaire (Task IV2);
2. Focus groups (Task IV3);
3. Case studies (Task III3);
4. Interviews.
**_Web-based questionnaire_**.
Participants (approximately 2000) will be invited to participate on a
voluntary basis. An information sheet will be produced containing information
concerning the objectives of PRINTEGER and the way data will be managed.
Research data will be anonymous and no data will be collected that would make
participants identifiable.
Primary data will only be shared beyond the researchers immediately involved
in this specific work package after complete anonymity has been verified. The
objective is to analyse factors that promote or hamper integrity in research.
We will not collect or publish accusations of misconduct concerning
identifiable individual researchers or institutes. The objective of the
project and the data-management strategy will be carefully explained to
potential participants.
**_Focus groups_**.
Participants (120) will be invited to participate on a voluntary basis. An
information sheet will be produced containing information concerning the
objectives of PRINTEGER and the way data will be managed.
Researchers commit to selecting a reasonable representation of gender, age,
position (student, professor, manager, etc.) and other relevant social
categories in the focus groups. The objective of the project and the
data-management strategy will be carefully explained.
We will not collect or publish accusations of misconduct concerning
</td> </tr>
<tr>
<td>
</td>
<td>
identifiable individual researchers or institutes. We will take care not to
give rise to stigmatisation (information on ‘ethnicity’ or religion, for
instance, will not be part of our data).
Reports of sessions will be anonymous and draft versions will be distributed
among participants for comments and corrections. We will also articulate the
conditions of privacy and confidentiality of the data in the information sheet.
**_Case studies_**:
The consortium will use secondary analysis, that is, the consortium will
analyse case materials already available in the public domain. This also
applies to Institutional responses to scientific misconduct studied by
PRINTEGER: this again refers to materials available in the public domain. In
some cases, interviews will be used to shed light on the cases. For this
method, please see the considerations immediately below.
**_Interviews_**:
Experts will be invited to participate on a voluntary basis. Before the
interview, the objective of the project and the strategy for data management
and data analysis will be carefully explained with the help of the PRINTEGER
information sheet.
We will provide feedback on the results to the interview participants. Draft
reports of the interviews will be presented to participants for corrections.
Reports will be anonymised, unless explicit consent for quoting identifiable
participant views has been obtained.
</td> </tr> </table>
# Data management roles
<table>
<tr>
<th>
**Who is involved in writing the DMP?**
</th>
<th>
The Project Management Team (RU) will prepare a Data Management Plan in which
the participants determine and explain which of the generated (research) data
will be made open, or give reasons for not providing access.
</th> </tr>
<tr>
<td>
**Data Manager PRINTEGER / role**
</td>
<td>
Willem Halffman will act as Data Manager of PRINTEGER. He shall be responsible
for executing the PRINTEGER DMP. At every consortium meeting, the Data Manager
will provide an update and discuss (strategic) issues, such as approval for
access, with the partners / EAB members. He shall act as the focal point for
matters regarding requests from partners outside the consortium, situations
like loss of data, etc.
</td> </tr>
<tr>
<td>
**Who is creating the data?**
</td>
<td>
Project Partners
</td> </tr>
<tr>
<td>
**Who is processing and analysing the data?**
</td>
<td>
Project Partners
</td> </tr>
<tr>
<td>
**Who is preserving and giving access to the data?**
</td>
<td>
**During the project** : project partners are responsible for preserving data
and giving access to data in line with the DMP PRINTEGER.
All PRINTEGER deliverables will be made freely available to a wide audience
through the website Printeger.eu and active dissemination.
**After project completion** : Vital resources will be made available beyond
the duration of the project, e.g.:
\- _Educational tools_ : will continue to be curated by RU, as they will
function in the daily educational activities of RU. In the unlikely event that
RU web servers prove unwieldy or technical support falls short, another host
with an active interest in teaching will be found.
\- _Misconduct incidence data_ , the code and guideline inventory, research
leader tools, and similar output that will continue to be of active use beyond
PRINTEGER will be hosted by a large and stable research organisation that has
an active interest in the curation of such resources. For such purposes, the
project manager will seek cooperation with an academy of science or
professional organisation, such as ALLEA.
</td> </tr>
<tr>
<td>
**Who owns the data?**
</td>
<td>
Data are owned by the project partner that generates them
</td> </tr>
<tr>
<td>
**Who may want to reuse the data?**
</td>
<td>
Researchers, research funding organisations, national and local policymakers
[..]
</td> </tr>
<tr>
<td>
**Supervision**
</td>
<td>
Expert Advisory Board (EAB) of PRINTEGER. Their supervision is of particular
importance for misconduct incidence data; arrangements will be made to
provide access to the data under privacy restrictions.
</td> </tr>
<tr>
<td>
**Are there any other roles concerning research data management of importance
for your research?**
</td>
<td>
All Partners will work closely with their local Data
Management Officers and Data Security Officers in order to comply with the
local and national rules and regulations. Information which is of importance
for this DMP shall be communicated directly to the Data Manager PRINTEGER.
</td> </tr> </table>
# Data standards and security
## Data Standards
All data in PRINTEGER will be collected using the principles and best
practices of qualitative and quantitative data collection. For example,
interviews will be undertaken with the informed consent of the interviewees
and participants (more on this in Section 4), the goals will be communicated
to them clearly, and recorded answers will be checked with the
interviewees/participants.
Every dataset will contain instructions (readme.txt) and, if needed, quality
testing information (e.g. methodology). Files and folders will be versioned
and structured using a naming convention consisting of the Work Package name,
task name (Figure 1), and file name. The file naming convention is: short name
of file contents, date, and version number (Figure 2).
**Figure 1)** Example of PRINTEGER data structuring convention. **Figure 2)**
Example of PRINTEGER file naming convention.
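Since the figures themselves are not reproduced here, the file naming convention can be illustrated with a short sketch. The separator characters, date format, and file extension below are assumptions for illustration; the convention itself only fixes the three elements (contents, date, version).

```python
from datetime import date

def build_filename(contents: str, day: date, version: int, ext: str = "docx") -> str:
    """Illustrative file name following the PRINTEGER convention:
    short name of file contents, date, version number."""
    return f"{contents}_{day.isoformat()}_v{version:02d}.{ext}"

# e.g. a hypothetical interview report drafted on 1 March 2016, first version
print(build_filename("interview-report", date(2016, 3, 1), 1))
# interview-report_2016-03-01_v01.docx
```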
Analysis of the data will be performed using standard software (e.g. MS
Office, Windows Media Player, SPSS, NVivo) provided by the host institutions
or freely available open-source software tools. The short- and long-term
storage of the data is the responsibility of the project partner that
generated it.
## Security
Partners will take all necessary measures in order to prevent loss of data.
This will include the protection of primary data by keeping data out of the
cloud and other sharing services that go beyond the local research
organization. If needed, Partners will get advice and approval from their Data
Security Manager about using specific programmes and services.
Access to unprocessed primary data will be restricted to researchers involved,
through secure data storage. This point will be addressed in the appropriate
deliverable.
Loss of data is safeguarded against by:
* storing the data on secure servers supervised by project partners’ host institutions;
* regular backing up of the data on the abovementioned servers;
* encrypting all of the data by the abovementioned institutions as soon as it is stored on their servers.
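As an illustration of the backup step above, the following stdlib-only sketch copies a file to a backup location and verifies the copy with a checksum. It is not the institutions' actual tooling; encryption itself would be handled by the institutional server infrastructure mentioned above.

```python
import hashlib
import shutil
from pathlib import Path

def sha256sum(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_with_verification(src: Path, backup_dir: Path) -> Path:
    """Copy src into backup_dir and verify the copy by checksum."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dst = backup_dir / src.name
    shutil.copy2(src, dst)  # copy file contents and metadata
    if sha256sum(src) != sha256sum(dst):
        raise IOError(f"backup of {src} is corrupt")
    return dst
```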
Sharing of the data and levels of access are explained in Section 5.
# Privacy
We will adhere to local and national ethical rules and guidelines, as well as
to national and EU legislation. Overall, our project does not involve the
use of identifiable data relating to persons. To the extent that identifiable
data (quotes etc.) will be used, explicit consent will be acquired in
accordance with research ethics guidelines and best practices.
Participation of persons (as respondents in interviews, surveys, or focus
groups) will be _entirely voluntary_ . We will obtain (and clearly document)
their informed consent in advance. For this purpose, we will prepare an
informed consent form and a detailed information sheet which:
* are in a language and in terms fully understandable to them;
* describe the aims, methods and implications of the research, the nature of the participation and any benefits, risks or discomfort that might be involved;
* explicitly state that participation is voluntary and that anyone has the right to refuse to participate — without any consequences;
* indicate how (personal) data will be collected and protected during the project;
* describe how anonymised data will be stored for transparency reasons, but not for future reuse.
The consortium ensures that the potential participant has fully understood the
information and does not feel pressured or forced to give consent. If the
consent cannot be given in writing, for example because of illiteracy, the
non-written consent will be formally documented and independently witnessed.
Also our participants are _not_ used as ‘research subjects’ in the sense of
traditional social science research, but as professionals, academics and
research managers, that is: as sources of insight and information, and as
_active_ participants in the project, in accordance with the concept of
responsible research and innovation (RRI). _In other_ _words, they are not
mere sources of project data, but contributors to a co-constructive_ _and
interactive process_ .
We will point out that, rather than ‘benefits, risks or discomfort’,
participation will allow participants to contribute to a co-creative and
interactive process designed to strengthen integrity in research. We will
nonetheless clearly explain what participation entails, for instance in terms
of the amount of time involved. It will be explicitly stated that participants
have the right not to participate in the project, although we expect this to
be obvious, and that they are entitled to withdraw their participation,
without any consequences, at all stages.
Informed consent procedures will be installed in accordance with policy and
regulations of participating countries and universities.
## Unexpected findings
During the first consortium meeting, we not only agreed on informed consent
procedures and the data management plan in outline, but also discussed the
issue of how to act in the case of unexpected findings.
* The consortium members are aware of the fact that we are collecting and processing data on sensitive issues (misconduct, integrity) which can have an impact in careers of individuals and on the reputation of institutions, or even on research as such. Privacy and anonymity must be respected.
* Data and findings must be processed in such a way that stigmatisation of groups or institutes is prevented.
* We are interested in factors that promote or deter integrity, not in allegations concerning traceable / identifiable individuals or institutions; it is not the task of the consortium to detect or report individual or institutional cases of scientific misconduct. Should we be informed about cases of misconduct, for instance in the context of interviews or focus groups, we may encourage participants to seek advice from local integrity offices or a local integrity board, but it is not our task to undertake such actions ourselves.
* Nonetheless, it may be the case that, in the course of our project, consortium members will be provided with evidence concerning extreme cases of fraud and misconduct in such a way that serious conflicts of conscience may arise and the principle of confidentiality comes under pressure. For instance: cases of largescale financial fraud or sexual exploitation. We have decided that, should such a situation arise, we will call a consortium meeting and decide on the basis of unanimity how to deal with the data in this case.
# Overview of research data / draft
## Data used during research
These data are primarily for internal use. They are shared within the project
via a Dropbox folder or a specific sharing program (short-term storage) which
follows the data privacy and security standards described in Sections 3 and 4.
Long-term storage of these data will be provided by the servers of the host
<table>
<tr>
<th>
**Task**
</th>
<th>
**Description**
</th>
<th>
**Stage**
</th>
<th>
**Source**
</th>
<th>
**Access level**
</th> </tr>
<tr>
<td>
</td>
<td>
Database of stakeholders with their personal data
</td>
<td>
Raw
</td>
<td>
Own contacts
</td>
<td>
Restricted to project partners
</td> </tr>
<tr>
<td>
II.1
</td>
<td>
Inventory of key documents
</td>
<td>
Analysed
</td>
<td>
Own research
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
IV.3
</td>
<td>
Focus groups
</td>
<td>
Raw
</td>
<td>
Own research
</td>
<td>
Restricted to project partners
</td> </tr>
<tr>
<td>
</td>
<td>
Focus group report
</td>
<td>
Analysed
</td>
<td>
Own research
</td>
<td>
Public
</td> </tr>
<tr>
<td>
IV.2
</td>
<td>
Questback Survey
</td>
<td>
Raw
</td>
<td>
Own research
</td>
<td>
Restricted to immediately involved researchers
</td> </tr>
<tr>
<td>
</td>
<td>
Paper/report presenting results from the survey
</td>
<td>
Analysed
</td>
<td>
Own research
</td>
<td>
Public
</td> </tr> </table>
## Data shared after research project completion
The columns Stage, Source, and Access are omitted for the data below since
they are the same for all of them. Thus, the Stage for all of these data sets
is “Analysed”, the Source is “Own research”, and the Access level is “Public”.
Long-term storage for these data will be provided via the project website (
_http://printeger.eu_ ) and the knowledge platform (to be announced).
<table>
<tr>
<th>
**Task**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
II.1
</td>
<td>
Inventory of key documents
</td> </tr>
<tr>
<td>
</td>
<td>
Interview results 1
</td> </tr>
<tr>
<td>
IV.2
</td>
<td>
Survey Scheme
</td> </tr>
<tr>
<td>
</td>
<td>
Paper/reports presenting results from the survey
</td> </tr>
<tr>
<td>
IV.3
</td>
<td>
Focus group report
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr> </table>
## Data size
<table>
<tr>
<th>
**Task**
</th>
<th>
**Description**
</th>
<th>
**Data size**
</th> </tr>
<tr>
<td>
</td>
<td>
Database of stakeholders
</td>
<td>
< 1 MB
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
IV.2
</td>
<td>
Survey report
</td>
<td>
5 MB
</td> </tr>
<tr>
<td>
IV.3
</td>
<td>
Focus group report
</td>
<td>
< 1 MB
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
# Appendix I Type of research data
Try to identify possible **types of research data** as early in your research
as possible. You may at least distinguish between the following three data
stages:
* **Raw data** : this is the original data that you collected but did not process or analyse yet. For instance: audio files, archives, observations, field notes, data from experiments. When you reuse existing data – data you haven’t collected yourself – these data may be considered raw data.
* **Processed data** : this is the data that you digitised, translated, transcribed, cleaned, validated, checked and / or anonymised.
* **Analysed data** : these are the models, graphs, tables, texts etc. you created based on the raw and processed data, aimed at discovering useful information, suggesting conclusions and supporting decision-making.
Be aware that when you haven’t created the data yourself, this may influence
what you are allowed to do with the data ( _data ownership_ ).
_Examples of the diverse types of research data are_ :
* Documents (text, MS Word), spread sheets
* Scanned laboratory notebooks, field notebooks and diaries
* Online questionnaires, transcripts and surveys
* Digital audio and video recordings
* Transcribed test responses
* Database contents
* Digital models, algorithms and scripts
* Contents of an application (input, output, log files for analysis software, simulation software and schemas)
* Documented methodologies and workflows
* Records of standard operating procedures and protocols
# Appendix II Dataflow
<table>
<tr>
<th>
**Phase 1. Creating data**
* design research
* plan data management (formats, storage, etc.)
* plan consent for sharing
* locate existing data
* collect data (experiment, observe,
measure, simulate)
* capture and create metadata
</th>
<th>
</th>
<th>
**Phase 2. Processing data**
* enter, digitise, transcribe, translate data
* check, validate, clean data
* anonymise data where necessary
* describe data
* manage and store data
</th> </tr>
<tr>
<td>
**Phase 3. Analysing data**
* interpret data
* derive data
* produce research outputs
* author publications
* prepare data for preservation
</td>
<td>
</td>
<td>
**Phase 4. Archiving data**
* migrate data to the best format
* migrate data to a suitable medium or media
* back up and store data
* create metadata and documentation
* archive data
</td> </tr>
<tr>
<td>
**Phase 5. Giving access to data**
* distribute data
* share data
* control access to data
* establish copyright
* promote data
</td>
<td>
</td>
<td>
**Phase 6. Reusing data**
* follow up on research
* carry out new research
* conduct research reviews
* scrutinise findings
* teach and learn
</td> </tr> </table>
_**Radboud University policy for storage and management of research data** _
_**(Executive Board decision dated 25-11-2013)** _
# Appendix III Informed consent form
# Appendix IV Agreements
***
**0493_GOAL_731656.md** (Horizon 2020, https://phaidra.univie.ac.at/o:1140797)
# 1 Introduction
In line with the principles of Open Access to research data and publications
generated through H2020 programmes, the GOAL project produces this deliverable
to present and explain how GOAL aims to improve access to and use of data
generated by the project, following the principles of the Open Research Data
Pilot of the European Commission.
# 2 Data Management Plan
This section describes the DMP policy of GOAL project, as described in
Deliverable D2.3.
## 2.1 Purpose
Data collection and generation in the GOAL project serves four distinct
purposes. First, data collection is necessary for the development of the
underlying intelligence and algorithms. Second, data collection, storage and
processing are necessary for key functionalities of the platform to work (e.g.
GOAL coin generation). Third, additional types of data are collected for
the purpose of evaluating the effectiveness of platform components, or of the
service as a whole, in order to make sure that improvements to the product are
perceived as valuable and have the desired effect on the target population
(i.e. in order to successfully execute _build-measure-learn_ loops). Fourth,
and last, additional data may be collected strictly for the purpose of
generating knowledge about our target users and the domain of physical,
cognitive, and social behavior. In short, data collection in GOAL targets
either: _**Algorithm Development**_ , _**Operation**_ , _**Learning**_ , or
_**Knowledge Generation**_ . The types of data collected in these four
categories are listed below.
## 2.2 Types and formats

### 2.2.1 Data required for Algorithm Development
Physical activity measurement requires the application of signal processing
algorithms on the data measured by the wearable device or the smartphone. To
develop these algorithms, the consortium partners are collecting datasets
while performing different physical activities. These datasets are not
collected by the GOAL platform; instead the consortium is utilizing either
proprietary sensor recording programs, or third-party ones. For Android
smartphones, typically _AndroSensor_ is used
(https://play.google.com/store/apps/details?id=com.fivasim.androsensor&hl=en).
The data collected span a variety of sensors. Those that are currently
utilized by the consortium are:
* Atmospheric pressure sensor, used for altitude change estimation
* Acceleration sensor (3 axes), used for activity intensity classification and step counting
* Step counter sensor (not always present), used for step counting
* GPS sensor (latitude, longitude, elevation), used for speed and altitude change estimation
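Altitude change estimation from the atmospheric pressure sensor typically relies on the international barometric formula for the standard atmosphere. The sketch below is illustrative, using the standard-atmosphere constants, and is not the consortium's implementation.

```python
def pressure_to_altitude(p_pa: float, p0_pa: float = 101325.0) -> float:
    """Approximate altitude (m) from atmospheric pressure (Pa) using the
    international barometric formula; p0_pa is sea-level pressure."""
    return 44330.0 * (1.0 - (p_pa / p0_pa) ** (1.0 / 5.255))

def altitude_change(p_start_pa: float, p_end_pa: float) -> float:
    """Altitude gained (m) between two pressure readings."""
    return pressure_to_altitude(p_end_pa) - pressure_to_altitude(p_start_pa)

# roughly 1000 m above sea level corresponds to about 89875 Pa
print(round(pressure_to_altitude(89875.0)))
```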
The activities recorded are both scripted (to provide a ground truth) and
free-running. They mainly span walking, running and climbing stairs in a
controlled environment (indoors, on a treadmill) or outdoors (asphalt, dirt
roads, forest). The sampling rate of the collected data also varies; the two
most widely used values are 20 Hz and 50 Hz. Only step detection while
running, based on the acceleration data, appears to benefit from the higher
sampling rate.
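A step counter over the acceleration-vector magnitude can be sketched as thresholded peak detection. This is an illustrative toy, not the consortium's algorithm; the threshold, refractory gap, and synthetic walking signal are all assumptions.

```python
import math

def count_steps(magnitudes, threshold=11.0, min_gap=10):
    """Count steps as local maxima of the acceleration-vector magnitude
    (m/s^2) that exceed a threshold, with a refractory gap between peaks
    (min_gap samples; 10 samples = 0.2 s at 50 Hz)."""
    steps, last_peak = 0, -min_gap
    for i in range(1, len(magnitudes) - 1):
        is_peak = magnitudes[i - 1] < magnitudes[i] >= magnitudes[i + 1]
        if is_peak and magnitudes[i] > threshold and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps

# synthetic 2 Hz walking signal sampled at 50 Hz for 5 s -> 10 steps
fs, f_step = 50, 2.0
signal = [9.81 + 3.0 * math.sin(2 * math.pi * f_step * i / fs) for i in range(fs * 5)]
print(count_steps(signal))  # 10
```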
### 2.2.2 Data required for Operation
The following types of collected data are required for the successful
operation of the GOAL platform:
* Core Platform Information:
  * History of GOAL Coins earned and spent
  * GOAL Achievements and/or Badges earned
  * History of Social Marketplace activity:
    * Tasks created
    * Rewards given to others
    * Tasks completed
* Basic User Information – e.g. required for account creation and personalization of different GOAL services:
  * Username / Email address
  * Password
  * Age
  * Gender
  * Weight
  * Height
  * Date of birth
  * First name
  * Last name
  * Nickname
  * Picture URL
* Daily Physical Activity Data – The GOAL platform collects and stores data that describe the daily levels of physical activity of its users, in order to (1) award users with GOAL Coins, (2) provide personalized goals through the goal-setting algorithms, and (3) provide motivational feedback on current behavior through a virtual agent. The daily physical activity data can originate from:
  * Processed accelerometer data (e.g. steps, step rates, integrals of acceleration vector magnitudes) as provided by processing the smartphone, smartwatch or proprietary sensors’ values.
  * Processed GPS data (e.g. distances, speeds, altitudes) as provided by processing the smartphone or smartwatch sensors’ values.
  * Processed atmospheric pressure data (e.g. altitude changes) as provided by processing the smartphone or smartwatch sensors’ values.
* Performance Data for Mobile Games – For games that are considered “coin generators” (e.g. cognitive/puzzle games), the GOAL platform will track the performance of the user within the game in order to (1) award users with GOAL Coins, (2) set personalized cognitive goals, and (3) provide motivational support through the virtual agent. Mobile games data are collected through:
  * Score obtained, as calculated by each game by taking into account factors like performance, difficulty level and time to complete, and normalizing by the maximum achievable score or the current high score.
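The normalisation step described above can be sketched as follows. The per-game combination of performance, difficulty level and completion time is game-specific and omitted here; `normalized_score` is a hypothetical helper name, not a platform API.

```python
def normalized_score(score: float, reference: float) -> float:
    """Normalise a raw game score by a reference value (the maximum
    achievable score or the current high score), clamped to [0, 1]."""
    if reference <= 0:
        return 0.0  # avoid division by zero for degenerate references
    return max(0.0, min(score / reference, 1.0))

print(normalized_score(80.0, 100.0))   # 0.8
print(normalized_score(120.0, 100.0))  # 1.0 (beat the current high score)
```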
### 2.2.3 Data required for Learning
The following types of data are collected, stored and processed within the
project’s consortium in order to validate internally whether the correct
design decisions were made; these data include:
* Usage / User Interaction Log Data – For all the different front-end applications we collect and store data about how users navigate through the applications. These data can be used to analyse whether the front-end designs are logical, and e.g. which features are popular among which types of users. Interaction data are stored for the following front-end applications:
  * Main GOAL Mobile App
  * Main GOAL Web App
  * RRD Activity Coach (GOAL Integrated Health App)
  * Virtual Coach (integrated “front-end” within the Main GOAL apps)
* Server-side logging data – The web server that runs the main GOAL platform stores information about who accessed which services at which point. This information is only used in case of errors and is not stored permanently.
### 2.2.4 Data required for Knowledge Generation
Knowledge Generation in the GOAL project takes place in Work Package 6, which
covers the final demonstration of the project’s results. Deliverable 6.1
describes the project’s final demonstration protocol and the types of data
that need to be collected from our end-users in order to evaluate the GOAL
platform’s usability, acceptance and user experience. Below we describe the
types of data required during this phase of the project:
* Recruitment of end-users. Users were invited to participate in the GOAL evaluation by contact via **email address** . In the initial phases of the evaluation, friends and colleagues were contacted. In the later phases, recruitment of older adult end-users occurred primarily through the Roessingh Research and Development research panel. Users in the region surrounding RRD have voluntarily signed up for this panel in order to be invited to participate in research studies. This database of user information is maintained at RRD, following the principles of the GDPR, including e.g. the ability to easily sign off from this list.
Participants of the final GOAL evaluation are asked to provide informed
consent (see Deliverable 6.1, e.g. Section 4.1), where for each participant
the following information is requested:
* Name
* Email Address
* Phone Number
* Address
* Signature (including Date)
Digital copies of the completed informed consent forms are stored on the
private servers of RRD, where access is granted only to RRD’s DPO and the
primary responsible researcher for the GOAL evaluation.
Upon inclusion in the GOAL evaluation, the following data is collected for
each user:
* Video and audio recording of a think-aloud pre-test session, in which the user is asked to perform a series of tasks. These video/audio recordings are stored privately and are transcribed and anonymized by the primary researchers.
* Evaluation forms describing the performance of the executed tasks:
  * Task completed successfully (Yes/No)
  * Description of encountered difficulties during task
* Demographic questionnaire:
  * Gender
  * Date of Birth
  * Occupation
  * Highest Completed Education
  * Use of Smartphone
* System Usability Scale (SUS) questionnaire
* User Experience questionnaire (based on TAM)
Besides the data collected specifically for the purpose of the project’s
evaluation phase, user data are collected by the platform (see Section 2.2.2 –
Data required for Operation):

* Actual system use (log data from the platform and applications)
## 2.3 FAIR data
The GOAL project had a clear focus on the development of a market-ready
solution for stimulating healthy behavior through a gamification hub/platform.
As such, the project’s primary objective has never been to generate datasets
that are re-usable for whichever purpose. The project did _not_, at its first
stage, define any policy for:

* Making data findable, including provisions for metadata
* Making data openly accessible
* Making data interoperable
* Increasing data re-use (through clarifying licences)
However, the consortium partners agreed that making data available would
support researchers in the field and bring an indirect benefit, not least the
contribution to the research community. Therefore, the project has made
available anonymised datasets from consortium members who actively tested the
GOAL platform and applications over the course of the project. Such data are
rarely found as open data on the web, and they can be very useful to
researchers developing user models and other applications that can use them as
samples (see Section 3 below).
# 3 Data Repository
The datasets described above (section 2.3, FAIR data) are available in:
http://www.goal-h2020.eu/open-data/
These data contain a month’s worth of 15-minute records from consortium
members who have explicitly given consent to have their data published
anonymously. There are four users contributing 2,880 records each (96
fifteen-minute intervals per day over 30 days), distributed into four CSV
files, one per user.
Each record contains the following columns:
* Start Date: the timestamp of the beginning of the 15-minute interval
* Steps: the number of steps walked in the interval
* Meters Climbed: the number of meters climbed upwards in the interval
* Energy Burned: the number of MET-minutes burned in the interval
* Light Minutes, Moderate Minutes, Heavy Minutes, Very Heavy Minutes, Extreme Minutes: the number of minutes spent in the 5 intensity categories during the interval
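Records of this shape can be consumed with a few lines of standard tooling. The sketch below aggregates one user's 15-minute records into daily step totals; the exact timestamp format inside "Start Date" is an assumption, as is the sample data.

```python
import csv
import io
from collections import defaultdict

def daily_step_totals(csv_text: str) -> dict:
    """Aggregate 15-minute records into daily step totals. Column names
    follow the published files; 'Start Date' is assumed to begin with an
    ISO-style YYYY-MM-DD date."""
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        day = row["Start Date"][:10]  # keep only the date part
        totals[day] += int(row["Steps"])
    return dict(totals)

# hypothetical sample with a subset of the columns
sample = """Start Date,Steps,Meters Climbed,Energy Burned
2017-06-01 08:00,250,2,30
2017-06-01 08:15,310,0,35
2017-06-02 09:00,120,1,15
"""
print(daily_step_totals(sample))
# {'2017-06-01': 560, '2017-06-02': 120}
```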
Should these data prove useful to the community, the consortium is willing
to publish longer durations from the existing users, as well as data from more
users, provided they give us their consent.
***
**0500_OSTEOproSPINE_779340.md** (Horizon 2020)
# Data Summary
The Data Management Plan (DMP) for OSTEOproSPINE project is developed to
facilitate data flow and utilization of the data between the parties,
including third parties/public where appropriate, and ensure proper data
preservation for future use. DMP is developed in line with Guidelines on FAIR
Data Management in Horizon 2020. The purpose of the DMP is to cover the
complete research data life cycle and to describe the types of data that will
be generated/collected during the project, the standards that will be used,
how data will be preserved and what parts will be shared for verification or
use. The team is aware of the sensitivity of clinical data with respect to
personal data protection, as well as of exploitation and licensing needs, so
it will keep certain data closed, adhering to the principle of “as open as
possible, as closed as necessary”.
The purpose of clinical data management is to define the data quality
standards for Protocol GR-OG-279239-03 ensuring the interventional trial
database stores appropriately complete, accurate, and logically consistent
data, sufficient to achieve protocol objectives and accurately represent the
status of subjects. Clinical, non-clinical and industrial data will be
collected throughout the project. The focus of this document is on the
clinical data, which will be properly stored by 2KMM. Within WP3, a Clinical
DMP will be generated. The clinical data are collected in the eCRF; 2KMM will
provide Data Management Services for the Sponsor on the GoResearch™ EDC
Platform. The TMF, collected by CF, comprises all documents collected during
the trial. The
ISF comprises all documents collected before, during and after the trial. They
will be collected by MUW, KU and MUG (clinical sites). Non-clinical data
represent the data collected from _in vivo_ and _in vitro_ laboratory
experiments, mostly created by UZSM (the coordinator) and UZFVM. CMC data will
be generated by GEN.
The expected size of the eCRF clinical data is up to 10 MB when the collected
data are exported to a text format such as .csv, and around 100 MB as a
database backup file.
The collected data will be primarily used for the purpose of further
development of the IMP, regulatory reporting and new submissions, IP
protection, licencing and technology transfer but also for results
dissemination and communication to interested stakeholders, including
scientific community, patients and wide public.
# FAIR data
The FAIR Data Principles (Findable, Accessible, Interoperable, Reusable) have
received worldwide recognition as a useful framework for thinking about
sharing data in a way that will enable maximum use and reuse. The adherence to
the principles: (1) supports knowledge discovery, innovation and knowledge
integration, (2) promotes sharing and reuse of data across disciplines, (3)
supports new discoveries through the harvest and analysis of multiple
datasets. OSTEOproSPINE will follow these principles as closely as possible.
## Making data findable, including provisions for metadata
OSTEOproSPINE will collect data from human subjects involved in the Phase II
clinical trial. The clinical health information will include data from medical
and medication history; physical exam; safety lab tests; anti-rhBMP6 antibody
monitoring; quality of life, back pain and leg pain questionnaires
information, X-ray, and adverse event monitoring.
Data collection will be performed using eCRF prepared on the GoResearch™
platform. GoResearch™ is an internet EDC platform for clinical research. It is
compliant with regulatory requirements of FDA’s 21 CFR Part 11 and specific
areas of GCP regarding electronic data.
Over the course of the data collection phase, regular data management
activities will be performed to ensure that data quality standards for the
trial are met and the database stores appropriately complete, accurate, and
logically consistent data, sufficient to achieve protocol objectives and
accurately represents the status of subjects. These activities will include
System Level Data modifications, query processing and management and data
cleaning, including data reviews and self-evident corrections of obvious data
processing errors. Upon completion of data collection, the final data cleaning
will be performed, database locked, and clean study data exported for the
statistical analysis.
Anonymized external laboratory data will be provided in an .xls file.
CF will be responsible for:
* Setup of the TMF and ISFs at each site
* Auditing of the collection of clinical data by CF
* Execution of the clinical trial and gathering of clinical data according to GEN’s and CF’s SOPs for clinical trials
The clinical sites MUW, KU and MUG will collect the data that will be entered
into the eCRF according to the Study schedule described in the Study protocol.
PV support will be provided by a consortium member CF, who will appoint a PV
manager for the trial. All adverse events in all treatment groups, regardless
of suspected causal relationship to study treatment or seriousness, will be
recorded on the adverse event page(s) of the eCRF and reported in the final
Clinical trial report.
The samples for immediate assessments will be collected, handled, processed
and analysed according to the good practices, hospital SOPs and sponsor SOPs.
The data generated related to IMP management include the IMP storage,
packaging, and labelling, QP release and transport to the clinical sites
generated by the subcontractor. All the data generated and documentation
issued by the subcontractor is made available to GEN and to all consortium
members that need to use and store the data according to GMP and GCP
procedures and institutional SOPs. Following delivery of the IMP to the
clinical site the hospital pharmacy and study personnel will generate
necessary documentation related to IMP storage, accountability, dispensing,
discarding, reconciliation and return of the unused IMP.
Non-clinical data entail:
* Data from the pre-clinical studies: _in vitro_ and _in vivo_ testing
* CMC data related to IMP manufacturing
* Data on authorisations and training certificates
The partners generating pre-clinical data are GEN, UZSM and UZFVM and they
operate according to GMP and GLP principles in their facilities and comply
with the institutional SOP’s related to data management.
The raw data are stored in databases on the associated computers or are
transferred to the experimenter’s computer or lab notebook as the primary
storage record. The secondary storage record is the shared drive with folders
available to the researchers on the institutional servers. The folders are
located on shared network drives and are of appropriate size and security.
Additionally, research data are generated from the preclinical _in vitro_
assays and animal model studies, which are meticulously planned and
implemented, with data recorded and saved. Two consortium partners will be
involved in the animal experimentation: UZSM and UZFVM.
Laboratory notebooks, kept in paper form, are used to document all the
experiments performed. The lab notebooks are kept in secure places and are
archived for at least 20 years.
Data from the study reports in the form of study summaries are used to update
and generate regulatory documentation.
Novel data on rhBMP6 manufacture, quality control and quality assurance will
also be generated enabling increased industrial and technological advancements
of biotech companies involved in the project. The data will be generated by
GEN and UZSM and the approved subcontractors according to GMP procedures. The
data is generated, transferred, stored, used, shared, and archived according
to institutional SOPs and the OSTEOproSPINE Consortium Agreement and policies.
GEN will ensure that all new clinical batches undergo release by an EU
qualified person in accordance with the requirements of Article 13.3 Directive
2001/20/EC (or the new EU CT regulation) and that new batches of DS and rhBMP6
DP are tested and data obtained in accordance with the commitments made to the
regulatory agencies.
For the purpose of complying with ethics principles and ensuring the
conformity with the ethics requirements, the data on authorisations and
training certificates are continuously collected.
They consist of:
* authorisation certificates for facilities with adequate physical conditions and equipment and with carefully controlled and monitored conditions for animal husbandry;
* appropriate training certificates and/or personal licences for the staff involved in the animal experimentation, obtained from the relevant authorities and stored in paper form in the facility archive and as electronic records in shared drive folders;
* other documentation related to management, distribution of responsibilities, training and continuing education of the specialist staff involved, health monitoring, and the regular yearly inspections by the national competent authorities.
These data are kept in paper form in secure locations by the manager for the
Animal and Breeding Facility and as electronic records on personal computers
and on shared drive folders and are available to any interested party upon
request.
In order to gain regulatory and ethics approval in the country where the Phase
II clinical study OSTEOproSPINE will be conducted all of the required
regulatory documentation is prepared, shared among the partners and stored
according to GCP principles. The data from the preclinical and clinical
studies are summarized in the documents that represent the core of the CTA
which will be maintained through preparation and submission of substantial
amendments when required.
During the clinical study and in line with the newly generated data, it is
likely that additional changes to the protocol or to other elements of CTA
will need to be introduced. UZSM will evaluate all changes and decide (with
other consortium partners) if they constitute significant changes that need to
be submitted to the relevant regulatory agencies or ethics committees. If they
are, then amendment submissions to relevant authorities will be prepared.
The new EU clinical trial regulation will come into force in 2019/2020.
Consortium members will ensure all regulatory submissions are in accordance
with this once it is applicable.
## Making data openly accessible
All data planned to be collected within the OSTEOproSPINE clinical trial will
be focused on the research and will be collected, processed and stored in a
manner which protects the privacy of health information. This will be in
strict compliance with Directive 95/46/EC of the European Parliament and of
the Council of 24 October 1995 on the protection of individuals with regard to
the processing of personal data and on the free movement of such data, and the
Article 29 Working Party on the Protection of Individuals (opinion 8/2010).
On 25/05/2018 the EU GDPR (Regulation (EU) 2016/679), replacing Directive
95/46/EC on Data Protection and Privacy, came into force; the consortium
took this into account and will ensure continuous compliance.
Patients’ records obtained in this project, as well as related health records,
will remain strictly confidential at all times. However, these will need to be
made available to others working on GEN’s behalf, the IDSMB members and
Medicines Regulatory Authorities. The informed consent procedure will be
implemented according to EU Regulation No 536/2014 on GCP.
Each Informed Consent will be reviewed and approved by the Institutional
review boards at the study sites, National/Central Ethics Committee and
Regulatory Authority of the countries participating in proposed clinical
trial. By signing the ICF, subjects agree to this access for the current trial
and any further research that may be done. However, OSTEOproSPINE will take
steps to protect all personal information and will not include subject’s names
on any sponsor forms, reports, publications or in any future disclosures. All
information collected in this trial will be treated as confidential and
subject’s identification will not be revealed to outsiders, unless required by
law. Every effort is made to ensure continued privacy of study participants
and the data they have contributed to the study. All subjects will be
identified in study documents by a study-specific subject identification
number. Each research centre will keep the key code list for identification of
the study subjects if needed; only the principal investigator and the study
nurse will have access to this list. The samples collected during the study
will be coded and stored at each research centre until analysed centrally. The
data created in the study will not be included in the subjects’ medical
records.
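The coding scheme described above can be sketched in a few lines. This is an illustrative example only: the study's actual ID format and key-list handling are defined in the Clinical DMP, and every name, format and file path below is a hypothetical stand-in.

```python
import csv
import secrets

def assign_subject_id(site_code, existing_ids):
    """Generate a study-specific subject identification number that
    carries no personal information (format is illustrative only)."""
    while True:
        sid = f"{site_code}-{secrets.randbelow(10000):04d}"
        if sid not in existing_ids:
            existing_ids.add(sid)
            return sid

def record_key(path, subject_id, name, date_of_birth):
    """Append the subject-ID/identity link to the key code list.  This
    list stays at the research centre; only the principal investigator
    and the study nurse may open it."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([subject_id, name, date_of_birth])

ids = set()
sid = assign_subject_id("MUW", ids)
record_key("key_code_list.csv", sid, "Jane Doe", "1970-01-01")
# Only `sid` appears in the eCRF and the central database.
```

The essential property is the separation: the central database only ever sees the coded identifier, while the mapping back to a person exists in a single list held at the site.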
## Making data interoperable
To meet GCP requirements, collected clinical data will be managed in line with
the Clinical DMP, which will be developed in WP3, and will represent an
integral part of the overall DMP. A dedicated data management task in WP3 will
develop the necessary structure for clinical data capture and warehousing.
2KMM is an expert in the field and is responsible for this task. Data
classifications will be harmonized across the clinical sites involved to
enable further integration. Within the consortium, all partners will follow
their own ethical protocols and informed consent procedures, and patients’
identities will remain in the secured databases at each study centre.
Data management-related activities in WP7 will first focus on the data
harmonization, curation and data integration tasks. The consortium data
originating from various partners, collected with different SOPs and
standards, will need to be harmonized. For each data type, the respective
delivery sites will decide on one SOP and harmonization model. All partners
will take part in
this effort and harmonize metadata and data formats. UZSM together with Eurice
will be responsible for the DMP. Given the exploitation plans for the
OSTEOproSPINE results, it is crucial to protect the results and all data
created in the project and to keep them confidential, in order to retain the
interest of the pharmaceutical industry, which is very sensitive to
confidentiality issues, ownership exclusivity and freedom-to-operate.
Consortium members will follow the principle “as open as possible, as closed
as necessary”.
All the data that are generated, will be handled, verified, shared and stored
according to GMP, GCP and GLP procedures and will be available to the partners
authorised to use such data.
## Increase data re-use (through clarifying licences)
Data relevant for commercial use will be exploited through patenting, know-how
and potentially licensing. In terms of further use for research purposes, the
data will be used to further plan and design later phases of clinical
development and explore the therapeutic utilisation in the context of other
human pathological conditions and ultimately apply the generated knowledge in
the clinical practice for the benefit of patients and healthcare systems.
Furthermore, it is ensured that the results of all OSTEOproSPINE scientific
publications can independently be validated.
# Allocation of resources
All partners, especially the task leaders, are requested to collect and manage
the data in accordance with the DMP and other common professional practices.
The WP leaders will be responsible for the implementation of the DMP and will
monitor data management activities. The list of WPs is presented in Table 1;
data management is included in all of them.
Table 1. List of WPs
<table>
<tr>
<th>
**WP Number**
</th>
<th>
**WP Title**
</th>
<th>
**Lead beneficiary**
</th> </tr>
<tr>
<td>
WP1
</td>
<td>
Phase II clinical trial
</td>
<td>
3-MUW
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
Regulatory support
</td>
<td>
1-UZSM
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
Data Management and Biostatistic
</td>
<td>
11-2KMM
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
Investigational medicinal product supply for the clinical trial
</td>
<td>
2-GEN
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
Studies to boost OSTEOproSPINE differentiation and market potential
</td>
<td>
1-UZSM
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
Innovation management:
Communication, Dissemination and Exploitation
</td>
<td>
13-Eurice
</td> </tr>
<tr>
<td>
WP7
</td>
<td>
Project management
</td>
<td>
1-UZSM
</td> </tr>
<tr>
<td>
WP8
</td>
<td>
Ethics requirements
</td>
<td>
1-UZSM
</td> </tr> </table>
The task leader of WP3 (Clinical Data Management and Biostatistic) is 2KMM and
will be responsible for the following tasks:
* Database and users administration;
* Electronic data transfers;
* Data reviews;
* Query management;
* Database lock.
The task leader of WP7 (Project management) is UZSM with support of Eurice.
WP7 will provide a clear organizational framework and all necessary support
mechanisms to enable the smooth project workflow needed for efficient and
timely data management.
Aims of WP7:
* Provide optimal guidance and support to all partners through a quick set-up of effective management & communication structures
* Ensure transparency for consortium partners and the EC through proper project documentation
* Maximize the effectiveness of project activities: ensure the timely and qualitative achievement of project results through scientific and administrative coordination
* Ensure efficiency: use resources wisely, avoid duplication of efforts, and reduce waste of time and energy to a minimum
A dedicated data management task in WP3 will develop the necessary structure
for data warehousing. All partners will take part in this effort and harmonize
metadata and data formats. Data classifications will be harmonized in all
clinical cohorts to be able to compare studies across the sites involved.
Within
the consortium, all clinical partners will follow their own ethical protocols
and ensure informed consent signatures, and patient identity will remain in
the secured databases at each cohort centre. The consortium is very much aware
of privacy and GDPR issues and will ensure that subjects cannot be directly or
indirectly identified. Therefore, no identifiable data will be transferred
from the cohorts to the consortium and central databases, but will be replaced
by project-specific codes for data integration. This information is physically
separated from the database and - if stored in electronic form - physically
separated from the network (and internet). Access to this information is only
for authorized persons and will be password protected.

Data relevant for commercial use will be exploited through patenting, know-how
and potentially licensing. In terms of further use for research purposes, the
data will be
used to further plan and design later phases of clinical development and
explore the therapeutics in the context of other indications and ultimately
apply the generated knowledge in the clinical practice for the benefit of
patients and healthcare systems. Furthermore, it will be ensured that the
results of all OSTEOproSPINE scientific publications can independently be
validated.
# Data security
Study-related data from MUW will be stored at the site until the close-out
visit; after that, they will be stored according to MUW SOPs.
Study-related data from MUG will be stored in lockable cabinets, and the
documents will be kept for 15 years after the end of the study or as indicated
in the study protocol.
The documents as well as the images at KU are stored in digital form.
Study-related data from KU will be stored according to KU SOPs.
The current requirements for archiving study documentation are as follows:
the study documentation (ISF) is stored at the research centre, usually for 15
years after the close-out visit, so that the documentation remains available
for possible inspections after the completion of the trial. In doing so, the
internal regulations of the health institutions where the research is taking
place are taken into account, and the most stringent rule is followed.
The sponsor keeps the TMF (complete study documentation, for all research
centres) for 15 years.
2KMM is compliant with ISO 27001:2014 Information Security Management System
(certified since 2010) and ISO 9001:2015 Quality Management System (certified
since 2009). The building in which the head office of 2KMM and the data
processing area are located is protected by a security company and an
electronic alarm system.
Rooms where data processing devices are located and where the data are
processed are secured against access by outsiders. Paper documentation in the
form of files, indexes, records, etc., which constitutes database media, is
stored in locked office cabinets to which only authorized persons have access.
The server rooms are equipped with a fire safety system to prevent any fire
from spreading, as well as a fire-extinguishing system.
All critical systems are regularly tested on a schedule based on standard
procedures. The computer systems and IT services are continuously monitored.
Critical systems run as fail-over clusters, and all of them are replicated to
a backup data centre. Database backups are made on different schedules.
Offline media are kept in a safe in a protected room at 2KMM. Media used for
long-term data archiving, and their ISO disk images stored on network drives,
are checked regularly. The capacity of backup systems and other carriers
depends on the data volume. If it is necessary to increase the capacity of the
backup systems, further disk expansion enclosures or disk arrays are added.
The study database will be retained at 2KMM for 15 years after database lock.
For data security CF follows its own SOP procedures.
The pre-clinical and ethics data will be kept, stored, protected and retained
according to the institutional policies. The IT departments of UZSM and UZFVM
ensure the access, storage, maintenance, protection and retention of the
folders and the data stored. The data on the computers are protected by disc
encryption. Regular maintenance of servers and internal IT systems is
performed. The data on the internal servers will be retained as long as
required; currently there is no time limit imposed on data retention. The data
will be stored for 10 years after the end of the project, after which the need
for further storage will be reviewed.
Data are shared only internally. The clinical data are all in one database.
The data, along with study management documentation, will be transferred to
GEN in an electronic archive (zip file). The transfer media will be agreed
with GEN and should guarantee a secure handover.
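A handover of the kind described above can be made verifiable with a checksum. The sketch below is illustrative only (the directory and file names are hypothetical): it zips a folder and computes a SHA-256 digest of the archive, which sender and recipient can compare to confirm an intact transfer.

```python
import hashlib
import zipfile
from pathlib import Path

def build_archive(source_dir, archive_path):
    """Zip every file under source_dir and return the SHA-256 digest of
    the resulting archive, so sender and recipient can verify that the
    handover was intact."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(Path(source_dir).rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(source_dir))
    return hashlib.sha256(Path(archive_path).read_bytes()).hexdigest()

# Hypothetical handover: pack the study documentation and print the digest.
Path("study_docs").mkdir(exist_ok=True)
Path("study_docs/report.txt").write_text("final clinical trial report")
digest = build_archive("study_docs", "handover.zip")
print(digest)
```

Sorting the file list makes the archive contents deterministic, so the same folder always produces the same digest.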
# Ethical aspects
The rapid scientific and technological advances in the field of bone
regeneration research are contributing to the well-being and the economic
wealth of European citizens. They have, however, evoked some ethics concerns,
which include involvement of human subjects and study related physical
interventions, human cells and tissues, collection and/or processing of
personal data as well as involvement of experimental animals. Such concerns,
together with the relevant national and EU legislation or directives and the
Ethics rules of the Horizon 2020 Framework Programme Regulation, have been
considered by the OSTEOproSPINE consortium and OSTEOproSPINE will operate in
full compliance with the existing national legislation and EC directives and
rules on ethical issues that are relevant to the project.
Based on the Ethics Issues Checklist, OSTEOproSPINE has identified 4 issues
requiring ethics clarification:
* Involvement of human participants
* Involvement of physical interventions on the study participants
* Involvement of personal (health) data collection and/or processing
* Involvement of experimental animals
WP8 (Ethics requirements) sets out the 'ethics requirements' that the project
must comply with and UZSM as lead beneficiary will coordinate, implement and
report all ethical aspects relevant for project activities.
The consortium is very much aware of privacy issues and will ensure that
subjects cannot be directly or indirectly identified. Therefore, no
identifiable data will be transferred from clinical sites to the consortium
and central databases, but will be replaced by project-specific codes for data
integration. The personally identifiable information will be exclusively kept
at the clinical site, physically separated from the database and - if stored
in electronic form - physically separated from the network (and internet).
Access to this information will be limited to authorized persons only, and
will be password protected, if stored in electronic form.
Data relevant for commercial use will be exploited through patenting, know-how
and potentially licensing. In terms of further use for research purposes the
data will be used to further plan and design higher phases of clinical
development and explore the biomedical uses in the context of other human
pathological conditions and ultimately apply the generated knowledge in the
clinical practice for the benefit of patients and healthcare systems.
# Other issues
CF will follow its own DM policy in the form of SOP Information Protection
Responsibility.
Every clinical site in the OSTEOproSPINE project has SOPs and DM policies.
There is a new data protection law, and topics related to its implementation
will be continually discussed among consortium members. The new European
regulation (GDPR) has been in force since 25 May 2018 and applies to all
OSTEOproSPINE partners.
2KMM will follow its own DM policy in the form of SOP (listed in Clinical
DMP).
Academic institutions have various policies and SOPs that also include data
management. All are in compliance with GDPR.
An incidental finding is a finding concerning an individual research
participant that has potential health or reproductive importance and is
discovered in the course of conducting research but is beyond the aims of the
study. The clinical team (composed primarily of experienced orthopaedic
surgeons and radiologists, as well as research nurses) will be primarily in
charge of data collection and the interpretation of clinical findings. Other
researchers will have limited access to the data, with no ability to identify
or contact participants.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0503_RADICLE_636932.md
### Introduction
This report outlines a data dissemination and analysis exercise carried out by
LOE working with Nottingham Trent University (NTU). As part of the funding of
the RADICLE project it is intended that weld signal data generated should
become publicly available for wider learning and understanding. NTU used
internal funding to enable 3 of their staff to work with LOE on behalf of
RADICLE to investigate the limitations of the data formats and how to make it
accessible for wider sharing.
### Work outline
LOE have worked with Nottingham Trent University (NTU) to explore how the open
access data set can be analysed by parties outside the RADICLE project. At
EWF’s suggestion, TWI undertook a set of welding experiments on steel S355, as
this is a conventional material with little proprietary knowledge involved,
but of much wider interest than some of the specialised aerospace-grade
materials welded for end users.
Lecturer Dr Georgina Cosma and Research Associates Tolulope Oluwafemi and
Sadegh Mousaabadi Salesi were given a set of photo diode, acoustic, LDD and
camera data recorded during the welding. Weld quality data was provided in the
form of x-rays of the welded part and a ‘traffic light’ status of the weld.
Due to the limited time available, it was decided that rather than trying to
build a feedback process, a more limited quality prediction tool would be
considered. This allows the computational algorithm more time to analyse the
data it is fed, so that less optimisation is required. As a result, the only
outcome is a quality prediction, rather than a specific defect location. Each
of the welds had one of three statuses: good, questionable or bad.
The first observation was that the data set was very limited in size, making
it difficult to sufficiently train a machine learning algorithm. A number of
the data recordings were incomplete or had other issues, meaning that just 8
welds were suitable for use. None of those considered usable had the
questionable status. As a result, classification and prediction on the data
set can only produce two outcomes: a positive or a negative status. This means
that very high statistical prediction scores are achieved, a success which
would not be replicated with a much larger and more representative data set.
In order to be able to make any predictions about a data set it is first
necessary to ‘train’ a model and to prepare the data. Preparing the data
includes stages such as defining the start and end point to remove any ‘non-
signal’ data and aligning the different signal types. LOE provided x-ray data
produced by TWI in image form and in normalised numerical form which required
development of an image processing tool.
Training the model then requires the data set to be classified. A number of
methods are available and are detailed in the NTU reports attached. Clustering
the datasets breaks down the raw data into computer-defined significant
features. The number of these clusters varies and is derived statistically.
Using the best approach tested, it was found that 3 clusters gave the best
quality indicator for the available data. If the majority of data points are
in Cluster 1, then the quality is ‘bad’. If Clusters 2 and 3 are bigger, then
the quality is ‘good’. Due to the limited data set it has not been determined
whether more meaning could be derived from Clusters 2 and 3.
Once the clustering has been performed, the true machine learning can be
applied. Again, a number of algorithms can be used. These can be tested by
removing the quality indicator and processing one of the data sets. The
algorithm then compares it to the datasets for which it has a quality status
and looks for similarities. By matching it to the closest data set, a
predicted quality status can be generated. In testing, a number of algorithms
showed very similar and very good performance. This is misleading, as the data
set is very small and the range of statuses tested is binary. The prediction
made is therefore binary; anything other than a near-100% prediction would be
closer to guesswork than a statistical indicator.
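The matching step described above can be illustrated with a minimal nearest-neighbour sketch. The feature vectors below are invented stand-ins (shares of data points in each cluster), not the actual RADICLE signals, and the labelling follows the cluster rule stated earlier.

```python
import math

def predict_quality(query, labelled):
    """Return the status of the labelled feature vector closest to the
    query (Euclidean distance), mimicking the leave-one-out test."""
    features, status = min(labelled, key=lambda item: math.dist(query, item[0]))
    return status

# Invented cluster-occupancy features (share of points in clusters 1-3).
labelled = [
    ((0.70, 0.20, 0.10), "bad"),   # cluster 1 dominates -> 'bad'
    ((0.60, 0.30, 0.10), "bad"),
    ((0.20, 0.40, 0.40), "good"),  # clusters 2 and 3 dominate -> 'good'
    ((0.10, 0.50, 0.40), "good"),
]
print(predict_quality((0.65, 0.25, 0.10), labelled))  # -> bad
```

With only two statuses and a handful of welds, such a classifier will almost always score near 100%, which is exactly the misleading effect the text warns about.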
### Future work
To aid wider use of the data set, annotations will need to be made to explain
the data. At present the start and end points of the weld are identified
manually; for wider use there needs to be an explanation of where these occur
and how to identify them. This is complicated by the length of path travelled
while the beam is on differing from the heat-affected area: the keyhole has a
width which extends beyond the nominal point of incidence of the beam.
A major challenge that still remains is to align a higher-resolution status
with the signal data. This means converting the resolution of an x-ray,
typically 600 pixels (for a 70 mm weld), to the same resolution as the video
data (3,000 points) and photodiode data (180,000 points). To achieve this,
interpolation is required. It also requires interpretation of the x-ray to
define acceptable pores or clusters and bad areas for each defect category.
While x-rays showing the presence of low-density areas were available, the
cause of these was not available during this study.
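The alignment step amounts to resampling the coarse x-ray profile onto the denser signal grid by linear interpolation. The sketch below uses toy-sized inputs rather than the real 600-to-180,000-point case:

```python
def resample(values, new_length):
    """Linearly interpolate a 1-D profile (e.g. an x-ray density line)
    onto a denser grid so it can be aligned with the signal data."""
    if new_length == 1:
        return [values[0]]
    out = []
    step = (len(values) - 1) / (new_length - 1)
    for i in range(new_length):
        pos = i * step
        lo = int(pos)                       # index below the sample point
        hi = min(lo + 1, len(values) - 1)   # index above (clamped at the end)
        frac = pos - lo
        out.append(values[lo] * (1 - frac) + values[hi] * frac)
    return out

xray = [0.0, 1.0, 0.0]          # toy 3-point x-ray profile
print(resample(xray, 5))        # -> [0.0, 0.5, 1.0, 0.5, 0.0]
```

Interpolation only spreads the coarse values over more points; it cannot add detail the x-ray never captured, which is why interpreting defect locations still needs expert input.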
For dissemination purposes, the method of sharing the data remains a
challenge. A typical set of signal data generated by the LOE system equates to
around 100 MB per weld. The Permanova seam tracking video potentially adds
several hundred MB. In addition, the quality data, x-rays, CT scans and
surface profiles add significantly more data, especially in their raw form.
Sharing these data on physical media, or the server time and space needed to
make them available on a ‘SharePoint’ or FTP site, are costs which have not
been factored into this aim.
This study has simply attempted to find a way to predict the global quality of
a weld. Turning this into a system which can predict the cause of a change in
quality and make a process parameter change to correct it is a significantly
harder challenge. This work suggests that there is useful data in all the
signals generated which allows quality predictions to be made. This provides
evidence that the signals gathered could, in theory, allow real-time control
and analysis to be achieved.
LOE would like to thank VTT for help critiquing NTU’s work and for helping
provide feedback to direct the study.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0504_ER4STEM_665972.md
# 1 EXECUTIVE SUMMARY
<table>
<tr>
<th>
**1.1**
</th>
<th>
**ROLE/PURPOSE/OBJECTIVE OF THE DELIVERABLE**
</th> </tr> </table>
This Data Management Plan outlines how the research data collected was handled
during and after the ER4STEM project. It describes the data set, how it was
archived and preserved, and how it will be shared. It also describes how the
data was treated before being uploaded to the Zenodo repository.
<table>
<tr>
<th>
**1.2**
</th>
<th>
**RELATIONSHIP TO OTHER ER4STEM DELIVERABLES**
</th> </tr> </table>
Research data was collected in work packages WP2 (Workshops and Curricula) and
WP3 (Conferences and Competitions). The evaluation process is described in
D6.1 (Evaluation Pre-Kit) and refined in D6.2. Deliverables D6.3, D6.4 and
D6.5 report the evaluation results with the data collected every year. The
dissemination deliverables D8.2, D8.3 and D8.4, especially the report on
scientific dissemination, are also closely linked to the data management plan.
<table>
<tr>
<th>
**1.3**
</th>
<th>
**STRUCTURE OF THE DOCUMENT**
</th> </tr> </table>
The Introduction states the purpose of the document and explains the type of
information it contains. In chapter 3, the data collected during the project
is described in detail. Chapter 4 deals with standards and metadata. Chapter 5
elaborates on which data was shared, with whom and how, and chapter 6 gives an
overview of how the data was archived and stored during the project and
afterwards. Chapter 7 describes how the data was treated and structured before
being uploaded to the Zenodo repository.
<table>
<tr>
<th>
**2**
</th>
<th>
**INTRODUCTION**
</th> </tr> </table>
The ER4STEM Data Management Plan (DMP) outlines how the research data
collected during the project was handled during and after the project.
It is structured as suggested in [1] and describes:
* the data set
* standards and metadata
* data sharing
* archiving and preservation
Throughout these sections, reference is made to data protection, ethics, the
evaluation (WP6), publications and other forms of dissemination (WP7) as well
as to the two main activities of data collection (WP2 and WP3).
The DMP was not a fixed document; it evolved and gained more precision and
substance during the lifespan of the project. The first version of the DMP was
delivered in project month 6 (M6). It was updated at month 35 (M35), before
the final project review, to fine-tune it to the data generated and the uses
identified by the project consortium. The data management was discussed at
milestone MS2 in project month M4, where important decisions were taken for
the first version of the DMP document. The DMP was discussed during each
milestone review. Nevertheless, the biggest modification was made after the
final milestone M35, when the treatment to be applied to the data before
uploading it to the Zenodo repository was introduced.
<table>
<tr>
<th>
**3**
</th>
<th>
**DATA SET**
</th> </tr> </table>
During the project, research data was collected during the workshops and
conferences. As part of work packages 2, 3 and 6 (WP2, WP3 & WP6), data was
collected by partners from multiple sources at multiple sites. This data is
quantitative and qualitative in nature and is analysed from different
perspectives for project development and scientific evaluation, with results
published in scientific conferences and journals.
Data was only collected following informed consent and, in the case of minors,
that of their parent or guardian.
<table>
<tr>
<th>
**3.1**
</th>
<th>
**DATA SET REFERENCE AND NAME**
</th> </tr> </table>
In this document, data regarding the workshops conducted during the project by
each of the partners will be referred to as the **workshops data set** and
will include data on over 4000 children by the end of the project. A similar
approach will be followed for the conference data, but with smaller numbers.
This data set will be referred to as the **conferences data set**. Collected
data is anonymized by using participant numbers (a randomly assigned number
with partner code and project year). The participant key, which connects
participant information to participant numbers, is the only document that
contains personally sensitive material (name of the participant, age, parent
or school name, and contact information) and will not be shared outside of the
partner organisation, or with people in the partner organisation who do not
require direct access to this information. The participant key will be stored
securely according to data protection laws and will not be removed from the
partner organisations.
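The participant-number scheme described above (a random number combined with partner code and project year) might be generated as follows; the exact format is an assumption for illustration, not the project's actual convention.

```python
import secrets

def participant_number(partner_code, project_year, used):
    """Randomly assigned participant number combining partner code and
    project year (format is a hypothetical example).  The link to the
    real identity lives only in the participant key held securely at
    the partner organisation."""
    while True:
        number = f"{partner_code}-Y{project_year}-{secrets.randbelow(100000):05d}"
        if number not in used:   # guarantee uniqueness within the partner
            used.add(number)
            return number

used = set()
first = participant_number("TUW", 2, used)
second = participant_number("TUW", 2, used)
```

The uniqueness check matters because the numbers are random: two participants must never share a number, or the key could not resolve them.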
## 3.1.1 WORKSHOPS DATA SET
Since the data comes from multiple sources, the workshops data set had its own
folder structure, with the following documents collected: workshop
information, pre-questionnaire, post-questionnaire, observations, interviews,
artefacts of learning, tutor reflections, and encrypted sensitive data such as
videos and audio files. These files will be named using the following
convention:
* For documents and templates created by the partner responsible for evaluation, Cardiff University, the data will be named after the organisation conducting the workshop or conference, a six-digit date of the workshop or conference, and the original file name. For example,
PRIA_160416_ObservationSchedule.doc
* If multiple documents from the same organisation on the same date exist, identifiers will be added as appropriate to the data, e.g. TutorName or GroupName. For example,
TUW_250616_Lara_TutorReflection.doc or TUW_250616_Julian_TutorReflection.doc
* If no Cardiff University template exists (typically for artefacts, audio and video), the name will state the organisation, the six-digit date, then the group name and data type. For example, AL_040216_RobotAddicts_AudioInterview
* The files will be stored in a folder structure as in Figure 1. The detailed process of planning and conducting the workshops is part of work package 2, their evaluation is part of work package 6 and already described in D6.1.
**Figure 1: Workshop data folder structure**
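The naming convention above can be sketched as a small helper; the function name and the day-month-year reading of the six-digit date are assumptions based on the examples given:

```python
from datetime import date

def workshop_filename(org: str, when: date, doc_type: str,
                      identifier: str = "", ext: str = "doc") -> str:
    """Compose a file name following the convention above:
    Organisation_DDMMYY[_Identifier]_DocumentType.ext.
    The six-digit date is assumed to be day-month-year, as suggested
    by the examples (e.g. PRIA_160416_ObservationSchedule.doc)."""
    stamp = when.strftime("%d%m%y")
    parts = [org, stamp]
    if identifier:  # optional disambiguator, e.g. tutor or group name
        parts.append(identifier)
    parts.append(doc_type)
    return "_".join(parts) + "." + ext
```

For instance, `workshop_filename("TUW", date(2016, 6, 25), "TutorReflection", identifier="Lara")` reproduces the second example above.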
## 3.1.2 CONFERENCES DATA SET
The conferences data set is a compact and slightly adapted form of the
workshops data set, with fewer children, and follows the same naming
conventions.
At ECER 2016, roughly 300 students were expected to be present, many of whom
had participated in preparatory workshops and thus had already contributed to
the workshops data set.
## 3.2 DATA SET DESCRIPTION
## 3.2.1 WORKSHOPS DATA SET
A detailed description of the evaluation method and the rationale behind it as
well as detailed information on the collected data is provided in D6.1. In
this document, the data set will be described as a summary.
The workshops data set includes the following documents and information:
* Workshop Session Information (.doc)
* Partner name
* Dates (to-from)
* Number of sessions
* Location
* Lead by
* Other tutors/mentors
* Age of students
* Total number of students
* Male/Female numbers
* Group sizes
* Total number of groups
* How were the groups formed? Why?
* Robotics kit
* Programming languages
* Domain
* Aims of workshop
* Please include all relevant lesson materials (e.g. activity plan, each session/lesson plan, handouts, etc.) in the folder with this document
* Draw a scientist (writings translated into English, anonymised and digitalised .pdf)
* Filled by the participants
* To answer the question “are popular gender stereotypes about STEM held?”
* Questionnaires (anonymised and online or .xls)
* Filled by the participants
* Pre- and post-workshop questionnaires are used to collect largely quantitative data
* Questions are split into personal information (age, gender and school), past experience and existing attitudes to STEM subjects and careers
* The post-workshop questionnaire also includes questions about the activities to help understand learners’ experiences of the workshop as a whole, what participants feel they have learned and what their future intentions are
* Paper questionnaire answers to be entered into the online system
* Free-text responses translated into English and entered into excel files
* Observations (translated into English, anonymised and digitalised .xls)
* Observation protocol filled by the workshop facilitators or other observers
* Video observation where possible to verify and expand upon observation notes, as well as sensitising data analysts to the context.
* Interviews (anonymised, transcribed and translated into English, .xls)
* With focus groups, audio-recorded
* Conducted to understand the experience of participants and their reasons for particular actions
* Artefacts of learning (translated into English where applicable, anonymised and digitalised .pdf)
* Created by the participants
* Identified as group work
* Participant reflections (translated into English, anonymised if applicable and digitalised if applicable,
.xls)
* Created individually and as a team
* Either as a blog which acts as a reflection tool and a living artefact of the learning process, or as a guided reflection document
* Tutor reflections (translated into English, anonymised and digitalised, .xls)
* Done by each of the tutors, mentors or workshop facilitators
* The purpose is two-fold: 1) To document changes to workshop plans and the reasons for these; and 2) to document the evolution of activity plans between workshops.
* Sensitive data (audio and video recordings)
* Audio recordings of interviews and any video recordings are encrypted and stored by the partner organisation and only encrypted video files are shared with evaluation partner Cardiff University for the purpose of analysis and archiving.
The data was collected during the workshops by partner organisations, mostly
from the workshop participants: children aged 7 to 18. Over 4000 participants
in five European countries were planned for the whole duration of the project.
The first year is regarded as a pilot year with approximately 1000 students
participating in the pilot evaluation. Collected data was used to improve
processes regarding the workshops as well as their evaluation. The data also
informed the development of the framework. It was planned that the results of
the data analysis would be used in scientific publications, along with
illustrative, fully anonymised extracts from the data set. Parts of the data
set will be made available via open access (details in section 5).
## 3.2.2 CONFERENCES DATA SET
The conferences data set is a compact form of the workshops data set with
fewer participants (in ECER2016 roughly 180 students participated). Collected
data was used to improve processes regarding the conferences as well as their
evaluation. The data will also inform the development of the framework. It is
planned that the data will be used in scientific publications and parts of it
made available via open access (details in section 5).
* Conference Session Information (.doc)
* Partner name
* Dates (to-from)
* Number of sessions
* Location
* Lead by
* Other tutors/mentors
* Age of students
* Total number of students
* Male/Female numbers
* Group sizes
* Total number of groups
* How were the groups formed? Why?
* Robotics kit
* Programming languages
* Domain
* Aims of conference
* Please include all relevant materials (e.g. activity plan, each session/lesson plan, handouts, etc.) in the folder with this document
* Questionnaires (anonymised and online or .xls)
* Filled by the participants
* Used to collect largely quantitative data
* Questions are split into personal information, existing attitudes and conference experience.
* Paper questionnaire answers to be entered into the online system
* Free-text responses translated into English and entered into excel files
* Observations (translated into English, anonymised and digitalised .xls)
* Observation protocol filled by the conference facilitators or other observers
* Video observation where possible to verify and expand upon observation notes, as well as sensitising data analysts to the context.
* Interviews (anonymised, transcribed and translated into English, .xls)
* With focus groups, audio-recorded
* Conducted to understand the experience of participants and their reasons for particular actions
* Artefacts of learning (translated into English where applicable, anonymised and digitalised .pdf)
* Created by the participants
* Identified as group work
* Participant reflections (translated into English, anonymised if applicable and digitalised if applicable,
.xls)
* Created individually and as a team
* Either as a blog which acts as a reflection tool and a living artefact of the learning process, or as a guided reflection document
* Tutor reflections (translated into English, anonymised and digitalised, .xls)
* Done by each of the tutors, mentors or workshop facilitators
* The purpose is two-fold: 1) To document changes to workshop plans and the reasons for these; and 2) to document the evolution of activity plans between workshops.
* Sensitive data (audio and video recordings)
* Audio recordings of interviews and any video recordings are encrypted and stored by the partner organisation and only encrypted video files are shared with evaluation partner Cardiff University for the purpose of analysis and archiving.
It was planned that the results of the data analysis would be used in
scientific publications, along with illustrative, fully anonymised extracts
from the data set. Parts of the data set will be made available via open
access (details in section 5).
# 4 STANDARDS AND METADATA
The ER4STEM project followed the ethical standards of the Cardiff School of
Social Sciences and has ethical approval from the School of Social Sciences
Research Ethics Committee. Besides following a rigorous evaluation protocol
that included informed consent of children participating in ER4STEM activities
as well as their legal guardians, the project complies with national and EU
legislation on Data Protection, particularly the European Data Protection
Directive (95/46/EC). All data collected in the ER4STEM project was
anonymised before research analysis, and any data that might make personal
identification possible was protected with adequate measures. For details,
please see section 6.
The main purpose of the data collection was the evaluation of the impact of
the framework tools and activities on young people. The findings are available
via the project deliverables and scientific publications. Workshops are an
important part of the data collection, therefore metadata needed for the
workshops was defined through the cooperation of work packages 2 and 6 (see
section 4.1.1). This metadata, or parts of it, can be used as search
parameters in an open access research repository that provides access to
anonymised and processed research data (details of data sharing are in
section 5). The metadata can also be made available in an appropriate form in
the ER4STEM repository (work package 5).
## 4.1.1 WORKSHOPS METADATA
The workshops metadata includes the following information for each session:
* Partner name
* Dates (to-from)
* Number of sessions
* Location
* Lead by
* Other tutors/mentors
* Age of students
* Total number of students
* Male/Female numbers
* Group sizes
* Total number of groups
* How the groups were formed
* Robotics kit
* Programming languages
* Domain
* Aims of workshop
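A session's metadata maps naturally onto a flat, machine-readable record that could serve as search parameters in a repository; the field names and example values below are illustrative, not a fixed project schema:

```python
import json

# Hypothetical metadata record for one workshop session; keys are
# paraphrased from the list above, values are invented examples.
workshop_metadata = {
    "partner_name": "TUW",
    "dates": {"from": "2016-06-20", "to": "2016-06-25"},
    "number_of_sessions": 5,
    "location": "Vienna",
    "lead_by": "Tutor A",
    "other_tutors": ["Tutor B"],
    "age_of_students": "10-12",
    "total_students": 24,
    "male_female": {"male": 13, "female": 11},
    "group_sizes": 4,
    "total_groups": 6,
    "group_formation": "self-selected",
    "robotics_kit": "LEGO Mindstorms",
    "programming_languages": ["NXT-G"],
    "domain": "environmental sensing",
    "aims": "introduce sensors and loops",
}

print(json.dumps(workshop_metadata, indent=2))
```

Serialising such records as JSON would keep them searchable and easy to attach to a repository entry.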
## 4.1.2 CONFERENCES METADATA
The conference metadata includes the following information:
* Partner name
* Dates (to-from)
* Number of sessions
* Location
* Lead by
* Other tutors/mentors
* Age of students
* Total number of students
* Male/Female numbers
* Group sizes
* Total number of groups
* How the groups were formed
* Robotics kit
* Programming languages
* Domain
* Aims of conference
# 5 DATA SHARING
All collected and anonymised data from the workshops and conferences as
outlined in the sections before, were disseminated in one form or another. So
far, these datasets do not include any information that the consortium
considers worth protection for exploitation. All collected data were used for
scientific evaluation and findings were published via scientific channels.
Open access to these publications is available in the repository. It is
important to highlight that these files are the camera-ready versions, not
the files available on the publisher's website. However, not all of the raw data
can be made accessible to everyone for ethical reasons. Figure 2 outlines the
decisions made by the consortium on how to handle data sharing at this point
in time. In the following sections the decisions will be explained in further
detail.
**Figure 2: Data sharing plan.** (The figure shows research results and data
flowing either to a decision to disseminate, via publications in deliverables,
conference proceedings and journals, and via a research data open access
repository with restricted access and restricted data, or to a decision to
exploit.)
## 5.1.1 RESEARCH RESULTS AND DATA
As a European project under Horizon 2020, the ER4STEM project consortium has
declared its willingness to make all knowledge generated from the project
publicly available and to provide open access to its scientific publications
and research data [2]. Therefore, all deliverables of the project are open to
the public and accessible via the project’s web page **er4stem.com** . The
data sharing decisions were taken regarding these data sets only and will need
to be revised when other data sets are added.
## 5.1.2 DECISION TO EXPLOIT
The consortium agreed that the workshops data set does not include any
information that should be protected for exploitation reasons. However, the
project created an educational robotics repository (repository.er4stem.com),
which will be sustainable after the project. Therefore, some data or knowledge
generated or collected in the project might be identified as a unique selling
point worthwhile of protection.
## 5.1.3 DECISION TO DISSEMINATE
The consortium decided to disseminate findings from the research data in
scientific publications. The consortium also decided to use other non-
scientific means to promote the project and its tools as well as the
scientific findings. Scientix has already been proven to be a very competent
collaboration partner in reaching one of the main stakeholders – STEM teachers
– all over Europe.
The decision about dissemination channels was also affected by the cost factor
(such as open access for scientific publications) and, although some budget
was foreseen for open access publications, the consortium will prefer routes
that minimize costs in order to make the research and knowledge generated by
the project as diversely public as possible.
### 5.1.3.1 DELIVERABLES
Findings from the project research data were published in deliverables D6.3,
D6.4 and D6.5.
### 5.1.3.2 CONFERENCE PRESENTATIONS AND PROCEEDINGS
Conference proceedings and books are very expensive to publish open access.
Nevertheless, conference camera-ready versions can be shared under certain
conditions. For example, IEEE and ACM allow authors to share camera-ready
versions freely, while Springer only allows authors to do so after 12 months.
Therefore, camera-ready versions of the articles written in the project are
available in the repository.
### 5.1.3.3 JOURNALS
Journal publications are open access and publicly available, linked via the
repository. It is also a common route to publish first findings at
conferences, and then enhance them, together with further findings, in a
journal paper which can then be open access. It was decided that the final
scientific results to be published in a journal paper could be gold open
access. Other journal papers will use green open access for financial reasons
(minimised cost, maximised dissemination).
## 5.1.4 RESEARCH DATA OPEN ACCESS REPOSITORY
The consortium committed to making all research data openly available.
However, the consortium also decided to restrict both the shared data and the
persons having access to it. In the following subsections the restrictions
and the rationale behind them are explained.
### 5.1.4.1 RESTRICTED DATA
The research data collected during workshops and conferences were anonymised
so that participants cannot be identified from the data. However, there is
always the potential that individuals can be identified in audio, video and
still images, even though they have been anonymised. Sharing this data with
third parties would infringe data and child protection laws of the consortium
countries. The consortium is not equipped with the competencies and time to
take measures against this kind of identification, and even if it did so, it
cannot guarantee that others could not apply countermeasures once in the
possession of these materials. Thus the consortium decided not to share with
third parties audio, video or still images which include any participant.
Even within the consortium, only the evaluation lead partner and partners who
originally collected the data will have access to this data, which has been
stored as described in section 6 of this document. The transcribed interviews
and observation protocols may be made available via open access but only if
they contain no identifying information.
The consortium decided that all research data needs to be “cleaned” and
processed (for example, school names or other identifying information needs
to be removed) and brought into a form that is useful for other researchers
to validate or replicate research results, and that fits the open access
repository metadata and search options. There needs to be a compromise between
the chosen repository and its data formats and the research data processed for
the repository. The consortium concluded that Excel and Word formats are the
best formats for an open access repository.
As part of the process of gaining informed consent to collect data from
participants, they must be informed of the storage, protection and use of the
data. If the data is made accessible to others, the consent is not fully
informed, as the consortium has limited means to identify for what purposes
the data will be used and by whom. Therefore, the open access pilot is
explained to participants and they are given the opportunity to opt out of
open access; data from participants who have opted out of the open access
pilot will not be made accessible. In addition, the consortium will limit the
access to data to researchers who confirm their compliance with the same
ethical obligations stated in the informed consent process.
### 5.1.4.2 RESTRICTED ACCESS
In order to ensure that the research data is used by third parties only as
explained to the participants during the informed consent process, access to
the data needs to be restricted. The consortium needs to know who is accessing
the database and for what purpose. Criteria for access will include membership
of a research institution based in Europe; applicants must also submit a plan
outlining how they will use the data (research questions and analysis
approach), which will be reviewed before any decision to grant access.
The time frame of access is also part of the restriction. The research data
cannot be made accessible before scientific publication. It therefore needs to
be decided how long after publication it could be useful for other researchers
to have access to the research data; this could range from one year post-
publication to five years after the project.
Each person interested in having access to any data set must make a request to
the project team, stating the reason for access to the data and how it will be
used. The following are the conditions of use:
* The project team must be informed of intended publications arising from analysis of the data.
* Any publication using the data set must acknowledge the authors and the ER4STEM project.
* The dataset must not be used for commercial purposes and cannot be shared with others without express written consent from the project team.
The internal procedure for processing a new request is as follows: whoever
receives the request for access must inform the partner leads, who are Markus
Vincze (TU Wien), Wilfried Lepuschitz (PRIA), Ivo Gueorguiev (ESICEE), Angele
Giuliano (AL), Chronis Kynigos (UoA) and Carina Girvan (CU). This should
include information about the person and their stated reasons for wanting
access to the data. The partner who collected the data can veto any access
request; one calendar month is allowed for this. If there are no causes for
concern, access is granted.
# 6 ARCHIVING AND PRESERVATION
All research data will be stored until 2023. Partners will also need to
archive personal data about the participants (the participant key) in a
separate location, so that participants are able to exercise their right to
withdraw from the research at any time. The project partners will comply with
national and EU legislation on Data Protection, particularly the European Data
Protection Directive (95/46/EC).
Each project partner will store the research data that is collected by that
partner anonymously on a password-protected server. Videos and audio files
(and any files in which participants can be recognised) will only be stored in an
encrypted drive and shared encrypted. Cardiff University will store all the
data in the same way for all partners to ensure archiving in two separate
locations. TU Wien will save Cardiff University research data for the same
reason. The consortium has agreed to use the software VeraCrypt for
encryption. For the needs of ER4STEM, VeraCrypt provides sufficient security:
it is open source, can be used on different systems (Windows, macOS, Linux),
originates in Europe, is maintained by a French company, and is free of
charge.
Software: _https://veracrypt.codeplex.com/_
Tutorial:
_https://veracrypt.codeplex.com/wikipage?title=Beginner%27s%20Tutorial_
The website er4stem.com, which stores all publications of the project for
public access, and the ER4STEM repository, which will store different tools
and plans developed in the project, will be available at least five years
after the project. Details about the research data open access repository can
be found in section 5.1.4.
# 7 UPLOADING DATA SETS IN ZENODO
Each partner will upload all the data sets created during the project to the
repository Zenodo ( _https://zenodo.org/_ ). This repository was selected
because it is located in Europe and it complies with all the requirements of
Cardiff University, the institution with the strictest and most specific
requirements among all partners. The following sections provide important
information that must be considered by each partner when uploading the data
sets. Guidance is provided in Annex A: Guidance for Open Access Data Sets.
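Zenodo exposes a REST deposition API whose metadata fields (`upload_type`, `creators`, `keywords`, `access_right`, `access_conditions`) align with the guidance in section 7.3. The helper below only builds the metadata payload a partner might submit; the function name, title wording and example values are assumptions, and no network call is made:

```python
def zenodo_deposition_metadata(location, extra_authors):
    """Build a Zenodo deposit metadata payload using Zenodo's real
    field names; values follow the guidance in section 7.3.
    `extra_authors` is a list of {"name", "affiliation"} dicts."""
    creators = [{"name": a["name"], "affiliation": a["affiliation"]}
                for a in extra_authors]
    return {
        "metadata": {
            "upload_type": "dataset",
            # Title wording is an assumption, not prescribed by the plan.
            "title": f"ER4STEM workshops dataset ({location})",
            "description": (
                "This dataset was collected as part of the Educational "
                "Robotics for STEM (ER4STEM) project, funded by the European "
                "Commission's Horizon 2020 programme, grant agreement No. "
                f"665792. Data collected in {location}; only anonymised data "
                "with informed consent for open access is included."
            ),
            "creators": creators,
            "keywords": ["Educational robotics", "ER4STEM", "K-12", "STEM"],
            "access_right": "restricted",
            "access_conditions": (
                "Access requests must be made to the project team, stating "
                "the reason for access to the data and how it will be used."
            ),
        }
    }
```

The payload would then be POSTed to the Zenodo depositions endpoint with a personal access token before uploading the files themselves.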
## 7.1 FOLDER STRUCTURE
The following is the structure of each of the data sets:

* Country + Partner initials
  * Year X
    * Workshop + 3-digit code
      * Artefacts of learning (use sub-folders as appropriate)
      * Draw-a-scientist
      * Interviews
      * Observation notes
      * Questionnaires
      * Reflections
        * Tutor
        * Student
At the Country + Partner level, a **README.txt** file is included with the
following information:

* Language used to ask questions
* A note stating that answers were translated from that language into English

Names of schools, tutors and teachers must be **REMOVED** from any file, to
ensure anonymity.
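The folder structure and README above can be created programmatically; the sketch below takes the folder names from the list, while the function name and README wording are assumptions:

```python
from pathlib import Path

# Sub-folders per workshop, taken from the structure in section 7.1.
SUBFOLDERS = [
    "Artefacts of learning",
    "Draw-a-scientist",
    "Interviews",
    "Observation notes",
    "Questionnaires",
    "Reflections/Tutor",
    "Reflections/Student",
]

def create_workshop_tree(root, country_partner, year, workshop_code,
                         question_language):
    """Create the data set folder tree and the README.txt required at
    the Country + Partner level. Returns the workshop folder path."""
    base = Path(root) / country_partner
    workshop = base / f"Year {year}" / f"Workshop {workshop_code:03d}"
    for sub in SUBFOLDERS:
        (workshop / sub).mkdir(parents=True, exist_ok=True)
    (base / "README.txt").write_text(
        f"Language used to ask questions: {question_language}\n"
        f"Answers were translated from {question_language} to English.\n"
    )
    return workshop
```

A call such as `create_workshop_tree("datasets", "AT_PRIA", 1, 1, "German")` would produce the `AT_PRIA/Year 1/Workshop 001/...` tree with its README.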
## 7.2 CLEANING DATA
Each partner has to apply the following steps to all the data they generated:
Step 1: Remove any participant for whom there is no consent to make their data
open access.
Step 2: Anonymise all data
* No names of students, teachers, tutors or schools in ANY data (replace with student ID as appropriate)
* No images of children, adults or school names (blurred or not these must NOT be included)
* No audio or video recordings
* No sensitive data
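The two cleaning steps can be sketched as a filter over tabular records; the consent flag and field names below are illustrative assumptions, not part of the project's actual data format:

```python
def clean_records(records, id_map):
    """Apply the cleaning steps above: (1) drop participants without
    open access consent, (2) replace names with student IDs and strip
    sensitive fields. `records` is a list of dicts; `id_map` maps
    real names to anonymous student IDs."""
    sensitive = {"name", "teacher", "tutor", "school",
                 "image", "audio", "video"}
    cleaned = []
    for rec in records:
        if not rec.get("open_access_consent", False):   # step 1
            continue
        out = {k: v for k, v in rec.items() if k not in sensitive}
        out["student_id"] = id_map[rec["name"]]          # step 2
        cleaned.append(out)
    return cleaned
```

In practice this pass would run over the questionnaire and reflection spreadsheets before anything is uploaded to Zenodo.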
## 7.3 ADDITIONAL CONSIDERATIONS
The following are additional considerations that each partner must take into
account when uploading any data set to Zenodo:
* **Authors** : List all members of your team who did data collection and then add the following:
  * George Sharkov; European Software Institute – Center Eastern Europe
  * Wilfried Lepuschitz; Practical Robotics Institute Austria
  * Angele Giuliano; AcrossLimits
  * Chronis Kynigos; Kapodistrian University of Athens
  * Carina Girvan; Cardiff University
  * Markus Vincze; Technical University Wien
* **Description** : “This dataset was collected as part of the Educational Robotics for STEM (ER4STEM) project, funded by the European Commission’s Horizon 2020 programme, grant agreement No. 665792. The dataset includes quantitative and qualitative data collected over 3 years of robotics workshops held in XXX (replace with the correct text). Only data with informed consent to be shared via an open access repository is included. Only anonymised data is included and some data is excluded to protect vulnerable participants.”
* **Keywords** : Educational robotics; ER4STEM; K-12; Science; Technology; Engineering; Mathematics; STEM; Quantitative; Qualitative
* **Licence** : Select “Restricted Access”. Add the following text to the conditions: “Access to this dataset is restricted. Access requests must be made to the project team, stating the reason for access to the data and how it will be used. Conditions for use: The project team must be informed of intended publications arising from analysis of the data. The dataset must be used in any publications with acknowledgement given to the authors and ER4STEM project. The dataset must not be used for commercial purposes and cannot be shared with others without express written consent from the project team.”
* **Funding** : 665792 OR ER4STEM
# 8 CONCLUSION / OUTLOOK
The ER4STEM DMP Version 2.0 outlines how the research data collected during
the project was handled during and after the project. The document was
reviewed at each milestone meeting and adapted as the project progressed.
It also provides instructions on how to upload the data sets to Zenodo,
the repository selected to store the data created in the project.
# 9 GLOSSARY / ABBREVIATIONS
<table>
<tr>
<th>
EC
</th>
<th>
European Commission
</th> </tr>
<tr>
<td>
ER4STEM
</td>
<td>
Educational Robotics for STEM
</td> </tr>
<tr>
<td>
DMP
</td>
<td>
Data Management Plan
</td> </tr>
<tr>
<td>
REA
</td>
<td>
Research Executive Agency
</td> </tr>
<tr>
<td>
STEM
</td>
<td>
Science, Technology, Engineering, and Mathematics
</td> </tr>
</table>
# 10 BIBLIOGRAPHY
1. European Commission. Guidelines on Data Management in Horizon 2020. Version 2.0. 30 October 2015.
2. European Commission. Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020. Version 2.0. 30 October 2015.
# 2. Research data
RADICLE’s DMP aims to provide an analysis of certain elements of the data
management policy that will be used by the Consortium with regard to the
project research data.
The DMP covers the complete research data life cycle. It will describe the
selected types of research data that will be collected during the project, the
data standards that will be used, how the research data will be preserved and
what parts of the datasets will be shared for verification or reuse. It also
reflects the current state of the Consortium agreements on data management and
must be consistent with exploitation and IPR requirements.
The DMP deals with how the project participants will manage the research data
generated and/or collected during the project. As agreed by the RADICLE
partners, the type of data that will be generated will relate specifically to:
* Characterisation of welding joints;
* Sensor outputs and how they relate to detection of defects;
* Data collection and manipulation;
* Specific knowledge relating to the particular end-user samples.
All data will be stored in-line with the requirements of the Data Protection
Directive (95/46/EC) and the European General Data Protection Regulation that
will supersede this. The data will be curated by individual partners overseen
by the Project Coordinator.
Data created during the project development is being held on secure servers
either at local or CLOUD level (or both) depending on partner preference.
Access will be provided to all non-confidential results through the gold open
access procedures. Green archiving procedures will be used for confidential
information that is commercially or technologically sensitive – with eventual
access to material that is protected or otherwise becomes declassified.
All aspects of the data will be covered by the Consortium Agreement. The
appropriate structure of the consortium to support exploitation is addressed
in section 2.3.2. The consortium is working as a part of the Pilot on Open
Research Data in Horizon 2020 on a voluntary basis.
## 2.1. Data Identification
The Data Identification consists of a Data set reference and a Data set name.
## 2.2. Data Set Description
The Data Set Description includes: Data Description, Type
(Collected/Processed/Generated), Origin (if Collected/Processed), Format,
Nature, Scale, Useful to Whom, Does it Underpin a Scientific Publication,
Information on Existing Similar Data, Possibility for Integration and Reuse,
Storage and Backup.
## 2.3. Data Standards and Metadata
Standards used or, if these do not exist, an outline on how and what metadata
will be created.
## 2.4. Data Sharing
Steps to Protect Privacy, Security, Confidentiality, IPR, How the Data will be
Shared, Access Procedures, Who controls It, Embargo Periods, Outlines of
Technical Dissemination, Software and Tools to Enable Re-Use, Widely Open
Access or Restricted to Specific Groups, Repository Where Data will be Stored,
Type of Repository
(institutional, standard repository for the discipline, etc.
In the case Dataset cannot be shared, the reasons (ethical, rules of personal
data, intellectual property, commercial, privacy-related, security-related)
will be described.
## 2.5. Archiving and Preservation (including storage and backup)
The Archiving and Preservation section must describe the Procedures for
Long-Term Preservation, How Long the Data should be Preserved, the Approximate
End Volume, Associated Costs and How these are Planned to be Covered.
In addition to the project database, relevant datasets will be also stored in
_ZENODO_ , which is the open access repository of the Open Access
Infrastructure for Research in Europe, OpenAIRE.
ZENODO was built and developed by researchers, to ensure that everyone can
join in Open Science.
The OpenAIRE project, in the vanguard of the open access and open data
movements in Europe was commissioned by the EC to support their nascent Open
Data policy by providing a catch-all repository for EC funded research. CERN,
an OpenAIRE partner and pioneer in open source, open access and open data,
provided this capability and Zenodo was launched in May 2013.
# 3. RADICLE Datasets
Based on the type of data generated during the development of the technical
work of the RADICLE project, the consortium has identified the Datasets that
will be shared with other researchers, with an Open Access policy. Other types
of data that are produced during the project, for instance relating to the
end-user samples, are not subject to release to the public, due to IPR
restrictions.
Table 1 lists the datasets identified for each Work Package within the RADICLE
project.
**Table 1 – RADICLE Datasets**
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset**
</th>
<th>
**Main responsible for data**
</th>
<th>
**Related WP(s)**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Acoustic monitoring sensor data
</td>
<td>
TWI, LOE
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
2
</td>
<td>
S355 test data
</td>
<td>
TWI, LOE
</td>
<td>
WP2, WP3
</td> </tr>
<tr>
<td>
3
</td>
<td>
Validation trials data for S355
</td>
<td>
MTC
</td>
<td>
WP6
</td> </tr> </table>
As the research on this data is still ongoing, the information provided in
this report is subject to update until the end of the project, and will be
presented with further detail in D7.20 – Final Data Management Plan.
# 4. Dataset #1 Acoustic monitoring sensor data
The initial characterisation of Dataset number 1, the acoustic monitoring
sensor data, is presented in Table 2:
**Table 2 - Acoustic monitoring sensor data**
<table>
<tr>
<th>
**Dataset Characterisation**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Dataset reference and name**
</td>
<td>
RADICLE_Acoustic_Sensor_Data
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The RADICLE_Acoustic_Sensor_Data dataset consists of the data generated by one
of the methods used to inspect the laser welding process.
Data is created by acoustic monitoring sensors, which record high-frequency
tones derived from the interactions of the laser beam with the molten metal.
Non-contact acoustic monitoring is relatively cheap to implement (and thus may
be interesting for further research applications) and so continues to be
monitored during the project.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
To be decided when data is moved to ZENODO.
</td> </tr>
<tr>
<td>
**Data storing**
</td>
<td>
Currently the dataset
RADICLE_Acoustic_Sensor_Data is being stored on a secure hard drive, under the
responsibility of WP3 leader, LOE.
</td> </tr>
<tr>
<td>
**Archiving, preservation and sharing**
</td>
<td>
Open access RADICLE data will be designed to remain operational for 5 years
after project end. By the end of the project, the final dataset will be
transferred to the ZENODO repository, which ensures sustainable archiving of
the final research data.
Items deposited in ZENODO will be retained for the lifetime of the repository,
which is currently the lifetime of the host laboratory CERN and has an
experimental programme defined for at least the next 20 years. Data files and
metadata are backed up on a nightly basis, as well as replicated in multiple
copies in the online system.
</td> </tr> </table>
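The planned transfer to ZENODO can be scripted against its REST deposition API (a `POST` to `/api/deposit/depositions` with a JSON `metadata` body). A minimal sketch in Python follows; the title, creator names and keywords below are illustrative placeholders taken from this document, not agreed deposition metadata:

```python
import json

def zenodo_deposit_payload(title, description, creators, keywords=None):
    """Build the JSON metadata body for a Zenodo deposition.

    Field names follow the Zenodo REST API: a "metadata" wrapper,
    an "upload_type", and "creators" as a list of {"name": ...} dicts.
    """
    return {
        "metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": description,
            "creators": [{"name": c} for c in creators],
            "keywords": keywords or [],
        }
    }

# Example payload for the acoustic dataset (values are placeholders):
payload = zenodo_deposit_payload(
    "RADICLE_Acoustic_Sensor_Data",
    "Acoustic monitoring data from laser welding trials.",
    ["TWI", "LOE"],
    keywords=["laser welding", "acoustic monitoring"],
)
body = json.dumps(payload)  # request body for POST /api/deposit/depositions
```

The resulting body would be sent with an access token from the depositor's ZENODO account; file uploads then follow as separate requests against the created deposition.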
# 5\. Dataset #2 S355 test data
The initial characterisation of Dataset number 2, the S355 test data, is
presented next in Table 3:
**Table 3 - S355 test data**
<table>
<tr>
<th>
**Dataset Characterisation**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Dataset reference and name**
</td>
<td>
RADICLE_S355_Test
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The RADICLE_S355_Test dataset consists of the raw data from testing the laser
welding process on S355 steel (a high-strength low-alloy structural grade), in
a butt-weld configuration.
S355 is a material commonly applied throughout the ‘heavy industry’ sectors,
including transport (road, rail and marine), yellow goods (earth-moving and
construction machinery), civil engineering and energy sectors. Hence, the
RADICLE consortium will provide open access to the data collected during the
test trials, to allow for further use of this information.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
To be decided when data is moved to ZENODO.
</td> </tr>
<tr>
<td>
**Data storing**
</td>
<td>
Currently the dataset RADICLE_S355_Test is being stored on both a secure hard
drive and a web repository, under the responsibility of TWI and VTT.
</td> </tr>
<tr>
<td>
**Archiving, preservation and sharing**
</td>
<td>
As it has been described before, open access RADICLE data will be designed to
remain operational for 5 years after project end.
By the end of the project, the final RADICLE_S355_Test dataset will be
transferred to the ZENODO repository, which ensures sustainable archiving of
the final research data.
</td> </tr> </table>
# 6\. Dataset #3 Validation trials data for S355
The initial characterisation of Dataset number 3, the validation trials data
for S355, is presented next in Table 4:
**Table 4 - Validation trials data for S355**
<table>
<tr>
<th>
**Dataset Characterisation**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Dataset reference and name**
</td>
<td>
RADICLE_S355_Validation
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The RADICLE_S355_Validation dataset consists of the raw data related to
validation trials for the laser welding process on S355 steel (a high-strength
low-alloy structural grade), in a butt-weld configuration.
S355 is a material commonly applied throughout the ‘heavy industry’ sectors,
including transport (road, rail and marine), yellow goods (earth-moving and
construction machinery), civil engineering and energy sectors. Hence, the
RADICLE consortium will provide open access to the data collected during the
validation trials, to allow for further use of this information.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
To be decided when data is moved to ZENODO.
</td> </tr>
<tr>
<td>
**Data storing**
</td>
<td>
The dataset RADICLE_S355_Validation is not yet available. Storing procedures
will be agreed with MTC, the WP6 leader.
</td> </tr>
<tr>
<td>
**Archiving, preservation and sharing**
</td>
<td>
As it has been described before, open access RADICLE data will be designed to
remain operational for 5 years after project end.
By the end of the project, the final
RADICLE_S355_Validation dataset will be transferred to the ZENODO repository,
which ensures sustainable archiving of the final research data.
</td> </tr> </table>
# 7\. Conclusions
This document is the second iteration of RADICLE’s Data Management Plan
(DMP). The purpose of the DMP is to provide an analysis of the main elements
of the data management policy that will be used by the Consortium with regards
to the project research data.
The DMP is not a fixed document: on the contrary, it has evolved and will
continue to evolve during the lifespan of the project. This second version of
the DMP includes an
overview of the datasets to be produced by the project, and the specific
conditions that are attached to them.
The final version of the DMP will get into more detail and describe the
practical data management procedures implemented by the RADICLE project, with
the goal of complying with the requirements set out by RADICLE’s participation
in the Pilot on Open Research Data launched by the European Commission along
with the H2020 programme.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0506_RADICLE_636932.md
|
# 2.4 Data Sharing
The Data Sharing section must describe the steps taken to protect privacy,
security, confidentiality and IPR; how the data will be shared; access
procedures; who controls the data; embargo periods; outlines of technical
dissemination; software and tools to enable re-use; whether access is widely
open or restricted to specific groups; the repository where the data will be
stored; and the type of repository (institutional, standard repository for the
discipline, etc.).
In case a dataset cannot be shared, the reasons must be stated (ethical,
personal-data rules, intellectual property, commercial, privacy-related,
security-related).
# 2.5 Archiving and Preservation (including storage and backup)
The Archiving and Preservation section must describe the procedures for
long-term preservation, how long the data should be preserved, the
approximated end volume, the associated costs and how these are planned to be
covered.
# 3 Storage and access plan
The project website will be one of the main platforms used by the consortium
for sharing and storing the project results, including the project
deliverables and other reports. The website will be kept online for a minimum
of 5 years after the end of the project.
During the project duration the consortium will look at identifying other
platforms that can ensure the access to the project results and data for a
longer period.
**4 DMP template**
The Project DMP can be filled in using the template (Annex I) or in the DCC
data repository.
# DMP template
**Data Management Plan**
1. **Data Identification**
Data set reference
Data set name
2. **Data set Description**
Data Description
Type (Collected/Processed/Generated)
Origin (if Collected/Processed)
Format
Nature
Scale
Useful to whom
Does it underpin a scientific publication
Information on existing similar data
Possibility for integration and reuse
Storage and Backup
**3\. Data Standards and Metadata**
Standards used or, if these do not exist, an Outline on How and What Metadata
will be created
## 4\. Data Sharing
Steps to Protect Privacy, Security, Confidentiality, IPR
How the Data will be shared
Access Procedures
Who controls it?
Embargo Periods
Outlines of Technical Dissemination
Software and Tools to Enable Re-Use
Widely Open Access or Restricted to Specific Groups
Repository Where Data will be Stored
Type of Repository (institutional, standard repository for the discipline,
etc.)
In the case Dataset cannot be shared, the reasons (ethical, rules of personal
data, intellectual property, commercial, privacy-related, security-related)
## 5\. Archiving and Preservation (including storage and backup)
Procedures for Long-Term Preservation
How long the Data should be preserved
Approximated End Volume
Associated Costs and how these are planned to be covered
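The Annex I fields above could also be captured as a machine-readable record, which makes later DMP updates easier to diff and validate. A minimal sketch in Python, with illustrative field names and values (no schema is mandated by this document):

```python
# Illustrative machine-readable rendering of the Annex I template;
# every field name and value here is an example only.
dmp_record = {
    "data_identification": {"reference": "DS-001", "name": "Example dataset"},
    "data_set_description": {
        "type": "Collected",            # Collected / Processed / Generated
        "format": "CSV",
        "useful_to": "project partners",
        "underpins_publication": False,
    },
    "standards_and_metadata": "To be defined (e.g. Dublin Core)",
    "data_sharing": {
        "access": "open",               # open / restricted
        "repository": "institutional",
        "embargo_months": 0,
    },
    "archiving_and_preservation": {
        "preservation_years": 5,
        "approx_end_volume_gb": 10,
    },
}
```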
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0509_RadioNet_730562.md
|
# Introduction
This deliverable provides the RadioNet data management plan (DMP) version 1.0.
It is important to mention that the RadioNet project does not create data in
the true sense of the word. However, one of the activities is designed to
develop software for astronomical data. Thus some astronomical data will be
used during the development and testing of the software.
This document outlines how the collected or created research data will be
managed during and after the RadioNet project. The DMP describes, which
standard and methodology will be followed for data collection and generation,
and whether and how the data will be available.
This document follows the template (ver. 3.0, 26.7.2016) provided by the
European Commission in the Participant Portal 1 .
# 1 Data Summary
The RadioNet project will generate software products for radio astronomy data
sets in the JRA RINGS (WP7). RINGS will generate reference data, both actual
data and simulations, from various facility sets (LOFAR, _e_ -Merlin, etc.).
The purpose of these data sets is the validation and verification of the
developed algorithms. The generated astronomical data will be in the common
formats, i.e. the MeasurementSet [see
_http://dx.doi.org/10.1016/j.ascom.2015.06.002_ ] and FITS images. If data
sets are already available then those will be reused. Otherwise, new
observations and simulations will be requested to generate a reference data
set. The software will reuse the CASA and CASACORE libraries. The origin of
the data are the radio astronomy facilities. The expected size of the data is
several TBytes to 1 PByte. Software will be of the order of several kloc.
After the lifetime of the project, the data sets will be kept available on
Github and maintained by the RINGS partners for any future improvements of the
algorithms. The main users of the results are radio astronomers.
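The FITS format mentioned above has a simple, fully specified header layout: a sequence of fixed 80-character "cards", each holding a keyword, an optional value and an optional comment. In practice a library such as `astropy.io.fits` would be used; purely as an illustration of the format, a minimal card parser might look like:

```python
def parse_fits_card(card):
    """Parse one 80-character FITS header card into (keyword, value, comment).

    Handles only the simple "KEYWORD = value / comment" form; in FITS,
    string values are enclosed in single quotes and the keyword occupies
    the first eight columns.
    """
    keyword = card[:8].strip()
    if card[8:10] != "= ":
        # Commentary card (e.g. COMMENT or HISTORY): no value field.
        return keyword, None, card[8:].strip()
    rest = card[10:]
    if rest.lstrip().startswith("'"):            # quoted string value
        body = rest.lstrip()[1:]
        value, _, tail = body.partition("'")
        value = value.rstrip()
        comment = tail.partition("/")[2].strip()
    else:                                        # numeric or logical value
        value_part, _, comment = rest.partition("/")
        value = value_part.strip()
        comment = comment.strip()
    return keyword, value, comment

card = "NAXIS   =                    2 / number of data axes".ljust(80)
print(parse_fits_card(card))  # ('NAXIS', '2', 'number of data axes')
```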
# 2 FAIR data
## 2.1 Making data findable, including provisions for metadata
The data sets will be stored in the existing archives of the facilities.
TABLE 2.1 ARCHIVES OF THE RAW DATA OF THE RADIONET INFRASTRUCTURES
<table>
<tr>
<th>
TA Name
</th>
<th>
Archive address
</th>
<th>
Access conditions
</th> </tr>
<tr>
<td>
EVN
</td>
<td>
http://www.jive.nl/select-experiment
</td>
<td>
Free after 1 year
</td> </tr>
<tr>
<td>
_e_ -MERLIN
</td>
<td>
_http://www.e-merlin.ac.uk/archive/_
</td>
<td>
Free after 1 year
</td> </tr>
<tr>
<td>
Effelsberg
</td>
<td>
_http://www.mpifr-bonn.mpg.de/en/effelsberg_
</td>
<td>
Free, upon request
</td> </tr>
<tr>
<td>
LOFAR
</td>
<td>
_http://lofar.target.rug.nl/_
</td>
<td>
Free after 1 year
</td> </tr>
<tr>
<td>
IRAM
</td>
<td>
_http://www.iram-institute.org/EN/content-page-240-7158-240-0-0.html_ ; new to
be ready in 2016
</td>
<td>
Free after 1 year
</td> </tr>
<tr>
<td>
APEX
</td>
<td>
_http://archive.eso.org/wdb/wdb/eso/apex/form_
</td>
<td>
Free after 1 year
</td> </tr>
<tr>
<td>
ALMA
</td>
<td>
_https://almascience.eso.org/alma-data/archive_
</td>
<td>
Free after 1 year
</td> </tr>
<tr>
<td>
WSRT/ALTA
</td>
<td>
_https://www.astron.nl/wsrt-_
_archive/php/QueryForm.php_ (ready in 2018)
</td>
<td>
Free
</td> </tr> </table>
The metadata standards and discovery mechanisms of those facilities will be
used. Simulated data will be reproducible from the sets of parameters used to
generate them. These parameters will be documented wherever the simulations
are used.
Software products will be integrated in CASA/CASACORE and use the associated
discovery mechanisms. For the data products, we will follow the naming
conventions of the facilities. For the software products we will use the
naming conventions of CASA/CASACORE. The metadata in the archives of the
various facilities adequately describes all relevant parameters and keywords
for searching. For software products the search keywords are not applicable.
There will be no explicit version numbers for the archives, as the metadata
contains timestamps that uniquely identify the observation data. For the
software, however, version numbers following the versioning schemes of
CASA/CASACORE will be used. The metadata created for observations is
described in the Measurement Set standard [see reference above] and in the
FITS standard. The metadata associated with software products are the headers
in the code and the software documentation.
## 2.2 Making data openly accessible
The archives of all RadioNet facilities comply with the open standards
policies. All data will be available during and after the project´s lifetime.
Software products will become available as open source. The archives are
accessible via web interfaces, most of them complying with the Virtual
Observatory (VO) standards. The software products will be made accessible via
the CASA/CASACORE repository in Github.
Depending on the data size, the data is directly downloadable or is accessible
by interaction with the observatory staff. Where possible VO tools can be used
to access images. For software products, direct download from the repositories
will be available and do not require additional tooling.
The various archives have their own documentation about the software needed to
access the data. It is not necessary to include the relevant software, as all
tools are openly available. Data and the associated metadata will be stored in
the archives of the RadioNet facilities. Code implementing calibration
algorithms and the associated documentation will be integrated with the
CASA(-CORE) repositories. Both are open source and there are no restrictions
on use. An appropriate arrangement with the identified repository has been
explored. There is no need for a data access committee.
## 2.3 Making data interoperable
The data produced is stored in common formats and standards of the
astronomical communities and the software products will adhere to the
interoperability conventions of CASA/CASACORE, that is, allowing data exchange
and re-use between researchers, institutions, organisations, countries, etc.
The Measurement Set and FITS standards will be used for data and metadata
vocabularies in order to make the data interoperable. The standard
vocabularies will be used for all data types present in the data set, to allow
inter-disciplinary interoperability. The use of uncommon or newly generated
project-specific ontologies or vocabularies is not foreseen.
## 2.4 Increase data re-use (through clarifying licences)
Software products are published in the CASACORE repository under GNU General
Public License v3.0. Data products are subject to the data policies and
licenses of the RadioNet facilities (see Table 2.1). If new data is required,
the data will generally become available 1 year after the observation has
taken place (see also Table 2.1). No re-use of the data outside the radio
astronomy community is currently foreseen. However, the data is openly
accessible for third parties. The project will seek interaction with
industrial partners to investigate the reuse of the software products in other
domains. The data storage terms are determined by the archive policies of the
facilities, which commonly store data indefinitely. There are no limits
foreseen to the reusability of software products delivered by RINGS. The data
quality is ascertained by the quality procedures of the facilities.
# 3 Allocation of resources
There are no costs required to make the data FAIR. The JRA RINGS leader will
be responsible for the data management. There is no need for plans for long-
term preservation, as they will be designed by the facilities and the
CASA/CASACORE collaboration partners.
# 4 Data security
The data is secured according to the policies and arrangements of the RadioNet
facilities, which are publicly available (See table 2.1 for the address). They
assure long-term preservation and curation of the data.
**5 Ethical aspects**
There are no ethical or legal issues that can have an impact on data sharing.
# 6 Other issues
The RadioNet JRA RINGS is using the data generated by RadioNet facilities,
which follow their own procedures for the data management. However, since the
RadioNet facilities follow the open policy procedure, no particular influence
on the FAIR principles is expected.
**Copyright**
_© Copyright 2017 RadioNet_
_This document has been produced within the scope of the RadioNet Project._
_The utilization and release of this document is subject to the conditions of
the contract within the Horizon2020 programme, contract no. 730562_
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0511_RNADIAGON_824036.md
|
# FAIR DATA
## Making data findable, including provisions for metadata
Within RNADIAGON, we will standardize rules and guidelines for how laboratory
logbooks are maintained and archived at each of the project partners. So far,
the
practice has been diverse and quality control relied exclusively on the
responsible research group leaders. RNADIAGON will establish naming
conventions and provide resources to produce structured metadata. All datasets
produced within the project will be indexed in repositories with Accession
Numbers. Metadata of the published datasets will include all critical
information necessary to reproduce the experiment: source and storage of
material before the experiment, experimental conditions, equipment, controls
and treatments. The data and metadata produced by core facilities do not
follow any clear standards; it is generally the responsibility of users to
identify the proper guidelines for their experiment.
## Making data openly accessible
Within the RNADIAGON project, raw sequencing data, pre-processed and processed
data, and metadata will be produced. These data will be made openly available
by uploading them to the public GEO database, from which they can be freely
shared. Restrictions on, and accessibility of, the data can be managed by the
person who uploads them to the database. To access the data, a command-line
utility may be needed given the large size of the raw data packages; no
additional software documentation is required. Metadata will be uploaded to
the GEO database together with the raw, pre-processed, and processed data;
specific documentation and code will be available on request. As the GEO
database is a user-friendly and publicly free tool, there are no restrictions
on use.
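For bulk access, GEO series are also mirrored on NCBI's FTP site, which groups accessions into "nnn" stanzas (the last three digits of the numeric part replaced by the literal string "nnn"). A small sketch of building such a URL; the directory-layout convention is assumed from common GEO usage, not mandated by this DMP:

```python
def geo_series_url(accession):
    """Build the NCBI GEO FTP directory URL for a series accession.

    GEO groups series in "nnn" stanzas: the last three digits of the
    numeric part are replaced with "nnn"
    (e.g. GSE12345 -> geo/series/GSE12nnn/GSE12345/).
    """
    if not accession.startswith("GSE"):
        raise ValueError("expected a GSE series accession")
    digits = accession[3:]
    stanza = "GSE" + (digits[:-3] if len(digits) > 3 else "") + "nnn"
    return f"https://ftp.ncbi.nlm.nih.gov/geo/series/{stanza}/{accession}/"

print(geo_series_url("GSE12345"))
# https://ftp.ncbi.nlm.nih.gov/geo/series/GSE12nnn/GSE12345/
```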
## Making data interoperable
Interoperability of the data, i.e. allowing data exchange and re-use between
the cooperating institutions, organisations, and researchers, will be
guaranteed by following the manufacturers’ recommendations and by depositing
the data in the free GEO database with no restrictions on downloading and
sharing them.
allow inter-disciplinary interoperability, standard vocabularies for all data
types present in our data sets will be used.
## Increase data re-use (through clarifying licences)
Generally, we will not limit re-use by academic users by any restrictive
licences. We will follow licensing policies of public repositories used to
publish the data. Where possible, we will employ the Creative Commons CC-BY
standard, which will allow wide re-use of data while assuring proper
attribution of origin in results. We will disclose data at the time of
publishing as required by majority of journals, or only after making sure that
no potential in commercialization is endangered. The experiences of BioVendor,
Inc. (biotech industry partner) will significantly increase the competences of
academic partners to identify opportunities for transfer of knowledge and
develop strategies to valorise them (WPs 6 and 7). Thus, we will be ready to
assess the relevance of research results and data for application and proceed
towards protection of intellectual property in a timely manner, making sure that
all results will be available to scientific community that may further profit
from them as soon as possible.
# ALLOCATION OF RESOURCES
The RNADIAGON project expects to allocate financial resources to cover open
access costs of scientific publications published as a result of the project.
There is no need for additional costs regarding data management as only NGS
data (raw, pre-processed, processed, metadata) will be produced and these will
be stored in publicly available database GEO.
Ondřej Slabý as a project coordinator will be responsible for supervision and
monitoring of data policy and Data Management Plan implementation and its
regular update in cooperation with partner PIs.
The allocation of capacity to ensure that the data generated by the project
are FAIR will be the responsibility of research group leaders involved in the
project. The standard time frame for data storage is 10 years.
# DATA SECURITY
Produced NGS data (raw, pre-processed, processed, metadata) will be safely
stored in local repositories for long term preservation. The data
infrastructure at cooperating institutions is operated by locally approved
institutes responsible for data management.
# ETHICAL ASPECTS
RNADIAGON project does not involve direct participation of human/patients.
Peripheral blood samples to be processed in this study were anonymized
immediately after their collection and registration at the clinical centres,
and all participants of the RNADIAGON project will work only with fully
anonymized, coded samples. These codes do not allow the patients to be
identified or traced. All patients included in the project were informed in
detail of the
specific use of their biological material; they were also informed about risks
and possible consequences associated with peripheral blood collection.
Simultaneously, the possibility to refuse to be involved in the study, or to
withdraw at their own request without giving reasons after the project had
started, was highlighted to patients; such a decision did not affect the care
the patient received at the hospital. Patients were also able to ask any
questions before agreeing to be involved and during the biobanking. Patients’
handwritten signatures on the informed consent were required before collecting
their blood plasma/serum samples.
The RNADIAGON partners will not come into direct contact with any patients.
The RNADIAGON partners will only be working with blood/serum samples obtained
from specific biobanks. Each subject was informed about the storage of their
plasma/serum samples in the biobank, the procedures, and the intended purposes,
and only after signing an informed consent, blood plasma/serum samples have
been taken. Issues on insurance, incidental findings and the consequences of
leaving the study are discussed according to the European guidelines and local
regulations. After reading and discussing the patient information sheet, all
patients recruited into the biobanks gave written informed consent.
The RNADIAGON research will not work with any personal data, including genetic
data. We will only work with fully anonymised biological samples
(serum/plasma) and with RNA, which does not allow personal identity to be
uncovered.
Only fully anonymized plasma/serum samples will be exported from the EU to the
University of Texas, USA, where they will be used to fulfil the project tasks.
Before an export of peripheral blood plasma/serum samples to USA, Import
Permit Approval Letter (Permit to Import Infectious Biological Agents,
Infectious Substances, and Vectors) will be obtained (also compliant with D
9.6 NEC – Requirement No. 20).
Ethical aspects related to this issue are closely described in the Description
of Action, Part B, chapter 5 and further addressed via the related ethics
deliverables (D9.4 HCT – Requirement No.10, D9.5 PODP – Requirement No. 17 and
D 9.6 NEC – Requirement No. 20).
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0516_SUNSET_785585.md
|
# 1\. Executive Summary
This document, D1.2 - Data Management Plan (DMP) is a deliverable of the
SUNSET project launched under the CS2-Innovative Action, which is funded by
the European Union’s H2020 through Clean Sky 2 Programme under Grant Agreement
#785585.
As part of the development of the more electric aircraft, SUNSET’s main goal
is to develop a demonstrator of a new generation of high-density energy module
and bidirectional converter for on-ground operations.
The purpose of the Data Management Plan is to provide an analysis of the main
elements of the data management policy that will be used by the consortium
regarding the data generated and managed in the Sunset project. This document
describes the type of research data that will be generated or collected during
the project, the standards that will be used, how the data generated will be
preserved and what parts of the different datasets will be shared for
verification or reused. The DMP reflects the exploitation and IPR requirements
as defined in the Consortium agreement.
The present document is the first version of the SUNSET DMP, which includes an
overview of the datasets to be produced by the project and the specific
conditions to be applied for sharing and re-use. To ensure both privacy and
dissemination efficiency, the Data Management Plan will be updated during the
lifecycle of the project to classify each set of data as knowledge that can be
disseminated or that must be protected, and to define the lifespan of every
set of data. More specifically, the revisions of this document will be
delivered in the periodic reporting of the project as defined below:
* **M15** : First periodic review – Technological trade-off will be finalized, and preliminary design phase will be completed. The first evaluation of data to be shared will be completed.
* **M30** : Second periodic review – TRL4 technical review will be completed and the necessary tools to preserve and curate the data generated by Sunset project must be specified.
* **M45** : At the end of the project, the TRL6 demonstrator will be integrated and validated on the overall system. At this point, the consortium will be able to update and finalize all the policies regarding data reuse that were described in older versions of the DMP, in accordance with the rules set out in the Grant agreement and Consortium Agreements that all the partners of Sunset project have signed.
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement N°785585. This output
reflects only the author’s view and that the European Union is not responsible
for any use that may be made of the information it contains
# 2\. DATA MANAGEMENT AND RESPONSIBILITY
## 2.1. DMP Internal Consortium Policy
As a project participating in the Open Research Data Pilot (ORDP) in Horizon
2020, SUNSET will make its research data findable, accessible, interoperable
and reusable (FAIR). Nevertheless, data sharing in the open domain can be
restricted, considering “the need to balance openness and protection of
scientific information, commercialization and Intellectual Property Rights
(IPR), privacy concerns, security as well as data management and preservation
questions” as stated in Guidelines on FAIR Data Management in H2020 published
by European Commission. In these conditions, datasets which are candidates for
dissemination and sharing will be checked to ensure that:
* They are not confidential and do not include commercially sensitive information.
* They are compliant with the Grant Agreement and Consortium Agreements signed by all partners of the Sunset project.
* The dissemination of the data does not damage exploitation or IP protection prospects.
Sunset is a CS2JU project linked to an ITD/IADP demonstrator. The coordinator,
Centum Adeneo, must consult the topic manager, Safran Landing System, to
determine the scope and perimeter of the possible open-access data and
identify in writing the data that will be generated by the action or exchanged
during its implementation. The data described below, which can be disseminated
and made available under the open-access regime, have been approved by the
topic manager.
## 2.2. Data Management Responsibility
For all data types, the consortium will examine the aspects of potential
conflicts against the IPR protection issues of the knowledge generated before
deciding which information could be made public and when. The decision process
will be described in the deliverable “D1.3 – Plan for Communication,
Dissemination and exploitation of project results”.
The role of the project data contact (PDC) is to manage the relationship with
the topic manager and the partners for the dissemination of the data according
to the Data Management Plan.
<table>
<tr>
<th>
Project Data Contact (PDC)
</th>
<th>
**Emmanuel FRELIN**
</th> </tr>
<tr>
<td>
PDC Affiliation
</td>
<td>
**Centum Adeneo**
</td> </tr>
<tr>
<td>
PDC mail
</td>
<td>
**[email protected]**
</td> </tr>
<tr>
<td>
PDC telephone number
</td>
<td>
**+33 (0)4 72 18 08 40**
</td> </tr> </table>
## 2.3. Data nature, link with previous data and potential users
The data collected, generated and used in the Sunset project include the
following types:

* _Experimental and performance data:_ Technological trade-off results and technical data related to the performance of the different demonstrators or prototypes developed for the Sunset project fall into this data type. The level of access to this kind of data will be approved by the partners and by the topic manager.
* _Deliverables:_ Written documents that describe the technical work performed in Sunset project and its outcomes. The level of access of the deliverables produced is regulated by the Grant and Consortium agreements.
* _Reports:_ Written reports such as meeting minutes, periodic and final reports, presentations, etc. fall into this data type. The level of access of the reports produced is regulated by the Grant and Consortium agreements.
* _Scientific publications and documentation:_ Documentation such as presentations for exhibition, posters, promotional materials etc. or publications in relevant scientific journal, books and conference which report on the work of the project, fall into this data type. All project related publications will be approved by the topic manager and will contain an explicit acknowledgment to Sunset project, in which the name and the EU grant number will be mentioned.
Of course, the data are not limited to the description above and can evolve
during the Sunset project.
## 2.4. Data summary
The following categories of outputs are declared “ORDP” in the Grant Agreement
and will be made “Open Access” (to be provided free of charge for public
sharing). These data will be managed according to the present Data Management
Plan.
\- Public deliverables:

  * D1.2: Data Management Plan (this document)
  * D1.3: Plan for communication, dissemination and exploitation of project results
  * D1.4: Report on communication, dissemination and exploitation actions.

\- Articles published in Open Access scientific journals.

\- Conference and workshop abstracts or articles.
In addition to the data above and in agreement with the topic manager, the
following data will be provided in open access. Of course, this data will be
submitted for approval by the partners of Sunset project and by the topic
manager before dissemination.
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
**Data Set**
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
**Data Description**
</td>
<td>
**Data Ref.**
</td>
<td>
**Data Sharing**
</td>
<td>
**Data type**
</td>
<td>
**Data source**
</td>
<td>
**Data Format**
</td>
<td>
**Reuse of existing data**
</td>
<td>
**Potential for reuse**
</td>
<td>
**Diffusion principles**
</td> </tr>
<tr>
<td>
WP1-Project Management
</td>
<td>
**D1.2 - Data**
**Management Plan**
</td>
<td>
DS-10961Q12
</td>
<td>
open access
</td>
<td>
document
</td>
<td>
compilation
</td>
<td>
.pdf
</td>
<td>
</td>
<td>
</td>
<td>
Secure file-sharing platform
</td> </tr>
<tr>
<td>
WP1-Project Management
</td>
<td>
**D1.3 - Dissemination,**
**Communication and Exploitation**
</td>
<td>
DS-10961Q13
</td>
<td>
open access
</td>
<td>
document
</td>
<td>
compilation
</td>
<td>
.pdf
</td>
<td>
</td>
<td>
</td>
<td>
Secure file-sharing platform
</td> </tr>
<tr>
<td>
WP1-Project Management
</td>
<td>
**D1.4 - Project**
**Exploitation Results**
**Report**
</td>
<td>
DS-10961Q14
</td>
<td>
open access
</td>
<td>
document
</td>
<td>
compilation
</td>
<td>
.pdf
</td>
<td>
</td>
<td>
The data will be useful for other research groups and industrial companies working on related subjects in the area of electrical energy storage and mobility
<td>
Secure file-sharing platform
</td> </tr>
<tr>
<td>
WP3-Technologies Trade-
Off
</td>
<td>
**Extract of**
**semiconductors tradeoff**
</td>
<td>
DS-10961D05
</td>
<td>
open access
</td>
<td>
document
</td>
<td>
datasheet
</td>
<td>
.pdf
</td>
<td>
compilation of existing data
</td>
<td>
The data will be useful for other research groups and industrial companies working on related subjects in the area of electrical energy storage and mobility
<td>
Secure file-sharing platform
</td> </tr>
<tr>
<td>
WP3-Technologies Trade-
Off
</td>
<td>
**Extract of power converter topologies trade-off**
</td>
<td>
DS-10961D04
</td>
<td>
open access
</td>
<td>
document
</td>
<td>
scientific article
</td>
<td>
.pdf
</td>
<td>
compilation of existing data
</td>
<td>
The data will be useful for other research groups and industrial companies working on related subjects in the area of electrical energy storage and mobility
<td>
Secure file-sharing platform
</td> </tr>
<tr>
<td>
WP3-Technologies Trade-
Off
</td>
<td>
**Extract of batteries technologies trade-off**
</td>
<td>
DS-10961D03
</td>
<td>
open access
</td>
<td>
document
</td>
<td>
datasheet
</td>
<td>
.pdf
</td>
<td>
compilation of existing data
</td>
<td>
The data will be useful for other research groups and industrial companies working on related subjects in the area of electrical energy storage and mobility
<td>
Secure file-sharing platform
</td> </tr>
<tr>
<td>
WP8-TRL6 Demonstrator
Integration
</td>
<td>
**Extract of test report synthesis**
**(evolution compared to state of the art)**
</td>
<td>
TBD
</td>
<td>
open access
</td>
<td>
document
</td>
<td>
engineering
</td>
<td>
.pdf
</td>
<td>
</td>
<td>
Evolution compared to the state of the art
</td>
<td>
Secure file-sharing platform
</td> </tr> </table>
# 2\. FAIR data
## 2.1. Making data findable, including provisions for metadata
Every project developed by Centum Adeneo is identified with an identification
number, and all of its documentation is referenced with it.
The identification number for the Sunset project is **10961** , and the project
documentation follows the Centum Adeneo formalism described below:
<table>
<tr>
<th colspan="5">
**Identification number**
</th>
<th>
**Document type**
</th>
<th colspan="2">
**Product number**
</th>
<th>
**Version**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**0**
</td>
<td>
**9**
</td>
<td>
**6**
</td>
<td>
**1**
</td>
<td>
**X**
</td>
<td>
**Y**
</td>
<td>
**Y**
</td>
<td>
**Z**
</td> </tr>
<tr>
<td colspan="5">
5-digit project number
</td>
<td>
A: Project management
D: Technical document
Q: Quality assurance documents
</td>
<td colspan="2">
Incremental number by document type
</td>
<td>
Between A and Z
</td> </tr> </table>
Data released in open access will be identified by the string “DS-” placed
before the identification number of the project, e.g.
“ _**DS-10961Q01-A00**_ ”.
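The scheme above lends itself to a mechanical check. The following Python sketch (hypothetical, not part of the Centum Adeneo tooling) parses references such as 10961Q12-B; the optional numeric suffix after the version letter is an assumption inferred from the open-access example above:

```python
import re

# Illustrative parser for the reference scheme described above:
# optional "DS-" prefix (open access), 5-digit project number,
# document type letter (A, D or Q), two-digit incremental number,
# then a version letter with an assumed optional numeric suffix,
# e.g. "10961Q12-B" or "DS-10961Q01-A00".
DOC_REF = re.compile(r"^(DS-)?(\d{5})([ADQ])(\d{2})-([A-Z])(\d*)$")

DOC_TYPES = {"A": "Project management",
             "D": "Technical document",
             "Q": "Quality assurance"}

def parse_ref(ref):
    """Split a document reference into its fields; None if malformed."""
    m = DOC_REF.match(ref)
    if not m:
        return None
    prefix, project, doc_type, number, version, suffix = m.groups()
    return {
        "open_access": prefix is not None,
        "project": project,
        "type": DOC_TYPES[doc_type],
        "number": number,
        "version": version + suffix,
    }
```

Any reference that does not follow the formalism (for instance an unknown document type letter) is rejected rather than guessed at.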
## 2.2. Making data openly accessible
* _Internal data:_
All data currently produced on the Sunset project is stored on an access-
restricted drive that any consortium member can reach quickly. Each
work-package leader and task leader uploads their technical reports, test
reports, milestones and deliverables to the project repository. This drive and
the Sunset project repository are managed by Centum Adeneo.
* _Open access data:_
For all open data identified on the SUNSET project, Zenodo will be used as the
project's open data repository. Zenodo provides a Digital Object Identifier
(DOI) for every dataset, thus ensuring that all SUNSET data uploaded will have
a persistent and unique identifier.
## 2.3. Making data interoperable
Generally, all data released as open data will be available in PDF format.
Scientific publications and posters will follow the format required by the
conference or journal in which the data will appear.
For deliverables and written reports, the partners and topic manager have
already agreed on templates and formats to be used on the project.
## 2.4. Increase data re-use (through clarifying licences)
Open data will be published on ZENODO at the end of the respective work
package and after Topic Manager validation.
All data deposited on ZENODO will be publicly accessible without restriction.
# 3\. Allocation of resources
The cost of publishing FAIR data includes:
* Maintenance of the physical servers
* Time dedicated to data generation
* Long term preservation of the data
Sunset is a CS2JU project linked to an ITD/IADP demonstrator with a
considerable amount of confidential data. Resources to maintain and generate
data are therefore supported by the SUNSET project. Long-term preservation is
free of charge for open access data uploaded to Zenodo, and is covered by the
project for confidential data.
A repository will be created on ZENODO for the project’s open data.
# 4\. Data security
The processes of backup and archiving are described below:
* _Backup:_
  * Every day (at noon and midnight), a differential backup is performed.
  * Every week, a full backup is performed. The backup media are stored in a fireproof safe or at the CENTUM ADENEO site manager's home for a period of one month.
  * Every month, a full backup is performed. The backup media are stored in a fireproof safe or at the CENTUM ADENEO site manager's home for a period of one month.
  * Every year, a full backup is performed. These backup media are stored at the CENTUM ADENEO site manager's home and in a fireproof safe.
* _Archive_
As required by the Grant Agreement, the Sunset database will remain
operational for at least one year after project completion. After this
one-year period, the project database is archived as described below:
  * Computer archiving: the data are archived to tape in duplicate. One tape is stored in a fireproof safe; the other is stored off-site.
  * A register of project archives keeps track of the archived data and backup media.
  * At each change of media type, drive or software, a compatibility check with the existing archive is performed. Data on non-compatible media will be transferred to the new media.
# 5\. Ethical aspects
Not applicable to the Sunset project.
# 6\. Other issues
Not applicable to the Sunset project.
# INTRODUCTION
A Data Management Plan (DMP) has been developed using FAIR data principles –
Findable, Accessible, Interoperable and Reusable. The DMP outlines what
datasets the project will generate and compile, and how these datasets will be
made accessible and stored. The DMP also describes measures that have been
taken to safeguard and protect sensitive data and emphasizes that the produced
results must be easily located and accessible.
OceanSET has chosen to use the template provided for the Data Management Plan.
At present, very little data has been collected by the project. The OceanSET
DMP is intended to be a ‘living’ document that will outline how the OceanSET
research data will be handled during and after the project, and so it will be
reviewed and updated over the course of the project whenever significant
changes arise, such as (but not limited to):
* new data being gathered;
* generation of periodic reports;
* development of final report;
* changes in consortium policies;
* changes in consortium composition and external factors (e.g. new consortium members joining or old members leaving).
In preparation for this report the OceanSET partners considered a number of
issues to be addressed. Table 1 provides a summary of the issues considered
when preparing this DMP.
SEAI will be responsible for disseminating this DMP to all project partners.
Each project partner will be responsible for managing their data and metadata,
and for ensuring their data meet the quality standards set out in the OceanSET
Quality Handbook.
**TABLE 1: DMP COMPONENTS**
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
**1\. Data summary**
</td>
<td>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</td> </tr>
<tr>
<td>
**2\. FAIR Data**
2.1. Making data findable, including provisions for metadata
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanisms. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keywords
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline, describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
2.2. Making data openly accessible
</td>
<td>
* Specify which data will be made openly available. If some data is kept closed, provide the rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data. Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
2.3. Making data interoperable
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability. If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project. If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
**3\. Allocation of resources**
</td>
<td>
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long-term preservation
</td> </tr>
<tr>
<td>
**4\. Data security**
</td>
<td>
* Address data recovery as well as secure storage and transfer of sensitive data
</td> </tr>
<tr>
<td>
**5\. Ethical aspects**
</td>
<td>
* To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former
</td> </tr>
<tr>
<td>
**6\. Other**
</td>
<td>
* Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)
</td> </tr> </table>
## Definitions
This section provides definitions for terms used in this document.
**TABLE 2: DEFINITIONS**
<table>
<tr>
<th>
Project partners/ consortium
</th>
<th>
The organization constituted for the purposed of the OceanSET project
comprising of:
* Sustainable Energy Authority of Ireland (SEAI)
* Wave Energy Scotland (WES)
* Directorate General of Energy and Geology (DGEG)
* Ocean Energy Europe (OEE)
* France Energies Marines (FEM)
* National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA)
* Ente Vasco de la Energía (EVE)
* The University of Edinburgh (UEDIN)
* Oceanic Platform of the Canary Islands (PLOCAN)
</th> </tr>
<tr>
<td>
Dataset
</td>
<td>
Digital information created in the course of research, but which is not a
published research output. Research data excludes purely administrative
records. The highest priority research data is that which underpins a research
output. Research data do not include publications, articles, lectures or
presentations.
</td> </tr>
<tr>
<td>
Data Type
</td>
<td>
* R: Document, report (excluding the periodic and final reports)
* DEM: Demonstrator, pilot, prototype, plan designs
* DEC: Websites, patents filing, press & media actions, videos, etc.
* OTHER: Software, technical diagram, etc.
</td> </tr>
<tr>
<td>
Dissemination
Level
</td>
<td>
* PU: Public, fully open, e.g. web
* CO: Confidential, restricted under conditions set out in the Grant Agreement
* CI: Classified, information as referred to in Commission Decision 2001/844/EC
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Information about datasets stored in a repository/database template, including
size, source, author, production date etc.
</td> </tr>
<tr>
<td>
Repository
</td>
<td>
A digital repository is a mechanism for managing and storing digital content.
</td> </tr> </table>
# 1\. Data Summary
**1a. What is the purpose of the data collection/generation and its relation
to the objectives of the project?**
The OceanSET Data Management Plan (DMP) aims to provide a strategy for
managing data generated and collected during the project and to optimise
access to and re-use of research data. Data generated during the project can
be divided into the following groups:
* Data collected from stakeholders during the annual mapping exercise. This consists primarily of raw data collected via individual stakeholder questionnaires;
* Data generated by project partners during the analysis and monitoring exercises. This dataset consists of compiling raw data provided by stakeholders in the format of agreed metrics.
The purpose of this data collection is to help achieve the three main
objectives of the project:
* Facilitate the implementation of the technology development actions for Ocean energy in the SET Plan;
* Promote knowledge sharing across the European Commission, Member States, Regions and other stakeholders in the ocean energy sector;
* Investigate collaborative funding mechanisms between Member States and Regions.
These objectives in turn will incentivise the engagement of stakeholders in
the annual mapping exercise. The Plan for Exploitation and Dissemination
Report (PEDR) will devise a means of feeding project results back to
stakeholders who provided information in questionnaires. This feedback loop is
seen as a key knowledge sharing mechanism and means of continued engagement
with stakeholders. The PEDR for OceanSET is available on the OceanSET website:
_www.oceanset.eu_ .
**1b. What types and formats of data will the project generate/collect and
what is the origin of the data?**
Tables 3 and 4 below set out the dataset types and formats that will be
generated and collected by the OceanSET project, as well as the related Work
Package (WP) number. Table 3 is the list of ‘Open Access Content’, presented
in relation to the project Deliverables.
**TABLE 3: PUBLIC OCEANSET DATA**
<table>
<tr>
<th>
#
</th>
<th>
**Data Type**
</th>
<th>
**Origin**
</th>
<th>
**WP#**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Project Website
</td>
<td>
FEM & Publicly available data
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
2
</td>
<td>
Metrics for OE Sector
</td>
<td>
DGEG & WES
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
3
</td>
<td>
Report Plan for Exploitation and Dissemination of Results
</td>
<td>
FEM
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
4
</td>
<td>
Report on Project data management Plan
</td>
<td>
SEAI
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
5
</td>
<td>
Report on Knowledge sharing workshops
</td>
<td>
DGEG
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
6
</td>
<td>
Publication and Promotion of Annual Reports
</td>
<td>
SEAI & Secondary data
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
7
</td>
<td>
Report on Dissemination Workshops
</td>
<td>
Secondary data
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
8
</td>
<td>
Financial requirements for SET PLAN
</td>
<td>
WES & Secondary data
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
9
</td>
<td>
Public/private financing ratio for each action, or bundle of actions, in the
SETPlan IP
</td>
<td>
Secondary data
</td>
<td>
WP3
</td> </tr> </table>
Table 4 is the list of ‘Closed Content’, presented in relation to the project
Deliverables.
**TABLE 4: PRIVATE OCEANSET DATA**
<table>
<tr>
<th>
#
</th>
<th>
**Data Type**
</th>
<th>
**Origin**
</th>
<th>
**WP#**
</th> </tr>
<tr>
<td>
1
</td>
<td>
POPD - Requirement No. 3
</td>
<td>
SEAI
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
2
</td>
<td>
Project Management handbook
</td>
<td>
SEAI
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
3
</td>
<td>
Quality Handbook
</td>
<td>
SEAI
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
4
</td>
<td>
H - Requirement No. 1
</td>
<td>
SEAI
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
5
</td>
<td>
Refined Technology Strategy
</td>
<td>
WES
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
6
</td>
<td>
Agreed PCP operating mechanism
</td>
<td>
WES Primary Data
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
7
</td>
<td>
Annual mapping and analysis progress report
</td>
<td>
SEAI
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
8
</td>
<td>
Annual Funding Gap analysis and recommendation report
</td>
<td>
WES
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
9
</td>
<td>
Annual Monitoring and Review Report
</td>
<td>
DGEG
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
10
</td>
<td>
Annual Report on Dissemination and Communication activities
</td>
<td>
FEM & Secondary
Data
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
11
</td>
<td>
Call Documentation for PCP
</td>
<td>
WES
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
12
</td>
<td>
Design of Insurance and Guarantee Fund
</td>
<td>
Primary Data
</td>
<td>
WP3
</td> </tr> </table>
**1c. Will you re-use any existing data and how?**
While it is envisaged that most data collected will be open access and widely
disseminated, some project results may need to be protected due to IP rights.
Regarding the exploitation of results, there is currently no plan to exploit
the information other than by maximizing communication and dissemination
actions to the benefit of the project goals. The Consortium Agreement provides
an option to exploit jointly owned data between partners, via request to the
partners, under fair and reasonable use. However, there is currently no active
plan to ‘exploit’ the data.
Even if these databases are deemed valuable, the objective of the project is
not to derive direct business opportunities for the partners involved, but
rather to inform ocean energy stakeholders, funders and policy makers of the
availability of project results. This will remain central in any agreements
around data collection and dissemination.
**1d. To whom might it be useful ('data utility')?**
The data generated in the project will be very beneficial to a variety of
stakeholders including: policy makers, public funders, device developers,
utilities, private investors and supply chain companies. The data collected
will be relevant to key areas of OE project development including; technology
development, consenting and project finance. Information gathered will help
identify challenges in these areas and serve as an input for policy design at
national and European level in these areas. Finally, the data will inform the
wider public about the developments and potential of the ocean energy sector.
# 2\. FAIR data
The OceanSET DMP (D6.2) applies the Findable, Accessible, Interoperable,
Reusable (FAIR) approach for the project’s results.
## Making data findable, including provisions for metadata
It is envisaged that the report that will contain the bulk of data collected
by OceanSET will be the annual reports. OceanSET will publish 3 annual reports
which will be the primary method of disseminating the data collected. The
annual reports will present data as meta or grouped data; individual sources
of data will not be identifiable. The annual report will include references to
the original data source. Keywords will be provided which will clearly
identify the type of data contained in the report.
All OceanSET’s documents will be identifiable based on a common naming
convention. Version control will be clearly identified and will follow the
version control set out in the OceanSET Project Management Handbook, which is
available to the OceanSET consortium.
To ensure document and data control, each document and data set shall be
uniquely identifiable. Each deliverable and data set must be associated with a
unique document name to ensure version control. The deliverable and data
identifier must be used in the deliverable filename.
The data identifier for the deliverable must be: **<Deliverable
identifier>_<Up-to-three-words-from-the-data-name>_<version
number>_<Partner (author's initials)>_<date dd-mm-yyyy>.doc**
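The naming convention can be checked mechanically. The sketch below is a hypothetical helper (not part of the OceanSET tooling); the exact pattern, and the allowance for a `.docx` variant, are assumptions based on the convention as stated:

```python
import re

# Hypothetical validator for the deliverable filename convention:
# Deliverable<n.n>_<short name>_v<version>_<PARTNER(initials)>_<dd-mm-yyyy>.doc
FILENAME = re.compile(
    r"^Deliverable(\d+\.\d+)_"   # deliverable identifier
    r"([^_]+)_"                  # up-to-three words from the data name
    r"v(\d+\.\d+)_"              # version number
    r"([A-Z]+)\(([A-Z]+)\)_"     # partner acronym and author initials
    r"(\d{2}-\d{2}-\d{4})"       # date, dd-mm-yyyy
    r"\.docx?$"                  # .doc (or .docx, assumed)
)

def is_valid_deliverable_name(filename):
    """True when the filename follows the stated convention."""
    return FILENAME.match(filename) is not None
```

A check like this could run before upload to the project repository, rejecting files whose names would break version control.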
Example: Deliverable7.4_OceanSET Annual Report_v0.1_SEAI(JF)_01-01-2019.doc
## Making data openly accessible
OceanSET will focus on assessing the progress of the ocean energy sector and
will monitor the National and EU funded projects in delivering successful
supports. Relevant data will be collected annually and will be used to inform
Member States (MS) and the European Commission (EC) on progress of the sector.
It will also be used to review what works and what doesn’t and to assess how
to maximise the benefit of the funding streams provided across the MS, Regions
and the EU.
The metadata will be disseminated through the Annual Report. These reports
will be published in full as appropriate in the following locations:
* The OceanSET project website _www.oceanset.eu_
* The Community Research and Development Information Service (CORDIS) which is the European Commission's primary source of results from the projects funded by the EU's framework programmes for research and innovation including Horizon 2020.
* A research data repository, compatible and compliant with OpenAIRE guidance. The repository will host scientific publications and all datasets associated with such publications.
The Confidential project data sets and reports will be hosted on:
* The OceanSET Consortium private file sharing folder, this allows secure data share across partners. This area provides a space for information exchange and an archive for all the documentation produced along the Project lifespan.
* Individual partner’s institutional online repositories will host and preserve data until the end of the project.
Table 3 outlines the data that will be made public, with Table 4 indicating
the private data generated within OceanSET.
In accordance with Article 27 of the grant agreement (Appendix A), the project
partners are obliged to protect the results where these can be expected to be
commercially or industrially exploited.
## Making data interoperable
Data will be collected and shared in a standardised way using a standard
format for that data type. As required, reference will be made to any software
required to run it. Given the scope of this project it is anticipated that
publicly available software will be used to store data. Barriers to access
through interoperability issues are not anticipated.
The metadata format will follow the convention of the hosting research data
repository. A draft metadata format is set out below and this is subject to
review in the next DMP update.
General Information
* Title of the dataset/output
* Dataset Identifier (using the naming convention outlined in Section 2.1)
* Responsible Partner
* Work Package
* Author Information
* Date of data collection/production
* Geographic location of data collection/ production
* The title of project and Funding sources that supported the collection of the data i.e. European Union’s Horizon 2020 research and innovation programme under grant agreement No 840651.
Sharing/Access Information
* Licenses/access restrictions placed on the data
* Link to data repository
* Links to other publicly accessible locations of the data (see list in Section 2.2)
* Links to publications that cite or use the data
* Was data derived from another source?
Dataset/Output Overview
* What is the status of the documented data? – “complete”, “in progress”, or “planned”
* Date of production
* Date of submission/publication
* Are there plans to update the data?
* Keywords that describe the content
* Version number
* Format - Portable Document Format (PDF), Excel (XLSX, CSV), Word (DOC), PowerPoint (PPT), image (JPEG, PNG, GIF, TIFF).
* Size - MBs
Methodological Information
* Used materials
* Description of methods used for experimental design and data collection
* Methods for processing the data
* Instruments and software used in data collection and processing-specific information needed to interpret the data
* Standards and calibration information, if appropriate
* Environmental/experimental conditions
* Describe any quality-assurance procedures performed on the data
* Dataset benefits/utility
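The draft metadata format above can be mirrored as a simple record. The following Python sketch is illustrative only; the field names and defaults are assumptions, not a fixed OceanSET schema:

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the draft metadata format above.
# Field names are illustrative; the hosting repository's own metadata
# convention would take precedence.
@dataclass
class DatasetMetadata:
    title: str
    identifier: str            # naming convention from Section 2.1
    responsible_partner: str
    work_package: str
    authors: list = field(default_factory=list)
    collection_date: str = ""
    location: str = ""
    funding: str = ("EU Horizon 2020 research and innovation programme, "
                    "grant agreement No 840651")
    license: str = ""
    repository_link: str = ""
    status: str = "planned"    # "complete", "in progress", or "planned"
    version: str = "v0.1"
    file_format: str = "PDF"
    size_mb: float = 0.0
    keywords: list = field(default_factory=list)
```

Capturing the format as a record makes it straightforward to validate that every dataset deposited carries the general, sharing and methodological information listed above.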
## Increase data re-use (through clarifying licences)
OceanSET is focused on gathering existing data and monitoring. The metadata
will be available for re-use through the OceanSET website, where the annual
reports will be stored, and will be published on CORDIS. Data sets uploaded to
the OceanSET repository will be accessible to the public by contacting
OceanSET via the website to request access. Potential users are expected to
adhere to the Terms of Use of the repository. OceanSET will ensure that all
data requests are subject to scrutiny under GDPR and that no data which can
identify individual sources or technology will be released.
# 3\. Allocation of resources
**3a. What are the costs for making data FAIR in your project?**
The activities related to making the data/outputs open access are anticipated
to be covered within the allocated budget for each work package. Further
investigation of the potential costs related to a repository needs to be done.
The repository will ensure that data is stored safely and securely, in full
compliance with European Union data protection laws and in accordance with
Article 27 (Appendix A).
**3.b How will these be covered? Note that costs related to open access to
research data are eligible as part of the Horizon 2020 grant (if compliant
with the Grant Agreement conditions).**
The costs related to open access to research data are eligible as part of the
OceanSET Horizon 2020 grant. The costs of making scientific publications,
hosting a project website and the partners and open access data repositories
are contained within the OceanSET budget as eligible costs.
**3c. Who will be responsible for data management in your project?**
SEAI has been appointed as the Quality and Data Manager (QDM), in
collaboration with consortium partners, to manage the data generated during
the project. The QDM will identify an appropriate data repository to store and
safeguard the datasets but ensure that data is readily accessible. Data
generated during the project can be divided into the following groups:
* Data collected from stakeholders during the annual mapping exercise. This consists primarily of raw data collected via individual stakeholder questionnaires.
* Data generated by project partners during the analysis and monitoring exercises. This dataset consists of compiling raw data provided by stakeholders in the format of agreed metrics.
France Energies Marines (FEM) has been appointed as Communication and
Dissemination Manager (CDM). In collaboration with consortium partners, FEM
will oversee the identification of which datasets will be disseminated and the
most appropriate means of disseminating this data. This has been conducted,
and the defined strategy is presented in the Plan for Exploitation and
Dissemination Report (PEDR) in Deliverable 6.1.
**3d. Are the resources for long term preservation discussed (costs and
potential value, who decides and how what data will be kept and for how
long)?**
Resources for long term preservation, associated costs and potential value, as
well as how data will be kept beyond the project and how long, will be
discussed by the Consortium’s General Assembly (GA) at the SET Plan’s
Implementation Working Group meeting. OceanSET aims to align with the long-
term preservation of data of the SET Plan.
# 4\. Data security
**4a. What provisions are in place for data security (including data recovery
as well as secure storage and transfer of sensitive data)?**
For the duration of the project, datasets will be stored on the responsible
partner's storage system. Every partner is responsible for ensuring that the
data are stored safely and securely, in full compliance with European Union
data protection laws and in accordance with Article 27 (Appendix A).
SEAI as lead partner will have the responsibility to store the bulk of the
data collated from questionnaires. While a bespoke repository has not yet been
designed, a system is planned that will be aligned with requirements under
Article 27.
SEAI’s IT data handling and security policy is available below in Appendix B.
It is the policy of the SEAI to:
* Implement human, organisational, and technological security controls to preserve the confidentiality, availability and integrity of its information.
* Comply with all laws and regulations governing information security.
* Develop and maintain appropriate policies, procedures and guidelines to achieve a high standard of information security, reflecting industry best practice.
* Actively assess and manage risks to SEAI information.
* Continuously review and improve SEAI information security controls.
* Respond to any breach of security to minimise damage to information systems.
After the completion of the project, all responsibilities concerning data
recovery and secure storage will pass to the repository storing the dataset.
**4b. Is the data safely stored in certified repositories for long term
preservation and curation?**
The data will be stored long term in a certified repository in line with the
requirements of the SET Plan. This will be discussed at the SET Plan OceanSET
Implementation Working Group.
# 5\. Ethical aspects
**5 Are there any ethical or legal issues that can have an impact on data
sharing? These can also be discussed in the context of the ethics review. If
relevant, include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).**
The General Data Protection Regulation (GDPR) will be respected for all
relevant personal data collected. When the project involves access by the
partners to personal data, the partners shall be regarded as responsible for
the treatment of said data and shall comply with the applicable rules. The
ethics requirements of the OceanSET project are further described in Work
Package 1, which includes Deliverable 1.1: H - Requirement No. 1 and D1.2:
POPD - Requirement No. 3.
OceanSET has developed a procedure to ensure that any personal data received
is given by consent. Participants sign up to OceanSET via a 3-step sign-up
process.
**Step 1:** Interested parties in the OceanSET project can become participants
of the project in two ways:
* Participants can visit our website _www.oceanset.eu_ and if they are interested in getting involved, sign up to our mailing list/database;
* Project partners inform their relevant business contacts who may be interested in the project and encourage them to sign up via the website or by return email.
**Step 2** : Once an interested party signs up to our mailing list/database,
he or she will receive an email asking to confirm the requested subscription
by clicking the “Yes, subscribe me to the list”. This email also provides a
point of contact should the interested party have any questions about what
they are joining.
**Step 3** : If the interested party confirms the subscription, he or she will
be prompted to confirm that the respondent is human by clicking the “I’m not a
Robot” checkbox.
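The three steps above amount to a standard double opt-in flow. As a minimal sketch, the subscription states and allowed transitions could be modelled as follows (class and state names are illustrative, not part of OceanSET's actual system):

```python
# States of an interested party in a double opt-in sign-up flow,
# mirroring the three steps described above (names are illustrative).
SIGNED_UP = "signed_up"  # Step 1: submitted the website form
CONFIRMED = "confirmed"  # Step 2: clicked "Yes, subscribe me to the list"
VERIFIED = "verified"    # Step 3: passed the "I'm not a Robot" check


class Subscription:
    def __init__(self, email: str):
        self.email = email
        self.state = SIGNED_UP

    def confirm(self) -> None:
        """Step 2 may only follow the initial sign-up."""
        if self.state != SIGNED_UP:
            raise ValueError("confirmation only follows initial sign-up")
        self.state = CONFIRMED

    def verify_human(self) -> None:
        """Step 3 may only follow the email confirmation."""
        if self.state != CONFIRMED:
            raise ValueError("human check only follows confirmation")
        self.state = VERIFIED


sub = Subscription("participant@example.org")
sub.confirm()
sub.verify_human()
print(sub.state)  # verified
```

The point of the ordering checks is that no contact becomes an active participant unless all three steps happen in sequence, which is what makes the consent auditable.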
The sign-up form and emails from project partners inform participants about
the OceanSET project and identify the point of contact. The website also
includes a Privacy Policy adjacent to the sign-up form, which covers:
* Types of Collected Data,
* The purpose for which the data will be processed,
* The persons to whom the data may be disclosed.
If any changes occur to our privacy policy, we will notify participants of
these changes by posting the new Privacy Policy on the website. The OceanSET
Privacy Policy is available on the OceanSET website _www.oceanset.eu_ .
Data collected and produced as part of the project will be done in accordance
with the ethical principles, notably to avoid fabrication, falsification,
plagiarism or other research misconduct. It is not the intent of the OceanSET
questionnaires or interviews to collect personal data.
Participants confirm consent by completing the survey/interview. Participants
will be informed that the business information provided may be put in the
public domain in anonymised, aggregated
format. Questionnaires provided by OceanSET will include a copy of the
OceanSET Privacy Policy as well as contact details of the appointed Data
Protection Officer (DPO).
# 6\. Other issues
**6a. Do you make use of other national/funder/sectorial/departmental
procedures for data management? If yes, which ones?**
The lead partner is subject to the Data Protection Act, the Freedom of
Information Act and the General Data Protection Regulation. All data must be
collected, stored and disseminated in accordance with these Acts.
# Appendix A: OceanSET Grant Agreement Extract
### ARTICLE 27 — PROTECTION OF RESULTS — VISIBILITY OF EU FUNDING
### 27.1 Obligation to protect the results
Each beneficiary must examine the possibility of protecting its results and
must adequately protect them — for an appropriate period and with appropriate
territorial coverage — if:
(a) the results can reasonably be expected to be commercially or industrially
exploited and
(b) protecting them is possible, reasonable and justified (given the
circumstances).
When deciding on protection, the beneficiary must consider its own legitimate
interests and the legitimate interests (especially commercial) of the other
beneficiaries.
### 27.2 _Unspecified Granting Authority_ ownership, to protect the results
If a beneficiary intends not to protect its results, to stop protecting them
or not seek an extension of protection, _the Unspecified Granting Authority_
may — under certain conditions (see Article 26.4) — assume ownership to ensure
their (continued) protection.
### 27.3 Information on EU funding
Applications for protection of results (including patent applications) filed
by or on behalf of a beneficiary must — unless the _Commission_ requests or
agrees otherwise or unless it is impossible — include the following:
“The project leading to this application has received funding from the
_European Union’s Horizon 2020 research and innovation programme_ under grant
agreement No 840651”.
### 27.4 Consequences of non-compliance
If a beneficiary breaches any of its obligations under this Article, the grant
may be reduced (see Article 43).
Such a breach may also lead to any of the other measures described in Chapter
6.
**ARTICLE 29 — DISSEMINATION OF RESULTS — OPEN ACCESS — VISIBILITY OF EU
FUNDING**
### 29.1 Obligation to disseminate results
Unless it goes against their legitimate interests, each beneficiary must — as
soon as possible — ‘ **disseminate** ’ its results by disclosing them to the
public by appropriate means (other than those resulting from protecting or
exploiting the results), including in scientific publications (in any medium).
This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.
A beneficiary that intends to disseminate its results must give advance notice
to the other beneficiaries of — unless agreed otherwise — at least 45 days,
together with sufficient information on the results it will disseminate.

Any other beneficiary may object within — unless agreed otherwise — 30 days of
receiving notification, if it can show that its legitimate interests in
relation to the results or background would be significantly harmed. In such
cases, the dissemination may not take place unless appropriate steps are taken
to safeguard these legitimate interests.

If a beneficiary intends not to protect its results, it may — under certain
conditions (see Article 26.4.1) — need to formally notify the _Commission_
before dissemination takes place.

### 29.2 Open access to scientific publications

Each beneficiary must ensure open access (free of charge online access for any
user) to all peer-reviewed scientific publications relating to its results. In
particular, it must:

(a) as soon as possible and at the latest on publication, deposit a machine-
readable electronic copy of the published version or final peer-reviewed
manuscript accepted for publication in a repository for scientific
publications. Moreover, the beneficiary must aim to deposit at the same time
the research data needed to validate the results presented in the deposited
scientific publications.

(b) ensure open access to the deposited publication — via the repository — at
the latest:

1. on publication, if an electronic version is available for free via the publisher, or
2. within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.

(c) ensure open access — via the repository — to the bibliographic metadata
that identify the deposited publication.

The bibliographic metadata must be in a standard format and must include all
of the following:

* the terms _“European Union (EU)” and “Horizon 2020”_ ;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable; and
* a persistent identifier.
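The required bibliographic metadata can be represented as a simple record and checked for completeness. A minimal sketch follows; the field names are illustrative, not a formal schema, and the values are placeholders (except the grant number, which is OceanSET's):

```python
# Minimal bibliographic metadata record for a deposited publication,
# covering the fields Article 29.2 requires. Keys are illustrative,
# not a formal schema; values marked as placeholders are invented.
record = {
    "terms": ["European Union (EU)", "Horizon 2020"],
    "action_name": "OceanSET",
    "grant_number": "840651",
    "publication_date": "2020-01-01",                 # placeholder date
    "embargo_months": 0,                              # if applicable
    "persistent_identifier": "doi:10.0000/placeholder",  # placeholder DOI
}


def is_complete(rec: dict) -> bool:
    """Check that every field required by Article 29.2 is present."""
    required = {"terms", "action_name", "grant_number",
                "publication_date", "persistent_identifier"}
    return required <= rec.keys()


print(is_complete(record))  # True
```

A repository ingest step could run such a check before accepting a deposit, so that no publication enters the repository without the metadata the Grant Agreement mandates.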
### 29.3 Open access to research data
_Regarding the digital research data generated in the action (‘**data** ’),
the beneficiaries must:_
_(a) deposit in a research data repository and take measures to make it
possible for third parties to access, mine, exploit, reproduce and disseminate
— free of charge for any user — the following:_
1. _the data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible;_
2. _not applicable;_
3. _other data, including associated metadata, as specified and within the deadlines laid down in the ‘data management plan’ (see Annex 1);_
_(b) provide information — via the repository — about tools and instruments at
the disposal of the beneficiaries and necessary for validating the results
(and — where possible — provide the tools and instruments themselves)._
_This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply._
_As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data under Point (a)(i) and (iii), if the
achievement of the action's main objective (as described in Annex 1) would be
jeopardised by making those specific parts of the research data openly
accessible. In this case, the data management plan must contain the reasons
for not giving access._
**29.4 Information on EU funding — Obligation and right to use the EU emblem**
Unless the _Commission_ requests or agrees otherwise or unless it is
impossible, any dissemination of results (in any form, including electronic)
must: (a) display the EU emblem and (b) include the following text:
“This project has received funding from the _European Union’s Horizon 2020
research and innovation programme_ under grant agreement No 840651”.
When displayed together with another logo, the EU emblem must have appropriate
prominence. For the purposes of their obligations under this Article, the
beneficiaries may use the EU emblem without first obtaining approval from the
_Commission_ .
This does not however give them the right to exclusive use.
Moreover, they may not appropriate the EU emblem or any similar trademark or
logo, either by registration or by any other means.
### 29.5 Disclaimer excluding _Commission_ responsibility
Any dissemination of results must indicate that it reflects only the author's
view and that the _Commission_ is not responsible for any use that may be made
of the information it contains.
### 29.6 Consequences of non-compliance
If a beneficiary breaches any of its obligations under this Article, the grant
may be reduced (see Article 43). Such a breach may also lead to any of the
other measures described in Chapter 6.
# Appendix B: SEAI Information Security & Handling Policy
* SEAI Information Security Policy.pdf
* SEAI Information Classification Handling
**CONTACT DETAILS**
Ms. Patricia Comiskey Project Coordinator, SEAI
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement N°840651
Executive Summary
1 Introduction
2 Data summary
2.1 Purpose of data collection/generation in SENET
2.1.1 Data collection/generation in WP1
2.1.2 Data collection/generation in WP2
2.1.3 Data collection/generation in WP4
3 FAIR data principles
3.1 Making data findable (including provisions for metadata)
3.2 Making data openly accessible
3.2.1 Deposition in an open access repository
3.2.2 Methods or software tools needed to access the data
3.2.3 Restriction on use
3.2.4 Data Access Committee
3.2.5 Ascertainment of the identity of the person accessing the data
3.3 Making data interoperable
3.3.1 Interoperability
3.3.2 Standards or methodologies
3.3.3 Standard vocabularies and mapping to more commonly used ontologies
3.4 Increase data re-use
3.4.1 Data licenses
3.4.2 Date of data availability
3.4.3 Usability by third parties after the end of the project
3.4.4 Data quality assurance processes
4 Allocation of resources, data security and ethical aspects
4.1 Costs for making SENET data FAIR
4.2 Responsibility for data management
4.3 Data security
4.4 Ethical aspects
4.5 Other issues
5 Conclusion and further development of the DMP
# Executive Summary
This document represents deliverable 4.5 (D4.5) – the initial version of the
Data Management Plan (DMP) – elaborated in the framework of SENET. It has been
implemented by the project coordinator Steinbeis 2i GmbH (S2i) and has been
written as part of work package (WP) 4 – Impact maximisation: communication,
dissemination and exploitation.
During the lifetime of SENET, several activities, such as surveys, interviews
and Expert Group meetings, will involve the collection, processing and/or
generation of data, in order to obtain meaningful insights which will feed
back into the project. In this context, this initial version of the DMP
describes the data management procedures for all data to be collected,
processed and/or generated in the framework of SENET in line with the
Guidelines on FAIR Data Management in Horizon 2020 1 .
The DMP is intended to be a living document to which more detailed information
can be added through updates as the implementation of the project progresses
and when significant changes occur. The DMP will be updated at least twice
during the project’s lifetime – before the periodic review in month 18 and the
final review in month 30. Further updates to e.g. include new data, changes in
the consortium policies or composition may be implemented on an ad hoc basis.
The revision history (see page 3) and version number clearly indicate who has
implemented the changes and when.
_The terms and provisions of the EU Grant Agreement (and its annexes) and the
SENET Consortium Agreement will prevail in the event of any inconsistencies
with recommendations and guidelines defined in this deliverable D4.5._
# 1 Introduction
The SENET project participates in the Open Research Data Pilot (ORDP), which
aims to improve and maximise access to and re-use of research data generated
by Horizon 2020 projects. A DMP is required for all projects participating in
this pilot. The ORDP applies primarily to the data needed to validate the
results presented in scientific publications. Other data can also be provided
by the beneficiaries on a voluntary basis.
The DMP is a key element of good data management. It describes the data
management procedures for the data to be collected, processed and/or generated
by a Horizon 2020 project in line with the Guidelines on FAIR Data Management
2 .
In order to make research data findable, accessible, interoperable and re-
usable (FAIR), a DMP should include information on:
* the handling of research data during and after the end of the project,
* what kinds of data will be collected, processed and/or generated,
* which methodology and standards will be applied,
* whether data will be shared/made openly accessible and
* how data will be curated and preserved (including after the end of the project).
The SENET DMP will therefore help to:
* sensitise project partners to data management,
* describe the project specific data management procedures,
* assure continuity in data usage if project staff leave and new staff join,
* easily find project data when partners need to access it,
* avoid unnecessary duplication e.g. re-collection or re-working of data,
* keep data updated,
* make project results more visible.
This document presents the initial version of the DMP, delivered in month 6 of
the project. The DMP is intended to be a living document to which more
detailed information can be added through updates as the implementation of the
project progresses and when significant changes occur. The DMP will be updated
at least twice during the project’s lifetime – before the periodic review in
month 18 and the final review in month 30. Further updates to e.g. include new
data, changes in the consortium policies or composition may be implemented on
an ad hoc basis. The revision history (see page 3) and version number clearly
indicate who has implemented the changes and when. The next versions of the
DMP will go into more detail and describe the practical data management
procedures implemented by the
SENET partners.
# 2 Data summary
The following section describes the overall purpose of data
collection/generation, the types and formats of data generated and collected
throughout the project, the re-use of existing data and the data origin, the
expected size of data as well as data utility on WP level. A detailed list of
all data and their respective format to be made accessible by an open access
repository is included in section 3.2. The following section concentrates on
WPs 1, 2 and 4 since WPs 3, 5 and 6 only comprise confidential data.
## 2.1 Purpose of data collection/generation in SENET
The SENET project has three specific objectives. It aims to
1. identify health research and innovation challenges of common interest between the EU and China.
2. create a sustainable health networking and knowledge hub which facilitates favourable conditions for a dialogue between Chinese and EU research and innovation entities.
3. implement collaborative health research and innovation initiatives between the EU and China.
To this purpose, SENET collects and generates data for internal use and
further processing by the SENET project partners such as surveys, interview
transcripts, reports, plans, and focus group documentations (i.e. SENET Expert
Groups) as well as data that will be made accessible for external users such
as the project deliverables and communication and dissemination materials.
Being a Coordination and Support Action (CSA), SENET does not generate typical
research data. Analyses generated during the project’s lifetime will be made
accessible on a voluntary basis (if applicable and according to data
protection/ethical requirements).
### 2.1.1 Data collection/generation in WP1
The data generated in WP1 support the assessment of strategic health
priorities and the health research and innovation landscape in Europe and
China. The data will be generated via an online survey, (telephone) interviews
and desk research.
_Table 1: Data collection in WP1_
<table>
<tr>
<th>
Purpose of data collection/generation
</th> </tr>
<tr>
<td>
Elaboration of deliverables:
* D1.1 Scoping paper: Review on health research and innovation priorities in Europe and China
* D1.2 Map of major funding agencies and stakeholders in Europe and China
* D1.3 Guide for health researchers from Europe and China through the funding landscape
* D1.4 Strategy paper: Towards closer EU-China health research and innovation collaboration
</td> </tr>
<tr>
<td>
Types and formats
</td> </tr>
<tr>
<td>
* Deliverables – Format: .docx, .pdf
* Online survey results – Format: .xlsx
* Telephone interviews transcripts – Format: .docx/.pdf
* Desk research – Format: .docx, .pdf, .html
</td> </tr>
<tr>
<td>
Re-use of existing data
</td> </tr>
<tr>
<td>
Evidence from the literature (e.g. policy documents, agreements, grey
literature, academic studies) collected through desk research (see reference
lists in deliverables)
</td> </tr>
<tr>
<td>
Data origin
</td> </tr>
<tr>
<td>
* Primary data (online survey, interviews)
* Data from the literature
</td> </tr>
<tr>
<td>
Expected size
</td> </tr>
<tr>
<td>
Not yet known.
</td> </tr>
<tr>
<td>
Data utility
</td> </tr>
<tr>
<td>
The raw data (interview transcripts, survey results) generated in WP1 will not
be made openly accessible. These data will be useful for the SENET partners to
prepare the deliverables. The deliverables from WP1 will feed into the Expert
Group consultations in WP2. They may also be useful for other researchers /
consultants and policy makers.
</td> </tr> </table>
### 2.1.2 Data collection/generation in WP2
WP2 aims to develop a sustainable network between the EU and China to
facilitate a constant dialogue on addressing common health research and
innovation challenges and facilitating the identification of relevant topics
in healthcare. The data generated in WP2 comes from the SENET Expert Group
meetings / consultations.
_Table 2: Data collection in WP2_
<table>
<tr>
<th>
Purpose of data collection/generation
</th> </tr>
<tr>
<td>
* Elaboration of deliverables:
  * D2.1 Modus operandi – Operational manual for the meetings
  * D2.2 Initial roadmap for enhancing EU-China health research and innovation collaboration
  * D2.3 Strategic recommendations for health research and innovation collaborations
  * D2.4 Consolidated action plan for research and innovation priorities in health
* Planning / execution of Expert Group meetings
</td> </tr>
<tr>
<td>
Types and formats
</td> </tr>
<tr>
<td>
* Deliverables – Format: .docx, .pdf
* Stakeholder list – Format: .xlsx
* Event calendar – Format: .xlsx
* Expert Group meeting protocols/minutes – Format: .docx, .pdf
</td> </tr>
<tr>
<td>
Re-use of existing data
</td> </tr>
<tr>
<td>
WP2 will re-use data from WP1.
</td> </tr>
<tr>
<td>
Data origin
</td> </tr>
<tr>
<td>
* SENET project and consortium
* Desk research (event calendar)
* Primary data (Expert Group meeting protocols/minutes)
</td> </tr>
<tr>
<td>
Expected size
</td> </tr>
<tr>
<td>
Not yet known.
</td> </tr>
<tr>
<td>
Data utility
</td> </tr>
<tr>
<td>
The raw/sensitive data (stakeholder list, protocols/minutes) generated in WP2
will not be made openly accessible. These data will be useful for the SENET
partners to prepare the Expert Group meetings and deliverables. The
deliverables from WP2 may be useful for other researchers / consultants,
policy makers, funding and health authorities, programme owners and managers.
</td> </tr> </table>
### 2.1.3 Data collection/generation in WP4
WP4 delivers the formal structure and processes for the effective
communication and dissemination of project results. It thereby produces a wide
range of data in the form of online and printed communication materials such
as the website, newsletters, social media contributions, press releases and
flyers. The dissemination activities are described in D4.1 “Communication and
dissemination plan and material developed” and monitored in D4.4
“Communication and dissemination action report”.
_Table 3: Data collection in WP4_
<table>
<tr>
<th>
Purpose of data collection/generation
</th> </tr>
<tr>
<td>
* Elaboration of deliverables:
  * D4.1 Communication and dissemination plan and material developed
  * D4.2 Launch of SENET website / mobile app and report on functionalities
  * D4.3 Exploitation plan
  * D4.4 Communication and dissemination action report
  * D4.5 Data Management Plan
* Dissemination and communication materials
</td> </tr>
<tr>
<td>
Types and formats
</td> </tr>
<tr>
<td>
* Deliverables – Format: .docx, .pdf
* Presentations – Format: .pptx, .pdf
* Business card, flyer – Format: .docx, .pdf, printed
* Newsletters, press releases, other publications – Format: .docx, .pdf
* Project website – Format: .html
* Contact data of newsletter subscribers – Format: .csv
* Communication and dissemination monitoring – Format: .xlsx
* Social media analysis – Format: .csv
* Website analysis – Format: .xlsx, .pdf, .csv
</td> </tr>
<tr>
<td>
Re-use of existing data
</td> </tr>
<tr>
<td>
Data from other WPs will be re-used in WP4 (e.g. information for newsletters
from WP1 deliverables).
</td> </tr>
<tr>
<td>
Data origin
</td> </tr>
<tr>
<td>
SENET project
</td> </tr>
<tr>
<td>
Expected size
</td> </tr>
<tr>
<td>
Not yet known.
</td> </tr>
<tr>
<td>
Data utility
</td> </tr>
<tr>
<td>
The dissemination and communication materials developed in WP4 will be useful
to increase the project visibility and to inform stakeholders about the
project. The confidential data (some of the deliverables, contact data, etc.)
will be useful for the project partners.
</td> </tr> </table>
# 3 FAIR data principles
## 3.1 Making data findable (including provisions for metadata)
SENET data are currently not discoverable with metadata. Metadata may be
created after the project’s end for specific deliverables considered worth
identifying. Descriptive and administrative metadata will be created to
catalogue the project data after the end of the project.
Where applicable, data produced are identifiable and locatable by means of
search keywords.
In order to facilitate easy referencing of the data, a standard naming and
versioning convention will be employed, as follows:
_Project name + item name + version number_
1) Example for deliverables: SENET_Dx.x_shorttitle_vx.x.docx (or .pdf)
2) Example for documents at task level: SENET_Taskx.x_shorttitle_vx.x.docx (or .pdf, .pptx, etc.)
3) Example for documents not assignable to a specific task or deliverable: SENET_WPx_shorttitle_vx.x.docx
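A convention like this can be checked mechanically when files are stored. As a minimal sketch, the three patterns above could be validated with regular expressions (the helper name and the exact character classes are assumptions, not part of the SENET tooling):

```python
import re

# One pattern per naming rule given above: deliverable-level,
# task-level, and WP-level documents. Version numbers follow the
# "vx.x" form of the examples; allowed extensions mirror them too.
PATTERNS = [
    re.compile(r"^SENET_D\d+\.\d+_[A-Za-z0-9-]+_v\d+\.\d+\.(docx|pdf)$"),
    re.compile(r"^SENET_Task\d+\.\d+_[A-Za-z0-9-]+_v\d+\.\d+\.(docx|pdf|pptx)$"),
    re.compile(r"^SENET_WP\d+_[A-Za-z0-9-]+_v\d+\.\d+\.docx$"),
]


def follows_convention(filename: str) -> bool:
    """Return True if the filename matches any of the SENET naming rules."""
    return any(p.match(filename) for p in PATTERNS)


print(follows_convention("SENET_D4.5_DMP_v1.0.pdf"))  # True
print(follows_convention("SENET_report_final.docx"))  # False
```

Running such a check on a shared drive makes non-conforming names visible early, which is what keeps the versioning convention useful for referencing data.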
## 3.2 Making data openly accessible
### 3.2.1 Deposition in an open access repository
All SENET deliverables classified as “public” and other significant data such
as communication and dissemination materials are made openly accessible via
the SENET project website. The following table summarises these SENET’s
deliverables and other data.
_Table 4: Openly accessible data in SENET_
<table>
<tr>
<th>
Dataset
</th>
<th>
Dissemination
level
</th>
<th>
Format
</th>
<th>
Repository
</th> </tr>
<tr>
<td>
WP1
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
D1.1 Scoping paper: Review on health research and innovation priorities in
Europe and China
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website
</td> </tr>
<tr>
<td>
D1.2 Map of major funding agencies and stakeholders in Europe and China
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website
</td> </tr>
<tr>
<td>
D1.3 Guide for health researchers from Europe and China through the funding
landscape
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website
</td> </tr>
<tr>
<td>
D1.4 Strategy paper: Towards closer EU-China health research and innovation
collaboration
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
D2.2 Initial roadmap for enhancing EU-China health research and innovation
collaboration
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website
</td> </tr>
<tr>
<td>
D2.3 Strategic recommendations for health research and innovation
collaborations
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website
</td> </tr>
<tr>
<td>
D2.4 Consolidated action plan for research and innovation priorities in health
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
D4.5 Data Management Plan
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website
</td> </tr>
<tr>
<td>
SENET website
</td>
<td>
Public
</td>
<td>
HTML
</td>
<td>
Openly accessible via world wide web
</td> </tr>
<tr>
<td>
SENET business card
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website (and print)
</td> </tr>
<tr>
<td>
SENET flyer
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website (and print)
</td> </tr>
<tr>
<td>
SENET press releases
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website
</td> </tr>
<tr>
<td>
SENET newsletters
</td>
<td>
Public
</td>
<td>
PDF
</td>
<td>
Shared via SENET website
</td> </tr> </table>
Data containing personal information (e.g. name, email address) of individuals
is considered confidential under the General Data Protection Regulation (GDPR)
and cannot be made openly accessible. This refers in particular to:
* Transcripts / audio files from interviews (in order to guarantee that personal opinions cannot be linked to a specific individual).
* Individual results of the online stakeholder survey (as it was guaranteed that the survey was completely anonymous and that the results will be only used for the SENET project).
* Protocols/minutes of SENET Expert Groups (in order to guarantee that personal opinions cannot be linked to a specific individual).
* Files containing contact information such as the stakeholder list or newsletter contact list (due to GDPR).
In case certain data cannot be shared, the reasons for this will be mentioned
in further versions of the DMP (e.g. ethics, personal data, intellectual
property, privacy-related, security-related). In principle, data sharing and
re-use policies will comply with the privacy and ethics guidelines of the
SENET project. Consent will be requested from all external participants to
allow data to be shared and re-used. All sensitive data will be anonymised
before sharing.
### 3.2.2 Methods or software tools needed to access the data
No specific software tools are needed to access SENET data. Common software
such as Microsoft Word, Excel, PowerPoint and Adobe Acrobat Reader or an
alternative Open Office software are sufficient to gain access.
### 3.2.3 Restriction on use
SENET data can be shared and re-used with the exception of personal data
(which will be treated according to the GDPR). All SENET deliverables that are
classified as “public” will be made available through the SENET project
website.
### 3.2.4 Data Access Committee
If applicable, all data access issues will be discussed with the entire
consortium at any given time throughout the project’s lifetime.
### 3.2.5 Ascertainment of the identity of the person accessing the data
There is no way of ascertaining the identity of a person accessing the data
via the project website.
The project website and Twitter account are monitored using Google Analytics.
Google Analytics data cannot be related to an individual person but provides
helpful information for the analysis of what visitors do, like and share on
the project website and social media.
## 3.3 Making data interoperable
### 3.3.1 Interoperability
Data produced by the SENET project is interoperable: it can be exchanged and
re-used between researchers, institutions, organisations, countries, etc.
SENET data classified as “public” are generated according to standard formats,
are as compliant as possible with available (open) software applications and
can be used in combination with other datasets from different origins.
SENET data can be shared and re-used with the exception of personal data
(which will be treated according to the GDPR).
### 3.3.2 Standards or methodologies
SENET does not follow any specific data and metadata vocabularies, standards
or methodologies to make data interoperable. Data are stored on the openly
accessible project website and explained in the SENET deliverables. Mainstream
software is used to generate the data. The language used is English. Metadata
describing the datasets will be defined at a later stage of the project life
cycle.
### 3.3.3 Standard vocabularies and mapping to more commonly used ontologies
Standard vocabularies will be used for all data types present in the SENET
data set, wherever possible, to allow inter-disciplinary interoperability.
In case it is unavoidable that SENET uses uncommon or generates project
specific ontologies or vocabularies, mappings to more commonly used ontologies
will be provided.
## 3.4 Increase data re-use
### 3.4.1 Data licenses
The choice of licensing schemes will be discussed with all partners during the
next consortium meeting and the information will be updated in the next version
of the DMP. In case of generation of data subject to licensing, a scheme will
be picked to fit the needs of SENET’s open data, ensuring not only their
long-term preservation and re-use but also the interests of the consortium
along with the rights of individuals whose data has been collected.
At this point of the project, the use of a Creative Commons 4.0 licence seems
most likely.
### 3.4.2 Date of data availability
SENET deliverables are published on the project website and thus are made
available for use by third parties once they have been approved by the
European Commission.
Any other data will be made available for re-use immediately after the end of
the project, after careful evaluation on what should be kept confidential due
to privacy concerns and what will be shared openly.
No embargo to give time to publish or seek patents is foreseen at this point.
### 3.4.3 Usability by third parties after the end of the project
Apart from the data that has to be kept confidential due to privacy concerns,
data produced and generated during the project are useable by third parties
even after the end of the project. The SENET project website will be online
for two years after the project end (SENET open access repository). Hence,
third parties will be free to re-use the data.
The SENET project partners are obliged to preserve the project data for five
years after the project end (until June 2027). However, making data available and
re-usable indefinitely (e.g. by using other online repositories) will be
considered during the project’s lifetime where applicable.
### 3.4.4 Data quality assurance processes
The overall SENET project does not describe any data quality assurance
processes. Data quality is assured during the implementation of each task by
the respective project partner (quality assurance procedures described in D5.2
Quality assurance and risk management plan).
# 4 Allocation of resources, data security and ethical aspects
## 4.1 Costs for making SENET data FAIR
The SENET project website has been selected as an open access repository for
all public SENET data which is covered as part of the expenses in WP4.
Resources for long-term preservation have not been discussed.
## 4.2 Responsibility for data management
Each project partner is responsible for a reliable data management regarding
their work within the SENET project. S2i as the project coordinator and task
leader of 4.5 Data management is responsible for the overall data management
at project level.
## 4.3 Data security
Each project partner is responsible for the security and preservation of their
data and the consideration of the project’s ethical requirements (described in
D6.1, D6.2, D6.3 and D6.4).
To prevent data loss, the project partners’ servers are backed up regularly
and continuously.
Furthermore, the SENET project data are saved on an online platform
(Nextcloud). To keep the data secure, the long-term cloud-based backup is
encrypted and access is controlled. S2i as the project coordinator ensures
that all project partners can access the data securely by providing
password-controlled web access.
In the event of an incident, the data will be recovered according to the
necessary procedures of the data repository owner. The next version of the DMP
shall contain more details on the exact data recovery procedure that will be
adopted in SENET.
Personal data will be secured further by password-protecting the individual
documents. The passwords will be kept secure and only be shared with partners
who need to work with the data.
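The DMP does not prescribe how the password protection is implemented. As an illustrative sketch only (the function name and parameters below are hypothetical, not part of the SENET plan), a per-document key could be derived from the shared partner password with PBKDF2 from the Python standard library, so that the password itself never needs to be stored:

```python
import hashlib
import os


def derive_document_key(password, salt=None):
    """Derive a 256-bit key from a shared partner password (PBKDF2-HMAC-SHA256).

    The derived key could feed a symmetric cipher protecting an individual
    document; the salt must be stored alongside the protected file.
    """
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per document
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
    return key, salt


# The same password and salt always reproduce the same key:
key, salt = derive_document_key("partner-shared-secret")
key_again, _ = derive_document_key("partner-shared-secret", salt)
assert key == key_again and len(key) == 32
```

The salt and high iteration count make offline guessing of the shared password substantially more expensive than storing a plain password hash would.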
SENET data is not stored in certified repositories for long-term preservation
and curation.
## 4.4 Ethical aspects
SENET entails activities which involve the collection of data from selected
individuals (i.e. survey, interviews, SENET Expert Groups). The collection of
data from participants in these activities will be based upon a process of
informed consent. The participants’ right to control their personal
information will be respected at all times. The project coordinator S2i in
cooperation with the Steering Committee will deal with any ethical issues that
may arise during the project’s lifetime.
All SENET partners will conform to the Horizon 2020 Ethics and Data Protection
Guidelines, and any personal information will be handled according to the
principles laid out in the GDPR. Therefore, SENET project partners will only
collect and process data which is necessary to perform the research and
development activities of the project.
All relevant ethical aspects as identified and established by the SENET Ethics
Summary Report will be further described in a specific work package (WP6). In
this context, the following four deliverables will be submitted in project
month 12 (December 2019):
* D6.1 H - Requirement No. 1
* D6.2 POPD - Requirement No. 2
* D6.3 NEC - Requirement No. 3
* D6.4 NEC - Requirement No. 4
Further information on the ethics procedures implemented in the framework of
SENET are described in the Grant Agreement Annex 1 Part B Section 5.1.
## 4.5 Other issues
SENET does not make use of any other national/funder/sectorial/departmental
procedures for data management.
# 5 Conclusion and further development of the DMP
The present document represents the first version of the SENET Data Management
Plan established in
June 2018 (month 6). It sets a benchmark to identify the actions that need to
be implemented by the SENET project partners in order to fulfil the European
Commission’s requirements in terms of data management and accessibility of the
research data.
The DMP is a living document and will be updated and further developed during
the project’s lifetime. A second version will be elaborated before the
periodic review in month 18 (June 2020). The final version will be prepared
before for the final review in month 30 (June 2021).
_Table 5: DMP update timetable_
<table>
<tr>
<th>
Project month
</th>
<th>
Date
</th>
<th>
Responsible partner
</th>
<th>
Comments
</th> </tr>
<tr>
<td>
16
</td>
<td>
April 2020
</td>
<td>
Steinbeis 2i GmbH
</td>
<td>
Interim version of the DMP ready for periodic review in M18
</td> </tr>
<tr>
<td>
28
</td>
<td>
April 2021
</td>
<td>
Steinbeis 2i GmbH
</td>
<td>
Final version of the DMP ready for final review in M30
</td> </tr> </table>
# Executive Summary
The results and data of the BPR4GDPR project that are necessary for the
project’s purpose will be openly published to communicate and spread the
knowledge to all interested communities and stakeholders. In this context, the
privacy by default principle will be considered. Therefore, only data that is
needed for the validation of presented results in scientific publications will
be included within the Data Management Plan (DMP). All the other data that
will be generated within the project can be published on a voluntary basis as
stated in the DMP. Published results generate wider interest towards the
improvements achieved by the project in order to facilitate and potentiate
exploitation opportunities. The goal of this deliverable is listing
publishable results and research data and investigating the appropriate
methodologies and open repositories for data management and dissemination. The
BPR4GDPR’s partners aim to offer as much information as possible generated by
the project through open access as long as it does not adversely affect its
protection or use, and subject to legitimate interests and applicable laws.
Such information includes scientific publications issued by the BPR4GDPR
consortium, white papers published, open source code generated, anonymous
interview results, or mock-up datasets used for gathering customer feedback.
As can be seen in Figure 1, different research actions lead to different
ways of dissemination or exploitation. For dissemination and sharing, there
are two types of project result publishing: on the one hand, publications
with gold or green open access; on the other hand, deposited research data
whose access and use can be either restricted or free of charge. The
consortium aims to make those publications and research data available as far
as possible. However, not all
collected/generated data can be published openly, as it may contain
confidential personal and business information or other information that
deserves specific protection under applicable laws or applicable contractual
agreements between the interested parties. This kind of data must be
identified and protected accordingly.
**Figure 1: Open access strategy for publications and research data**
# Introduction
## Purpose of the Document
For a good Data Management, each project in the EC's Horizon 2020 program has
to define what kind of results will be generated or collected during the
project's runtime, as well as when and how the results will be published
openly. Consequently, the following DMP regards the whole data management
lifecycle of the Horizon 2020 project “BPR4GDPR”. For all results generated or
collected during BPR4GDPR, a description is provided including the purpose of
the document, the standards and metadata used for storage and the facility
used for sharing the data, based on the EC template recommended. In detail,
the purpose of the DMP is to give information about (European Commission,
2016, p. 2):
* the handling of research data during & after the project,
* what data will be collected, processed or generated,
* which methodology & standards will be applied,
* whether data will be shared/made open access and how,
* how data will be curated & preserved.
In this way, data will become “FAIR” (findable, accessible, interoperable,
reusable). Furthermore, data privacy within the project and the compliance
with the General Data Protection Regulation (Regulation EU 2016/679 – "GDPR")
will be set out. Finally, the result should be a data policy that leads the
consortium partners in executing a good data management and additionally
considers resources and budgetary planning for data management.
This document is an initial version, due in project month 6. The DMP will be
updated on a regular basis in the project months 12, 24 and 36 (see
Deliverables D1.6 to D1.8 – M12, M24 and M36 Data Management Plan). It does
not describe how the results are exploited, which is part of the deliverables
D7.2 to D7.4 (Initial, intermediate and final dissemination, standardisation
and exploitation plan). Instead, the updated DMP will contain information to
new datasets that have been collected or generated in the meantime as well as
changed consortium policies and other external factors. Nevertheless, future
versions will ensure consistency with the exploitation actions as well as
with the IPR requirements.
In particular, BPR4GDPR’s DMP will be useful for the project consortium itself
as well as for the European Commission. Furthermore, general public can
benefit from the document.
## Project Description
The objectives for BPR4GDPR are the following:
* A **reference compliance framework** that is reflecting the associated provisions and requirements for GDPR to facilitate compliance for organisations. This framework will serve as the codification of legislation.
* **Sophisticated security and privacy policies** through a comprehensive, rule-based framework capturing complex concepts in accordance with the data protection legislation and stakeholder needs and requirements.
* **By design privacy-aware process models** and underlying operations by provision of modelling technologies and tools that analyse tasks, interactions, control and data flows for natively compliant processes and workflow applications with security and privacy provisions and requirements.
* **Compliance-driven process re-engineering** through a set of mechanisms for automating the respective procedures regarding all phases of processes’ lifecycle and resulting in compliant-by-design processes.
* A configurable **compliance toolkit** that fits the needs of various organisations being subject to GDPR compliance and that incorporates functionalities for managing the interaction with the data subject and enforcing respective rights.
* The implementation of inherently offered **Compliance-as-a-Service (CaaS)** at the Cloud infrastructures of BPR4GDPR partners to achieve compliance at low cost to SMEs.
* Deployment of the BPR4GDPR technology and overall framework, corresponding to **comprehensive trials** that involve software companies, service providers and carefully selected stakeholders to assess the BPR4GDPR solution, to validate different deployment models and to define a market penetration roadmap.
* Profound **impact creation** in European research and economy, especially as regards the areas of data protection, security, BPM, software services, cloud computing, etc.
Along with these above-mentioned objectives, the BPR4GDPR data that needs to
be handled and that is described within the DMP is associated with project
results such as the regulation-driven policy framework, compliance-driven
process re-engineering, the compliance toolkit, process discovery and mining
enabling traceability and adaptability, Compliance-as-a-Service (CaaS) and
impact creation (a holistic innovation approach resulting in sustainable
business models).
## Terminology
**Open Access** : Open access means unrestricted access to research results.
Often the term open access is used for naming free online access to peer-
reviewed publications. Open access is expected to enable others to:

1. Build on top of existing research results,
2. Avoid redundancy,
3. Participate in open innovation, and
4. Read about the results of a project or inform citizens.
All major publishers in computer science – like ACM, IEEE, Elsevier, or
Springer - participate in the idea of open access. Both green or gold open
access levels are promoted. Green open access means that authors eventually
are going to publish their accepted, peer-reviewed articles themselves, e.g.
by deposing it to their own institutional repositories or digital archives.
Gold open access means that a publisher is paid (e.g. by the authors) to
provide immediate access on the publishers website and without charging any
further fees to the readers.
**Open Research Data** : Open research data is related to the long-term
deposit of underlying or linked research data needed to validate the results
presented in publications. Following the idea of open access, all open
research data needs to be openly available, usually meaning online
availability. In addition, standardized data formats and metadata have to be
used to store and structure the data. Open research data is expected to enable
others to:
1. Understand and reconstruct scientific conclusions, and
2. To build on top of existing research data.
**Metadata** : Metadata defines information about the features of other data.
Usually metadata is used to structure larger sets of data in a descriptive
way. Typical metadata refers to names, locations, dates, storage data type,
and relations to other datasets. Metadata is very important when it comes to
index and search larger data sets for a specific kind of information.
Sometimes metadata can be retrieved automatically from a dataset, but often
some manual classification is also needed. The well-known tags in
MP3-recordings are a good example of why metadata is necessary to find a
specific kind of genre or composer in a larger number of songs.
**FAIR Data:** To ensure a sustainable usage of Open Research Data, the
principle of “FAIR Data” should be met by the data in question as well as by
the underlying data infrastructure. Therefore, FAIR data should be **F**
indable, **A** ccessible, **I** nteroperable and **R** eusable. In detail,
this means:
Findable:
* Discoverability of data (standard identification mechanisms, naming conventions, search keywords) Approach for clear versioning
* Metadata provision and possible used standards for metadata creation
Accessible:
* Description of openly available and closed data (with reasons) and the process to make them available
* Definition of methods or software tools needed to access data
* Specification where data, associated metadata, documentation and code are deposited
Interoperable:
* Assessment of interoperability of project data (What data and metadata vocabularies, standards or methodologies?)
* Existence of standard vocabulary or commonly used ontologies for all data types in the data set
Reusable:
* Licencing of data for maximum reuse
* When will data be made available for reuse (why/for what is data embargo needed)
* Are Produced/used data reusable by third parties after project end? Why restricted?
* Data quality assurance processes
* Specification of time length for which data will be reusable
## Structure of the Document
The rest of the document is structured into four further sections.
Section 3 handles the general structuring of the data within the project,
meaning data set reference and naming as well as the usage of metadata
standards that will give the framework for the metadata template.
Section 4 defines the strategy that will be applied to all results collected
or generated during BPR4GDPR for sharing and preservation and contains a
summary of all publishing platforms to be used by the BPR4GDPR consortium.
Included is a process that defines if a result has to be published or not.
Moreover, the security of data sharing and data preservation will be taken
into consideration.
Section 5 considers costs that go along with the data management, usage of
sharing and preservation platforms and availability of open access.
Furthermore, responsibilities for data management actions including security
and quality issues will be defined.
Section 6 lists publications and other public related data(sets) that are
already or may be generated or collected during BPR4GDPR. For each result, a
short description, the chosen way of open access, and a long-term storage
solution are specified according to the EC's data management guidelines
(European Commission, 2016) and by using the metadata template presented in
Section 3.
# Data Structure
A first step to make the data in the BPR4GDPR project “FAIR” is to give the
data some structure. This means a consistent naming of the data that makes
them easier findable and that includes clear versioning and the commitment to
metadata standards for better tracing of existing and future data. Through
standardized information within a metadata template, like for example the data
set type, discoverability of the data can be increased. Moreover, it is easier
for applications to consume and process the metadata for assessing the value
of the data and for further usage.
The data title itself should also include some metadata, which help to
increase data handling and working efficiency. Possible metadata components
for the data naming are the title, version number, prefixes, linkages to work
packages or tasks, the dataset topic, creation date or the modification date.
In the case of BPR4GDPR, especially the dataset date and a versioning number
should be used for a higher transparency of data modifications as well as the
linkage to the work package for a thematic classification of the data. The
usage of these metadata components results in the following data naming:
_“BPR4GDPR_WP-No._Version-Date_Title_Deliverable-No._Version number”_
However, the metadata component “Deliverable-No.” is just optional due to the
fact that not each dataset can be directly linked to a specific deliverable.
An example for such a dataset naming could be the following:
_BPR4GDPR_WP1.1_20180920_M6 Data Management Plan_D1.5_V3_
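The naming convention can be made mechanical. The following sketch (the function name and parameter names are illustrative, not part of the DMP) composes a dataset name from its metadata components, treats the deliverable number as optional, and reproduces the example above:

```python
from datetime import date


def dataset_name(wp, title, version, deliverable=None, on=None):
    """Compose a dataset name following the convention
    BPR4GDPR_WP-No._Version-Date_Title[_Deliverable-No.]_Version-number.

    The deliverable component is optional, since not every dataset can be
    linked directly to a specific deliverable.
    """
    on = on or date.today()
    parts = ["BPR4GDPR", f"WP{wp}", on.strftime("%Y%m%d"), title]
    if deliverable is not None:
        parts.append(f"D{deliverable}")
    parts.append(f"V{version}")
    return "_".join(parts)


name = dataset_name("1.1", "M6 Data Management Plan", 3,
                    deliverable="1.5", on=date(2018, 9, 20))
# → "BPR4GDPR_WP1.1_20180920_M6 Data Management Plan_D1.5_V3"
```

Encoding the convention once in a helper like this keeps names consistent across partners and makes the version date and number trivially parseable later.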
In this context, a metadata template can be generated including information
that goes beyond the metadata that can be deduced from the dataset naming.
Apart from standard information as title, creation date or language, this
template comprises further aspects, like the data origin, expected size of the
dataset, a general description of the data, reference to publications,
keywords belonging to the data or target group. This metadata template shall
be additionally saved within the repository. The following Table 1 shows such
a template to describe data that will be produced in the context of BPR4GDPR.
**Table 1: BPR4GDPR Metadata Template**
<table>
<tr>
<th>
**Initial Dataset Template**
</th> </tr>
<tr>
<td>
**Dataset reference name**
</td>
<td>
Identifier for the data set to be produced using the above described naming
convention.
</td> </tr>
<tr>
<td>
**Dataset title**
</td>
<td>
The easy searchable and findable title of the dataset.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
Description of the data that will be generated or collected, its origin (in
case it is collected), nature and scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existences
(or not) of similar data and the possibilities for integration and reuse.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created.
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
List of keywords that are associated to the dataset.
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling reuse, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository
</td> </tr>
<tr>
<td>
</td>
<td>
where data will be stored, if already existing and identified, indicating in
particular the type of repository (institutional, standard repository for the
discipline, etc.).
</td> </tr>
<tr>
<td>
**Archiving and preservation**
**(including storage and backup)**
</td>
<td>
Description of the procedure that will be put in place for long-term
preservation of the data. Indication of how long the data should be preserved,
what is its approximated end, volume, what the associated costs are and how
these are planned to be covered.
</td> </tr>
<tr>
<td>
**Additional Dataset explanation**
</td> </tr>
<tr>
<td>
**Discoverable**
</td>
<td>
Are the data and associated software produced and / or used in the project
discoverable (and readily located), identifiable by means of a standards
identification mechanism? (e.g. Digital Object Identifier)
</td> </tr>
<tr>
<td>
**Accessible**
</td>
<td>
Are the data and associated software produced and / or used in BPR4GDPR
accessible and in what modalities, scope, licenses? (e.g. licencing framework
for research and education, embargo periods, commercial exploitation, etc.)
</td> </tr>
<tr>
<td>
**Assessable and intelligible**
</td>
<td>
Are the data and associated software produced and / or used in the project
assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review? (e.g. Are the minimal datasets handled
together with scientific papers for the purpose of peer review? Are data
provided in a way that judgments can be made about their reliability and the
competence of those who created them?)
</td> </tr>
<tr>
<td>
**Usage beyond the original purpose for which it was collected**
</td>
<td>
Are the data and associated software produced and / or used in BPR4GDPR
useable by third parties even long time after the collection of the data?
(e.g. Is the data safely stored in certified repositories for long term
preservation and curation? Is it stored together with the minimum software,
metadata and documentation to make it useful? Is the data useful for the wider
public needs and usable of the likely purpose of non-specialists?)
</td> </tr>
<tr>
<td>
**Interoperable to specific quality standards**
</td>
<td>
Are the data and associated software produced and / or used in the project
interoperable allowing data exchange between researchers, institutions,
organisations, countries, etc.? (e.g. adhering to standards for data
annotation, data exchange, compliant with available software applications, and
allowing recombinations with different datasets from different origins)
</td> </tr> </table>
As recommended by the European Commission, also the usage of metadata
standards should be regarded. Such a metadata standard is a document that
defines how metadata will be tagged, used, managed, formatted, structured or
transmitted. Besides standardized data formats as CSV, PDF and DOC/DOCX for
texts and tables, PPT for presentations, JPEG, PNG and GIF for images, or the
XES-format for event logs that are used in BPR4GDPR to exchange event-driven
data in a unified and extensible manner, also other (meta-)data standards are
considered. For example, RDF (Resource Description Framework) can be used in
the case of BPR4GDPR. RDF was originally designed as a metadata standard by
the World Wide Web Consortium (W3C), but has since become a fundamental
element of the semantic web, used also to formulate logical statements. Such
an RDF-based vocabulary is DCAT
(data catalog vocabulary). This standard was created to improve
interoperability between the data catalogs on the web in which data are
described. The DCAT vocabulary is listed in the following and can be used for
the dataset description:
* dct:identifier to provide the dataset’s unique identifier
* dct:title to give the dataset a specific title
* dct:theme to provide the main theme(s) of the dataset
* dct:description to describe the dataset with free-text
* dct:issued to provide the date of the issuance/publication of the dataset
* dct:modified to provide the date of the latest dataset modification/change/update
* dct:language to mention the dataset language
* dct:publisher to state the responsible entity that published the dataset/made it available
* dcat:keyword to provide several keywords that describe the dataset
* dcat:temporal to state the dataset’s temporal period
* dcat:distribution to link the dataset with all available distributions
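As a sketch of how the listed terms could describe a BPR4GDPR dataset, the following hand-rolls a small Turtle snippet using a subset of the vocabulary. The function, its parameters, and the `urn:bpr4gdpr:` identifier scheme are illustrative assumptions; a real implementation would more likely use an RDF library such as rdflib, and the serialisation is kept dependency-free here only for brevity:

```python
def dcat_turtle(identifier, title, description, issued, language,
                publisher, keywords):
    """Render a dataset description as a Turtle snippet using DCT/DCAT terms."""
    lines = [
        "@prefix dct: <http://purl.org/dc/terms/> .",
        "@prefix dcat: <http://www.w3.org/ns/dcat#> .",
        "",
        f"<urn:bpr4gdpr:{identifier}> a dcat:Dataset ;",
        f'    dct:identifier "{identifier}" ;',
        f'    dct:title "{title}" ;',
        f'    dct:description "{description}" ;',
        f'    dct:issued "{issued}" ;',
        f'    dct:language "{language}" ;',
        f'    dct:publisher "{publisher}" ;',
    ]
    lines += [f'    dcat:keyword "{k}" ;' for k in keywords]
    # Terminate the final statement with "." instead of ";".
    lines[-1] = lines[-1].rstrip(" ;") + " ."
    return "\n".join(lines)


snippet = dcat_turtle("D1.5-V3", "M6 Data Management Plan",
                      "Initial DMP of the BPR4GDPR project.",
                      "2018-09-20", "en", "BPR4GDPR consortium",
                      ["data management", "GDPR"])
print(snippet)
```

Emitting machine-readable DCAT alongside the human-readable metadata template makes the datasets harvestable by catalog aggregators, which directly serves the findability goal.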
# Data Management Strategy
The BPR4GDPR Data Management Strategy consists of a publishing process to
divide public from non-public data and strategies for data sharing as well as
archiving and preservation that together provide long-term open access to all
publishable, generated or collected results of the project. The implementation
of the project complies with laws at a national and EU level and especially
with GDPR in relation to the protection of personal data of individuals. More
specifically, there will be no cases where personal information or sensitive
information of internet users or other involved persons is collected (IP
addresses, email addresses or other personal information) or processed. For
the whole duration of the project, from the beginning to its end, the Data
Protection Manager (DPM – Mrs. Francesca Gaudino (BAK)) will carefully examine
the legality of the activities and the tools (including platforms) that will
be produced for not violating the personal data of internet users or other
involved persons. In the potential future case where the BPR4GDPR consortium
will collect, record, store or process any personal information, it will be
ensured that this will be done on a basis of respecting citizens’ rights,
preventing their identification and keeping their anonymization. The
publishing process as well as the data sharing, archiving and preservation
strategies are described in the following subsections. Furthermore, it will be
explained how data security will be handled within the process and strategies.
Through the whole data management strategy the consistency with the project’s
exploitation actions and IPR requirements, as well as compliance with WP8
Ethics requirements will be guaranteed. As set in the DoW of BPR4GDPR, the
project’s partners ensure to share and disseminate their own knowledge as far
as it does not adversely affect its protection or use. Furthermore, the IPR
consortium agreement takes into consideration a workshop after the project end
in order to provide a list of all generated results with a separate decision
of joint or single ownership for each result. Eventually, first suggestions
for usable sharing platform have been mentioned as the open access
infrastructure for Research in Europe “OpenAIRE”, the scholarly open access
repository arXiv, BPM Center or the project portal itself.
## Publishing Process
A simple and deterministic process has been defined that decides if a result
in BPR4GDPR has to be published or not. The term “result” is used for all kind
of artefacts generated during BPR4GDPR like white papers, scientific
publications, and anonymous usage data. By following this process, each result
is either classified as public or non-public. Public means that the result must be
published under the open access policy. Non-public means that it must not be
published.
For each result generated or collected during BPR4GDPR runtime, the following
questions have to be answered to classify it:
1. _Does a result provide significant value to others or is it necessary to understand a scientific conclusion?_
If this question is answered with yes, then the result will be classified as
public. If this question is answered with no, the result will be classified as
non-public. Such a result could be code that is very specific to BPR4GDPR
platform (e.g. a database initialization) which is usually of no scientific
interest to anyone, nor add any significant contribution.
2. _Does a result include personal information that is not the author's name?_
If this question is answered with yes, the result will be classified as non-
public. Personal information beyond the name must be removed if the result
is to be published. This also reflects the iterative nature of the publishing
process, where results initially deemed non-publishable can become publishable
once privacy-related information or other information subject to
confidentiality obligations is removed from them.
3. _Does a result allow the identification of individuals even without the name?_
If this question is answered with yes, the result is classified as non-public.
Sometimes data inference can be used to superimpose different user data and
reveal indirectly a single user's identity. As such, in order to make a result
publishable, the included information must be reduced to a level where single
individuals cannot be identified. This can be performed by using established
anonymisation techniques to conceal a single user's identity, e.g.,
abstraction, dummy users, or non-intersecting features.
4. _Does a result include business or trade secrets of one or more partners of BPR4GDPR?_
If this question is answered with yes, the result is classified as non-public,
except if the opposite is explicitly stated by the involved partners. Business
or trade secrets need to be removed in accordance to all partners'
requirements before it can be published.
5. _Does a result name technologies that are part of an ongoing, project-related patent application?_
If this question is answered with yes, then the result is classified as non-
public. Of course, results can be published after the patent has been filed.
6. _Can a result be abused for a purpose that is undesired by society in general or contradict with societal norms and BPR4GDPR’s ethics?_
If this question is answered with yes, the result is classified as non-public.
7. _Does a result break national security interests for any project partner?_
If this question is answered with yes, the result is classified as non-public.
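The screening questions above form a simple decision procedure: a result is classified as public only when every question is answered with "no", with the explicit-consent exception for trade secrets. A minimal sketch, assuming hypothetical question labels and flags that are not part of the DMP itself:

```python
# Illustrative sketch (not project code) of the publishing decision
# procedure: a result is "public" only if all screening answers are "no".
# Question labels are paraphrased assumptions; the partner-consent
# exception for trade secrets is modelled as a simple override flag.

def classify_result(answers, partners_consent_to_trade_secrets=False):
    """answers: mapping of question label -> bool ("yes" = True)."""
    blocking = dict(answers)
    # Trade secrets may still be published if the involved partners agree.
    if partners_consent_to_trade_secrets:
        blocking.pop("trade_secrets", None)
    return "non-public" if any(blocking.values()) else "public"

screening = {
    "personal_info": False,      # personal information beyond the author's name
    "identifiable": False,       # individuals identifiable even without names
    "trade_secrets": False,      # business or trade secrets of a partner
    "patent_pending": False,     # technologies in an ongoing patent application
    "abuse_potential": False,    # abusable against societal norms / ethics
    "national_security": False,  # national security interests of a partner
}
```

With all answers `False`, `classify_result(screening)` yields `"public"`; any single `True` flips the classification to `"non-public"` unless the trade-secret consent applies.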
## Data Sharing
Through the publishing process described above, all data that cannot be
published for specific reasons, such as ethical, privacy- or security-related
issues, have been identified. All other data, classified as
publishable/public, are considered in the following sections of the
deliverable.
For sharing data among the consortium partners, a Nextcloud repository has
been set up. It was selected because it allows secure and convenient sharing
of documents between the partners via a web interface and on several devices.
Nextcloud is extensible with further plugins and applications and is hosted by
the consortium partner Università di Roma “Tor Vergata”. Access to the
repository is granted only to consortium members. Nextcloud supports the
assignment of rights, such as re-sharing, creation, change, deletion and a
settable expiration date.
For public sharing in BPR4GDPR, the consortium partners use several platforms
to publish their results openly and to make them available for re-use. All
consortium partners should make their generated results available as quickly
as possible, unless reasons identified along the publishing process (see
Section 4.1) classify them as non-public. The following list presents a
selection of platforms that should be considered for data sharing and
describes their concepts for publishing, storage and backup.
After all relevant datasets and results that can be published (i.e., those not
identified as “non-public”) have been selected, the datasets/documents should
be archived in a selected repository upon acceptance for publication. In this
manner, either the publisher's final version of a paper or the final
manuscript accepted for publication, both including peer-review modifications,
should be deposited. The choice of repository depends on the dataset type:
while some repository platforms only accept publications, others also accept
datasets, whether or not the dataset is linked to a publication.
**4.2.1 Data Sharing Platforms**

**Project Website/Project Portal:**
The partners in the project consortium decided to setup a project-related
website. This website describes the mission, the objectives, the benefits and
impact, as well as the general approach of BPR4GDPR and its development
status. Moreover, all interesting news considering announcements, conferences
and events or other related information are disseminated on a regular basis.
Later in the project, the developed BPR4GDPR policy framework and compliance
toolkit will be announced. A dedicated area for downloads is made available in
order to publish reports and white papers as well as scientific publications.
All documents are published using the portable document format (PDF). All
downloads are enriched by using simple metadata information, such as the title
and the type of the document. The website is hosted by partner Eindhoven
University of Technology. All webpage-related data is backed up on a regular
basis. All information on the project website can be accessed without creating
an account. Web-Link: _http://www.bpr4gdpr.eu/_
**OpenAIRE:**
OpenAIRE is an Open Access infrastructure for Research in Europe, recommended
by the European Commission, that provides access to research results funded by
FP7 and ERC resources. OpenAIRE makes it possible to monitor, identify,
deposit and access research outcomes throughout Europe and supports the
potential of collaboration among international open access repositories.
Through workflows that go beyond repository content, interoperability across
several repositories is achieved.
The project started in December 2009 and aimed to support the implementation
of Open Access in Europe. For the development of the infrastructure, state-of-
the-art software services that have been generated within the DRIVER and
DRIVER-II projects as well as repository software by CERN have been used.
Especially research data on areas like health, energy, environment or ICT are
deposited via OpenAIRE. Through this platform, researchers and universities
that are involved in a Horizon 2020 project are supported in fulfilling the
EC’s open access policy. Web-Link: _http://www.openaire.eu_
**Zenodo:**
Zenodo is a research data archive/online repository which helps researchers to
share research results in a wide variety of formats for all fields of science.
It was created through the EC's OpenAIRE+ project and is now hosted at CERN on
one of Europe's most reliable hardware infrastructures. Data is backed up
nightly and replicated to different locations. Zenodo not only supports the
publication of scientific papers or white papers, but also the publication of
any structured research data (e.g., using XML). Zenodo provides a connector to
GitHub that supports open collaboration for source code and versioning for all
kinds of data. All uploaded results are structured by using metadata, like for
example the contributors’ names, keywords, date, location, kind of document,
license and others. All metadata is licensed under CC0 license (Creative
Commons ‘No Rights Reserved’). The property rights or ownership of a result
does not change by uploading it to Zenodo. Web-Link: _http://zenodo.org_
**arXiv:**
The highly automated electronic archive “arXiv” is another scholarly open
access repository for preprints, hosted by the Cornell University Library.
This distribution server concentrates on research articles in technical areas
such as mathematics, statistics, physics, computer science or electrical
engineering. Nontechnical information should not be shared over this platform.
The repository is guided and maintained by an arXiv Scientific Advisory Board
and an arXiv Member Advisory Board consisting of scientists from the
scientific cultures it serves. Subject specialists additionally review
submissions for their relevance and their compliance with standards. Moreover,
an endorsement by an already renowned author is necessary to deposit articles
on arXiv. For publishing, data formats such as PDF or LaTeX are possible.
Web-Link: _http://arxiv.org_

**GitHub:**
GitHub is a well-established online repository, which supports distributed
source code development, management and revision control. It is primarily used
for source code data. It enables world-wide collaboration between developers
and provides also some facilities to work on documentation and to track
issues. The platform uses metadata like contributors’ nicknames, keywords,
time, and data file types to structure the projects and their results. The
terms of service state that no intellectual property rights are claimed by the
GitHub Inc. over provided material. For textual metadata items, English is
preferred. The service is hosted by GitHub Inc. in the United States. GitHub
uses a rented Rackspace hardware infrastructure where data is backed up
continuously to different locations.
Web-Link: _https://github.com/_
**BPM Center:**
The BPM Center is a center founded in 2004 at Eindhoven University of
Technology in collaboration with Queensland University of Technology in
Australia explicitly for research in the Business Process Management (BPM)
field. The virtual research center handles business processes along all the
lifecycle that covers phases like the process modelling , process monitoring
or process mining. Especially in case of BPR4GDPR that handles Business
Process Re-engineering in accordance with GDPR and due to the fact that the
repository derives from the partner TU/e, this research center plays an
interesting role.
Another opportunity for Eindhoven University of Technology to share data and
maximise its value for others is the 4TU programme. The four universities of
technology in the Netherlands set up this programme with the aim of exploiting
technological knowledge as far as possible. In particular, the 4TU Centre for
Research Data is the most prestigious technical and scientific data archive in
the Netherlands.
Web-Links: _http://bpmcenter.org/_ _https://www.4tu.nl/en/_
All public results generated or collected during the project lifetime will be
uploaded to one of the above-mentioned repositories for long-term storage and
open access. The choice of an adequate repository depends on the dataset type:
source-code components will be published differently than publications.
Furthermore, the sharing platform will be selected depending on the target
group that could be interested in the data.
### Artefact Types
This section enumerates specific datasets, software and research items (from
which publications can be produced) and indicates whether, and by what means,
these artefacts can be published. Each type of artefact requires a specific
kind of sharing platform. The artefacts are deduced from the expected results
in the BPR4GDPR proposal.
**Table 2: Project Artefacts**
<table>
<tr>
<th>
**Artefact Type**
</th>
<th>
**Artefact**
</th>
<th>
**Possible Publication Means**
</th> </tr>
<tr>
<td>
**Research Item**
</td>
<td>
Regulation-driven policy framework
</td>
<td>
OpenAIRE, Zenodo, arXiv, project website
</td> </tr>
<tr>
<td>
**Research Item**
</td>
<td>
Impact creation – holistic innovation approach resulting in sustainable
business models
</td>
<td>
OpenAIRE, Zenodo, arXiv, project website
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Compliance-driven process reengineering
</td>
<td>
OpenAIRE, Zenodo, GitHub, BPM Center
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Compliance toolkit
</td>
<td>
OpenAIRE, Zenodo, GitHub
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Process discovery and mining enabling traceability and adaptability
</td>
<td>
OpenAIRE, Zenodo, GitHub, BPM Center
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Compliance-as-a-Service (CaaS)
</td>
<td>
OpenAIRE, Zenodo, GitHub
</td> </tr>
<tr>
<td>
**Dataset**
</td>
<td>
Anonymous usage statistics
</td>
<td>
OpenAIRE, Zenodo, 4TU
</td> </tr>
<tr>
<td>
**Dataset**
</td>
<td>
Use case data
</td>
<td>
Non-publishable
</td> </tr> </table>
## Archiving and Preservation
Within the data management plan, also the long-term preservation of the data,
that goes beyond the project lifetime, has to be considered. For these
preservation procedures, the duration of data preservation, end volume of the
data and preservation mediums will be regarded.
The preservation of BPR4GDPR data will be handled via institutional and freely
usable platforms. In particular, Zenodo and 4TU appear to be adequate
archiving solutions, as these platforms do not incur any additional costs for
storing data in the repository. Through linked metadata templates, as well as
through a Digital Object Identifier (DOI) assigned to each upload, the data
can be made more findable and accessible. On top of that, Zenodo constitutes a
repository for a variety of topics that is recommended by the European
Commission, is based on OpenAIRE, and accepts all kinds of dataset types.
OpenAIRE itself represents a similarly attractive archiving repository,
equally proposed as an Open Access repository by the European Commission.
By using at least one of the described repositories, the assumed preservation
time will be set to 5 years to guarantee long-term access and re-usability of
the project data. To this end, the consortium will strive to provide data of
maximal quality and validity to ensure data usability during this preservation
time. Datasets will therefore be updated whenever adjusted data become
available.
However, the archiving and preservation strategy could possibly be altered and
updated during the project lifetime since further advantages or disadvantages
of several repositories could be identified during this time, which could lead
to a necessary adjustment of the preservation intentions. Possible amendments
will be documented in a later, updated version of the data management plan
(see deliverables D1.6 to D1.8 – M12, M24 and M36 Data Management Plan).
## Data Security
Data security is considered during the management of data in accordance with
the GDPR and applicable laws on data protection and data security. For this
reason, the publishing process in Section 4.1 deals with the separation of
publishable and non-public data and results arising within BPR4GDPR. As a
result, only data that are necessary for the purpose of the project will be
processed, in accordance with the privacy-by-default principle.
In case personal identification data are of real scientific interest, specific
measures will be deployed to anonymise the data, such as the use of aggregated
and statistical data, so that personal identification data are transformed
into information that cannot identify an individual anymore, not even
indirectly.
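The aggregation-based anonymisation described above can be sketched as follows (illustrative only, not project code; the record fields and the threshold `k` are assumptions):

```python
# Illustrative sketch: anonymising personal records by aggregation.
# Individual records are replaced with per-group statistics, and groups
# smaller than a threshold k are suppressed so that no individual can be
# identified, not even indirectly. Field names are hypothetical.
from statistics import mean

def aggregate_anonymise(records, group_key, value_key, k=5):
    """Replace individual records with {count, mean} per group,
    dropping any group with fewer than k members."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec[value_key])
    return {
        g: {"count": len(vals), "mean": mean(vals)}
        for g, vals in groups.items()
        if len(vals) >= k  # small groups could reveal identities
    }
```

For example, survey answers grouped by department would be reduced to counts and averages, and a department with fewer than `k` respondents would be omitted entirely rather than published.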
As already mentioned in Section 4.2, the data handling among the consortium
partners takes place via Nextcloud. Data security on this repository will be
ensured by hosting the data at a partner’s private server (Università di Roma
“Tor Vergata”) and by using a platform that covers several security issues and
is itself compliant with the GDPR. This is assured by reliance on the EU
authentication platform and security protocols for data sharing, strict
policies for granting and revoking platform access (access to the repository
is only granted to consortium members), and the recording of user identity
during data access, download, and upload. In this way, Nextcloud enables the
project to assign rights to specific data (re-sharing, creation, change,
deletion or a settable expiration date). On top of that, Nextcloud is
extensible for further security plugins.
The repositories used for public datasets and results also comply with
security and privacy regulations. For example, data on the Zenodo platform are
stored on the same infrastructure as CERN's research data, hosted on reliable
hardware in Europe. Furthermore, data on those repositories are backed up on a
regular basis.
# Costs and Responsibilities
To manage the data within BPR4GDPR, the costs necessary for making research
data “FAIR” (see Section 2.3) also have to be estimated and taken into
consideration. These costs arise from the measures required for integrating
data into a platform and for storing data during the project lifetime and
afterwards. However, costs associated with open access to research data within
any Horizon 2020 project can be treated as eligible costs.
Nonetheless, the repositories selected for data sharing and preservation are,
on the one hand, free for researchers to deposit their publications (see
Zenodo, arXiv, etc.) and, on the other hand, freely accessible to the public.
For example, arXiv only offers a special membership programme for arXiv's
heaviest institutional users. In contrast, GitHub provides paid as well as
free service plans: free service plans can have any number of public,
open-access repositories with unlimited collaborators, whereas private,
non-public repositories require a paid service plan. Many open-source projects
use GitHub to share their results for free. Beyond that, no further costs
relating to data management during and after the project runtime are
anticipated at this stage. In case of any alteration, further emerging costs
will be outlined in an updated data management plan (D1.6 to D1.8 – M12, M24
and M36 Data Management Plan).
Compliance of the project’s data management with the described security and
cost-related issues will be handled by the project’s Data Protection Manager.
Ensuring that data collection and processing within the project align with EU
and national legislation is a main part of this role; it includes security
assessments and the reporting of potential security threats. In BPR4GDPR,
Mrs. Francesca Gaudino (BAK) will be responsible for this task.
However, the respective data owners are responsible for data preparation as
well as for the relevance, quality and currency of the data. Data preparation
covers anonymisation and processing to make the data ready for publishing. On
top of that, data owners have to ensure that uploaded data comply with the
conditions defined in the project’s data management plan. Completing the
specific metadata templates for each dataset is likewise the responsibility of
the data owners.
In turn, the BPR4GDPR partner that is hosting the Nextcloud repository for the
project’s internal data exchange (Università di Roma) is responsible for the
maintenance of this repository and its components as well as for the user
group management.
# Project Results
In this section the metadata template introduced in Section 3, that describes
the data within BPR4GDPR, will be used for the current project results. Every
use case will provide such a template to share the gained knowledge as far as
feasible and in relation to the described publishing process and its questions
in Section 4.1. This will occur during the project duration, whereby the data
management plan will be regularly updated in this time (see deliverables D1.6
to D1.8 – M12, M24 and M36 Data Management Plan).
# Executive Summary
The results and data of the BPR4GDPR project that are necessary for the
project’s purpose will be openly published to communicate and spread the
knowledge to all interested communities and stakeholders. In this context, the
privacy by default principle will be considered. Therefore, only data that is
needed for the validation of presented results in scientific publications will
be included within the Data Management Plan (DMP). All the other data that
will be generated within the project can be published on a voluntary basis as
stated in the DMP. Published results generate wider interest towards the
improvements achieved by the project in order to facilitate and potentiate
exploitation opportunities. The goal of this deliverable is to list
publishable results and research data and to investigate the appropriate
methodologies and open repositories for data management and dissemination. The
BPR4GDPR partners aim to offer through open access as much of the information
generated by the project as possible, as long as this does not adversely
affect its protection or use, and subject to legitimate interests and
applicable laws. Such information includes scientific publications issued by
the BPR4GDPR consortium, published white papers, generated open-source code,
anonymous interview results, and mock-up datasets used for gathering customer
feedback.
As can be seen in Figure 1, different research actions lead to different ways
of dissemination or exploitation. For dissemination and sharing, there are two
types of project result publishing: publications, which can have gold or green
open access, and deposited research data, whose access and use can be either
restricted or free of charge. The consortium aims to make these publications
and research data available as far as possible. However, not all
collected/generated data can be published openly, as they may contain
confidential personal and business information, or other information that
deserves specific protection under applicable laws or contractual agreements
between the interested parties. This kind of data must be identified and
protected accordingly.
**Figure 1: Open access strategy for publications and research data**
# Introduction
## Purpose of the Document
For good data management, each project in the EC's Horizon 2020 programme has
to define what kinds of results will be generated or collected during the
project's runtime, as well as when and how the results will be published
openly. Consequently, the following DMP covers the whole data management
lifecycle of the Horizon 2020 project “BPR4GDPR”. For all results generated or
collected during BPR4GDPR, a description is provided, based on the recommended
EC template, including the purpose of the document, the standards and metadata
used for storage, and the facility used for sharing the data. In detail, the
purpose of the DMP is to give information about:
(European Commission, 2016, p. 2)
* the handling of research data during & after the project,
* what data will be collected, processed or generated,
* which methodology & standards will be applied,
* whether data will be shared/made open access and how,
* how data will be curated & preserved.
In this way, data will become “FAIR” (findable, accessible, interoperable,
reusable). Furthermore, data privacy within the project and the compliance
with the General Data Protection Regulation (Regulation EU 2016/679 – "GDPR")
will be set out. Finally, the result should be a data policy that leads the
consortium partners in executing a good data management and additionally
considers resources and budgetary planning for data management.
This document is an initial version, due in project month 12. The DMP is
updated on a regular basis in project months 12, 24 and 36 (see Deliverables
D1.6 to D1.8 – M12, M24 and M36 Data Management Plan). D1.6 is almost
identical to D1.5, as no changes were reported. The DMP does not describe how
the results are exploited, which is part of deliverables D7.2 to D7.4
(Initial, intermediate and final dissemination, standardisation and
exploitation plan). Instead, the updated DMP will contain information on new
datasets collected or generated in the meantime, as well as changed consortium
policies and other external factors. Nevertheless, future versions will ensure
consistency with the exploitation actions as well as with the IPR
requirements.
In particular, BPR4GDPR’s DMP will be useful for the project consortium itself
as well as for the European Commission. Furthermore, the general public can
benefit from the document.
## Project Description
The objectives for BPR4GDPR are the following:
* A **reference compliance framework** reflecting the associated GDPR provisions and requirements, facilitating compliance for organisations. This framework will serve as the codification of legislation.
* **Sophisticated security and privacy policies** through a comprehensive, rule-based framework capturing complex concepts in accordance with the data protection legislation and stakeholder needs and requirements.
* **By design privacy-aware process models** and underlying operations by provision of modelling technologies and tools that analyse tasks, interactions, control and data flows for natively compliant processes and workflow applications with security and privacy provisions and requirements.
* **Compliance-driven process re-engineering** through a set of mechanisms for automating the respective procedures regarding all phases of processes’ lifecycle and resulting in compliant-by-design processes.
* A configurable **compliance toolkit** that fits the needs of various organisations being subject to GDPR compliance and that incorporates functionalities for managing the interaction with the data subject and enforcing respective rights.
* The implementation of inherently offered **Compliance-as-a-Service (CaaS)** at the Cloud infrastructures of BPR4GDPR partners to achieve compliance at low cost to SMEs.
* Deployment of the BPR4GDPR technology and overall framework, corresponding to **comprehensive trials** that involve software companies, service providers and carefully selected stakeholders to assess the BPR4GDPR solution, to validate different deployment models and to define a market penetration roadmap.
* Profound **impact creation** in European research and economy, especially as regards the areas of data protection, security, BPM, software services, cloud computing, etc.
Along with the above-mentioned objectives, the BPR4GDPR data that needs to be
handled and that is described within the DMP is associated with project
results such as the regulation-driven policy framework, compliance-driven
process re-engineering, the compliance toolkit, process discovery and mining
enabling traceability and adaptability, Compliance-as-a-Service (CaaS), and
impact creation – a holistic innovation approach resulting in sustainable
business models.
## Terminology
**Open Access**: Open access means unrestricted access to research results.
Often the term open access is used to mean free online access to
peer-reviewed publications. Open access is expected to enable others to:

1. Build on top of existing research results,
2. Avoid redundancy,
3. Participate in open innovation, and
4. Read about the results of a project or inform citizens.
All major publishers in computer science – like ACM, IEEE, Elsevier, or
Springer – participate in the idea of open access. Both green and gold open
access levels are promoted. Green open access means that authors eventually
publish their accepted, peer-reviewed articles themselves, e.g. by depositing
them in their own institutional repositories or digital archives. Gold open
access means that a publisher is paid (e.g. by the authors) to provide
immediate access on the publisher's website without charging any further fees
to the readers.
**Open Research Data**: Open research data refers to the long-term deposit of
underlying or linked research data needed to validate the results presented in
publications. Following the idea of open access, all open research data need
to be openly available, usually meaning online availability. In addition,
standardised data formats and metadata have to be used to store and structure
the data. Open research data are expected to enable others to:

1. Understand and reconstruct scientific conclusions, and
2. Build on top of existing research data.
**Metadata**: Metadata is information about the features of other data.
Usually metadata is used to structure larger sets of data in a descriptive
way. Typical metadata refers to names, locations, dates, storage data types,
and relations to other datasets. Metadata is very important when it comes to
indexing and searching larger data sets for a specific kind of information.
Sometimes metadata can be retrieved automatically from a dataset, but often
some manual classification is also needed. The well-known tags in MP3
recordings are a good example of why metadata is necessary to find a specific
genre or composer in a large number of songs.
**FAIR Data:** To ensure a sustainable usage of Open Research Data, the
principle of “FAIR Data” should be met by the data in question as well as by
the underlying data infrastructure. Therefore, FAIR data should be
**F**indable, **A**ccessible, **I**nteroperable and **R**eusable. In detail,
this means:
Findable:
* Discoverability of data (standard identification mechanisms, naming conventions, search keywords)
* Approach for clear versioning
* Metadata provision and possible used standards for metadata creation
Accessible:
* Description of openly available and closed data (with reasons) and the process to make them available
* Definition of methods or software tools needed to access data
* Specification where data, associated metadata, documentation and code are deposited
Interoperable:
* Assessment of interoperability of project data (What data and metadata vocabularies, standards or methodologies?)
* Existence of standard vocabulary or commonly used ontologies for all data types in the data set
Reusable:
* Licencing of data for maximum reuse
* When will data be made available for reuse (why/for what is a data embargo needed)?
* Are produced/used data reusable by third parties after project end? If restricted, why?
* Data quality assurance processes
* Specification of time length for which data will be reusable
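The FAIR criteria above can be turned into a simple screening of a dataset's metadata record. A minimal sketch, assuming hypothetical field names that are not mandated by the DMP:

```python
# Hypothetical helper: screen a dataset's metadata record for
# FAIR-relevant fields. The field names below are illustrative
# assumptions, not part of the BPR4GDPR metadata template itself.

FAIR_FIELDS = {
    "identifier": "Findable: DOI or other standard identifier",
    "keywords": "Findable: search keywords",
    "access_url": "Accessible: where data and metadata are deposited",
    "format": "Interoperable: standard data format or vocabulary",
    "license": "Reusable: licence enabling maximum reuse",
}

def missing_fair_fields(metadata):
    """Return the FAIR-relevant fields that are absent or empty."""
    return sorted(f for f in FAIR_FIELDS if not metadata.get(f))
```

A record that carries all five fields passes with an empty result; any missing or empty field is reported by name, which can serve as a lightweight pre-upload check.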
## Structure of the Document
The rest of the document is structured into four further sections.
Section 3 handles the general structuring of the data within the project,
meaning data set reference and naming as well as the usage of metadata
standards that will give the framework for the metadata template.
Section 4 defines the strategy that will be applied to all results collected
or generated during BPR4GDPR for sharing and preservation and contains a
summary of all publishing platforms to be used by the BPR4GDPR consortium.
Included is a process that defines if a result has to be published or not.
Moreover, the security of data sharing and data preservation will be taken
into consideration.
Section 5 considers costs that go along with the data management, usage of
sharing and preservation platforms and availability of open access.
Furthermore, responsibilities for data management actions including security
and quality issues will be defined.
Section 6 lists publications and other public data(sets) that have already
been, or may be, generated or collected during BPR4GDPR. For each result, a
short description, the chosen way of open access, and a long-term storage
solution are specified according to the EC's data management guidelines
(European Commission, 2016) and by using the metadata template presented in
Section 3.
# Data Structure
A first step towards making the data in the BPR4GDPR project “FAIR” is to give
the data some structure. This means consistent naming that makes the data
easier to find and that includes clear versioning, as well as a commitment to
metadata standards for better tracing of existing and future data.
Standardised information within a metadata template, such as the dataset type,
increases the discoverability of the data. Moreover, it makes it easier for
applications to consume and process the metadata when assessing the value of
the data and for further usage.
The data title itself should also include some metadata, which helps to
increase data handling and working efficiency. Possible metadata components
for data naming are the title, version number, prefixes, linkage to work
packages or tasks, the dataset topic, creation date or modification date. In
the case of BPR4GDPR, the dataset date and a version number should be used for
greater transparency of data modifications, together with the linkage to the
work package for a thematic classification of the data. The usage of these
metadata components results in the following data naming:
_“BPR4GDPR_WP-No._Version-Date_Title_Deliverable-No._Version number”_
However, the metadata component “Deliverable-No.” is optional, since not every
dataset can be directly linked to a specific deliverable. An example of such a
dataset naming could be the following:
_BPR4GDPR_WP1.1_20180920_M12 Data Management Plan_D1.6_V3_
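The convention and the example above can be composed programmatically. A minimal sketch (an illustrative helper, not project tooling; the parameter names are assumptions):

```python
# Illustrative helper composing dataset names according to the naming
# convention above; the deliverable number is optional, matching the
# note in the text. Not part of the DMP itself.

def dataset_name(wp, date, title, version, deliverable=None):
    """Compose 'BPR4GDPR_<WP-No.>_<Date>_<Title>[_<Deliverable-No.>]_V<n>'."""
    parts = ["BPR4GDPR", wp, date, title]
    if deliverable is not None:
        parts.append(deliverable)
    parts.append(f"V{version}")
    return "_".join(parts)
```

Calling `dataset_name("WP1.1", "20180920", "M12 Data Management Plan", 3, "D1.6")` reproduces the example name shown above.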
In this context, a metadata template can be generated that includes
information beyond the metadata deducible from the dataset naming. Apart from
standard information such as title, creation date or language, this template
comprises further aspects like the data origin, the expected size of the
dataset, a general description of the data, references to publications,
keywords belonging to the data, or the target group. This metadata template
shall additionally be saved within the repository. The following Table 1 shows
such a template to describe data that will be produced in the context of
BPR4GDPR.
**Table 1: BPR4GDPR Metadata Template**
<table>
<tr>
<th>
**Initial Dataset Template**
</th> </tr>
<tr>
<td>
**Dataset reference name**
</td>
<td>
Identifier for the data set to be produced using the above described naming
convention.
</td> </tr>
<tr>
<td>
**Dataset title**
</td>
<td>
The easy searchable and findable title of the dataset.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
Description of the data that will be generated or collected, its origin (in
case it is collected), nature and scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existences
(or not) of similar data and the possibilities for integration and reuse.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created.
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
List of keywords that are associated to the dataset.
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling reuse, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.).
</td> </tr>
<tr>
<td>
**Archiving and preservation**
**(including storage and backup)**
</td>
<td>
Description of the procedure that will be put in place for long-term
preservation of the data. Indication of how long the data should be preserved,
what is its approximated end, volume, what the associated costs are and how
these are planned to be covered.
</td> </tr>
<tr>
<td>
**Additional Dataset explanation**
</td> </tr>
<tr>
<td>
**Discoverable**
</td>
<td>
Are the data and associated software produced and / or used in the project
discoverable (and readily located), identifiable by means of a standards
identification mechanism? (e.g. Digital Object Identifier)
</td> </tr>
<tr>
<td>
**Accessible**
</td>
<td>
Are the data and associated software produced and / or used in BPR4GDPR
accessible and in what modalities, scope, licenses? (e.g. licencing framework
for research and education, embargo periods, commercial exploitation, etc.)
</td> </tr>
<tr>
<td>
**Assessable and intelligible**
</td>
<td>
Are the data and associated software produced and / or used in the project
assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review? (e.g. Are the minimal datasets handled
together with scientific papers for the purpose of peer review? Are data
provided in a way that judgments can be made about their reliability and the
competence of those who created them?)
</td> </tr>
<tr>
<td>
**Usage beyond the original purpose for which it was collected**
</td>
<td>
Are the data and associated software produced and / or used in BPR4GDPR
useable by third parties even long time after the collection of the data?
(e.g. Is the data safely stored in certified repositories for long term
preservation and curation? Is it stored together with the minimum software,
metadata and documentation to make it useful? Is the data useful for the wider
public needs and usable of the likely purpose of non-specialists?)
</td> </tr>
<tr>
<td>
**Interoperable to specific quality standards**
</td>
<td>
Are the data and associated software produced and / or used in the project
interoperable allowing data exchange between researchers, institutions,
organisations, countries, etc.? (e.g. adhering to standards for data
annotation, data exchange, compliant with available software applications, and
allowing recombinations with different datasets from different origins)
</td> </tr> </table>
As recommended by the European Commission, the usage of metadata standards
should also be considered. Such a metadata standard is a document that
defines how metadata will be tagged, used, managed, formatted, structured or
transmitted. Besides standardized data formats such as CSV, PDF and DOC/DOCX
for texts and tables, PPT for presentations, JPEG, PNG and GIF for images, or
the XES format for event logs, which is used in BPR4GDPR to exchange
event-driven data in a unified and extensible manner, other (meta-)data
standards are considered as well. For example, RDF (Resource Description
Framework) is a metadata standard that can be used in the case of BPR4GDPR.
RDF was originally designed as a metadata standard by the World Wide Web
Consortium (W3C), but it has since become a fundamental element of the
semantic web and a means to formulate logical statements. One RDF-based
vocabulary is DCAT (Data Catalog Vocabulary), a standard created to improve
interoperability between the data catalogs on the web in which data are
described. The following DCAT properties can be used for the dataset
description:
* dct:identifier to provide the dataset’s unique identifier
* dct:title to give the dataset a specific title
* dct:theme to provide the main theme(s) of the dataset
* dct:description to describe the dataset with free-text
* dct:issued to provide the date of the issuance/publication of the dataset
* dct:modified to provide the date of the latest dataset modification/change/update
* dct:language to mention the dataset language
* dct:publisher to state the responsible entity that published the dataset/made it available
* dcat:keyword to provide several keywords that describe the dataset
* dcat:temporal to state the dataset’s temporal period
* dcat:distribution to link the dataset with all available distributions
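The vocabulary above could, for instance, be used to serialise a dataset description as Turtle. The following is a minimal standard-library sketch; the sample values and the subject URI are illustrative only:

```python
# Minimal sketch: render one dataset description with the DCAT/DCT
# properties listed above as a Turtle string, standard library only.
# The sample values and the subject URI are illustrative.

record = {
    "dct:identifier": '"BPR4GDPR_WP1.1_20180920_M12 Data Management Plan_D1.6_V3"',
    "dct:title": '"M12 Data Management Plan"@en',
    "dct:issued": '"2018-09-20"^^xsd:date',
    "dct:language": '"en"',
    "dcat:keyword": '"GDPR", "data management"',
}

def to_turtle(subject, props):
    """Render one dcat:Dataset description as Turtle."""
    prefixes = ("@prefix dct: <http://purl.org/dc/terms/> .\n"
                "@prefix dcat: <http://www.w3.org/ns/dcat#> .\n"
                "@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .\n")
    body = " ;\n".join(f"    {p} {v}" for p, v in props.items())
    return f"{prefixes}\n<{subject}> a dcat:Dataset ;\n{body} .\n"

print(to_turtle("http://example.org/dataset/dmp-m12", record))
```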
# Data Management Strategy
The BPR4GDPR Data Management Strategy consists of a publishing process to
divide public from non-public data and strategies for data sharing as well as
archiving and preservation that together provide long-term open access to all
publishable, generated or collected results of the project. The implementation
of the project complies with laws at a national and EU level and especially
with GDPR in relation to the protection of personal data of individuals. More
specifically, there will be no cases where personal information or sensitive
information of internet users or other involved persons is collected (IP
addresses, email addresses or other personal information) or processed. For
the whole duration of the project, from the beginning to its end, the Data
Protection Manager (DPM – Mrs. Francesca Gaudino (BAK)) will carefully examine
the legality of the activities and the tools (including platforms) that will
be produced for not violating the personal data of internet users or other
involved persons. In the potential future case where the BPR4GDPR consortium
will collect, record, store or process any personal information, it will be
ensured that this will be done on a basis of respecting citizens’ rights,
preventing their identification and keeping their anonymization. The
publishing process as well as the data sharing, archiving and preservation
strategies are described in the following subsections. Furthermore, it will be
explained how data security will be handled within the process and strategies.
Through the whole data management strategy the consistency with the project’s
exploitation actions and IPR requirements, as well as compliance with WP8
Ethics requirements will be guaranteed. As set in the DoW of BPR4GDPR, the
project’s partners ensure to share and disseminate their own knowledge as far
as it does not adversely affect its protection or use. Furthermore, the IPR
consortium agreement takes into consideration a workshop after the project end
in order to provide a list of all generated results with a separate decision
of joint or single ownership for each result. Eventually, first suggestions
for usable sharing platform have been mentioned as the open access
infrastructure for Research in Europe “OpenAIRE”, the scholarly open access
repository arXiv, BPM Center or the project portal itself.
## Publishing Process
A simple and deterministic process has been defined that decides if a result
in BPR4GDPR has to be published or not. The term “result” is used for all kind
of artefacts generated during BPR4GDPR like white papers, scientific
publications, and anonymous usage data. By following this process, each result
is classified either public or non-public. Public means that the result must be
published under the open access policy. Non-public means that it must not be
published.
For each result generated or collected during BPR4GDPR runtime, the following
questions have to be answered to classify it:
1. _Does a result provide significant value to others or is it necessary to understand a scientific conclusion?_
If this question is answered with yes, then the result will be classified as
public. If this question is answered with no, the result will be classified as
non-public. Such a result could be code that is very specific to the BPR4GDPR
platform (e.g. a database initialization), which is usually of no scientific
interest to anyone and does not add any significant contribution.
2. _Does a result include personal information that is not the author's name?_
If this question is answered with yes, the result will be classified as non-
public. Personal information beyond the name must be removed if the result
should be published. This also bears witness to the iterative nature of the
publishing process, where results deemed non-publishable in the beginning
can become publishable once privacy-related information or other
information subject to confidentiality obligations is removed from them.
3. _Does a result allow the identification of individuals even without the name?_
If this question is answered with yes, the result is classified as non-public.
Sometimes data inference can be used to superimpose different user data and
reveal indirectly a single user's identity. As such, in order to make a result
publishable, the included information must be reduced to a level where single
individuals cannot be identified. This can be performed by using established
anonymisation techniques to conceal a single user's identity, e.g.,
abstraction, dummy users, or non-intersecting features.
4. _Does a result include business or trade secrets of one or more partners of BPR4GDPR?_
If this question is answered with yes, the result is classified as non-public,
except if the opposite is explicitly stated by the involved partners. Business
or trade secrets need to be removed in accordance to all partners'
requirements before it can be published.
5. _Does a result name technologies that are part of an ongoing, project-related patent application?_
If this question is answered with yes, then the result is classified as non-
public. Of course, results can be published after the patent has been filed.
6. _Can a result be abused for a purpose that is undesired by society in general or contradict with societal norms and BPR4GDPR’s ethics?_
If this question is answered with yes, the result is classified as non-public.
_7\. Does a result break national security interests for any project partner?_
If this question is answered with yes, the result is classified as non-public.
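The seven questions above amount to a simple decision procedure. A minimal sketch follows; the field names of the result record are illustrative, not prescribed by the plan:

```python
# Sketch of the seven-question publishing process as a decision function.
# Question order and outcomes follow the text; field names are illustrative.

from dataclasses import dataclass

@dataclass
class Result:
    significant_value: bool          # Q1: value to others / scientific need
    has_personal_info: bool          # Q2: personal info beyond author's name
    allows_identification: bool      # Q3: individuals identifiable indirectly
    has_trade_secrets: bool          # Q4: business or trade secrets
    secrets_release_approved: bool = False  # Q4 exception by involved partners
    in_patent_application: bool = False     # Q5: pending patent application
    abusable: bool = False                  # Q6: potential for societal abuse
    breaks_national_security: bool = False  # Q7

def classify(r: Result) -> str:
    if not r.significant_value:
        return "non-public"            # Q1
    if r.has_personal_info or r.allows_identification:
        return "non-public"            # Q2, Q3 (until anonymised)
    if r.has_trade_secrets and not r.secrets_release_approved:
        return "non-public"            # Q4
    if r.in_patent_application or r.abusable or r.breaks_national_security:
        return "non-public"            # Q5-Q7
    return "public"
```

Because the process is repeatable, a result classified as non-public can be re-classified once, for example, personal information has been removed and the corresponding flags change.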
## Data Sharing
Consequently, with the publishing process all the data that cannot be
published due to specific reasons like ethical or privacy- and/or security-
related issues have been identified. All the other data that have been
classified as publishable/public will be considered in the following sections
of the deliverable.
For sharing the data among the consortium partners, a Nextcloud repository has
been set up. The repository has been selected since it allows a secure and
facilitated sharing of documents between the partners via web interface and on
several devices. The Nextcloud is extensible for further plugins and
applications and is hosted by the consortium partner Università di Roma “Tor
Vergata”. Access to the repository is only granted to consortium members.
Nextcloud includes the assignment of rights, like the right of re-sharing,
creation, change, deletion and a settable expiration date.
For public sharing in BPR4GDPR, the consortium partners use several platforms
to publish our results openly and to provide them for re-usage. All the
consortium partners should make their generated results available as quickly
as possible, unless reasons have been identified along the publishing
process (see section 4.1), that classify them as non-public. The following
list presents a closer selection of platforms that should be considered for
data sharing and describes their concepts for publishing, storage and backup.
After having selected all relevant datasets and results that can be published
and that have not been identified as “non-public”, the datasets/documents
should be archived in a selected repository upon acceptance for publication. In
such manner, either a publisher’s final version of a paper or the final
manuscript that has been accepted for publication, both including peer review
modifications, should be deposited. The selected repository depends on the
dataset type. While some repository platforms only integrate publications,
others also accept datasets, no matter if the dataset is linked to a
publication or not.
**4.2.1 Data Sharing Platforms Project Website/Project Portal:**
The partners in the project consortium decided to setup a project-related
website. This website describes the mission, the objectives, the benefits and
impact, as well as the general approach of BPR4GDPR and its development
status. Moreover, all news concerning announcements, conferences
and events or other related information are disseminated on a regular basis.
Later in the project, the developed BPR4GDPR policy framework and compliance
toolkit will be announced. A dedicated area for downloads is made available in
order to publish reports and white papers as well as scientific publications.
All documents are published using the portable document format (PDF). All
downloads are enriched by using simple metadata information, such as the title
and the type of the document. The website is hosted by partner Eindhoven
University of Technology. All webpage-related data is backed up on a regular
basis. All information on the project website can be accessed without creating
an account. Web-Link: _http://www.bpr4gdpr.eu/_
**OpenAIRE:**
OpenAIRE is an Open Access infrastructure for Research in Europe that is
recommended by the European Commission and that allows access to research
results that have been funded by FP7 and ERC resources. OpenAIRE allows users
to monitor, identify, deposit and access research outcomes throughout Europe
and supports collaboration among international open access repositories.
Through workflows that go beyond repository content, interoperability across
several repositories is achieved.
The project started in December 2009 and aimed to support the implementation
of Open Access in Europe. For the development of the infrastructure, state-of-
the-art software services that have been generated within the DRIVER and
DRIVER-II projects as well as repository software by CERN have been used.
Especially research data on areas like health, energy, environment or ICT are
deposited via OpenAIRE. Through this platform, researchers and universities
that are involved in a Horizon 2020 project are supported in fulfilling the
EC’s open access policy. Web-Link: _http://www.openaire.eu_
**Zenodo:**
Zenodo is a research data archive/online repository which helps researchers to
share research results in a wide variety of formats for all fields of science.
It was created through EC's OpenAIRE+ project and is now hosted at CERN using
one of Europe's most reliable hardware infrastructures. Data is backed up nightly
and replicated to different locations. Zenodo not only supports the
publication of scientific papers or white papers, but also the publication of
any structured research data (e.g., using XML). Zenodo provides a connector to
GitHub that supports open collaboration for source code and versioning for all
kinds of data. All uploaded results are structured by using metadata, like for
example the contributors’ names, keywords, date, location, kind of document,
license and others. All metadata is licensed under CC0 license (Creative
Commons ‘No Rights Reserved’). The property rights or ownership of a result
does not change by uploading it to Zenodo. Web-Link: _http://zenodo.org_
**arXiv:**
The high-automated electronic archive “arXiv” is another scholarly open access
repository for preprints that is hosted by the Cornell University Library.
This distribution server concentrates on research articles that are rather
located in technical areas as mathematics, statistics, physics, computer
science or electrical engineering. Nontechnical information should not be
shared over this platform. The repository is guided and maintained by an arXiv
Scientific Advisory Board and an arXiv Member Advisory Board consisting of
scientists from the communities it serves. In addition, subject specialists
check and review the publications with regard to their relevance and their
compliance with standards. Moreover, an endorsement by an already
renowned author is necessary to deposit any articles on arXiv. For publishing
the article, data formats as PDF or LaTeX are possible. Web-Link:
_http://arxiv.org_

**GitHub:**
GitHub is a well-established online repository, which supports distributed
source code development, management and revision control. It is primarily used
for source code data. It enables world-wide collaboration between developers
and provides also some facilities to work on documentation and to track
issues. The platform uses metadata like contributors’ nicknames, keywords,
time, and data file types to structure the projects and their results. The
terms of service state that no intellectual property rights are claimed by the
GitHub Inc. over provided material. For textual metadata items, English is
preferred. The service is hosted by GitHub Inc. in the United States. GitHub
uses a rented Rackspace hardware infrastructure where data is backed up
continuously to different locations.
Web-Link: _https://github.com/_
**BPM Center:**
The BPM Center is a center founded in 2004 at Eindhoven University of
Technology in collaboration with Queensland University of Technology in
Australia explicitly for research in the Business Process Management (BPM)
field. The virtual research center handles business processes along the whole
lifecycle, covering phases like process modelling, process monitoring or
process mining. Since BPR4GDPR addresses Business Process Re-engineering in
accordance with GDPR, and since the repository derives from the partner TU/e,
this research center plays an interesting role.
Another opportunity for the Eindhoven University of Technology to share data
and maximise its value for others is through the 4TU programme. The four
universities of technology in the Netherlands set up this programme with the
aim of exploiting technological knowledge as far as possible. In particular,
the 4TU Centre for Research Data is the most
prestigious technical and scientific data archive in the Netherlands.
Web-Links: _http://bpmcenter.org/_ _https://www.4tu.nl/en/_
All public results generated or collected during the project lifetime will be
uploaded to one of these above mentioned repositories for long-term storage
and open access. Thereby, the choice of an adequate repository depends on the
dataset type. Source-code components will be published differently from
textual publications. Furthermore, the sharing platform will be selected depending on
the target group that could be interested in the data.
### Artefact Types
This section enumerates specific datasets, software and research items (from
which publications can be produced), and states whether these artefacts are
publishable and through which means. Each type of
artefact requires a specific kind of sharing platform. The artefacts are
deduced from the expected results referring to the BPR4GDPR proposal.
**Table 2: Project Artefacts**
<table>
<tr>
<th>
**Artefact Type**
</th>
<th>
**Artefact**
</th>
<th>
**Possible Publication Means**
</th> </tr>
<tr>
<td>
**Research Item**
</td>
<td>
Regulation-driven policy framework
</td>
<td>
OpenAIRE, Zenodo, arXiv, project website
</td> </tr>
<tr>
<td>
**Research Item**
</td>
<td>
Impact creation – holistic innovation approach resulting in sustainable
business models
</td>
<td>
OpenAIRE, Zenodo, arXiv, project website
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Compliance-driven process reengineering
</td>
<td>
OpenAIRE, Zenodo, GitHub, BPM Center
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Compliance toolkit
</td>
<td>
OpenAIRE, Zenodo, GitHub
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Process discovery and mining enabling traceability and adaptability
</td>
<td>
OpenAIRE, Zenodo, GitHub, BPM Center
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
Compliance-as-a-Service (CaaS)
</td>
<td>
OpenAIRE, Zenodo, GitHub
</td> </tr>
<tr>
<td>
**Dataset**
</td>
<td>
Anonymous usage statistics
</td>
<td>
OpenAIRE, Zenodo, 4TU
</td> </tr>
<tr>
<td>
**Dataset**
</td>
<td>
UseCase data
</td>
<td>
Non-publishable
</td> </tr> </table>
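The mapping in Table 2 can be captured as a simple lookup table. The following sketch mirrors a few of its rows; an empty list marks a non-publishable artefact:

```python
# Illustrative lookup mirroring part of Table 2: candidate sharing
# platforms per artefact, keyed by (artefact type, artefact).

PLATFORMS = {
    ("Research Item", "Regulation-driven policy framework"):
        ["OpenAIRE", "Zenodo", "arXiv", "project website"],
    ("Software", "Compliance toolkit"):
        ["OpenAIRE", "Zenodo", "GitHub"],
    ("Software", "Compliance-as-a-Service (CaaS)"):
        ["OpenAIRE", "Zenodo", "GitHub"],
    ("Dataset", "Anonymous usage statistics"):
        ["OpenAIRE", "Zenodo", "4TU"],
    ("Dataset", "UseCase data"): [],  # non-publishable
}

def candidate_platforms(artefact_type, artefact):
    """Return the repositories considered for a given artefact (Table 2)."""
    return PLATFORMS.get((artefact_type, artefact), [])
```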
## Archiving and Preservation
Within the data management plan, also the long-term preservation of the data,
that goes beyond the project lifetime, has to be considered. For these
preservation procedures, the duration of data preservation, end volume of the
data and preservation mediums will be regarded.
The preservation of BPR4GDPR data will be achieved via institutional and
freely usable platforms. In particular, Zenodo and 4TU seem to be adequate
solutions for archiving, as these platforms do not create any additional
costs for storing the data in the repository. Through linked metadata
templates, as well as through a Digital Object Identifier (DOI) assigned to
each upload, the data can be made more findable and accessible. On top of
that, Zenodo constitutes a repository for a variety of topics that is
recommended by the European Commission, is based on OpenAIRE, and accepts all
kinds of dataset types. OpenAIRE itself represents a similarly attractive
repository for archiving, equally proposed as an Open Access repository by
the European Commission.
By using at least one of the described repositories, the assumed preservation
time is set to 5 years to guarantee long-term access to and re-usability of
the project data. To this end, the project will strive to provide data of
maximal quality and validity to ensure data usability during this preservation
time. Datasets will therefore be updated whenever adjusted data become available.
However, the archiving and preservation strategy could possibly be altered and
updated during the project lifetime since further advantages or disadvantages
of several repositories could be identified during this time, which could lead
to a necessary adjustment of the preservation intentions. Possible amendments
will be documented in a later, updated version of the data management plan
(see deliverables D1.6 to D1.8 – M12, M24 and M36 Data Management Plan).
## Data Security
Data security is considered during the management of data in accordance to the
GDPR regulations and applicable laws on data protection and data security. For
this reason, the publishing process in Section 4.1 deals with the separation
of publishable and non-public data and results that arise within BPR4GDPR. As
a result, only data that are necessary for the purpose of the project will be
regarded, in accordance with the privacy by default principle.
In case of personal identification data that are of real scientific
interest, specific measures will be deployed in order to anonymize
data, such as the use of aggregated and statistic data, so that personal
identification data will be transformed in information that cannot identify an
individual anymore, not even indirectly.
As already mentioned in Section 4.2, the data handling among the consortium
partners takes place via Nextcloud. Data security on this repository will be
ensured by hosting the data at a partner’s private server (Università di Roma
“Tor Vergata”) and by using a platform that covers several security issues
and that is itself compliant with GDPR. This is assured by reliance on the
EU authentication platform and security protocols for data sharing, strict
policies for granting and revoking platform access (access to the repository
is only granted to consortium members), and the recording of user identity
during data access, download, and upload. In this way, Nextcloud enables the
project to assign rights to specific data (re-sharing, creation, change,
deletion or a settable expiration date). On top of that, Nextcloud is
extensible for further security plugins.
Also the used repositories for public datasets and results comply with
security and privacy regulations. For example, data on the Zenodo platform are
stored on the same infrastructure as the research data of CERN, which is hosted
on a reliable hardware infrastructure in Europe. Furthermore, data on those
repositories is backed up on a regular basis.
# Costs and Responsibilities
To manage the data within BPR4GDPR, the costs necessary for making research
data “FAIR” (see Section 2.3) also have to be estimated and taken into
consideration. These costs arise due to the required measures for integrating
data into a platform and storing data during the project lifetime and
afterwards. However, costs associated with open access of research data within
any Horizon 2020 project can be handled as eligible costs.
Nonetheless, the selected repositories for data sharing and preservation are
on the one hand free for researchers to deposit their publications (see
Zenodo, arXiv etc.) and on the other hand the stored data can be freely
accessed by the public. For example, arXiv only offers a special membership
programme for arXiv's heaviest institutional users. In contrast, GitHub
provides paid as well as free service plans. Free service plans can have any
number of public, open-access repositories with unlimited collaborators.
Private, non-public repositories require a paid service plan. Many open-source
projects use GitHub to share their results for free. Beyond that, no further
costs are at this stage anticipated relating to data management during and
after the project runtime. In case of any alteration, further emerging costs
will be outlined in an updated data management plan (D1.6 to D1.8 – M12, M24
and M36 Data Management Plan).
The compliance of the project’s data management with the described security
and cost related issues will be handled by the project’s Data Protection
Manager. In particular, ensuring that data collection and processing within
the project align with EU and national legislation is a main part of this
role, which includes security assessments and the reporting of potential
security threats.
In the case of BPR4GDPR Mrs. Francesca Gaudino (BAK) will be responsible for
this task.
However, for the data preparation as well as for the relevance, quality and
currency of the data, the respective data owners are responsible. To this end,
data preparation considers the data anonymization and data processing to make
these data ready for publishing. On top of that, data owners have to ensure
the compliance of uploaded data with the conditions that have been defined in
the project’s data management plan. Furthermore, also the completion of the
specific metadata templates for any dataset is the responsibility of the data
owners.
In turn, the BPR4GDPR partner that is hosting the Nextcloud repository for the
project’s internal data exchange (Università di Roma) is responsible for the
maintenance of this repository and its components as well as for the user
group management.
# Project Results
In this section, the metadata template introduced in Section 3, which describes
the data within BPR4GDPR, will be used for the current project results. Every
use case will provide such a template to share the gained knowledge as far as
feasible and in relation to the described publishing process and its questions
in Section 4.1. This will occur during the project duration, whereby the data
management plan will be regularly updated in this time (see deliverables D1.7
and D1.8 –M24 and M36 Data Management Plan).
Source: 0529_OCEAN_767798.md (Horizon 2020), https://phaidra.univie.ac.at/o:1140797
data, which can be made publicly accessible, in order to protect the
industrial interests.
Art. 29.3 indicates that participants will have to provide information, via
the repository, about tools and instruments needed for the validation of
project outcomes, without infringing industrial interests or the Consortium
Agreement. Following these indications on data protection, this article
will be applied only to those tools and instruments that do not interfere
with the confidentiality and the protection of industrial interests.
_Will you re-use any existing data and how?_
No. OCEAN data do not previously exist; they will be generated within the
project development by the partners.
_What is the origin of the data?_
The data originate from the experimentation made in the frame of the project.
According to the Consortium Agreement and the project policy to preserve
confidentiality, the Steering Committee has decided that only data that have
been specifically approved beforehand by the Exploitation and Innovation
Committee (EIC) of the Consortium may be shared, and only after publication
in Open Access journals or after the embargo period in other journals, when
sharing does not infringe journal policies or Consortium interests. The rules
indicated in the Consortium Agreement for dissemination (in particular
section 8.4 – Dissemination of the Results) must be respected. Project
aspects that have instead been specifically indicated as public, such as some
deliverables, are exempt from these restrictions and will be made available
on the repository.
For the first version of the project DMP, the analysis is based on the
following series of datasets (DSx) and related subsets (DSx/y), indicated
below.
<table>
<tr>
<th>
_Ref_
</th>
<th>
_Title_
</th>
<th>
_Partner (*)_
</th>
<th>
_Data Type_
</th>
<th>
_WP or Task_
</th>
<th>
_~Size_
</th>
<th>
_Access level_
</th> </tr>
<tr>
<td>
DS0
</td>
<td>
General Aspects on OCEAN
Project
</td>
<td>
ERIC
</td>
<td>
Public info for the project, open presentations at conferences and other
events, public
accessible Deliverables
</td>
<td>
WP8
</td>
<td>
500
MB
</td>
<td>
Public
</td> </tr>
<tr>
<td>
DS1
</td>
<td>
CO2 reduction Demo Cell
</td>
<td>
AVT
</td>
<td>
Design data for Demo cell and related components
</td>
<td>
WP1
</td>
<td>
1,5 GB
(total)
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS1/1
</td>
<td>
Process design specifications and
design of Demo
cell
</td>
<td>
AVT
</td>
<td>
Design specifications and concept for the Demo Cell
</td>
<td>
WP1
(T1.1,
T1.4)
</td>
<td>
300
MB
</td>
<td>
Confidential
</td> </tr> </table>
<table>
<tr>
<th>
DS1/2
</th>
<th>
Catalysts for CO2 reduction in
Demo Cell
</th>
<th>
AVT
</th>
<th>
Catalyst characteristics and performances
</th>
<th>
WP1
(T1.2)
</th>
<th>
300
MB
</th>
<th>
Confidential
</th> </tr>
<tr>
<td>
DS1/3
</td>
<td>
Gas diffusion electrode
</td>
<td>
GSKL
</td>
<td>
Procedures for preparation of GDE
</td>
<td>
WP1
(T1.3)
</td>
<td>
100
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS1/4
</td>
<td>
Testing and validation of
Demo Cell
</td>
<td>
RWE
</td>
<td>
Data about process stability, efficiency and product quality
</td>
<td>
WP1
(T1.6,
T1.7)
</td>
<td>
800
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS2
</td>
<td>
Paired electrosynthesis
</td>
<td>
GENS
</td>
<td>
Anode and cathode characteristics, and direct heating technology
</td>
<td>
WP2
</td>
<td>
0,8 GB
(total)
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS2/1
</td>
<td>
Anode catalyst development and scale-up
</td>
<td>
AVT
</td>
<td>
Anode catalyst characteristics and performances, scaleup procedures
</td>
<td>
WP2
(T2.1,
T2.2)
</td>
<td>
300
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS2/2
</td>
<td>
Direct electrode heating
</td>
<td>
GENS
</td>
<td>
Design data, and scale-up of direct electrode heating technology, prototype
features and performances
</td>
<td>
WP2
(T2.3,
T2.4,
T2.5,
T2.6)
</td>
<td>
500
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS3
</td>
<td>
Process for formate to oxalate
</td>
<td>
HYS
</td>
<td>
Design data and characteristics for a process for formate to oxalate
conversion
</td>
<td>
WP3
</td>
<td>
1.4 GB
(total)
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS3/1
</td>
<td>
Process design and input data in batch conditions
</td>
<td>
HYS/AVT
</td>
<td>
Process design specifications and engineering, batch process tests
</td>
<td>
WP3
(T3.1,
T3.2,
T3.3)
</td>
<td>
600
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS3/2
</td>
<td>
Prototype design and manufacture, performances
</td>
<td>
HYS/AVT
</td>
<td>
Prototype manufacture data and testing, including with real feeds
</td>
<td>
WP3
(T3.4,
T3.5,
T3.6)
</td>
<td>
800
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS4
</td>
<td>
Electrochemical acidification
</td>
<td>
AVT
</td>
<td>
Multifunctional electrochemical salt splitting system data
</td>
<td>
WP4
</td>
<td>
2.0 GB
(total)
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS4/1
</td>
<td>
Process design for acidification with membranes
</td>
<td>
IIT
</td>
<td>
Design data of bipolar membrane based modules and related specs
</td>
<td>
WP4
(T4.1,
T4.2)
</td>
<td>
400
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS4/2
</td>
<td>
Coupling with oxidative or reductive electrosynthesis
</td>
<td>
IIT
</td>
<td>
Data on electrocatalysts/ electrodes for the oxidative or reductive
electrosynthesis
</td>
<td>
WP4
(T4.3,
T4.4)
</td>
<td>
500
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS4/3
</td>
<td>
Unit design and control
</td>
<td>
IIT
</td>
<td>
Data on ESS system for controlling the process
</td>
<td>
WP4
(T4.5)
</td>
<td>
500
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS4/4
</td>
<td>
Prototype design and performances
</td>
<td>
AVT
</td>
<td>
Data on prototype design, manufacture and testing, including with real feeds
</td>
<td>
WP4
(T4.6,
T4.7,
T4.8)
</td>
<td>
600
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS5
</td>
<td>
Conversion of formate and oxalate to high-value products
</td>
<td>
ERIC
</td>
<td>
Data on catalysts and reaction conditions for the conversion of formate and
oxalate to high-value products
</td>
<td>
WP5
</td>
<td>
2.4 GB
(total)
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS5/1
</td>
<td>
Electrochemical hydrogenation of
oxalate
</td>
<td>
ERIC
</td>
<td>
Data on electrocatalysts and performances in the electrochemical hydrogenation
of oxalate, including with real feeds
</td>
<td>
WP5
(T5.1, T5.2, T5.3)
</td>
<td>
800
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS5/2
</td>
<td>
Catalytic hydrogenation and hydroformylation
</td>
<td>
UVA
</td>
<td>
Data on catalysts and performances in the hydrogenation and hydroformylation,
including with real feeds
</td>
<td>
WP5
(T5.4, T5.5,
T5.6)
</td>
<td>
800
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS5/3
</td>
<td>
Polymerization to new polyesters
</td>
<td>
UVA
</td>
<td>
Data on the assessment of polymer targets and polymerization processes for
glycolic acid polyesters and oxalate diester, including tests at larger scale
</td>
<td>
WP5
(T5.7, T5.8,
T5.9)
</td>
<td>
800
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS6
</td>
<td>
Process assessment
</td>
<td>
IIT
</td>
<td>
Data on process assessment, based on LCA analysis
</td>
<td>
WP6
</td>
<td>
1.2 GB
(total)
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS6/1
</td>
<td>
Life cycle analysis
</td>
<td>
IIT
</td>
<td>
Data on life cycle analysis (LCA) modelling for the estimation of the
environmental footprint
</td>
<td>
WP6
(T6.1)
</td>
<td>
700
MB
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
DS6/2
</td>
<td>
Process assessment
</td>
<td>
IIT
</td>
<td>
Data on process assessment and quantification of environmental impact
</td>
<td>
WP6
(T6.2)
</td>
<td>
500
MB
</td>
<td>
Confidential
</td> </tr> </table>
(*) Responsible for the Task or WP. (&) Up to this size; it depends on which specific data are planned to be shared, as indicated in the text.
The Consortium has determined that the commercial interests of the industrial partners must be preserved and that the competitiveness of the European industry in this sector must be protected. The priority is to avoid any disadvantage on the market, giving the preservation of commercial interests precedence over open dissemination. At this stage of the project, where results are still under development, the Consortium has agreed to take a conservative approach and has classified several datasets as Confidential (either at Consortium or at Beneficiary level). The access levels of the datasets will be discussed further and revised during project implementation, based on a more concrete exploitation analysis. For this reason, the access level of data produced in the project is currently indicated as Confidential. This does not exclude that specific subsets of data which the EIC approves for dissemination will be placed in a repository organized according to the structure indicated in the table above, after any embargo period deriving from publication in journals. The dataset DS0, covering general aspects of the OCEAN project (public project information, open presentations at conferences and other events, publicly accessible deliverables, etc.), is not subject to these restrictions.
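The access policy above can be pictured as a small inventory keyed by access level. The sketch below is illustrative only (not project tooling): it encodes a few rows of the dataset table as an in-memory registry, with a hypothetical helper that selects the datasets shareable openly by default, i.e. without prior EIC review.

```python
# Illustrative registry of a few OCEAN datasets from the table above.
# Titles and the helper function are illustrative assumptions.
DATASETS = [
    {"ref": "DS0", "title": "General aspects on the OCEAN project", "access": "Public"},
    {"ref": "DS1", "title": "CO2 reduction Demo Cell", "access": "Confidential"},
    {"ref": "DS2", "title": "Paired electrosynthesis", "access": "Confidential"},
    {"ref": "DS6", "title": "Process assessment", "access": "Confidential"},
]

def openly_shareable(datasets):
    """Refs of datasets that may go to an open repository without EIC review."""
    return [d["ref"] for d in datasets if d["access"] == "Public"]

print(openly_shareable(DATASETS))  # → ['DS0']: only DS0 is Public by default
```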
_What type of data are produced?_
With the increasing share of renewable electricity in overall energy production, there is renewed interest in industrial electrochemistry as a clean, carbon-neutral way to drive chemical reactions. Although electrochemistry and electrosynthesis have been known for decades, the industrial application of electrochemical synthesis is so far limited. Therefore, both the demonstration of electrochemical processes to prove their industrial and economic feasibility and the development of new, advanced electrochemical methodologies are needed to overcome current challenges and create new applications for electrochemistry. The overall objectives of the OCEAN project are to:
1. provide proof of the economic and industrial feasibility of electrochemical technology to convert carbon dioxide;
2. develop and demonstrate innovative electrochemical technologies to overcome current challenges in electrochemistry;
3. integrate the electrochemical technologies into industrial operations.
In line with these objectives, the OCEAN project aims to prove the industrial and economic feasibility of the developed technologies, to develop innovative electrochemical methodologies, and to integrate them into industrial operations, as defined in the Description of Work of the project. Most of the data produced therefore cannot be made publicly available without infringing the industrial interests in the project, both in terms of know-how and of data/technologies that can be patented. Furthermore, a key element of the project is an electrochemical process acquired by Avantium together with the related IP; its details are shared among the Consortium partners for the purposes of the project, but cannot be disclosed publicly.
Based on these considerations, the purpose of the data collection/generation and its relation to the project objectives have to be weighed against the open-access-to-research-data policy. Open access will therefore be limited to those data that, after internal approval by the Exploitation and Innovation Committee (EIC) of the OCEAN project (the body dedicated to analysing these aspects), can be disseminated.
Different types of data will be generated within the project, depending on the WP.
* In WP1 (CO2 reduction Demo Cell), the data generated concern process design specifications (T1.1), catalyst characteristics (T1.2), procedures for scale-up of the gas diffusion electrode (T1.3), engineering and design of the Demo Cell (T1.4), manufacturing and assembly of the Demo Cell (T1.5), and the testing, validation and demonstration of the Demo Cell (T1.6, T1.7). Data formats are Word or Excel reports and PFD and P&ID data.
* In WP2 (Paired electrosynthesis), the data generated concern anode catalyst development and scale-up (T2.1, T2.2), process design specifications (T2.3), scale-up of direct electrode heating (T2.4), and prototype manufacture, testing and demonstration (T2.4-T2.8).
* In WP3 (Formate to oxalate), the data generated concern process design specifications (T3.1), optimization of batch process conditions (T3.2), engineering and design of the continuous process (T3.3), and manufacturing and testing (T3.4-T3.6).
* In WP4 (Electrochemical acidification), the data generated concern the conception of bipolar-membrane-based modules and related specifications (T4.1), development of a TRL5 module (T4.2), coupling with oxidative electrosynthesis (T4.3) and with reductive electrosynthesis (T4.4), and unit design and control, manufacturing and testing (T4.5-T4.8).
* In WP5 (High-value products from formate and oxalate), the data generated concern the electrochemical hydrogenation of oxalate (screening, optimization, tests with real feed; T5.1-T5.3), catalytic hydrogenation and hydroformylation (screening, optimization, tests with real feed; T5.4-T5.6), and polymerization (development of new polyesters, optimization, tests at larger scale; T5.7-T5.9).
* In WP6 (Life-Cycle Analysis), the data generated concern the life cycle analysis (T6.1) and the process assessment and quantification of environmental impact (T6.2).
* In WP7 (Business case and exploitation strategy), the data generated concern the market analysis (T7.1) and the business case (T7.2).
As emerges from this survey of the data generated within the project:
1. none of these tasks will generate data using standard procedures that could be placed in a standardized database, whether open within the Consortium or externally. All raw data need to be processed and elaborated, because in the absence of the proper data elaboration procedures and specific expertise they can be misinterpreted, which can also give rise to patent infringements and other IPR issues. For this reason, the Consortium has decided to make available in databases only data published in papers and the related supporting information, and only after a specific analysis confirming that they cannot give rise to IPR issues. The data will be in an exchangeable format such as Word, Excel, PFD, P&ID or other graphical/vectorial data.
2. all these data shall be kept confidential by the Consortium, stored by the beneficiaries who generated them, and shared only within the Consortium for the purposes of the project, unless the EIC decides otherwise.
It should also be noted that the datasets indicated above are not characterized by a single or limited type of result that could be stored in a searchable database, or used without specific and sometimes proprietary elaboration methodologies. The datasets do not intrinsically have the characteristics required for interoperability, management and re-use. The OpenAIRE recommendations cannot be applied, and the datasets are not compliant with the UK Data Archive.
The repository for the dataset will be made available through links in the
public part of the project web site.
_What is the expected size of the data?_
As indicated above, the total size of the data is up to about 9-10 GB, but it depends on the specific data that will be shared in the databases, as explained in the text. Better estimates will be made at a later stage of the project.
_To whom might it be useful ('data utility')?_
At this stage of the project, as indicated above, data utility will be strictly limited to the Consortium or Beneficiary level. The datasets support the objectives of the WPs and Tasks indicated, and the overall project objectives. As indicated, specific subsets of data in the databases will be made publicly available after the EIC decides to allow publication of the specific data, and after any embargo period deriving from journal policies. These data will be those published and the related supporting information, which must be specifically approved by the EIC and must follow the Consortium Agreement policies on dissemination of the results, as well as the Dissemination Plan.
Within the constraints indicated above, the data will be useful to all those (researchers and others) who wish to obtain further information on the results published by the project, or to find all the published data in a single place. The datasets will allow third parties to access, mine, exploit, reproduce and disseminate the data, provided they do not infringe copyrights related to publications in journals. Access to the data will help to validate the results presented in scientific publications.
# 2 FAIR data
### 2.1. Making data findable, including provisions for metadata
_Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?_
The data will be available through links reported on the OCEAN web site, which will refer to openly accessible repositories. The data will refer to specific publications and their supporting information, and will thus be identified by the DOI of the related publication, which will also be used as metadata.
_What naming conventions do you follow?_
As indicated above, we will use the DOI of the publication as the naming convention.
_Will search keywords be provided that optimize possibilities for re-use?_
Search keywords will form a second level under the DOI, allowing better identification of the available data.
_Do you provide clear version numbers?_
The version number will refer to the DOI of the related publication.
_What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and
how._
The metadata will consist of the DOI of the publication and second-level searchable keywords.
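The two-level scheme described above can be sketched as a tiny record builder: the DOI of the related publication serves as identifier, name and version anchor, with search keywords as a second level. The field names and the example DOI below are illustrative assumptions, not a project-defined schema.

```python
def make_metadata(doi, keywords, version=1):
    """Build a minimal metadata record keyed by the publication DOI."""
    return {
        "identifier": doi,                  # DOI of the related publication
        "keywords": sorted(set(keywords)),  # second-level search keywords
        "version": f"{doi}-v{version}",     # versions hang off the DOI
    }

# Hypothetical example DOI and keywords, for illustration only.
record = make_metadata("10.1000/example.doi",
                       ["CO2 reduction", "electrosynthesis"])
```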
### 2.2. Making data openly accessible
_Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions._
The data openly available by default are those in dataset DS0 in the table above. For the other datasets, as indicated, only the subsets of data that the EIC has specifically designated as publishable will be made available, for confidentiality reasons.
_Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their data closed if relevant provisions are made in the
consortium agreement and are in line with the reasons for opting out._
Depending on the type of data, they can be kept closed by specific beneficiaries, in line with the Consortium Agreement provisions.
_How will the data be made accessible (e.g. by deposition in a repository)?_
Data indicated as openly accessible will be deposited in a repository for each dataset, whose link will be indicated in the publicly accessible part of the OCEAN web site.
_What methods or software tools are needed to access the data?_
The data will be in an exchangeable format such as word, excel, PFD, P&ID or
other graphical/vectorial data.
_Is documentation about the software needed to access the data included?_
The software needed is common commercial software, identifiable from the file extensions.
_Is it possible to include the relevant software (e.g. in open source code)?_
No: the data will not be in open source code, but will require commercial software.
_Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible._
We prefer to use repositories managed by the beneficiaries, in order to have better control.
_Have you explored appropriate arrangements with the identified repository?_
It is in progress.
_If there are restrictions on use, how will access be provided?_
Access will be granted upon specific request, together with personal data and an indication of the intended use.
_Is there a need for a data access committee?_
Access will be decided by EIC.
_Are there well described conditions for access (i.e. a machine readable
license)?_
Yes, conditions for access will be described.
_How will the identity of the person accessing the data be ascertained?_
Requesters will be asked to demonstrate specifically who will access the data and for what purpose.
### 2.3. Making data interoperable
_Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?_
Within the restrictions indicated above, the data will allow data exchange and re-use between researchers, institutions, organisations, countries, etc. Given their characteristics, however, there are no available (open) software applications, and recombination with different datasets from different origins will not be possible.
_What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?_
We will use DOIs and keywords, together with commercially widely available software, for data interoperability.
_Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?_
Yes, as indicated above. However, a variety of data types with different characteristics will be produced, which do not fit the requirements of standard vocabularies for all data types present in the datasets.
_In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?_
Yes, we will provide the necessary indications.
### 2.4. Increase data re-use (through clarifying licences)
_How will the data be licensed to permit the widest re-use possible?_
Only the data specifically designated by the EIC will be open; they will not need a specific license, except for possible restrictions from copyrights associated with journal policies.
_When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible._
The embargo policy will depend on the specific journals in which publication takes place. The scientific quality of the journal, the readership and the type of audience, rather than open access, will be the criteria for deciding where to publish. Most open access journals have an unacceptably low scientific level; they are made chiefly for business.
_Are the data produced and/or used in the project useable by third parties, in
particular after the end of the project? If the re-use of some data is
restricted, explain why._
Decisions on how to maintain the repository after the end of the project are postponed to a later stage of the project.
_How long is it intended that the data remains re-usable?_
For now, it is planned that the data remain accessible for the duration of the project, because no resources exist for maintenance after this period. However, decisions on how to maintain the repository after the end of the project are postponed to a later stage of the project.
_Are data quality assurance processes described?_
Data quality assurance is guaranteed by making available only data from publications in peer-reviewed journals.
## 3\. Allocation of resources
_What are the costs for making data FAIR in your project?_
No costs are foreseen.
_How will these be covered? Note that costs related to open access to research
data are eligible as part of the Horizon 2020 grant (if compliant with the
Grant Agreement conditions)._
No costs for making data FAIR are indicated in the Grant Agreement; a way to cover such costs will therefore have to be identified.
_Who will be responsible for data management in your project?_
The project and scientific coordinators.
_Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?_
No, they have not been identified, and the decision is postponed to a later stage of the project.
## 4\. Data security
_What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?_
Standard procedures for data security will be followed.
_Is the data safely stored in certified repositories for long term
preservation and curation?_
No, because there are currently no resources for a certified repository. We will investigate whether such resources can be identified.
## 5\. Ethical aspects
_Are there any ethical or legal issues that can have an impact on data
sharing? These can also be discussed in the context of the ethics review. If
relevant, include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA)._
No
_Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?_
Yes. The GDPR rules of the EU will be followed.
## 6\. Other issues
_Do you make use of other national/funder/sectorial/departmental procedures
for data management? If yes, which ones?_
No.
| https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0531_CoSIE_770492.md |
collected in each sub-work packages. The master document is maintained by TUAS
and made accessible via the project’s management platform Basecamp (see more
on D9.1). The document includes contact persons from an academic partner for
each country, who are in charge of the national level of data management
during the project. While each partner is expected to send an update relating
to their data management activities twice a year to the project coordinator,
each partner also updates the tool locally on a monthly basis (see Appendix
1). For reuse purposes, the document includes brief descriptions of each
datasets.
A protocol is in place for monitoring the collection, consistency and quality
of all data (quantitative and qualitative) collected during the CoSIE
project. This includes guidelines about the data collection procedures, use of
standardized instruments, handling of missing data, entry of data, data
destruction, use of software and data shells, storing of data, access to data,
etc. Adherence to the protocol will be monitored by TUAS.
# Storage and Backup
All data will be collected, accessed and stored locally according to the
legislative framework of each partner country with the support of DPOs at each
partner organization. While data entry and storage will be managed locally and
researchers will be permitted to keep a copy of local data at their site,
anonymized data from each local site is transferred through a secure network
on a yearly basis to the project archive administered by the project
coordinator. TUAS is responsible for the backup and recovery of the archive.
In order to avoid data losses, a common backup mechanism will be utilised
ensuring data reliability.
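One common building block of such a backup mechanism is an integrity check comparing cryptographic digests of the source and backup copies. The sketch below is a minimal illustration of that idea under stated assumptions; it is not the actual CoSIE tooling.

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Stream a file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(source, backup):
    """True if the backup file is byte-identical to the source file."""
    return sha256_of(source) == sha256_of(backup)
```

Running `verify_copy` after each transfer catches silent corruption before the local copy is discarded.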
Hard copies of consent forms, information sheets as well as all audio and
visual data will be stored by WP leaders on university or partner
organizations’ premises in secure environments.
The data collected is confidential and only the members of the CoSIE
consortium have access to it during the project. Each research site is
responsible for the access, security, backup and recovery of the data they
have collected and stored.
All user and agency level data collected during the project will receive
unique identifiers and all identifying information (name, date of birth, etc.)
will be removed before data is transferred to project’s archive in the secure
network. Lists linking unique identifiers with identifying information will be
stored separately and securely, and will only be accessible by local principal
investigators and project manager (TUAS).
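The pseudonymization step described above can be sketched as follows: identifying fields are replaced by a unique identifier, and the linkage between identifiers and identities is kept in a separate structure that would be stored securely elsewhere. This is an illustrative sketch, not CoSIE code, and the field names are assumptions.

```python
import uuid

# Assumed identifying fields; the real list would follow the protocol.
IDENTIFYING_FIELDS = {"name", "date_of_birth"}

def pseudonymize(record, linkage):
    """Strip identifying fields, assign a unique ID, record the linkage."""
    uid = str(uuid.uuid4())
    linkage[uid] = {k: v for k, v in record.items() if k in IDENTIFYING_FIELDS}
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    cleaned["id"] = uid
    return cleaned

linkage = {}  # stored separately and securely, never transferred
shared = pseudonymize(
    {"name": "A. Person", "date_of_birth": "1980-01-01", "response": "agree"},
    linkage,
)
```

Only `shared` would travel to the project archive; `linkage` stays with the local principal investigator.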
All processed data packages (with no sensitive information) are uploaded to a
Microsoft Sharepoint cloud folder maintained by TUAS, requiring a Microsoft
account from all partners. The folder enables joint use and storage during the
dynamic phase and immediately after the project. In the case of Community
Reporting videos, the files will be available through the website
_https://communityreporter.net/_ . All data shared through any channel will
also be available via the CoSIE website.
Despite a strong emphasis on data security and privacy, the CoSIE consortium
recognizes risks still accrue in relation to possible breaches of
confidentiality and anonymity. Where confidentiality is nonetheless
threatened, relevant records will be destroyed.
# Ethics and Legal Compliance
The CoSIE consortium will carry out the project in compliance with the highest
standards of research ethics and applicable international, EU and national
law. This compliance is also a contractual obligation for the consortium. The
grant agreement mentions the importance of following ethical principles:
honesty, reliability, objectivity, impartiality, open communication, duty of
care, fairness and responsibility for future science generations. The CoSIE
consortium is fully aware of the ethics and privacy issues stemming from the
deployment of ICT-related technologies and social media. This approach is
especially important as the project relates to the pervasiveness of a
technology, which many people do not understand, and which becomes even more
evident when gathering information from social media. See more on D10.1 and
D10.2.
All participants have the right to know how ethical issues are addressed in
the pilot they are participating in. The CoSIE project is conducted in full
compliance with European Union legislation, conventions and declarations. No
data collected is sold or used for any purposes other than the current
project. A data minimization policy is adopted at all levels of the project
and is supervised by each WP leader. Moreover, any shadow (ancillary) personal
data obtained during the course of the pilots is immediately deleted.
However, the plan is to minimize this kind of ancillary data as much as
possible.
The CoSIE IP plan covers the following issues: project’s knowledge management,
access to background data and knowledge, ownership and transfer of ownership
of results, protection and exploitation of results as well as settlement of
disputes (see more in D9.2). Partners will be given training on intellectual
property and copyright issues. Where appropriate, a flexible IP rights
licensing model such as Creative Commons will be utilised to ensure an
appropriate level of IP rights protection for the content creators while
allowing easy sharing of information.
# The FAIR Approach
The CoSIE consortium is committed to making appropriate research data publicly
available as per the Open Science & Research Initiative in Finland (
_http://openscience.fi/_ ) and EU’s data management guidance (
_http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-
cuttingissues/open-access-data-management/data-management_en.htm_ ) . In
order to ensure the maximum impact of the collated data (both raw data and
processed datasets) in the CoSIE project, each dataset is managed in one of
two separate processes based on its possibility for anonymization.
1. When a dataset has been anonymized, it is sent by TUAS to Finnish Social Science Data Archive (FSD, https://www.fsd.uta.fi/en/) for long-term archiving. Full anonymization is required, as based on the EU Data Protection Regulation (GDPR) and the Finnish Personal Data Act, FSD requires the removal of all identifiers before submitting data to their system. Also, FSD does not archive audio-visual material.
All related metadata is gathered by TUAS according to the FSD metadata
template. Through FSD, which is a service provider for CESSDA (Consortium of
European Social Science Data Archives), the datasets are findable by their
Data Catalogue. CESSDA adheres to the FAIR principles and aims at being a part
of the European Science Cloud. Additionally, the data is also findable via the
national non-field-specific Etsin metadata search service available in
Finland.
2. If a dataset cannot be anonymized without significant loss of data, it is made available through the CoSIE website with archival copies saved at TUAS. Like in the process regarding anonymized data, also these copies will be available via the national Etsin metadata search service. Also, all appropriate audio-visual material is offered to be disseminated for further research by the Language Bank of Finland (https://www.kielipankki.fi/language-bank/).
Both of the above mentioned processes provide a unique identifier to the
material, which makes it easy to refer to the data in all further research.
Suitable naming conventions are utilized, and special attention will be paid
to versioning to ensure clean datasets. Also the used terminology within the
datasets is kept as general as possible for inter-disciplinary utilisation.
The data will remain at the TUAS project archive for the duration of the
project as well as for at least 2 years after the end of the project.
The metadata is going to be produced according to the requirements of the
Etsin and FSD services in addition to appropriate project-specific information
like Discipline as well as Geographical and Temporal coverage of the data.
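A metadata record carrying the project-specific fields mentioned above (Discipline, Geographical and Temporal coverage) might be assembled as sketched below. The actual Etsin/FSD schemas are not reproduced here; the field names and example values are illustrative assumptions.

```python
def dataset_metadata(title, discipline, geographical, start, end):
    """Assemble a minimal metadata record with assumed field names."""
    return {
        "title": title,
        "discipline": discipline,
        "geographical_coverage": geographical,
        "temporal_coverage": {"start": start, "end": end},
    }

# Hypothetical example values, for illustration only.
meta = dataset_metadata("CoSIE pilot interviews", "Social sciences",
                        "Finland", "2018-01", "2020-12")
```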
Furthermore, the online text categorization and analysis software developed in
WP5 will be made freely available to researchers and developers. Based on open
source software (Drupal & General Public License 2), the software can be
easily extended and new methods added. The software will be stored on the
server managed by TUAS.
Social media data collected in WP5 will not be shared publicly. The social
media layer utilised in the project will employ third party social media
platforms, such as Facebook, Twitter, YouTube, Instagram and similar. As the
CoSIE project consortium cannot have ownership of the data stored onto the
social media platforms – instead the data is stored in ICT systems operated by
third party social media organizations – the privacy concerns regarding any
information posted and stored on any social media platform depend on the
contracts (usually Terms of Service) and agreements between the user and the
social media platform operator. This directly implies that the CoSIE
consortium cannot affect the content of this contract between the user and the
social media operator.
As a principle, all anonymized data will be made freely available to the
research community during or shortly after the project has ended. All
exceptions must be approved by the CoSIE Executive Board.
The CoSIE consortium is committed to comply with Open Access publishing
principles. In order to ensure the free dissemination of scientific knowledge,
the CoSIE researchers aim to publish their papers written within the scope of
the project in open access journals (gold open access) or via self-archiving
(green open access). The objective is that articles will be freely available to
the public immediately upon publication.
The costs of open access publishing principles have been taken into account in
the project’s budget. WP5 will co-operate with the service provider who is
specialized for crawling and scraping extensive amount of data from social
media platforms. Also these costs have been taken into account in the
project’s budget. There are no other specific financial or performance
considerations (of which the CoSIE consortium is currently aware) that might influence either short- or long-term management of the data.
**Appendix 1. The Summary & Instructions sheet and an empty Partner-Specific
Sheet from the CoSIE Data Management Tool.**
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0532_ClimeFish_677039.md
# Introduction
Over the course of a research project, substantial amounts of data are either
generated within the project, or collected from existing sources and collated.
Often, this data is not made available to researchers in related areas,
meaning considerable time and effort are spent gathering similar data. There
is thus a need to promote reuse of data by making it accessible to a
wider audience. As a participant in the Horizon 2020 Open Research Data Pilot
(ORD Pilot), ClimeFish will take measures to ensure that collected research
data is made accessible and to enable future sharing, thereby promoting reuse
of data and transparency.
ClimeFish follows the FAIR Data principles (DG RTD 2016a; Wilkinson et.al.
2016), which entails making data:
* Findable
* Accessible
* Interoperable
* Reusable
The ORD initiative towards making research data openly accessible is based on
the principle of " _as open as possible, as closed as necessary_ " (DG RTD
2016b). This means that while all data is in principle suitable for being made
public, exceptions are made for data that contains sensitive or otherwise
proprietary information. While the initiative applies primarily to data
underpinning scientific results and publications, it also covers all other
data used within the project.
As part of this initiative, one of the objectives of Work Package 2 is " _to
connect ClimeFish to relevant data repositories where the generated data can
be made available after the project end_ ". In order to achieve this, plans
for archiving and sharing data must be made. This is handled as part of task
2.5, and involves " _updating the data management plan, providing accurate
metadata descriptions, and uploading relevant data to the Climate-ADAPT_
_platform and to the H2020 Research Data Pilot_ ".
This deliverable specifies how and where data should be uploaded, what
provisions must be made to ensure access, etc. No actual data will be uploaded
at this point. The data upload itself will not take place until nearer the
project end, and will be documented in deliverable 2.4 " _Final list of data
collected and collated, with archiving and sharing in effect_ ", planned for
M45.
# Data collected
As part of the H2020 ORDP, a data management plan (DMP) has been developed.
The DMP details what data the project generates, how that data will be
archived, barriers to making data publicly available, etc. It contains one
form per dataset. The DMP was written as part of Deliverable 2.1, and is
updated once within each 18-month reporting period, ensuring an up-to-date
overview of the data used in ClimeFish.
Version 2 of the DMP (written in October 2017) contains 39 forms covering all
16 ClimeFish case studies. Table 1 below shows the full list of datasets for
the four main case study areas: "Marine fisheries", "Lake and pond
production", "Marine aquaculture", and "European waters overall". A
standardized naming convention has been used for all datasets, based on the
following structure: < _Case study code_ > – < _Species_ > – < _Geographic
area_ > – < _Dataset description_ >. Where applicable, the time period covered
by the dataset is included. If the data covers three species or more, the
collective term "Several species" has been used.
_Table 1 ClimeFish datasets as of October 2017_
<table>
<tr>
<th>
**Marine fisheries**
</th> </tr>
<tr>
<td>
C1F – Several species – Northeast Atlantic – Catch statistics, 2005-2015
</td> </tr>
<tr>
<td>
C1F – Several species – Northeast Atlantic – Stock size and recruitment,
2005-2015
</td> </tr>
<tr>
<td>
C1F – Several species – Iceland – Fishing vessels, production, exports and
catch values, 2005-2015
</td> </tr>
<tr>
<td>
C1F – Several species – Northeast Atlantic – Biological and physical data,
2005-2015
</td> </tr>
<tr>
<td>
C1F – Mackerel – Northeast Atlantic – Trawl survey data, 2007-2017
</td> </tr>
<tr>
<td>
C1F – Several species – Northeast Atlantic – Stock assessment
</td> </tr>
<tr>
<td>
C2F – Herring, sprat – Baltic Sea – Environmental, biological and fishery data
</td> </tr>
<tr>
<td>
C3F – Cod – Baltic Sea – Environmental, biological and fishery data
</td> </tr>
<tr>
<td>
C4F – Cod, haddock – Barents Sea – Environmental data
</td> </tr>
<tr>
<td>
C5F – Hake, cod – West of Scotland – Trawl survey data
</td> </tr>
<tr>
<td>
C5F – Hake, cod – West of Scotland – Economic data
</td> </tr>
<tr>
<td>
C5F – Hake, cod – West of Scotland – Oceanographic data
</td> </tr>
<tr>
<td>
C5F – Hake, cod – West of Scotland – Simulation data
</td> </tr>
<tr>
<td>
C6F – Demersal fishery – Adriatic Sea – Demersal fishery landings data
</td> </tr>
<tr>
<td>
**Lake and pond production**
</td> </tr>
<tr>
<td>
C7F – Several species – Norway – Freshwater fish survey data
</td> </tr>
<tr>
<td>
C7F – Several species – Norway – Limnological data
</td> </tr>
<tr>
<td>
C7F – Several species – Norway – IBM modeling output
</td> </tr>
<tr>
<td>
C8F – Whitefish, arctic char – Lake Garda – Fishery data
</td> </tr>
<tr>
<td>
C9F – Several species – Czech Republic, The Netherlands – Gillnet catch data
</td> </tr>
<tr>
<td>
C9F – Several species – Czech Republic – Angling data
</td> </tr>
<tr>
<td>
C9F – Several species – Czech Republic – Limnological data
</td> </tr>
<tr>
<td>
C10A – Carp, catfish – Hungary – Industry data, 2000 to date
</td> </tr>
<tr>
<td>
C10A – Carp, catfish – Hungary – Farm production and input data
</td> </tr>
<tr>
<td>
C10A – Carp, catfish – Hungary – Simulation data
</td> </tr>
<tr>
<td>
**Marine aquaculture**
</td> </tr>
<tr>
<td>
C11A – Salmon – Chile – Environmental and production data
</td> </tr>
<tr>
<td>
C11A – Salmon - Norway – Environmental and production data
</td> </tr>
<tr>
<td>
C11A – Salmon – Scotland – Environmental and production data
</td> </tr>
<tr>
<td>
C12A – Seabass, meagre – Greece – Growth, consumption and temperature data
</td> </tr>
<tr>
<td>
C12A – Seabass, meagre – Greece – Simulation data
</td> </tr>
<tr>
<td>
C13A – Blue mussel, carpet shell – Spain – Meteorological data
</td> </tr>
<tr>
<td>
C13A – Blue mussel, carpet shell – Spain – Environmental data
</td> </tr>
<tr>
<td>
C13A – Blue mussel, carpet shell – Spain – Harmful algal blooms data
</td> </tr>
<tr>
<td>
C13A – Blue mussel, carpet shell – Spain – Mussel larvae settlement and
recruitment data
</td> </tr>
<tr>
<td>
C13A – Blue mussel, carpet shell – Spain – Weight ratio
</td> </tr>
<tr>
<td>
C13A – Blue mussel, carpet shell – Spain – Simulation data
</td> </tr>
<tr>
<td>
C14A – Shellfish – Scotland – Environmental and production data
</td> </tr>
<tr>
<td>
C15A – Blue mussel, carpet shell – Northern Adriatic Sea – Environmental and
production data
</td> </tr>
<tr>
<td>
**European waters overall**
</td> </tr>
<tr>
<td>
C16AF – Several species – European waters – Production by production source,
1950-2014
</td> </tr>
<tr>
<td>
C16AF – Several species – European waters – Life history traits matrix
</td> </tr> </table>
While the majority of data is not proprietary and can be shared freely,
certain datasets containing commercially sensitive information such as
production figures and socio-economic data collected in agreement with
industry actors are subject to restrictions.
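The standardized naming convention described above is mechanical enough to sketch in code. The helper below is purely illustrative (it is not part of the ClimeFish tooling) and assumes the en-dash separator, the optional time period, and the "Several species" rule for three or more species:

```python
def dataset_name(case_code, species, area, description, period=None):
    """Build a dataset name following the ClimeFish convention:
    <Case study code> - <Species> - <Geographic area> - <Description>[, period].
    Three or more species collapse to the collective term "Several species".
    Hypothetical helper for illustration only."""
    if isinstance(species, (list, tuple)):
        species = "Several species" if len(species) >= 3 else ", ".join(species)
    # the convention uses an en dash (U+2013) as the field separator
    name = " \u2013 ".join([case_code, species, area, description])
    return f"{name}, {period}" if period else name
```

Applied to the first entry of Table 1, `dataset_name("C1F", ["cod", "haddock", "mackerel"], "Northeast Atlantic", "Catch statistics", "2005-2015")` reproduces "C1F – Several species – Northeast Atlantic – Catch statistics, 2005-2015".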
# Data archiving and sharing
As noted in the DMP, the majority of data is currently stored in in-house
repositories belonging to the different ClimeFish partners. However, in order
to be made available to both the research community and the wider public,
data, publications and similar resources must be deposited in an open
access data repository. The OpenAIRE guidelines recommend four ways of
selecting a suitable data repository, in order of preference (OPENAIRE 2016):
1. Use an external data archive or repository already established for your research domain to preserve the data according to recognised standards in your discipline.
2. If available, use an institutional research data repository, or your research group’s established data management facilities.
3. Use a cost-free data repository such as Zenodo.
4. Search for other data repositories here: re3data.org. On top of specific research disciplines you can filter on access categories, data usage licenses, trustworthy data repositories (with a certificate or explicitly adhering to archival standards) and whether a repository gives the data a persistent identifier.
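The four-step order of preference can be read as a simple fallback chain. The sketch below is a hypothetical illustration of that logic only; the repository names passed in are placeholders, not project decisions:

```python
def choose_repository(domain_repo=None, institutional_repo=None):
    """Apply the OpenAIRE order of preference for selecting a data
    repository. Illustrative helper; arguments are placeholder names."""
    if domain_repo:
        return domain_repo          # 1. established domain archive
    if institutional_repo:
        return institutional_repo   # 2. institutional / research-group repository
    return "Zenodo"                 # 3. cost-free catch-all (4. else search re3data.org)
```

For example, a case study with an established domain archive would land on that archive, while one with neither a domain nor an institutional option falls through to Zenodo.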
As noted in the DMP, some of the data used consists of data gathered from
public databases, either in the form of premade datasets downloaded from a
website, or datasets generated through the database in question. In general,
for data already openly accessible on the web, only a link to the original
source along with necessary metadata is needed, rather than copies of the
original datasets themselves. Original data generated by the project, however,
must be uploaded in full to a repository.
As specified in the Description of Action, applicable data from the ClimeFish
project will be shared on the Climate-ADAPT platform. Climate-ADAPT is a
joint initiative between several DGs within the European Commission and the
European Environment Agency, allowing for sharing research data, case studies,
map data, publications, or other resources pertaining to climate change. More
specifically, Climate-ADAPT contains information pertaining to the following
categories:
* Publications and reports
* Information portals
* Guidance documents
* Tools
* Research and knowledge projects
* Adaptation options
* Case studies
* Organisations
* Indicators
In order to submit content to the platform, users must request an EIONET
account. Access can be requested by emailing the EIONET helpdesk (
[email protected]_ ) and providing your name, email address and
organisation. When submitted, metadata will be evaluated by the EEA/ETC-CCA
team before being accepted for publishing.
Specific details on the necessary information and metadata for each of the
nine information categories can be found through:
* _http://climate-adapt.eea.europa.eu/help/share-your-info_
Data uploaded to the Climate-ADAPT platform must be related to " _climate
change impacts, vulnerability and adaptation in Europe"_ . Therefore, data not
pertaining to " _impacts, vulnerability, and adaptation_ ", such as data
focusing solely on climate change mitigation, or data not related to Europe,
is outside of the scope of the platform.
Data not within the scope of the Climate-ADAPT platform will be uploaded to
Zenodo if no other suitable repository within the given scientific domain
exists, or if the institution in question does not have a suitable open access
data repository of its own. Zenodo is a "catch-all" repository, hosted by
CERN and part of the OpenAIRE project. All data uploaded to Zenodo is given a
DOI (digital object identifier), a persistent identifier that is linked to
and uniquely identifies objects such as reports, datasets, etc. Zenodo
supports DOI versioning, meaning that if data is updated, each version is
given its own unique DOI. In order to upload content to Zenodo, users must
register an account. Registration is open to all parties.
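Uploads to Zenodo go through its REST deposition API. The sketch below only assembles the minimal deposition metadata; the field names follow the public Zenodo API documentation, while the helper itself and all values shown are illustrative rather than project tooling:

```python
def zenodo_metadata(title, description, creators, keywords=()):
    """Assemble a minimal Zenodo deposition metadata payload for a
    dataset. Field names follow the public Zenodo REST API; this
    helper is an illustration, not part of the project's tooling."""
    return {
        "metadata": {
            "upload_type": "dataset",              # datasets, not publications
            "title": title,
            "description": description,
            "creators": [{"name": n} for n in creators],
            "keywords": list(keywords),
            "access_right": "open",                # ORD Pilot default: open access
        }
    }
```

Such a payload would be sent when creating a deposition, after which the data files themselves are uploaded and the record published, at which point Zenodo mints the DOI.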
Figure 1 illustrates the proposed timeline for archiving and sharing data
towards the project end. As per the ORDP, the DMP must be updated within each
18-month reporting period. The deadline for the next update is planned for
M43. Participants will be asked to update their respective forms, with special
attention paid to whether the data contains sensitive information, whether it
can be shared, and, if so, to identifying suitable open access repositories in
instances where Climate-ADAPT is not appropriate.
Project participants will be asked to upload either full copies of the
datasets or simply the associated metadata to applicable open access data
repositories themselves. If needed, Nofima staff involved in WP2 can assist in
making data available. All applicable data will be uploaded to a repository by
M45. This will be documented as part of Deliverable 2.4 "Final list of data
collected, with archiving and sharing in effect".
_Figure 1 Timeline for archiving and sharing of ClimeFish project data:
project partners requested to update datasets, metadata, etc. (M42); final
version of data management plan (M43); D2.4 Final list of data collected, with
archiving and sharing in effect (M45); project end (M48)._
0533_PASSION_780326.md
# EXECUTIVE SUMMARY
The present document represents “D1.3 Data Management Plan” of the PASSION
Project. The project aims at the development of new photonic technologies to
support agile metro networks, enabling capacities of Tb/s per spatial channel,
100 Tb/s per link and Pb/s per node over increased transport distances. The
project breakthroughs are achieved by developing all the essential photonic
building blocks. On the transmitter side, a novel 3D stacked modular design
will be developed combining a silicon photonics (SiPh) circuit layer with
directly modulated high-bandwidth 1550nm VCSEL light sources. At the receiver
side, a novel InP-based coherent receiver array which handles polarization on
chip, making off-chip polarization handling unnecessary, will be implemented.
Finally, the partners will develop a compact and cost-effective switching
concept which can support the Pb/s capacities generated by the transceiver
modules, using a combination of InP and SiPh PICs.
In order to achieve the claimed objective project partners will have to
collect and manage data related to several domains: suppliers’ data (e.g.
projects, technical details of market technologies), technological details
delivered throughout the project (e.g. tests results), partners’ personal data
(e.g. emails of partners personnel), and data collected from dissemination
activities (e.g. data collected through presentations, communication through
social media).
The purpose of this document is to provide the plan for managing the data
generated and collected during the project; the Data Management Plan (DMP).
Specifically, the DMP describes the data management life cycle for all
datasets to be collected, processed and/or generated by a research project. It
covers:
* the handling of research data during and after the project
* which data will be collected, processed or generated
* which methodology and standards will be applied
* whether data will be shared/made open and how
* how data will be curated and preserved
Following the EU’s guidelines regarding the DMP, this document is prepared for
M6 and will be updated, if necessary, during the project lifetime (in the form
of an updated deliverable).
# INTRODUCTION
## DMP AND AUDIENCE
This document is the PASSION Data Management Plan that the project consortium
is required to create as the project participates to the Open Research Data
pilot. The DMP describes the data management life cycle for all datasets to be
collected, processed and/or generated by a research project.
The intended audience for this document is the PASSION consortium, the
European Commission, and all stakeholders (e.g. technology suppliers, research
community, etc.) who will interact with the project in several forms and whose
data will be collected by the project partners.
## METHODOLOGY
The methodology used by the consortium to prepare the document is as follows.
* The objectives and the general structure of the DMP were illustrated and shared with partners.
* A template for the collection of data was defined.
* Each partner was invited to prepare an individual contribution to the DMP.
* Contributions were integrated and revised.
## STRUCTURE OF THE DMP
For each identified data set the DMP provides a description of the following
elements.
* Data summary, describing the types of data, the origin of the data (generated internally or collected from external sources), how they fit into the project (where they are produced or collected and what contribution is provided to project objectives).
* Details on how to make data findable, including provisions for metadata.
* Details on how to access data and how data will be findable and accessible.
* Details on how to make data interoperable.
* Policies to support re-use and sharing of data.
* Resources necessary to support the collection and maintenance of data.
* Policies to guarantee a secure management of data.
* Ethical aspects and other issues.
These elements correspond to the sections of the document. The elements are
provided for each partner.
# INDIVIDUAL DMP
For each partner, details on data sets and how they will be produced, managed,
and shared in accordance with EU indications on DMP are provided in the
following sections.
## DATA SUMMARY
<table>
<tr>
<th>
1\. POLIMI
</th>
<th>
POLIMI will collect data on:
1. test measurement of transmitters, receivers, and node characterization of device, systems, and sub-systems
2. simulation of systems and sub-systems performance
3. simulation of device design for systems and sub-systems
Collection of data is instrumental to achieve the following objectives of the
project:
Objective 1. Design and development of photonic technologies for the
realization of a new generation of energy-efficient and compact transmitter
(Tx) modules for the metro network enabling up to Tb/s capacity per PIC.
Objective 2. Design and development of photonic technologies for the
realization of a new generation of compact, flexible receiver (Rx) modules for
the metro network, able to sustain the PASSION sliceable-bandwidth/bitrate
approach.
Objective 4. Design and development of scalable and modular S-BVT
architectures, allowing to adaptively generate multiple flows of Tb/s capacity
and enabling up to 100 Tb/s aggregated capacity per link.
Objective 5. Development of scalable and modular metro network architectures
for subsystem sharing and functional reuse to support flexible agile
spectrum/spatial switching addressing capacities of Pb/s per node.
In particular, data will be used in the following WPs: 2, 3, 5.
Data are generated by project activities; no re-use of previously generated
data is foreseen in this context.
In the context of the project, data are useful for the technology partners
(VTT, TUE, VERT, CTTC, EFP, NICT), for technology suppliers (SMO, TID), for
dissemination partners (EPIC).
Data format will comprise Excel files (.xls), Matlab files (.mat, .dat), txt.
Access to data will be granted to:
* all partners of the PASSION project
* external organisations that will submit an access request to POLIMI and be approved (if necessary after consultation with the other PASSION partners).
</th> </tr>
<tr>
<td>
2\. CTTC
</td>
<td>
CTTC will collect data on:
1. Measurements and experimental analysis of transmitters, receivers, systems, and sub-systems, related to the PASSION S-BVT architectures supported by the developed photonic technologies and devices
2. Simulation of systems and sub-systems for suitable transceiver architecture design and performance evaluation
3. Overall setup time attained in the targeted project demonstrations when automatically programming an end-to-end optical connection involving transmitters, receivers and optical switch nodes developed according to the devised PASSION solutions.
4. Devised data model (YANG) and related encoding (JSON or XML) for configuring and retrieving status of the PASSION network elements and devices.
5. Collection of different performance metrics (i.e., connection blocking, average SBVT utilization, average optical spectrum usage, control setup time, etc.) when dynamically provisioning optical flows aligned with the defined PASSION use cases
The collection of this data is crucial to cope with the following PASSION
objectives:
Objective 4. Design and development of scalable and modular S-BVT
architectures, allowing to adaptively generate multiple flows of Tb/s capacity
and
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
enabling up to 100 Tb/s aggregated capacity per link
Objective 5. Development of scalable and modular metro network architectures
for subsystem sharing and functional reuse to support flexible agile
spectrum/spatial switching addressing capacities of Pb/s per node.
In particular, the generated data will be used within WP2 and WP5.
Data are generated by project activities; specific existing data (such as
systems and sub-systems specifications) on CTTC laboratory facilities and
ADRENALINE Testbed® are envisioned to be shared (re-used) if needed by the
project.
In the context of the project, data are useful for the rest of the PASSION
partners including technology partners, technology suppliers and dissemination
partners.
Data format will comprise Excel files (.xls), Word Documents (.doc), .txt, as
well as specific files for the used data model (.yang) and the protocol
encoding (.xml or .json)
Access to data will be granted to:
* all partners of the PASSION project
* external organisations that will submit an access request to CTTC either directly or through the coordinator POLIMI and be approved (if necessary after consultation with the other PASSION partners).
</th> </tr>
<tr>
<td>
3\. TUE
</td>
<td>
TUE will collect data on:
1. simulation of device design for systems and sub-systems
2. devices fabrication
3. test measurement of transmitters, receivers, and node characterization of device, systems, and sub-systems
Collection of data is instrumental to achieve the following objectives of the
project:
Objective 1. Design and development of photonic technologies for the
realization of a new generation of energy-efficient and compact transmitter
(Tx) modules for the metro network enabling up to Tb/s capacity per PIC
Objective 2. Design and development of photonic technologies for the
realization of a new generation of compact, flexible receiver (Rx) modules for
the metro network, able to sustain the PASSION sliceable- bandwidth/bitrate
approach.
Objective 3: Development of energy-efficient and small-footprint switching
technologies for a node featuring functional aggregation/disaggregation,
together with switching in the space and wavelength domain in order to handle
1-Pb/s capacity.
Objective 4. Design and development of scalable and modular S-BVT
architectures, allowing to adaptively generate multiple flows of Tb/s capacity
and enabling up to 100 Tb/s aggregated capacity per link
Objective 5. Development of scalable and modular metro network architectures
for subsystem sharing and functional reuse to support flexible agile
spectrum/spatial switching addressing capacities of Pb/s per node.
Data are generated by project activities; no re-use of previously generated
data is foreseen in this context.
In the context of the project, data are useful for the technology partners
(POLIMI, VTT, VERT, CTTC, EFP, NICT), for technology suppliers (SMO, TID), for
dissemination partners (EPIC).
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Data format will comprise Excel files (.xls), Matlab files (.mat, .dat), .txt,
GDS files (.gds) and photos (.jpg).
Access to data will be granted to:
\- all partners of the PASSION project
</th> </tr>
<tr>
<td>
4\. VTT
</td>
<td>
VTT will collect data on:
1. Optical, Electrical, Thermomechanical Simulations of Silicon Photonic chips, fiber-coupled packaged transmitter modules with VCSEL and node switching sub-assemblies.
2. Test measurement of the above at room temperature and operating temperatures
Collection of data is instrumental to achieve the following objectives of the
project:
Objective 1. Design and development of photonic technologies for the
realization of a new generation of energy-efficient and compact transmitter
(Tx) modules for the metro network enabling up to Tb/s capacity per PIC
Objective 2. Highly dense packaging design and manufacturing technologies for
VCSEL-based, energy-efficient, Tb/s capacity transmitter (Tx) modules.
In particular, data will be mainly used in the following WPs: 3, 4.
Data are generated by project activities; no re-use of previously generated
data is foreseen in this context.
In the context of the project, data are useful for the technology partners
(TUE, VERT, VLC, OPSYS, POLIMI), for technology suppliers (SMO, TID), for
dissemination partners (EPIC).
Data format will comprise Excel files (.csv, .xls), Matlab files (.mat, .dat),
and .txt.
Access to data will be granted to:
* all partners of the PASSION project
* external organisations that will submit an access request to POLIMI and be approved (if necessary after consultation with the other PASSION partners).
</td> </tr>
<tr>
<td>
5\. VERT
</td>
<td>
Vertilas will collect data on:
1. Test data of VERTILAS VCSELs
2. Test data of VERTILAS VCSELs with laser drivers
3. Data on optical coupling of VCSELs and PICs
4. Data on assembly and integration of VCSELs and PICs
This data is required to achieve the following objectives:
Objective 1. Verify the VCSEL design and laser production parameters.
Objective 2. Characterise the VCSELs for project partners for system design
concept, module integration and performance evaluation.
Objective 3. Set requirements and operation parameters to operate VCSELs with
other components and achieve optimized performance.
Objective 4. Provide data to partners to derive system requirements from
VCSEL functionality and performance.
Objective 5. Define and support component integration techniques.
Data generated by VERTILAS is useful for project partners, e.g. POLIMI, VTT,
TUE, CTTC and others. For dissemination, data can be provided to EPIC.
Data formats used will be mainly Excel files, graphs and text (Word,
PowerPoint). Data will be made accessible to project partners for the system
design and component integration.
</td> </tr>
<tr>
<td>
6\. VLC
</td>
<td>
VLC will collect data on:
1. Simulation data for the design and layout of photonic integrated circuits.
2. Characterization of photonic integrated circuits and building blocks.
Collection of data is instrumental to achieve the following objectives of the project:
Objective 1. Improving the performance of the target PICs through iterative
design based on characterization data.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Objective 2. Performance evaluation towards achieving the target goals.
Objective 3. Validating the fabrication platforms.
In particular, data will be used in the following WPs: 3, 4, and partially 5.
Data are generated by project activities; no re-use of previously generated
data is foreseen in this context.
In the context of the project, data might be useful for the technology
partners (VTT, TUE, POLIMI, VERT, OPSYS, SMO, EFP, ETRI).
Data format will comprise Excel files (.xls), Matlab files (.mat, .dat), .txt.
GDS design files will not be shared.
Access to data will be granted to:
* all requesting partners of the PASSION project
* external organisations that will submit an access request to POLIMI and be approved (if necessary after consultation with the other PASSION partners).
</th> </tr>
<tr>
<td>
7\. OPSYS
</td>
<td>
OPSYS will collect data on:
1. Characterization data for different types of switches and WSSs; data on
the performance of the different node parts and, ultimately, on sub-system and
system-level performance, including different transmission scenarios.
2. Data on the techno-economic analysis of the proposed solutions.
3. Data on the recommended device packaging design, reliable for sub-system
integration.
Collection of data is instrumental to achieve the following objectives of the
project:
Objective 1. Design and development of photonic technologies for the
realization of a new generation of energy-efficient and compact transmitter
(Tx) modules for the metro network enabling up to Tb/s capacity per PIC
Objective 3: Development of energy-efficient and small-footprint switching
technologies for a node featuring functional aggregation/disaggregation,
together with switching in the space and wavelength domain to handle 1-Pb/s
capacity. In particular:
* Design of the optical switching node with added flexibility through implementation of different levels of aggregation, as in spectrum and in space, to improve effective and agile usage of the traffic pipes.
* Design of compact and low number of electrodes WSSs and low insertion loss high connectivity WDM and multicast switches (MCSs).
Objective 4. Design and development of scalable and modular S-BVT
architectures, allowing to adaptively generate multiple flows of Tb/s capacity
and enabling up to 100 Tb/s aggregated capacity per link
Objective 5. Development of scalable and modular metro network architectures
for subsystem sharing and functional reuse to support flexible agile
spectrum/spatial switching addressing capacities of Pb/s per node.
In particular, data will be used in the following WPs: 2, 3, 4, 5\.
Data are generated by project activities; no re-use of previously generated
data is foreseen in this context.
In the context of the project, data are useful for the technology partners
(TUE, ETRI, VLC, EFP, VTT), for technology suppliers (SMO, TID, VERTILAS), for
dissemination partners (EPIC).
Data format will comprise Excel files (.xls), Matlab files (.mat, .dat), txt.
Access to data will be granted to:
* all partners of the PASSION project
* external organisations that will submit an access request to POLIMI and be approved (if necessary after consultation with the other PASSION partners).
</td> </tr>
<tr>
<td>
8\. EFP
</td>
<td>
EFP will collect data on:
* Chip level DC qualification (i.e. quantify DC figures of merit of components)
* Chip level RF qualification (i.e. quantify electro-optical response of components)
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
* Wafer level qualification (i.e. photoluminescence measurements)
* Prototype DC&RF qualification (i.e. verify prototype design satisfy requirements)
Collection of these data is key to achieve:
Objective 2. Design and development of photonic technologies for the
realization of a new generation of compact, flexible receiver (Rx) modules for
the metro network, able to sustain the PASSION sliceable- bandwidth/bitrate
approach.
Yet, as all project objectives are correlated, feedback from the above is
relevant to the achievement of the other project objectives. No re-use of
previously generated data is foreseen.
Data will be stored in custom text files and, in the case of wafer level
qualification, additional PL reports generated by software from the equipment
vendor. Only relevant plots and/or values of relevant figures of merit will be
shared with project partners when necessary and/or requested.
</th> </tr>
<tr>
<td>
9\. SMO
</td>
<td>
SMO will collect data on:
1. measurement of the telecommunication nodes and related sub-system, which will be developed and/or integrated during the Project development.
2. simulation results that will be produced during the project execution will be part of the collected data.
Data collection will be mainly used to:
1\. characterise the behaviour of the developed systems and sub-systems,
according to the Project objectives. In particular:
* transmitter (Tx) and receiver (Rx) modules for the metro network enabling up to Tb/s capacity per PIC;
* optical nodes and switching capacity;
* scalable and modular metro network architectures;
* demonstration results and statistics
Data format will include the most common file format (e.g. Excel files _.xls_
, Matlab files _.mat_ , _.dat_ , Word files _.doc_ , _.docx_ , PowerPoint
files _.ppt_ , _.pptx_ , Text files _.txt_ , etc.)
Access to data will be granted to:
* all partners of the PASSION project
* external organisations that will submit an access request to POLIMI as coordinator; PASSION partners will approve the access grant.
</td> </tr>
<tr>
<td>
10\. TID
</td>
<td>
Telefonica is collecting and sharing information about real network topologies
and their physical characteristics for network design and techno-economic
analysis. Telefonica is not providing any information about customer data.
</td> </tr>
<tr>
<td>
11\. EPIC
</td>
<td>
EPIC will collect data on:
1. End-user companies interested in the technology developed by PASSION
2. Companies interested in providing the components to the PASSION supply chain for the next generation of metro networks based on PASSION technology
3. Companies interested in the standards developed or adopted by PASSION
4. European Projects competing with/complementing PASSION
Collection of data is instrumental to achieve the following objectives of the
project:
* Objective 1: provide recommendations on migration and roadmap for industrialization;
* Objective 2: coordinate and perform project results dissemination, giving appropriate visibility of PASSION to the relevant European, national and international forums;
* Objective 3: promote the technical results of PASSION to the European and global research community (e.g. setting up a project web site, dissemination events);
* Objective 4: coordination of dissemination activities (e.g. participation in conferences, contribution to scientific journals, organization of workshops and events, etc.);
* Objective 5: exchange with other projects active in neighbouring fields with similar focus (within and possibly outside EU HORIZON2020);
* Objective 6: generate a software tool for metro network design useful for the operators willing to exploit PASSION technologies, devices and architectures;
* Objective 7: participate in international standardization bodies
Data are generated by attending the events, organizing the workshops and also
through the website and social media; no re-use of previously generated data
is foreseen in this context.
In the context of the project, data are useful for the technology partners
(VTT, TUE, VERT, CTTC, EFP, NICT), for technology suppliers (SMO, TID), and
for dissemination partners (EPIC).
Data format will comprise Excel files (.xls) and txt.
Access to data will be granted to:
* all partners of the PASSION project
* external organisations that will submit an access request and be approved (if necessary after consultation with the other PASSION partners).
</td> </tr>
<tr>
<td>
12\. NICT
</td>
<td>
NICT will collect data on:
1\. test measurements of transmitters, receivers, and nodes; characterization
of devices, systems, and sub-systems
Collection of data is instrumental to achieve the following objectives of the
project:
Objective 1. Design and development of photonic technologies for the
realization of a new generation of energy-efficient and compact transmitter
(Tx) modules for the metro network enabling up to Tb/s capacity per PIC.
Objective 2. Design and development of photonic technologies for the
realization of a new generation of compact, flexible receiver (Rx) modules for
the metro network, able to sustain the PASSION sliceable-bandwidth/bitrate
approach.
Objective 4. Design and development of scalable and modular S-BVT
architectures, allowing to adaptively generate multiple flows of Tb/s capacity
and enabling up to 100 Tb/s aggregated capacity per link
Objective 5. Development of scalable and modular metro network architectures
for subsystem sharing and functional reuse to support flexible agile
spectrum/spatial switching addressing capacities of Pb/s per node.
In particular, data will be used in WP 5.
Data are generated by project activities; no re-use of previously generated
data is foreseen in this context.
In the context of the project, data are useful for the technology partners
(VTT, TUE, VERT, CTTC, EFP, NICT), for technology suppliers (SMO, TID), for
dissemination partners (EPIC).
Data format will comprise Excel files (.xls), Matlab files (.mat, .dat), txt.
Access to data will be granted to:
* all partners of the PASSION project
* external organisations that will submit an access request to NICT and be approved (if necessary after consultation with the other PASSION partners).
</td> </tr>
<tr>
<td>
13\. ETRI
</td>
<td>
ETRI will collect data on:
1. Mask design of the photonic space switch (polymer based optical matrix switch).
2. Simulation results of switching characteristics (e.g. BPM simulation)
3. Device (chip) characterization results:
* optical insertion loss, polarization dependent loss, extinction ratio, electrical switching power
* waveguide cross-section analysis, SEM & EDAX
</td> </tr>
<tr>
<td>
</td>
<td>
Objective of the data collection:
\- Development of energy-efficient and small-footprint switching technologies
for a node featuring functional aggregation/disaggregation, together with
switching in the space domain.
Data will be used in the following WPs: 4, 5.
Data are generated by project activities; no re-use of previously generated
data is foreseen in this context.
</td> </tr> </table>
## MAKING DATA FINDABLE
<table>
<tr>
<th>
1\.
POLIMI
</th>
<th>
•
</th>
<th>
Data will be stored in a shared folder called PASSION DATA
(https://www.dropbox.com/sh/gr43zcj8vmqaeox/AAAV1NftJ85mNngZRmNGXWXDa?dl=0)
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Naming convention will be: [WP#]_[CODE]_ [CONTENT DATA]_[V].[EXT]
* [WP#] indicates the number of the WP that generated the data (optional)
* [CODE] indicates the type of data (SIM for simulation, TST for test, DES for design)
* [V] indicates the version of the file
* [EXT] is the extension of the file
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Access to data will be guaranteed for the duration of the project and for 6
months after the end of the project
</td> </tr>
<tr>
<td>
2\.
CTTC
</td>
<td>
•
</td>
<td>
Data will be stored in a shared folder called PASSION DATA
(https://www.dropbox.com/sh/gr43zcj8vmqaeox/AAAV1NftJ85mNngZRmNGXWXDa?dl=0)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Naming convention will be: [WP#]_[CODE]_ [CONTENT DATA]_[V].[EXT]
* [WP#] indicates the number of the WP that generated the data (optional)
* [CODE] indicates the type of data (e.g. SIM for simulation, TST for test, DES for design, …)
* [V] indicates the version of the file
* [EXT] is the extension of the file
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Access to data will be guaranteed for the duration of the project and for 6
months after the end of the project
</td> </tr>
<tr>
<td>
3\. TUE
</td>
<td>
•
</td>
<td>
Data will be stored in a shared folder
( _https://www.dropbox.com/sh/njorvbpkal6hptw/AADdsrhZ-lUmztQm7e2laUgKa?dl=0_
)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Naming convention will be: [WP#]_[CODE]_ [CONTENT DATA]_[V].[EXT]
* [WP#] indicates the number of the WP that generated the data (optional)
* [CODE] indicates the type of data (SIM for simulation, TST for test, DES for design)
* [V] indicates the version of the file
* [EXT] is the extension of the file
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Access to data will be guaranteed for the duration of the project and for 6
months after the end of the project
</td> </tr>
<tr>
<td>
4\. VTT
</td>
<td>
•
</td>
<td>
Data will be stored in a shared folder called PASSION DATA
(https://www.dropbox.com/sh/gr43zcj8vmqaeox/AAAV1NftJ85mNngZRmNGXWXDa?dl=0)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Naming convention will be: [WP#]_[CODE]_ [CONTENT DATA]_[V].[EXT]
* [WP#] indicates the number of the WP that generated the data (optional)
* [CODE] indicates the type of data (SIM for simulation, TST for test, DES for design)
* [V] indicates the version of the file
* [EXT] is the extension of the file
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Access to data will be guaranteed for the duration of the project and for 6
months after the end of the project
</td> </tr> </table>
<table>
<tr>
<td>
5\.
VERT
</td>
<td>
* Technical information and VCSEL parameters for project partners will be stored on the shared folder (Repository URL will be provided once ready)
* Naming convention will be: [WP#]_[CODE]_ [CONTENT DATA]_[V].[EXT]
* [WP#] indicates the number of the WP that generated the data (optional)
* [CODE] indicates the type of data (SIM for simulation, TST for test, DES for design)
* [V] indicates the version of the file
* [EXT] is the extension of the file
Access to data will be guaranteed for the duration of the project and for 6
months after the end of the project
</td> </tr>
<tr>
<td>
6\. VLC
</td>
<td>
* Data will be stored in a shared folder called PASSION DATA
(https://www.dropbox.com/sh/gr43zcj8vmqaeox/AAAV1NftJ85mNngZRmNGXWXDa?dl=0 )
* Naming convention will be: [WP#]_[CODE]_ [CONTENT DATA]_[V].[EXT]
* [WP#] indicates the number of the WP that generated the data (optional)
* [CODE] indicates the type of data (SIM for simulation, TST for test, DES for design)
* [V] indicates the version of the file
* [EXT] is the extension of the file
* Access to data will be guaranteed for the duration of the project and for 6 months after the end of the project
</td> </tr>
<tr>
<td>
7\.
OPSYS
</td>
<td>
* Data will be stored in a shared folder called PASSION DATA
(https://www.dropbox.com/sh/gr43zcj8vmqaeox/AAAV1NftJ85mNngZRmNGXWXDa?dl=0 )
* Naming convention will be: [WP#]_[CODE]_ [CONTENT DATA]_[V].[EXT]
* [WP#] indicates the number of the WP that generated the data (optional)
* [CODE] indicates the type of data (SIM for simulation, TST for test, DES for design)
* [V] indicates the version of the file
* [EXT] is the extension of the file
* Access to data will be guaranteed for the duration of the project and for 6 months after the end of the project
</td> </tr>
<tr>
<td>
8\. EFP
</td>
<td>
* Data are stored in an in-house database
* Since only relevant plots and/or values of relevant figures of merit will be provided when necessary and/or requested, the internally used naming conventions do not need to be shared
</td> </tr>
<tr>
<td>
9\. SMO
</td>
<td>
* All the relevant data collected will be sent to the Project Coordinator. POLIMI will store and manage the data in a shared folder called PASSION DATA
(https://www.dropbox.com/sh/gr43zcj8vmqaeox/AAAV1NftJ85mNngZRmNGXWXDa?dl=0)
* Naming convention will be: [WP#]_[CODE]_ [CONTENT DATA]_[V].[EXT]
* [WP#] indicates the number of the WP that generated the data (optional)
* [CODE] indicates the type of data (SIM for simulation, TST for test, DES for design)
* [V] indicates the version of the file
* [EXT] is the extension of the file
* Access to data will be guaranteed for the duration of the project and for 6 months after the end of the project
</td> </tr>
<tr>
<td>
10\. TID
</td>
<td>
• Reference Network topologies and fibre characteristics are reported in D21.
</td> </tr>
<tr>
<td>
11\. EPIC
</td>
<td>
* Data will be stored in a shared folder called PASSION DATA
(https://www.dropbox.com/sh/gr43zcj8vmqaeox/AAAV1NftJ85mNngZRmNGXWXDa?dl=0 )
* Naming convention will be: [Name of the company]_[Website of the company]_[Name of the contact]_ [Details of the company]
* Access to data will be guaranteed for the duration of the project and for 6 months after the end of the project
</td> </tr>
<tr>
<td>
12\. NICT
</td>
<td>
* Data will be stored in a shared folder (Dropbox URL)
* Naming convention will be: [WP#]_[CODE]_ [CONTENT DATA]_[V].[EXT]
* [WP#] indicates the number of the WP that generated the data (optional)
* [CODE] indicates the type of data (SIM for simulation, TST for test, DES for design)
* [V] indicates the version of the file
* [EXT] is the extension of the file
* Access to data will be guaranteed for the duration of the project and for 6 months after the end of the project
</td> </tr>
<tr>
<td>
13\. ETRI
</td>
<td>
•
</td>
<td>
Data will be stored in a shared folder called PASSION DATA
(https://www.dropbox.com/sh/gr43zcj8vmqaeox/AAAV1NftJ85mNngZRmNGXWXDa?dl=0 )
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Raw measurement data will be stored in the local server of ETRI
(Measurement data will be shared to the project cloud shared folder if
requested)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Naming convention will be: [WP#]_[CODE]_ [CONTENT DATA]_[V].[EXT]
* [WP#] indicates the number of the WP that generated the data (optional)
* [CODE] indicates the type of data (SIM for simulation, TST for test, DES for design)
* [V] indicates the version of the file
* [EXT] is the extension of the file
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Access to data will be guaranteed for the duration of the project and for 6
months after the end of the project
</td> </tr> </table>
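Most partners adopt the same file naming convention, [WP#]_[CODE]_[CONTENT DATA]_[V].[EXT]. As a purely illustrative aid (not part of the DMP itself), the sketch below shows how such a name could be parsed and checked mechanically; the exact separators, the version spelling, and the field names used here are assumptions, since the convention text does not fix them precisely.

```python
import re
from typing import Optional

# Hypothetical pattern for [WP#]_[CODE]_[CONTENT DATA]_[V].[EXT].
# Assumptions: "WP" prefix on the work-package number, the three CODE values
# listed in the DMP (SIM, TST, DES), and no underscores inside CONTENT DATA.
PATTERN = re.compile(
    r"^(?:WP(?P<wp>\d+)_)?"         # optional work-package number
    r"(?P<code>SIM|TST|DES)_"       # data type: simulation, test or design
    r"(?P<content>[A-Za-z0-9-]+)_"  # free-text content description
    r"(?P<version>[vV]?\d+)"        # file version
    r"\.(?P<ext>[A-Za-z0-9]+)$"     # file extension
)

def parse_name(filename: str) -> Optional[dict]:
    """Return the convention fields of a file name, or None if it does not match."""
    m = PATTERN.match(filename)
    return m.groupdict() if m else None

print(parse_name("WP3_TST_VCSEL-spectrum_v2.mat"))
print(parse_name("notes.txt"))
```

A check like this could run as a pre-upload hook on the shared folder, so that non-conforming names are caught before they reach the repository.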
## MAKING DATA OPENLY ACCESSIBLE
<table>
<tr>
<th>
1\. POLIMI
</th>
<th>
•
</th>
<th>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner (coordinator)
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software, Matlab, etc.)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr>
<tr>
<td>
2\. CTTC
</td>
<td>
•
</td>
<td>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner (coordinator)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software, etc.)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr>
<tr>
<td>
3\. TUE
</td>
<td>
•
</td>
<td>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software, Matlab, etc.)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr>
<tr>
<td>
4\. VTT
</td>
<td>
•
</td>
<td>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner (coordinator)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software, Matlab, etc.)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr>
<tr>
<td>
5\. VERT
</td>
<td>
•
</td>
<td>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner (coordinator)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software, Matlab, etc.)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr>
<tr>
<td>
6\. VLC
</td>
<td>
•
</td>
<td>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner (coordinator)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software, Matlab, etc.).
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files.
</td> </tr>
<tr>
<td>
7\. OPSYS
</td>
<td>
•
</td>
<td>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner (coordinator)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software, Matlab, etc.)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr>
<tr>
<td>
8\. EFP
</td>
<td>
•
</td>
<td>
Only relevant plots and/or values of relevant figures of merit will be
provided to project partners when necessary and/or requested. This will be
done by e-mail or on project’s shared folder (access granted by the
coordinator, who is the repository owner)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications of common use (e.g. Microsoft
Office, Matlab, etc.)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr>
<tr>
<td>
9\. SMO
</td>
<td>
•
</td>
<td>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner (coordinator)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software, Matlab, etc.)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
The project will not provide the tools and the licenses that may be needed to
read the stored data.
</td> </tr>
<tr>
<td>
10\. TID
</td>
<td>
•
</td>
<td>
Reference networks and cost models will be included in public deliverables
</td> </tr>
<tr>
<td>
11\. EPIC
</td>
<td>
•
</td>
<td>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner (coordinator)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr>
<tr>
<td>
12\. NICT
</td>
<td>
•
</td>
<td>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner (coordinator)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software, Matlab, etc.)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr>
<tr>
<td>
13\. ETRI
</td>
<td>
•
</td>
<td>
Data will be available to all project partners through access to the shared
folder. Access to folder will be granted by the repository owner (coordinator)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Data can be accessed through applications that depend on the format of the
data (e.g. MS Office or other office software, Matlab, etc.)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
No metadata will be associated to the files
</td> </tr> </table>
## MAKING DATA INTEROPERABLE
<table>
<tr>
<th>
1\. POLIMI
</th>
<th>
•
</th>
<th>
No specific issues of interoperability are present
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since data will be “flat” and consist of one single entity with attributes,
structure of data will not be described and the schema of the table will be
representative of the structure
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Mapping with ontologies will not be provided
</td> </tr>
<tr>
<td>
2\. CTTC
</td>
<td>
•
</td>
<td>
No specific issues of interoperability are present
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since data will be “flat” and consist of one single entity with attributes,
structure of data will not be described and the schema of the table will be
representative of the structure
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Mapping with ontologies will not be provided
</td> </tr>
<tr>
<td>
3\. TUE
</td>
<td>
•
</td>
<td>
No specific issues of interoperability are present
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since data will be “flat” and consist of one single entity with attributes,
structure of data will not be described and the schema of the table will be
representative of the structure
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Mapping with ontologies will not be provided
</td> </tr>
<tr>
<td>
4\. VTT
</td>
<td>
•
</td>
<td>
No specific issues of interoperability are present
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since data will be “flat” and consist of one single entity with attributes,
structure of data will not be described and the schema of the table will be
representative of the structure
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Mapping with ontologies will not be provided
</td> </tr>
<tr>
<td>
5\. VERT
</td>
<td>
•
</td>
<td>
No specific issues of interoperability are present
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since data will be “flat” and consist of one single entity with attributes,
structure of data will not be described and the schema of the table will be
representative of the structure
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Mapping with ontologies will not be provided
</td> </tr>
<tr>
<td>
6\. VLC
</td>
<td>
•
</td>
<td>
No specific issues of interoperability are present.
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
PDAflow foundation standards will be used to compile any PDK or design
library, making them interoperable with the main photonic design frameworks.
</td> </tr>
<tr>
<td>
7\. OPSYS
</td>
<td>
•
</td>
<td>
No specific issues of interoperability are present
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since data will be “flat” and consist of one single entity with attributes,
structure of data will not be described and the schema of the table will be
representative of the structure
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Mapping with ontologies will not be provided
</td> </tr>
<tr>
<td>
8\. EFP
</td>
<td>
•
</td>
<td>
Not applicable
</td> </tr>
<tr>
<td>
9\. SMO
</td>
<td>
•
</td>
<td>
No specific issues of interoperability are present
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since data will be “flat” and consist of one single entity with attributes,
structure of data will not be described and the schema of the table will be
representative of the structure
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Mapping with ontologies will not be provided
</td> </tr>
<tr>
<td>
10\. TID
</td>
<td>
•
</td>
<td>
No issues on interoperability are expected
</td> </tr>
<tr>
<td>
11\. EPIC
</td>
<td>
•
</td>
<td>
No specific issues of interoperability are present
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since data will be “flat” and consist of one single entity with attributes,
structure of data will not be described and the schema of the table will be
representative of the structure
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Mapping with ontologies will not be provided
</td> </tr>
<tr>
<td>
12\. NICT
</td>
<td>
•
</td>
<td>
No specific issues of interoperability are present
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since data will be “flat” and consist of one single entity with attributes,
structure of data will not be described and the schema of the table will be
representative of the structure
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Mapping with ontologies will not be provided
</td> </tr>
<tr>
<td>
13\. ETRI
</td>
<td>
•
</td>
<td>
No specific issues of interoperability are present
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since data will be “flat” and consist of one single entity with attributes,
structure of data will not be described and the schema of the table will be
representative of the structure
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Mapping with ontologies will not be provided
</td> </tr> </table>
## INCREASE DATA REUSE
<table>
<tr>
<th>
1\. POLIMI
</th>
<th>
•
</th>
<th>
Use of data will be freely available to all partners following the access
rights regulated by the PASSION CA
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
After the end of the project, partners will agree on the kind of access to
data and on the limitations (including embargo periods)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Quality of data will be guaranteed through repetition of tests
</td> </tr>
<tr>
<td>
2\. CTTC
</td>
<td>
•
</td>
<td>
Use of data will be freely available to all partners following the access
rights regulated by the PASSION CA
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
After the end of the project, partners will agree on the kind of access to
data and on the limitations (including embargo periods)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Quality of data will be guaranteed through repetition of tests
</td> </tr>
<tr>
<td>
3\. TUE
</td>
<td>
•
</td>
<td>
Use of data will be freely available to all partners following the access
rights regulated by the PASSION CA
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
After the end of the project, partners will agree on the kind of access to
data and on the limitations (including embargo periods)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Quality of data will be guaranteed through repetition of tests
</td> </tr>
<tr>
<td>
4\. VTT
</td>
<td>
•
</td>
<td>
Use of data will be freely available to all partners following the access
rights regulated by the PASSION CA
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
After the end of the project, partners will agree on the kind of access to
data and on the limitations (including embargo periods)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Quality of data will be guaranteed through repetition of tests, if
specifically authorized by the project coordinator
</td> </tr>
<tr>
<td>
5\. VERT
</td>
<td>
* Licensing of VERTILAS data is not planned
* Use of data will be freely available to all partners following the access rights regulated by the PASSION CA
* After the end of the project, partners will agree on the kind of access to data and on the limitations (including embargo periods)
* Quality of data will be guaranteed through repetition of tests
</td> </tr>
<tr>
<td>
6\. VLC
</td>
<td>
* Use of data will be freely available to all partners for R&D purposes, following the access rights regulated by the PASSION CA
* After the end of the project, partners will agree on the kind of access to data and on the limitations (including embargo periods)
* Quality of data will be guaranteed through repetition of tests
</td> </tr>
<tr>
<td>
7\. OPSYS
</td>
<td>
* Use of data will be freely available to all partners following the access rights regulated by the PASSION CA
* After the end of the project, partners will agree on the kind of access to data and on the limitations (including embargo periods)
* Quality of data will be guaranteed through repetition of tests
</td> </tr>
<tr>
<td>
8\. EFP
</td>
<td>
* Use of data will be freely available to all partners following the access rights regulated by the PASSION CA
* After the end of the project, partners will agree on the kind of access to data and on the limitations (including embargo periods)
* Quality of data will be guaranteed through repetition of tests
</td> </tr>
<tr>
<td>
9\. SMO
</td>
<td>
* Use of data will be freely available to all partners following the access rights regulated by the PASSION CA
* After the end of the project, partners will agree on the kind of access to data and on the limitations (including embargo periods)
* Quality of data will be guaranteed through repetition of tests
</td> </tr>
<tr>
<td>
10\. TID
</td>
<td>
• Reference Networks and cost models included in public deliverables can be
freely reused
</td> </tr>
<tr>
<td>
11\. EPIC
</td>
<td>
* Use of data will be freely available to all partners following the access rights regulated by the PASSION CA
* After the end of the project, partners will agree on the kind of access to data and on the limitations (including embargo periods)
</td> </tr>
<tr>
<td>
12\. NICT
</td>
<td>
* Use of data will be freely available to all partners following the access rights regulated by the PASSION CA
* After the end of the project, partners will agree on the kind of access to data and on the limitations (including embargo periods)
* Quality of data will be guaranteed through repetition of tests
</td> </tr>
<tr>
<td>
13\. ETRI
</td>
<td>
* Use of data will be freely available to all partners following the access rights regulated by the PASSION CA
* After the end of the project, partners will agree on the kind of access to data and on the limitations (including embargo periods)
* Quality of data will be guaranteed through repetition of tests
</td> </tr> </table>
## ALLOCATION OF RESOURCES
<table>
<tr>
<th>
1\. POLIMI
</th>
<th>
•
</th>
<th>
Costs of maintaining the repository of the data are covered by the hosting
institution and will not be charged on the project
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
</td> </tr>
<tr>
<td>
2\. CTTC
</td>
<td>
•
</td>
<td>
Costs of maintaining the repository of the data are covered by the hosting
institution and will not be charged on the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
</td> </tr>
<tr>
<td>
3\. TUE
</td>
<td>
•
</td>
<td>
Data will be kept in Dropbox at no extra cost
</td> </tr>
<tr>
<td>
4\. VTT
</td>
<td>
•
</td>
<td>
Costs of maintaining the repository of data are covered by the hosting
institution and will not be charged on the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
</td> </tr>
<tr>
<td>
5\. VERT
</td>
<td>
•
</td>
<td>
Costs of maintaining the repository of the data are covered by the hosting
institution and will not be charged on the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
</td> </tr>
<tr>
<td>
6\. VLC
</td>
<td>
•
</td>
<td>
Costs of maintaining the repository of the data are covered by the hosting
institution and will not be charged on the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
</td> </tr>
<tr>
<td>
7\. OPSYS
</td>
<td>
•
</td>
<td>
Costs of maintaining the repository of the data are covered by the hosting
institution and will not be charged on the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
</td> </tr>
<tr>
<td>
8\. EFP
</td>
<td>
•
</td>
<td>
Costs of maintaining the repository of the data are covered by the hosting
institution and will not be charged on the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
</td> </tr>
<tr>
<td>
9\. SMO
</td>
<td>
•
</td>
<td>
Costs of maintaining the repository of the data are covered by the hosting
institution and will not be charged on the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
</td> </tr>
<tr>
<td>
10\. TID
</td>
<td>
•
</td>
<td>
Costs are included in the technical WP effort
</td> </tr>
<tr>
<td>
11\. EPIC
</td>
<td>
•
</td>
<td>
Costs of maintaining the repository of the data are covered by the hosting
institution and will not be charged on the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
</td> </tr>
<tr>
<td>
12\. NICT
</td>
<td>
•
</td>
<td>
Costs of maintaining the repository of the data are covered by the hosting
institution and will not be charged on the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
(coordinator)
</td> </tr>
<tr>
<td>
13\. ETRI
</td>
<td>
•
</td>
<td>
Costs of maintaining the repository of the data are covered by the hosting
institution and will not be charged on the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Responsible for the maintenance of the infrastructure is the repository owner
</td> </tr> </table>
## DATA SECURITY
<table>
<tr>
<th>
1\. POLIMI
</th>
<th>
•
</th>
<th>
Since data are stored in a shared cloud repository, the cloud will guarantee
backups of data. Furthermore, the Coordinator will access the data through a
dedicated app that will guarantee replication of data at local level
</th> </tr>
<tr>
<td>
2\. CTTC
</td>
<td>
•
</td>
<td>
Since the data are stored in a shared cloud repository, the cloud service
will guarantee data backups. Furthermore, CTTC will access the data through a
dedicated app that replicates the data locally
</td> </tr>
<tr>
<td>
3\. TUE
</td>
<td>
•
</td>
<td>
Since the data are stored in a shared cloud repository, the cloud service
will guarantee data backups. Furthermore, TUE will access the data through a
dedicated app that replicates the data locally
</td> </tr>
<tr>
<td>
4\. VTT
</td>
<td>
•
</td>
<td>
Since the data are stored in a shared cloud repository, the cloud service
will guarantee data backups. Furthermore, VTT will access the data through a
dedicated app that replicates the data locally
</td> </tr>
<tr>
<td>
5\. VERT
</td>
<td>
•
</td>
<td>
Since the data are stored in a shared cloud repository, the cloud service
will guarantee data backups. Furthermore, VERT will access the data through a
dedicated app that replicates the data locally
</td> </tr>
<tr>
<td>
6\. VLC
</td>
<td>
•
</td>
<td>
Since the data are stored in a shared cloud repository, the cloud service
will guarantee data backups. Furthermore, VLC will access the data through a
dedicated app that replicates the data locally
</td> </tr>
<tr>
<td>
7\. OPSYS
</td>
<td>
•
</td>
<td>
Data are stored in an in-house database and partially shared through the
cloud repository
</td> </tr>
<tr>
<td>
8\. EFP
</td>
<td>
•
</td>
<td>
Data are stored in an in-house database and in local storage with online backup
</td> </tr>
<tr>
<td>
9\. SMO
</td>
<td>
•
</td>
<td>
Since the data are stored in a shared cloud repository, the cloud service
will guarantee data backups. Furthermore, SMO will access the data through a
dedicated app that replicates the data locally
</td> </tr>
<tr>
<td>
10\. TID
</td>
<td>
•
</td>
<td>
Project information is stored in cloud backups.
</td> </tr>
<tr>
<td>
11\. EPIC
</td>
<td>
•
</td>
<td>
Data recovery, as well as secure storage and transfer of sensitive data, will
be addressed
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Since the data are stored in a shared cloud repository, the cloud service
will guarantee data backups. Furthermore, EPIC will access the data through a
dedicated app that replicates the data locally
</td> </tr>
<tr>
<td>
12\. NICT
</td>
<td>
•
</td>
<td>
Since the data are stored in a shared cloud repository, the cloud service
will guarantee data backups. Furthermore, NICT will access the data through a
dedicated app that replicates the data locally
</td> </tr>
<tr>
<td>
13\. ETRI
</td>
<td>
•
</td>
<td>
Since data related to the PASSION project are stored in a shared cloud
repository, the cloud service will guarantee data backups. Furthermore, the
ETRI network blocks unauthorized access from outside, and only limited
information (e.g. e-mail, the electronic approval system) is accessible
through a VPN connection. Unauthorized USB memory devices cannot be used on
computers in the ETRI network.
</td> </tr> </table>
## ETHICS
<table>
<tr>
<th>
1\. POLIMI
</th>
<th>
•
</th>
<th>
No sensitive data are collected. No personal data are collected. No ethical
issues are associated with the process of collecting the data, their content,
or their maintenance
</th> </tr>
<tr>
<td>
2\. CTTC
</td>
<td>
•
</td>
<td>
No sensitive data are collected. No personal data are collected. No ethical
issues are associated with the process of collecting the data, their content,
or their maintenance
</td> </tr>
<tr>
<td>
3\. TUE
</td>
<td>
•
</td>
<td>
No sensitive data are collected. No personal data are collected. No ethical
issues are associated with the process of collecting the data, their content,
or their maintenance
</td> </tr>
<tr>
<td>
4\. VTT
</td>
<td>
•
</td>
<td>
No sensitive data are collected. No personal data are collected. No ethical
issues are associated with the process of collecting the data, their content,
or their maintenance
</td> </tr>
<tr>
<td>
5\. VERT
</td>
<td>
•
</td>
<td>
No sensitive data are collected. No personal data are collected. No ethical
issues are associated with the process of collecting the data, their content,
or their maintenance
</td> </tr>
<tr>
<td>
6\. VLC
</td>
<td>
•
</td>
<td>
No sensitive data are collected. No personal data are collected. No ethical
issues are associated with the process of collecting the data, their content,
or their maintenance
</td> </tr>
<tr>
<td>
7\. OPSYS
</td>
<td>
•
</td>
<td>
No sensitive data are collected. No personal data are collected. No ethical
issues are associated with the process of collecting the data, their content,
or their maintenance
</td> </tr>
<tr>
<td>
8\. EFP
</td>
<td>
•
</td>
<td>
No ethical issues are associated with the process of collecting the data,
their content, or their maintenance
</td> </tr>
<tr>
<td>
9\. SMO
</td>
<td>
•
</td>
<td>
No sensitive data are collected. No personal data are collected. No ethical
issues are associated with the process of collecting the data, their content,
or their maintenance
</td> </tr>
<tr>
<td>
10\. TID
</td>
<td>
•
</td>
<td>
No sensitive data are collected
</td> </tr>
<tr>
<td>
11\. EPIC
</td>
<td>
•
</td>
<td>
Sensitive data are collected. Contacts must agree to share their contact
information.
</td> </tr>
<tr>
<td>
12\. NICT
</td>
<td>
•
</td>
<td>
No sensitive data are collected. No personal data are collected. No ethical
issues are associated with the process of collecting the data, their content,
or their maintenance
</td> </tr>
<tr>
<td>
13\. ETRI
</td>
<td>
•
</td>
<td>
No sensitive data are collected. No personal data are collected. No ethical
issues are associated with the process of collecting the data, their content,
or their maintenance
</td> </tr> </table>
## OTHER
<table>
<tr>
<th>
1\. POLIMI
</th>
<th>
•
</th>
<th>
Not relevant
</th> </tr>
<tr>
<td>
2\. CTTC
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
3\. TUE
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
4\. VTT
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
5\. VERT
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
6\. VLC
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
7\. OPSYS
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
8\. EFP
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
9\. SMO
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
10\. TID
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
11\. EPIC
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
12\. NICT
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr>
<tr>
<td>
13\. ETRI
</td>
<td>
•
</td>
<td>
Not relevant
</td> </tr> </table>
---
Source: https://phaidra.univie.ac.at/o:1140797 (Horizon 2020, 0536_C3-Cloud_689181.md)
# EXECUTIVE SUMMARY
This document is the Data Management Plan for the C3-Cloud project. Its
purpose is to provide an inventory of the kinds of data that are being
generated within the project. For each category, this document indicates:
where and how the data are generated; their purpose; whether they are personal
data or not; how they are safeguarded; and what opportunity there might be for
data sharing and wider reuse of the data beyond the project.
The reason for this deliverable is to align with the EC ambition to promote
wider sharing and reuse of data generated by its funded research projects, in
order to grow the scale of data reuse and research potential across Europe.
All of the partners support that ambition, and the consortium has examined
carefully what opportunities might exist to make data assets of the project
available to others downstream.
A significant amount of the data generated in the project is personal data,
captured through the evaluation studies at the three demonstration sites
within the consortium, in the UK, Spain and Sweden. The nature of the ethical
approvals granted at the sites, and the patient consent that will be obtained,
do not permit this information to be shared at subject level, even if
anonymised, beyond the pilot sites. Similar constraints apply to evaluation
questionnaires completed by study participants. These will be collected
anonymously, online, by the lead evaluation partner.
Aggregated research results will be shared beyond the project. The interim and
final evaluation results will be made available in the public deliverables
D9.5 and D9.6. These results will also be included within academic
publications, and in supplementary data submitted online to the journals which
publish our papers. The consortium will make every effort to curate an openly
shareable set of useful aggregated data results and find appropriate channels,
whereby these can be discovered and accessed.
This deliverable presents a summary template for each of the eight categories
of data that we have identified being generated and handled within the
project, as summarised in Table 1 within the main text of the document. The
templates themselves provide a high-level summary of the approach being taken.
More detailed documents on information governance, information security and
the evaluation methodology of the project are given in other deliverables.
# INTRODUCTION
## Open Research Data in Horizon 2020
The European Commission defines open research data as “the data underpinning
scientific research results that has no restrictions on its access, enabling
anyone to access it.” 1
The Commission is running a pilot on open access to research data in Horizon
2020: the Open Research Data (ORD) pilot. The pilot aims to improve and
maximise access to and re-use of research data generated by Horizon 2020
projects, taking into account:
* the need to balance openness and protection of scientific information
* commercialisation and intellectual property rights
* privacy concerns
* security
* data management and preservation questions
Participating projects are required to develop a Data Management Plan, in
which they must specify what data will be open.
## Open Research Data in C3-Cloud
The partners of the C3-Cloud consortium are strongly supportive of open access
and to the principles of open data, data sharing, reusing data resources and
research transparency. The Open Research Data pilot clearly states the need to
balance openness and protection. In the case of C3-Cloud, this protection
relates to the protection of privacy, and not to the protection of partner
interests or exploitation potential. The reason for the latter not being a
concern is because C3-Cloud intends to exploit its foreground software but not
any knowledge derived from data (which will be openly published).
However, the validation of C3-Cloud’s implementation takes place in three
healthcare pilot sites that will collect and use personal data. The project
will primarily respect conformance to the EU GDPR above any wish to make
research data openly accessible. The consortium has considered carefully the
legal basis on which pilot site data will be collected, how they will be
processed and what may be retained post-project. It has concluded that it will
not be possible to provide individual level data as an open access resource to
the research community. Because these patients will have potentially unusual
combinations of disease and other clinical characteristics, the project has
concluded that anonymised patient-level data cannot be published as open
access data. However, aggregate data that shows the utilisation and benefit of
using C3-Cloud solutions will be published, as described further in Section 8.
The majority of the data will remain locally held at each pilot site, retained
for continuity of care and medico-legal purposes, and will not exist as a
central project resource.
This deliverable is the C3-Cloud Data Management Plan. It outlines each of the
different kinds of data that the project will generate, how each will be
managed and protected, and what potential exists for the wider use of the data
beyond the project. This analysis is presented as a series of tables, one per
category of data.
## Categories of data
Table 1 below lists eight categories of data that are being generated within
the C3-Cloud project. The data management plan for each of these eight is
provided as a template in the following eight sections of this document.
<table>
<tr>
<th>
**Category**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Patient-level clinical data, fully identifiable, to be created and used within
each pilot sites exclusively for patient care, and troubleshooting by
technical partners.
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Patient-level clinical data, anonymised, for use in the development of the
discrete event simulation tool.
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Anonymous, individual questionnaire responses on the C3-Cloud users’
perception of the usability, satisfaction and acceptability of the C3-Cloud
components.
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Anonymous, individual data summarising various aspects of system usage
statistics, shared from each pilot site with Empirica, the evaluation lead
partner. The pilot sites will be supported by the technical partners in
extraction of this data from the C3-Cloud platform.
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
System audit logs and other reporting information that assist technical
partners with monitoring and evaluation the performance of technical
components.
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Analysed aggregated data processed by Empirica and shared with the full
consortium as the study results, for inclusion in deliverables and
publications. Scientific reports of the aggregated results in journals and as
European Commission deliverables will be published as open data.
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
Knowledge assets created within the project to populate components (e.g.
harmonised multi-condition guidelines), in human-readable and computable
formats, which might be reusable by others after the project.
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
Educational resources created for and used in the pilot study, which might be
reusable by others after the project.
</td> </tr> </table>
Table 1: Categories of C3-Cloud generated data covered by this DMP
# IDENTIFIABLE PATIENT LEVEL DATA
The pilot sites will collect healthcare and care planning data on enrolled
patients, collected by patients, informal caregivers and healthcare providers
who will use C3-Cloud components to enter and review the data. Some data will
have been imported directly into C3-Cloud components from existing electronic
health record (EHR) systems.
<table>
<tr>
<th>
**Template for reporting the C3-Cloud Data Management Plan**
</th> </tr>
<tr>
<td>
**For what category (1-8) above does this template apply**
</td> </tr>
<tr>
<td>
1\. Patient level clinical data, fully identifiable, to be created and used
within each pilot site exclusively for patient care and troubleshooting by
technical partners.
</td> </tr>
<tr>
<td>
**What kind of data is being collected or processed (high-level description)**
</td> </tr>
<tr>
<td>
**Patient-level demographic and clinical data, fully identifiable, to be
collected and used within each pilot site exclusively for patient care, and
accessed by specific agreement by technical partners for troubleshooting.**
</td> </tr>
<tr>
<td>
**For what purposes are the data being processed in C3-Cloud**
</td> </tr>
<tr>
<td>
Direct patient care.
</td> </tr>
<tr>
<td>
**Where do the data originate (which party or which system creates the
data?)**
</td> </tr>
<tr>
<td>
The data are taken primarily from the EHRs of the pilot sites through direct
electronic interfaces or through data extracts. Data is also entered manually
into the C3-Cloud system by healthcare professionals and patients.
</td> </tr>
<tr>
<td>
**Are the data personal or not (i.e. are they identifiable, pseudonymous,
anonymous, aggregated) - at the point of origin**
**\- when shared within the project**
</td> </tr>
<tr>
<td>
Data are identifiable at all stages.
</td> </tr>
<tr>
<td>
**What is the legal basis for C3-Cloud to process the data if it is personal
according to the GDPR? (e.g. is it with participant consent.) State “Not
applicable” if the data are not personal.**
</td> </tr>
<tr>
<td>
Consent will be obtained from all patients and healthcare professionals to
access their personal information prior to the start of the study, after
ethical approval has been obtained.
</td> </tr>
<tr>
<td>
**With which parties will the data be shared within the consortium?**
</td> </tr>
<tr>
<td>
Only healthcare professionals who are directly involved with the care of the
patient will access identifiable data about patients in the C3-Cloud system.
Pilot sites will only have access to their own data, not the data of other
pilot sites.
With the appropriate data processing agreements in place, C3-Cloud technical
partners may access identifiable data when providing support and maintenance
to the system. Requirements for access will be assessed and authorised on a
case by case basis, i.e. access to data in the system will not be permanently
enabled.
</td> </tr>
<tr>
<td>
**Where and for how long data will be stored, under which partner’s control?**
</td> </tr>
<tr>
<td>
Data will be stored by the pilot sites according to the pilot sites’ own legal
requirements. Secure destruction of the data will take place after this. A
patient’s C3-Cloud record may be extracted as a PDF file and attached to the
patient’s record in the appropriate EHR at the end of the study.
</td> </tr>
<tr>
<td>
**What downstream derived data will be created from this category of data, if
any?**
</td> </tr> </table>
<table>
<tr>
<th>
Aggregated data will be used to support the evaluation of outcomes (Section
8).
</th> </tr>
<tr>
<td>
**What post-project data reuse is expected outside of the consortium, if
any?**
</td> </tr>
<tr>
<td>
None
</td> </tr> </table>
# ANONYMISED PATIENT-LEVEL DATA
Anonymised patient data from all three pilot sites will be used for discrete
event simulations for predictive modelling of large-scale impact assessment.
The data originates from local EHRs and from the C3-Cloud system.
<table>
<tr>
<th>
**Template for reporting the C3-Cloud Data Management Plan**
</th> </tr>
<tr>
<td>
**For what category (1-8) above does this template apply**
</td> </tr>
<tr>
<td>
2\. Patient-level clinical data, anonymised, for use in the development of the
discrete event simulation tool.
</td> </tr>
<tr>
<td>
**What kind of data is being collected or processed (high-level description)**
</td> </tr>
<tr>
<td>
Anonymised, patient-level demographic and clinical data. These will be
extracted from local EHR systems; the technical partners will support the
pilot sites in extracting these data from the EHRs. Data will also originate
from the C3-Cloud system.
</td> </tr>
<tr>
<td>
**For what purposes are the data being processed in C3-Cloud**
</td> </tr>
<tr>
<td>
To develop, validate and run the discrete event simulation-based modelling
tool.
</td> </tr>
<tr>
<td>
**Where do the data originate (which party or which system creates the
data?)**
</td> </tr>
<tr>
<td>
EHR extracts from the local systems of pilot sites and C3-Cloud FHIR
repository.
</td> </tr>
<tr>
<td>
**Are the data personal or not (i.e. are they identifiable, pseudonymous,
anonymous, aggregated) - at the point of origin**
**\- when shared within the project**
</td> </tr>
<tr>
<td>
The data will be anonymous at the point of origin.
The data will be anonymous when sharing within the project.
</td> </tr>
<tr>
<td>
**What is the legal basis for C3-Cloud to process the data if it is personal
according to the GDPR? (e.g. is it with participant consent.) State “Not
applicable” if the data are not personal.**
</td> </tr>
<tr>
<td>
Not applicable
</td> </tr>
<tr>
<td>
**With which parties will the data be shared within the consortium?**
</td> </tr>
<tr>
<td>
Aggregated data will be shared as results in several public deliverables.
</td> </tr>
<tr>
<td>
**Where and for how long data will be stored, under which partner’s control?**
</td> </tr>
<tr>
<td>
Retained securely by University of Warwick for a minimum of ten years.
</td> </tr>
<tr>
<td>
**What downstream derived data will be created from this category of data, if
any?**
</td> </tr>
<tr>
<td>
The results of large-scale impact modelling, evaluating the
estimated/predicted impact of the C3-Cloud application.
</td> </tr>
<tr>
<td>
**What post-project data reuse is expected outside of the consortium, if
any?**
</td> </tr>
<tr>
<td>
The data will be used for the predictive modelling for the project.
No re-use is planned.
</td> </tr> </table>
The variables that will be used are detailed below:
<table>
<tr>
<th>
**Data item**
</th>
<th>
**Value**
</th> </tr>
<tr>
<td>
Patient age
</td>
<td>
54 or younger
55-59
60-64
65-69
70-74
75-79
80-84
85-89
90 or older
Missing value
</td> </tr>
<tr>
<td>
Patient sex
</td>
<td>
male
female
other
Missing value
</td> </tr>
<tr>
<td>
Patient location
</td>
<td>
Basque Country, Spain
Region Jämtland Härjedalen, Sweden
South Warwickshire, UK
Missing value
</td> </tr>
<tr>
<td>
Technology trial group
</td>
<td>
Intervention group
Control group
Missing answer
</td> </tr>
<tr>
<td>
Has the patient an informal caregiver?
</td>
<td>
Yes
No
Missing value
</td> </tr>
<tr>
<td>
Diabetes Mellitus Type II diagnosed?
</td>
<td>
Yes
No
Missing value
</td> </tr>
<tr>
<td>
Heart failure of NYHA class I-II diagnosed?
</td>
<td>
Yes
No
Missing value
</td> </tr>
<tr>
<td>
Renal failure with estimated or measured Glomerular filtration rate GFR of
30-59 diagnosed?
</td>
<td>
Yes
No
Missing value
</td> </tr>
<tr>
<td>
Mild or moderate depression diagnosed?
</td>
<td>
Yes
No
Missing answer
</td> </tr>
<tr>
<td>
For intervention patients: Did patient drop out?
</td>
<td>
Yes
No
Missing value
</td> </tr>
<tr>
<td>
Dropout because of death?
</td>
<td>
Yes
No
Missing value
</td> </tr>
<tr>
<td>
Dropout date
</td>
<td>
\- Insert Date -
Missing value
</td> </tr>
<tr>
<td>
List of all drugs prescribed or administered during C3-Cloud trial period, in
relation to the four inclusion health conditions. All fields required for each
drug.
</td>
<td>
Drug name
ATC classification code
Drug doses
Number of days that the drug was prescribed
Missing value
</td> </tr>
<tr>
<td>
List of all contact dates between the patient and the primary care doctor at
the care centre.
</td>
<td>
List of dates per patient
Missing value
</td> </tr>
<tr>
<td>
List of all remote contact dates between the patient and the primary care
doctor.
</td>
<td>
List of dates per patient
Missing value
</td> </tr>
<tr>
<td>
List of all home visit dates between the patient and the primary care doctor.
</td>
<td>
Date
Missing value
</td> </tr>
<tr>
<td>
List of all contact dates between the patient and the primary care nurses at
the care centre.
</td>
<td>
List of dates per patient
Missing value
</td> </tr>
<tr>
<td>
List of all remote contact dates between the patient and the primary care
nurses.
</td>
<td>
List of dates per patient
Missing value
</td> </tr>
<tr>
<td>
List of all home visit dates between the patient and the primary care nurses.
</td>
<td>
List of dates per patient
Missing value
</td> </tr>
<tr>
<td>
List of all contact dates between the patient and the cardiologist /
cardiology department.
</td>
<td>
List of dates per patient
Missing value
</td> </tr>
<tr>
<td>
List of all contact dates between the patient and the endocrinologist /
endocrinology department.
</td>
<td>
List of dates per patient
Missing value
</td> </tr>
<tr>
<td>
List of all contact dates between the patient and the nephrologist /
nephrology department.
</td>
<td>
List of dates per patient
Missing value
</td> </tr>
<tr>
<td>
List of all contact dates between the patient and the psychiatrist /
psychology department.
</td>
<td>
List of dates per patient
Missing value
</td> </tr>
<tr>
<td>
List of all contact dates between the patient and the internal specialist /
internal medicine department.
</td>
<td>
List of dates per patient
Missing value
</td> </tr>
<tr>
<td>
List of all contact dates between the patient and the Accident and Emergency
department (A&E services).
</td>
<td>
List of dates per patient
A&E diagnosis (ICD-10)
Missing value
</td> </tr>
<tr>
<td>
List of all periods when a patient was hospitalized.
</td>
<td>
Admission date
Discharge date
Admission diagnosis (ICD-10)
Missing value
</td> </tr>
<tr>
<td>
List of all periods when a patient was home hospitalized.
</td>
<td>
Start date
End date
Main diagnosis (ICD-10)
Missing value
</td> </tr>
<tr>
<td>
For control patients:
Did the patient leave the region (loss to follow up)?
</td>
<td>
Yes
No
Missing value
</td> </tr> </table>
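The age bands in the variable list above are what make the extract anonymous at the point of origin: only the band, never the exact age, leaves the pilot site. A minimal sketch of that mapping in Python (the function name and band labels follow the table; they are illustrative, not part of the project software):

```python
def age_band(age: int) -> str:
    """Map an exact age to the anonymised 5-year bands used in the extract."""
    if age <= 54:
        return "54 or younger"
    if age >= 90:
        return "90 or older"
    lower = (age // 5) * 5  # 55-59, 60-64, ..., 85-89
    return f"{lower}-{lower + 4}"

# Only the band is reported downstream, never the exact age.
print(age_band(54), age_band(57), age_band(83), age_band(90))
```

Banding at extraction time, rather than after pooling, keeps the individual-level value out of the shared dataset entirely.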
# ANONYMOUS QUESTIONNAIRE RESPONSES
Pilot site patient participants, informal caregivers and healthcare
professionals will all complete evaluation questionnaires during the
technology trial, about their experience of using the C3-Cloud solution. These
will be anonymous data at the point of capture.
<table>
<tr>
<th>
**Template for reporting the C3-Cloud Data Management Plan**
</th> </tr>
<tr>
<td>
**For what category (1-8) above does this template apply**
</td> </tr>
<tr>
<td>
3\. Anonymous, individual questionnaire responses on the C3-Cloud users’
perception of the usability, satisfaction and acceptability of the C3-Cloud
system.
</td> </tr>
<tr>
<td>
**What kind of data is being collected or processed (high-level description)**
</td> </tr>
<tr>
<td>
Anonymous, individual questionnaire responses on the C3-Cloud users’
perception of the usability, satisfaction and acceptability of the C3-Cloud
system.
</td> </tr>
<tr>
<td>
**For what purposes are the data being processed in C3-Cloud**
</td> </tr>
<tr>
<td>
To evaluate usability, satisfaction and acceptability of the C3-Cloud
components.
</td> </tr>
<tr>
<td>
**Where do the data originate (which party or which system creates the
data?)**
</td> </tr>
<tr>
<td>
Survey responses (data) are created by patients, their informal caregivers
and MDT members in all three pilot sites on an online questionnaire platform
(“LimeSurvey”) hosted on Empirica servers.
</td> </tr>
<tr>
<td>
**Are the data personal or not (i.e. are they identifiable, pseudonymous,
anonymous, aggregated) - at the point of origin**
**\- when shared within the project**
</td> </tr>
<tr>
<td>
The data will be anonymous at the point of origin with four stratifiers: Age
group (5-year ranges), sex, region, user category (MDT or patient).
The data will be aggregated when sharing within the project.
</td> </tr>
<tr>
<td>
**What is the legal basis for C3-Cloud to process the data if it is personal
according to the GDPR?**
**(e.g. is it with participant consent.) State “Not applicable” if the data
are not personal.**
</td> </tr>
<tr>
<td>
Not applicable
</td> </tr>
<tr>
<td>
**With which parties will the data be shared within the consortium?**
</td> </tr>
<tr>
<td>
Aggregated data will be held by Warwick for long-term storage.
Aggregated data will be shared as results in several public deliverables.
</td> </tr>
<tr>
<td>
**Where and for how long data will be stored, under which partner’s control?**
</td> </tr>
<tr>
<td>
Retained securely by University of Warwick for a minimum of ten years.
</td> </tr>
<tr>
<td>
**What downstream derived data will be created from this category of data, if
any?**
</td> </tr>
<tr>
<td>
Questionnaire data will be aggregated and presented in several deliverables.
</td> </tr>
<tr>
<td>
**What post-project data reuse is expected outside of the consortium, if
any?**
</td> </tr>
<tr>
<td>
The data is used to evaluate the usability, satisfaction and acceptability of
the C3-Cloud solutions only.
No re-use is planned.
</td> </tr> </table>
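The template above captures responses anonymously with four stratifiers (age group, sex, region, user category) and shares them only in aggregated form. A minimal sketch of that aggregation step in Python (the record fields, region names and scores are illustrative assumptions, not project data):

```python
# Hypothetical anonymous response records: only the four stratifiers
# plus a questionnaire score, never an identifier.
responses = [
    {"age_group": "70-74", "sex": "F", "region": "Sweden", "user": "patient", "score": 4},
    {"age_group": "70-74", "sex": "F", "region": "Sweden", "user": "patient", "score": 5},
    {"age_group": "60-64", "sex": "M", "region": "UK", "user": "MDT", "score": 3},
]

# Group by stratum, then share only counts and means per stratum.
strata: dict[tuple, list[int]] = {}
for r in responses:
    key = (r["age_group"], r["sex"], r["region"], r["user"])
    strata.setdefault(key, []).append(r["score"])

aggregated = {k: (len(v), sum(v) / len(v)) for k, v in strata.items()}
print(aggregated)
```

Only the `aggregated` summary (count and mean per stratum) would be shared within the project; individual rows stay with the collecting partner.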
The following table lists the questionnaires that will be completed by study
participants. The full questionnaire questions are reported in deliverable
D9.2.
<table>
<tr>
<th>
**Survey**
</th>
<th>
**Questionnaires included in the survey**
</th> </tr>
<tr>
<td>
First survey for patients – Survey for all patients
</td>
<td>
Baseline - UTAUT patients (acceptability of C3-Cloud)
</td> </tr>
<tr>
<td>
Second survey for patients
</td>
<td>
Study end - UTAUT patients (acceptability of C3-Cloud)
</td> </tr>
<tr>
<td>
Detailed survey for 50 patients (number 1) – survey for 150 layer 3 patients
</td>
<td>
Baseline - Patient Questionnaire (usefulness of C3-Cloud for care planning and
empowerment)
Baseline - QUIS7 Patients (Usability questionnaire)
Baseline - Patient Material Output (Evaluation of training material) (video,
information leaflet wallet card)
</td> </tr>
<tr>
<td>
Detailed survey for 50 patients (number 2) – survey for 150 layer 3 patients
</td>
<td>
Study end - Patient Questionnaire (usefulness of C3Cloud for care planning and
empowerment)
Study end - QUIS7 Patients (Usability questionnaire)
Study end - eCCIS patient (System satisfaction questionnaire)
Study end - Patient Material Outputs (Evaluation of training materials
(Leaflets and web pages as well as peer support groups)
</td> </tr>
<tr>
<td>
First survey for MDTs – survey for all MDTs (layer 3 and 4)
</td>
<td>
Baseline - UTAUT MDT (acceptability of C3-Cloud)
Baseline - QUIS7 MDTs (Usability questionnaire)
</td> </tr>
<tr>
<td>
Second survey for MDTs - survey for all MDTs (layer 3 and 4)
</td>
<td>
Study end - MDT Questionnaire (usefulness of C3-Cloud for care planning and
empowerment)
Study end - UTAUT MDT (acceptability of C3-Cloud)
Study end - QUIS7 MDTs (Usability questionnaire)
Study end - eCCIS MDT (System satisfaction questionnaire)
</td> </tr>
<tr>
<td>
First survey for informal caregivers
</td>
<td>
Baseline - eCCIS informal caregivers (System satisfaction questionnaire)
</td> </tr>
<tr>
<td>
Second survey for informal caregiver
</td>
<td>
Study end - eCCIS informal caregiver (System satisfaction questionnaire)
</td> </tr>
<tr>
<td>
Survey about sensor device usage for patients
</td>
<td>
Study end - Device usage patients (feasibility study to show usage of data
from multiple sources)
</td> </tr>
<tr>
<td>
Survey about sensor device usage for
MDTs
</td>
<td>
Study end - Device usage MDTs (feasibility study to show usage of data from
multiple sources)
</td> </tr> </table>
# ANONYMOUS USAGE DATA
Since the C3-Cloud components will log the entry of new data in all modules,
and have audit trails that monitor access as well as data creation, there will
be data that tracks when and how each user has used the system. This will
complement the evaluation questionnaire data to provide insight into the use
made of C3-Cloud by different actors.
<table>
<tr>
<th>
**Template for reporting the C3-Cloud Data Management Plan**
</th> </tr>
<tr>
<td>
**For what category (1-8) above does this template apply**
</td> </tr>
<tr>
<td>
4\. Anonymous, individual data summarising various aspects of system usage
statistics, shared from each pilot site with Empirica. The technical partners
may support the pilot sites in extracting these data from the C3-Cloud
platform.
</td> </tr>
<tr>
<td>
**What kind of data is being collected or processed (high-level description)**
</td> </tr>
<tr>
<td>
Anonymous, individual **data summarising various aspects of system usage
statistics**.
</td> </tr>
<tr>
<td>
**For what purposes are the data being processed in C3-Cloud**
</td> </tr>
<tr>
<td>
To evaluate frequency of use and effectiveness of C3-Cloud components.
</td> </tr>
<tr>
<td>
**Where do the data originate (which party or which system creates the
data?)**
</td> </tr>
<tr>
<td>
FHIR repository extracts at the pilot sites.
</td> </tr>
<tr>
<td>
**Are the data personal or not (i.e. are they identifiable, pseudonymous,
anonymous, aggregated) - at the point of origin**
**\- when shared within the project**
</td> </tr>
<tr>
<td>
The data will be anonymous at the point of origin with four stratifiers: Age
group (5-year ranges), sex, region, user category (MDT or patient).
The data will be aggregated when sharing within the project.
</td> </tr>
<tr>
<td>
**What is the legal basis for C3-Cloud to process the data if it is personal
according to the GDPR?**
**(e.g. is it with participant consent.) State “Not applicable” if the data
are not personal.**
</td> </tr>
<tr>
<td>
Not applicable
</td> </tr>
<tr>
<td>
**With which parties the data will be shared within the consortium?**
</td> </tr>
<tr>
<td>
Anonymous data will be shared with Warwick for long-term storage.
Aggregated data will be shared as results in several public deliverables.
</td> </tr>
<tr>
<td>
**Where and for how long data will be stored, under which partner’s control?**
</td> </tr>
<tr>
<td>
Retained securely by University of Warwick for a minimum of ten years.
</td> </tr>
<tr>
<td>
**What downstream derived data will be created from this category of data, if
any?**
</td> </tr>
<tr>
<td>
None anticipated; the data will be examined internally to monitor system usage
and behaviour.
</td> </tr>
<tr>
<td>
**What post-project data reuse is expected outside of the consortium, if
any?**
</td> </tr>
<tr>
<td>
FHIR data on system usage and effectiveness will be used only for the
reporting within the project. No re-use is planned.
</td> </tr> </table>
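The anonymisation described above reduces each record to four stratifiers (age group in 5-year ranges, sex, region, user category). As a small illustrative sketch only — the function and field names here are hypothetical, not part of the C3-Cloud platform — a pilot site export step could look like this:

```python
def age_group(age, width=5):
    """Map an exact age to a 5-year range label, e.g. 67 -> '65-69'."""
    lower = (age // width) * width
    return f"{lower}-{lower + width - 1}"

def anonymise(record):
    """Keep only the four stratifiers; drop all identifying fields."""
    return {
        "age_group": age_group(record["age"]),
        "sex": record["sex"],
        "region": record["region"],
        "user_category": record["user_category"],  # 'MDT' or 'patient'
    }

# Dummy record for illustration; no real patient data.
print(anonymise({"name": "Alice Example", "age": 67, "sex": "F",
                 "region": "Basque Country", "user_category": "patient"}))
```

The point of the sketch is that identifying fields never leave the pilot site: only the stratifier columns survive the export.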
The usage data are responses to the following questions:
* From which pilot site does the FHIR repository data originate?
* When did the technology trial start at the pilot site?
* When did the technology trial end at the pilot site?
* What is the number of CDS-detected disease-disease interactions at the pilot site over the project time?
* What is the number of CDS-detected disease-drug and drug-disease interactions at the pilot site over the project time?
* What is the number of CDS-detected drug-drug contraindications at the pilot site over the project time? The different types of drug-drug contraindication classifications are: "to be avoided", "used with caution", "requires monitoring", "other considerations", "contraindicated", "safe to use", "not recommended".
* What is the number of all digital PEP communication messages between a patient and their MDT (per patient)?
* Reason for dropout
* What C3DP feedback regarding the CDS was received from clinicians through feedback function?
* List the care plan goals per patient that were defined, including their status (e.g. 'in progress', 'achieved', 'rejected').
* List each type of care plan activity from the activities taxonomy that was prescribed on the patients' care plans and how often it was prescribed during the trial.
* List each care plan activity title that was prescribed on the patients' care plans manually (not from the taxonomy).
* List each care plan goal title from the goals taxonomy that was defined on the patients' care plans and how often it was defined during the trial.
* List each care plan goal title that was defined on the patients' care plans manually (not from the taxonomy).
* What is the conformance level of prescribed and performed weight self-measurements?
* Extract the weight measurement activity attribute and the linked measurements coming from the patient.
* What is the conformance level of prescribed and performed glucose level self-measurements?
* Extract the glucose measurement activity attribute and the linked measurements coming from the patient.
* What is the conformance level of prescribed and performed blood pressure self-measurements?
* Extract the blood pressure measurement activity attribute and the linked measurements coming from the patient.
* What is the conformance level of prescribed and performed heart rate self-measurements?
* Extract the heart rate measurement activity attribute and the linked measurements coming from the patient.
* What is the average care team member session duration per month?
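Several of the questions above ask for a "conformance level" of prescribed versus performed self-measurements. One simple way to compute such a level — shown here purely as an illustrative sketch, since the actual metric is defined by the evaluation team — is the fraction of prescribed measurements that were actually performed:

```python
def conformance_level(prescribed, performed):
    """Fraction of prescribed self-measurements actually performed (capped at 1.0)."""
    if prescribed == 0:
        return 0.0
    return min(performed / prescribed, 1.0)

# e.g. 52 weekly weight measurements prescribed, 39 recorded by the patient
print(f"{conformance_level(52, 39):.0%}")  # prints "75%"
```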
# SYSTEM AUDIT LOGS
In addition to the usage data referred to in the previous section, the audit
logs will contain more detailed system and actor activity records that may
serve to detect or investigate errors and other issues with the software and
networks.
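A minimal sketch of what one such activity record might contain follows; the field names are simplified assumptions for illustration, whereas the actual Audit Record Repository records conform to the IHE ATNA profile:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Simplified CRUD audit entry; the real repository uses IHE ATNA events."""
    timestamp: str
    actor_id: str    # professional or patient identifier
    component: str   # e.g. 'TIS', 'PEP', 'C3DP'
    action: str      # 'create' | 'read' | 'update' | 'delete'
    resource: str    # e.g. 'CarePlan/123'

def log_crud(actor_id, component, action, resource):
    """Build one audit entry stamped with the current UTC time."""
    return AuditRecord(datetime.now(timezone.utc).isoformat(),
                       actor_id, component, action, resource)

rec = log_crud("clinician-42", "C3DP", "update", "CarePlan/123")
print(asdict(rec))
```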
<table>
<tr>
<th>
**Template for reporting the C3-Cloud Data Management Plan**
</th> </tr>
<tr>
<td>
**For what category (1-8) above does this template apply**
</td> </tr>
<tr>
<td>
5\. System audit logs and other reporting information that assists technical
partners with monitoring and evaluating the performance of technical
components.
</td> </tr>
<tr>
<td>
**What kind of data is being collected or processed (high-level description)**
</td> </tr>
<tr>
<td>
The C3-Cloud FHIR Repository stores patient data collected from local care
systems via the Technical Interoperability Layer (TIS) and the Patient
Empowerment Platform (PEP), as well as the care plans created and managed by
the Coordinated Care and Cure Delivery Platform (C3DP). Each Create, Read,
Update or Delete (CRUD) activity performed in this repository is audited to an
Audit Record Repository in conformance with the IHE ATNA Profile. In addition,
each component (TIS, SIS, C3DP and PEP) has its own local system logs.
</td> </tr>
<tr>
<td>
**For what purposes are the data being processed in C3-Cloud**
</td> </tr>
<tr>
<td>
The Audit Record Repository logs are stored and processed to ensure
accountability. The system logs are utilized for logging errors and
performance issues.
</td> </tr>
<tr>
<td>
**Where do the data originate (which party or which system creates the
data?)**
</td> </tr>
<tr>
<td>
The audits of the CRUD activities on the C3-Cloud FHIR repository are created
by the C3-Cloud FHIR repository itself. In addition, each component (i.e. TIS,
SIS, C3DP and PEP) creates its own system logs.
</td> </tr>
<tr>
<td>
**Are the data personal or not (i.e. are they identifiable, pseudonymous,
anonymous, aggregated) - at the point of origin**
**\- when shared within the project**
</td> </tr>
<tr>
<td>
The data stored in audit logs in the audit record repository may contain
patient and professional identifiers. System logs kept for logging errors and
performance issues do not contain identifiable data.
</td> </tr>
<tr>
<td>
**What is the legal basis for C3-Cloud to process the data if it is personal
according to the GDPR? (e.g. is it with participant consent.) State “Not
applicable” if the data are not personal.**
</td> </tr>
<tr>
<td>
Participant consent.
</td> </tr>
<tr>
<td>
**With which parties the data will be shared within the consortium?**
</td> </tr>
<tr>
<td>
These audit logs will be anonymised and aggregated, and will be shared with
the evaluation team (Empirica, Osakidetza) for impact analysis studies.
</td> </tr>
<tr>
<td>
**Where and for how long data will be stored, under which partner’s control?**
</td> </tr>
<tr>
<td>
The audit logs are stored in Audit Record Repository, which will be deployed
at local sites. Hence it will be under the pilot site’s control. It will be
managed based on the data processing rules of local pilot sites.
</td> </tr>
<tr>
<td>
**What downstream derived data will be created from this category of data, if
any?**
</td> </tr>
<tr>
<td>
Performance and effectiveness indicators will be derived from this data and
used as part of the usage data in Section 6\.
</td> </tr>
<tr>
<td>
**What post-project data reuse is expected outside of the consortium, if
any?**
</td> </tr>
<tr>
<td>
None.
</td> </tr> </table>
The data from the C3-Cloud FHIR repository are as follows (part of data in
Section 4):
* Patient age (ranges)
* Patient sex
* Patient location (pilot site)
* Technology trial group
* Has the patient an informal caregiver?
* Diabetes Mellitus Type II diagnosed?
* Heart failure in compliance with NYHA I-II diagnosed?
* Renal failure with estimated or measured Glomerular filtration rate GFR of 30-59 diagnosed?
* Mild or moderate depression diagnosed?
* For intervention patients: Did patient drop out?
* Dropout because of death?
* Dropout date
* List of all drugs prescribed or administered during C3-Cloud trial period, in relation to the four inclusion health conditions. All fields required for each drug.
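As a rough sketch of how a pilot site might flatten such a FHIR extract into the fields above — the resource shape below is a simplified stand-in; real FHIR Patient and Condition resources are considerably richer — consider:

```python
# Simplified stand-in for a FHIR repository extract; real resources are richer.
extract = {
    "patient": {"age_group": "70-74", "sex": "M", "site": "Warwickshire",
                "trial_group": "intervention", "informal_caregiver": True},
    "conditions": ["DM2", "Heart failure NYHA II"],
}

# Flatten the extract into one row of the reporting fields listed above.
row = {
    "age_group": extract["patient"]["age_group"],
    "sex": extract["patient"]["sex"],
    "site": extract["patient"]["site"],
    "dm2_diagnosed": "DM2" in extract["conditions"],
    "hf_diagnosed": any(c.startswith("Heart failure")
                        for c in extract["conditions"]),
}
print(row)
```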
# ANALYSED AGGREGATED DATA
Evaluation questionnaires, activity audit logs and other information will be
analysed for evaluating the solution and its acceptance, usability and utility
at the pilot sites. The aggregated and statistically analysed data are new
(derived) forms of data that will be used for academic publications and in
deliverables.
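Aggregation over the four stratifiers can be sketched with nothing more than the standard library; the records below are invented for illustration, and the project's actual analysis is performed by Empirica:

```python
from collections import Counter

# Invented anonymous records; each keeps only the four stratifiers.
records = [
    {"age_group": "65-69", "sex": "F", "region": "Warwickshire", "user_category": "patient"},
    {"age_group": "65-69", "sex": "F", "region": "Warwickshire", "user_category": "patient"},
    {"age_group": "70-74", "sex": "M", "region": "Jämtland", "user_category": "MDT"},
]

# Count records per (age_group, sex, region, user_category) cell.
counts = Counter((r["age_group"], r["sex"], r["region"], r["user_category"])
                 for r in records)
for cell, n in counts.items():
    print(cell, n)
```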
<table>
<tr>
<th>
**Template for reporting the C3-Cloud Data Management Plan**
</th> </tr>
<tr>
<td>
**For what category (1-8) above does this template apply**
</td> </tr>
<tr>
<td>
6\. Analysed aggregated data processed by Empirica and shared with the full
consortium as the study results, for inclusion in deliverables and
publications.
</td> </tr>
<tr>
<td>
**What kind of data is being collected or processed (high-level description)**
</td> </tr>
<tr>
<td>
Analysed aggregated data, described in Sections 4, 5, 6 and 7.
</td> </tr>
<tr>
<td>
**For what purposes are the data being processed in C3-Cloud**
</td> </tr>
<tr>
<td>
No new collection is done. Data collected under categories 2, 3 and 4 will be
reported in an aggregated format.
</td> </tr>
<tr>
<td>
**Where do the data originate (which party or which system creates the
data?)**
</td> </tr>
<tr>
<td>
No new data is created. See data categories 2, 3 and 4.
</td> </tr>
<tr>
<td>
**Are the data personal or not (i.e. are they identifiable, pseudonymous,
anonymous, aggregated) - at the point of origin**
**\- when shared within the project**
</td> </tr>
<tr>
<td>
The data will be anonymous and aggregated at the point of reporting with four
stratifiers: Age group, sex, region, user category (MDT or patient).
</td> </tr>
<tr>
<td>
**What is the legal basis for C3-Cloud to process the data if it is personal
according to the GDPR?**
**(e.g. is it with participant consent.) State “Not applicable” if the data
are not personal.**
</td> </tr>
<tr>
<td>
Not applicable
</td> </tr>
<tr>
<td>
**With which parties the data will be shared within the consortium?**
</td> </tr>
<tr>
<td>
Aggregated data will be shared as results in several public deliverables.
</td> </tr>
<tr>
<td>
**Where and for how long data will be stored, under which partner’s control?**
</td> </tr>
<tr>
<td>
Stored by University of Warwick for a minimum of ten years.
</td> </tr>
<tr>
<td>
**What downstream derived data will be created from this category of data, if
any?**
</td> </tr>
<tr>
<td>
Aggregated data will largely be published in deliverables and papers. Further
derived visualisations (e.g. charts) might be included in slide presentations
and other communications materials.
</td> </tr>
<tr>
<td>
**What post-project data reuse is expected outside of the consortium, if
any?**
</td> </tr> </table>
<table>
<tr>
<th>
The aggregated data and respective analysis will be made public in
deliverables D9.5, D9.6 and D4.3, and in publications:
* from MDT members: acceptance, usability and usefulness, impact on the clinical care process, any safety implications, impact on multidisciplinary team cooperation;
* from patients and caregivers: acceptance, usability and usefulness, perspectives on communicating with the MDT, use made of the care plan, the training materials, use of the PEP software, relevance and utility of the advice given, impact on adherence to goals.
</th> </tr> </table>
# KNOWLEDGE ASSETS
Harmonised clinical guidelines will be represented in computable form for
operation within the C3-Cloud components, mapped to clinical terminology. The
consortium has agreed that these will be published later in the project (after
the pilot studies are completed).
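To give a flavour of what "computable form" can mean, here is a deliberately simplified condition-action rule. The representation is a hypothetical sketch, not the project's actual guideline formalism, and the clinical content is illustrative only:

```python
# Hypothetical computable rendering of one harmonised guideline rule.
# Clinical content is illustrative only, not an actual recommendation.
rule = {
    "id": "dm2-gfr-example",
    "condition": lambda p: "DM2" in p["diagnoses"] and p["gfr"] < 30,
    "recommendation": "Flag care plan for medication review (renal impairment).",
}

patient = {"diagnoses": ["DM2", "HF"], "gfr": 25}
if rule["condition"](patient):
    print(rule["recommendation"])
```

In the real system, conditions and actions are bound to clinical terminology codes rather than plain strings.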
<table>
<tr>
<th>
**Template for reporting the C3-Cloud Data Management Plan**
</th> </tr>
<tr>
<td>
**For what category (1-8) above does this template apply**
</td> </tr>
<tr>
<td>
7\. Knowledge assets created within the project to populate components
</td> </tr>
<tr>
<td>
**What kind of data is being collected or processed (high-level description)**
</td> </tr>
<tr>
<td>
Knowledge assets created within the project to populate components (e.g.
harmonised multicondition guidelines), in human-readable and computable
formats, which might be reusable by others after the project.
</td> </tr>
<tr>
<td>
**For what purposes are the data being processed in C3-Cloud**
</td> </tr>
<tr>
<td>
For the C3-Cloud solution to be able to present relevant care plans based on
clinical knowledge.
</td> </tr>
<tr>
<td>
**Where do the data originate (which party or which system creates the
data?)**
</td> </tr>
<tr>
<td>
NICE guidelines and the pilot sites’ clinical representatives.
</td> </tr>
<tr>
<td>
**Are the data personal or not (i.e. are they identifiable, pseudonymous,
anonymous, aggregated) - at the point of origin**
**\- when shared within the project**
</td> </tr>
<tr>
<td>
These are not personal data.
</td> </tr>
<tr>
<td>
**What is the legal basis for C3-Cloud to process the data if it is personal
according to the GDPR? (e.g. is it with participant consent.) State “Not
applicable” if the data are not personal.**
</td> </tr>
<tr>
<td>
Not applicable.
</td> </tr>
<tr>
<td>
**With which parties the data will be shared within the consortium?**
</td> </tr>
<tr>
<td>
All parties.
</td> </tr>
<tr>
<td>
**Where and for how long data will be stored, under which partner’s control?**
</td> </tr>
<tr>
<td>
Within each customer site using the C3-Cloud solution, under each pilot site
clinician’s control.
</td> </tr>
<tr>
<td>
**What downstream derived data will be created from this category of data, if
any?**
</td> </tr>
<tr>
<td>
None.
</td> </tr>
<tr>
<td>
**What post-project data reuse is expected outside of the consortium, if
any?**
</td> </tr>
<tr>
<td>
These will be published in order to promote reuse of the knowledge assets, and
also to serve as inspiration templates for harmonised guidelines for other
clinical conditions outside the C3-Cloud scope. However, the publication of
knowledge assets derived from third-party resources will be conditional on the
third-party licenses.
</td> </tr> </table>
# EDUCATIONAL RESOURCES
A series of educational materials will be produced by each pilot site, in
different printed and electronic formats, to explain multi-morbidity, the
C3-Cloud project and how to use the C3-Cloud applications. Some of these will
be reusable by others tackling multimorbidity issues across Europe, such as
explaining what multi-morbidity is.
<table>
<tr>
<th>
**Template for reporting the C3-Cloud Data Management Plan**
</th> </tr>
<tr>
<td>
**For what category (1-8) above does this template apply**
</td> </tr>
<tr>
<td>
8\. Educational resources that were created during and used in the pilot study
and might be reusable after the project, by others.
</td> </tr>
<tr>
<td>
**What kind of data is being collected or processed (high-level description)**
</td> </tr>
<tr>
<td>
**Educational resources created during the project.**
* _Introductory training video_ on the impact and complexity of long-term disease and multimorbidity, stressing the importance of self-management and treatment compliance (versions in English, Swedish and Spanish).
* _Leaflet_ which provides an overview of the training materials that are available in C3-Cloud, the purpose of these materials and how they can be used (versions in English, Swedish and Spanish).
* _Wallet sized project card_ which provides basic information about the project, including the location of the system (URL) and where to get help if needed (versions in English, Swedish and Spanish).
</td> </tr>
<tr>
<td>
**For what purposes are the data being processed in C3-Cloud**
</td> </tr>
<tr>
<td>
In C3-Cloud, patients and their informal care givers will be given access to
educational materials at the relevant points in their care plan to support and
educate them at the appropriate time.
* Video: to help to prepare and empower patients for their educational journey, to help patients to better appreciate the complexity of co-morbidity and chronic disease and to explain the purpose of the training materials.
* Leaflet: this is not strictly an educational material but will encourage and allow patients to use the training materials more effectively.
* The wallet card will provide an ongoing reminder of a patient’s involvement in the study, to ensure that they have details of how to access the system to hand at all times, and know who to contact for further information or for assistance with emergency medical situations.
</td> </tr>
<tr>
<td>
**Where do the data originate (which party or which system creates the
data?)**
</td> </tr>
<tr>
<td>
* The video, although inspired by an existing animated video used in the Basque Country, was created from scratch by the Task 5.1 team. Once completed, the storyboard was submitted to a professional audio-visual (AV) company, ‘Old Port Films’.
* The leaflet was developed iteratively in conjunction with the Task 5.1 team. The current version, in English, will be updated once the system is in a sufficiently developed state, in the framework of Task 9.4.
* The wallet-sized card provides basic details of the project, e.g. the C3-CLOUD logo, the title of the trial, how to find the system (PEP URL), contact details for the trial and for emergencies, etc. The current version is in English. It has been developed by the Task 5.1 team and will be updated once the system is in a sufficiently developed state, in the framework of Task 9.4.
</td> </tr>
<tr>
<td>
**Are the data personal or not (i.e. are they identifiable, pseudonymous,
anonymous, aggregated) - at the point of origin**
**\- when shared within the project**
</td> </tr>
<tr>
<td>
None of the three educational materials mentioned above (video, leaflet and
wallet card) are personal data.
</td> </tr>
<tr>
<td>
**What is the legal basis for C3-Cloud to process the data if it is personal
according to the GDPR? (e.g. is it with participant consent.) State “Not
applicable” if the data are not personal.**
</td> </tr>
<tr>
<td>
Not applicable
</td> </tr>
<tr>
<td>
**With which parties the data will be shared within the consortium?**
</td> </tr>
<tr>
<td>
With all parties
</td> </tr>
<tr>
<td>
**Where and for how long data will be stored, under which partner’s control?**
</td> </tr>
<tr>
<td>
Each pilot site will store the corresponding educational materials translated
into their own language.
The materials will be stored during the trial under each pilot site’s control.
The YouTube videos will be retained on the C3-Cloud web site, after the end of
the project.
</td> </tr>
<tr>
<td>
**What downstream derived data will be created from this category of data, if
any?**
</td> </tr>
<tr>
<td>
The data will be evaluated on user satisfaction and usage as part of the
evaluation layer 3 in T9.3.
</td> </tr>
<tr>
<td>
**What post-project data reuse is expected outside of the consortium, if
any?**
</td> </tr>
<tr>
<td>
The video could be edited to extract a generic educational resource about
multi-morbidity for patients, which can be shared with others after the
project.
</td> </tr> </table>
# CONCLUSION
This Data Management Plan has been updated at the end of year three of the
project, since the range of different categories of foreground data has become
clear, and consultation has been possible within the pilot sites where most of
the data will originate.
C3-Cloud has the goal of designing and implementing novel ICT solutions to
support the care of patients with multi-morbidity. The data it collects
therefore serves the purpose of supporting the designs and evaluating the
implementations. The project has not sought to undertake clinical research and
therefore does not have the primary intention of generating new research data
sets. Partly for this reason, and partly in order to comply with the GDPR, only
limited amounts of aggregated evaluation data are expected to be shareable
beyond the project.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0539_NANOPHLOW_766972.md
The metadata will describe the different type of data generated experimentally
and computationally. We do not envisage a unique standard since the
experimental setups differ considerably from each other and simulation data
will be produced with independent computational models.
As the projects develop, we will identify groups of produced data that are
amenable to a common structure.
# Making data openly accessible
_**Which data produced and/or used in the project will be made openly
available as the default? If certain datasets cannot be shared (or need to be
shared under restrictions), explain why, clearly separating legal and
contractual reasons from voluntary restrictions.** _
We foresee that making data openly available will be the standard for the
academic partners. The level of accessibility will be associated with the
publication of the corresponding scientific results. Whenever possible, we
will make use of the facilities offered by scientific journals to store data
and make it publicly available.
_**How will the data be made accessible?** _
We will make use of the repositories provided by the academic institutions
involved as beneficiaries. We will identify the facilities provided, the type
of data they can store and how they make it available.
We will also take advantage of the facilities provided by some scientific
journals to store data associated with scientific publications, in order to
make them accessible to a larger audience.
_**What methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?** _
We will identify the methods to access the data offered by the institutional
repositories. In the case of data associated with scientific journals, the
journals take care of indicating the procedure to access the corresponding
data.
_**Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.** _

As we have mentioned previously, we will make use of the facilities provided
by the involved academic institutions. These repositories are developed
according to well-defined protocols and hence are certified. We will make
transparent the certifications each potential repository fulfills.
_**Have you explored appropriate arrangements with the identified repository?
If there are restrictions on use, how will access be provided? Is there a need
for a data access committee? Are there well described conditions for access?
How will the identity of the person accessing the data be ascertained?** _
We have agreed to identify the repositories and how to interact with them.
This arrangement has already started in some of the beneficiary institutions.
# Making data interoperable
_**Are the data produced in the project interoperable, that is allowing data
exchange and reuse between researchers, institutions, organisations,
countries, etc. ?** _
The data expected as the outcome of the project are heterogeneous, because
they will be produced with a wide variety of experimental setups. The same
applies to the expected simulation data. However, in all cases both
experimental and numerical data are produced in a well-recognized scientific
environment. Therefore, the data produced can be understood and reused by
other research groups carrying out their activities with similar experimental
setups or dealing with computational data.
_**In case it is unavoidable that you use uncommon or generate project
specific ontologies or vocabularies, will you provide mappings to more
commonly used ontologies?** _
As mentioned above, the data that will be generated in the development of the
project, even if heterogeneous, will be produced in a well-defined scientific
environment. Therefore, the lack of a unique project-specific ontology does
not prevent the accessibility of the data by potentially interested users.
# Increase data reuse (through clarifying licenses)
_**How will the data be licensed to permit the widest reuse possible?** _
As mentioned earlier, data produced in academic-led projects will not have
restrictions beyond those specified by the repositories where they will be
stored.
In the case of industrially involved projects, the license will be discussed
case by case.
_**When will the data be made available for reuse? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.** _
We will follow standard academic practice and make data available after
academic publication. If a patent is potentially involved, we will follow the
advice provided by the relevant patent office in the academic institution
involved. In these situations we will identify the different classes of data
produced, and separate the data that can be associated with scientific
publications on a shorter time scale from the data that must be kept
confidential until the patent is released.
_**Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the reuse of some data is
restricted, explain why.** _
We do not expect that the data produced will suffer from restrictions in their
use after the project is finished. Only in the case of patent processes, or in
specific cases of industrially related projects, may a longer time be required
before the data are public. In these eventualities, the person responsible for
Data Management will supervise how to proceed and will arrange the
corresponding procedures with the institutions involved and the corresponding
facilities for Data Management.
_**How long is it intended that the data remains reusable?** _
We will produce data that can be used by researchers for as long as the
corresponding activities are meaningful to the community. The data per se do
not deteriorate in their relevance. Obviously, as the knowledge of the
community advances, new data and standards will develop that will supersede
the data that will come out of NANOPHLOW.
_**Are data quality assurance processes described?** _
We have not identified the need for a quality assurance process for the data.
The data will be produced in the context of scientific projects. The quality
of the scientific content of the projects ensures that the data produced will
meet the standards expected by the scientific community.
# Allocation of resources
_**How will these be covered? Note that costs related to open access to
research data are eligible as part of the Horizon 2020 grant.** _
We foresee costs for publication in Open Access journals. These costs were
already included in the proposal.
_**Who will be responsible for data management in your project?** _
The Grant Agreement establishes that the team at the University of Barcelona
will be responsible for data management.
_**Are the resources for long term preservation discussed?** _
The resources for long-term preservation will be discussed with the local
infrastructures provided by the academic institutions.
# Data security
_**What provisions are in place for data security?** _
We will rely on the services provided by institutional repositories.
# Data Collection
_**What data will we collect or create?** _
The data produced from this consortium will fall into two categories:
1. Simulation data associated to the theoretical projects.
2. Experimental data obtained both by the academic partners and involved SMEs. _**How will the data be collected or created?** _
Data will be collected independently by the different teams. Data acquisition
is very heterogeneous in this project. Each team is responsible for the
acquisition and storage of data.
Simulation data are generated by running simulations on a variety of
platforms, ranging from desktop computers to supercomputing centers, including
the exploitation of dedicated clusters at the corresponding academic
institutions.
Experimental data are produced by the different experimental setups used and
developed by the partners.
# Documentation and Metadata
_**What documentation and metadata will accompany the data?** _
Different sets of data will be stored following a standardized procedure to
name the files and ensure the data are findable.
Data produced by SMEs will be kept by them, and a document describing the
content and type of data will be produced.
In the case of academic partners, the outcome of the network activities will
lead to scientific publications. In this case the associated data will be
generated and each publication will indicate where the data are stored.
The consortium will follow well-established good practices to create the
corresponding relevant metadata. Due to the different institutions involved in
NANOPHLOW, each partner will build on best practices and guidelines specified
by the corresponding institution.
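A standardized file-naming procedure of the kind mentioned above could be as simple as the following sketch; the fields chosen (team, data kind, date, run number) are assumptions for illustration, since each partner will follow its own institution's guidelines:

```python
from datetime import date

def data_filename(team, kind, run, when=None):
    """Compose a findable file name of the form TEAM_KIND_YYYYMMDD_runNNN.dat."""
    when = when or date.today()
    return f"{team}_{kind}_{when:%Y%m%d}_run{run:03d}.dat"

print(data_filename("UB", "simulation", 7, date(2019, 3, 1)))
# -> UB_simulation_20190301_run007.dat
```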
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0545_EWC_776247.md
# 1\. Introduction
We report on the deliverable 8.2 in Work Package 8 “Data Management Plan”. The
deliverable is to set up the EWC Data Management Plan (DMP) as required by the
EU Open Research Data Pilot of which EWC is a member.
The DMP was written by PI Kitching with advice from the Project Manager and
the Management Committee (MC). The primary source of information required was
from the Open Research Data Pilot (ORDP) website
_https://www.openaire.eu/what-is-theopen-research-data-pilot_ and the template
document titled “h2020-tpl-oa-data-mgtplan_en” was used. The source of the
content of the DMP was from the WPs that were derived from the GA.
# 2\. Data Management Plan
## a) Data Summary
In EWC we will collect raw astronomical imaging and spectroscopic data from
three primary sources: the Hubble Space Telescope Archive
(_https://archive.stsci.edu/hst/_), the ESO public archive
(_http://archive.eso.org/cms.html_), and the PauCAM Survey (PAUS,
_https://www.pausurvey.org_).
The HST data is pre-existing and public data. EWC will download this raw
public data and re-analyse it to meet the EWC objectives. In the process
secondary data products will be produced that include ‘reduced’ (science
ready) images, and catalogues of galaxy and star properties; as well as
scientific papers submitted to journals. These outputs are described in the
deliverables.
The ESO data is also pre-existing and public data. EWC will download this raw
public data and re-analyse it to meet the EWC objectives. In the process
secondary data products will be produced that include ‘reduced’ (science
ready) images, and catalogues of galaxy and star properties; as well as
scientific papers submitted to journals. These outputs are described in the
deliverables.
The PAUS data is private and data access is granted to members of the PAUS
consortium. In this case all leads and developers in WPs that require PAUS
data access are PAUS consortium members; the relation between PAUS and EWC is
controlled by the EWC-PAUS MOU. EWC will re-analyse this to meet the EWC
objectives. In the process secondary data products will be produced that
include catalogues of galaxy and star properties; as well as scientific papers
submitted to journals. These outputs are described in the deliverables.
The project will collect astronomical imaging data and spectroscopic data. The
data that will be generated will be processed imaging data and catalogues. The
format used in astronomy for all these projects is the FITS format
(_https://fits.gsfc.nasa.gov/fits_documentation.html_).
We expect the data that we generate to be useful for the general astronomical
community, and in particular for the Euclid Consortium, which will be the
primary user of the EWC data: the EWC objectives are to provide calibration
products for the weak lensing method used on the Euclid data.
Secondary products include code in Python and C++, and scientific papers in
LaTeX and PDF format.
**b) FAIR data**
## Making data findable, including provisions for metadata
As recommended by the Open Research Data Pilot, we will use Zenodo
(_https://www.zenodo.org_) to publish data that satisfies the FAIR
requirements.
The naming conventions for the imaging and spectroscopic data are to be
determined by the Management Committee. It is expected that standard astronomy
naming conventions will be used. For images the position of the sky in RA and
dec coordinates is used as a standard naming convention coupled with the name
of the survey and/or team. Data will be published on fully searchable public
archives. Version numbers are agreed and listed in the deliverables where
multiple versions of the same underlying product will be issued. The FITS
format allows for searchable metadata to be included in the primary data in
the form of a “header”. Zenodo also provides metadata searching and a unique
object ID.
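As an illustration of the position-based convention described above, a field image could be named from the survey/team name and the sky coordinates of the field centre. The exact format in this sketch is an assumption, since the text leaves the convention to the Management Committee.

```python
# Hypothetical sketch of an RA/dec-based image name; the precise format is
# an assumption, as the actual convention is set by the Management Committee.

def image_name(survey: str, ra_deg: float, dec_deg: float) -> str:
    """Build a filename from the survey/team name and the field centre."""
    sign = "+" if dec_deg >= 0 else "-"
    return f"{survey}_{ra_deg:.4f}{sign}{abs(dec_deg):.4f}.fits"

print(image_name("EWC", 150.1167, 2.2058))   # → EWC_150.1167+2.2058.fits
```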
## Making data openly accessible
All data will be fully public (see below for license). For re-processed
imaging and spectroscopic data we will use the Euclid Consortium and PAUS
public archives. The use of these archives is agreed in the Euclid and PAUS
MOUs.
For the case of code publication we will use GitHub public repositories that
enable searches to be performed on data and metadata associated with any code.
## Making data interoperable
The data formats we use for imaging, spectroscopy and catalogue data (the FITS
format) have a vast amount of existing infrastructure publicly available for
their use, manipulation and transformation into different formats. For example,
all main coding languages have standard libraries (e.g. CFITSIO, AstroPy, Matlab)
for the manipulation of these formats.
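A small illustration of why FITS interoperates so widely: the header consists of plain 80-character ASCII "cards" that any language can read. The sketch below, in pure Python, assumes only the basic card layout from the public FITS standard; real analysis code would use one of the libraries above (e.g. AstroPy).

```python
# Minimal sketch of the FITS header-card layout (80-character ASCII records,
# "KEYWORD = value / comment"); a simplification for illustration only.

def make_card(keyword: str, value, comment: str = "") -> str:
    """Format one 80-character FITS header card."""
    if isinstance(value, str):
        val = f"'{value:<8s}'"      # strings are quoted, left-justified
    else:
        val = f"{value:>20}"        # numbers are right-justified in 20 columns
    card = f"{keyword:<8s}= {val}"
    if comment:
        card += f" / {comment}"
    return card[:80].ljust(80)      # cards are always exactly 80 bytes

def read_card(card: str):
    """Parse the keyword and raw value text back out of a card."""
    keyword = card[:8].strip()
    body = card[10:].split(" / ")[0].strip()
    return keyword, body

card = make_card("NAXIS", 2, "number of data axes")
print(read_card(card))   # → ('NAXIS', '2')
```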
## Increase data re-use (through clarifying licences)
All data will be published using a license that enables reuse of the data for
any purposes, with appropriate citation (such as the CC BY 4.0 license or
similar; to be determined by the management committee).
## c) Allocation of resources
There are no additional costs incurred in making the EWC data FAIR. Convention
within the astronomical community is to publish all papers on the arXiv at
submission to the journal. We will use existing infrastructure in the Euclid
and PAU consortia, agreed in the MOUs, to host the data as well as on Zenodo.
The human resources required to put the data in the format required for
publication are covered by the FTE stated in the GA.
The WP leaders associated with each deliverable will be responsible for the
data management of those products from their WPs.
## d) Data security
By using Zenodo, our research output will be stored safely for the future in the
same cloud infrastructure as CERN's own LHC research data.
**e) Ethical aspects**
There are no ethical aspects associated with the data products from the EWC.
# 3\. Conclusions and future steps
As stated by the ORDP, the DMP is a living document that should be updated
regularly. The MC will review the DMP on an annual basis and issue a new
version if any changes need to be made. Furthermore, WP leaders are responsible
for informing the MC immediately if any changes are required as a result
of the research carried out, in particular at the time that data products are
delivered.
0546_SPICE_713481.md
# Data summary
The main objective of the SPICE project is to realize a novel integration
platform that combines photonic, magnetic and electronic components. To align
with the objective of the project, all Partners have been asked to provide
their inputs to this DMP document on what data are going to be collected, in
which format, how they are going to be stored, how they are going to be
deposited after the project and finally what is the estimated size.
Data management is essential for SPICE due to the synergistic approach taken
in this project. In a hierarchical manner, data from each Partner and/or WP
will be required by another Partner and/or WP to build on. For example,
material characterization data from WP1 will be used in the magnetic tunnel
junction design in WP2. These data will also be used in the development of
theoretical models and simulation tools in WP5. All these data will be
required to support the development of an architecture-level simulation and
assessment and an experimental demonstrator in WP4. Since the various WPs are
managed by various Partners, interaction and data exchange is of key
importance. The following main data types and formats are identified,
alongside their origin, expected size and usefulness:
* Laboratory experimental characterization data will typically be stored in ascii or binary format, in a (multidimensional) array. These include the characterization of magneto-optic materials, magnetic tunnel junction (MTJ) elements, photonic circuits, and the demonstrator. Data _originate_ from laboratory instrumentation, including lasers, optical spectrum analyzers, electrical source meters, thermo-electric control elements, power meters, etc. Data _size_ depends on the resolution, the number of devices measured, etc., but typically does not exceed the ~1MB level per dataset and the ~TB level overall. The _usefulness_ is the validation and quantification of performance, which in turn can validate models.
* Simulation data will be stored in simulation-tool specific formats. This includes the QW Atomistix tool, the Verilog tool and the Lumerical tool, for example. Some tools use an open file format, others are proprietary. In all cases, final simulation results can be exported to ascii or binary, if required for communication and documentation. The data _originate_ from running the simulation algorithms, with appropriate design and material parameters. Data _size_ depends, again, on the resolution of parameter sweeps, and varies considerably, although it is overall not expected to exceed the ~TB level. The _usefulness_ is to provide a quantified background for the design of materials, devices, and circuits, as well as helping with the interpretation and validation of experimental results.
* Process flows are used to describe the fabrication process in detail, of either material growth/deposition, MTJ fabrication and/or PIC fabrication. These are foundry- and tool-specific and are stored in either a text document (e.g., “doc(x)” or similar) or a laboratory management tool. These typically _originate_ from a set of process steps, which are tool-specific, e.g., dry etching, wet etching, metal sputtering or evaporation, oxide deposition, etc., and are compiled by process operators and process flow designers. The _size_ is limited to a list of process steps in text, possibly extended with pictures to illustrate the cross-sections, i.e., not exceeding ~10MB per file. The _usefulness_ is to store process knowledge and to identify possible issues when experimental data indicate malfunction. Existing knowledge in processing, including process flows, will be _reused_ .
* Mask design data are stored in design-tool specific format, but are eventually exported to an open format like “gds”. Their _origin_ depends on how these masks are designed. These can be drawn directly by the designer, or the designer can use a process-design kit (PDK) to use pre-defined building blocks. Data _size_ depends on mask complexity, but typically does not exceed ~100MB per mask set. The _usefulness_ is the identification of structures on a mask, during experimental characterization, also by other Partners and in other WPs, as well as – obviously – providing the necessary input for lithography tools. Together with a mask design, a design report, showing details on the structures and designs and a split chart, should be included. This should also refer to the used process flow. The format is typically text based, e.g., “doc(x)”, and its size does not exceed 10MB.
* Dissemination and communication data take the form of reports, publications, websites and video, using the typical open formats, like “pdf” and “mpeg”. The _origin_ is the effort of the management and dissemination WPs, i.e., these are written or taped by the consortium Partners. The _usefulness_ is the communication between Partners, between the Consortium and the EC, and with the various target audiences outside the Consortium, including students, peers and general public.
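To make the ascii/binary storage of characterization data mentioned above concrete, the sketch below exports a small laser-sweep measurement in both forms using only the Python standard library. The column names and values are invented for illustration and are not project data.

```python
# Illustrative export of a (hypothetical) measurement to ascii and binary.
import struct

wavelength_nm = [1550.0, 1550.1, 1550.2]
power_dbm     = [-3.2, -3.5, -4.1]

# ascii export: one whitespace-separated row per sample
ascii_rows = [f"{w:.1f} {p:.1f}" for w, p in zip(wavelength_nm, power_dbm)]
ascii_data = "\n".join(ascii_rows)

# binary export: packed little-endian doubles, two per sample
binary_data = b"".join(struct.pack("<2d", w, p)
                       for w, p in zip(wavelength_nm, power_dbm))

# read the binary form back to check the round trip (16 bytes per sample)
decoded = [struct.unpack_from("<2d", binary_data, 16 * i)
           for i in range(len(wavelength_nm))]
print(decoded[0])   # → (1550.0, -3.2)
```

Doubles round-trip exactly through `struct`, so the binary form loses no precision, while the ascii form stays human-readable for logbooks.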
A summary of the data types within SPICE is shown in Table 1. More detailed
data descriptions related to the tasks within SPICE are tabulated at the end of
this document.
Table 1. Summary of the data types in SPICE project
<table>
<tr>
<th>
**Description of data**
</th>
<th>
**Responsible organization**
</th>
<th>
**Type** 1
</th>
<th>
**Related WP**
</th>
<th>
**To whom might it be useful**
**(‘Data utility')?**
</th> </tr>
<tr>
<td>
Mask Design Data
</td>
<td>
AU & IMEC
(Photonic
Integrated
Circuits) and
CEA (MTJ design)
</td>
<td>
gdsii
</td>
<td>
2, 3 and 4
</td>
<td>
Research
institutes
</td> </tr>
<tr>
<td>
Process flows
</td>
<td>
CEA & IMEC
</td>
<td>
docs/ppt
and pdf
</td>
<td>
1,2, and 3
</td>
<td>
Research
institutes
</td> </tr>
<tr>
<td>
Simulation data
</td>
<td>
All
</td>
<td>
Depending on the
simulation
tools (.scs,
.va, …)
</td>
<td>
1-5
</td>
<td>
Research
institutes and companies
</td> </tr>
<tr>
<td>
Software
</td>
<td>
Synopsys
</td>
<td>
ATK commercial tools
</td>
<td>
5
</td>
<td>
Companies and research institutes
</td> </tr>
<tr>
<td>
Dissemination and communication data
</td>
<td>
All
</td>
<td>
pdf,doc,
ppt, mpeg, mp3
</td>
<td>
all
</td>
<td>
Public, Companies
and research institutes
</td> </tr> </table>
# FAIR data
## Making data findable, including provisions for metadata
Most of the SPICE datasets outlined above are not useful by themselves and
depend on context, i.e., the metadata have to be provided to interpret these
data, possibly by connecting them to other datasets. This is typically done using
logbooks or equivalent. This is necessary for experimental datasets, obtained
in the laboratory. For simulation data, obtained with commercial simulation
tools, the metadata are typically part of the data file, although not directly
visible, unless the file is opened. So, also in that case, a logbook is
required. In general, the SPICE consortium aims to provide accessible
logbooks, design reports or equivalent as a means to make datasets findable
_within_ the Consortium. These logbooks will list all relevant datasets.
Datasets and logbooks will be stored on shared folders (on a server), if
relevant for other Partners. Logbooks will have a version number to allow for
adding datasets.
A typical example is a chip design report, which will include a reference to
the process flow (including version number) and a reference to the mask file,
including a detailed description of the designs, as well as an overview of the
simulations, including, e.g., design curves, and with reference to all
simulation datasets.
To make the SPICE datasets _findable_ , we use the following naming convention
for all the datasets produced within SPICE: the name starts with the
beneficiary, followed by the WP number, then the WT number within the WP, and
finally the dataset title, all separated by underscores, i.e.,
<Beneficiary>_<WP#>_<WT#>_<dataset_title>. For example, a dataset from
beneficiary RU in WP1 (i.e. Magneto-Optic Interaction), with WT number 2 and
the dataset title “Magneto_Optic_Interaction”, is named
“RU_WP1_2_Magneto_Optic_Interaction”. A version number will be added to the
end of the title if required.
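The convention above can be captured in a small helper; this is an illustrative sketch, not a tool used by the Consortium.

```python
# Illustrative helper for the SPICE dataset naming convention
# <Beneficiary>_<WP#>_<WT#>_<dataset_title>, with an optional version suffix.

def dataset_name(beneficiary, wp, wt, title, version=None):
    """Compose a dataset name following the SPICE convention."""
    name = f"{beneficiary}_WP{wp}_{wt}_{title}"
    if version is not None:
        name += f"_v{version}"
    return name

print(dataset_name("RU", 1, 2, "Magneto_Optic_Interaction"))
# → RU_WP1_2_Magneto_Optic_Interaction
```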
The Consortium recognizes that some data are confidential and cannot be shared
even within the Consortium. This should not prevent communication and
dissemination, though, and measures should be taken to allow for maximum
information flow, while protecting sensitive information. If, for example, the
exact process details of a component on a chip are confidential, some critical
gds layers can be removed from the shared dataset and/or a so-called ‘black
box’ can replace such components. The gds file can then still fulfill its main
purpose, namely the identification of relevant structures on a chip during
experiments.
The main means of communicating datasets _outside_ the Consortium is through
publications, which have a level of completeness as required by typical peer-
reviewed journals. These publications will be findable through the keywords
provided and the publication can be tracked through a digital object
identifier (DOI). If applicable and/or required, full or partial datasets will
be published alongside, as per the journal’s policy.
Specific datasets that will be shared publicly, outside the Consortium, will
have targeted approaches to make these _findable_ . For example, Verilog/spice
models, developed within SPICE, will be uploaded on, e.g., Nano-Engineered
Electronic Device Simulation Node (NEEDS) from nanohub.org, to be found and
used by others. An extensive set of magneto-optic material parameters will be
made available through the SPICE website, including context and introduction.
## Making data openly accessible
The goal of SPICE is to make as many data and results public as possible.
However, the competitive interest of all Partners need to be taken into
account. The data that will be made _openly available_ are:
* Reports, studies, slidesets and roadmaps indicated as ‘public’ in the GA. These will be made available through the EC website and the SPICE website, typically in pdf format. Additional dissemination is expected through social media, like LinkedIN, to further attract readership. These documents will be written in such a way that these are ‘self-explanatory’ and can be read as a separate document, i.e., including all relevant details and references.
* Verilog/spice models of the MTJs can be made available, for example, on NEEDS, including a “readme” file on how to use the models. These models can be used by commercial tools from Cadence/Synopsys, which are available to most of the universities and industry, e.g., through Europractice in Europe. Furthermore, there is a possibility to develop tools running on the nanohub.org server for the provided models.
* Novel simulation algorithms for the Atomistix toolkit of QW will be made available to the market, through this commercially available toolkit.
* Scientific results of the project, i.e., in a final stage, will be published through scientific journals and conferences. The format is typically pdf, and an open access publication format will be chosen, i.e., publications will be available from either the publisher’s website (Gold model) or from the SPICE and university websites (Green model).
The data that will remain _closed_ are:
* Simulation and characterization data sets that are generated in order to obtain major publishable results and deliverables will remain closed for as long as the major results and deliverables have not been published. This is to protect the Partners and the Consortium from getting scooped.
* Detailed process flows and full mask sets will not be disclosed to protect the proprietary and existing fabrication IP of, most notably, partners IMEC and CEA. If successful, SPICE technology can be made available in line with these Partners’ existing business models. IMEC, for example, offers access to its silicon photonics technology through Europractice.
* Source code of simulation tools developed for the Atomistix toolkit. This is key IP for partner QW, as it brings these tools to the market.
* Final scientific results that have been submitted to scientific journals, but not yet accepted and/or published. This is a requirement of many journals.
These _closed_ datasets will be kept on secure local servers.
**The homepage of SPICE will be used as the open-access data repository for the
SPICE project. The data will be kept for 5 years after the project. The cost,
estimated at around 500 Euro, will be covered by the SPICE project budget.**
## Making data interoperable
Open data formats like pdf and doc(x) (reports), gds (mask layout), ascii and
binary (experimental data) will be used as much as possible, which allows for
sharing data with other Partners. Freely available software can be used to
read such files. Design software like Atomistix, Cadence Virtuoso, PhoeniX,
Lumerical and Luceda have proprietary data formats, and it will be investigated
how these can most easily be exported to open formats, in case there is a need
for this.
## Increase data re-use (through clarifying licences)
Experimental and simulation data sets will in principle not be re-usable by
themselves, unless otherwise decided. Re-use of these data sets will be
facilitated through scientific publications, which also provide the necessary
context. Conditions for re-use are then set by the publishers’ policies. The
peer-review process, as well as adhering to academic standards, _ensures the
quality_ . These publications will remain re-usable for an indefinite time.
The underlying experimental and simulation data sets will be stored for a time
as prescribed by national and EU laws, though at least 5 years after the SPICE
project ends.
Process flows can potentially be re-used through the specific foundry
facilities, for example as a fabrication service or through a multi-project
wafer run, e.g., through Europractice. Process flows itself will not be
disclosed and cannot be re-used. This is partially to protect the foundry IP,
and partially because process flows are foundry-specific anyway. The
Consortium will discuss a policy for this when the SPICE technology is up and
running. Quality assurance will be aligned with the foundries’ existing
standards for performance, specifications, yield and reproducibility.
Mask designs, or component designs, can only be re-used when the underlying
fabrication process is made available. In that case, designs can be made part
of a PDK. Support and quality assurance, however, will be an open issue. The
Consortium will discuss this when the SPICE technology is up and running.
Simulation tools based on the Atomistix toolkit will be marketed by QW to
ensure the widest possible re-use, under the assumption that there is enough
market potential. Licenses can be obtained on a commercial base by third
parties. QW will remain responsible for their toolkit development, quality and
support and has a team in place to ensure that. The duration and scope of a
license and support will be determined between QW and their potential users at
a later stage. Simulation tools based on Verilog will be publicly shared for
widest re-use. No support is envisioned beyond the duration of SPICE, though,
so quality assurance is an open issue for the moment.
# Allocation of resources
In the SPICE project, data management is arranged under WP6 (Dissemination and
Exploitation) and any cost related to the FAIR data management during the
project will be covered by the project budget. The SPICE homepage will be used
as the open-access data repository; a total budget of 2000 Euro for 5 years is
estimated.
The consortium has decided that a specific data manager is not required within
SPICE. In this case, each partner provides the dataset corresponding to the
tasks and the WP to the Dissemination and Exploitation WP leader and these
data will be uploaded for usability by others if applicable.
# Data security
All data sets are backed up routinely onto the Partners’ servers, via local
network drives. Data sets are backed up on a regular basis, typically on a
daily basis. In addition, all processed data will be version controlled, which
is updated with similar frequency. No backups are stored on laptops, or
external media, nor do we use external services for backup.
The common files will be shared in a repository located at
_https://svn.nfit.au.dk/SPICE_
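A common way to verify that routine backups like those described above are intact is to compare checksums of the source file and its backup copy. The sketch below assumes a simple SHA-256 comparison policy, which is not stated in the text; the paths are illustrative only.

```python
# Hypothetical integrity check for backed-up datasets (assumed policy:
# "backup is intact if its SHA-256 matches the source").
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_intact(original: str, backup: str) -> bool:
    """Return True if the backup copy matches the original byte-for-byte."""
    return sha256_of(original) == sha256_of(backup)
```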
# Ethical aspects
No ethical aspects have been identified.
# Other issues
An open issue is the set of local, national and EU policies with respect to
data management, of which the Consortium does not yet have a complete overview.
# Appendix – partner input
<table>
<tr>
<th>
**WP /**
**Task**
</th>
<th>
**Responsibl e partner**
</th>
<th>
**Dataset name**
**(for WT of X)**
</th>
<th>
**File types**
</th>
<th>
**Findable**
**(e.g. for WT of 1 for each WP)**
</th>
<th>
**Accessible**
</th>
<th>
**Inter oper**
**able**
</th>
<th>
**Reusable**
</th>
<th>
**Size**
</th>
<th>
**Security**
</th> </tr>
<tr>
<td>
1/X
</td>
<td>
RU
</td>
<td>
RU_WP1_X_Mag neto_Optic_Intera ction_v1
</td>
<td>
*.xlsx , *.doc, *.pdf, *.dat,
*.jpeg
</td>
<td>
All the produced data will be available in the dataset with following the
naming of
RU_WP1_1_Magn eto_Optic_Interacti
on_v1 (No meta
data)
</td>
<td>
Available through scientific reports and publications
</td>
<td>
N/A
</td>
<td>
On a
depository server for 5 years after the project
</td>
<td>
1 TB
</td>
<td>
Confidential data will be stored and backed up continuously on a secured
server from RU and confidential reports and presentations will be uploaded on
the secured area of the website. Some reports and data will be shared on
Dropbox.
</td> </tr>
<tr>
<td>
2/X
</td>
<td>
SPINTEC
</td>
<td>
SPINTEC_WP2_
X_Spintronic - Photonic integration_v1
</td>
<td>
SEM and
TEM images
(*.jpeg), electrical data (*.xlsx, *.dat, etc.)
</td>
<td>
SPINTEC_WP2_1_
Spintronic -
Photonic integration_v1 (No meta data)
</td>
<td>
available through scientific reports and publications
</td>
<td>
NA
</td>
<td>
On a
depository server (TBD) for 5 years after the project
</td>
<td>
500
GB
</td>
<td>
Confidential data will be stored and backed up continuously on a secured
server at SPINTEC and confidential reports and presentations will be uploaded
on the secured area of the website. Some reports and data will be shared on
Dropbox.
</td> </tr>
<tr>
<td>
3/X
</td>
<td>
IMEC
</td>
<td>
IMEC_WP3_X_
Photonic_Distribut ion_Layer_v1
</td>
<td>
*.dat, *.docx,
*.pdf
</td>
<td>
IMEC_WP3_1_
Photonic_Distributi on_Layer_v1 (No meta data)
</td>
<td>
available through scientific reports and publications
</td>
<td>
?
</td>
<td>
On a
depository server (TBD) for 5 years after the project
</td>
<td>
500
GB
</td>
<td>
Confidential data will be stored and backed up continuously on a secured
server at AU and IMEC, and confidential reports and presentations will be
uploaded on the secured area of the website. Some reports and data will be
shared on Dropbox.
</td> </tr>
<tr>
<td>
4/X
</td>
<td>
AU
</td>
<td>
AU_WP4_X_
Architecture_and_
Demonstrator_v1
</td>
<td>
*.dat, *.docx,
*.pdf, *.m
</td>
<td>
AU_WP4_1_
Architecture_and_
Demonstrator_v1
(No meta data)
</td>
<td>
available through scientific reports and publications
</td>
<td>
</td>
<td>
On a
depository server (TBD) for 5 years after the project
</td>
<td>
1 TB
</td>
<td>
Confidential data will be stored and backed up continuously on a secured
server at AU and confidential reports and presentations will be uploaded on
the secured area of the website. Some reports and data will be shared on
Dropbox. The Verilog/spice data will be shared on some gateways to be used by
other people.
</td> </tr>
<tr>
<td>
5/X
</td>
<td>
QW
</td>
<td>
QW_WP5_X_Sim
ulation_and_Desi gn_Tools_v1
</td>
<td>
*.doc, *.pdf, *.xlsx, *.py,
*.hdf5, *.tex
</td>
<td>
QW_WP5_1_Simul ation_and_Design_ Tools_v1 (No meta
data)
</td>
<td>
available through scientific reports and publications
</td>
<td>
</td>
<td>
On a
depository server (TBD) for 5 years after the project
</td>
<td>
1 TB
</td>
<td>
Confidential data will be stored and backed up continuously on a secured
server at QW and confidential reports and presentations will be uploaded on
the secured area of the website.
</td> </tr>
<tr>
<td>
6/X
</td>
<td>
AU
</td>
<td>
AU_WP6_X_Diss emination_and_E xploitation_Tools_ v1
</td>
<td>
</td>
<td>
AU_WP6_X_Disse mination_and_Expl oitation_Tools_v1 (No meta data)
</td>
<td>
Available on the AU website
</td>
<td>
</td>
<td>
On a
depository server (TBD) for 5 years after the project
</td>
<td>
5 GB
</td>
<td>
The dissemination reports will be kept on a secured server at AU and also
uploaded on SyGMa as well as publicly available on the SPICE website.
</td> </tr>
<tr>
<td>
7/X
</td>
<td>
AU
</td>
<td>
AU_WP7_X_Man
agement _v1
</td>
<td>
*.xlsx , *.doc, *.pdf, *.jpeg, *.mp3, *.mpeg
</td>
<td>
AU_WP7_1_Mana gement _v1 (No
meta data)
</td>
<td>
The confidential data will not be accessible to the public. The public data,
reports, presentations will be available on AU website.
</td>
<td>
</td>
<td>
On a
depository server (TBD) for 5 years after the project
</td>
<td>
100
MB
</td>
<td>
The annual reports will be confidential and so will not be available for
public. Some minutes, presentations, press release etc. will be available for
public through website.
</td> </tr> </table>
0547_DocksTheFuture_770064.md
# Executive summary
_This deliverable is an update of the Data Management Plan deliverable
(D6.6)._
_D6.6 outlines how the data collected or generated will be handled during and
after the DocksTheFuture project, describes which standards and methodology
for data collection and generation will be followed, and whether and how data
will be shared._
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the
Consortium with regard to the project research data. The DMP covers the
complete research data life cycle. It describes the types of research data
that will be generated or collected during the project, the standards that
will be used, how the research data will be preserved and what parts of the
datasets will be shared for verification or reuse. It also reflects the
current state of the Consortium Agreements on data management and must be
consistent with exploitation.
This Data Management Plan sets the initial guidelines for how data will be
generated in a standardised manner, and how data and associated metadata will
be made accessible. This Data Management Plan is a living document and will be
updated through the lifecycle of the project.
# EU LEGAL FRAMEWORK FOR PRIVACY, DATA PROTECTION AND SECURITY
Privacy is enabled by protection of personal data. Under the European Union
law, personal data is defined as “any information relating to an identified or
identifiable natural person”. The collection, use and disclosure of personal
data at a European level are regulated by the following directives and
regulation:
* Directive 95/46/EC on protection of personal data (Data Protection Directive)
* Directive 2002/58/EC on privacy and electronic communications (e-Privacy Directive)
* Directive 2009/136/EC (Cookie Directive)
* Regulation 2016/679/EC (repealing Directive 95/46/EC)
* Directive 2016/680/EC

According to Regulation 2016/679/EC, personal data
_means any information relating to an identified or identifiable natural
person (‘data subject’); an identifiable natural person is one who can be
identified, directly or indirectly, in particular by reference to an
identifier such as a name, an identification number, location data, an online
identifier or to one or more factors specific to the physical, physiological,
genetic, mental, economic, cultural or social identity of that natural person_
(art. 4.1). The same Regulation also defines personal data processing as
_any operation or set of operations which is performed on personal data or on
sets of personal data, whether or not by automated means, such as collection,
recording, organisation, structuring, storage, adaptation or alteration,
retrieval, consultation, use, disclosure by transmission, dissemination or
otherwise making available, alignment or combination, restriction, erasure or
destruction (art. 4.2)._
# Purpose of data collection in DocksTheFuture
This Data Management Plan (DMP) has been prepared by taking into account the
template of the “Guidelines on FAIR Data Management in Horizon 2020”
( _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020hi-oa-data-mgt_en.pdf_ ).
According to the latest Guidelines on FAIR Data Management in Horizon 2020
released by the EC Directorate-General for Research & Innovation,
“beneficiaries must make their research data findable, accessible,
interoperable and reusable (FAIR), ensuring it is soundly managed”.
The elaboration of the DMP will allow the DTF partners to address all issues
related to ethics and data. The consortium will comply with the requirements
of Directive 95/46/EC of the European Parliament and of the Council of 24
October 1995 on the protection of individuals with regard to the processing of
personal data and on the free movement of such data.
DocksTheFuture will provide access to the facts and knowledge gleaned from the
project’s activities over a two-and-a-half-year period and after its end, to
enable the project’s stakeholder groups, including creative and technology
innovators, researchers and the public at large to find/re-use its data, and
to find and check research results.
The project’s activities aim to generate knowledge, methodologies and
processes through fostering cross-disciplinary, cross-sectoral collaboration
and discussion in the port and maritime sector. The data from these activities
will be mainly shared through the project website. Meeting with experts and
the main port stakeholders will be organised in order to get feedback on the
project and to share its results and outcomes.
DocksTheFuture will encourage all parties to contribute their knowledge
openly, to use and to share the project’s learning outcomes, and to help
increase awareness and adoption of ethics and port sustainability.
# Data collection and creation
Data types may take the form of lists (of organisations, events, activities,
etc.), reports, papers, interviews, expert and organisational contact details,
field notes, videos, audio and presentations. Video and Presentations
dissemination material will be made accessible online via the DocksTheFuture
official website and disseminated through the project’s media channels
(Twitter, LinkedIn and Facebook), EC associated activities, press, conferences
and presentations.
DocksTheFuture will endeavour to make its research data ‘Findable, Accessible,
Interoperable and Reusable (F.A.I.R)’, leading to knowledge discovery and
innovation, and to subsequent data and knowledge integration and reuse.
The DocksTheFuture consortium is aware of the mandate for open access of
publications in the H2020 projects and participation of the project in the
Open Research Data Pilot.
More specifically, with respect to face-to-face research activities, the
following data will be made publicly available:
* Data from questionnaires in aggregate form;
* Visual capturing/reproduction (e.g., photographs) of the artefacts that the participants will co-produce during workshops.
# Data Management and the GDPR
In May 2018, the new European Regulation on Privacy, the General Data
Protection Regulation, (GDPR) came into effect. In this DMP we describe the
measures to protect the privacy of all subjects in the light of the GDPR. All
partners within the consortium will have to follow the same new rules and
principles.
In this chapter we will describe how the founding principles of the GDPR will
be followed in the Docks The Future project.
Lawfulness, fairness and transparency
_Personal data shall be processed lawfully, fairly and in a transparent manner
in relation to the data subject._
All data gathering from individuals will require informed consent from the
individuals engaged in the project. Informed consent requests will consist of
information letter and a consent form. This will state the specific causes for
the activity, how the data will be handled, safely stored, and shared. The
request will also inform individuals of their rights to have data updated or
removed, and the project’s policies on how these rights are managed. We will
try to anonymise the personal data as far as possible; however, we foresee this
won’t be possible in all instances. Therefore, further consent will be asked to
use the data for open research purposes; this includes presentations at
conferences and publications in journals, as well as depositing a data set in
an open repository at the end of the project. The consortium tries to be as
transparent as possible in its collection of personal data. This means that
when collecting the data, the information leaflet and consent form will
describe the kind of information, the manner in which it will be collected and
processed, and if, how, and for which purpose it will be disseminated and made
open access. Furthermore, the subjects will have the possibility to request
what kind of information has been stored about them, and they can request,
within reasonable limits, to be removed from the results.
Purpose limitation
_Personal data shall be collected for specified, explicit and legitimate
purposes and not further processed in a manner that is incompatible with those
purposes._
The Docks The Future project will not collect any data that is outside the scope of the project. Each partner will only collect data necessary within their specific work package.
Data minimisation
_Personal data shall be adequate, relevant and limited to what is necessary in
relation to the purposes for which they are processed._
_Only data that is relevant for the project’s questions and purposes will be collected. However, since the involved stakeholders are free in their answers, this could result in them sharing personal information that has not been asked for by the project. This is normal in any project relationship, and we therefore chose not to limit the stakeholders in their answers. These data will be treated according to all guidelines on personal data and will not be shared without anonymisation or the explicit consent of the stakeholder._
_Accuracy_
_Personal data shall be accurate and, where necessary, kept up to date_
_All data collected will be checked for consistency._
Storage limitation
_Personal data shall be kept in a form which permits identification of data
subjects for no longer than is necessary for the purposes for which the
personal data are processed_
_All personal data that will no longer be used for research purposes will be deleted as soon as possible, and all personal data will be made anonymous as soon as possible. At the end of the project, if the data has been anonymised, the data set will be stored in an open repository. If data cannot be made anonymous, it will be pseudonymised as much as possible and stored no longer than each partner’s institutional archiving rules allow._
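The anonymise-then-pseudonymise step described above can be sketched as follows. This is a minimal illustration, not the project's actual tooling: the helper names and the keyed-hash scheme (a salted HMAC, so the same subject always maps to the same irreversible token) are assumptions for demonstration.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this would be stored securely on the
# access-controlled project drive and never published.
SECRET_KEY = b"project-secret-key"

def pseudonymise(identifier: str) -> str:
    """Replace a personal identifier (name, e-mail) with a stable pseudonym.

    The same input always yields the same token, so records belonging to one
    subject stay linkable, but the token cannot be reversed without the key.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return "SUBJ-" + digest.hexdigest()[:12]

def anonymise_record(record: dict) -> dict:
    """Drop direct identifiers and pseudonymise the subject reference."""
    cleaned = {k: v for k, v in record.items() if k not in ("name", "email")}
    cleaned["subject_id"] = pseudonymise(record["email"])
    return cleaned
```

With a keyed hash, records for one subject remain linkable during analysis, while the mapping cannot be reversed without the secret key, which can itself be deleted to complete anonymisation at the end of the project.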
_Integrity and confidentiality_
_Personal data shall be processed in a manner that ensures appropriate
security of the personal data, including protection against unauthorised or
unlawful processing and against accidental loss, destruction or damage, using
appropriate technical or organisational measures._
_All personal data will be handled with appropriate security measures applied.
This means:_
* _Data sets with personal data will be stored on a Google Drive server that complies with all GDPR regulations and is ISO 27001 certified._
* _Access to this Google Drive will be managed by the project management and will be given only to people who need to access the data. Access can be revoked if necessary._
* _All people with access to the personal data files will need to sign a confidentiality agreement._
_Accountability_
_The controller shall be responsible for, and be able to demonstrate
compliance with the GDPR._
_At project level, the project management is responsible for the correct data
management within the project._
# DocksTheFuture approach to privacy and data protection
On the basis of the abovementioned regulations, it is possible to define the
following requirements in relation to privacy, data protection and security:
* Minimisation: DocksTheFuture must only handle minimal data (that is, the personal data that is effectively required for the conduction of the project) about participants.
* Transparency: the project will inform data subjects about which data will be stored, who these data will be transmitted to and for which purpose, and about locations in which data may be stored or processed.
* Consent: Consents have to be handled allowing the users to agree the transmission and storage of personal data. The consent text included Deliverable 7.1 must specify which data will be stored, who they will be transmitted to and for which purpose for the sake of transparency. An applicant, who does not provide this consent for data necessary for the participation process, will not be allowed to participate.
* Purpose specification and limitation: personal data must be collected just for the specified purposes of the participation process and not further processed in a way incompatible with those purposes. Moreover, DocksTheFuture partners must ensure that personal data are not (illegally) processed for further purposes. Thus, those participating in project activities have to receive a legal note specifying this matter.
* Erasure of data: personal data must be kept in a form that permits identification of data subjects for no longer than is strictly necessary for the purposes for which the data were collected or for which they are further processed. Personal data that are no longer necessary must be erased or truly anonymised.
* Anonymity: The DocksTheFuture consortium must ensure anonymity by applying two strategies. On the one hand, anonymity will be granted through data generalisation; on the other hand, stakeholders’ participation in the project will be anonymous unless they voluntarily decide otherwise.
The abovementioned requirements translate into three pillars:
1. Confidentiality and anonymity – Confidentiality will be guaranteed whenever possible. The only exemption can be in some cases for the project partners directly interacting with a group of participants (e.g., focus group). The Consortium will not make publicly accessible any personal data. Anonymity will be granted through generalisation.
2. Informed consent – The informed consent policy requires that each participant will provide his/her informed consent prior to the start of any activity involving him/her. All people involved in the project activities (interviews, focus groups, workshops) will be asked to read and sign an Informed Consent Form explaining how personal data will be collected, managed and stored.
3. Circulation of the information limited to the minimum required for processing and preparing the anonymous open data sets – The consortium will never pass on or publish the data without first protecting participants’ identities. No irrelevant information will be collected; at all times, the gathering of private information will follow the principle of proportionality, by which only the information strictly required to achieve the project objectives will be collected. In all cases, the right of data cancellation will allow all users to request the removal of their data at any time.
# FAIR (Findable, Accessible, Interoperable and Re-usable) Data within Docks The Future
DMP component Issues to be addressed
1. Data summary
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
<table>
<tr>
<th>
The purpose of data collection in Docks The Future is to understand the opinions of, and obtain feedback on the “Port of the Future” from, relevant active stakeholders – defined as groups or organisations having an interest or concern in the project's impacts – in order to collect their opinions and find out their views about the “Port of the Future” concepts, topics and projects. This will include consultation with the European Technology Platforms in the transport sector (for example, Waterborne and ALICE), European innovation partnerships, JTIs and KICs. Consortium members (individually) hold a consolidated list of relevant selected stakeholders.
The following datasets are being collected:
* Notes and minutes of brainstorms and workshops and pictures of the events (.doc format, .jpeg/.png)
* Recordings and notes from interviews with stakeholders (.mp4, .doc format)
* Transcribed notes/recordings or otherwise ‘cleaned up’ or categorised data. (.doc, .xls format)
No data is being re-used. The data will be collected/generated before, during, or after project meetings and through interviews with stakeholders.
The data will probably not exceed 2 GB, where the main part of the storage
will be taken up by the recordings.
The data will be useful for other project partners and in the future for other
research and innovation groups or organizations developing innovative ideas
about ports.
</th> </tr> </table>
2. Making data findable, including provisions for metadata
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
<table>
<tr>
<th>
The following metadata will be created for the data files:
* Author
* Institutional affiliation
* Contact e-mail
* Alternative contact in the organizations
* Date of production
* Occasion of production
Further metadata might be added at the end of the project.
All data files will be named so as to reflect clearly their point of origin in the Docks The Future structure as well as their content. For instance, minutes from the meeting with experts in work package 1 will be named “yyyy mm dd DTF-WP1-meeting with experts”.
No further deviations from the intended FAIR principles are foreseen at this
point.
</th> </tr> </table>
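As an illustration only, the naming convention and metadata fields above could be applied programmatically. The helper names below are hypothetical, not part of the project's tooling:

```python
from datetime import date

def dtf_file_name(day: date, work_package: int, content: str) -> str:
    """Build a file name reflecting origin and content, following the
    convention 'yyyy mm dd DTF-WP<n>-<content>'."""
    return f"{day:%Y %m %d} DTF-WP{work_package}-{content}"

def dtf_metadata(author: str, affiliation: str, email: str,
                 alt_contact: str, produced: date, occasion: str) -> dict:
    """Metadata record with the fields listed in the plan."""
    return {
        "author": author,
        "institutional_affiliation": affiliation,
        "contact_email": email,
        "alternative_contact": alt_contact,
        "date_of_production": produced.isoformat(),
        "occasion_of_production": occasion,
    }
```

Keeping the date first in ISO-like order makes file listings sort chronologically, which is why conventions of this shape are common.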
3. Making data openly accessible
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
Data will initially be closed to allow verification of its accuracy within the project. Once verified and published, all data will be made openly available. Where possible, raw data will be made available; however, some data requires additional processing and interpretation to make it accessible to a third party. In these cases the raw data will not be made available, but the processed results will be.
Data related to project events, workshops, webinars, etc. will be made available on the Docks The Future website. No specific software tools are needed to access the data.
No further deviations from the intended FAIR principles are foreseen at this point.
4. Making data interoperable
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
The collected data will be ordered so as to make clear the relationship between the questions being asked and the answers being given. It will also be clear to which category the different respondents belong (consortium members, external stakeholders).
Data will be fully interoperable – full unrestricted access will be provided to datasets that are stored in data files of standard data formats, compliant with almost all available software applications. No specific ontologies or vocabularies will be used for the creation of metadata, thus allowing for unrestricted and easy interdisciplinary use.
5. Increase data re-use (through clarifying licences)
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
Datasets will be publicly available. Further information will be provided at a later stage of the project, to be decided by the owners/partners of the datasets.
It is not envisaged that Docks The Future will seek patents. The data collected, processed and analysed during the project will be made openly available following the deadlines of the deliverables containing the datasets. All datasets are expected to be publicly available by the end of the project.
The Docks The Future general rule will be that data produced during the project will be usable by third parties after its lifetime. For shared information, standard formats and proper documentation will guarantee re-usability by third parties.
The data are expected to remain re-usable (and maintained by the partner/owner) for as long as possible after the project has ended.
6. Allocation of resources
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
Data will be stored in the coordinator’s repository and will be maintained for at least 5 years after the end of the project (with a possibility of further prolongation for extra years).
The Project Coordinator (Circle) will be responsible for data management.
No additional costs will be incurred for the project management data.
7. Data Security
* Address data recovery as well as secure storage and transfer of sensitive data
Circle maintains a backup archive of all data collected within the project. After the Docks The Future lifetime, the dataset will remain on Circle’s server and will be managed by the coordinator.
8. Ethical Aspects
* To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former
No legal or ethical issues that can have an impact on data sharing arise at the moment.
# Open Research Data Framework
The project is part of the Horizon2020 Open Research Data Pilot (ORD pilot) that “aims to make the research data generated by selected Horizon 2020 projects accessible with as few restrictions as possible, while at the same time protecting sensitive data from inappropriate access”. This implies that the DocksTheFuture Consortium will deposit data on which research findings are based and/or data with a long-term value. Furthermore, Open Research Data will allow other scholars to carry on studies, hence fostering the general impact of the project itself.
As the EC states, Research Data “refers to information, in particular facts or
numbers, collected to be examined and considered as a basis for reasoning,
discussion, or calculation. […] Users can normally access, mine, exploit,
reproduce and disseminate openly accessible research data free of charge”.
However, the ORD pilot does not force the research teams to share all the data. There is in fact a constant need to balance openness and protection of scientific information, commercialisation and Intellectual Property Rights (IPR), privacy concerns, and security.
The DocksTheFuture consortium adopts the best practice the ORD pilot
encourages – that is, “as open as possible, as closed as necessary”. Given the
legal framework for privacy and data protection, in what follows the strategy
the Consortium adopts to manage data and to make them findable, accessible,
interoperable and re-usable (F.A.I.R.) is presented.
# Data collected during the first reporting period
During the first reporting period, there have been occasions in which data
have been collected for the project implementation. These moments are
described below:
* Online Stakeholder Consultation. The stakeholders’ consultation, whose results fed into D1.2 – _Stakeholders consultation proceedings_ – was carried out through an online survey based on the Google Forms platform. The online survey was launched on 14 September 2018 and remained open until 1 October; after the first launch, a second reminder was sent on 26 September. The official survey was preceded by 5 interviews aimed at testing the stakeholders’ answers. The interviews were fairly close to the final survey, since they were mainly based on open questions. After this “testing phase”, the consortium decided to administer an online survey made up of both open and closed questions, with a smaller number of open questions and a greater adherence to deliverable D1.1 _Desktop analysis of the concept including EU Policies_, which, in the meantime, had been completed and submitted. To reach out to a larger community of interested stakeholders, the link to the web-based survey was disseminated using the official project website, the project newsletter and dedicated emails to selected stakeholders. The online survey was closed on 1 October 2018 with 72 complete individual answers.
* Workshop with experts, 29th and 30th October 2018, Oporto. The workshop was hosted by APDL (Administração dos Portos do Douro, Leixões e Viana do Castelo) in the Port of Leixões. The event aimed at getting the vision and sharing knowledge and ideas about the Port of The Future within the DocksTheFuture project. The participants came from different sectors of the maritime and port industry. There were experts from a wide range of organisations and institutions, such as the Maritime & Mediterranean Economy Department at SRM, the Hellenic Institute of Transport, the Baltic Ports Organization, Fraunhofer IFF’s Digital Innovation Hub, ALICE, the PIXEL Ports project, Delft University of Technology, the University of Genova, the Port of Barcelona, Kühne Logistics University (KLU), Escola Europea – Intermodal Transport, PortExpertise, the Irish Maritime Development Office, KEDGE Business School, etc. As underlined in the Grant Agreement, this workshop was conducted with reference to Task 1.5, and its specific goal was for the experts to validate the WP1 outputs. Having conducted a desktop analysis of what ports might look like in the near future, it is essential to validate those conclusions with those who are in the field and have unquestionable expertise on the subject matter. The discussions took place in five breakout sessions on the following topics:
  * Digitalisation and digital transformation
  * Sustainability
  * Port-city relation
  * Infrastructure, means of transport, and accessibility
  * Competition, cooperation, and bridging R&D and implementation
* Workshop with experts, 3rd April, Trieste. The main goal of the workshop was twofold: to validate the selected projects and initiatives of interest with reference to WP2 – _Selection and Clustering of Projects and Initiatives of Interest_ – on the one hand, and to present/add further projects and initiatives not yet considered, on the other hand.
Before each of the above-mentioned activities, the involved experts and stakeholders were asked to fill in the informed consent form (refer to deliverable 7.1 – H-Requirement N1) before giving their inputs (e.g. filling in the online survey, sharing their presentations).
# Update of the consent form
The above-mentioned consent form has been further updated according to the
Regulation (EU) 2016/679 ("GDPR"). The updated consent form is presented below
(additions marked in yellow) and will be used from now on in the second
reporting period.
Disclaimer
_The views represented in this document only reflect the views of the authors and not the views of the Innovation & Networks Executive Agency (INEA) and the European Commission. INEA and the European Commission are not liable for any use that may be made of the information contained in this document. Furthermore, the information is provided “as is” and no guarantee or warranty is given that the information is fit for any particular purpose. The user uses the information at its sole risk and liability._
0548_ETIP PV - SEC II_825669.md
# 1\. Introduction
The report sets out ETIP PV’s approach to managing the data it generates and
the personal data it collects, linking to several other reports or external
sources:
* The EC’s official summary of GDPR
* the Grant Agreement and
* the Consortium Agreement
# 2\. Objectives
The objective of the data management plan is to have a consortium-wide data
management policy as outlined in this document.
# 3\. Data Summary
The type of data generated by the project will mainly be expert reports and
political messages, newsletters and press releases, aiming at supporting all
stakeholders from the Photovoltaic sector and related sectors to contribute to
the SET-Plan. Deliverables are generally public, except in WP1 Management:
deliverables relating to internal management of the project.
The members of the ETIP PV will generate a significant amount of information during the project, especially under WP2. ETIP PV will produce reports and will organise and carry out workshops and conferences. All finalised documents will be made public. All events that ETIP PV organises will be open to the public. Presentations, proceedings and any other relevant materials from the events will be made available on the ETIP PV website at www.etip-pv.eu.
The collection of data is treated in the Consortium Agreement, which is based on the DESCA model adapted by the project. Ownership of the foreground and knowledge is also covered in the Consortium Agreement, which includes the intellectual property rights of the members involved in data generation and collection. All data collection, processing, storage, sharing, preservation, and archiving will respect ethical research practices and national, EU and international law, including privacy law. The project participates in the H2020 Open Research Data Pilot.
The list of deliverables can be found in Annex 1 (table extracted from ETIP PV
– SEC II Grant Agreement):
#### a) Policy on public (PU) deliverables
Final versions of public deliverables, or of reports that contribute to a
public deliverable, will be disseminated after their acceptance by the
European Commission. Draft versions will remain confidential.
Public reports will identify their lead author and co-authors.
_Tracking contributions_
Draft versions may collect individual partner’s contributions to a document
word-by-word or comment-by-comment. Tracking partner’s contributions to
deliverables is necessary to quantify the contribution to the report,
providing a basis to calculate any budget redistribution between partners
should that need arise. Tracking also enables the lead author to understand
the context for a comment.
#### b) Policy on confidential (CO) deliverables
Final versions of confidential deliverables, or of reports that contribute to
a confidential deliverable, will be for the consortium and the European
Commission. Draft versions will be for the consortium alone.
The reports will identify their lead author and co-authors.
_Tracking contributions_
As above under 1 a)
# 4\. Management of intellectual property rights
The Consortium has concluded a Consortium Agreement describing the partners’
rights to use ETIP PV SEC - II’s foreground, how the Consortium will police
the release of information to the public domain, and the rights that the
partners retain over any background that they bring to ETIP PV SEC II.
# 5\. E-DATA
Data relating to people acting in their professional capacity will be gathered
and stored electronically during the project, by two main routes:
#### a) Website
Visitors to www.etip-pv.eu will be invited to accept a Cookies policy that
allows their interaction with the site to be tracked anonymously. The Cookies
policy is accessible via a pop-up for first-time visitors to the site. The
pop-up will display so long as the policy has not been explicitly accepted as
an ever-present reminder to the visitor. The Privacy Policy is publicly and
clearly stated on the website.
There is a password-protected part of the site for the members of the ETIP PV.
Visitors can sign up for an email newsletter. Unsubscribing from the
newsletter is easy and effective from the moment they give the instruction.
The name and place of work of registrants is requested both to match the
requirements of our mail-merge software, and to give us information on the
kind of organisations that find ETIP PV interesting.
A connector to Twitter is provided. Twitter collects far more data on its
users than ETIP PV does. WIP has no ability to control this.
#### b) Work Package 3: Organisation of events
Registrations for the WP3 Annual ETIP PV conference and workshops will be
taken electronically via an event tool. Personal data will not be collected
(unless required for any payment, where a third party will take it). Limited professional data will be collected to analyse the success of the conference.
Registrants’ data will be distributed beyond the consortium / European
Commission only if they allow it.
A list of registrants to the conference will be circulated. Delegates will be given badges showing their name and company. Photos will be taken at the conference. Delegates will consent to their participation at the conference being publicised in this way.
The opportunity to opt in for the newsletter will be offered to registrants to
the conference as they make their registrations.
#### c) Sharing data within consortium
All partners may have access to electronic data that the person it relates to has consented to share, to the extent that this is necessary for their work in ETIP PV.
# 6\. Data Management Officer
The party responsible for processing data on this website is:
WIP Wirtschaft und Infrastruktur GmbH & Co Planungs-KG
Sylvensteinstr. 2
81369 München
Germany
Telephone: +49-89-720 12 735 Email: [email protected]
_Statutory data protection officer_
We have appointed a data protection officer for our company.
DATATREE
Heubestraße 10
40597 Düsseldorf
Germany
Email: [email protected]
# 7\. Responsibilities under GDPR
ETIP PV - SEC II’s Grant Agreement states that “a document will be written and
circulated to partners outlining their duties under the GDPR Regulation”, and
that this should be part of the Data Management Plan. Since the EC has itself
produced a summary (‘The GDPR: new opportunities, new obligations’:
_https://ec.europa.eu/commission/sites/beta-_
_political/files/data-protection-factsheet-sme-obligations_en.pdf_ ), this is not necessary. The summary is attached to this document in Annex 2\.
# 8\. Contacts
### **Project coordinator**
WIP Renewable Energies
Sofía Arancón
Sylvensteinstrasse 2, D-81369 Munich, Germany
Email: [email protected]
Phone: +49-89-720 12 722
0549_MegaRoller_763959.md
# EXECUTIVE SUMMARY
The scope of the MegaRoller Project is to develop and demonstrate a Power
Take-Off (PTO) for wave energy converters. During the project, MegaRoller will
generate, collect and reuse various types of research data.
The purpose of the MegaRoller Data Management Plan (DMP) is to contribute to good data handling, to indicate what research data the project expects to generate/collect, and what can be shared with the public. The DMP gives instructions on naming conventions, metadata structure and storing of the research data sets. During the 36 months of the active project, a SharePoint site will be used as the working and collaboration area. All data sets will be uploaded to this site and metadata will be added. Detailed instructions on uploading research data sets are given.
MegaRoller will use the Zenodo repository to comply with the H2020 Open Access Mandate. The mandate applies to research data underlying publications, but beneficiaries can also voluntarily make other data sets open. In MegaRoller, all scientific publications and the underlying research data sets will be uploaded to the MegaRoller Community in Zenodo. Other data sets with dissemination level "Public" will also be uploaded to Zenodo. Each data set will be given a persistent identifier (DOI), supplied with relevant metadata and closely linked to the MegaRoller grant number and project acronym. Publications and underlying research data will be linked. Creative Commons licences will regulate reuse of the MegaRoller research data. Data security arrangements are defined for the SharePoint site and Zenodo. Ethical aspects affecting data sharing have been considered.
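The Zenodo upload workflow summarised above can be sketched against Zenodo's public REST deposition API. The payload fields follow Zenodo's documented metadata schema, but the community identifier, grant id and helper names below are illustrative assumptions, not the project's actual configuration, and no HTTP request is actually sent here.

```python
import json
import urllib.request

ZENODO_API = "https://zenodo.org/api/deposit/depositions"

def deposition_metadata(title: str, description: str,
                        creators: list, license_id: str = "CC-BY-4.0") -> dict:
    """Build the metadata payload for a public data set.

    The community identifier and grant id link the record to the project,
    as required by the H2020 Open Access Mandate.
    """
    return {
        "metadata": {
            "upload_type": "dataset",
            "title": title,
            "description": description,
            "creators": creators,  # e.g. [{"name": "Doe, Jane", "affiliation": "..."}]
            "license": license_id,  # a Creative Commons licence identifier
            "communities": [{"identifier": "megaroller"}],  # assumed community name
            "grants": [{"id": "10.13039/501100000780::763959"}],  # EC grant 763959
        }
    }

def create_deposition_request(token: str, payload: dict) -> urllib.request.Request:
    """Prepare (but do not send) the HTTP request creating a new deposition."""
    return urllib.request.Request(
        f"{ZENODO_API}?access_token={token}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the prepared request with a valid personal access token would return a deposition id and a reserved DOI, after which files and final publication can follow through the same API.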
# 1 INTRODUCTION AND DATA SUMMARY
## 1.1 Purpose of the document
The purpose of this Data Management Plan (DMP) is to contribute to good data
handling during the MegaRoller project and detailing what data the project
will generate, whether and how it will be exploited or made accessible for
verification and re-use, and how it will be curated and preserved.
## 1.2 List of definitions, acronyms, and abbreviations
<table>
<tr>
<th>
**BibTeX**
</th>
<th>
is a reference management software for formatting lists of references.
</th> </tr>
<tr>
<td>
**CC licence**
</td>
<td>
Creative Commons licences are tools to grant copyright permissions to creative
work.
</td> </tr>
<tr>
<td>
**CC-BY**
</td>
<td>
This CC-license lets others distribute, remix, tweak, and build upon your
work, even commercially, as long as they credit you for the original creation.
This is the most accommodating of licenses offered. Recommended for maximum
dissemination and use of licensed materials.
</td> </tr>
<tr>
<td>
**CC-BY-SA**
</td>
<td>
This CC-license lets others remix, tweak, and build upon your work even for
commercial purposes, as long as they credit you and license their new
creations under the identical terms. This license is often compared to
“copyleft” free and open source software licenses. All new works based on
yours will carry the same license, so any derivatives will also allow
commercial use.
</td> </tr>
<tr>
<td>
**CC-BY-NC**
</td>
<td>
This CC-license lets others remix, tweak, and build upon your work non-
commercially, and although their new works must also acknowledge you and be
non-commercial, they don’t have to license their derivative works on the same
terms.
</td> </tr>
<tr>
<td>
**CSL**
</td>
<td>
Citation Style Language is an open XML-based standard to format citations and
Bibliographies.
</td> </tr>
<tr>
<td>
**DMP**
</td>
<td>
Data management plan.
</td> </tr>
<tr>
<td>
**DOI**
</td>
<td>
Digital Object Identifier.
</td> </tr>
<tr>
<td>
**FAIR data**
</td>
<td>
**F** indable, **A** ccessible, **I** nteroperable and **R** e-useable data
</td> </tr>
<tr>
<td>
**JSON**
</td>
<td>
JavaScript Object Notation is an open-standard file format.
</td> </tr>
<tr>
<td>
**MARCXML**
</td>
<td>
MARCXML is an XML schema based on the common MARC21 standards.
</td> </tr>
<tr>
<td>
**OAI-PMH**
</td>
<td>
The Open Archives Initiative Protocol for Metadata Harvesting.
</td> </tr> </table>
<table>
<tr>
<td>
**Research data**
</td>
<td>
Refers to information, in particular facts or numbers, collected to be examined and considered as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings, and images.
</td> </tr> </table>
<table>
<tr>
<th>
**REST API**
</th>
<th>
REST is an architectural style that defines a set of constraints to be used
for creating web services. API means Application Programming Interface.
</th> </tr>
<tr>
<td>
**SSL/TLS**
</td>
<td>
Secure Sockets Layer / Transport Layer Security are protocols offering secure
communication on the internet.
</td> </tr>
<tr>
<td>
**Zenodo**
</td>
<td>
Zenodo is a catch-all repository that enables researchers, scientists, EU
projects and institutions to share research results, make research results
citable and search and reuse open research results from other projects. Zenodo
is harvested by the OpenAIRE portal.
</td> </tr> </table>
## 1.3 Structure of the document
This document is structured as follows:
* Section 1 is an introduction chapter describing the main purpose of the DMP and data summary.
* Section 2 describes the main principles (FAIR) for the data management in the project and how MegaRoller will comply with the H2020 Open Access Mandate.
* Section 3 describes the allocation of resources.
* Section 4 gives a detailed description of data security arrangements.
* Section 5 deals with ethical aspects connected to data management in the MegaRoller project.
The tool DMPonline, hosted by the Digital Curation Centre, has been used to
generate this document. It is based on the Horizon 2020 DMP template.
## 1.4 Relationship with other deliverables
The DMP is not a fixed document but evolves during the lifespan of the
project. Deliverable D6.5 is the initial version of the MegaRoller Data
Management Plan.
* Deliverable D6.19 Data management update, due in month 18, will be a more detailed and updated version of the document.
* D6.14 Data management report, due in month 36, will be the final version of this document.
This document complements the following deliverables:
* D6.1 Communication plan
* D6.3 Dissemination plan
* D7.1 Project Quality Handbook
## 1.5 Summary of data
The scope of the MegaRoller Project is to develop and demonstrate a Power
Take-Off (PTO) for wave energy converters. MegaRoller will generate and reuse
various types of data, in formats such as txt, xls, mat, mdl and pdf.
The size of the data will vary but will in total be moderate and should not
represent any challenge regarding storage capacity or handling. A more
detailed list of planned data sets and accessibility is given in Table 2-1.
### 1.5.1 MegaRoller Sharepoint site
All data sets in the MegaRoller project will be stored in a SINTEF Sharepoint
project site. This will be the project's working and collaboration area during
the 36-month active project period. Every partner will be responsible for
uploading the data sets they have created or collected. All datasets will use
standard Sharepoint version control.
These metadata will be provided for each data set:
* File name
* Date
* Version
* File type
* Description
* WP number
* Responsible person
* Lead
* Dissemination level
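For reference, the metadata fields listed above could be captured as a simple record structure before being added to the Sharepoint list. The sketch below is illustrative only; the field names are taken from the list above, and all values are hypothetical:

```python
from dataclasses import dataclass, asdict

@dataclass
class DatasetMetadata:
    """Metadata record mirroring the fields required for each data set."""
    file_name: str
    date: str                 # ISO 8601, e.g. "2019-03-01"
    version: str
    file_type: str
    description: str
    wp_number: str
    responsible_person: str
    lead: str
    dissemination_level: str  # "Public" or "Confidential"

# Hypothetical example entry (values are illustrative):
record = DatasetMetadata(
    file_name="RMS_simulation_H2020_MegaRoller_2.2_3",
    date="2019-03-01",
    version="1.0",
    file_type="mat",
    description="WEC-Sim RMS simulation output",
    wp_number="WP1",
    responsible_person="Jane Doe",
    lead="CAT",
    dissemination_level="Confidential",
)
print(asdict(record))
```

A plain dictionary would serve equally well; a dataclass simply makes the mandatory fields explicit.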
### 1.5.2 MegaRoller community in Zenodo
The MegaRoller project will use Zenodo to comply with H2020 Open Access
mandate. All scientific publications and underlying data set will be uploaded
to the MegaRoller community in Zenodo. In addition, the project will upload
other data sets with dissemination level "Public" and make them openly
accessible via Zenodo.
### 1.5.3 Upload instructions
The text box below and Figure 1-1 give the upload instructions for Sharepoint
and Zenodo:
<table>
<tr>
<th>
**Upload instructions - MegaRoller Sharepoint Site**
</th> </tr>
<tr>
<td>
* Please upload all MegaRoller data sets to this folder in the MegaRoller Sharepoint site: _100 Research data_
* Use this naming convention (for details see 2.1.2): _Descriptive text H2020_MegaRoller_DeliverableNumber_UniqueDataNumber_ or _Descriptive text H2020_MegaRoller_PublicationNumber_UniqueDataNumber_
* Be sure to use the same file name when uploading later versions
* Register mandatory metadata on your data set by adding a new item to this list: _MegaRoller Research Data_
</td> </tr>
<tr>
<td>
**Upload instructions - Zenodo**
</td> </tr>
<tr>
<td>
* Research data underlying scientific publications or classified as "Public" should, in addition, be uploaded to the _MegaRoller Community_ in Zenodo. Create a profile in Zenodo to be able to upload files.
* Uploading should be done as soon as possible and at the latest on article publication. Each partner is responsible for uploading data sets created/collected by them. If needed, the leader of Task 6.7 will supply assistance.
</td> </tr> </table>
1 The figure is based on the graph "Open access to scientific publication and
research data in the wider context of dissemination and exploitation" in
_Guidelines to the Rules on Open Access to Scientific Publications and Open
Access to Research Data in Horizon 2020_.
# 2 FAIR DATA IN THE MEGAROLLER PROJECT
The MegaRoller project works according to the principles of **FAIR data**
(Findable, Accessible, Interoperable and Re-usable). The project aims to
maximise access to and re-use of research data generated by the project. At
the same time, there are data sets generated in this project that cannot be
shared due to commercial or IPR reasons. Table 2.1 gives details on the data
sets and accessibility.
## 2.1 Findable data
### 2.1.1 MegaRoller community in Zenodo
MegaRoller will use Zenodo repository as the main tool to comply with the
H2020 Open Access mandate. A MegaRoller community has been established. All
scientific articles/papers and public data sets will be uploaded to this
community in Zenodo and enriched with standard Zenodo metadata, including
Grant Number and Project Acronym. Every partner will be responsible for
uploading data sets that they have created/collected and to assign relevant
keywords. Zenodo provides version control and assigns DOIs to all uploaded
elements.
### 2.1.2 Naming conventions
Data will be named using the following naming conventions:
_Descriptive text H2020_MegaRoller_DeliverableNumber_UniqueDataNumber_
_Descriptive text H2020_MegaRoller_PublicationNumber_UniqueDataNumber_
**Example:** RMS_simulation_H2020_MegaRoller_2.2_3
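A convention like this lends itself to automatic checking. The sketch below (pattern inferred from the convention above, not an official project tool) validates candidate data set names:

```python
import re

# Pattern assumed from the convention:
#   <Descriptive text>_H2020_MegaRoller_<Deliverable/PublicationNumber>_<UniqueDataNumber>
NAME_PATTERN = re.compile(
    r"^(?P<description>.+)_H2020_MegaRoller_(?P<ref_number>[\d.]+)_(?P<data_number>\d+)$"
)

def check_name(name: str) -> bool:
    """Return True if a data set name follows the MegaRoller naming convention."""
    return NAME_PATTERN.match(name) is not None

print(check_name("RMS_simulation_H2020_MegaRoller_2.2_3"))  # True
print(check_name("simulation_results.mat"))                 # False
```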
### 2.1.3 Digital Object Identifiers (DOI)
DOIs for all data sets will be reserved and assigned with the DOI
functionality provided by Zenodo. DOI versioning will be used to assign unique
identifiers to updated versions of the data records.
### 2.1.4 Metadata in Zenodo
Metadata associated with each published data set will by default include:
* Digital Object Identifiers and version numbers
* Bibliographic information
* Keywords
* Abstract/description
* Associated project and community
* Associated publications and reports
* Grant information
* Access and licensing info
* Language
## 2.2 Accessible data
The H2020 Open Access Mandate aims to make research data generated by H2020
projects accessible with as few restrictions as possible, while also accepting
the protection of sensitive data for commercial or security reasons.
All public data sets underlying scientific publications will be uploaded to
Zenodo and made open, free of charge. The project will, in addition, make
other data sets with dissemination level "Public" openly accessible via Zenodo.
Publications and underlying data sets will be linked through persistent
identifiers. Data sets with dissemination level "Confidential" will not be
shared, for reasons of commercial exploitation.
Metadata, including licences for individual data records as well as record
collections, will be harvestable using the OAI-PMH protocol by the record
identifier and the collection name. Metadata is also retrievable through the
public REST API. The data will be available through www.zenodo.org, and hence
accessible using any web browsing application.
Information on needed software tools will be provided.
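As an illustration of that accessibility, public record metadata can be queried programmatically through Zenodo's REST API. The sketch below builds such a query URL; the `megaroller` community identifier is an assumption, and the actual fetch is shown only as a comment since it requires network access:

```python
from urllib.parse import urlencode

def community_records_url(community: str, size: int = 10) -> str:
    """Build a Zenodo REST API query URL for public records in a community."""
    params = urlencode({"communities": community, "size": size})
    return f"https://zenodo.org/api/records?{params}"

# Example (community identifier assumed):
print(community_records_url("megaroller"))
# With network access, the JSON response could then be fetched with
# urllib.request.urlopen() and parsed with json.load().
```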
The table below provides a list of all data expected to be generated in the
MegaRoller project and their planned accessibility. We recognize that this
list may grow or change as the project proceeds.
### Table 2-1 All data planned to be generated in the MegaRoller project, and
their accessibility
<table>
<tr>
<th>
**Task**
</th>
<th>
**Description/Name of data**
</th>
<th>
**Purpose**
</th>
<th>
**Format**
</th>
<th>
**Origin (Lead)**
</th>
<th>
**Class**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
1.1
</td>
<td>
Wave prediction at MegaRoller
installation site
</td>
<td>
Specify the wave characteristics at MegaRoller installation site for power
performance and load estimation
</td>
<td>
</td>
<td>
UiB
</td>
<td>
PU
</td>
<td>
Subject to approval from partners
</td> </tr>
<tr>
<td>
1.2
</td>
<td>
WEC-Sim outputs. Load
characterisation
</td>
<td>
Supports the evidence that the code is functional, and the results are
acceptable
</td>
<td>
MATLAB files (*.mat)
</td>
<td>
CAT
</td>
<td>
CO
</td>
<td>
Subject to approval from AWE
</td> </tr>
<tr>
<td>
1.3
</td>
<td>
WEC-Sim outputs. Load
characterisation
</td>
<td>
Provide load and motion envelope faced by drivetrains for input to Task 1.5
</td>
<td>
MATLAB files (*.mat)
</td>
<td>
CAT
</td>
<td>
CO
</td>
<td>
Subject to approval from AWE
</td> </tr>
<tr>
<td>
1.4
</td>
<td>
WEC-Sim outputs. Validation data
</td>
<td>
verification, pre-validation and validation results of the WEC numerical model
</td>
<td>
MATLAB files (*.mat)
</td>
<td>
CAT
</td>
<td>
CO
</td>
<td>
Subject to approval from AWE
</td> </tr>
<tr>
<td>
1.5
</td>
<td>
List of requirements to update the design of the mechanical structure. Design
of test bench
</td>
<td>
Specify the components so that they can be ordered and installed
</td>
<td>
</td>
<td>
CAT
</td>
<td>
CO
</td>
<td>
Subject to approval from AWE
</td> </tr>
<tr>
<td>
1.6
</td>
<td>
Records all inspection and testing requirements relevant to the construction
activities. CTI, safety and quality plan
</td>
<td>
Guide the implementation and
integration of the upgraded test
bench
</td>
<td>
</td>
<td>
CA/AWE/ABB/
Hydman/Hydroll
</td>
<td>
CO
</td>
<td>
Subject to approval from partners
</td> </tr> </table>
<table>
<tr>
<th>
2.1
</th>
<th>
Reports, excel-sheets
</th>
<th>
Determine the charging method for the PTO
</th>
<th>
pdf/xls
</th>
<th>
Hydroll
</th>
<th>
CO
</th>
<th>
The method could be patented. After patent, it will be public but not before.
</th> </tr>
<tr>
<td>
2.1
</td>
<td>
Conceptual and detailed hydraulic design
</td>
<td>
To find hydraulic components
</td>
<td>
Report
(word/pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
</td> </tr>
<tr>
<td>
2.2
</td>
<td>
electrical layout, design and select the
electrical
components
</td>
<td>
To find electrical components for
PTO system and electric distribution
</td>
<td>
tbd
</td>
<td>
ABB
</td>
<td>
CO
</td>
<td>
Commercial reason and Security reasons
</td> </tr>
<tr>
<td>
2.2
</td>
<td>
Specifications, Data for LCOE
</td>
<td>
The expected cost of the main equipment needs to be
</td>
<td>
xls
</td>
<td>
ABB
</td>
<td>
CO
</td>
<td>
Commercial exploitation
</td> </tr>
<tr>
<td>
2.3
</td>
<td>
Conceptual and detailed mechanical design
</td>
<td>
Twin drive train innovation
</td>
<td>
Report
(word/pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
</td> </tr>
<tr>
<td>
2.4
</td>
<td>
Conceptual and detailed control system design
</td>
<td>
Control system design
</td>
<td>
Report
(word/pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
</td> </tr>
<tr>
<td>
2.5
</td>
<td>
Preliminary Life Cycle Assessment report and
</td>
<td>
Environmental and socio-economic acceptance of the project
</td>
<td>
pdf
</td>
<td>
WavEC
</td>
<td>
CO
</td>
<td>
Data confidentiality reasons regarding the LCA
</td> </tr>
<tr>
<td>
2.5
</td>
<td>
Environmental Impact Assessment standard model
</td>
<td>
Environmental and socio-economic acceptance of the project
</td>
<td>
pdf
</td>
<td>
WavEC
</td>
<td>
PU
</td>
<td>
EIA standard model can be shared with the Public
</td> </tr>
<tr>
<td>
3.1
</td>
<td>
Description of hydraulic
components
</td>
<td>
Hydraulic implementation
</td>
<td>
Report
</td>
<td>
</td>
<td>
CO
</td>
<td>
Confidential, only for members of the consortium (including the Commission
Services)
</td> </tr>
<tr>
<td>
3.2
</td>
<td>
Specifications,
Datasheets
</td>
<td>
Datasheets of the main components for checking more
</td>
<td>
pdf
</td>
<td>
ABB
</td>
<td>
CO
</td>
<td>
Commercial exploitation
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
detail specifications of the components
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
3.2
</td>
<td>
ABB manufacture, deliver and commissioning
electrical
components
</td>
<td>
To find electrical components for
PTO system and electric distribution
</td>
<td>
tbd
</td>
<td>
ABB
</td>
<td>
CO
</td>
<td>
Commercial reason and Security reasons
</td> </tr>
<tr>
<td>
3.3
</td>
<td>
Mechanical implementation
</td>
<td>
Mechanical components
</td>
<td>
Report (word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
</td> </tr>
<tr>
<td>
3.4
</td>
<td>
Wave height/ MegaRoller panel
movement prediction
</td>
<td>
Control system implementation
</td>
<td>
txt
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Public data sharing requires approval from the consortium.
</td> </tr>
<tr>
<td>
3.4
</td>
<td>
Control system implementation
</td>
<td>
Control system software
</td>
<td>
Report (Word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
</td> </tr>
<tr>
<td>
3.5
</td>
<td>
Handover meeting report
</td>
<td>
Hand-over to capture assembly requirements
</td>
<td>
Report (word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Report/Document
</td> </tr>
<tr>
<td>
4.1
</td>
<td>
Health & safety risk assessment report
</td>
<td>
</td>
<td>
Report (word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Report/Document
</td> </tr>
<tr>
<td>
4.2
</td>
<td>
CTI, safety and quality assurance plans (PTO)
</td>
<td>
</td>
<td>
Report (word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Document
</td> </tr>
<tr>
<td>
4.4
</td>
<td>
Assembly instructions and order
</td>
<td>
Assembly work
</td>
<td>
Report (word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
</td> </tr>
<tr>
<td>
5.2
</td>
<td>
PTO performance and power quality validation results
</td>
<td>
Validation of PTO
</td>
<td>
Report (word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
5.2
</th>
<th>
PTO performance and power quality
validation results, Report
</th>
<th>
Contrast the experimental results with the simulated ones.
</th>
<th>
TXT, XLS or
PDF
</th>
<th>
VTT, ABB
</th>
<th>
CO
</th>
<th>
Commercial exploitation
</th> </tr>
<tr>
<td>
5.3
</td>
<td>
Validation data
</td>
<td>
Validation of PTO reliability
</td>
<td>
</td>
<td>
VTT
</td>
<td>
CO
</td>
<td>
</td> </tr>
<tr>
<td>
5.4
</td>
<td>
Final LCA report and
Environmental Impact Assessment of the MegaRoller device
</td>
<td>
Evaluation of the socio-economic impacts of the project
</td>
<td>
pdf
</td>
<td>
WavEC
</td>
<td>
PU
</td>
<td>
</td> </tr>
<tr>
<td>
5.5
</td>
<td>
LCC data
</td>
<td>
Validation of PTO reliability
</td>
<td>
</td>
<td>
VTT
</td>
<td>
CO
</td>
<td>
</td> </tr>
<tr>
<td>
6.3
</td>
<td>
Innovation management plan
</td>
<td>
</td>
<td>
Report (word/ pdf)
</td>
<td>
AWE
</td>
<td>
PU
</td>
<td>
Document
</td> </tr>
<tr>
<td>
6.3
</td>
<td>
Innovation management report
incl. IPR Registry I
</td>
<td>
</td>
<td>
Report (word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Report/Document
</td> </tr>
<tr>
<td>
6.3
</td>
<td>
Innovation management report
incl. IPR Registry II
</td>
<td>
</td>
<td>
Report (Word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Report/Document
</td> </tr>
<tr>
<td>
6.4
</td>
<td>
Find new opportunities and potential users of the Mega Roller project
technologies
</td>
<td>
To find electrical components for
PTO system and electric distribution
</td>
<td>
tbd
</td>
<td>
ABB
</td>
<td>
CO
</td>
<td>
Commercial reason and Security reasons
</td> </tr>
<tr>
<td>
6.5
</td>
<td>
Exploitation plan draft
</td>
<td>
</td>
<td>
Report (word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Document
</td> </tr>
<tr>
<td>
6.5
</td>
<td>
Exploitation plan update I and II
</td>
<td>
</td>
<td>
Report (Word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Document
</td> </tr> </table>
<table>
<tr>
<th>
6.5
</th>
<th>
LCOE report
</th>
<th>
</th>
<th>
Report (Word/ pdf)
</th>
<th>
AWE
</th>
<th>
PU
</th>
<th>
Report/Document
</th> </tr>
<tr>
<td>
6.5
</td>
<td>
Business cases
</td>
<td>
</td>
<td>
Report (Word/ pdf)
</td>
<td>
AWE
</td>
<td>
PU
</td>
<td>
Report/Document
</td> </tr>
<tr>
<td>
6.6
</td>
<td>
Description of the existing standards and their applicability. Gap analysis
matrix
</td>
<td>
Assesses the salient gaps in certification related to the specifics of the
technology
</td>
<td>
pdf
</td>
<td>
CAT
</td>
<td>
PU
</td>
<td>
Useful for Technology developers considering a certification process;
standardisation work groups; certification bodies
</td> </tr>
<tr>
<td>
7.1
</td>
<td>
Project Quality
Handbook
</td>
<td>
</td>
<td>
Report (Word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Document
</td> </tr>
<tr>
<td>
7.1
</td>
<td>
Project management plan
</td>
<td>
</td>
<td>
Report (Word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Document
</td> </tr>
<tr>
<td>
7.1
</td>
<td>
Project management report I and II
</td>
<td>
</td>
<td>
Report (Word/ pdf)
</td>
<td>
AWE
</td>
<td>
CO
</td>
<td>
Report/Document
</td> </tr>
<tr>
<td>
7.1
</td>
<td>
Final (Public) Report
</td>
<td>
</td>
<td>
Report (Word/ pdf)
</td>
<td>
AWE
</td>
<td>
PU
</td>
<td>
Report/Document
</td> </tr>
<tr>
<td>
8.1
</td>
<td>
Project coordination plan
</td>
<td>
</td>
<td>
Report
</td>
<td>
</td>
<td>
CO
</td>
<td>
Document
Confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
8.2
</td>
<td>
Project coordination update
</td>
<td>
</td>
<td>
Report
</td>
<td>
</td>
<td>
CO
</td>
<td>
Document
Confidential, only for members of the consortium (including the
Commission Services)
</td> </tr> </table>
<table>
<tr>
<th>
8.3
</th>
<th>
Project coordination report
</th>
<th>
</th>
<th>
Report
</th>
<th>
</th>
<th>
CO
</th>
<th>
Document
Confidential, only for members of the consortium (including the
Commission Services)
</th> </tr> </table>
## 2.3 Interoperable data
Zenodo uses JSON schema as internal representation of metadata and offers
export to other formats such as Dublin Core, MARCXML, BibTeX, CSL, DataCite
and export to Mendeley. The data record metadata will utilise the vocabularies
applied by Zenodo. For certain terms, these refer to open, external
vocabularies, e.g.: license (Open Definition), funders (FundRef) and grants
(OpenAIRE). Reference to any external metadata is done with a resolvable URL.
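For illustration, a minimal metadata record in the spirit of Zenodo's internal JSON representation might look as follows (field names are indicative only, not the exact Zenodo schema; the DOI is a placeholder, while the grant number is the project's own):

```python
import json

# Illustrative record (not the exact Zenodo schema):
record = {
    "doi": "10.5281/zenodo.0000000",   # placeholder DOI
    "metadata": {
        "title": "RMS_simulation_H2020_MegaRoller_2.2_3",
        "creators": [{"name": "Doe, Jane", "affiliation": "SINTEF"}],
        "license": "CC-BY-SA-4.0",
        "keywords": ["wave energy", "PTO"],
        "communities": [{"identifier": "megaroller"}],
        "grants": [{"id": "763959"}],  # grant agreement number from this DMP
    },
}
print(json.dumps(record, indent=2))
```

Because the representation is plain JSON, exports to Dublin Core, MARCXML, BibTeX, CSL or DataCite are straightforward transformations of such a record.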
## 2.4 Reusable data
The MegaRoller project will enable third parties to access, mine, exploit,
reproduce and disseminate (free of charge for any user) all public data sets,
and regulate this by using Creative Commons Licences.
### 2.4.1 Recommended Creative Commons (CC) licences
Creative Commons licences are a tool to grant copyright permissions to
creative work.
As a default, the CC-BY-SA license will be applied for public MegaRoller data.
This license lets others remix, tweak, and build upon your work even for
commercial purposes, as long as they credit you and license their new
creations under the identical terms. This license is often compared to
“copyleft” free and open source software licenses. All new works based on
yours will carry the same license, so any derivatives will also allow
commercial use. This does not preclude the use of less restrictive licenses
such as CC-BY, or more restrictive licenses such as CC-BY-NC, which does not
allow commercial usage. This will be assessed case by case.
### 2.4.2 Availability of the MegaRoller research data sets
For data published in scientific journals, the underlying data will be made
available no later than by journal publication. The data will be linked to the
publication. Data associated with public deliverables will be shared once the
deliverable has been approved by the EC.
Open data can be reused in accordance with the Creative Commons licences. Data
classified as confidential will as default not be reusable due to commercial
exploitation.
The public data will remain reusable via Zenodo for at least 20 years, which
is currently the lifetime of the host laboratory CERN. In case Zenodo is
phased out, its policy is to transfer data/metadata to other appropriate
repositories.
The process of classifying research outputs from MegaRoller is described in
D7.1 Project Quality Handbook.
This project has received funding from the European Union's Horizon 2020
research and innovation programme under grant agreement No 763959\.
# 3 ALLOCATION OF RESOURCES
MegaRoller uses standard tools and a free of charge repository. The costs of
data management activities are limited to project management costs and will be
covered by the project grants.
Resources needed to support reuse of data after the active project period will
be handled case by case.
SINTEF Energi AS is the lead for MegaRoller WP 6 Dissemination,
Standardization & Exploitation, and for task 6.7 Data management activities.
Task leader is Laila Økdal Aksetøy.
# 4 DATA SECURITY
In this chapter, the security issues of the research data infrastructure in
the MegaRoller project are explained.
## 4.1 Active project - Data security as specified for SINTEF Sharepoint site
SINTEF Sharepoint is the working/collaboration area for the MegaRoller
project. A dedicated folder for research data sets has been established.
The MegaRoller Sharepoint site has these security settings:
* Access level: Restricted to persons (project members only)
* Encryption with SSL/TLS protects data transfer between partners and SINTEF Sharepoint site
* Threat management, security monitoring, and file/data integrity checks prevent or register possible manipulation of data
Documents and elements in SINTEF Sharepoint sites are stored in Microsoft's
cloud solutions - in Ireland and the Netherlands. There is no use of data
centres in the US or outside EU/EEA (Norway, Iceland or Switzerland).
Nightly back-ups are handled by SINTEF's IT operations contractor. All project
data will be stored for 10 years according to SINTEF ICT policy.
## 4.2 Repository - Data security as specified for Zenodo
The MegaRoller project has chosen Zenodo as its repository. All scientific
publications, public deliverables, and public research data sets will be
uploaded to the MegaRoller community in Zenodo and made openly accessible to
everyone.
These are the security settings for Zenodo:
* Versions: Data files are versioned. Records are not versioned. The uploaded data is archived as a Submission Information Package. Derivatives of data files are generated, but original content is never modified. Records can be retracted from public view; however, the data files and record are preserved.
* Replicas: All data files are stored in CERN Data Centres, primarily Geneva, with replicas in Budapest. Data files are kept in multiple replicas in a distributed file system, which is backed up to tape on a nightly basis.
* Retention period: Items will be retained for the lifetime of the repository. This is currently the lifetime of the host laboratory CERN, which currently has an experimental programme defined for the next 20 years at least.
* Functional preservation: Zenodo makes no promises of usability and understandability of deposited objects over time.
* File preservation: Data files and metadata are backed up nightly and replicated into multiple copies in the online system.
* Fixity and authenticity: All data files are stored along with an MD5 checksum of the file content.
Files are regularly checked against their checksums to assure that file
content remains constant.
* Succession plans: In case of closure of the repository, best efforts will be made to integrate all content into suitable alternative institutional and/or subject based repositories.
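The fixity check described above can be reproduced locally when re-downloading a data file; a minimal sketch:

```python
import hashlib

def md5_checksum(path: str, chunk_size: int = 8192) -> str:
    """Compute the MD5 checksum of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, expected: str) -> bool:
    """Check that the file content still matches its recorded checksum."""
    return md5_checksum(path) == expected
```

Comparing the computed checksum against the one stored alongside the record confirms that the file content has not changed since deposit.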
# 5 ETHICAL ASPECTS
Currently, no ethical or legal issues that can have an impact on data sharing
have been identified. Ethical aspects connected to research data generated by
the project will be considered as the work proceeds.
# I. Context for an InsSciDE Data Management Plan (DMP)
As a publicly funded H2020 project InsSciDE is required to submit, within its
first six months, a Data Management Plan to aid compliance with the Open
Research Data Pilot (ORDP). That pilot "aims to make the research data
generated by Horizon 2020 projects accessible with as few restrictions as
possible, while at the same time protecting sensitive data from inappropriate
access" 1 . The ORDP 'follows the principle " _as open as possible, as
closed as necessary_ " and focuses on encouraging sound data management as an
essential part of research best practice.' 2
InsSciDE moreover works with human subjects, meaning that the data generated
by our project includes personal information – which is explicitly protected
by the General Data Protection Regulation (GDPR) coming into force in May
2018.
The InsSciDE Data Management Plan (DMP) therefore addresses pragmatic, ethical
and legal questions that arise as we consider to what extent and how the data
we collect can be "free to access, reuse, repurpose, and redistribute" 1 .
This first draft of the DMP targets the main questions identified at the time
of settling the Grant Agreement, in light notably of the Ethics Report
submitted by proposal reviewers. As deliverable 10.2a it will be reviewed,
improved and updated in line with the first Project Review at 12 months.
# II. Essential procedures – quick summary
InsSciDE researchers and other staff must observe several obligations when
they collect, store, delete, or otherwise process **personal data** ( _any
information relating to an identified or identifiable natural person)_ for
project uses. _Each member of InsSciDE should consider herself personally
responsible for meeting these obligations_:
* in light of European law and regulations and
* in light of ethical standards and standards observed in the research discipline.
_The WPL ensures that CSA and staff have been correctly informed of
obligations._ InsSciDE recognizes that different members of the project may
make different disciplinary or ethical claims. However, _all commit to
observing at least the basic standard_ which is described in this Data
Management Plan and in the project Information sheet and Consent form. All
members can initiate discussion within the InsSciDE community, or request WPL
or Coordinator guidance, to clarify obligations and standards.
According to law, individuals in Europe always have the right to gain access
upon request to all their personal data which might be stored by a business or
other institution, to be informed about the processing of this personal data,
to rectify inaccurate personal data, and to oppose its further processing. _To
exercise these 'data rights'_, every person can easily find the name and
contact email of InsSciDE officials in our public communications, website,
Information sheet, Consent form, etc.
## **Interview data**
_Interviewees must give their active and informed consent before an interview
can take place._
<table>
<tr>
<th>
* InsSciDE provides a project Information sheet on our public website where it can be consulted by the prospective interviewee.
* Basic templates of the Sheet and Form are available from the WPL or from the project intranet. In agreement with the WPL, _each researcher adapts the Information sheet and the Informed consent form to her specific needs and ethical commitments_ .
* Before the interview, when meeting the person, the researcher provides a print out of this adapted Sheet and two copies of the Informed consent form and answers any questions.
* _To give consent the interviewee signs both copies of the Consent form. The researcher acknowledges her commitments by countersigning both copies._ The interviewee keeps one copy of the signed Form.
* _The researcher stores the remaining copy of the signed Form and identifies and stores the audio recording according to safe procedures set by her own institution, and fills out the InsSciDE Monitoring Tool to confirm that the consent form was signed and archived._ The (anonymisable) Monitoring Tool will be developed as part of D10.4_Quality templates.
* InsSciDE plans to deposit interview recordings at the Historical Archives of the EU after the close of the project. _The interviewee can opt IN to this central archiving on the Consent form._
_The researcher can opt OUT at the close of the project._
</th> </tr> </table>
## **Mailing list and administrative data**
Our mailing lists are composed of persons who have _opted in, either by
registering directly with the project or by expressing interest in InsSciDE,
its activities, or science diplomacy_. _All persons can opt out_ through a
link provided in our digital newsletters and other communications.
_We restrict access to personal data_ collected for mailing lists and other
administrative purposes, and _destroy centralized data listings when they are
no longer needed for InsSciDE_ (as early as possible and at the latest 2
months after the end of the project, unless other rules override).
Project members need not destroy mailing lists that they build up during
InsSciDE. However, they _commit to continue beyond the project lifetime to
respect data protection obligations and persons' data rights_, notably by
using the lists only for scientific purposes and by offering opt-out links.
# III. Features and procedures of the InsSciDE DMP
Guidance 3 suggests that an H2020 DMP should describe
* The data set. _What kind of data will the project collect or generate, and to whom might they be useful later on?_
* Standards and metadata. _What disciplinary norms are adopted in the project? What is the data about? Who created it and why? In what forms it is available?_
* Data sharing. _By default as much of the resulting data as possible should be archived as Open Access; which are legitimate reasons for not sharing resulting data?_
* Archiving and preservation. _How will data be made_ _available for a suitable period beyond the life of the project to ensure that publicly funded research outputs can have a positive impact on future research, for policy development, and for societal change?_
D10.2a covers each of these points in a preliminary way, detailing some
project procedures and also some archiving opportunities. In particular **we
describe below the data protection responsibilities of InsSciDE researchers.**
## **Which data are concerned? Which objectives are sought in managing the
data?**
InsSciDE involves human participants:
1. **who are volunteers** for research in the social sciences or humanities, including persons interviewed for historical case studies or oral histories; participants in public or other project events; _and/or_
2. **who consent** to provide contact data to be stored in the form of mailing lists for specific project activities and dissemination.
The project does not involve children or others unable to give consent.
In light of participation by human subjects certain ethical obligations ensue.
InsSciDE consortium partners recognize the need to carry out all the work in
our project to the highest ethical standards in full compliance with relevant
international, national and local legislation, regulations and codes of
conduct. In our practical research context these standards address in
particular the need to respect the volunteer participants’ right to
confidentiality and control over their **personal data** (see Box 1).
#### Box 1 – What does 'personal data' mean?
Under EU law (the General Data Protection Regulation which comes into force 25
May 2018), personal data means _any information relating to an identified or
identifiable natural person_ ('data subject'). An identifiable natural person
is one who can be identified, directly or indirectly, in particular by
reference to an identifier such as a name, an identification number, location
data, an online identifier or to one or more factors specific to the physical,
physiological, genetic, mental, economic, cultural or social identity of that
natural person. It doesn’t have to be confidential or sensitive to qualify as
personal data.
Current European legislation requires the **active consent** of persons to
share their data. The GDPR mandates requirements for **collection, storage,
deletion and other processing of personal data**. This applies notably to
personal data stored by **digital** means; InsSciDE researchers and other
staff are attentive also to the proper treatment of **paper** records, as well
as respect for confidentiality in their **conversations and exchanges taking
place by any means**.
InsSciDE data management procedures should support us in properly respecting
our participants’ **data rights.** They should help us judiciously meet
obligations related to our being part of the Open Research Data Pilot, which
aims to make data findable, accessible, interoperable and reusable (FAIR). The
same management procedures should provide pragmatic solutions acceptable in
light of **disciplinary standards** in the social sciences and humanities
(including history, science, technology and society (STS) studies, media studies,
political science, etc.) as diversely claimed by our researchers.
It is recognized that while our project is by default part of the ORDP, "there
are good reasons to keep some or even all research data generated in a project
closed" 4 and therefore the Commission offers robust opt-out procedures.
Good reasons for InsSciDE researchers to opt out include:
* respect for overriding legal obligations to protect privacy and confidentiality of the persons who consent to be interviewed, and/or
* researchers' interpretation of their ethical commitment according to their personal or disciplinary standard (which shall not be lower than the standard expressed in the project Information sheet or Informed consent form D11.1.2).
The rationales for both open data sharing and data protection are reviewed in
an infographic provided by H2020, annotated by InsSciDE (Annex 1).
## **Who is responsible for protecting InsSciDE participants’ data rights?**
The H2020 Grant Agreement n°770523 signed by InsSciDE consortium partners
stated that: _Each [consortium partner] shall be responsible to ensure that
their researchers operate within their national legislation and comply with
their relevant national ethical guidelines_ .
However, this DMP proposes that in acquitting their ethical commitment,
**InsSciDE staff shall observe the principle of subsidiarity**, as adapted to
project management: that is, ‘_a central authority should have a subsidiary
function, performing only those tasks which cannot be performed at a more
local level_’ (Oxford English Dictionary). In practice this means:
* **Each staff member takes personal responsibility for ensuring, at her own level, respect for the data rights of the InsSciDE participants with whom she is in contact**: persons interviewed for historical case studies or oral histories; participants in public or other project events; persons sharing their data for social or traditional media uses (e.g., subscribing to the project mailing list).
* In order to carry out this personal responsibility, each staff member may **refer for help or guidance to the next higher organizational level** : her Work Package Leader, the project Coordination, or relevant officials at her place of employment (in particular, the Data Protection Officer appointed at latest on 25 May 2018 under European legislation). _In all cases it is advised to keep the WPL informed of any issues or problems encountered._
* A Work Package Leader can also ask for a point to be addressed at the next meeting of the project Management Board.
* The Coordination can consult, in case of need, the European Union Research Programme Officer and/or members of the project Advisory Board.
* As regulated by the Consortium Agreement, the Consortium Assembly may be asked to deliberate in case of need.
This DMP, and subsequent drafts, intend to foster subsidiarity by providing
guidance and procedures applicable by all.
All participants submitted to the Coordination copy of their institutional
ethical guidelines relevant to volunteer human participation in social
sciences or humanities research and contact information for the Data
Protection Officer. These texts, or the link to the public online access point
for these texts or instructions to request them from the Coordination, appear
in Annex 2.
## **Project Procedures and Deliverables**
The Coordination develops and oversees procedures to ensure the proper
integration and observance of H2020 guidance and European and national ethical
requirements on research with human subjects and protection of personal data.
This is achieved through the following actions and resources:
* D11.1.2_Consent Form and Project information sheet are updated templates developed with input from the project advisor D. Schlenker (Historical Archives of the European Union, HAEU) and the Management Board. The document containing instructions and the templates for adaptation by each researcher is available to all project partners through the Coordination, WPL and/or through the intranet.
* The Coordination will introduce a monitoring tool to control the actual collection of consent forms and their local or central archiving as appropriate. This tool is communicated as part of D10.4_Quality Templates and Administrative Procedures.
* In line with the fact that each consortium partner is ultimately responsible for the respect of legislation, Annex 2 provides access to each partner's institutional ethical guidelines relevant to volunteer human participation in research, and the contact for the Data Protection Office of each institution (the person who replies to any claims and who can provide information on facilities and practices for the protection of personal data).
* The Coordination ensures that the Management Board is fully informed of project requirements as well as the existence of European and institutional requirements and codes, and that the MB cascades their application to all concerned case study authors or other research or subcontracted personnel. This is formally achieved through the review of the present D10.2a and a first MB virtual meeting held on 27 April 2018.
* The Coordination maintains communication with WPL to assess the application of policy regarding incidental findings that may emerge during interviews with diplomatic personnel, as well as overall compliance with the project Data Management Plan.
* Detailed information is kept by the Coordination in the project files on the informed consent procedures that will be implemented with regards to the collection, storage and protection of personal data (D11.1.2, Information sheet and Consent form templates). The observed procedures are made available on request to external parties and a notice to this effect is included on the public website.
### Qualitative interviews and oral histories
InsSciDE will conduct qualitative interviews and oral histories. Such
enquiries, traditional and widely practiced in social sciences and humanities
research, rely on the participation of volunteers who give their informed
consent. The purpose of conducting oral history is to have insights not easily
found in printed and published sources, providing the chance for a clearer
understanding of subjective interpretation as well as the recording of overall
(impersonal) facts and trends.
The qualitative research does not seek to access personal information but
rather deals with work culture and practices. The qualitative material and
personal data eventually collected in the course of this research will not be
subjected to any statistical treatment. The material collected will be
transformed into depersonalized InsSciDE study materials for open and closed
meetings, and into the finalized Case Study Library accessible to
professionals, scholars, teachers, students, and the interested public.
In order to obtain the informed consent of interview and oral history
subjects, the recruitment includes a procedure of providing information
(**D11.1.2**) on the nature of the enquiry, the overall aims of InsSciDE, the
expected participation and means of recording qualitative input, and the uses
to which the information will be put, including the depersonalized character
of its transformation into project study materials and published case studies,
and finally the means by which consent may be withdrawn. The procedure also
informs the subject of measures that will be applied to protect personal data,
and the transparency procedures allowing access and rectification, etc. in
conformity with legislation. The recruitment is sealed by the signature of the
subject on an individual consent form which is countersigned by the researcher
and archived at the researcher institution and if appropriate or necessary by
the InsSciDE Coordination.
The procedural information, the content of the consent form, the approach to
processing collected qualitative input, its management, the protection of
personal data, and the identification in published materials of the role or
nature of the participants, are all framed by existing H2020 guidance and by
European legislation and codes (see the DMP Section III).
WPL are responsible for ensuring the necessary training of researchers to
correctly apply the procedures. Effective application is monitored by the
Coordination in liaison with the MB through the Monitoring Tool provided as
part of D10.4_Quality templates.
### Handling of incidental findings
Incidental findings emerging from interviews are those which may reasonably be
judged sensitive in that they regard policy decisions and actions of
governmental organizations. The draft incidental findings policy (Box 2) takes
account of European ethical and security requirements on managing and
archiving research data.
#### Box 2 - Draft incidental findings policy
<table>
<tr>
<th>
Researchers performing interviews or collecting oral histories shall firstly
observe all data collection and management procedures as stipulated by the
partner institution, by the Consortium Agreement, by the project ethical
guidance and in the project Data Management Plan. **The latter includes
compliant procedures for data collection, storage, protection, retention and
destruction, and stipulates security measures for the case of incidental
findings of sensitive nature** . Partners are responsible for ensuring that
each researcher is properly instructed in the procedures and cognizant of the
personal responsibility these imply for research personnel.
The information provided to volunteer subjects in view of their consent shall
explain the project procedures concerning such potentially sensitive or
damaging content that could emerge in the course of interviews. These
procedures are briefly outlined here and may be **developed and formalized**
in light of experience.
1. In the interview situation and afterwards, each researcher shall be vigilant with regard to the expression by the interview subject of incidental findings with security relevance (for example, but not limited to, information on policy decisions or government actions). At the start of each formal interview, recorded or not, the researcher shall remind the subject of the need to avoid disclosure of sensitive or damaging information. At the close of the interview the researcher will ask the subject whether any information provided during the interview should be redacted and/or requires the application of security measures. The researcher shall act in a timely manner on the reply received and shall alert the WPL that the procedure has been activated.
2. If, subsequent to the interview situation, in particular at the time of reviewing collected material, the researcher judges that incidental findings may be sensitive, the researcher will alert the WPL to the existence of such findings, and apply the security measures agreed in the Data Management Plan. In cases when the researcher is unsure of whether the information is of sensitive nature, a conservative decision shall be taken.
3. The WPL shall report all such alerts received to the Coordination in real time. The Coordination will log occurrences and include this statistic in periodic reporting. In case of concern, the WPL or Coordinator may request that the agenda of the quarterly Project Management Board virtual meeting include a point for discussion. At no time will any sensitive information itself be relayed or discussed.
4. Any data breaches are immediately reported by the involved partner in **compliance with the GDPR.**
</th> </tr> </table>
The targeted subjects include diplomatic personnel, scientific and technical
experts. They will be recruited based on their expertise, experience, insight
and/or role in actual cases of scientific diplomacy as described in the
relevant case study abstracts of the InsSciDE project. It is expected that
most of the interviewees are bound by their contractual confidentiality
agreements with their employer and moreover that their professional experience
has educated them to the necessity of keeping sensitive information secret. In
general, expert researchers, postdoctoral researchers and graduate students
who may conduct InsSciDE interviews and oral histories will have no
relationship to their subjects outside of a research relationship.
### Personal data protection
InsSciDE WP1 and WP9 have established opt-in databases of mailing list data
relative to stakeholder participants and persons who consent to receive the
newsletter and other project dissemination deliverables. The same persons are
enabled to opt out ('unsubscribe' link in each digital project newsletter, and
publication of contacts on the InsSciDE website).
Primary details of the procedures implemented for the collection, storage,
protection, retention and destruction of the personal data gathered for
dissemination mailing lists, confirmed as in compliance with national and
EU legislation, are displayed in Box 3.
#### Box 3 - Primary details of compliant procedures for data collection,
storage, protection, retention and destruction.
<table>
<tr>
<th>
</th>
<th>
Responsible person: The **responsible person in each partner organization** is
identified (or if necessary, assigned on a project basis); see Annex 2.
**Contact information** for the overall (Coordination-level) project data
controller and data protection officer is provided on the project **website**:
[email protected]
</th> </tr>
<tr>
<td>
</td>
<td>
Data collection: All personal data (generally limited to mailing-list type
information) is collected on an opt-in or consent basis using **transparent
web forms and/or paper coupons** . The inclusion of personal data (limited to
name, title, and institution) in public records of project activities,
deliverables or dissemination publications shall be **agreed by the person**
at the time of opt-in.
</td> </tr>
<tr>
<td>
</td>
<td>
Storage: **Database characteristics are submitted to the supervisory authority
in France and in Poland** (mailing list data storage partners’ domiciliation)
for validation if required by the prevailing legislation; otherwise prior
validations and public explanatory text in line with the law are relied on.
</td> </tr>
<tr>
<td>
</td>
<td>
Protection: Privacy by Design and by Default are applied.
</td> </tr>
<tr>
<td>
</td>
<td>
Retention: **Personal data specifically in the form of mailing lists shall not
be kept for longer than is necessary for the purpose of InsSciDE research and
dissemination activities** .
</td> </tr>
<tr>
<td>
</td>
<td>
Destruction: **Personal data in digital form, specifically information
provided by persons at the time of opting in to project mailing lists, shall
be erased by InsSciDE coordination within 2 months** of the close of the
project. When relationships have been established between researchers and
these persons, or between consortium partner institutions and these persons,
InsSciDE shall not seek to control or remove personal data managed by these
researchers and partners.
</td> </tr> </table>
At present, the InsSciDE consortium cannot foresee any ethical implications
arising from results generated by the project (e.g. very limited potential for
dual use; no foreseen risks to rights and freedoms of consenting volunteers
and mailing list members, etc.) but in the **ongoing evaluation of outputs
they will take into account the opinions of e.g. the European Group on Ethics
in Science and New Technologies** (as from 1998). In case of need the
appropriate risk assessment and subsequent actions in light of the GDPR will
be applied.
### Archiving and preservation of personal data
Each InsSciDE partner institution has provided access to its ethical policy
(Annex 2) which, alongside European regulations, governs the handling of
research data containing personal information.
InsSciDE researchers and staff take appropriate care to restrict access to
data stored locally in any form. Data no longer needed after the close of the
project is destroyed.
As for long term archiving and preservation of research data for the benefit
of European citizens and other researchers, InsSciDE accepts the invitation by
Advisory Board member Dieter Schlenker, HAEU to make a private project deposit
of interview recordings to the Historical Archives in Florence. Box 4 traces
the archiving discussion held at the first MB virtual meeting (27 April 2018)
with the participation of D. Schlenker, HAEU. The meeting addressed the
balance to be found between
* the ORDP requirement to make research data "FAIR" (findable, accessible, interoperable and reuseable), and
* respect for legal and ethical commitments to confidentiality and privacy.
The solution of archiving recordings and metadata at the HAEU, combined with
carefully tailored opt-in consent, appears to offer a good balance.
#### Box 4: Making InsSciDE interview data FAIR: Findable Accessible
Interoperable & Reusable
<table>
<tr>
<th>
The InsSciDE budget does not include funding for transcription of interviews,
so we discussed the preservation and reuse of audio recordings. Dieter
Schlenker of the Historical Archives of the European Union (HAEU) renewed his
invitation to preserve our material after the close of the project. This is
considered a "private deposit from a research group" and the HAEU places no
stipulations on format or content, including whether interviewees are
identified (oral histories) or anonymized. A written agreement stipulates how
the material is labeled, embargoed, etc. The depositor must fill a metadata
grid. It is not expected that depositors otherwise prepare the material for
the use of future researchers.
InsSciDE researchers may wish to create _chronothematic tables_ identifying
themes and the audio sequences in which they appear. These tables are
suggested for our own use and are not a requirement for deposit in HAEU.
However, InsSciDE WP3 formally requests that researchers facilitate the
exploitation of the interviews for this cross-cutting work package.
Researchers should signal any information uncovered by the interview about the
role of academies in science diplomacy, or about networks of science
diplomats.
The following questions were discussed with Dieter Schlenker during our MBM:
1. **What do our interview subjects need to know about the opportunity for our interview recordings to be placed in the HAEU?**
The project Information sheet and Informed consent form clearly indicate this
opportunity and Dieter advises that we add a checkbox on the form to allow
interviewees to choose.
a) _When would researchers who opt to do so, place recordings in the archives?_ After the close of the project and after the major scientific articles are published.
b) _When would the recordings become publicly accessible?_ Immediately after the HAEU inventory process, and according to the explicit instructions provided by the depositor who may choose to embargo the material for a further 1, 2 or 3 years in case new publications are anticipated. Longer embargos are not advised.
2. **What does "OFF THE RECORD" mean in this context?**
InsSciDE is conducting historical research, not looking for journalistic
scoops. After the end of the recorded interview the researcher asks the
interviewee if any aspects should be considered off the record. Some InsSciDE
researchers will explicitly provide an "off the record" option up front.
a) _Should we turn off the recorder during the interview?_ – It is certainly possible to do so to maintain the interviewee's confidence. Several InsSciDE researchers include this in their practice.
b) _Can we edit the recording before archiving it?_ – This is also possible but it represents some technical effort. Often archived recordings from other contexts contain exchanges which the interviewee identifies as off the record. The HAEU guidelines direct persons consulting the recording to respect this wish.
3. **What about copyright and intellectual property?**
a) _Do H2020 researchers have to relinquish copyright on their data when they opt to archive it?_ Because
we are not providing transcripts, we are advised that there is no need to
relinquish copyright to the archives (no copies will be made).
b) _What about intellectual property?_ The deposit will be made by InsSciDE,
and labeled accordingly. The researcher conducting each interview is
identified as the permanent holder of the intellectual property. The HAEU
guidelines for reuse stipulate that future researchers must cite the original
depositor and research owner.
**4. What safeguards are applied by HAEU for reuse of the recordings?**
a) _Ethical treatment for the interviewee_
b) _Ethical treatment for the owner of the research = the InsSciDE researcher_
Dieter Schlenker will point us to the 5-page guidelines that HAEU gives to
researchers who consult recordings in the archives. Again, there are no
absolute controls but the guidelines state what is regarded as ethical
conduct. Although the archives are officially open to the public, the vast
majority of persons consulting the archives are professional researchers.
Occasionally journalists consult to gather information about famous
personalities to inform obituaries, or at anniversaries of major events.
</th> </tr> </table>
# IV. Relevant Legislation and Formal Documents
Some of the text below is adapted from the Grant Agreement n°770523 which
binds the consortium partners. It is included in the DMP in order to encourage
all InsSciDE staff to consult and consider the relevant regulations and codes.
### European Union Regulations and Codes
When signing the Grant Agreement, all InsSciDE consortium partners committed
to respect all ethical requirements in project objectives, methodology and
practices. InsSciDE partners confirm that the ethical standards and guidelines
of Horizon2020, including those set out in The European Code of Conduct for
Research Integrity, Revised Edition, 2017 5 , will be rigorously applied,
regardless of the country in which the research is carried out.
The work in InsSciDE shall be performed in accordance with regulations at the
European level. The InsSciDE consortium committed to respect the following EU
legislation and regulations:
* The Charter of Fundamental Rights of the European Union (2000/C 364/01)
* The European Convention on Human Rights as amended by Protocols Nos. 11 and 14 and supplemented by Protocols Nos. 1, 4, 6, 7, 12 and 13
* Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data
* General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) as of its application on 25 May 2018.
Annex 2 provides the **contact information for the Data Protection Officer**
of each InsSciDE consortium partner. The InsSciDE Coordination keeps this
information on file in order to provide it immediately upon request to any
staff member or other participant in InsSciDE.
### National Legislation
As confirmed by the Grant Agreement, in accordance with EC rules all the
personal data that will be collected or used in the InsSciDE project will be
duly treated, respecting legal and ethical requirements in obedience to the
legislation of data protection of the countries in which consortium partners
are legally established.
All research activities within the project shall be conducted in compliance
with the fundamental ethical principles and shall conform to current
legislation and regulations in the countries where the research is carried
out.
Mailing and attendance lists shall be processed in compliance with national
and project procedures to protect personal data.
### Consortium Agreement
The Consortium Agreement signed by all InsSciDE partners including those
external to the European Union (4-UNESCO; 5-UiT in Norway) contains a
requirement on observance of European standards of ethical treatment of social
sciences or humanities research subjects, and a requirement to respect the
provisions to be agreed in the present project DMP (D10.2).
# 2.2 Data set description
Each data set that will be collected, generated or processed within the
project will be shortly described; the description shall include the following
elements:
* The nature of the data set;
* Whether the data is associated with a scientific publication;
* Information on the existence of similar data sets;
* Possibility of reuse and integration with other data sets;
* If data is collected from other sources, the origin will also be provided.
# 2.3 Data sharing
The coordinator, along with all the work package leaders, will define how data
will be shared and made public; more specifically the access procedures, the
embargo periods, the necessary software and tools to enable the re-use of all
data sets that will be generated, collected or processed during the project.
If one specific data set cannot be shared, the reasons will be mentioned
(potential reasons can be ethical, intellectual property, commercial, privacy
related, security related).
# 2.4 Archiving and preservation
The Consortium has discussed and decided upon the procedures to be employed to
ensure the long-term preservation of data sets. The Database/Repository that
will be employed for the preservation of data is Zenodo, hosted by CERN,
https://zenodo.org. Moreover, in order to contribute to the outreach
activities and dissemination of knowledge, the Consortium is planning to
create dedicated Wiki pages with open databases/libraries of metal-organic
precursors and physical properties of piezoelectric materials.
# Data sets
The Consortium has discussed the type of data that will be generated by the
research activity and what standards are going to be employed concerning
metadata, naming conventions, clear version numbers, software needed to access
the generated data, its interoperability, reusability and storage.
## Metadata
The Beneficiaries, while performing the research activity in the framework of
the project, will describe and document the generated data employing the best
practices in their specific scientific field. The information will include
metadata such as, but might not be limited to depending on the type of
research:
* Operational conditions of vibrational energy harvesters in cars, including temperature and its gradients, vibrations, mechanical shocks, working atmosphere, etc.
* Physical and chemical properties of metal-organic precursors of alkali metals, Nb, and Ta including chemical and thermal stability, vapour pressure, evaporation and decomposition temperatures, oligomerization, crystal structure, melting point, thermo-gravimetric data, differential scanning calorimetry, IR spectra, NMR data, and solubility.
* Physical properties of lead-free piezoelectric materials (crystals, ceramics, thin films and nanomaterials) including piezoelectric, dielectric, and elastic constants, structural and morphological data.
All data sets will be presented with keywords, time/date and other info
relative to the specific experiment that might be deemed appropriate to be
shared.
## Naming conventions & Employed standards
The following naming conventions and software standards will be employed in
the generation of the datasets, should different standards be necessary, this
Data Management Plan will be updated.
Naming conventions:
* IEEE convention for the description of the piezoelectric material orientation and related physical constants, described by tensors. Matrix notation will be used for the tensors defined in orthogonal XYZ setting. XYZ setting will be defined according to the principal symmetry elements of the structure.
* Crystal structure and crystallographic orientations will be named according to the International Crystallographic convention.
* IUPAC convention will be used for the names of the metal-organic and organic compounds.
* SI system will be used for the units of the physical and chemical properties.
Data standards and necessary software to access the generated data:
* Description of data will be presented in PDF files. Any web browser will be able to open the generated data.
* Raw data will be given in txt or dat files, which can be opened by any software for data treatment (Excel, Spreadsheet, Origin, Kaleidagraph, Matlab, etc.).

Version numbers: only the latest updated reference will be provided; the date of updating will be indicated.
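Because raw data are kept as plain text, they can be parsed without proprietary software; a minimal sketch using only the standard library (column names and measurement values are invented for illustration):

```python
# Sketch: a tab-separated .dat payload of the kind described above,
# parsed with the Python standard library only. The measurements and
# column names here are made up.
import io

dat = io.StringIO(
    "# temperature_C\td33_pC_per_N\n"
    "25.0\t120.4\n"
    "50.0\t118.9\n"
    "75.0\t116.2\n"
)

# Skip the '#' comment header, split columns, convert to floats.
rows = [line.split("\t") for line in dat if not line.startswith("#")]
data = [(float(t), float(d)) for t, d in rows]
print(data)
```

The same file opens unchanged in Excel, Origin, Matlab or any text editor, which is the point of the plain-text convention.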
## Reusability and Interoperability
Data generated by the project will have to be scrutinised by the Consortium
that will discuss whether or not it is the case to publish it on the public
domain. Some data might be object of patents, in that case it will not be made
available until a patent request is filed or data is officially published.
With regards to the Interoperability and Reusability of data, metadata will be
made available for reuse with demand to be properly referenced. Moreover, as
stated above, common-use data standards will be employed to allow easy access.
## Data Security
The Consortium has decided that data will be stored independently by each
beneficiary, no common recovery facility will be put in place.
also be saved for follow-up or statistical purposes. Personal data such as
contact name or contact e-mail will never be communicated outside the
Regions4PerMed project. Any communication to the Regions4PerMed distribution
list will be handled directly by all the partners. Appropriate observation of
European (EU General Data Protection Regulation 2016/679) and national data
privacy regulations will be ensured. Interviews will be conducted with
selected individuals to further develop specific survey findings or to
complete information gaps;
* Participatory workshops and conferences (responsible all partners): Data collected on the participants attending workshops will be limited to required registration data (name, organisation, position, etc.). Participants authorisation will be sought before listing publicly their names as attendees to a workshop. This information will be kept until the end of the project. Appropriate observation of European (EU General Data Protection Regulation 2016/679) and national data privacy regulations will be ensured;
* Outreach, dissemination and exploitation (responsible TLS): The project’s dissemination and communication activities may lead to a set of public and private deliverables.
**4\. Data Management Plan**
# 4.1 Data set references
In accordance with Article 13 of Regulation (EU) 2016/679, Regions4PerMed
project partners, as independent Data Controllers, will collect information
regarding the processing of personal data (name, surname, professional mail
address, professional e-mail, phone number, photo) of participants in
project-related surveys and/or events.
<table>
<tr>
<th>
**Responsibility for the data**
</th> </tr>
<tr>
<td>
Toscana Life Sciences Foundation (TLS, coordinator)
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Regional Foundation for Biomedical Research (FRRB)
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Sächsisches Staatsministerium für Wissenschaft und Kunst (SMWK)
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Axencia de Coñecemento en Saúde (ACIS)
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Wroclaw Medical University (WMU)
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Urząd Marszałkowski Województwa Dolnośląskiego (UMWD)
</td>
<td>
[email protected]
</td> </tr> </table>
## D1.6 – ORDP
The partners agree to manage their respective data within the aims of the
Grant Agreement and Consortium Agreement, and in compliance with EU Regulation
679/2016. They mutually authorize the data processing necessary to achieve the
project deliverables.
The type of data that may be collected to properly manage the project related
activities, are limited to:
1. financial information of the project parties strictly limited to the project-related expenditure;
2. personal data of the parties‘ employees, colleagues;
3. personal data of participants and speakers of the project related activities. In this perspective the data that are concerned by the data management plan are related to personal data of participants and speakers of the project related activities (including newsletter subscription);
4. the management of the results of the surveys;
5. the deliverables submitted to the European Officer of the project.
The data are collected only to guarantee a proper implementation of the
project as described in the Grant Agreement and Consortium agreement, included
but not limited to, administrative, reporting, dissemination and publication
purposes, organisation of conferences, workshops and in situ visits.
# 4.2 Dataset description
The Regions4PerMed partners have identified the dataset that will be produced
during the different phases of the project.
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset Name**
</th>
<th>
**Responsible Partner**
</th>
<th>
**Related WP**
</th> </tr>
<tr>
<td>
1
</td>
<td>
DS1: Newsletter subscribers
</td>
<td>
TLS
</td>
<td>
7
</td> </tr>
<tr>
<td>
2
</td>
<td>
DS2: Survey/interviews respondents
</td>
<td>
TLS
</td>
<td>
7
</td> </tr>
<tr>
<td>
3
</td>
<td>
DS3: Contact list for events (conference and workshop)
</td>
<td>
All
</td>
<td>
2-6
</td> </tr>
<tr>
<td>
4
</td>
<td>
DS4: Project deliverables
</td>
<td>
TLS
</td>
<td>
7
</td> </tr> </table>
## 4.2.1 Dataset 1
<table>
<tr>
<th>
**DS1 Newsletter subscribers**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset includes the names and email addresses of partners, the advisory
board committee and the Regions4PerMed newsletter recipients.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
The dataset is generated from website visitors who sign up for the newsletter
and from people who contact the members of the consortium.
</td> </tr>
<tr>
<td>
**Partner responsibilities**
</td> </tr>
<tr>
<td>
In charge of the collection
</td>
<td>
TLS
</td> </tr>
<tr>
<td>
In charge of storage
</td>
<td>
TLS
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Format and data volume
</td>
<td>
This dataset will be stored as a txt or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation**
</td> </tr>
<tr>
<td>
Main use of the data
</td>
<td>
The mailing list will be used to disseminate the project newsletter to a
targeted audience.
</td> </tr>
<tr>
<td>
Dissemination level
</td>
<td>
The access to the mailing list will be available to the consortium members
only
</td> </tr>
<tr>
<td>
Sharing and re-use
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: have you gained (written) consent from data subjects
to collect this information?
</td>
<td>
The mailing list contains personal data (names and email addresses of
newsletter subscribers, partners and the advisory board committee). People
interested in the project register voluntarily through the project website, or
are directly contacted by the Regions4PerMed partners to receive the project
newsletter. With the subscription, they will be asked for consent to use and
store their data. They can unsubscribe at any time.
</td> </tr>
<tr>
<td>
**Archiving and storage**
</td> </tr>
<tr>
<td>
Data storage: where? How long?
</td>
<td>
The dataset will be preserved on the Regions4PerMed reserved area of the
website. The data will be stored for 5 years after the end of the project.
</td> </tr> </table>
## 4.2.2 Dataset 2
<table>
<tr>
<th>
</th>
<th>
**DS2 Survey/Interviews**
</th> </tr>
<tr>
<td>
**Data identification**
</td>
<td>
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset includes results of the surveys
</td> </tr>
<tr>
<td>
Source
</td>
<td>
The survey will be distributed via email to all people who have registered on
the website or participated in Regions4PerMed events.
</td> </tr>
<tr>
<td>
**Partner responsibilities**
</td>
<td>
</td> </tr>
<tr>
<td>
In charge of the collection
</td>
<td>
TLS
</td> </tr>
<tr>
<td>
In charge of storage
</td>
<td>
TLS
</td> </tr>
<tr>
<td>
**Standards**
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Format and data volume
</td>
<td>
This dataset will be stored as a txt or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
</td> </tr>
<tr>
<td>
Main use of the data
</td>
<td>
This data will be used to assess the progress of the project and help the
consortium plan and tailor the actions to be taken for the achievement of the
expected project results.
</td> </tr>
<tr>
<td>
Dissemination level
</td>
<td>
Access to the survey results will be available to the consortium members
only.
</td> </tr>
<tr>
<td>
Sharing and re-use
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: have you gained (written) consent from data subjects
to collect this information?
</td>
<td>
In the survey, participants will be asked to share their details and will be
asked for consent to store and use their data.
</td> </tr>
<tr>
<td>
**Archiving and storage**
</td>
<td>
</td> </tr>
<tr>
<td>
Data storage: where? How long?
</td>
<td>
The dataset will be preserved on the Regions4PerMed reserved area of the
website. The data will be stored for 5 years after the end of the project.
</td> </tr> </table>
## 4.2.3 Dataset 3
<table>
<tr>
<th>
**DS3 Contact list for events**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset includes information of people who will participate at the
Regions4PerMed events: conferences and workshops, capacity building.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
The dataset includes all members of the interregional committee and all people
registered for the events (via the website or directly at the events) who will
be invited to the next conference and/or workshop.
</td> </tr>
<tr>
<td>
**Partner responsibilities**
</td> </tr>
<tr>
<td>
In charge of the collection
</td>
<td>
TLS
</td> </tr>
<tr>
<td>
In charge of storage
</td>
<td>
TLS
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Format and data volume
</td>
<td>
This dataset will be stored as a txt or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation**
</td> </tr>
<tr>
<td>
Main use of the data
</td>
<td>
This dataset will be used to invite interested participants to the events.
</td> </tr>
<tr>
<td>
Dissemination level
</td>
<td>
The access to the mailing list will be available to the consortium members
only.
</td> </tr>
<tr>
<td>
Sharing and re-use
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: have you gained (written) consent from data subjects
to collect this information?
</td>
<td>
The dataset contains personal data (names and email addresses) of people
interested in our events. People interested in the project register
voluntarily through the project website or are directly contacted by the
Regions4PerMed consortium members. With the subscription, they will be asked
for consent to use and store their data. They can unsubscribe at any time.
</td> </tr>
<tr>
<td>
**Archiving and storage**
</td> </tr>
<tr>
<td>
Data storage: where? How long?
</td>
<td>
The dataset will be preserved on the Regions4PerMed reserved area of the
website. The data will be stored for 5 years after the end of the project.
</td> </tr> </table>
## 4.2.4 Dataset 4
<table>
<tr>
<th>
</th>
<th>
**DS4 Project deliverables**
</th> </tr>
<tr>
<td>
**Data identification**
</td>
<td>
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset includes all the deliverables planned in the Grant Agreement and
submitted via Participant Portal.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
The list of deliverables will be updated by the project coordinator as the
deliverables are submitted.
</td> </tr>
<tr>
<td>
**Partner responsibilities**
</td>
<td>
</td> </tr>
<tr>
<td>
In charge of the collection
</td>
<td>
TLS
</td> </tr>
<tr>
<td>
In charge of storage
</td>
<td>
TLS
</td> </tr>
<tr>
<td>
**Standards**
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Format and data volume
</td>
<td>
This dataset will be stored as a txt or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
</td> </tr>
<tr>
<td>
Main use of the data
</td>
<td>
This data will be used to assess the progress of the project.
</td> </tr>
<tr>
<td>
Dissemination level
</td>
<td>
The access to the list of deliverables will be available to the consortium
members only. The public deliverables will be made available on the project
website.
</td> </tr>
<tr>
<td>
Sharing and re-use
</td>
<td>
The public reports will be used to disseminate the project on the
Regions4PerMed website and social media channels.
</td> </tr>
<tr>
<td>
Personal data protection: have you gained (written) consent from data subjects
to collect this information?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and storage**
</td>
<td>
</td> </tr>
<tr>
<td>
Data storage: where? How long?
</td>
<td>
The dataset will be preserved on the Regions4PerMed reserved area of the
website. The data will be stored for 5 years after the end of the project.
</td> </tr> </table>
**5\. Conclusion**
This DMP provides an overview of the data that the Regions4PerMed project will
produce and the measures that will be taken to protect personal data and store
it securely.
All consortium members agree to manage their respective data within the aims
of the Grant Agreement and Consortium Agreement, and in compliance with EU
Regulation 679/2016.
Personal data will be organized in the private area of the website, in
compliance with the provisions of EU Regulation 679/16. In particular,
technical and organizational security measures suitable to guarantee
confidentiality and security will be in place. Only authorised persons
specifically appointed by the individual partners will process the personal
data. For this reason, no personal data will be made available outside the
consortium. The data will be kept for the period necessary to achieve the
purposes listed above, and in any case for a period not exceeding 5 years
after the closure of the project.
---

**0569_STIMEY_709515.md** (Horizon 2020, https://phaidra.univie.ac.at/o:1140797)

---
# Document Purpose
### Executive Summary
This document is part of the deliverables of Work Package 10: D10.7, A Data
Management Plan.
The Data Management Plan (DMP) is implemented due to our participation in the
Pilot on Open Research Data in Horizon 2020. This deliverable describes all
the data that will be collected and generated during the STIMEY project, how
it will be created, stored and backed up, who owns it, who is responsible for
the different data, and which data will be preserved and shared.
# Data Summary
STIMEY will both collect data from partners and third parties and generate new
data within the project. These data will be collected and generated with the
sole purpose of developing research activities in STIMEY.
The main goal of the project is to engage society with Science, Technology,
Engineering and Mathematics (STEM), awakening and supporting scientific and
technical careers. In order to reach this goal, STIMEY will develop a platform
which will contain personal and academic information as part of the student
e-profile. Students will have the opportunity to play serious games and, as
they play, will develop their cognitive and knowledge profiles and build their
personal creativity curves. The platform will host material developed by
teachers, schools, universities, research centres and corporations; users will
be able to create personal learning environments by combining existing
resources and updating existing information. Every educational centre and
company registered on the platform needs to provide information about itself
in order to be clearly identifiable for further reference.
Another objective of the project is to develop a socially assistive robotic
artefact. This robot will be able to communicate with others and send
information back to the platform. This social media communication will be
useful for both scientific and educational studies.
A variety of different methods of collection are used, but all adhere to high
international standards. Personal data (e.g. age, gender, languages…) will be
collected via questionnaires or register forms.
Different stakeholders will actively participate and collaborate in the STIMEY
project by giving specific information related to STEM areas. Educational
centres and teachers will take part in the platform by creating new specific
content or evaluating student progress. Students will be assessed at different
stages of the year using a variety of activities, serious games, robotic
artefacts and other approaches at school and at home. These tasks are designed
to engage society with science and to awaken and support scientific and
technical careers. A complete list of datasets to be collected and created is
shown in Table 1.
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Purpose**
</th>
<th>
**Type/ Format**
</th>
<th>
</th>
<th>
**Origin of the data**
</th> </tr>
<tr>
<td>
Personal Data
</td>
<td>
* User profile in the platform
* Socioeconomic/cultural
studies
</td>
<td>
* Text
* Images
* Voice
</td>
<td>
\-
\-
</td>
<td>
Questionnaires Stakeholder & Data collection
</td> </tr>
<tr>
<td>
Academic Data
</td>
<td>
* Courses list
* Courses' scores/awards on and out of the platform
</td>
<td>
* Text
* Images
</td>
<td>
\-
\-
</td>
<td>
User input in the platform module Interconnectedness with other platforms
</td> </tr>
<tr>
<td>
Family information
</td>
<td>
\- Parent/Child (under 13) connection
</td>
<td>
* Text
* Images
</td>
<td>
\-
</td>
<td>
User input in the platform module
</td> </tr>
<tr>
<td>
Professional Data
</td>
<td>
* Work history (job titles, company names, dates)
* Achievements/awards
</td>
<td>
\- Text - Images
</td>
<td>
\-
</td>
<td>
User input in the platform module
</td> </tr>
<tr>
<td>
Stakeholder Information
</td>
<td>
\- List of collaborators in STIMEY and information about them
</td>
<td>
\- Text - Images
</td>
<td>
\-
\-
</td>
<td>
Questionnaires Stakeholder & Data collection
</td> </tr>
<tr>
<td>
Educational Center Information
</td>
<td>
\- List of schools that collaborate in STIMEY and information about them
</td>
<td>
\- Text - Images
</td>
<td>
\-
\-
</td>
<td>
Questionnaires Stakeholder & Data collection
</td> </tr>
<tr>
<td>
Cognitive profile
</td>
<td>
* Information about how the students process new information or activities
* Learning potentials
* Personal strengths
</td>
<td>
* Text
* Spreadsheet
</td>
<td>
\-
\-
</td>
<td>
Questionnaires
Platform activities
</td> </tr>
<tr>
<td>
Emotional profile
</td>
<td>
Information about:
* Self-control
* Self-motivation
* Adaptability
* Leadership
</td>
<td>
* Text
* Spreadsheet
</td>
<td>
\-
\-
</td>
<td>
Questionnaires
Platform activities
</td> </tr>
<tr>
<td>
Knowledge profile
</td>
<td>
* Skills and knowledge acquired
* Indicator related to student characteristics
* Learning improvements
</td>
<td>
* Text
* Spreadsheet
</td>
<td>
\-
\-
</td>
<td>
Questionnaires
Platform activities
</td> </tr>
<tr>
<td>
Companies working for STIMEY
</td>
<td>
* List of companies outsourced by STIMEY
* Personal contacts of the company
* Solvency index
</td>
<td>
* Text
* Spreadsheet
</td>
<td>
\-
\-
</td>
<td>
Questionnaires
Platform activities
</td> </tr>
<tr>
<td>
Statistical reports
</td>
<td>
\- Data used as measures for indicators
</td>
<td>
* Text
* Spreadsheet
</td>
<td>
\-
</td>
<td>
Questionnaires
Platform activities
</td> </tr> </table>
_Table 1 Datasets_
In order to improve accessibility and long-term digital preservation, it is
recommended to use:
* Complete and open documentation
* Non-proprietary software
* No key-protection
* No total/partial encoding
* Open formats like RTF, TIFF, JPG or well-known proprietary formats
## Making data findable, including provisions for metadata [Fair Data]
Metadata facilitates exchange of data by making them Findable, Accessible,
Interoperable and Re-Usable (F.A.I.R.). Metadata will be used to identify and
locate the data through catalogues or search engines.
All the data produced in the STIMEY project will be identified using a unique
and persistent identifier, HANDLE, guaranteeing permanent access and allowing
data to be referenced in a safe way.
The structure of a HANDLE URI is: producer prefix + “/” + document suffix, e.g.:
_http://rodin.uca.es/handle/10498/14617_
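The prefix + “/” + suffix structure above can be sketched as a small helper. The resolver base and the prefix/suffix values are taken from the example URI in the text; everything else is illustrative:

```python
# Compose and decompose HANDLE URIs of the form shown above:
#   http://rodin.uca.es/handle/<producer prefix>/<document suffix>
BASE = "http://rodin.uca.es/handle"  # RODIN resolver base from the example

def make_handle(prefix: str, suffix: str) -> str:
    """Build a persistent identifier: producer prefix + '/' + document suffix."""
    return f"{BASE}/{prefix}/{suffix}"

def split_handle(uri: str) -> tuple:
    """Recover (prefix, suffix) from a HANDLE URI under the same base."""
    prefix, suffix = uri[len(BASE) + 1:].split("/", 1)
    return prefix, suffix

assert make_handle("10498", "14617") == "http://rodin.uca.es/handle/10498/14617"
assert split_handle("http://rodin.uca.es/handle/10498/14617") == ("10498", "14617")
```

Because the identifier is a plain string with a fixed shape, any catalogue or search engine can cite it verbatim and resolve it back to the deposited object.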
The metadata will be based on a generalised metadata scheme used in the RODIN
platform. This is an institutional repository located in the University of
Cadiz. Its goal is to create a digital deposit in order to store, preserve and
disseminate all the documentation related to research activities.
RODIN repository can store data and documents related to Horizon 2020
projects. These data will be collected afterwards by OpenAIRE. Following the
generalised metadata scheme used in RODIN, we will have the elements listed
below:
* Title.
* Creator/s: Last name, First name
* Contributor: Information provided by the EU or the STIMEY project itself.
* Subject: List of keywords
* Description: Text explaining the content of the data.
* Date.
* Type: type of document, e.g.: “info:eu-repo/semantics/workingPaper”
* Identifier: HANDLE URI
* Language: document/data language
* Relation
* Rights: license and access, e.g.: info:eu-repo/semantics/openAccess
* Format: details about the file format
A readme.txt file could be included to provide information on field methods
and procedures.
## Making data openly accessible [Fair data]
All data collected and generated (non-personal data) with the main aim of
developing research activities within the STIMEY project will be openly
available.
Due to the participation of underage people in activities related to the
project, and with the aim of protecting and guaranteeing their privacy, all
their data will be collected and treated without the participants being
identified.
All the data related to the platform will be located on a server at the
University of Emden/Leer and will be accessible by logging in to the site
after signing up with an email or social media account. On the other hand, all
personal or restricted data will be located on a computer server at the
University of Cadiz.
Data will be processed for the limited purposes of the studies; therefore,
only relevant data will be collected. Data will be used for the analysis of
the proposed studies results.
All data generated in this project will be:
* Collected in a fair, faithful and transparent way.
* Collected with the sole purpose of developing research activities in STIMEY.
* Suitable, appropriate and limited to what is necessary for the project.
* Stored no longer than necessary.
* Protected against unauthorized treatment, with their security guaranteed.
Regarding personal data, every person involved in the research studies will be
assigned a unique key. In order to generate this key/code, the system will use
a deterministic hash function: this type of function receives a string and
always returns the same value. The string will be formed from the full name
and birthdate of the person. It is important to note that this is a one-way
function: it is not possible to obtain a name from a given key.
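The key generation described above can be sketched with a cryptographic hash. The document does not name the function, so SHA-256 is an assumption here; the normalisation step and the optional server-side secret (`pepper`) are also additions for illustration, since name + birthdate alone would be guessable by brute force:

```python
import hashlib

def participant_key(full_name: str, birthdate: str, pepper: str = "") -> str:
    """Deterministic one-way key: the same (name, birthdate) string always
    yields the same key, but the key cannot be reversed to recover the name.
    `pepper` stands in for a secret kept on the key-generation server
    (an assumption; the document does not mention one)."""
    material = f"{full_name.strip().lower()}|{birthdate}|{pepper}"
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

k1 = participant_key("Ana García", "2005-03-14")
k2 = participant_key("Ana García", "2005-03-14")
assert k1 == k2          # deterministic: same input string, same key
assert len(k1) == 64     # 256-bit digest, hex-encoded
assert k1 != participant_key("Ana García", "2006-03-14")  # different input
```

Only the key is ever stored with the research data; the name-to-key mapping exists solely, and transiently, on the isolated key-generation machine described in the Data Security section.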
Only the research team of the project will have access to the coded data. In
order to be on the list of authorized people, it will be necessary to have the
authorization of the Data Controller and the STIMEY coordinator.
Servers and storage at the University of Cadiz will be located in its Tier III
Data Center, following the standard classification ANSI/TIA-942.
Apart from these repositories, STIMEY will also use the centralised repository
RODIN to ensure the maximum dissemination of the information generated in the
project. This repository makes use of the OAI-PMH protocol (Open Archives
Initiative Protocol for Metadata Harvesting), which allows the content to be
properly found by means of the defined metadata.
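OAI-PMH harvesting is driven by plain HTTP requests whose query parameters (`verb`, `metadataPrefix`, `set`) are defined by the protocol itself. The endpoint URL below is hypothetical (the text does not give RODIN's actual OAI-PMH address); the sketch only builds the request, it does not perform it:

```python
from urllib.parse import urlencode

# Hypothetical OAI-PMH base URL for the RODIN repository.
ENDPOINT = "https://rodin.uca.es/oai/request"

def list_records_url(metadata_prefix: str = "oai_dc", set_spec: str = None) -> str:
    """Build a ListRecords harvesting request for a given metadata format."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec is not None:
        params["set"] = set_spec  # optional selective harvesting by set
    return ENDPOINT + "?" + urlencode(params)

url = list_records_url()
assert "verb=ListRecords" in url
assert "metadataPrefix=oai_dc" in url
```

Harvesters such as OpenAIRE issue exactly this kind of request periodically, which is how content deposited in RODIN becomes discoverable outside the repository.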
## Making data interoperable [Fair data]
By using the RODIN repository to store and disseminate the data and metadata
generated (non-personal data) in the STIMEY project, we facilitate the
interoperability of the data. All the data generated can be exchanged between
researchers and partners in STIMEY.
RODIN follows the _Dublin Core Scheme_, a small set of vocabulary terms used
to describe web resources such as videos or images. The complete list of terms
can be found on the Dublin Core Metadata Initiative (DCMI) website. The DCMI
Abstract Model was designed to bridge the paradigm of unbounded, linked data
graphs with the more familiar paradigm of validatable metadata records like
those used in OAI-PMH.
The full list of fifteen metadata terms (DCMI) is ratified in the following
standards: IETF RFC 5013, ANSI/NISO Standard Z39.85-2007, and ISO Standard
15836:2009.
All the metadata used in the STIMEY project will be mappable onto standard
vocabularies by following the _Dublin Core Scheme_.
## Increase data re-use (through clarifying licenses) [Fair data]
Data generated (non-personal data) and associated software will be deposited
in RODIN. To facilitate re-use, the data will be made available under a
Creative Commons BY-NC-SA licence, which permits others to copy, distribute,
display and perform the work for non-commercial purposes only, and also
permits others to create and distribute derivative works, but only under the
same or a compatible licence.
This project will use open access publishing, also called ‘Gold’ open access:
an article is immediately provided in open access mode by the scientific
publisher. The associated costs are shifted away from readers and instead to
(for example) the university or research institute to which the researcher is
affiliated, or to the funding agency supporting the research.
All the data produced (non-personal data) in the project will be made
available for reuse at the end of the project. These data will also be usable
by third parties outside the project; private or restricted data, however,
will not be made available in any case. The data will be stored at least until
the end of the project.
Data quality is ensured by different measures. These include validation of the
sample, replication, comparison with results of similar studies, control of
systematic distortion, and statistical reports based on several indicators.
The procedures that are fundamental to effective data quality assurance could
include:
* Document data quality requirements and define rules for measuring quality
* Assess new data to create a quality baseline
* Implement semantic metadata management processes
* Keep on top of data quality problems
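The first two procedures above can be made concrete as a small rule-based check: each rule is a named predicate applied per record, and the per-rule pass rates form the quality baseline. The rules and field names here are illustrative, not taken from the project:

```python
# Illustrative data-quality rules: (name, predicate) pairs applied per record.
RULES = [
    ("age_in_range",  lambda r: 5 <= r.get("age", -1) <= 19),
    ("gender_known",  lambda r: r.get("gender") in {"f", "m", "other"}),
    ("has_language",  lambda r: bool(r.get("languages"))),
]

def quality_baseline(records):
    """Return the fraction of records passing each rule (the baseline)."""
    n = len(records)
    return {name: sum(pred(r) for r in records) / n for name, pred in RULES}

sample = [
    {"age": 12, "gender": "f", "languages": ["es"]},
    {"age": 42, "gender": "?", "languages": []},     # fails every rule
]
print(quality_baseline(sample))
# → {'age_in_range': 0.5, 'gender_known': 0.5, 'has_language': 0.5}
```

Re-running the same rules on each new data delivery and comparing against the baseline is one simple way to "keep on top of data quality problems" as the last bullet suggests.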
# Allocation of Resources
We will consider the costs of keeping the project's information in the
different repositories, both for public data and for private data.
The University of Cadiz will use the RODIN repository, and this service is
free. HS EL University, which will work with data of the STIMEY platform, uses
a Bitbucket repository, which is also free as long as the repository has fewer
than five users.
Associated costs for dataset preparation and data management during the
project will be covered by the project itself.
Inmaculada Medina Bulo, as General Director of Information Systems at the
University of Cádiz, has been assigned as Data Controller of the STIMEY
project and has been added to the Project Advisory Board and the consultant
committee for ethical aspects and personal data protection. Every partner has
assigned one person (Data Processor) responsible for following the procedures
designed by the Data Controller, especially the collection and storage
procedures, including the informed consent and transparency processes.
# Data Security
During the project, data will be automatically saved on an institutional
server with backup on a separate offsite server. Backup will be checked and
validated manually.
The key generator system will be located on a computer server at the
University of Cadiz. This computer will be turned off and disconnected, except
for one hour a week. In that period, the teacher in charge of the students
will be able to access the system and generate new keys for them. Every access
will be authenticated and recorded in a log file. Every IP address will be
validated and known in advance, minimizing every possible risk when a key is
generated. It is important to highlight that the names of the participants in
STIMEY activities will never be stored.
In the same way, every access to the information located in Spain will be
exclusive to the personnel in charge of the project, following the
recommendations of the standard ISO/IEC 27002:2005.
# Ethical Aspects
According to the principle of “data minimisation” based on Article 5, section
c) of the General Data Protection Regulation, no data will be collected that
contains ethnic or racial information, trade union or political affiliation,
religious beliefs, genetic data, biometric data, health data, or data related
to the sex life or sexual orientation of a person. 1
---

**0572_AUTOPILOT_731993.md** (Horizon 2020, https://phaidra.univie.ac.at/o:1140797)

---
# Executive Summary
In Horizon 2020 a limited pilot action on open access to research data has
been implemented. Participating projects are required to develop a Data
Management Plan (DMP).
This deliverable provides the second version of the DMP elaborated by the
AUTOPILOT project. The purpose of this document is to provide an overview of
the main elements of the data management policy. It outlines how research data
will be handled during the AUTOPILOT project and describes what data will be
collected, processed or generated and following what methodology and
standards, whether and how this data will be shared and/or made open, and how
it will be curated and preserved. Besides a data types list, metadata and
global data collection processes are also defined in this document.
The AUTOPILOT data management plan refers to the latest EC DMP guidelines 1
. This version has explicit recommendations for full lifecycle management
through the implementation of the FAIR principles, which state that the data
produced shall be Findable, Accessible, Interoperable and Reusable (FAIR).
Since the data management plan is expected to mature during the project while
taking into account the progress of the work, the last version will be
produced as additional deliverable by the end of the project.
# 1\. Introduction
## 1.2 Objectives of the project
Automated driving is expected to increase safety, to provide more comfort and
to create many new business opportunities for mobility services. The Internet
of Things (IoT) is about enabling connections between objects or "things"; it
is about connecting anything, anytime, anyplace, using any service over any
network.
The “**AUTO**mated Driving **P**rogressed by **I**nternet **O**f **T**hings”
(AUTOPILOT) project will especially focus on utilizing the IoT potential for
automated driving.
The overall objective of AUTOPILOT is to bring together relevant knowledge and
technology from the automotive and the IoT value chains in order to develop
IoT-architectures and platforms which will bring Automated Driving towards a
new dimension. This will be realized through the following main objectives:
* Use, adapt and innovate current advanced technologies to define and implement an IoT approach for autonomous and connected vehicles
* Deploy, test and demonstrate IoT-based automated driving use cases at several permanent pilot sites, in real traffic situations with: Urban driving, Highway pilot, Automated Valet Parking, Platooning and Real-time car sharing
* Create and deploy new business products and services for fully automated driving vehicles, used at the pilot sites: by combining stakeholders’ skills and solutions, from the supply and demand side
* Evaluate, with the involvement of users, public services and business players at the pilot sites:
* The suitability of the AUTOPILOT business products and services as well as the ability to create new business opportunities
* The user acceptance related to using the Internet of Things for highly or fully automated driving
* The impact on the citizens’ quality of life
* Contribute actively to standardization activities as well as to consensus building in the areas of Internet of Things and communication technologies
Automated vehicles largely rely on on-board sensors (LiDAR, radar, cameras,
etc.) to detect the environment and make reliable decisions. However, the
possibility of interconnecting surrounding sensors (cameras, traffic light
radars, road sensors, etc.) exchanging reliably redundant data may lead to new
ways to design automated vehicle systems potentially reducing cost and adding
detection robustness.
Indeed, many types of connected objects may act as an additional source of
data, which will very likely contribute to improving the efficiency of
automated driving functions and enabling new automated driving scenarios. This
will also improve the safety of the automated driving functions while
providing driving data redundancy and reducing implementation costs. These
benefits will help push the SAE level of driving automation to full
automation, keeping the driver out of the loop. Furthermore, by making
autonomous cars full entities in the IoT, the AUTOPILOT project enables
developers to create IoT/AD services as easily as they access any other entity
in the IoT.
The Figure above depicts AUTOPILOT’s overall concept. The main ingredients
needed to apply IoT to autonomous driving as represented in the image are:
* The overall IoT platforms and architecture, allowing the use of IoT capabilities for autonomous driving.
* The Vehicle IoT integration and platform to make the vehicle an IoT device, using and contributing to the IoT.
* The Automated Driving relevant sources of information (pedestrians, traffic lights, etc.) becoming IoT devices and extending the IoT eco-systems to allow enhanced perception of the driving environment on the vehicle.
* The communication network using appropriate and advanced connectivity technology for the vehicle as well as for the other IoT devices.
## 1.3 Purpose of the document
This deliverable presents the second version of the data management plan
elaborated for the AUTOPILOT project. The purpose of this document is to
provide an overview of the dataset types present in the project and to define
the main data management policy adopted by the Consortium.
The data management plan defines how data in general and research data in
particular will be handled during the research project and will make
suggestions for data management after the project. It describes what data will
be collected, processed or generated by the IoT devices and by the whole IoT
ecosystem, what methodologies and standards shall be followed during the
collection process, whether and how this data shall be shared and/or made open
not only for the evaluation needs but also to comply with the ORDP
requirements 2 , and how it shall be curated and preserved. Besides, the
data management plan identifies the four (4) key requirements that define the
data collection process and provides first recommendations to be applied.
In comparison to the first version provided at M06, this **second version
(M16)** of the data management plan includes more detailed dataset
descriptions, reflecting the progress of the work done in WP2, WP3 and WP4.
The descriptions will be filled in following the template provided in
chapter 5.
The AUTOPILOT data management plan will be updated by the end of the project.
The **M32 upcoming version** will outline the details of all datasets involved
in the AUTOPILOT project. These datasets include acquired or derived data and
aggregated data (IoT data, evaluation data, test data and research data).
These dataset types are explained in detail in chapter 5.
This document is structured as follows: **Chapter 2** outlines a data overview
in the AUTOPILOT project. It details AUTOPILOT data categories, data types and
metadata, then the data collection processes to be followed and finally the
test data flow and test data architecture environment.
**Chapter 3** gives a global vision of the test data management methodology
developed in WP3 across pilot sites.
**Chapter 4** gives insights into the Open Research Data Pilot under H2020
guidelines.
**Chapter 5** provides a detailed description of the datasets used in the
AUTOPILOT project with focus on used methodologies, standard and data sharing
policies.
**Chapter 6** gives insights into the FAIR Data Management principle under
H2020 guidelines and the steps taken by AUTOPILOT in order to be FAIR
compliant.
Finally, Chapters 7 and 8 outline the necessary roles, responsibilities and
ethical issues.
## 1.4 Intended audience
The AUTOPILOT project addresses highly innovative concepts. As such, the
intended audience of the project is the scientific community interested in IoT
and/or automotive technologies. In addition, due to the strong expected impact
of the project on their respective domains, the other expected audience
consists of automotive industrial communities, telecom operators and
standardization organizations.
# 2 Data in AUTOPILOT: an overview
The aim of this chapter is:
* To provide a first categorization of the data;
* To identify a list of the data types that will be generated;
* To provide a list of metadata that will be used to describe generated data and enable data re-use;
* To provide recommendations on data collection and sharing processes during the project and beyond.
The AUTOPILOT project will collect a large amount of raw data to measure the
benefit of IoT for automated driving with multiple automated driving use cases
and services, at different pilot locations.
Data from vehicles and sensors will be collected and managed through a
hierarchy of IoT platforms as illustrated in the architectural diagram 3 of
Figure 2.
The diagram above shows a federated architecture with the following four
layers:
* **In-vehicle IoT Platforms:** This layer comprises everything mounted inside the vehicle, i.e. the components responsible for AD, positioning, navigation, real-time sensor data analysis, and communication with the outside world. All mission-critical autonomous driving functions should typically reside in this layer.
* **Road-side IoT Platforms:** Road-side and infrastructure devices, such as cameras, traffic light sensors, etc., are integrated and managed as part of road-side IoT platforms covering different road segments and using local low latency communication networks and protocols as required by the devices and their usage.
* **Pilot Site IoT Platforms:** This layer constitutes the first integration level. It is responsible for collecting, processing and managing data at the pilot site level.
* **Central IoT Platform:** This is a Cloud-based top layer that integrates and aggregates data from the various pilot sites as well as external services (weather, transport, etc.). This is where the common AD services such as car sharing, platooning, etc. will reside. Data, at this level, are standardized using common formats, structures and semantics. The central IoT platform will be hosted on IBM infrastructure.
The data analysis will be performed according to Field Operational Test
studies (FOT 4 ) and using FESTA 5 methodology. The FESTA project funded
by the European Commission developed a handbook on FOT methodology which gives
general guidance on organizational issues, methodology and procedures, data
acquisition and storage, and evaluation.
From raw data a large amount of derived data will be produced to address
multiple research needs. Derived data will follow a set of transformations:
cleaning, verification, conversion, aggregation, summarization or reduction.
In any case, data must be well documented and referenced using rich metadata
in order to facilitate and foster sharing, to enable validity assessments and
to enable its usage in an efficient way.
Thus, each piece of data must be described using additional information called
metadata. The latter must provide information about the data source, the data
transformations and the conditions in which the data has been produced. More
details about the metadata in AUTOPILOT are described in section 2.2.
## 2.1 Dataset categories
The AUTOPILOT project will produce different categories of datasets:
* **Context data** : data that describe the context of an experiment (e.g. metadata);
* **Acquired and derived data** : data that contain all the collected information from measurements and sensors related to an experiment;
* **Aggregated data** : data summary obtained by reduction of acquired data and generally used for data analysis.
**2.1.1 Context data**
Context data is any information that helps to explain observations made
during a study. Context data can be collected, generated or retrieved from
existing data. For example, it contains information such as vehicle, road or
driver characteristics.
**2.1.2 Acquired and derived data**
Acquired data is all data collected to be analysed during the course of the
study. Derived data is created by different types of transformations including
data fusion, filtering, classification and reduction. Derived data are easy to
use and they contain derived measures and performance indicators referring to
a time period when specific conditions are met. This category includes
measures from sensors coming from vehicles or IoT and subjective data
collected from either the users or the environment.
The following list outlines the data types and sources that will be collected:
<table>
<tr>
<th>
**In-vehicle measures** are the collected data from vehicles, either using
their original in-car sensors or sensors added for AUTOPILOT purposes. These
measures can be divided into different types:
</th> </tr>
<tr>
<td>
</td>
<td>
**Vehicle dynamics** are measurements that describe the mobility of the
vehicle. Measurements can be for example longitudinal speed, longitudinal and
lateral acceleration, yaw rate, and slip angle.
</td> </tr>
<tr>
<td>
</td>
<td>
**Driver actions** on the vehicle commands that can be measured include, for
instance, steering wheel angle, pedal activation and HMI button presses, as
well as face-monitoring indicators characterizing the physical or emotional
state of the driver.
</td> </tr>
<tr>
<td>
</td>
<td>
**In-vehicle systems state** can be accessed by connecting to the embedded
controllers. It includes continuous measures like engine RPM or categorical
values like ADAS and active safety systems activation.
</td> </tr>
<tr>
<td>
</td>
<td>
**Environment detection** is the environment data that can be obtained by
advanced sensors like RADARs, LIDARs, cameras and computer vision, or more
simple optical sensors. For instance, luminosity or presence of rain, but also
characteristics and dynamics of the infrastructure (lane width, road
curvature) and surrounding objects (type, relative distances and speeds) can
be measured from within a vehicle.
</td> </tr>
<tr>
<td>
</td>
<td>
**Vehicle positioning** is the geographical location of a vehicle determined
with satellite navigation systems (e.g. GPS) and the aforementioned advanced
sensors.
</td> </tr>
<tr>
<td>
</td>
<td>
**Media** mostly consist of video. Besides the media files themselves, this
category includes the index files used to synchronize the other data
categories. Media are also often collected from the road side.
</td> </tr>
<tr>
<td>
**Continuous subjective measures:** Complementary to sensors and
instrumentation, some continuous measures can also be produced in a more
subjective way, by analysts or annotators, notably using video data.
</td> </tr>
<tr>
<td>
**Road-side measures** are vehicle speed measurements and positioning,
obtained using radar, rangefinders, inductive loops or pressure hoses. In ITS
systems, they may also contain more complex information remotely transferred
from vehicles to road-side units.
</td> </tr>
<tr>
<td>
**Experimental conditions** are the external factors which may have an impact
on participants’ behaviour. They may be directly collected during the
experiment, or integrated from external sources. Typical examples are traffic
density and weather conditions.
</td> </tr>
<tr>
<td>
**IoT data** are the external sources of data that will be collected/shared
through IoT services.
</td> </tr>
<tr>
<td>
</td>
<td>
**Users Data** can be generated by smartphones or wearables. The users can be
pedestrians or car drivers. These data support the user experience when
services are used via the vehicle or the infrastructure. The privacy aspects
are explained in chapter 4.
</td> </tr>
<tr>
<td>
</td>
<td>
**Infrastructure Data** are all the data giving additional information about
the environment. Typical examples are traffic status, road works, accidents
and road conditions. They can also be collected directly from road-side
cameras or traffic light control units and then transferred to IoT platforms.
For instance, infrastructure data can convey hazard warnings or the expected
occupancy of buses on bus lanes to vehicles over communication networks.
</td> </tr>
<tr>
<td>
</td>
<td>
**In-Car data** is produced by the connected devices and sensors in vehicles.
Typical examples are navigation status, time-distance computations, real-time
pick-up/drop-off information for customers, events detected by the car to be
communicated to other vehicles, and GPS data to be transferred to maps.
</td> </tr>
<tr>
<td>
**Surveys data** are data resulting from the answers to surveys and
questionnaires for user acceptance evaluation.
</td> </tr> </table>
**2.1.3 Aggregated data**
Aggregated data is generally created in order to answer the initial research
question. They are supposed to be verified and cleaned, thus facilitating
their usage for analysis purposes.
Aggregated data contains a specific part of the acquired or derived data (e.g.
the average speed during a trip or the number of passes through a specific
intersection). Its smaller size allows a simple storage in e.g. database
tables and an easy usage suitable for data analysis. To obtain aggregated
data, several data reduction processes are performed. The reduction process
summarizes the most important aspects in the data into a list of relevant
parameters or events, through one or all of the following processes:
validation, curation, conversion, annotation.
Besides helping to answer new research questions, aggregated data may be
re-used with different statistical algorithms without the need to access the
raw data. For AUTOPILOT, aggregated data will be the most important data type
shared by the project. It also avoids potentially problematic re-uses,
because it does not contain instantaneous values that could reveal illegal
behaviour of a vehicle, a driver or another subsystem.
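As a minimal illustration of such a reduction step (the schema and field names below are hypothetical, not an AUTOPILOT format), raw per-sample vehicle logs can be collapsed into a per-trip summary that exposes only aggregates such as the average speed during a trip:

```python
from statistics import mean

def aggregate_trip(samples):
    """Reduce raw per-sample vehicle logs to a per-trip summary.

    Each sample is a dict with 'trip_id', 'timestamp' (seconds) and
    'speed' (km/h); the summary keeps only aggregates, not the
    instantaneous values.
    """
    timestamps = [s["timestamp"] for s in samples]
    return {
        "trip_id": samples[0]["trip_id"],
        "duration_s": max(timestamps) - min(timestamps),
        "avg_speed_kmh": round(mean(s["speed"] for s in samples), 1),
        "n_samples": len(samples),
    }

raw = [
    {"trip_id": "T01", "timestamp": 0, "speed": 30.0},
    {"trip_id": "T01", "timestamp": 10, "speed": 50.0},
    {"trip_id": "T01", "timestamp": 20, "speed": 40.0},
]
summary = aggregate_trip(raw)
```

Only the summary would then be shared for analysis; the instantaneous speed samples remain in the acquired-data store.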
## 2.2 Metadata
**2.2.1 General principles**
This section reviews the relevant metadata standards developed or used in the
previous and ongoing FOTs and naturalistic driving studies (NDS) as a basis
for the development of the metadata specifications of the pilot data. Such
standards will help the analysis and re-use of the collected data within the
AUTOPILOT project and beyond.
The text in this section is derived from the work done in the FOT-Net Data
project 5 for sharing data from field operational tests. The results of this
work are described in the Data Sharing Framework 7 . The CARTRE project 8
is currently updating this document to specifically address road automation
pilots and FOTs.
As described in the previous sections, the pilots will generate and collect a
large amount of raw and processed data from continuous data-logging, event-
based data collection, and surveys. The collected data will be analysed and
used for various purposes in the project including the impact assessment
carried out by partners who are not involved in the pilots. This is a typical
issue encountered in many FOT/NDS projects in which the data analyst (or
reuser) needs to know how the raw data was collected and processed in order to
perform data analysis, modelling and interpretation.
Therefore, good metadata is vital. The Data Sharing Framework defines metadata
as ‘ **any information that is necessary in order to use or properly interpret
data** ’. The aim of this section is to provide methods to efficiently
describe a dataset and its associated metadata. The result will serve as
suggestions for good practices in documenting a data collection and datasets
in a structured way.
Following the definition of metadata by the data sharing framework, we divide
the AUTOPILOT metadata into four different categories as follows:
* **AUTOPILOT pilot design and execution** documentation, which corresponds to a high-level description of data collection: its initial objectives and how they were met, description of the test site, etc.
* **Descriptive** metadata, which describes precisely each component of the dataset, including information about its origin and quality;
* **Structural** metadata, which describes how the data is being organized;
* **Administrative** metadata, which sets the conditions for how the data can be accessed and how this is being implemented.
Full details of these metadata categories can be found in the Deliverables of
the FOT-Net Data project such as D4.1 Data Catalogue and D4.3 Application of
Data Sharing Framework in Selected Cases which can be found on the project
website 9 .
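As an informal sketch of the four categories above (all field names are illustrative, not an AUTOPILOT-defined schema), a metadata record for one dataset could look like this:

```python
# Illustrative record covering the four metadata categories for one
# hypothetical dataset; none of these field names are project-defined.
dataset_metadata = {
    "design_and_execution": {   # high-level description of the data collection
        "pilot_site": "Versailles",
        "objective": "assess IoT-assisted platooning",
    },
    "descriptive": {            # origin and quality of each component
        "source": "in-vehicle CAN logger",
        "sampling_rate_hz": 10,
        "quality_notes": "GPS dropouts in tunnel sections",
    },
    "structural": {             # how the data is organized
        "format": "CSV",
        "columns": ["timestamp", "speed", "lat", "lon"],
    },
    "administrative": {         # access conditions and their implementation
        "access": "restricted",
        "license": "project-internal",
    },
}
```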
FOTs have been carried out worldwide and have adopted different metadata
formats to manage the collected data. One good example is the ITS Public Data
Hub hosted by the US Department of Transport 10 . There are over 100
datasets created using ITS technologies. The datasets contain various types of
information --such as highway detector data, travel times, traffic signal
timing data, incident data, weather data, and connected vehicle data -- many
of which will also be collected as AUTOPILOT data. The ITS Public Data Hub
uses ASTM 2468-05 standard format for metadata to support archived data
management systems. This standard would be a good starting point to design
metadata formats for various types of operational data collected by the IoT
devices and connected vehicles in AUTOPILOT.
In a broader context of metadata standardisation, there are a large number of
metadata standards available which address the needs of particular user
communities. The Digital Curation Centre (DCC) provides a comprehensive list
of metadata standards 11 for various disciplines such as general research
data, physical science as well as social science and humanities. It also lists
software tools that have been developed to capture or store metadata
conforming to a specific standard.
**2.2.2 IoT metadata**
The metadata describing IoT data are specified in the context of OneM2M
standard 12 . In such a context “data” signifies digital representations of
anything. In practice, that digital representation is associated with a
“container” resource having specific attributes. Those attributes are both
metadata describing the digital object itself, and the values of the variables
of that object, which are called “content”.
Every time an IoT device publishes new data on the OneM2M platform a new
“content instance” is generated, representing the actual status of that
device.
All the “content instances” are stored in the internal database with a unique
resource ID.
9 http://fot-net.eu/Documents/fot-net-data-final-deliverables/
10 https://catalog.data.gov/dataset
11 http://www.dcc.ac.uk/resources/metadata-standards/list
12 http://www.onem2m.org/
The IoT metadata describe the structure of the information, according to the
OneM2M standard. The IoT metadata are described in the table below.
### Table 1 – OneM2M Metadata for IoT data 6
<table>
<tr>
<th>
**Metadata Element**
</th>
<th>
**Extended name**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
pi
</td>
<td>
parentID
</td>
<td>
ResourceID of the parent of this resource.
</td> </tr>
<tr>
<td>
ty
</td>
<td>
resourceType
</td>
<td>
Resource Type attribute identifies the type of the resource, e.g. "4
(contentInstance)".
</td> </tr>
<tr>
<td>
ct
</td>
<td>
creationTime
</td>
<td>
Time/date of creation of the resource.
This attribute is mandatory for all resources and the value is assigned by the
system at the time when the resource is locally created. Such an attribute
cannot be changed.
</td> </tr>
<tr>
<td>
ri
</td>
<td>
resourceID
</td>
<td>
This attribute is an identifier for the resource that is used for the
'non-hierarchical addressing method', i.e. this attribute contains the
'Unstructured-CSE-relative-Resource-ID' format of a resource ID as defined in
table 7.2-1 of [5].
This attribute is provided by the Hosting CSE when it accepts a resource
creation procedure. The Hosting CSE assigns a resourceID which is unique in
that CSE.
</td> </tr>
<tr>
<td>
rn
</td>
<td>
resourceName
</td>
<td>
This attribute is the name for the resource that is used for 'hierarchical
addressing method' to represent the parent-child relationships of resources.
See clause 7.2 in [5] for more details.
</td> </tr>
<tr>
<td>
lt
</td>
<td>
lastModifiedTime
</td>
<td>
Last modification time/date of the resource. The lastModifiedTime value is
updated when the resource is updated.
</td> </tr>
<tr>
<td>
et
</td>
<td>
expirationTime
</td>
<td>
Time/date after which the resource will be deleted by the Hosting CSE.
</td> </tr>
<tr>
<td>
acpi
</td>
<td>
accessControlPolicyIDs
</td>
<td>
The attribute contains a list of identifiers of an <accessControlPolicy>
resource. The privileges defined in the <accessControlPolicy> resource that
are referenced determine who is allowed to access the resource containing this
attribute for a specific purpose (e.g. Retrieve, Update, Delete, etc.).
</td> </tr>
<tr>
<td>
lbl
</td>
<td>
label
</td>
<td>
Tokens used to add meta-information to resources.
This attribute is optional.
The value of the labels attribute is a list of individual labels, that can be
used for example for discovery purposes when looking for particular resources
that one can "tag" using that label-key.
</td> </tr>
<tr>
<td>
st
</td>
<td>
stateTag
</td>
<td>
An incremental counter of modifications on the resource. When a resource is
created, this counter is set to 0, and it is incremented on every
modification of the resource.
</td> </tr>
<tr>
<td>
cs
</td>
<td>
contentSize
</td>
<td>
Size in bytes of the content attribute.
</td> </tr>
<tr>
<td>
cr
</td>
<td>
creator
</td>
<td>
The ID of the entity (Application Entity or Common Services Entity) which
created the resource containing this attribute.
</td> </tr>
<tr>
<td>
cnf
</td>
<td>
contentInfo
</td>
<td>
Information that is needed to understand the content. This attribute is a
composite attribute. It is composed first of an Internet Media Type (as
defined in the IETF RFC 6838) describing the type of the data, and second of
an encoding information that specifies how to first decode the received
content. Both elements of information are separated by a separator defined in
OneM2M TS0004 [3].
</td> </tr>
<tr>
<td>
or
</td>
<td>
ontologyRef
</td>
<td>
This attribute is optional.
A reference (URI) of the ontology used to represent the information that is
stored in the contentInstances resources of the <container> resource. If this
attribute is not present, the contentInstance resource inherits the
ontologyRef from the parent <container> resource if present.
</td> </tr> </table>
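Putting the short names of Table 1 together, a single contentInstance published by an IoT device might be serialized roughly as below. This is a hedged illustration: the resource IDs, timestamps and payload are invented, and real OneM2M platforms carry further attributes.

```python
import json

# Illustrative contentInstance resource using the OneM2M short attribute
# names from Table 1; all values are invented for the example.
content_instance = {
    "m2m:cin": {
        "pi": "cnt-vehicle-speed",     # parentID: the parent <container>
        "ty": 4,                       # resourceType: 4 = contentInstance
        "ri": "cin-000123",            # resourceID, unique within the CSE
        "rn": "cin_000123",            # resourceName (hierarchical addressing)
        "ct": "20180321T103000",       # creationTime, set by the system
        "lt": "20180321T103000",       # lastModifiedTime
        "st": 0,                       # stateTag: 0 on creation
        "cnf": "application/json:0",   # contentInfo: media type + encoding
        "con": '{"speed_kmh": 42.5}',  # content: the device's actual values
    }
}
# contentSize (cs) is derived from the content attribute
content_instance["m2m:cin"]["cs"] = len(content_instance["m2m:cin"]["con"])
serialized = json.dumps(content_instance)
```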
# 3 Data management methodology in AUTOPILOT
The AUTOPILOT data collection process and data management are built upon
requirements coming from four processes:
* **The evaluation requirement** defines the minimum data that must be collected in order to perform the evaluation process at the end of the project
* **The test specification** provides details about the data to be collected on the basis of the evaluation requirements and according to use cases specifications
* **The test data management** defines the data collection, harmonization, storage and sharing requirements using the above two processes and the ORDP process
* **The Open Research Data Pilot** 7 **(ORDP)** defines the requirement related to sharing of research data
## 3.1 Evaluation process requirements
The evaluation process is defined in task 4.1 which develops the evaluation
methodology. Named FESTA (Field opErational teSt supporT Action), this
methodology must be implemented thoroughly and incorporated into the planning
to guarantee that all pilots are collecting the required information needed
for the evaluation.
The following figure shows a high-level view of the data that will be
collected and integrated in the evaluation process. Different types of data
(in blue) are collected, stored and analysed by different processes. The
workflow will be defined per pilot site but in a homogeneous way. The data
types and requested formats will be defined in the evaluation task deliverable
D4.1.
To fulfil the project objectives, a design of experiment is performed during
the evaluation task. This design creates requirements that define the number
of scenarios and test cases, the duration of tests and test runs, the number
of situations per specific event, the number of test vehicles, the variation
in users, the variation in situations (weather, traffic, etc.). Each pilot
site must comply with this design of experiment and provide sufficient and
meaningful data with the required quality level to enable technical
evaluation. Refer to D1.1 for additional information regarding design of
experiment and data quality (Time synchronization of devices & logging,
accuracy & frequency of logging, alternative data sources, cross-checking from
automated vehicles, on-board devices, road side detectors, detection of
failures in systems and logging).
## 3.2 Tests specification process requirements
The pilot tests specification Task T3.1 plays a major role and must be
followed thoroughly. Indeed, this task will convert the high-level
requirements defined in the evaluation process into precise and detailed
specifications of data formats, data size, data currencies, data units, data
files, and storage. The list of requirements will be defined for each of the
following items: Pilot sites, Scenarios, Test Cases, Measures, Parameters,
Data quality, etc. and will be described in deliverable D3.1. All the
development tasks of WP2 must implement completely, if impacted, the
requirement defined in D3.1 in order to provide all the data (test data) as
expected by the technical evaluation.
## 3.3 Open research data pilot requirement process
Additional requirements related to ORDP are defined in this document to
guarantee that the collected data will be provided in compliance to European
Commission Guidelines 8 on Data Management in Horizon 2020. Those
requirements are clearly defined and explained in chapter 4.
## 3.4 Test data management methodology
The main objective of the data management plan is to define the methodology to
be applied in AUTOPILOT across all pilot sites, in particular test data
management. This includes the explanation of the common data collection and
integration methodology.
One of the main objectives within T3.4 “Test Data Management” is to ensure the
comparability and consistency of collected data across pilot sites. In this
context, the methodology is highly impacted by the pilot site specifications
of Task 3.1 and compliant with the evaluation methodologies developed in Task
4.1. In particular, technical evaluation primarily needs log data from the
vehicles, IoT platforms, cloud services and situational data from pilot sites
to detect situations and events, and to calculate indicators.
The log data parameters that are needed for technical evaluation are organized
by data sources (vehicle sources, vehicle data, derived data, positioning, V2X
messages, IoT messages, events, situations, surveys and questionnaires).
For IoT data, some pilot sites use proprietary IoT platforms in order to
collect data from specific devices or vehicles (e.g. the Brainport car sharing
service and automated valet parking service use Watson IoT Platform™ to
collect data from their vehicles).
On top of that, each pilot site runs a OneM2M platform: the interoperability
IoT platform for exchanging IoT messages relevant to all autonomous driving
(AD) vehicles at pilot site level. The test data will then be stored in the
pilot site test server storage, which will mainly contain the vehicle data,
IoT data and survey data. The test data will further be packaged and sent to
the AUTOPILOT central storage, which will give evaluators access to all the
pilot site data in a common format. This includes the input from all pilot
sites and use cases, for all test scenarios and test runs.
Every pilot site has its own test storage server for data collection
(distributed data management). In addition, there is a central storage server
where data from all pilot sites will be stored for evaluation and analysis.
The following figure represents the data management methodology and
architecture used in AUTOPILOT across all pilot sites.
**Figure 4 – Generic scheme of data architecture in AUTOPILOT**
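The pilot-site-to-central flow described above can be sketched as follows; the packaging format, manifest fields and file names are hypothetical illustrations of the distributed-to-central transfer, not the project's actual exchange format:

```python
import io
import json
import zipfile

def package_test_run(pilot_site, use_case, run_id, files):
    """Bundle one test run's data files plus a small manifest into a single
    archive for upload from a pilot site test server to central storage."""
    manifest = {
        "pilot_site": pilot_site,
        "use_case": use_case,
        "run_id": run_id,
        "files": sorted(files),
    }
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
        for name, payload in files.items():
            zf.writestr(name, payload)
    return buf.getvalue()

pkg = package_test_run(
    "Brainport", "automated_valet_parking", "run_042",
    {
        "vehicle_log.csv": "timestamp,speed\n0,12.3\n",
        "iot_messages.json": "[]",
    },
)
```

Keeping a manifest alongside the data files lets the central storage verify completeness before evaluators access a run.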
# 4 Participation in the open research data pilot
The AUTOPILOT project has agreed to participate in the Pilot on Open Research
Data in Horizon 2020 16 . The project uses specific Horizon 2020 guidelines
associated with ‘open’ access to ensure that the project results provide the
greatest impact possible.
AUTOPILOT will ensure open access 17 to all peer-reviewed scientific
publications relating to its results and will provide access to the research
data needed to validate the results presented in deposited scientific
publications.
The following lists the minimum fields of metadata that should come with an
AUTOPILOT project-generated scientific publication in a repository:
* The terms: “European Union (EU)”, “Horizon 2020”
* Name of the action (Research and Innovation Action)
* Acronym and grant number (AUTOPILOT, 731993)
* Publication date
* Length of embargo period if applicable
* Persistent identifier
When referencing Open access data, AUTOPILOT will include at a minimum the
following statement demonstrating EU support (with relevant information
included into the repository metadata): “This project has received funding
from the European Union’s Horizon 2020 research and innovation program under
grant agreement No 731993”.
The AUTOPILOT consortium will strive to make many of the collected datasets
open access. When this is not the case, the data sharing section for that
particular dataset will describe why access has been restricted (See Chapter
5).
A number of AUTOPILOT project partners maintain institutional repositories
that will be listed in the following DMP version, where the project’s
scientific publications and in some instances, research data will be
deposited. The use of a specific repository will depend primarily on the
primary creator of the publication and on the data in question.
Some other project partners do not operate publicly accessible institutional
repositories. When depositing scientific publications, they will use either a
domain-specific repository or the EU-recommended service OpenAIRE
(http://www.openaire.eu) as an initial step towards determining relevant
repositories.
Project research data will be deposited in the online data repository ZENODO
18 . It is a free service developed by CERN under the EU FP7 project
OpenAIREplus (grant agreement no.283595).
The repository will also include information regarding the software, tools and
instruments that were used by the dataset creator(s) so that secondary data
users can access and then validate the results.
The AUTOPILOT data collection can be accessed in the ZENODO repository at an
address such as the following link:
https://zenodo.org/collection/<<autopilot>>
16 http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-dissemination_en.htm
17 http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-data-management/open-access_en.htm
18 https://zenodo.org/
In summary, as a baseline AUTOPILOT partners will deposit:
* Scientific publications – on their respective institute repositories in addition (when relevant) to the AUTOPILOT ZENODO repository
* Research data – to the AUTOPILOT ZENODO collection (when possible)
* Other project output files – to the AUTOPILOT ZENODO collection (when relevant)
This version of the DMP does not include the actual metadata about the
research data being produced in AUTOPILOT. Details about technical means and
services for building repositories and accessing this metadata will be
provided in the next version of the DMP. A template table is defined in
section 5.2 and will be used by project partners to provide all requested
information.
# 5 AUTOPILOT dataset description
## 5.1 General Description
This section provides an explanation of the different types of datasets to be
produced or collected in AUTOPILOT, which have been identified at this stage
of the project. As the nature and extent of these datasets can evolve during
the project, more detailed descriptions will be provided in the next version
of the DMP towards the end of the project (M32).
The descriptions of the different datasets, including their reference, file
format, standards, methodologies and metadata and repository to be used are
given below. These descriptions are collected using the pilot site
requirements and specifications.
It is important to note that the datasets below will be produced by each use
case at all the pilot sites. The dataset categories are:
* IoT dataset
* Vehicle dataset
* V2X messages dataset
* Survey dataset
## 5.2 Template used in dataset description
This table is a template that will be used to describe the datasets.
### Table 2 – Dataset description template
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_datatype_ID**
Each dataset will have a reference generated by combining the name of the
project, the pilot site (PS), the use case (UC) in which it is generated, the
data type and a sequence number.
**Example** : AUTOPILOT_Versailles_Platooning_IoT_01
</th> </tr>
<tr>
<td>
Dataset Name
</td>
<td>
Name of the dataset
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
Each dataset will have a full data description explaining the data provenance,
origin and usefulness. Reference may be made to existing data that could be
reused.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The list of metadata attributes, the standards and the methodologies used.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
All the formats in which the data are provided.
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
Explanation of the sharing policy that applies to the dataset, chosen among
the following options:
**Open** : openly available to the public.
**Embargo** : becomes public once the embargo period applied by the publisher
is over. In that case, the end date of the embargo period must be given in
DD/MM/YYYY format.
**Restricted** : for project-internal use only.
Each dataset must have its distribution license.
Provide information about personal data and state whether the data are
anonymized, whether the dataset contains personal data and how this issue is
taken into account.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
How the data will be stored and preserved during and after the project.
**Example** : databases, institutional repositories, public repositories.
</td> </tr> </table>
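The naming convention in the template can be sketched as a small helper. This is an illustrative sketch only: the function names are invented, and the two-digit sequence number is an assumption based on the example AUTOPILOT_Versailles_Platooning_IoT_01.

```python
import re

# Illustrative sketch of the AUTOPILOT_PS_UC_datatype_ID naming convention
# described above. Function names are invented; the two-digit sequence
# number is an assumption based on the template's example.
REFERENCE_PATTERN = re.compile(
    r"^AUTOPILOT_(?P<ps>[A-Za-z]+)_(?P<uc>[A-Za-z]+)"
    r"_(?P<datatype>[A-Za-z0-9]+)_(?P<seq>\d{2})$"
)

def make_reference(pilot_site: str, use_case: str, datatype: str, seq: int) -> str:
    """Build a dataset reference from project, pilot site, use case and data type."""
    return f"AUTOPILOT_{pilot_site}_{use_case}_{datatype}_{seq:02d}"

def is_valid_reference(ref: str) -> bool:
    """Check that a reference follows the naming convention."""
    return REFERENCE_PATTERN.match(ref) is not None

# The example given in the template:
# make_reference("Versailles", "Platooning", "IoT", 1)
# → "AUTOPILOT_Versailles_Platooning_IoT_01"
```

Such a check could be applied when partners register new datasets, so that references stay consistent across pilot sites.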
## 5.3 IoT dataset
This pro-forma table is a description of the IoT Dataset used in AUTOPILOT.
**Table 3 – IoT dataset description**
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_IoT_ID**
</th> </tr>
<tr>
<td>
Dataset Name
</td>
<td>
IoT data generated from connected devices
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
This dataset refers to the IoT data that will be generated by IoT devices
within the use cases. This includes data coming from VRUs, RSUs, smartphones,
vehicles, drones, etc.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
During the project, the metadata related to the IoT data are based on the
OneM2M standard. OneM2M IoT platforms are implemented across the pilot sites
to ensure interoperability. More details are provided in section 2.2.2.
In addition, the data model of these data is inspired by the DMAG (data
management activity group) work done in T2.3. The DMAG defined a unified data
model that standardizes all the IoT messages across pilot sites. The AUTOPILOT
common IoT data model is based on different standards: SENSORIS, DATEX II.
After the project, the metadata will be enriched with ZENODO’s metadata,
including the title, creator, date, contributor, pilot site, use case,
description, keywords, format, resource type, etc.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
JSON
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
This dataset will be openly available for use by 3rd party applications and
will be deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
During the project, the data will first be stored in the IoT platform. Then,
the data will be transferred to the pilot site test server before finishing up
in the centralized test server. At the end of the project, the dataset will be
archived and preserved in ZENODO repositories.
</td> </tr> </table>
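Since the IoT dataset is exchanged as JSON, a record might look as follows. This is a hypothetical sketch: the AUTOPILOT common IoT data model defined by the DMAG is not reproduced in this document, so every field name and value below is an illustrative assumption.

```python
import json

# Hypothetical JSON record for the IoT dataset. The actual DMAG/OneM2M data
# model is not reproduced here; all field names and values are illustrative.
record = {
    "datasetReference": "AUTOPILOT_Versailles_Platooning_IoT_01",
    "source": "RSU",                      # e.g. VRU, RSU, smartphone, vehicle, drone
    "timestamp": "2018-06-01T12:00:00Z",  # ISO 8601, UTC
    "position": {"lat": 48.8049, "lon": 2.1204},
    "payload": {"speed_kmh": 42.0, "laneId": 2},
}

# JSON round trip, as the data would be stored and later read back.
serialized = json.dumps(record)
restored = json.loads(serialized)
assert restored == record
```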
## 5.4 Vehicles dataset
### Table 4 – Vehicles dataset description
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_VEHICLES_ID**
</th> </tr>
<tr>
<td>
Dataset Name
</td>
<td>
Data generated from the vehicle sensors.
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
This dataset refers to the vehicle datasets that will be generated from the
vehicle sensors within use cases. This includes the data coming from the CAN
bus, cameras, RADARs, LIDARs and GPS.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The vehicle data standards used in AUTOPILOT are developed in task 2.1. The
pilot site implementations are based on well-known standards for common data
formats: CAN, ROS, etc. More details are provided in D2.1.
After the project, the metadata will be based on ZENODO’s metadata, including
the title, creator, date, contributor, pilot site, use case, description,
keywords, format, resource type, etc.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
XML, CSV, SQL, JSON, Protobuf
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
This dataset will be openly available for use by 3rd party applications and
will be deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
During the project, the data will first be stored in pilot site test servers
before finishing up in the centralized test server. At the end of the project,
the dataset will be archived and preserved in ZENODO repositories.
</td> </tr> </table>
## 5.5 V2X messages dataset
**Table 5 – V2X messages dataset description**
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_V2X_ID**
</th> </tr>
<tr>
<td>
Dataset Name
</td>
<td>
V2X messages communicated during test sessions
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
This dataset refers to the V2X messages generated by the communication between
the vehicles and any other party that could affect the vehicle, including
other vehicles and the pilot site infrastructure.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The V2X messages are mainly generated from the ITS-G5 communication standard.
After the project, the metadata will be enriched by ZENODO’s metadata,
including the title, creator, date, contributor, pilot site, use case,
description, keywords, format, resource type, etc.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CAM, DENM, IVI, SPAT, MAP
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
This dataset will be openly available for use by 3rd party applications and
will be deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
During the project, the data will first be stored in pilot site test servers
before finishing up in the centralized test server. At the end of the project,
the dataset will be archived and preserved in ZENODO repositories.
</td> </tr> </table>
## 5.6 Surveys dataset
**Table 6 – Surveys dataset description**
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_SURVEYS_ID**
</th> </tr>
<tr>
<td>
Dataset Name
</td>
<td>
Survey data collected during test sessions
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
This dataset refers to the data resulting from the answers to the surveys and
questionnaires used for user acceptance evaluation.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Survey data will be collected using well-known tools (Google Forms, Survey
Monkey, etc.). A common format for survey data is still being defined by the
user acceptance evaluation team.
After the project, the metadata will be enriched with ZENODO’s metadata,
including the title, creator, date, contributor, pilot site, use case,
description, keywords, format, resource type, etc.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV, PDF, XLS
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
This dataset will be openly available for use by 3rd party applications and
will be deposited in the ZENODO repository. It is important to note that these
data will be **anonymized** before data sharing.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
During the project, the data will first be stored in pilot site test servers
before finishing up in the centralized test server. At the end of the project,
the dataset will be archived and preserved in ZENODO repositories.
</td> </tr> </table>
# 6 FAIR data management principles
The data that will be generated during and after the project should be
**FAIR** 9 , that is Findable, Accessible, Interoperable and Reusable. These
requirements do not affect implementation choices and don’t necessarily
suggest any specific technology, standard, or implementation solution.
The FAIR principles were generated to improve the practices for data
management and data curation. FAIR principles aim to be applicable to a wide
range of data management purposes, whether it is data collection or data
management of larger research projects regardless of scientific disciplines.
With the endorsement of the FAIR principles by H2020 and their implementation
in the guidelines for H2020, the FAIR principles serve as a template for
lifecycle data management and ensure that the most important components for
the lifecycle are covered.
The intention is to target the implementation of the FAIR concept rather than
a strict technical implementation of the FAIR principles. AUTOPILOT project
has undertaken several actions, described below, to carry on the FAIR
principles.
**Making data findable, including provisions for metadata**
* The datasets will carry rich metadata to facilitate findability. In particular, for IoT data the metadata are based on the OneM2M standard.
* All the datasets will have Digital Object Identifiers (DOIs) provided by the public repository (ZENODO).
* The reference used for the dataset will follow this format: **AUTOPILOT_PS_UC_Datatype_XX.**
* The standards for metadata are defined in the chapter 5 tables and explained in section 2.2.
**Making data openly accessible**
* All the datasets that are openly available are described in the chapter 5.
* The datasets for evaluation will be accessible via AUTOPILOT’s centralized server.
* The datasets will be made available using a public repository (e.g. ZENODO) after the project.
* The data-sharing entries in the chapter 5 tables explain the methods or software used to access the data; in practice, no special software is needed.
* The data and their associated metadata will be deposited in a public repository or in an institutional repository.
* The data-sharing entries in the chapter 5 tables will outline the rules for accessing the data if restrictions exist.
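The deposit in ZENODO mentioned above can be sketched as follows. The metadata field names (`upload_type`, `creators`, `access_right`, `license`, …) follow ZENODO's public deposition metadata; the concrete title, creator and keywords are invented examples, and the actual upload would be an authenticated POST to ZENODO's REST API.

```python
import json

# Sketch of a deposition record as it could be prepared for ZENODO's REST
# API (POST https://zenodo.org/api/deposit/depositions with an access
# token). Field names follow ZENODO's deposition metadata; all values are
# illustrative examples, not actual AUTOPILOT records.
deposition = {
    "metadata": {
        "upload_type": "dataset",
        "title": "AUTOPILOT_Versailles_Platooning_IoT_01",
        "description": "IoT data generated from connected devices "
                       "during AUTOPILOT test sessions.",
        "creators": [{"name": "Example, Partner", "affiliation": "AUTOPILOT"}],
        "keywords": ["AUTOPILOT", "IoT", "pilot site", "use case"],
        "access_right": "open",
        "license": "cc-by",
    }
}

payload = json.dumps(deposition)  # body of the authenticated POST request
```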
**Making data interoperable**
* The metadata vocabularies, standards and methodologies will depend on the public repository and are mentioned in the chapter 5 tables.
* AUTOPILOT WP2 took several steps to define common data formats. This work was carried out in task 2.1 for vehicle data and task 2.3 for IoT data. The goal is to have the same structure across pilot sites and to enable evaluators to deal with a single format for all pilot sites.
* AUTOPILOT pilot sites use IoT platforms based on OneM2M standards to enable data interoperability across pilot sites.
**Increase data re-use (through clarifying licenses)**
* All the data producers will license their data to allow the widest reuse possible. More details about license types and rules will be provided in the next version (M32).
* By default, the data will be made available for reuse. If any constraints exist, an embargo period will be mentioned in the chapter 5 tables so that the data are withheld only for a limited period of time.
* The data producers will make their data available to third parties through public repositories. The data may be reused for the purpose of validating scientific publications.
# 7 Responsibilities
In order to face the data management challenges efficiently, all AUTOPILOT
partners have to respect the policies set out in this DMP and datasets have to
be created, managed and stored appropriately.
The data controller role within AUTOPILOT will be undertaken by Francois
Fischer (ERTICO) who will directly report to the AUTOPILOT Ethics Board.
Each data producer and WPL is responsible for the integrity and compatibility
of its data during the project lifetime. The data producer is responsible for
sharing its datasets through open access repositories, and for providing the
latest version.
Regarding ethical issues, the deliverable D7.1 details all the measures that
AUTOPILOT will use to comply with the H2020 Ethics requirements.
The data manager role within AUTOPILOT will directly report to the Technical
Meeting Team (TMT). The data manager will coordinate the actions related to
data management and, in particular, compliance with the Open Research Data
Pilot guidelines. The data manager is responsible for implementing the data
management plan and for ensuring that it is reviewed and revised.
# 8 Ethical issues and legal compliance
Ethical issues related to the AUTOPILOT project will be addressed in deliverable D7.1.
As explained in chapter 2, the IoT platform is a cloud platform that will be
hosted on IBM infrastructure, and maintained by IBM IE. It will integrate and
aggregate data from the various vehicles and pilot sites.
All data transfers to the IBM hosted IoT platform are subject to and
conditional upon compliance with the following requirements:
* Prior to any transfer of data to the IBM hosted central IoT platform, all partners must execute an agreement as provided for in Attachment 6 of the AUTOPILOT Collaboration Agreement.
* All the partners must commit not to provide personal data to the central IoT platform and to ensure that they secure all necessary authorizations and consents before sharing data or any other type of information (“background, results, confidential information and/or any data”) with other parties.
* Every partner that needs to send and store data in the central IoT platform has to request access to the servers, and inform IBM IE what type of data they will send.
* IBM IE will review all data sources BEFORE approving them and allowing them into the central IoT platform, to ensure they are transformed into data that cannot be traced back to personal information.
* No raw videos/images or private information can be sent to the central IoT platform. The partners who will send data to the platform must anonymize data first. Only anonymized information that will be extracted from the raw images/videos (e.g., distance between cars, presence of pedestrians, etc.) will be accepted and stored.
* The central IoT platform will only be made available to the consortium partners, and not to external entities.
* IBM IE reserves the right to suspend partners’ access in case of any suspicious activities detected or non-compliant data received. IBM IE may re-grant access to the platform if a solution demonstrating how to prevent such sharing of personal data and sensitive personal data is reached and implemented.
* IBM IE may implement validation procedures to check that the submitted data structures and types are compliant with what the partners promised to send to the central IoT platform.
* All the data will be deleted at the end of the project from all servers of the central IoT platform.
The privacy and security issues related to the AUTOPILOT project will be
outlined in deliverable D7.1 and addressed in the WP1 Task 1.5 for Security,
Privacy and Data
Specification issues.
# 9 Conclusion
This deliverable provides an overview of the data that AUTOPILOT project will
produce together with related data processes and requirements that need to be
taken into consideration.
The document outlines an overview of the dataset types with detailed
description and explains the processes that will be followed for test sites
and evaluation within high-level representations.
Chapter 5, which describes the datasets, has been updated from the previous
version of the DMP (D6.7) to reflect the current progress of the project.
This includes a detailed description of the standards, methodologies, sharing
policies and storage methods.
The Data Management Plan is a living document. The final version of the DMP
will be available at the end of the project and will provide all the details
concerning the datasets. These datasets are the result of the test sessions
performed at the pilot sites. The data will be stored in a public repository
after the project.
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0574_Sci-GaIA_654237.md
**Executive summary**
As part of the limited pilot action on open access to research data, Sci-GaIA
has implemented its data management based on the “Guidelines on Data
Management in Horizon 2020”. This document specifies the Data Management Plan
(DMP) for the project and provides a detailed outline of our policy for data
management.
# 1 INTRODUCTION
As part of the limited pilot action on open access to research data, Sci-GaIA
has based its data management on the “Guidelines on Data Management in Horizon
2020”. Within the overall project management work package (WP5), this is
captured in task T5.1 Data Management, which specifies the Data Management
Plan (DMP) by creating a detailed outline of the project policy for data
management. As specified in the Guidelines, this will consider the following:
* Determine if the project will produce new data or combine existing data
* Identify the data sources used and produced during project and the related file formats
* Describe how you will implement a Quality Assurance procedure (QA) for data collection
* Explain your strategy for preventing data loss: files organization and indexing, data backups and storage
* Depending on the dissemination level of each dataset, explain how you will ensure (1) data confidentiality, (2) restricted access, or (3) data high visibility
* Explain how data management tasks and responsibilities are distributed among partners and how they cover the entire data life cycle of the project
This document therefore outlines the first version of the project DMP. The
Sci-GaIA DMP primarily lists the different datasets that will be produced by
the project, the main exploitation perspectives for each of those datasets,
and the major management principles the project will implement to handle those
datasets. The purpose of the DMP is to provide an analysis of the main
elements of the data management policy that will be used by the consortium
with regard to all the datasets that will be generated by the project.
The DMP is not a fixed document. It will evolve during the lifespan of the
project. This first version of the DMP includes an overview of the datasets to
be produced by the project, and the specific conditions that are attached to
them. The next version of the DMP, to be published at M18, will detail and
describe the practical data management procedures implemented by the Sci-GaIA
project. The data management plan will cover all the data life cycle (figure
1).
_Figure 1: Steps in the data life cycle. Source: From University of Virginia
Library, Research Data Services_
# 2 DATASET LIST
All Sci-GaIA partners have identified the datasets that will be produced
during the different phases of the project. The list is provided below, while
the nature and details for each dataset are given in the subsequent sections.
This list is indicative and gives an estimate of the data that Sci-GaIA will
produce; it may be adapted (addition/removal of datasets) in the next versions
of the DMP to take the project developments into consideration.
<table>
<tr>
<td>
**#**
</td>
<td>
**Dataset (DS) name**
</td>
<td>
**Responsible partner**
</td>
<td>
**Related WP(s)**
</td> </tr>
<tr>
<td>
1
</td>
<td>
DS1_Newsletter-
Subscribers_SIGMA_V01_DATE
</td>
<td>
SIGMA
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
2
</td>
<td>
DS2_e-Infrastructure-
Survey_WACREN_V01_DATE
</td>
<td>
WACREN
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
3
</td>
<td>
DS3_User-Forum-Members_CSIR_V01_DATE
</td>
<td>
CSIR
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
4
</td>
<td>
DS4_Open-Access-
Repositories&Services_UNICT_V01_DATE
</td>
<td>
UNICT
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
DS5_Event-Membership_SIGMA_V01_DATE
</td>
<td>
SIGMA
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
DS6_Educational-
Materials_BRUNEL_V01_DATE
</td>
<td>
BRUNEL
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
DS7_Project-Deliverables_V01_DATE
</td>
<td>
BRUNEL
</td>
<td>
WP5
</td> </tr> </table>
_Table 1: Dataset list_
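The dataset names in Table 1 follow the pattern DSn_\<Name\>_\<Partner\>_V\<version\>_DATE. A minimal helper to produce such names might look like this; the ISO date format is an assumption, since the table only shows the literal placeholder DATE.

```python
from datetime import date

# Illustrative helper for the Sci-GaIA naming scheme in Table 1
# (DSn_<Name>_<Partner>_V<version>_DATE). The ISO 8601 date format is an
# assumption: the table only shows the placeholder "DATE".
def dataset_name(num: int, name: str, partner: str, version: int, on: date) -> str:
    return f"DS{num}_{name}_{partner}_V{version:02d}_{on.isoformat()}"

example = dataset_name(1, "Newsletter-Subscribers", "SIGMA", 1, date(2015, 9, 30))
# e.g. "DS1_Newsletter-Subscribers_SIGMA_V01_2015-09-30"
```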
# 3 GENERAL PRINCIPLES
## 3.1 PARTICIPATION IN THE PILOT ON OPEN RESEARCH DATA
The Sci-GaIA project participates in the Pilot on Open Research Data launched
by the European Commission along with the Horizon 2020 programme. The
consortium strongly believes in the concepts of open science, and in the
benefits that the European innovation ecosystem and economy can draw from
allowing reusing data at a larger scale. Therefore, all data produced by the
project can potentially be published with open access – though this objective
will obviously need to be balanced with the other principles described below.
## 3.2 IPR MANAGEMENT AND SECURITY
Project partners obviously have Intellectual Property Rights (IPR) on their
technologies and data, on which their economic sustainability relies. As a
legitimate result, the Sci-GaIA consortium will have to protect these data and
consult the concerned partner(s) before publishing data.
Another effect of IPR management is that – with the data collected through
Sci-GaIA being of high value – all measures should be taken to prevent them
from leaking or being hacked. This is another key aspect of Sci-GaIA data
management.
Hence, all data repositories used by the project will include a secure
protection of sensitive data.
A holistic security approach will be undertaken to protect the three main
pillars of information security: confidentiality, integrity, and availability.
The security approach will consist of a methodical assessment of security
risks followed by an impact analysis. This analysis will be performed on the
personal information and data processed by the proposed system, their flows
and any risk associated to their processing.
Security measures will include the implementation of PAKE protocols – such as
the SRP protocol – and protection against bots such as CAPTCHA technologies.
Moreover, the WP/Task leaders identified in Table 1 will implement monitored
and controlled procedures related to data collection, integrity and
protection. Additionally, the protection and privacy of personal information
will include protective measures against infiltration as well as physical
protection of core parts of the systems and access control measures.
## 3.3 PERSONAL DATA PROTECTION
For some of the activities to be carried out by the project, it may be
necessary to collect basic personal data (e.g. full name, contact details,
background), even though the project will avoid collecting such data unless
deemed necessary.
Such data will be protected in compliance with the EU's _Data Protection
Directive 95/46/EC_ 1 aiming at protecting personal data. National
legislations applicable to the project will also be strictly followed, such as
the _Italian Personal Data Protection Code_ 2.
All data collected by the project will be done after giving data subjects full
details on the analysis to be conducted, and after obtaining signed informed
consent forms.
2 http://www.privacy.it/privacycode-en.html
# 4 DATA MANAGEMENT PLAN
## 4.1 DATASET 1: NEWSLETTER SUBSCRIBERS
<table>
<tr>
<th>
**DS1_Newsletter-Subscribers_ SIGMA_V01_DATE**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Mailing list containing email addresses and names of all subscribers to the
Sci-GaIA’s newsletter
</td> </tr>
<tr>
<td>
Source (How have the data been collected? From which tool/survey does the data
come from?)
</td>
<td>
This dataset is automatically generated in
<Mailchimp/Mailjet/…> by visitors signing up to the newsletter form available
on the project website. Additional subscribers can be manually added to the
mailing list by the partner in charge of the project communication after
receiving informed consent from the data subjects
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable).
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, T4.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to a CSV, TXT or Excel file.
At the time of this deliverable, the list contains contact information of
around 7000 people and is smaller than 1 MB.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The mailing list will be used for disseminating the project newsletter to a
targeted audience. An analysis of newsletter subscribers may be performed in
order to assess and improve the overall visibility of the project
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
This dataset does not contain confidential information. However, the
information is sensitive because it implies managing personal data. Therefore,
access to the dataset is restricted to the project dissemination and
communication leader
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
The mailing list contains personal data (names and email addresses of
newsletter subscribers). People interested in the project voluntarily
register, through the project website, to receive the project newsletter. They
can unsubscribe at any time.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The mailing list will be regularly backed up in Excel file format all along
the project. Back-ups are safely stored in SIGMA’s server.
</td> </tr> </table>
## 4.2 DATASET 2: E-INFRASTRUCTURE SURVEY
**DS2_e-Infrastructure-Survey_WACREN_V01_DATE**
<table>
<tr>
<th>
**Data Identification**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Dataset containing details of people who have participated in the Sci-GaIA
e-Infrastructure Survey
</td> </tr>
<tr>
<td>
Source (How have the data been collected? From which tool/survey does the data
come from?)
</td>
<td>
This dataset is captured using Limesurvey as people take part in the survey.
The link is http://surveys.sci-gaia.eu/index.php/531683
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td>
</tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable).
</td>
<td>
WACREN
</td>
</tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
WACREN
</td>
</tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
WACREN
</td>
</tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
WACREN
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP1, T1.3
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to a CSV, TXT or Excel file.
At the time of this deliverable, the list contains 0 people and their
responses, and is smaller than 1 MB.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be analysed to give indications of the impact of
eInfrastructures in Africa. This will appear in D1.3.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
This dataset does not contain confidential information. However, the
information is sensitive because it implies managing personal data. Therefore,
access to the dataset is initially restricted to the task leader. However, if
the participant has indicated that they are happy to have their personal
details shared then this will be made available to the project team and within
D1.3 (i.e. participants wish to have their e-Infrastructure project efforts
shared with the international community).
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
See above.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
The survey specifically asks if the participants are happy to share their
details. If so, they indicate this in the survey document and add their
details.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The list will be stored within the Limesurvey tool or exported to Excel and
stored in the WACREN beneficiary’s computer. These will be held at WACREN. The
list will be deleted six months after the end of the project. Participants who
are happy to share their details will have their data stored within D3.1 (see
project deliverables dataset).
</td> </tr> </table>
## 4.3 DATASET 3: USER FORUM MEMBERS
<table>
<tr>
<th>
**DS3_User-Forum-Members_CSIR_V01_DATE**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Forum list containing email addresses and names of all subscribers to the Sci-
GaIA User Forum. Dataset also contains all Forum posts.
</td> </tr>
<tr>
<td>
Source (How have the data been collected? From which tool/survey does the data
come from?)
</td>
<td>
This dataset is automatically generated by visitors signing up to the User
Forum at discourse.sci-gaia.eu.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable).
</td>
<td>
CSIR
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
CSIR
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
CSIR
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
CSIR
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2, T2.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to a CSV, TXT or Excel file.
At the time of this deliverable, the list contains contact information of
around 20 people and is smaller than 1 MB.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The dataset is the Forum discussions that support the project’s activities.
This is “self-exploiting” in the sense of continued discussion. The Forum’s
themes and content will be analysed without reference to specific users in
D2.1 Outcomes of the Web-based Forum.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
This dataset does have some personal data. Users have control over the
visibility of this and the degree to which this is shared with other Forum
users. Access to personal data is otherwise restricted to the task leader.
Posts in the Forum are visible to all users.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
As noted above.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
This dataset does have some personal data. Users have control over the
visibility of this and the degree to which this is shared with other Forum
users. People interested in the Forum voluntarily register and can deregister
at any time.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The Forum is held on the Discourse server in its own file format. The
location of the server is being investigated.
</td> </tr> </table>
## 4.4 DATASET 4: OPEN ACCESS REPOSITORIES & SERVICES
<table>
<tr>
<th>
**DS4_Open-Access-Repositories &Services_V01_DATE **
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This is the list of open access data repositories and services supported by
Sci-GaIA’s infrastructure services. Where appropriate it will list the data
management policy for a particular service.
</td> </tr>
<tr>
<td>
Source (How have the data been collected? From which tool/survey does the data
come from?)
</td>
<td>
This is a simple list that is added to when new data repositories and services
are added to our infrastructure services.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable).
</td>
<td>
UNICT
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
UNICT
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UNICT
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UNICT
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3, T3.2
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Under development
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Under development
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Various and will reflect the services. Each will be captured along with the
service description.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Through an IdP, i.e. only those with appropriate security credentials can
access the service. This will be detailed against each service.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
This will vary from service to service and will be captured with a service
description.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None.
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
The policy for each service will be captured.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
As above.
</td> </tr> </table>
## 4.5 DATASET 5: EVENT MEMBERSHIP
<table>
<tr>
<th>
**DS5_Event-Membership_ SIGMA_V01_DATE**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
A list of participants at the Sci-GaIA workshops and training events.
</td> </tr>
<tr>
<td>
Source (How have the data been collected? From which tool/survey does the data
come from?)
</td>
<td>
The dataset is generated from attendees joining the Sci-GaIA events.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable).
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, T4.2 & T4.3
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to, a CSV, TXT or Excel file.
At the time of this deliverable, the list contains contact information for 0
people, as the events are yet to take place.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
An analysis of event attendees may be performed in order to assess and improve
the overall visibility of the project
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
This dataset does not contain confidential information. However, the
information is sensitive because it implies managing personal data. Therefore,
access to the dataset is restricted to the project dissemination and
communication leader.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
The list contains personal data (names and email addresses of event
registrants). People interested in the events voluntarily register through the
project website.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The list will be regularly backed up in Excel file format throughout the
project. Back-ups are safely stored on SIGMA’s server.
</td> </tr> </table>
## 4.6 DATASET 6: EDUCATIONAL MATERIALS
<table>
<tr>
<th>
**DS6_Educational-Materials_ BRUNEL_V01_DATE**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Educational materials created for the training workshops.
</td> </tr>
<tr>
<td>
Source (How have the data been collected? From which tool/survey does the data
come from?)
</td>
<td>
This has been developed by UNICT and Brunel to support the training events and
subsequent educational modules.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable).
</td>
<td>
UNICT, BRUNEL
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
BRUNEL, UNICT
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
NA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UNICT
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP1, T1.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
There are many types of data involved, ranging from Word documents to videos.
The estimated size as deployed in OPENEDX will be determined at the time of
the associated deliverable.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
This will be published under an open commons licence for anyone to exploit.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Open access according to the open commons licence.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
This will be available to all.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No personal data.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The data will be held and backed up at UNICT servers.
</td> </tr> </table>
## 4.7 DATASET 7: PROJECT DELIVERABLES
<table>
<tr>
<th>
**DS7_Project-Deliverables_ BRUNEL_V01_DATE**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The deliverables of the project.
</td> </tr>
<tr>
<td>
Source (How have the data been collected? From which tool/survey does the data
come from?)
</td>
<td>
Generated by WP leaders.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable).
</td>
<td>
BRUNEL
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
BRUNEL (and WP leaders)
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
BRUNEL (and WP leaders)
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
SIGMA/EC
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5 and all WPs
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This will be determined by the end of the project. It will be a combination of
Word/PDF documents and supporting information.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The deliverables present the outcomes of the project for public use.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Open access for all deliverables apart from financial information. This is
restricted to the consortium and Commission Services.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Open, except as noted above.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Any personal data appearing in a deliverable will be handled according to the
corresponding dataset policies noted above.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
SIGMA and EC – indefinitely.
</td> </tr> </table>
# 5 CONCLUSION
This document contains the data management policy for Sci-GaIA. The policy
will be periodically revised at Project Management Board meetings.
0576_LOFAR4SW_777442.md
# IV. PROJECT SUMMARY
The LOFAR4SW design study addresses all conceptual and technical aspects
required to upgrade the LOFAR radio telescope, system-wide, to take on a
parallel role as a highly innovative new facility that enables large-scale
monitoring projects generating unique data for the European (and worldwide)
space weather research community, with great downstream potential for improved
precision and advance warning of space weather events affecting crucial
infrastructures on earth. The LOFAR4SW facility will be a powerful research
infrastructure that will allow scientists to answer many important questions
with regard to the solar corona, the heliosphere, and Earth’s ionosphere. The
term “space weather” covers the effects that the Sun has on the Earth,
including: direct powerful electromagnetic emission as a result of, for
example, solar flares; the continuous, but highly variable, outflow of hot
plasma known as the solar wind, carrying with it the interplanetary magnetic
field through the heliosphere; and large ejections of solar material known as
Coronal Mass Ejections (CMEs). These conditions drive processes in the Earth’s
magnetosphere and ionosphere which can strongly affect many technologies upon
which we now rely, including satellite operations, telecommunications,
navigation systems and power grids. Reliable prediction of space weather,
necessary to provide sufficient warning for effective counter-measures against
its effects on human technology, requires a full understanding of the physical
principles driving material from the Sun out through interplanetary space, and
the resulting dynamical impact on the magnetosphere and ionosphere. Ground-
based remote-sensing observations, coupled with sophisticated analysis and
modelling techniques for further physical understanding, are of critical
importance to space weather science and forecasting capabilities.
The overarching objective of the LOFAR4SW design project is to prepare fully
for subsequent implementation of a timely and efficient upgrade, and an
expedient start of operations.
# V. EXECUTIVE SUMMARY
This deliverable is related to Task 8.4 Project Data Management, which
comprises the following subtasks:
* T8.4.1. Data Management Plan: creation of this document along the lines of the “Guidelines on FAIR Data Management in Horizon2020” provided by EC (20 July 2016).
* T8.4.2 Manage data according to the DMP: management and activation of project participants to populate and update the repositories.
The primary goal of this document is to present how the data will be handled
during the course of the project and after the project completion.
# 1\. Introduction
LOFAR4SW is part of the Open Research Data Pilot, which means that data
produced in the frame of this project will generally be available with as few
restrictions as possible, if any. This document describes how the data will be
handled during the project and after the project is completed. Figure 1.1
below presents the Data Management Plan scheme.
Figure 1.1 Data Management Plan Scheme.
Following the ORDP requirements, this document comprises the following
aspects:
1. The data set: defines and describes the data collected/generated in the project, as well as to whom the data might be useful.
2. Standards and metadata: describes the data content, applied types, formats and standards. It is desired that the scientific data (if any are produced) are interoperable and can be accessed via queries from other platforms (e.g. VO).
3. Data sharing: describes the licenses and policies under which the data are accessible, although desired policy is an Open Access. It also defines the user, to whom the data will be available.
4. Archiving and preservation: describes long-term storage and data management, including data curation. It ensures that data are FAIR.
5. Budget: this point will be especially important to ensure that the data are available, reusable and accessible not only within the project time frame, but also after the project is completed.
# 2\. Data types, formats, standards
## 2.1 Scientific datasets
A basic requirement of the LOFAR4SW facility will be to provide easily
accessible, open-access space weather science data products to the community.
This functionality will require designing an update of the Data Distribution
Module of the LOFAR system software. The design for the curation and
dissemination of LOFAR4SW observatory data products through the Science Data
Centre will largely build on existing and freely available concepts such as
the Virtual Observatory. This issue will be described in more detail in the
deliverable D6.8 - Final Science DMP.
## 2.2 Documentation
During the course of the project it is envisaged to deliver detailed and
extensive documentation related to the software and hardware development,
completed milestones, produced deliverables, reports and others. Documentation
will be internally reviewed by all project partners.
It is expected that LOFAR4SW documentation will consist of various types of
documents as listed below:
* public/confidential reports
* technical/scientific publications
* project presentations and posters
* user guide
* training materials
* data policy.
The list is not closed and will be modified during the course of the project.
All released documents that have public access rights will be available in one
of the recommended formats (preferably PDF) on the main project website
( _http://lofar4sw.eu_ ). Documentation with confidential access rights
will be available to the project partners via Redmine. Documentation before
final release is also treated as restricted to the project partners.
The documentation under preparation may be distributed among project partners
via a selected sharing platform (see Annex B) in order to allow efficient
joint editing.
An additional type of project documentation will be entries in social media.
The community will also be informed about important updates and events related
to the project via the project Twitter account (
_https://twitter.com/lofar4sw/_ ). This will be used to communicate project
progress in an accessible way to the wider community through popular social
media. A selected person from the project will be responsible for maintaining
and updating the content published on the platform.
## 2.3 Software
The LOFAR4SW project will design software dedicated to supporting the new
functions of the upgraded LOFAR stations, and new data processing pipelines to
produce high-quality space weather science data products. Part of the software
and software documentation related to the key technical-level functionality of
the LOFAR infrastructure is expected to be confidential. However,
documentation and materials related to the data processing pipeline software
prototypes accessible to users will remain publicly available at the end of
the project via the project website (or another sharing platform).
# 3\. Metadata
## 3.1 Standards and formats
Metadata is a set of a data that describes and gives information on the data.
The queries submitted by the user can be against metadata not against data
itself. Since metadata are smaller in size then the data it describes, using
metadata helps to resolve the queries more effectively and more efficiently.
This part is mainly related to scientific data and will be described in more
detail in deliverable D6.8 - Final Science DMP
## 3.2 Data description
The data description should contain the information the user is likely to
search for when submitting a query. Depending on the data type, the
information stored in metadata will be as follows:
1. Scientific datasets (if any produced during design study) are expected to be compliant with:
1. Dublin Core standards
2. SPASE data model
3. ISTP
4. IVOA
2. Documentation:
   1. Dublin Core standards
3. Software:
   1. name
   2. purpose (what it can do?)
   3. input (what has to be added/submitted to use the software?)
   4. output (e.g. output format(s))
   5. authors
   6. copyrights, policies of using the software
   7. version
   8. software inline documentation (comments): e.g. Sphinx for Python code
   9. etc.
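As an illustration of the software metadata fields listed above, a record could be represented as a simple key-value structure. This is only a sketch: the field values and the `validate_record` helper are hypothetical, not part of any fixed LOFAR4SW schema.

```python
# Hypothetical software metadata record following the fields listed above.
# Keys and values are illustrative; LOFAR4SW may adopt a different schema.
software_metadata = {
    "name": "sw-pipeline-prototype",
    "purpose": "Produce space weather science data products from station data",
    "input": "Raw LOFAR station measurements",
    "output": "Science data product files",
    "authors": ["LOFAR4SW consortium"],
    "license": "GPLv3",
    "version": "0.1.0",
}


def validate_record(record, required=("name", "purpose", "license", "version")):
    """Return the list of required metadata fields missing from a record."""
    return [field for field in required if not record.get(field)]
```

A record missing required fields can then be flagged before it is published alongside the software.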
# 4\. Data exploitation, sharing and accessibility
## 4.1 Licenses and access rights
We distinguish two types of access rights that can be applied to the data
derived in the LOFAR4SW project. These are:
* public:
  * available with no restrictions;
  * public data can be obtained via the project website or another sharing platform (see Annex B);
* confidential:
  * available only to a limited group of users; in this project this mainly means data restricted to the project participants or selected persons cooperating within the LOFAR4SW project.
For the produced results and documentation, the LOFAR4SW project will use
licences including:
* Documentation is by default licensed under CC-BY-NC-SA (this should be specified in each document).
* Codes developed in the project are recommended to be licensed under GPLv3 license.
Documents written on the basis of the selected CC license may be made
available to third parties under other selected licenses after agreement.
All other changes and additions to licensing rules will be updated during the
project.
<table>
<tr>
<th>
DATA TYPE
</th>
<th>
FORMAT
</th>
<th>
ACCESS RIGHTS
</th>
<th>
LICENSE
</th>
<th>
USER
</th> </tr>
<tr>
<td>
DOC
</td>
<td>
- PDF,
- any other read-only formats
</td>
<td>
public
</td>
<td>
CC BY-NC-SA
</td>
<td>
regular user, anonymous user
</td> </tr>
<tr>
<td>
DOC
</td>
<td>
- any editable formats,
- any read-only formats
</td>
<td>
confidential
</td>
<td>
CC BY-NC-SA
</td>
<td>
PP
</td> </tr>
<tr>
<td>
STW
</td>
<td>
executable version
</td>
<td>
public
</td>
<td>
GPLv3
</td>
<td>
regular user
</td> </tr>
<tr>
<td>
STW
</td>
<td>
- any editable formats, source codes,
- executable formats
</td>
<td>
confidential
</td>
<td>
GPLv3
</td>
<td>
PP
</td> </tr> </table>
Table 4.1. Overview of the data types produced during the LOFAR4SW project
with their access rights and licences.
## 4.2 User definition and privileges
User privileges will be fitted to the capabilities of the data sharing
platforms (e.g. www, FTP and other network services). By default, we recognize
the standard structure of users:
1. administrators
1. system administrator
2. system operator
3. data administrator
2. users
1. registered user (mainly Project participant)
2. registered anonymous user
3. anonymous user (read only access)
At the beginning of the project, only the persons invited by the project
consortium will be able to receive the registered user or registered anonymous
user privileges.
In order to meet the requirements introduced by the General Data Protection
Regulation (GDPR), LOFAR4SW will use platforms that provide appropriate
procedures for the protection of natural persons with regard to the processing
of personal data (see Annex B). For accessing confidential data via the
project website, the solution of a single anonymous user with a password was
chosen (registered anonymous user). The username and password will be
distributed to all those involved via email.
The standard user privileges are presented in table below:
<table>
<tr>
<th>
privileges/ roles
</th>
<th>
administrators
</th>
<th>
</th>
<th>
users
</th>
<th>
</th> </tr>
<tr>
<th>
system
administrator
</th>
<th>
system
operator
</th>
<th>
data
administrator
</th>
<th>
registered user
(Project participant
)
</th>
<th>
registered
anonymou s user
</th>
<th>
anonymo us user
(read only access)
</th> </tr>
<tr>
<td>
maintenance
</td>
<td>
yes
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
system software installation
</td>
<td>
yes
</td>
<td>
only plugins
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
system software update
</td>
<td>
yes
</td>
<td>
only plugins
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
system software remove
</td>
<td>
yes
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
system software configuration
</td>
<td>
yes
</td>
<td>
yes
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
system backup
</td>
<td>
</td>
<td>
yes
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
user/group add/remove/edit
</td>
<td>
only system
operators
</td>
<td>
yes
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
user/group permissions to data
</td>
<td>
</td>
<td>
yes
</td>
<td>
yes
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
data metadata editing
</td>
<td>
</td>
<td>
yes
</td>
<td>
yes
</td>
<td>
yes
(limited)
</td>
<td>
yes
(limited)
</td>
<td>
</td> </tr>
<tr>
<td>
data upload
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td>
<td>
yes
(limited)
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
data download/view
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td>
<td>
yes
</td>
<td>
yes
</td>
<td>
yes
(limited)
</td> </tr>
<tr>
<td>
data remove
</td>
<td>
</td>
<td>
yes
</td>
<td>
yes
</td>
<td>
yes
(limited)
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
feedback
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td>
<td>
yes
</td>
<td>
yes
</td> </tr>
</table>
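The privilege matrix above can also be expressed as a role-to-privileges mapping. The sketch below is a hypothetical model only: the role and privilege names paraphrase the table, and entries marked "(limited)" are modelled here as plain grants, whereas a real implementation would attach the limitation rules.

```python
# Sketch of the privilege matrix above as a role -> set-of-privileges mapping.
# Names are paraphrased from the table; "(limited)" grants are simplified.
PRIVILEGES = {
    "system administrator": {
        "maintenance", "software install", "software update",
        "software remove", "software configuration", "manage system operators",
    },
    "system operator": {
        "plugin install", "plugin update", "software configuration",
        "system backup", "user management", "data permissions",
        "metadata editing", "data remove",
    },
    "data administrator": {
        "data permissions", "metadata editing", "data upload",
        "data download", "data remove",
    },
    "registered user": {
        "metadata editing", "data upload", "data download",
        "data remove", "feedback",
    },
    "registered anonymous user": {"metadata editing", "data download", "feedback"},
    "anonymous user": {"data download", "feedback"},
}


def can(role, privilege):
    """Check whether a role is granted a privilege (unknown roles get none)."""
    return privilege in PRIVILEGES.get(role, set())
```

For example, `can("anonymous user", "data upload")` evaluates to `False`, matching the read-only intent of the table.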
## 4.3 Sharing platform
This section presents how the data will be handled, how they will be shared
with users and what tools will be used in the sharing process. It describes
the communication channels and tools between the project and the users.
LOFAR4SW project will use different sharing platforms, depending on the type
of data being shared.
Project website
The main platform already in use is the project website. It is designed to be
used by all users under an Open Access licence, although some website content
containing sensitive information may be accessible only to a limited group of
users under certain conditions. It allows (or will allow as the project
evolves) for the following actions:
* learn about the project, products and updates,
* forward to the project Redmine platform,
* search the content,
* download the documentation (not yet available),
* log in/register (not yet available),
* access to the software facility (not yet available),
* submit a query (not yet available).
Redmine platform
Redmine is dedicated to the project participants at the current stage, since
it contains sensitive information. Its purpose is to make project management
more efficient and effective, as well as to boost communication and
information exchange within the project. It allows for the following actions:
* creation and assignment of the tasks,
* controlling the time schedule for milestones and deliverables,
* time schedule for teleconferences and meetings,
* storing/uploading/downloading important documentation.
ftp/torrent
These two platforms are proposed as alternatives to the project website for
sharing large-volume data or for handling more complex user queries. They
allow for the following actions:
* query submission,
* data upload/download.
other
The list of recommended platforms is presented in Annex B and may change
during the implementation of the project.
## 4.4 Data Workflow Scheme
This section presents the relation between different components: between the
data, software, repository and the user.
In the current phase of the project only an overview of the data flow is
presented, together with to whom the data will be available. A more detailed
scheme will be delivered in the final project Data Management Plan, when the
data provided by the project are well defined.
Figure 4.1 An overview of Data Workflow Scheme.
## 4.5 Publication Policy
Dissemination of the findings/upgrades/progress of the LOFAR4SW project may go
through different channels, including:
* Journal articles
* Conference proceedings articles
* Books or book chapters
* Technical reports
* Published patents
* Published abstracts
* Invited or contributed talks
* Popular articles
* Press reports
For all publications, the policy is that the list of authors includes all
persons who contributed significantly to the result under discussion. It is
not expected that project partners who had only minor participation will be
included as authors. However, it is required that one representative of each
project partner be acknowledged in the general ‘publications’ describing the
main goals and concepts of the LOFAR4SW project. It is also required that all
public written materials include an acknowledgement statement about the
project funding source; see below.
The research leading to these results has received funding from the European
Community’s Horizon 2020 Programme H2020-INFRADEV-2017-1under grant agreement
777442.
In the Horizon 2020 program, open access is an obligation for scientific peer-
reviewed publications.
The two main routes to open access are:
* Self-archiving / 'green' open access – the author, or a representative, archives (deposits) the published article or the final peer-reviewed manuscript in an online repository before, at the same time as, or after publication. Some publishers request that open access be granted only after an embargo period has elapsed.
* Open access publishing / 'gold' open access - an article is immediately published in open access mode. In this model, the payment of publication costs is shifted away from subscribing readers. The most common business model is based on one-off payments by authors. These costs, often referred to as Article Processing Charges (APCs) are usually borne by the researcher's university or research institute or the agency funding the research. In other cases, the costs of open access publishing are covered by subsidies or other funding models. For more information please visit:
_http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-
cuttingissues/open-access-data-management/open-access_en.htm_
# 5\. Archiving and preservation
Archiving and preservation aspects are summarised in the table below.
<table>
<tr>
<th>
ASPECT
</th>
<th>
DESCRIPTION
</th> </tr>
<tr>
<td>
Curation
</td>
<td>
Adding value through the data life cycle to ensure:
* interoperability,
* necessary upgrades,
</td> </tr>
<tr>
<td>
Preservation
</td>
<td>
Ensures:
* the data can be used/reused in the future,
* the data can be easily interpreted in the future,
* data sharing,
</td> </tr>
<tr>
<td>
Archiving
</td>
<td>
Ensures:
* the data are secured and well protected,
* sharing policy and licenses are followed,
* the appropriate references are preserved,
</td> </tr>
<tr>
<td>
Storage
</td>
<td>
All hardware and software needed to ensure:
* restoring the data,
* sharing the data,
* data backup.
</td> </tr> </table>
Table 4.1 Archiving and preservation aspects.
# 6\. Budget
To guarantee the data availability, reusability and accessibility not only
within the project time frame, but also after the project is completed,
LOFAR4SW will be looking for solutions that are either external, open access
and ensure long term accessibility, or platforms developed and applied within
the framework of LOFAR and ILT activities with assured budget.
In Annex B a list of platforms, in line with EC policy, recommended for
sharing data is presented.
0577_GIMS_776335.md
International GNSS Service (IGS), the European Permanent Network (EPN) and the
Slovenian national network SIGNAL could be used in GIMS.
GNSS auxiliary data must be used as well, i.e. satellite orbits, satellite
clocks, antenna phase center offsets/variations, etc. These will be collected
from various online repositories such as those of IGS.
# SAR
For the SAR data analysis, the following data will be used: a digital terrain
model, land cover maps and optical orthoimages of the study area.
Apart from the generation of the ground deformation products, the SAR raw data
will not be used directly by end users or for other purposes. By contrast, the
derived products (quantitative results) can be used by different entities,
e.g. research institutions working in earth sciences, geological surveys
(national and regional), civil protection (national and regional), local
administrations, infrastructure owners, etc.
# MEMS
Generally speaking, time series of raw inertial measurements from low-cost MEMS
sensors of essentially static nature are unlikely to be of interest to groups
other than those working on comparable topics. However, negative statements are
not easily provable. Subsets of these time series around the times of
strong-motion signals may be of higher interest, so the amount of data to be
stored and made available to other groups can be reduced to periods of several
minutes, as opposed to storing full, uninterrupted data sets.
The likelihood of further use of IMU data by other groups is unpredictable.
# Geological Data
Geological maps and other existing geological data will be used to identify
the source of movements (i.e. natural background, anthropogenic influence).
# 2\. FAIR data
Project data will follow the FAIR principles: they have to be findable,
accessible, interoperable and reusable.
## MAKING DATA FINDABLE
To allow project data to be findable, data produced in the GIMS project will
be discoverable with metadata.
The naming conventions will be as follows:
* Raw data:
  * GNSS observations: MMMMDDDS.YYo, where MMMM is a 4-character marker name, DDD is a 3-digit day-of-year (from 001 to 366), S is a session id (default: “0” for daily files, letters from “a” to “x” for hourly files), YY is a 2-digit year (e.g. “18” for 2018);
  * SAR data: the naming convention of the European Space Agency archives will be used;
  * IMU/MEMS data: the same naming convention as for GNSS observations will be used.
* Quantitative results:
  * ground deformation maps: convention to be agreed with project partners;
  * ground deformation time series: NNNWWWWD.snx, where NNN is a 3-character network id, WWWW is a 4-digit GPS week number, D is a 1-digit day-of-week (from 0 to 6, with 0 = Sunday).
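The two filename conventions above can be made concrete with a short helper. The function names below are illustrative assumptions, not part of the GIMS specification; the GPS-week arithmetic uses the standard GPS epoch of 6 January 1980 (a Sunday, hence day-of-week 0):

```python
from datetime import date

GPS_EPOCH = date(1980, 1, 6)  # GPS week 0, day 0 (a Sunday)

def gnss_obs_filename(marker, doy, year, session="0"):
    """Build MMMMDDDS.YYo: 4-char marker, 3-digit day-of-year,
    1-char session id, 2-digit year."""
    if not 1 <= doy <= 366:
        raise ValueError("day-of-year must be in 001..366")
    return f"{marker[:4]:<4}{doy:03d}{session}.{year % 100:02d}o"

def time_series_filename(network, day):
    """Build NNNWWWWD.snx: 3-char network id, 4-digit GPS week,
    1-digit day-of-week (0 = Sunday)."""
    week, dow = divmod((day - GPS_EPOCH).days, 7)
    return f"{network[:3]:<3}{week:04d}{dow}.snx"
```

For example, `gnss_obs_filename("ABCD", 32, 2018)` yields `ABCD0320.18o`, a daily observation file for day-of-year 032 of 2018.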
Search keywords and version numbers will not be used, since they are not
consistent with the kind of data managed during the project.
Metadata will be created for quantitative results only, namely:
* for ground deformation maps:
  * number of SAR images used
  * covered period
  * SAR image dates
  * basic information on the type of processing
  * information on the key processing parameters
  * quality index for deformation velocity values
* for ground deformation time series:
  * GNSS station marker name
  * GNSS processing technique
  * time series start date
  * time series end date
  * maintenance information
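One possible machine-readable shape for a ground deformation map record, mirroring the fields above (key names and values are illustrative assumptions, not a GIMS-defined schema):

```python
import json

# Illustrative metadata record for a ground deformation map;
# keys mirror the fields listed above, values are made up.
deformation_map_metadata = {
    "n_sar_images_used": 42,
    "covered_period": {"start": "2018-01-01", "end": "2018-06-30"},
    "sar_image_dates": ["2018-01-01", "2018-01-13", "2018-01-25"],
    "processing_type": "persistent scatterer interferometry",
    "key_processing_parameters": {"coherence_threshold": 0.7},
    "velocity_quality_index": 0.9,
}

# Serialising to JSON makes the record easy to publish next to the data file.
print(json.dumps(deformation_map_metadata, indent=2))
```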
## MAKING DATA OPENLY ACCESSIBLE
The following data accessibility policy will be followed:
* Raw data:
  * GNSS observations produced by GIMS units: openly available upon request;
  * SAR data: Sentinel-1 raw data are publicly available; no intermediate data will be made public;
  * IMU/MEMS data produced by GIMS units: simulated IMU raw data will be confidential; actual IMU raw data will be openly available upon request.
* Quantitative results:
  * ground deformation maps: openly available after decision on a case-by-case basis;
  * ground deformation time series: openly available after decision on a case-by-case basis.
Openly available data will be made accessible by deposition in an online
repository (e.g. Amazon S3). The appropriate tools and documentation to access
the data will be provided if needed (e.g. the Amazon “AWS CLI” tool, which is
freely available).
The option of using Amazon S3 will be evaluated by the GIMS consortium. GReD
already has an Amazon AWS account, where a GIMS-specific repository could be
created and managed.
Access to openly available datasets will be provided upon request by the
interested party, by filling in a registration form on the GIMS website. The
registration form will include the following information:
* Name/Institution of the requesting party
* Contact information (email address)
* Data type (GNSS observations / actual IMU raw data / ground deformation map / ground deformation time series)
* Time period (from date – to date)
The GIMS consortium does not foresee the need of a data access committee.
Data processing software will not be made available as open source code, since
the GIMS project has a strong market uptake aim.
## MAKING DATA INTEROPERABLE
Data produced in the GIMS project will adhere to widely adopted standard
formats, in particular facilitating the compatibility with available open
applications. In this respect, GIMS data are interoperable.
Standard and open formats will be chosen whenever possible. Otherwise, format
specifications will be defined and provided.
## INCREASE DATA RE-USE
Openly available data will be licensed under a Creative Commons Attribution-
NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license
( https://creativecommons.org/licenses/by-nc-nd/4.0/ ).
The existing data from selected pilot areas for validation of GIMS-provided
data cannot be re-used by third parties if the owner of the data is a private
company.
Data will be made available for re-use after the end of the GIMS project
(November 2020) and they are intended to remain re-usable indefinitely.
Due to the innovative nature of the project, it is not yet possible to define
the final quality assurance processes.
# 3\. Allocation of resources
Person/months needed to adhere to FAIR guidelines are already included in the
planned GIMS effort.
Direct costs associated, for example, to Amazon S3 storage are eligible as
part of the Horizon 2020 grant.
Dr. Lisa Pertusini of GReD will be responsible for data management for the
GIMS consortium.
Long-term preservation of the project data and results will be valuable to
GReD in terms of exposure, thus GReD will consider the feasibility of
allocating the resources needed to do that.
**4\. Data security**
Amazon S3 guarantees state-of-the-art data security.
# 5\. Ethical aspects
The GIMS project does not deal with personal data gathered through
questionnaires for research purposes. No ethical aspects are considered to be
relevant within this project.
# 6\. Other issues
No other issues arise with respect to GIMS data.
We do not make use of other national/funder/sectorial/departmental procedures
for data management.
**1 Data Management Plan**
# 1.1 Summary
Data Management Plans (DMPs) are introduced in the Horizon 2020 Work
Programmes:
_A further new element in Horizon 2020 is the use of Data Management Plans
(DMPs) detailing what data the project will generate, whether and how it will
be exploited or made accessible for verification and re-use, and how it will
be curated and preserved. The use of a Data Management Plan is required for
projects participating in the Open Research Data Pilot. Other projects are
invited to submit a Data Management Plan if relevant for their planned
research._
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the
applicants with regard to all the datasets that will be generated by the
project. The DMP is not a fixed document, but evolves during the lifespan of
the project.
This document describes the Data Management Plan - DMP (D1.2) for the ANTAREX
project, generated according to the Guidelines on Data Management in H2020
(Version 2.0 dated 30/10/2015) and Guidelines on Open Access to Scientific
Publications and Research Data in H2020 (Version 2.0 dated 30/10/2015).
According to the ANTAREX DoW, the ANTAREX DMP is planned to be issued at M06
as D1.2, while updated versions of D1.2 are expected to be released at M18 and
finally at the end of the project (M36).
In this way, the ANTAREX project will become eligible for the **Pilot Action
on Open Access to Research Data** as stated in H2020.
_A detailed description and scope of the Open Research Data Pilot requirements
is provided on the Participants Portal (Guidelines on Open Access to
Scientific Publications and Research Data in Horizon 2020). Projects taking
part in the Pilot on Open Research Data are required to provide a first
version of the DMP as an early deliverable within the first six months of the
project. Projects participating in the pilot as well as projects who submit a
DMP on a voluntary basis because it is relevant to their research should
ensure that this deliverable is mentioned in the proposal. Since DMPs are
expected to mature during the project, more developed versions of the plan can
be included as additional deliverables at later stages. The purpose of the DMP
is to support the data management life cycle for all data that will be
collected, processed or generated by the project._
A DMP is a document outlining how research data will be handled during a
research project and after it is completed. It is very important in all
respects for projects participating in the Horizon 2020 Open Research Data
Pilot, as well as for almost any other research project. Especially where the
project participates in the Pilot, the DMP should always include clear
descriptions of, and the rationale for, the access regimes that are foreseen
for collected data sets.
This principle is further clarified in the following paragraph of the Model
Grant Agreement:
_As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data if the achievement of the action's main
objective, as described in Annex I, would be jeopardised by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access._
# 1.2 Public Data Management Policies
## 1.2.1 Open Access Infrastructure for Research in Europe OpenAIRE
OpenAIRE 1 is an initiative that aims to promote open scholarship and
substantially improve the discoverability and reusability of research
publications and data. The initiative brings together professionals from
research libraries, open scholarship organisations, national e-Infrastructure
and data experts, IT and legal researchers, showcasing the truly collaborative
nature of this pan-European endeavour.
**Project details:**
<table>
<tr>
<th>
Project n°:
</th>
<th>
643410
</th> </tr>
<tr>
<td>
Project type:
</td>
<td>
Research and Innovation
</td> </tr>
<tr>
<td>
Start date:
</td>
<td>
01/01/2015
</td> </tr>
<tr>
<td>
Duration:
</td>
<td>
42 months
</td> </tr>
<tr>
<td>
Total budget:
</td>
<td>
13 132 500 € (€4 million is targeted towards the FP7 post-grant gold OA pilot)
</td> </tr>
<tr>
<td>
Funding from the EC:
</td>
<td>
13 000 000 €
</td> </tr> </table>
A network of people, represented by the National Open Access Desks (NOADs),
organises activities to collect H2020 project outputs and supports research
data management. Backing this vast outreach is the OpenAIRE platform, the
technical infrastructure that is vital for pulling together and
interconnecting the large-scale collections of research outputs across Europe.
The aim of the project is to create workflows and services on top of this
valuable repository content, which will enable an interoperable network of
repositories (via the adoption of common guidelines), and easy upload into an
all-purpose repository (via Zenodo).
OpenAIRE2020 assists in monitoring H2020 research outputs and should be key
infrastructure for reporting H2020’s scientific publications as it will be
loosely coupled to the EC’s IT backend systems as stated in the project
description. The EC’s Research Data Pilot is supported through Europe-wide
outreach for best research data management practices and Zenodo, which will
provide long-tail data storage. Other activities include: collaboration with
national funders to reinforce the infrastructure’s research analytic services;
an APC Gold OA pilot for FP7 publications with collaboration from LIBER; novel
methods of review and scientific publishing with the involvement of
hypotheses.org; a study and a pilot on scientific indicators related to open
access with CWTS’s assistance; legal studies to investigate data privacy
issues relevant to the Open Data Pilot; international alignment with related
networks elsewhere with the involvement of COAR.
### Zenodo
Zenodo 2 is developed by CERN under the EU FP7 project OpenAIREplus (grant
agreement no. 283595). The repository is open to all research outputs from all
fields of science regardless of funding source. Given that Zenodo was launched
within an EU funded project, the knowledge bases were first filled with EU
project codes, but they are keen to extend this to other funders. Zenodo is
free for the long tail of science. In order to offer services to more
resource-hungry research, they have a ceiling on the free slice and offer
paid-for slices above it, according to the business model developed within the
sustainability plan.
Zenodo allows communities to create their own collections and to accept or
reject uploads submitted to them. This can be used, for example, for workshops
or other activities.
### Content
All research outputs from all fields of science are welcome. In the upload
form, users can choose between several types of files: publications (book,
book section, conference paper, journal article, patent, preprint, report,
thesis, technical note, working paper, etc.), posters, presentations,
datasets, images (figures, plots, drawings, diagrams, photos), software,
videos/audio and interactive materials such as lessons. Zenodo assigns all
publicly available uploads a Digital Object Identifier (DOI) to make the
upload easily and uniquely citeable. Further information can be found in the
Terms of Use and Policies.
### Size limits
Zenodo currently accepts files up to 2GB (several 2GB files per upload); there
is no size limit on communities. However, they don't want to turn away larger
use cases. The current infrastructure has been tested with 10GB files, so
possibly they can raise the file size limit per community or for the whole of
Zenodo if needed. Larger files are allowed on demand. Since they target the
long-tail of science, they want public user uploads to always be free.
### Data safety
The data is stored in CERN Data Center. Both data files and metadata are kept
in multiple online replicas and are backed up to tape every night. CERN has
considerable knowledge and experience in building and operating large scale
digital repositories and a commitment to maintain this data centre to collect
and store 100s of PBs of LHC data as it grows over the next 20 years. In the
highly unlikely event that Zenodo will have to close operations, they
guarantee that they will migrate all content to other suitable repositories,
and since all uploads have DOIs, all citations and links to Zenodo resources
(such as data) will not be affected.
### Open and closed uploads
Zenodo is a strong supporter of open data in all its forms (meaning data that
anyone is free to use, reuse, and redistribute) and takes an incentives
approach to encourage depositing under an open license. They therefore only
display Open Access uploads on the front-page. Closed Access upload is still
discoverable through search queries, its DOI, and any community collections
where it is included.
Since there isn't a unique way of licensing openly, nor a consensus on the
practice of adding attribution restrictions, they accept data under a variety
of licenses in order to be inclusive. However, they take an active lead in
signaling the extra benefits of the most open licenses, in terms of visibility
and credit, and offer additional services and upload quotas on such data to
encourage using them. This follows naturally from the publications policy of
the OpenAIRE initiative, which has been supporting Open Access throughout, but
since it aims to gather all European Commission/European Research Area
research results, it allows submission of material that is not yet Open
Access.
### Future funding for Zenodo
Zenodo was launched within the OpenAIREplus project as part of a Europe-wide
research infrastructure. OpenAIREplus delivers a sustainability plan for this
infrastructure with an eye towards future Horizon 2020 projects and is thus
one of our possible funding sources. Another possible source of funding is
CERN itself. CERN hosts and develops several large services, such as CERN
Document Server and INSPIRE-HEP, which run the same software as Zenodo.
Additionally, CERN is familiar with preserving large research datasets because
of managing the Large Hadron Collider data archive of 100 petabytes.
_Information of this section was collected from official OpenAIRE and Zenodo
web sites._
## 1.2.2 Benchmarks
Although the two use cases provided by the partners will guide the research we
will do during ANTAREX, we plan to further test the developed methodologies,
techniques and tool flows using **open source benchmarks** (e.g., Table 1). We
will make available the configurations needed to execute the benchmarks, as
well as the obtained results and the information needed to reproduce the
experiments (e.g., execution times, memory accesses, profiling and
characteristics of the machines where the tests run).
**Table 1. Set of possible benchmarks to be used to validate and test
ANTAREX**
<table>
<tr>
<th>
**Benchmark**
</th>
<th>
**Type**
</th>
<th>
**URL**
</th> </tr>
<tr>
<td>
CORAL
</td>
<td>
HPC
</td>
<td>
asc.llnl.gov/CORAL-benchmarks
</td> </tr>
<tr>
<td>
HPL
</td>
<td>
HPC
</td>
<td>
icl.eecs.utk.edu/hpl
</td> </tr>
<tr>
<td>
HPCG
</td>
<td>
HPC
</td>
<td>
_www.hpcg-benchmark.org_
</td> </tr>
<tr>
<td>
Green Graph 500
</td>
<td>
HPC
</td>
<td>
green.graph500.org
</td> </tr>
<tr>
<td>
ASC
</td>
<td>
HPC
</td>
<td>
www.lanl.gov/projects/codesign/proxy-apps/index.php
</td> </tr>
<tr>
<td>
NAS
</td>
<td>
HPC
</td>
<td>
www.nas.nasa.gov/publications/npb.html
</td> </tr>
<tr>
<td>
HPCC
</td>
<td>
HPC
</td>
<td>
icl.cs.utk.edu/hpcc
</td> </tr>
<tr>
<td>
BSC
</td>
<td>
HPC
</td>
<td>
pm.bsc.es/projects/bar
</td> </tr>
<tr>
<td>
PARSEC
</td>
<td>
HPC
</td>
<td>
parsec.cs.princeton.edu
</td> </tr>
<tr>
<td>
San Diego Vision
</td>
<td>
Vision
</td>
<td>
parallel.ucsd.edu/vision
</td> </tr>
<tr>
<td>
PaRMAT
</td>
<td>
Graph
</td>
<td>
github.com/farkhor/PaRMAT
</td> </tr>
<tr>
<td>
Stanford SNAP
</td>
<td>
Graph
</td>
<td>
snap.stanford.edu/data
</td> </tr> </table>
# 1.3 Private Data Management Policies
This Section describes the facilities and policies used by each partner to
manage private data in the ANTAREX project. For the two industrial partners,
DOMPE’ and SYGIC, the private data will be managed by the two supercomputing
centers, CINECA and IT4I respectively, according to the next sections.
The **Primary Sygic contact,** Radim Cmar, is the physical person responsible
for the ANTAREX project for Sygic and for approving other Sygic user accesses
to the project. He is also the representative for Sygic for the data
management process.
The **Primary Dompe' contact,** Andrea Beccari, is the physical person
responsible for the ANTAREX project for Dompe' and for approving other Dompe'
user accesses to the project. He is also the representative for Dompe' for the
data management process.
## 1.3.1 IT4I Data Management Policies
### Human roles and administration process
process_
**IT4Innovations System Administrators** are full-time internal employees of
IT4Innovations, department of Supercomputing Services. The system
administrators are responsible for safe and efficient operation of the
computer hardware installed at IT4Innovations. Administrators have signed a
confidentiality agreement.
User access to IT4Innovations supercomputing services is based on projects;
membership in a project provides access to the granted computing resources
(accounted in core-hours consumed). There will be one common project for
ANTAREX.
The project will have one **Primary Investigator,** a physical person, who
will be responsible for the project and for approving other users' access to
the project. At the beginning of the project, the Primary Investigator will
appoint one Company Representative for each company involved in the project.
**Company Representatives** will be responsible for approving access to
**Private Storage Areas** belonging to their company. Private Storage Areas
are designated for storing sensitive private data. Granting access permissions
to a Private Storage area must be always authorized by the respective Company
Representative AND Primary Investigator.
**Users** are physical persons participating in the project. Membership of
users to ANTAREX project is authorized by Primary Investigator. Users can log
in to IT4Innovations compute cluster, consume computing time and access shared
project storage areas. Their access to Private Storage Areas is limited by
permissions granted by Company Representatives.
User data in general can be accessed by:
1. IT4Innovations System Administrators
2. The user, who created them (i.e. the UNIX owner)
3. Other users, to whom the user has granted permission _and at the same time_ have access to the particular Private Storage Area (in the case of data stored in the Private Storage Area) granted via the “Process of granting of access permissions” process.
### Process of granting of access permissions
All communication with participating parties is in the manner of signed email
messages, digitally signed by a cryptographic certificate issued by a trusted
Certification Authority. All requests for administrative tasks must be sent to
IT4Innovations HelpDesk. All communication with HelpDesk is archived and can
be later reviewed.
Access permissions for files and folder within the standard storage areas
(HOME, SCRATCH) can be changed directly by the owner of the file/folder by
respective Linux system commands. The user can request HelpDesk for assistance
on how to set the permissions.
Access to Private Storage Areas is governed by the following process:
1. A request for access to Private Storage Area for given user is sent to IT4Innovations HelpDesk via a signed email message by a user participating in the project.
2. HelpDesk verifies the identity of the user by validating the cryptographic signature of the message.
3. HelpDesk sends a digitally signed message with request of approval to the respective Company Representative and to the Primary Investigator.
4. Both the Company Representative and the Primary Investigator must reply with a digitally signed message with explicit approval of the access to the requested Private Storage Area.
5. System administrator at HelpDesk grants the requested access permission to the user.
Company representative or Primary Investigator can also send a request to
HelpDesk to revoke access permission for a user.
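The dual-approval rule in steps 1 to 5 can be modelled in a few lines (a hypothetical sketch, not IT4Innovations software): access is granted only for a signature-verified request that both the Company Representative and the Primary Investigator have explicitly approved.

```python
# Hypothetical model of the Private Storage Area approval workflow.
class PrivateAreaRequest:
    REQUIRED_APPROVERS = {"company_representative", "primary_investigator"}

    def __init__(self, user, area, signature_valid):
        self.user = user                        # requesting project member
        self.area = area                        # target Private Storage Area
        self.signature_valid = signature_valid  # HelpDesk signature check (step 2)
        self.approvals = set()

    def approve(self, role):
        """Record an explicit approval (steps 3-4)."""
        if role not in self.REQUIRED_APPROVERS:
            raise ValueError(f"unknown approver role: {role}")
        self.approvals.add(role)

    @property
    def granted(self):
        """True only when the signed request has BOTH approvals (step 5)."""
        return self.signature_valid and self.approvals == self.REQUIRED_APPROVERS
```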
### Data storage areas
There are four types of relevant storage areas: **HOME,** **SCRATCH** ,
**BACKUP and PRIVATE.**
**HOME, SCRATCH and BACKUP** are standard storage areas provided to all users
of IT4Innovations supercomputing resources (file permissions apply). **HOME**
storage is designed for long-term storage of data and is archived on the tape
library - **BACKUP** . **SCRATCH** is a fast storage for short- or mid-term
data, with no backups. **PRIVATE** storages are dedicated storages for
sensitive data, stored outside the standard storage areas.
### HOME storage
HOME is implemented as a two-tier storage. First tier is disk array and the
second tier is a NL-SAS disk array together with a partition of T950B tape
library. Migration between the two tiers is provided by SGI DMF software. DMF
creates two copies of data migrated to the second tier: one to NL-SAS drives
and the second on LTO6 tapes for backup.
HOME is realized on CXFS file system by SGI. Access to this file system on the
cluster is provided by three CXFS Edge servers and a pair of DMF/CXFS Metadata
servers, which export the file system via NFS protocol.
Each user has a designated home directory on the HOME file system at
/home/username, where username is login name given to the user. By default,
the permissions of the home directory are set to 750, and thus it is not
accessible by other users.
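The default mode 750 mentioned above gives the owner full access, the group read and traverse rights, and other users nothing; this can be verified with Python's stat module (a generic POSIX illustration, not IT4I-specific code):

```python
import stat

# Decode the octal mode 750 used for home directories.
mode = 0o750
assert mode & stat.S_IRWXU == stat.S_IRWXU                 # owner: rwx
assert mode & stat.S_IRWXG == stat.S_IRGRP | stat.S_IXGRP  # group: r-x
assert mode & stat.S_IRWXO == 0                            # others: ---

# Rendered the way `ls -l` would show it for a directory:
print(stat.filemode(mode | stat.S_IFDIR))  # drwxr-x---
```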
### SCRATCH storage
SCRATCH is running on parallel Lustre filesystem with fast access. SCRATCH
filesystem is divided into two areas: WORK and TEMP.
1. WORK filesystem. Users may create subdirectories and files in directories **/scratch/work/user/username** and **/scratch/work/project/projectid.** The /scratch/work/user/username is private to user, much like the home directory. The /scratch/work/project/projectid is accessible to all users involved in project projectid.
2. TEMP area. In this area, files that are not accessed for more than 90 days will be automatically deleted. Users may freely create directories in this area, and are fully responsible for setting correct access permissions of the directories.
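The 90-day rule for the TEMP area can be approximated by a small script that flags files by last access time (an illustrative sketch; the actual IT4Innovations cleaner and its exact criteria are not public):

```python
import time
from pathlib import Path

MAX_AGE_DAYS = 90  # files not accessed for longer than this are candidates

def stale_files(root, now=None):
    """Return files under `root` whose last access time is older than 90 days."""
    now = time.time() if now is None else now
    cutoff = now - MAX_AGE_DAYS * 24 * 3600
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_atime < cutoff]
```

An operator would then delete the returned paths; listing first keeps the policy auditable.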
### PRIVATE storage
In order to provide an additional level of security for sensitive data, we
will set up dedicated storage areas for each company participating in the
project. PRIVATE storage areas will be set up on separate storage and will not
be accessible to regular IT4Innovations users. IT4Innovations can additionally
provide encryption of PRIVATE storage; the particular solution will be
discussed with regard to security and performance considerations.
### BACKUP storage
Contents of the HOME storage are automatically backed up to the tape library.
There is a minimum retention period but no maximum, so we cannot guarantee
when the backups are removed from the tapes.
### PRIVATE BACKUP storage
It is possible to set up dedicated backups of PRIVATE storage. In this case,
unlike with the regular BACKUP, we can guarantee secure removal of data
archived in PRIVATE BACKUP.
### Data access
#### Physical security
All data storage is placed in a single room, which is physically separated
from the rest of the building, has a single entry door and no windows. Entry
to the room is secured by electromechanical locks controlled by access cards
with PINs and non-stop alarm system. The room is connected to CCTV system
monitored at reception with 20 cameras, recording and backup. Reception of the
building has 24/7 human presence and external security guard during night.
Reception has a panic button to call a security agency.
### Remote access and electronic security
All external access to IT4I resources is provided only through encrypted data
channels (SSH, SFTP, SCP and Cisco VPN).
Control of permissions on the operating system level is done via standard
Linux facilities – classical UNIX permissions (read, write, execute granted
for user, group or others) and Extended ACL mechanism (for a more fine-grained
control of permissions to specific users and groups). PRIVATE storage will
have another level of security that will not allow mounting the storage to
non-authorized persons.
### Data lifecycle
1. **Transfer of data to IT4Innovations:** User transfers data from his facility to IT4Innovations only via safely encrypted and authenticated channels (SFTP, SCP). Unencrypted transfer is not possible.
2. **Data within IT4Innovations:** Once the data are at IT4Innovations data storage, access permissions apply.
3. **Transfer of data from IT4Innovations:** User transfers data from IT4Innovations to his facility only via safely encrypted and authenticated channels (SFTP, SCP). Users are strongly advised not to initiate unencrypted data transfer channels (such as HTTP or FTP) to remote machines.
4. **Removal of data:** On SCRATCH file system, the files are immediately removed upon user request. However, the HOME system has a tape backup, and the copies are kept for indefinite time. We advise not to use HOME storage if you do not wish to keep copies of your data on tapes. PRIVATE storage will be securely deleted upon request or when the project ends.
### Data in a computational job lifecycle
When a user wants to perform a computational job on the supercomputer the
following procedure is applied:
1. User submits a request for computational resources to the job scheduler
2. When the resources become available, the nodes are allocated exclusively for the requesting user and no other user can login during the duration of the computational job. The job is running with same permissions to data as the user who submitted it.
3. After the job finishes, all user processes are terminated and all user data is removed from local disks (including ramdisks).
4. After the cleanup is done, the nodes can be allocated to another user, no data from the previous user are retained on the nodes.
All Salomon computational nodes are diskless and cannot retain any data.
There is a special SMP server UV1 accessible via separate job queue, which has
different behavior from regular computational nodes: it has a local hard drive
installed and multiple users may access it simultaneously.
## 1.3.2 CINECA Data Management Policies
### Human roles and administration process
**CINECA HPC System Administrators** are full-time internal employees of
CINECA, department of DSET (System&Technology Dept). The system administrators
are responsible for safe and efficient operation of the HPC computer hardware
installed at CINECA. Administrators have signed a confidentiality agreement.
User access to CINECA supercomputing services is based on personal
Username/password information (for system access) and Projects (for resource
allocation).
Membership in a project provides access to the granted computing resources
(accounted in core-hours consumed in batch mode; interactive use is not
accounted) as well as to a private storage area ($WORK) reserved to the
members of the project.
Projects are hierarchically grouped into “root entities”, even if each single
sub-project is completely autonomous in terms of PI, budget, private storage
area and collaborators.
There will be several sub-projects for ANTAREX, one for each Company involved,
all of them grouped into a single root project “Antrx_”.
Each sub-project will have one **Principal Investigator,** a physical person
representative of the corresponding Company, who will be responsible for the
project and for approving other users' access to the project.
The collaborators of each sub-project will have exclusive access to the WORK
area, a **Private Storage Area** associated to the project itself. The WORK
area is designated for storing sensitive private data. It is a permanent area
maintained for the full duration of the project.
**Users** are physical persons participating in the project. Users must
register to the CINECA Database of Users (UserDB) following the normal CINECA
Policy for users. They will be given a “personal username” and password that
will permit the access to CINECA supercomputing platforms.
General users will become members of the ANTAREX project only when they are
associated to one or more ANTAREX sub-projects by one of the Principal
Investigators. Only at this point shall users be allowed to log into the
compute cluster, consume computing resources and access the project private
storage areas.
Several data areas are available on our systems:
1. Personal storage areas (HOME and SCRATCH): each user owns such areas on the system
2. Project private storage area (WORK): each project owns such area opened to all (and only) project collaborators
3. Data Resources (DRES): private data areas owned by a physical person (DRES owner) who can share it with collaborators or even projects (all collaborators of the project)
User data in general can be accessed by:
1. System Administrators and help-desk consultants
2. The user, who created them (i.e. the UNIX owner)
3. Other users, to whom the user has granted permission for personal data areas
4. Other collaborators of the same project, to whom the user has granted permission, for the WORK or DRES Private Storage Areas.
### Process of granting of access permissions
All communication with participating parties is in the manner of signed email
messages, digitally signed by a cryptographic certificate issued by a trusted
Certification Authority.
All requests for administrative tasks must be sent to Cineca HelpDesk
([email protected]). All communication with HelpDesk is archived in a Trouble
Ticketing system and can be later reviewed.
Access permissions for files and folders within the personal storage areas
(HOME, SCRATCH) can be changed directly by the owner of the file/folder with
the respective Linux system commands. The user can ask HelpDesk for assistance
on how to set the permissions.
Access to Private Storage Areas is exclusively reserved to the collaborators
of the sub-project. In order to access them, the user must be included among
the project collaborators by the PI of the project. The PI is also allowed to
remove collaborators from his or her project.
### Data storage areas
There are several types of relevant storage areas: **HOME,** **SCRATCH** ,
**TAPE** (user oriented), **WORK** and **DRES** (project oriented).
**HOME, SCRATCH and TAPE** are standard storage areas provided to all users of
supercomputing resources (file permissions apply). **HOME** storage is
designed for long-term storage of data and is archived on the tape library (a
disk quota applies); **SCRATCH** is fast storage for short- or mid-term
data, with no backups and periodic data cleaning (no disk quota); **TAPE**
storage is dedicated to personal archiving on the tape library (a disk quota
applies).
**WORK** is a storage area for sensitive data, provided for each project. A
disk quota applies, only project collaborators can access it, and data are
preserved for the full duration of the project.
**DRES** is similar to WORK, but provided only on specific request and can be
associated to multiple projects.
All storage areas in the CINECA HPC environment are managed by GPFS (General
Parallel File System). The Tape library is connected to the data storage by
the LTFS technology.
### HOME storage
Each user has a designated home directory on the HOME file system at
/<host>/userexternal/<username>, where <host> is the system name (GALILEO,
FERMI or PICO) and <username> is the login name given to the user. By default,
the permissions of the home directory are set to 700, and thus it is not
accessible by other users. The user is, however, free to open the permissions,
granting others access to his or her own files.
There is a disk quota of 50 GB on this filesystem that can be extended on
request. The filesystem is backed up daily to magnetic tape. Data here are
preserved as long as the user is defined on the system.
### SCRATCH storage
SCRATCH is given to each user, through the $CINECA_SCRATCH environment
variable. No quota applies to this filesystem, and its occupancy is regularly
checked by help-desk staff to ensure it does not exceed a given threshold. By
default the permissions are set to 755, that is, open in read access to all.
The user is, however, free to modify the permissions to restrict access. In
this area a cleaning procedure is active, deleting all files not accessed for
more than 30 days.
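The 30-day cleaning policy described above can be sketched in Python. This is an illustrative sketch only: the actual CINECA cleanup tooling is not specified here, and `find_stale_files` and the scratch root are hypothetical names.

```python
import os
import tempfile
import time

def find_stale_files(root: str, max_age_days: int = 30):
    """Return paths under `root` whose last access time is older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    stale.append(path)
            except FileNotFoundError:
                pass  # file disappeared during the scan
    return stale

# Demo: create a scratch-like directory with one artificially aged file.
scratch = tempfile.mkdtemp()
old_file = os.path.join(scratch, "old_results.dat")
open(old_file, "w").close()
aged = time.time() - 40 * 86400
os.utime(old_file, (aged, aged))  # backdate access/modification time
```

A real cleaner would then delete the returned paths; the sketch only lists them, which is the safer default.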
### TAPE storage
This area is given to a user on request and is reachable through the $TAPE
environment variable. Data stored here migrate automatically to magnetic
tapes thanks to the LTFS system. A default quota of 1 TB applies, though this
limit can be increased on request. Data here are preserved as long as the user
is defined on the system.
### WORK storage
This area is given to each project active on the system and is reachable via
the $WORK environment variable. If the user participates in more than one
project, he or she is entitled to more than one WORK area and can switch among
them using a specific command (chprj – Change Project). A default quota of 1
TB applies, but the value can be increased on request. Access here is strictly
reserved to the project's collaborators, and it is not possible to open this
area to others. Data here are preserved as long as the project is defined on
the system.
### DRES storage
This area can be created only on request and is stored on the GSS (GPFS
Storage System) disks. It is owned by a user (the DRES owner) and is
characterized by a quota, a validity and a type (FS – normal filesystem; ARCH
– tape storage; REPO – iRods-based repository).
This area is reachable from all HPC systems in CINECA (at least from the login
nodes) and can be linked to one or more projects. In this case all
collaborators of the projects are entitled to access the storage area. Data
here are preserved as long as the DRES itself is defined on the system.
### Data access

### Physical security
All data storage is placed in a single room, one of the two machine rooms of
CINECA. Entry to the room is secured by electromechanical locks controlled by
access cards with PINs and non-stop alarm system. The room is connected to
CCTV system monitored at reception with dozens of cameras, recording and
backup.
Reception of the building has 24/7 human presence, staff during working hours
and external security guards during nights and week-ends.
### Remote access and electronic security
All external access to CINECA resources is provided only through encrypted
data channels (SSH, SFTP, SCP and Cisco VPN).
Control of permissions on the operating system level is done via standard
Linux facilities – classical UNIX permissions (read, write, execute granted
for user, group or others) and Extended ACL mechanism (for a more fine-grained
control of permissions to specific users and groups).
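The classical UNIX modes mentioned throughout this section (700 for a private HOME, 755 for a world-readable SCRATCH) can be illustrated with a short standard-library Python sketch; `set_and_read_mode` and the demo file are hypothetical names used only for illustration.

```python
import os
import stat
import tempfile

def set_and_read_mode(path: str, mode: int) -> int:
    """Apply `mode` to `path` (like `chmod`) and read the effective mode back."""
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)

# Create a scratch file and switch it between the two policies.
demo = os.path.join(tempfile.mkdtemp(), "example.txt")
open(demo, "w").close()

private_mode = set_and_read_mode(demo, 0o700)  # owner-only: rwx --- ---
shared_mode = set_and_read_mode(demo, 0o755)   # world-readable: rwx r-x r-x
```

Extended ACLs go beyond these nine bits and are set with tools such as `setfacl`, which have no direct stdlib equivalent.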
### Data lifecycle
1. **Transfer of data to CINECA**
User transfers data from his facility to CINECA only via safely encrypted and
authenticated channels (SFTP, SCP). Unencrypted transfer is not possible.
2. **Data within CINECA**
Once the data are at CINECA data storage, access permissions apply.
3. **Transfer of data from CINECA**
User transfers data from **CINECA** to local facility only via safely
encrypted and authenticated channels (SFTP, SCP). Unencrypted transfer is not
possible.
4. **Removal of data**
Normally, files are removed immediately upon user request. However, the
HOME system has a tape backup, and the copies are kept for an indefinite time.
Data on HOME and TAPE have a life cycle tied to the life of the Username
(removed one year after the removal of the Username).
Data on SCRATCH are preserved for only one month if not accessed.
Data on WORK follows the life of the project (removed six months after the
conclusion of the project).
Data on DRES follows the life of the DRES itself (removed six months after the
conclusion of the DRES).
### Data in a computational job lifecycle
When a user wants to perform a computational job on the supercomputer the
following procedure is applied:
* User submits a request for computational resources to the job scheduler, specifying the project to be accounted for.
* When the resources become available, the cores are allocated exclusively to the requesting user. Other jobs can share the nodes, if they are not requested in an exclusive way. The job runs with the same permissions to data as the user who submitted it.
* The job should only use the GPFS storage filesystems. Even when local disks are present, they are not guaranteed.
* After the job finishes, all user processes are terminated and the resources can be allocated to another job, with no control over data written to local disks by the previous user.
## 1.3.3 POLIMI Data Management Policies
### Human roles and administration process
The **Project Coordinator** , Prof. Cristina Silvano, is the physical person
responsible for the ANTAREX project and for approving other users access to
the project. The Project Coordinator is also the **Representative** for POLIMI
for the data management process.
**Users** are physical persons participating in the project. Membership of
users to ANTAREX project is authorized by Project Coordinator. Users can log
in to the computer hardware dedicated to the ANTAREX project at POLIMI and
access the shared project storage areas. Access to POLIMI resources is
available to POLIMI users, as well as to users from other parties upon request
from the party Representative, and following authorization by the Project
Coordinator.
**System Administrators** are members of the POLIMI staff involved in the
ANTAREX project, since the computer hardware resources used for the ANTAREX
project at POLIMI are dedicated, and not shared with general POLIMI scientific
or IT personnel.
User data in general can be accessed by:
* The user who created them (i.e., the UNIX owner)
* System Administrators
* Other users who have been granted permission by the owner
### Process of granting of access permissions
Access permission requests for POLIMI resources should be sent via registered
mail signed by the party Representative. Such communication will be archived
for the duration of the project plus 5 years.
Access permissions for files and folders within the standard storage areas
(HOME) can be changed directly by the owner of the file/folder with the
respective Linux system commands.
### Data storage areas

### HOME storage
HOME is implemented via two 2TB SATA Western Digital Black disks in RAID-1
mirror.
Each user has a designated home directory on the HOME file system at
/home/username, where username is the login name given to the user. By
default, the permissions of the home directory are set to 700, and thus it is
not accessible by other users.
### SWAP storage
SWAP storage is implemented via a 120 GB Samsung 150 Evo solid state disk,
mounted as a Linux swap partition.
### Data access

### Physical security
All data storage is placed in a single cabinet, in a room physically separated
from the rest of the building. Entry to the room is secured by
electromechanical locks controlled by access cards. An alarm system is active
when no personnel is present.
### Remote access and electronic security
All external access to POLIMI ANTAREX resources is provided only through
encrypted data channels (SSH, SFTP, SCP).
Control of permissions on the operating system level is done via standard
Linux facilities – classical UNIX permissions (read, write, execute granted
for user, group or others) and Extended ACL mechanism (for a more fine-grained
control of permissions to specific users and groups).
### Data lifecycle
* Transfer of data: User transfers data from his facility to POLIMI only via safely encrypted and authenticated channels (SFTP, SCP). Unencrypted transfer is not possible.
* Data within POLIMI: Once the data are at POLIMI data storage, access permissions apply.
* Transfer of data from POLIMI: User transfers data from POLIMI to the local facility only via safely encrypted and authenticated channels (SFTP, SCP). Unencrypted transfer is not possible.
* Removal of data: On the HOME file system, files are removed immediately upon user request, or 2 years after the project end. The SWAP storage is managed as a standard swap partition, and no long-term storage takes place there.
## 1.3.4 UPORTO Data Management Policies
### Human roles and administration process
The **Project Coordinator** , Prof. Cristina Silvano, is the physical person
responsible for the ANTAREX project and for approving other users access to
the project. The **Representative** for UPORTO for the data management process
is Dr. João Bispo.
**Users** are physical persons. Membership of users to ANTAREX project is
authorized by Project Coordinator. Users can log in to services hosted at
UPORTO and access the shared project storage areas. Access to ANTAREX
resources is available upon request from the party Representative, and
following authorization by the Project Coordinator.
**System Administrators** are part of the ANTAREX consortium.
User data in general can be accessed by:
* The user who created them (i.e., the account owner)
* System Administrators
* Other users who have been granted permission by the owner
### Services provided by UPORTO

### ANTAREX OwnCloud
OwnCloud is a self-hosted Dropbox-like solution for private file storage. It
is used in the project as a repository to store files related to the project
(e.g., reports, publications, dissemination materials). We use a free version
of OwnCloud as the repository server. It is an open platform which can be
accessed through a web interface or a sync client (available for desktop and
mobile platforms). Members of the ANTAREX Consortium can access the repository
files using accounts previously created by a system administrator. It is
possible to create public links to individual files of the repository, which
can later be used to share files publicly on the website.
### ANTAREX Wiki
We set up a self-hosted wiki to facilitate the sharing of knowledge between
the members, as well as to aid in a multitude of collaborative tasks. This
wiki is based on the Detritus release of DokuWiki.
The wiki is closed to the general public, meaning that even reading the wiki
is not possible for someone who is not logged in. To keep the wiki private,
new user accounts are created on demand by the system administrators. The wiki
provides a way to discuss subjects and to work collaboratively on selected
topics.
### ANTAREX Website
The ANTAREX website is hosted externally by the Portuguese company AMEN,
which is part of the European company DADA S.p.A. The hosting service for the
website also supports the mailing lists and official project emails. The
hosting runs on Linux, using Apache as the HTTP server. The code of the
website is developed in a private Git repository hosted on BitBucket, which is
responsible for maintaining a backup of the data, ensuring the integrity of
the website.
Having the website hosted externally is more secure, since we avoid possible
attack vectors related to website hosting. Since all data published on the
website are public, there is no problem in hosting it externally. All public
documents are accessed through links served by the self-hosted OwnCloud
repository and are not stored on the website.
### Data access

### Physical security
All data storage is hosted on virtual machines provided by UPORTO. The
physical machines are placed in dedicated rooms and entry to the room is
secured by electromechanical locks controlled by access cards. Users do not
have access to these rooms.
### Remote access and electronic security
All external access to ANTAREX resources hosted by UPORTO is provided through
secure data channels (e.g., HTTPS).
### Process of granting of access permissions
Access permissions for files and folders within the repository and the wiki
are controlled by system administrators.
### Data lifecycle
* **Transfer of data:** User transfers data from his facility to UPORTO via safely encrypted and authenticated channels (HTTPS).
* **Data within UPORTO:** Once the data is at UPORTO repository/wiki, access permissions apply.
* **Transfer of data from UPORTO:** User transfers data from UPORTO to the local facility via safely encrypted and authenticated channels (HTTPS).
* **Removal of data:** The virtual machine is included in a system of daily backups to hard disk and bi-weekly backups to tape, to ensure the integrity of data. The backups will be maintained for at least 3 years after the end of the project. The website host and domain will be available for two years after the end of the project; afterwards, the website will be moved to a machine at UPORTO.
## 1.3.5 ETHZ Data Management Policies
### Human roles and administration process
The **Project Coordinator** , Prof. Cristina Silvano, is the physical person
responsible for the ANTAREX project and for approving other users access to
the project. The **Representative** for ETHZ for the data management process
is Prof. Luca Benini.
Users are physical persons participating in the project. Membership of users
to ANTAREX project is authorized by Project Coordinator. Users can log in to
the computer hardware dedicated to the ANTAREX project at ETHZ and access the
shared project storage areas. Access to ETHZ resources is available to ETHZ
users, as well as to users from other parties upon request from the party
Representative, and following authorization by the Project Coordinator.
**System Administrators** are members of the ETH Zurich and Integrated System
Laboratory staff.
User data in general can be accessed by:
* The user who created them (i.e., the account owner)
* System Administrators
* Other users who have been granted permission by the owner
### Data storage areas

### HOME storage
HOME is stored remotely in a shared data center of ETH Zurich, on a physically
separate ZFS machine, and is backed up regularly to RAID-6 and tape.
Each user has a designated home directory on the HOME file system at
/home/username, where username is the login name given to the user. By
default, the permissions of the home directory are set to 755; home
directories are thus visible to other users at the institute level.
### SCRATCH storage
Scratch storage is local to users' workstations and shared servers, using SSD
disks.
### Data access

### Physical security
All data storage as well as servers and user’s workstations are part of a
virtual private network (VPN) at institute level.
### Remote access and electronic security
All external access to ETHZ ANTAREX resources is provided only through
encrypted data channels (SSH, SFTP, SCP).
Control of permissions on the operating system level is done via standard
Linux facilities – classical UNIX permissions (read, write, execute granted
for user, group or others) and Extended ACL mechanism (for a more fine-grained
control of permissions to specific users and groups).
### Data lifecycle
1. **Transfer of data:** User transfers data from his facility to ETHZ only via safely encrypted and authenticated channels (SFTP, SCP). Unencrypted transfer is not possible.
2. **Data within ETHZ:** Once the data is at ETHZ data storage, access permissions apply.
3. **Transfer of data from ETHZ:** User transfers data from ETHZ to the local facility only via safely encrypted and authenticated channels (SFTP, SCP). Unencrypted transfer is not possible.
4. **Removal of data:** On the HOME file system, files are removed immediately upon user request. At project end, data are archived and preserved. The SCRATCH storage is managed as a standard scratch partition, and no long-term storage takes place there.
## 1.3.6 INRIA Data Management Policies
### Human roles and administration process
The **Project Coordinator** , Prof. Cristina Silvano, is the physical person
responsible for the ANTAREX project and for approving other users access to
the project. The **Representative** for Inria for the data management process
is Dr. Erven Rohou.
**Users** are physical persons. Membership of users to ANTAREX project is
authorized by Project Coordinator. Users can log in to the computer hardware
at Inria and access the shared project storage areas. Access to ANTAREX
resources is available to Inria users, as well as to users from other parties
upon request from the party Representative, and following authorization by the
Project Coordinator.
**System Administrators** are members of the Inria staff.
User data in general can be accessed by:
* The user who created them (i.e., the UNIX owner)
* System Administrators
* Other users who have been granted permission by the owner
### Data storage areas

### Inria Forge
Inria Forge is a service offered to facilitate the scientific collaborations
of people working at Inria. It offers easy access to revision control systems,
mailing lists, bug tracking, message boards/forums, task management, site
hosting, permanent file archival, full backups, and total web-based
administration. The objective is to provide everyone working at the institute
with an infrastructure for their scientific collaborations with internal
and/or external partners.
### HOME storage
HOME is implemented as a shared Network File System (NFS), mounted from user
machines. Users do not have admin privilege on the machines where a NFS volume
is mounted.
Each user has a designated home directory on the HOME file system at
/udd/username, where username is the login name given to the user. By default,
the permissions of the home directory can be set to 700, making it
inaccessible to other users.
### Data access

### Physical security
All data storage is placed in dedicated rooms physically separated from the
rest of the building. Entry to the rooms is secured by electromechanical locks
controlled by access cards. Users do not have access to these rooms; only
System Administrators do. Inria Rennes has 24/7 on-site security.

### Remote access and electronic security
All external access to Inria ANTAREX resources is provided only through
encrypted data channels (SSH, SFTP, SCP).
### Process of granting of access permissions
Access permission requests for Inria resources should be sent via registered
mail signed by the party Representative. Such communication will be archived
for the duration of the project plus 5 years.
Access permissions for files and folders within the standard storage areas
(HOME) can be changed directly by the owner of the file/folder with the
respective Linux system commands.
Control of permissions on the operating system level is done via standard
Linux facilities – classical UNIX permissions (read, write, execute granted
for user, group or others). Access to the data in the Inria Forge is based on
Extended ACL mechanism (for a more fine-grained control of permissions to
specific users and groups).
### Data lifecycle
* **Transfer of data:** User transfers data from his facility to Inria only via safely encrypted and authenticated channels (SFTP, SCP). Unencrypted transfer is not possible.
* **Data within Inria:** Once the data are at Inria data storage, access permissions apply.
* **Transfer of data from Inria:** User transfers data from Inria to the local facility only via safely encrypted and authenticated channels (SFTP, SCP). Unencrypted transfer is not possible.
* **Removal of data:** On HOME file system, the files are immediately removed upon user request, or after 2 years from the project end.
# 1.4 Data Management Plan Template
The DMP should address the points below on a dataset by dataset basis and
should reflect the current status of reflection within the Consortium about
the data that will be produced.
<table>
<tr>
<th>
**No.**
</th>
<th>
**Item**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Data set reference and name**
</td>
<td>
Identifier for the data set to be produced (DOI).
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**Data set description**
</td>
<td>
Description of the data that will be generated or collected, its origin (in
case it is collected), nature and scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration and reuse.
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Standards and metadata**
</td>
<td>
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created.
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Data sharing**
</td>
<td>
Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling re‐use, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.).
In case the dataset cannot be shared, the reasons for this should be mentioned
(e.g. ethical, rules of personal data, intellectual property, commercial,
privacy‐related, securityrelated).
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
Description of the procedures that will be put in place for long‐term
preservation of the data. Indication of how long the data should be preserved,
what is its approximated end volume, what the associated costs are and how
these are planned to be covered.
</td> </tr> </table>
## 1.4.1 Partner: IT4I Data Table 1
<table>
<tr>
<th>
**No.**
</th>
<th>
**Item**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Data set reference and name**
</td>
<td>
**Graph500 benchmark results**
DOI from service defined in Section Public Data Management
Policies.
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**Data set description**
</td>
<td>
Graph 500 is an HPC benchmark which emphasizes the speed of memory access
instead of the speed of arithmetical operations, unlike other widely used
benchmarks such as Top 500. The main idea behind Graph 500 is to measure the
number of traversed edges per second (TEPS) using the Breadth First Search
(BFS) algorithm on an artificially generated graph. During testing, TEPS and
time are collected over 64 runs. The resulting data set then contains
information about the problem size and aggregated performance results from all
64 runs.
The result will be used for assessing the effectiveness and usability of the
ANTAREX technologies developed within WP2 and WP3.
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Standards and metadata**
</td>
<td>
The detailed description of Graph 500 output standard can be found at
http://www.graph500.org/specifications
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Data sharing**
</td>
<td>
Data sharing will follow rules of selected service defined in Section Public
Data Management Policies.
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
Archiving and preservation will follow rules of selected service defined in
Section Public Data Management Policies.
</td> </tr> </table>
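The TEPS metric described in the table above can be illustrated with a toy Python sketch. This is not the official Graph500 reference implementation, just a minimal BFS that counts traversed edges and divides by wall-clock time; `bfs_teps` is a hypothetical helper name.

```python
from collections import deque
import time

def bfs_teps(adj, source):
    """Run BFS from `source` on adjacency-list graph `adj`.
    Returns (edges_traversed, seconds, TEPS)."""
    visited = {source}
    queue = deque([source])
    edges = 0
    start = time.perf_counter()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            edges += 1  # every examined edge counts as traversed
            if v not in visited:
                visited.add(v)
                queue.append(v)
    elapsed = time.perf_counter() - start
    return edges, elapsed, edges / elapsed if elapsed > 0 else float("inf")
```

The real benchmark additionally generates a Kronecker graph of a prescribed scale and aggregates statistics (harmonic mean, quartiles) over the 64 BFS runs.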
## 1.4.2 Partner: IT4I Data Table 2
<table>
<tr>
<th>
**No.**
</th>
<th>
**Item**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Data set reference and name**
</td>
<td>
**Benchmark dataset for betweenness centrality**
DOI from service defined in Section Public Data Management Policies.
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**Data set description**
</td>
<td>
Betweenness centrality is a measure on graph vertices indicating how well a
particular graph node is connected to other nodes. It is useful for
determining important nodes of a network. The importance of a node depends not
only on its degree but also on the weight of its adjacent edges. The edges can
be weighted by various values such as distance, average speed, type of the
road, etc. Removal of important nodes would result in severe degradation of
flow throughput in the network. We will use the computed betweenness for
traffic routing optimization on road networks. The result will be used for
assessing the effectiveness and usability of the ANTAREX technologies
developed within WP2 and WP3.
Input data of the benchmark will be collected from OpenStreetMap data and
preprocessed to suit the needs of the benchmark. Several graphs will be
obtained, each having different properties (graph size, node density, etc.).
Output of the benchmark will consist of values of given performance metrics
and will be stored for evaluation. The gathered performance metrics can serve
as baseline for future improvements and optimizations of the developed
toolset.
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Standards and metadata**
</td>
<td>
The OpenStreetMap data are obtained from volunteers contributing to the
project in the form of results of their own geographical surveys. The data are
managed by the non-profit OpenStreetMap Foundation, based in the UK. The
OpenStreetMap data are available under the ODC Open Database License
(http://opendatacommons.org/licenses/odbl/1.0/).
We will use publicly available export of the map data in the form of a binary
file encoded in the Protocol Buffers binary format
(http://planet.openstreetmap.org).
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Data sharing**
</td>
<td>
Data sharing will follow rules of selected service defined in Section Public
Data Management Policies.
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
Archiving and preservation will follow rules of selected service defined in
Section Public Data Management Policies.
</td> </tr> </table>
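The betweenness measure described above is commonly computed with Brandes' algorithm. The following is a minimal unweighted sketch for illustration; the benchmark itself works on weighted road graphs, and `betweenness_centrality` is a hypothetical helper name.

```python
from collections import deque

def betweenness_centrality(adj):
    """Brandes' algorithm for unweighted betweenness centrality.
    `adj` maps each node to a list of neighbour nodes."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, recording shortest-path counts (sigma) and predecessors.
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Accumulate pair dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

On a three-node path graph the middle node scores 2.0 (it lies on the shortest path for both ordered endpoint pairs), matching the intuition that its removal disconnects the network.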
## 1.4.3 Partner: IT4I Data Table 3
<table>
<tr>
<th>
**No.**
</th>
<th>
**Item**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Data set reference and name**
</td>
<td>
**Benchmark of Time dependent routing algorithm**
DOI from service defined in Section Public Data Management Policies.
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**Data set description**
</td>
<td>
Time dependent routing is an extension of the standard vehicle routing task.
In an ordinary routing problem, the routes are determined from a static,
unweighted graph representation of a road network based on given points of
origin and destination. The resulting route between two given points is always
the same. The route determined by the time dependent algorithm for the same
origin and destination points can vary in time. For example, on some days of
the week it is more beneficial to take a detour from the standard route to
avoid the morning commute and minimize the risk of possible delays.
The time dependent algorithm works with routes extracted from a graph
representation of the road network whose edges hold additional metadata
about the road network throughput and state for a given timeframe.
The input dataset for the benchmark will contain a pre-defined set of routes
computed for a given set of simulated pairs of origin and destination points
and generated speed profiles. The original algorithm will be optimized by
ANTAREX technologies developed within WP2 and WP3 and executed multiple times
under different conditions. Various metrics of effectiveness and profiling
data will be collected during each run of the algorithm and stored for future
analysis.
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Standards and metadata**
</td>
<td>
Time dependent routing algorithm is described in the following publications:
Tomis R., Rapant L., Martinovič, J., Slaninová K. & Vondrák I.,
Probabilistic Time‐Dependent Travel Time Computation using
Monte Carlo Simulation, accepted to HPCSE 2015.
Tomis, R., Martinovič, J., Slaninová, K., Rapant, L., & Vondrák, I.,
Time‐Dependent Route Planning for the Highways in the Czech Republic. In
Lecture Notes in Computer Science, 9339, pp. 145–153, 2015.
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Data sharing**
</td>
<td>
Data sharing will follow rules of selected service defined in Section Public
Data Management Policies.
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
Archiving and preservation will follow rules of selected service defined in
Section Public Data Management Policies.
</td> </tr> </table>
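The core idea of time dependent routing, that an edge's traversal cost depends on when the edge is entered, can be sketched as a time-dependent variant of Dijkstra's algorithm. This is an illustrative sketch under the usual FIFO assumption (departing later never means arriving earlier), not the algorithm of the publications cited above; `time_dependent_dijkstra` is a hypothetical name.

```python
import heapq

def time_dependent_dijkstra(adj, source, target, depart):
    """Earliest-arrival search on a graph whose edge costs depend on time.
    `adj` maps node -> list of (neighbour, travel_time_fn), where
    travel_time_fn(t) is the traversal time when entering the edge at time t."""
    best = {source: depart}
    heap = [(depart, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, travel_time in adj[u]:
            arrival = t + travel_time(t)
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return None  # target unreachable
```

With a speed profile that slows one road during a morning rush, the best route genuinely changes with the departure time, which is exactly the behaviour the benchmark measures.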
## 1.4.4 Partner: IT4I. Data Table 4
<table>
<tr>
<th>
**No.**
</th>
<th>
**Item**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Data set reference and name**
</td>
<td>
**Data used and created within UC2**
DOI from service defined in Section Public Data Management Policies.
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**Data set description**
</td>
<td>
Due to the private nature of UC2, the data set description will be included
into private deliverables of UC2.
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Standards and metadata**
</td>
<td>
Due to the private nature of UC2, the description of standards and metadata
will be included in the private deliverables of UC2.
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Data sharing**
</td>
<td>
Selected data will be privately available to selected ANTAREX participants.
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
All the data collections created, maintained and processed within UC2 by IT4I
and Sygic will be preserved, stored, and maintained following the rules
defined in Section IT4I Data Management
Policies.
</td> </tr> </table>
## 1.4.6 Partner: CINECA. Data Table 5
<table>
<tr>
<th>
**No.**
</th>
<th>
**Item**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Data set reference and name**
</td>
<td>
**GALILEO‐HPL:** Galileo HPL benchmark for Top500. DOI from OpenAIRE/Zenodo
service.
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**Data set description**
</td>
<td>
This dataset collects the data recorded during the evaluation of the Galileo
machine at CINECA for its classification in the Top500 list.
The dataset also includes a report summarizing the results of the benchmarks
(STREAM for single‐node memory assessment and HPL for HPC parallel
performance) carried out in May 2015.
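For illustration, the triad kernel at the heart of the STREAM benchmark (a[i] = b[i] + scalar*c[i]) can be sketched in Python; the real benchmark is a compiled C/Fortran code, so the bandwidth figures from this toy version are not comparable to the dataset's results:

```python
import time

def stream_triad(n=100_000, scalar=3.0, repeats=3):
    """Toy version of the STREAM triad kernel a[i] = b[i] + scalar * c[i].
    Returns the result vector and the best achieved bandwidth in MB/s,
    counting the three 8-byte-per-element arrays touched per iteration."""
    b = [1.0] * n
    c = [2.0] * n
    best_mbps = 0.0
    a = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        a = [bi + scalar * ci for bi, ci in zip(b, c)]
        dt = time.perf_counter() - t0
        best_mbps = max(best_mbps, 3 * 8 * n / dt / 1e6)
    return a, best_mbps

a, mbps = stream_triad()
```

Taking the best of several repeats, as STREAM itself does, filters out one-off timing noise.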
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Standards and metadata**
</td>
<td>
The dataset consists of ASCII files (Unix format) assembled as a gzipped tar
archive.
A full metadata description is provided within the standard dataset creation
in the OpenAIRE/Zenodo service.
**Keywords:** Galileo; CINECA; TOP500; HPL; STREAM; HPC; benchmarks.
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Data sharing**
</td>
<td>
The GALILEO‐HPL dataset is and will remain PUBLIC.
Access is guaranteed by the OpenAIRE/Zenodo service and is open to the public
without any restriction.
The GALILEO‐HPL dataset is provided to end users through the OpenAIRE/Zenodo
web interface, and no additional software is necessary for its dissemination
and sharing.
The GALILEO‐HPL dataset is indexed within OpenAIRE and exposed to external
end users via standard OpenAIRE retrieval tools such as those available within
the Zenodo software.
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
Storage persistence in the OpenAIRE/Zenodo service is guaranteed for an
unlimited time.
</td> </tr> </table>
## 1.4.7 Partner: CINECA. Data Table 6
<table>
<tr>
<th>
**No.**
</th>
<th>
**Item**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Data set reference and name**
</td>
<td>
**GALILEO‐HPCG:** Galileo HPCG benchmark for assessing Galileo
machine performance on hybrid configuration. DOI from OpenAIRE/Zenodo service.
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**Data set description**
</td>
<td>
This dataset collects the data recorded during the evaluation of the Galileo
machine at CINECA when using Xeon Phi and K80 GPU cards as numerical
coprocessors.
The dataset also includes a report summarizing the results of the benchmarks
carried out in May 2015.
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Standards and metadata**
</td>
<td>
The dataset consists of ASCII files (Unix format) assembled as a gzipped tar
archive.
A full metadata description is provided within the standard dataset creation
in the OpenAIRE/Zenodo service.
**Keywords:** Galileo; CINECA; HPCG; XeonPhi; K80GPU; HPC; benchmarks.
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Data sharing**
</td>
<td>
The GALILEO‐HPCG dataset is and will remain PUBLIC.
Access is guaranteed by the OpenAIRE/Zenodo service and is open to the public
without any restriction.
The GALILEO‐HPCG dataset is provided to end users through the OpenAIRE/Zenodo
web interface, and no additional software is necessary for its dissemination
and sharing.
The GALILEO‐HPCG dataset is indexed within OpenAIRE and exposed to external
end users via standard OpenAIRE retrieval tools such as those available within
the Zenodo software.
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
Storage persistence in the OpenAIRE/Zenodo service is guaranteed for an
unlimited time.
</td> </tr> </table>
## 1.4.8 Partner: CINECA. Data Table 7
<table>
<tr>
<th>
**No.**
</th>
<th>
**Item**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Data set reference and name**
</td>
<td>
**LiGen‐DOCK:** Dataset of protein receptors and ligands inputs and
corresponding docking results. PID from EUDAT B2SHARE service
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**Data set description**
</td>
<td>
This dataset collects the data recorded during the evaluation of the UC1
LiGen‐DOCK mini‐app on the Galileo machine at CINECA.
The dataset is made up of a comprehensive input set of protein receptors taken
from the Protein Data Bank (PDB) and the largest set of ligand chemical
structures from commercial catalogs, e.g., Sigma‐Aldrich and/or Enamine (1,2).
The LiGen‐DOCK dataset will also include the output of the performance
evaluation of the UC1 mini‐app performing the ligand‐receptor docking workflow
in various computational scenarios (2).
The dataset also includes a report summarizing the results of the LiGen‐DOCK
benchmarks.
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Standards and metadata**
</td>
<td>
The dataset consists of ASCII files (Unix format) assembled as a gzipped tar
archive.
A full metadata description is provided within the standard dataset creation
in EUDAT B2SHARE.
**Keywords:** Galileo; CINECA; LiGen; Docking; PDB; Sigma‐Aldrich; Enamine;
benchmarks.
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Data sharing**
</td>
<td>
The LiGen‐DOCK dataset is and will remain PUBLIC.
Access is guaranteed by the OpenAIRE/Zenodo service and is open to the public
without any restriction.
The LiGen‐DOCK dataset is provided to end users through the OpenAIRE/Zenodo
web interface, and no additional software is necessary for its dissemination
and sharing.
The LiGen‐DOCK dataset is indexed within OpenAIRE and exposed to external end
users via standard OpenAIRE retrieval tools such as those available within the
Zenodo software.
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
Storage persistence in the OpenAIRE/Zenodo service is guaranteed for an
unlimited time.
</td> </tr> </table>
(1) It is expected to select a subset of the PDB made up of tens of protein
receptors and 1 to 10 million ligand chemical structures.
(2) The final size of the dataset will be defined in the second revision of
this deliverable at M18.
## 1.4.9 Partner: UPORTO. Data Table 8
<table>
<tr>
<th>
**No.**
</th>
<th>
**Item**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Data set reference and name**
</td>
<td>
**ANTAREX‐DSL:** DSL transformations. DOI from OpenAIRE/Zenodo service.
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**Data set description**
</td>
<td>
Collection of DSL codes used to adapt the set of applications that can be made
publicly available, together with the corresponding application code and the
transformed code after applying the DSL codes. This dataset represents the
output of the first part of the ANTAREX proposed tool‐flow, and shall cover
the two use cases of the proposal and tested benchmarks. This dataset can be
useful as an example of how we are specifying the runtime adaptation and
non‐functional requirements in the DSL, and the resulting code. The DSL
compiler shall be made available (possibly as a web interface) and the dataset
will allow any person to validate the results from the DSL transformations and
to evaluate and try the DSL compiler.
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Standards and metadata**
</td>
<td>
The dataset consists of ASCII files assembled as a zipped archive. A full
metadata description is provided within the standard dataset creation in the
OpenAIRE/Zenodo service. **Keywords:** LARA; DSL; benchmarks.
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**Data sharing**
</td>
<td>
The ANTAREX‐DSL dataset is and will remain PUBLIC.
Access is guaranteed by the OpenAIRE/Zenodo service and is open to the public
without any restriction.
The ANTAREX‐DSL dataset is provided to end users through the OpenAIRE/Zenodo
web interface, and no additional software is necessary for its dissemination
and sharing.
The ANTAREX‐DSL dataset is indexed within OpenAIRE and exposed to external
end users via standard OpenAIRE retrieval tools such as those available within
the Zenodo software.
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
Storage persistence in the OpenAIRE/Zenodo service is guaranteed for an
unlimited time.
</td> </tr> </table>
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0587_TOMOCON_764902.md
|
**_TOMOCON Data Management Plan_ **
# Data summary
**Provide a summary of the data addressing the following issues:**
* **State the purpose of the data collection/generation**
* **Explain the relation to the objectives of the project**
* **Specify the types and formats of data generated/collected**
* **Specify if existing data is being re-used (if any)**
* **Specify the origin of the data**
* **State the expected size of the data (if known)**
* **Outline the data utility: To whom will it be useful**
Data within TOMOCON is generated via experimental studies and numerical multi-
physics simulation of exemplary industrial processes. The data serves the
following purposes within the TOMOCON project:
1. Data is being generated to enhance the understanding of fundamental physical and chemical sub-processes in an industrial process scenario. This concerns transport of momentum, mass and energy in diverse fluid-flow dominated processes, propagation of electromagnetic fields or sound fields as well as kinetics of processes like crystallization.
2. Data is being used to simulate and assess the performance and interplay of tomographic sensors and control systems in given industrial process model scenarios.
As model processes for the demonstration of the new technologies, the TOMOCON
project considers the following exemplary industrial processes: continuous
steel casting, batch crystallization, inline fluid separation and microwave
drying of porous products.
Data being generated typically represents adequately sampled four-dimensional
physical parameter fields and is complemented by data for geometry
specifications (e.g. CAD files) and boundary as well as initial conditions. It
originates from the following sources:
1. Numerical simulation data originates from the computational calculation of physical field quantities with specific commercial or proprietary simulation codes.
2. Experimental data is digitized measurement data from specific sensors and instrumentation, e.g. for temperature, pressure, flow rate or filling-level, and from particle image velocimetry, high-speed video imaging, infrared thermography or diverse tomographic imaging techniques.
It is expected that data being produced within the TOMOCON project is
essentially new data as it is based on novel methods, technologies and sub-
models.
By its nature, the data generated within TOMOCON will be of large size
(typically tens of megabytes to a few gigabytes per data set) and of diverse
and often proprietary digital formats and encodings.
TOMOCON partners will share data in order to commonly develop new models,
sensors and process control systems. Moreover, some of the data may be of
interest for other scientists’ groups to use them for code validation and own
sensor or model developments.
The latter is the subject of this data management plan and is further referred
to as TOMOCON Open Access Data. TOMOCON Open Access Data shall undergo
dedicated quality assurance before publication.
# FAIR data
**2.1. Making data findable, including provisions for metadata:**
* **Outline the discoverability of data (metadata provision)**
* **Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
* **Outline naming conventions used**
* **Outline the approach towards search keyword**
* **Outline the approach for clear versioning**
* **Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**
TOMOCON Open Access Data will be made accessible by a unique digital object
identifier.
Significant metadata will be provided to ensure discoverability and
identifiability. Metadata provision generally follows the DataCite Metadata
Schema. The provision of specific metadata beyond that schema is the
responsibility of the individual partners. However, the following types of
metadata are suggested:
_Numerical data:_ Software and version used for data generation; input data
including boundary conditions and numerical grid; methods of simulation time
step control; used submodels e.g. for turbulence, kinetics, heat transfer
etc.; digital output data format.
_Experimental data:_ Description of the experimental setup; reference to
geometry data, boundary conditions, specifications of relevant materials and
fluids; instrumentation and their specifications (sampling rate, operational
limits, accuracy/uncertainties).
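As a purely hypothetical sketch (not an actual TOMOCON record), a minimal metadata record whose field names loosely follow the mandatory properties of the DataCite Metadata Schema, extended with the kind of experimental metadata suggested above, could look like:

```python
# Illustrative only: all values below (DOI, creator, title, setup) are
# hypothetical placeholders, not a real TOMOCON record.
record = {
    "identifier": {"identifierType": "DOI", "identifier": "10.9999/example-doi"},
    "creators": [{"creatorName": "Doe, Jane"}],
    "titles": [{"title": "Example: high-speed video of inline fluid separation"}],
    "publisher": "Example institutional repository",
    "publicationYear": "2018",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
    # Suggested project-specific metadata for experimental data (see text):
    "descriptions": [{
        "descriptionType": "Methods",
        "description": ("Experimental setup: separation test loop; "
                        "instrumentation: pressure sensors, sampling rate "
                        "1 kHz, uncertainty +/-2 %."),
    }],
}

# The mandatory DataCite properties every record should carry:
mandatory = {"identifier", "creators", "titles", "publisher",
             "publicationYear", "resourceType"}
missing = mandatory - record.keys()
```

Keeping the project-specific details inside a standard `descriptions` field preserves DataCite compatibility while still carrying the setup and uncertainty information the plan asks for.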
Naming of data has to be done in a way that is most clearly indicating the
type of data and the general background of its generation (e.g. experimental
vs. numerical, type of experiment, related industrial application, purpose of
the study).
**2.2. Making data openly accessible:**
* **Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**
* **Specify how the data will be made available**
* **Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
* **Specify where the data and associated metadata, documentation and code are deposited**
* **Specify how access will be provided in case there are any restrictions**
Making open access data accessible is the responsibility of the individual
partners. Many partners have their own institutional data repositories with
specific procedures and access rules. Partners without their own repository
may either use public repositories, such as the Zenodo repository at CERN, or
repositories of other TOMOCON partners. For the latter, HZDR as the
Coordinator will offer its RODARE repository. Like other platforms, HZDR's
RODARE is connected to research data harvesters such as the EU's OpenAIRE to
ensure the most efficient retrievability of data.
**2.3. Making data interoperable:**
* **Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
* **Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**
Alignment with standardized ontologies, such as the DCAT Data Catalog
Vocabulary, is strongly encouraged.
**2.4. Increase data re-use (through clarifying licenses):**
* **Specify how the data will be licenced to permit the widest re-use possible**
* **Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
* **Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why**
* **Describe data quality assurance processes**
* **Specify the length of time for which the data will remain re-usable**
TOMOCON Open Access Data is public and is recommended to be licensed under the
Creative Commons Attribution 4.0 International licence (CC BY 4.0). For public
software, the GPLv3 licence is recommended. Embargo periods of up to 12 months
may be imposed to restrict the use of data to within the TOMOCON consortium.
# Allocation of resources
**Explain the allocation of resources, addressing the following issues:**
* **Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**
* **Clearly identify responsibilities for data management in your project**
* **Describe costs and potential value of long-term preservation**
TOMOCON Open Access Data will be stored in repositories bound to the FAIR
principles. Costs incurred by the partners in the form of labour expenditure
for preparing the data publication are covered by the EU funding. It is
expected that costs in the form of labour expenditure after the funding period
(e.g. for any kind of updating) will be minimal and can be borne by the
respective partners via their institutional funding. The responsibility for
data management lies with the individual principal investigators of the
research groups where the data has been produced.
# Data security
**Address data recovery as well as secure storage and transfer of sensitive
data**
It is expected that the chosen institutional and public data repositories
provide an adequate frame for secure data storage and recovery. No personal
data will be stored with TOMOCON Open Access Data sets.
# Ethical aspects
**To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former**
There are no ethics issues with TOMOCON data according to the DoA. All work
including data generation will follow best practice guidelines of the EU and
the existing national rules.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0591_SECURECHAIN_646457.md
|
An a priori list of potentials is to be investigated:

* Resource efficiency: tapping largely unexplored local biomass sources, e.g. in privately owned forests, small woodlots, riparian greenery, urban biomass, green wastes;
* Land use: protective functions through enhanced harvesting of biomass, e.g. wind erosion risk > increased landscaping of hedges/linear tree structures, forest fire risk potential > increased forest harvesting levels;
* Synergies: exploiting different biomass types via cost reduction, substitution of inefficient use of high-grade material, high-grade mixed fuels, design pellets, high-grade wood chips;
* Byproducts, e.g. clean ashes as components for fertilizer or bioplastics;
* Complementary bioenergy production: biomass as an additional source in the renewable fuel mix, e.g. cogeneration plants.
# Types of data and characteristics
During Life Cycle Assessment, different data are collected, analysed, modelled
and produced along the whole supply chain (Figure 1).
**Figure 1: Data-relevant steps within LCA (Source:
_http://eplca.jrc.ec.europa.eu_ ) **
For the Life Cycle Inventory, company data is first collected from
participating SMEs, covering their inputs and outputs such as biomass material
used, energy usage for transport and processing of materials, energy output,
and emissions to air, water and soil. Secondly, data from the literature and
existing databases such as the ecoinvent or GaBi Life Cycle data sets are used
to model the impact of the whole life cycle. This includes, e.g., datasets on
the production of oil and the connected environmental impacts, or
process-specific data. On the basis of this data inventory, the impact of the
bioenergy supply will be measured according to existing environmental and
health impact models such as ReCiPe, LCM or ecoinvent.
Additionally, data and considerations on allocation (e.g. which part of the
impact can be allocated to forest or agricultural waste) and substitution
(e.g. which energy is substituted by the new bioenergy) have to be
investigated. Finally, the environmental impacts of different bioenergy plant
types will be modelled and new LCA data sets will be made available.
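The allocation question raised above is commonly resolved by partitioning a process's total impact over its co-products in proportion to a key such as mass or economic value; a minimal sketch with hypothetical numbers (the function name and figures are illustrative, not from the project):

```python
def allocate_impact(total_impact, shares):
    """Partition a process's total environmental impact among its co-products
    in proportion to an allocation key (e.g. mass or economic value)."""
    total = sum(shares.values())
    return {name: total_impact * s / total for name, s in shares.items()}

# Hypothetical example: 100 kg CO2-eq from a forestry process, allocated by
# economic value between sawlogs and harvest residues used for bioenergy.
impacts = allocate_impact(100.0, {"sawlogs": 90.0, "residues": 10.0})
```

With an economic key, the low-value residues carry only a small share of the upstream impact, which is exactly why the choice of allocation key must be documented alongside the dataset.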
There are three new data types that might be produced within the project: i)
Life Cycle Inventory data, ii) data on allocation and substitution, and iii)
LCA datasets. Depending on the type of data (Life Cycle Inventory data, LCA
datasets, Life Cycle Impact Assessment data, allocation and substitution data
sets or primary produced LCA data sets), some of these can be made publicly
available to a certain extent and under certain conditions (e.g. aggregated
sets ensuring confidentiality of enterprise-level information).
# i) Life Cycle Inventory data
Whether the inventory data can be made public is a case-by-case decision
depending on the restrictions of the participating companies.
# ii) Data on allocation and substitution
This metadata will be published following the H2020 open access approach.
Efforts will be made to ensure open access to peer-reviewed articles not
already freely available through the project website. Appropriate peer-
reviewed academic journals with open access will be favoured. Otherwise,
access rights for publishing articles on the project website will be paid to
the respective journals, thus allowing free access to the publication.
# iii) LCA datasets
LCA datasets are the main output of the study. There are different formats
available for LCA datasets which are compatible, or can with a certain effort
be made compatible, with other databases. For LCA studies, both open-source
tools and fee-based software are available. Principally, each database in
EcoSpold or ILCD format can be directly imported into openLCA. Tools like the
openLCA format converter or the EcoSpoldAccess spreadsheet macro formerly
provided by the ecoinvent centre can be used to create data in the appropriate
formats. A possibility is to create formats which could feed into the European
reference Life Cycle Database (ELCD). The ELCD generally provides Life Cycle
Inventory (LCI) data from frontrunning EU-level business associations and
other sources for key materials, energy carriers, transport, and waste
management. Focus is to freely provide background data that are required in a
high percentage of LCAs in a European market context. Coherence and quality
are facilitated through compliance with the entry-level requirements of the
Life Cycle Data Network (LCDN), as well as through endorsement by the
organisations that provide the data.
# 2.3 Standards and metadata
The LCA conducted in the project will be based on ISO 14040ff as well as on
the handbook and guidelines from the International Reference Life Cycle Data
System (ILCD) 1 .
Within the project the produced LCA datasets will follow the ILCD Entry-Level
requirements, as far as necessary inventory data are available. An
implementation of the data in ILCD is envisaged. In addition, BOKU plans to
build up its own Open Access LCA database. Data will also be implemented in
this BOKU database and made available to the broad public once the database
goes online.
A publication is planned in form of a scientific article, which describes the
main project findings on LCA-based sustainability evaluation of local
bioenergy chains, to be submitted for peer review to an open access journal in
the bioenergy field (D4.4, M36).
# 2.4 Data sharing
The open access data will be shared using a suitable data repository and
broadly accessible open data formats. Due protection of personal data will be
ensured. Further details will be developed in line with the final open access
dataset.
# 2.5 Archiving and preservation
The open access data will be archived using a suitable data repository.
Further details will be developed in line with the final open access dataset.
<table>
<tr>
<th>
_Acknowledgement and Disclaimer_
IIWH / BOKU / CLUBE –Internationales Institut für Wald und Holz e.V.,
Universität für Bodenkultur – Institut für Abfallwirtschaft, Cluster of
Bioenergy, 2016.
SecureChain, Horizon 2020 project no. 646457, Data Management Plan (DMP).
Report D6.5. Münster, Vienna, Kozani.
www.securechain.eu
The SecureChain project has received funding from the European Union’s Horizon
2020 Programme under the grant agreement n°646457 from 01/04/2015 to
31/03/2018.
The content of the document reflects only the authors’ views. The European
Union is not liable for any use that may be made of the information contained
therein.
</th> </tr> </table>
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0592_FLORA_820099.md
|
# 1\. Data management and responsibility
The FLORA project is engaged in the Open Research Data (ORD) pilot which aims
to improve and maximise access to and re-use of research data generated by
Horizon 2020 projects and takes into account the need to balance openness and
protection of scientific information, commercialisation and Intellectual
Property Rights (IPR), privacy concerns, security as well as data management
and preservation questions.
The management of the project data/results requires decisions about the
sharing of the data, the format/standard, the maintenance, the preservation,
etc.
Thus the Data Management Plan (DMP) is a key element of good data management
and is established to describe how the project will collect, share and protect
the data produced during the project. As a living document, the DMP will be
up-dated over the lifetime of the project whenever necessary.
In this frame the following policy for data management and responsibility has
been agreed for the FLORA project:
* **The FLORA Project Management Committee (ECL and CERFACS) and the topic manager** analyse the results of the FLORA project and will decide the criteria for selecting the data for which to make the OPT-IN. For each dataset they designate a responsible person (the Data Management Project Responsible, DMPR) who will ensure dataset integrity and compatibility for its internal and external use during the programme lifetime, etc. They also decide where and when to upload the data, how often to update it, etc.
* **The Data Management Project Responsible (DMPR)** is in charge of the integrity of all the datasets, their compatibility, the criteria for data storage and preservation, the long-term access policy, the maintenance policy, quality control, etc. He will of course discuss and validate these points with the Project Management Committee (ECL and CERFACS) and the topic manager.
<table>
<tr>
<th>
**Data management Project Responsible (DMPR)**
</th>
<th>
**Pierre DUQUESNE**
</th> </tr>
<tr>
<td>
DMPR Affiliation
</td>
<td>
Ecole Centrale de Lyon
</td> </tr>
<tr>
<td>
DMPR mail
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
DMPR telephone number
</td>
<td>
**+33 (0)4 72 18 61 94**
</td> </tr> </table>
* **The Data Set Responsibles (DSR)** are each in charge of a single dataset and should be the partner producing the data: validation and registration of datasets and metadata, updates and management of the different versions, etc. The contact details of each DSR will be provided in the corresponding data set document presented in Annex I of the DMP.
In the next section “2. Data summary”, the FLORA Project Management Committee
(ECL and CERFACS) and the topic manager (SHE) have listed the project’s
data/results that will be generated by the project and have identified which
data will be open.
Data needed to validate the results presented in scientific publications can
be made accessible to third parties. Research data linked to exploitable
results will not be put into the open domain if this would compromise their
commercialisation prospects or if they have inadequate protection, in line
with H2020 obligations.
# 2\. Data summary
The next tables present the different dataset generated by the FLORA project.
For each dataset that will be open to public, a dedicated dataset document
will be completed in Annex I once the data are generated.
## 2.1 General data overview
In Table 1 the different databases are presented with a focus on authorship
and ownership. "WP generation" and "WP using" correspond respectively to the
work package in which the database is generated and the work packages in which
the data are reused in the FLORA project. The "Data producer" is the partner
who generates the data, "Data user" corresponds to the partners who can use
the data for internal research (in addition to the data owner), and "Data
owner" is the final owner of the database. The confidentiality level includes
restrictions on both external and internal data exchange. Some data associated
with results may have potential for commercial or industrial protection and
thus will not be made accessible to a third party (confidentiality level:
confidential); other data needed for the verification of results published in
scientific journals can be made accessible to third parties (confidentiality
level: public).
**Table 1. Dataset generation.**
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**WP**
**generation**
</th>
<th>
**WP using**
</th>
<th>
**Data producer**
</th>
<th>
**Data user**
</th>
<th>
**Data owner**
</th>
<th>
**Confidentiality level**
</th> </tr>
<tr>
<td>
**1\. Required data**
</td>
<td>
NA
</td>
<td>
WP 2,3
</td>
<td>
SHE
</td>
<td>
ECL/CERFACS
</td>
<td>
SHE
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**2\. Test bench data**
</td>
<td>
WP 2
</td>
<td>
WP 2
</td>
<td>
ECL
</td>
<td>
ECL
</td>
<td>
ECL
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**3\. Experimental raw data**
</td>
<td>
WP 2
</td>
<td>
WP 2
</td>
<td>
ECL
</td>
<td>
ECL
</td>
<td>
SHE
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**4\. Validated experimental data**
</td>
<td>
WP 2
</td>
<td>
WP 2,4
</td>
<td>
ECL
</td>
<td>
ECL/CERFACS*
</td>
<td>
SHE
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**5\. Data experimental guide**
</td>
<td>
WP 2
</td>
<td>
WP 2,4
</td>
<td>
ECL
</td>
<td>
ECL/CERFACS*
</td>
<td>
SHE
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**6\. Published experimental data**
</td>
<td>
WP 4
</td>
<td>
WP 4
</td>
<td>
ECL
</td>
<td>
ECL/CERFACS*
</td>
<td>
SHE
</td>
<td>
Public
</td> </tr>
<tr>
<td>
**7\. (U)RANS results data**
</td>
<td>
WP 3
</td>
<td>
WP 3,4
</td>
<td>
ECL
</td>
<td>
ECL/CERFACS*
</td>
<td>
SHE
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**8\. (U)RANS data guide**
</td>
<td>
WP 3
</td>
<td>
WP 3,4
</td>
<td>
ECL
</td>
<td>
ECL/CERFACS*
</td>
<td>
SHE
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**9\. LES results data**
</td>
<td>
WP 3
</td>
<td>
WP 3,4
</td>
<td>
CERFACS
</td>
<td>
CERFACS/ECL
</td>
<td>
SHE
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**10\. LES data guide**
</td>
<td>
WP 3
</td>
<td>
WP 3,4
</td>
<td>
CERFACS
</td>
<td>
CERFACS/ECL
</td>
<td>
SHE
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**11\. Published numerical data**
</td>
<td>
WP 4
</td>
<td>
WP 4
</td>
<td>
ECL/CERFACS
</td>
<td>
ECL/CERFACS
</td>
<td>
SHE
</td>
<td>
Public
</td> </tr> </table>
* Only for the nominal speed (100 Nn) or with a special authorisation from SHE.
## 2.2 Data purposes and objectives
Table 2 presents the type of data, the content and the objective of each
database. The last column qualifies if the database will have a long-term
value for both internal and external research.
**Table 2. Objectives of datasets.**
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Type**
</th>
<th>
**Purposes/objectives**
</th>
<th>
**Long-term use**
</th> </tr>
<tr>
<td>
**1\. Required data**
</td>
<td>
CAD/Plan
</td>
<td>
\- Contains plans and CAD of the compressor module. - Provides necessary
information for test bench implementation and numerical simulation.
</td>
<td>
No
</td> </tr>
<tr>
<td>
**2\. Test bench data**
</td>
<td>
Metrology
</td>
<td>
\- Contains sensors calibration and position, testbench qualification tests,
tests log ... - Provides necessary information on the measurements and test
bench setup.
</td>
<td>
No
</td> </tr>
<tr>
<td>
**3\. Experimental raw data**
</td>
<td>
Experimental measurements
</td>
<td>
* Contains all measurements in the measured primary units (generally volts), including steady and unsteady pressure and LDA measurements.
* Provides measurements ready to be converted into physical units.
</td>
<td>
No
</td> </tr>
<tr>
<td>
**4\. Validated experimental data**
</td>
<td>
Experimental measurements
</td>
<td>
* Contains only validated measurements in physical units.
* Provides measurements for the analysis step.
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**5\. Data experimental guide**
</td>
<td>
Documentation
</td>
<td>
* Contains measurement descriptions and the operating conditions from the validated experimental database.
* Provides necessary information to perform analysis of the validated experimental database.
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**6\. Published experimental data**
</td>
<td>
Experimental measurements
</td>
<td>
* Contains experimental data used for publication purposes.
* Provides an experimental open-access database for the research community.
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**7\. (U)RANS results data**
</td>
<td>
Numerical simulation
</td>
<td>
* Contains numerical results of the (U)RANS simulations.
* Provides (U)RANS numerical results for the analysis step.
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**8\. (U)RANS data guide**
</td>
<td>
Documentation
</td>
<td>
* Contains the (U)RANS numerical strategy setup (excluding the mesh and all geometrical aspects).
* Provides the necessary setup to initialise numerical simulations with the elsA software.
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**9\. LES results data**
</td>
<td>
Numerical simulation
</td>
<td>
* Contains numerical results of the LES simulation.
* Provides LES numerical results for the analysis step.
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**10\. LES data guide**
</td>
<td>
Documentation
</td>
<td>
* Contains the LES numerical strategy setup (excluding the mesh and all geometrical aspects).
* Provides the necessary setup to initialise numerical simulations with the Turbo-AVBP software.
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**11\. Published numerical data**
</td>
<td>
Numerical simulation
</td>
<td>
* Contains numerical data used for publication purposes.
* Provides a numerical open-access database for the research community.
</td>
<td>
Yes
</td> </tr> </table>
## 2.3 Data technical information
Table 3 presents the different formats used in each database, including the
data volume order of magnitude, where the data are stored and the transfer
protocol used between partners.
**Table 3. Database technical information.**
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Format**
</th>
<th>
**Volume (OOM)**
</th>
<th>
**Long-term storage**
</th>
<th>
**Transfer protocol**
</th> </tr>
<tr>
<td>
**1\. Required data**
</td>
<td>
.pdf
.step
</td>
<td>
1 GB
</td>
<td>
SHE storage server
</td>
<td>
CD
Internet
</td> </tr>
<tr>
<td>
**2\. Test bench data**
</td>
<td>
.txt
.bin
</td>
<td>
1 GB
</td>
<td>
ECL storage server
</td>
<td>
No transfer
</td> </tr>
<tr>
<td>
**3\. Experimental raw data**
</td>
<td>
.txt
.bin
</td>
<td>
1 TB
</td>
<td>
ECL storage server
</td>
<td>
No transfer
</td> </tr>
<tr>
<td>
**4\. Validated experimental**
**data**
</td>
<td>
.txt
.bin
</td>
<td>
1 TB
</td>
<td>
ECL and SHE storage servers
</td>
<td>
Hard disk
</td> </tr>
<tr>
<td>
**5\. Data experimental guide**
</td>
<td>
.docx+.pdf
</td>
<td>
10 MB
</td>
<td>
ECL and SHE storage servers
</td>
<td>
Internet
</td> </tr>
<tr>
<td>
**6\. Published experimental**
**data**
</td>
<td>
.txt
</td>
<td>
100 MB
</td>
<td>
ECL and SHE storage servers
ZENODO internet server
</td>
<td>
Internet
</td> </tr>
<tr>
<td>
**7\. (U)RANS results data**
</td>
<td>
.CGNS
</td>
<td>
TB
</td>
<td>
ECL and SHE storage servers
</td>
<td>
Hard disk
</td> </tr>
<tr>
<td>
**8\. (U)RANS data guide**
</td>
<td>
.docx+.pdf
</td>
<td>
10 MB
</td>
<td>
ECL and SHE storage servers
</td>
<td>
Internet
</td> </tr>
<tr>
<td>
**9\. LES results data**
</td>
<td>
.CGNS
</td>
<td>
TBs
</td>
<td>
CERFACS and SHE storage servers
</td>
<td>
Hard disk
</td> </tr>
<tr>
<td>
**10\. LES data guide**
</td>
<td>
.docx+.pdf
</td>
<td>
10 MB
</td>
<td>
CERFACS and SHE storage servers
</td>
<td>
Internet
</td> </tr> </table>
# 3\. FAIR Data
## 3.1 Making data findable
## Public database (data sets 6 and 11)
### Publication repository
The technical, professional and scientific publications produced in the FLORA
project will be open access, in compliance with the general principles of the
Horizon 2020 funding programmes.
Expected journals to be used (but not limited to):
* International Journal of Turbomachinery Propulsion and Power
* Journal of Turbomachinery
* Experiments In Fluids
_Online repositories_
* **ZENODO**, an open-access repository for publications and data operated by OpenAIRE and CERN.
### Publication and data identification
Articles and the attached data will be findable via their DOI, a unique and
persistent identifier. A DOI is usually issued for every published record by
each publisher and by other repositories such as ZENODO and ResearchGate. A
homepage for the FLORA project will be created on ResearchGate with a link to
Zenodo to make the data findable.
Also, any dissemination of results from the FLORA project must acknowledge the
financial support by the EU and thus the following acknowledgement will be
added to each publication and dataset description: “This project has received
funding from the European Union’s Horizon 2020 research and innovation
programme under grant agreement No 820099”
## Confidential database
### Database repository
Confidential databases are composed of both the methods (databases 5, 8 and
10) and the results (databases 3, 4, 7 and 9). SHE is the owner of all
results. Partners are owners of methods used to generate results. Each owner
is responsible for its database repository.
A partner that is not the owner of a result may use results it produced itself
for internal research. ECL and CERFACS are authorised to exchange all
necessary data at the nominal speed (100 Nn). For other speeds, SHE needs to
validate the data exchange.
### Data identification
Each raw measurement (database 3) is identified by a unique identifier. Each
measurement is recorded in the test log (database 2) using this identifier,
together with the measurement information. A validated measurement (database
4) uses the same identifier as the corresponding raw data. The main
information on each measurement is reported in the data experimental guide
(database 5).
Each numerical run (databases 7 and 9) corresponds to a unique identifier
recorded in the corresponding data guide (databases 8 and 10).
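The identifier scheme above can be sketched as follows. The naming convention (campaign-speed-index) and the test-log fields are hypothetical illustrations, not the actual FLORA convention, which is defined in the test log itself:

```python
import datetime

def make_run_id(campaign: str, speed: str, index: int) -> str:
    """Build an illustrative unique identifier for one measurement run."""
    return f"{campaign}-{speed}-{index:04d}"

# A minimal in-memory test log: the same identifier links the raw data
# (database 3) and the validated data (database 4).
test_log = {}

def record_measurement(run_id: str, sensor: str, operating_point: str) -> None:
    test_log[run_id] = {
        "sensor": sensor,
        "operating_point": operating_point,
        "recorded": datetime.date.today().isoformat(),
    }

run_id = make_run_id("FLORA", "100Nn", 12)
record_measurement(run_id, "unsteady pressure", "near stall")
print(run_id)  # FLORA-100Nn-0012
```

The point of such a scheme is that the identifier, once issued, travels unchanged from raw data to validated data to the data guide.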
## 3.2 Making data openly accessible
Some data associated with results may have potential for commercial or
industrial protection and thus will not be made accessible to third parties
("confidential" level). Other data, needed for the verification of results
published in scientific journals, can be made accessible to third parties
("public" level).
## Public database (data sets 6 and 11)
### Access procedures
Databases declared public will be made available to third parties in online
repositories (ZENODO). Each data set contains the conditions for using the
public data in its file header. These conditions include an obligation to cite
the original papers, the project name and a reference to the Clean Sky 2 Joint
Undertaking under the European Union's Horizon 2020 programme.
### Tools to read or reuse data
Public data are produced in common electronic document/data/image formats
(.docx, .pdf, .txt, .jpg, etc.) that do not require specific software.
## Confidential database
### Access procedures
ECL and CERFACS are authorised to exchange all necessary data at the nominal
speed (100 Nn). For other speeds, SHE needs to validate the data exchange. At
the end of the work package, ECL or CERFACS provides the databases (4, 7 and
9) to SHE on a hard disk.
In the long term, the data generated by ECL and CERFACS can be used for
internal research.
## 3.3 Making data interoperable
The classical vocabulary of the turbomachinery domain is used (based on the
experience of all partners in turbomachinery publications).
Even though this project is dedicated to a radial compressor, FLORA results
can help to improve other types of turbomachines. Understanding and
controlling boundary layer separation is also a challenge in modern fluid
mechanics, and the FLORA case can contribute to a better understanding of this
complex flow phenomenon.
## Public database (databases 6 and 11)
To serve both goals, confidentiality and generalisation, all public values are
dimensionless. Reference values (length, frequency, velocity, etc.) and
formulas are clearly identified in the paper or in the published database
files. The reference values are confidential, but are selected to permit
comparison with other cases and to make physical interpretation possible.
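As a sketch of this non-dimensionalisation, assuming purely hypothetical reference values (the real ones are confidential and defined by SHE):

```python
# Hypothetical reference values in SI units; the confidential FLORA
# references are documented in the papers and database files.
REF = {"velocity": 250.0, "length": 0.05, "frequency": 5000.0}

def dimensionless(value: float, quantity: str) -> float:
    """Divide a physical value (SI units) by its reference value."""
    return value / REF[quantity]

# A measured frequency of 10 kHz would be published as 2.0
print(dimensionless(10000.0, "frequency"))  # 2.0
```

Because only the ratios are published, third parties can still compare trends and scale the results to their own machines without access to the confidential references.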
## Confidential database
The validated databases (4, 7 and 9) are used for analysis. These databases
are expressed directly in physical units (SI unit system). The necessary
information about the results is recorded in the corresponding data guides (5,
8 and 10).
## 3.4 Increase data re-use
_Data licence_
Data from public databases are open access and use a Creative Commons licence
(CC-BY).
### Data quality assurance processes
The project will be run within the quality plan developed at LMFA since 2005
for the measurement campaigns carried out with the LMFA high-speed
compressors. This quality plan is based on an ISO 9001:2000 approach and on an
intranet tool (a MySQL database coupled with a dynamic PHP web site) used to
store, classify and share data between the partners, including measurement
data and documents within a reference system.
_After the end of the project_
## Public database (databases 6 and 11)
Driven by the FLORA project, the open-access databases can be used by other
laboratories and industrial partners to make comparisons with other machines.
The methods developed and the physical analyses become references for other
test cases and improve the knowledge of the community.
## Confidential database
The experimental setup and the huge quantity of experimental and numerical
data cannot be completely exploited within the FLORA project itself; the
project is the starting point of a long collaboration. At the end of the
project, the data and the test bench can be re-used for:
* Analysis of data generated in the FLORA project:
  * Subsequent projects for consortium members and SHE.
  * Additional academic partners working on not-yet-exploited data.
* Supplementary experimental measurements:
  * Using the already installed compressor module under new operating conditions.
  * Measurements of supplementary fields based on FLORA project results.
* Investigation of new flow control concepts.
* Investigation of numerical prediction performance:
  * Calibrating low-fidelity numerical methods using higher-fidelity methods.
  * High-fidelity simulation of other speeds.

For all these follow-on projects, the agreement of SHE is necessary.
# 4\. Allocation of resources
## _Costs related to the open access and data strategy_
Data storage in partner data repositories: included in the partners'
structural operating costs. Data archiving in the ZENODO repository: free of
charge.
## _Data manager responsible during the project_
The Project Coordinator (ECL) is responsible for establishing the Data
Management Plan, updating it during the lifetime of the project and ensuring
compliance with it. The relevant experimental data and the data generated by
numerical simulations during the FLORA project will be made available to the
Consortium members within the frame of the IPR protection principles and the
present Data Management Plan.
## _Responsibilities of partners_
SHE is the owner of all generated data. Methods and analyses remain the
property of the partner that generates them. Every partner is responsible for
the data it produces and must contribute actively to the data management as
set out in the DMP.
# 5\. Data security
## Public database (databases 6 and 11)
_Long-term preservation_ : ensured by the ZENODO repository.
_Data Transfer_ : via the ZENODO web platform.
_Intellectual property_ : All data sets are attached to a Creative Commons
licence.
## Confidential Data
_Long-term preservation:_ ensured by partner institutions’ data repositories.
_Data Transfer:_ depending on the data volume:
* Small and medium-size files are transferred via the partners' secure data exchange platforms (Renater FileSender, OpenTrust MFT, etc.).
* Very large files are transferred on an external hard disk during face-to-face meetings. This type of transfer is infrequent and only concerns the transfer of final databases from ECL and CERFACS to SHE.

_Intellectual property:_ Data are confidential, and the definitions of data producer, user and owner must be strictly respected.
# 6\. Ethical aspects
No ethical issue has been identified.
# 7\. Other
No other procedure for data management.
**0594_GIFT_727040.md**
repository contains 47.8 GB of data from user tests, of all the types
mentioned above except social media statistics, as none of the prototypes so
far have used (public) social media.
GIFT consortium partners remain responsible for collection, primary storage
and use of the data collected, following what is given in this DMP. The
sorting and presentation of data that can be useful for other researchers is
an important part of the research process. At this point in time, we have
identified two categories of shareable data, outlined below.

**1\. Source code**
The type of data that is most likely to be useful for other researchers is the
source code for the software prototypes developed as part of work packages 2,
3 and 6. In the case of work packages 2 and 6, this software will be published
as open source, and the code will be made available in a public repository.
(As stated in the Exploitation Strategy and Consortium Agreement, software
developed by NextGame as part of Work Package 3 is exempt from our open source
commitment.)

**2\. 3D models**
In an experiment in Work Package 6, 3D models have been created from scanning
personal objects brought to the museum by visitors (see deliverable D6.1).
These models are shared publicly via an online repository (details below).
# 3\. Other data
Regarding other types of data, where such data do not contain personal data or
can be anonymized with no undue extra burden on the researchers, and without
violating the ethical guidelines set out in deliverable D8.1 (on protection of
personal data), we will share these data through an open data repository.
Details about data format, amount, type of repository, etc. must be decided on
a case-by-case basis, because of the nature of the methodology we are
following and the types of data collected.
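Where anonymization is feasible, one common technique is to replace direct identifiers with salted one-way hashes before sharing. A minimal sketch, assuming a hypothetical participant record; the salt would be kept private by the responsible partner:

```python
import hashlib

SALT = b"project-secret-salt"  # hypothetical; never published with the data

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a truncated salted SHA-256 digest."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:12]

records = [{"participant": "[email protected]", "rating": 4}]
shared = [
    {"participant": pseudonymise(r["participant"]), "rating": r["rating"]}
    for r in records
]
print(shared[0]["participant"])
```

The same identifier always maps to the same pseudonym, so linked records stay linkable, while the original identity cannot be recovered without the salt.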
Digital heritage objects, both digitized and born-digital, are contributed to
the GIFT project by the participating cultural heritage organizations. These
include images, scans and metadata of cultural heritage objects. Their
management and provenance beyond their re-use in GIFT project, including
Intellectual Property Rights (IPR), is outside the scope of this document.
When these objects are reused in the GIFT project (e.g. as content for
prototypes), they are used in compliance with the respective cultural heritage
organization’s data management policies.
# 2\. FAIR Data
Data that are considered useful for academia, cultural heritage institutions,
creative industries, or other users will be made available according to the
FAIR principles. Below, we outline how we will apply the FAIR principles to
the types of data outlined above.
## 2.1 Making data findable, including provisions for metadata
The project website collects links to all data made openly available. Research
publications may cite the repositories, where relevant.
### 2.1.1 Source code
Any source code produced in work packages 2 and 6 and deemed potentially
useful for outside users will be made available through open source
repositories on GitHub. GitHub is the most widely used repository for sharing
open source code, and is the natural place to look for this kind of data for
anyone who might be interested. In order to further make the repositories and
documentation findable by other researchers in our particular area, we will
link to it from the relevant parts of the GIFT framework, so that researchers
with an interest in source code may find the code that relates to the relevant
part of the framework.
To this date, we have shared source code for the WP2 gifting prototype, the
GIFT exchange tool, the GIFT platform, the GIFT schema and other relevant
parts of the WP6 toolkit (described in deliverable D6.1) in the GitHub
repository https://github.com/growlingfish.

### 2.1.2 3D models
The 3D models from photogrammetry experiments are shared via the widely used
online sharing platform Sketchfab, initially as a test at
_https://sketchfab.com/ddarz/models/_ and now officially at
https://sketchfab.com/MixedRealityLab/models. Sketchfab is currently the go-to
web platform for user-generated 3D models, where 3D artists and 3D content
specialists, including cultural heritage researchers, demonstrate and share
their work. It is the natural place to look for this kind of data for anyone
who might be interested.
## 2.2 Making data openly accessible
### 2.2.1 Source code
Our aim is to develop and release an open-source software toolbox that any
potential service provider can deploy on their web server stack of choice to
enable gifting applications. Extensive documentation for how to adopt or adapt
our source code is presented at the framework website,
https://toolkit.gifting.digital/tools/prototyping/.
In terms of data generated by the Platform, the documented CMS API will allow
gifts to be passed to and from "wrapping" and "unwrapping" apps, but only
within a private ecosystem. Although the API could be made public, we
anticipate that the content created as gifts will be personal and possibly
sensitive, i.e. inappropriate for revealing publicly. As such, unless the gift
creator explicitly chooses to release their gift for public consumption, the
gift is only available to the specified receiver and administrators of that
instance of the GIFT CMS.
### 2.2.2 3D models
All the 3D models created in the WP6 photogrammetry experiments are openly
accessible via the
Sketchfab repositories cited above. The models are shared with a Non-
Commercial Share-Alike Creative Commons License (CC-BY-NC-SA). Thus, anyone
can download the models and use them for non-commercial purposes as long as
they attribute them, and they can also make derivatives, but have to
distribute those under a similar licence.
## 2.3 Making data interoperable
### 2.3.1 Source code
The GIFT platform will include a notification server - designed to be
interoperable with existing mobile and desktop messaging clients - that will
keep gift givers and receivers informed about their progress through the
stages of gifting. It will also include a CMS server with a documented REST
API, to allow gifters to create hybrid gifts via the CMS admin interface. The
gift data structure is documented in the developer documentation. The REST API
also allows gift service providers to use their application platform of choice
(web, hybrid or native) to develop and release bespoke authoring/"wrapping"
apps (which users can use to push new gifts to the CMS) and "unwrapping" apps
(which users can use to receive and consume gifts built with the authoring
tools).
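As an illustration of the kind of payload a "wrapping" app might push through such a REST API, here is a hypothetical gift object serialised as JSON; the field names are assumptions for illustration, and the authoritative gift data structure is the one in the developer documentation:

```python
import json

# Hypothetical gift payload; the real schema is defined in the GIFT
# developer documentation, not here.
gift = {
    "sender": "alice",
    "receiver": "bob",
    "parts": [
        {"object_id": "museum-item-42", "message": "This reminded me of you"},
    ],
    "status": "wrapped",
}

payload = json.dumps(gift)
# An "unwrapping" app would deserialise the same structure on receipt.
assert json.loads(payload)["receiver"] == "bob"
print(payload)
```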
During the project we will deploy an exemplar instance of the notification
server and CMS server on the Azure VM-hosting platform (VMs running the open-
source and widely used OS Ubuntu) both to enable research trials, and for
reference by potential gift service providers. For the research trials we will
also develop and deploy hybrid (Ionic) iOS/Android mobile apps for "wrapping"
and "unwrapping" gifts, and a range of web-based "wrapping" apps accessible
via web browser. Again, these will provide a set of exemplars for potential
gift app developers.
Although the design of the Platform may change, it will make use of
exclusively open-source and free-to-use software. The CMS will make use of the
Roots framework ( _roots.io_ ) - an optimised and secured stack of the popular
WordPress CMS, configured via Ansible and deployable via Vagrant. WordPress
allows for extensive customisation and extension via plugins and themes,
meaning that the final GIFT platform will be fit for purpose, but also
extensible by other developers. The notification server will allow developers
to connect to any messaging services of choice, but our reference instance
will use open source EJabber software to pass notifications to a wide range of
existing jabber-compatible clients, including the open-source web-based
ConverseJS client that will be integrated in the CMS and OSX Messages client
that we will use for testing; it will also connect to the Amazon SNS service
to allow SMS-messaging and the MailGun service to provide email notifications.
### 2.3.2 3D models
The models can be downloaded in the widely used .OBJ format, at which point
they can be imported into 3D modelling software and 3D application engines.
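To illustrate how portable the .OBJ format is, the following minimal sketch parses the vertex and face records of an OBJ file using only the standard library (it ignores normals, texture coordinates and materials):

```python
def parse_obj(text: str):
    """Minimal parser for the vertex (v) and face (f) lines of a
    Wavefront .OBJ file; ignores normals, UVs and materials."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # Each face entry may be "v", "v/vt" or "v/vt/vn"; keep the
            # vertex index only (OBJ indices are 1-based).
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

sample = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
verts, faces = parse_obj(sample)
print(len(verts), faces[0])  # 3 (0, 1, 2)
```

This simplicity is why .OBJ remains the lowest-common-denominator exchange format between 3D modelling software and application engines.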
## 2.4 Increase data re-use (through clarifying licenses)
### 2.4.1 Source code
Deliverables D6.4 and D4.4 will document how to practically re-use the
outcomes of the project. They will explain deployment, use and maintenance of
the software products developed in the project.
Source code from work packages 2 and 6 will be released under the widely
recognized MIT License, which permits derivative work and commercial
exploitation. A first version of the toolbox will be made publicly available
through the release of deliverable D6.2, the final version will be made
available through D6.4.
The exemplary instance of the Platform will be operated for 2 years after the
duration of the project, until the end of 2021.
### 2.4.2 3D models
The models released on the official Mixed Reality Lab Sketchfab page have a
Creative Commons CC-BY-NC-SA licence, which allows users to freely download,
edit and redistribute them, as long as they use similar licencing and
appropriate attribution.
There are no licenses of any kind attached to the project-related 3D models on
the ‘ddarz’ sketchfab account. Researchers interested in clearing rights for
re-use (e.g. regarding derivative work or commercial exploitation) should
contact the researcher in charge of the photogrammetry experiments, Dimitrios
Darzentas at [email protected].
# 3\. Allocation of Resources
Consortium partners are responsible for ongoing management of the data they
collect and use. The project coordinator, ITU, is responsible for data that
are shared within the consortium. Data sharing within the consortium will be
facilitated via a secure, encrypted, and locally managed ownCloud data storage
service maintained by ITU for the duration of the project. This service is
offered freely to ITU researchers and will not incur any costs for the
consortium.
Source code will be shared as described above. Using GitHub repositories
incurs no cost for open source projects and guarantees a valid and reliable
record
for the source code. UoN will be responsible for maintaining the open source
repositories up till the end of the project, after which it will be up to the
open source community to maintain and develop the software further.
The Sketchfab accounts that are being used to share the 3D models are
currently incurring no additional costs as they are using the free account
plan.
# 4\. Data Security
Secure storage of personal data is described in deliverable D8.1. Security of
data shared through GitHub and Sketchfab is handled by the operators of those
websites. As these are extremely widely used and trusted platforms for this
kind of data, and given the amounts and types of data set out in this
document, there is no discernible need for any project-wide plans for data
backup and recovery. However, each participating researcher is expected to
make backups of their data as they deem necessary and reasonable, on a case-
by-case basis.
# 5\. Ethical Aspects
Ethical aspects of data management are comprehensively covered in deliverables
_D8.1 POPD - Requirement No. 3_ , and _D8.2 NEC - Requirement No. 6_ .
For any issues not already described in the above deliverables, we will abide
by the ethical guidelines and considerations of the research communities
associated with particular methodological approaches. In case of a collision
between the disciplinary requirements for collecting and analysing data
associated with a particular approach and the general guidelines presented in
D8.1, the latter will serve as the reference document.
**0595_INTEGROIL_688989.md**
1. **Introduction**
The INTEGROIL project is part of the Open Research Data Pilot (ORDP) in Horizon
2020. ORDP aims to improve and maximize access to and re-use of research data
generated by Horizon 2020 projects and takes into account the need to balance
openness and protection of scientific information, commercialization and
Intellectual Property Rights, privacy concerns, security as well as data
management and preservation questions. ORDP follows the principle “As open as
possible, as closed as necessary”.
According to these premises, the purpose of the present document is to ensure
that research data ( _i.e._ mainly data needed to validate the results
presented in scientific publications but open to other data) is soundly
managed by making them findable, accessible, interoperable and reusable
(FAIR). To this end, this document includes information on:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved (including after the end of the project).
This document constitutes the initial Data Management Plan for the INTEGROIL
project and will be updated whenever significant changes arise and, at the
latest, in time for the final review (M36).
2. **Open Research Data Pilot requirements.**
Contractual obligations in relation to Open Access to research data are
described in **Article 29.3 of the Grant Agreement:**
_Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:_
1. _deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:_
1. _the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;_
2. _other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan' (see Annex 1);_
2. _provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves)._
_This does not change the obligation to protect results in Article 27, the
confidentiality obligations in_
_Article 36, the security obligations in Article 37 or the obligations to
protect personal data in Article 39, all of which still apply._
_As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data if the achievement of the action's main
objective, as described in Annex 1, would be jeopardized by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access._
# 2.1 Data set reference, name and description
This section shows a description of the information to be gathered, the nature
and the scale of the data generated or collected during the project.
The Data Management Plan will present in detail only the procedures for
creating 'primary data' (data not available from any other source) and for
their management.
## 2.1.1 DATASET 1: Real-time information of the treatment platform
(generated by ACCIONA AGUA and TUPRAS)
<table>
<tr>
<th>
**DataSet: Real-time information of the treatment platform**
</th> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
INTEGROIL-001. Information of the treatment platform
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The project will generate water quality, energy consumption and other process
parameter data collected in Excel spreadsheets.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
The data used will be original from the research project. The existing
information used relates to the prior know-how of the technology developers,
not particularly reflected in any specific set of data.
The data is originated in the sensors and other metering equipment installed
in the laboratory tests, individual prototypes (Work Package 2 and 3) and
integrated platform (Work Package 4). Additionally, point samples of water
quality parameters measured with test kits will also be a source of data (work
Packages 1, 2, 3 and 4).
</td> </tr>
<tr>
<td>
Size of the data
</td>
<td>
<td>
The size of the data can range from 10-50 entries in the case of laboratory
and bench tests to several thousand entries per parameter during
demonstration.
<tr>
<td>
Data applicability
</td>
<td>
In order for the raw data to be useful, it will be processed first in order to
provide meaningful process information. The laboratory and prototype data will
then be useful to technology developers, and the integrated platform data will
be useful for the platform operators (end users of the technology).
</td> </tr> </table>
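As a sketch of how such entries might be processed once exported from the spreadsheets to a delimited text file, using only the standard library (the column names and values below are hypothetical, not actual project data):

```python
import csv
import io
import statistics

# Hypothetical export of a few water-quality entries; the real data are
# collected in Excel spreadsheets by ACCIONA AGUA and TUPRAS.
raw = (
    "timestamp,turbidity_ntu,energy_kwh\n"
    "2017-01-01,5.2,1.1\n"
    "2017-01-02,4.8,1.3\n"
)

rows = list(csv.DictReader(io.StringIO(raw)))
turbidity = [float(r["turbidity_ntu"]) for r in rows]
print(round(statistics.mean(turbidity), 2))  # 5.0
```

Reducing the raw sensor entries to summary statistics in this way is one step of turning the raw data into the "meaningful process information" the table refers to.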
## 2.1.2 DATASET 2: Optimization of the flotation technology (Generated by
ACCIONA)
<table>
<tr>
<th>
**DataSet: Optimization of the flotation technology**
</th> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
INTEGROIL-002. Information of flotation process.
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The project will generate data about optimal conditions for flotation
performance, collected in Excel sheets.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
The data used will be original from the research project. The data is
originated in the experimental lab test itself (experimental conditions
applied), in the laboratory analysis equipment and also in the sensors of the
individual prototype. Related work packages are WP2 and WP4.
</td> </tr>
<tr>
<td>
Size of the data
</td>
<td>
The size of the data is expected to range between 10-30 entries.
</td> </tr>
<tr>
<td>
Data applicability
</td>
<td>
The data will provide the information required for determining the optimum
operational conditions. It will be useful for the platform operators and also
for the technology/solution providers (operation and maintenance requirements,
OPEX estimation, etc.).
</td> </tr> </table>
## 2.1.3 DATASET 3 and 4: Optimization of ceramic membranes (generated by
LIKUID)
<table>
<tr>
<th>
**DataSet: Optimization of ceramic membrane filtration (DF and MBR)**
</th> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
INTEGROIL-003. Information on the performance of ceramic filtration at
different operational conditions, both for direct filtration for produced
water treatment and for the CBR process in MBR configuration for refinery
wastewater treatment.
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The project will generate data about filtration performance (flux, TMP,
permeability), filtration sequences, and permeate quality. Moreover, the data
related to the biological process in the MBR configuration will be also
generated.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
The data used will be original from the research project. The data is
originated in the experimental lab test itself (experimental conditions
applied), in the sensors of the laboratory filtration rig, in the laboratory
analysis equipment and also in the sensors of the individual prototype.
Related work packages are WP2 and WP4.
</td> </tr>
<tr>
<td>
Size of the data
</td>
<td>
The size of the data is expected to range from 10-30 entries for the
laboratory tests to several thousand entries during demonstration.
</td> </tr>
<tr>
<td>
Data applicability
</td>
<td>
The filtration performance data will provide the information required for
determining the optimum operational conditions of ceramic filtration. It will
be useful for the platform operators and also for the technology/solution
providers (operation and maintenance requirements, OPEX estimation, etc.).
</td> </tr>
<tr>
<td>
**DataSet: Optimization of ceramic membranes’ chemical cleaning**
</td> </tr>
<tr>
<td>
Data set reference name
</td>
<td>
</td>
<td>
INTEGROIL-004. Information on chemical cleaning protocols and membrane autopsy
studies for ceramic membranes.
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
</td>
<td>
The project will generate data about cleaning chemicals, cleaning conditions,
cleaning efficiency and autopsy studies, collected in Excel sheets.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
</td>
<td>
The data will be original to the research project, originating from the
experimental lab tests themselves (the experimental conditions applied), from
the sensors of the laboratory filtration rig, from the laboratory analysis
equipment, and from the sensors of the individual prototype. Related work
packages are WP2 and WP4.
</td> </tr>
<tr>
<td>
Size of the data
</td>
<td>
</td>
<td>
The size of the data is expected to comprise between 10 and 30 entries.
</td> </tr>
<tr>
<td>
Data applicability
</td>
<td>
The chemical cleaning and membrane autopsy data will provide the information
required for determining the optimum cleaning conditions and for exactly
reproducing the corresponding protocol. It will be useful for the platform
operators and also for the technology/solution providers (operation and
maintenance requirements, OPEX estimation, etc.).
</td> </tr> </table>
## 2.1.4 DATASET 5, 6 and 7: Conventional advanced oxidation processes
(generated by URV)
<table>
<tr>
<th>
</th>
<th>
**DataSet: Ozonolysis**
</th> </tr>
<tr>
<td>
Data set reference name
</td>
<td>
</td>
<td>
Integroil-005. Information on the ozonolysis process
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
</td>
<td>
Application of ozone for the removal of dissolved organic compounds from O&G
wastewater, with the aim of mineralizing this organic fraction or increasing
the biodegradability / decreasing the toxicity of the effluent.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
</td>
<td>
The data will be original to the research project. The existing information
used relates to the prior know-how of the technology developers and is not
reflected in any specific set of data.
The generated data will come from the optimization of the operating conditions
(reaction time, concentration of reagents needed, i.e. ozone, H2O2, amount and
type of catalyst, pH, temperature, etc.) for the different case studies
considered at laboratory scale, as well as from the operation of the
individual prototype.
</td> </tr> </table>
<table>
<tr>
<th>
Size of the data
</th>
<th>
</th>
<th>
The size of the data can range from 10-50 entries in the case of laboratory
and bench tests to several thousand entries per parameter during
demonstration.
</th> </tr>
<tr>
<td>
Data applicability
</td>
<td>
</td>
<td>
Data will be useful for the validation of ozonolysis technology in the
treatment of O&G wastewater at full scale. These data will also provide useful
information on the synergies with other techniques studied in the project. The
laboratory and prototype data will therefore be useful to technology
developers and to the configuration of the integrated platform.
</td> </tr>
<tr>
<td>
</td>
<td>
**DataSet: Fenton processes**
</td> </tr>
<tr>
<td>
Data set reference name
</td>
<td>
</td>
<td>
Integroil-006. Information on the Fenton processes
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
</td>
<td>
Application of Fenton and Fenton-like processes to achieve maximum efficiency
in the removal and/or mineralization of organic compounds from O&G wastewater
and/or increase of biodegradability or decrease of toxicity.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
</td>
<td>
The data will be original to the research project.
The generated data will come from the optimization of the operating conditions
(reaction time, concentration of reagents needed, amount and type of catalyst,
pH, temperature, etc.) for the different case studies considered at laboratory
scale, as well as from the operation of the individual prototype, if
considered.
</td> </tr>
<tr>
<td>
Size of the data
</td>
<td>
</td>
<td>
The size of the data can range from 10-50 entries in the case of laboratory
and bench tests to several thousand entries per parameter during
demonstration.
</td> </tr>
<tr>
<td>
Data applicability
</td>
<td>
</td>
<td>
Data will be useful for the validation of Fenton-like processes in the
treatment of O&G wastewater at full scale. These data will also provide useful
information on the synergies with other AOP techniques studied in the project.
The laboratory and prototype data will therefore be useful to technology
developers and to the configuration of the integrated platform.
</td> </tr>
<tr>
<td>
</td>
<td>
**DataSet: Photocatalysis processes**
</td> </tr>
<tr>
<td>
Data set reference name
</td>
<td>
</td>
<td>
Integroil-007. Information on the photocatalytic processes
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
</td>
<td>
Application of photocatalysis to achieve maximum efficiency in the removal
and/or mineralization of organic compounds from O&G wastewater and/or increase
of biodegradability or decrease of toxicity.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
The data will be original to the research project.
The generated data will come from the optimization of the operating conditions
(reaction time, amount and type of catalyst, pH, temperature, etc.) for the
different case studies considered at laboratory scale, as well as from the
operation of the individual prototype, if considered.
</td> </tr>
<tr>
<td>
Size of the data
</td>
<td>
The size of the data can range from 10-50 entries in the case of laboratory
and bench tests to several thousand entries per parameter during
demonstration.
</td> </tr>
<tr>
<td>
Data applicability
</td>
<td>
Data will be useful for the validation of photocatalysis in the treatment of
O&G wastewater at full scale. These data will also provide useful information
on the synergies with other AOP techniques studied in the project. The
laboratory and prototype data will therefore be useful to technology
developers and to the configuration of the integrated platform.
</td> </tr> </table>
## 2.1.5 DATASET 8: Catalytic Wet Air Oxidation (CWAO) (generated by APLICAT)
<table>
<tr>
<th>
</th>
<th>
**DataSet: CWAO**
</th> </tr>
<tr>
<td>
Data set reference name
</td>
<td>
</td>
<td>
Integroil-008. Information on the Catalytic Wet Air Oxidation process
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
</td>
<td>
Application of a catalytic process for the removal of dissolved organic
compounds from O&G wastewater, with the aim of reducing TOC and COD by around
30%.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
</td>
<td>
The data will be original to the research project.
The generated data will be used for the optimization of the operating
conditions (reaction time, amount and type of catalyst, pH, catalyst
regeneration conditions, etc.) for the different case studies considered at
laboratory scale and will also come from the operation of the individual
prototype.
</td> </tr>
<tr>
<td>
Size of the data
</td>
<td>
</td>
<td>
The size of the data can range from 10-50 entries in the case of laboratory
and bench tests to several thousand entries per parameter during
demonstration.
</td> </tr>
<tr>
<td>
Data applicability
</td>
<td>
</td>
<td>
Data will be useful for the validation of CWAO technology in the treatment of
O&G wastewater at full scale. These data will also provide useful information
on the synergies with other techniques studied in the project. The laboratory
and prototype data will therefore be useful to technology developers and to
the configuration of the integrated platform.
</td> </tr> </table>
## 2.1.6 DATASET 9: Life cycle assessment and costing of the treatment
platform (generated by 2.-0 LCA consultants)
<table>
<tr>
<th>
**DataSet: Life cycle assessment and costing of the treatment platform**
</th> </tr>
<tr>
<td>
Data set reference name
</td>
<td>
</td>
<td>
INTEGROIL-009. Life cycle assessment and life cycle costing
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
</td>
<td>
The data set will include all the underlying information used in the life
cycle assessment and life cycle costing studies applied to the treatment
platform. It will mainly consist of mass and energy balances of the assessed
processes, as well as the derived costs.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
</td>
<td>
Primary data to build this data set will originate mainly from the project
itself. Secondary data sources will include literature and previous studies
conducted by 2.-0 LCA consultants.
</td> </tr>
<tr>
<td>
Size of the data
</td>
<td>
</td>
<td>
The data will be made available as Excel tables and Word documents.
</td> </tr>
<tr>
<td>
Data applicability
</td>
<td>
These data can be used by other LCA practitioners or cost assessors within the
oil and gas and wastewater treatment sectors for similar studies.
</td> </tr> </table>
# 2.2 Standards and metadata
Metadata is ‘data about data’ and is the information that enables data users
to find and/or use a dataset.
## 2.2.1 DATASET 1: Real-time information of the treatment platform
The data will be held in transcript form in accessible file formats such as
.xls (Excel). Metadata will include date and/or time, location of measurement
(if relevant), and parameter measured.
No particular standard will be used, since the data format is simple and
Acciona Agua's commonly used format should be sufficient. In this format,
process parameters are located in the first row of an Excel file, while the
sampling times (as dates or times, depending on the duration of the experiment
and the frequency of sampling) are located in the first column. The data are
registered at the intersection of each parameter and time. The general
spreadsheet also describes the experiment type and, if relevant, the location
of sampling.
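The layout described above (parameters across the first row, sampling times down the first column, values at the intersections) can be sketched with Python's standard `csv` module. The parameter names and values below are invented for illustration and are not taken from Acciona Agua's actual template:

```python
import csv
import io

# First row: process parameters; first column: sampling times;
# measured values at the intersections (illustrative data only).
header = ["Sampling time", "Turbidity (NTU)", "TMP (bar)", "Flux (LMH)"]
rows = [
    ["2017-05-01 08:00", 4.2, 0.35, 55.0],
    ["2017-05-01 12:00", 3.9, 0.37, 54.1],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(header)
writer.writerows(rows)

csv_text = buf.getvalue()
print(csv_text.splitlines()[0])
```

A reader can load the same layout back with `csv.reader`, recovering one parameter per column and one sampling time per row.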
## 2.2.2 DATASET 2: Optimization of flotation process
The data will be held in transcript form in accessible file formats such as
.xls (Excel), and will include the conditions of process optimization
(coagulant concentration, microsphere concentrations, etc.) with the main
process parameters (type of chemical, concentration, temperature, time, etc.)
and the removal efficiency for different parameters (turbidity, O&G, etc.)
distributed in columns.
## 2.2.3 DATASET 3 and 4: Optimization of ceramic membranes
The data will be held in transcript form in accessible file formats such as
.xls (Excel), and will include (1) the operational conditions of ceramic
filtration and the corresponding performance parameters (permeability, fouling
rate), (2) the operational conditions of the biological reactor in MBR
configuration, (3) the parameters related to effluent quality and (4) the
conditions of each cleaning stage (one stage in each row) with the main
cleaning parameters (type of chemical, concentration, temperature, pH, time,
etc.) and the cleaning efficiency distributed in columns.
## 2.2.4 DATASET 5, 6 and 7: Advanced oxidation processes
The data will be held in transcript form in accessible file formats such as
.xls (Excel). Metadata will include date and/or time, location of measurement
(if relevant), and parameter measured. URV will use Excel file templates for
any relevant AOP experiment. The experiment type, description and process
parameters will be given.
## 2.2.5 DATASET 8: CWAO
The data will be held in transcript form in accessible file formats such as
.xls (Excel). Metadata will include date and/or time, location of measurement
(if relevant), and parameters measured. APLICAT will use Excel file templates
for any relevant experiment. The experiment type, description and process
parameters will be given.
## 2.2.6 DATASET 9: Life cycle assessment and costing of the treatment
platform
The data stored in the repository will be fully documented with metadata,
including data sources and data quality, assumptions made and methods
employed. All metadata will be incorporated in the Word and Excel tables.
LCA simulation of the treatment platform will be carried out with the
commercial software SimaPro, and for this reason the actual SimaPro files
cannot be shared. However, the shared Word and Excel documents will allow data
users to reproduce the methods and results with other software.
# 2.3 Data sharing
The data sharing procedures are the same across the datasets and are in
accordance with the Grant Agreement (Article 29.3).
The partners will deposit the research data, including associated metadata,
needed to validate the results presented in the deposited scientific
publications. Research papers written and published during the funding period
will be made available with a subset of the data necessary to verify the
research findings.
The collected and elaborated data will be stored in an open access repository.
The ZENODO repository will be used by all project partners in those cases
where data can be shared.
The collected data will likely comprise two components:
* Data collected, assembled, or generated in each experiment. This could include date and/or time, location of measurement (if relevant), and parameter measured.
* Data that may receive copyright protection due to intellectual property rights. Each partner will decide in such a case how the data is stored and managed, and will decide what data needs to be fully included in a database, how to organize the data, and how to relate different data elements. In cases where a dataset cannot be fully shared due to conflicts with intellectual property rights, a range of data will be included instead.
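The fallback of publishing ranges instead of exact values, mentioned above for IP-protected data, can be sketched as follows. The bin width and the example values are assumptions for illustration only:

```python
def to_range(value, width=10):
    """Replace a confidential exact value with the bin it falls into,
    so the dataset can still be shared as ranges (illustrative only)."""
    low = int(value // width) * width
    return f"{low}-{low + width}"

# Example: exact (confidential) flux values become shareable ranges.
exact = [52.4, 57.9, 61.3]
shared = [to_range(v) for v in exact]
print(shared)
```

The same idea applies to any numeric column a partner cannot release exactly; only the bin width needs agreeing per parameter.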
Research data will be made available in a way that can be shared and easily
reused by others, using open file formats whenever possible so that the data
can be read by both proprietary and open source software.
Documenting datasets, data sources, and the methodology by which the data were
acquired establishes the basis for interpreting and appropriately using data.
Each generated or collected and deposited dataset will include documentation
to help users re-use it.
## _Opt Out_
It is important to note that in the case that a dataset adversely affects the
operations or reputation of any partner, it must be kept confidential and must
not be published or shared on open access platforms.
**DATASET 1 generated by TUPRAS will not be publicly available due to the need
for confidentiality in connection with security issues.** TUPRAS holds ISO/IEC
27001:2013 Information Security Management System certification. According to
ISO/IEC 27001:2013, the asset management system classifies shared data as
public, internal, confidential or highly confidential. Internal, confidential
and highly confidential data cannot be published. In the case that such data
are published, legal action will be initiated.
# 2.4 Archiving and preservation
Data will be preserved for 10 years after the end of the project. After this
time, it is considered that new technologies would have appeared and stored
data will have little value.
To ensure high-quality long-term management and maintenance of the dataset,
the consortium will use repositories that aim to provide archiving and
preservation of long-tail research data (Zenodo).
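A deposition in Zenodo carries a small JSON metadata record alongside the files. A hedged sketch of such a payload follows; the field names follow Zenodo's REST deposition API, but every value here is a placeholder, and an actual upload would be posted to the Zenodo API with a personal access token:

```python
import json

# Illustrative Zenodo deposition metadata; all values are placeholders.
deposition = {
    "metadata": {
        "title": "INTEGROIL example dataset",
        "upload_type": "dataset",
        "description": "Filtration performance data (placeholder description).",
        "creators": [{"name": "Doe, Jane", "affiliation": "Example partner"}],
        "keywords": ["wastewater", "membrane filtration"],
        "access_right": "open",
    }
}
print(json.dumps(deposition["metadata"]["title"]))
```

Keeping such a record with each dataset makes the later deposit step a matter of attaching files rather than reconstructing provenance.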
# 2.5 Ethical aspects
INTEGROIL is not going to deal with personal data. Notwithstanding, any
contract/agreement with suppliers and workers is issued in compliance with the
terms of article 12 of the current Spanish Organic Law on Personal Data
Protection.
# 2.6 Other issues
Not applicable.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0596_ArchAIDE_693548.md
# Executive summary
The formal Data Management plan consists of an online document written in the
templates within the
‘DMPonline’ tool: part of the the Open Research Data Pilot (ORD) funded under
Horizon 2020. The ArchAIDE DMP is live at:
_https://dmponline.dcc.ac.uk/plans/12379_
The online DMP is available to view at the above URL. According to the ORD
guidance, the DMP is to be considered a living document, with edits
implemented over the course of the ArchAIDE project. The document consists of
three elements:

1. Initial DMP: a first version of the DMP, submitted within the first six months of the project.
2. Detailed DMP: updated over the course of the project whenever significant changes arise.
3. Final review DMP: reflecting all updates made over the course of the project.
The ArchAIDE European project aims at developing a highly innovative
application for the archaeological practice, which can quickly recognize
potsherds and improve dating and classification systems. The project, funded
under the Horizon 2020 European programme, is coordinated by the researchers
of the University of Pisa. ArchAIDE aims at improving access and promotion of
the European archaeological heritage through the development and
implementation of an open-data database, which will allow all application
users to use this information. All research data collected and generated
during the project will be managed securely during the project lifetime, made
available as Open Access data by the project end, and securely preserved in
the Archaeology Data Service (ADS) repository into perpetuity. This will
include textual data and visual data (photographs, vector and raster
images/drawing, eventually 3D models), which will be collected and documented
according to the internationally agreed standards set out in the ADS/ Digital
Antiquity Guides to Good Practice
(http://guides.archaeologydataservice.ac.uk). Linked open data held in the ADS
RDF triplestore will provide an alternative means of access to the data, via a
SPARQL query endpoint.
The Project Data Contact is Tim Evans (Archaeology Data Service)
[email protected]
# Data summary
* The purpose of data collection is to populate a database that will act as automated reference tool for the recognition and classification of pottery sherds from archaeological excavations.
* The reference database - where copyright has been cleared - will be publicly available under the standard ADS Terms and Conditions of Use.
* The primary data type will be the database itself which will incorporate textual data, raster and vector images, and 3D models.
* The database will incorporate data from existing sources including the Roman Amphorae digital resource ( _http://dx.doi.org/10.5284/1028192_ )
* The size of the Roman Amphorae database (which will be used to seed the resource) is currently 7Gb, with the additional datasets and potential new data (scans, photographs + 3D models) this may be expected to rise significantly. An estimate of 1 terabyte would represent a maximum expected size.
* The dataset will provide a reference resource for archaeological ceramic specialists and nonspecialists alike.
# Fair Data
**2.1. Making data findable, including provisions for metadata:**
* The final dataset will be archived by the Archaeology Data Service (ADS) as a single collection. Collection-level metadata (based on Dublin Core) will be created, which will allow the resource to be found within the main ADS website. This metadata will also by exposed/consumed by other portals such as ARIADNE. In addition, it is also planned to publish the dataset as Linked Open Data via the stores within Allegrograph, and published via Pubby and the ADS' SPARQL interface.
* The ADS archive will be identifiable via a Digital Object Identifier (DOI), registered with Datacite.
* ADS Collection-level metadata is based on Dublin Core (DC) elements. DC.Subject terms are based on archaeology/heritage specific thesauri and vocabularies updated and maintained as Linked Open Data (LOD) by national cultural heritage bodies (see _http://www.heritagedata.org/_). These allow subject terms such as 'CERAMIC' to be meaningfully and consistently recorded. As part of the ongoing ARIADNE project, these terms have also been mapped to the Ariadne Dataset Catalogue Model (ACDM; see _http://portal.ariadne-infrastructure.eu/about_)
* Over the course of data collection, a clear versioning system - aided by a consistent file-naming strategy - will be used, based on the guidelines stipulated in the Archaeology Data Service / Digital Antiquity Guides to Good Practice.
* As outlined above, the final archive will reside with the ADS with metadata compiled to their standards, based on DC terms. Existing heritage thesauri will be used for the recording of subject terms
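A collection-level Dublin Core record like the one described above can be sketched with Python's standard library. This is a minimal illustration: the title and identifier values are placeholders, not an actual ADS record:

```python
import xml.etree.ElementTree as ET

# Build a minimal Dublin Core record (element names per DC 1.1;
# all values below are placeholders, not real ADS metadata).
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for tag, text in [
    ("title", "ArchAIDE reference database (example record)"),
    ("subject", "CERAMIC"),
    ("identifier", "doi:10.5284/0000000"),  # placeholder DOI
]:
    el = ET.SubElement(record, f"{{{DC}}}{tag}")
    el.text = text

xml_text = ET.tostring(record, encoding="unicode")
print(xml_text)
```

Serialising the record this way keeps the `dc:` element names that harvesters and portals such as ARIADNE expect.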
**2.2. Making data openly accessible:**
* The main output of the project will be the project reference database. This database will be archived with the Archaeology Data Service (ADS). This database - with the exception of material not copyright cleared - will be made available to download as an ADS interface. ADS archives are free to use under their _Terms and Conditions_ .
* The ADS interface will present the data in open formats enabling wider re-use, for example Comma Separated Values (.csv)
* The database will also be published as LOD via the ADS triplestore.
* The ADS archive will include file-level and collection-level metadata
* The main ADS archive will present the raw data to download in common and open formats (e.g. CSV or JPG). The LOD can be queried via a SPARQL client or by using the ADS SPARQL query interface.
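An illustrative SPARQL query against such an endpoint might look like the following. The predicates (plain Dublin Core) and the result shape are assumptions for the sketch, not the actual ADS triplestore schema, and the endpoint URL is deliberately omitted:

```python
from urllib.parse import urlencode

# Illustrative SPARQL SELECT query; predicates are an assumption,
# not the actual ADS schema.
query = """\
PREFIX dc: <http://purl.org/dc/elements/1.1/>
SELECT ?item ?title
WHERE {
  ?item dc:subject "CERAMIC" .
  ?item dc:title ?title .
}
LIMIT 10"""

# A client would send this as the 'query' parameter of a GET request
# to the SPARQL endpoint (endpoint URL omitted here).
params = urlencode({"query": query})
print(params[:20])
```

Any SPARQL client, or the ADS web query interface, accepts queries of this shape.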
**2.3. Making data interoperable:**
ADS collection-level metadata will incorporate a number of LOD vocabularies to
facilitate interoperability, these include:
* Heritage data thesauri for subject terms (http://www.heritagedata.org/)
* Getty Thesaurus of Geographic Names for spatial data
* Library of Congress Subject Headings (LCSH)
* The ADS also record spatial data to be compliant with the GEMINI metadata standard
In order to ensure interoperability between resources in different languages,
multilingual controlled vocabularies will be incorporated into the database.
Similar work in the archaeological domain has already been carried out by the
EU Infrastructures funded ARIADNE project, mapping country or data centre
specific chronologies, object and monument terms to a central neutral spine -
the Art and Architecture Thesaurus of the Getty Research Institute.
Following the success of this initiative for ARIADNE, ArchAIDE will use a
similar methodology and use the Getty AAT to build a neutral spine of terms
specific to ceramic recording. These include:
* Sherd type (for example "rim")
* Form (for example "plate")
* Decoration type (for example "incised")
* Decoration colour (for example "blue")
Project partners will then identify specific terms used within their national
or regional catalogues and map them to those neutral concepts.
* UB will participate in this task for Catalan and Spanish vocabularies
* UNIPI will contribute with southern-European vocabularies
* UCO with German terminology
* University of York for UK terminologies
* An independent ceramic specialist has also contributed an existing thesaurus of English-French terms
The use of the AAT terms will not only allow a linguistic mapping to be
incorporated within the reference database and public facing application, but
also a conceptual mapping that will allow for differences in terminologies to
be overcome. To explain this last point, archaeologists in different countries
may have different appreciations of what is a "plate" or "platter". However,
in the AAT both terms are hierarchically below a broader term "vessels for
serving and consuming food". The database and user interface can use this
knowledge organisation to allow the ArchAIDE application to search on very
specific terms (such as "plate"), but then to return other results that also
map to broader parent terms so as not to omit results based on a subjective
and personal appreciation of what an object is called.
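The broadened search this enables can be sketched as follows. A toy two-level hierarchy stands in for the actual Getty AAT; the terms are the examples used above:

```python
# Toy concept hierarchy standing in for the Getty AAT: each narrow
# term maps to its broader parent concept (terms from the text above).
broader = {
    "plate": "vessels for serving and consuming food",
    "platter": "vessels for serving and consuming food",
    "bowl": "vessels for serving and consuming food",
}

def broadened_search(term, hierarchy):
    """Return the term plus all siblings sharing the same broader
    concept, so a search for 'plate' also surfaces 'platter'."""
    parent = hierarchy.get(term)
    if parent is None:
        return [term]
    return sorted(t for t, p in hierarchy.items() if p == parent)

print(broadened_search("plate", broader))
```

In the real application the hierarchy would be several levels deep and queried from the mapped AAT concepts rather than a hard-coded dictionary.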
**2.4. Increase data re-use (through clarifying licenses):**
* The dataset - as delivered via the ADS archive and excluding any material without formal copyright permission - will be freely available to re-use for research purposes as stipulated in the ADS _Terms_ _and Conditions_ of use.
* The dataset will be made available upon completion of the project, planned for 2019.
* Quality assurance is a high priority for the project. During the collection phase all data collected and maintained by partners will be subject to standard best practice, as outlined in the ADS/Digital Antiquity _Guides to Good Practice_ . These practices include basic IT good practice on file naming, strict versioning, secure backups (and maintenance of backups), and virus scanning. In addition, all partners creating data will be responsible for ensuring that that quality of material being produced is sufficient to meet the needs of the project. This will include ensuring that scans and other image captures are of the correct detail and quality to be incorporated within the various modelling applications, and that reference information is correctly entered into the ArchAIDE database. The ArchAIDE database will be maintained by INERA, with data cleaning, enhancement and validation performed by all project partners.
* Upon completion of the project the data will be deposited with the ADS, who will ensure that file formats are suitable and that all data is adequately documented to ensure data preservation. An overview of the ADS ingest process can be found in _ADS Ingest Manual_ .
* The data will be archived and disseminated by the ADS in perpetuity. The ADS is a long-standing and accredited Digital Repository, with a peer reviewed policy on ensuring long-term preservation ( _http://archaeologydataservice.ac.uk/advice/preservation_ ) .
# Allocation of resources
* The costs for data management (and by extension making the data FAIR) during the data collection phase have been estimated to be minimal, and are covered by the existing scheme of works and funds for the relevant work packages. The main task to be undertaken to ensure data is FAIR is the deposition of the final dataset with the Archaeology Data Service, which forms Work Package 10 of the ArchAIDE project. Within WP10, the main body of work for archiving is 10.2 Data archiving, which has been costed, via a calculation of the project months assigned to this task, at 28,895 Euros. It should be noted that ADS costs are one-off, and cover the management and preservation of the dataset in perpetuity.
* Data management will be overseen by Universitaet zu Koeln and Università di Pisa during the data collection phase, and latterly the ADS as part of the Work Packages to ensure preservation and dissemination.
* The financial costs for ensuring management and presentation of the project dataset by the ADS have been included in the original project design. The impact of the ADS has recently been analysed by an independent study. This project established that the archiving and dissemination of data by the ADS was of significant research and financial value to the wider community.
# Data security
Data security will be addressed for the period of Data Collection (1) and
deposition of the archive with the ADS for Preservation (2).
1. During Data Collection all partners will adhere to best practice, as outlined in the ADS/Digital Antiquity Guides to Good Practice. In brief, the following precautions will be undertaken over the course of the data creation phase:
* This project will follow rigorous procedures for disaster planning, with (off-site) copies made on a daily, weekly and monthly basis. Backup copies will be validated to ensure that all formatting and important data have been accurately preserved. Each backup will be clearly labelled, along with its location.
* Periodic checks will be performed on a random sample of digital datasets, whether in active use or stored elsewhere. Appropriate checks will include searching for viruses and routine screening procedures included in most computer operating systems. These periodic checks will be in addition to constant, rigorous virus searching on all files.
2. At the end of the project, the dataset will be deposited with the ADS for secure preservation and access into perpetuity. One of the core activities of the ADS is the long term digital archiving of the data that has been entrusted to us. We follow the Open Archival Information System (OAIS) reference model and also have several internal policies and procedures that guide and inform our archiving work in order to ensure that the data in our care is managed in an appropriate and consistent way. These include:
• A _Preservation Policy_ : an annual reviewed policy document which
alongside detailed descriptions of ADS practice provides an overview of
internal procedures for archival policy. This includes an overview of ADS
accreditation, migration and backup/off-site storage. The following overview
is drawn from this document: "The ADS maintain multiple copies of data in
order to facilitate disaster recovery (i.e. to provide resilience). All data
are maintained on the main ADS production server in the machine room of the
Computing Service at the University of York. The Computing Service further
back up this data to tape and maintain off site copies of the tapes. Currently
the backup system uses Legato Networker and an Adic Scalar tape library. The
system involves daily (over-night), weekly and monthly backups to a fixed
number of media so tapes are recycled. All data are synchronised once a week
from the local copy in the University of York to a dedicated off site store
maintained in the machine room of the UK Data Archive at the University of
Essex. This repository takes the form of a standalone server behind the
University of Essex firewall. The server is running a RAID 5 disk
configuration which allows rapid recovery from disk failure. In the interests
of security outside access to this server is via an encrypted SSH tunnel from
nominated IP addresses. Data is further backed up to tape by the UKDA."
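The backup validation described above is commonly implemented as a fixity check: record a checksum when the backup is made, then recompute and compare it later. A minimal sketch follows; the choice of SHA-256 as the digest is an assumption for illustration, not something stated in the ADS policy:

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=65536):
    """Compute a file's SHA-256 checksum in chunks (fixity check)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Record a checksum at backup time, then re-verify it later.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"example dataset contents")
    path = f.name

recorded = sha256_of(path)
assert sha256_of(path) == recorded  # file unchanged since backup
os.unlink(path)
print(recorded[:8])
```

Running such a comparison on a random sample of files, as the periodic checks above describe, detects silent corruption that virus scanning alone would miss.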
# Ethical aspects
Although no ethical issues have been identified, as a matter of course all
staff will adhere to the ethical codes and guides to practice of their
respective organisations
* University of York (ADS): Code of practice and principles for good ethical governance.
* Tel Aviv Universities ethics policy _https://research-authority.tau.ac.il/home/ethics_
* University of Barcelona's Code of Good Research Practice _http://diposit.ub.edu/dspace/handle/2445/28543_
* University of Pisa's ethics code _https://www.unipi.it/index.php/statuto-regolamenti/item/1973codice-etico-della-comunit%C3%A0-accademica_
* University of Cologne's Guidelines for Safeguarding Good Academic Practice and Dealing with
Academic Misconduct:
_https://www.portal.unikoeln.de/sites/uni/PDF/Ordnung_gute_wiss_Praxis_en.pdf_
* Elements' ethics code _http://elements-arq.weebly.com/ethics-code.html_
# Other
The project Data Management Plan (DMP) presented here is based upon existing
internationally agreed procedures and recommendations as outlined in the
Archaeology Data Service / Digital Antiquity Guides to Good Practice, as well
as specific Digital Preservation based standards including the DCC checklist
and handbook of the Digital Preservation Coalition.
In addition to this required format, it was also thought beneficial to have a
separate instructive document to guide subsequent Work packages of the
ArchAIDE project and designed to cover practical and technical elements not
contained in the online tool. The following recommendations presented here are
based upon existing internationally agreed procedures and recommendations as
outlined in the Archaeology Data Service / Digital Antiquity _Guides to Good
Practice_ (Archaeology Data Service/Digital Antiquity 2011), as well as
specific Digital Preservation based standards (DCC 2013; Digital Preservation
Coalition 2016).
This document covers guidance over the lifetime of the project, from
considerations during data collection, deposition with the ADS, and finally
preservation and access at the ADS.
**6.1. Defining the data to be archived**
As defined in Section 3 of this document, the ArchAIDE database is in effect
two entities:
* The reference database
* The results database
The reference database will contain a number of digital and digitised
catalogues of pottery typologies, and at the end of the project cycle will
form a coherent static resource. The results database is intended to form a
dynamic user-driven dataset for incorporation based on field and laboratory
investigative and reporting workflows. The final ArchAIDE project archive
should consist of the reference database and data produced by the application
during the project lifetime.
**6.2. Data Collection (pre-archiving)**
The following Section covers guidelines and recommendations for the period of
data creation. It is inherently linked with the formal handover of the
archival dataset to the ADS (4.4), and that section should be consulted for
specifications on file formats and metadata. During data creation, it is
anticipated that the following guidance will be used.
6.2.1. Digitisation
Although a significant amount of data created by the project will be born-
digital, a proportion will also be digitised from physical sources. If
digitisation is undertaken, a number of organisations and guidelines exist
which provide substantial guidance on undertaking digitisation. JISC Digital
Media provides a wide range of advice on digitising existing images.
6.2.2. Version Control
Strict version control will be observed, primarily through the use of:
* File naming conventions
* Standard headers listing creation dates and version numbers
* File logs
Versions that are no longer needed will be removed after ensuring that
adequate backup files have been created.
6.2.3. File Structures + Naming
Files will be organised into easily understandable directory structures.
Following a logical data structure throughout the project will result in less
time spent preparing data for archiving at the end of the process. Adherence
to a predefined file structure will also reduce data loss and provide files
with an absolute location. An example structure is included below; please note
that it is offered only as an example of a clear structure and is not
prescriptive.
File naming will be considered from the very outset of a project. Every effort
will be made to make file names both descriptive and unique. The following
conventions will be used at all times:
* File names should use only alpha-numeric characters (a-z, 0-9), the hyphen (-) and the underscore (_). No other punctuation or special characters should be included within the filename.
* A full stop (.) should only be used as a separator between the file name and the file extension and should not be used elsewhere within the file name.
* Files must have a file extension to help the ADS and future users of the resource determine the file type.
* Lower case characters should be used, and ensure that supplied documentation accurately reflects the case of your filenames.
Some examples would thus be:
* siteid_artefactid_drawing_042.tif
* siteid_artefactid_photograph_012.tif
* siteid_artefactid_model_131.xyz
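A convention like this is easy to enforce mechanically. The sketch below is illustrative only: the regular expression is an assumption derived from the rules above, not an ADS requirement.

```python
import re

# Assumed pattern reflecting the conventions above: lowercase alphanumerics,
# hyphen and underscore only, with a single dot before the file extension.
FILENAME_PATTERN = re.compile(r"^[a-z0-9_-]+\.[a-z0-9]+$")

def is_valid_archive_name(filename: str) -> bool:
    """Return True if the filename follows the naming conventions above."""
    return bool(FILENAME_PATTERN.match(filename))

print(is_valid_archive_name("siteid_artefactid_drawing_042.tif"))  # True
print(is_valid_archive_name("Site Photo #1.TIF"))                  # False
```

Running such a check over the whole directory tree before deposition catches non-conforming names while they are still cheap to fix.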
6.2.4. Secure backup
Backup is the familiar task of ensuring that there is an emergency copy, or
snapshot, of data held somewhere other than the primary location. This project
will follow rigorous disaster-planning procedures, with offsite copies
made on a daily, weekly and monthly basis. These are important in the lifespan
of the project, but are not the same as long-term archiving because once the
project is completed and its digital archive safely deposited, the action of
backing up will become unnecessary. Backup copies will be validated to ensure
that all formatting and important data have been accurately preserved. Each
backup will be clearly labelled.
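Validating that a backup accurately preserves its source can be automated with checksums. A minimal sketch, assuming SHA-256 as the fixity algorithm (the function names are illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_backup(original: Path, backup: Path) -> bool:
    """A backup copy is valid only if its checksum matches the original."""
    return sha256_of(original) == sha256_of(backup)
```

Storing the checksums alongside each labelled backup also gives later periodic checks a fixed reference point.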
6.2.5. Periodic checking for viruses and other issues
Periodic checks will be performed on a random sample of digital datasets,
whether in active use or stored elsewhere. Appropriate checks will include
searching for viruses and routine screening procedures included in most
computer operating systems. These periodic checks will be in addition to
constant, rigorous virus searching on all files.
**6.3. Archiving with the ADS**
At the end of the project, the defined dataset (see 4.2) will be deposited
with the ADS for secure preservation and access into perpetuity.
6.3.1. Selection and retention
Through adherence to the guidelines on version control it is hoped that little
time should be required for a review of data to be submitted to the ADS.
However, a review should be undertaken to ensure that the archive does not
contain:
* Duplicates
* Working or backup versions of files
* Correspondence (emails or letters) or informal notes generated over the course of the project (note that if files explain other files within the archive they should be considered as metadata and included)
* Any extraneous or irrelevant materials
6.3.2. File formats
The following formats should be used for deposition of the archive with the
ADS. More detail on each datatype is included in the specific sections below.
<table>
<tr>
<th>
**Data type**
</th>
<th>
**File format**
</th>
<th>
**Notes**
</th> </tr>
<tr>
<td>
Database
</td>
<td>
Each table or object should be exported as: Comma Separated Values (.csv)
</td>
<td>
UTF-8 encoding should be used if tables contain non-ASCII characters
</td> </tr>
<tr>
<td>
Raster images
</td>
<td>
All raster images should be supplied in any of the following formats:
* Uncompressed Baseline TIFF v6 (.tif)
* Portable Network Graphic (.png)
* Joint Photographic Expert Group (.jpg)
* JPEG 2000 (.jp2)
</td>
<td>
Should be used for photographs and flat drawings. TIFF is the ADS preferred
format but others are accepted
</td> </tr>
<tr>
<td>
Vector images
</td>
<td>
Scalable Vector Graphics (.svg)
</td>
<td>
An open standard, XML-based format used to describe 2D vector graphics
developed by the W3C
</td> </tr>
<tr>
<td>
Computer-Aided Design
</td>
<td>
AutoCAD (.dwg or .dxf) version 2010 (AC1024)
</td>
<td>
</td> </tr>
<tr>
<td>
3D models
</td>
<td>
Wavefront OBJ (.obj)
X3D (.x3d)
Polygon File Format (.ply)
Uncompressed Baseline TIFF v6 (.tif)
Digital Negative (.dng)
</td>
<td>
OBJ, X3D or PLY are acceptable for 3D objects. TIFF or DNG should be used for
any photographs used for the generation of model textures
</td> </tr>
<tr>
<td>
Documents
</td>
<td>
Microsoft Open XML (.docx) OpenDocument Text (.odt)
</td>
<td>
Either format can be used
</td> </tr> </table>
6.3.3. Metadata
All files should be accompanied by suitable metadata for that specific
metadata type. The ADS has specific guidance and templates for metadata
available on _its website_ . Individual links to templates are included in
the overview of data types presented below.
6.3.4. Database files
Databases are to be deposited as CSV files – usually as flat exports from the
database software being used. For the purposes of the ADS, the core of the
database is the data tables along with documentation and metadata describing
the contents of and relationships between tables. The order or layout of the
columns and rows may also be of significance, but forms, reports, queries and
macros are not seen as core data and are therefore often not preserved.
6.3.5. General comments
It is recommended that certain checks be made prior to deposition with the
ADS.
* Tables: although it should be assumed that databases should be migrated in their entirety, an assessment should be made in order to establish which tables should be migrated. Tables in the databases used to temporarily store data are not needed for preservation.
* Formulae, Queries, Macros: if the file contains formulae or queries that need to be preserved in their own right then these need to be identified, as migrated versions of the file may only preserve the actual values calculated by the functions and not the functions themselves. Queries may need to be preserved separately and documented within a text file so functionality can be recreated at a later date.
* Comments or Notes: as with macros and formulae, the migration process may not save comments or text notes added to a file. Before migration, comments will need to be stored in a separate text file with a clear indication of which file and cell the comment relates to.
* Special Characters: the database may contain special or foreign characters such as ampersands, smart quotes or the em dash ("—") which can interfere with the export and subsequent display of the data. Such characters will often not export correctly to a basic text file unless a specific character set (e.g. UTF-8) is specified.
* Links: it is important that the relationships between tables are understood, documented (see below) and are correct (checks can be made to ensure that duplicate or orphan records aren't present). If the database contains links to images, then checks should be made to ensure that these filenames are stored correctly.
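The flat UTF-8 CSV export recommended above can be scripted so every deposited table carries its column names. A minimal sketch, assuming an SQLite database and a pre-vetted list of table names (the function name is illustrative):

```python
import csv
import sqlite3

def export_tables_to_csv(db_path: str, tables: list) -> None:
    """Export each core data table as a UTF-8 encoded CSV file.

    `tables` must be a trusted list of table names, since they are
    interpolated directly into the SQL statement.
    """
    conn = sqlite3.connect(db_path)
    try:
        for table in tables:
            cursor = conn.execute(f"SELECT * FROM {table}")
            headers = [column[0] for column in cursor.description]
            with open(f"{table}.csv", "w", newline="", encoding="utf-8") as f:
                writer = csv.writer(f)
                writer.writerow(headers)   # column names as the first row
                writer.writerows(cursor.fetchall())
    finally:
        conn.close()
```

Explicitly passing `encoding="utf-8"` addresses the special-character export problem noted in the checklist above.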
## 6.3.5.1 File metadata
* A template for database metadata can be downloaded from the ADS website here: _http://archaeologydataservice.ac.uk/attach/FilelevelMetadata/ADS_database_metadata_template.ods_
* An entity relationship model should also be included.
6.3.6. Raster images
The following precautions should be made when creating or converting raster
images:
* Image Size and Resolution - conversions should ensure that the original resolution and image size remains the same in the preservation file format. In addition, it is important that, when converting files to a new format, lossy compression is not applied to the image.
* Bit depth and Colour space - converted files should ensure that the bit depth and colour space of the original image are supported in preservation formats and that images are not degraded when converted.
Although these properties are components of all image formats, it is important
to ensure that they retain the same values when converting files to archival
formats.
In addition, embedded metadata such as EXIF and IPTC can also be seen in
certain cases as a significant property of an image and, where relevant,
should be preserved with the file or exported to a separate plain or delimited
text or XML file to be stored alongside the image. Although it is possible to
preserve JPEG EXIF within the TIFF tag structure it is better held in a
separate file, avoiding the risk of loss or corruption during later migration
and making the metadata more easily accessible. Extraction of EXIF fields is
relatively straightforward, with a number of free tools available.
## 6.3.6.1 File metadata
A template for raster image metadata can be downloaded from the ADS website
here:
_http://archaeologydataservice.ac.uk/attach/FilelevelMetadata/ADS_raster_metadata_template.ods_
6.3.7. Vector images + CAD
Vector images and CAD models should be deposited as either SVG or DWG. Unlike
common raster images such as photographs, many vector images are derived from
data created or held in other applications such as CAD or GIS (which in turn
is often derived from a range of data collection techniques such as
geophysical survey or laser scanning). It is advised that if an image is
derived from another dataset then preservation of the original file should
take precedence over the derived image.
## 6.3.7.1 File metadata
A template for vector image metadata can be downloaded from the ADS website
here:
_http://archaeologydataservice.ac.uk/attach/FilelevelMetadata/ADS_vector_metadata_template.ods_
6.3.8. Storage at the ADS
All research data collected and generated during the project will be managed
securely during the project lifetime, made available as Open Access data by
the project end, and securely preserved in the ADS repository into perpetuity.
The ADS follows the Open Archival Information System (OAIS) reference model,
and has several internal policies and procedures that guide and inform
archiving work in order to ensure that the data in its care is managed in an
appropriate and consistent way.
All data will be documented in the ADS Collections Management System, an
Oracle-based system, held on University of York servers, with a secure off-
site backup held in the UK Data Archive at the University of Essex. During the
lifetime of the project all partners will maintain current working data on
their own secure systems with weekly backup to external hard drives.
# Executive summary
_The deliverable outlines how the data collected or generated will be handled
during and after the DocksTheFuture project, describes which standards and
methodology for data collection and generation will be followed, and whether
and how data will be shared._
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the
Consortium with regard to the project research data. The DMP covers the
complete research data life cycle. It describes the types of research data
that will be generated or collected during the project, the standards that
will be used, how the research data will be preserved and what parts of the
datasets will be shared for verification or reuse. It also reflects the
current state of the Consortium Agreements on data management and must be
consistent with exploitation.
This Data Management Plan sets the initial guidelines for how data will be
generated in a standardised manner, and how data and associated metadata will
be made accessible. This Data Management Plan is a living document and will be
updated through the lifecycle of the project.
# EU LEGAL FRAMEWORK FOR PRIVACY, DATA PROTECTION AND SECURITY
Privacy is enabled by protection of personal data. Under the European Union
law, personal data is defined as “any information relating to an identified or
identifiable natural person”. The collection, use and disclosure of personal
data at a European level are regulated by the following directives and
regulation:
* Directive 95/46/EC on protection of personal data (Data Protection Directive)
* Directive 2002/58/EC on privacy and electronic communications (e-Privacy Directive)
* Directive 2009/136/EC (Cookie Directive)
* Regulation 2016/679/EC (repealing Directive 95/46/EC)
* Directive 2016/680/EC

According to Regulation 2016/679/EC, personal data
_means any information relating to an identified or identifiable natural
person (‘data subject’); an identifiable natural person is one who can be
identified, directly or indirectly, in particular by reference to an
identifier such as a name, an identification number, location data, an online
identifier or to one or more factors specific to the physical, physiological,
genetic, mental, economic, cultural or social identity of that natural person_
(art. 4.1). The same Regulation also defines personal data processing as
_any operation or set of operations which is performed on personal data or on
sets of personal data, whether or not by automated means, such as collection,
recording, organisation, structuring, storage, adaptation or alteration,
retrieval, consultation, use, disclosure by transmission, dissemination or
otherwise making available, alignment or combination, restriction, erasure or
destruction (art. 4.2)._
# Purpose of data collection in DocksTheFuture
This Data Management Plan (DMP) has been prepared by taking into account the
template of the “Guidelines on FAIR Data Management in Horizon 2020”
( _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020hi-oa-data-mgt_en.pdf_ ).
According to the latest Guidelines on FAIR Data Management in Horizon 2020
released by the EC Directorate-General for Research & Innovation,
“beneficiaries must make their research data findable, accessible,
interoperable and reusable (FAIR) ensuring it is soundly managed”.
The elaboration of the DMP will allow DTF partners to address all issues
related to ethics and data. The consortium will comply with the requirements
of Directive 95/46/EC of the European Parliament and of the Council of 24
October 1995 on the protection of individuals with regard to the processing of
personal data and on the free movement of such data.
DocksTheFuture will provide access to the facts and knowledge gleaned from the
project’s activities over a two-and-a-half-year period and after its end, to
enable the project’s stakeholder groups, including creative and technology
innovators, researchers and the public at large to find/re-use its data, and
to find and check research results.
The project’s activities aim to generate knowledge, methodologies and
processes through fostering cross-disciplinary, cross-sectoral collaboration,
discussion in the port and maritime sector. The data from these activities
will be mainly shared through the project website. Meetings with experts and
the main port stakeholders will be organised in order to get feedback on the
project and to share its results and outcomes.
DocksTheFuture will encourage all parties to contribute their knowledge
openly, to use and to share the project’s learning outcomes, and to help
increase awareness and adoption of ethics and port sustainability.
# Data collection and creation
Data types may take the form of lists (of organisations, events, activities,
etc.), reports, papers, interviews, expert and organisational contact details,
field notes, quantitative and qualitative databases, videos, audio and
presentations. Video and presentation dissemination material will be made
accessible online via the DocksTheFuture official website and disseminated
through the project’s media channels (Twitter, LinkedIn and Facebook), EC
associated activities, press, conferences and presentations.
DocksTheFuture will endeavour to make its research data ‘Findable, Accessible,
Interoperable and Reusable (F.A.I.R)’, leading to knowledge discovery and
innovation, and to subsequent data and knowledge integration and reuse.
The DocksTheFuture consortium is aware of the mandate for open access of
publications in the H2020 projects and participation of the project in the
Open Research Data Pilot.
More specifically, with respect to face-to-face research activities, the
following data will be made publicly available:
* Data from questionnaires in aggregate form;
* Visual capturing/reproduction (e.g., photographs) of the artefacts that the participants will co-produce during workshops.
# Data Management and the GDPR
In May 2018, the new European Regulation on Privacy, the General Data
Protection Regulation, (GDPR) came into effect. In this DMP we describe the
measures to protect the privacy of all subjects in the light of the GDPR. All
partners within the consortium will have to follow the same new rules and
principles.
In this chapter we will describe how the founding principles of the GDPR will
be followed in the Docks The Future project.
Lawfulness, fairness and transparency
_Personal data shall be processed lawfully, fairly and in a transparent manner
in relation to the data subject._
All data gathering from individuals will require informed consent from the
individuals engaged in the project. Informed consent requests will consist of an
information letter and a consent form. This will state the specific causes for
the activity, how the data will be handled, safely stored, and shared. The
request will also inform individuals of their rights to have data updated or
removed, and the project’s policies on how these rights are managed. We will
try to anonymise the personal data as far as possible; however, we foresee this
won’t be possible in all instances. Therefore, further consent will be asked
to use the data for open research purposes; this includes presentations at
conferences and publications in journals, as well as depositing a data set in
an open repository at the end of the project. The consortium tries to be as
transparent as possible in its collection of personal data. This means that,
when collecting data, the information leaflet and consent form will describe
the kind of information, the manner in which it will be collected and processed,
if, how, and for which purpose it will be disseminated and if and how it will
be made open access. Furthermore, the subjects will have the possibility to
request what kind of information has been stored about them, and they can
request, up to a reasonable limit, to be removed from the results.
Purpose limitation
_Personal data shall be collected for specified, explicit and legitimate
purposes and not further processed in a manner that is incompatible with those
purposes._
Docks The Future project won’t collect any data that is outside the scope of
the project. Each partner will only collect data necessary within their
specific work package.
Data minimisation
_Personal data shall be adequate, relevant and limited to what is necessary in
relation to the purposes for which they are processed._
_Only data that is relevant to the project’s questions and purposes will be
collected. However, since the involved stakeholders are free in their answers,
this could result in them sharing personal information that has not been asked
for by the project. This is normal in any project relationship, and we
therefore chose not to limit the stakeholders in their answer possibilities.
These data will be treated according to all guidelines on personal data and
won’t be shared without anonymisation or explicit consent of the stakeholder._
_Accuracy_
_Personal data shall be accurate and, where necessary, kept up to date_
_All data collected will be checked for consistency._
Storage limitation
_Personal data shall be kept in a form which permits identification of data
subjects for no longer than is necessary for the purposes for which the
personal data are processed_
_All personal data that will no longer be used for research purposes will be
deleted as soon as possible. All personal data will be made anonymous as soon
as possible. At the end of the project, if the data has been anonymised, the
data set will be stored in an open repository. If data cannot be made
anonymous, it will be pseudonymised as much as possible and stored for no
longer than the partner institution’s archiving rules allow._
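The pseudonymisation mentioned above can be implemented with a keyed hash, so records remain linkable across a dataset without storing the direct identifier. A minimal sketch, not a project decision: the key value and the 16-character truncation are illustrative assumptions.

```python
import hashlib
import hmac

# Hypothetical project-held secret; it must never be published with the data.
SECRET_KEY = b"project-held-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an e-mail address) with a keyed
    hash, so records stay linkable without exposing the person."""
    mac = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return mac.hexdigest()[:16]
```

Using a keyed hash (HMAC) rather than a plain hash means the mapping cannot be reversed by anyone who does not hold the key; destroying the key at project end effectively anonymises the stored records.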
_Integrity and confidentiality_
_Personal data shall be processed in a manner that ensures appropriate
security of the personal data, including protection against unauthorised or
unlawful processing and against accidental loss, destruction or damage, using
appropriate technical or organisational measures._
_All personal data will be handled with appropriate security measures applied.
This means:_
* _Data sets with personal data will be stored on a Google Drive server that complies with all GDPR regulations and is ISO 27001 certified._
* _Access to this Google Drive will be managed by the project management and will be given only to people who need to access the data. Access can be retracted if necessary._
* _All people with access to the personal data files will need to sign a confidentiality agreement._
_Accountability_
_The controller shall be responsible for, and be able to demonstrate
compliance with the GDPR._
_At project level, the project management is responsible for the correct data
management within the project._
# DocksTheFuture approach to privacy and data protection
On the basis of the abovementioned regulations, it is possible to define the
following requirements in relation to privacy, data protection and security:
* Minimisation: DocksTheFuture must only handle minimal data (that is, the personal data that is effectively required for the conduction of the project) about participants.
* Transparency: the project will inform data subjects about which data will be stored, who these data will be transmitted to and for which purpose, and about locations in which data may be stored or processed.
* Consent: Consents have to be handled allowing the users to agree the transmission and storage of personal data. The consent text included Deliverable 7.1 must specify which data will be stored, who they will be transmitted to and for which purpose for the sake of transparency. An applicant, who does not provide this consent for data necessary for the participation process, will not be allowed to participate.
* Purpose specification and limitation: personal data must be collected just for the specified purposes of the participation process and not further processed in a way incompatible with those purposes. Moreover, DocksTheFuture partners must ensure that personal data are not (illegally) processed for further purposes. Thus, those participating in project activities have to receive a legal note specifying this matter.
* Erasure of data: personal data must be kept in a form that allows for the identification of data subjects for no longer than is strictly necessary for the purposes for which the data were collected or for which they are further processed. Personal data that are no longer necessary must be erased or truly anonymised.
* Anonymity: the DocksTheFuture consortium must ensure anonymity by applying two strategies. On the one hand, anonymity will be granted through data generalisation; on the other hand, stakeholders’ participation in the project will be anonymous unless they voluntarily decide otherwise
The abovementioned requirements translate into three pillars:
1. Confidentiality and anonymity – Confidentiality will be guaranteed whenever possible. The only exemption can be in some cases for the project partners directly interacting with a group of participants (e.g., focus group). The Consortium will not make publicly accessible any personal data. Anonymity will be granted through generalisation.
2. Informed consent – The informed consent policy requires that each participant will provide his/her informed consent prior to the start of any activity involving him/her. All people involved in the project activities (interviews, focus groups, workshops) will be asked to read and sign an Informed Consent Form explaining how personal data will be collected, managed and stored.
3. Circulation of the information limited to the minimum required for processing and preparing the anonymous open data sets –The consortium will never pass on or publish the data without first protecting participants’ identities. No irrelevant information will be collected; at all times, the gathering of private information will follow the principle of proportionality by which only the information strictly required to achieve the project objectives will be collected. In all cases, the right of data cancellation will allow all users to request the removal of their data at any time
# FAIR (Findable, Accessible, Interoperable and Re-usable) Data within DocksTheFuture
Each DMP component below is presented with the issues to be addressed, followed by the project’s response.
1. Data summary
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
<table>
<tr>
<th>
The purpose of data collection in Docks The Future is to understand opinions
of, and get feedback on the Port of the Future from, relevant active
stakeholders, defined as groups or organisations having an interest or concern
in the project impacts, in order to collect their opinions and find out their
views about the “Port of the Future” concepts, topics and projects. This will
include consultation with the European Technology Platforms in the transport
sector (for example, Waterborne and ALICE), European innovation partnerships,
JTIs and KICs. Consortium members (individually) hold a consolidated list of
relevant selected stakeholders.
The following datasets are being collected:
* Notes and minutes of brainstorms and workshops, and pictures of the events (.doc, .jpeg/.png formats)
* Recordings and notes from interviews with stakeholders (.mp4, .doc format)
* Transcribed notes/recordings or otherwise ‘cleaned up’ or categorised data. (.doc, .xls format)
No data is being re-used. The data will be collected/generated before, during,
or after project meetings and through interviews with stakeholders.
The data will probably not exceed 2 GB, where the main part of the storage
will be taken up by the recordings.
The data will be useful for other project partners and in the future for other
research and innovation groups or organizations developing innovative ideas
about ports.
</th> </tr> </table>
2. Making data findable, including provisions for metadata
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
<table>
<tr>
<th>
The following metadata will be created for the data files:
* Author
* Institutional affiliation
* Contact e-mail
* Alternative contact in the organizations
* Date of production
* Occasion of production
Further metadata might be added at the end of the project.
All data files will be named so as to reflect clearly their point of origin in
the Docks The Future structure as well as their content. For instance, minutes
data from the meeting with experts in work package 1 will be named “yyy mmm
ddd DTF –WP1-meeting with experts”.
No further deviations from the intended FAIR principles are foreseen at this
point.
</th> </tr> </table>
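The metadata fields listed above could be captured in a machine-readable sidecar record per data file. A minimal sketch; the JSON layout, field names and function name are illustrative assumptions, not a project standard.

```python
import json
from datetime import date

def make_metadata(author, affiliation, email, alt_contact, occasion):
    """Build a JSON sidecar record holding the metadata fields listed above."""
    record = {
        "author": author,
        "institutional_affiliation": affiliation,
        "contact_email": email,
        "alternative_contact": alt_contact,
        "date_of_production": date.today().isoformat(),
        "occasion_of_production": occasion,
    }
    return json.dumps(record, indent=2)
```

Writing the record next to each data file (e.g. `report.doc` plus `report.doc.meta.json`) keeps the metadata discoverable without any special tooling.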
3. Making data openly accessible
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
Data will initially be closed to allow verification of its accuracy within the
project.
Once verified and published, all data will be made openly available. Where
possible, raw data will be made available; however, some data requires
additional processing and interpretation to make it accessible to a third
party. In these cases the raw data will not be made available, but the
processed results will be.
Data related to project events, workshops, webinars, etc. will be made
available on the DocksTheFuture website. No specific software tools are needed
to access the data. No further deviations from the intended FAIR principles
are foreseen at this point.
4. Making data interoperable
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
The collected data will be ordered so as to make clear the relationship
between the questions being asked and the answers being given. It will also be
clear to which category the different respondents belong (consortium members,
external stakeholders).
Data will be fully interoperable: full unrestricted access will be provided
to datasets stored in data files of standard formats, compatible with almost
all available software applications. No specific ontologies or vocabularies
will be used for the creation of metadata, thus allowing for unrestricted and
easy interdisciplinary use.
5. Increase data re-use (through clarifying licences)
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
Datasets will be publicly available. Further information will be provided at a
later stage of the project, to be decided by the owners/partners of the
datasets.
It is not envisaged that Docks The Future will seek patents. The data
collected, processed and analysed during the project will be made openly
available, following the same deadlines as the deliverables based on the
datasets. All datasets are expected to be publicly available by the end of the
project.
The Docks The Future general rule will be that data produced will be usable by
third parties after the lifetime of the project. For shared information,
standard formats and proper documentation will guarantee re-usability by third
parties.
The data are expected to remain re-usable (and maintained by the
partner/owner) for as long as possible after the project has ended.
6. Allocation of resources
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long-term preservation
Data will be stored in the coordinator's repository and will be maintained for
at least 5 years after the end of the project (with the possibility of
prolongation for extra years).
Responsibility for data management will lie with the Project Coordinator
(Circle).
No additional costs are foreseen for managing the project data.
7. Data Security
* Address data recovery as well as secure storage and transfer of sensitive data
Circle maintains a backup archive of all data collected within the project.
After the Docks The Future lifetime, the dataset will remain on Circle’s
server and will be managed by the coordinator.
8. Ethical Aspects
* To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former
No legal or ethical issues that could have an impact on data sharing arise at
the moment.
# Open Research Data Framework
The project is part of the Horizon 2020 Open Research Data Pilot (ORD pilot),
which “aims to make the research data generated by selected Horizon 2020
projects accessible with as few restrictions as possible, while at the same
time protecting sensitive data from inappropriate access”. This implies that
the DocksTheFuture Consortium will deposit data on which research findings are
based and/or data with a long-term value. Furthermore, Open Research Data will
allow other scholars to carry on studies, hence fostering the general impact
of the project itself.
As the EC states, Research Data “refers to information, in particular facts or
numbers, collected to be examined and considered as a basis for reasoning,
discussion, or calculation. […] Users can normally access, mine, exploit,
reproduce and disseminate openly accessible research data free of charge”.
However, the ORD pilot does not force the research teams to share all the
data. There is in fact a constant need to balance openness with the protection
of scientific information, commercialisation and Intellectual Property Rights
(IPR), privacy concerns, and security.
The DocksTheFuture consortium adopts the best practice encouraged by the ORD
pilot, namely “as open as possible, as closed as necessary”. Given the legal
framework for privacy and data protection, the strategy the Consortium adopts
to manage data and to make them findable, accessible, interoperable and
re-usable (F.A.I.R.) is presented in what follows.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0601_MiARD_686709.md
|
# Metadata
Of prime importance for any spacecraft instrumentation is the knowledge of
time, location and orientation of the spacecraft and target. This is provided
for recent NASA and ESA datasets through the SPICE 2 software standards from
NASA's Navigation and Ancillary Information Facility (NAIF). For each
instrument observation, such information is available as a 'kernel' that can
be read by freely available tools. Most instrument teams write their own
software to use the relevant sub-set of information from the SPICE system. The
most recent version of the SPICE toolkit is version N65, released July 23rd
2014. New releases occur every two to three years. It is expected that the
next SPICE release will remove an existing limit to the size of DSK formatted
datasets (4 million facets). SPICE was developed in part explicitly to improve
archiving of datasets from planetary science missions. Although the use of
SPICE is not a requirement of the NASA Planetary Data System, it is
recommended by the International Planetary Data Alliance 3 .
The MiARD project has required some programming effort (by DLR) in order to
correctly read and interpret the Rosetta mission SPICE kernels for use with
the shape model, and to check the correctness of this code. We also had to
develop our own, more precise SPICE kernels as part of the production of the
shape model (see D1.4).
# Project dataset naming conventions
Because the project is using public data from many sources for a variety of
purposes, and aims to archive public data of different types, we do not expect
to be able to consistently use a 'project' nomenclature. Care will be taken to
ensure that version numbers, release dates and change logs (or dataset
descriptions) are used. Archives such as ESA's PSA have additional
documentation requirements.
# Quality control/review
The NASA/ESA shared archives have an internal quality review process which
must be satisfied before data is made available in the archive. For datasets
from the MiARD project supporting peer-reviewed publications, the quality
review is the peer review process of the journal.
Datasets from the project
# Shape models and associated data
Shape models of comet 67P are a precursor for much of the work within the
MiARD project - e.g. mapping of physical properties and modelling of activity.
## Terminology
The series of shape models obtained by the OSIRIS instrument team are released
with a designation SHAPn, where n denotes the time period of the observations
made by the OSIRIS camera which were used for the shape reconstruction. There
are plans for pre- and post-perihelion shape models which will use data from
more than one OSIRIS time period (SHAP1-SHAP6 for pre-perihelion and
SHAP7-SHAP8 for the post-perihelion model). As of August 2016, the SHAP5 SPC
model has been delivered to NASA's PDS/SBN to begin the review process prior
to archiving and publication. The SHAP7 SPC model is currently expected to be
ready for submission by the end of 2016. The 'final' shape models, SHAP8,
SHAP5PRE and SHAP8POST, are expected to be ready no earlier than April 2017,
and will have a sampling of about 6 m. We have maintained the use of this
SHAPn convention to indicate the time period from which a shape model is
derived.
In addition to the global shape models, a series of local, higher-resolution
digital terrain models (DTMs) is planned by the project.
SPC and SPG are two different mathematical approaches to deriving depth
information from pairs of images. The MiARD project seeks to combine the
strengths of both approaches (roughly speaking, SPG gives more accurate
results for rough areas with steep slopes, SPC for smooth plains) to produce
minimum-error shape models of the comet.
Further terminology and file formats relevant to the shape models are defined
in the PSA document USER-GUIDE.ASC in the RO-C-MULTI-5-67P-SHAPE-V1.0
directory.
## Source data
OSIRIS camera images are processed by the OSIRIS PI's (with the help of SPICE
kernels) to produce a shape model. These shape models will be archived in the
PDS/PSA after a review process by the archive service, but the MiARD project
has access to them (through the PI's) beforehand. Text taken from ESA's PSA
archive:
## _**Shape Dataset Organization** _
_This shape-model dataset includes a wide variety of shape models of comet
67P/Churyumov-Gerasimenko. They have been developed by several different
groups, using data from several different portions of the Rosetta mission,
using a variety of techniques, and intended for a variety of different
purposes. Several of these models have been cited in the literature as
underlying various investigations. The different shape models are collected
together in order to make it easier for users to choose the appropriate model,
but the wide variety means that there are many possible ways to organize the
archive._
_At the time this document is written (February 2016), only a few of the
anticipated models have been archived (and some models have not yet even been
created), but to understand the organization we discuss generically all the
models that we hope will be archived. All the shape models tesselate the
surface of the nucleus into triangular, flat plates. At the highest level, the
datasets are separated into ascii formats and binary formats. The binary
formats are exclusively the Digital Shape Kernels that are used in SPICE
(routines currently in beta-test version but expected to be in the general
release in spring 2016). The ascii versions are designed for non-SPICE users
and for simple visualization of the geometry. They always include an ascii
version that follows the standard used in PDS-SBN for decades (long prior to
the availability of DSK) that includes a wrapper that makes them viewable in
any VRML-aware application, of which there are many available._
_At the next level, the models are divided into groups corresponding to the
team that produced the models and the method that team used. The four groups
at this level are, as abbreviated in directory names and file names: 1)
mspcd_lam, Modified StereoPhotoClinometry by Distortion, produced at the
Laboratoire d’Astrophysique de Marseille, 2) spc_esa, StereoPhotoClinometry
produced by the flight operations team of ESA (European Space Agency) and
converted to standard formats by the Rosetta Mission Operations Center (RMOC),
3) spc_lam_psi, StereoPhotoClinometry produced by a collaboration between the
Laboratoire d’Astrophysique de Marseille and the Planetary Science Institute,
and 4) spg_dlr, StereoPhotoGrammetry produced at the German Aerospace
Center (DLR) group in Berlin. This grouping also separates the models by the
instruments used to obtain the input images, the models from ESA having been
derived entirely from the NAVCAMs (NAVCAM1 and NAVCAM2 are nominally
identical), whereas the other three groups are based entirely on the
scientific cameras, OSIRIS-NAC and OSIRIS-WAC. At this writing, there are
currently no models available in group 4. See other documents to understand
the differences among the techniques._
_At the third level, the models are sorted by the time period of the data
used, which affects the geographic coverage of the data and the best spatial
resolution achieved. For the models from ESA, this is denoted by the last MTP
(Medium Term Planning) cycle of the data, whereas for the models using the
scientific cameras, the OSIRIS teams used sequential numbers to indicate the
time period, with details given in the relevant subdirectories. At this
writing, the ESA models utilize data obtained through MTP09 (through mid-
November 2014, i.e., data prior to the release of the Philae Lander)._
_The OSIRIS models currently on hand are all SHAP2, using data only through 3
August 2014. Anticipated future deliveries include an SPG version of SHAP4,
SPC and MSPCD versions of SHAP5, and TBD versions of SHAP7 (data being taken
as this is written). At the next level, because the full-resolution models are
very large, there are models with various levels of reduced resolution
available, intended for purposes that do not require the highest resolution
and therefore speed up calculations._
### Relevant formats and required software
Shape models from the project will be made available in a format compatible
with the SPICE toolkit in so far as this is possible, i.e. using TRIPLATE/VRML
and SPICE/DSK. (However, the DSK format is currently incompatible with the
full-resolution models from the project because of its four-million-facet
limit, although changes are planned by NASA's NAIF.) In any case, data formats
will be consistent with the NASA/ESA archive policy.
### Archiving/distribution policy
The shape models and several of the GIS datasets from MiARD will be made
publicly available through ESA's PSA after an initial peer-reviewed
publication describing them, and after passing the archives' review procedures
(expected to last about six months). For some of the data products from the
project, the open access journal used provides its own archive (e.g. the
geomorphological regions described in deliverable D1.6 were published in
Planetary & Space Sciences which uses the Mendeley Data repository).
Other datasets, for which the demand is less or the need for review not
apparent, will be made available through the project's website.
**Table 2 Summary of datasets for shape models**
<table>
<tr>
<th>
**Datasets required for input**
</th>
<th>
**New datasets and names**
</th>
<th>
**Format(s) and standards**
</th>
<th>
**Archiving**
</th> </tr>
<tr>
<td>
_SHAPn_ models from OSIRIS instrument team.
</td>
<td>
Global shape model CG-DLR_SPG-SHAP7-V1.0
</td>
<td>
.PLY and .PNG
</td>
<td>
In review at ESA PSA. See D1.8. Available on request through project website
or Europlanets website
</td> </tr>
<tr>
<td>
_"_
</td>
<td>
Global shape model (SPG+MSPCD) with 12, 20 or 44 million facets, and 103 local
DTMs
</td>
<td>
.PLY, Geotiff, binary
FITS
</td>
<td>
Project website.
</td> </tr>
<tr>
<td>
_"_
</td>
<td>
Local digital terrain models and elevation models (plus quality maps, artefact
maps and orientation information)
</td>
<td>
</td>
<td>
To be submitted to PSA after peer reviewed publication of methodology. See
table below for names of DTM areas.
</td> </tr>
<tr>
<td>
_"_
</td>
<td>
Improved SPICE kernels _cg-dlr_spg-shap7-v1.0.bc cg-dlr_spg-shap7-v1.0.bsp_
</td>
<td>
SPICE format:
CK kernels
SPK kernels
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**#**
</th>
<th>
**DTM Name**
</th>
<th>
**Time Range**
</th>
<th>
**Surface (m²)**
</th>
<th>
**#Facets**
</th>
<th>
**Sampling (cm)**
</th>
<th>
**Quality**
</th> </tr>
<tr>
<td>
1a
</td>
<td>
Agilkia
</td>
<td>
Beginning to Philae landing
</td>
<td>
94,710
</td>
<td>
871,680
</td>
<td>
33
</td>
<td>
Medium, linear
artifacts
</td> </tr>
<tr>
<td>
1b
</td>
<td>
</td>
<td>
Philae landing to end
</td>
<td>
36,477
</td>
<td>
332,032
</td>
<td>
33
</td>
<td>
Good
</td> </tr>
<tr>
<td>
2
</td>
<td>
Ash_aeolian
</td>
<td>
Beginning to perihelion-2m*
</td>
<td>
80,559
</td>
<td>
191,936
</td>
<td>
65
</td>
<td>
Very good
</td> </tr>
<tr>
<td>
3a
</td>
<td>
Hapi_dunes
</td>
<td>
Beginning to perihelion-2m*
</td>
<td>
86,196
</td>
<td>
117,360
</td>
<td>
86
</td>
<td>
Very good
</td> </tr>
<tr>
<td>
3b
</td>
<td>
</td>
<td>
Perihelion+2m* to end
</td>
<td>
86,226
</td>
<td>
469,440
</td>
<td>
43
</td>
<td>
Very good
</td> </tr>
<tr>
<td>
4
</td>
<td>
Anubis_polygones
</td>
<td>
Beginning to end
</td>
<td>
9166
</td>
<td>
57,344
</td>
<td>
40
</td>
<td>
Good
</td> </tr>
<tr>
<td>
5
</td>
<td>
Geb_fractures
</td>
<td>
Beginning to perihelion-2m*
</td>
<td>
84,709
</td>
<td>
305,728
</td>
<td>
53
</td>
<td>
Medium, linear
artifacts
</td> </tr>
<tr>
<td>
6
</td>
<td>
Ash_crater
</td>
<td>
Beginning to perihelion-4m*
</td>
<td>
62,507
</td>
<td>
151,872
</td>
<td>
64
</td>
<td>
Good, some artifacts
</td> </tr>
<tr>
<td>
7a
</td>
<td>
Maat_pits
</td>
<td>
Beginning to perihelion-2m*
</td>
<td>
30,794
</td>
<td>
150,784
</td>
<td>
45
</td>
<td>
Medium, some artifacts
</td> </tr>
<tr>
<td>
7b
</td>
<td>
</td>
<td>
Perihelion+3m* to end
</td>
<td>
29,344
</td>
<td>
150,784
</td>
<td>
44
</td>
<td>
Good
</td> </tr>
<tr>
<td>
8
</td>
<td>
Bes_fractures
</td>
<td>
Perihelion+4m* to end
</td>
<td>
34,966
</td>
<td>
91,392
</td>
<td>
62
</td>
<td>
Good
</td> </tr>
<tr>
<td>
9a
</td>
<td>
Anubis_depression
</td>
<td>
Beginning to perihelion-4m*
</td>
<td>
109,062
</td>
<td>
511,104
</td>
<td>
46
</td>
<td>
Very good
</td> </tr>
<tr>
<td>
9b
</td>
<td>
</td>
<td>
Perihelion+4m* to end
</td>
<td>
110,897
</td>
<td>
511,104
</td>
<td>
47
</td>
<td>
Very good
</td> </tr>
<tr>
<td>
10a
</td>
<td>
Nut_wind_tails
</td>
<td>
Beginning to perihelion-2m*
</td>
<td>
74,776
</td>
<td>
163,584
</td>
<td>
68
</td>
<td>
Very good
</td> </tr>
<tr>
<td>
10b
</td>
<td>
</td>
<td>
Perihelion+2m* to end
</td>
<td>
66,648
</td>
<td>
205,248
</td>
<td>
57
</td>
<td>
Good, some artifacts
</td> </tr> </table>
**Table 3 Parameters of the fifteen local DTMs included in deliverable D1.2.**
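The 'Sampling' column in Table 3 is, to within rounding, the square root of the surface area divided by the number of facets, i.e. the mean linear size of a facet. This consistency can be checked in a few lines (values taken from rows 1a, 2 and 3b of the table):

```python
import math

def sampling_cm(surface_m2: float, facets: int) -> float:
    # mean linear sampling implied by the surface area per facet
    return math.sqrt(surface_m2 / facets) * 100.0

# DTMs 1a, 2 and 3b from Table 3
print(round(sampling_cm(94710, 871680)))   # 33
print(round(sampling_cm(80559, 191936)))   # 65
print(round(sampling_cm(86226, 469440)))   # 43
```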
# GIS data-sets
A number of the datasets from the MiARD project are compatible with
'Geographical Information Systems' because they combine vector or scalar data
with a coordinate grid (usually the CHEOPS reference system adopted for
67P/Churyumov-Gerasimenko). Some of the project's datasets, such as the local
Digital Terrain Models (DTMs), use the GeoTiff format associated with popular
GIS software packages such as ArcGIS and QGIS.
## Gravity
<table>
<tr>
<th>
**Datasets required for input**
</th>
<th>
**New datasets and names**
</th>
<th>
**Format(s) and standards**
</th>
<th>
**Archiving**
</th> </tr>
<tr>
<td>
Shape models from project, assumptions about density
</td>
<td>
shape_<parameter>.ply, shape_<parameter>_colorbar.svg
<map_projection>/<parameter>_<projection_info>_ms<height><fr>.<ext>
</td>
<td>
.ply,.svg
.img, .png or
.svg
</td>
<td>
Project website
"
</td> </tr> </table>
**Table 4 Datasets for gravity maps (part of D1.3). The _parameters_ included
are _potential_ (gravitational potential), _dynamical_height_ , _gc_ (surface
acceleration), _slope_gc_ (local terrain slope relative to the local
gravitational field, including centrifugal force). _Map projection_ is either
equidistant cylindrical coordinates or a north/south polar view Lambert
azimuthal equal-area projection; _height_ is the height in pixels of the map;
_fr_ is the coordinate frame used, one of Cheops, SL, BL or NR. Full
documentation of the file format is given in the D1.3 report.**
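To make the _slope_gc_ quantity concrete: it is the angle between a facet's surface normal and the local downward direction, where 'down' follows the effective gravity (gravitational acceleration plus the centrifugal term from the body's rotation). A minimal numerical sketch, with illustrative values rather than actual 67P parameters:

```python
import math

def effective_gravity(g_vec, pos_m, omega_rad_s):
    # add the centrifugal acceleration omega^2 * r, directed outward
    # from the spin (z) axis, to the gravitational acceleration
    return (g_vec[0] + omega_rad_s**2 * pos_m[0],
            g_vec[1] + omega_rad_s**2 * pos_m[1],
            g_vec[2])

def slope_deg(normal, g_eff):
    # slope = angle between the surface normal and the downhill
    # reference direction -g_eff
    down = tuple(-c for c in g_eff)
    dot = sum(n * d for n, d in zip(normal, down))
    norm = math.sqrt(sum(n * n for n in normal))
    grav = math.sqrt(sum(d * d for d in down))
    return math.degrees(math.acos(dot / (norm * grav)))

# an illustrative facet 1.5 km from the spin axis, with a vertical normal
g_eff = effective_gravity((0.0, 0.0, -2e-4), (1500.0, 0.0, 0.0), 1.4e-4)
print(round(slope_deg((0.0, 0.0, 1.0), g_eff), 1))
```

Even a flat facet therefore acquires a few degrees of slope purely from the centrifugal contribution.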
## Albedo
<table>
<tr>
<th>
**Datasets required for input**
</th>
<th>
**New datasets and names**
</th>
<th>
**Format(s) and standards**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
OSIRIS-WAC images
close to zero phase
</td>
<td>
3D models maps
</td>
<td>
.ply, .svg
.img, .pmg, .svg
</td>
<td>
Project website
"
</td> </tr> </table>
**Table 5 Same dataset naming scheme as for the gravity maps, see also D1.3**
## Temperatures
**Table 6 Summary of temperature datasets, see D4.5**
<table>
<tr>
<th>
**Datasets required for input**
</th>
<th>
**New**
**datasets and names**
</th>
<th>
**Format(s) and standards**
</th>
<th>
**Archiving and preservation**
</th>
<th>
**Comments**
</th> </tr> </table>
<table>
<tr>
<th>
VIRTIS (from Cédric
Leyrat)
MIRO sub-mm (MPS)
MIRO mm (MPS)
</th>
<th>
MIRO
temperature maps.
</th>
<th>
* MIRO_README: a file explaining the contents of the data and how they have been created.
* MIRO_DATA: a directory containing the data files (ASCII tables). The data is the brightness temperature in the mm and sub-mm channels. The data were binned according to the LST (Local Solar Time) of their acquisition, with a step of 1/24th of the rotation, so there are 24 VTK files for each data set (sub-mm and mm channels). For each LST bin there is only one temperature per facet of the shape model, i.e. temperatures were averaged whenever necessary.
* MIRO_VTK_MM: a directory containing the brightness temperature in the mm channel, projected onto the 3D shape model (VTK format)
* MIRO_VTK_SUBMM: a directory containing the brightness temperature in the sub-mm channel, projected onto the 3D shape model (VTK format)
* MIRO_PNG: a directory containing an example of the data projected onto the shape model (PNG image format)
* MIRO_temp_read.py: a Python routine to view the data on the 3D shape model (Python language format)
</th>
<th>
Project website
</th>
<th>
"VIRTIS-H calibration is still a preliminary, unchecked calibration, with
known
inconsistencies". See also
PSA document VIRTISH_CALIBRATION.PDF, issue 1.4 23rd July 2008.
</th> </tr>
<tr>
<td>
</td>
<td>
VIRTIS radiance maps
</td>
<td>
* VIRTIS_README: a file explaining the contents of the data and how they have been created.
* VIRTIS_DATA: a directory containing the data files (ASCII tables). The data is the radiance at 4.0 μm, 4.5 μm, 4.75 μm and 4.95 μm.
* VIRTIS_VTK: a directory containing the radiance, projected onto the 3D shape model (VTK format)
* VIRTIS_PNG: a directory containing images of the VIRTIS data (PNG image format)
* VIRTIS_AVI: a directory containing a movie of the VIRTIS data projected onto the 3D shape model (AVI movie format)
* 00368190214_M1.vtu: the reference file used to create the VTK files (VTU format)
* VIRTIS_facet_data_to_vtu1.pro: the IDL routine used to create the VTK files (IDL language format)
</td>
<td>
</td>
<td>
</td> </tr> </table>
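The LST binning described for MIRO_DATA (24 bins of 1/24th of a rotation, with temperatures averaged per facet within each bin) can be sketched as follows; the sample values are invented for illustration:

```python
# Sketch of the LST binning described above: group samples into 24
# local-solar-time bins and average temperatures per (bin, facet).
# Input values are illustrative, not real MIRO measurements.
from collections import defaultdict

def bin_by_lst(samples, n_bins=24):
    """samples: iterable of (lst_fraction in [0, 1), facet_id, temperature_K)."""
    sums = defaultdict(lambda: [0.0, 0])
    for lst, facet, temp in samples:
        key = (int(lst * n_bins) % n_bins, facet)
        sums[key][0] += temp
        sums[key][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

means = bin_by_lst([(0.01, 7, 180.0), (0.02, 7, 190.0), (0.51, 7, 120.0)])
print(means[(0, 7)])   # 185.0 (two samples in the same bin were averaged)
print(means[(12, 7)])  # 120.0
```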
## Activity maps and models (3D Gas and dust distribution)
**Table 7 Activity datasets, see D2.5**
<table>
<tr>
<th>
**Datasets required for input**
</th>
<th>
**New datasets and names**
</th>
<th>
**Format(s) and standards**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
Shape model and outgassing code and parameters
</td>
<td>
There are eight files in total (plus a readme.txt), these are: for each model
(inhomogeneous or purely insolation driven) there is one file for the gas
number density and velocity, and one file for each dust particle size. The
filenames are self-explanatory: _inhomogeneous_dust_1.6um.txt
inhomogeneous_dust_16um.txt inhomogeneous_dust_160um.txt inhomogeneous_gas.txt
insolationDriven_dust_1.6um.txt insolationDriven_dust_16um.txt
insolationDriven_dust_160um.txt insolationDriven_gas.txt_
</td>
<td>
ASCII space separated columns. The seven columns (x, y, z, number density, u,
v, w) are:
* x,y,z spatial coordinates in metres from centre of comet (Cheops reference frame)
* the number density of the gas or dust (m⁻³)
* u, v, w the x,y,z components of the velocity vector (m/s)
</td>
<td>
Project website
</td> </tr> </table>
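Given the seven-column ASCII format described in the table, reading one of these files needs no special tooling. A minimal sketch, using an in-memory sample instead of one of the actual files:

```python
# Sketch of a reader for the activity-model files described above
# (seven space-separated columns: x, y, z, number density, u, v, w).
# The two sample rows are invented, not real model output.
from io import StringIO

sample = StringIO(
    "100.0 -50.0 2000.0 1.2e14 0.5 -0.1 300.0\n"
    "110.0 -48.0 2005.0 1.1e14 0.4 -0.2 298.0\n"
)

rows = []
for line in sample:
    x, y, z, n, u, v, w = map(float, line.split())
    rows.append({"pos_m": (x, y, z), "n_per_m3": n, "vel_ms": (u, v, w)})

print(len(rows))  # 2
```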
## Maps of regional properties
**Table 8 Defined geological units (D1.6) dataset**
<table>
<tr>
<th>
**Datasets required for input**
</th>
<th>
**New datasets and names**
</th>
<th>
**Format(s) and standards**
</th>
<th>
**Archiving and preservation**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
Shape models from project
</td>
<td>
cg-dlr_spg-shap7-v1.0_125Kfacets_regions.vtk
cg-dlr_spg-shap7-v1.0_125Kfacets_subregions.vtk
</td>
<td>
.vtk
</td>
<td>
Mendeley Data,
https://data.mendeley.com/datasets/2845znt54k/1
A more complete set of resolutions
(larger files) is in review with the
ESA PSA
</td>
<td>
CC BY
4.0 licence
</td> </tr> </table>
# Glossary and abbreviations
ESA European Space Agency
ESOC European Space Operations Centre
GIS Geographic Information System
MSPCD Multi-resolution Stereophotoclinometry by Deformation
NASA US National Aeronautics and Space Administration
NAVCAM navigational cameras on the Rosetta mission
OSIRIS a camera instrument on the Rosetta mission
PDS NASA's Planetary Data System _http://pds-smallbodies.astro.umd.edu_
PSA ESA's Planetary Science Archive
_http://www.cosmos.esa.int/web/psa/psaintroduction_
PI Principal Investigator. Term used by ESA or NASA to denote the individual
responsible for an instrument and its data
RSOC Data repository run by the Rosetta Science Ground Segment
SPICE name of the system that provides spacecraft orientation and position, or
time, of a dataset. ESA maintains a repository of SPICE kernels for the
Rosetta mission _http://www.cosmos.esa.int/web/spice/spice-for-juice_.
SPICE = **S**pacecraft, **P**lanet, **I**nstrument, **C**amera-matrix,
**E**vents.
SPG stereo photogrammetry
SPC stereo photoclinometry
SHAPn denotes the time period over which images were collected, used for
numerical descriptions of the shape of comet 67P
VIRTIS an infrared instrument on the Rosetta mission
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0602_MOS-QUITO_688539.md
|
file type of the data. While widely used in many disciplines (e.g. in medical
science, environmental science, biology, etc.), metadata do not play a central
role in the research context of MOS-QUITO. Within this project, metadata can
appear for example in the form of headers attached to dataset files produced
by experiments or numerical calculations. Such headers typically contain
information about experimental conditions and/or input parameters. In order to
make sure that all the relevant information complementing experimental and
numerical results is properly stored and easily retrievable, every partner in
MOS-QUITO will take maximal care in the organization of this type of metadata.
All wafers, dies, and devices produced during the project will have unique
identifiers. This allows us to trace successful samples back to the original
fabrication files.
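As an illustration of the header-style metadata mentioned above, a measurement file might carry key-value lines before the numerical data. The field names and the wafer/die/device identifier pattern below are hypothetical, not a MOS-QUITO standard:

```python
# Hypothetical example of header-style metadata prepended to a data file;
# the field names and the W/D/Q identifier pattern are illustrative only.
from io import StringIO

raw = StringIO(
    "# device_id: W03-D12-Q5\n"
    "# temperature_mK: 25\n"
    "# b_field_T: 0.8\n"
    "0.0\t1.2e-9\n"
    "0.1\t1.5e-9\n"
)

header, data = {}, []
for line in raw:
    if line.startswith("#"):
        key, value = line[1:].split(":", 1)
        header[key.strip()] = value.strip()
    else:
        data.append(tuple(float(v) for v in line.split()))

print(header["device_id"], len(data))  # W03-D12-Q5 2
```

Keeping the conditions in the file itself, rather than in a separate notebook, is what makes such results easily retrievable later.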
# ETHICS
Before sharing or disseminating data, each partner is responsible for
assessing their intellectual property and, when necessary, for obtaining
permission from partners co-owning the data. Access to data generated in the
project and project-related information will be available to the partners for
research purposes. Such access will be provided through the project web site.
Materials generated under the project will be disseminated in accordance with
the policy of each partner. All publicly accessible data are available for re-
use without restriction. It is expected that other researchers may find the
data useful for their own studies. When the research data are accessible
through publications, attention will be paid to the fact that they are
properly cited in accordance with an officially recognized citation format.
# TYPOLOGY OF DATA AND RELATED POLICY
Deliverable D1.3: Data Management Plan
MOS-QUITO will generate a variety of data with different nature and different
level of diffusion. We provide here a list of the main types of data with the
respective handling policies:
1. _Experimental data issued from experiments and data files resulting from numerical calculations:_
* Each partner stores these types of data on its computer network (at laboratory level) as well as on local secured servers provided by the host organization.
* The data are not intended for public-domain access but they could be made available upon request (e.g. from the Commission, from scientific publishers, from other partners).
2. _Mask designs for optical lithography:_
* These data have the form of gds files generated using CAD-like software (gds is the standard format for lithography machines).
* The gds-type files are not shared, they are owned by the partner performing device fabrication (CEA, VTT, UCPH, or UCL) and intended for the mask manufacturer.
3. _Pattern designs for electron-beam lithography:_
* Preliminarily generated in ppt or pdf format for easy sharing within the consortium, they eventually consist of gds-type files.
* They can be exchanged among partners (e.g. e-beam lithography steps performed at VTT can be performed using gds files delivered by other partners in the consortium).
4. _Databases for transistor modeling:_
* The measurement of devices is required for extracting the compact model parameters, which are then used for circuit design. The measurements can be shared among partners to this purpose. The compact model parameters (often called a model card) for a given technology for the BSIM6 and UTSOI compact models have to be shared with the circuit designers.
* CEA owns a license for the UTSOI model, which is adapted for 28-nm FDSOI technology. EPFL has been given access to this model through a license agreement with CEA.
5. _Measurement programs:_
* Each experimental partner develops, owns, and uses its own measurement programs.
* Different software platforms are currently adopted by the different partners (Labview, Igor, Python, etc.). UCPH, together with partners outside this consortium (e.g. Qutech at Delft and Microsoft), is undertaking a major effort to develop an open-source, Python-based measurement software platform that could be used for a wide range of experiments including those related to qubits. This software platform, while still under development, is already available at https://github.com/QCoDeS/Qcodes.
6. _Modelling programs:_
* Partners carrying out modeling tasks develop, own, and use their own modeling programs. In addition, some partners (e.g. CNR) also use commercial software (Matlab, COMSOL) to perform simulations.
* Programs rely on a variety of theoretical models; they are based on different software platforms and are run either locally or on high-power computers located at different computational servers.
7. _Ppt presentations, poster presentations, images, pictures, internal reports, data sheets:_
* This type of digital data will be available to all the partners in MOS-QUITO through the intranet of the project website.
* All material on the intranet should be treated as confidential and for internal use. Partner could use material taken from the intranet for their public presentation provided they obtain permission from the partner who generated the material itself.
8. _Scientific articles (publications, preprints) and press communications:_
* In order to favor the communication of useful information within the consortium, preprints can be shared among partners prior to publication. Shared documents will be treated as confidential in this case.
9. _Patents:_
* This type of information is shared only among the partners involved with the patent.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0606_MARINERGI_739550.md
|
1\. Introduction
### 1.1. Introduction and overview of MARINERG-i
The H2020 MARINERG-i project is coordinated by the MaREI Centre at University
College Cork, Ireland. The consortium comprises 14 partners from 12 countries
(Germany, Belgium, Denmark, Spain, France, the Netherlands, Ireland, Italy,
Norway, Portugal, the United Kingdom and Sweden). MARINERG-i brings together
all the European countries with significant testing capabilities in offshore
renewable energy. The MARINERG-i project is a first step in forming an
independent legal entity of distributed testing infrastructures, united to
create an integrated centre for delivering Offshore Renewable Energy.
MARINERG-i will produce a scientific and business plan for an integrated
European Research Infrastructure (RI), designed to facilitate the future
growth and development of the Offshore Renewable Energy (ORE) sector. These
outputs are designed to ensure that the MARINERG-i RI model attains the
criteria necessary for being successful in an application to the European
Strategy Forum on Research Infrastructures (ESFRI) roadmap in 2020.
### 1.2. Data Plan Description
This document is the _Update to the Initial_ Data Management Plan (DMP), which
forms the basis for deliverable D1.11. DMPs are living documents which need to
be revised to include more detailed explanations and finer granularity, and
updated to take account of any relevant changes during the project lifecycle
(data types/partners etc.). This edition will be followed by one further
volume, "The Final DMP" (D1.12). The format for all three volumes is based on
the template taken from the DMP Online web-tool and conforms to the "Horizon
2020 DMP" template provided by the European Commission (Horizon 2020).
# 2. Data Summary
### 2.1. Purpose of the data collection/generation
It is important to note that MARINERG-i has not created any new scientific
data, e.g. from experimental investigations or actual testing of devices.
However, the discovery phase of the work programme (WP 2 and WP3) does involve
detailed information gathering in order to profile multiple attributes of the
participating testing centres and their infrastructure. The information
generated from these activities exists in practice as a form of highly
granular metadata. Along-side and associated with this there is a requirement
to compile and include in a database (WP 7 Stakeholder Engagement; WP 6
Financial Framework), personal contact and other potentially private,
proprietary, financial or otherwise sensitive information which is being
maintained as confidential. Derived synthetic, statistical, or anonymised
information is also being produced which is destined for release in the public
domain. Further details of proposed data collection and use are contained in
D7.3 Stakeholder Database. Details of the procedures for collection and use as
well as their compliance with ethics and data protection legislation are
provided in D10.1 and D10.2.
### 2.2. Relation to the objectives of the project
The collection of data is being undertaken as a primary function of four key
work pages (WP 1, 2, 6 &7) which together form the Discovery phase of the
overall work plan, the general scheme of which is as follows:
* Discovery Phase – Engagement with stakeholders, mapping and profiling of RIs and e-infrastructure
* Development Phase – Design and Science plan, Finance, Value statements
* Implementation Phase – Business plan and implementation plan including roadmap.
Data and information collected during the discovery phase are being fed into
and are informing the subsequent phases of development and implementation.
Specifically, the objectives for WP2&3 listed below and deliverables listed in
Table 1 (D2.1 –D3.4) provide an obvious and clear rationale for the collection
and operational use of several main categories of data within the project.
Also listed in Table 1 is deliverable 7.3 the stakeholder database. This
database contains names, contact details, contact status and a range of other
information pertinent to the stakeholder mapping and engagement process, which
is a key objective within WP7.
WP 2 Objectives
The facilities to be included in MARINERG-i are being selected so as to
contribute to the strengthening of European, scientific and engineering
excellence and expertise in MRE research (wave, tidal, wind and integrated
systems) and to represent an indispensable tool to foster innovation across a
large variety of MRE structures and systems and through all key stages of
technology development (TRLs 1-9). In order to achieve this, a profiling of
the European RIs is underway on both strategic and technical levels,
considering both infrastructures’ scientific and engineering capabilities.
Both existing facilities and future infrastructures have been identified and
characterized so as to account for future expansion and development. In
parallel, users’ requirements for MRE testing and scientific research at RIs
have been identified so as to optimize and align service offerings to match
user needs with more efficiency, consistency, precision and accuracy.
All this information has been efficiently compiled so as to provide the basis
to inform the development of the design study and science plan which are
underway in WP 4.
WP 3 Objectives
The set of resources, especially facilities, made available under MARINERG-i
currently have individual information systems and data repositories for
operation, maintenance and archival purposes. Access to these systems may be
generally quite restricted at present, constrained by issues relating to
ownership, IP, quality and other standards, liability, data complexity and
volume. Even where access is possible, uptake of these valuable resources may
not be extensive in the absence of suitable policies and effective mechanisms
for browsing, negotiation and delivery. A primary
objective of WP3 is to instigate a program to radically improve all aspects
pertaining to the curation, management, documentation, transport and delivery
of data and data products produced by the infrastructure. Work undertaken in
WP3, together with research and pilot studies being developed in the Marinet2
project, is being efficiently compiled so as to provide the basis to inform
the development of the Design Study and Science Plan to be conducted under WP4.
_Table 1: List of deliverables from WP 2, 3, 6 & 7._
<table>
<tr>
<th>
**Deliverable**
**Number**
</th>
<th>
**Deliverable Name**
</th>
<th>
**WP**
**Number**
</th>
<th>
**Lead beneficiary**
</th>
<th>
**Type**
</th>
<th>
**Dissemination level**
</th> </tr>
<tr>
<td>
D2.1
</td>
<td>
MRE RI End-users
requirements profiles
</td>
<td>
WP2
</td>
<td>
3 - IFREMER
</td>
<td>
Other
</td>
<td>
Confidential,
only for members of the consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D2.2
</td>
<td>
MRE RI Engineering and science baseline and future needs
profiles
</td>
<td>
WP2
</td>
<td>
3 - IFREMER
</td>
<td>
Other
</td>
<td>
Confidential,
only for members of the consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D3.1
</td>
<td>
MRE e-Infrastructures End-Users requirements
profiles
</td>
<td>
WP3
</td>
<td>
3 - IFREMER
</td>
<td>
Other
</td>
<td>
Confidential,
only for members of the consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D3.2
</td>
<td>
MRE e-Infrastructures baseline and future needs
profile
</td>
<td>
WP3
</td>
<td>
3 - IFREMER
</td>
<td>
Other
</td>
<td>
Confidential,
only for members of the consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D3.3
</td>
<td>
Draft Report MRE eInfrastructures strategic and technical alignment
</td>
<td>
WP3
</td>
<td>
1 -
UCC_MAREI
</td>
<td>
Report
</td>
<td>
Confidential,
only for members of the consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D3.4
</td>
<td>
Final Report MRE e-
Infrastructures strategic and technical alignment
</td>
<td>
WP3
</td>
<td>
1 -
UCC_MAREI
</td>
<td>
Report
</td>
<td>
Public
</td>
</tr>
<tr>
<td>
D6.1
</td>
<td>
Report on all RI costs and revenues
</td>
<td>
WP6
</td>
<td>
4 - WAVEC
</td>
<td>
Report
</td>
<td>
Public
</td>
</tr>
<tr>
<td>
D7.3
</td>
<td>
Stakeholder database
</td>
<td>
WP7
</td>
<td>
5- Plocan
</td>
<td>
Database
</td>
<td>
Confidential,
only for members of the consortium (including the Commission Services)
</td> </tr> </table>
### 2.3. Types and formats of data generated/collected
As stated in the previous section there are two main types of data being
collected:
1. Data relating to the profiling of the Research Infrastructures (RIs) and existing e-infrastructure
2. Contact details for MARINERG-i stakeholders
The type and format of data collected, analysed and stored is mostly simple
text generated locally by subjects using forms and questionnaires in MS
Word/Excel, or alternatively through a centralised system with an online
interface. Images in various graphical formats also form a significant element
of the data collected. Collections have also used other forms of
documentation: specifications; standards; templates; rule-sets; manuals;
guides; various types of framework documents; legal statutes, contracts,
strategic and operational plans, etc. More detailed specifications/conventions
governing key parameters for all of the above will be provided to data
providers/gatherers in advance to ensure current and future interoperability
and compatibility.
The fields currently being used to collate stakeholder contact information are
listed in Table 2 below:
_Table 2: Stakeholder database field structure and type_
<table>
<tr>
<th>
Field
No
</th>
<th>
Field Header
</th>
<th>
Field type
</th> </tr>
<tr>
<td>
1
</td>
<td>
Order #
</td>
<td>
number
</td> </tr>
<tr>
<td>
2
</td>
<td>
Date
</td>
<td>
Number
</td> </tr>
<tr>
<td>
3
</td>
<td>
Category Stakeholders
</td>
<td>
Text
</td> </tr>
<tr>
<td>
4
</td>
<td>
If Other category, please include it here
</td>
<td>
Text
</td> </tr>
<tr>
<td>
5
</td>
<td>
Name of the Organisation Stakeholder
</td>
<td>
Text
</td> </tr>
<tr>
<td>
6
</td>
<td>
Acronym Stakeholder
</td>
<td>
Text
</td> </tr>
<tr>
<td>
7
</td>
<td>
Address
</td>
<td>
Text
</td> </tr>
<tr>
<td>
8
</td>
<td>
Country
</td>
<td>
Text
</td> </tr>
<tr>
<td>
9
</td>
<td>
Web
</td>
<td>
Text
</td> </tr>
<tr>
<td>
10
</td>
<td>
Phone(s)
</td>
<td>
number
</td> </tr>
<tr>
<td>
11
</td>
<td>
E-mail
</td>
<td>
Text
</td> </tr>
<tr>
<td>
12
</td>
<td>
Contact Person
</td>
<td>
Text
</td> </tr>
<tr>
<td>
13
</td>
<td>
Role in the Organisation
</td>
<td>
Text
</td> </tr>
<tr>
<td>
14
</td>
<td>
MARINERG-i partner providing the information
</td>
<td>
Text
</td> </tr>
<tr>
<td>
15
</td>
<td>
Contact providing the information
</td>
<td>
Text
</td> </tr>
<tr>
<td>
16
</td>
<td>
Energy sectors
</td>
<td>
Text
</td> </tr>
<tr>
<td>
17
</td>
<td>
If Other Sector, please include it here
</td>
<td>
Text
</td> </tr>
<tr>
<td>
18
</td>
<td>
R&D&I Area
</td>
<td>
Text
</td> </tr>
<tr>
<td>
19
</td>
<td>
If Other R&D&I Area, please include it here
</td>
<td>
Text
</td> </tr>
<tr>
<td>
20
</td>
<td>
Does the stakeholder provide permission to receive info from MARINERG-i?
</td>
<td>
Text
</td> </tr>
<tr>
<td>
21
</td>
<td>
Further Comments
</td>
<td>
Text
</td> </tr> </table>
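The field structure in Table 2 maps naturally onto a simple record type. The following sketch is illustrative only: the class, the attribute names and the consent filter are our assumptions, not the project's actual implementation. It also shows how Field 20 could gate mailing-list use.

```python
from dataclasses import dataclass


@dataclass
class StakeholderRecord:
    """Illustrative record mirroring the Table 2 field structure."""
    order: int                     # Field 1: Order #
    date: str                      # Field 2: Date
    category: str                  # Field 3: Category Stakeholders
    other_category: str = ""      # Field 4: free text when category is "Other"
    organisation: str = ""        # Field 5: Name of the Organisation Stakeholder
    acronym: str = ""             # Field 6: Acronym Stakeholder
    address: str = ""             # Field 7
    country: str = ""             # Field 8
    web: str = ""                 # Field 9
    phones: str = ""              # Field 10
    email: str = ""               # Field 11
    contact_person: str = ""      # Field 12
    role: str = ""                # Field 13
    providing_partner: str = ""   # Field 14
    providing_contact: str = ""   # Field 15
    energy_sectors: str = ""      # Field 16
    other_sector: str = ""        # Field 17
    rdi_area: str = ""            # Field 18: R&D&I Area
    other_rdi_area: str = ""      # Field 19
    permission: bool = False       # Field 20: permission to receive MARINERG-i info
    comments: str = ""            # Field 21


def contactable(records):
    """Keep only stakeholders who gave permission and left an e-mail address."""
    return [r for r in records if r.permission and r.email]
```

Filtering on Field 20 before any mailing keeps use of the database aligned with the consent each stakeholder has given.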
### 2.4. Re-Use of existing data
The RI profiling information gathered will augment and greatly extend existing
generic baseline information gathered under the Marinet FP7 project in the
respective research infrastructures of the MARINERG-i partnerships, and some
new information that has been added through the Marinet2 H2020 project. The
latter is currently accessible through the Eurocean Research Infrastructures
Database (RID) online portal system (http://rid.eurocean.org/), where it is
accessible to RI managers to update. Content for the stakeholders’ database
was initially obtained from existing Marinet and PLOCAN databases; re-use
permission was obtained from the individuals concerned. Since this is a live
database, additional contact information is being added primarily via our
website where interested stakeholders can sign up to be included as well as
receive newsletters and invitations to events. In addition, partners email
their contacts informing them about the project and encouraging them to join
our mailing list/stakeholder database.
### 2.5. Expected size of the data
The total volume of data to be collected is not expected to exceed 100 GB.
### 2.6. Data utility: to whom will it be useful
As stated above, the data being collated and generated in the project are
primarily for use by the partners within the project in order to prepare
specific outputs relevant to the key objectives. Summary, derived and or
synthetic data products of a non-sensitive nature will be produced for
inclusion in reports and deliverables some of which will be of interest to a
wider range of stakeholders and interested parties including but not limited
to the following:
National authorities, EU authorities, ORE industry, potential MARINERG-i node
participants, International Authorities, academic researchers, other EU and
international projects and initiatives.
# FAIR Data
## Metadata and making data findable
There is no specific aim in the MARINERG-i project to generate formally
structured or relational databases. The activity conducted as part of WP2 and
WP3 requires the use of existing databases and the collation of information
from institutions’ portals and through a questionnaire distributed to
potential stakeholders.
Hence metadata will be based on existing metadata formats and standards
developed for the existing services. Additional metadata will be created for
specific fields if necessary, after elaboration of the questionnaires.
More specifically the profiling of the Research Infrastructure is mainly based
on the information available on the Eurocean service, and on services such as
Seadatanet for the E-infrastructures.
Definition of naming conventions and keywords will be based on the same
approach. Specific metadata related to the stakeholders’ database would be
created according to fields presented in Table 2.
MARINERG-i is aware of the EC guidance metadata standards directory
[http://rdalliance.github.io/metadata-directory/standards/]. However, given
the nature of the data being compiled and the early stage of the project
lifecycle, no firm decisions have yet been made regarding the use of particular
metadata formats or standards. This will be considered and dealt with further
in the final iteration of this document (D1.12) including the following
aspects: discoverability of data (metadata provision); identifiability of data
and standard identification mechanism; persistent and unique identifiers;
naming conventions; approaches towards search keywords; approaches for clear
versioning; and standards for metadata creation.
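No metadata format has yet been chosen, as noted above. Purely as an illustration, a minimal Dublin-Core-style record for one RI profiling dataset might look as follows; all values, including the identifier scheme, are hypothetical assumptions rather than project decisions.

```python
import json

# Hypothetical minimal metadata record for one RI profiling dataset.
# Field names follow common Dublin Core element names; every value,
# including the identifier scheme, is an illustrative assumption.
record = {
    "title": "MRE RI engineering and science baseline profile",
    "creator": "MARINERG-i WP2",
    "date": "2018-06-01",
    "format": "text/csv",
    "identifier": "marinerg-i:wp2:profile:0001",
    "rights": "Confidential - consortium only",
    "subject": ["offshore renewable energy", "research infrastructure"],
}

# Serialising with sorted keys gives a stable, diff-friendly representation.
print(json.dumps(record, indent=2, sort_keys=True))
```

A record of this shape covers the discoverability aspects listed above (identification, keywords, rights) while remaining easy to map onto whichever standard is eventually adopted.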
## Open accessibility
Produced data will for a large part be based on processing of existing
datasets already available in open access. This data will be made openly
available.
Restrictions could apply to datasets or information provided by stakeholders
in cases where they specify such restrictions (for instance personal contact
details) that shouldn’t be made openly available for confidentiality reasons.
Publicly available data will be made available through the MARINERG-i web
portal.
No specific software should be required apart from standard open-source office
tools able to read formats such as “txt”, “ascii”, “.docx”, “.doc”, “.xls”,
“.xlsx”, “PDF”, “JPEG”, “PNG”, “avi”, “mpeg”, etc.
Data and metadata should be deposited on the MARINERG-i server.
## Interoperability
The vocabulary used for all technical data types will be the standard
vocabulary used in marine research and offshore renewable experimental testing
programmes such as Marinet, Marinet2 and Equimar.
For the other data types, interoperable formats will be chosen, where possible
making use of their own domain-specific semantics.
## Potential to increase data re-use through clarifying licenses
The project does not foresee the need to make arrangements for licensing data
collected.
Data should and will be made available to project’s partners throughout the
duration of the project and after the end of the project (at least until the
creation of the ERIC) and where possible made available to external users
after completion of the project.
Some of the data produced and used in the project will be usable by third
parties after completion of the project except for data for which restrictions
apply as indicated in
It is expected that information, e.g. as posted on the website, will be
available and reusable for at least 4 years, although the project does not
guarantee the currency of such data past the end of the project.
# Allocation of Resources
## Explanation for the allocation of resources
Data management can be considered as operating on two levels in MARINERG-i.
The first is at the point of acquisition where responsibility is vested in
those acquiring the data to do so consciously and in accordance with this DMP
and associated Ethics requirements as set out in D 10.1 /D10.2. The second
level is where processed and analysed synthetic data products are passed to
the coordinators for approval and publication.
Data security, recovery and long-term storage will be covered in the final
iteration of the DMP (D1.12).
# Ethical Aspects
Details pertaining to the ethical aspects in respect of data collected under
MARINERG-i are covered in D 10.1 /D10.2. This will as a minimum include
provision for obtaining informed consent for data sharing and long term
preservation to be included in questionnaires dealing with personal data.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0607_Open4Citizens_687818.md
|
# Executive Summary
This document – deliverable D4.7 Data Management Plan (Final) (M30) –
describes the final and updated plans for data management in the Open4Citizens
(O4C) project, both regarding the management of research data and the platform
data. The deliverable describes how selected research data and all data used
or generated in the O4C online platform will be handled. We also describe how
this will be made available after the end of the project (M30, June 2018).
We have chosen to follow the H2020 Data Management Plan (DMP) template in
order to ensure that the document addresses relevant data management questions
in the context of Horizon 2020. The template covers, among other things, what
types of data have been gathered during the project and why, how the data is
stored, the security measures taken, the size of the accumulated data, and the
possible utility of the data.
The document has primarily been developed by two individual partners in the
consortium, Dataproces and Antropologerne, each dealing with data in different
domains in the project and separate measures for managing this data. For ease
of understanding, the document has been divided to deal with these different
types of data in separate sections; Section A on research data and Section B
on data related to the O4C platform. This also means that there are slight
differences regarding the relevance of questions in the H2020 DMP template
that have been addressed in these sections.
Regarding the research data, the focus is on providing an account of which
kinds of data has been collected, where the data is stored and ensuring that
personal data is anonymised. The O4C platform section goes in depth with the
data gathered in the platform, what internal security measures have been taken
to protect the data and to secure users’ rights regarding their data. Further,
it explains the daily operation and future of the O4C platform beyond the
project.
Another important point in a Horizon 2020 project is to live up to the FAIR
principles, i.e. to make sure that data is findable, accessible, interoperable
and reusable. We address these principles for both research and
platform data.
In summary, this Data Management Plan provides a thorough insight into the
measures taken by the partners of the O4C consortium in both managing data and
making it as open as possible, focusing on two domains within which data
management is required. However, we note that the O4C project has used and
generated relatively small amounts of data and, given privacy considerations,
relatively small amounts of research data that can be made publicly available.
At the time of submitting this deliverable, the final month of the project M30
– June 2018, the project’s legacy in the form of a Network of OpenDataLabs
(NOODL.eu) is being consolidated and scaled up. Within this new data
management context, this current data management plan can be further expanded
upon to meet emerging needs. This will ensure that relevant O4C project data
and additional data generated and used in the network or in individual labs
will be generated, used, and stored in accordance with good practice
guidelines. As the open data landscape matures and lessons are learned with
respect to the implementation of the new General Data Protection Regulation
(GDPR), these will be incorporated into data management practices in the
network.
# Introduction
The Open4Citizens (O4C) project has aimed to adhere to the guidelines of the
Open Research Data Pilot (ORD Pilot) being run by the European Commission
under Horizon2020 1 . This involves making research data FAIR (findable,
accessible, interoperable and reusable). We have produced three data
management plans (DMPs) over the course of the O4C project; the first two
versions in months 6 and 15 of the project, culminating in this current and
final plan.
## Data management responsibility
In this current deliverable, D4.7 Data Management Plan, we address the
project’s research data as well as data handled in and generated by the O4C
platform, https://opendatalab.eu/. These two types of project data will be
considered separately. All consortium partners have been responsible for the
management of data in their own pilots over the course of the project, between
January 2016 and June 2018. At project level, Antropologerne has primarily
been responsible for coordinating data management of the research data and
Dataproces for data related to the O4C platform. After the end of the project,
from the beginning of July 2018, Dataproces will continue to be responsible
for management of data related to the O4C platform. Aalborg University (AAU),
as O4C project leader, will be responsible for the research data made
available for further use.
We plan to consolidate the five O4C pilots into a sustainable network of
OpenDataLabs. As such, the consortium partners in charge of these pilots will
continue to be in charge of the locally generated and used data related to
their lab’s activities, to the extent that they remain responsible for their
lab. These partners are Aalborg University (Aalborg and Copenhagen, Denmark
pilot), Fundacio Privada i2CAT, Internet i Innovacio Digital a Catalunya
(Barcelona, Spain pilot), Politecnico di Milano (Milan, Italy pilot), and
Technische Universiteit Delft (Rotterdam, the Netherlands pilot). See
deliverables D4.4 Open4Citizens Scenarios (Final) and D4.10 Open4Citizens
Business Models and Sustainability Plans (Final) for more information about
future plans and ODL ownership.
The lasting and living legacy of the Open4Citizens project is the Network of
OpenDataLabs. As elaborated in the section regarding allocation of resources
the O4C platform will remain open for at least another five years for use by
the ODLs in the network. It will provide access to a growing number of open
datasets and information related to projects being developed using the O4C
approach.
## Summary of data types addressed in this Data Management Plan
Research data in this project primarily comprises qualitative material
collected by members of the project consortium during hackathons in order to
support evaluation activities. This material, originally created in Microsoft
PowerPoint format slides, is made available for further use in PDF format.
Data in the O4C Platform is primarily user-generated data from hackathon
participants who have used the platform in relation to hackathons, data sets
uploaded by users and information regarding the projects created as hackathon
outcomes.
## Structure of the document
The document is based on the template H2020 Programme Guidelines on FAIR Data
Management in Horizon 2020 version 3.0, 26 July 2016 and is structured with
inspiration from the questions presented in the template. The main difference
in structure from the previous, mid-term, Data Management Plan (DMP)
(deliverable D4.6) is that this version is divided into two separate parts;
Part A regarding research data and Part B focused on the O4C platform data.
This has been done to illustrate that in practice the consortium’s management
of data in these two domains has primarily been carried out by Antropologerne
and Dataproces respectively; Antropologerne as the partner primarily
supporting the generation of qualitative research data across pilots during
the project, and Dataproces, as the consortium partner with most general data-
related expertise and main responsibility for building and managing the O4C
platform.
However, all consortium partners have been involved in data generation and
management, especially with respect to their specific pilot. Similarly, all
pilots have been involved in discussions regarding the choice of data
repository and considerations relating to data management after the end of the
O4C project. At a local level, all pilots have communicated with key
stakeholders about the use of data in the project and specifically
stakeholders’ consent to the use of locally generated and used data for
research and in the O4C platform.
# Part A: Managing Research Data
In the context of this DMP, we apply the definition used by Corti et al.
(2014) who ‘define research data as any research materials resulting from
primary data generation or collection, qualitative or quantitative, or derived
from existing sources intended to be analysed in the course of a research
project. The scope covers numerical data, textual data, digitized materials,
images, recordings or modelling scripts.’ (Corti et al., 2014: viii). As laid
out in General Annex L of the Horizon 2020 Work Programme 2018-2020 (European
Commission 2017), the Open4Citizens research data is ‘open by default’.
However, due to the personal nature of much of the research data in this
project, the data is also ‘as open as possible, as closed as necessary’
(European Commission 2017). As such, the Open4Citizens consortium has adhered
to the requirements laid out in Article 29.3 of the Grant Agreement while only
making selected research data available. This has involved selecting
representative materials in the form of photographs, quotes and reflections on
hackathon activities, gathered by the five O4C pilots as the basis for
evaluation activities. These materials have been collected by the consortium
and presented in PowerPoint slide decks for internal use by the project team,
rather than for open publication. I.e. these are raw research materials.
We have selected material from the research data in the project that is made
available to the extent that we are able to protect the privacy of individuals
involved in the project, e.g. key stakeholders of hackathons and the emerging
OpenDataLabs in the five project pilots, as well as hackathon participants
whose views have been represented in the project’s evaluation raw materials.
We have taken steps to sufficiently anonymise the materials made available, in
accordance with the consent forms signed by project participants (See these in
the Annex). O4C project participants and stakeholders have not given their
consent to have images of themselves made available in an online repository
beyond the end of the project. For this reason, we have erred on the side of
caution with respect to potentially personally identifiable material, and have
blurred the faces of individuals depicted in materials.
O4C project research materials are being made available with the intention of
increasing transparency with respect to both 1) research methodology and 2)
findings presented both in the project’s deliverables to the European
Commission and in other publications. The research data being made available
by the O4C project is ‘not directly attributable to a publication, or [is] raw
data’ as well as ‘underlying data’ 2 , i.e. data that validates results in
the project and in scientific publications. For example, the figure below
shows a selection of anonymised slides from the Danish hackathon in the second
cycle, which has been used for evaluation, but has not been directly
replicated in any publications or project deliverables. As seen in the figure
below, this research material is anonymised using icons over faces, hiding
distinguishable name tags and uses aliases instead of real names.
**Figure 1: Example of selected research material used for evaluation**
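The alias substitution described above can be sketched in a few lines of Python. The names and the mapping below are hypothetical, and blurring faces in images is a separate step not covered by this sketch.

```python
import re


def anonymise(text, alias_map):
    """Replace each real name with its alias, matching whole words only."""
    for real, alias in alias_map.items():
        text = re.sub(r"\b%s\b" % re.escape(real), alias, text)
    return text


# Hypothetical names and aliases, for illustration only.
aliases = {"Maria Jensen": "Participant A", "Lars Holm": "Participant B"}
note = "Maria Jensen suggested the app idea; Lars Holm prototyped it."
print(anonymise(note, aliases))
# -> Participant A suggested the app idea; Participant B prototyped it.
```

Keeping the alias map in a separate, access-controlled file would let the project re-identify participants internally while the shared materials carry only the aliases.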
Tailored tools have been produced in the project to guide innovation with open
data in O4C-style hackathons. These tools constitute the Citizen Data Toolkit
(see deliverable D2.5 Citizen Data Toolkit, submitted in M30, June 2018).
These and the selected research data, described in the Data Summary in the
section below, are made available using a Creative Commons Attribution-
ShareAlike 4.0 International license. The O4C consortium members have made
selected elements of the project’s research data available by self-archiving
the research materials in the Aalborg University Research Portal,
_www.vbn.aau.dk_ on the dedicated project page 3 .
## Research Data Summary
**What is the purpose of the data collection/generation and its relation to
the objectives of the project?**
Research data is primarily qualitative material used to capture and reflect on
project activities, as well as to feed in to research outputs in the project
such as the citizen data toolkit and the frameworks for the OpenDataLabs
emerging at each of the pilots. In the second year of the project research
data collected has supported both formative evaluation related to the
development of OpenDataLabs as well as summative evaluation reflections
regarding the project’s achievements overall. An example of these materials is
shown in the figure above. These raw materials have formed the basis of
reflection about project activities related to hackathons in each of the
pilots.
The analysis of this research data for evaluation purposes has been presented
in Deliverable 4.2 Data collection and interpretation (D4.2). The material has
been used in other project deliverables. The research data does not include
quantitative data other than some quantitative elements of responses to
questionnaires completed by hackathon participants and O4C pilot members. The
research material therefore contains no datasets. The management of all
datasets used in the project is addressed in Section B on the O4C platform.
The figure below shows the scope and types of research materials generated by
the project in relation to the hackathon events.
**Figure 2: Overview of research materials from O4C hackathon events**
The figure above gives an overview of the scope and formats of research
material gathered by the five pilots during the project from the two cycles of
hackathons and related activities.
**Overview of research materials produced and collected**
The research materials generated in the Open4Citizens project primarily
support evaluation activities, as well as some use of the collected visual
evaluation materials such as photos and videos in dissemination activities.
There is no embedded quantitative data that can be extracted from the research
materials. We nevertheless describe these materials here in the Data
Management Plan, for possible reuse and analysis in the OpenDataLabs or by
others interested in the O4C project’s approach. They can be considered as
supplementary materials to the formal research outputs in the form of project
deliverables, publications, hackathon outputs such as app mock-ups to be
brought to market, and the network of OpenDataLabs.
The research material that is made publicly available consists of three
elements:
1. **Templates** used to gather evaluation materials as well as to support their gathering within the Open4Citizens hackathon process,
2. Selected, anonymized **examples of completed evaluation materials** , and
3. **Final versions of tools** used during the O4C project to support the O4C process for innovation in service delivery using open data.
The use of these materials allows others to replicate the O4C approach to
service innovation, supported by the tools in relation to the know-how
described in project deliverables. In addition, more learning from the O4C
approach can be supported by replicating the evaluation approach, using
evaluation data gathering templates. Finally, further analysis of the
selected, anonymised examples of completed evaluation materials may support
new findings about the value of the O4C approach in the network of
OpenDataLabs and similar initiatives.
The full list of available research data consists of the following:
1. **Templates**
* For gathering evaluation materials in Hackathon cycles 1 and 2
○ Data gathering PowerPoint slide deck
○ Guide for evaluation data gathering (consolidated from the PowerPoint deck
and from the cycle 2 questionnaire)
○ Tool use questionnaire questions (only used in cycle 2)
○ Contribution Story semi-structured interview template
* Of selected, amendable hackathon starter kit tools
* Of amendable citizen data tools
2. **Completed evaluation materials**
Selected, representative examples of anonymised evaluation materials from both
hackathon cycles across all 5 pilots in Barcelona, Denmark (Copenhagen and
Aalborg), Karlstad, Milan, and Rotterdam
* Facts about the hackathon
* Impressions from the hackathon
* Hackathon participant group (team) evaluation slides
* Stakeholder portrait (selected examples from cycle 2, across pilots)
* Hackathon Evaluation for Partners & Stakeholders
* Reflections on use of O4C toolkit tools
* Reflections on replacement tools used
* Reflections on additional tools used
* Online tool use questionnaire responses (collated across pilots)
* High-level observations by Antropologerne from cycle 2 hackathons
3. **Citizen Data Toolkit**
For use by others wishing to use the Open4Citizens approach to understanding
and working with open data for service innovation. The Citizen Data Toolkit
consists of tools from all three toolkit sections listed below, which have
been used, tested and amended in the first and second hackathon cycles. The
final version of the toolkit is presented in deliverable D2.5 Citizen Data
Toolkit. Tools are available as PDF documents, with their source files
available in Adobe Illustrator/InDesign formats, for further adaptation by
anyone with access to these programs who wishes to amend the tools.
* **Hackathon Starter Kit**
○ Templates for selected final versions of Hackathon Starter Kit Tools
○ Final versions of Hackathon Starter Kit tools, adjusted after hackathon
cycle 1 and finalized after hackathon cycle 2
* **Data tools** , resulting from the design case studies (see deliverable D2.3) and lessons learned in 2 hackathon cycles
○ Final versions of tools for working with data
**Figure 3: Overview of O4C tools. Work in progress between 1st and 2nd
hackathon cycles**
The figure above indicates general connections between the different types of
tools and the ways in which they are connected to support different types of
hackathon activities. The management of data used in the O4C platform data
repository and generated in the platform during these activities is described
in Part B. The diagram shows the importance of the supplementary research
outputs whose management is being described here in section A for creating the
main project results. In order to continue to build on O4C research outputs
and results, it is important to manage these outputs and to make them
available.
## Storage, use and accessibility of research materials
It is the responsibility of each pilot to store the research data collected
for evaluation purposes in accordance with the consent given by project
participants. The consent forms collected from participants are stored locally
in hard copy by pilots according to their organisational guidelines, i.e. in a
secure location, accessible only to relevant employees. Final versions of
evaluation materials which are made available online by the project for others
to access and use conform to the requirements regarding anonymity laid out in
the consent form, i.e. ‘I hereby give my consent for all videos and photos of
me, direct quotes, as well as any other material that I have made available to
be used by OpenDataLab _X_ and the Open4Citizens project, provided that it has
been anonymised.’ And ‘The Open4Citizens project partners may use the material
described in this document indefinitely.’ See the annexes for the templates of
the consent form used in relation to gathering research materials in the first
and second hackathon cycles. As shown in figure 1, in order to adhere to these
terms regarding anonymity, visual materials have been anonymised so that
individual faces are not visible, and aliases have been used in the place of
real names.
Research outputs generated throughout the project are primarily in the form of
qualitative material generated by all pilots in the project for analysis and
evaluation purposes. Some quantitative information, e.g. about numbers and
types of participants in the hackathons and partners in the OpenDataLabs has
been collected through questionnaires completed by O4C crew members in the
pilots, as well as by hackathon event participants and other stakeholders
involved in the O4C process. Selected and anonymised materials will be made
publicly available.
This includes the following:
* **Photographs** of hackathon activities and individuals involved in these
* **Quotes** by hackathon participants and other project stakeholders relating to their experience of participation in the O4C project
* **Questionnaire responses**
* by pilot teams regarding the use of specific tools during the O4C-style hackathon, as well as reflection on various elements of the hackathons.
* From hackathon participants about their experiences of hackathon participation
* **Written reflections** by pilot teams on the value of the O4C process of service innovation in hackathons
Photographs, originally available in the PowerPoint files in which they were
gathered, are made available as PDFs. Anonymized questionnaires are available
as Excel and CSV files. Questionnaire responses from the five O4C pilot
teams will not be personally identifiable, but will be related to specific
pilots’ hackathons. Questionnaire responses from hackathon participants and
other stakeholders have had personally identifiable information, such as name,
workplace or school, removed. We are not making video materials or audio recordings available on
the project’s repository as anonymization of this material is not possible to
the degree required with the resources available in the project.
**What is the expected size of the data?**
The total amount of research material made available on the Aalborg University
Research Portal (VBN) is approximately 70 MB. This is about a third of the
total research material generated in the project.
**To whom might the O4C research data be useful ('data utility’)**
Selected, primarily qualitative research materials from the five pilot
projects are being made available, as well as cross-cutting material reflecting
on the evaluation of the project as a whole. As described in more detail
above, this can be useful for researchers, practitioners and others wishing to
duplicate or adapt the Open4Citizens model, i.e. our specific approach to
empowering citizens to make appropriate use of open data for improved service
delivery.
## FAIR Research Data
The consortium members have decided to make research data available through
the Aalborg University Research Portal, VBN (http://vbn.aau.dk/en/), which is
compatible with OpenAire (Open AIRE, 2017c). Selected materials will be
accessible for re-use after the end of the project.
Aalborg University is a signatory of the Berlin Declaration on Open Access in
the Sciences and Humanities (Berlin Declaration, 2003), whose principles the
Open4Citizens project subscribes to.
Signatories to the declaration aspire to ‘promote the Internet as a functional
instrument for a global scientific knowledge base and human reflection and to
specify measures which research policy makers, research institutions, funding
agencies, libraries, archives and museums need to consider’ (Berlin
Declaration, 2003, pg. 1).
As described in section 2.2, above, selected and anonymised research material
that is considered to be relevant for future use is being made available via
the Open4Citizens project page on VBN 4 after the end of the project
(project month 30, June 2018).
### F: Findable research data, including provisions for metadata
**Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?**
For research materials collected and generated in the project and made openly
available, a fit-for-purpose file naming convention has been adopted, inspired
by best practice for qualitative data, such as that described by the UK Data Archive
(2011). This is [Project abbreviation]_[type of material]_[Pilot name, if
relevant]_[Date of event or final version]_[Any additional descriptive
text]_[Number, if there is more than one file with the same information].[file
format]. E.g., for some of the materials in Figure 1, this is:
“O4C_Evaluation-raw-material_Aalborg_November-2017_Hackathon-Impressions_1.PDF”.
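The convention above can be expressed programmatically; the following is a minimal sketch only, with a helper name and argument order that are illustrative rather than part of the deliverable:

```python
# Sketch of the O4C file naming convention:
# [Project]_[type]_[Pilot]_[Date]_[Description]_[Number].[format]
# The helper name and defaults are illustrative assumptions.

def o4c_filename(material_type, date, extension,
                 pilot=None, description=None, number=None, project="O4C"):
    """Assemble a file name from the convention's ordered components,
    skipping the optional ones that do not apply."""
    parts = [project, material_type]
    if pilot:
        parts.append(pilot)
    parts.append(date)
    if description:
        parts.append(description)
    if number is not None:
        parts.append(str(number))
    return "_".join(parts) + "." + extension

name = o4c_filename(
    "Evaluation-raw-material", "November-2017", "PDF",
    pilot="Aalborg", description="Hackathon-Impressions", number=1,
)
# name == "O4C_Evaluation-raw-material_Aalborg_November-2017_Hackathon-Impressions_1.PDF"
```

Encoding the convention once in a helper like this avoids inconsistent hand-typed names across pilots.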
Research material in vbn.aau.dk is findable via a search in the full text of
the file names of the uploaded files, as well as through tags associated with
these in the upload process. DOIs are provided upon request. At this time, at
the end of the project, we do not consider that the additional effort required
to get these DOIs is worth the likely minimal pay-off in terms of increased
findability. We aim to additionally support access to and use of the project’s
research materials by ensuring that there is an easy-to-reach and responsive
person listed prominently as a contact on the project’s VBN page. For the
immediate future, this will be Nicola Morelli, the project coordinator. If
additional resources are secured to scale up the Network of OpenDataLabs, a
dedicated NOODL.eu coordinator would be the contact person. In this way, we
can respond to any challenges being faced by people wishing to access and use
our materials.
### A: Making research data **openly accessible**
**How will the data be made accessible (e.g. by deposition in a repository)?**
Research materials will be made accessible on the Aalborg University Research
Portal, _www.vbn.aau.dk_ on the dedicated project page 5 . After the end of
the project in M30, June 2018, the research material selected as a
representative sample of the O4C project’s work, and described in this
deliverable, will be made available through the Aalborg University Research
Portal.
This repository is primarily intended for publications and material associated
with the project. For this reason, the repository is not listed on
_www.re3data.org_ 6 , the registry of research data repositories highlighted
as a data management resource by the European Commission. Nevertheless, it is
an accessible and sustainable repository that supports the needs of the O4C
project both during and after the project. The repository is OpenAire-
compatible 7 . Of particular relevance is the availability of in-person
support, should the needs of the project with respect to research data change
as the Network of Open Data Labs becomes established.
**What methods or software tools are needed to access the data?**
All research data uploaded to the vbn.aau.dk repository is openly accessible
and downloadable. Although they have increasingly become a standard format, we
are aware that PDFs are not the most accessible format. We will make the
source files for the Citizen Data Toolkit tools available for those who have
the necessary Adobe InDesign software to amend the tools. Here, as well, we
are aware that this is not an openly accessible format. However, the project
consortium has prioritised the production of visually appealing and well-
designed tools that are easy to print and add value in use for those
individuals who want to use them as they are. We expect that the user group
for the tools who may want to amend them are designers who will have access to
the necessary software.
### I: Making data interoperable
**Are the data produced in the project interoperable?**
Interoperability is less relevant for the qualitative research material we are
making available than for quantitative datasets. Microsoft Office has been
used for producing research data, specifically Microsoft PowerPoint and Excel.
Most files will be made available as PDFs.
The consortium has chosen to make editable versions of the tools in the
Citizen Data Toolkit (see deliverable D2.5) available in their original
formats, i.e. Adobe Illustrator. We consider that individuals with an interest
in amending the files for their own purposes are very likely to be designers
or others with existing access to the relevant software packages. Having
explored a number of open source packages for converting Microsoft Office
files for those without access to this software, we will recommend that files
be converted using Libre Office, should anyone contact us wishing to open our
files but being unable to do so. Libre Office is available online here:
https://www.libreoffice.org/download/download/.
### R: Increase data re-use (through clarifying licences)
**How will the data be licensed to permit the widest re-use possible?**
The Open4Citizens project has aimed to be as open as possible. We take the
guidelines developed by Open Knowledge International as our starting point.
Specifically, we have explored the applicability of the Open Data Commons Open
Database License (ODbL) for data created in the project. Given the fact that
most of our research data is visual and qualitative rather than in the form of
datasets, a Creative Commons license seems most appropriate.
All templates and tools, as well as research materials (e.g. PDFs of
PowerPoint slides used to gather evaluation material) produced in the project
that are being made available are being made available under the Creative
Commons Attribution-ShareAlike 4.0 International license. 8 Materials will
therefore be referenced as shown in the figure below.
**Figure 4: Creative commons license reference used for O4C research data (CC
BY-SA 4.0)**
The CC BY-SA 4.0 license has been chosen by the project given the large amount
of visual information that the research data encompasses.
**How long is it intended that the data remains re-usable?**
We will adhere to the repository standard of the Aalborg University Research
Portal. It is intended that the research material remains accessible and re-
usable for five years, in line with the availability of the O4C platform,
where some of this material will also be available.
**Are data quality assurance processes described?**
For the project’s research data, quality assurance of the qualitative
materials has been assured during the O4C project through their review by
Antropologerne on an ongoing basis in coordination with the pilots who have
produced the materials. Additional quality assurance of research data beyond
the end of the project will depend on the extent to which additional resources
are secured in relation to the OpenDataLab to ensure this. If no additional
resources are secured, all further activities relating to the O4C project
research data will be the remit of the Aalborg University VBN support staff,
with whom the O4C project coordinator will maintain contact.
## Research Data Security
**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?**
Research data has been shared between project partners and stored in
collaborative online working platforms during the project’s lifetime. These
are BaseCamp (https://3.basecamp.com), Google Drive
(https://drive.google.com), and Dropbox (https://www.dropbox.com). Some
intermediate and all final versions of evaluation data collected in the
project and analysis outputs of this material are saved in a standardised
filing system in the project’s BaseCamp account.
Material created during the project is stored locally by the Open4Citizens
partners according to their institutional data management and storage
guidelines. This locally stored research data includes unanonymised
questionnaire data from hackathons, as well as consent forms signed by
hackathon and other project participants allowing for the use of photos and
videos of these participants. Consent forms will be kept beyond the end of the
Open4Citizens project. Additional research data such as personal notes, unused
photos and video clips etc. will be safely deleted and discarded as
appropriate after the end of the project (June 2018). This research data
includes all data not made publicly available for the long term in Aalborg
University’s Repository. The consortium partners are discussing these
procedures and requirements at the time of submitting this deliverable with
respect to the research materials and their potential use in the five
OpenDataLabs to ensure a common understanding and approach.
All working materials, currently stored on Google Drive and BaseCamp will be
deleted when appropriate by the project coordinator at Aalborg University
after the end of the O4C project when it has been assessed that they are no
longer needed.
**Is the data safely stored in certified repositories for long term
preservation and curation?**
The Aalborg University Research Portal (vbn.aau.dk) will be used for long-term
preservation of research data. See details above. At the time of writing this
deliverable, it has not been possible to get access to the VBN policies and
procedures regarding data security. However, the portal itself meets the
requirements to be OpenAire compatible and we are confident that all necessary
requirements are in place with respect to these considerations.
## Ethical Aspects concerning research data
**Are there any ethical or legal issues that can have an impact on data
sharing?**
Ethical issues related to the research materials have been discussed above.
These specifically relate to informed consent secured from project
participants and to the need to anonymise all the qualitative materials
produced in the project. The O4C consortium members have ensured that the
materials made openly available have been adequately anonymised in line with
the procedures laid out in the project’s consent forms. The physical consent
forms themselves are locally stored by each of the five pilots in a location
that is not openly accessible, e.g. a dedicated file in a lockable room or
filing cabinet.
**Are there any ethical or legal issues that can have an impact on sharing
research data?**
At the time of writing this final data management plan, the General Data
Protection Regulation (GDPR) 9 has come into force in the European Union.
The O4C consortium has used these new rules and associated guidelines as the
basis for assessing which data is made available. We have also been guided by
the Article 29 Working Party Guidelines on Consent. 10
**Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?**
The gathering and analysis of research data in the project is guided by
standard ethics guidelines for the social sciences (e.g. as outlined in
Iphofen (2009) 11 ).
For research data collected in relation to Open4Citizens hackathons, as well
as questionnaires and other personally identifiable information generated,
informed consent has been sought. All participants in hackathons are requested
to provide their consent for all materials produced to be used by the project.
See the annex for standard consent forms used in the project. On the rare
occasions where project participants have not wished for their photos or
quotes to be used, the pilots in question have ensured that none of this
information has been made openly available.
# Part B: Managing O4C Platform Data
Part B of this Data Management Plan deals with the Open4Citizens platform and
how Dataproces has managed data uploaded to and generated in the platform. It
will give a short explanation of the platform, the life cycle of the data and
security measures taken at Dataproces as well as compliance with the FAIR
principles.
## Data summary
**What is the purpose of the data collection/generation and its relation to
the objectives of the project?**
The purpose of generating and uploading datasets to the O4C Platform at
opendatalab.eu has been to make it possible for participants, curious citizens
and other interested stakeholders to locate and find the data for use in
projects in the various hackathons. Being publicly available, the platform
will only store such datasets for the purpose of facilitating hackathon users
in their search. Further data can also be stored in the marketplace section of
the platform ( _https://opendatalab.eu/#marketplace_ ) where hackathon
outcomes generated by the participants are made available.
### The Open4Citizens platform: Functions and the users
Here, we provide a short description of the platform including the user types,
data utility and datatypes. Opendatalab.eu is a platform for facilitating
hackathons where it is possible to create events, sign up for these events,
upload/download datasets, manipulate data and upload projects.
**User Types**
There are two types of users in the platform:
* **Users** : A regular user can sign up to the platform, upload datasets, sign up for events and upload projects. Users are also able to delete their own projects and datasets.
* **Facilitators** : A facilitator can create events and create project teams for each event. The facilitator can see which users participate in his/her specific events, including but not limited to the O4C-style hackathons. A facilitator cannot see the information regarding participants in other facilitators’ events and is not able to remove users’ uploaded projects. A facilitator is able to delete all datasets.
### Data utility in the O4C Platform
**To whom might it be useful ('data utility’)**
The O4C platform is focused on helping its users gain an understanding of
open data, as well as aiding the development of new services or the improvement
of existing services during the hackathon cycle. The data in the platform is intended to
be used as:
* **Components in digital mobile or web applications** – a dynamic product to access personally meaningful or context-aware data, such as a weather or route planner app.
* **Elements in concepts** – i.e. mock-ups of mobile or web applications.
* **Data examples** for the participants to gain a greater understanding of open data.
* **Visualisation** – a statistical representation of data, such as an infographic or a narrative told as a news article (data journalism). The main objective is to communicate about what is otherwise “raw numbers in a table”.
* **Digital service** – a product-service system with various touch points ingrained with open data. For example, a service where citizens can report faulty street objects (broken lamppost, etc.) using a smartphone application, and the government is notified about these problems and can fix them.
Projects and concrete solutions developed in the O4C hackathons include
concepts, mock-ups and prototypes (e.g. for apps). A number of the most
promising solutions have been further developed after the hackathon event in
order to create working solutions to challenges worked on during the
hackathons. These solutions, as well as the data they use, and generate are
the property of the teams who develop them, with the explicit expectation from
the O4C project that they will be made openly available under a creative
commons license.
### What types and formats of data will the project generate/collect?
There are three main types of data in the platform:
* the datasets uploaded for use in hackathons
* hackathons outcomes created by participants
* the user-generated data stored in the platform

These datatypes will be explained in the sections below.
**Datasets and their formats:**
Datasets consist of open data uploaded to the platform and used by
participants in the hackathons. The datasets that have been chosen
for the hackathon cycles and uploaded to the O4C Platform mostly consist of
files in .CSV and .xlsx format, which has allowed them to be used with the
visualization tools in the platform such as different kinds of graphs. The
geocoded .CSV files can further be used with mapping tools that allow the user
to see where objects are located on a map.
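A geocoded CSV of this kind can be read with standard tooling before it is handed to a mapping tool. The sketch below is illustrative only: the `name`, `latitude` and `longitude` column names and the sample rows are assumptions, not taken from the actual O4C datasets.

```python
# Minimal sketch of consuming a geocoded CSV; column names and the
# embedded sample data are hypothetical, not from the O4C repository.
import csv
import io

sample = """name,latitude,longitude
Broken lamppost,57.0488,9.9217
Pothole,57.0456,9.9301
"""

points = []
for row in csv.DictReader(io.StringIO(sample)):
    # Coordinates arrive as text and must be parsed before plotting on a map.
    points.append((row["name"], float(row["latitude"]), float(row["longitude"])))

print(points[0])  # ('Broken lamppost', 57.0488, 9.9217)
```

In practice the same loop would read from an uploaded file rather than an in-memory string.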
To ensure that the credit is given to the owner of the dataset, wherever
possible, the name and link to the original dataset have been added to each
dataset in the repository located on the platform.
**Hackathon outcomes** :
The hackathon outcomes uploaded to the platform are the ideas participants
have worked on during the hackathon. As mentioned, there is an opportunity to
upload these projects to the platform at _https://opendatalab.eu/#marketplace_ .
The data that is stored in the platform in relation to uploading
projects is:
* Project name
* Project description
* Thumbnail picture
* Attached CSV or Excel files with name of each file
* Link to external datasets used in the project
**User-generated data** :
This consists first and foremost of the information that users provide when
they create a user profile in the platform, as well as data brought to the hackathon
event by participants and uploaded to the platform in cases where there is
insufficient relevant open data available:
* First and last name
* Profession
* Date of birth
* E-mail
* Password
* Country (if they want facilitator rights)
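The registration fields listed above can be pictured as a single user record. This is an illustration only: the deliverable lists the fields but does not publish the platform's actual schema, so the class and field names below are assumptions.

```python
# Hypothetical sketch of the stored user record; names are assumptions,
# as the O4C deliverable does not publish the platform's schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlatformUser:
    first_name: str
    last_name: str
    profession: str
    date_of_birth: str        # e.g. "1990-05-17"
    email: str
    password_hash: str        # passwords should be stored hashed, never in plain text
    country: Optional[str] = None  # only collected if facilitator rights are requested

u = PlatformUser("Ada", "Lovelace", "Analyst", "1990-05-17",
                 "ada@example.org", "<hash>", country="Denmark")
```

Making `country` optional mirrors the fact that it is only required for facilitator rights.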
The user-generated data also consists of the data that is generated when a
user signs up for an event, such as the date of attendance, which event they
participated in and so on. The data is only visible to Dataproces or the
facilitator of the specific event. This means that facilitators cannot see
each other’s events and thereby gain information on the attending users.
Dataproces does not use this data for analysis of user behaviour, nor is it
possible for outside companies to access user data for analysis. See appendix
for disclaimer in the platform. See figure below or visit
_https://opendatalab.eu/#register-section_ .
**Figure 5: Terms and conditions in the O4C Platform**
**General Data Protection Regulation**
To be compliant with the new European Union General Data Protection Regulation
(GDPR), there is a disclaimer on opendatalab.eu, the O4C platform, that
explains users’ rights regarding their personal data (See Appendix).
This includes the following points:
* Information you provide us
* Information collected by cookies and other tracking technologies
* Use of information – purpose and legal basis
* Storage of information
* Sharing of information
* Transfer to third countries
* Security
* Your rights
* Right to request access
* The right to object
* Right to rectification and erasure
* The right to restriction
* The right to withdraw consent
* The right to data portability
* Contact and complaints
**Gaining consent from users to keep their data**
Many of the users have signed up to the platform on physical paper forms
during the hackathons and have not digitally authorized Dataproces to add
their personal data to the platform. As a result, Dataproces has anonymized
the hackathon participants’ projects before these have been showcased in the
O4C platform marketplace.
**Erasing user-generated data**
If the users did not explicitly agree to let us keep their personal user data,
we cannot showcase this in the platform.
Another point regarding deletion of user data is that according to the GDPR
companies must delete user data as soon as they do not have a specific purpose
for keeping it. Therefore, Dataproces has set up a practice whereby user-generated
data will be evaluated once a year and deleted unless it is still necessary
for the user and covered by the disclaimer’s commitment to the user.
Dataproces will maintain this practice for five years, after which either an
agreement must be made for Dataproces to continue or a third party must take
over the task. Should neither scenario be realised, Dataproces is
committed to erasing all user data.
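The annual evaluation practice could be sketched as a simple retention check. This is a hypothetical illustration: the deliverable describes the practice but no implementation, so the field names, function name and sample records below are all assumptions.

```python
# Hypothetical sketch of the annual retention review described above;
# all names and the sample records are assumptions for illustration.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # data is evaluated once a year

def users_due_for_deletion(users, today):
    """Return e-mail addresses of users whose data is due for deletion:
    last reviewed at least a year ago, and no longer both necessary for
    the user and covered by the disclaimer's commitment."""
    due = []
    for user in users:
        overdue = today - user["last_reviewed"] >= REVIEW_INTERVAL
        keep = user["still_needed"] and user["covered_by_disclaimer"]
        if overdue and not keep:
            due.append(user["email"])
    return due

users = [
    {"email": "a@example.org", "last_reviewed": date(2017, 6, 1),
     "still_needed": False, "covered_by_disclaimer": False},
    {"email": "b@example.org", "last_reviewed": date(2018, 5, 1),
     "still_needed": True, "covered_by_disclaimer": True},
]
print(users_due_for_deletion(users, date(2018, 6, 15)))  # ['a@example.org']
```

The point of the sketch is that the deletion decision combines a time trigger (the annual review) with a purpose test, as the GDPR requires.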
### What is the expected size of the data?
The database consists of two parts, user generated/uploaded data and
facilitator generated/uploaded data. The size of the database is at the
present moment (June 2018) about 500 MB in total. The database size is
directly proportional to the platform usage and traffic.
## Data Security
**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?**
**Server at Dataproces**
This paragraph contains a technical description of the server structure
located at Dataproces, Skalhuse 5, 9240 Nibe, Denmark.
The Open4Citizens Platform is deployed internally on a server, and the figure
shows where each application is deployed. We have used the Angular 2 framework to
build the front-end part of the platform; it is deployed on an IIS server. We
have used the Django framework to build our web service; it is deployed on
an Apache server in another Alice virtual server instance. We use MySQL for
our database, which also runs in a separate virtual server.
**Figure 6: Dataproces server structure**
It was concluded in the latest audit performed by authorized firm Attiri
(http://attiri.dk/) that the Dataproces server environment hosting the
platform fulfils all data security standards. The following measures have been
put in place to prevent any outside or unauthorised access to data. **Is the
data safely stored in certified repositories for long-term preservation and
curation? Database at Dataproces**
The database server for storing the platform data is located at Dataproces
Skalhuse 5, 9240 Nibe, Denmark.
* **Firewall** : The database is secured by a firewall, which only provides access to authorized users through a secure protocol. Inside the server there is another firewall that only provides a user access to the specific O4C database.
* **Backup** : There are daily backups of the data.
* **Recovery** : It is possible to retrieve files from any day.
The picture below is a simple visualization of a user logging on to the
OpenDataLab and getting access to the database located at Dataproces through
the internet. Only authenticated users will gain access through the firewall
shown to the right. When inside the Dataproces server, there is another
firewall that directs the user to the specific database which the user is
permitted to access.
**Figure 7: Access to Dataproces’ database from the O4C platform**
## FAIR Data Handling in the O4C Platform
The approach to data storage in the platform is inspired by the FAIR
principles to make it easier for the participants and other interested
stakeholders to find, access and re-use the datasets and to make them
interoperable with other datasets.
* **Making data findable, including provisions for metadata** : To locate metadata in the platform it is possible to search by tags or name in the data repository or by browsing the marketplace.
* **Making data openly accessible:** You can download datasets, which are available through the frontpage of the platform. This requires no user profile or login. It is also possible to upload new datasets, or download existing datasets, edit them and re-upload them to the platform.
* **Making data interoperable:** The file formats are Excel and CSV, which are common formats.
* **Increase data re-use:** The data is reusable by third parties. Any data uploaded or generated in the platform will be available for later users to exploit and explore. The data will only be shared through www.opendatalab.eu where anyone will have access to them.
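The tag-and-name search mentioned in the first bullet can be pictured as a simple filter over dataset records. The sketch below is hypothetical: the platform's actual search implementation is not published, and the example records are invented for illustration.

```python
# Hypothetical sketch of searching the data repository by tag or name;
# the records and the function are invented for illustration only.
datasets = [
    {"name": "Aalborg bus stops", "tags": ["transport", "geo"]},
    {"name": "Milan air quality", "tags": ["environment"]},
]

def search(query, records):
    """Return names of datasets whose name or tags match the query,
    case-insensitively."""
    q = query.lower()
    return [d["name"] for d in records
            if q in d["name"].lower()
            or q in (t.lower() for t in d["tags"])]

print(search("transport", datasets))  # ['Aalborg bus stops']
```

Matching on both name and tags is what makes the same dataset findable to users browsing by topic and to those who remember part of its title.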
### Are data quality assurance processes described?
**Datasets:**
The user who uploads a dataset agrees to take full responsibility for its
quality; Dataproces takes no responsibility for it. To upload data to the
platform a user is required to register, and the platform tracks which user
uploaded each dataset. Furthermore, users are required to agree to the terms
and conditions of the platform before they can use it.
**Terms and conditions for uploading data**
To help secure the quality of the uploaded datasets, Dataproces requires users
who upload data to agree to the terms and conditions of the platform. See the
appendix for the full disclaimer.
In accordance with the terms, the user who uploads a dataset is held
responsible if it is infected with viruses, is illegal, or is otherwise
harmful, which discourages the upload of potentially harmful files.
## Ethical Aspects re: Platform Data
**Are there any ethical or legal issues that can have an impact on data
sharing?**
When organising O4C hackathon events, the O4C pilots have collected
information from public repositories containing open data. Since open data
consist of information databases that are in the public domain, the data can
be freely used and redistributed by anyone. With regard to the open data from
various sources made available on the O4C Platform, Dataproces does not
guarantee that this data has been published with the prior, necessary and
informed approval it requires.
# Allocation of Resources
**Are the resources for long-term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?**
At the end of the O4C project, the value of the project outputs, including
research data and data in and related to the O4C platform, is determined by
the five project pilots as they consolidate activities in their emerging local
OpenDataLabs.
## Resources for O4C platform data
As mentioned in the Data Security section, all datasets uploaded to the O4C
Platform are stored on a server at Dataproces, which has ensured preservation
and backup throughout the project. The aim is for the Platform to remain
available after the O4C project ends (beyond June 2018). This means that the
open data collected, generated and uploaded to the Platform during the project
lifetime will remain accessible after the end of the funding period. The data
in the O4C platform will be available for as long as the internal server at
Dataproces is up and running and its costs are covered by Dataproces' business
case. In that regard, Dataproces has continuously worked on developing the
business plan for the platform; the software is therefore not open source but
the property of Dataproces. The sustainable business model Dataproces has set
for the platform is to support both an open data environment and a closed data
environment. The platform also collects process information and gathers ideas
and thoughts throughout an O4C hackathon event. This has produced a powerful
tool that can support both the idea-generation period and the subsequent
idea-development period, and that helps the user/host keep track of the idea
owner. Dataproces will continue to evolve and use the platform after the
project funding stops, and once value is created internally at Dataproces, the
plan is to offer the same process to its customers. Dataproces has no
intention of taking down the platform, and will keep it online for at least a
period of 5 years.
## Resources for research data
Resources for data management during the project have been allocated under
tasks T3.2 Data mapping, integration and technical support, as well as T4.4
Data management plan. At the end of the Open4Citizens project (M30, June
2018), no additional funds are available for data management. Long-term
curation of the research materials will therefore be funded through generic
funding for the Information Technology Services at Aalborg University.
Research data will be maintained in line with the general guidelines for the
Aalborg University Research Portal. The Open4Citizens (O4C) project aims to
consolidate and scale up the Network of OpenDataLabs (NOODL.eu) as a legacy of
the project (See Deliverable D4.10 Sustainability and Business Plans).
# Conclusions and Outlook
The Open4Citizens (O4C) project has used and generated relatively small
amounts of data related to the O4C Platform and in the form of research data
(primarily qualitative research materials), which can be made available for
re-use. This deliverable has described how the data has been managed by the
project's consortium partners and how we intend it to be FAIR (findable,
accessible, interoperable, re-usable) beyond the project, i.e. after June
2018.
At the time of writing this deliverable, at the end of the project, the five
O4C project pilots in Aalborg/Copenhagen (Denmark), Barcelona (Spain),
Karlstad (Sweden), Milan (Italy) and Rotterdam (the Netherlands) form the
basis of the emerging Network of OpenDataLabs
(NOODL.eu). This network is a legacy of the O4C project that currently
looks likely to be sustainable in some shape or form. This current data
management plan is expected to be a starting point for data management in
NOODL.eu. This allows the project partners who will remain involved in the
network as OpenDataLab owners or key stakeholders to improve future data
management related to relevant data and materials from the project, as well as
related to new data used and generated in the network.
# Bibliography
Aalborg University Research Portal, 2017. Accessed at http://vbn.aau.dk/en/
Berlin Declaration 2003. Berlin Declaration on Open Access to Knowledge in the
Sciences and Humanities. (2003). Available at the Max Planck Society Open
Access website: https://openaccess.mpg.de/Berlin-Declaration, Retrieved April
20, 2017
Corti, L., Van den Eynden, V., Bishop, L., & Woollard, M. (2014). _Managing
and sharing research data: a guide to good practice_ : Sage.
Data Catalog Vocabulary 2014, Marking up your dataset with DCAT | Guides. (n.d.). Retrieved April 27, 2017, from https://theodi.org/guides/marking-up-your-dataset-with-dcat
European Commission, DG Justice and Consumers (2018). Article 29 Working Party
Guidelines on consent under Regulation 2016/679, adopted on 28 November 2017,
as last revised and adopted on 10 April 2018. Accessed at
http://ec.europa.eu/newsroom/article29/itemdetail.cfm?item_id=623051. Direct
document link:
http://ec.europa.eu/newsroom/article29/document.cfm?action=display&doc_id=51030
European Commission, DG Research and innovation (2017) General Annex L of the
Horizon 2020 Work Programme 2018-2020 Direct document link:
http://ec.europa.eu/research/participants/data/ref/h2020/other/wp/2018-2020/annexes/h2020wp1820-annex-
l-openaccess_en.pdf Accessed online at
http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-
cutting-issues/openaccess-data-management/open-access_en.htm
Open Knowledge Group, The Open Definition. Retrieved April 27, 2017, from
http://opendefinition.org/
Guidelines on FAIR Data Management in Horizon 2020. (n.d.). Retrieved April
27, 2017, from
http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-
oadata-mgt_en.pdf
Iphofen, R. (2009) ‘Research Ethics in Ethnography/Anthropology’ European
Commission, DG Research and Innovation, Retrieved 22 June, 2018, at
http://ec.europa.eu/research/participants/data/ref/h2020/other/hi/ethics-
guide-ethnoganthrop_en.pdf
ODbL, Open Data Commons Open Database License (ODbL). (2016, July 06).
Retrieved April 27, 2017, from https://opendatacommons.org/licenses/odbl/
Open Aire 2017a, Open Aire FAQ. Retrieved April 27, 2017, from
https://www.openaire.eu/support/faq#article-id-234
Open Aire 2017b, Principe, P. Open Access in Horizon 2020. Retrieved April 27,
2017, from https://www.openaire.eu/open-access-in-horizon-2020
Open Aire 2017c, OpenAIRE. Retrieved April 27, 2017, from
https://www.openaire.eu/
UK Data Archive 2011, Managing and Sharing Data: Best practice for
researchers. Retrieved April 24,
2017, from http://www.data-archive.ac.uk/media/2894/managingsharing.pdf
***
**0610_AMBER_675087.md** (Horizon 2020, https://phaidra.univie.ac.at/o:1140797)
2.2 Making data openly accessible:
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
Where possible data will be made available subject to Ethics and participant
agreement. However, the personally-identifiable nature of the data collected
within AMBER means that in most instances it would be difficult to release
collected data. Where data is made available we will do so using the Kent
Academic Repository (KAR).
Prior to release, a requesting party will need to contact the Project
Coordinator describing their intended use of a dataset. The Project
Coordinator will send a terms and conditions document for them to sign and
return. Upon return, the dataset will be released. Documentation (and, if
available for distribution, software) will be included with the release of the
data.
2.3 Making data interoperable:
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
As stated, we will adhere to ISO/IEC data interchange formats (19794-X) for
the storage of sample and meta data. This will ensure proven interoperability
within the biometrics community.
2.4 Increase data re-use (through clarifying licenses):
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
Due to the sensitive nature of the data, they will be available only on
application, and their use will be restricted to the research use of the
licensee and colleagues on a need-to-know basis. This non-commercial licence
is renewable after 2 years; data may not be copied or distributed, and must be
referenced if used in publications. These arrangements will be formalised in a
User Access Management licence which describes in detail the permitted use of
the data.
# ALLOCATION OF RESOURCES
Explain the allocation of resources, addressing the following issues:
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project Describe costs and potential value of long term preservation
Data will be stored at the coordinator's (University of Kent) repository, KAR,
and will be kept for 5 years after the end of the project. Where requested,
data will be kept for 2 more years. KAR is managed and supported by a team of
experts and is free of charge.
# DATA SECURITY
Address data recovery as well as secure storage and transfer of sensitive data
Data will be stored in the Kent Academic Repository (KAR), which is managed
and supported by a team of experts at the University of Kent and subject to
the university's data security measures and backup policies.
Data is transferred as Zip archives. Sensitive data is encrypted using
shared-key methods, with the password distributed separately.
# ETHICAL ASPECTS
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
All our work is subject to ethical approval (locally, via an Independent
Ethics Advisor and the EC
REA). Prior to data collection participants will agree to the terms and
conditions outlined in a Participant Information and Consent Form.
# OTHER
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
None
***
**0612_READ_674943.md** (Horizon 2020, https://phaidra.univie.ac.at/o:1140797)
# Executive Summary
This paper provides an initial version of the Data Management Plan in the READ
project. It is based on the DMP Online questionnaire provided by the Digital
Curation Centre (DCC) and funded by JISC: _https://dmponline.dcc.ac.uk/_ .
We have included the original questions in this paper (indicated in italic).
The management of research data in the READ project is strongly based on the
following rules:
* Apply a homogenous format across the whole project for any kind of data
* Use a well-known external site for publishing research data (ZENODO)
* Encourage data providers to make their data available via a Creative Commons license
* Raise awareness among researchers, humanities scholars, but also archives/libraries for the importance of making research data available to the public
# Data summary
Provide a summary of the data addressing the following issues:
* State the purpose of the data collection/generation * Explain the relation to the objectives of the project * Specify the types and formats of data generated/collected * Specify if existing data is being re-used (if any) * Specify the origin of the data * State the expected size of the data (if known) * Outline the data utility: to whom will it be useful
The main purpose of all data collected in the READ project is to support
research in Pattern Recognition, Layout Analysis, Natural Language Processing
and Digital Humanities. In order to be useful for research the collected data
must be "reference" data.
Reference data in the context of the READ project consist typically of a page
image from a historical document and of annotated data such as text or
structural features from this page image.
An example: in order to develop and test Handwritten Text Recognition
algorithms we need the following data: first, a (digital) page image; second,
the correct text on this page image, more specifically of each text line; and
third, an indication (= the coordinates of the line region) of where exactly
the text appears on the page image. The format used in the project is able to
carry this information. The same is true for most other research areas
supported by the READ project, such as Layout Analysis, Image pre-processing
or Document Understanding.
Reference data are of highest importance in the READ project since not only
research, but also the application of tools developed in the project to large
scale datasets is directly based on such reference data. The usage of a
homogenous format for data production was therefore one of the most important
requirements in the project. READ builds upon the PAGE format, which was
introduced by the University of Salford in the FP7 Project IMPACT. It is well-
known in the computer science community and is able to link page images and
annotated data in a standardized way.
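The linkage such a format provides (a page image, a line's coordinates, and its transcription) can be illustrated with a deliberately simplified, PAGE-inspired XML fragment. The element names below are abridged for illustration; the real PAGE XML schema is namespaced and far richer, so treat this purely as a sketch of the idea, parsed here with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A simplified, PAGE-inspired fragment (illustrative only: the real PAGE
# schema adds a PcGts root, a namespace, and many required attributes).
page_xml = """
<Page imageFilename="page_001.jpg">
  <TextRegion id="r1">
    <TextLine id="l1">
      <Coords points="10,20 300,20 300,60 10,60"/>
      <TextEquiv><Unicode>Dear Sir, I write to inform you ...</Unicode></TextEquiv>
    </TextLine>
  </TextRegion>
</Page>
"""

root = ET.fromstring(page_xml)
print(root.get("imageFilename"))                # which page image the data refers to
for line in root.iter("TextLine"):
    coords = line.find("Coords").get("points")  # where the line sits on the image
    text = line.find("TextEquiv/Unicode").text  # the reference transcription
    print(line.get("id"), coords, text)
```

Keeping image reference, geometry and transcription in one XML tree is exactly what makes such reference data usable as training and evaluation ground truth.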
# Fair data
## Making data findable, including provisions for metadata
* Outline the discoverability of data (metadata provision) * Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers? * Outline naming conventions used * Outline the approach towards search keyword * Outline the approach for clear versioning * Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how
Part of the research in the Document Analysis and Recognition community is
carried out via scientific competitions organized within the framework of the
main conferences in the field, such as ICDAR (International Conference on
Document Analysis and Recognition) or ICFHR (International Conference on
Frontiers in Handwriting Recognition). READ partners are playing an important
role in this respect and have organized several competitions in recent years.
One of the objectives of READ is to support researchers in setting up such
competitions. Therefore the ScriptNet platform was developed by the National
Centre for Scientific Research – Demokritos in Athens to provide a service for
organizing them. The datasets used in such competitions will be made available
as openly as possible.
For this purpose we are using the ZENODO platform and have set up the
corresponding ScriptNet community: https://zenodo.org/communities/scriptnet/.
In comparison to current competitions this is a step towards making Research
Data Management more popular in the Pattern Recognition and Document Analysis
community.
The format of the data is simple: as indicated above, all data come in the
PAGE XML format, together with images and a short description explaining
details of the reference data.
Since all data in the READ project are created in the Transkribus platform and
with the Transkribus tools, the data format is uniform and can also be
generated via the tool itself. In this way we hope to encourage as many
researchers, archives and libraries as possible to provide research data.
## Making data openly accessible
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so * Specify how the data will be made available * Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)? * Specify where the data and associated metadata, documentation and code are deposited * Specify how access will be provided in case there are any restrictions
All data produced in the READ project are per se freely accessible (or will
become available during the course of the project). We encourage data
providers to use the Creative Commons schema (which is also part of the upload
mechanism in ZENODO) to make their data available to the public. Nevertheless,
some data providers (archives, libraries) are not prepared to share their data
in a completely open way; instead, rather strict regulations are set up that
restrict data usage even for research and development purposes. Therefore some
datasets may be handed over only at the request of specific users, and after a
data agreement has been signed.
## Making data interoperable
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability. * Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
Because data in the READ project are handled in a highly standardized way,
data interoperability is fully supported. As indicated above, the main
standards in the field (XML, METS, PAGE) are covered and can be generated
automatically with the tools used in the project.
## Increase data re-use (through clarifying licenses)
* Specify how the data will be licenced to permit the widest reuse possible * Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed * Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why * Describe data quality assurance processes * Specify the length of time for which the data will remain re-usable
As indicated above we encourage use of Creative Commons and support other
licenses only as exceptions to this general policy.
# Allocation of resources
Explain the allocation of resources, addressing the following issues: *
Estimate the costs for making your data FAIR. Describe how you intend to cover
these costs * Clearly identify responsibilities for data management in your
project * Describe costs and potential value of long term preservation
Data Management is covered explicitly by the H2020 e-Infrastructure grant. All
beneficiaries are obliged to follow the outlined policy in the best way they
can.
# Data security
Address data recovery as well as secure storage and transfer of sensitive data
We distinguish between working data and published data. Working data are all
data in the Transkribus platform. This platform is operated by the University
of Innsbruck and data backup and recovery is part of the general service and
policy of the Central Computer Service in Innsbruck. This means that not only
regular backups of all data and software are carried out, but that a
distributed architecture exists which will secure data even in the case of
flooding or fire. Security is also covered by the Central Computer Service
comprising regular security updates, firewalls and permanent evaluation.
Published data are still kept on the Transkribus site as well, but are also
made available via ZENODO.
# Ethical aspects
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
There are no ethical issues connected with the management of research data in
READ. The only aspect that might play a role in the future concerns documents
from the 20th century containing personal data. For this case the Transkribus
site offers a solution whereby specific aspects of such documents - which may
be interesting research objects - can be classified (e.g. person names) in a
way that allows research to be carried out without conflicting with personal
data protection laws.
# Other
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
Not applicable
***
**0613_FarFish_727891.md** (Horizon 2020, https://phaidra.univie.ac.at/o:1140797)
# 2 Introduction
Over the course of a research project, considerable amounts of data are
gathered. Often, these data are not preserved or made available for later
reuse, causing time and effort to be spent in other projects gathering
similar data. The goal of the Horizon 2020 Open Research Data Pilot is to
remedy this issue by ensuring that research data generated through a project
are made available for reuse after the project's end. The H2020 Open Research
Data Pilot is based on the principle of making data FAIR:
* Findable
* Accessible
* Interoperable
* Reusable
As a way of managing the data used during a project's lifetime, a Data
Management Plan (DMP) must be created. The DMP forms include details on:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access
* how data will be curated and preserved (including after the end of the project)
* ethical issues related to the data
* estimated costs associated with data archiving/sharing
The creation of the DMP is the responsibility of task 2.2/deliverable 2.2. As
per the DoA, task 2.2 will fulfill three requirements as a participant in the
H2020 Open Research Data Pilot: " _Firstly, the collected research data should
be deposited in data repository (…). Secondly, the project will have to take
measures to enable third parties to access, mine, exploit, reproduce and
disseminate this research data. Finally, a _Data Management Plan (DMP)_ has to
be developed detailing what kind of data the project is expected to generate,
whether and how it will be exploited or made accessible for verification and
reuse, and how it will be curated and preserved _ ".
# 3 Method
In order to collect information from the project participants, a form and an
explanation describing the desired content of each DMP component were sent out
to all partners by email (both are attached in " _Appendix 2 – Templates_ ").
Detailed instructions on how to fill out the form were included in the
accompanying e-mail. Both the form and the explanation were based on the
proposed DMP structure in " _Guidelines on FAIR Data Management in Horizon
2020_ " (2016). Along with the two forms, an example from a previous project
was distributed in the same email.
In order to harmonize the forms, the formatting of certain forms has been
edited where needed. No changes have been made to the content.
# 4 Conclusion
The deliverable contains 38 forms, detailing the content of the different
datasets, the ways in which data will be stored and how/if it will be made
available at the project end. The forms are grouped according to case study.
Datasets not pertaining to one individual case study are grouped in a separate
category: " _Non-case study specific_ ". If no case study-specific datasets
have been submitted for a particular case study, the case study is not
included in the list in the appendix.
During the later stages of the project, relevant datasets will be uploaded to
the FarFish Database (FFDB), created as part of task 6.1 in Work Package 6
"Development of management tools" as a means of storing research data. The
FFDB will be accessible from the FarFish webpage. At or near the project end,
datasets will be uploaded from the FFDB to OpenAire. A FarFish account has
been created on Zenodo
( _https://zenodo.org/communities/farfish2020?page=1&size=20_ ). Most datasets
can be made publicly available, with the exception of meeting minutes and
reports from the Joint Committee meetings. A full review of what data can, and
should, be made available will be carried out nearer the project end.
With the exception of personal information collected during interviews, the
potential for ethical issues raised by FarFish is relatively minor, though
this might vary from dataset to dataset. Participation in the project is on a
voluntary basis, and participants have the right to limit the use of any
information they provide and may request that information collected be deleted
at the end of the project. FarFish follows the Eurostat rules and national
guidelines on data confidentiality, and the ICC/ESOMAR International Code on
Market, Opinion and Social Research and Data Analytics. See the "Ethics and
Security" section in the DoA for more information.
Because the project is at an early stage, and because different work packages
are on different time schedules, not all forms share the same level of detail.
The DMP is intended to be a "living" document, however, and will evolve as the
project progresses. Periodic revisions of the DMP are planned once within each
18-month periodic reporting period. Ahead of each periodic review, an email
will be sent out to all project participants, asking them to update the DMP
forms pertaining to their datasets by either editing existing information or
by adding new forms if necessary.
Extra revisions might be scheduled should the need arise. The table in the
"Revision history" chapter provides a summary of revisions carried out over
the lifetime of this Data Management Plan. It provides a version number, the
date of the latest revision, the initials of the editor, and a comment
describing the changes made.
# 5 Acknowledgements
We wish to acknowledge the contribution of all project participants who
contributed to the completion of this deliverable.
***
**0614_PETER_767227.md** (Horizon 2020, https://phaidra.univie.ac.at/o:1140797)
# 1\. INTRODUCTION
## 1.1 H2020 REQUIREMENTS
The European Commission (EC) is running a flexible pilot under Horizon 2020
called the Open Research Data Pilot (ORDP). This pilot is part of the Open
Access to Scientific Publications and Research Data Program in H2020. The
ORDP aims to improve and maximise access to and re-use of research data
generated by Horizon 2020 projects, and takes into account the need to balance
openness and protection of scientific information, commercialisation and
Intellectual Property Rights (IPR), privacy concerns, and security, as well as
data management and preservation questions.
Projects participating in the ORDP are required to develop a Data Management
Plan (DMP). The DMP describes the types of data that will be generated or
gathered during the project, the standards that will be used, the ways in
which the data will be exploited and shared for verification or reuse, and how
the data will be preserved. In addition, beneficiaries must ensure their
research data are findable, accessible, interoperable and reusable (FAIR).
The PETER DMP (D3.4) is set up according to Article 29.3 of the Grant
Agreement, "Open Access to Research Data". Project participants must deposit
their data in a research data repository and take measures to make the data
available to third parties, as well as provide information, via the
repository, about tools and instruments needed for the validation of project
outcomes. Third parties should be able to access, reproduce, disseminate and
exploit the data in order, among other things, to validate the results
presented in scientific publications.
However, the obligation of participants to protect results, security
obligations, obligations to protect personal data and confidentiality
obligations prior to any dissemination still apply. As an exception, the
beneficiaries do not have to ensure open access to specific parts of their
research data if the achievement of the action's main objective, as described
in Annex I, would be jeopardised by making those specific parts of the
research data openly accessible.
Therefore, the hereby presented DMP contains the reasons for not giving access
to specific data based on the exception provision above. The PETER consortium
has decided what information would be made public according to aspects such as
potential conflicts against commercialization, IPR protection of the knowledge
generated (by patents or other forms of protection), and/or a risk for
obtaining the project objectives.
## 1.2 PETER PROJECT OBJECTIVES
The overall objective of the project is to provide Proof-of-Principle of the
implementation of plasmonic principles into EPR and thus to initiate a
fundamentally new technology direction, Plasmon-enhanced (PE) THz EPR,
enabling spectroscopy and microscopic imaging below the diffraction limit
close to or at the THz range. To fulfil this general objective, the following
particular objectives must be met:
* Design and fabrication of plasmonic structures (PS) suitable for EPR experiments, with magnetic plasmon resonances in the THz, providing magnetic field enhancement by 2 orders of magnitude, and so the EPR signal enhancement by 4 orders of magnitude localized in a sub-micrometer area.
* Application of PS in THz EPR experiments, evaluation and optimization of their performance with respect to their successful utilization in PE THz EPR spectroscopy and scanning microscopy. Proof-of-Principle applications of PE THz EPR spectroscopy. Increase of spin sensitivity by plasmonic effects with respect to THz EPR without antennas: ≥ 10⁴ times.
* Design, assembly and testing of a platform for PE THz EPR scanning microscopy based on the modified THz EPR spectrometer and a Scanning Probe Microscopy (SPM) unit (scanning stage and head carrying a cantilever tip with a PS at its apex) to be developed.
* Proof-of-Principle application of a platform for PE THz EPR scanning microscopy. Sensitivity: 10³ spins for 1 h; spatial resolution: ≤ 1 μm.
# 2\. DATA SUMMARY
In the PETER project, the data defined as follows will be made accessible
within the ORDP:
<table>
<tr>
<th>
Type of the Data
</th>
<th>
The underlying data needed to validate the results in scientific publications.
</th> </tr>
<tr>
<th>
Other data to be developed by the project: deliverable reports, meeting
minutes, demonstrator videos, pictures from set-ups approved for dissemination
by the consortium, technical manuals for future users, etc.
</th> </tr>
<tr>
<td>
Format of the data
</td>
<td>
Electronic. The PETER consortium will ensure that the format of the electronic
data is accessible according to the FAIR policy.
</td> </tr>
<tr>
<td>
Size of the data
</td>
<td>
The size of the data is not expected to exceed the file size occurring in the
course of the beneficiaries’ research on a daily basis. The repository used
sets a limit for a single datafile upload to 512 MB.
</td> </tr>
<tr>
<td>
Origin of the data
</td>
<td>
The majority of the underlying data will be direct output from the simulation
software and/or equipment used. Other types of data will be written or
prepared by the PETER researchers and support staff working on the project.
</td> </tr>
<tr>
<td>
Utility of the data
</td>
<td>
To other researchers, allowing them to validate and disseminate the PETER
project results, as well as exploit them in order to start their own
investigations.
</td> </tr> </table>
# 3\. FAIR DATA
For the underlying data, the PETER consortium will use ResearchGate repository
for ORDP purposes since this repository facilitates linking publications and
underlying data through persistent identifiers (DOIs) and data citations, as
well as data archiving and linking datasets to Projects to increase their
visibility. Moreover, most of the researchers involved in the PETER project
already have a profile on ResearchGate. Therefore, the FAIR data policy the
PETER project is following is that established by this repository.
For the other data, the consortium will provide access using the project
website ( _www.peter-instruments.eu_ ).
## 3.1 MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA
### 3.1.1. Discoverability: Metadata Provision
Metadata are created to describe the data and aid discovery. Beneficiaries
will complete all mandatory metadata required by the repository and metadata
recommended by the repository - Type of Data, DOI, Publication Date, Title,
Authors, Description, Terms for Access Rights, and a link to a ResearchGate
Project ( _https://www.researchgate.net/project/Plasmon-Enhanced-Terahertz-
Electron-Paramagnetic-Resonance_ ) as outlined in the repository instructions
_https://explore.researchgate.net/display/support/Data_ .
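The completeness of such a metadata record can be checked before upload. The sketch below is illustrative only: the field names are assumptions for this example, not a ResearchGate API.

```python
# Hedged sketch: verifying a dataset record carries the metadata fields
# listed above. Field names are illustrative, not a repository API.
REQUIRED_FIELDS = [
    "type_of_data", "doi", "publication_date", "title",
    "authors", "description", "access_rights", "project_link",
]

def missing_metadata(record):
    """Return the required metadata fields that are absent or empty."""
    return [field for field in REQUIRED_FIELDS if not record.get(field)]

record = {
    "type_of_data": "underlying data",
    "doi": "10.xxxx/example",  # placeholder DOI
    "publication_date": "2019-01-15",
    "title": "Diabolo antenna simulations",
    "authors": ["M. H."],
    "description": "Simulation output underlying a PETER publication.",
    "access_rights": "open access (CC BY 4.0)",
    "project_link": "https://www.researchgate.net/project/Plasmon-Enhanced-Terahertz-Electron-Paramagnetic-Resonance",
}

print(missing_metadata(record))  # prints [] -- the record is complete
```

A record with any field missing or empty would be flagged before upload.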
### 3.1.2. Identifiability of data
Beneficiaries will maintain the Digital Object Identifier (DOI) when the
publication/data has already been identified by a third party with this
number. Otherwise ResearchGate will provide each dataset with a DOI.
### 3.1.3. Naming convention
A naming convention for uploading data to the repository is not mandatory,
since the ResearchGate repository includes a description of the dataset
ensuring easy findability. However, for internal project purposes, the
following guidelines are recommended:
* Filename length: max. 40 characters.
* Characters: alphanumerical, plus dot (.), underscore (_), and hyphen (-).
* Filename structure: clear and descriptive. Optionally, the initials of the responsible person or a time note can be included. Examples:
Diabolo_simulations_MH.txt
Diabolo_for_midinfra_2018_01.txt
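The guidelines above can be expressed as a single pattern. This is a minimal sketch for internal checks, not part of the repository's requirements:

```python
import re

# Sketch of the recommended convention above: at most 40 characters,
# alphanumerical plus dot, underscore, and hyphen.
FILENAME_RE = re.compile(r"^[A-Za-z0-9._-]{1,40}$")

def follows_convention(name):
    """True if a filename satisfies the internal naming guidelines."""
    return FILENAME_RE.fullmatch(name) is not None

print(follows_convention("Diabolo_simulations_MH.txt"))       # True
print(follows_convention("Diabolo simulations (final).txt"))  # False: spaces and parentheses
```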
### 3.1.4. Approach towards search keywords
ResearchGate does not provide keywords for each dataset. Each author will make
sure to include relevant keywords in the datafile description. All datasets
generated by the project consortium will also be identified with the keyword
PETER.
## 3.2 MAKING DATA OPENLY ACCESSIBLE
The underlying data related to scientific publications, the public
deliverables and other datafiles included in Section 2 of this DMP will be
made openly accessible via ResearchGate and the project website.
The work-in-progress specifications of the PETER instrumentation, the
datasheets and internal evaluations of the PE EPR THz scanning microscopy
platform performance, laboratory records, working schemes and other data as
agreed upon between the project consortium members are excluded from the ORDP
and will not be made public in order to not jeopardise potential
commercialisation and IPR protection of knowledge generated.
The dissemination rules of all project results follow the provisions set in
the PETER Consortium Agreement, Article 8.4.
## 3.3 MAKING DATA INTEROPERABLE
Interoperability means allowing data exchange and re-use between researchers,
institutions, organisations, countries, etc., i.e. adhering to standards for
formats, remaining as compliant as possible with available (open) software
applications, and in particular facilitating re-combinations with datasets
from different origins. The PETER project ensures the interoperability of the
data by using standard electronic formats and the ResearchGate repository,
which has a standardised scheme for metadata.
## 3.4 INCREASE DATA RE-USE
Underlying data (with accompanying metadata) will be shared on ResearchGate no
later than the publication of the related paper. The maximum time allowed to
share underlying data is the maximum embargo period established by the EC: six
months.
Data will be accessible for re-use using the Creative Commons licenses
provided by the ResearchGate, without limitation during and after the end of
the implementation period of PETER project. After the end of the project, data
will remain in the repository, and any additional data related with the
project but generated after its end will be also uploaded to the repository at
the responsibility of the authors.
# 4\. ALLOCATION OF RESOURCES
PETER project will use ResearchGate to make data openly available so there
will be no infrastructure costs for the storage of the data. The personnel
costs incurred in connection with the management of the data will be eligible
as a part of the allocated resources within the grant.
# 5\. DATA SECURITY
ResearchGate stores the content across various secure services and also makes
copies onto separate back up servers to assure continuity and preservation in
the event of service disruption.
# 6\. ETHICAL ASPECTS
No ethical issues apply to any data generated and processed by the PETER
project.
# 7\. CONCLUSIONS
This DMP is intended to be used by PETER project partners as a reference for
data management (providing metadata, storing and archiving) within the
project, on all occasions the data are produced. The project partners have
contributed to and reviewed the DMP and are familiar with its use as part of
WP3 activities. The Leader of the Work Package 3 will also provide support to
the project partners on using ResearchGate and the project website as the data
storage and management tool. The coordinating institution will ensure the
Research Open Data policy by verifying periodically the information uploaded
to the repositories.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0618_LiRichFCC_711792.md
|
* Material characterization data (typically in proprietary format)
* Images, schematics, and graphs (in common data formats such as bitmap or jpg)
* Results summary presentations (PowerPoint files)
* Journal articles, patents, reports on project deliverables (PDF documents)
Data will thus originate from theoretical and experimental research and
development activities; certain types of data will be re-used frequently (e.g.
for comparison with modified simulation parameters, synthesis protocols,
etc.), and the total volume will be moderate, typically a few tens of GB.
Depending on the Work Package involved in data generation, data may not only
be useful for members inside the consortium but also for other academic
institutions or for industry that might want to do benchmarking of new models,
protocols or materials in comparison with existing battery technology.
2. **How will data be managed internally?**
All LiRichFCC partners provide appropriate storage facilities for research
data and provide controlled accesses as well as appropriate infrastructure.
They also support free access to research data considering ethical, legal,
economic, and contractual framework conditions.
3. **What data can be made public?**
Experimental data and synthesis protocols will not be made openly available by
default, as their results may be patentable. Data contained in journal
articles may be made openly available. Concerning deliverables, their
confidential or public character is already defined and available on the
European H2020 portal.
Some data may be made openly available with some delay due to possible patent
applications.
For those data that can be made public, it needs to be ensured that they are
findable, accessible, interoperable, and reusable (FAIR). To this end,
proprietary formats (see 3.1) will be converted into international standard
formats such as ASCII and stored as text files. That way, scientists and
development engineers from all over the world researching Li-ion batteries or
the synthesis and electrochemistry of new Li-rich cathode materials will
benefit from the LiRichFCC program.
4. **What processes will be implemented?**
The partners of the LiRichFCC consortium combine over a century of experience
in research data handling, and have developed efficient ways to archive and
share data. Nonetheless, research has become increasingly more
interdisciplinary, and amounts of data generated are on the rise. Therefore,
especially for collaborative work within individual work packages, the
partners follow internal codes and standards for making data findable.
**Parameter sets, methods and protocols** will be stored in text documents
that follow standardized naming conventions jointly defined by the LiRichFCC
partners to ensure maximum findability, accessibility and interoperability.
**Aggregated data** in the form of presentations, reports (deliverables),
publications, or patents follow standardized naming conventions. For example,
presentations and reports include the name of the project, the corresponding
work package, and the date. Deliverables can be identified by their
deliverable number, publications have unique DOIs, and patents are numbered
per international standards. Public aggregated data will by default be made
available on the project webpage ( _www.lirichfcc.eu_ ) as well as in a yet-
to-be-determined professional repository.
Aggregated data to be shared will always be in a format that can be used and
understood by a computer. They will typically be stored in PDF formats that
are either standardised or otherwise publicly known so that anyone can develop
new tools for working with the documents.
**Raw experimental or theoretical data** that has been identified as non-
restricted will be converted into a standard, non-proprietary format (ASCII
text file) and combined with necessary meta data in the form of a text
document and PDF file. Such data will be available on the project website as
well as from a professional repository.
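The conversion step described above can be sketched as follows. The file name, header fields, and example values are illustrative assumptions, not project-mandated formats:

```python
# Minimal sketch of the conversion described above: numeric instrument
# output is rewritten as a plain ASCII text file, with the necessary
# metadata as comment lines at the top.
def export_as_ascii(rows, out_path, metadata):
    """Write tabular data plus a metadata header as an ASCII text file."""
    with open(out_path, "w", encoding="ascii") as f:
        for key, value in metadata.items():
            f.write(f"# {key}: {value}\n")  # metadata as comment lines
        for row in rows:
            f.write("\t".join(str(v) for v in row) + "\n")

export_as_ascii(
    rows=[(3.5, 0.012), (3.6, 0.015)],  # e.g. voltage vs. capacity
    out_path="cycling_data.txt",
    metadata={"project": "LiRichFCC", "quantity": "voltage (V), capacity (Ah)"},
)
```

A tab-separated ASCII file with a commented header can be read by virtually any analysis tool, which is the point of the conversion.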
General considerations regarding the publication, depositing, or patenting of
research data are summarized in the figure below, reproduced from the H2020
Programme Guidelines to the Rules on Open Access to Scientific Publications
and Open Access to Research Data in Horizon 2020:
# OUTLOOK
LiRichFCC partners are currently reaching increased rates of data generation,
which make well-crafted policies and processes for data management a must.
This report will be distributed among the LiRichFCC partners to focus
attention on data management issues. At the General Assembly of LiRichFCC on
2017/04/11 in Grenoble, concrete policies and protocols for data management
will be decided face-to-face, and the Data Management Plan will subsequently
be updated.
0619_EnTIRE_741782.md
|
# Introduction
This deliverable, due in the sixth month of the project, provides the first
version of the EnTIRE data management plan (DMP). The document describes how
the collected and generated data will be handled during and after the
lifecycle of the project. The DMP will be updated where necessary during the
EnTIRE project. This document is based on the Template for the ERC Open
Research Data Management Plan (DMP) 1 .
# Background
The EnTIRE project aims at providing a mapping of the Research Ethics (RE) and
Research Integrity (RI) normative framework which applies to scientific
research conducted in the EU and beyond. The mapping in this project generates
various forms of data. The data includes but is not limited to RE+RI rules and
procedures, educational materials, best practices, and illustrative case
studies in Europe. Organizing and disseminating this data is the primary
objective of this project. During this project the scope of the data will be
decided upon by the stakeholders (WP2). The data will be compiled from
existing closed and open data sources. New data will also be produced, mostly
in relation to the interpretation of existing data. The size of the data is
expected to gradually increase and reach approximately 2,500 items after 4
years.
# Data summary
## Purpose of the data collection/generation
The overall aim of the data collection within the project is to map the RE+RI
normative framework and make it freely available and (re)usable. For this
purpose Work Package (WP) 2, 3, 4 and 5 are dedicated to data collection.
The purpose of this data collection is:
1. to explore RE+RI experiences and practices, defining the boundaries of data to be collected, and developing a mapping structure adapted to user needs (WP 2, stakeholder consultation).
2. to gather information on: relevant normative elements, including RE+RI rules and procedures, educational materials, illustrative cases, and relevant institutions across EU countries (WP 3‐4‐5).
## Relation to the objectives of the project
Organizing and disseminating the data is the primary objective of this
project. This will be achieved by developing a Wiki platform which will
collect all the data gathered during the project and present them in a
user‐friendly and needs‐oriented way. The data collected are mostly publicly
available but not easily findable or searchable. The goal of the project is to
retrieve those data from different sources and make them available on one
platform (purpose 2), owned by the community of users and tailored to its
specific needs (purpose 1).
## Types and formats of data generated/collected
For a detailed overview of the types and formats of data collected and
generated, please see Annex 1 (Detailed DMP). Where data will be made publicly
available, FAIR principles are followed as indicated.
The mapping in this project generates mainly textual data. Other forms of
produced data include software modifications that will be used to employ and
optimize the platform. This data will be made publicly and freely available on
current open source repositories according to the original licenses of the
software packages (e.g. Semantic MediaWiki). For each WP a short description
of the types and formats of data collected is provided here below:
1. WP2: Stakeholder consultation
The data collected in WP 2 consists of focus group recordings, transcripts,
and basic data from a survey (using EUSurvey tool) of focus groups
participants’ basic background characteristics.
2. WP3: Guidelines and regulations on RE&RI in the European Union
The data collected in Work package 3 will be composed of text files which are
part of the public domain. These data will comprise guidelines, standards,
laws, and codes in European countries.
3. WP4: Resources for RE+RI
The data collected in Work package 4 will be composed of publicly available
data on 1) training opportunities for Research Ethics and Research Integrity
(RE+RI), 2) existing RE+RI bodies in Europe and 3) RE+RI experts.
4. WP5: Cases, casuistry and scenarios
The data collected in Work package 5 will be composed of RE+RI case references
(including web URLs, DOIs, and standard academic citations) and case tags,
which will result from searches in different potential sources, e.g. academic
literature, reports of RE+RI committees, professional regulators, grey
literature, media outlets, and the blogosphere.
## Origin of the data
Most of the data will be gathered from existing sources (closed and open
ones). New data will also be produced (e.g. when consulting stakeholders and
creating casuistry for educational purposes).
A general description of the origin of the data can be found here below:
1. WP2: Stakeholder consultation
* Face-to-face focus groups and an online survey with 16 people in each of the following countries: Spain, the Netherlands and Croatia;
* Online focus groups and an online survey with approximately 350 people from other EU countries.
2. WP3: Guidelines and regulations on RE &RI in the European Union
* Google, Google Scholar and PubMed;
* Relevant RE+RI organizations across Europe.
3. WP4: Resources for RE+RI
* Scientific articles, reviews, books, examples and training materials, available on MEDLINE and SCOPUS databases (current output from pilot search strategies includes 22,426 and 16,194 publications, respectively);
* Specialized sites, like ORI (Office for Research Integrity) website ( _https://ori.hhs.gov/_ ) and RRI Tools ( _https://www.rri‐tools.eu/_ );
* Website from universities; websites of EU projects, identified in EU project website _http://cordis.europa.eu/_ ;
* Data from networks of RE+RI bodies in Europe, such as ENRIO – European Network of Research Integrity Offices ( _http://www.enrio.eu/_ ) and EURECNET – network of national Research Ethics Committees (RECs) associations ( _http://www.eurecnet.org/index.html_ );
* Webpages of EU projects addressing RE+RI (such as ENERI ( _http://eneri.eu/_ ),
DEFORM ( _https://www.deform‐h2020.eu/_ ), PRINTEGER
( _https://printeger.eu/_ );
* Public information on RE+RI experts from the above sources.
4. WP5: Cases, casuistry and scenarios
* Academic Literature;
* Reports by RE+RI Committees and Regulatory Bodies;
* Grey Literature;
* Media Outlets;
* The Blogosphere;
* Online Repositories;
* WP2’s Focus Group Sessions.
## Expected size of the data
The data uploaded on the final platform is expected to gradually increase and
reach approximately 2,500 unique persistent items after 4 years (WP 3–5 will
each produce approximately 500 unique content items). The community is
expected to produce a thousand items. As multiple formats will be allowed
(e.g. textual data, images, video, sound), the resulting database is expected
to be around 2.5 Gigabytes in size.
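As a quick sanity check of these figures (assuming three content-producing work packages at roughly 500 items each):

```python
# Consistency check of the estimate above: three WPs at ~500 items each,
# plus ~1,000 community items, in a ~2.5 GB database.
wp_items = 3 * 500
total_items = wp_items + 1000
print(total_items)                # prints 2500
print(2.5e9 / total_items / 1e6)  # prints 1.0 (average MB per item)
```

An average of about 1 MB per item is plausible for mixed text, image, and short media content.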
## Outline the data utility
The collected data will be relevant to the stakeholders (the RE+RI
community). This means that the data collected will be relevant both for
researchers, who will find support for good research practices in the content
available on the EnTIRE platform, and for the general public, who will be able
to use the platform to find easily accessible and user-friendly information
related to research subjects. Moreover, the software modification data will be
available for future knowledge-management EU-funded projects.
# FAIR data
The project will use FAIR data principles 2 where possible for public data.
An analysis was performed on all data generated. A detailed analysis can be
found in Annex 1.
## Making data findable, including provisions for metadata
### Discoverability of data (metadata provision)
Data can be found in multiple ways. Common search engines can be employed to
search through the data, but the dataset will also be made available on the
platform for offline analysis. A description of metadata and an instruction
for (re)use will be made available on the platform.
### Identifiability of data
Two main provisions will be used. The URL naming on the platform is persistent
for content. In addition, Digital Object Identifiers will be employed for
reviewed content to make data entries persistent, easily retrievable, and
citable by scientists.
### Naming conventions
The naming convention used will be based on the analysis of terminology in
RE+RI, as will result from the work from WP3‐WP5.
### Approach towards search keyword
Three main approaches will be employed:
1. Keywords will be included in the page of the online platform to increase searchability by common search engines. These keywords will be based on the analysis of terminology in RE+RI.
2. Users on the platform will be given the opportunity to add tags to content. This folksonomy approach is more flexible and dynamic and ensures that over the longer term, keywords match what people are looking for themselves.
3. The search of users on the platform will be analyzed to investigate what users are looking for and if they found it. This analysis can be used to tailor the keywords to match what users are searching for on the platform.
### Approach for clear versioning
The MediaWiki software has a versioning system which tracks every modification
made on the platform. This overview of modifications (an identification number
together with a date) will be made available online. This ensures that, when
cited, the specific version of a document can be traced back.
### Metadata creation
For the interpretation of existing data, metadata and vocabularies of metadata
will be created. These will conform to existing vocabularies where possible.
We will investigate if standards are present in the RE+RI field. The system of
creating the metadata and vocabularies is currently under development. The
approach taken will include a systematic search for current vocabularies and
metadata in science and RE+RI which would be appropriate to annotate the data.
Their suitability will be reviewed and these will be employed, or adapted and
extended where necessary. In the future, folksonomy will also be employed by
having the community tag content, to improve flexible re‐use, searchability
and analysis of data.
## Making data openly accessible
In principle, all data produced in the project will be made openly available
on the platform (this includes raw data, metadata, research protocols, and
research outcomes). All WP leads are responsible for uploading the data on the
platform.
All data can be accessed and interpreted on the platform itself, or can be
downloaded from the platform and analyzed with open source or freely available
software. The platform itself will carry a written instruction, including an
explanation of how to work with the data and how to easily upload new data.
For open source software, other avenues will be used (e.g. Github 3 ). For a
detailed overview of the publication of the data, see Annex 1. In general, the
platform will be the primary avenue for data publication.
### Ethical concerns related to publication
For sensitive data which will not be made publicly available, researchers can
contact the relevant Work Package lead. Contact details and instructions will
be present on the platform.
In order to avoid the risk of participants’ personal knowledge of deviant
cases being exposed to others the data collected by WP2 (stakeholder
consultation) and WP5 (cases casuistry and scenarios) will not be fully
published. Special measures will be adopted to ensure the protection of
privacy and confidentiality:
1. Cases and quotes from the focus groups will be anonymized and published only after written consent;
2. Full transcripts of the face‐to‐face and online focus groups interview will not be published and will only be accessible for authorized study personnel;
3. Audio recordings of face‐to‐face focus groups will be destroyed after they have been transcribed and quality checks have been conducted. A data erasure software will be used in order to assure permanent erasure.
4. Any sensitive data collected will be stored electronically in ‘Dark Storage’, a maximum security data storage facility at VUmc.
5. Cases collected from RECs and regulatory bodies will be made publicly available via the online platform only after written consent.
Moreover, in order to protect the intellectual property of the authors, grey
literature, such as government reports, will be made publicly available by WP5
only after written consent from the author.
Foreseeable privacy and related and ethical concerns are also addressed in
Annex 1.
### Methods or software tools are needed to access the data
The data can be accessed using a conventional internet browser, an open-source
software package (e.g. Python 4 ), or a closed-source one (Matlab 5 ,
Mathematica 6 ). Data will be available on the platform, but it will also be
possible to export documents and data for online and offline use. All members
of the community working on the platform will commit data under the latest
Creative Commons License (4.0), ensuring an open data and open access
approach. This adheres to the license requirements of Wikimedia 7 .
Documentation which includes the instructions to handle the data will be
available on the platform.
This adheres to the license requirements of Wikimedia 7 . Documentation
which includes the instructions to handle the data will be available on the
platform.
## Making data interoperable
In general, all data gathered and produced in the project will be viewable and
interoperable. In order to facilitate interoperability, the Semantic MediaWiki
system will be used. Data, metadata, and vocabularies will be generated (see
above, section 2.1.4). The system for creating the metadata and vocabularies
is currently under development.
The identification of relevant topics and concepts relating to textual data
will be encoded in metadata. The topics and concepts which will be used for
the metadata structure are not common. They will be developed through
collaboration between experts and the community of users. The resulting
metadata vocabularies will be available on the platform.
For a detailed overview, see Annex 1.
## Increase data re‐use (through clarifying licenses)
All the data uploaded on the platform will be made available through the
Creative Commons License structure where applicable. In cases of copyright,
data will be linked to instead and deduced work will be made available under
the Creative Commons License where possible.
The data will be available from early on in the project. It is estimated that
the first data will be available at the 1-year time point (May 2018).
Except for some exceptions where privacy concerns outweigh data availability
(for the specifics, see section 2.2.1 and Annex 1), all data will be made
available for re‐use. No commercial re‐use of the data will be allowed prior
to written consent from the consortium lead (WP1).
### Data quality assurance processes
A continuous evaluation of the quality of the data uploaded on the platform is
part of WP 6 (Platform development and maintenance). WP 7 (Community
engagement communication and dissemination) will involve curators who will
review and invite to edit sections of data on the platform.
# Allocation of resources
In this project the data is FAIR by design. This will in the long term reduce
the upkeep costs of the platform. No additional costs are associated with FAIR
data management. As the availability of the data can be expected to be
valuable to many stakeholders, a plan will be created to ensure long term
preservation and distribute the long‐term upkeep costs amongst the
stakeholders (WP 7).
Data management is initially the primary responsibility of project
co‐ordination (WP 1).
During the project, this responsibility is gradually distributed to the
community (WP 7).
# Data security
## Data recovery, secure storage and transfer of sensitive data
The ICT partner, gesinn.it (no. 2, GI), an expert in automating information
and knowledge management based in Germany, will be responsible for data
security on the platform. Standard measures such as data backups and good
computer-security practices will be applied. To guarantee data security, user
authentication (SHA-512) and SSL (HTTPS) will be used.
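One plausible way to use SHA-512 for user authentication is salted password hashing, sketched below; the platform's actual mechanism is not specified in this document, so every detail here is an illustrative assumption:

```python
import hashlib
import hmac
import os

# Illustrative sketch only: salted SHA-512 password hashing with a
# random per-user salt and constant-time verification.
def hash_password(password, salt=None):
    """Return (salt, hex digest) for a salted SHA-512 password hash."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha512(salt + password.encode("utf-8")).hexdigest()
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

The random salt ensures identical passwords produce different stored digests, and the constant-time comparison avoids leaking information through timing.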
In case of a data breach, affected users of the community will be contacted,
the server will be taken offline, and the leaks in the platform will be fixed
within 48 hours. Any breach of the integrity of the IT system will be reported
to the national and/or international governing bodies.
Long-term preservation of data will be ensured by storing the platform and its
data in certified open source repositories. For this purpose, a Europe-based
provider with two-stage authentication, SSL, and AES-256-bit encryption,
conforming to the latest IT standards, will be selected.
Transfer of sensitive data will be made by establishing a secure connection
(SSL). Any sensitive data which might result from the stakeholder consultation
(WP2) will be stored in ‘Dark Storage’, a maximum security data storage
facility at VUmc (NL).
# Ethical aspects
Most of the data collected within the project come from the public domain.
However, the project involves research with human participants (questionnaire,
focus groups). Participants will not be exposed to the risk of physical
injury, financial, social or legal harm, and potential psychological risks
will not exceed the daily life standard.
Privacy and confidentiality of research participants and of the members of the
community on the platform will be protected. Before publishing information on
the EnTIRE platform, confidentiality and privacy issues will always be
addressed. If necessary, informed consent will be obtained (as specified in
section 2.2.1 and Annex 1). We are not aware of and do not expect any
potentially critical ethical implications of the research results such as the
protection of dignity, autonomy, integrity and privacy of persons,
biodiversity, protection of the environment, sustainability or animal welfare.
The proposed research does NOT include research activity aimed at human
cloning, intended to modify the genetic heritage of human beings, to create
human embryos, or involving the use of human embryos or embryonic stem cells.
This research proposal does NOT include any security sensitive issues.
Data will primarily be stored on Europe-based servers. No primary results will
be exported to the US without their primary location being on EU soil.
Ethical standards and guidelines of Horizon2020 will be rigorously applied,
regardless of the country in which the research is carried out.
The Ethics deliverables have been submitted on September 30th and attached to
this document (Annex 2).
**APPENDIX 1**
**Detailed data management plan**
**APPENDIX 2**
**Ethics deliverables**
This project has received funding from the European Union H2020 research and
innovation programme under grant agreement no. 741782.
Mapping Normative Frameworks of
EThics and Integrity of REsearch
**Ethics Requirements: H –**
**Requirement N**
**o**
**. 1**
WP 8 H – Requirement No 1.
<table>
<tr>
<td>
**Project details**
</td>
<td>
</td> </tr>
<tr>
<td>
**Project:**
</td>
<td>
Mapping Normative Frameworks of EThics and Integrity of REsearch
</td> </tr>
<tr>
<td>
**Project acronym:**
</td>
<td>
EnTIRE
</td> </tr>
<tr>
<td>
**Project start date:**
</td>
<td>
01.05.2017
</td> </tr>
<tr>
<td>
**Duration:**
</td>
<td>
48 months
</td> </tr>
<tr>
<td>
**Project number:**
</td>
<td>
741782
</td> </tr>
<tr>
<td>
**Project**
**Coordinator:**
</td>
<td>
Vrije Universiteit Medisch Centrum (VUmc) Amsterdam
</td> </tr> </table>
<table>
<tr>
<td>
**Deliverable details**
</td>
<td>
</td> </tr>
<tr>
<td>
**Work Package:**
</td>
<td>
WP 8 Ethics Requirements
</td> </tr>
<tr>
<td>
**Deliverable description:**
</td>
<td>
H – Requirement No 1.
</td> </tr>
<tr>
<td>
**Work package leader:**
</td>
<td>
Vrije Universiteit Medisch Centrum (VUmc) Amsterdam
</td> </tr>
<tr>
<td>
**Responsible for the deliverable:**
</td>
<td>
Natalie Evans
</td> </tr>
<tr>
<td>
**Submission date:**
</td>
<td>
30.09.2017
</td> </tr> </table>
**Description of deliverable**
EnTIRE conducts a mapping of the Research Ethics and Research Integrity
(RE+RI) normative framework which applies to scientific research conducted in
the EU and beyond. For the purpose of this project, it is necessary to gather
data on and with humans, including: experiences and attitudes regarding RE+RI;
opinions on the online platform; case studies regarding research
misbehaviours; and best practice examples.
Primary data will be collected through questionnaires and focus groups during
the stakeholder consultation (WP2) and secondary (publicly available) data
will be collected by the work package gathering cases, casuistry and scenarios
(WP5).
This deliverable provides:
1. Detailed information on the informed consent procedures that will be implemented for the participation of humans in the proposed activities (e.g. stakeholder consultation).
2. Templates of the informed consent forms and information sheet
3. Copies of the ethics approval or waiver forms for the stakeholder consultation from the Netherlands, Croatia and Spain.
4. Details about the approach for publishing publicly available cases
**Stakeholder consultation (Work package 1)**
EnTIRE’s stakeholder consultation will identify the RE+RI issues of concern to
the stakeholders, practical experience with regulations and guidelines and
other professional, institutional and national norms, resources, and existing
best practices. The consultation will also be used to generate, and to reflect
on, instructive cases from local practice.
The stakeholder consultation has been described in detail in Deliverable 2.1
‐“Protocol for the phased multi‐country stakeholder consultation”. The
consultation consists of face‐to‐face and online focus groups. Further details
on the informed consent procedure, privacy and confidentiality, data
management and ethics approval are given below.
**Informed consent procedure**
**Participant information letter**
#### _Face‐to‐face focus groups_
Stakeholders interested in participating in the face‐to‐face focus groups will
be sent the below information sheet. Information in red must be adapted
depending on the location of the focus group.
**Invitation to participate in focus groups for the stakeholder consultation
‘Mapping the Normative Framework of Ethics and Integrity of Research
(EnTIRE)’**

Dear Sir/Madam,
We at the EnTIRE project aim to create an online website that makes
information about research ethics and research integrity easily accessible to
the research community. This European Commission funded project seeks to
include all stakeholders in a participatory way. As such, we are conducting an
in‐depth stakeholder consultation amongst people involved in research. We aim
to consult: researchers, journal editors, national and local ethics/integrity
committees, policy makers, representatives from industry (including
pharmaceutical companies), and representatives from research funding
organisations.
We would like to invite you to participate in these focus groups. By agreeing,
you commit to participating in two separate discussions approximately one week
apart in (insert city). They will be led by researchers from VU University
Medical Center (in collaboration with The University of Split Medical
School/European University of Madrid). As this is a Europe‐wide consultation,
the language of the focus groups will be English. Furthermore, one third of
participants from the Dutch/Spanish/Croatian focus groups will be invited to
participate in an additional focus group in Amsterdam that will bring together
participants from parallel studies in (insert the two other countries) to
discuss similarities and differences between countries.
All focus group discussions will take place in Autumn 2017. This letter
contains details about the project and the stakeholder consultation so you can
make an informed decision whether you would like to participate in the focus
groups or not.
1. **Aim of the focus groups**
In the first focus group, we will discuss your experiences of research ethics
and research integrity issues. This will allow us to develop an understanding
of any difficulties you might encounter as well as ideas you might have on how
you could be better supported in the future, particularly in regard to
informational needs. For example, if researchers say they do not know data
management guidelines, or the procedure for raising concerns about integrity
of research practices, or, alternatively, have suggestions for improvement, we
can identify those issues and suggestions as relevant for gathering
information and putting this on the website.
The second focus group, taking place approximately two weeks later, will
involve a presentation of the pilot version of the website and a discussion
about its content and presentation. This will further help us understand if we
need to collect any information additional to the preliminary data collection
categories of: guidelines, codes, legislation, and standards; committees,
training courses and expert advice and contacts; cases, casuistry and
scenarios. Participants will also help us understand if we are presenting
information in an optimal way or how this might be improved.
The third, potential focus group will bring together participants from the
Netherlands, Spain, and Croatia to discuss similarities and differences
between countries. This will provide us with an understanding of the diversity
of informational needs across different EU countries, and with suggestions on
how to deal with them in presenting data on the website.
2. **What is involved?**
If you would like to participate, we will invite you to two focus group
sessions at the VUmc, Amsterdam. The preliminary dates are:
Round 1. date of first focus group
Round 2. date of second focus group
Each of these focus groups will take about 2 hours.
There is also the possibility that you will be invited to a day‐long workshop
held at the VUmc,
Amsterdam, that will bring together one third of the participants from the
focus groups in the Netherlands, Spain, and Croatia.
Round 3. date of third focus group
If you cannot make these dates but would like to join the focus groups, we
would still like to hear from you as we might conduct individual interviews
with stakeholder groups under‐represented in the focus group discussions.
Before attending the focus group, we will ask you to complete a short
questionnaire (sent via email and taking about 15 minutes) about your
background: gender, age, role (depending on the stakeholder group – e.g.
academics will be asked their area of expertise (biomedical, social sciences,
natural sciences, applied sciences) and position (PhD student, Research
Associate, Assistant Professor, Professor, Head of Department)), years of
experience, nationality and country of residence. The questionnaire will also
include a couple of open questions about what you know about research ethics
and research integrity and what support is currently available to you.
3. **Benefits and risks of participating**
The direct benefits of participating in the research are that participants can
share experiences and contribute to the development of the platform, thus
being able to actively bring in and broaden their knowledge and experience;
mostly, however, the benefits are indirect, they will be accrued by the
research community as a whole which will benefit from access to a website that
makes information about research ethics and research integrity easily
accessible. The website will also potentially foster the uptake of ethical
standards and responsible conduct of research in Europe, and ultimately
support research excellence and strengthen society’s confidence in research
and its findings. One risk associated with the focus group is other people
knowing the details about any research misconduct you might describe. Efforts
to minimize this risk include asking all participants to return
confidentiality agreements, and to avoid the use of identifying
characteristics. In addition, the time commitment required for two (and
potentially three) focus groups discussions may prove inconvenient.
4. **If you do not want to join or want to stop the group conversation**
Participation is voluntary. If you do not want to participate, you do not have
to do anything and you are not required to let us know. If you decide to
participate, you must sign the attached informed consent form and return it
via email prior to the focus group. If you have agreed to participate but
change your mind, you can of course withdraw at any point (including during
the focus group discussions); we would kindly ask you to inform us if this is
the case.
5. **Use of data and dissemination of research findings to participants**
The focus groups will be recorded. These recordings will be destroyed after
they have been transcribed. Personal data, such as informed consent forms and
answers to the questionnaire, will be stored separately from the discussion
transcripts. Personal data will be destroyed within 6 months of the end of the
focus group discussions. The transcripts of the focus groups will be kept for
up to 15 years after the end of the study (in accordance with EU and
Dutch/Spanish/Croatian data protection laws). All data are anonymised for
analysis. The findings from the stakeholder consultation will also be
published and made publicly available on the Project’s page on the European
Commission research information portal:
_http://cordis.europa.eu/project/rcn/210253_en.html_
6. **Financial aspects**
There is no fee paid for participation, however all travel expenses will be
reimbursed. If you are invited to the third, international focus group, your
travel and accommodation will be reimbursed according to local university
rules and you will receive 70 euros per diem to cover your expenses in the
country.
7. **Do you have any questions?**
Please do not hesitate to contact the consultation project coordinator, Dr.
Natalie Evans _[email protected]_, if you have any questions.
#### _Online focus groups_
Online focus group participants will receive a similar information sheet to
the face‐to‐face focus group participants, but tailored to the online
procedure:
**Invitation to participate in focus groups for the stakeholder consultation
‘Mapping the Normative Framework of Ethics and Integrity of Research
(EnTIRE)’**

Dear Sir/Madam,
We at the EnTIRE project aim to create an online website that makes
information about research ethics and research integrity easily accessible to
the research community. This European Commission funded project seeks to
include all stakeholders in a participatory way. As such, we are conducting an
in‐depth stakeholder consultation amongst people involved in research. We aim
to consult: researchers, journal editors, national and local ethics/integrity
committees, policy makers, representatives from industry (including
pharmaceutical companies), and representatives from research funding
organisations.
We would like to invite you to participate in this stakeholder consultation
via participation in online focus groups.
By agreeing, you commit to participating in two online discussions, one
focusing on your perspectives and experiences of research ethics and research
integrity issues, the other focusing on your opinions about the proposed
website. Each will take place over a period of two weeks, with a period of two
weeks in between, with a new question posted every two days. You will receive
an email each time a new question is posted.
You will interact with other participants anonymously and discussions will be
facilitated and moderated by researchers from VU University Medical Center. As
this is a Europe‐wide
consultation, the language of the focus groups will be English.
All focus group discussions will take place Jan‐March 2018. This letter
contains details about the project and the stakeholder consultation so you can
make an informed decision whether you would like to participate in the online
discussions or not.
1. **Aim of the focus groups**
In the first focus group, we will discuss your experiences of research ethics
and research integrity issues. This will allow us to develop an understanding
of any difficulties you might encounter as well as ideas you might have on how
you could be better supported in the future, particularly in regard to
informational needs. For example, if researchers say they do not know data
management guidelines, or the procedure for raising concerns about integrity
of research practices, or, alternatively, have suggestions for improvement, we
can identify those issues and suggestions as relevant for gathering
information and putting this on the website.
The second focus group, taking place approximately two weeks later, will begin
with a short video about our proposed website, followed by a discussion about
its content and presentation. This will further help us understand if we need
to collect any information additional to the preliminary data collection
categories of: guidelines, codes, legislation, and standards; committees,
training courses and expert advice and contacts; cases, casuistry and
scenarios. Participants will also help us understand if we are presenting
information in an optimal way or how this might be improved.
2. **What is involved?**
If you would like to participate, we will invite you to two online discussions
taking place over a two week period (with two weeks in between).
Round 1. date of first focus group
Round 2. date of second focus group
Before participating, we will ask you to complete a short questionnaire (sent
via email and taking about 15 minutes) about your background: gender, age,
role (depending on the stakeholder group – e.g. academics will be asked their
area of expertise (biomedical, social sciences, natural sciences, applied
sciences) and position (PhD student, Research Associate, Assistant Professor,
Professor, Head of Department)), years of experience, nationality and country
of residence. The questionnaire will also include a couple of open questions
about what you know about research ethics and research integrity and what
support is currently available to you.
3. **Benefits and risks of participating**
The direct benefits of participating in the research are that participants can
share experiences and contribute to the development of the platform, thus
being able to actively bring in and broaden their knowledge and experience;
mostly, however, the benefits are indirect, they will be accrued by the
research community as a whole which will benefit from access to a website that
makes information about research ethics and research integrity easily
accessible. The website will also potentially foster the uptake of ethical
standards and responsible conduct of research in Europe, and ultimately
support research excellence and strengthen society’s confidence in research
and its findings. One risk associated with the focus group is other people
knowing the details about any research misconduct you might describe. Efforts
to minimize this risk include: anonymous interaction within the online
discussion; asking all participants to return confidentiality agreements; and,
asking participants to avoid using details that might identify themselves or
others. In addition, the time commitment required to respond to online
comments may prove inconvenient.
4. **If you do not want to join or want to stop the group conversation**
Participation is voluntary. If you do not want to participate, you do not have
to do anything and you are not required to let us know. If you decide to
participate, you must sign the attached informed consent form and return it
via email prior to the focus group. If you have agreed to participate but
change your mind, you can of course withdraw at any point (including during
the focus group discussions); we would kindly ask you to inform us if this is
the case.
5. **Use of data and dissemination of research findings to participants**
Data from the online discussion threads will be collected by [name of online
focus group provider], who have been selected based on their compliance with
EU data protection acts and their ability to guarantee that participants can
interact anonymously. Personal data, such as informed consent forms and
answers to the questionnaire, will be stored separately from the discussion
transcripts. Personal data will be destroyed within 6 months of the end of the
focus group discussions. The discussion transcripts will be kept for up to 15
years after the end of the study (in accordance with EU and Dutch data
protection laws). All data are anonymised for analysis. The findings from the
stakeholder consultation will also be published and made publicly available
on the Project’s page on the European Commission research information portal:
_http://cordis.europa.eu/project/rcn/210253_en.html_
6. **Financial aspects**
There is no fee paid for participation, however all travel expenses will be
reimbursed. If you are invited to the third, international focus group, your
travel and accommodation will be reimbursed according to local university
rules and you will receive 70 euros per diem to cover your expenses in the
country.
7. **Do you have any questions?**
Please do not hesitate to contact the consultation project coordinator, Dr.
Natalie Evans _[email protected]_, if you have any questions.
**Informed consent and confidentiality agreement**
On agreeing to participate, stakeholders from both the face‐to‐face and online
focus groups will be sent a short online questionnaire (for details see
Deliverable 2.1) and an informed consent and confidentiality agreement (see
below) via email.
**Informed consent and confidentiality agreement**
Please read the statements below in connection with the research **‘** Mapping
the Normative Framework of Ethics and Integrity of Research (EnTIRE):
stakeholder consultation’ and sign if you are in agreement with all of the
statements.
‐ I have read the information sheet.
‐ I was given the opportunity to ask any questions and any questions I did
have were sufficiently answered.
‐ I had enough time to decide if I would join.
‐ I know that participation is voluntary. I also know that I can decide at any
time that I would like to withdraw my participation and quit the study. I do
not have to give any explanations.
‐ I give permission to make the sound recording.
‐ I give permission for collecting and using my data in the way and for the
purposes stated in the information letter.
‐ I want to participate in this research.
‐ **I agree to maintain the confidentiality of the information discussed by
all participants and researchers during the focus group session.**
Name:
Signature: Date: __ / __ / __
The questionnaire and informed consent and confidentiality agreement need to
be completed before participation.
**Privacy and confidentiality**
One risk associated with the focus group discussions is other people knowing
the details about any research misconduct described. Efforts to minimize this
risk include: anonymous interaction within the online discussion; asking all
participants to return confidentiality agreements; and, asking participants to
avoid using details that might identify themselves or others. Participants
will also be reminded to respect privacy and confidentiality at the beginning
of each and every focus group (both face‐to‐face and online).
**Data management**
The burden of responsibility for data protection lies with the Dutch partner
(VUmc).
_Face‐to‐face focus groups_
Audio recordings of face‐to‐face focus groups will be destroyed after they
have been transcribed and quality checks have been conducted, and only the
transcripts will be archived.
_Online focus groups_
Data from the online discussions will be collected through third party
software. A suitable party will be chosen in the next months, and will be
selected based on their compliance with EU data protection acts and their
ability to guarantee anonymity. A data processing agreement with this party
will be constructed and signed.
Face‐to‐face and online focus group transcripts will have any identifying
information removed as much as possible, and will only be accessible to
authorized study personnel. Any sensitive data collected will be stored
electronically in ‘Dark Storage’, a maximum security data storage facility at
VUmc.
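The de-identification of transcripts described above can be illustrated with a small sketch. This is a hypothetical example, not the project's actual procedure (which is editorial): the names, the `pseudonymise` helper and the `P1`, `P2` coding scheme are all assumptions made for illustration.

```python
import re

def pseudonymise(transcript, participants):
    """Replace each known participant name with a stable code (P1, P2, ...)."""
    out = transcript
    for i, name in enumerate(participants, start=1):
        # \b word boundaries so e.g. "Ann" does not match inside "Annie"
        out = re.sub(rf"\b{re.escape(name)}\b", f"P{i}", out)
    return out

text = "Ann said the committee ignored Ben's complaint."
print(pseudonymise(text, ["Ann", "Ben"]))
# -> P1 said the committee ignored P2's complaint.
```

In practice such automatic replacement only catches names known in advance, which is why a manual check by authorized study personnel remains part of the workflow described above.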
**Ethics approval**
Ethics approval or exemption documents have been obtained in the Netherlands
(for the Dutch face‐to‐face and the multi‐country online focus groups), Spain
(for the Spanish face‐to‐face focus groups) and Croatia (for the Croatian
face‐to‐face focus groups).
**Case, casuistry and scenarios (Work package 5)**
The data collection work package ‘Cases, casuistry and scenarios’ will collect
publicly available information about published RE+RI cases. Some of these
may contain identifying data; however, such data will be removed before being
published on the online platform.
Mapping Normative Frameworks of
EThics and Integrity of REsearch
**Ethics Requirements: POPD – Requirement No. 2**
WP 8 POPD – Requirement No. 2.
<table>
<tr>
<td>
**Project details**
</td>
<td>
</td> </tr>
<tr>
<td>
**Project:**
</td>
<td>
Mapping Normative Frameworks of EThics and Integrity of REsearch
</td> </tr>
<tr>
<td>
**Project acronym:**
</td>
<td>
EnTIRE
</td> </tr>
<tr>
<td>
**Project start date:**
</td>
<td>
01.05.2017
</td> </tr>
<tr>
<td>
**Duration:**
</td>
<td>
48 months
</td> </tr>
<tr>
<td>
**Project number:**
</td>
<td>
741782
</td> </tr>
<tr>
<td>
**Project**
**Coordinator:**
</td>
<td>
Vrije Universiteit Medisch Centrum (VUmc) Amsterdam
</td> </tr> </table>
<table>
<tr>
<td>
**Deliverable details**
</td>
<td>
</td> </tr>
<tr>
<td>
**Work Package:**
</td>
<td>
WP 8 Ethics Requirements
</td> </tr>
<tr>
<td>
**Deliverable description:**
</td>
<td>
POPD – Requirement No. 2.
</td> </tr>
<tr>
<td>
**Work package leader:**
</td>
<td>
Vrije Universiteit Medisch Centrum (VUmc) Amsterdam
</td> </tr>
<tr>
<td>
**Responsible for the deliverable:**
</td>
<td>
Natalie Evans
</td> </tr>
<tr>
<td>
**Submission date:**
</td>
<td>
30.09.2017
</td> </tr> </table>
0624_READ_674943.md | Horizon 2020 | https://phaidra.univie.ac.at/o:1140797
# Executive Summary
This paper provides an updated version of the Data Management Plan in the READ
project. It is based on the DMP Online questionnaire provided by the Digital
Curation Centre (DCC) and funded by JISC: _https://dmponline.dcc.ac.uk/_ .
We have included the original questions in this paper (indicated in italic).
The management of research data in the READ project is strongly based on the
following rules:
* Apply a homogeneous format across the whole project for any kind of data
* Use a well-known external site for publishing research data (ZENODO)
* Encourage data providers to make their data available via a Creative Commons license
* Raise awareness among researchers, humanities scholars, but also archives/libraries for the importance of making research data available to the public
The READ platform Transkribus has implemented the above mentioned principles
from the very beginning. In Y2 we followed this path and were especially able
to provide more research data via ZENODO.
A new aspect relevant to the DMP appeared in Y2 with the enforcement of the
General Data Protection Regulation (GDPR, Regulation (EU) 2016/679) in May
2018. Specific consequences will be covered in Y3 of the project.
# Data summary
Provide a summary of the data addressing the following issues:
* State the purpose of the data collection/generation * Explain the relation to the objectives of the project * Specify the types and formats of data generated/collected * Specify if existing data is being re-used (if any) * Specify the origin of the data * State the expected size of the data (if known) * Outline the data utility: to whom will it be useful
The main purpose of all data collected in the READ project is to support
research in Pattern Recognition, Layout Analysis, Natural Language Processing
and Digital Humanities. In order to be useful for research the collected data
must be "reference" data.
Reference data in the context of the READ project consist typically of a page
image from a historical document and of annotated data such as text or
structural features from this page image.
An example: in order to develop and test Handwritten Text Recognition
algorithms we need three pieces of data: first, a (digital) page image;
second, the correct text on this page image, more specifically of a line;
and third, an indication (the coordinates of the line region) of where
exactly the text can be found on the page image. The format used in the project is
able to carry this information. The same is true for most other research areas
supported by the READ project, such as Layout Analysis, Image pre-processing
or Document Understanding.
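The triple described above (page image, line coordinates, line transcription) can be sketched with a minimal PAGE-style fragment and a few lines of Python. The element names (`PcGts`, `Page`, `TextLine`, `Coords`, `TextEquiv/Unicode`) follow the PAGE schema, but the snippet is deliberately simplified: real PAGE files carry an XML namespace and further required metadata, both omitted here.

```python
import xml.etree.ElementTree as ET

# Simplified, hand-written PAGE-style fragment: one page image reference and
# one text line with its polygon coordinates and its transcription.
# (Namespace and mandatory metadata of real PAGE files are omitted.)
PAGE_SNIPPET = """
<PcGts>
  <Page imageFilename="page_001.jpg" imageWidth="2000" imageHeight="3000">
    <TextRegion id="r1">
      <TextLine id="l1">
        <Coords points="100,200 900,200 900,260 100,260"/>
        <TextEquiv><Unicode>In the year of our Lord 1545</Unicode></TextEquiv>
      </TextLine>
    </TextRegion>
  </Page>
</PcGts>
"""

def read_lines(xml_text):
    """Yield (line id, list of (x, y) points, transcription) per TextLine."""
    root = ET.fromstring(xml_text)
    for line in root.iter("TextLine"):
        points = [tuple(map(int, p.split(",")))
                  for p in line.find("Coords").attrib["points"].split()]
        text = line.find("TextEquiv/Unicode").text
        yield line.attrib["id"], points, text

for line_id, points, text in read_lines(PAGE_SNIPPET):
    print(line_id, points[0], text)
```

The same linkage of image region and ground-truth text serves Layout Analysis and Document Understanding as well, which is why a single format can cover all the research areas listed above.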
Reference data are of highest importance in the READ project since not only
research, but also the application of tools developed in the project to large
scale datasets is directly based on such reference data. The usage of a
homogeneous format for data production was therefore one of the most
important requirements in the project.

_D2.8 Data Management Plan, P2, 21st February 2018_

READ builds upon the PAGE format, which was introduced by the University of
Salford in the FP7 Project IMPACT. It is well-known in the computer science
community and is able to link page images and annotated data in a
standardized way.
# Fair data
## Making data findable, including provisions for metadata
* Outline the discoverability of data (metadata provision) * Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers? * Outline naming conventions used * Outline the approach towards search keyword * Outline the approach for clear versioning * Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how
Part of the research in the Document Analysis and Recognition community is
carried out via scientific competitions organized within the framework of the
main conferences in the field, such as ICDAR (International Conference on
Document Analysis and Recognition) or ICFHR (International Conference on
Frontiers in Handwriting Recognition). READ partners are playing an important
role in this respect and have organized several competitions in recent years.
One of the objectives of READ is to support researchers in setting up such
competitions. Therefore the ScriptNet platform was developed by the National
Centre for Scientific Research – Demokritos in Athens to provide a service for
organizing such competitions. The datasets used in such competitions will be
made available as open as possible.
For this purpose we are using the ZENODO platform and have set up the
corresponding ScriptNet community: https://zenodo.org/communities/scriptnet/.
In comparison to current competitions this is a step towards making Research
Data Management more popular in the Pattern Recognition and Document Analysis
community.
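Records published through such a ZENODO community can be queried via Zenodo's public REST search API. The sketch below only builds the query URL and parses a response; the JSON layout assumed in `summarise` (`hits.hits[].metadata.title` and `doi`) reflects Zenodo's documented search response but should be treated as an assumption rather than a guarantee.

```python
from urllib.parse import urlencode

ZENODO_API = "https://zenodo.org/api/records"

def community_query(community, size=20):
    """Build the public Zenodo search URL for one community's records."""
    return f"{ZENODO_API}?{urlencode({'communities': community, 'size': size})}"

def summarise(response_json):
    """Extract (title, doi) pairs from a Zenodo-style search response."""
    return [(hit["metadata"]["title"], hit.get("doi"))
            for hit in response_json["hits"]["hits"]]

print(community_query("scriptnet"))
# -> https://zenodo.org/api/records?communities=scriptnet&size=20

# A real lookup would then be, e.g.:
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(community_query("scriptnet")))
#   for title, doi in summarise(data):
#       print(title, doi)
```

Because every deposit carries a DOI, competition datasets listed this way remain citable and findable independently of the project infrastructure.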
The format of the data is simple: as indicated above, all data come in the
PAGE XML format, together with the images and a short description explaining
details of the reference data.
Since all data in the READ project are created in the Transkribus platform and
with the Transkribus tools, the data format is uniform and can also be
generated via the tool itself. In this way we hope to encourage as many
researchers but also archives and libraries to provide research data.
## Making data openly accessible
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so * Specify how the data will be made available * Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)? * Specify where the data and associated metadata, documentation and code are deposited * Specify how access will be provided in case there are any restrictions
All data produced in the READ project are per se freely accessible (or will
become available during the course of the project). We encourage data
providers to use the Creative Commons schema (which is also part of the upload
mechanism in ZENODO) to make their data available to the public. Nevertheless
some data providers (archives, libraries) are not prepared to share their data
in a completely open way; instead, rather strict regulations restrict data
usage even for research and development purposes. Some datasets may therefore
be handed over only on request from specific users, and only after a data
agreement has been signed.
In Y2 we were especially proud to convince several libraries and archives to
deliver their data for competitions and to make it available in the ZENODO
repository. But what needs to be critically noted is that the “Creative
Commons” license does not perfectly fit the purpose of making historical
documents available as research data. The reason is that historical documents
are owned by a certain archive or library, but that there is no copyright
connected with the image files. Creative Commons nevertheless regulates
copyright restrictions and is therefore not appropriate for this case. Even
CC0 1.0 Universal does not fit since it requires a person who has the
copyright on the document and is therefore entitled to dedicate the work to
the public. 1
## Making data interoperable
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability. * Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
Because data in the READ project are handled in a highly standardized way,
data interoperability is fully supported. As indicated above, the main
the main standards in the field (XML, METS, PAGE) are covered and can be
generated automatically with the tools used in the project.
This can be fully underlined with the experiences gained in Y2. E.g. the PAGE
format became even more popular among computer scientists and therefore the
options to work with it in different environments and for different purposes
increased.
## Increase data re-use (through clarifying licenses)
* Specify how the data will be licenced to permit the widest reuse possible * Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed * Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why * Describe data quality assurance processes * Specify the length of time for which the data will remain re-usable
As indicated above we encourage use of Creative Commons and support other
licenses only as exceptions to this general policy.
# Allocation of resources
Explain the allocation of resources, addressing the following issues: *
Estimate the costs for making your data FAIR. Describe how you intend to cover
these costs * Clearly identify responsibilities for data management in your
project * Describe costs and potential value of long term preservation
Data Management is covered explicitly by the H2020 e-Infrastructure grant. All
beneficiaries are obliged to follow the outlined policy in the best way they
can.
# Data security
Address data recovery as well as secure storage and transfer of sensitive data
We distinguish between working data and published data. Working data are all
data in the Transkribus platform. This platform is operated by the University
of Innsbruck and data backup and recovery is part of the general service and
policy of the Central Computer Service in Innsbruck. This means that not only
regular backups of all data and software are carried out, but that a
distributed architecture exists which will secure data even in the case of
flooding or fire. Security is also covered by the Central Computer Service
comprising regular security updates, firewalls and permanent evaluation.
Published data are still kept on the Transkribus site as well, but are also
made available via ZENODO.
In Y2 we became even more aware of the security aspect through requests from
archives and libraries concerning the use and processing of (personal) data
from the 20th century. Moreover, the EU General Data Protection Regulation
(GDPR, Regulation (EU) 2016/679) became effective on 25 May 2018. Though
Transkribus falls under the Austrian Data Protection Law (which implements the
GDPR), we are aware that for specific projects we have to adapt our working
environment and set up specific rules for all employees. This shall be tackled
in Y3.
# Ethical aspects
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
There are no ethical issues connected with the management of research data in
READ.
Nevertheless, the only aspect which might play a role in the future is
documents from the 20th century containing personal data. For this case the
Transkribus site offers a solution: specific aspects of such documents
- which may be interesting research objects - can be classified (e.g. person
names) in a way that allows research to be carried out without conflicting
with personal data protection laws.
# Other
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
Not applicable
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0625_STEM4youth_710577.md
|
# Introduction
This Data Management Plan (DMP) has been prepared mostly following the
recent document “Guidelines on FAIR Data Management in Horizon 2020” (Version
3.0, 26 July 2016) 1 . As stated by those guidelines, the goal of this first
DMP version is NOT to produce an extensive and definitive document but rather
to set the basis of data management in the StemForYouth (SFY) H2020 project.
This document will therefore be kept alive during the evolution of the SFY
project. It will be periodically updated and completed during the whole
duration of the project (see Appendix 1, Timetable for review), and the final
version will be delivered at the end (M30).
The general goal of this SWAFS project is to bring teenagers closer to
Science and Technology. Students are at the core of the project, so it is
especially important to implement Responsible Research and Innovation (RRI)
keys in all activities of the project. The project has to ensure that RRI
concepts will be assimilated by the students in all their significant
dimensions, as possible future researchers and responsible citizens.
For instance, through the citizen science projects co-created and implemented
in their schools, students themselves will collect, treat and analyse research
data. Having a comprehensive data management plan in order to allow Open
Access to Research Data is thus of vital importance, not only for the
researchers participating in the project but also to disseminate RRI best
practices to the youngest.
# Data Summary
**2.1 What is the purpose of the data collection/generation and its relation
to the objectives of the project?**
The data in the project will be generated mostly in WP4 (Citizen Science at
School), WP5, WP6 and WP7. The data generated (indirectly) by WP5 will be
related largely to the Physics experiments the students will execute. Due to
their nature, these data will have little scientific value, so there is no
point in making them available on a standard open data platform. Instead,
these data will be freely shared among students on the Open Content Management
Platform the project will develop; they have value only in close relation to
the particular experiments.
The data produced by WP6 (Open Content Management Platform) and WP7 (Trial and
outreach activities) will largely concern various system statistics, the ways
the students and the teachers work with the content (the multidisciplinary
courses developed) and the trial results. These data are intended for
scientific purposes; however, they will have value only in connection with the
public deliverables and scientific papers to be published later in the
project. At the moment there is no need to describe these data more closely –
the description will most likely be included in the third version of the DMP.
Later in the document we will focus only on the data to be generated by WP4,
as its structure and the way these data could be used for scientific research
are already known.
The scientific research data collection will be mainly associated with WP4,
Citizen Science at School. In this WP, Citizen Science experiments will be
performed through a collective research process. The young boys and girls
will participate in the governance of the research projects, design the
experiments, conduct them and analyse the data.
In relation to the main objective of the project (bringing teenagers closer to
Science and Technology), the introduction of Citizen Science at school
supports the latest research in science education, which advocates a reduced
emphasis on memorisation of facts-based content and an increased engagement in
the process of inquiry through hands-on, learning-by-doing activities. It has
also been demonstrated that students' participation and motivation increase
strongly when they take part in Citizen Science projects, as a result of the
close contact with scientists, the perception of their ability to solve issues
important to the community, and their empowerment as true owners and
disseminators of the projects' results.
**2.2 What types and formats of data will the project generate/collect?**
The project, in relation to Citizen Science experiences, will generate two
types of data:
## HUMAN MOBILITY DATA
These data will be collected through an App installed on a mobile device
(smartphone or tablet). The data will consist of timestamped GPS positions
recorded every x seconds. The XML data table will be composed of the following
fields in a simple table:
1. id: GPS position ID
2. id_user: User ID
3. lon: Longitude
4. lat: Latitude
5. timestamp: Recorded time
6. accuracy: Accuracy provided by the user's devices
See for example: _http://dx.doi.org/10.5061/dryad.7sj4h_
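The field layout above maps directly onto a simple record type; a minimal Python sketch (the ISO 8601 timestamp format and the validity check are our illustrative assumptions, not project specifications):

```python
from dataclasses import dataclass

@dataclass
class GpsFix:
    """One timestamped GPS position, mirroring the six fields listed above."""
    id: int          # GPS position ID
    id_user: int     # User ID
    lon: float       # Longitude, decimal degrees
    lat: float       # Latitude, decimal degrees
    timestamp: str   # Recorded time (ISO 8601 assumed here)
    accuracy: float  # Accuracy reported by the user's device

def is_valid(fix):
    """Basic range check before a fix enters the data set."""
    return -180.0 <= fix.lon <= 180.0 and -90.0 <= fix.lat <= 90.0

fix = GpsFix(1, 42, 2.1734, 41.3851, "2017-03-05T10:15:00Z", 8.0)
```

A record failing the range check (e.g. a longitude of 200) would simply be discarded before the data set is deposited.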
## HUMAN DECISION MAKING DATA
The data will consist of demographic information about the volunteers together
with their actions when playing original games adapted from social dilemmas,
such as the prisoner's dilemma. The data will be structured in SQL or XML
format (although SQL is the preferred choice); here is an example of the
fields that can be found:
1. _id: User ID_
2. _num_jugador: Player ID within the network_
3. _partida_ID: Session ID_
4. _Diners_inicials: Player’s initial bucket joc clima_
5. _Num_seleccions: number of total actions for each player in joc clima_
6. _Guany_final: Final payoff joc clima_
7. _Rival_joc_inversor1: Opponent’s ID in joc inversor1_
8. _Rival_joc_inversor2: Opponent’s ID in joc inversor2_
9. _Rol_joc_inversor1: Role joc inversor1_
10. _Rol_joc_inversor2: Role joc inversor2_
11. _Seleccio_joc_inversor1: Strategy in joc inversor1_
12. _Seleccio_joc_inversor2: Strategy in joc inversor2_
13. _Seleccio_joc_premi: Strategy in Prisoner Dilemma game_
14. _Guess_joc_premi: Expectation in Prisoner Dilemma Game_
15. _Is_robot_joc_inversor1: Automatic computer selection in joc inversor1_
16. _Is_robot_joc_inversor2: Automatic computer selection in joc inversor2_
17. _Is_robot_joc_premi: Automatic computer selection in joc premi_
18. _Diners_clima: Payoff joc clima_
19. _Diners_inversor1: Payoff joc inversor1_
20. _Diners_inversor2: Payoff joc inversor2_
21. _Diners_premi: Payoff Prisoner Dilemma game_
_See for example (XML case):_ _https://doi.org/10.5281/zenodo.50429_
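Since SQL is the preferred choice, the structure above could be stored along these lines; a minimal sqlite3 sketch with an illustrative subset of the fields (the column types, and the use of an in-memory database, are our assumptions; the field names follow the list above):

```python
import sqlite3

# In-memory database for illustration; a real experiment would use a
# persistent SQL store.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE decision_making (
        id                 INTEGER,  -- User ID
        num_jugador        INTEGER,  -- Player ID within the network
        partida_ID         INTEGER,  -- Session ID
        Seleccio_joc_premi TEXT,     -- Strategy in the Prisoner Dilemma game
        Guany_final        REAL      -- Final payoff joc clima
    )
""")
# Parameterized insert of one volunteer's record.
con.execute(
    "INSERT INTO decision_making VALUES (?, ?, ?, ?, ?)",
    (1, 3, 17, "cooperate", 12.5),
)
strategy = con.execute(
    "SELECT Seleccio_joc_premi FROM decision_making WHERE id = 1"
).fetchone()[0]
```

The same schema can be dumped to XML for the Zenodo deposit, which keeps the two accepted formats interchangeable.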
_**Will you re-use any existing data and how?** _
Data from previous Citizen Science experiments on the same themes will be used
for comparison purposes, for calibration or to complete the set of collected
data. These existing data from Universitat de Barcelona are already deposited
in repositories such as Dryad, GitHub and Zenodo with a CC0 1.0 license,
allowing re-use. The data gathered in the citizen science experiments will
also be crossed with socio-economic demographic data (such as average life
expectancy, average wage, or average house prices in a given neighbourhood or
region) made publicly available through public administration open
repositories.
_**What is the origin of the data?** _
The data are collected during Citizen Science experiments. The volunteers
freely and consciously deliver their data, which result from their
participation in the experiment. In addition, the experiments will be designed
to solve, or propose solutions to, issues relevant to the community, based on
the evidence collectively gathered.
_**What is the expected size of the data?** _
Typically, 50-200 volunteers will participate in each experiment. The size of
the files is usually between 5 and 30 MB.
_**To whom might it be useful ('data utility')?** _
Each data set will be analysed by the students that designed the experiments
and the researchers participating in these dynamics. In addition, the Open
Data might be useful to different collectives, such as:
1. Other students involved in similar Citizen Science experiments in the frame of the StemForYouth project. It is foreseen that at least 3 schools in Barcelona, 1 in Athens and 1 in Warsaw will participate.
2. Other scientists with convergent research lines in terms of human mobility and collective decision making (in both cases, data are scarce and not generally shared).
3. Public institutions concerned by the social questions raised by the experiments. The data may serve as evidence to support some policies.
4. Teachers and students that will use the Citizen Science toolkit produced in the frame of StemForYouth in order to introduce Citizen Science at school.
# FAIR DATA
## Making data findable, including provisions for metadata
_**Are the data produced and/or used in the project discoverable with
metadata, identifiable and locatable by means of a standard identification
mechanism (e.g. persistent and unique identifiers such as Digital Object
Identifiers)?** _
Yes, the data will be associated with metadata and locatable by means of a
DOI, following the previous examples (
_http://dx.doi.org/10.5061/dryad.7sj4h_ or
_https://doi.org/10.5281/zenodo.50429_ )
_**What naming conventions do you follow?** _
All data set names will contain, in this order:
1. STEMForYouth
2. The name of the school that designed the experiment
3. Name or reference of the experiment
4. Name of the place and date of the experiment
An example of a data set name could thus be:
STEMForYouth_JesuïtesCasp_Gimcana_Barcelona_2017_03_05
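Applied programmatically, the convention reduces to a simple join of the four components; a small Python sketch (the helper function is ours, not part of the project tooling):

```python
def dataset_name(school, experiment, place, date):
    """Build a data set name following the naming convention above:
    project prefix, then school, experiment, place and date of the
    experiment, joined with underscores."""
    return "_".join(["STEMForYouth", school, experiment, place, date])

name = dataset_name("JesuïtesCasp", "Gimcana", "Barcelona", "2017_03_05")
```

Generating the name from its components rather than typing it by hand keeps the convention uniform across schools.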
_**Will search keywords be provided that optimize possibilities for re-use?**
_
Human mobility, Pedestrian mobility, Smart City, Human Decision Making, Social
Dilemmas, Citizen Science, STEMForYouth, Public Experiments, Collective
Experiments, Action Research, Human Behaviour, Collective Action, Geolocation,
Game Theory, Cooperation.
_**Do you provide clear version numbers?** _
Yes, see naming conventions. In addition, the raw data and the treated data
will be provided.
_**What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and
how.** _
Metadata created will carefully explain and describe the meaning of each of
the fields of the database. Additionally, there will be a lab notebook made by
students and the results of surveys. These files will be related to assessment
and performance of students and this needs to be discussed with all partners
with similar needs or similar outcomes (December 2016).
## Making data openly accessible
_**Which data produced and/or used in the project will be made openly
available as the default? If certain datasets cannot be shared (or need to be
shared under restrictions), explain why, clearly separating legal and
contractual reasons from voluntary restrictions.** _
Personal data will not be made available (they are kept in an independent
file). The rest of the data, and specifically the produced code, will remain
open. Some of the classroom activities will also encourage the use of open
source platforms.
_**Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their data closed if relevant provisions are made in the
consortium agreement and are in line with the reasons for opting out.** _
The data generated through the Citizen Science experiments will all be made
openly available, except for personal data (socio-demographic data that might
help to identify a single individual, or mobility data that could make it
possible to retrieve personal data such as a home address). In the latter
case, data will be properly randomized to avoid any re-identification process.
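One common way to implement such randomization is geo-masking, i.e. perturbing each published coordinate by a bounded random offset; a hedged Python sketch (the 200 m radius and the uniform-disc noise scheme are illustrative assumptions, not values specified by the project):

```python
import math
import random

def mask_position(lat, lon, radius_m=200.0, rng=None):
    """Displace a GPS fix by a random offset of at most radius_m metres,
    drawn uniformly over a disc, so that sensitive locations such as a
    home address cannot be recovered from the published traces."""
    rng = rng or random.Random()
    r = radius_m * math.sqrt(rng.random())   # uniform density over the disc
    theta = rng.uniform(0.0, 2.0 * math.pi)
    # Convert the metric offset to degrees (approx. 111,320 m per degree).
    dlat = (r * math.cos(theta)) / 111_320.0
    dlon = (r * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

masked_lat, masked_lon = mask_position(41.3851, 2.1734, rng=random.Random(0))
```

The displaced point stays within the chosen radius of the original fix, preserving neighbourhood-level analysis while breaking the link to an exact address.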
_**How will the data be made accessible (e.g. by deposition in a
repository)?** _
The data will be deposited simultaneously in Zenodo and Github using standard
files for data tables (e.g. SQL or XML).
_**What methods or software tools are needed to access the data?** _
In general, no software is necessary to access the data. Should software be
needed, the appropriate open source software will also be provided.
_**Is documentation about the software needed to access the data included?** _
Yes.
_**Is it possible to include the relevant software (e.g. in open source
code)?** _
Yes, in GitHub. A specific space in GitHub will be created in order to include
the software used in SFY.
_**Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.** _
Yes. Zenodo (OpenAire/CERN repository) and Github, traditionally associated
with the Open Source movement.
_**Have you explored appropriate arrangements with the identified
repository?** _ Yes.
_**If there are restrictions on use, how will access be provided?** _ Access
is free and open in both cases.
_**Is there a need for a data access committee?** _ No.
_**Are there well described conditions for access (i.e. a machine readable
license)?** _ Yes.
_**How will the identity of the person accessing the data be ascertained?** _
We will be able to use the protocols from Zenodo and GitHub (OpenSource and
OpenData) although it will be generally difficult to identify the person.
## Making data interoperable
_**Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different
origins)?** _ Yes. Recombination with different datasets is also considered,
especially with neighbourhood open-access data such as welfare, life
expectancy or geolocated urban elements. Students will use open software
applications such as Carto (https://carto.com) and Plot.Ly (https://plot.ly),
which are available to process data and to elaborate new data with online
tools. A good alternative for making open data available is to include the
data together with an R package and programs in GitHub that can dive into the
open data. R has become very popular in big data analysis and has a large user
community, so an open R package may be an efficient way to disseminate the WP4
(and other project statistics) results among the research community. Finally,
the processed data can also be downloaded and placed in a public repository
such as Zenodo. The codes used might not always be available; the availability
of codes to run specific digital platforms will be discussed with partners
and, if relevant, included in the next revised version of this DMP.
_**What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?** _
The vocabularies used will have three main origins: Education; Social
Sciences, when experimenting with human subjects; and Cartography and
Geolocation. Statistics, probability and Movement Ecology are possible
additional vocabularies. This will be discussed with all partners when dealing
with assessment. In the Zenodo repository, all metadata are stored internally
in MARC. Metadata are exported in several standard formats such as MARCXML,
Dublin Core, and the DataCite Metadata Schema, according to the OpenAIRE
Guidelines. For textual items, English is preferred but all languages are
accepted.
_**Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?** _
Yes. The vocabulary is also intended to be fully comprehensible to the
students participating in the project.
_**In case it is unavoidable that you use uncommon or generate project
specific ontologies or vocabularies, will you provide mappings to more
commonly used ontologies?** _ Yes.
## Increase data re-use (through clarifying licenses)
_**How will the data be licensed to permit the widest re-use possible?** _ All
the data will have a Creative Commons CC0 1.0 license.
_**When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.** _
The data will be made available shortly after the experiments, within six to
twelve months of their realization. In addition, data will be delivered to
volunteers almost immediately through user-friendly interfaces.
_**Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the re-use of some data is
restricted, explain why.** _ Yes, the data can be used by third parties. No
restriction.
_**How long is it intended that the data remains re-usable?** _
Always. In the Zenodo repository, items will be retained for the lifetime of
the repository. This is currently the lifetime of the host laboratory CERN,
which has an experimental programme defined for at least the next 20 years. In
all cases, a DOI and a permalink will be provided.
_**Are data quality assurance processes described?** _
Yes. The data quality will be assessed by the researchers of Universitat de
Barcelona who will help conduct the Citizen Science experiments. The
documentation attached to each database will include a discussion of data
quality. Scientific papers using the data will also validate the data quality.
Zenodo and GitHub guarantee only a minimal quality process. For instance, all
data files are stored along with an MD5 checksum of the file content, and
files are regularly checked against their checksums to ensure that the file
content remains constant.
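The same checksum verification can be reproduced locally on the offline copy of a data set; a minimal Python sketch (the throwaway file stands in for a deposited data file):

```python
import hashlib
import tempfile

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 checksum of a file in chunks, so large data sets
    never have to fit in memory; the result can be compared against the
    checksum stored alongside the file in the repository."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Illustration with a throwaway file standing in for a deposited data set.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"The quick brown fox jumps over the lazy dog")
checksum = md5_of_file(tmp.name)
```

Re-running the computation periodically and comparing against the stored value detects silent corruption of the offline copy, exactly as the repository does on its side.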
# ALLOCATION OF RESOURCES
_**What are the costs for making data FAIR in your project?** _
In the case of UB, there is no cost associated with deposit in the
repositories (although they have size limits), as all the processes described
are free of charge. In addition, an offline copy of all data sets will be
saved on a hard disk funded by the EU project (300-600 euros approx.), while
another copy containing personal data is stored on an in-house UB server.
_**How will these be covered? Note that costs related to open access to
research data are eligible as part of the Horizon 2020 grant (if compliant
with the Grant Agreement conditions).** _ Not applicable, unless more space is
needed than what the repositories offer for free. However, based on the
current plan, this case is unlikely. The hard disk will be funded by the EU
project and by the UB budget in the case of the Citizen Science experiments.
_**Who will be responsible for data management in your project?** _
Ignasi Labastida, Head of the Research Unit at the CRAI of the UB.
_**Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)** _
In case we exceed the quota, the cost of GitHub is 6.43 euros per month and
per user (see _https://github.com/pricing_ ). Use of Zenodo is unlimited, but
there is a size constraint of 2 GB per file; higher file quotas in Zenodo can
be requested. Zenodo's general policies can be consulted at
_https://zenodo.org/policies_ .
# DATA SECURITY
_**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?** _
The data will be stored on an in-house UB server. In addition, a copy will be
kept on an external disk. Data files and metadata in Zenodo are backed up
nightly and replicated into multiple copies in the online system.
_**Is the data safely stored in certified repositories for long term
preservation and curation?** _ Yes, the Zenodo repository provides this
certification: https://zenodo.org/policies
# ETHICAL ASPECTS
_**Are there any ethical or legal issues that can have an impact on data
sharing? These can also be discussed in the context of the ethics review. If
relevant, include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).** _
Before making the data available, all references that could allow
identification of the volunteers will be removed from the data set. All
volunteers will sign an informed consent. All the Citizen Science experiments
will pass through the Ethics Committee of Universitat de Barcelona, and the
rest of the partners should follow identical protocols. The data collection of
the Spanish Citizen Science experiments will follow the rules of the LOPD (Ley
Orgánica de Protección de Datos de Carácter Personal, Organic Law for Personal
Data Protection), and an equivalent process will be followed for the
experiments done in Poland and Greece.
_**Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?** _ We will not share personal
data.
# OTHER ISSUES
_**Do you make use of other national/funder/sectorial/departmental procedures
for data management? If yes, which ones?** _ None.
# APPENDIXES
_**1\. Timetable for review** _
M6: Submission of DMP first version (DMP version 1, D9.2)
M8: Introduction to Open Data and DMP (Second Project Meeting)
M12: First review of DMP (DMP version 2)
M24: Second review of DMP (DMP version 3)
M30: Final DMP (DMP version 4, D9.6)
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0628_CARESSES_737858.md
|
# Description of the deliverable
Deliverable D8.4 relates to Work Package 8 - Dissemination and Exploitation;
more specifically, D8.4 details the first version of the Data Management Plan
produced in Task T8.2 (Plans for Dissemination,
Exploitation, Data management). According to the CARESSES DoA, Deliverable
D8.1 _includes the Data Management Plan, according to the requirements of the
pilot on Open Research Data (to which CARESSES takes part)._
In particular, due to CARESSES’ peculiarities, the data produced during the
project will belong to two different classes:
1. Data that are produced as the outcome of RTD activities performed by a partner or a team of partners according to the DoA;
2. Data that are produced in run-time by a software or hardware component of the system (i.e., the robot operating in the smart environments) during experiments.
To clarify the difference between the two classes above, please notice that
data of the first kind typically correspond to the output of tasks performed
by CARESSES partners, either in parallel or in cascade. An example is the
Cultural Knowledge Base, which contains information on how the robot shall
adapt its behavior depending on the cultural identity of the user. The
Cultural Knowledge Base includes data that are the ultimate output of “Task
1.4 Cultural competence encoded with formal tools”, which in turn
relies on previous tasks such as “Task 1.1 Definition of scenarios”; “Task 1.2
Guidelines for culturally competent robots”; “Task 2.2 Cultural knowledge
representation” (to mention but a few). Another example are the data collected
in the pre- and post-testing structured interviews with clients and informal
caregivers in the last year of the project, which are the output of “Task 7.1
Pre- and post-testing structured interviews”, but iteratively rely on a set of
previous tasks that constitute the basis for making experiments and collecting
such data.
Data of the second kind are not the direct output of an RTD activity performed
by partners, but are typically produced by the system itself during
experiments, and correspond to the log of information exchanged in run-time
among different software components. An example are the data acquired by
sensors over time, which are then processed in order to detect the user’s
emotion or action, or the actions that have been chosen for execution by the
planner depending on the context at a time.
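Data of this second class can be pictured as a stream of timestamped messages exchanged among components; a hedged Python sketch of one such log record (the component names and field names are illustrative assumptions, not the CARESSES wire format):

```python
import json
from datetime import datetime, timezone

def log_entry(source, target, payload):
    """One run-time message in the interaction log: a timestamped record
    of which software component sent what to which component."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,
        "target": target,
        "payload": payload,
    }

# One JSON line per message, appended in temporal order to the log.
entry = json.dumps(
    log_entry("emotion_detector", "planner", {"user_emotion": "calm"})
)
```

Keeping the log as temporally ordered, self-describing records makes it straightforward to replay an experiment or to extract the input of a single component afterwards.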
The Consortium has made the effort, since the first months of the project, to
identify all the data that will be produced in CARESSES, making a clear
distinction between data of the first and second class. Converging on agreed
data formats as early as possible has played a key role in producing a
detailed Data Management Plan within the first five months of the project; it
will be crucial for maintaining a correct flow of information between
participants throughout the project, and it will lay the basis for making the
integration of software components easier.
Section 2 briefly describes the methodology that has been adopted by CARESSES
partners in order to converge on shared data formats and to select the data
types to be included in the initial release of the Data Management Plan.
Section 3 describes the data types that have been selected for publication as
Open Research Data. Section 4 comments on how this document may be used and
updated. Finally, conclusions are given. The attachment reports our initial
answers to most of the issues raised by the guidelines for DMP preparation (
_https://dmponline.dcc.ac.uk/_ ), laying the basis for producing high-quality
data that comply with all the constraints of the Open Research Data pilot.
# Methodology
The definition of the Data Management Plan started in the first month of the
project, relying on procedures and tools for collaborative working that have
allowed partners to work towards shared data formats: FlashMeeting (a
browser-based videoconference application), Google Drive online tools, a
CARESSES repository hosted on GitLab and, obviously, email.
## Procedures for converging to shared data formats
During the Kick-off meeting, a special session was devoted to making partners
aware of the importance of open data. Specifically, a top-down approach has
been proposed and approved by the consortium, based on the principle of
valuing the needs and expertise of each participant in its respective area of
research as a starting point, and establishing policies and a procedure to
rapidly converge to agreed data formats through subsequent steps of iterative
refinement.
According to this rationale, the following steps have been performed in
January-May 2017.
* January 2017: the coordinator prepares Input and Output data templates, that are presented and discussed during videoconference meetings: in the templates, partners are asked to describe the Input data that they need to receive from other partners, as well as the Output data that they are able to produce. As usual, data are distinguished in the two classes 1 and 2 above, i.e., data that are produced as the outcome of RTD activities _versus_ data that are produced in run-time by a component of the system. Templates have a structure that follows the guidelines for DMP preparation, in order to ease the mapping onto the online tool.
* 30th -31st January, Kick-off meeting in London: starting from the Input and Output templates filled by partners, the Coordinator describes the procedure to be used to converge to agreed data formats and then identify the data to be made public. The procedure can be summarized as follows:
* in a first phase, partners try to find a match between the Output Data they expect to produce and the Input Data that other partners expect to receive, in order to guarantee that each required Input has a matching corresponding Output; in this process, a unique format for matching Input/Output data is negotiated;
* in a second phase, after Input/Output data formats have been uniquely defined, data that are relevant to be included in the Data Management Plan are chosen, and described at a greater level of detail according the guidelines for DMP preparation.
* February – April: partners implement the two phases described above, by negotiating, during videoconference meetings, the exact format of each Input/Output data type of class 1 or 2. Data of class 2 require the additional effort of specifying details of the whole architecture of the system, including the three main components Cultural Knowledge Base, Culturally-Sensitive Planning & Execution, and Culture-aware Human-Robot Interaction, as well as the subcomponents that ultimately produce / consume those Input/Output data. The final outcome of this process is a “Living document about CARESSES Data Types” 1 describing the exact format of all the data exchanged in the system, which partners can use as a reference for the software development, and will be updated as the project progresses and new needs may arise.
* April – May: partners select, during videoconference meetings, the data of class 1 and 2 to be included in the Data Management Plan, which are then refined and described in greater detail according to the guidelines for DMP preparation. The final outcome of this process has been uploaded using the online tool, and is attached to this document.
# Data included in the ORDP
According to the procedure described in Section 2, the following Data types
have been selected to be included in the Data Management Plan. All of the data
will be collected in an ethically appropriate manner, with Ethics Committee
approval and in compliance with the principles and requirements of General
Data Protection Regulation (EU) 2016/679. Additional details can be found in
Deliverable 10.1 Ethics Requirements.
## Dataset 1: Cultural Knowledge
“Cultural Knowledge” is a dataset of the first class, i.e., it is produced by
RTD activities performed by partners in the context of WP1 (Transcultural
Nursing) and WP2 (Cultural Knowledge Representation). Specifically, WP1 leads
to the collection of a large corpus of knowledge that plays a key role for any
assistive robot aiming to show a culturally competent behavior (Hofstede 2001;
Papadopoulos 2006). Such knowledge includes the scenario tables describing
possible interactions between the robot and clients belonging to different
cultural groups (WP1, Deliverable D1.1), the guidelines for achieving a
culturally competent robotic behavior (WP1, Deliverable D1.2), as well as
additional sources of information that may be found in dedicated publications
or websites (describing geographical regions of different countries, customs,
manners, etc.). This heterogeneous knowledge is then structured in WP2 using a
formal language for knowledge representation (the Web Ontology Language 2),
allowing for a unique representation and for automatic acquisition, update and
reasoning.
By making this data available to the scientific community we:
1. Foster the discussion on what knowledge is required for cultural competence;
2. Allow other research groups to contribute to our cultural knowledge base;
3. Foster the research on efficient representations of cultural knowledge, proposing the CARESSES Cultural Knowledge Base as a benchmark.
The Cultural Knowledge Base, properly encoded using the OWL2 formalism, will
be stored in a public repository. Public repositories of ontologies are
becoming more and more popular. As an example, the SmartCity website
_http://smartcity.linkeddata.es/index.html_ collects ontologies defining
concepts and relationships about smart cities, energy and related fields, as
well as related datasets. Specifically, we propose the publication of
* the OWL2 ontology that we developed for the representation of cultural knowledge in WP2, together with
* the set of all sources of cultural knowledge produced or collected in WP2 that we deem relevant for the development of a culture-aware assistive robot (scenarios tables, guidelines, other sources of the knowledge encoded in the Cultural Knowledge Base in the form of a list of links to web sources).
The data contained in the Cultural Knowledge Base are not personal data, and
therefore do not fall under the General Data Protection Regulation (EU)
2016/679.
## Dataset 2: Interaction Logs
“Interaction Logs” is a dataset of the second class, i.e., it consists in the
log / history of the interactions between the system (robot plus smart
environment) and the user, in the form of a temporally ordered list of all
messages shared among the software components of the system. The dataset
includes the logs produced in WP6 (Testing in Health-Care Facilities and the
iHouse), but also those produced in WP5 (System Integration) during system
level integration of CARESSES modules, i.e., before the testing in health-care
facilities that is performed in WP6.
The messages shared among the components (goals, actions to be executed,
sensor data, position, posture and gesture of the user, etc.) during one
encounter between the robot and a user provide a persistent recording of the
events occurred during the encounter, which can be later analyzed:
* In itself, to find correlations between events, user actions and robot actions;
* Together with the pre- and post-testing structured interviews (see Dataset 3), to find correlations between events, user actions, robot actions and the users’ and caregivers’ responses.

By making this data available to the scientific community we:
1. Foster the research on culturally-competent robot behavior;
2. Allow for the study, discovery and definition of robot behaviors that have a positive/negative impact on the user and the caregiver, which can ultimately lead to the definition of standards in terms of actions and capabilities required for (effective) assistive robots.
The Interaction Logs data set will be stored in a public repository. More
specifically, as is common for logs of software components, we are
considering storing the data on GitHub ( _https://github.com/_ ), which is
among the largest and most popular repository hosting services. GitHub
repositories can be given a DOI and released using the data archiving tool
Zenodo ( _https://zenodo.org/_ ), which also ensures that all metadata
required for the identification of the repository are filled in before its
public release.
None of the data types exchanged among the software components of the system
are personal data (according to their current definition in the “Living
document about CARESSES Data Types”), and therefore they do not fall under the
General Data Protection Regulation (EU) 2016/679.
Whenever possible, video/audio recordings and annotations describing events
occurred during the encounter will be collected following informed consent and
anonymized in compliance with the General Data Protection Regulation (EU)
2016/679 (the additional cost of anonymization and the possible impact on
participants during experiments will be carefully considered before collecting
supporting video/audio data).
## Dataset 3: End-user Evaluation
“End-user Evaluation” is a dataset of the first class, i.e., it is produced by
RTD activity performed by partners in the context of WP6 (Testing in Health-
Care Facilities) and WP7 (End-User Evaluation). The research within WP6 and
WP7 leads to the acquisition of the corpus of CARESSES end-users’ responses to
pre- and post-testing structured interviews, aimed at evaluating the key
feature of cultural competence in designing robots more sensitive to the
user’s needs, customs and lifestyle, improving the quality of life of users
and their caregivers, reducing caregiver burden and improving the system’s
efficiency and effectiveness. Data include results of
* Client perception of the system’s cultural competence, Adapted CCATool (Papadopoulos et al., 2004);
* Client and informal caregiver related quality of life, SF-36 (Hays et al 1993);
* Caregivers burden, ZBI tool (Zarit et al., 1980);
* Client satisfaction about the system’s efficiency and effectiveness (Chin et al, 1988);
* Qualitative semi-structured interview transcripts.
By making this data available to the scientific community we:
1. Allow other researchers to validate the findings of CARESSES.
2. Foster the research on the evaluation of (culture-aware) assistive robots.
End-user Evaluation data will be released as a publicly accessible dataset
using the data archiving tool Zenodo ( _https://zenodo.org/_ ), which also
ensures that all metadata required for the identification of the dataset are
filled in before its public release. The data will be collected following
informed consent and will be pseudonymized, in compliance with the General
Data Protection Regulation (EU) 2016/679.
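As a concrete illustration of pseudonymization, direct identifiers can be replaced with stable keyed codes before the data are released. The sketch below is a minimal, hypothetical example (the key handling, field names and code length are our assumptions, not the project's actual procedure):

```python
import hashlib
import hmac

# Illustrative only: the key must be kept by the data controller,
# separate from the published dataset, so codes cannot be reversed.
SECRET_KEY = b"kept-by-the-data-controller"  # assumption, not a real key

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible code for a participant identifier."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]

# A record is published with the pseudonym in place of the identifier.
record = {"participant": "Greta Ahlgren", "sf36_score": 72}
published = {**record, "participant": pseudonymize(record["participant"])}
assert published["participant"] != record["participant"]
```

Because the same identifier always maps to the same code, responses collected at the two time points can still be linked, while re-identification requires the separately stored key.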
## Summary of data included in the ORDP
The following table summarizes the data to be openly published in CARESSES and
a tentative release date, after the data has been used by project’s
participants for the project’s RTD activities and to prepare scientific
publications.
<table>
<tr>
<th>
Name
</th>
<th>
Class
</th>
<th>
Content
</th>
<th>
Work Packages
</th>
<th>
Supporting material
</th>
<th>
Personal data
</th>
<th>
First release date
</th> </tr>
<tr>
<td>
Cultural
Knowledge
</td>
<td>
1
</td>
<td>
Cultural Knowledge Base in OWL2.
</td>
<td>
WP1, WP2
</td>
<td>
Scenarios tables, guidelines, other sources of knowledge encoded in
the Cultural Knowledge Base
</td>
<td>
NO
</td>
<td>
M24
</td> </tr>
<tr>
<td>
Interaction Logs
</td>
<td>
2
</td>
<td>
Logs of the interactions between the robot and a user, in the form of a temporally ordered list of all messages shared among CARESSES components.
</td>
<td>
WP5, WP6: Collected during the testing phases of CARESSES
</td>
<td>
Video / audio recordings and annotations in order to describe events occurred
during the encounter
</td>
<td>
NO
(supporting material
must be
collected and
handled in
line with
GDPR
2016/679)
</td>
<td>
M27
</td> </tr>
<tr>
<td>
End-User Evaluation
</td>
<td>
1
</td>
<td>
Adapted CCATool responses; SF-36 responses; ZBI responses; QUIS responses; qualitative semi-structured interview transcripts.
</td>
<td>
WP7: Collected from the end-users in the pre- and post-testing phases of CARESSES
</td>
<td>
N.A.
</td>
<td>
YES. To be collected and managed in line with GDPR 2016/679.
</td>
<td>
M37
</td> </tr> </table>
_Figure 1: Summary of the datasets to be included in the CARESSES Data
Management Plan (more details in the Attachment)._
D8.4: Data Management Plan (updated during the project) Page 10
# How to
As the project progresses, the consortium will use this deliverable, as well
as the detailed Data Management Plan uploaded using the online tool (see the
Attachment), as a reference for the publication of open data. The Data
Management Plan, even in its preliminary form (to be updated as the project
progresses), includes a detailed set of guidelines that participants will
carefully consider for the production and publication of high-quality data
meeting the required standards.
# Conclusions
## Compliance with the DoA and corrective actions
The Deliverable is the output of Task 8.2. According to the Description of
Action (DoA), deliverable D8.4:
_includes the Data Management Plan, according to the requirements of the pilot
on Open Research Data (to which CARESSES takes part)._
By considering that the draft Data Management Plan included in this
deliverable will be periodically updated during the project under the
supervision of the Exploitation, Dissemination and IPR Board, the work
reported in this document and its attachments fully complies with the plans in
the DoA.
## Achievements
A draft Data Management Plan has been prepared, which is the output of a
process aimed at identifying, since the first months of the project, all the
data that will be produced in CARESSES, in order to converge to agreed data
formats as early as possible. As an additional achievement, this process has
produced a “Living document about CARESSES Data Types”, describing the exact
format of all the data exchanged in the system, that partners can use as a
reference for the software development. This document will allow for
maintaining a correct flow of information between participants throughout the
project, and will lay the basis to make the integration of software components
easier.
Among all data produced in CARESSES, datasets to be openly published have been
chosen, described in greater detail according to the guidelines for DMP
preparation, and finally uploaded using the online tool (
_https://dmponline.dcc.ac.uk/_ ) .
As required by the Grant Agreement, updates to the Data Management Plan will
be made online, and discussed in the periodic technical report, in the section
dedicated to WP8 ‘Dissemination and Exploitation’.
## Next steps
In the next months, the Data Management Plan will be updated as the project
progresses and new needs may arise.
# Bibliography
1. Papadopoulos I (2006) Transcultural health and social care: development of culturally competent practitioners. Elsevier Health Sciences, 2006.
2. Hays R.D., Sherbourne C.D., Mazel R.M. (1993) The RAND 36-Item Health Survey 1.0. Health Econ 2(3): 217-27
3. Hofstede, G. (2001) Culture's Consequences: Comparing Values, Behaviors, Institutions and Organizations Across Nations. 2nd Edition, Thousand Oaks CA: Sage Publications.
4. Papadopoulos I, Tilki M, Lees S (2004) Promoting cultural competence in health care through a research based intervention. Journal of Diversity in Health and Social Care, 1(2): 107-115
5. Zarit S.H., Reever K.E., Bach-Peterson J. (1980) Relatives of the impaired elderly: correlates of feelings of burden. Gerontologist 20(6): 649-55. DOI: 10.1093/geront/20.6.649
6. Chin, J.P., Diehl, V.A., Norman, K.L. (1988) Development of an Instrument Measuring User Satisfaction of the Human-Computer Interface. CHI’88: 213-218
# Attachments
Attachment 1: The attachment reports the information uploaded using the online
tool for DMP preparation ( _https://dmponline.dcc.ac.uk/_ ) .
**DMP title**
**Project Name** My plan (Horizon 2020 DMP) - DMP title
**Project Identifier** CARESSES
**Grant Title** 737858
**Principal Investigator / Researcher** Antonio Sgorbissa
**Description** The groundbreaking objective of CARESSES is to build
culturally competent care robots, able to autonomously re-configure their way
of acting and speaking, when offering a service, to match the culture, customs
and etiquette of the person they are assisting. By designing robots that are
more sensitive to the user’s needs, CARESSES’ innovative solution will
offer elderly clients a safe, reliable and intuitive system to foster their
independence and autonomy, with a greater impact on quality of life, a reduced
caregiver burden, and an improved efficiency and efficacy. The need for
cultural competence has been deeply investigated in the Nursing literature.
However, it has been totally neglected in Robotics. CARESSES stems from the
consideration that cultural competence is crucial for care robots as it is for
human caregivers. From the user’s perspective, a culturally appropriate
behavior is key to improve acceptability; from the commercial perspective, it
will open new avenues for marketing robots across different countries.
CARESSES will adopt the following approach. First, we will study how to
represent cultural models, how to use these models in sensing, planning and
acting, and how to acquire them. Second, we will consider three (physically
identical) replicas of a commercial robot on the market and integrate cultural
models into them, by making them culturally competent. Third, we will test the
three robots, customized for three different cultures, in the EU (two cultural
groups) and Japan (one cultural group), on a number of elderly volunteers and
their informal caregivers. Evaluation will be conducted through quantitative
and qualitative investigation. To achieve its groundbreaking objective,
CARESSES will involve a multidisciplinary team of EU and Japanese researchers
with a background in Transcultural Nursing, AI, Robotics, Testing and
evaluations of health-care technology, a worldwide leading company in Robotics
and a network of Nursing care homes.
**Funder** European Commission (Horizon 2020)
**1\. Data summary**
**Provide a summary of the data addressing the following issues:**
**State the purpose of the data collection/generation**
**Explain the relation to the objectives of the project**
**Specify the types and formats of data generated/collected**
**Specify if existing data is being re-used (if any)**
**Specify the origin of the data**
**State the expected size of the data (if known)**
**Outline the data utility: to whom will it be useful**
Three datasets have been selected to be included in the Data Management Plan:
Dataset 1: Cultural Knowledge Base (CKB)
Dataset 2: Interaction Logs (IL)
Dataset 3: End-Users Responses (EUR)
**Dataset 1: Cultural Knowledge Base (CKB)**
_State the purpose of the data collection/generation_
The purpose of WP1 and WP2 is to: 1) collect the corpus of knowledge allowing
an assistive robot to exhibit a culturally competent behavior (with a specific
focus on the three cultures considered during the final testing stage); and 2)
formalize it in a framework allowing for the automated acquisition, update and
retrieval of culture-related information. This framework is the Cultural
Knowledge Base, which will allow for performing a cultural assessment of the
user and aligning plans and sensorimotor behaviours to the user’s cultural
identity.
_Explain the relation to the objectives of the project_
The design and development of a framework for cultural knowledge
representation, allowing for the automated acquisition, update and retrieval
of culture-related information, is the purpose of KRA2 and directly matches
the scientific objectives O2, O3, O4 and the technological objectives O5, O6
of the project. Moreover, the Cultural Knowledge Base is key to performing a
cultural assessment of the user and aligning plans and sensorimotor behaviours
to the user’s cultural identity, which is the main goal of the project.
_Specify the types and formats of data generated/collected_
_What format will your data be in (SPSS, Open Document Format, tab-delimited
format, etc)?_
The CKB will be an ontology written in the OWL 2 language (
_https://www.w3.org/OWL/_ ).
_Why have you chosen to use a particular format?_
OWL is described as “a Semantic Web language designed to represent rich and
complex knowledge about things, groups of things, and relations between
things. OWL is a computational logic-based language such that knowledge
expressed in OWL can be exploited by computer programs, e.g., to verify the
consistency of that knowledge or to make implicit knowledge explicit.” (
_https://www.w3.org/OWL/_ ) As such, the language perfectly matches the
requirements for the Cultural Knowledge Base, as described in the previous
sections.
_Do the chosen formats and software enable sharing and long-term validity of
data?_
OWL and its current version OWL 2 are a standard developed by the W3C
consortium
( _http://www.w3.org/_ ), which is the main international standards
organization for the World Wide Web, and are arguably the most popular
knowledge representation language. OWL (OWL 2 since 2009) was first published
in 2004 and it has always been actively maintained by the W3C.
_Specify if existing data is being re-used_
_Are there any existing data or methods that you can reuse?_
We will reuse, as far as our application permits it, existing ontologies for
the description of concepts of relevance in the context of the CARESSES
project.
_Do you need to pay to reuse existing data?_
Many ontologies are published under licenses that allow for free use, sharing
and reuse, such as the CC BY 4.0 license (
_https://creativecommons.org/licenses/by/4.0/_ ). At the moment, there is no
evidence that we will have to reuse data which is not freely accessible.
_Are there any restrictions on the reuse of third-party data?_
We will refer to the licenses of the third-party ontologies we will include in
the CKB ontology (if any) to determine possible restrictions.
_Can the data that you create - which may be derived from third-party data -
be shared?_
We will refer to the licenses of the third-party ontologies we will include in
the CKB ontology (if any) to define the conditions under which the CKB can be
accessed, used and shared.
_Specify the origin of the data_
_How are the data produced and collected (possibly with reference to the
CARESSES WorkPlan)?_
Task 1.1, Task 1.2 and Task 1.3 are devoted to the identification, collection
and validation of all the knowledge required by culturally competent robots
for elderly assistance. At the same time, Task 2.1, Task 2.2 and Task 2.3 are
devoted to the identification and development of the framework and tools for
the representation of cultural knowledge. Task 1.4 is devoted to the
formalization of the knowledge collected in Tasks 1.1-3 with the tools
developed in Tasks 2.1-3.
_State the expected size of the data_
_State the expected size, not necessarily in terms of “memory storage”; this
can be the number of records in a Database, a number of “facts” or “rules”,
values versus time, and so on._
Ontologies are usually described in terms of number of classes, properties,
datatypes and instances they provide (see for example the Time Ontology:
_http://lov.okfn.org/dataset/lov/vocabs/time_ ). As a reference, the DogOnt
ontology for the description of intelligent domotic environments
( _http://lov.okfn.org/dataset/lov/vocabs/dogont_ ) describes 893 classes and
74 properties.
_Outline the data utility: to whom it will be useful_
An ontology describing the corpus of knowledge required for culturally
competent assistive robots can be useful: 1) in the field of Robotics, as a
guideline and reference for the development of robots able to interact with
people while keeping cultural information into account; 2) in the field of
Transcultural Nursing, as a validated and publicly available ontology for the
description of concepts related to cultural competence and the detailing of a
number of cultures (specifically, the ones to be considered during the testing
phase of CARESSES).
_Please provide a concrete example of the data produced in the right format_
_Example 1: OWL ontology (with examples of object properties, data properties,
classes and individuals) describing some of the concepts contained in the CKB_
```xml
<?xml version="1.0"?>
<rdf:RDF xmlns="http://example.com/caressesontology#"
         xml:base="http://example.com/caressesontology"
         […]
         xmlns:caressesontology="http://example.com/caressesontology#">

  <owl:Ontology rdf:about="http://example.com/caressesontology">
    <rdfs:comment>This is the Knowledge Base for Caresses</rdfs:comment>
  </owl:Ontology>

  <!-- Object Properties -->

  <!-- http://example.com/caressesontology#has_Positive -->
  <owl:ObjectProperty rdf:about="http://example.com/caressesontology#has_Positive">
    <rdfs:domain rdf:resource="http://example.com/caressesontology#User"/>
    <rdfs:range rdf:resource="http://example.com/caressesontology#Topic"/>
  </owl:ObjectProperty>

  <!-- Data Properties -->

  <!-- http://example.com/caressesontology#age -->
  <owl:DatatypeProperty rdf:about="http://example.com/caressesontology#age">
    <rdfs:domain rdf:resource="http://example.com/caressesontology#User"/>
    <rdfs:range rdf:resource="http://www.w3.org/2001/XMLSchema#int"/>
  </owl:DatatypeProperty>

  <!-- http://example.com/caressesontology#gender -->
  <owl:DatatypeProperty rdf:about="http://example.com/caressesontology#gender">
    <rdfs:domain rdf:resource="http://example.com/caressesontology#User"/>
    <rdfs:range rdf:resource="http://www.w3.org/2001/XMLSchema#string"/>
  </owl:DatatypeProperty>

  <!-- Classes -->

  <!-- http://example.com/caressesontology#AlmondChicken -->
  <owl:Class rdf:about="http://example.com/caressesontology#AlmondChicken">
    <rdfs:subClassOf rdf:resource="http://example.com/caressesontology#ChineseFood"/>
    <owl:disjointWith rdf:resource="http://example.com/caressesontology#CantoneseFriedRice"/>
    <rdfs:comment>Almond chicken</rdfs:comment>
  </owl:Class>

  <!-- http://example.com/caressesontology#Badminton -->
  <owl:Class rdf:about="http://example.com/caressesontology#Badminton">
    <rdfs:subClassOf rdf:resource="http://example.com/caressesontology#Sport"/>
    <rdfs:comment>Badminton</rdfs:comment>
  </owl:Class>

  <!-- Individuals -->

  <!-- http://example.com/caressesontology#SCH_AlmondChicken -->
  <owl:NamedIndividual rdf:about="http://example.com/caressesontology#SCH_AlmondChicken">
    <rdf:type rdf:resource="http://example.com/caressesontology#AlmondChicken"/>
    <likeliness rdf:datatype="http://www.w3.org/2001/XMLSchema#decimal">0.7</likeliness>
    <neg>I really don’t like almond chicken</neg>
    <pos>Almond chicken is my favourite chinese food!</pos>
    <pos>Almond chicken is so tasty!</pos>
    <pos>Chinese almond chicken is lovely!</pos>
    <pos>I always eat almond chicken</pos>
    <pos>I love almond chicken!</pos>
    <poswait>Almond chicken is delicious, isn’t it?</poswait>
    <poswait>Do you know a good place here around where I can eat almond chicken?</poswait>
    <poswait>Have you eaten almond chicken recently?</poswait>
    <poswait>What about some almond chicken today?</poswait>
    <que>Do you like almond chicken?</que>
    <topicname>AlmondChicken</topicname>
    <rdfs:comment>Topic Almond Chicken related to a Chinese User</rdfs:comment>
  </owl:NamedIndividual>
</rdf:RDF>
```
**Dataset 2: Interaction Logs (IL)**
_State the purpose of the data collection/generation_
The IL data set is the collection of messages shared among the CARESSES
components during interactions between the culturally competent robot and a
person. Each IL file captures the events occurred during the encounter, the
actions and status of the person (as perceived by the robot) and the actions
of the robot, and is acquired with the aim of allowing offline analyses and
replays of the events occurred during the interaction.
_Explain the relation to the objectives of the project_
In the course of Task 5.6, the analysis of the Interaction Logs is key to
evaluate the performance of the components developed in WP2, WP3 and WP4,
which refer to the technical objectives O5-O12 of the project. In the context
of the end-user evaluation performed in WP7, the analysis of the Interaction
Logs collected during the tests in WP6 can help in assessing the performance
of the culturally competent assistive robot, which contributes to the
validation objective O15.
_Specify the types and formats of data generated/collected_
_What format will your data be in (SPSS, Open Document Format, tab-delimited
format, etc)?_
The IL data set will be a collection of text files in CSV format, which is
among the most readable formats for information storage. Each line corresponds
to a record, i.e., all the information related to a message shared over
universAAL by any of the software components of the culturally competent robot
during an encounter with a person. A record is divided into fields separated
by a delimiter (e.g., a comma). Fields of relevance in our case include: 1)
timestamp of the message; 2) owner of the message; 3) content of the message.
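The record layout described above can be sketched as follows (field names follow the description; the sample values are invented for illustration). Python's csv module quotes any field that happens to contain the delimiter, so free-text message content is stored safely:

```python
import csv
import io

# One IL record per line: timestamp, owner (producing component), content.
rows = [
    {"timestamp": "1496049951891", "owner": "D5.1",
     "content": "[Remind_medication : blue_pill : between 12.00 and 12.30]"},
]

# Write the log (an in-memory buffer stands in for a file on disk).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["timestamp", "owner", "content"])
writer.writeheader()
writer.writerows(rows)

# Reading the log back yields the same records.
records = list(csv.DictReader(io.StringIO(buf.getvalue())))
assert records[0]["owner"] == "D5.1"
```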
_Why have you chosen to use a particular format?_
The CSV format is a very popular format for data exchange, widely supported by
consumer, business and scientific applications (e.g., Microsoft Excel,
MATLAB). The fields to store in the IL records comply with popular standards
for log files (e.g., the ROS Bag file format for the log files of ROS
applications, defined in _http://wiki.ros.org/Bags/Format/2.0_ , or the
Extended Log File Format for the log files of web servers, defined in
_http://www.w3.org/TR/WD-logfile.html_ ). In particular, the ROS middleware
( _http://www.ros.org/_ ) is the de-facto standard in robotics applications,
and log files written in the ROS Bag file format can be replayed and accessed
within ROS by any other component. Conversion from the CSV format to the ROS
Bag file format is not difficult (see
_http://answers.ros.org/question/119211/creating-a-ros-bag-file-from-csv-file-data/_ ).
_Do the chosen formats and software enable sharing and long-term validity of
data?_
The CSV format is among the most readable formats for information storage,
supported by the vast majority of software for numerical and data analysis.
_Specify if existing data is being re-used_
_Are there any existing data or methods that you can reuse?_
No. The IL data will be entirely produced in the course of CARESSES, during
interactions between the culturally competent robot and a person.
_Can the data that you create - which may be derived from third-party data -
be shared?_
We do not foresee any restriction to sharing the IL data set.
_Specify the origin of the data_
_How are the data produced and collected (possibly with reference to the_
_CARESSES WorkPlan_
Interaction Logs are collected during two separate stages of the project: 1)
in the course of Task 5.6 (evaluation of the integrated CARESSES modules as
validation stage within the development process) and 2) in the course of Task
6.3 (experimental evaluation of the culturally competent robot with the
participants in the control and experimental groups) and Task 6.4
(experimental evaluation of the culturally competent robot in the smart house
iHouse).
_State the expected size of the data_
_State the expected size, not necessarily in terms of “memory storage”; this
can be the number of records in a Database, a number of “facts” or “rules”,
values versus time, and so on._
The IL data set will be described in terms of number of files (i.e., number of
recorded interactions between the culturally competent robot and a person) and
number of records in each file.
_Outline the data utility: to whom it will be useful_
The IL data set, as a collection of quantitative data describing interactions
between a person and an assistive robot, can be useful to academic and
industrial researchers aiming at defining guidelines, best practices and
standards in the field of Human-Robot Interaction (e.g., identifying which
robot actions are frequently requested by people, identifying recurrent
sequences of robot actions – human actions that a robot could rely on to
exhibit predictive behaviours). Portions of the dataset may also be used by
roboticists for the development and testing of specific robotic applications
(e.g., the IL dataset can be used to train and test algorithms for learning
the habits/routines of a person from the analysis of recurring events).
_Please provide a concrete example of the data produced in the right format_
_Log of messages shared over universAAL, as provided by the universAAL
component Log Monitor_
<table>
<tr>
<th>
Message
Type
</th>
<th>
Timestamp
</th>
<th>
Content
</th> </tr>
<tr>
<td>
D5.1 _(user request)_
</td>
<td>
1496049951891
</td>
<td>
[Remind_medication : blue_pill : between 12.00 and
12.30]
</td> </tr>
<tr>
<td>
D6.1 _(user state)_
</td>
<td>
1496049973842
</td>
<td>
[Greta : Greta Ahlgren : 10/04/2017 : 12:05 : (2.3, 1.0, 0.0) : (1.2, 0.0, 90.0) : Kitchen.FridgeArea : Standing : - : Cooking : Eating : Excited]
</td> </tr> </table>
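For offline analysis, the bracketed payloads in the sample log can be split back into their fields. A minimal sketch (the ` : ` separator is inferred from the sample above, not a documented universAAL convention):

```python
def parse_content(content: str) -> list[str]:
    """Split a bracketed message payload into its fields.
    Assumes ' : ' as the separator, as in the sample log above."""
    return [field.strip() for field in content.strip("[]").split(" : ")]

fields = parse_content("[Remind_medication : blue_pill : between 12.00 and 12.30]")
print(fields)  # -> ['Remind_medication', 'blue_pill', 'between 12.00 and 12.30']
```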
**Dataset 3: End-Users Responses (EUR)**
_State the purpose of the data collection/generation_
The end-user evaluation of the culturally competent robot performed within the
CARESSES project implies gathering the responses of the end user to a number
of tools (at present they include: Adapted CCA tool, SF-36, ZBI, QUIS) and the
transcripts of qualitative semi-structured interviews. The analysis of such
responses: 1) enables us to describe the differences in baseline
characteristics of the clients within and between the arms they are allocated
to, which is crucial for controlling and thus minimizing the impact of
confounding variables; and 2) allows for assessing the impact of the
(culturally competent) assistive robot in terms of quality of life, increased
independence and autonomy, health and care efficiency gains.
_Explain the relation to the objectives of the project_
The assessment of the impact of the culturally competent assistive robot on
the lives of elderly people and their informal carers is a key goal of the
whole CARESSES project. More specifically, the evaluation of the robot with
elderly participants belonging to different cultures refers to validation
objectives O15, O16 and O17.
_Specify the types and formats of data generated/collected_
_What format will your data be in (SPSS, Open Document Format, tab-delimited
format, etc)?_
Quantitative data collected from structured questionnaires will comply with
the SPSS v21 format. Qualitative data collected from semi-structured
interviews will be transcribed verbatim using Microsoft Word and subsequently
imported into QSR NVivo 11.
_Why have you chosen to use a particular format?_
We hold expertise in both SPSS and QSR NVivo, both of which are advanced and
appropriate analytical software tools.
_Do the chosen formats and software enable sharing and long-term validity of
data?_
Yes.
_Specify if existing data is being re-used_
_Are there any existing data or methods that you can reuse?_
No. The EUR data will be entirely produced in the course of CARESSES, during
interactions between the culturally competent robot and the end-users
recruited for the testing phase.
_Do you need to pay to reuse existing data?_
No, but we will need permission to use outcome tools of interest.
_Are there any restrictions on the reuse of third-party data?_
No.
_Can the data that you create - which may be derived from third-party data -
be shared?_
Yes, within the CARESSES consortium. Anonymised/non-identifiable data will be
used in outputs.
_Specify the origin of the data_
_How are the data produced and collected (possibly with reference to the
CARESSES Work Plan)?_
Quantitative data will be produced in the course of Tasks 6.1, 6.2, 6.3, 7.1,
7.3 through the following structured tools applied during the testing phase:
* Background data: Cultural group, age, gender, client diagnosis, educational level, marital status, religion and religiosity, and data collected during screening (e.g. aggression, cognitive competence)
* Outcome data: Adapted RCTSH Cultural Competence Assessment Tool (CCATool,
Papadopoulos et al., 2004), Short Form (36) Health Survey (SF-36 v2, Hays et
al 1993), the Zarit Burden Inventory (ZBI; Zarit et al., 1980), and
Questionnaire for user interface satisfaction (QUIS; Chin et al., 1988). We
also need to record screening results and response rates; all of these data
will be compiled into SPSS.
Qualitative data will be collected in the course of Tasks 6.1, 6.2, 6.3, 7.2,
7.3 during semi-structured interviews with clients and informal caregivers.
_State the expected size of the data_
_State the expected size, not necessarily in terms of “memory storage”; this
can be the number of records in a Database, a number of “facts” or “rules”,
values versus time, and so on._
SPSS database: 45 clients and up to 45 caregivers, background and associated
screening data per client, background data per caregiver, two time points for
SF-36 and ZBI, one time point for CCATool and QUIS. Therefore, approximately
90 rows and 100 columns of data.
NVivo database: Transcripts of 15 clients and up to 15 caregivers
_Outline the data utility: to whom it will be useful_
Anyone involved with the analysis and dissemination activities associated with
WP7 data.
_Please provide a concrete example of the data produced in the right format_
_SPSS data:_
_Client number, cultural group, age, gender, diagnosis, educational level,
marital status, religion, religiosity, InterRai aggression, InterRai cognitive
competence, CCATool questions and scores, SF36 questions and scores, ZBI
questions and scores, QUIS questions and scores_
<table>
<tr>
<th>
_1_
</th>
<th>
_WE_
</th>
<th>
_71_
</th>
<th>
_M_
</th>
<th>
_Mild dementia_
</th>
<th>
_University degree_
</th>
<th>
_Widowed_
</th>
<th>
_C of E_
</th>
<th>
_medium_
</th>
<th>
_low_
</th>
<th>
_high_
</th>
<th>
_5_
</th>
<th>
_…_
</th> </tr>
<tr>
<td>
_2_
</td>
<td>
_IND_
</td>
<td>
_77_
</td>
<td>
_F_
</td>
<td>
_Depression_
</td>
<td>
_College level_
</td>
<td>
_Widowed_
</td>
<td>
_Hinduism_
</td>
<td>
_low_
</td>
<td>
_low_
</td>
<td>
_high_
</td>
<td>
_7_
</td>
<td>
_…_
</td> </tr> </table>
2. **FAIR data**
**2.1 Making data findable, including provisions for metadata:**
**Outline the discoverability of data (metadata provision)**
**Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as**
**Digital Object Identifiers?**
**Outline naming conventions used**
**Outline the approach towards search keyword**
**Outline the approach for clear versioning**
**Specify standards for metadata creation (if any). If there are no standards
in your discipline describe what metadata will be created and how**
**Dataset 1: Cultural Knowledge Base (CKB)**
_Outline the discoverability of data (metadata provision)_
_What metadata, documentation or other supporting material should accompany
the data for it to be interpreted correctly?_
The Linked Open Vocabularies (LOV) initiative (
_http://lov.okfn.org/dataset/lov_ ) hosts a large number of vocabularies and
ontologies for the semantic web, and actively promotes the design and
publication of high quality ontologies. Their recommendations for the metadata
and documentation supporting an ontology are publicly available at
_http://lov.okfn.org/Recommendations_Vocabulary_Design.pdf_ . We will adhere,
as far as the peculiarities of our application allow it, to those guidelines
in the preparation of the metadata and documentation of the CKB. As an
example, the above recommendations define the fields and formats of the
metadata to associate to classes and properties as “rdfs:label” (element
title), “rdfs:comment” (element role), “rdfs:isDefinedBy” (explicit link
between an element and the namespace it belongs to),
“vs:term_status” (element status among “stable”, “testing”, “unstable”,
“deprecated”).
The ontology itself, together with the metadata, allows for the automatic
generation of documentation.
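As an illustration, a hypothetical CKB class annotated along these lines might look as follows in Turtle notation (the class name, namespace and annotation texts are placeholders, not actual CKB content):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix vs:   <http://www.w3.org/2003/06/sw-vocab-status/ns#> .
@prefix ckb:  <http://example.org/caresses/ckb#> .

# Placeholder class showing the recommended annotation fields.
ckb:Greeting a owl:Class ;
    rdfs:label "Greeting"@en ;          # element title
    rdfs:comment "A culture-dependent action performed when meeting a person."@en ;  # element role
    rdfs:isDefinedBy ckb: ;             # link between element and its namespace
    vs:term_status "testing" .          # stable / testing / unstable / deprecated
```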
We will also provide as much as possible of the original cultural information
formalized in the CKB, to provide the rationale for the formalization we
propose and foster the research on, on the one hand, what knowledge makes for
a culturally competent robot and, on the other hand, how such knowledge should
be formalized for its effective use by the robot.
_What information needs to be retained to enable the data to be read and_
_interpreted in the future?_
The metadata written in accordance with the aforementioned recommendations and
the documentation automatically generated from the CKB ontology contain all
the information to be retained to ensure its readability.
_How will you capture / create the metadata?_
Metadata will be created and updated manually, concurrently with the data, in
the course of WP1 and WP2 as described in the above sections. Creating or
updating metadata specifically consists of writing a number of text fields for
each element of the ontology.
_Can any of this information be created automatically?_
Metadata will be manually inserted in the CKB ontology. A number of tools
exist to automatically generate the documentation of an ontology starting from
its description and metadata in the OWL / RDF language, e.g. Parrot
_http://idi.fundacionctic.org/parrot/parrot_ .
_What metadata standards will you use and why?_
We will adhere to the recommendations for metadata and documentation of
ontologies drafted by the LOV initiative (publicly available at
_http://lov.okfn.org/Recommendations_Vocabulary_Design.pdf_ ), which are aimed
at maximizing the readability and usability of the ontology by other users.
Such guidelines require little effort in the production of the metadata and
documentation and ensure compatibility with the requirements of many
freely available tools, such as WebVOWL, for the interpretation and
visualization of ontologies (for example in the case of the Time ontology
_http://visualdataweb.de/webvowl/#iri=http://www.w3.org/2006/time_ ).
_Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?_
Once made publicly available, the CKB will be identified by a unique Uniform
Resource Identifier (URI). We will consider suggesting the CKB ontology for
inclusion in databases such as Protégé Ontology Library
( _https://protegewiki.stanford.edu/wiki/Protege_Ontology_Library_ ) and LOV,
which provides rich indexing and search tools.
_Outline naming conventions used_
A number of different style guidelines and naming conventions for ontologies
have been proposed. Reference [1] surveys the most popular ones and extrapolates
guidelines which are valid in a multilingual scenario. Considering the
intrinsic multilingual nature of the CARESSES project, we will adopt, whenever
possible, the guidelines they propose for multilingual applications.
[1] Montiel-Ponsoda, E., Vila Suero, D., Villazón-Terrazas, B., Dunsire, G.,
Escolano Rodríguez, E., & Gómez-Pérez, A. (2011). Style guidelines for naming
and labeling ontologies in the multilingual web.
_Outline the approach towards search keyword_
Indexing and search engines automatically identify the names of classes,
properties, datatypes and instances as valid search keywords (see for example
_http://lov.okfn.org/dataset/lov/about_ ).
_Outline the approach for clear versioning_
The metadata of the CKB ontology will include information about the date of
publication of the ontology (“dc:issued” element of the Dublin Core vocabulary
for resource description), the date of the last modification (“dc:modified”),
a version code (“owl:versionInfo”) and a change log with respect to the
previous version (“rdfs:comment”). In addition to this, popular Git repository
hosting services provide a large number of tools for version control.
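For instance, the ontology header could carry versioning annotations of this kind (the URI, dates and version code are illustrative only):

```turtle
@prefix dc:   <http://purl.org/dc/terms/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

<http://example.org/caresses/ckb> a owl:Ontology ;
    dc:issued   "2018-06-01"^^xsd:date ;   # date of publication
    dc:modified "2018-09-15"^^xsd:date ;   # date of last modification
    owl:versionInfo "0.2.0" ;              # version code
    rdfs:comment "Changes since 0.1.0: added placeholder class hierarchy."@en .
```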
_Specify standards for metadata creation (if any). If there are no standards
in your discipline describe what metadata will be created and how_
We will adhere, as far as the peculiarities of our project will allow it, to
the recommendations for metadata and documentation of ontologies drafted by
the LOV initiative (
_http://lov.okfn.org/Recommendations_Vocabulary_Design.pdf_ ).
**Dataset 2: Interaction Logs (IL)**
_Outline the discoverability of data (metadata provision)_
_What metadata, documentation or other supporting material should accompany
the data for it to be interpreted correctly?_
The IL data set requires documentation describing: 1) the system’s functional
architecture (in terms of what the different CARESSES components require and
provide and how they are connected) and 2) the details of the messages shared
over universAAL, which are stored in the IL files.
Metadata are divided into two categories. Metadata related to an IL file (e.g.
time and location of the recorded interaction) will be manually added to each
file. Metadata related to the messages (time, owner, message type) are stored
in the fields of each record together with the message content and will be
automatically associated with the messages by the logging tool.
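A minimal sketch of how such a logging tool might stamp each record with the automatically generated metadata is given below (Python, with an in-memory buffer standing in for an IL file; the field names and component names are assumptions, not the actual universAAL/CARESSES schema):

```python
import csv
import io
from datetime import datetime, timezone

# Illustrative record layout: automatically generated metadata (time,
# owner, message type) stored alongside the message content.
FIELDS = ["time", "owner", "message_type", "content"]

def log_message(writer, owner, message_type, content):
    """Append one message record, stamping it with the current UTC time."""
    writer.writerow({
        "time": datetime.now(timezone.utc).isoformat(),
        "owner": owner,
        "message_type": message_type,
        "content": content,
    })

buffer = io.StringIO()  # stands in for an IL file on disk
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
log_message(writer, "CulturalReasoner", "context_event", "user entered kitchen")
log_message(writer, "Robot", "action", "greet user")

print(buffer.getvalue().splitlines()[0])  # time,owner,message_type,content
```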
_What information needs to be retained to enable the data to be read and
interpreted in the future?_
The documentation and metadata written in accordance with the aforementioned
specifications contain all the information to be retained to ensure the
readability of the IL files.
_How will you capture / create the metadata?_
IL files are automatically generated during an encounter between the
culturally competent robot and a person. As mentioned above, metadata related
to the messages (time, owner, message type) will be automatically created by
the universAAL communication middleware and stored in the IL files at runtime
by the logging tool. Metadata related to an IL file (e.g. time and location of
the recorded interaction) will be manually added at a later stage.
_Can any of this information be created automatically?_
Metadata related to the messages are automatically created by the universAAL
communication middleware. Some of the metadata related to an IL file (e.g.
starting time and location of the recorded interaction) can also be generated
automatically by the logging tool.
Documentation cannot be generated automatically.
_What metadata standards will you use and why?_
The rationale for choosing the metadata related to the messages, stored in the
fields of each record together with the message content, draws inspiration
from popular standards for log files (e.g. the ROS Bag file format for the log
files of ROS applications defined in _http://wiki.ros.org/Bags/Format/2.0_ ,
or the Extended Log file Format for the log files of web servers defined in
_http://www.w3.org/TR/WD-logfile.html_ ). The notation and naming conventions
will adhere to those of the universAAL platform.
_Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?_
Github ( _https://github.com/_ ) is among the largest and most popular
repository hosting services. Github repositories can be given a DOI and
released using the data archiving tool Zenodo ( _https://zenodo.org/_ ),
which also ensures that all metadata required for the identification of the
repository are filled before its public release. We will consider this option
for the publication of the IL dataset.
_Outline naming conventions used_
The metadata associated with the dataset itself will adhere to the conventions
of the chosen archiving tool (e.g., Zenodo). Metadata associated with files
and records will follow the naming convention of the universAAL platform.
_Outline the approach towards search keyword_
Archiving services such as Zenodo allow for specifying a list of search
keywords to associate with the dataset, as part of the publication process.
_Outline the approach for clear versioning_
Github (as most repository hosting services) provides a large number of tools
for version control, in particular allowing for making different releases of a
repository. By default, Zenodo takes an archive of the associated Github
repository every time a new release is created.
_Specify standards for metadata creation (if any). If there are no standards
in your discipline describe what metadata will be created and how_
The metadata requested by Zenodo for the publication of an archive comply with
several standard metadata formats such as MARCXML, Dublin Core and DataCite
Metadata Schema ( _http://about.zenodo.org/policies/_ ).
**Dataset 3: End-Users Responses (EUR)**
_Outline the discoverability of data (metadata provision)_
_What metadata, documentation or other supporting material should accompany
the data for it to be interpreted correctly?_
The EUR data set requires metadata describing the real-world meaning of
values, variables and files, as well as technical information such as variable
types and formats. Qualitative metadata pertaining to the file type, data
source, the geographic and temporal coverage, source descriptions,
annotations, coding structures and explanations will be documented.
_What information needs to be retained to enable the data to be read and
interpreted in the future?_
The metadata written in accordance with the aforementioned specifications
contain all the information to be retained to ensure the readability of the
EUR data.
_How will you capture / create the metadata?_
SPSS stores all metadata associated with a dataset in a Dictionary, and
provides tools for its creation, validation and export in easily readable
formats. The SPSS Dictionary will be created together with the insertion of
the quantitative EUR data in SPSS. QSR NVivo 11 also enables data management
including providing tools for documentation files, classification and
attributes, and enables exporting into a wide range of formats appropriate for
archiving.
_Can any of this information be created automatically?_
SPSS metadata to be stored in the Dictionary will be created manually. For
NVivo, a log of information about the data sources, editing done, coding and
analysis carried out is created automatically. Other information will be
created manually.
_What metadata standards will you use and why?_
Metadata and documentation standards will adhere to those described by the UK
Data
Archive ( _http://www.data-archive.ac.uk/_ ), which is the UK’s largest
collection of digital research data in the social sciences and humanities and
is connected to a network of data archives across the world.
_Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?_
The UK Data Archive supports the use of persistent identifiers across its work
so that data, metadata and other outputs can be reliably referenced and
linked, in particular promoting the association of data sets with the ORCID of
the contributors and with DataCite DOIs for persistent data citation. Other
archives, such as Zenodo ( _https://zenodo.org/_ ), allow for associating a
DOI to the data sets. We will consider these options for the publication of
the EUR data set.
_Outline naming conventions used_
The EUR data set will adhere, as far as possible, to the conventions of the
chosen archiving service (e.g., UK Data Archive, Zenodo) and of the tools it
refers to.
_Outline the approach towards search keyword_
Both aforementioned archiving services make sure that search keywords and
metadata required for finding the data set with their search tools are
provided as part of the publication process.
_Outline the approach for clear versioning_
A number of solutions for clear versioning of the EUR data set are available.
Most metadata standards (e.g. Dublin Core) allow for specifying the version
and other related information inside the data set. Moreover, a number of
repository hosting services (e.g. Github) provide a large number of tools for
version control, in particular allowing for making different releases of a
repository.
_Specify standards for metadata creation (if any). If there are no standards
in your discipline describe what metadata will be created and how_
Metadata creation of the quantitative and qualitative data will adhere to the
standards described by the UK Data Archive. It is important to mention that
the UK Data Archive is a member of the Data Documentation Initiative, whose
aims include the development of robust metadata standards for social science
data.
**2.2 Making data openly accessible:**
**Specify which data will be made openly available? If some data is kept
closed provide rationale for doing so**
**Specify how the data will be made available**
**Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?**
**Specify where the data and associated metadata, documentation and code are
deposited**
**Specify how access will be provided in case there are any restrictions**
**Dataset 1: Cultural Knowledge Base (CKB)**
_Is the data made openly available (YES/NO/PARTIALLY)_
YES
_Specify how and where (in which repository) the data will be made available_
For the storage of the CKB ontology we will consider different solutions,
evaluating their performance in terms of data persistence, security and
accessibility. To allow other researchers to easily find the ontology, we will
apply for its insertion in popular ontology libraries and search engines, such
as the Protégé Ontology Library, LOV and Google.
_Specify what methods or software tools are needed to access the data?_
_Name the required methods or software tools_
A list of existing tools for accessing, visualizing and managing ontologies
such as the
CKB ontology is available at:
_https://en.wikipedia.org/wiki/Ontology_(information_science)#Editor_
_Is the software pre-existing or developed as an output of CARESSES_
All the tools listed above are pre-existing and independent from CARESSES.
_Is documentation about the software available to access the data included?_
Most of the tools listed above provide documentation and support (see for
example
Protégé: _http://protege.stanford.edu/_ )
_Is it possible to include the relevant software (e.g. in open source code)?_
Many of the tools listed above (e.g. Protégé) are open source.
**Dataset 2: Interaction Logs (IL)**
_Is the data made openly available (YES/NO/PARTIALLY)_
YES
_Specify how and where (in which repository) the data will be made available_
We are considering publishing the IL dataset in a public Github repository and
using Zenodo to assign it a DOI.
_Specify what methods or software tools are needed to access the data?_
_Name the required methods or software tools_
Files in the CSV format can be accessed by a wide variety of software
applications, including proprietary (e.g., Microsoft Excel, MATLAB) and open
source applications (e.g. Open Office Calc, Octave, R).
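For example, a few lines of Python (standard library only) suffice to load and filter such records; the field names and values below are hypothetical, not the actual IL schema:

```python
import csv
import io

# A tiny IL-style CSV fragment (field names and values are illustrative only).
sample = """time,owner,message_type,content
2018-06-01T10:00:00,Robot,action,greet user
2018-06-01T10:00:05,User,speech,good morning
"""

# csv.DictReader maps every record to a dict keyed by the header row,
# so consumers can filter on the metadata fields (owner, message_type).
records = list(csv.DictReader(io.StringIO(sample)))
robot_actions = [r for r in records if r["owner"] == "Robot"]
print(len(records), len(robot_actions))  # 2 1
```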
_Is the software pre-existing or developed as an output of CARESSES_
All the applications mentioned above are pre-existing and independent from
CARESSES.
Depending on the needs of the project, we may develop software applications
(e.g. ROS packages) or scripts for existing software (e.g. MATLAB or R
scripts) specifically for managing the data within the IL files. In that case,
we will consider publishing such code together with the dataset.
_Is documentation about the software available to access the data included?_
Most of the applications mentioned above come with rich documentation and
support functionalities (see for example MATLAB:
_https://uk.mathworks.com/support/?s_tid=gn_supp_ ).
_Is it possible to include the relevant software (e.g. in open source code)?_
The applications mentioned above which are not open source (e.g. Microsoft
Excel, MATLAB) provide free trials. Moreover, both Microsoft and Mathworks
have special licensing contracts for students and academic institutions.
**Dataset 3: End-Users Responses (EUR)**
_Is the data made openly available (YES/NO/PARTIALLY)_
PARTIALLY
_If some data is kept closed provide rationale for doing so_
We will withhold screening data (pertaining to cognitive competence and
aggression) since these data are used purely to determine participants’
eligibility rather than for data analysis.
_With whom will you share the data, and under what conditions?_
This data will not be shared unless we consider the data to be of considerable
health importance to the research participant. In this case we shall be guided
by our incidental findings policy and may in some cases disclose this data to
the research participant.
_Specify how and where (in which repository) the data will be made available_
We are considering publishing the EUR dataset in the UK Data Archive or
Zenodo.
_Specify what methods or software tools are needed to access the data?_
_Name the required methods or software tools_
Quantitative data in SPSS format (.sav) can be accessed with IBM SPSS (
_https://www.ibm.com/analytics/us/en/technology/spss/_ ) and open source data
analysis software such as R ( _https://www.r-project.org/_ ). Both
applications allow for exporting the data set in a number of other formats,
including the Microsoft Excel formats (.xls, .xlsx) and CSV.
Qualitative data in Microsoft Word format (.doc, .docx) can be accessed with
Microsoft
Word and open source text editing software such as Apache OpenOffice (
_https://www.openoffice.org/_ ).
_Is the software pre-existing or developed as an output of CARESSES_
All the applications mentioned above are pre-existing and independent from
CARESSES.
_Is documentation about the software available to access the data included?_
Most of the applications mentioned above come with rich documentation and
support functionalities (see for example R
_https://cran.r-project.org/manuals.html_ ).
_Is it possible to include the relevant software (e.g. in open source code)?_
The applications mentioned above which are not open source (e.g. IBM SPSS,
Microsoft Word) provide free trials.
**2.3 Making data interoperable:**
**Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.**
**Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies?**
**Dataset 1: Cultural Knowledge Base (CKB)**
_Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability_
Ontologies themselves are a tool for interoperability. Within CARESSES, the
CKB ontology constitutes the vocabulary for the culturally competent assistive
robot to be developed in the course of the project and facilitates the use and
interaction among all software tools developed within the project. In its
construction, whenever possible, we will adopt terms and definitions which are
standard in the field or culture they refer to.
_Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies?_
Wherever possible, we will refer to standard vocabularies.
**Dataset 2: Interaction Logs (IL)**
_Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability_
Zenodo complies with several standard metadata formats such as MARCXML, Dublin
Core and DataCite Metadata Schema ( _http://about.zenodo.org/policies/_ ).
Moreover, the CSV format is among the most readable formats for information
storage, supported by the vast majority of software for numerical and data
analysis.
_Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies?_
We will provide mapping to the CKB ontology, as well as other existing
vocabularies, whenever possible.
**Dataset 3: End-Users Responses (EUR)**
_Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability_
Zenodo complies with several standard metadata formats such as MARCXML, Dublin
Core and DataCite Metadata Schema ( _http://about.zenodo.org/policies/_ ).
Quantitative data in the EUR dataset will comply with the data vocabulary of
the tools they refer to, thus ensuring exchange and re-use by any researcher
making use of the same or compatible tools. We will try to adhere, as far as
our application permits it, to the European Language Social Science Thesaurus
(ELSST _https://elsst.ukdataservice.ac.uk/elsst-guide/elsst-structure_ ).
_Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies?_
We will provide mapping to the ELSST thesaurus, the CKB ontology, as well as
other existing vocabularies, whenever possible.
**2.4 Increase data re-use (through clarifying licenses):**
**Specify how the data will be licensed to permit the widest reuse possible
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed**
**Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why**
**Describe data quality assurance processes**
**Specify the length of time for which the data will remain re-usable**
**Dataset 1: Cultural Knowledge Base (CKB)**
_Specify how the data will be licensed to permit the widest reuse possible_
_Who owns the data?_
The matter of the ownership of data produced within the project is discussed
in the Coordination Agreement among partners. This matter will be handled
under the supervision of the Exploitation, Dissemination and IPR board.
_How will the data be licensed for reuse?_
Licensing terms will be defined by the CARESSES partners and in accordance
with the restrictions, if any, of any third-party data used in the CKB
ontology.
_If you are using third-party data, how do the permissions you have been
granted affect licensing?_
A number of ontologies (such as the Time Ontology from W3C) grant “permission
to copy, and distribute their contents in any medium for any purpose and
without fee or royalty”. We will take the licensing terms of any third-party
ontology used in the CKB ontology into account when defining the licensing
terms of the CKB ontology itself.
_Will data sharing be postponed / restricted e.g. to seek patents?_
Probably not. However, it will most likely be postponed to comply with
publication regulations. This matter will be handled under the supervision of
the Exploitation,
Dissemination and IPR board.
_Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed (no later than
publication of the main findings and should be in-line with established best
practice in the field)_
According to the CARESSES Work plan, the CKB ontology will be ready for
publication approx. from month 25 (third year of the project). The CKB
ontology will be officially publicly released upon the publication of related
articles.
_Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project?_
_Who may be interested in using your data?_
As previously stated, we envision the CKB ontology to be especially useful
for: 1) researchers in the field of Robotics, who may use it as a guideline
and reference for the development of robots able to interact with people while
taking cultural information into account; 2) companies producing robots and
other devices for personal assistance, who may use it as a source of validated
information for a number of cultures (specifically, the ones to be considered
during the testing phase of CARESSES), allowing for culture-aware human-robot
interaction; 3) researchers and practitioners in the field of Transcultural
Nursing, who may use it as a validated and publicly available ontology for the
description of concepts related to cultural competence and the detailing of a
number of cultures.
_What are the further intended or foreseeable research uses for the data?_
See above
_If the re-use of some data is restricted, explain why_
At the moment, we do not foresee any restriction on the re-use of the CKB
ontology.
_Describe data quality assurance processes_
“Data quality” can be defined in terms of syntactic, semantic and pragmatic
quality (see
ISO 8000-8:2015).
A number of ontology editors, such as Protégé, provide tools for automatically
detecting inconsistencies in the ontology and checking its validity. Moreover,
there exist publicly available tools, such as Oops! (
_http://oops.linkeddata.es/_ ) which automatically check for anomalies, errors
and lack of metadata for documentation. As an example, the full catalogue of
pitfalls detected by Oops! is available at
_http://oops.linkeddata.es/catalogue.jsp_ . The pragmatic quality of the CKB
ontology (i.e., whether it fits for its intended use) will be checked during
its creation by the experts involved in the CARESSES project, and
experimentally evaluated in the testing phase of the project.
_Specify the length of time for which the data will remain re-usable_
Forever.
**Dataset 2: Interaction Logs (IL)**
_Specify how the data will be licensed to permit the widest reuse possible_
_Who owns the data?_
The matter of the ownership of data produced within the project is discussed
in the Coordination Agreement among partners. This matter will be handled
under the supervision of the Exploitation, Dissemination and IPR board.
_How will the data be licensed for reuse?_
Licensing terms will be defined by the CARESSES partners.
_If you are using third-party data, how do the permissions you have been
granted affect licensing?_
We do not foresee the use of any third-party data in the IL dataset.
_Will data sharing be postponed / restricted e.g. to seek patents?_
Probably not. However, it will most likely be postponed to comply with
publication regulations. This matter will be handled under the supervision of
the Exploitation,
Dissemination and IPR board.
_Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed (no later than
publication of the main findings and should be in-line with established best
practice in the field)_
According to the CARESSES Work plan, Interaction Logs are collected in two
separate stages of the project: first in the course of Task 5.6 (m23 – m27)
and then in the course of Task 6.3 (m28-m33) and Task 6.4 (m28-m33). As such,
the first portion of the IL data set will be ready for publication approx.
from month 27, while the second portion of the IL data set will be ready for
publication approx. from month 37 (end of the project), and it will be
officially publicly released upon the publication of related articles.
_Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project?_
_Who may be interested in using your data?_
As previously stated, we envision the IL data set to be useful for academic
and industrial researchers aiming at defining guidelines, best practices and
standards in the field of Human-Robot Interaction (e.g., identifying which
robot actions are frequently requested by people, identifying recurrent
sequences of robot actions – human actions that a robot could rely on to
exhibit predictive behaviours). Portions of the dataset may also be used by
roboticists for the development and testing of specific robotic applications
(e.g., the IL dataset can be used to train and test algorithms for learning
the habits/routines of a person from the analysis of recurring events).
_What are the further intended or foreseeable research uses for the data?_
See above
_If the re-use of some data is restricted, explain why_
At the moment, we do not foresee any restriction on the re-use of the IL data
set.
_Describe data quality assurance processes_
“Data quality” can be defined in terms of syntactic, semantic and pragmatic
quality (see ISO 8000-8:2015) or, in other words, in terms of completeness,
validity, accuracy, consistency.
Data completeness indicates whether all the data necessary to meet the current
(and possibly future) information demand are available. By design, the IL data
set fulfills the requirements of the culture-aware robot developed in the
CARESSES project, thus ensuring that it contains sufficient information for an
assistive robot to have meaningful interactions with a person. Data validity
will be assessed in WP2, WP3 and WP4, as part of the development process of
the software modules producing the messages to be stored in the IL data set.
Data accuracy and consistency refer to whether the values stored are correct
or not. Since the data to be stored in the IL data set are used by the
culturally competent robot to tune its behavior towards the assisted person,
one of the goals of the project is to maximize their reliability. To allow for
a quantitative assessment of the accuracy of the data in the IL data set, we
will consider providing, together with the portion of the IL data set acquired
in Task 5.6 in lab conditions, supporting material providing the ground truth
of the stored data.
_Specify the length of time for which the data will remain re-usable_
Forever.
**Dataset 3: End-Users Responses (EUR)**
_Specify how the data will be licensed to permit the widest reuse possible_
_Who owns the data?_
The matter of the ownership of data produced within the project is discussed
in the Coordination Agreement among partners. This matter will be handled
under the supervision of the Exploitation, Dissemination and IPR board.
_How will the data be licensed for reuse?_
Licensing terms will be defined by the CARESSES partners.
_If you are using third-party data, how do the permissions you have been
granted affect licensing?_
We do not foresee the use of any third-party data in the EUR dataset.
_Will data sharing be postponed / restricted e.g. to seek patents?_
Probably not. However, it will most likely be postponed to comply with
publication regulations. This matter will be handled under the supervision of
the Exploitation, Dissemination and IPR board.
_Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed (no later than
publication of the main findings and should be in-line with established best
practice in the field)_
According to the CARESSES Work plan, End-Users Responses are defined,
structured and collected in the course of Tasks 6.1, 6.2 and 6.3, which span
months 19 to 33 of the project. Quantitative data are then post-processed and
analysed in the course of Tasks 7.1 and 7.3, which span months 27 to 37 of the
project, while qualitative data are post-processed and analysed in the course
of Tasks 7.2 and 7.3, which span months 29 to 37. The EUR data set will
therefore be ready for publication approx. from month 37 (end of the project)
and it will be officially publicly released upon the publication of related
articles.
_Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project?_
_Who may be interested in using your data?_
We envisage that the EUR data will be useful for academics interested in
conducting secondary analysis. This could include academics interested in the
acceptability and clinical or cost-effectiveness impact of culturally aware
robots, but also those interested in our baseline characteristics and outcome
measurement data.
_What are the further intended or foreseeable research uses for the data?_
See above
_If the re-use of some data is restricted, explain why_
At the moment, we do not foresee any restriction on the re-use of the EUR data
set.
_Describe data quality assurance processes_
“Data quality” can be defined in terms of syntactic, semantic and pragmatic
quality (see ISO 8000-8:2015) or, in other words, in terms of completeness,
validity, accuracy, consistency.
We will strive for data completeness by constructing methodological protocols
and tools that are user-friendly and sensitive. For the quantitative data, to
increase the likelihood of validity and accuracy, we will employ existing
widely used, previously validated data collection instruments such as the
SF-36 and ZBI. For all of the quantitative tools we employ, we shall conduct a
series of Cronbach’s Alpha coefficient tests in SPSS. This will also help with
establishing internal consistency. Further, we shall conduct Cohen’s kappa
tests to establish the degree of inter-rater consistency between the
researchers collecting data. To help boost the likelihood of consistency being
achieved, the research team will be trained to follow the same strict
protocols throughout. For qualitative data, to help boost the trustworthiness
of our analysis we shall engage in respondent validation exercises with our
participants. Our interview schedules and data collection processes will be
sensitive and planned so that they are likely to be complete and effective.
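The internal-consistency check mentioned above can be illustrated as follows. This is a sketch of the standard Cronbach’s Alpha computation only, for readers unfamiliar with the statistic; the project itself will run these tests in SPSS, and the data below are invented:

```python
# Illustrative computation of Cronbach's Alpha (internal consistency) for a
# set of questionnaire items; not part of the CARESSES analysis pipeline.
def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item,
    all of equal length. Returns alpha = k/(k-1) * (1 - sum(item variances)
    / variance of respondents' total scores)."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(it) for it in items)
    totals = [sum(it[j] for it in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))


# Two perfectly correlated items yield alpha = 1.0 (maximal consistency).
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

Values close to 1 indicate that the items of an instrument measure the same underlying construct; widely used instruments such as the SF-36 are expected to score well on this check.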
_Specify the length of time for which the data will remain re-usable_
Forever.
**3\. Allocation of resources**
**Explain the allocation of resources, addressing the following issues:**
**Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs**
**Clearly identify responsibilities for data management in your project**
**Describe costs and potential value of long term preservation**
**Dataset 1: Cultural Knowledge Base (CKB)**
_Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs (costs related to open access to research data are eligible
as part of the Horizon_
_2020 grant)_
The costs of making the CKB ontology FAIR are included in Task 2.6.
_Describe costs and potential value of long term preservation_
Once the CKB ontology is publicly available, the only foreseeable cost for its
preservation is the cost of the repository hosting service where it is
located.
**Dataset 2: Interaction Logs (IL)**
_Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs (costs related to open access to research data are eligible
as part of the Horizon_
_2020 grant)_
The costs of making the IL data set FAIR are included in Task 5.6 (for the
first portion of the data set) and in Task 7.3 (for the second portion of the
data set). The cost and effort of making the second portion of the data set
FAIR is expected to be significantly lower than that of the first portion of
the data set.
_Describe costs and potential value of long term preservation_
Once the IL data set is publicly available, the only foreseeable cost for its
preservation is the (eventual) cost of the repository hosting service where it
is located.
**Dataset 3: End-Users Responses (EUR)**
_Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs (costs related to open access to research data are eligible
as part of the Horizon 2020 grant)_
SPSS and NVivo 11 licensing costs may be applicable. Otherwise we do not
envisage any additional costs associated with making our data FAIR.
_Describe costs and potential value of long term preservation_
Once the data set is publicly available, the only foreseeable cost for its
preservation is the (eventual) cost of the repository hosting service where it
is located.
**4\. Data security**
**Address data recovery as well as secure storage and transfer of sensitive
data Dataset 1: Cultural Knowledge Base (CKB)**
_Specify if the data should be safely stored in certified repositories for
long term preservation and curation._
We are considering applying for the inclusion of the CKB ontology in well
known collections of Ontologies (we will apply for its insertion in popular
ontology libraries and search engines, such as the Protégé Ontology Library,
LOV and Google) to make it publicly available to a large audience. We will
host the CKB ontology on a repository which provides adequate guarantees in
terms of data persistence, security and accessibility.
_Is your data sensitive (e.g. detailed personal data, politically sensitive
information or trade secrets)? (YES/NO)_
NO
**Dataset 2: Interaction logs (IL)**
_Specify if the data should be safely stored in certified repositories for
long term preservation and curation._
We are considering archiving the IL data set with the Zenodo data archiving
tool to ensure long term preservation and curation.
_Is your data sensitive (e.g. detailed personal data, politically sensitive
information or trade secrets)? (YES/NO)_
NO
**Dataset 3: End-Users Responses (EUR)**
_Specify if the data should be safely stored in certified repositories for
long term preservation and curation._
We are considering archiving the EUR data set with the Zenodo data archiving
tool to ensure long term preservation and curation.
_Is your data sensitive (e.g. detailed personal data, politically sensitive
information or trade secrets)? (YES/NO)_
NO
**5\. Ethical aspects**
**To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former**
**Dataset 1: Cultural Knowledge Base (CKB)**
_Are the data acquired by carrying out research involving human participants?
(YES/NO)_
YES
_If the answer is YES,_
_Specify the procedure established to gain consent for data preservation and
sharing_
Human participants will be involved in the collection and validation of
culture-specific information. Participants’ responses will be merged and
generalized, and no person-specific detail will be stored in the CKB ontology
(which, by design, captures cultural information at a national/group level).
Therefore, the data do not fall under the General Data Protection Regulation (EU)
2016/679. All of the data will be collected in an ethically appropriate manner
and with Ethics Committee approval.
_Specify how will sensitive data be handled to ensure it is stored and
transferred securely_
No sensitive data will be stored in the CKB ontology.
_Specify how will you protect the identity of participants, e.g. via
anonymisation or using managed access procedures_
No personal data about the participants will be stored in the CKB ontology.
**Dataset 2: Interaction logs (IL)**
_Are the data acquired by carrying out research involving human participants?
(YES/NO)_
YES
_If the answer is YES,_
_Specify the procedure established to gain consent for data preservation and
sharing_
Human participants will be involved in the collection of recordings of
interactions between the culturally competent robot and a person.
By design, the IL data set does not contain any person-specific detail, since
it only captures events and status information which are of relevance for the
robot to plan and tune its behavior. Moreover, messages refer to participants
only by an ID which ensures the protection of their identity both during the
experiments and in the public IL data set. Therefore, the data do not fall under the
General Data Protection Regulation (EU) 2016/679. All of the data will be
collected in an ethically appropriate manner and with Ethics Committee
approval.
_Specify how will sensitive data be handled to ensure it is stored and
transferred securely_
No sensitive data will be stored in the IL data set.
_Specify how will you protect the identity of participants, e.g. via
anonymisation or using managed access procedures_
No personal data about the participants will be stored in the IL data set.
Moreover, participants will be exclusively identified by an ID.
**Dataset 3: End-Users Responses (EUR)**
_Are the data acquired by carrying out research involving human participants?_
YES
_If the answer is YES,_
_Specify the procedure established to gain consent for data preservation and
sharing_
Ethical approval from appropriate bodies will be defined before the
experiments and participants will be asked to provide informed consent for
their participation in the study. This will include consent for data
preservation and sharing.
_Specify how will sensitive data be handled to ensure it is stored and
transferred securely_
The data will be collected following informed consent and will be
pseudonymized, in compliance with the General Data Protection Regulation (EU)
2016/679.
_Specify how will you protect the identity of participants, e.g. via
anonymisation or using managed access procedures_
As above (via pseudonymization)
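As a rough illustration of the pseudonymization step described above, participant identifiers can be replaced by stable, non-reversible IDs before storage, with the re-identification key kept separately under access control. This is a hypothetical sketch, not the actual CARESSES tooling; the field names and the salting scheme are assumptions:

```python
# Hypothetical pseudonymization sketch: replace a participant's name with a
# stable ID derived from a secret salt, leaving all other fields intact.
# The salt (the "key" to the pseudonyms) would be stored separately and
# access-controlled, in line with GDPR (EU) 2016/679.
import hashlib


def pseudonymize(record, secret_salt):
    """Return a copy of `record` with the participant name replaced by an ID."""
    name = record["participant"]
    pid = hashlib.sha256((secret_salt + name).encode("utf-8")).hexdigest()[:12]
    out = dict(record)
    out["participant"] = pid
    return out
```

Because the ID is a deterministic function of name and salt, the same participant always maps to the same pseudonym, so longitudinal responses remain linkable without exposing identity.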
**6\. Other**
**Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)**
We do not consider other procedures for data management.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0630_INJECT_732278.md
# Executive Summary
INJECT is a new Innovation Action that supports technology transfer to the
creative industries; under the call for “action primarily consisting of
activities directly aiming at producing plans and arrangements or designs for
new, altered or improved products, processes or services” (H2020 Innovation
Action). To achieve its aim INJECT will test and establish an INJECT spin-off
business in the journalism market through its ecosystem developments. While
user testing and testing of the tool in operational environments will aid in
the development and technical improvements of the INJECT technology.
The INJECT tool is new to journalism and to European markets; the data
management plan covers this testing and validation of both technical and
economic performance in real life operating conditions provided by the
journalism market domain. This project therefore has limited scientific
research activities.
This document aims to present the data management plan for INJECT project, the
considerations, actions and activities planned with an aim to deliver on the
objectives of the project. The deliverable introduces the data management plan
as a living document, its purpose and intended use. The document discusses the
INJECT data types and applies the FAIR data management process to ensure that,
wherever possible, the research data is findable, accessible, interoperable
and reusable (FAIR), and to ensure it is soundly managed.
# Purpose of the Data Management Plan
This deliverable of the INJECT project is prepared under WP5 and the Task 5.1
_INJECT Data Management Plan (1st version)_. In this task we initiate
discussion of the data management life cycle and of the data processed and/or
generated by the INJECT project, and of how to make the data findable,
accessible, interoperable and reusable (FAIR). This data management plan is a
living document, a dynamic document that will be edited and updated during the
project, with a second
version to be delivered in month 18.
# INJECT Data Types
The Data Management Plan asks the following questions and we address those
throughout the document, noting where actions are underway and further
considerations that will be made as the project develops. As previously noted,
the INJECT project is an H2020 Innovation Action, and hence is not intended to
generate scientific data per se, therefore the data management plan considers
the activities undertaken within the project.
**2.1 What is the purpose of the data collection/generation and its relation
to the objectives of the project?**
The three stated INJECT project objectives are:
Obj1: Extend and aggregate the new digital services and tools to increase the
productivity and creativity of journalists in different news environments
Obj2: Integrate and evaluate the new digital services and environments in CMS
environments
Obj3: Diffuse the new digital services and support offerings in news and
journalism markets
Data collection and generation related to each is to enable the co-creation
then effective evaluation of the INJECT tool, and scientific reporting of
research and innovation that will deliver each of these objectives.
**2.1.1 What types and formats of data will the project generate/collect?**
The project will generate and collect the following types and formats of data:
− Co-created user requirements on the INJECT tool and services: format is
structured text requirements;
− Parsed and semantic-tagged news stories from online digital news sources
(including partner news archives) as part of the INJECT toolset: the raw news
article data is stored in a PostgreSQL database; the processed/parsed results
are stored in an external Elasticsearch cluster for later searching;
− Semantic-tagged news stories used to inform design of INJECT creative search
strategies:
format is structured documents of news stories, associated word counts and
other observed patterns, by story type;
− Usability evaluation reports of INJECT tool by journalists: format is
structured written reports;
− Semi-structured interview data about INJECT tool use by journalists: format
is documented, content-tagged notes from semi-structured interviews;
− Focus group reports about INJECT tool use by journalists: format is
structured reports of focus group findings;
− INJECT tool activity log data, recording meaningful activities of tool users
over selected time periods: format is structured spreadsheet;
− Corpus of news stories generated by journalists using the INJECT tool:
format is structured database of news stories and related data attributes;
− Quantitative creativity assessments of news stories generated by journalists
with and without use of the INJECT tool: format will be structured
spreadsheets;
− Economic and contract data about each launched INJECT ecosystem: format is
structured spreadsheet.
**2.1.2 Will you re-use any existing data and how?**
The following data is reused from existing news sources:
− Parsed and semantic-tagged news stories from online digital news sources
(including partner news archives) as part of the INJECT toolset: the raw news
article data is stored in a PostgreSQL database; the processed/parsed results
are stored in an external Elasticsearch cluster for later searching;
− Semantic-tagged news stories used to inform design of INJECT creative search
strategies:
format is structured documents of news stories, associated word counts and
other observed patterns, by story type;
− Corpus of news stories generated by journalists using the INJECT tool:
format is structured database of news stories and related data attributes.
**2.1.3 What is the origin of the data?**
The reused data originates from selected news sources:
_Figure 1: News Sources_
<table>
<tr>
<th>
**Source**
</th>
<th>
**Country**
</th> </tr>
<tr>
<td>
BBC
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Quartz
</td>
<td>
UK
</td> </tr>
<tr>
<td>
The Guardian
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Telegraph
</td>
<td>
UK
</td> </tr>
<tr>
<td>
FT
</td>
<td>
UK
</td> </tr> </table>
<table>
<tr>
<th>
The Times
</th>
<th>
UK
</th> </tr>
<tr>
<td>
Sky News
</td>
<td>
UK
</td> </tr>
<tr>
<td>
The Independent
</td>
<td>
UK
</td> </tr>
<tr>
<td>
The Huffington Post
</td>
<td>
UK
</td> </tr>
<tr>
<td>
The Huffington Post
</td>
<td>
US
</td> </tr>
<tr>
<td>
Reuters News
</td>
<td>
UK
</td> </tr>
<tr>
<td>
The Economist
</td>
<td>
UK
</td> </tr>
<tr>
<td>
The New York times
</td>
<td>
US
</td> </tr>
<tr>
<td>
Daily Mail
</td>
<td>
UK
</td> </tr>
<tr>
<td>
The Wall Street Journal
</td>
<td>
US
</td> </tr>
<tr>
<td>
The Washington Post
</td>
<td>
US
</td> </tr>
<tr>
<td>
The Metro
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Herald Scotland
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Bloomberg
</td>
<td>
US
</td> </tr>
<tr>
<td>
The Scotsman
</td>
<td>
UK
</td> </tr>
<tr>
<td>
The Irish Times
</td>
<td>
Ireland
</td> </tr>
<tr>
<td>
Irish Independent
</td>
<td>
Ireland
</td> </tr>
<tr>
<td>
New Statesman
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Newsweek
</td>
<td>
US
</td> </tr>
<tr>
<td>
The Daily Beast
</td>
<td>
US
</td> </tr>
<tr>
<td>
Times Education Supplement
</td>
<td>
UK
</td> </tr>
<tr>
<td>
BBC Mundo
</td>
<td>
UK
</td> </tr>
<tr>
<td>
El Mundo
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
El Pais
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
Cinco Dias
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
CNN
</td>
<td>
US
</td> </tr>
<tr>
<td>
CNN Money
</td>
<td>
US
</td> </tr>
<tr>
<td>
London Evening Standard
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Birmingham Post
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Birmingham Mail
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Farming Life
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Belfast Telegraph
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Yorkshire Post
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Yorkshire Evening Post
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Manchester Evening News
</td>
<td>
UK
</td> </tr>
<tr>
<td>
South Wales Evening Post
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Irish Examiner
</td>
<td>
Ireland
</td> </tr>
<tr>
<td>
Herald Scotland
</td>
<td>
Scotland
</td> </tr>
<tr>
<td>
The Mirror
</td>
<td>
UK
</td> </tr>
<tr>
<td>
The Irish Sun
</td>
<td>
Ireland
</td> </tr>
<tr>
<td>
Irish Daily Star
</td>
<td>
Ireland
</td> </tr>
<tr>
<td>
The Sun
</td>
<td>
UK
</td> </tr> </table>
<table>
<tr>
<th>
Daily Star
</th>
<th>
UK
</th> </tr>
<tr>
<td>
Daily Record
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Daily Express
</td>
<td>
UK
</td> </tr>
<tr>
<td>
Los Angeles Times
</td>
<td>
US
</td> </tr>
<tr>
<td>
Chicago Tribune
</td>
<td>
US
</td> </tr>
<tr>
<td>
The Onion
</td>
<td>
US
</td> </tr>
<tr>
<td>
Forbes
</td>
<td>
US
</td> </tr>
<tr>
<td>
Fox News
</td>
<td>
US
</td> </tr>
<tr>
<td>
Herald Tribune [International NY Times]
</td>
<td>
US
</td> </tr>
<tr>
<td>
ABC News
</td>
<td>
US
</td> </tr>
<tr>
<td>
Buzzfeed
</td>
<td>
US
</td> </tr>
<tr>
<td>
Newsmax Media
</td>
<td>
US
</td> </tr>
<tr>
<td>
U.S. News and World Report
</td>
<td>
US
</td> </tr>
<tr>
<td>
The Globe and Mail
</td>
<td>
Canada
</td> </tr>
<tr>
<td>
Toronto Star
</td>
<td>
Canada
</td> </tr>
<tr>
<td>
New Zealand Herald
</td>
<td>
NZ
</td> </tr>
<tr>
<td>
Dominion Post
</td>
<td>
NZ
</td> </tr>
<tr>
<td>
The Sydney Morning Herald
</td>
<td>
Australia
</td> </tr>
<tr>
<td>
The Brisbane Times
</td>
<td>
Australia
</td> </tr>
<tr>
<td>
Herald Sun
</td>
<td>
Australia
</td> </tr>
<tr>
<td>
The Daily Telegraph (Australia)
</td>
<td>
Australia
</td> </tr>
<tr>
<td>
The Courier-Mail
</td>
<td>
Australia
</td> </tr>
<tr>
<td>
Bangkok Post
</td>
<td>
Thailand
</td> </tr>
<tr>
<td>
Jakarta Globe
</td>
<td>
Indonesia
</td> </tr>
<tr>
<td>
South China Morning Post
</td>
<td>
Hong Kong
</td> </tr>
<tr>
<td>
Der Spiegel International
</td>
<td>
Germany
</td> </tr>
<tr>
<td>
Ekathimerini
</td>
<td>
Greece
</td> </tr>
<tr>
<td>
Dutch News
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
Krakow Post
</td>
<td>
Poland
</td> </tr>
<tr>
<td>
Portugal Resident
</td>
<td>
Portugal
</td> </tr>
<tr>
<td>
The Local Newspaper
</td>
<td>
Sweden
</td> </tr>
<tr>
<td>
Connexion Newspaper
</td>
<td>
France
</td> </tr>
<tr>
<td>
Le Monde
</td>
<td>
France
</td> </tr>
<tr>
<td>
Le Monde Diplomatique
</td>
<td>
France
</td> </tr>
<tr>
<td>
EuroFora
</td>
<td>
EU
</td> </tr>
<tr>
<td>
Friedl News
</td>
<td>
Austria
</td> </tr>
<tr>
<td>
New Europe
</td>
<td>
Belgium
</td> </tr>
<tr>
<td>
Copenhagen Post
</td>
<td>
Denmark
</td> </tr>
<tr>
<td>
News of Iceland
</td>
<td>
Iceland
</td> </tr>
<tr>
<td>
Finnbay Newspaper
</td>
<td>
Finland
</td> </tr>
<tr>
<td>
North Cyprus News
</td>
<td>
Cyprus
</td> </tr>
<tr>
<td>
Prague Daily Monitor
</td>
<td>
Czech Republic
</td> </tr>
<tr>
<td>
Daily News Egypt
</td>
<td>
Egypt
</td> </tr>
<tr>
<td>
The Punch
</td>
<td>
Nigeria
</td> </tr>
<tr>
<td>
Business Day Live
</td>
<td>
South Africa
</td> </tr>
<tr>
<td>
Independent Newspaper
</td>
<td>
South Africa
</td> </tr>
<tr>
<td>
Mail and Guardian
</td>
<td>
South Africa
</td> </tr>
<tr>
<td>
Bhutan Observer
</td>
<td>
Bhutan
</td> </tr>
<tr>
<td>
Financial Express
</td>
<td>
India
</td> </tr>
<tr>
<td>
Business Standard
</td>
<td>
India
</td> </tr>
<tr>
<td>
Economic Times
</td>
<td>
India
</td> </tr>
<tr>
<td>
The Indian Express
</td>
<td>
India
</td> </tr>
<tr>
<td>
Live Mint [INDIA]
</td>
<td>
India
</td> </tr>
<tr>
<td>
Stavanger Aftenblad
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
Bergens Tidende
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
Dagbladet
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
Verdens Gang (VG)
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
Dagens Næringsliv
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
NRK
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
Aftenposten
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
Le Figaro
</td>
<td>
France
</td> </tr>
<tr>
<td>
BFMTV
</td>
<td>
France
</td> </tr>
<tr>
<td>
Le Parisien
</td>
<td>
France
</td> </tr>
<tr>
<td>
Le Express
</td>
<td>
France
</td> </tr>
<tr>
<td>
L'OBS
</td>
<td>
France
</td> </tr>
<tr>
<td>
Le Point
</td>
<td>
France
</td> </tr>
<tr>
<td>
Les Echos
</td>
<td>
France
</td> </tr>
<tr>
<td>
CBS
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
SCP
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
NU
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
Al Jazeera
</td>
<td>
Qatar
</td> </tr>
<tr>
<td>
FD
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
Adformatie
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
Eerste Kamer
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
Europees Parlement Nieuws
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
Daily Nation
</td>
<td>
Kenya
</td> </tr>
<tr>
<td>
Vanguard
</td>
<td>
Nigeria
</td> </tr>
<tr>
<td>
The Namibian
</td>
<td>
Namibia
</td> </tr>
<tr>
<td>
News24
</td>
<td>
South Africa
</td> </tr> </table>
As the first ecosystem for INJECT is established in Norway, there will be more
sources that may be added, such as internal archives, statistical bureau
information, and public data (maps, weather, traffic). It is further noted
that this list will expand with further ecosystem developments as more
newspapers and others from the journalistic domain become customers in the
future.
**2.1.4 Data generated during the project arises from:**
− A user-centred co-design process with journalists and news organisations;
− Knowledge acquisition and validation exercises with experienced journalists
for each of the 6 INJECT creative search strategies;
− Data- and information-led design of each of the 6 INJECT creative search
strategies;
− Formative and summative evaluations of INJECT tool use by journalists and
news organisations.
− Original content created by journalists and news organisations who choose to
contribute to public Explaain card content.
**2.1.5 What is the expected size of the data?**
The expected size of the data varies by type:
− Documents and reporting describing the user requirements, user activity logs
and qualitative results from formative and summative evaluations of the INJECT
tool, including the corpus of generated news stories, will be small –
deliverable reports with short data appendices;
− Parsed and semantic-tagged news stories from online digital news sources
(including partner news archives) as part of INJECT toolset will be large. The
current data set at m6 of the project is just over one million articles.
**2.1.6 To whom might it be useful ('data utility')?**
The INJECT project data might be useful to:
− News organisations and IT providers who will target the news industry, to
inform their development of more creative and productive news stories, to
support the competitiveness of the sector;
− News organisations and IT providers who wish to develop new forms of
business model through which to deliver digital technologies to the news and
journalism sectors;
− Journalism practitioners who will extrapolate from project results in order
to improve journalism practices across Europe.
− Academics and University departments and Institutes that could use the
INJECT data for research and teaching purposes.
# FAIR data
## Making data findable, including provisions for metadata
As stated previously INJECT is an Innovation Action that supports technology
transfer to the creative industries; it will test and establish an INJECT
spin-off business in the journalism market through its ecosystem developments.
The INJECT tool is new to journalism and to European markets and the intention
is that it becomes a sought after commercially viable product. This viability
will require the product to be sold and to earn revenue, from both its
subscribed use and innovations made through paid for adaptations. It will be
necessary that some types of information are sold specifically to customers
and therefore cannot be in the public domain.
The FAIR framework asks:
− Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?
− What naming conventions do you follow?
− Will search keywords be provided that optimize possibilities for re-use?
− Do you provide clear version numbers?
− What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and how.
The following table provides INJECT’s current answers to these questions.
_Figure 2: Making data findable._
<table>
<tr>
<th>
**Data type**
</th>
<th>
**Discoverable?**
</th>
<th>
**Reuse and metadata conventions**
</th> </tr>
<tr>
<td>
Co-created user
requirements on the INJECT tool and services
</td>
<td>
No
</td>
<td>
The single user requirements document will be extracted from project
deliverables, and posted in an acceptable form on the INJECT project website
</td> </tr>
<tr>
<td>
Parsed and semantic-tagged news stories from online digital news sources as
part of INJECT toolset
</td>
<td>
Yes
</td>
<td>
All news stories will be searchable through the INJECT tool and advanced
search algorithms, which have APIs. News stories are tagged with semantic
metadata about article nouns and verbs, and person, place, organisation and
activity entities. The meta-data types are currently bespoke standards, to
allow tool development to take place
</td> </tr>
<tr>
<td>
Semantic-tagged news stories used to inform design of INJECT creative search
strategies
</td>
<td>
No
</td>
<td>
The news stories will be collated in one or more online documents. Each news
article will be metatagged with data about the article’s length, presence and
number of keywords, and other observations
</td> </tr>
<tr>
<td>
Usability evaluation reports of INJECT tool by journalists
</td>
<td>
No
</td>
<td>
The usability evaluation report content will not be made available for reuse.
Ethical approval does not allow for reuse and sharing
</td> </tr>
<tr>
<td>
Semi-structured interview data about INJECT tool use by journalists
</td>
<td>
No
</td>
<td>
The semi-structured interview data will not be made available for reuse, as
ethical approval does not allow for its reuse and sharing
</td> </tr>
<tr>
<td>
Focus group reports about
INJECT tool use by journalists
</td>
<td>
No
</td>
<td>
The focus group data will not be made available for reuse, as ethical approval
does not allow for its reuse and sharing
</td> </tr>
<tr>
<td>
INJECT tool activity log data, recording meaningful activities of tool users
over selected time periods
</td>
<td>
Yes
</td>
<td>
Anonymous INJECT tool activity log data will be made available for sharing and
reuse, in line with ethical consent from journalist users. Clear log data
versions will be set up. Data will be structured and delivered in XLS sheets,
to allow analyst searching and management of the data
</td> </tr>
<tr>
<td>
Corpus of news stories generated by journalists using the INJECT tool
</td>
<td>
No
</td>
<td>
The corpus of news stories will not be made available directly for reuse by
the project, although published articles will be available, at their
publication source
</td> </tr>
<tr>
<td>
Quantitative creativity assessments of selected news stories generated by
journalists with and without use of the INJECT tool
</td>
<td>
Yes
</td>
<td>
Anonymous quantitative creativity assessments of selected news stories
generated with and without the INJECT tool will be made available for sharing
and reuse, in line with ethical consent from the expert assessors. Clear log
data versions will be set up. Data will be structured and delivered in XLS
sheets, to allow analyst searching and management of the data
</td> </tr>
<tr>
<td>
Economic and contract data about each launched INJECT ecosystem
</td>
<td>
No
</td>
<td>
The intention is that INJECT becomes a sought-after, commercially viable product to be sold and to earn revenue, both from subscribed use and from innovations made through paid-for adaptations. Some types of information will necessarily be sold specifically to customers and therefore cannot be in the public domain.
</td> </tr> </table>
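The bespoke semantic metadata described in the table above can be sketched as a simple record per news story. This is an illustrative assumption about the shape of such a record, not the INJECT schema itself; all field names and example values (including the person and place names) are hypothetical.

```python
# Illustrative sketch of a bespoke semantic-metadata record for one parsed
# news story, tagged with nouns, verbs and entity types as described above.
# Field names and example values are assumptions, not the INJECT schema.
story_metadata = {
    "story_id": "example-001",
    "nouns": ["council", "budget", "library"],
    "verbs": ["approve", "cut"],
    "entities": {
        "person": ["Jane Doe"],
        "place": ["Camden"],
        "organisation": ["Camden Council"],
        "activity": ["budget vote"],
    },
}

def has_entity(record: dict, kind: str, value: str) -> bool:
    """A search API over tagged stories could filter on any entity field."""
    return value in record["entities"].get(kind, [])

print(has_entity(story_metadata, "place", "Camden"))  # True
```

A record like this is what would make the stories searchable through the tool's APIs, since each tagged field can be queried independently.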
## Making data openly accessible
The FAIR framework asks:
− Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.
− How will the data be made accessible (e.g. by deposition in a repository)?
− What methods or software tools are needed to access the data?
− Is documentation about the software needed to access the data included?
− Is it possible to include the relevant software (e.g. in open source code)?
− Where will the data and associated metadata, documentation and code be
deposited?
Preference should be given to certified repositories that support open access
where possible.
− Have you explored appropriate arrangements with the identified repository?
− If there are restrictions on use, how will access be provided?
− Is there a need for a data access committee?
− Are there well-described conditions for access (i.e. a machine readable
license)?
− How will the identity of the person accessing the data be ascertained?
The following table provides INJECT’s current answers to these questions for
data that will be made available for sharing in the project.
_Figure 3: Openly accessible data._
<table>
<tr>
<th>
**Data type**
</th>
<th>
**Open?**
</th>
<th>
**How will data be accessed**
</th> </tr>
<tr>
<td>
Co-created user requirements on the INJECT tool and services
</td>
<td>
Yes
</td>
<td>
The single user requirements document will be posted on the project website,
with clear signposting and instructions for use
</td> </tr>
<tr>
<td>
Parsed and semantic-tagged news stories from online digital news sources as
part of INJECT toolset
</td>
<td>
No
</td>
<td>
The parsed and semantic-tagged news stories will not be made publicly
available. This data represents core commercial value of the INJECT tool, and
will be not shared, except through INJECT tools made available as part of the
commercial ecosystems
</td> </tr>
<tr>
<td>
Semantic-tagged news stories used to inform design of INJECT creative search
strategies
</td>
<td>
Yes
</td>
<td>
The news stories will be published in online documents that will be accessible via INJECT's restricted project website and associated storage space. The stories will be stored and edited using standard MS Office applications, which users will need in order to edit them. A validated user log-in to the restricted area of the INJECT project website will be needed to access and download the stories
</td> </tr>
<tr>
<td>
INJECT tool activity log data, recording meaningful activities of tool users
over selected time periods
</td>
<td>
Yes
</td>
<td>
The INJECT tool activity log data will be published in online documents that will be accessible via INJECT's restricted project website and associated storage space. The log data will be stored and edited using standard MS Office applications, which users will need in order to edit them. A validated user log-in to the restricted area of the INJECT project website will be needed to access and download the log data
</td> </tr>
<tr>
<td>
Quantitative creativity assessments of selected news stories generated by journalists with and without use of the INJECT tool
</td>
<td>
Yes
</td>
<td>
The collected quantitative assessments will be published in online documents that will also be accessible via INJECT's restricted project website and associated storage space. The assessments will be stored and edited using standard MS Office applications, which users will need in order to edit them. A validated user log-in to the restricted area of the INJECT project website will be needed to access and download the quantitative assessments
</td> </tr>
<tr>
<td>
Economic and contract data about each launched
INJECT ecosystem
</td>
<td>
No
</td>
<td>
The intention is that INJECT becomes a sought-after, commercially viable product with innovations made through paid-for adaptations. Some types of information will necessarily be sold or contracted to specific customers and therefore cannot be in the public domain.
</td> </tr> </table>
## Making data interoperable
The FAIR assessment asks:
− Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?
− What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?
− Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?
− In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?
In response, the INJECT project will not seek to make its data interoperable with other research data sets, or to enable data exchange and re-use between researchers, institutions, organisations and countries. There are several reasons for this decision:
− There are no established standards for data about digital tool use in journalism to interoperate with;
− There are no established standards for data about creativity support tool use in computer science to interoperate with, although a standardized survey metric for digital creativity support has been developed by US researchers, which the INJECT project will follow.
To compensate, the INJECT project will make its data available in the most open tools available, for example the MS Office suite, and will provide sufficient documentation to enable understanding and use by other researchers.
## Increase data re-use (through clarifying licences)
The FAIR framework asks:
− How will the data be licensed to permit the widest re-use possible?
− When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.
− Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the re-use of some data is
restricted, explain why.
− How long is it intended that the data remains re-usable?
− Are data quality assurance processes described?
Data re-use is a live consideration for INJECT as the tool is technically
developed and ecosystems established. City and the Innovation Manager are
leading an exploration into the registrations of one or more trademarks for
the project. Following current recommended practice, public documents, such as the website, have been marked with the copyright symbol (©), name and year of creation: Copyright © The INJECT Consortium, 2017. Data protection aspects of
the project will be coordinated across the relevant national data protection
authorities. The project is aware, and will work towards, upcoming European
data protection rules that will enter into force May 2018 and their impact
will be considered: _http://ec.europa.eu/justice/data-
protection/reform/index_en.htm_
In addition, an ongoing investigation into Intellectual Property rights is
underway. Advice has been sought through legal channels at City, University
of London. This includes consideration of how the INJECT tool operates in
framing and storing of article text and referencing plus the eco-systems’
payment and use of the tool. As the project develops this will be a key
consideration in work packages.
## Allocation of resources
The FAIR framework asks:
− What are the costs for making data FAIR in your project?
− How will these be covered? Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions).
− Who will be responsible for data management in your project?
− Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?
The FAIR framework has a minimum impact on INJECT. INJECT’s resources for
managing the FAIR framework are built into the project’s work plan. For
example:
− The development and management of the INJECT data types and sets is
incorporated into and budgeted for in the current work plan;
− Overall data management will be undertaken by the project manager role at
the project coordinator partner – Dr Amanda Brown.
However, the resources for long-term preservation (costs and potential value, who decides what data will be kept, and for how long) have yet to be finalised for the first version of the FAIR document.
## Data security
The FAIR framework asks:
− What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?
− Is the data safely stored in certified repositories for long-term
preservation and curation?
INJECT stores the processed/parsed results into an Amazon Elastic Search
Cluster. Amazon Elasticsearch Service routinely applies security patches and
keeps the Elasticsearch environment secure and up to date. INJECT controls
access to the Elasticsearch APIs using AWS Identity and Access Management
(IAM) policies, which ensure that INJECT components access the Amazon
Elasticsearch clusters securely. Moreover, the AWS API call history produced
by AWS CloudTrail enables security analysis, resource change tracking, and
compliance auditing.
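The IAM-based access control described above can be sketched as a resource-based policy document. The account ID, role name and domain name below are hypothetical placeholders for illustration, not values from the INJECT deployment; the `es:ESHttp*` actions are the standard IAM actions for Amazon Elasticsearch Service HTTP access.

```python
# Illustrative sketch of an AWS IAM policy document restricting access to an
# Elasticsearch domain. The account ID (123456789012), role name
# (inject-app-role) and domain name (inject-news) are hypothetical
# placeholders, not taken from the project.
import json

es_access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Only the named application role may call the domain's HTTP APIs.
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/inject-app-role"},
            "Action": ["es:ESHttpGet", "es:ESHttpPost"],
            "Resource": "arn:aws:es:eu-west-1:123456789012:domain/inject-news/*",
        }
    ],
}

print(json.dumps(es_access_policy, indent=2))
```

Attaching a policy of this shape to the domain ensures that only the project's own components, assuming the designated role, can query or write to the cluster.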
## Ethical aspects
The FAIR framework asks:
− Are there any ethical or legal issues that can have an impact on data
sharing? These can also be discussed in the context of the ethics review.
− Is informed consent for data sharing and long-term preservation included in
questionnaires dealing with personal data?
The INJECT consortium have not identified any specific ethics issues related
to the work plan, outcomes or dissemination. We do note that individual
partners will adhere to ethical rules.
At City, University of London the data management and compliance team are
undertaking a significant review of all policies and procedures on ethics and
data use. We continue to work to the current data protection policy with a
commitment to protecting and processing data with adherence to legislation and
other policy. “Sensitive data shall only be collected for certain specific
purposes, and shall be obtained with consent” will apply to all personal data
collected and any participants provided fair processing notices about the use
of that data. The project will adhere to the commitment to holding any data in
secure conditions, and will make every effort to safeguard against accidental
loss or corruption of data.
# Summary and Outlook
The subsequent INJECT deliverable D5.2 will revisit the data management plan,
the considerations, actions and activities undertaken alongside the delivery
on the objectives of the project. “The FAIR Data Principles provide a set of
milestones for data producers” (Wilkinson et al, 2016) and as the project
develops and within the next deliverable we will revisit the data management
plan data types and consider the milestones to apply the FAIR data management
of research data that is findable, accessible, interoperable and reusable
(FAIR).
Source: 0631_BlueHealth_666773.md (Horizon 2020), https://phaidra.univie.ac.at/o:1140797
# 1 Introduction
This Data Management Plan (DMP) is a continuously updated document that
describes the new data generated in the BlueHealth project, its type, format
and structure, the arrangements for its storage and security, and its
potential for being used by others outside of the BlueHealth Consortium. The
structure of this DMP is based on the guidelines provided in annexes to the
EC’s _Guidelines on Data Management in Horizon 2020_ 1 and the Digital
Curation Centre’s _Checklist for a Data Management Plan_ 2 .
## 1.1 Open Data
Horizon 2020 includes a limited and flexible pilot action on open access to
research data. The BlueHealth project is participating in this pilot and the
development of this DMP has been done in part to facilitate the release of
some of the data generated within the project through storage in research data
repositories.
It is anticipated that the structure and content of the DMP will facilitate documenting how the project will endeavour to make it possible for third parties to access, mine, exploit, reproduce and disseminate the data.
## 1.2 Contents and structure of this Data Management Plan
This document is ordered according to the work package (WP) within which each
data set is to be primarily generated and subsequently curated.
In instances where a number of data sets are described for a given section of a WP (a specific study or component task of the WP), the data sets are titled according to that study or task name.
Information provided for each data set includes:
* A data set reference and name
* A description of the contents of the data set
* Information on standards and metadata used to manage the data set
* Information on data sharing within BlueHealth
* Details on the archiving and preservation of the data, including collection and storage and Open Data
## 1.3 General Data Protection Regulation (GDPR)
In the light of the upcoming change of data protection legislation, a series
of steps have been taken by the Project Coordinator regarding compliance
within the BlueHealth project. GDPR (Regulation (EU) 2016/679) 3 will be
implemented on the 25th May 2018, at which point legal uses of personal data
will change. EU citizens will be granted additional controls on the actions of
those processing their personal data and on its free movement. The Project
Coordinator has undertaken training in GDPR compliance at the University of
Exeter. A session on GDPR and ethics will be given at the 2018 BlueHealth
annual conference in Tartu, Estonia by the project’s External Ethics Advisor
Professor Ken Goodman to the project team as a whole. The review of this
document is currently on hold pending further information gathering on the
part of BlueHealth partners regarding actions that should be taken within
their institutions (and the project) to ensure compliance.
# 2 Data sets generated in WP2
The only primary data generated in WP2 relates to a large online survey that
will be carried out in tasks T2.3 and T2.4. The other tasks in WP2 relate to
review of pre-existing data or use and analysis of secondary datasets. Data
related to these tasks are not described in this document, which is primarily
concerned with management of data generated specifically for the purposes
of—and within the remit of—the BlueHealth project.
## 2.1 BlueHealth Survey (tasks T2.3, T2.4)
### 2.1.1 Data set reference, name
BlueHealth Survey Data.
### 2.1.2 Data set description
These data comprise quantitative and qualitative information on visits to open
spaces, health and demographic data.
The entire data set will consist of 48 component data sets (one per country,
per quarter, for 12 countries).
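The 12-country, four-quarter structure can be sketched as a simple naming scheme. The country codes and file-name pattern below are illustrative assumptions, not the project's actual identifiers:

```python
# Illustrative sketch of the 48-part structure of the BlueHealth Survey data
# (one component data set per country, per quarter, for 12 countries).
# Country codes and the naming pattern are assumptions for illustration.
countries = ["BG", "CZ", "DE", "EE", "EL", "ES", "FI", "FR", "IT", "NL", "PT", "UK"]
quarters = ["Q1", "Q2", "Q3", "Q4"]

component_sets = [f"BlueHealthSurvey_{c}_{q}" for c in countries for q in quarters]

print(len(component_sets))  # 12 countries x 4 quarters = 48 component data sets
```

A deterministic scheme of this kind makes each component data set findable by country and collection period alone.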
### 2.1.3 Standards and metadata
The data will be stored as text files to maximise potential for use with a
variety of analytical software.
It is not known at this stage whether any particular standards will be adhered to in managing these data beyond those associated with generic good data management practice.
### 2.1.4 Data sharing
After cleaning, the data set will be made available to other BlueHealth
partners upon request. The request will allow each researcher to obtain the
correct IT credentials for accessing a secure, encrypted server located at
University of Exeter (UNEXE) via virtual private network (VPN). It will not be
possible to download data from this server; instead all analyses will be
carried out remotely using analytical software installed on that machine.
In order to satisfy data protection and ethical concerns, any information that
might permit identification of survey respondents is removed or obfuscated. In
practice, the only data which might afford this possibility is geolocation
data. Rounding of grid references will be carried out upon receipt of the data
(after merging with relevant geographic data from existing databases), thereby
preventing any possibility of subject identification.
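The grid-reference rounding described above can be sketched as snapping each coordinate to the centre of a coarse grid cell, so that no respondent's exact location survives in the shared data. The 1 km resolution and the function name are assumptions for illustration; the project's actual rounding resolution is not specified here.

```python
# Minimal sketch of grid-reference rounding for de-identification:
# each coordinate is snapped to the centre of its grid cell, so the
# original location cannot be recovered. The 1 km default resolution
# is an illustrative assumption, not a value chosen by the project.
def round_grid_reference(easting: float, northing: float,
                         resolution_m: int = 1000) -> tuple:
    """Snap a coordinate pair to the centre of its resolution_m grid cell."""
    def snap(v: float) -> int:
        return int(v // resolution_m) * resolution_m + resolution_m // 2
    return snap(easting), snap(northing)

print(round_grid_reference(451237.4, 378912.9))  # (451500, 378500)
```

Because every point in a cell maps to the same centre, merging with geographic context data must happen before rounding, as the text notes.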
### 2.1.5 Archiving and preservation
_2.1.5.1 Collection and storage of data_
The data will be collected by a third party survey company using an online
panel questionnaire. The company will provide the data sets to UNEXE for
storage in an encrypted form on secure servers, where it will remain for the
duration of the project lifetime. Data will be backed up at least daily to another dedicated server onsite as well as to a remote server (also on UNEXE property, but at a different geographic location in the UK).
_2.1.5.2 Open Data_
The entire data set (without geolocation data but with additional geographical
variables, see _above_ ) will be made available as Open Data after an embargo
period following the end of the BlueHealth project lifetime. The length of the
embargo period is yet to be determined.
The data will be relocated to an Open Data repository based at UNEXE known as
Open Research Exeter (ORE). Responsibility for the management of the data will
then be transferred to ORE, who may require the addition of metadata to the
data set in order to aid in its identification by the research community at
large.
# 3 Data sets generated in WP3
The data generated in this WP are specific to each community-level
intervention study, which may be either a case study or individual-level
intervention study. Therefore, the data sets are described under subheadings
for each study.
## 3.1 Appia Antica park (tasks T3.3 and T3.4)
### 3.1.1 Data set reference, name
Park user data set.
### 3.1.2 Data set description
The park user data set will contain data on individual use and perception of
the park and individual perception of health.
The nature of the environment at the site will be evaluated using the
BlueSpace Survey.
### 3.1.3 Standards and metadata
All data will be stored as open format text (.csv) files. These data will be
structured and managed using established standards.
### 3.1.4 Data sharing
These data may be shared with WP2 within BlueHealth. Depending on the nature
of the data collected, it may be possible for either the entire data set or a
subset of it to be pooled with other WP3 studies, a task which may be carried
out within the remit of task T2.5. The practical arrangements for transferring
these data to WP2 are yet to be established.
### 3.1.5 Archiving and preservation
_3.1.5.1 Collection and storage of data_
An adapted version of the BlueHealth Survey will be used alongside interviews
carried out with both users and non-users to collect park user data. The
first data will be collected in October/November 2016 and data collection will
end in October/November 2017. The park user data will be collected by questionnaire and will be stored on-site at ISS on secure servers. Data will be backed up weekly using the standard procedures followed at ISS.
_3.1.5.2 Open Data_
The park user data set will be made available to the public after the
BlueHealth project has been completed. These data will be shared with
interested users by means of individual requests made to the ISS.
## 3.2 English Coast Path (tasks T3.3 and T3.4)
### 3.2.1 Data set reference, name
The following data sets will be created for the Staithes stretch of coast
path, and additional data sets similarly created for the second possible
stretch of path:
* CoastPathWave1_Staithes
* CoastPathWave2_Staithes
* CoastPathWave3_Staithes
* CoastPath_StaithesAudit
* CoastPathWave1_StaithesControl
* CoastPathWave2_StaithesControl
* CoastPathWave3_StaithesControl
* CoastPath_ControlStaithesAudit
### 3.2.2 Data set description
The data sets suffixed "Audit" will contain audit data using the BlueSpace
Survey (including objective land cover data). The "wave1" data will contain
data collected from the adapted BlueHealth Survey from March 2017 at both the
intervention and control sites. The "wave2" data will contain data collected
from the adapted BlueHealth Survey from summer 2017 (when the path opens) at
both the intervention and control sites. The "wave3" data will contain data
collected from the adapted BlueHealth Survey from March 2018 at both the
intervention and control sites. All of the above three will contain some
sensitive data such as approximate home location, health status and socio-
economic indicators.
The data collected from the BlueHealth Survey will potentially be supplemented by objective data, e.g. accelerometry or pedestrian counts.
### Standards and metadata
### 3.2.3 Data sharing
These data may be shared with WP2 within BlueHealth. Depending on the nature
of the data collected, it may be possible for either the entire data set or a
subset of it to be pooled with other WP3 studies, a task which may be carried
out within the remit of task T2.5. The practical arrangements for transferring
these data to WP2 are yet to be established.
### 3.2.4 Archiving and preservation
_3.2.4.1 Collection and storage of data_
Data collection will begin at the start of March 2017 and be completed by end
of March 2018.
Data will generally be stored as xls files. Where objective data are being used, they are often recorded in their own local file format. All data will be structured and managed according to established standards and stored at a local
offline secure server based at the University of Exeter. Data will be backed
up daily to another server on-site.
All the "wave" datasets will be collected by questionnaire (unless we decide
to incorporate objective data in these). At present, we intend for this to be
postal. In any case, objective data will be collected via its own medium (i.e.
pen-and-paper pedestrian counts, electronic pulse counts, or accelerometer
measured data). The audit datasets will be collected via self-completion
questionnaire too.
_3.2.4.2 Open Data_
All data will be made available to the public after the BlueHealth project has
been completed. An Open Data repository at the University of Exeter will be
used to host the data sets and make them available to the public.
### 3.2.5 Analysis and reporting
Analyses will be carried out at UNEXE. In principle, data may be pooled with
other case study data, but the practicalities and sense of such pooling is yet
to be established.
## 3.3 Ripoll River low-cost intervention (tasks T3.3 and T3.4)
### 3.3.1 Data set reference, name
Ripoll River low-cost intervention
### 3.3.2 Data set description
This data set contains an evaluation of the results of a pre-post intervention
longitudinal study. The data will be generated through administering an
adapted version of the BlueHealth Survey.
### 3.3.3 Standards and metadata
The data will be managed and structured according to established standards.
### 3.3.4 Data sharing
Data will be analysed solely at CREAL. These data may be shared with WP2
within BlueHealth. Depending on the nature of the data collected, it may be
possible for either the entire data set or a subset of it to be pooled with
other WP3 studies, a task which may be carried out within the remit of task
T2.5. The practical arrangements for transferring these data to WP2 are yet to
be established.
### 3.3.5 Archiving and preservation
_3.3.5.1 Collection and storage of data_
Data will be collected from mid-September 2017 to mid-May 2018.
Data will be stored as Excel spreadsheets. Data will be stored at CREAL in
encrypted form on secure servers and a weekly back-up carried out to another
server on-site.
_3.3.5.2 Open Data_
All data except those containing personal information will be made available
as Open Data after the BlueHealth project has been completed, and will be
stored for public access on a repository hosted by CREAL.
## 3.4 Besòs River along Montcada i Reixac (tasks T3.3 and T3.4)
### 3.4.1 Data set reference, name
Besos River along Montcada i Reixac
### 3.4.2 Data set description
This data set will be generated from the evaluation of a longitudinal pre-post intervention study looking at a population of adults (>18 years, males and females), using an adapted version of the BlueHealth Survey. Additional data in the same data set will be generated using the BlueSpace Survey.
### 3.4.3 Standards and metadata
The data will be managed and structured according to established standards.
### 3.4.4 Data sharing
Data will be analysed at CREAL. These data may be shared with WP2 within
BlueHealth. Depending on the nature of the data collected, it may be possible
for either the entire data set or a subset of it to be pooled with other WP3
studies, a task which may be carried out within the remit of task T2.5. The
practical arrangements for transferring these data to WP2 are yet to be
established.
### 3.4.5 Archiving and preservation
_3.4.5.1 Collection and storage of data_
Data will be collected from mid-November 2016 to mid-June 2017.
The data will be stored as Excel spreadsheets. Data will be stored at CREAL in
encrypted form on secure servers and a weekly back-up carried out to another
server on-site.
_3.4.5.2 Open Data_
All data except those containing personal information will be made available
as Open Data after the BlueHealth project has been completed, and will be
stored for public access on a repository hosted by CREAL.
## 3.5 Modernist water body Anne Kanal Tartu (tasks T3.3 and T3.4)
### 3.5.1 Data set reference, name
Nine data sets will be generated in the course of this case study, as follows:
1. BlueSpace affordance (T5.2) and SoftGIS data
1. Spatially-linked preferences for blue space
2. Physical interventions baseline survey data (T5.3)
1. Health and physical activity status data
2. Site observation data
3. Site quality information
3. Design of interventions construction data (5.4); detailed construction design map
4. Interventions construction costs data (T5.3 and T5.7); costs of construction data
5. Qualitative data from discussions
6. Physical intervention post construction impact evaluation data (T5.7)
1. Health and physical activity status data
2. Site observation data
7. Virtual intervention evaluation data (T5.6); preference data
8. Virtual therapy prototype Estonian results data (T4.4); health response data
9. Case study scenario discussion group data (T6.2); qualitative data from discussions
### 3.5.2 Data set description
These data relate to urban acupuncture in the sense of making temporary
interventions that would add aesthetic value to the area under the condition
of seasonality.
### 3.5.3 Standards and metadata
It is not yet determined whether these data will be structured and managed
according to established standards.
### 3.5.4 Data sharing
All analyses will be carried out at EMU. Data sets 8 and 9 will be made
available to WP4 and WP6, respectively, for pooling/combination with other
data. These data may be shared with WP2 within BlueHealth. Depending on the
nature of the data collected, it may be possible for either the entire data
set or a subset of it to be pooled with other WP3 studies, a task which may be
carried out within the remit of task T2.5. The practical arrangements for
transferring these data to WP2 are yet to be established.
### 3.5.5 Archiving and preservation
_3.5.5.1 Collection and storage of data_
Data will be collected between early September 2016 and the end of December
2018. The nine data sets will be collected and stored as follows:
1. Collected from a web-based interface; stored as GIS shapefiles and xls files
2. …
1. Collected by questionnaire; stored as xls file
2. Collected using paper maps; stored as Illustrator graphic files
3. Collected using paper maps and water testing equipment; stored as GIS shapefiles and xls files
3. From specialist design software; stored as Autocad files
4. Collected from construction contracts and purchase orders; stored as xls files
5. Collected as digital sound recordings and notes; stored as mp3, txt and doc files
6. …
1. Collected by questionnaire; stored as xls files
2. Collected using paper maps; stored as Illustrator graphics files
7. Collected by questionnaire; stored as xls files
8. Collected by questionnaire and ? (requires WP4 input); stored as xls files
9. Collected as digital sound recordings and notes; stored as mp3, txt and doc files
All digital data will be stored on the hard drives of individual workers’
computers. Backups will be made to a separate hard drive on a daily basis.
Analogue data (e.g. paper maps etc.) will be stored in a locked cupboard on-
site at EMU.
_3.5.5.2 Open Data_
There are currently no plans to make these data available to the public. This
decision will be reviewed during the project lifetime.
## 3.6 Tallinn inner city coast (tasks T3.3 and T3.4)
The same data sets will be generated as for the _Modernist water body Anne
Kanal_ but will relate to a different location (Tallinn harbour).
## 3.7 Urban stream Rio de Couros (tasks T3.3 and T3.4)
The same data sets will be generated as for the _Modernist water body Anne
Kanal_ but will relate to a different location (town in central Portugal).
## 3.8 Wetland biosphere Kristianstad (tasks T3.3 and T3.4)
The same data sets will be generated as for the _Modernist water body Anne
Kanal_ but will relate to a different location (recreational wetlands in
Sweden).
## 3.9 Office workers walking individual-level intervention (tasks T3.3 and
T3.4)
### 3.9.1 Data set reference, name
Three data sets will be generated for the purposes of this individual-level
intervention:
1. BlueHealth Survey data
2. Individual measurement data
3. Individual smartphone data
### 3.9.2 Data set description
The BlueHealth Survey data will contain the data collected from
administering an adapted version of the BlueHealth Survey for about 60
volunteers.
Individual measurement data will contain information on height, weight and
cortisol levels.
Individual smartphone data will include information on speed, noise and air
pollution relating to the volunteers when walking.
### 3.9.3 Standards and metadata
Data will be stored as xls format files on local secure servers at CREAL. It
is not yet determined whether these data will be structured and managed
according to established standards.
### 3.9.4 Data sharing
These data may be shared with WP2 within BlueHealth. Depending on the nature
of the data collected, it may be possible for either the entire data set or a
subset of it to be pooled with other WP3 studies, a task which may be carried
out within the remit of task T2.5. The practical arrangements for transferring
these data to WP2 are yet to be established.
### 3.9.5 Archiving and preservation
_3.9.5.1 Collection and storage of data_
Data collection will commence in mid-January 2018 and finish in mid-June 2018.
The BlueSpace Survey will be used to collect data on the environment in which
the workers are walking. In addition to the adapted version of the BlueHealth
Survey, a number of other measurements may be collected, including height,
weight, blood pressure, cortisol levels. Smartphones will be carried by
participants to provide data on their location, speed, and exposures to air
pollution and noise.
All data will be stored at CREAL on a secure server. Data will be backed up on
a weekly basis to another server on-site.
_3.9.5.2 Open Data_
All study data except those containing personal information will be made
available to the public after the BlueHealth project has been completed via a
repository hosted by CREAL.
## 3.10 Malmö Swimming study individual-level intervention (tasks T3.3 and T3.4)
### 3.10.1 Data set reference, name
Three data sets will be generated:
* Swimming ability
* Attitude survey data
* Qualitative interview information
### 3.10.2 Data set description
The swimming ability data will comprise socioeconomic indicators, GIS
coordinates, sex and age. The Attitude survey data will additionally collect
information on health status and reported attitudes. The Qualitative interview
information will comprise background data (SES, sex, age) and narratives.
### 3.10.3 Standards and metadata
Data will be stored as csv files and Word documents (qualitative and narrative
information) and will be structured and managed according to established
standards.
### 3.10.4 Data sharing
All data will be analysed at ULUND. These data may be shared with WP2 within
BlueHealth. Depending on the nature of the data collected, it may be possible
for either the entire data set or a subset of it to be pooled with other WP3
studies, a task which may be carried out within the remit of task T2.5. The
practical arrangements for transferring these data to WP2 are yet to be
established.
### 3.10.5 Archiving and preservation
_3.10.5.1 Collection and storage of data_
Data will be collected from September 2017 to the end of December 2018.
The Swimming ability data will be collected from registers. The Attitude
survey data will be collected via a questionnaire and the Qualitative
interview information will come from face-to-face interviews conducted with
the children.
All data will be stored at ULUND on a secure server; all personal identifiers
will have been removed prior to transfer to ULUND and storage on these
servers. Data will be backed up every night to another physical device located
on-site.
_3.10.5.2 Open Data_
The Swimming ability data will be made available as Open Data during the
BlueHealth project lifetime. Possibly the Attitude survey data will also be
made available. No personal identifiers will be associated with these data
sets and identification of individuals will be impossible. The means of making
these data available is yet to be determined. The Qualitative interview
information will not be made available as Open Data because narratives contain
information that could potentially allow identification of individuals.
## 3.11 Effect of Thessaloniki waterfront improvement on local population health individual-level intervention (tasks T3.3 and T3.4)
### 3.11.1 Data set reference, name
Thessaloniki waterfront data
### 3.11.2 Data set description
The Thessaloniki waterfront data set will contain information on the
following: metabolomics, health status, socioeconomic status, age, gender,
dietary habits and environmental (exposure) data.
### 3.11.3 Standards and metadata
Data will generally be stored as txt files, although some other data formats
may be used e.g. for metabolomics analyses. Data will be structured and
managed according to established standards.
### 3.11.4 Data sharing
The Thessaloniki waterfront data will be analysed at AUTH. These data may be
shared with WP2 within BlueHealth. Depending on the nature of the data
collected, it may be possible for either the entire data set or a subset of it
to be pooled with other WP3 studies, a task which may be carried out within
the remit of task T2.5. The practical arrangements for transferring these data
to WP2 are yet to be established.
### 3.11.5 Archiving and preservation
_3.11.5.1 Collection and storage of data_
Data collection will take place between early December 2016 and late July
2017.
The impact of the intervention will be evaluated using an adaptation of the
BlueHealth Survey, SoftGIS and an ad hoc questionnaire to make sure it is
feasible to answer. The quality of the environment will be assessed using the
BlueSpace Survey, questionnaire data and environmental monitoring data. In
addition, advanced multi-omics platforms (GC-MS ToF; LC-MS ToF) will be used.
Metabolomics data will be generated after laboratory analyses of human
biosamples. Health status, socioeconomic status, age, gender, dietary habits
information will be collected using questionnaires. Environmental (exposure)
data will be collected by environmental and personal exposure monitors.
All data will be stored in encrypted form on local secure servers at AUTH.
Data are automatically backed up daily to another physical device on-site.
_3.11.5.2 Open Data_
All environmental and exposure data, as well as data on subject age, gender,
and socioeconomic status, will be made available as Open Data after a certain
embargo period has passed. These data will be made available on a repository
hosted by AUTH. Any data related to health status or biomarkers will not be
made available due to privacy and confidentiality issues.
# 4 Data sets generated in WP4
No information is currently available on the management of data generated from
this WP at this early stage in the project. This section will be populated in
due course as the project evolves.
# 5 Data sets generated in WP5
The management of data generated by WP5 (tasks T5.1 to T5.7 inclusive) is
described below. The cross-cutting nature of the work conducted in WP5, in
particular its overlap with work done in WP3, means that the information below is essentially a
duplicate of that provided for the following WP3 case studies (links to
sections above): _3.5 Modernist water body Anne_ _Kanal Tartu_ , _3.6 Tallinn
inner city coast_ , _3.7 Urban stream Rio de Couros_ and _3.8 Wetland_
_biosphere Kristianstad_ .
### 5.1.1 Data set reference, name
1. 5.2 BlueSpace affordance SoftGIS data
2. 5.3 Physical interventions baseline survey data
3. 5.4 Design of interventions construction data
4. 5.3/5.7 Interventions construction costs
5. 5.5 Policy support data
6. 5.7 Physical intervention post construction impact evaluation data
7. 5.6 Virtual intervention evaluation data
### 5.1.2 Data set description
1. Web-based interface
2. A) questionnaire and B) site mapping using paper maps; C) site mapping using paper maps and water testing equipment
3. Specialist design software
4. Construction contracts and purchase orders
5. Digital sound recorders and manual notes
6. A) questionnaire and B) site mapping using paper maps
7. Questionnaire
### 5.1.3 Standards and metadata
The following file formats will be used to store the data sets:
1. GIS shapefiles and .xls spreadsheet
2. A) .xls spreadsheet and B) Illustrator graphic files; C) GIS shapefiles and .xls spreadsheet
3. Autocad files
4. .xls spreadsheet
5. Text files .doc
6. A) .xls spreadsheet and B) Illustrator graphic files
7. .xls spreadsheet
No particular standards will be adhered to. It is unclear at this stage which
metadata would be associated with these data.
### 5.1.4 Data sharing
No sharing of these data sets is envisaged within the BlueHealth Consortium.
### 5.1.5 Archiving and preservation
_5.1.5.1 Collection and storage of data_
All digital data will be stored on the hard drives of the dedicated computers
of BlueHealth staff based at the Estonian University of Life Sciences (EMU).
These data will be backed up daily to a separate hard drive located on-site,
which is password protected.
All analogue data (paper questionnaires, maps etc.) will be stored in locked
cupboards on-site at EMU.
_5.1.5.2 Open Data_
There are no plans to release any of the data generated in WP5 as Open Data at
this stage.
# 6 Data sets generated in WP6
No information is currently available on the management of data generated from
this WP at this early stage in the project. This section will be populated in
due course as the project evolves.
# 7 Data sets generated in WP7
The data generated in WP7 are of a qualitative nature.
## 7.1 Data set reference and name
Four data sets will be generated in WP7 related to the development of a
decision support tool (DST), as follows:
* Design of DST
* Criteria for DST
* Existing DSTs
* Ways to deal with uncertainty
## 7.2 Data set description
* Design of DST
  * Outcomes of consultations on qualitative information about user needs.
  * Consultation-type data.
* Criteria for DST
  * Qualitative input elements to the DST.
  * Evidence on health benefits and risks of green and blue space.
  * Consultation-type data.
* Existing DSTs
  * Qualitative information about existing DSTs in related areas.
  * Consultation-type data and review of literature.
* Ways to deal with uncertainty
  * Qualitative information about strategies to make decisions under uncertainty.
  * Consultation-type data and review of literature.
## 7.3 Standards and metadata
Data will be stored as Word document files (.docx).
No particular standards or metadata will be associated with these files.
## 7.4 Data sharing
No sharing of these data sets with BlueHealth partners outside of the
respective WP is envisaged at the present time. 7.5 Archiving and preservation
_7.5.1.1 Collection and storage of data_
All data will be stored at the WHO Europe premises on secure servers.
_7.5.1.2 Open Data_
These data sets will not be made available to the public. Interested
researchers may apply for access to the WP7 Leader, who may grant it at their
discretion and will send successful applicants the data by email. The non-
sensitive nature of these data means that there are no data protection or
ethics concerns in doing so.
The nature of the raw data makes them relatively uninteresting to those not
directly involved in the very specific work of WP7, since they are only really
useful in the construction of the DST.
# 8 Data sets generated in WP8
No information is currently available on the management of data generated from
this WP at this early stage in the project. This section will be populated in
due course as the project evolves.
https://phaidra.univie.ac.at/o:1140797 | Horizon 2020