0833_EU-XCEL_644801.md
D 8.3. Data Management Plan
# Plan Details
The objectives of the EUXCEL project were to:
1. Create more ICT entrepreneurs who are ‘Incubator ready’,
2. Foster inter-regional European entrepreneurship collaboration, developing a ‘Born European’ entrepreneurship mind-set among cohorts of ICT/ entrepreneurship student teams,
3. Develop a network of ICT creative entrepreneurship spaces,
4. Host six start-up scrums (summer schools) in six countries (Denmark, Ireland, Germany, Greece, Poland and Spain) in each year of the project, with extended participation from other associated countries,
5. Prototype and support the EU Virtual Incubator platform to continue development of the technology and business using virtual teams,
6. Host and pilot two European Entrepreneurial Tech Challenge finals in Year 1 and Year 2 of the project where the best teams from the ‘start-up scrums’ compete and pitch before expert panels.
The project conducted research studies using psychological scales which sought
to examine the role of a variety of personality characteristics within the
entrepreneurial process, particularly within the startup team setting. The data
collected is intended to contribute to the understanding of founding-team dynamics
in ICT entrepreneurship.
Key details describing the administration of the plan are provided in Table 1.
<table>
<tr>
<th>
**Plan Name**
</th>
<th>
EUXCEL Data Management Plan
</th> </tr>
<tr>
<td>
**Grant Number**
</td>
<td>
644801
</td> </tr>
<tr>
<td>
**Principal Investigator**
</td>
<td>
Brian O’Flaherty
</td> </tr>
<tr>
<td>
**Plan Data Contact**
</td>
<td>
Brian O’Flaherty, [email protected]
</td> </tr> </table>
Table 1: EUXCEL Data Plan Administrative Details
The plan is based on the template provided by the European Commission in
accordance with the Open Research Data Pilot.
# Data Set Description
Data relating to between 10 and 15 psychological constructs and behavioural
patterns will be stored on the repository. This data will be relevant to
research in small group dynamics, virtual team work, and entrepreneurship.
Individual level constructs measured include entrepreneurial intentionality,
entrepreneurial skills, entrepreneurial passion, emotional intelligence, fear
of failure, and resilience. Team level constructs include transactive memory
systems, team confidence, and shared identity. The
data will be used by small teams working in new project development, software
development teams, virtual teams, startup incubators, and entrepreneurship
researchers.
# Accessibility and Metadata
The data will be readily accessible on the Zenodo platform, and will be issued
with a digital object identifier (DOI). Metadata will accompany all data
sheets, listing and describing each measured construct and enabling
researchers to reuse the results provided. The data will be created through
the generation of spreadsheets from electronic surveys. These surveys were
issued to participants in the EUXCEL project at a number of points throughout
the two cycles of the programme. Two separate files will be created from this
data, each pertaining to one cycle of the EUXCEL programme, and the files will
be named accordingly. The project description will enable researchers to
contextualise the data, and both the metadata and data itself will allow other
researchers to understand the procedures undertaken and test for reliability.
The data will be stored in XLS format, as this eases interpretation and
analysis while also facilitating transfer of the data to statistical software
used in social science research, such as SPSS. This format also supports the
long-term sharing and validity of the data.
The archive is available at the following link location.
_https://zenodo.org/record/888835_
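As an illustration of reuse, a minimal sketch in Python of loading the two cycle spreadsheets for analysis; the file and column names below are hypothetical and should be replaced with the names published on the Zenodo record:

```python
# Minimal sketch of loading the cycle spreadsheets with pandas.
# File and column names are hypothetical; see the Zenodo record for the
# actual names. Legacy .xls files require the xlrd engine.
import pandas as pd

cycle1 = pd.read_excel("EUXCEL_cycle1.xls")
cycle2 = pd.read_excel("EUXCEL_cycle2.xls")

# Inspect the measured constructs (one column per scale item or score)
print(cycle1.columns.tolist())

# Example: descriptive statistics for one construct, if present
if "entrepreneurial_passion" in cycle1.columns:
    print(cycle1["entrepreneurial_passion"].describe())
```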
# Data Sharing and Archiving
Data will be shared through the creation of a collection on the Zenodo
platform. Zenodo is a research repository that was created by OpenAIRE and
CERN. OpenAIRE is a Horizon 2020 project which supported the implementation of
the European Commission Open Access policies. Zenodo allows researchers to
create publicly available repositories that are both searchable and citable.
The data will be stored on CERN's repository software, Invenio. It will also
take advantage of the Zenodo DOI function, which allows the data files to be
edited and updated over time and the data to be cited in future research
publications.
The data will be safely stored in the Zenodo repository long after the
original collection of the data. Along with the metadata and project
documentation provided, this will mean that the data will be useful for
entrepreneurship and social science researchers as long as the constructs
examined have value for them. Both data files and metadata are kept in
multiple online and independent replicas. CERN has made a commitment to
maintain the data centre. Should Zenodo have to close operations, they have
issued a guarantee that all content will be migrated to other suitable
repositories, and since all uploads have DOIs, all citations and links to the
stored data will not be affected.
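Since the archive carries a DOI and is hosted on Zenodo, its metadata can also be retrieved programmatically. A short sketch using Zenodo's public REST API follows; the response fields shown follow Zenodo's documented format and may evolve over time:

```python
# Sketch: fetching record metadata via Zenodo's public REST API.
import requests

resp = requests.get("https://zenodo.org/api/records/888835")
resp.raise_for_status()
record = resp.json()

print(record["metadata"]["title"])   # record title
print(record["doi"])                 # the DOI discussed above
for f in record.get("files", []):    # file names and download links
    print(f["key"], f["links"]["self"])
```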
0837_SARAFun_644938.md
# EXECUTIVE SUMMARY
The present document is a deliverable of the SARAFun project, funded by the
European
Commission’s Directorate-General for Research and Innovation (DG RTD), under
its Horizon 2020 Research and innovation programme (H2020). It presents the
final version of the project Data Management Plan (DMP). The current document
explains in detail what data has been generated throughout the project's
lifecycle, the means for sharing this data so that it becomes accessible for
verification and reuse, and the ways in which it has been curated and
preserved.
Throughout the project, the team needed to manage a large number of datasets,
generated and collected by various means, i.e. sensors, cameras, robots and
direct interactions with users (e.g. interviews and questionnaires). By the
end of the project, nine datasets have been produced through SARAFun's
technical activities, with almost all the partners being data owners
and/or producers.
All SARAFun datasets have been handled considering the main data security and
privacy principles, respecting also the partners' IPR policies. A dedicated
Data Management Portal, hosted on Zenodo, further supported the efficient
management, storage and sharing of the project’s datasets.
It should be emphasized that this is the result of an ongoing document that
has evolved along with the project's progress and has been updated to reflect
up-to-date information.
# INTRODUCTION
## PURPOSE
The SARAFun project has been formed to enable a non-expert user to integrate a
new bimanual assembly task on a robot in less than a day. This is accomplished
by augmenting the robot with cutting edge sensory and cognitive abilities as
well as reasoning abilities required to plan and execute an assembly task.
The purpose of this deliverable (D7.8 “Data Management Plan”) is to deliver a
detailed analysis of all the datasets generated by the SARAFun project. This
final version of the DMP includes an overview of the datasets that have been
produced by the project as well as the specific characteristics and their
management processes. It also includes additional information regarding the
dissemination of the project’s open access knowledge and datasets, aiming to
foster further exploitation of SARAFun's results by the scientific
community.
## GENERAL PRINCIPLES
Through the activities of the SARAFun project [1], pioneering research has
been carried out to develop and deliver a next-generation bi-manual robot
that can be deployed on production lines to assist human workers safely,
enabled by novel human demonstration and teaching algorithms. To this end,
human participants have been involved in the project, and data have been
collected regarding their assembly movements, their ratings of the system,
and assembly forces in a production line.
### Participation in the Pilot on Open Research Data
SARAFun strongly supports the Pilot on Open Research Data launched by the
European Commission along with the Horizon 2020 programme, and a significant
part of the research data generated by the project has therefore been made
open and offered to the Pilot. To this end, the Data Management Plan provided
through this deliverable explains in detail what data has been generated by
the project, how it has been exploited or made accessible for verification
and reuse, and how it has been curated and preserved.
### IPR Management & Security
Due to the highly innovative nature of the SARAFun project, advanced
technologies have been developed during the project's lifecycle with a view
to their later release on the market. Foreground capable of industrial or
commercial application must therefore be protected, taking into account
legitimate interests. All involved partners hold Intellectual Property Rights
on the technologies and data developed or collected with their participation.
As the partners' economic sustainability depends heavily on these technologies
and data, the SARAFun Consortium will protect all data collected for SARAFun
purposes. Additionally, prior notice of dissemination has been given to the
other participants, and any dissemination, such as publications and patent
applications, must acknowledge the Community financial assistance. Moreover,
appropriate measures have been taken to prevent data leaks, and all data
repositories of this project are adequately protected.
### Personal Data Protection
SARAFun involves data collection in order to assess the technology and
effectiveness of the proposed solution. This has been carried out in full
compliance with the European and national legislation and directives relevant
to the countries where the data collections take place:
1. The Convention 108 for the Protection of Individuals with Regard to Automatic Processing of Personal Data;
2. Directive 95/46/EC & Directive 2002/58/EC of the European Parliament regarding issues with privacy and protection of personal data and the free movement of such data;
3. The legislation in Sweden: The 1998 Personal Data Act;
4. The Spanish Organic Law 15/99 (amendments: 5/02 & 424/05);
5. The Greek Law 2472/1997: Protection of Individuals with regard to the Processing of Personal Data;
6. The Greek Law 3471/2006: Protection of personal data and privacy in the electronic telecommunications sector and amendment of law 2472/1997.
More detailed information regarding data privacy issues can be found in
Deliverable 1.2 “Preliminary Ethics and Safety Manual for SARAFun technology”.
# DATA MANAGEMENT PLAN
## DATASET LIST
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset Name**
</th>
<th>
**Status**
</th> </tr>
<tr>
<td>
1
</td>
<td>
DS.01.CERTH.FAIM2017Dataset
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
2
</td>
<td>
DS.02.CERTH.IJERTCS2018Dataset
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
3
</td>
<td>
DS.03.CERTH.CVPR2016Dataset
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
4
</td>
<td>
DS.04.CERTH.SnapFitForceProfiles
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
5
</td>
<td>
DS.05.CERTH.ContactEvaluationData
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
6
</td>
<td>
DS.01.ULUND.TransientDetection
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
7
</td>
<td>
DS.01.UNIBI.TactileData
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
8
</td>
<td>
DS.01.ABB.ExperimentalVerification_GraspQuality
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
9
</td>
<td>
DS.01.TECNALIA.Human_Performance_of_Bimanual_Assembly
</td>
<td>
“updated M36”
</td> </tr> </table>
## PLANS PER DATASET
<table>
<tr>
<th>
**Dataset reference and name**
</th> </tr>
<tr>
<td>
**DS.01.CERTH.FAIM2017Dataset**
</td> </tr>
<tr>
<td>
**Dataset description**
</td> </tr>
<tr>
<td>
_**General Description** _
Dataset used for keyframe extraction in a laboratory environment. An
instructor picks up two small objects and then assembles them. This dataset
was used in “Teaching Assembly by Demonstration using Advanced Human Robot
Interaction and a Knowledge Integration Framework”.
</td> </tr>
<tr>
<td>
_**Origin of Data** _
Device type: RGBD sensor. Two aligned streams are used, extracted from one
depth sensor (640X480) and one RGB camera (640X480). The two sensors operate
in a low range area (20cm to 1.5m). Sampling rate: 10 fps.
</td> </tr>
<tr>
<td>
_**Relation to project objectives** _
Objective 1: To develop a bi-manual robot that will be capable of learning
the assembly of two parts from human demonstration.
</td> </tr>
<tr>
<td>
_**To whom it would be useful** _
This dataset will be useful for developing and evaluating key-frame extraction algorithms.
</td> </tr>
<tr>
<td>
_**Type and format** _
The data will be available in video format (e.g. image sequences).
</td> </tr>
<tr>
<td>
_**Expected size** _
The volume of data is estimated at approximately 1.26GB/min for RGB and 1.08
GB/min for depth. 11 sequences have been captured ranging from 10 to 15
seconds each. Each sequence holds a volume of approximately 100 MB (70 MB for
color and 30 MB for depth).
</td> </tr>
<tr>
<td>
_**Similar Data sets** _
No similar datasets have been found
</td> </tr>
<tr>
<td>
**Discoverability and naming conventions**
</td> </tr>
<tr>
<td>
_**Metadata provision** _
The metadata describes the RGBD sensor setup: two aligned streams, one depth
camera (640x480) and one RGB camera (640x480), both with a low range
(20cm-1.5m) and a sampling rate of 10 fps. Annotation has been provided based
on the outputs of the algorithms produced, in addition to the manually
selected (ground truth) key frames. The metadata is provided in xml format
with the respective xml schema. Indicative metadata include a) camera
calibration information, b) camera pose matrix for each viewpoint, c) 3D pose
annotation
</td> </tr>
<tr>
<td>
_**Naming conventions** _
Each demonstrated assembly sequence will be labeled using the type of assembly
followed by an integer indicating the order of execution.
</td> </tr>
<tr>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access** _ Open
</td> </tr>
<tr>
<td>
_**Reason for restricting access (if so)** _ No restriction
</td> </tr>
<tr>
<td>
_**Access provision** _
A web portal has been created by CERTH on the Zenodo platform for data
management, providing a description of the dataset as well as links to a
download section.
</td> </tr>
<tr>
<td>
_**Software to access data set** _
Commonly available tools and software libraries for enabling reuse of the
dataset (e.g. OpenCV).
</td> </tr>
<tr>
<td>
**Archiving, preservation and re-usability**
</td> </tr>
<tr>
<td>
_**Duration of preservation** _
Data will be preserved for at least 2 years after the end of the project.
</td> </tr>
<tr>
<td>
_**Repository of preservation** _
The dataset is preserved on Zenodo as well as on CERTH servers and is
available for download. The portal is equipped with authentication mechanisms
to record the identity of the persons/organizations that download the
dataset, as well as the purpose of its use.
</td> </tr>
<tr>
<td>
_**Cost of preservation** _
A USB disk drive (approximately 16GB) has been allocated for the dataset.
There are no costs associated with its preservation.
</td> </tr>
<tr>
<td>
_**Quality assurance** _
The available datasets have been validated by CERTH and included in the
relevant publications.
</td> </tr> </table>
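A minimal sketch of how such an RGB-D sequence could be iterated with OpenCV follows; the directory layout and file names here are hypothetical illustrations, not the dataset's actual structure:

```python
# Sketch: reading an RGB-D assembly sequence frame by frame with OpenCV.
# Directory layout and file names are hypothetical.
import glob
import cv2

rgb_files = sorted(glob.glob("assembly_01/rgb/*.png"))
depth_files = sorted(glob.glob("assembly_01/depth/*.png"))

for rgb_path, depth_path in zip(rgb_files, depth_files):
    rgb = cv2.imread(rgb_path, cv2.IMREAD_COLOR)          # 640x480 colour frame
    depth = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED)  # raw depth values
    # key-frame extraction logic would operate on (rgb, depth) here
```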
<table>
<tr>
<th>
**Dataset reference and name**
</th> </tr>
<tr>
<td>
**DS.02.CERTH.IJERTCS2018Dataset**
</td> </tr>
<tr>
<td>
**Dataset description**
</td> </tr>
<tr>
<td>
_**General Description** _
Dataset of responses from users who rated the HRI system. The dataset was
used in the paper entitled “An Advanced Human-Robot Interaction Interface for
Collaborative Robotic Assembly Tasks”.
</td> </tr>
<tr>
<td>
_**Origin of Data** _
Interviews and questionnaire answers of the test subjects who rated the HRI
system, as well as the time it took them to teach an assembly to the robot.
</td> </tr>
<tr>
<td>
_**Relation to project objectives** _
Objective 1, and WP2, T2.5: The design and prototyping of the necessary
interfaces for the HRI in terms of controlling the teaching procedure.
</td> </tr>
<tr>
<td>
_**To whom it would be useful** _
This dataset will be useful for HRI rating reference.
</td> </tr>
<tr>
<td>
_**Type and format** _
The data is a spreadsheet, available in Excel format.
</td> </tr>
<tr>
<td>
_**Expected size** _
The data is small in size, around 1MB.
</td> </tr>
<tr>
<td>
_**Similar Data sets** _
There are many similar datasets that involve participant responses in UI
questionnaires; however, the particular questionnaire on HRI was generated by
CERTH, so there are no directly comparable datasets.
</td> </tr>
<tr>
<td>
**Discoverability and naming conventions**
</td> </tr>
<tr>
<td>
_**Metadata provision** _
There is no metadata available.
</td> </tr>
<tr>
<td>
_**Naming conventions** _
The dataset is contained in a single .xls file and each question has a
corresponding column in the table which is clearly indicated by its number.
</td> </tr>
<tr>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access** _
Open
</td> </tr>
<tr>
<td>
_**Reason for restricting access (if so)** _
No restriction
</td> </tr>
<tr>
<td>
_**Access provision** _
A web portal has been created by CERTH on the Zenodo platform for data
management, providing a description of the dataset as well as links to a
download section.
</td> </tr>
<tr>
<td>
_**Software to access data set** _
Commonly available tools and software libraries for enabling reuse of the
dataset (e.g. MS Excel, LibreOffice Calc).
</td> </tr>
<tr>
<td>
**Archiving, preservation and re-usability**
</td> </tr>
<tr>
<td>
_**Duration of preservation** _
Data will be preserved for at least 2 years after the end of the project.
</td> </tr>
<tr>
<td>
_**Repository of preservation** _
The dataset is preserved on Zenodo as well as on CERTH servers and is
available for download. The portal is equipped with authentication mechanisms
to record the identity of the persons/organizations that download the
dataset, as well as the purpose of its use.
</td> </tr>
<tr>
<td>
_**Cost of preservation** _
A USB disk drive (approximately 16GB) has been allocated for the dataset.
There are no costs associated with its preservation.
</td> </tr>
<tr>
<td>
_**Quality assurance** _
The available datasets have been validated by CERTH and included in the
relevant publications on the corresponding methods.
</td> </tr> </table>
<table>
<tr>
<th>
**Dataset reference and name**
</th> </tr>
<tr>
<td>
**DS.03.CERTH.CVPR2016Dataset**
</td> </tr>
<tr>
<td>
**Dataset description**
</td> </tr>
<tr>
<td>
_**General Description** _
Dataset of RGB and depth images reflecting two usage scenarios, one
representing domestic environments and the other a bin-picking scenario found
in industrial settings.
</td> </tr>
<tr>
<td>
_**Origin of Data** _
Device type: RGBD sensor. Two aligned streams are used, extracted from one
depth sensor (640X480) and one RGB camera (640X480).
</td> </tr>
<tr>
<td>
_**Relation to project objectives** _
Objective (2), WP3: To develop coarse-grained object tracking algorithms based
on privacy-preserving sensing (depth) and at different levels of granularity
(teaching mode versus real-time execution of the manipulation process)
</td> </tr>
<tr>
<td>
_**To whom it would be useful** _
This dataset will be useful for developing and evaluating object tracking algorithms.
</td> </tr>
<tr>
<td>
_**Type and format** _
The data will be available in video format (e.g. image sequences) and txt
formats for the annotation.
</td> </tr>
<tr>
<td>
_**Expected size** _
The volume of data is estimated at approximately 500KB/image for RGB and 100
KB/image for depth. 15 scenes have been captured ranging from 20 to 60 images
each. Each scene holds a volume of approximately 15 MB (10 MB for color and 5
MB for depth).
</td> </tr>
<tr>
<td>
_**Similar Data sets** _
1\. Princeton Tracking Benchmark
(http://tracking.cs.princeton.edu/dataset.html)
</td> </tr>
<tr>
<td>
**Discoverability and naming conventions**
</td> </tr>
<tr>
<td>
_**Metadata provision** _
The metadata describes the RGBD sensor setup: two aligned streams, one depth
camera (640x480) and one RGB camera (640x480). Annotation has been provided
based on the outputs of the algorithms produced, in addition to the manually
defined (ground truth) object poses. The metadata is provided in
txt format. Indicative metadata include a) camera position information, b) 3D
pose annotation, c) 3D mesh files of the objects.
</td> </tr>
<tr>
<td>
_**Naming conventions** _
Each sequence is labeled using the type of the objects followed by an integer
indicating the order of detection.
</td> </tr>
<tr>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access** _
Open
</td> </tr>
<tr>
<td>
_**Reason for restricting access (if so)** _
No restriction
</td> </tr>
<tr>
<td>
_**Access provision** _
A web portal has been created by CERTH on the Zenodo platform for data
management, providing a description of the dataset as well as links to a
download section.
</td> </tr>
<tr>
<td>
_**Software to access data set** _
Commonly available tools and software libraries for enabling reuse of the
dataset (e.g. OpenCV).
</td> </tr>
<tr>
<td>
**Archiving, preservation and re-usability**
</td> </tr>
<tr>
<td>
_**Duration of preservation** _
Data will be preserved for at least 2 years after the end of the project.
</td> </tr>
<tr>
<td>
_**Repository of preservation** _
The dataset is preserved on Zenodo as well as on CERTH servers and is
available for download. The portal is equipped with authentication mechanisms
to record the identity of the persons/organizations that download the
dataset, as well as the purpose of its use.
</td> </tr>
<tr>
<td>
_**Cost of preservation** _
A USB disk drive (approximately 16GB) has been allocated for the dataset.
There are no costs associated with its preservation.
</td> </tr>
<tr>
<td>
_**Quality assurance** _
The available datasets have been validated by CERTH and included in the
relevant publications.
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS.04.CERTH.SnapFitForceProfiles**
</td> </tr>
<tr>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**General Description** _
The Dataset is used for training and testing a machine learning classifier in
order to achieve real-time detection of successful snap-fit assemblies.
The Dataset contains force profiles on the axis of motion (assembly), captured
during a robotic and a human assembly process of two different snap-fit
assembly types, namely cantilever and annular.
In robotic assembly, the process is done automatically where a robot holds one
of the two parts and pushes it against the other, until the process is
characterized as successful or failed. In the human assembly process, a human
assembles the two parts while the robot acts as a smart sensor and captures
the developed forces in the axis of assembly.
The data set is split into 8 files, 4 for each snap-fit type: one containing
force profiles from the human-based process (50 assembly cases), one
containing force profiles from the robot-based process (60 assembly cases),
and two containing the corresponding labels (success or failure).
</td> </tr>
<tr>
<td>
_**Origin of Data** _
Device type: A 6 DoF KUKA robot is used for the assembly and force capturing,
along with a wrist force/torque sensor (ATI F/T Mini 40).
</td> </tr>
<tr>
<td>
_**Relation to project objectives** _
Objective 2: To develop a bi-manual robot that enables teaching of assembly
with advanced physical human-robot interaction
</td> </tr>
<tr>
<td>
_**To whom it would be useful** _
This dataset will be useful to analyze and evaluate snap fit assembly types
based on the developed force profiles. It can support any type of detection
and machine learning algorithm for assembly detection and fault prediction.
</td> </tr>
<tr>
<td>
_**Type and format** _
The data will be available in .mat files
</td> </tr>
<tr>
<td>
_**Expected size** _
The data set as described above incorporates 100 human based assemblies and
120 robot based assemblies along with their labels, and is of approximately
3.4 MB.
</td> </tr>
<tr>
<td>
_**Similar Data sets** _
</td> </tr> </table>
<table>
<tr>
<th>
1\. Complementary material of the following research item
Huang, Jian, Yuan Wang, and Toshio Fukuda. "Set-Membership-Based Fault
Detection and Isolation for Robotic Assembly of Electrical Connectors." _IEEE
Transactions on Automation Science and Engineering_ (2016).
**Source** : http://ieeexplore.ieee.org/document/7572012/media
</th> </tr>
<tr>
<td>
**Discoverability and naming conventions**
</td> </tr>
<tr>
<td>
_**Metadata provision** _
There is no metadata available.
</td> </tr>
<tr>
<td>
_**Naming conventions** _
Each snap fit assembly process is labeled indicating the order of experimental
execution.
</td> </tr>
<tr>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access** _
Open
</td> </tr>
<tr>
<td>
_**Reason for restricting access (if so)** _
No restriction
</td> </tr>
<tr>
<td>
_**Access provision** _
A web portal has been created by CERTH on the Zenodo platform for data
management, providing a description of the dataset as well as links to a
download section.
</td> </tr>
<tr>
<td>
_**Software to access data set** _
Any software that can process .mat files, such as Matlab, R, or Python.
</td> </tr>
<tr>
<td>
**Archiving, preservation and re-usability**
</td> </tr>
<tr>
<td>
_**Duration of preservation** _
Data will be preserved for at least 2 years after the end of the project.
</td> </tr>
<tr>
<td>
_**Repository of preservation** _
The dataset is preserved on Zenodo as well as on CERTH servers and is
available for download. The portal is equipped with authentication mechanisms
to record the identity of the persons/organizations that download the
dataset, as well as the purpose of its use.
</td> </tr>
<tr>
<td>
_**Cost of preservation** _
</td> </tr>
<tr>
<td>
Storage space of approximately 3.4MB is required and there are no costs
associated with its preservation.
</td> </tr>
<tr>
<td>
_**Quality assurance** _
The available dataset has been validated by CERTH and is included in
publications currently under review.
</td> </tr> </table>
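A minimal sketch of loading one pair of force-profile and label files with SciPy follows; the file names are hypothetical and should be checked against the dataset's own 8-file naming:

```python
# Sketch: loading snap-fit force profiles and labels from .mat files.
# File names are hypothetical; the dataset ships 8 files as described above.
from scipy.io import loadmat

profiles = loadmat("cantilever_human_profiles.mat")
labels = loadmat("cantilever_human_labels.mat")

# loadmat returns a dict; skip the '__header__'-style bookkeeping keys
print([k for k in profiles if not k.startswith("__")])
print([k for k in labels if not k.startswith("__")])
```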
<table>
<tr>
<th>
**Dataset reference and name**
</th> </tr>
<tr>
<td>
**DS.05.CERTH.ContactEvaluationData**
</td> </tr>
<tr>
<td>
**Dataset description**
</td> </tr>
<tr>
<td>
_**General Description** _
Dataset generated by logging wrench forces of the robot’s F/T sensor in
various contact configurations between the assembly parts.
</td> </tr>
<tr>
<td>
_**Origin of Data** _
Device type: F/T sensor. Wrench sensor messages recorded with ROS
(geometry_msgs/WrenchStamped). TF data between the sensor link and the robot
base_link is included.
</td> </tr>
<tr>
<td>
_**Relation to project objectives** _
Objective (1), WP5: To maintain contact stability.
</td> </tr>
<tr>
<td>
_**To whom it would be useful** _
This dataset will be useful for developing center-of-pressure estimation algorithms.
</td> </tr>
<tr>
<td>
_**Type and format** _
ROS bag files and txt formats for the annotation.
</td> </tr>
<tr>
<td>
_**Expected size** _
The volume of data is estimated at approximately 1 MB per bag file, and there
are around 18 bag files for each one of the 3 assemblies, so the total size
is around 50 MB.
</td> </tr>
<tr>
<td>
_**Similar Data sets** _
No similar datasets found
</td> </tr>
<tr>
<td>
**Discoverability and naming conventions**
</td> </tr>
<tr>
<td>
_**Metadata provision** _
None
</td> </tr>
<tr>
<td>
_**Naming conventions** _
Each bag file is named using the type of the assembly along with an integer id
and the timestamp of when the recording took place.
</td> </tr>
<tr>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access** _
Open
</td> </tr>
<tr>
<td>
_**Reason for restricting access (if so)** _
No restriction
</td> </tr>
<tr>
<td>
_**Access provision** _
A web portal has been created by CERTH on the Zenodo platform for data
management, providing a description of the dataset as well as links to a
download section.
</td> </tr>
<tr>
<td>
_**Software to access data set** _
The Robot Operating System (ROS).
</td> </tr>
<tr>
<td>
**Archiving, preservation and re-usability**
</td> </tr>
<tr>
<td>
_**Duration of preservation** _
Data will be preserved for at least 2 years after the end of the project.
</td> </tr>
<tr>
<td>
_**Repository of preservation** _
The dataset is preserved on Zenodo as well as on CERTH servers and is
available for download. The portal is equipped with authentication mechanisms
to record the identity of the persons/organizations that download the
dataset, as well as the purpose of its use.
</td> </tr>
<tr>
<td>
_**Cost of preservation** _
A USB disk drive (approximately 16GB) has been allocated for the dataset.
There are no costs associated with its preservation.
</td> </tr>
<tr>
<td>
_**Quality assurance** _
The available datasets have been validated by CERTH and have been used for
SARAFun’s contact evaluation.
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS.01.ULUND.TransientDetection**
</td> </tr>
<tr>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**General Description** _
Dataset used for evaluation of a recurrent neural network (RNN) for
recognition of transients, in order to detect events during robotic assembly.
Inputs are robot joint torque data. Outputs are probabilities that the event
is occurring, as estimated by the RNN.
</td> </tr>
<tr>
<td>
_**Origin of Data** _
ABB YuMi robot. Joint torque measurements on the right arm with seven degrees
of freedom.
</td> </tr>
<tr>
<td>
_**Relation to project objectives** _
Objective 1: To develop a bi-manual robot that will be capable of learning
the assembly of two parts from human demonstration. During assembly, the
robot must detect key events to determine when to switch between sub-tasks.
Not all assembly robots are equipped with force sensors, hence the
sensor-less approach in which joint torques are used.
</td> </tr>
<tr>
<td>
_**To whom it would be useful** _
Robot engineers and researchers who have to create and test transient
detection algorithms, for instance using statistical machine learning.
</td> </tr>
<tr>
<td>
_**Type and format** _
There are 50 trials in total. 50 time series consist of input data, and
another 50 represent the output. These are stored in plain .txt format.
</td> </tr>
<tr>
<td>
_**Expected size** _ 10 MB.
</td> </tr>
<tr>
<td>
_**Similar Data sets** _
The MNIST dataset is similar in the sense that it has the purpose of
evaluating machine learning algorithms.
http://yann.lecun.com/exdb/mnist/
</td> </tr>
<tr>
<td>
**Discoverability and naming conventions**
</td> </tr>
<tr>
<td>
_**Metadata provision** _
</td> </tr> </table>
<table>
<tr>
<th>
Input consists of time series of robot joint torques on the arm side in Nm,
with seven channels; one for each robot joint. The output consists of
estimated transient probability in one dimension. The sampling frequency is
250 Hz.
</th> </tr>
<tr>
<td>
_**Naming conventions** _ trq{i}.txt denotes input time series number {i};
snaplog{i} denotes output time series number {i}.
</td> </tr>
<tr>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access** _
Open
</td> </tr>
<tr>
<td>
_**Reason for restricting access (if so)** _ Not applicable.
</td> </tr>
<tr>
<td>
_**Access provision** _
A web portal has been created by CERTH on the Zenodo platform for data
management, providing a description of the dataset as well as links to a
download section.
</td> </tr>
<tr>
<td>
_**Software to access data set** _
Matlab, Python, Julia, or similar programming tools are required for data
visualization.
</td> </tr>
<tr>
<td>
**Archiving, preservation and re-usability**
</td> </tr>
<tr>
<td>
_**Duration of preservation** _
Data will be preserved for at least 2 years after the end of the project.
</td> </tr>
<tr>
<td>
_**Repository of preservation** _
The dataset is preserved on Zenodo as well as on CERTH servers and is
available for download. The portal is equipped with authentication mechanisms
to record the identity of the persons/organizations that download the
dataset, as well as the purpose of its use.
</td> </tr>
<tr>
<td>
_**Cost of preservation** _
The dataset is relatively small in size in the machine learning context. There
are no costs associated with the preservation.
</td> </tr>
<tr>
<td>
_**Quality assurance** _
</td> </tr> </table>
The dataset was included in the peer-reviewed, accepted paper _Detection and
Control of Contact Force Transients in Robotic Manipulation without a Force
Sensor_, to be presented at ICRA, Brisbane, May 2018.
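Given the naming convention above, a minimal sketch of loading the 50 paired trials with NumPy follows; the output files are assumed here to carry a .txt extension, matching the stated plain-text format:

```python
# Sketch: loading the paired input/output time series for all 50 trials.
# Inputs: 7 joint-torque channels sampled at 250 Hz; outputs: estimated
# transient probability. The .txt extension on output files is assumed.
import numpy as np

trials = []
for i in range(1, 51):
    torques = np.loadtxt("trq%d.txt" % i)     # input time series number i
    events = np.loadtxt("snaplog%d.txt" % i)  # output time series number i
    trials.append((torques, events))
```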
<table>
<tr>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS.01.UNIBI.TactileData**
</td> </tr>
<tr>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**General Description** _
Tactile data for slip detection experiments. Various objects are held by two
KUKA robots between two tactile sensors with different initial forces and
released to create slippage events.
</td> </tr>
<tr>
<td>
_**Origin of Data** _
Two Myrmex sensors attached to KUKA LWR 4 arms.
</td> </tr>
<tr>
<td>
_**Relation to project objectives** _
Objective 4: To develop strategies to improve and maintain grasp stability
for industrial grippers.
</td> </tr>
<tr>
<td>
_**To whom it would be useful** _
Comparison of different slip detection algorithms.
</td> </tr>
<tr>
<td>
_**Type and format** _
Tactile data is recorded as a ROS sensor_msgs/Image stream at 1 kHz per sensor.
</td> </tr>
<tr>
<td>
_**Expected size** _
230 MB compressed, 2.4 GB uncompressed.
</td> </tr>
<tr>
<td>
_**Similar Data sets** _
No similar datasets have been found
</td> </tr>
<tr>
<td>
**Discoverability and naming conventions**
</td> </tr>
<tr>
<td>
_**Metadata provision** _
ROS .bag metadata, e.g. timestamps.
</td> </tr>
<tr>
<td>
_**Naming conventions** _
ROS naming convention.
</td> </tr>
<tr>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access** _ Open
</td> </tr>
<tr>
<td>
_**Reason for restricting access (if so)** _
None
</td> </tr>
<tr>
<td>
_**Access provision** _
A web portal has been created by CERTH on the Zenodo platform for data
management, providing a description of the dataset as well as links to a
download section.
</td> </tr>
<tr>
<td>
_**Software to access data set** _
Standard ROS tools and image processing software (e.g. OpenCV).
</td> </tr>
<tr>
<td>
**Archiving, preservation and re-usability**
</td> </tr>
<tr>
<td>
_**Duration of preservation** _
Data will be preserved for at least 2 years after the end of the project.
</td> </tr>
<tr>
<td>
_**Repository of preservation** _
The dataset is preserved on Zenodo as well as on CERTH servers and is
available for download. The portal is equipped with authentication mechanisms
to record the identity of the persons/organizations that download the
dataset, as well as the purpose of its use.
</td> </tr>
<tr>
<td>
_**Cost of preservation** _
A USB disk drive (approximately 16GB) has been allocated for the dataset.
There are no costs associated with its preservation.
</td> </tr>
<tr>
<td>
_**Quality assurance** _
The dataset is validated with the standards set by the SARAFun consortium and
through various trials with the SARAFun system.
</td> </tr> </table>
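A minimal sketch of turning the recorded tactile image stream into NumPy arrays follows; the bag name is hypothetical, the element dtype depends on each message's encoding field, and the reshape assumes rows without padding:

```python
# Sketch: converting Myrmex tactile frames (sensor_msgs/Image @ 1 kHz)
# from a bag file into NumPy arrays. Bag name is hypothetical.
import numpy as np
import rosbag

with rosbag.Bag("slip_trial_01.bag") as bag:
    for topic, msg, t in bag.read_messages():
        if msg._type == "sensor_msgs/Image":
            # dtype must match msg.encoding; uint8 is assumed here,
            # and rows are assumed to carry no padding bytes
            frame = np.frombuffer(msg.data, dtype=np.uint8)
            frame = frame.reshape(msg.height, msg.width, -1)
            total = int(frame.sum())  # e.g. a crude overall-pressure signal
```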
<table>
<tr>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS.01.ABB.ExperimentalVerification_GraspQuality**
</td> </tr>
<tr>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**General Description** _
Dataset used for measuring the grasp quality of automatically designed
fingers for industrial robots.
</td> </tr>
<tr>
<td>
_**Origin of Data** _ The following equipment is used to measure the
resistance force and torques of the fingers designed by the GAFD, MDF and
eGrip methods:
* Torque/force sensor (MAGTROL SA – TMB 306/411) measures the torque resistance with analog voltage signal as output.
* Analog-to-digital converter (PicoScope 2000) that converts the analog signal from the torque sensor and transfers it to a laptop through a USB connection.
* Spring: adjusts the component through a pull force (used only for the force experiment). The spring is attached between the component and the sensor in order to give a certain elasticity to the pull force and prevent impact forces.
* Cables are used to attach the component to the sensor.
</td> </tr>
<tr>
<td>
_**Relation to project objectives** _
Objective 1: To automate the design process of fingers for industrial robot
grippers.
</td> </tr>
<tr>
<td>
_**To whom it would be useful** _
This dataset will be useful to measure the grasp quality of fingers.
</td> </tr>
<tr>
<td>
_**Type and format** _
The data will be available in .csv format, readable by Microsoft Excel.
</td> </tr>
<tr>
<td>
_**Expected size** _
The volume of data is estimated at approximately 1.5MB.
</td> </tr>
<tr>
<td>
_**Similar Data sets** _
</td> </tr>
<tr>
<td>
**Discoverability and naming conventions**
</td> </tr>
<tr>
<td>
_**Metadata provision** _
There is no metadata available
</td> </tr>
<tr>
<td>
_**Naming conventions** _
Each experiment iteration is labeled using “combination” followed by an
integer indicating the order of execution.
</td> </tr>
<tr>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access** _
Open
</td> </tr>
<tr>
<td>
_**Reason for restricting access (if so)** _
No restriction
</td> </tr>
<tr>
<td>
_**Access provision** _
A web portal has been created by CERTH on the Zenodo platform for data
management, providing a description of the dataset as well as links to a
download section.
</td> </tr>
<tr>
<td>
_**Software to access data set** _
Commonly available tools and software libraries for enabling reuse of the
dataset (e.g. Excel and Notepad).
</td> </tr>
<tr>
<td>
**Archiving, preservation and re-usability**
</td> </tr>
<tr>
<td>
_**Duration of preservation** _
Data will be preserved for at least 2 years after the end of the project.
</td> </tr>
<tr>
<td>
_**Repository of preservation** _
The dataset is preserved on Zenodo as well as on CERTH servers and is
available for download. The portal is equipped with authentication mechanisms
to record the identity of the persons/organizations that download the
dataset, as well as the purpose of its use.
</td> </tr>
<tr>
<td>
_**Cost of preservation** _
A USB disk drive (approximately 16 Gigabyte) will be allocated for the
dataset. There are no costs associated with its preservation.
</td> </tr>
<tr>
<td>
_**Quality assurance** _
The dataset has been used in a peer-reviewed published paper entitled
“Experimental verification of design automation methods for robotic finger”.
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS.01.TECNALIA.Human_Performance_of_Bimanual_Assembly**
</td> </tr>
<tr>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**General Description** _
Recording of experiments in which volunteer human subjects performed a sliding
insertion task using instrumented objects to measure the kinematics and
interaction forces during unimanual and bimanual manipulation.
</td> </tr>
<tr>
<td>
_**Origin of Data** _
Data were acquired using custom instrumented objects that included infrared
markers for 3D motion tracking by a CodaMotion tracking system and interaction
forces measured by OptoForce 6 d.o.f. force/torque sensors.
</td> </tr>
<tr>
<td>
_**Relation to project objectives** _
Objective 5: To transfer to the robot, knowledge about human sensorimotor
performance during assembly.
</td> </tr>
<tr>
<td>
_**To whom it would be useful** _
* Researchers interested in developing biomimetic control policies for assembly;
* Researchers interested in identifying human behavior for teaching by demonstration;
* Researchers interested in studying human sensorimotor behavior.
</td> </tr>
<tr>
<td>
_**Type and format** _
* Raw data in the form of 3D marker positions and force/torque values;
* Processed data with computed object pose and wrench.
</td> </tr>
<tr>
<td>
_**Expected size** _
Typical size for an experiment with 5-10 subjects: 0.5 – 1 GB
</td> </tr>
<tr>
<td>
_**Similar Data sets** _
These data sets are rather unique in their details, i.e. with respect to the
assembly tasks that are studied and the specific file formats. But the general
nature of the dataset is similar to many datasets collected by research
laboratories centered on the study of human motor control.
</td> </tr>
<tr>
<td>
**Discoverability and naming conventions**
</td> </tr> </table>
<table>
<tr>
<th>
_**Metadata provision** _
Metadata includes alignment information from the CodaMotion tracking system
</th> </tr>
<tr>
<td>
_**Naming conventions** _
Data is stored in an anonymous fashion, preventing the association of a given
dataset to an individual human volunteer.
Data from a given individual are grouped under a common filename tag, e.g.
user01, user02, etc.
Data are stored in directory structures denoted ‘raw’ for the original
recording files intrinsic to each data measurement system and ‘splitted’
corresponding to data that have been aligned across sensors and split into
individual trials.
Data are further split according to experiment conditions:
* BI – bimanual, impaired vision
* BP – bimanual, normal vision
* UI – unimanual, impaired vision
* UIH – unimanual, impaired vision, haptic information
* UP – unimanual, normal vision
* UPH – unimanual, normal vision, haptic information
</td> </tr>
<tr>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access** _
Embargoed
</td> </tr>
<tr>
<td>
_**Reason for restricting access (if so)** _
The dataset is reserved to researchers involved in the project until the first
scientific report is published in a peer-reviewed journal. It will then be
released for public access.
</td> </tr>
<tr>
<td>
_**Access provision** _
A web portal has been created by CERTH on the Zenodo platform for data
management, providing a description of the dataset as well as links to a
download section.
</td> </tr>
<tr>
<td>
_**Software to access data set** _
Preprocessed (“splitted”) data are recorded in ASCII text files for universal
access.
Raw data from the CodaMotion tracking system, in proprietary .mdf format, are
also available.
</td> </tr>
<tr>
<td>
**Archiving, preservation and re-usability**
</td> </tr>
<tr>
<td>
_**Duration of preservation** _
2 years past the publication of the first peer-reviewed scientific report.
</td> </tr>
<tr>
<td>
_**Repository of preservation** _
The dataset is preserved on Zenodo as well as on CERTH servers and is
available for download. The portal is equipped with authentication mechanisms
to record the identity of the persons/organizations that download the
dataset, as well as the purpose of its use.
</td> </tr>
<tr>
<td>
_**Cost of preservation** _
A USB disk drive (approximately 16 Gigabyte) will be allocated for the
dataset. There are no costs associated with its preservation.
</td> </tr>
<tr>
<td>
_**Quality assurance** _
Datasets are provided “as is”, but validation of the datasets is assured
through the publication of peer-reviewed reports in the international
scientific literature.
</td> </tr> </table>
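A minimal sketch of grouping the per-trial files by the condition codes listed under the naming conventions above follows; the glob pattern and the code-as-prefix assumption are hypothetical illustrations of the stated directory structure:

```python
# Sketch: grouping 'splitted' trial files by experiment-condition code.
# The path pattern and file-name prefix convention are assumptions.
import glob
import os

CONDITIONS = ("BI", "BP", "UI", "UIH", "UP", "UPH")

by_condition = {code: [] for code in CONDITIONS}
for path in glob.glob("splitted/user*/*.txt"):
    prefix = os.path.basename(path).split("_")[0]
    if prefix in by_condition:
        by_condition[prefix].append(path)
```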
# DISSEMINATION AND EXPLOITATION OF OPEN RESEARCH DATA
Data constitutes a strong asset of the SARAFun project, since the several
components of the system developed and tested in real environments throughout
the project have led to the production of a considerable volume of various
datasets. On top of that, considerable new applied knowledge has been produced
during the project, captured in the several SARAFun reports and scientific
publications (ANNEX I).
The consortium believes firmly in the concepts of open science and the large
potential benefits the European innovation and economy can draw from allowing
reusing research data at a larger scale. By ensuring that the project’s
results are used by other research stakeholders, we will stimulate the
continuity and transfer of SARAFun outputs to further research and other
initiatives, allowing others to build upon, benefit from and be influenced by
them.
To this end, SARAFun participates in the **Open Research Data Pilot (ORD)**
launched by the European Commission along with the Horizon 2020 programme. In
this context, certain data produced by the project will be published with open
access – though this objective will obviously need to be balanced with IPR and
data privacy principles.
## SARAFUN OPEN RESEARCH DATA
The main openly exploitable data assets of the project take the following
forms:
* Open datasets;
* Public deliverables;
* Scientific publications.
### Open Datasets
Throughout the SARAFun development period, various data has been generated to
aid in the development of different modules of the system or in the creation
of scientific publications. This data was generated by multiple sources, some
of them listed below:
* Recordings from RGB and Depth sensors (images, video);
* Questionnaire responses;
* Measurements on the robot;
* Force sensor data;
Such data can be anonymised and shared with open access in the form of
statistics, which could be analysed for evaluating algorithms or similar
systems and possibly extracting knowledge from them. Nearly every dataset is
accompanied by metadata (e.g. type, xml, object), which could support
multiple kinds of analysis on the generated data.
### Public Deliverables
The project has produced and updated more than 40 public reports which
incorporate public data and knowledge produced and integrated during the
3-year duration of the grant. This knowledge revolves around multiple research
fields and disciplines, such as:
* End-user needs analysis;
* Industrial application scenarios/use cases;
* Robotic systems architecture;
* HRI interfaces;
* User experience optimization;
* Semantics modelling;
* Data aggregation/integration techniques;
* Evaluation methodologies;
* Dissemination and exploitation of results; etc.
### Scientific Publications
Multiple open access scientific publications have been produced in the
framework of the project, published either in conferences or relevant
journals/books. These publications summarize main achievements of the project
that can be further exploited by the scientific community.
## OPEN DATA DISSEMINATION PLATFORMS
Visibility of the above-mentioned assets is key to allowing other
stakeholders to be inspired by the project and re-use the produced data and
knowledge, so as to fuel the open data economy. To ensure visibility of open
SARAFun resources, several platforms have been employed by the team, where
other researchers and the general public can find information on the
project's results and also download the project's data and documents. These
platforms are listed below:
### Zenodo
Zenodo is a widely used research data repository, allowing research
stakeholders to search and retrieve open data uploaded by other researchers.
The uploaded datasets can be accessed by anyone (open access) and the
project is provided with a dissemination platform. The project team ensures
that open project resources are regularly uploaded on Zenodo, such as public
deliverables, scientific papers and datasets.
**Figure 1. SARAFun Zenodo page**
**Figure 2. The Zenodo page of the first dataset of CERTH**
**Figure 3. The Zenodo page of the first dataset of ULUND**
### The OpenAIRE platform
Dissemination and exploitation of the project’s open data is supported through
the EC’s OpenAIRE platform, where visitors can access all types of SARAFun
data, searching by various keywords and metadata. Zenodo is linked with the
OpenAIRE platform and every uploaded dataset and publication can be accessed
through it.
**Figure 4. The OpenAIRE platform**
# CONCLUSIONS
The present report constitutes the final version of the SARAFun Data
Management Plan and provides an updated description of the datasets produced
throughout the project, the strategy put in place for their storage,
protection and sharing, as well as the infrastructure implemented to
efficiently manage them. In addition, it presents the project's measures for
ensuring visibility, sustainability and dissemination of the SARAFun open
research data.
Throughout the project, the consortium needed to manage a large number of
datasets, collected by various means, i.e. sensors, cameras, manual inputs in
robotic systems and direct interactions with users (e.g. interviews and
questionnaires). Almost all the project partners have become SARAFun data
owners and/or producers. Similarly, all the technical work packages of the
project produced data. All datasets have been handled considering the main
data security and privacy principles, respecting also the partners' IPR
policies.
As part of the Open Research Data Pilot (ORD), the project has taken measures
to promote the open data and knowledge produced by the project. Interested
stakeholders, such as researchers or industry actors, will be able to access
open resources generated by the project, through various platforms, even
beyond the project’s duration. This way, sustainability of the SARAFun
outcomes will be fostered. However, particular attention needs to be paid to
ensuring that the data made openly available violates neither the IPR of the
project partners, nor the regulations and good practices around personal data
protection. For this latter point, systematic anonymization of data is
necessary.
0839_RESLAG_642067.md
# 1 INTRODUCTION
In the Data Management Plan (DMP) we define the way data generated in the
RESLAG project is named, stored, classified and disseminated. This document
applies only to the technical data generated in the project; the deliverables
and papers published are excluded from it. For the rules and procedures
established regarding publications and deliverables, see the Project
Management Handbook (D1.1).
European Union funded projects must disseminate the results of the research
done during the project unless this goes against their legitimate interests.
On top of that, the RESLAG project signed up to the “Open Research Data
Pilot”, which implies a commitment to make, as far as possible, the data
generated during the project accessible and free for third parties. In order
to meet the requirements of the European Commission, this DMP follows the
“Guidelines on Open Access to Scientific Publications and Research Data in
Horizon 2020” (see Annex II).
In order to ensure the accessibility and intelligibility of the data that will
be generated during the RESLAG project, we have designed a DMP that will apply
through the whole duration of the activities and until the last update of the
data.
The following terms will be used through the whole document:
* **Data set:** A data set is a collection of data. In the context of this document, it should be understood as aggregated data that can be analysed as a whole and has a conclusive and concrete result.
* **Data sheet or Fact sheet:** A sheet that summarizes the characteristics of a data set so that it can be read by anyone for a quick understanding of the content of the data, there is a template for this Fact sheet (see Annex I).
* **Metadata:** In the context of this document metadata is organized information labelling a data set and encoded in the code of the websites in order to facilitate discovery and reuse of the information by third parties.
* **Underlying data:** In the context of this document, the underlying data is the data used to reach conclusions published in a paper.
* **Embargo period:** In academic publishing, an embargo is a period during which access to academic journals is not allowed to users who have not paid for access. The purpose of this is to protect the revenue of the publisher.
# 2 METADATA STRATEGY AND STANDARDIZATION
The Metadata and Standardization of the data sets generated will have a key
role in making the information discoverable. The consistent implementation of
the following guidelines will make the search of information easier for the
interested community in order to find and use the data sets generated and
shared within the RESLAG project.
## 2.1 Metadata strategy
Metadata is organized information labelling data. Metadata has been
historically used in sectors in which archiving was a main concern, such as
libraries, administrations and the publishing industry. In the age of the
Internet, the main users of the metadata encoded in website code are search
engines (such as Google, Bing, Yahoo…). The search engines look for the words
entered by the user across millions of websites, and the metadata encoded in
the code of the websites helps the engines find the information the user is
looking for. In this way, metadata endorses discovery and reuse of the
information by third parties.
Since metadata helps to find information when someone is browsing on the
internet, an official metadata standard recommended by the European Union will
be used to ensure that as many people as possible can find the data sets
shared within the RESLAG project.
Three types of metadata will be defined for each data set:
1. Fact sheet information: As stated in the Section “3. Fact sheet information” of this document, for each data set the authors will have to fill a Fact sheet that allows anyone to quickly identify the content of the data set. As mentioned in the Section “3.2 Data set Metadata”, all the information filled in that Fact Sheet can be used to include it in the encoded metadata of the website.
2. Common metadata: According to the “Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020” (see Annex II) regarding the research data generated, the beneficiaries of the grants should follow Article 29.3 of the Grant Agreement which states that the bibliographic metadata must be in a standard format and must include all of the following terms:
1. European Union (EU);
2. Horizon 2020;
3. Name of the project: Turning waste from steel industry into valuable low cost feedstock for energy intensive industry;
4. Acronym: RESLAG;
5. Grant number: 642067.
3. Specific metadata: The authors will have the option to choose up to 3 Keywords that they consider relevant for the data set and can be of frequent use if someone is searching for the kind of data contained on the data set.
Once the Fact sheet is filled in, it will be sent with the data set to the
website managers, who will use the information indicated by the authors to
complete the metadata of the data sets that are going to go public. Metadata
will not be used for those data sets that have been categorized in the Fact
sheet as “Restricted” (see Section “3.1.7 Sharing status” of this document).
## 2.2 Standardization
In order to make the information accessible for internal and external users
and according to the good practices for “Open data” free file formats such as
PDF, OpenOffice, PNG (portable network graphics) and SVG (scalable vector
graphics) will be prioritized when uploading information.
Regarding the names of the files: large research projects such as RESLAG can
generate hundreds of data files, so short, descriptive and consistent file
names will be key to making it easier to locate the information needed now
and in the future. Each data set file name will be built from the following
fields, joined by underscores:
* Acronym of the European project (RESLAG);
* Task in which the data set is generated (e.g. T1.2);
* Short name of the information contained in the file (15 characters max.);
* Date in YYYYMMDD format;
* Sequential document version, restarting for every different date (e.g. v01).
**Figure 2.1: Example of data set file version name:** RESLAG_T1.2_Data Management_20151223_v01
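A minimal sketch of composing a file name under this convention follows; the helper function name is hypothetical:

```python
# Sketch: building a data set file name per the RESLAG naming rules.
# The helper name is hypothetical.
from datetime import date
from typing import Optional

def reslag_filename(task: str, short_name: str, version: int,
                    day: Optional[date] = None) -> str:
    # e.g. reslag_filename("T1.2", "Data Management", 1, date(2015, 12, 23))
    # -> "RESLAG_T1.2_Data Management_20151223_v01"
    assert len(short_name) <= 15, "short name limited to 15 characters"
    day = day or date.today()
    return "RESLAG_%s_%s_%s_v%02d" % (task, short_name,
                                      day.strftime("%Y%m%d"), version)
```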
# 3 FACT SHEET INFORMATION
For each data set the researcher will fill in the Fact Sheet shown in Annex I.
The fields specified in that Fact Sheet should be filled according to the
following rules and recommendations.
## 3.1 Data set description
### 3.1.1 Reference
Each data set will have a reference generated by combining the name of the
project, the Work Package and Task in which it is generated, and a consecutive
number (15 characters maximum, for example: RESLAG_T1.0_01).
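A similarly minimal sketch (again, the helper is purely illustrative) for building such references and checking the length limit:

```python
def dataset_reference(task: str, number: int) -> str:
    """Build a data set reference such as RESLAG_T1.0_01 (15 characters max.)."""
    ref = f"RESLAG_{task}_{number:02d}"
    if len(ref) > 15:
        raise ValueError("reference exceeds the 15-character limit")
    return ref

print(dataset_reference("T1.0", 1))  # -> RESLAG_T1.0_01
```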
### 3.1.2 Description
An intelligible description of the data collected, understandable for people
who do not work directly on the project and independent from other data set
descriptions, so that it can be understood without having to go through every
data set (60 characters maximum).
### 3.1.3 Authors
The names of the Authors and their Entity will have to be completed.
### 3.1.4 Origin
The researchers will have to select the origin or origins of the data from the
following options:
* Observational data;
* Laboratory experimental data;
* Computer simulation;
* Review;
* Testing pilot data;
* Papers;
* Other (to be specified).
### 3.1.5 Nature
The researchers will have to select the nature of the data from the following
options:
* Documents (text, Word), spreadsheets;
* Laboratory notebooks, field notebooks, diaries;
* Questionnaires, transcripts, codebooks;
* Audiotapes, videotapes;
* Photographs, films;
* Test responses;
* Slides, artefacts, specimens, samples;
* Collection of digital objects acquired and generated during the process of research;
* Database contents (video, audio, text, images);
* Models, algorithms, scripts;
* Contents of an application (input, output, log files for analysis software, simulation software, schemas);
* Methodologies and workflows;
* Standard operating procedures and protocols;
* Other (to be specified).
### 3.1.6 Scale
The measurement scale of the data must be identified, for example: mm, ºC,
W/(m·K),…
### 3.1.7 Sharing status
The researchers will have to select the sharing status from the following
options:
* Open: openly available to the public.
* Embargo: It will become public when the embargo period applied by the publisher is over. If categorized as embargo, the end date of the embargo period must be given in DD/MM/YYYY format.
* Restricted: Only for project-internal use.
### 3.1.8 Potential interested groups
The researchers will have to select one or more potentially interested groups
from the following options:
* General public;
* Energy storage researchers;
* Material researchers;
* Green energy researchers;
* Technical laboratory methodology researchers;
* Pilot methodology researchers;
* Industry;
* Public entities;
* Computational model developers;
* System designers;
* Developers and constructors;
* Other (to be specified).
### 3.1.9 Whether it underpins a scientific publication
The researchers will have to answer “Yes” or “No”; if the answer is “Yes”,
they will have to give the reference and date of the publication in the
following format: _“Name & Surname of the researcher; Name & Surname of the
researcher. Name of the paper. NAME OF THE PUBLICATION. DD/MM/YYYY.
ISSN XXXX-XXXX”_.
## 3.2 Data set metadata
In order to make the data sets from the RESLAG project easier to find, the
metadata encoded in the websites that store RESLAG data will be defined in as
standard and consistent a way as possible.
### 3.2.1 Common metadata
According to the “Guidelines on Open Access to Scientific Publications and
Research Data in Horizon 2020” (see Annex II) regarding the research data
generated, the beneficiaries of the grants should follow Article 29.3 of the
Grant Agreement, which states that the bibliographic metadata must be in a
standard format and must include all of the following terms:
* European Union (EU);
* Horizon 2020;
* Name of the project: Turning waste from steel industry into valuable low cost feedstock for energy intensive industry;
* Acronym: RESLAG;
* Grant number: 642067.
### 3.2.2 Specific metadata
All the information provided in the Fact Sheet that is specific to each data
set can be included in the metadata. In addition, the authors of the data set
will have the possibility to include up to 3 Keywords related to the data set
(maximum of 25 characters in total).
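As an illustration of how the common and specific metadata could be encoded in a website's HTML head (the tag names below are generic HTML conventions chosen by us; the exact encoding used on the RESLAG website may differ):

```python
# Sketch: render the common metadata terms and up to 3 keywords as HTML <meta> tags.
COMMON = {
    "funder": "European Union (EU), Horizon 2020",
    "project": ("Turning waste from steel industry into valuable "
                "low cost feedstock for energy intensive industry"),
    "acronym": "RESLAG",
    "grant": "642067",
}

def meta_tags(keywords):
    if len(",".join(keywords)) > 25:
        raise ValueError("keywords must total 25 characters at most")
    tags = [f'<meta name="{name}" content="{value}">' for name, value in COMMON.items()]
    tags.append(f'<meta name="keywords" content="{",".join(keywords)}">')
    return "\n".join(tags)

print(meta_tags(["steel slag", "heat recovery"]))
```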
# 4 DATA SHARING
As stated in Article 29.1 of the Grant Agreement, in all European Union funded
projects _“Unless it goes against their legitimate interests, each beneficiary
must — as soon as possible —‘disseminate’ its results by disclosing them to
the public by appropriate means”_.
On top of that, the RESLAG project voluntarily, and with the agreement of all
the partners, signed up to the “Open Research Data Pilot”. According to
Article 29.3 of the Grant Agreement, the beneficiaries of the grant for the
RESLAG project must _“deposit in a research data repository and take measures
to make it possible for third parties to access, mine, exploit, reproduce and
disseminate — free of charge for any user — the data, including associated
metadata, needed to validate the results presented in (i) scientific
publications as soon as possible; (ii) other data, including associated
metadata”_.
The main reason to establish a Data Management Plan is to ensure the
accessibility and intelligibility of the data that will be generated during
the RESLAG project. The project team will store the information where it can
be easily found and will establish the access procedures needed to keep it
both safe and accessible.
## 4.1 Access procedures
Following the Open Access guidelines, as much information as possible will be
freely shared in order to enable other scientific teams across Europe to use
the output of the research carried out by the RESLAG team. This aim will be
balanced against the need to protect the interests in the results obtained
during the project.

The coordination team will assess the nature of the data under strict criteria
and will advise on which data will be shared in the public section:
* Data sets containing key information that could be patented for commercial or industrial exploitation will be excluded from public distribution.
* Data sets containing key information that could be used by the research team for publications will not be shared until the embargo period applied by the publisher is over; the data sets used to build the papers are generally called “underlying data”. The RESLAG project team commits to trying to shorten those embargo periods as much as possible. According to the detailed legal requirements on Open Access to publications and “underlying” data contained in Article 29.2 of the Grant Agreement, in order to comply with the requirements:
  * An electronic machine-readable copy of the published version of the “underlying data” of the publications will be deposited in a repository for scientific publications within at most 6 months of publication and, if possible, in the published format.
  * The project team will ensure that the bibliographic metadata of these data sets at least includes:
    * The terms ["European Union (EU)" and "Horizon 2020"].
    * The name of the action, acronym and grant number.
    * The publication date, the length of the embargo period if applicable, and a persistent identifier.
## 4.2 Repository
The research data from this project will be deposited both in:
* _A dedicated website for the project_: The domain of the website will be **www.reslag.eu**. The RESLAG website will be built with the “WordPress” content management system so that selected data users can contribute site content over time, depending on the access profile given to them.
* _An open access repository_: Best practices recommend using an institutional open repository to ensure that the data can be found by anyone. The data sets of the RESLAG project will be deposited in ZENODO (https://zenodo.org/). This is one of the free repositories recommended by the Open Access Infrastructure for Research in Europe (OpenAIRE) on their website, and it is an open repository for all fields of science that allows any kind of data file format to be uploaded.

Both repositories can share research data in different ways, according to how the partners decide the data should be shared:

* _The dedicated website for the project_: Information can be shared on the website at two different levels:
  * A private access intranet for internal management of research data. Each participant in the project will have a username and a password, which will be mandatory to enter the intranet and access all the information shared there.
  * A public section for public access to final research data sets. As stated before in this document, a data set shall be understood as aggregated data that can be analysed as a whole and has a conclusive and concrete result; it will not include laboratory notebooks, partial data sets, preliminary analyses, drafts of scientific papers… All the information that it is decided to share publicly will have no access restriction.
* _An open access repository_: The same Website managers that post the data sets in the public section of the RESLAG website will simultaneously post them in the open access repository. ZENODO allows files to be uploaded under open, embargoed or restricted access:
  * Content deposited under an open status will be accessible to the general public.
  * Content deposited under an embargo status can be stored indicating the end date of the embargo, so that the repository keeps access restricted until the end of that period, after which the data becomes publicly available automatically.
  * Content deposited under a restricted status will only be accessible with the approval of the depositor of the original file.
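For illustration, a minimal sketch of creating such a deposit through ZENODO's REST API (based on ZENODO's public API documentation; the token and all metadata values are placeholders):

```python
import requests

TOKEN = "YOUR_ZENODO_TOKEN"  # placeholder personal access token
API = "https://zenodo.org/api/deposit/depositions"

# Create an empty deposition.
resp = requests.post(API, params={"access_token": TOKEN}, json={})
resp.raise_for_status()
deposition_id = resp.json()["id"]

# Attach metadata, including the sharing status: 'open', 'embargoed' or 'restricted'.
metadata = {
    "metadata": {
        "title": "RESLAG_T1.0_01 example data set",
        "upload_type": "dataset",
        "description": "Illustrative metadata only.",
        "creators": [{"name": "Surname, Name", "affiliation": "Entity"}],
        "access_right": "embargoed",
        "embargo_date": "2017-06-30",  # data becomes open automatically after this date
    }
}
resp = requests.put(f"{API}/{deposition_id}",
                    params={"access_token": TOKEN}, json=metadata)
resp.raise_for_status()
```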
## 4.3 Data sharing timeline
* Data will be created and stored in each participating entity's databases for the duration of the project.
* Data will be shared between partners through the private access intranet of the dedicated website for the duration of the project.
* During the project, as each data set is created it will be assessed and categorized as open, embargo or restricted by the owners of the content of the data set (to establish the ownership of the results of the research, see Grant Agreement, Article 26.1):
  * Open status:
    * RESLAG website: deposited in the public section within one month of completion.
    * ZENODO repository: deposited under open status within one month of completion.
  * Embargo status:
    * RESLAG website: not deposited in the public section until the embargo period expires; once it does, the data sets will be deposited in the public section within one month of the publication.
    * ZENODO repository: deposited under embargo status within one month of the publication.
  * Restricted status:
    * RESLAG website: deposited only in the intranet of the project.
# 5 STORE AND PRESERVATION
Once the project is finished, the data sets that could be used by other
scientific teams for the reconstruction and evaluation of the reported results
should be preserved for the long term. In line with best practices, several
copies will be stored:
## 5.1 The original
The original documents will be stored in the databases of the entities that
have created them.
## 5.2 The RESLAG website copy
The data sets uploaded to the public section of the dedicated website will be
available for public use at least for 6 months after the end of the project.
## 5.3 The ZENODO digital repository copy
As stated in Section “4. DATA SHARING” of this document, data sets will be
deposited in _http://www.zenodo.org/_. The data stored in ZENODO is kept in the
CERN Data Centre, and the repository will provide long-term management of the
data:
* Both data files and metadata are kept in multiple online replicas.
* Both data files and metadata are backed up to tape every night and replicated into multiple copies in the online system.
* CERN has considerable knowledge and experience in building and operating large scale digital repositories and a commitment to maintain this data centre to collect and store data as it grows over the next 20 years.
* In the highly unlikely event that ZENODO has to close operations, they will migrate all content to other suitable repositories, and they guarantee that all citations and links to ZENODO resources will not be affected.
# CONCLUSION
As stated before in this document, Article 29.1 of the Grant Agreement
requires that in all European Union funded projects, _“Unless it goes against
their legitimate interests, each beneficiary must — as soon as possible
—‘disseminate’ its results by disclosing them to the public by appropriate
means”_. In addition, the RESLAG project voluntarily signed up to the
“Open Research Data Pilot”, which according to Article 29.3 of the Grant
Agreement means that the beneficiaries of the grant for the RESLAG project
must _“deposit in a research data repository and take measures to make it
possible for third parties to access, mine, exploit, reproduce and disseminate
— free of charge for any user — the data, including associated metadata,
needed to validate the results presented in (i) scientific publications as
soon as possible; (ii) other data, including associated metadata”_.
In the present Data Management Plan we have set out the data management
policy that will be applied to all data sets generated during the RESLAG
project. By following the guidelines specified in the DMP, we expect to be
active contributors to the EU research community, enabling the reuse and
dissemination of the knowledge generated during the lifetime of the RESLAG
project.
That being said, the DMP is not a fixed document and we expect it to evolve
and gain precision. The DMP will be updated, if necessary, during the project
lifetime in order to:
* Update the information of the data sets that will be shared;
* Incorporate changes made in the Consortium Agreement regarding the data policy, if any;
* Incorporate changes made in the Data Management policy of the H2020, if any;
* Any other external changes.
0840_MINATURA 2020_642139.md
# 1 Introduction
MINATURA2020 is a complex project: a lot of different data and information
must be gathered, and additional data and information will be produced and
processed during the project. The data management plan shall help to obtain
the relevant data in time.

MINATURA2020 will rely strongly on secondary data that can be obtained from
the partners and the partners' networks, and from public institutions at
national and EU level that deal with systematic data collection in various
fields. Data collection will (for instance) refer to spatial data and to data
on quantifiable aspects of mineral resources, some of which are already
processed and some of which are not. Data gathering and processing will be the
responsibility of the individual WPs, namely to set a suitable framework,
identify the necessary data and relevant sources, and plan for obtaining new
data from the field. This kind of data collection refers (not exclusively) to
WP1, 2 and 3, whereas data and information processing is (not exclusively)
related to WP4 and 5.
In summary, WP1, 2 and 3 will gather most of the data, process it and deliver
information as input for the planning of WP4 and 5. For WP5, additional
information gathering will be necessary and will take the form of interviews,
secondary data processing and information derived from WP1, 2, 3 and 4.
_It is important to mention:_

The **data management (plan) will be an iterative process and will be
constantly updated** depending on several factors. For instance, getting in
touch with different data providers will take place throughout the whole
project.
# 2 Need for Data
The complexity of the topic requires different sets of data.
# 2.1 Kind of data / which context
* Spatial data
* Geological data
* Economical data
* Legal data
* Mining data
* Environmental data
* Data on stakeholder networks
* Data on infrastructure, marketing
* Status of area/deposit/mine
* etc.
_We can structure the different types of data as follows:_

1. Spatial data – coordinates of the mineral deposit (vertical: underlying and overlying layers; horizontal: x, y, z)

1a Geological data – geological knowledge: qualitative parameters
(geotechnical data for aggregates; geochemical data for ores: elements having
economic value, metal concentrations, ore genetics, ore deposit models),
Critical Raw Materials (CRM), categorization of resources and reserves

1b Land use data (high/good quality agriculture areas, high/good quality
forest areas, joint planning areas, industry zones, areas of settlements,
mineral management areas, Infrastructure II: roads, railways, power
lines/electricity, gas and oil pipelines, drainage systems)

1c ...

2. Socio-economic data (jobs, public hearings, GDP, Gross Value Added by sectors or mineral commodities, demand, supply, import-export data, primary/secondary ratio, investments, e.g. highways, railways)

2a Legal data

2b Environment – valorization/permission data (groundwater-dependent
ecosystems, water sources, surface water, air and noise pollution, nature
conservation including geological protection sites, RAMSAR, Natura2000,
National Parks, national ecological network, highly protected areas (“ex lege”
areas), caves, core areas, thresholds for pollution, landscape protection)

2c Cultural Heritage (UNESCO, national, archaeological, historical)
3. Other

4. “Information” on:
* State of the art of spatial planning in the individual countries
* Process of spatial planning and land use planning
* Relevant legal representative stakeholders
* Relevant non-legal representative stakeholders
* Other interest, pressure and lobbying groups
* Etc.
We can specify, for instance with regard to spatial data, the following:

* What spatial data is/will be available, i.e. on what minerals ((critical) metallic, industrial, construction minerals)?
* What is the spatial resolution?
* When will this be available?
  * For instance, from the Minerals4EU project (_www.minerals4eu.eu_), see below;
  * Some data are available from other projects but may be data at national level which are not publicly available or are confidential.
* What is the best way to share the data? Would the portal provide adequate access, then?
# 2.2 For which WP are data needed
Thematic WPs, i.e. WP1, WP2, WP3, WP4 and WP5.

Spatial data will be needed for WP1 and WP4; geologic, economic and legal data
for WP2 and WP3. The modelling of the land-use-conflict-free areas by ALTERRA
and partners, with case studies, will be important (WP1); while doing this, we
will create the rules to combine the maps – to relate the minerals to the land
use – and in task 1.3 we will confront these maps with potential future
scenarios for land use. Besides, iteration between WPs and learning loops
between practice, tests and theory will be important.
# 2.3 Which kind of level
All levels are relevant, i.e. from EU level down to local level, depending on
the WPs. WP1 focusses on EU and national level, whereas WP2 and WP3 consider
all levels. Regarding WP1, two workshops in Wageningen (Netherlands) are
planned (end of September and beginning of November 2015).
# 3 Data sources
There are different data sources/options.
# 3.1 EU/Commission (published data), regulatory or monitoring bodies
European Commission reports & policy documents; for instance: EUROSTAT and
RMIS (internal scientific data service).
# 3.2 EU-projects
Some examples are listed in section 4.2.
Many projects have been organised into a system in the Minerals4EU project
(_www.minerals4eu.eu_).
# 3.3 National /regional/local sources
Spatial data will be needed for WP1 and WP4; many spatial data sources are
listed in the WP1 inquiry for spatial data (MINATURA Dropbox), whereas
geologic, economic and legal data will mostly be needed for WP2 and WP3. This
group of data (national sources) will be collected/processed during WP1 and
WP2 (e.g. task 2.2, preparation of country reports). For instance, data
providers for Serbia will be:

* Ministry of Mining and Energy of Serbia
* Statistical yearbook of Serbia
* Statistical data of the Electric Power System of Serbia
* Agency for Environmental Protection of Serbia
* Data from significant producers of mineral resources
* Agency for Spatial Planning of Serbia
A particular problem with regard to the data collection and sources is that
different countries have different ministries and authorities handling data on
minerals; the method, target and purpose of the data collection are not
homogeneous. Within the MINATURA project there will be (necessary) discussions
on a database structure that is suitable to support the MDoPI concept.
# 3.4 Other options
Stakeholder involvement (policy makers, industry, …)
# 4 Management of Data
It is important for the project to get all the needed data in time from the
right sources. We need to distinguish between data collection within the
project and data ‘transfer’ from sources outside the consortium. All data have
to be comparable and compliant with the INSPIRE directive. In terms of
efficiency/resources, we want to avoid duplication of data searching (i.e. we
aim to use already existing data sources) and to implement existing data
sources, especially those generated by EU projects, which usually cover
several EU countries. This is also relevant in the sense that MINATURA shall
take a pan-European approach (as expressed by the Commission during the
kick-off meeting) rather than covering only the countries represented in the
consortium.
# 4.1 Data needed for different WPs
The WPs are interrelated, but we certainly need to distinguish between the
data (and the related WPs). In this regard, a separate **Data Matrix** related
to the different WPs has been prepared (Excel table) and will be used by the
MINATURA partners.
## _4.1.1 WP 1_
The objective of WP1 is to explore current and future land use competition
between mining and other land uses, based on existing methodologies and
approaches at EU and national level. By doing so, the basis for a concept and
methodology for defining and protecting the mineral deposits of public
importance can be developed (to be accomplished in WP2).
The (spatial) data used in WP1 come from existing datasets, one for each
country case study (to be collected in M2-M12). Spatial data and land use data
(if available) will be used and implemented.
## _4.1.2 WP 2_
The main objective of WP2 is to establish an appropriate mapping framework
based on detailed qualifying conditions for classifying “mineral deposits of
public importance” (MDoPI). The main scope/assessment criteria for the country
reports are national standards (e.g. the national minerals (planning) policy
framework): what is currently assessed, how it is reported (and in what
format), update frequencies, information on legal basics, procedures
concerning mining/minerals versus environmental restrictions, etc. The aim is
to determine how mineral deposits are considered in the partner countries,
including where each partner country is in the land use planning cycle.

Geological (resources/reserves), economic (GDP, mineral consumption) and
environmental information, land use plans etc. are needed at national
(regional) level (to be collected in M6-M15, preparation of country reports);
see also section 4.2.2 below.
## _4.1.3 WP 3_
The overall objective is to figure out how to incorporate the concept of
“mineral deposits of public importance” into the _national/regional_/EU
minerals (land use) planning policy framework. The idea is to explore and
define regional, national and EU-level regulatory measures for the
safeguarding of MDoPI (using the information from the baseline assessment in
task 2.2).

There is a need for ‘legal’ data/information (laws, regulations, permitting
procedures etc.) at national (regional) level (to be collected in M6-M15); see
also section 4.2.2 below.
## _4.1.4 WP 4_
WP4 is strongly interrelated with WP1.
The objective of WP4 is to test the developed methodology in selected partner
countries, taking into account different national policy scenarios and their
impacts to ensure robustness at all levels (local/regional, national and EU).
Data needed at national/regional level, i.e. spatial data, geological data
(mineral deposits), and feedback from all thematic WPs:

* spatial data (feedback/cooperation with WP1) (M12-M19);
* information on national policies (feedback from WP3 – questionnaires/country reports) (M18);
* lists of suggested potential protected areas in case study countries (on demand from partners, submission by e-mail/Dropbox) (M12-M19);
* if they exist: maps/portals of actual (protected) areas of mineral deposits in partner countries/regions (other EU projects, on demand from partners) (M12-M19);
* feedback on the created lists of protected areas that suit the safeguarding criteria in case study countries (feedback from WP5 workshops) (M25).

Data needed at EU level:

* selected safeguarding criteria (output of WP2) (M18)
## _4.1.5 WP 5_
The main objective of WP5 is to open up a dialogue with representatives of all
relevant stakeholders across the EU, from local, regional and national to EU
level – including civil society and the public, public administration, and
experts from science and industry on mineral deposits, land use and
development planning, mining and related legislation (particularly
permitting), and the relevant industries – in order to achieve a consensus on
mineral deposits of public importance (MDoPI) and support the development of
the related regulatory framework.

The collection/processing of data will be complemented and (further)
facilitated during these stakeholder meetings (first round of consultation
workshops: M12-M14; second round: M21-M23). For example, some data might not
be publicly available or may be confidential.
# 4.2 Data collection - How to approach sources
As mentioned in the introduction, _data management will be an iterative
process depending on several factors_. For instance, getting in touch with
different data providers will take place throughout the whole project.
Therefore, the data management plan (and data matrix) will be permanently
updated (and, in this sense, so will this document).
### 4.2.1 EU-level
Published sources such as EUROSTAT, dataset sources such as Corine Land Cover,
and the published results of EU projects can be used.
MINATURA has identified several important (finalized or ongoing) EU projects
whose deliverables might be valuable to assess and use. Relevant EU projects
for MINATURA are i.a. ProMine, MINVENTORY, Minerals4EU, EURARE and SNAP-SEE.
In many cases, important EU projects can be ‘approached’ via the MINATURA
partners and Advisory Board (AB) members (who were previously involved in
these projects). For example, Nikos Arvanitidis was project coordinator of
Minerals4EU and Daniel Cassard was its WP5 leader; both are MINATURA AB
members. Nikos Arvanitidis is also involved in EURARE and chairs the Mineral
Resources Expert Group of EuroGeoSurveys. IMA and UEPG were part of the AB of
MINVENTORY. Günter Tiess was project manager for SNAP-SEE.
Some examples are given below.

**ProMine**
The ProMine project can be approached via Daniel Cassard and Nikos Arvanitidis
(MINATURA AB members).

For instance, we received information and options for downloading the ProMine
database from Daniel Cassard. We were also advised to use the Excel file
downloadable from the ProMine Portal:
_http://geodata.gtk.fi/Promine/deposits_AllComoditiesBis.xls_. If we want to
integrate maps (ProMine maps of mineral potential, predictive maps, geology at
1:1.5M scale, geophysics) in a map viewer, we can use the following WMS/WFS
URL: _http://mapsrefrec.brgm.fr/wxs/promine/wp1ogc_.
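For illustration, a minimal sketch of querying that WMS endpoint from a script using the OWSLib library (the layer choice, bounding box and image size are placeholders, since we have not inspected the actual service):

```python
from owslib.wms import WebMapService

# Connect to the ProMine WMS endpoint and list the layers it advertises.
wms = WebMapService("http://mapsrefrec.brgm.fr/wxs/promine/wp1ogc", version="1.1.1")
for name, layer in wms.contents.items():
    print(name, "-", layer.title)

# Request a map image for the first advertised layer over a European bounding box.
first_layer = list(wms.contents)[0]
img = wms.getmap(layers=[first_layer], styles=[""],
                 srs="EPSG:4326", bbox=(-10.0, 35.0, 30.0, 60.0),
                 size=(800, 600), format="image/png")
with open("promine_layer.png", "wb") as f:
    f.write(img.read())
```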
**Minerals4EU**
The Minerals4EU project can be approached via Daniel Cassard.
The aim of the **Minerals4EU** project was to develop an EU Mineral
intelligence network structure delivering a web portal, a European Minerals
Yearbook and foresight studies (40 countries were involved in the project).
The network aims to provide data, information and knowledge on primary and
secondary mineral raw material flows, volumes, reserves and resources in
greater Europe, making a fundamental contribution to the European Innovation
Partnership on Raw Materials (EIP RM), seen by the Competitiveness Council as
key for the successful implementation of the major EU2020 policies.
Minerals4EU aims to establish the EU minerals intelligence network structure,
comprising European minerals data providers and stakeholders, and to transform
this into a sustainable operational service. Minerals4EU would therefore
contribute to and support decision making on the policy and adaptation
strategies of the Commission, as well as supporting the security of the EU's
resource and raw materials supply, by developing a network structure with
mineral information data and products.
The Minerals4EU tool is delivered as an INSPIRE-compatible infrastructure that
enables EU geological surveys and other partners to share mineral information
and knowledge, and stakeholders to find, view and acquire standardized and
harmonized georesource and related data. The target of the Minerals4EU project
is to integrate the available mineral expertise and information, based on the
knowledge base of member geological surveys and other relevant stakeholders,
in support of public policy-making, industry, society, communication and
education purposes at European and international levels. The project duration
was 2 years (September 2013 – August 2015).
_Overview of the connections between the data systems developed in EU projects
and the planned ERA-NET and Permanent Body (Daniel Cassard presentation,
MIN4EU conference, 25.08.2015, Brussels)._
Daniel Cassard informed us that the project team is currently completing the
work on the portal components and the data from national providers. Portrayals
of the M4EU database (i.e. parts of the database in Excel format, based on the
M4EU data model and allowing end users to see and assess the data) are part of
the tool.
**MINVENTORY** (Minventory website)
MINVENTORY can be approached via IMA.
The aim of the MINVENTORY project was to create a harmonised pan-European
statistical database on resource and reserve information related to primary
and secondary raw materials (including mining wastes, landfill stocks & flows
and in-use materials). A comprehensive, questionnaire-based report was
published recently, describing the current situation of the EU-28 and 13
neighbouring countries. The survey also covers the harmonisation issues in
three major topics:
* Policy, legislation and regulation
* Data quality and comparability
* Data infrastructure, provision and accessibility
The final deliverable of MINVENTORY is a roadmap which identifies bottlenecks
related to raw material (primary and secondary) resources, reserves, and
overall EU reporting in an INSPIRE-compliant format.
Data sources:

* Minventory official website: https://ec.europa.eu/growth/tools-databases/minventory/content/minventory and http://www.minventory.eu/
* Minventory Final Report, “Minventory: EU raw materials statistics on resources and reserves” (http://ec.europa.eu/DocsRoom/documents/9625/attachments/1/translations/en/renditions/native)
**SNAP-SEE**

SNAP-SEE can be approached via Günter Tiess and Zoltan Horvath (WP5 leader).
SNAP-SEE is relevant for MINATURA because land use planning approaches
(related to aggregates) were discussed there, i.e. how to include aggregates
priority zones in the land use planning framework.
The Sustainable Aggregates Planning in South East Europe (SNAP-SEE) project
was implemented under the 4th call in the South East Europe (SEE) Program. It
lasted from October 2012 to November 2014 and gathered 27 partners from 13 SEE
countries, namely Albania, Austria, Bosnia and Herzegovina (Herzegbosnian
Canton), Bulgaria, Croatia, Greece, Hungary, Italy (Autonomous Province of
Trento and Emilia Romagna Region), Montenegro, Romania, Serbia, Slovakia and
Slovenia, and Turkey. The SNAP-SEE project focused on developing and
disseminating tools for aggregates management and planning in the SEE. Its
primary objective was to develop a Toolbox for Aggregates Planning to support
national/regional, primary and secondary aggregates planning in SEE countries.
Further projects shall be taken into account:

* European Geological Data Infrastructure (EGDI)
* GeoSeas
* EuroGeosource
* OneGeology-Europe
* INTRAW – _International cooperation on Raw materials_ (started at the same time as MINATURA)
* COBALT – _"Contributing to Building of Awareness, Learning and Transfer of knowledge on sustainable use of raw materials"_ (start date: 2013-05-01, end date: 2015-04-30)
* EURARE – _Development of a sustainable exploitation scheme for Europe's Rare Earth ore deposits_
* EO-MINERS – _Earth Observation for Monitoring and Observing Environmental and Societal Impacts of Mineral Resources Exploration and Exploitation_ (start date: 2010-02-01, end date: 2013-10-31)
* FAME – _Flexible and Mobile Economic Processing Technologies_
### 4.2.2 National level
MINATURA is based on three pillars: 1) a bottom-up approach, 2) harmonisation,
and 3) real-life demonstration. In this regard we need sufficient information
and (reliable) data from national/regional down to local level (WP1, WP2, WP3,
WP4) – against the background of a pan-European approach.

For this exercise, the extensive network of the consortium needs to be
mobilised, which includes i.a. geological surveys, industry associations and
other data owners from across Europe. Collection and analysis will centre on
the relevant aspects necessary to supplement the available raw materials
data/information.

This analysis will be broken down into several parts/sectors (multisectoral
analysis), as it is important that _the relevant competence is responsible for
its specific research area_ (i.e. ‘minerals economy’, ‘geology’, ‘land use
planning’, ‘policy/legal’ etc.). Compatibility with EU standards will be taken
into account as well, whereby the Raw Materials Initiative (RMI) and the
European Innovation Partnership on raw materials are of particular importance.
# 4.3 Timing
It is necessary to start the data collection/processing/storing in a timely
manner, from the project's beginning (compare also section 4.1), especially in
two respects: a) collection of spatial and land use data (WP1/WP4), and b)
collection of other sets of data, e.g. resources/reserves, mineral economics,
legal data and mining plans (WP2/WP3).

Apart from that, we need to distinguish between a) the possibilities within
the MINATURA consortium and b) the options for the pan-European approach. With
the support of AB members we are trying to collect data from the remaining
countries, such as Germany, France and Finland. With regard to WP2/WP3 we
prepared a questionnaire (which was also forwarded to the MINATURA partners in
order to have the same format). Besides, we want to prepare a questionnaire in
a smaller format in order to approach stakeholders beyond our AB members. Data
collection (WP2/3) from the MINATURA partners needs to be done between July
and September 2015 (based on country reports). Starting in October 2015, we
also aim to approach our AB members and all other potential stakeholders in
Europe. We will discuss (and verify) the progress during the Lisbon workshop
at the end of October 2015, and further during the UK workshop in
February/March 2016 and the Scandinavian workshop in 2016. Besides, the
collection of data/information shall be improved and complemented through our
stakeholder meetings (2 stages), planned for 2016.

Finally, the ‘timing of data’ is of course also determined by the MINATURA
Gantt chart.
# 4.4 Availability of data and project results
All data and project results must be accessible to and intelligible for third
parties, and publicly available. Access to data will be enabled through the
official project website, www.minatura.eu; project results will be
disseminated through different channels and (stakeholder) networks.
# 5 Conclusions
Collection, processing and storing of data based on an appropriate data
management plan is of utmost importance for the success of the MINATURA2020
project.

The complexity of the topic requires different sets of data (in relation to
the WPs) and different approaches to data sources.

The intention of a pan-European approach is challenging and requires a
realistic ‘data management strategy’. Our aim is to consistently grow the
MINATURA network through our AB members, focussed workshops in Europe (e.g.
South versus North Europe) and other options (e.g. stakeholder
questionnaires).

Data must be available at the right time – according to our time and work
plan (Gantt chart, PERT chart) – and available on the MINATURA Dropbox.
0842_BRESAER_637186.md
# Objective
This deliverable presents the second version of the Data Management Plan (DMP)
and has been produced at M24.
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the consortium with regard to all
the datasets that will be generated by the project.
The DMP is a document outlining how research data will be handled during a
research project, and after it is completed. It is very important in all
aspects for projects participating in the Horizon 2020 Open Research Data
Pilot as well as for almost any other research project.
The DMP is closely related to the Dissemination Plan.
# Background
Data Management Plans (DMPs) have been introduced in the Horizon 2020 Work
Programme for 2014-15: _A further new element in Horizon 2020 is the use of
Data Management Plans (DMPs) detailing what data the project will generate,
whether and how it will be exploited or made accessible for verification and
re-use, and how it will be curated and preserved. The use of a Data Management
Plan is required for projects participating in the Open Research Data Pilot.
Other projects are invited to submit a Data Management Plan if relevant for
their planned research._
Projects taking part in the Pilot on Open Research Data are required to
provide a first version of the DMP as an early deliverable within the first
six months of the project. **Projects participating in the pilot as well as
projects who submit a DMP on a voluntary basis because it is relevant to their
research should ensure that this deliverable is mentioned in the proposal.**
Since DMPs are expected to mature during the project, more developed versions
of the plan can be included as additional deliverables at later stages. The
purpose of the DMP is to support the data management life cycle for all data
that will be collected, processed or generated by the project.
# Updating the DMP
A DMP describes the data management life cycle for all data sets that will be
collected, processed or generated by the research project. It is a document
outlining how research data will be handled during a research project, and
even after the project is completed, describing what data will be collected,
processed or generated and following what methodology and standards, whether
and how this data will be shared and/or made open, and how it will be curated
and preserved. The DMP is not a fixed document; it evolves and gains more
precision and substance during the lifespan of the project.
According to the EC guidelines, the DMP needs to be updated at least by the
mid-term and final reviews, to fine-tune it to the data generated and the uses
identified by the consortium, since not all data or potential uses are clear
from the start.

The present deliverable is the mid-term update of the DMP. The final version
will be produced at M54 and described in D1.16.
# Second version of the Data Management Plan
The 2nd DMP reflects the current status of reflection within the consortium
about the data that will be produced.
The points below will be addressed on a dataset by dataset basis:
* Data set reference and name
Identifier for the data set to be produced. (For now only a name is provided.
Once the datasets are published/archived, a definitive identifier will be
given)
* Data set description
Description of the data that will be generated or collected, its origin (in
case it is collected), nature and scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration and reuse.
* Standards and metadata
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created.
* Data sharing
In case the dataset cannot be shared, the reasons for this should be mentioned
(e.g. ethical, rules of personal data, intellectual property, commercial,
privacy-related, security-related).
In the present version of the DMP, since most of the datasets have not been
produced yet, two items related to data sharing are described:

* Can the dataset be shared? (e.g. are there barriers related to confidentiality, privacy, rules of personal data, etc.)
* Can the dataset be re-used within and/or outside the consortium? Only data which underpins published research findings and/or has longer-term value (i.e. can be reused) should be shared.
For the datasets that can be shared and re-used, access procedures will be
finalised in the next version of the DMP.
* Archiving and preservation (including storage and backup)
Description of the procedures that will be put in place for long-term
preservation of the data. Indication of how long the data should be preserved,
what is its approximated end volume, what the associated costs are and how
these are planned to be covered.
The list of datasets and their description will be updated in the course of
the project.
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Task**
</th>
<th>
**Partner in charge**
</th>
<th>
**Data set description**
</th>
<th>
**Date of finalisation of data**
</th>
<th>
**Standards, format**
</th>
<th>
**Can this dataset be shared?**
</th>
<th>
**Is this dataset reusable?**
</th>
<th>
**Archiving and preservation (including storage and backup)_as foreseen today_
**
</th> </tr>
<tr>
<td>
**All weather**
**240315.xlsm**
</td>
<td>
T2.1
</td>
<td>
TNO
</td>
<td>
Collection of heating and cooling degree days information for 109 locations
across Europe. This data was sourced from degreedays.net.
</td>
<td>
**M3**
</td>
<td>
.xlsm
</td>
<td>
YES
</td>
<td>
YES
</td>
<td>
Data stored on project folder in TNO network. Also distributed to other
project partners involved in Task 2.1.
</td> </tr>
<tr>
<td>
**Database for**
**Geocluster maps**
</td>
<td>
T2.5
</td>
<td>
TNO
</td>
<td>
Database / tabulated data for various parameters (such as climate, building
stock typology...) for regions across the EU and Turkey
</td>
<td>
M36
</td>
<td>
</td>
<td>
Can be shared in principle, if no confidential sources are used (in which case
there would be restrictions)
</td>
<td>
YES
</td>
<td>
At this stage, within TNO servers. May eventually move to another location
depending on final host.
</td> </tr>
<tr>
<td>
**Preliminary simulations results**
</td>
<td>
T2.2
</td>
<td>
Technion
</td>
<td>
Data containing energy calculations for base cases defined in T2.2 and energy
strategy application (results only!)
</td>
<td>
**M11**
</td>
<td>
.csv and xls (EnergyPlus result files)
</td>
<td>
Will be partially published in a peer-reviewed paper as synoptic charts.
If raw results are placed on an open database server there must be copyright
restrictions
</td>
<td>
Could be reused for virtual demonstrations.
</td>
<td>
Initial raw data and data analysis to be preserved by partners at least for
the duration of the project and 5 years after (for audit); long-term
preservation is possible by the partners performing energy simulations in
their off-line backup devices (i.e. DVDs, USB storage, private server storage,
etc.).
Volume is 1.16 GB.
</td> </tr>
<tr>
<td>
**Simulation results provided by software tool**
**(energy calculations)**
</td>
<td>
T2.3
</td>
<td>
Technion
</td>
<td>
Data containing energy calculations generated by tool to be developed in T2.3
using database approach
</td>
<td>
M54
</td>
<td>
Metadata to be defined
(mysql database)
</td>
<td>
To be confirmed
</td>
<td>
Yes with copyright limitations
</td>
<td>
Data provided by web application that can be downloaded in the final user
computers (such as in a computer program) and resides in an online server that
provides the program to the final users.
Costs and maintenance of a suitable web server have to be discussed.
Volume not known at this time.
</td> </tr>
<tr>
<td>
**Simulation results provided by software tool**
**(envelope installation)**
</td>
<td>
T4.4
</td>
<td>
Technion
</td>
<td>
Data containing envelope installation aids generated by tool to be developed
in T4.4
</td>
<td>
M54
</td>
<td>
Metadata to be defined
(mysql database and possibly BIM
files)
</td>
<td>
To be confirmed
</td>
<td>
Yes with copyright limitations
</td>
<td>
Data provided by web application that can be downloaded in the final user
computers (such as in a computer program) and resides in an online server that
provides the program to the final users. Costs and maintenance of a suitable
web server have to be discussed.
Volume not known at this time.
</td> </tr>
<tr>
<td>
**Analysis and evaluation of the monitored results**
</td>
<td>
T6.2
T6.7
T6.8
</td>
<td>
Technion
</td>
<td>
Energy calculations and other information about the demonstration building
(MS4,
T6.2, T6.7 and T6.8)
</td>
<td>
M25
(expected)
</td>
<td>
.csv and .xls
(TRNSYS result files)
files)
</td>
<td>
Related to WP6 MS4 and D6.3 (Public).
Could be used as part of a peer-reviewed paper as a synoptic chart.
If raw results are placed on an open database server there must be copyright
restrictions
</td>
<td>
In principle yes
Part of virtual demonstrations
</td>
<td>
Initial raw data and data analysis to be preserved by partners at least for
the duration of the project and 5 years after (for audit); long-term
preservation is possible by the partners performing energy simulations in
their off-line backup devices (i.e. DVDs, USB storage, private server storage,
etc.).
Volume not known at this stage.
</td> </tr>
<tr>
<td>
**Building information**
</td>
<td>
T6.1
T5.3
</td>
<td>
CARTIF
</td>
<td>
Data related to the measurements of the building and the monitoring
network.
</td>
<td>
M54
</td>
<td>
LonWorks and IFC4 standards based.
PostgreSQL database
</td>
<td>
Raw data cannot be provided because of upcoming EU regulation (rules of
personal data). KPIs only.
For the public: decision to be taken by the building owner
</td>
<td>
YES, for virtual demonstration
</td>
<td>
The data will be persistently stored in a database and secure backups will be
automatically and weekly generated in order to avoid the loss of data.
Additionally, data logs will be maintained to avoid data gaps.
This information is easily restored in the PostgreSQL database through the
backup file.
</td> </tr>
<tr>
<td>
**Technologies data**
</td>
<td>
T5.3
</td>
<td>
CARTIF
</td>
<td>
Data collected from the façade solution technologies for the application of
the BEMS control algorithms.
</td>
<td>
M54
</td>
<td>
LonWorks standard based whenever possible.
PostgreSQL database
</td>
<td>
Open to the consortium.
For the public: decision to be taken by the technology owner
</td>
<td>
YES - by the
technology owners only
</td>
<td>
The data will be persistently stored in a database and secure backups will be
automatically and weekly generated in order to avoid the loss of data.
Additionally, data logs will be maintained to avoid data gaps.
This information is easily restored in the PostgreSQL database through the
backup file.
</td> </tr>
<tr>
<td>
**BEMS data**
</td>
<td>
T5.3
</td>
<td>
CARTIF
</td>
<td>
Data generated by the BEMS itself: alarms about malfunctioning, calculation
results for the optimization and internal data for rendering the calculations.
</td>
<td>
M54
</td>
<td>
data model based on IFC4 for the internal performance of the BEMS and its
results.
PostgreSQL database
</td>
<td>
Open to the consortium.
KPI-related data will be shared as open data (only about performance)
</td>
<td>
YES - by the
technology owners only, with the exception of aggregated data about
performance
</td>
<td>
The data will be persistently stored in a database and secure backups will be
automatically and weekly generated in order to avoid the loss of data.
This information is easily restored in the PostgreSQL database through the
backup file.
</td> </tr>
<tr>
<td>
**EMI TEST REPORT**
</td>
<td>
T3.7
</td>
<td>
Mondragon
</td>
<td>
Reports and analysis associated with photovoltaic module
</td>
<td>
M27
</td>
<td>
</td>
<td>
See later. If successful, to be used as a marketing
support for BRESAER
</td>
<td>
YES (by
Mondragon)
</td>
<td>
Confidential storage by Mondragon
</td> </tr>
<tr>
<td>
**EMI TEST REPORT**
</td>
<td>
T3.7
</td>
<td>
Solarwall
</td>
<td>
Preparation of the material necessary to carry out different tests of the
Solarwall material by EMI
</td>
<td>
M24
</td>
<td>
According to the application rules
</td>
<td>
See later
</td>
<td>
Yes (by Solarwall)
</td>
<td>
Confidential storage by Solarwall
</td> </tr>
<tr>
<td>
**EMI TEST REPORT**
</td>
<td>
T3.7
</td>
<td>
STAM
</td>
<td>
Reports on tests performed on lightweight insulating panels coupled with and
without photovoltaic modules.
</td>
<td>
M27
</td>
<td>
ETAG034 Results
provided in
.doc and .pdf
</td>
<td>
Relevant data will be disclosed for
marketing purposes
</td>
<td>
YES (by STAM for commercial purposes)
</td>
<td>
Internal storage by STAM, marketing results will be disclosed through websites
</td> </tr>
<tr>
<td>
**EMI TEST REPORT**
</td>
<td>
T3.7
</td>
<td>
EURECAT
</td>
<td>
Reports associated with the tests done on the automatic insulated blind: wind
test, thermal test and reaction-to-fire test.
</td>
<td>
M27
</td>
<td>
</td>
<td>
See later. If successful, to be used as a marketing
support for BRESAER
</td>
<td>
YES (by Eurecat)
</td>
<td>
Confidential storage by Eurecat
</td> </tr>
<tr>
<td>
**Life cycle analysis and life cycle cost data**
</td>
<td>
T6.4
</td>
<td>
Tecnalia
</td>
<td>
Type and quantity of material, cost of material, consumption of energy to
manufacturing, description of the production process, ...
</td>
<td>
M47
</td>
<td>
ISO 14040, ISO 14025,
ISO 15804.
</td>
<td>
NO (commercial)
See more details below
</td>
<td>
YES (by
consortium)
See more details below
</td>
<td>
Tecnalia will store the data until the end of the project.
</td> </tr>
<tr>
<td>
**Life cycle analysis and life cycle cost data**
</td>
<td>
T6.4
</td>
<td>
Mondragon
</td>
<td>
LCC-LCA analysis of polymer concrete ventilated facade module
Example: Type and quantity of material, cost of material, consumption of
energy to manufacturing, description of the production process
</td>
<td>
M51
</td>
<td>
</td>
<td>
See later. If good, to be used as a marketing support for BRESAER
</td>
<td>
YES (by
Mondragon)
</td>
<td>
Confidential storage by Mondragon
</td> </tr>
<tr>
<td>
**Life cycle analysis and life cycle cost data**
</td>
<td>
T6.4
</td>
<td>
STAM
</td>
<td>
Analysis of costs and environmental impact of production process for the
integrated solution of insulating panels + PV elements. Raw materials working
procedures and energy consumption are taken into account.
</td>
<td>
M51
</td>
<td>
ISO14040 and
ISO14044 Results
provided in
.xlsx
(numerical results),
reports in
.doc and .pdf
</td>
<td>
Relevant data will be disclosed for
marketing purposes
</td>
<td>
YES (by STAM for commercial purposes)
</td>
<td>
Internal storage by STAM, marketing results will be disclosed through websites
</td> </tr>
<tr>
<td>
**Life cycle analysis and life cycle cost data**
</td>
<td>
T6.4
</td>
<td>
Solarwall
</td>
<td>
Analysis of the life cycle of the solar system that includes not only the
Solarwall panel but also the structures. For example: quantity of material,
energy consumed, manufacturing, etc.
</td>
<td>
M28
</td>
<td>
</td>
<td>
See later
</td>
<td>
Yes (by Solarwall)
</td>
<td>
Confidential storage by Solarwall
</td> </tr>
<tr>
<td>
**Life cycle analysis and life cycle cost data**
</td>
<td>
T6.4
</td>
<td>
EURECAT
</td>
<td>
LCC-LCA analysis of automatic insulated blind.
Example: Type and quantity of material, cost of material, consumption of
energy to manufacturing, description of the production process
</td>
<td>
M51
</td>
<td>
</td>
<td>
See later. If good, to be used as a marketing support for BRESAER
</td>
<td>
YES (by Eurecat)
</td>
<td>
Confidential storage by Eurecat
</td> </tr>
<tr>
<td>
**Data related to**
**BRESAER substructure**
</td>
<td>
WP3
</td>
<td>
Mondragon
</td>
<td>
The result of the project can provide a new kind of profile or substructure
that may be patented or protected. The system will also generate data,
knowledge and information.
</td>
<td>
M54
</td>
<td>
</td>
<td>
See later. Could be used as a marketing support for BRESAER
</td>
<td>
YES (by
Mondragon)
</td>
<td>
Confidential storage by Mondragon
</td> </tr> </table>
_**(*) Publications:**_

Technion has budget assigned for one Gold Open Access publication. There might
be another publication related to BRESAER, but this would be under the usual
copyright agreement of the editorial houses, which would not entail additional
charges to the project.

The Gold Open Access publication is likely to be submitted to one of
Elsevier's or Taylor & Francis' journals, which offer a large variety of
high-impact journals that support gold open access publishing. The definitive
journal will be selected based partly on budget and on the most advantageous
open access agreement.

The topic is likely to be based on Task 2.2 (simulations) and Task 2.3 (design
tool).
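Several archiving entries in the table above foresee automatic weekly backups of PostgreSQL databases. A minimal sketch of such a backup job (database name and backup path are placeholders; scheduling, e.g. via cron, is assumed to be configured separately):

```python
import subprocess
from datetime import date
from pathlib import Path

DB_NAME = "bresaer_monitoring"             # placeholder database name
BACKUP_DIR = Path("/var/backups/bresaer")  # placeholder backup location

def weekly_backup() -> Path:
    """Dump the PostgreSQL database to a dated, compressed backup file."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    target = BACKUP_DIR / f"{DB_NAME}_{date.today():%Y%m%d}.dump"
    # pg_dump's custom format (-Fc) is compressed and restorable with pg_restore.
    subprocess.run(["pg_dump", "-Fc", "-f", str(target), DB_NAME], check=True)
    return target

if __name__ == "__main__":
    print(f"Backup written to {weekly_backup()}")
```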
# Data sharing policy
Most of the datasets generated by the project are related to the technologies
that are developed in the project. This raises confidentiality issues:
disclosing too much information would open the door to reverse-engineering by
competitors. Additionally, if the project results are to be patented, they
must not be published beforehand.

On the other hand, it is in the partners' interest to disseminate a certain
amount of data about the performance of the technologies (simulation data,
data from the demonstration) to maximise the exploitation potential.

Datasets that have been collected by partners to perform analyses (such as
LCA, Geocluster maps, etc.) and that are not specific to BRESAER technologies
could also be shared with other similar projects.
A compromise must therefore be found between complete confidentiality, partial
publication and Open Research Data. The data sharing strategy is at present
provisional and will be refined once the datasets are collected/generated.
Once the strategy is finalised, the DMP will describe how data will be (or
have been) shared, including access procedures, embargo periods (if any),
outlines of technical mechanisms for dissemination and necessary software and
other tools for enabling re-use, and definition of whether access will be
widely open or restricted to specific groups. The repository where data is
stored will also be identified, indicating in particular the type of
repository (institutional, standard repository for the discipline, etc.).
The consortium is at present investigating the opportunity to use the
repository suggested by the EC ( _https://www.zenodo.org_ ).
# Conclusions
The BRESAER partners will generate various datasets during the project. Most
of them are related to BRESAER technologies, which raises confidentiality
issues. But datasets which underpin published research findings and/or have
longer-term value (i.e. could be reused by other consortia) will be shared,
under conditions that will be presented in the final version of the DMP
(D1.16, M54).
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement N° 637186.
0845_SARAFun_644938.md
# EXECUTIVE SUMMARY
The present document is a deliverable of the SARAFun project, funded by the
European
Commission’s Directorate-General for Research and Innovation (DG RTD), under
its Horizon 2020 Research and Innovation programme (H2020). It presents the
first version of the project Data Management Plan (DMP). The current document
explains in detail what data will be generated throughout the project's
lifecycle, the possible means of sharing this data so that it becomes
accessible for verification and reuse, and the ways in which it will be
curated and preserved. Additionally, it provides the information necessary
for the Data Management Portal to be created later through this project's
activities.
It is strongly emphasized that this is a living document that will evolve
along with the project's progress and will be regularly updated to reflect
up-to-date information.
# INTRODUCTION
## PURPOSE
The purpose of this deliverable (D7.5 “Draft Data Management Plan”) is to
deliver an analysis of the main elements of the Data Management Policy that
will be used by the consortium with regard to all the datasets generated by
the SARAFun project. The DMP is not a fixed document, but will evolve
throughout the project’s lifecycle. This first version of the DMP includes an
overview of the datasets to be produced by the project as well as the specific
conditions attached to them. The next version of the DMP will be published
at M36 through the activities of D7.8 and will describe in more detail the
data generated as well as the uses identified by the consortium.
## GENERAL PRINCIPLES
Through the activities of the SARAFun project [1], pioneering research will be
carried out in order to develop and deliver a next-generation bi-manual robot
that can be exploited in production lines to assist human workers in a safe
manner through novel human demonstration and teaching algorithms. To this
end, human participants will be involved in the project, and data will be
collected regarding their assembly movements and assembly forces in a
production line. For the purpose of optimizing the project's development, a
process of knowledge management will be implemented. This process will provide
the consolidation of the knowledge spiral, enable cooperation and will
additionally allow for the creation of new knowledge. All the participants of
the project have to cooperate in order to reach the most efficient process of
knowledge management. Initially, algorithms that have been implemented to
identify objects, grasping and the characteristics of a grip (such as
rotation, strength, speed) will be used before their adjustment on a
production line, in order to allow the optimization measures. Therefore, a
database is required to store data for benchmarking the algorithms developed
in the project lifetime and beyond. Several experiments will be made using
the algorithms on a production line, and each experiment will yield
significant data. Developers will refer to these data with a view to
obtaining information in order to increase the efficiency of the implemented
algorithms. Moreover, a part of the performed experiments' data and of the
algorithms' code will be provided to the scientific community as well as to
robotics researchers in order to support the optimization of their own work
(e.g. utilizing a GitHub repository for open access to code developed in the
project lifetime, as well as publication in open access journals).
_**Participation in the Pilot on Open Research Data** _
SARAFun fully supports the Pilot on Open Research Data launched by the
European Commission along with the Horizon 2020 programme, and therefore a
significant part of the research data generated by the project will be made
open and offered to the Open Research Data Pilot, in which SARAFun will
participate. To this end, the Data Management Plan provided through the
activities of this deliverable explains in detail what data the project will
generate, whether and how it will be exploited or made accessible for
verification and reuse, and how it will be curated and preserved.
_**IPR Management & Security ** _
Due to the highly innovative nature of the SARAFun project, high-level
technologies will be developed during the project's lifecycle in order to be
afterwards released to the market. Therefore, foreground capable of industrial
or commercial application must be protected, taking into account legitimate
interests. All involved partners have Intellectual Property Rights on the
technologies and data developed or collected with their participation. As the
partners' economic sustainability depends heavily on these technologies and
data, the SARAFun Consortium will protect all data collected for SARAFun
purposes. Additionally, prior notice of dissemination will be given to the
other participants, and any dissemination such as publications and patent
applications must indicate the Community financial assistance. Moreover,
appropriate measures will be taken to effectively avoid leaks of data, while
all data repositories of this project will be adequately protected.
_**Personal Data Protection** _
SARAFun involves the collection of data in order to assess the technology and
the effectiveness of the proposed solution. This will be carried out in full
compliance with any European and national legislation and directives relevant
to the country where the data collection takes place (international/European):
i) the Convention 108 for the Protection of Individuals with Regard to Automatic Processing of Personal Data;
ii) Directive 95/46/EC and Directive 2002/58/EC of the European Parliament regarding privacy and the protection of personal data and the free movement of such data;
iii) the legislation in Sweden: the 1998 Personal Data Act;
iv) the Spanish Organic Law 15/99 (amendments: 5/02 & 424/05);
v) the Greek Law 2472/1997: Protection of Individuals with regard to the Processing of Personal Data; and
vi) the Greek Law 3471/2006: Protection of personal data and privacy in the electronic telecommunications sector and amendment of law 2472/1997.
More detailed information regarding data privacy issues can be found in
Deliverable 1.2 “Preliminary Ethics and Safety Manual for SARAFun technology”.
# DATASET LIST
For the purposes of SARAFun a number of datasets needs to be created, which
are listed in the following table, together with a short description for each
one of them.
**Table 1: Dataset List Table**
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset Name**
</th>
<th>
**Description**
</th>
<th>
**WPs & Tasks **
</th> </tr>
<tr>
<td>
1
</td>
<td>
DS.01.CERTH.
KeyFrameExtraction
</td>
<td>
Dataset used for key frame extraction in laboratory and factory environments.
An instructor will pick up two small objects and afterwards assemble them.
</td>
<td>
The data are going to be collected within the activities of WP3 and more
specifically Tasks T3.1, T3.2 and T3.3.
</td> </tr>
<tr>
<td>
2
</td>
<td>
DS.02.CERTH.ObjectTracking
</td>
<td>
Dataset used for object tracking and object pose estimation in laboratory and
factory environments. Three variations will be used:
1) experiments with non-occluded objects, 2) partially occluded objects by
either a) the instructor’s hand or b) another object, and
3) combination of the above.
</td>
<td>
The data are going to be collected within the activities of WP3 and more
specifically T3.1.
</td> </tr> </table>
## DATASET “DS.01.CERTH. KEYFRAMEEXTRACTION”
**Table 2: Dataset “DS.01.CERTH. KeyFrameExtraction”**
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS.01.CERTH. KeyFrameExtraction**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**General Description** _
Dataset used for key frame extraction in laboratory and factory environments.
An instructor will pick up two small objects and afterwards assemble them.
_**Origin of Data (e.g. indicative collection procedure, devices used etc.)**
_ Device type: RGBD sensor.
Two aligned streams will be used, extracted from one depth sensor (640X480 or
960X540) and one RGB camera (1920X1080). The two sensors will operate in a low
range area (20cm to 1.5m). Sampling rate: 30 fps.
_**Nature and scale of data** _
The data will be available in video format (e.g. image sequences, video file
format, etc.). Scale will be specified later on.
_**To whom could the dataset be useful** _
This dataset will be useful to key frame extraction algorithms.
_**Related scientific publication(s)** _
This dataset will accompany SARAFun’s publications in the field of key frame extraction.
_**Indicative existing similar data sets (including possibilities for integration and reuse)** _
To be specified later on.
</th> </tr>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The metadata will describe the RGBD sensor set-up: two aligned streams, one
depth camera (640X480 or 960X540) and one RGB camera (1920X1080), both with
low range (20cm-1.5m) and a sampling rate of 30 fps. Both the nominal and the
real sampling rate will be included, as will the frame sequence, the
definitions of the depth and colour streams, the horizontal deviation (YAW)
angle, the key frame annotation (ground truth of the correct key frames), the
lighting conditions, the manipulated object type (primitive shapes, ICT
component, etc.) and the horizontal and vertical FOV.
Annotation will be given based on the outputs of the algorithms produced, and
will later be used as a basis for evolving other algorithms, in addition to
the manually defined (ground truth) key frames. The metadata will be provided
in XML format with the respective XML schema (a parsing sketch follows this table).
Indicative metadata include a) camera calibration information, b) the camera
pose matrix for each viewpoint, c) 3D pose annotation, and d) the 3D object
model in CAD format. The metadata will be in a format that may be easily
parsed with open source software.
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type (widely open, restricted to specific groups, private)** _
Open
_**Access Procedures** _
A web page will be created by CERTH on the SARAFun data management portal that
should provide a description of the dataset as well as links to a download
section. _**Embargo periods (if any)** _
None
_**Technical mechanisms for dissemination** _
A link to the dataset will be provided from the SARAFun web page, and in all
relevant SARAFun publications.
_**Necessary S/W and other tools for enabling re-use** _
Commonly available tools and software libraries for enabling reuse of the
dataset (e.g. OpenCV).
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
A data management portal will be created and maintained by CERTH in order to
accommodate full as well as public versions of the datasets used. Links to the
portal will also exist at the SARAFun website, while the data will be also
stored at CERTH’s servers and other common back-up mechanisms in order to
avoid losses of data and ensure data reliability.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**Data preservation period** _
Data will be preserved for at least 2 years after the end of the project.
_**Approximated end volume of data** _
The volume of data is estimated at approximately 1.26 GB/min for RGB and 1.08
GB/min for depth. 10-15 sequences will be captured, ranging from 15 to 30
seconds each. Each sequence will hold a volume of approximately 700 MB (400 MB
for colour and 300 MB for depth).
_**Indicative associated costs for data archiving and preservation** _
A hard disk drive (approximately 1 Terabyte) will probably be allocated for
the dataset. There are no costs associated with its preservation.
_**Indicative plan for covering the above costs** _
The initial costs will be covered by SARAFun, while the costs that will come
up after the finalization of the project will be covered by CERTH.
</td> </tr> </table>
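Since the metadata will be shipped as XML files with an accompanying XML schema, re-users should be able to load them with standard tooling. The following is a minimal parsing sketch in Python; the element and attribute names are illustrative assumptions, as the actual schema will be published with the dataset.

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata file for one recorded sequence; all element and
# attribute names below are invented for illustration only.
EXAMPLE = """
<sequence id="seq_001">
  <sampling_rate nominal="30" real="29.7"/>
  <depth_resolution>640x480</depth_resolution>
  <rgb_resolution>1920x1080</rgb_resolution>
  <yaw_angle_deg>12.5</yaw_angle_deg>
  <lighting>indoor, diffuse</lighting>
  <object_type>primitive shape</object_type>
  <keyframes>
    <frame index="42"/>
    <frame index="118"/>
  </keyframes>
</sequence>
"""

root = ET.fromstring(EXAMPLE)
rate = root.find("sampling_rate")
print("nominal fps:", rate.get("nominal"), "real fps:", rate.get("real"))

# Collect the ground-truth key frame indices, e.g. to benchmark an algorithm.
keyframes = [int(frame.get("index")) for frame in root.iter("frame")]
print("annotated key frames:", keyframes)
```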
## DATASET “DS.02.CERTH.OBJECTTRACKING”
**Table 3: Dataset “DS.02.CERTH. ObjectTracking”**
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS.02.CERTH.ObjectTracking**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**General Description** _
Dataset used for object tracking and object pose estimation in laboratory and
factory environments. Three variations will be used:
1. experiments with non-occluded objects,
2. partially occluded objects, by either a) the instructor’s hand or b) another object, and
3. a combination of the above.
_**Origin of Data (e.g. indicative collection procedure, devices used etc.)**
_ Device type: RGBD sensor.
Two aligned streams will be used, extracted from one depth sensor (640X480 or
960X540) and one RGB camera (1920X1080). Both sensors will operate in a low
range area (20cm to 1.5m). Sampling rate: 30 fps.
_**Nature and scale of data** _
Video format (e.g. image sequences, video file format, etc.) Scale will be
specified later on.
_**To whom could the dataset be useful** _
This dataset will be useful for object recognition algorithms.
_**Related scientific publication(s)** _
This dataset will accompany SARAFun’s publications in the field of object
recognition.
_**Indicative existing similar data sets (including possibilities for
integration and reuse)** _
1. Latent-Class Hough Forests for Object Detection and Pose Estimation ( _http://www.iis.ee.ic.ac.uk/rkouskou/Research.html_ )
2. Model Based Training, Detection and Pose Estimation of Texture-Less 3D Objects in Heavily Cluttered Scenes ( _http://campar.in.tum.de/Main/StefanHinterstoisser_ )
3. The Berkeley B3DO dataset ( _http://kinectdata.com/_ )
4. The Berkeley BigBird dataset ( _http://rll.berkeley.edu/bigbird/_ ).
</th> </tr>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The metadata will describe the RGBD sensor set-up: two aligned streams, one
depth camera (640X480 or 960X540) and one RGB camera (1920X1080), both with
low range (20cm-1.5m) and a sampling rate of 30 fps. Both the nominal and the
real sampling rate will be included, as will the frame sequence, the
definitions of the depth and colour streams, the horizontal deviation (YAW)
angle, the key frame annotation (ground truth of the correct key frames), the
lighting conditions, the manipulated object type (primitive shapes, ICT
component, etc.) and the horizontal and vertical FOV.
Annotation will be given based on the outputs of the algorithms produced, and
will later be used as a basis for evolving other algorithms. The metadata
will be provided in XML format with the respective XML schema.
Indicative metadata include a) camera calibration information, b) the camera
pose matrix for each viewpoint, c) 3D pose annotation, and d) the 3D object
model in CAD format. The metadata will be in a format that may be easily
parsed with open source software.
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type (widely open, restricted to specific groups, private)** _
Open
_**Access Procedures** _
A web page will be created by CERTH on the SARAFun data management portal that
should provide a description of the dataset as well as links to a download
section. _**Embargo periods (if any)** _
None
_**Technical mechanisms for dissemination** _
A link to the dataset will be provided from the SARAFun web page, and in all
relevant SARAFun publications.
_**Necessary S/W and other tools for enabling re-use** _
Commonly available tools and software libraries for enabling reuse of the
dataset (e.g. OpenCV).
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
A data management portal will be created and maintained by CERTH in order to
accommodate full as well as public versions of the datasets used. Links to the
portal will also exist at the SARAFun website, while the data will be also
stored at CERTH’s servers and other common back-up mechanisms in order to
avoid losses of data and ensure data reliability.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**Data preservation period** _
Data will be preserved for at least 2 years after the end of the project.
_**Approximated end volume of data** _
The volume of data is estimated at approximately 1.26 GB/min for RGB and 1.08
GB/min for depth. 5-10 sequences will be captured, ranging from 15 to 30
seconds each. Each sequence will hold a volume of approximately 700 MB (400 MB
for colour and 300 MB for depth); a short sanity check of these figures
follows this table.
_**Indicative associated costs for data archiving and preservation** _
A hard disk drive (approximately 1 Terabyte) will probably be allocated for
the dataset. There are no costs associated with its preservation.
_**Indicative plan for covering the above costs** _
The initial costs will be covered by SARAFun, while the costs that will come
up after the finalization of the project will be covered by CERTH.
</td> </tr> </table>
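As a quick sanity check on the volumes quoted above, the per-sequence sizes can be reproduced from the stated data rates. A small Python sketch follows; the rates and sequence counts are taken from the tables, while the binary GB-to-MB conversion is an assumption.

```python
# Data rates stated in the archiving sections, converted to MB per second
# (assuming binary units, 1 GB = 1024 MB).
RGB_MB_PER_S = 1.26 * 1024 / 60    # ~21.5 MB/s
DEPTH_MB_PER_S = 1.08 * 1024 / 60  # ~18.4 MB/s

def sequence_volume_mb(duration_s: float) -> float:
    """Approximate on-disk size of one RGB-D sequence of the given duration."""
    return (RGB_MB_PER_S + DEPTH_MB_PER_S) * duration_s

for duration in (15, 30):
    print(f"{duration:>2} s sequence: ~{sequence_volume_mb(duration):,.0f} MB")
# 15 s -> ~600 MB and 30 s -> ~1200 MB, bracketing the ~700 MB per-sequence
# estimate quoted in the tables.

# Total for the 5-10 sequences planned for the object tracking dataset:
for n in (5, 10):
    print(f"{n:>2} sequences: ~{n * 700 / 1024:.1f} GB")
```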
# CONCLUSION
This deliverable constitutes a first draft analysis of the procedures and
infrastructures that will be implemented by SARAFun in order to effectively
manage the data produced through the project’s activities. One of the key
elements of the Data Management Plan is the Data Management Portal, which
will handle and manage the large number of datasets collected from the
devices used for SARAFun purposes.
Special care will be taken to ensure that the Data Management Portal allows
appropriate access for all partners participating in the process of data
production. Additionally, editing and access rights will be managed
appropriately. Moreover, special attention will be given by the SARAFun data
management plan to the appropriate collection and publication of metadata. All
necessary information will be stored in order to facilitate the optimal use as
well as the re-use of these datasets. Each data producer will be responsible
for managing the respective data and metadata, whereas all data and metadata
will be integrated in the Data Management Portal. Specific levels of
flexibility are required of the Data Management Portal regarding the public
datasets, as well as attention to the IPR of every partner and to the
European and national regulations and directives regarding personal data
privacy and protection.
In conclusion, the current document presents a first overview of the datasets
used and the kind of data gathered for SARAFun purposes, as well as of the
specific challenges that need to be considered for their effective
management. It is emphasized that this is a living document and will
therefore be updated throughout the project’s lifecycle. The final, updated
version of this document will be delivered at M36 through the activities of
D7.8 and will provide a more detailed Data Management Plan, by which time the
Data Management Portal will be at its final stage.
# Introduction and scope
This Data Management Plan (DMP) describes the data management life cycle for
all data sets that will be collected, processed or generated by the
MixedEmotions project. It outlines how research data will be handled during
the project, and even after it is completed, describing what data will be
collected, processed or generated and following what methodology and
standards, whether and how this data will be shared and/or made open, and how
it will be curated and preserved. As the DMP is not a fixed document, it will
evolve and gain more precision and substance during the lifespan of the
project; therefore the first versions will be necessarily incomplete.
# Dataset description for data lifecycle management
This initial version of the DMP will describe each available dataset using the
fields below. To allow for more context and a better understanding of the
purpose of the different datasets, they are listed and categorized according
to the consortium partner that will collect the data. In future versions of
this DMP, when the data is more complete, a more detailed categorization
system will be used.
* **Dataset reference and name** : dataset identifier
* **Dataset description:** short dataset profile, summary and origin
* **Standards and metadata:** formats used
* **Data sharing:** access policies including restrictions on use
* **Archiving and preservation:** storage and backup provisions
* **Responsible partner:** partner in charge of collecting and maintaining the data
# Dataset identification and listing
## Deutsche Welle content
* **Data set reference and name** : DW texts
* **Data set description:** Texts obtained from Deutsche Welle API regarding selected brands
* **Standards and metadata:** Text, brand, date, language
* **Data sharing:** No sharing. That data is already available from DW.
* **Archiving and preservation:** Preserved in a “sources” index in the platform elasticSearch.
* **Responsible partner:** DW
## Twitter content
* **Data set reference and name:** Tweets
* **Data set description:** Tweets extracted from Twitter regarding selected brands
* **Standards and metadata:** Text, brand, date, language, account.
* **Data sharing:** None. There are legal issues sharing this data.
* **Archiving and preservation:** Preserved in a “sources” index in the platform elasticSearch.
* **Responsible partner:** BUT
## Twitter graph
* **Data set reference and name:** Twitter graph
* **Data set description:** Relationships between Twitter accounts, i.e. the followers and followings of accounts that tweeted about the selected brands.
* **Standards and metadata:** RDF.
* **Data sharing:** No sharing. There are legal issues sharing this data.
* **Archiving and preservation:** In a graph database that could be Elasticsearch with the Siren plugin.
* **Responsible partner:** UPM
## Facebook content
* **Data set reference and name:** Facebook content
* **Data set description:** A dataset of publicly available user account content as provided by SODATO (Copenhagen Business School). SODATO stores public Facebook wall data in an MS SQL Server database and can export a variety of CSV files.
* **Standards and metadata:** tbd
* **Data sharing:** Open access.
* **Archiving and preservation:** In a graph database that could be Elasticsearch with the Siren plugin.
* **Responsible partner:** NUIG
## Websites content
* **Data set reference and name:** Websites content
* **Data set description:** In case DW text is not enough, web text from some sites should be extracted.
* **Standards and metadata:** Text, brand, date, language, source.
* **Data sharing:** No sharing. There are legal issues sharing this data.
* **Archiving and preservation:** Preserved in a “sources” index in the platform elasticSearch.
* **Responsible partner:** PT
## Tagged Text
* **Data set reference and name:** Tagged Text
* **Data set description:** Once the text is processed (split, with emotion, polarity and terms added), the results are saved to serve as the basis of the analytics.
* **Standards and metadata:** Sentence, brand, date, language, account, original_text, emotions, polarity, concepts, topics, source, media.
* **Data sharing:** No sharing, for commercial reasons.
* **Archiving and preservation:** Preserved in a “results” index in the platform’s Elasticsearch (an indexing sketch follows this list).
* **Responsible partner:** PT
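To make the preservation scheme concrete, indexing one processed sentence into the “results” index might look roughly like the sketch below, which uses the official Elasticsearch Python client (v8-style API); the endpoint and all field values are invented for illustration.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

# One processed sentence carrying the metadata fields listed above;
# every value here is invented for illustration.
doc = {
    "sentence": "The new phone feels fantastic.",
    "brand": "ExampleBrand",
    "date": "2016-03-01",
    "language": "en",
    "account": "some_user",
    "original_text": "The new phone feels fantastic. Battery life is poor though.",
    "emotions": ["joy"],
    "polarity": "positive",
    "concepts": ["phone"],
    "topics": ["consumer electronics"],
    "source": "twitter",
    "media": "text",
}

# Store the document in the "results" index used for the analytics.
es.index(index="results", document=doc)
```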
## SindiceTech Knowledge Graph
* **Data set reference and name:** Knowledge graph
* **Data set description:** Basis for the MixedEmotions knowledge graph
* **Standards and metadata:** RDF dumps available.
* **Data sharing:** ST will provide both low level data dumps (RDF) and virtual machines preloaded with the data.
* **Archiving and preservation:** ST will not per se preserve the data, as they are integrating sources which are preserved already. The main work will be the integration and cleanup of the data coming from Wikidata and DBpedia, along with the integration of support tools.
* **Responsible partner:** ST
# Conclusions
It is too early in the project to have a complete data set identification.
Some of the data that will need to be collected is still not defined clearly
enough to be described at the required level of specification, and other data
sets will surely be identified later in the project, so this first version of
the Data Management Plan should be taken as a work in progress, still
incomplete.
As new data sets to be collected are clearly identified by the consortium
partners, the Data Management Plan will be updated accordingly.
# OBSERVE toolkit
**Data set reference and name**
OBSERVE toolkit with the deck of cards and manual - Deliverable 4.3.
## Data set description
The ca. 100 cards contain text and images. Each card provides basic
information on one emerging issue identified in OBSERVE.
The underlying data are captured in reports D1.2 and D1.3 and their annexes
(see section 2). The cards are explicitly targeted at the widest possible
range of potential users from policy, industry and society wishing to engage
in reflection and dialogue on emerging topics. Accordingly, the deck should
be freely accessible to everybody and distributed, used and reused as much as
possible.
## Standards and metadata
A limited number of cards will be physically printed on card stock. The deck
will also be available in .pdf format for download and printing. Metadata
describe the structural data of the files.
## Data sharing
The printed cards will be distributed to the FET unit and key users (e.g. FET
advisory board participants in sense making workshops). For wider
dissemination they will be provided for download on the OBSERVE and Fraunhofer
ISI Website in .pdf format.
In parallel, they will be published using the green road of open access
through the Fraunhofer institutional open access repository (Fraunhofer
eprints). The Fraunhofer eprints system captures and preserves all necessary
metadata to ensure accessibility by search engines and library systems. It is
connected to the OpenAIRE Open Access Infrastructure, so the publications
will be automatically findable and accessible worldwide.
More detailed information on underlying research will be provided through
reports 1.2 and 1.3 (see section 2).
## Archiving and preservation (including storage and backup)
Fraunhofer eprints automatically assigns a permanently unchangeable Internet
address for long-term archiving.
# Executive Summary
The purpose of the current deliverable is to present the 1st Data Management
Plan (DMP) of the RECAP project; it is a collective product of work among the
coordinator and the rest of the consortium partners.
The scope of the DMP is to describe the data management life cycle for all
datasets to be collected, processed or generated in all Work Packages during
the course of the 30 months of the RECAP project. FAIR Data Management is
highly promoted by the Commission, and since RECAP is a data-intensive
project, due attention has been given to this task. However, the DMP is a
living document in which information will be made available on a more
detailed level through updates as the implementation of the RECAP project
progresses and when significant changes occur. This document is the first of
the three versions of the Data Management Plan to be produced throughout the
RECAP project’s duration.
The deliverable is structured in the following chapters:
* Chapter 1 includes a description of the methodology used.
* Chapter 2 includes the description of the DMP components.
# 1. Methodology
The Data Management Plan methodology used for the compilation of D1.3 is
based on the updated version of the “Guidelines on FAIR Data Management in
Horizon 2020” 1, version 3.0, released on 26 July 2016 by the European
Commission Directorate-General for Research & Innovation. The RECAP DMP
addresses the following issues:
* Data Summary
* FAIR data
  * Making data findable, including provisions for metadata
  * Making data openly accessible
  * Making data interoperable
  * Increase data re-use
* Allocation of resources
* Data security
* Ethical aspects
* Other issues
The RECAP project coordinator (DRAXIS) has provided, on time, all the work
package leaders and the rest of the partners with a template that includes
all 10 of the abovementioned issues, along with instructions on how to fill
in the template.
## 1.1 Data Summary
The Data Summary addresses the following issues:
* Outline the purpose of the collected/generated data and its relation to the objectives of the RECAP project.
* Outline the types and formats of data already collected/generated and/or foreseen for generation at this stage of the project.
* Outline the reusability of the existing data.
* Outline the origin of the data.
* Outline the expected size of the data.
* Outline the data utility.
RECAP proposes a methodology for improving the efficiency and transparency of
the compliance monitoring procedure through a cloud-based Software as a
Service (SaaS) platform which will make use of large volumes of publicly
available data provided by satellite remote sensing, and of user-generated
data provided by farmers through mobile devices. Therefore, the majority of
the data will fall into the following categories:
* Remote sensing imagery (VHR)
* Free satellite data (Sentinel, Landsat)
* Copernicus and GEOSS-DataCore open products
* User photos (geo-referenced and dated photos from users’ smartphones)
* User data (data related to a farmer’s plants)
* Compliance data (user actions related to compliance requirements)
At this stage of the project these categories are not in any way
all-inclusive, but they provide the basis from which the RECAP project has
developed the user requirements in relation to the RECAP platform.
One of the main concepts of the project is to involve farmers in the data
collection and contribution process. The idea is to make this as simple as
possible and allow them to contribute data. That way RECAP will be able to
collect a large amount of information and data related to farmers’ activities
and habits related to compliance. By collecting and organising these data and
combining them with the remote sensing imagery, RECAP will also be able to
gain more insight into the process of auditing, identify misconduct and
mistreatment, recognise good practices, and trace back what went wrong and
what thrived. Obviously, privacy issues will be taken into account in order
to ensure that no personal or sensitive data of any farmer are dispersed.
Data sharing and accessibility for verification and re-use will be available
through the RECAP project platform open to anyone. The use of open standards
and architecture will also allow other uses of this data and their integration
with other related applications.
Data obtained by RECAP will be openly available under open data licenses for
use by:
* All the public control and paying agencies who are in charge of payments, oppositions, compensation and recovery of support granted under the CAP.
* The farmers’ associations that will use parts of the system to support their farmers in complying with the Cross Compliance Scheme.
* The agricultural consultants that will use the data in order to provide services to their farmers in complying with the Cross Compliance Scheme.
* The research partners in RECAP (UREAD and NOA), which will use the data and the results for further scientific and research purposes.
Within RECAP all personal data used in the project will be protected. When
possible, the data collected in the project will be available to third parties
in contexts such as scientific scrutiny and peer review. As documented in the
D1.1- Project Management Handbook, deliverables’ external reviewers will sign
a confidentiality declaration, which includes the following statement:
_“I hereby declare that I will treat all information, contained within the
above mentioned deliverable and which has been disclosed to me through the
review of this deliverable, with due confidentiality._ ”
Finally, it is expected that the RECAP project will result in a number of
publications in scientific, peer-reviewed journals. Project partners are
encouraged to collaborate with each other and jointly prepare publications
relevant to the RECAP project. Scientific journals that provide open access
(OA) to all their publications will be preferred, as it is required by the
European Commission.
## 1.2 FAIR data
### 1.2.1 Making data findable, including provisions for metadata
This point addresses the following issues 1 :
* Outline the discoverability of data (metadata provision).
* Outline the identifiability of data and refer to standard identification mechanisms.
* Outline the naming conventions used.
* Outline the approach towards search keywords.
* Outline the approach for clear versioning.
* Specify standards for metadata creation (if any).
This point refers to existing suitable standards of the discipline, as well as
an outline on how and what metadata will be created. Therefore, at this stage,
the available data standards (if any) accompany the description of the data
that will be collected and/or generated, including the description on how the
data will be organised during the project, mentioning for example naming
conventions, version control and folder structures.
As far as the metadata are concerned, the way the consortium will capture and
store this information should be described. For instance, for data records
stored in a database with links to each item, metadata can pinpoint their
description and location. There are various disciplinary metadata standards 2 ;
however, the RECAP consortium has identified a number of available best
practices and guidelines for working with Open Data, mostly by organisations
or institutions that support and promote Open Data initiatives, which will be
taken into account. These include:
* Open Data Foundation 3
* Open Knowledge Foundation 4
* Open Government Standards 5
Furthermore, data will be interoperable, adhering to standards for data
annotation and data exchange, and compliant with available software
applications related to agriculture. Standards that will be taken into
account in the project are listed below (an illustrative metadata record
follows this list):
* _INSPIRE_ : Infrastructure for Spatial Information in the European Community. Addresses spatial data themes needed for environmental applications 6 .
* _IACS_ : Integrated Administration and Control System. IACS is the most important system for the management and control of payments to farmers made by the Member States in application of the Common Agricultural Policy 7 .
* _AGROVOC_ : The most comprehensive multilingual thesaurus and vocabulary for agriculture nowadays. It is owned and maintained by a community of institutions all over the world and curated by the Food and Agriculture Organization of the United Nations (FAO).
* _Dublin Core and ISO/IEC 11179 Metadata Registry (MDR)_ : Addresses issues in the metadata and data modelling space.
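To make the choice of metadata standards concrete, a Dublin Core style descriptive record for a hypothetical RECAP dataset could look like the following sketch; every value is an invented placeholder, not project data.

```python
# Dublin Core style descriptive record for one hypothetical RECAP dataset;
# all values below are invented placeholders for illustration.
dataset_record = {
    "dc:title": "Sentinel-2 spectral indices, pilot region X",
    "dc:creator": "RECAP consortium",
    "dc:subject": ["agriculture", "cross-compliance", "remote sensing"],
    "dc:description": "Monthly vegetation indices derived from Sentinel-2 imagery.",
    "dc:date": "2017-01",
    "dc:type": "Dataset",
    "dc:format": "GeoTIFF",
    "dc:language": "en",
    "dc:rights": "Open data licence (to be specified)",
}
```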
### 1.2.2 Making data openly accessible
The objectives of this point address the following issues 1 :
* Specify which data will be made openly available; if some data is kept closed, explain why.
* Specify how the data will be made available.
* Specify what methods or software tools are needed to access the data, whether documentation about the software is necessary, and whether it is possible to include the relevant software (e.g. in open source code).
* Specify where the data and associated metadata, documentation and code are deposited.
* Specify how access will be provided in case there are any restrictions.
### 1.2.3 Making data interoperable
This point will describe the assessment of the data interoperability
specifying what data and metadata vocabularies, standards or methodologies
will be followed in order to facilitate interoperability. Moreover, it will
address whether standard vocabulary will be used for all data types present in
the data set in order to allow inter-disciplinary interoperability.
### 1.2.4 Increase data re-use
This point addresses the following issues 1 :
* Specify how the data will be licensed to permit the widest reuse possible.
* Specify when the data will be made available for re-use.
* Specify whether the data produced and/or used in the project is usable by third parties, especially after the end of the project.
* Provide a description of the data quality assurance processes.
* Specify the length of time for which the data will remain re-usable.
## 1.3 Allocation of resources
The objectives of this point address the following issues 1 :
* Estimate the costs for making the data FAIR and describe the method of covering these costs.
* Identify responsibilities for data management in the project.
* Describe the costs and potential value of long-term preservation.
## 1.4 Data security
This point will address data recovery as well as secure storage and transfer
of sensitive data.
## 1.5 Ethical aspects
This point will cover the context of the ethics review, ethics section of DoA
and ethics deliverables including references and related technical aspects.
## 1.6 Other issues
Other issues will refer to other national/ funder/ sectorial/ departmental
procedures for data management that are used.
# 2. DMP Components in RECAP
## 2.1 DMP Components in WP1 – Project Management (DRAXIS)
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
**Contact details of project partners and advisory board**
Databases containing all the necessary information regarding the project
partners and Advisory Board members.
The project partners’ data is stored in a simple table in the RECAP wiki, with
the following fields:
* Name
* Email
* Phone
* Skype id
The advisory board members’ data is described by the following fields:
* Name
* Description
* Affiliation
* Organisation
* Country
* Proposed by
Additional fields will be added as the project progresses.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
The databases will not be publicly available; they will only be accessible
through the RECAP wiki, and only the members of the consortium will have
access to that material.
The administration of the RECAP wiki will only be accessible by the
Coordinator (DRAXIS) of RECAP, and the databases will be renewed when new
data becomes available.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Preserving the contact details of the project partners and advisory board
members for the entire duration of the project will facilitate internal
communication.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
The data will be preserved and shared with the members of the consortium
through the RECAP wiki. The data is collected for internal use in the project
and is not intended for long-term preservation. The work package leader keeps
a quarterly backup on a separate disk.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
## 2.2 DMP Components in WP2 – Users’ needs analysis & coproduction of
services
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
User needs are collected both for the initial requirements (D2.2: Report of
user requirements in relation to the RECAP platform) and for the
co-production phase (D2.4: Report on co-production of services); where
applicable, the results will also be used to produce peer-reviewed papers.
The collection of data from end users is an integral part of the RECAP
project, and co-production of the final product will help to ensure the
creation of a useful product.
Questionnaire data (including written responses (.docx and .xlsx) and
recordings (.mp3)) comprise the majority of the data. The work package
leader may also collect previous inspection and BPS reports.
The origin of the data is:
* Paying Agency partners in the RECAP project,
* farmers in the partner countries,
* agricultural consultants and accreditation bodies in the partner countries.
Written responses are likely to be fairly small in size (<1 GB over the course
of the project). Recordings are larger files, likely 10-20 GB over the course
of the project.
The data will be useful to the work package 3 leader for the production of
the RECAP platform, to other partner teams throughout the project, and to the
wider research community when results are published.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
When data is published in peer-reviewed papers it will be available to anyone
who wishes to use it. As it contains confidential and sensitive information,
the raw data will not be made available.
The naming convention used is Data_<WPno>_<serial number of
dataset>_<dataset title>, e.g. Data_WP1_1_User generated content (a parsing
sketch for this convention follows this table). Data is stored on University
of Reading servers and labelled with the work package, country of origin and
the type of data. Data can be searched by country, WP number or data type.
There are unlikely to be multiple versions of data collected; for example,
each interview will be conducted on a single occasion.
This data contains sensitive personal information, so it cannot be made
public. Data included in published papers will be anonymised and summarised
by region or other suitable grouping criteria (e.g. farm type or farmer age)
following the journal standards, to make it possible to include in
meta-analyses.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data contains sensitive personal data therefore it cannot legally be made
public. Anonymised, summarised data will be available in any published papers.
Complete data cannot be made available because it contains sensitive personal
data.
</td> </tr> </table>
<table>
<tr>
<th>
Making data interoperable
</th>
<th>
Raw data cannot be made freely available because it contains sensitive
personal information. Data included in published papers will be anonymised and
follow the standards of the journal to ensure that it can be used in meta-
analysis.
</th> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Any data published in papers will be immediately available for meta-analysis.
However, it is not legal to release sensitive personal data such as the
questionnaire responses.
Data quality is assured by asking partners to fill out paper questionnaires
in their own languages. These are then translated and stored in spreadsheets.
Separately, the interviews are recorded, translated and transcribed. This
ensures accurate data recording and translation.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
The cost of publishing papers in open access format is the key cost in this
part of the project. During the project, money from the RECAP budget will be
used to cover journal fees (approximately £1000/paper). Papers are likely to
be published after the completion of the project; in this case the university
has a fund to which we can apply in order to cover the costs of open access
publishing.
Data is stored on University of Reading servers.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
University of Reading servers are managed by the university IT services. They
are regularly backed up and secure.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
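To illustrate the naming convention mentioned above (Data_<WPno>_<serial number of dataset>_<dataset title>), the sketch below parses and validates such names in Python; it is an illustrative helper, not part of the RECAP platform.

```python
import re

# Pattern for the convention Data_<WPno>_<serial number of dataset>_<dataset title>,
# e.g. "Data_WP1_1_User generated content".
NAME_RE = re.compile(r"^Data_WP(?P<wp>\d+)_(?P<serial>\d+)_(?P<title>.+)$")

def parse_dataset_name(name: str) -> dict:
    """Return the work package, serial number and title encoded in a dataset name."""
    match = NAME_RE.match(name)
    if match is None:
        raise ValueError(f"Name does not follow the RECAP convention: {name!r}")
    return {
        "wp": int(match.group("wp")),
        "serial": int(match.group("serial")),
        "title": match.group("title"),
    }

print(parse_dataset_name("Data_WP1_1_User generated content"))
# -> {'wp': 1, 'serial': 1, 'title': 'User generated content'}
```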
## 2.3 DMP Components in WP3 – Service integration and customisation
### 2.3.1 System Architecture
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
A report describing the RECAP platform in details containing information like
component descriptions and dependencies, API descriptions, information flow
diagram, internal and external interfaces, hardware requirements and testing
procedures. This will be the basis upon which the system will be built.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The report will become both discoverable and accessible to the public once it
is delivered to the EU and the consortium decides to do so.
The report will contain a table stating all versions of the document, along
with who contributed to each version, what the changes were, and the date the
new version was created.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
The data will be available in D3.1: System architecture. The dissemination
level of D3.1 is public. It will be available through the RECAP wiki for the
members of the consortium and when the project decides to publicise
deliverables, it will be uploaded along with the other public deliverables to
the project website or anywhere else the consortium decides.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Engineers who want to build similar systems could use this as an example.
</td> </tr> </table>
<table>
<tr>
<th>
Allocation of resources
</th>
<th>
N/A
</th> </tr>
<tr>
<td>
Data security
</td>
<td>
The Architecture report will be securely saved in the DRAXIS premises and will
be shared with the rest of the partners through the RECAP wiki.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.3.2 Website content farmer
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Various data, such as users’ personal information, farm information, farm
logs, reports, and shapefiles containing farm locations, will be generated
via the platform. All of these data will be useful for the self-assessment
process and the creation of meaningful tasks for the farmers. The data
described above will be saved in the RECAP central database.
All user actions (login, logout, account creation, visits to specific parts
of the app) will be logged and kept in the form of a text file. This log will
be useful for debugging purposes.
Reports containing information on user devices (which browsers and mobile
phones) as well as the number of mobile downloads (taken from the Play Store
for Android downloads and the App Store for iOS downloads) will be useful for
marketing and exploitation purposes, as well as for decisions regarding the
supported browsers and operating systems.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Every action on the website will produce meaningful metadata, such as the
time and date of data creation or amendment and the owners of the actions
that took place. Metadata will assist the discoverability of the data and
related information.
Only the administrator of the app will be able to discover all the data
generated by the platform.
The database will not be discoverable to other network machines operating on
the same LAN, VLAN with the DB server or other networks. Therefore only users
with access to the server (RECAP technical team members) will be able to
discover the database.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Only registered users and administrators will have access to the data. The
data produced by the platform is sensitive private data and cannot be shared
with others without the user’s permission. No open data will be created as
part of RECAP.
The database will only be accessible by the authorised technical team.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All platform generated data will be saved on the RECAP database server.
Encryption will be used to protect sensitive user data such as emails and
passwords (a minimal hashing sketch follows this table). All data will be
transferred via SSL connections to ensure a secure exchange of information.
If there is a need for updates, the old data will be overwritten; all actions
will be audited in detail and a log containing the changed text will be kept
for security reasons. The system will be backed up daily and the backups will
be kept for 3 days. All backups will be hosted on a remote server to avoid
disaster scenarios.
All servers will be hosted behind firewalls inspecting all incoming requests
against known vulnerabilities such as SQL injection, cookie tampering and
cross-site scripting, etc. Finally, IP restriction will enforce the secure
storage of data.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
All farmer generated data will be protected and will not be shared without the
farmer’s consent.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
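Where the plan says that passwords will be stored encrypted, in practice this is usually realised as a salted one-way hash rather than reversible encryption. Below is a minimal sketch using only the Python standard library; the PBKDF2 parameters are illustrative assumptions, not RECAP's chosen configuration.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor, not a RECAP-mandated value

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the plaintext is never persisted."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare it in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```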
### 2.3.3 User uploaded photos
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
RECAP users will be able to upload photos from a farm. These photos will be
timestamped and geolocated and will be saved in the RECAP DB or a secure
storage area (a sketch of such a metadata record follows this table). The
purpose of the images is to demonstrate compliance or non-compliance. The
most common file type expected is jpg.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Metadata related to the location and the time the photo was taken, as well as
a name, description and tag for the photo, will be saved. These metadata will
help the discoverability of the photos within the platform. Farmers will be
able to discover photos related to their farms (uploaded either by them or by
the inspectors) and Paying Agencies will be able to discover all photos to
which they have been granted access.
The images folder will not be discoverable by systems or persons in the same
or other servers in the same LAN/VLAN as the storage/database server.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Only if the farmer allows it may some photos be used openly within the RECAP
platform as good practice examples. Otherwise, and only if the farmer gives
their consent, the photos will be accessible by the relevant RECAP users
only.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Photos will be saved in jpeg format.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Farmers will be able to download photos and use them in any way they want.
Inspectors and paying agencies will have limited abilities to reuse the data,
depending on the access level given by the farmer. This will be defined later
in the project.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Preserving photos for a long time will offer both farmers and the paying
agencies the opportunity to check field conditions of previous years and use
them as examples to follow or avoid.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
User generated photos will be saved on the RECAP server. SSL connections will
be established so that all data are transferred securely. In case of
necessary updates, the old data will be overwritten; all actions will be
audited in detail and a log containing the changed text will be kept for
security reasons. The system will be backed up daily and backups will be kept
for 3 days. All backups will be hosted on a remote server to avoid disaster
scenarios.
All servers will be hosted behind firewalls inspecting all incoming requests
against known vulnerabilities such as SQL injection, cookie tampering and
cross-site scripting, etc. Finally, IP restriction will enforce the secure
storage of data.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
All user generated data will be protected and will not be shared without the
farmer’s consent.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
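Since each uploaded photo is to carry a timestamp, a geolocation and a name, description and tag, a compact way to persist this alongside the jpg is a small structured record. The sketch below uses the fields listed in the plan; the JSON serialisation and the example values are assumptions, not the platform's actual storage format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PhotoMetadata:
    """Metadata stored with each uploaded farm photo (fields from the plan)."""
    file_name: str
    timestamp: str   # ISO 8601 timestamp of when the photo was taken
    latitude: float  # geolocation of the shot
    longitude: float
    name: str
    description: str
    tag: str

# Hypothetical example values for one uploaded photo.
meta = PhotoMetadata(
    file_name="field_north_2017-05-12.jpg",
    timestamp="2017-05-12T09:41:00Z",
    latitude=51.44,
    longitude=-0.94,
    name="North field buffer strip",
    description="Uncultivated margin along the watercourse",
    tag="cross-compliance",
)

# Persist the record next to the image so the platform can query it later.
with open("field_north_2017-05-12.json", "w") as fp:
    json.dump(asdict(meta), fp, indent=2)
```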
### 2.3.4 Website content inspectors
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Inspection results will be generated by the inspectors through the system. The
inspection results will be available through the farmer’s electronic record
and will be saved in the RECAP central database.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Metadata such as date, time, associated farmer and inspector and inspection
type will be saved along with the inspection results to enhance the
discoverability of the results.
Inspectors will be able to discover all inspection results, whereas farmers
will only be able to discover results of their farms. The administrator of the
app will be able to discover all the inspection results generated by the
platform.
The database will not be discoverable to other network machines operating on
the same LAN, VLAN with the DB server or other networks. Therefore only users
with access to the server (RECAP technical team members) will be able to
discover the database.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Inspection results contain sensitive private data and can only be accessed by
inspectors and associated farmers. These data cannot be shared with others
without the user’s permission. No open data will be created as part of RECAP.
The database will only be accessible by the authorised technical team.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
It will be possible to export inspection results in pdf format for use in
other systems that the local governments use to manage farmers’ payments.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
RECAP will be integrated with third party applications, currently being used
by the local governments, in order to reuse information already inserted in
those systems.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All platform generated data will be saved on the RECAP database server. All
data will be transferred via SSL connections to ensure secure exchange of
information.
If there is a need for updates, the old data will be overwritten; all actions
will be audited in detail and a log containing the changed text will be kept
for security reasons. The system will be backed up daily and the backups will
be kept for 3 days. All backups will be hosted
on a remote server to avoid disaster scenarios.
All servers will be hosted behind firewalls inspecting all incoming requests
against known vulnerabilities such as SQL injection, cookie tampering and
cross-site scripting, etc. Finally, IP restriction will enforce the secure
storage of data.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
Inspection results will be protected and will not be shared without the
farmer’s consent.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.3.5 E-learning material
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
As part of RECAP, videos and presentations will be created in order to
educate farmers and inspectors on current best practices. Some of them will
be available for users to view whenever they want, and others will be
available only via live webinars. The e-learning material will mainly be
created by the paying agencies, and there is a possibility of reusing
existing material from other similar systems.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Metadata such as video format, duration, size, number of views and number of
participants in live webinars will be saved along with the videos and the
presentations in order to enhance their discoverability. All registered users
will be able to discover the e-learning material either via a search
capability or via a dedicated area that will list all the available sources.
The database and the storage area will not be discoverable to other network
machines operating on the same LAN, VLAN with the DB server or other networks.
Therefore only users with access to the server (RECAP technical team members)
will be able to discover the database and the storage area.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
The e-learning material will only be accessible through the RECAP platform.
All RECAP users will have access to that material.
The database will only be accessible by the authorised technical team.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Videos and PowerPoint presentations will be saved on the RECAP database
server. All data will be transferred via SSL connections to ensure a secure
exchange of information.
The system will be backed up daily and the backups will be kept for 3 days.
All backups will be hosted on a remote server to avoid disaster scenarios.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.3.6 CC laws and rules
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Cross-compliance laws and inspection lists with checkpoints will be used both
by inspectors during inspections and by farmers to perform a form of
self-assessment. The lists will be provided by the Paying Agencies in various
formats (Excel, Word) and will be transformed into electronic form.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
All registered users will have access to the laws and the inspection
checklists via the RECAP platform.
Metadata related to the different versions of the checklists and the latest
updates of the laws, along with dates and times, will also be saved. This
metadata will facilitate the discovery of the most up-to-date content.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All content related to CC laws and inspections will be securely saved on the
RECAP database server. All data will be transferred via SSL connections to
ensure secure exchange of information.
The system will be backed up daily and the backups will be kept for 3 days.
All backups will be hosted on a remote server to avoid disaster scenarios.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.3.7 Remotely sensed data
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Generation of satellite-based spectral indices and remote sensing
classification products to establish an alerting mechanism for breaches of
cross-compliance. The products will be used in WP4.
Processing of open satellite data for monitoring CAP implementation is at the
core of RECAP.
Data will be available in raster and vector formats, accessible through a
GeoServer application on top of a PostGIS database.
Historical, Landsat-based spectral indices may be used to assist a time-series
analysis.
The origin of the data will be:
USGS for Landsat ( _http://glovis.usgs.gov/_ ) and
ESA for Sentinel, delivered through the Hellenic National Sentinel Data Mirror
Site ( _http://sentinels.space.noa.gr/_ )
Sentinel-2 scenes are about 4 GB each, while Landsat scenes are around 1 GB
each, both compressed. Assuming 4 pilot cases, and a need to have at least one
image per month on a yearly basis, this accounts for 240 GB of image data
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
in total. Indices and classification products will account for an additional
10%, hence a total of 250 GB of data is foreseen to be generated.
Data and products will be useful for the Paying Agencies, the farmers
themselves and the farmer consultants. They will be ingested by the RECAP
platform and disseminated to project stakeholders, while their usefulness will
be demonstrated during the pilot cases.
</th> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The image data and the processed products will be available to all
stakeholders through a PostGIS database. Registered users will have unlimited
access to the products for the duration of the project.
Data is stored on the National Observatory of Athens servers and labelled with
the work package, country of origin and the type of data.
GeoServer and PostGIS provide a built-in keyword search tool that will be
used, and the Postgres MVCC versioning mechanism will also be used.
INSPIRE metadata will be created for all the EO-based geospatial products that
will be generated in the lifetime of the project.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Spectral Indices and EO-based classification objects will be made available.
Commercial VHR satellite imagery that will be used in the context of the
pilots will be restricted due to the associated restrictions of the satellite
data vendor.
Data and products will be made accessible through an API on top of a Postgres
database.
No special software is needed in order to access the data. A user can create
scripts to access and query the database and retrieve relevant datasets (a
minimal sketch is given after this table).
The data and associated metadata will be deposited on NOA’s servers.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
PostGIS and GeoServer are widely used tools for managing geospatial
information. The INSPIRE protocol, the typical standard for geospatial data,
will be used for metadata descriptors.
No standard vocabulary will be used and no ontology mapping is foreseen.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
The PostGIS database that will be created in RECAP will be licensed under the
Open Data Commons Open Database License (ODbL).
The EO-based geospatial products that will be generated in RECAP will be made
available for re-use for the project’s lifetime and beyond. All EO-based
products will remain usable after the end of the project. No particular data
quality assurance process is followed, and no relevant warranties will be
provided.
EO-based products will remain re-usable at least two years after the project’s
conclusion.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Costs for maintaining a database of the EO-based products that will be
generated to serve the pilot demonstrations are negligible. Publication fees
(approximately €1000/paper) are however foreseen.
Data is stored on NOA’s servers.
The cost of long-term preservation of the products generated for the pilots is
minimal.
However, if this is to scale-up and go beyond the demonstration phase, then
making data FAIR will incur significant costs. Generating FAIR spectral
indices and EO-based classification products for large geographical regions
and with frequent updates, has a potential for cross-
</td> </tr>
<tr>
<td>
</td>
<td>
fertilization of different fields (e.g. precision farming, CAP compliance,
environmental monitoring, disaster management, etc.).
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
NOA servers are managed by the IT department. They are regularly backed up and
secure.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
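As noted in the table above, the EO products will be served through a GeoServer application on top of a PostGIS database, and users can script against it. The following is a minimal, illustrative sketch of such a script; the server URL and layer name are hypothetical placeholders, not actual RECAP endpoints.

```python
# Minimal sketch: retrieving RECAP EO-based products from a GeoServer
# WFS endpoint as GeoJSON. URL and layer name are hypothetical.
import requests

GEOSERVER_OWS = "https://noa.example.org/geoserver/ows"  # hypothetical host

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "recap:spectral_indices",  # hypothetical layer name
    "outputFormat": "application/json",     # GeoServer's GeoJSON output
    "count": 100,                           # cap the number of features
}

resp = requests.get(GEOSERVER_OWS, params=params, timeout=30)
resp.raise_for_status()

for feature in resp.json()["features"]:
    # Each GeoJSON feature carries a geometry plus its attribute table.
    print(feature.get("id"), feature["properties"])
```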
_**2.4 DMP Components in WP4 – Deployment and operation** _
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
The purpose of the WP4 data is to identify all training needs for the pilot
cases, to complete the training and to perform the pilot testing in the 5
locations: Spain, Greece, Lithuania, the UK and Serbia. The WP4 data will also
serve to monitor the effective conduct of the pilots and provide effective
feedback to enhance the final solution of the RECAP platform. The data
collected and generated in WP4 will be necessary in order to develop the
proper platform and test it for the delivery of public services that will
enable the improved implementation of the Common
Agricultural Policy (CAP), increasing efficiency and transparency of public
authorities, offering personalised services to farmers and stimulating the
development of new added value services by agricultural consultants; and also
to develop personalised public services to support farmers to better comply
with CAP requirements.
Mainly, and where possible, online and/or electronic archives will be used.
The main documents and formats that will be used in order to collect
and generate the necessary data will be templates agreed in the D1.4: Pilot
Plan. There will be templates of documents such as: questionnaires,
interviews, cooperation agreements, invitation letters to participate in the
pilots, agendas and minutes of the meetings, attendance sheets, application
forms, informed consent forms, etc.
Semi-structured interviews with individuals will be collected and stored using
digital audio recording (e.g. MP3) only if the interviewees give their
permission. If they decline, interview notes will be typed up according to
agreed formats and standards.
All transcripts will be in Microsoft Word (doc. / docx.).
In the D4.1: Pilot Plan/Impact Assessment Plan, the metadata of WP4,
procedures and file formats for note-taking, recording, transcribing, storing
visual data from participatory techniques, and semi-structured interviews,
questionnaires and focus group discussion data will be developed and agreed.
In other Work Packages, some existing general data are already being used to
develop different tasks and deliverables; for example, compliance requirements
in each country.
Similarly, in WP4 some existing data from the different pilot partners will be
re-used, or made available in the necessary format, for the proper development
of the work package.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Generally the research objectives require qualitative data that are not
available from other sources. Some data can be used to situate and triangulate
the findings of the proposed research, and will supplement the collected data
as part of the proposed research. However, qualitative and attitudinal data
are generally rare or of insufficiently high quality to address the research
questions. The research objectives also require quantitative analysis of
public data.
The origin of the data for WP4, will be mainly from:
Partners of the project
Pilot partners
Public national/regional authorities of the Pilot countries
Agricultural consultancy services of pilot countries
Different farmers from the different pilot countries
This data will be collected through different templates, questionnaires,
interviews, meetings and focus groups.
The details of this data origin, and of how the data will be generated, will
be provided in D1.4: Pilot Plan.
Firstly, the WP4 data will be useful for the research purposes of the project,
and therefore for its partners and for the improvement of the RECAP platform
that will be developed in WP3. The WP4 data and the results of the project
will also be useful for the regional/national CAP authorities in the pilot
countries, for the agricultural consultancy services and, of course, for the
farmers and farmers’ cooperatives.
</th> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Files will follow the naming convention “data_<name of the
file>_WP<no>_Task<no>” (for example, a questionnaire spreadsheet for Task 4.1
would be named “data_questionnaires_WP4_Task4.1.xlsx”).
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
The WP leader intends to use Hadoop, which supports multiple types of data,
both structured and unstructured, and can generate value from them remarkably
quickly. Another major benefit of Hadoop is that it is resilient to failure:
when data is sent to an individual node, it is also replicated to other nodes
in the cluster, which means that in the event of a failure there is another
copy available for use.
Other NoSQL technologies may also be used to store unstructured data where it
is considered that this will reinforce efficiency (e.g. MongoDB); a minimal
sketch is given after this table. This new breed of NoSQL databases is
designed to expand transparently to take advantage of new nodes, and they are
usually designed with low-cost commodity hardware in mind.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
The data of WP4 will start to be collected and generated in spring 2017, and
all the specifications and periods of use and re-use will be established in
deliverable D4.1: Pilot Plan.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
The data of WP4 will need to be backed up regularly; due to the risk of
viruses, this will include regular email sharing with the technological
partners and the coordinator, so that up-to-date versions will be stored on
different institutions’ servers. Qualitative data will be backed up and
secured by the responsible partner of WP4 on a regular basis, and metadata
will include clear labelling of versions and dates. There are some potential
sensitivities around some of the collected data, so a system for data
protection will be established, including the use of passwords and safe backup
hardware.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
A letter explaining the purpose, approach and dissemination strategy
(including plans for sharing data) of the pilot phase, and an accompanying
consent form (including data sharing), will be prepared and translated into
the relevant languages by the pilot partners. A clear verbal explanation will
also be provided to each interviewee and focus group participant. Commitments
to confidentiality will be maintained by ensuring that recordings will not be
made public, that transcripts will be anonymised, and that details that could
be used to identify participants will be removed from transcripts or concealed
in write-ups. Due to the highly focused nature of the pilot phase, many
participants may be easily identifiable despite the efforts to ensure
anonymity or confidentiality. In such cases, participants will be shown
sections of transcript and/or report text in order to ensure the
confidentiality of their interview data.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
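Since the WP leader intends to store unstructured WP4 data in Hadoop or a NoSQL store such as MongoDB (see the table above), the following is a minimal sketch of the NoSQL option, assuming a locally reachable MongoDB instance; the database, collection and field names are hypothetical placeholders.

```python
# Minimal sketch: storing an unstructured WP4 pilot record in MongoDB.
# Database, collection and field names are hypothetical placeholders.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
records = client["recap_wp4"]["pilot_records"]

record = {
    "type": "semi_structured_interview",
    "pilot_country": "Greece",                   # example value
    "file": "data_interview01_WP4_Task4.1.mp3",  # follows the WP4 naming convention
    "consent_given": True,
    "collected_at": datetime.now(timezone.utc),
}

result = records.insert_one(record)
print("Stored record with id:", result.inserted_id)
```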
_**2.5 DMP Components in WP5 – Dissemination & Exploitation ** _
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Data collection is necessary for the elaboration of the Dissemination and
Communication Strategy, the establishment and management of the Network of
Interest, the Market assessment and the Business plan.
Lists of communication recipients are kept in Excel files containing
organisations/bodies and their e-mail addresses. Parts of the lists have been
developed in previous projects of the WP leader. The rest of the data has been
developed through desk research.
Project User Group contact details (name and e-mail address). Not fully
specified and finalised yet.
Information regarding direct/indirect competitors and data regarding Paying
Agencies, Agri-consultants and farmers (name/organization and email address).
Not fully specified and finalised yet.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The publicly available deliverable “Dissemination and Communication Strategy”
will facilitate the discoverability of the data contained in it.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data concerning e-mail addresses will not be openly available, as they are
personal data.
Deliverables publicly posted on the RECAP website will make all related data
available.
No particular methods or software tools are needed to access the data.
Data are stored at ETAM’s server. Deliverables are posted on the website of
RECAP.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Data management responsibilities have been allocated to two members of the WP
project team.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Automated backup of files.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
# Executive Summary
This document aims to provide a detailed overview of the platforms and
techniques that can be used as data sources for the entire SSIX platform.
The document clearly lists all the public data available that can be retrieved
and processed by the SSIX platform, along with the detailed results of the
assessments performed on the identified data sources. This document will help
to highlight important structural aspects of the platform and to identify all
the criticalities that have to be taken into consideration when dealing with
certain data collection techniques.
*** This is a public shortened version of D3.1. The rest of the content was
considered commercially sensitive by the consortium members and therefore was
not made public. The full deliverable was submitted to EC. For any questions
and queries, please contact the SSIX Coordinator for further details ***
# 1 Introduction
The present document aims to provide a detailed overview of the platforms and
techniques that can be used as data sources for the entire SSIX platform.
The main activity of WP3 consists in the implementation of the processes
dedicated to gathering data and metadata from several platforms and websites:
the assorted information needed for the calculation of the SSIX indices that
form the core logic of the platform. These processes will allow applications
to interact with different social platforms, blogs and newsfeeds, thus
requiring the implementation of complex pieces of software dedicated to the
collection and processing of increasing amounts of data.
This introductory document contains the results of the assessments performed
on the identified data sources providing APIs, that helped to highlight
important structural aspects of each platform and to identify all the
criticalities that have to be taken into consideration when dealing with
certain data collection techniques.
For instance, almost every social platform (like Facebook, Twitter or Google+)
exposes public APIs that can be used to retrieve data from the available
endpoints. In these cases, a fundamental factor driving the definition of the
functional specifications, resides in the usage limit imposed by these
platforms. It is therefore important to keep an eye on these limits when
defining the scope of the external data to be collected.
A dedicated chapter has been produced about data gathering techniques to be
used on those sources that do not provide API access (e.g. web sites, forums,
etc.), thus requiring interaction with RSS feeds or HTML pages.
Moreover, the document clearly lists all the public data available on the
different sources that can be retrieved and processed by the SSIX platform.
These tables will help to identify the significant fields to be stored and
sent to the subsequent NLP processes.
# 2 Data Sources Assessment
## 2.1 Analysis Criteria
All the sources listed here have been analysed and evaluated using the same
criteria. The following list provides a short description of each criterion
considered during the assessment (a minimal machine-readable sketch of one
assessment record is given after the list).
If the criterion is not applicable to the analysed source, the label **N/A**
is used.
If no information is found about a criterion, the label **UNREP** is used.
* **Source name** : common name for the data source.
* **Status** : current status of the access to the source (active, inactive or closed).
* **API name** : common name of the API exposed by the data source.
* **Latest version** : latest version available at the time this document is updated.
* **Update frequency** : frequency with which the API is updated.
* **Costs** : the cost and pricing policies for querying the data source, if applicable.
* **Description** : brief description of the API used.
* **Interface type** : the kind of protocol exposed by the API (e.g. SOAP, RESTful, etc.).
* **Output type** : description of the data format returned by the source.
* **Authentication** : description of the authentication process, if requested.
* **Data timezone** : timezone used in the data returned by the source.
* **Available languages** : if the source allows to filter the contents returned on the basis of the language, this contains the list of supported languages.
* **Region** : the world region in which the source is valid, if applicable.
* **Quota limits** : documented limits in the number of possible calls to the API.
* **Maximum amount of data per request** : the maximum amount of data that is returned at every request when the source is queried using the API.
* **Maximum historical data depth** : the maximum depth in time that can be requested and retrieved from the source.
* **Most recent data available** : the last hour/day available when performing a request, this indicates the freshness of the data.
* **Documentation** : where to find official documentation about the source.
* **Support** : indicates whether official support exists and where to find it.
* **Resources** : tools and resources available to test, debug or explore the API.
* **Public data available** : list of the public data that can be retrieved from the data source using the described method.
* **Final considerations and known criticalities** .
* **Alternatives** : possible services to use as an alternative in case of a major disruption of the official APIs.
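To make the criteria concrete, here is a minimal sketch of how a single assessment could be captured as a machine-readable record; all values are hypothetical, and the `N/A`/`UNREP` labels are used exactly as defined above.

```python
# Hypothetical example of one data-source assessment following the criteria
# listed above ("N/A" = not applicable, "UNREP" = no information found).
assessment = {
    "source_name": "ExampleSocialPlatform",
    "status": "active",
    "api_name": "Example Public API",
    "latest_version": "2.1",
    "update_frequency": "UNREP",
    "costs": "free tier, paid plans for higher quotas",
    "interface_type": "RESTful",
    "output_type": "JSON",
    "authentication": "OAuth 2.0",
    "data_timezone": "UTC",
    "available_languages": ["en", "de", "it"],
    "region": "N/A",
    "quota_limits": "450 requests per 15-minute window",
    "max_data_per_request": 100,
    "max_historical_data_depth": "7 days",
    "most_recent_data_available": "near real time",
}
```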
# 4 Data Management Plan
As reported in the official “Guidelines on Data Management in Horizon 2020”,
the purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the
applicants with regard to all the datasets that will be generated by the
project.
In detail, the DMP describes the data management lifecycle for all the data
sets that will be collected, processed or generated by the project.
The DMP is not a fixed document, but evolves during the lifespan of the
project. This is why three versions of this document will be released with the
following cadence:
* V1 in M4
* V2 in M18
* V3 in M24
The Data Management Plan for the SSIX project can be found in Appendix A1 in
the CO version of this deliverable.
## 4.1 Open Research Data Pilot (Open Access to Scientific Publications and
Research Data)
The SSIX project is participating in the Open Research Data Pilot (ORDP),
meaning that all publications, data and metadata to reproduce scientific
experiments should be open access. The following constitutes what SSIX will be
sharing as part of the ORDP:
* All open source software and components that shall be developed as part of the project work.
* Where some code is not open it may be available as web service/API for academic/research by industry partners but not for commercial use freely.
* All public deliverables.
* Results and enriched data derived from experiments, as this will allow scientists/researchers to verify and repeat the experiments. Only data which are not proprietary or commercially sensitive and do not have any ethical/legal implications will be made available. This is in line with the ORDP (see page 9 of _Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020_), whereby a participant can opt out for reasons related to commercial interests, security or the protection of personal data.
* All publications will ideally be made open access **type gold** (immediately accessible for free); if not, then certainly **type green, which would involve a period of embargo**. Note that if a peer reviewed publication contains any commercially sensitive content, it will pass through IPR screening before being published under open access, i.e. "protect first and publish later". Note that if any publishers are not "open access friendly", SSIX can always opt to publish pre-print forms of articles as open access. This is becoming quite common across the research community.
* All data to be shared with or as part of the ORDP will be placed in a repository that will point to all data entities shared within ORDP so that these can be accessed, mined, exploited, reproduced etc. 2
* The Open Access Infrastructure for Research in Europe (OpenAIRE 3 ) is the recommended single point of entry for open access data and publications by the EC. 4
* We will seek to ensure that there is a single point of entry to all SSIX publications and data. ARAN (Access to Research at National University of Ireland, Galway) is already registered as an open access repository in OpenAIRE, as is OPUS - Volltextserver Universität Passau (OPUS University of Passau). The consortium will ensure that all publications that are deposited within these repositories will be correctly attributed via OpenAIRE to the **SSIX project**, and likewise any publications that are not deposited through NUIG or PASSAU will be submitted directly to OpenAIRE. The advantage of using ARAN or OPUS is that we automatically adhere to all the guidelines listed by the EC, since both repositories are listed under the Directory of Open Access Repositories (OpenDOAR).
* Finally, the mandatory first version of the **Data Management Plan (DMP)** must be produced at month six to participate in the ORDP. The DMP is attached to the CO version of this deliverable in Appendix A1.
Not all the data collected or produced by the project will be made available
to the public, due to legal implications; examples being the raw data gathered
from Twitter, Facebook or other social media platforms, which are protected by
strict terms and conditions that forbid distributing the contents to third
parties. Again, this is in line with page 9 of _Guidelines on Open Access to
Scientific Publications and Research Data in Horizon 2020_:
**_“if participation in the Pilot on Open Research Data is incompatible with
existing rules_ _concerning the protection of personal data”_ **
The DMP provided in **Appendix A1** of the CO version of this deliverable
helps to identify the different datasets of the SSIX project with a particular
attention to data sharing aspects, each may vary from case to case for an
individual dataset.
# 5 Technical Issues
## 5.1 Geographic Data Availability
A relevant piece of information for the calculation of the SSIX indices would
be the geographic data derived from the collected contents. This would make it
possible to attribute a specific origin to the detected sentiment trends,
modulating the algorithms in accordance with the position of the user that
generated the content.
Unfortunately, geolocation procedures cannot be implemented due to the lack of
statistical relevance. For instance, in the case of Twitter, which is the main
source for most of the incoming contents, we detected that less than 1% of
tweets in English contain geographic coordinates and only about 2% of the
total tweets have the “place” field populated (information explicitly provided
by the user). These numbers make it impossible to work with a statistically
significant sample.
Among the other sources, only Google+ and StockTwits seem to provide
geolocation information through their APIs (StockTwits returns an undocumented
location field). These platforms have not been tested yet, so it is not
possible to provide any statistical sample of geographic data.
## 5.2 Real Time Data Processing
Real-time data processing (or Nearly Real Time - NRT - in our case) consists
in collecting, analysing and returning a content a few moments after it has
been published on the original source. The delay in this case may vary _from
milliseconds to seconds_ according to different technical and functional
factors, among which: computing power, storage performance, incoming traffic,
the number of filters and enrichments applied to the original data, and the
complexity of the algorithms that manipulate the data.
Among the sources assessed in the present document, only Twitter and
StockTwits are suitable for processing data with a NRT approach. This is
because they provide real time streaming APIs that push the contents to the
clients as soon as they are posted, unlike the other sources that can be
queried with a traditional REST API approach.
These aspects are important for the definition of the algorithms created for
the calculation of the SSIX indices.
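As an illustration of the NRT approach described above, the sketch below consumes a line-delimited JSON stream over HTTP, handing each content to downstream processing as soon as it arrives; the endpoint and token are hypothetical, and real streaming APIs (e.g. Twitter's) differ in URL and authentication details.

```python
# Minimal NRT sketch: consuming a line-delimited JSON stream over HTTP.
# The endpoint and bearer token are hypothetical placeholders.
import json
import requests

STREAM_URL = "https://stream.example.com/contents"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}       # hypothetical credential

with requests.get(STREAM_URL, headers=HEADERS, stream=True, timeout=90) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:  # skip keep-alive newlines
            continue
        content = json.loads(line)
        # Hand off immediately to filters/enrichments, keeping the delay
        # within the milliseconds-to-seconds range discussed above.
        print(content.get("id"), content.get("text"))
```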
## 5.3 Batch Data Processing
This kind of data processing refers to the procedures implemented in order to
retrieve data from sources exposing traditional REST APIs (like Facebook,
Google+ or Linkedin) or not providing API access at all (this is the case of
web page scraping or RSS feeds).
These procedures, to be considered completely independent pieces of software,
have to be scheduled in order to query the remote endpoints at given
intervals.
The interval suitable for each source cannot be determined a priori, since it
is strongly related to the number of items (keywords, stocks, users,
companies, etc.) to track and the limits imposed by the API, like the maximum
number of requests per minute.
The aim of the project, within the technical boundaries of the available
infrastructure, is to collect and analyze the data with the highest frequency
possible; therefore much effort will be put into the creation of data
gathering procedures acting at least on a 15-minute basis (a minimal sketch
follows).
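The sketch below polls a hypothetical REST endpoint on the 15-minute basis mentioned above; in practice each source would get its own interval, tuned to the number of tracked items and its API limits.

```python
# Minimal sketch: a batch data-gathering loop on a 15-minute basis.
# The endpoint is a hypothetical placeholder.
import time
import requests

INTERVAL_SECONDS = 15 * 60  # the 15-minute basis targeted above

def fetch_batch():
    # Hypothetical REST endpoint returning the latest items as a JSON list.
    resp = requests.get("https://api.example.com/posts?since=latest", timeout=30)
    resp.raise_for_status()
    return resp.json()

while True:
    started = time.time()
    try:
        items = fetch_batch()
        print(f"Fetched {len(items)} items")
    except requests.RequestException as exc:
        # A failed run is logged and retried at the next scheduled interval.
        print("Batch run failed:", exc)
    # Sleep only for the remainder of the interval to keep a steady cadence.
    time.sleep(max(0.0, INTERVAL_SECONDS - (time.time() - started)))
```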
## 5.4 Missing Data Handling
Missing data will be addressed with dedicated handlers raising alerts in the
following scenarios:
* The designated technical staff can be alerted (via email) in case of missing data for certain items or for repeated occurrences of data loss;
* The final user can be alerted with proper messages on the front-end, warning that some data are partial or missing.
It is important to distinguish between data missing because of malfunctions
and data missing because of an effective lack of contents on the remote
source. In the latter case, the lack of data itself provides significant
information that should be considered within the algorithms.
## 5.5 Errors Handling
Errors occurring during the data retrieval processes have to be promptly
pointed out through dedicated alerting systems (e.g. email or sms). In these
cases the designated technical staff will intervene in order to understand the
cause of the problem, recover the process and apply software patches if
needed.
Blocking errors may be caused by different factors, like unreported changes in
the remote endpoints (e.g. different field names in the JSON response) or
technical malfunctions occurring on the server.
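A minimal sketch of the e-mail alerting described in sections 5.4 and 5.5, using only the Python standard library, is shown below; the SMTP host and addresses are hypothetical placeholders.

```python
# Minimal sketch: alerting the technical staff by e-mail on missing data or
# blocking errors. SMTP host and addresses are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

def alert_staff(subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "[email protected]"      # hypothetical sender
    msg["To"] = "[email protected]"    # hypothetical recipient list
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com") as smtp:  # hypothetical SMTP host
        smtp.send_message(msg)

# Example: repeated data loss for a tracked item triggers an alert.
alert_staff(
    "SSIX alert: missing data",
    "No contents received for tracked item 'ACME' in the last 3 runs.",
)
```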
# 6 Conclusions
The considerations that emerged from this document demonstrate the
effectiveness of the assessments performed: the reader can easily acknowledge
the risks and criticalities deriving from the data gathering activities, along
with the complete lists of the collectable data.
First of all, there is a marked difference between real-time and batch
processing: in our case, only Twitter is suitable to support real-time
processing, since it provides a streaming API that pushes the Tweets to the
connected clients as soon as they are published. For all the other sources it
is necessary to develop ad hoc procedures that can be scheduled to request and
retrieve specific data at regular intervals, in compliance with the
limitations applied to certain APIs.
Another relevant topic emerging from this document is the variety of the
logics to be implemented in order to support the different data gathering
techniques. For the SSIX project, the data will be sourced from APIs, RSS
feeds, CSV files, and web pages using HTML scraping: every modality requires a
different approach that must take into consideration the substantial
differences between the queried platforms.
The assessments collected in this document also helped to identify the
criticalities and the issues related to this kind of activities. Most of them
derive from the experience, while others are clearly stated in the available
documentation. In general we are able to identify common criticalities, that
can be mainly related to the following risks:
* Application being blocked because of excess in the API usage;
* Application becoming obsolete because of changes in the API specifications, resulting in the inability to retrieve new data;
* Application becoming obsolete because of changes in the data structures, resulting in the inability to retrieve new data;
* Difficulty to find appropriate and complete documentation during development activities, leading to deploy potentially wrong procedures;
* Difficulty to find complete and reliable channels to monitor in order to stay updated on the potential changing of the sources.
These risks can be reduced with the adoption of the following measures:
* Accurate analysis of the limitations before the definition of the functional specifications;
* Distribution of the applications on clustered systems in order to prevent IP blockage;
* Creation of dedicated tasks able to constantly monitor the status of the queried sources and send appropriate alerts to request manual intervention;
* Correct handling of application errors and exceptions raised from failures in data requests, in order to address specific warnings to the right persons;
* Accurate and deep testing sessions during development activities and after each deploy.
An ideal scenario would involve a 24-hour service of constant human
monitoring, especially if the number of required servers increases
exponentially. This would allow prompt intervention in case of errors or
disruptions, but it requires substantial financial resources and cannot be
instituted during this phase of the project.
# Executive Summary
The present document is a deliverable of the RECAP project, funded by the
European Commission’s Directorate – General for Research and Innovation (DG
RTD), under its Horizon 2020 Innovation Action programme (H2020).
The deliverable presents the second version of the project Data Management
Plan (DMP). This second version lists the various new datasets that will be
produced by the project, and the main data sharing and management principles
the project will implement around them. Thus, the deliverable includes all
significant changes, such as changes in consortium policies and any external
factors that may have influenced data management in the RECAP project. It is
submitted in Month 12 as a mid-term review of the RECAP Data Management Plan.
The deliverable is structured in the following chapters:
Chapter 1 includes an introduction to the deliverable.
Chapter 2 includes the description of the datasets along with the documented
changes and additional information.
# 1\. Introduction
The RECAP project aims to develop and pilot test a platform for the delivery
of public services that will enable the improved implementation of the CAP,
targeting public Paying Agencies, Agricultural Consultants and farmers. The
RECAP platform will make use of large volumes of publicly available data
provided by satellite remote sensing, and user-generated data provided by
farmers
through mobile devices.
This deliverable D1.5 “Data Management Plan (2)” aims to document all the
updates to the RECAP project data management life cycle for all datasets to be
collected, processed or generated, and describes how the results will be
shared, including access procedures and preservation, according to the Horizon
2020 guidelines. It is a living document that evolves and gains more precision
and substance during the lifespan of the project.
Although the DMP is being developed by DRAXIS, its implementation involves all
project partners’ contributions. The next version of the DMP, to be published
at M30, will describe in more detail the practical data management procedures
implemented by the RECAP project.
Work Packages in which no changes have occurred are not included in this
deliverable.
# 2\. DMP Components in RECAP
## 2.1 DMP Components in WP2 – Users’ needs analysis & coproduction of
services (UREAD)
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Collection of user needs for scoping of the initial requirements (Deliverable
2.2) and also for the co-production phase (Deliverable 2.4), where applicable
results will also be used to produce peer reviewed papers.
Collating data from end users is an integral part of the RECAP project – co-
production of the final product will help to ensure that a useful product is
created.
Questionnaire data (including written responses (.docx and .xlsx) and
recordings (.mp3)) comprise the majority of the data. We may also collect
previous inspection and BPS reports.
The origin of the data is from Paying Agency partners in the RECAP project,
farmers in the partner countries as well as agricultural consultants and
accreditation bodies in the partner countries.
Written responses are likely to be fairly small in size (<1 GB over the course
of the project). Recordings are larger files, likely to total 10-20 GB over
the course of the project.
The data is essential for the technical team to develop the RECAP platform;
other partner teams throughout the project, as well as the wider research
community when results are published will benefit.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
When data is published in peer reviewed papers it will be available to any who
wish to use it. As it contains confidential and sensitive information, the raw
data will not be made available.
Data is stored on University of Reading servers and labelled with the work
package, country of origin and the type of data.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data contains sensitive personal data therefore it cannot legally be made
public. Anonymized, summarised data will be available in any published papers.
Complete data cannot be made available because it contains sensitive personal
data.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Raw data cannot be made freely available because it contains sensitive
personal information. Data included in published papers will be anonymised and
follow the standards of the journal to ensure that it can be used in meta-
analysis.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Any data published in papers will be immediately available for meta-analysis.
However, it is not legal to release sensitive personal data such as the
questionnaire responses.
Raw data contains sensitive personal data and cannot legally be made
available.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Data quality is assured by asking partners to fill out paper questionnaires in
their own languages. These are then translated and stored in spreadsheets.
Separately, the interviews are recorded, translated and transcribed. This
ensures accurate data recording and translation.
</th> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Costs of publishing papers in open access format is the key cost in this part
of the project. During the duration of the project, money from the RECAP
budget will be used to cover journal fees (these are approximately
£1000/paper). Papers are likely to be published after the completion of the
project, in this case the university has a fund to which we can apply in order
to cover the costs of open access publishing.
Data is stored on University of Reading servers.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
University of Reading servers are managed by the university IT services. They
are regularly backed up and secure.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
## 2.2 DMP Components in WP3 – Service integration and customisation (DRAXIS
– NOA)
### 2.2.1 System Architecture
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
A report describing the RECAP platform in details containing information like
component descriptions and dependencies, API descriptions, information flow
diagram, internal and external interfaces, hardware requirements and testing
procedures. This will be the basis upon which the system will be built.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
It will become both discoverable and accessible to the public when the
consortium decides to do so.
The report contains a table stating all versions of the document, along with
who contributed to each version, what the changes were, as well as the date
the new version was created.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
The data are available in D3.1: System architecture. The dissemination level
of D3.1 is public. It is available through the RECAP wiki for the members of
the consortium and, when the project decides to publicise deliverables, it
will be uploaded along with the other public deliverables to the project
website or anywhere else the consortium decides.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Engineers who want to build similar systems could use this as an example.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
The Architecture report will be securely saved in the DRAXIS premises and will
be shared with the rest of the partners through the RECAP wiki.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.2.2 Website content farmer
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Various data like users’ personal information, farm information, farm logs,
reports and shapefiles containing farm location will be generated via the
platform. All of these data will be useful for the self-assessment process and
the creation of meaningful tasks for the farmers. The data described above
will be saved in the RECAP central database.
All user actions (login, logout, account creation, visits on specific parts of
the app) will be logged and kept in the form of a text file. This log will be
useful for debugging purposes.
Reports containing information on user devices (browsers and mobile phones) as
well as the number of mobile downloads (taken from the Play Store for Android
downloads and the App Store for iOS downloads) will be useful for marketing
and exploitation purposes, as well as for decisions regarding the supported
browsers and operating systems.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Every action on the website will produce meaningful metadata such as time and
date of data creation or data amendments and owners of actions that took
place. Metadata will assist the discoverability of the data and related
information.
Only the administrator of the app will be able to discover all the data
generated by the platform.
The database will not be discoverable to other network machines operating on
the same LAN, VLAN with the DB server or other networks. Therefore only users
with access to the server (RECAP technical team members) will be able to
discover the database.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Only registered users and administrators will have access to the data. The
data produced by the platform are sensitive private data and cannot be shared
with others without the user’s permission. No open data will be created as
part of RECAP.
The database will only be accessible by the authorized technical team.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All platform-generated data will be saved on the RECAP database server.
Encryption will be used to protect sensitive user data like emails and
passwords. All data will be transferred via SSL connections to ensure secure
exchange of information.
In case of necessary updates, the old data will be overwritten, all actions
will be audited in detail, and a log containing the changed text will be kept
for security reasons. The system will be backed up daily and the backups will
be kept for 3 days. All backups will be hosted on a remote server to avoid
disaster scenarios.
All servers will be hosted behind firewalls inspecting all incoming requests
against known vulnerabilities such as SQL injection, cookie tampering and
cross-site scripting. Finally, IP restriction will enforce the secure storage
of data.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
All farmer generated data will be protected and will not be shared without the
farmer’s consent.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.2.3 User uploaded photos
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
RECAP users will be able to upload photos from a farm. These photos will be
timestamped and geolocated and will be saved in the RECAP DB or a secure
storage area. The purpose of the images is to prove compliance or not. The
most common file type expected is jpg.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Metadata related to the location and the time of the taken photo as well as a
name, description and tag for the photo will be saved. These metadata will
help the discoverability of the photos within the platform. Farmers will be
able to discover photos related to their farms (uploaded either by them or the
inspectors) and Paying Agencies will be able to discover all photos that have
been granted access to.
The images folder will not be discoverable by systems or persons in the same
or other servers in the same LAN/VLAN as the storage/database server.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Only if the farmer allows it, some photos might be openly used within the
RECAP platform as good practice examples. Otherwise, the photos will only be
accessible by the relevant RECAP users.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Photos will be saved in jpeg format.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Farmers will be able to download photos and use them in any way they want.
Inspectors and paying agencies will have limited abilities to reuse the data,
depending on the access level given by the farmer. This will be defined later
in the project.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Preserving photos for a long time will offer both farmers and the paying
agencies the opportunity to check field conditions of previous years and use
them as example to follow or avoid.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
User generated photos will be saved on the RECAP server. SSL connections will
be established so that all data are transferred securely. In case of necessary
updates, the old data will be overwritten and all actions will be audited in
detail and a log will be kept, containing the changed text for security
reasons. The system will be backed up daily and backups will be kept for 3
days. All backups will be hosted on a remote server to avoid disaster
scenarios.
All servers will be hosted behind firewalls inspecting all incoming requests
against known vulnerabilities such as SQL injection, cookie tampering and
cross-site scripting. Finally, IP restriction will enforce the secure storage
of data.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
All user generated data will be protected and will not be shared without the
farmer’s consent.
</td> </tr> </table>
<table>
<tr>
<th>
Other issues
</th>
<th>
N/A
</th> </tr> </table>
### 2.2.4 Website content inspectors
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Inspection results will be generated by the inspectors through the system. The
inspection results will be available through the farmer’s electronic record
and will be saved in the RECAP central database.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Metadata such as date, time, associated farmer and inspector and inspection
type will be saved along with the inspection results to enhance the
discoverability of the results.
Inspectors will be able to discover all inspection results, whereas farmers
will only be able to discover results of their farms. The administrator of the
app will be able to discover all the inspection results generated by the
platform.
The database will not be discoverable to other network machines operating on
the same LAN, VLAN with the DB server or other networks. Therefore only users
with access to the server (RECAP technical team members) will be able to
discover the database.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Inspection results contain sensitive private data and can only be accessed by
inspectors and associated farmers. These data cannot be shared with others
without the user’s permission. No open data will be created as part of RECAP.
The database will only be accessible by the authorized technical team.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Inspection results will be possible to be exported in pdf format and used in
other systems that the local governments are using to manage the farmer’s
payments.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
RECAP will be integrated with third party applications, currently being used
by the local governments, in order to reuse information already inserted in
those systems.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All platform-generated data will be saved on the RECAP database server. All
data will be transferred via SSL connections to ensure secure exchange of
information.
In case of necessary updates, the old data will be overwritten, all actions
will be audited in detail, and a log containing the changed text will be kept
for security reasons. The system will be backed up daily and the backups will
be kept for 3 days. All backups will be hosted on a remote server to avoid
disaster scenarios. All servers will be hosted behind firewalls inspecting all
incoming requests against known vulnerabilities such as SQL injection, cookie
tampering and cross-site scripting. Finally, IP restriction will enforce the
secure storage of data.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
Inspection results will be protected and will not be shared without the
farmer’s consent.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.2.5 E-learning material
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
As part of RECAP, videos and presentations will be created in order to educate
farmers and inspectors on current best practices. Some of them will be
available for users to view whenever they want, while others will be available
only via live webinars. The e-learning material will mainly be created by the
paying agencies, and there is a possibility of reusing existing material from
other similar systems.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Metadata such as video format, duration, size, time of views, number of
participants for live webinars will be saved along with the videos and the
presentations in order to enhance the discoverability of the results.
All registered users will be able to discover the e-learning material either
via searching capability or via a dedicated area that will list all the
available sources.
The database and the storage area will not be discoverable to other network
machines operating on the same LAN, VLAN with the DB server or other networks.
Therefore only users with access to the server (RECAP technical team members)
will be able to discover the database and the storage area.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
The e-learning material will only be accessible through the RECAP platform.
All RECAP users will have access to that material.
The database will only be accessible by the authorized technical team.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Videos and PowerPoint presentations will be saved on the RECAP database
server. All data will be transferred via SSL connections to ensure secure
exchange of information.
The system will be backed up daily and the backups will be kept for 3 days.
All backups will be hosted on a remote server to avoid disaster scenarios.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.2.6 CC laws and rules
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Cross-compliance laws and inspection lists with checkpoints will be used both
by inspectors during inspections and by farmers to perform a form of
self-assessment. The lists will be provided by the Paying Agencies in various
formats (Excel, Word) and will be transformed into electronic form.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
All registered users will have access to the laws and the inspection
checklists via the RECAP platform.
Metadata related to the different versions of the checklists and the newest
updates of the laws, along with dates and times will also be saved. Metadata
will help the easy discoverability of the most up to date content.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All content related to CC laws and inspections will be securely saved on the
RECAP database server. All data will be transferred via SSL connections to
ensure secure exchange of information.
The system will be backed up daily and the backups will be kept for 3 days.
All backups will be hosted on a remote server to avoid disaster scenarios.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.2.7 Information extraction and modeling from remotely sensed data
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Collection of Very High Resolution (VHR) satellite imagery and farmer
declarations. Generation of satellite based spectral indices and remote
sensing classification products. Both data sets will be used to establish an
alerting mechanism for breaches of cross-compliance. The products will be used
in WP4.
Processing of open and commercial satellite data for monitoring CAP
implementation is at the core of RECAP.
Data will be available in raster and vector formats, accessible through a
GeoServer application on top of a PostGIS database.
Historical, Landsat-based spectral indices may be used to assist a time-series
analysis.
The origin of the data will be USGS for Landsat ( _http://glovis.usgs.gov/_ )
and ESA for Sentinel, delivered through the Hellenic National Sentinel Data
Mirror Site ( _http://sentinels.space.noa.gr/_ ) . Farmers’ data and VHR
will be provided by the Paying Agencies that participate in the project.
Sentinel-2 scenes are about 4 GB each, while Landsat scenes are around 1 GB
each, both compressed. Assuming 4 pilot cases, and a need to have at least one
image per month on a yearly basis, this accounts for 240 GB of image data in
total (a worked breakdown of this estimate is given after this table).
Indices and classification products will account for an additional
10%, hence a total of roughly 250 GB of data is foreseen to be generated. VHR
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
imagery is of the order of 20 GB in total. Vector data are a few MB in size.
Data and products will be useful for the Paying Agencies, the farmers
themselves and the farmer consultants. They will be ingested by the RECAP
platform and disseminated to project stakeholders, while their usefulness will
be demonstrated during the pilot cases. VHR satellite data will not be
redistributed, and a relevant agreement has been signed to ensure that these
data are used only for the development and demonstration activities of RECAP.
</th> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The image data and the processed products will be available to all
stakeholders through a PostGIS database. Registered users will have unlimited access to
the products for the duration of the project, with the exception of the VHR
satellite data and farmers’ declarations.
Data is stored on the National Observatory of Athens servers and labelled with
the work package, country of origin and the type of data.
GeoServer and PostGIS provide a built-in keyword search tool that will be
used.
INSPIRE metadata will be created for all the EO-based geospatial products that
will be generated in the lifetime of the project.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Spectral Indices and EO-based classification objects will be made available.
Commercial VHR satellite imagery that will be used in the context of the
pilots will be restricted due to the associated restrictions of the satellite
data vendor and the Joint Research Center (JRC). Farmers’ declarations are
considered to be personal data and hence will not be open for reuse.
Data and products will be made accessible through an API on top of a PostgreSQL
database.
No special software is needed: a user can create scripts to access and query
the database and retrieve relevant datasets (see the sketch after this table).
The data and associated metadata will be deposited on NOA’s servers.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
PostGIS and GeoServer are widely used tools for managing geospatial
information. The INSPIRE protocol, the typical standard for geospatial data,
will be used for metadata descriptors.
No standard vocabulary will be used and no ontology mapping is foreseen.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
The PostGIS database that will be created in RECAP will be licensed under the
Open Data Commons Open Database License (ODbL).
The EO-based geospatial products that will be generated in RECAP will be made
available for re-use for the project’s lifetime and beyond. All EO-based
products will remain usable after the end of the project, with the exception
of the VHR satellite imagery.
No particular data quality assurance process is followed, and no relevant
warranties will be provided.
EO-based products will remain re-usable at least two years after the project’s
conclusion.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Costs for maintaining a database of the EO-based products that will be
generated to serve the pilot demonstrations are negligible. Publication fees
(approximately €1000/paper) are however foreseen.
Data is stored on NOA’s servers.
The cost of long-term preservation of the products generated for the pilots is
minimal. However, if this is to scale up and go beyond the demonstration phase,
making data FAIR will incur significant costs. Generating FAIR spectral
indices and EO-based classification products for large geographical regions,
with frequent updates, has potential for cross-fertilization of
different fields (e.g. precision farming, CAP compliance, environmental
monitoring, disaster management).
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
NOA servers are managed by the IT department. They are regularly backed up and
secure.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.2.8 Maps
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
</td>
<td>
</td>
<td>
The following maps have been provided by the pilot countries and will be used
by the RECAP platform in the form of map layers:
Habitat
Natura sites
Nitrate Vulnerable Zones
Botanical Heritage Sites
Watercourse maps
Slope map (or DEM)
Administrative boundaries and settlements
Land Use / Land Cover Maps, as detailed as possible
ILOT and sub-ILOT
LPIS (WMS or SHP)
These maps are needed because useful information regarding compliance with the
rules can be derived from them. None of the maps are produced as part of this
project; as explained, they have been provided to the technical team by the
pilots and will be reused. The file types of the maps differ, but some
indicative types are SHP, SBX, SBN, PRJ, DBF and QPJ, and file sizes vary
widely, from 1 KB to 20 MB (see the loading sketch after this table).
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
All registered users will have access to the above maps and will be able to
identify them by their distinctive names.
Metadata related to the different versions of the maps will be saved; it will
make the most up-to-date content easy to discover.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Maps are saved in standard formats that are commonly used.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<th>
Data security
</th>
<th>
All maps will be saved on the RECAP server. All data will be transferred via
SSL connections to ensure secure exchange of information.
The system will be backed up daily and the backups will be kept for three days.
All backups will be hosted on a remote server to avoid disaster scenarios.
</th> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
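To illustrate how such map layers can be checked before being served by the platform, here is a minimal Python sketch using geopandas; the file names and directory layout are hypothetical.

```python
# Hypothetical sketch: load a pilot-provided shapefile layer, inspect it,
# and reproject it to a common CRS before ingestion as a map layer.
import geopandas as gpd

# A shapefile arrives as .shp plus sidecar files (SBX, SBN, PRJ, DBF, QPJ);
# geopandas reads the whole set through the .shp entry point.
nvz = gpd.read_file("pilot_data/nitrate_vulnerable_zones.shp")

print(nvz.crs)     # coordinate reference system, read from the PRJ sidecar
print(len(nvz))    # number of zone polygons
print(nvz.head())  # preview of the attribute table

# Reproject to WGS84 so all layers share one CRS when served as map layers.
nvz.to_crs(epsg=4326).to_file("prepared/nitrate_vulnerable_zones_wgs84.shp")
```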
### 2.2.9 Examples of BPS applications
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Examples of BPS applications submitted in previous years have been shared with
the technical team. As part of the user journey, the farmers will have to
enter details similar to the ones they entered in the BPS application;
hence such data will support the effective design of the database as well
as the training material for the classifiers of the Remote Sensing Component. The
data have been delivered in Excel sheets by all pilots.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Only the technical team will have access to these data, which will not be used
on the RECAP platform.
No metadata will be produced.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All data are securely saved at the DRAXIS and NOA premises.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
No such data will be shared with anyone outside the consortium.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
## 2.3 DMP Components in WP4 – Deployment and operation (INI)
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
The WP4 data will serve to monitor the effective implementation of the pilots
and provide the necessary feedback to ensure the RECAP platform is a useful
product for the end-users. Previously available data from the pilot partners,
especially with regard to the co-creation task in WP2, will be used. Also,
data from D5.2 “Market assessment report” will be considered for defining the
data to collect in WP4.
In D4.1 “Pilot Plan”, the metadata of WP4, procedures, templates and file
formats for note-taking, recording, transcribing and storing data from
questionnaires and focus group discussions will be developed and agreed. The
main documents used to collect and generate the necessary data will
be: informed consent forms, attendance sheets and minutes of the
meetings/workshops, questionnaires, guidelines for interviews and focus
groups, etc. Mainly, and when possible, online and/or electronic archives will
be used. Semi-structured interviews with
individuals will be collected and stored using digital audio recording (e.g.
MP3) only if the interviewees give their permission. In case they deny,
interview notes will be typed up according to agreed formats and standards.
All transcripts will be in Microsoft Word (*.doc/ *.docx). Partners will be
asked to anonymize the data prior to sending it to WP4 leader.
The origin of the data for WP4, will be mainly from: Partners of the project
Pilot partners
Public national/regional authorities of the Pilot countries
Agricultural consultancy services of pilot countries Farmers from the
different pilot countries
The size of the data that will be collected and generated in WP4 is not known
yet, although written responses are likely to be fairly small in size (<1 GB
for all pilots) and recordings to be larger files (10 \- 20 GB).
Raw data collected in WP4 will be useful for the improvement and validation of
the RECAP platform. Once treated and anonymized, results of the pilots
conducted in WP4 will be made public in D4.3, D4.4 and D4.5. It is foreseeable
that data will be useful for the regional/national authorities of CAP in the
pilot countries, for the agricultural consultancy services and for the farmers
and farmers’ cooperatives.
</th> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The raw data collected in WP4 will not be made publicly available, as it
includes confidential and sensitive personal information.
File names follow the convention Data_<WPno>_<serial number of
dataset>_<dataset title>, e.g. Data_WP4_3_Intermediate Pilot
Evaluation_Spain data (see the helper sketch after this table).
Data will be stored on INI’s servers and labelled with the task name, country
of origin and the type of data. Data will be searchable by country, task name
and data type.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
All raw data collected in WP4 will be for internal use within the project
consortium, as the objective of WP4 is to validate the RECAP platform
developed in WP3. As raw data will contain sensitive personal data, the
databases will not be publicly available.
Data will be stored on INI’s servers and it will be accessible through the
RECAP wiki only by the members of the consortium. The administration of the
RECAP wiki will only be accessible by the Coordinator of RECAP (DRAXIS) and
the databases will be renewed when new data become available.
Raw data will be treated in order to produce D4.3, D4.4 and D4.5, which are
public deliverables.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
The WP4 data will start to be collected and generated in autumn 2017, and all
the specifications and periods of use and re-use will be established in
deliverable D4.1 “Pilot Plan”, to be produced in spring 2017. As
mentioned above, it is not legal to release sensitive personal data such as
the questionnaire and interview responses.
</td> </tr> </table>
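To make the naming convention above concrete, here is a small, hypothetical Python helper; it is an illustration only, not part of the project's tooling.

```python
# Hypothetical helper capturing the convention
# Data_<WPno>_<serial number of dataset>_<dataset title>.
def dataset_name(wp_no: int, serial: int, title: str) -> str:
    return f"Data_WP{wp_no}_{serial}_{title.strip()}"

# Example taken from the plan:
print(dataset_name(4, 3, "Intermediate Pilot Evaluation_Spain data"))
# -> Data_WP4_3_Intermediate Pilot Evaluation_Spain data
```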
<table>
<tr>
<th>
</th>
<th>
Data quality will be assured by asking partners to fill out paper
questionnaires in their own languages. Interviews will be recorded, translated
and transcribed to ensure accurate data recording and translation.
</th> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
The data is collected for internal use in the project, and not intended for
long-term preservation. The data will be preserved and shared with the members
of the consortium through the RECAP wiki. WP4 leader (INI) keeps two daily
incremental backups, one on a separate disk and another one on a remote server
within Spain.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
A letter explaining the purpose, approach and dissemination strategy
(including plans of sharing data) of the pilot phase, and an accompanying
consent form (including sharing data) will be prepared in D4.1 “Pilot plan”
and translated into the relevant languages by the pilot partners. A clear
verbal explanation will also be provided to each interviewee and focus group
participant. Commitments to ensure confidentiality will be maintained by
ensuring recordings will not be publicly available, that transcripts will be
anonymized and details that can be used to identify participants will be
removed from transcripts or concealed in write-ups. Due to the highly-focused
nature of the pilot phase, many participants may be easily identifiable
despite the efforts to ensure anonymity or confidentiality. In such cases,
participants will be shown sections of transcript and/or report text in order
to ensure confidentiality of their interview data.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
WP4 leader (INI) abides by the Spanish regulation in terms of protection of
personal data (Ley Orgánica 15/1999 de 13 de diciembre and Real Decreto
1720/2007 de 21 de diciembre) and undergoes an external audit by a specialized
consultancy (AUDISIP, _www.audisip.com_ ) in order to ensure that internal
procedures of the company follow the regulation. INI has appointed an internal
manager on Data Protection issues, who has put in place the necessary internal
procedures to ensure the company follows the regulation and regularly trains
and reminds INI staff on their obligations in terms of data protection and any
modifications of the regulation.
</td> </tr> </table>
## 2.4 DMP Components in WP5 – Dissemination & Exploitation (ETAM)
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Data collection is necessary for the elaboration of the Dissemination and
Communication Strategy, the establishment and management of the Network of
Interest, the Market assessment and the Business plan. Specifically, they are
necessary for the target-group tracking procedure and for profiling Paying
Agencies, agricultural consultants and farmers’ collective bodies.
Regarding the types and formats of data collected, these are lists of
communication recipients and target groups’ lists in excel files containing
organisations/bodies and their e-mail addresses.
Parts of the lists have been developed in previous projects of the WP leader.
The rest of the data has been developed through desk research. The expected
size of the data is approximately 7,000-10,000 entries.
Regarding the data utility, they are useful to the WP leader for carrying out
communication and dissemination and for the development of the business plan.
</th> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The publicly available deliverables, i.e. the “Communication and dissemination
plan” and the “Market Assessment Report”, facilitate discoverability of the data.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data concerning e-mail addresses will not be openly available, as they are
personal data.
Deliverables publicly posted on the RECAP website will make all respective
data available.
No particular methods or software tools are needed to access the data. Data
are stored on ETAM’s server; deliverables are posted on the RECAP website.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
As noted above, deliverables publicly posted on the RECAP website
will make all respective data available without any restrictions.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Data management responsibilities have been allocated to two members of the WP
project team.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Automated backup of files and no transfer of sensitive data.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
The pilot implementation and utilisation of the RECAP platform requires the
collection and storage of personal data. All data collected are kept secure
and unreachable by unauthorised persons. They are handled with appropriate
confidentiality and technical security, as required by the law in the pilot
countries (Spain, Greece, Lithuania, UK, and Serbia) and EU laws and
recommendations. The Privacy Risk Assessment deliverable was carried out to
guarantee a privacy friendly platform i.e. a secure and safe environment for
collecting, sharing and consulting personal data. The deliverable contains a
chapter referring to the EU legislation. This is followed by a presentation of
the laws and the competent authorities in the pilot countries. There is also a
chapter that deals with the privacy risk assessment definition and
characteristics. The personal data in the RECAP platform are discussed and
finally, risks and mitigation measures are presented in detail. A glossary of
terms at the end of the document provides useful definitions.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
# 3\. Conclusion
The DMP reflects the data management strategy and the procedures that RECAP
will follow, and serves to identify issues and missing information related to
data management that can be further clarified before the submission of the 3rd
DMP. The DMP is not a fixed document; it will be updated once more during the
project lifespan (M30).
# Abbreviations
<table>
<tr>
<th>
API
</th>
<th>
Application Programming Interface
</th> </tr>
<tr>
<td>
BPS
</td>
<td>
Basic Payments Scheme
</td> </tr>
<tr>
<td>
CAP
</td>
<td>
Common Agricultural Policy
</td> </tr>
<tr>
<td>
CC
</td>
<td>
Cross Compliance
</td> </tr>
<tr>
<td>
DEM
</td>
<td>
Digital Elevation Model
</td> </tr>
<tr>
<td>
DMP
</td>
<td>
Data Management Plan
</td> </tr>
<tr>
<td>
EU
</td>
<td>
European Union
</td> </tr>
<tr>
<td>
IP
</td>
<td>
Internet Protocol
</td> </tr>
<tr>
<td>
jpeg
</td>
<td>
Joint Photographic Experts Group
</td> </tr>
<tr>
<td>
mp3
</td>
<td>
Moving Picture Experts Group Audio Layer III
</td> </tr>
<tr>
<td>
LAN
</td>
<td>
Local Area Network
</td> </tr>
<tr>
<td>
LPIS
</td>
<td>
Land Parcel Identification System
</td> </tr>
<tr>
<td>
PDF
</td>
<td>
Portable Document Format
</td> </tr>
<tr>
<td>
SQL
</td>
<td>
Structured Query Language
</td> </tr>
<tr>
<td>
SSL
</td>
<td>
Secure Sockets Layer
</td> </tr>
<tr>
<td>
VLAN
</td>
<td>
Virtual LAN
</td> </tr>
<tr>
<td>
WMS
</td>
<td>
Web Map Service
</td> </tr>
<tr>
<td>
XML
</td>
<td>
Extensible Markup Language
</td> </tr> </table>
Additional requests are recorded comprehensibly in the internal ticketing
system.
Remote access is possible through a secure VPN access solution and two factor
authentication. Project specific: If required, it is possible to monitor the
changes of data (along with the reason of change provided by the user) by
adding version control to the file repositories. Access rights are managed
within the lifecycle of the project.
# Risk management
AIT focuses on two aspects of risk management initiatives – user awareness and
preventive technology.
With regard to user awareness, several guidelines are defined to support users
in handling data. In addition, users are frequently informed about new
developments and threats.
With regard to preventive technology there are several security measures in
place to ensure data protection from unauthorized access.
# Secure Access and Transfer
Every connection to a data-processing system from a remote location is done by
certificate-based authentication and strong encryption methods (e.g. for the
“Online Document Sharing Service”). Secure and safe data transfer is managed
through SSL/SSH-based protocols (e.g. HTTPS, FTPS, SFTP, SCP) or a virtual private
network.
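As a concrete illustration of such a transfer, the sketch below uploads a file over SFTP with key-based authentication using the paramiko library; the host name, user name and paths are hypothetical.

```python
# Hypothetical sketch of a key-authenticated SFTP upload with paramiko.
import paramiko

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.connect(
    "data.ait.example.org",
    username="project_user",
    key_filename="/home/project_user/.ssh/id_ed25519",  # key-based, no password
)

sftp = ssh.open_sftp()
sftp.put("results/run_2016_03.xlsx", "/projects/data/run_2016_03.xlsx")
sftp.close()
ssh.close()
```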
**Storage and backup (short and mid-term):**
# Types of data
The available storage space on the file storage is separated by location and
department. Permission for access is claimed comprehensibly over the internal
ticketing system. The permission management is organized with active directory
groups by the central IT.
Specific project initiatives can be handled with higher restrictions or with a
differentiated support.
# System related backups
Backup of virtual machines, backup of local client data, backup of Linux
systems and full image client backups are designed for disaster recovery, not
for mid-term preservation. On request it is possible to preserve a system for
mid or long term.
# General backup procedure
Company data is stored for 10 years after the project end, because, according
to the internal AIT-quality management specifications, project data must be
kept for this period.
The central backup procedure in overview (summarised in the sketch at the end
of this subsection):
File-Service: on a regular basis, data is secured on LTO tapes:
Daily: differential backup, where the LTO tapes are overwritten weekly.
Every Friday: full backup, where the LTO tapes are overwritten each month.
Exception: No overwrite of the last full backup in a month is made. This LTO
tape is stored securely in a data safe (security class EN 1047-1) with
restricted access.
Exchange-Service: daily full backup to disk.
Virtual machine: daily backup to disk of the whole central managed virtual
infrastructure, with an available restore period of the last 7 days. Longer-
term backups have to be requested separately.
SharePoint: daily differential backups and a weekly full backup to a file
share, which is kept for 30 days
FTP: no backup needed because it is only used for data exchange
OneDrive for Business: managed, externally hosted (EU) cloud storage solution
for each user for data exchange and project activities with a guarantee of
high availability. Therefore, there exists no centrally managed internal
backup strategy.
Data backup and recovery of the central infrastructure is the responsibility
of the central IT.
Decentralized initiatives can be handled with higher restrictions or with a
differentiated support.
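For illustration only, the file-service tape rotation described above can be summarised as a small scheduling function; this is an assumed reading of the rotation rules, not AIT's actual backup tooling.

```python
# Hypothetical sketch of the file-service tape rotation: weekday
# differentials overwritten weekly, Friday fulls overwritten monthly,
# and the last full backup of each month kept in the data safe.
import datetime


def backup_plan(day: datetime.date) -> dict:
    is_friday = day.weekday() == 4
    # Last Friday of the month: the next Friday falls in a different month.
    last_friday = is_friday and (day + datetime.timedelta(days=7)).month != day.month
    if last_friday:
        return {"type": "full", "tape": "monthly archive", "overwrite": "never (data safe)"}
    if is_friday:
        week = (day.day - 1) // 7 + 1
        return {"type": "full", "tape": f"full-week{week}", "overwrite": "monthly"}
    return {"type": "differential", "tape": day.strftime("%A").lower(), "overwrite": "weekly"}


# Print one week of the rotation for inspection.
for offset in range(7):
    d = datetime.date(2016, 1, 25) + datetime.timedelta(days=offset)
    print(d, backup_plan(d))
```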
**Archiving and preservation (long term):**
According to the internal AIT quality management specifications, project data
must be stored for 10 years after the end of the project (see above: general
backup procedure). For this period, AIT can guarantee the availability and the
restricted access to stored data for eligible persons. Besides the standard
backup procedure, AIT has no further dedicated central data archiving system.
Because of that, AIT cannot guarantee the unchangeability of stored data. If
necessary, data archiving needs to be executed through decentralized
initiatives.
# 2.1.2 Albert-Ludwigs-Universität Freiburg (ALU-FR)
**MARA data manager:**
Sonja-Verena Albers
**Available / necessary resources:**
We have person specific accounts in which the raw data are saved. These are
backed up at the IT centre of the university. Moreover, we have a NAS (network
attached storage) system in the lab where we make a second copy of the raw
data. For the recording of all experimental procedures, we use an electronic
lab journal at _http://www.labguru.com/_ .
**Data access and security:**
All the raw data are saved within accounts that are password protected and
also the individual Labguru accounts are password protected. As the lab head,
the MARA data manager, Sonja-Verena Albers, has access to all information
saved on labguru.
**Storage and backup (short and mid-term):**
As described above, the people initially save their raw data in their personal
accounts. These drives are backed-up daily on a server at the IT centre of the
university and in regular intervals to our own NAS system. Moreover,
experimental procedures and results are saved in the personal Labguru
accounts.
As we produce mainly photo files from DNA electrophoresis gels or SDS PAGE
gels, it is not expected that space will be limiting. Only movie files
recorded on our microscope can be larger, but these are stored at the
microscopy computer and also on our NAS system.
Our data can always be retrieved via the IT centre of the university, which
has a security and retrieval procedure at hand.
Experiments documented in Labguru are finished by the executing
person and then witnessed by the MARA data manager. These experiments then
have time stamps and cannot be changed anymore. All the data from Labguru are
also saved monthly as a PDF file which is also saved in Sonja-Verena Albers’s
account and on the NAS system.
**Archiving and preservation (long term):**
See above. Once a student leaves, the data are transferred to an account
belonging to Sonja-Verena Albers. And this again is backed up every day as
described above. The Labguru data are always accessible to the MARA data
manager at ALU-FR.
# 2.1.3 Imperial College London (IMPERIAL)
**MARA data manager:**
Morgan Beeby
**Available / necessary resources:**
We currently have a dedicated ~55 TB RAID6 server running Linux Mint 17.2 for
primary data storage. Backup is provided by nightly mirroring this to a larger
server running similar infrastructure in a separate building on campus. Rsync
is used to provide nightly snapshots using crontabs, enabling storage of
different versions of files over a time period. Logs are emailed to the data
manager nightly to verify that backups complete. Use of RAID6 means double
redundancy of hard drives, reducing the likelihood of RAID array failure to a
negligible level.
It is likely that we will need to expand this in due course, but currently we
project that this will be satisfactory for the immediate future. Expansion
plans will be to purchase additional server pairs as described.
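A minimal sketch of the nightly rsync snapshot technique described above is shown below; the paths, host name and the `latest` snapshot pointer are hypothetical, and the real setup would run from cron.

```python
# Hypothetical sketch of nightly hard-link snapshots with rsync --link-dest.
# Unchanged files are hard-linked against the previous snapshot, so every
# nightly snapshot is complete yet costs little extra space.
import datetime
import subprocess

SRC = "/data/primary/"                  # RAID6 primary server (hypothetical path)
DEST = "mirror.example.ac.uk:/backup"   # second server in another building

today = datetime.date.today().isoformat()
result = subprocess.run(
    [
        "rsync", "-a", "--delete",
        "--link-dest=/backup/latest",   # assumes a 'latest' symlink is maintained
        SRC,
        f"{DEST}/{today}/",
    ],
    capture_output=True,
    text=True,
)

# In the real setup the log would be emailed nightly so the data manager
# can confirm the backup completed.
print(result.stdout or result.stderr)
```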
**Data access and security:**
The MARA data manager, Morgan Beeby, is the only person with superuser access.
As such, he is the only person capable of deleting data. Deleted and altered
files are nevertheless recoverable via our mirroring backup system. We do not
anticipate collaborators having direct access to data on our filesystems and
will rather – if necessary – provide copies of pertinent data to
collaborators.
Servers are housed in dedicated server rooms with restricted swipecard access.
If data is collected at electron microscopy facilities off-campus, data will
be transferred via hard drive by courier (the de facto standard of the field).
The facilities will retain copies of data until confirmed receipt at ICL.
Recovery will be performed manually if/when needed.
**Storage and backup (short and mid-term):**
Described under “Available / necessary resources”. **Archiving and
preservation (long term):**
Data is treated as “permanent”. We anticipate pruning datasets to only
relevant and published data upon attaining appropriate milestones (i.e.,
publication) to avoid storage of irrelevant data.
# 2.1.4 Apta Biosciences Ltd (APTA)
**MARA data manager:**
Yap Sook Peng
**Available / necessary resources:**
Apta has engaged a third-party IT vendor (Cordeos Pte Ltd) for all our IT
support, which includes software, hardware and the backup system. Additional
resources are required from the IT vendor for MARA project data backup,
namely additional backup tapes (1 TB each).
**Data access and security:**
All data for the MARA project will be stored in the Shared Drive, a storage
device on a local access network (LAN) of Apta’s server. An exclusive project
folder will be created for MARA project. The MARA project folder will only be
accessible to approved personnel and project team members who need access to
complete their tasks. The access control is set as (1) No Access, (2) Access
with Read only, (3) Access with Read, Write and Delete.
For the MARA project folder, non-project members with a need to access the
data will have read only permission.
Sensitive data is password protected. Permissions to other files are set by
the data manager.
The data is in support of potential new products for commercialisation. Any
leak of the data will affect the commercial potential for any product coming
from MARA data.
Security measures have been put in place to reduce the risk of data leaks.
Control Read, Write and Delete access measures have been put in place.
Collaborators will have read only access to the data except in certain
circumstances where there is a need for them to use the data in its original
format.
Only the people with Read, Write and Delete access are permitted to add data.
Project members will have Read, Write and Delete access to the project folder.
**Storage and backup (short and mid-term):**
MARA Project Folders will be stored in shared drive and backed up daily. The
data is backed up externally in a physical tape format for its affordability,
reliability and portability. The third party IT Vendor will be responsible for
data backup and recovery. In the event of an incident, the latest data
recovery will be the night before at 11pm.
For non-electronic data, e.g. lab notebooks, HPLC spectra, etc., the data will
be scanned and converted to electronic PDF files. The scanning of lab
notebooks will be done at quarterly basis, and the original data will be
archived and stored for at least 5 years after the MARA project ends.
Additional tapes (1TB each) will be purchased for MARA project data. The
additional tapes come with additional cost, which is ~600 Euros per tape.
IT Vendor (Cordeos Pte Ltd) will be responsible for data backup and recovery.
Data up to the night before (11pm) will be readily available if there is an
incident in the laboratory or office.
**Archiving and preservation (long term):**
The data will be backed up to a physical tape on a daily basis until the tape
is full, and each tape will be labelled with the period it covers, e.g.
01 Jan 2016 – 28 April 2016 (MARA Project Tape 01). The backed-up tapes will
be archived and stored at an offsite location, away from both the IT provider
and the Apta laboratory site, for long-term preservation of at least 5 years
from the date of project closure.
The approximate final volume of data to be generated from the MARA project is
about 60 GB. However, as the data will be backed up daily to tape without
overwriting previous data, it is difficult at this point to predict the
accumulated volume saved and backed up over the entire course of the work.
Storage location and access control are as described under “Data access and
security” above. Apta will rely on the archiving and preservation
capabilities provided by our IT Vendor.
# 2.1.5 Aarhus Universitet (AU)
**MARA data manager:**
Jacob Lauwring Andersen
**Available / necessary resources:**
An electronic lab book service ( _www.labwiki.au.dk_ ) is available and backed
up daily. It will be used for data management.
**Data access and security:**
Aarhus University has a thorough information security policy dedicated to
protect Aarhus University's information and, in particular, to ensure that the
confidentiality, integrity and availability of critical and sensitive
information and information assets are retained.
**Storage and backup (short and mid-term):**
Data is stored for 10 years after deposition and backup procedures run on a
daily basis. Aarhus University will provide sufficient storage for the
project. Data can be recovered from backup on an hourly basis and deposited
data can be recovered on a daily basis.
**Archiving and preservation (long term):**
Once deposited, data is stored for a minimum of 10 years at Aarhus University.
Protein structures will be deposited and stored in the Protein Data Bank
( _www.pdb.org_ ) .
# _2.2 Data set descriptions_
Dissemination levels within MARA:
PU = public
CO = confidential, only for members of the consortium and the involved EC
services
## 2.2.1 MARA-AIT-001
**Data set reference / name / creator:**
Reference: MARA-AIT-001
Name: DNA and oligonucleotide sequence data
Created by: AIT – Ivan Barisic, Yasaman Ahmadi, and Regina Soldo
**Data set description:**
Data format: electronic: XLSX, GB (GenBank flat file)
Software used for data generation: Excel, Cadnano, etc.
Hardware used for data generation: IonTorrent PGM
Typical file size (for electronic data): Kilobytes
Approximate amount of data: Megabytes
Short description of the data set: DNA sequence data is a letter code in
which each letter corresponds to a nucleotide: A (adenine), C (cytosine),
G (guanine) or T (thymine). The sequence data will be used to synthesize
DNA. Some sequences will be published within scientific publications.
**Standards and metadata:**
Sequence data obtained from the IonTorrent PGM will be saved in the GenBank
Flat File format.
Oligonucleotide sequences will be saved together with their corresponding
name, length, target, DNA and, if applicable, the origami structure.
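For downstream users, records saved in the GenBank flat file format can be read with standard tools; the following minimal Python sketch uses Biopython, with a hypothetical file name.

```python
# Hypothetical sketch: iterate over records in a GenBank flat file
# using Biopython's SeqIO parser.
from Bio import SeqIO

for record in SeqIO.parse("sequences.gb", "genbank"):
    # Each record carries the nucleotide letter code plus annotations
    # such as the sequence name, length and description.
    print(record.id, len(record.seq), record.description)
```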
**Data sharing:**
Dissemination level: CO
Embargo period: Until publication/patent
Repository/repositories planned for upload: Published sequences will be made
available via Pubmed and/or NCBI Genbank
Further details on data sharing: The data will be accessible and shared within
the AIT business unit Molecular Medicine.
Explanation why CO data cannot be made public: The data cannot be shared due
to intellectual property and commercial issues.
**Data access and security:**
Described in the general part.
**Storage and backup (short and mid-term, during the project):**
Described in the general part.
**Archiving and preservation (long term, after the project):**
Described in the general part.
## 2.2.2 MARA-AIT-002
**Data set reference / name / creator:**
Reference: MARA-AIT-002
Name: Source code for software
Created by: AIT – Stephan Pabinger
**Data set description:**
Data format: electronic - various source code files (.python, .cpp, .c, .h …)
Software used for data generation: IDEs (integrated development environments)
Hardware used for data generation: PCs
Typical file size (for electronic data): Kilobytes to Megabytes (including
test files)
Approximate amount of data: Megabytes to Gigabytes
Short description of the data set: During the project, several source code
files will be generated to develop new tools and integrate functionality into
existing tools. Software will be made available within scientific
publications.
**Standards and metadata:**
Documentation of the source code will be either created directly in the source
file or separately in an additional document (metadata). Manuals of the
software will be stored in the repository system. If applicable, standardised
input and output formats will be used, depending on the design of the software.
**Data sharing:**
Dissemination level: CO
Embargo period: none
Repository/repositories planned for upload: The source code will be stored in
a distributed revision control system that will be hosted at the AIT.
Further details on data sharing: One central repository will be used for
merging and housing the different branches of the software source files. In
addition, each contributor will be able to keep their own versions of
the software in their own repository.
Explanation why CO data cannot be made public: The data cannot be shared due
to intellectual property and commercial issues.
**Data access and security:**
In addition to the description in the general part, access to the data will be
given on a per-user basis. The repository will be hosted within the AIT network.
**Storage and backup (short and mid-term, during the project):**
Described in the general part.
**Archiving and preservation (long term, after the project):**
Described in the general part.
## 2.2.3 MARA-ALU-FR-001
**Data set reference / name / creator:**
Reference: MARA-ALU-FR-001
Name: Electrophoresis image data
Created by: ALU-FR – Patrick Tripp, Lena Hoffmann
**Data set description:**
Data format: TIFF, JPEG, EPS
Software used for data generation: Imaging software (Quantity One (Biorad),
Chemostar Imager (Intas))
Hardware used for data generation: Biorad Imaging system and Intas Imaging
system
Typical file size (for electronic data): Kilobytes
Approximate amount of data: For electronic data: in order of gigabytes
Short description of the data set: The images recorded show the results of DNA
electrophoresis or protein electrophoresis experiments. In our Labguru account
they are linked to the specific experiment where a detailed description exists
of the experimental procedure.
**Standards and metadata:**
No standards and metadata exist for these data.
**Data sharing:**
Dissemination level: PU
Embargo period: Until published
Repository/repositories planned for upload: Publishers’ repositories
Further details on data sharing: Publishing in “Open access” journals
**Data access and security:**
Please see general part.
**Storage and backup (short and mid-term, during the project):**
Please see general part.
**Archiving and preservation (long term, after the project):**
Please see general part.
## 2.2.4 MARA-ICL-001
**Data set reference / name / creator:**
Reference: MARA-ICL-001
Name: Electron cryo-tomographic imaging data
Created by: IMPERIAL – Morgan Beeby, Amanda Wilson
**Data set description:**
Data format: Electronic: MRC files of tomograms
Software used for data generation: IMOD, Tomo3D, RAPTOR, PEET, Relion
Hardware used for data generation: ICT FEI F20 electron cryo-microscope;
possible use of off-campus electron cryo-microscopes.
Typical file size (for electronic data): 3 GB
Approximate amount of data: hundreds of datasets amounting to tens of
terabytes of data.
Short description of the data set: Data will be 3D tomograms generated by
electron cryo-tomography. Data will be useful to MARA participants and the
general scientific community interested in electron cryo-
microscopy and archaellar motors. Published electron cryo-microscopy data will
be archived at publicly accessible EMPIAR and EMDB databases for raw and
processed data, respectively.
**Standards and metadata:**
Data will be stored in de facto standard MRC file formats. Metadata is
required to be stored in the lab database with additional information recorded
by users at the time of data collection. Metadata is stored in a backed-up
MySQL database which is dumped nightly as a text file backup.
**Data sharing:**
Dissemination level: PU + CO. PU - Public: published data will be archived at
publicly accessible EMPIAR and EMDB databases for raw and processed data,
respectively. CO - Confidential: data in the process of being interpreted and
pre-publication.
Embargo period: Data will be made publicly available at the time of
publication.
Repository/repositories planned for upload: Published electron cryo-microscopy
data will be archived at publicly accessible EMPIAR and EMDB databases for raw
and processed data, respectively. Empiar:
_https://www.ebi.ac.uk/pdbe/emdb/empiar/_ EMDB:
_https://www.ebi.ac.uk/pdbe/emdb/_
Further details on data sharing: Data will be freely available via publicly
accessible databases listed above. Necessary software for viewing is all
publicly and freely available (primarily IMOD and UCSF Chimera). Published
data will be fully open.
Explanation why CO data cannot be made public: Confidential data is data that
is still in the process of interpretation pre-publication.
**Data access and security:**
Please see General DMP section.
**Storage and backup (short and mid-term, during the project):**
Please see General DMP section.
**Archiving and preservation (long term, after the project):**
Please see General DMP section.
## 2.2.5 MARA-APTA-001
**Data set reference / name / creator:**
Reference: MARA-APTA-001
Name: Seligo sequence data
Created by: APTA – Yap Sook Peng, Yau Yin Hoe
**Data set description:**
Data format: Final data files (gel images, Seligo sequences), technical
reports and completed laboratory notebook scanned copies will be in PDF
format. Data files still being accessed will be stored in the appropriate
format, e.g. Excel.
Software used for data generation: Microsoft Excel, BioEdit, FinchTV, Nanodrop
2000/2000c version 1.4.2, CFX Manager Software.
Hardware used for data generation: Gel imager (BioRad), Sanger sequencer (ABI
3730xl platform), NanoDrop 2000 Spectrophotometer, CFX Connect.
Typical file size (for electronic data):
1 raw image data file from GelDoc EZ is 3MB.
1 raw data file from RT-PCR is <0.1MB
1 raw data file from Nanodrop is 0.4 MB, screenshot 0.6 MB
1 PDF Bioanalyzer report is 2 MB on average
1 raw data file for a DNA sample sequence is ~500 KB per sample, 40 MB per
96-well plate
Approximate amount of data:
~1GB per protein target
~20GB for all data (against 20 bacterial pathogen targets) Short description
of the data set:
One of the main data sets to be generated from the development of Seligo is the
sequences of Seligo binders against the 20 most important bacterial pathogens.
The data will be useful to commercial competitors seeking to develop similar
products. Additionally, the Seligo sequences are confidential information and
could be used by competitors to rapidly replicate the MARA work.
**Standards and metadata:**
No existing standards for reference.
The bulk of the data generated from Development of Seligo will be the sequence
data of the selected Seligo candidates. From each of the selection rounds,
fewer than 800 Seligo sequences (94-mer each) will be generated. Sanger
sequencing methodology, instead of Next Generation Sequencing (NGS), will be
used to identify the sequences of the Seligos identified in the selection
process. Hence, no metadata will be created, but approximately 400-800
Seligo sequences per selection will be analysed and stored in Excel file format.
**Data sharing:**
Dissemination level: CO
Embargo period: 3 years, for IP reasons
Repository/repositories planned for upload: MARA data will be stored in a
separate folder within the shared drive in Apta. A Sharepoint created for MARA
members will be used for data sharing within the consortium. Further details
on data sharing: The data which supports the public dissemination of the MARA
result will be made public.
Explanation why CO data cannot be made public: Datasets will not be made
public where there are intellectual property and/or commercial reasons. All
datasets included in support of publications and presentations will
be made public.
**Data access and security:**
Described in the general part.
**Storage and backup (short and mid-term, during the project):**
Described in the general part.
**Archiving and preservation (long term, after the project):**
Described in the general part.
## 2.2.6 MARA-APTA-002
**Data set reference / name / creator:**
Reference: MARA-APTA-002
Name: AUDENA design and development
Created by: APTA – Yap Sook Peng, Yau Yin Hoe, Shuji Ikeda
**Data set description:**
Data format: Final data files (NMR spectra, Mass Spectra, HPLC charts, gel
images, Seligo sequences, BIAcore binding interaction data), technical reports
and completed laboratory notebook scanned copies will be in PDF format. Data
files still being accessed will be stored in the appropriate format, e.g.
Excel.
Software used for data generation: Microsoft Excel, BioEdit, FinchTV, BIAcore
3000 Control Software version 4.0.1, BIAevaluation version 4.0.1, Nanodrop
2000/2000c version 1.4.2, CFX Manager Software, Unicorn 2.0, DNA_H8_F2.
Hardware used for data generation: NMR, ESI-MS, HPLC, gel imager (BioRad),
Sanger sequencer (ABI 3730xl platform), BIAcore 3000 machine, NanoDrop 2000
Spectrophotometer, CFX Connect, NTS DNA synthesizers, AKTA Purifier, AKTA
Explorer and PC.
Typical file size (for electronic data):
1 raw image data file from GelDoc EZ is 3MB.
1 raw data file from RT-PCR is <0.1MB
1 raw data file from Nanodrop is 0.4MB, Screen shot 0.6MB
1 raw log data file from NTS synthesizer is 7 KB per synthesizer column
1 PDF processed report for NMR or MS from NUS is 0.1 MB on average
1 PDF Bioanalyzer report is 2 MB on average
1 raw data file for a DNA sample sequence is ~500 KB per sample, 40 MB per
96-well plate
Approximate amount of data: ~3 GB
Short description of the data set: One of the main goals of the MARA project
is the development of AUDENA. The data set to be generated from the design and
development of AUDENA may comprise, but is not limited to, the sequences of
Seligo binders against the bacterial pathogens, the new Seligo random library,
AUDENA design and development data, the synthesis methodology of new
Seligo/AUDENA designs, HPLC purification methods for Seligos, and the
manufacturing of Seligos.
The data will be useful to commercial competitors seeking to develop similar
products. Additionally, AUDENA design and protocols are confidential company
know-how and could be used by competitors to rapidly replicate the MARA work.
**Standards and metadata:**
No existing standards for reference.
The bulk of the data generated from AUDENA design and development may comprise
the test and validation data of AUDENA ideas and designs, e.g. the form of the
Seligo random library, G-quadruplex validation, etc. No
metadata will be created from this work.
**Data sharing:**
Dissemination level: CO
Embargo period: 3 years, for IP reasons
Repository/repositories planned for upload: MARA data will be stored in a
separate folder within the shared drive in Apta. We have yet to decide
which repository will be used for external data storage/sharing. Further details
on data sharing: The data which supports the public dissemination of the MARA
result will be made public.
Explanation why CO data cannot be made public: Datasets will not be made
public where there are intellectual property and/or commercial reasons. All
datasets included in support of publications and presentations will
be made public.
**Data access and security:**
Described in the general part.
**Storage and backup (short and mid-term, during the project):**
Described in the general part.
**Archiving and preservation (long term, after the project):**
Described in the general part.
## 2.2.7 MARA-APTA-003
**Data set reference / name / creator:**
Reference: MARA-APTA-003
Name: Seligo manufacturing and purification.
Created by: APTA – Yap Sook Peng, Jeremiah Decosta
**Data set description:**
Data format: Final data files (NMR spectra, Mass Spectra, HPLC charts, gel
images, Seligo sequences, BIAcore binding interaction data), technical reports
and completed laboratory notebook scanned copies will be in PDF format. Data
files still being accessed will be stored in the appropriate format, e.g.
Excel.
Software used for data generation: Microsoft Excel, BIAcore 3000 Control
Software version 4.0.1, BIAevaluation version 4.0.1, Nanodrop 2000/2000c
version 1.4.2, Unicorn 2.0, DNA_H8_F2.
Hardware used for data generation: NMR, ESI-MS, HPLC, gel imager (BioRad),
BIAcore 3000 machine, NanoDrop 2000 Spectrophotometer, NTS DNA synthesizers,
AKTA Purifier, AKTA Explorer and PC.
Typical file size (for electronic data):
1 raw image data file from GelDoc EZ is 3MB.
1 raw data file from Nanodrop is 0.4 MB, screenshot 0.6 MB
1 raw log data file from NTS synthesizer is 7 KB per synthesizer column
1 PDF processed report for NMR or MS from NUS is 0.1 MB on average
Approximate amount of data: several GB per Seligo
Short description of the data set: The main data to be generated from the
manufacturing and purification of Seligos may comprise, but is not limited to,
the amidite and Seligo synthesis data, HPLC purification data for amidites
and Seligos, and quality control data for amidites and Seligos. The
data will be useful to commercial competitors seeking to develop and
manufacture similar products. Additionally, manufacturing protocols are
confidential company know-how and could be used by competitors to rapidly
replicate the MARA work.
**Standards and metadata:**
No existing standards for reference.
The bulk of the data generated from Seligo manufacturing and purification will
be the protocols and data for synthesis, purification and quality control of
amidites and Seligos. No metadata will be created.
**Data sharing:**
Dissemination level: CO
Embargo period: 3 years, for IP reasons
Repository/repositories planned for upload: MARA data will be stored in a
separate folder within the shared drive in Apta. We have yet to decide
which repository will be used for external data storage/sharing. Further details
on data sharing: The data which supports the public dissemination of the MARA
result will be made public.
Explanation why CO data cannot be made public: Datasets will not be made
public where there are intellectual property and/or commercial reasons. All
datasets included in support of publications and presentations will
be made public.
**Data access and security:**
Described in the general part.
**Storage and backup (short and mid-term, during the project):**
Described in the general part.
**Archiving and preservation (long term, after the project):**
Described in the general part.
## 2.2.8 MARA-AU-001
**Data set reference / name / creator:**
Reference: MARA-AU-001
Name: MARA-AU-001
Created by: AU – Jacob Lauwring Andersen
**Data set description:**
Data format: PDF, electronic lab book (www.labwiki.au.dk).
Software used for data generation: Word and Adobe.
Hardware used for data generation: Äkta purifier.
Typical file size (for electronic data): Megabytes
Approximate amount of data: megabytes
Short description of the data set: Expression and purification protocols for
proteins purified for the MARA project.
**Standards and metadata:**
Standard material and methods section for protein purification publication,
including further details on expression and purification experiments.
**Data sharing:**
Dissemination level: CO
Embargo period: Not determined yet. Depending on interests.
Repository/repositories planned for upload: Electronic lab book system (
_www.labwiki.au.dk_ ) .
Further details on data sharing: Publication and sharing at meetings.
Explanation why CO data cannot be made public: IPR reasons, depending on the
commercial interests of the MARA project partners.
**Data access and security:**
See general procedures. In general, the expression and purification data are
not very sensitive and should be shared early. All MARA members will have
password-protected access to the data.
**Storage and backup (short and mid-term, during the project):**
See general procedures.
**Archiving and preservation (long term, after the project):**
Data is stored in the electronic lab book system and stored 10 years after
deposition to the Aarhus University archiving service.
## 2.2.9 MARA-AU-002
**Data set reference / name / creator:**
Reference: MARA-AU-002
Name: MARA-AU-002
Created by: AU – Jacob Lauwring Andersen
**Data set description:**
Data format: PDF, electronic lab book and protein data bank files.
Software used for data generation: Word, the CCP4 software suite, Phenix, PyMOL.
Hardware used for data generation: Synchrotron radiation.
Typical file size (for electronic data): Gigabytes
Approximate amount of data: Gigabytes
Short description of the data set: Diffraction data and refined protein
structures of proteins relevant to the MARA project.
**Standards and metadata:**
File format: .pdb for the final refined structure. .cbf for diffraction
images.
**Data sharing:**
Dissemination level: CO
Embargo period: Not determined yet. Depending on interests.
Repository/repositories planned for upload: The protein Data Bank (
_www.pdb.org_ ) .
Further details on data sharing: Publication and sharing at meetings.
Explanation why CO data cannot be made public: IPR reasons, depending on the
commercial interests of the MARA project partners.
**Data access and security:**
See general procedures. In general the protein structures are sensitive and
should only be shared when all commercial interests are protected.
**Storage and backup (short and mid-term, during the project):**
See general procedures.
**Archiving and preservation (long term, after the project):**
Diffraction data is stored 10 years after deposition to the Aarhus University
archiving service.
# _2.3 Data exchange within the MARA consortium_
Within the MARA consortium, data can be exchanged using a password-protected
Microsoft SharePoint system that is only accessible by registered project
members. This system has a very flexible design to tailor different sections
to the specific needs of the project. The system is accessible via a public
web-address ( _https://portal.ait.ac.at/sites/mara_ ) , which is also linked
from the official MARA website ( _http://maraproject.eu_ ) . Included in
the exchange system is a document centre where reports, documents, templates,
etc. can be centrally hosted and shared. This allows project members to access
current versions of reference documents and guidelines. User access rights to
the exchange system are managed by Stephan Pabinger (AIT). All data exceeding
100MB will only be stored temporarily on the system. After successful sharing,
it will be transferred into the respective storage systems of the individual
partners (see 2.1). As described previously, daily differential backup and a
weekly full backup to a file share will be performed for the data stored in
the exchange system.
# Conclusion
As the EC acknowledged in their “ _Guidelines on Data Management in Horizon
2020_ ” , a DMP is not a fixed document, but evolves during the lifespan of
the project. Several MARA project team members have already announced that
they will report additional data sets during the course of the project. The
DMP will be updated accordingly during the project lifetime.
The information collected within the consortium for this initial version of
the DMP also revealed an aspect of which we were not fully aware when writing
the MARA proposal: although MARA has declared itself to be part of the “open
data pilot” and the MARA consortium remains committed to giving the general
public access to its research data, most data
will have to be kept confidential until the related IPR is secured. Thus, at
the current time, hardly any research data within MARA are labelled as PU
(“public”). As soon as IPR is secured, we will change the status of data sets
from CO (“confidential, only for members of the consortium and the involved EC
services”) to PU, provided there are no other reasons for confidentiality (as
outlined in the MARA grant agreement). At this time, we will also choose the
appropriate repositories and document them in the DMP.
The DMP itself has been declared as a PU document by the MARA consortium and
will be made accessible to the public via the MARA web page.
# Objectives
The objective of the Data Management Plan (DMP) is to provide an analysis of
the main elements of the data management policy that will be used by the
applicants with regard to the datasets that will be generated by the project.
The DMP is a new important element in Horizon 2020 projects and describes what
data the project will generate, whether and how it will be exploited or made
accessible for verification and re-use, and how it will be curated and
preserved.
SWIMing is a Coordination & Support Action (CSA) and does not actively
conduct research on topics related to Energy-efficient Buildings (EeB) and the use of
Linked Building Data (LBD). The aim of SWIMing is not to generate new data,
but to review the types of domains, use cases and data modelling that EeB
projects are addressing and identifying how Building Information Modelling -
Linked Data (BIM-LD) technologies can support the exploitation of project
results. Nonetheless, SWIMing will generate data in the form of business use
cases, guidelines and best practices. This data should be publicly available,
comparable, correct, up-to date, complete and compelling and ideally
maintained by an active and neutral EeB community. A specific challenge of
SWIMing is to extract and harmonize relevant data from very different project
resources like project websites, deliverables, publications, tools and
feedback from project partners. Such neutral knowledge base will foster reuse
of project results and better collaboration, and help in the process of
identifying common data requirements, which can benefit from the application
of Linked Open Data (LOD) technologies.
This deliverable shows the approach that has been chosen by the SWIMing
project to deal with expected project results. It first clarifies the types of
managed data and the methodology used to collect and harmonize that data, and
then explains the way in which SWIMing is dealing with the challenges of data
management and publication as set out in the Horizon 2020 Data Management
guidelines [3] and the W3C guidelines and best practices for managing data on
the web [4]. It should be noted that the types of data SWIMing will generate
do not necessarily subscribe to all the recommendations set down by the EC and
W3C, but we address each guideline with respect to the data regardless.
# Types of Managed Data in SWIMing
The SWIMing project, as a CSA, will collect data in the form of relevant
business use cases in and around the different Building Life Cycle Energy
Management (BLCEM) stages and Building Information Modelling (BIM)
requirements for these use cases. It may also generate new business use cases
which can benefit from the application of
BIM-LD during the course of analyzing projects and liaising with academic,
industrial and governmental bodies. The project will also provide a set of
guidelines and best practices for generating free interlinked, and
semantically interoperable BIM resources for meeting current and future
application requirements within the BLC, uncovered during the analysis of the
business use cases. It will therefore generate a set of guidelines and best
practices for:
1. Standardization of project outcomes through shared linked data vocabularies. Examples of these are: building system control data model, data models for communication between the building and the wider ‘smart grid’, models for describing new energy saving materials and devices, models of devices and sensors in terms of costs, energy ratings and their capabilities, models for describing occupant behavior and comfort, etc.
2. Minimizing time, cost and resources employed in integrating (reformatting, interlinking) existing EeB project outcomes into the BIM-LD cloud;
3. Generating and exploiting these BIM-LD outcomes to meet new and future application requirements;
4. Identifying and developing LD-based applications for frequent and common BIM related tasks.
The set of guidelines and best practices will be created and updated in each
iteration and put at the disposal of the Steering Board, which was created in
WP4 and consists of the project partners as well as the W3C LBD community
members, to allow them to contribute further resources and use cases. Our
purpose is to guide the transformation of such resources in a way that allows
for their reuse and interoperation across the BLC and on the Web, by
following Open Data standards.
The W3C community portal and wiki will be the main port of call for any
community member to contribute to the development of the business use cases.
Here they will also be able to contribute to the classification and
categorization of stakeholders and data domains. They will be encouraged to
share the data models and open data sets they use with the wider community.
The types of data generated on the wiki will therefore be use case
descriptions, guidelines, and best practices. A full description of the
organization of the use cases, domains and stakeholders can be found in D1.1
as well as on the shared wiki [2]. In addition, under the data domains on this
wiki, a collection of typical data models (both non-RDF and RDF based)
currently being used by the projects is being generated iteratively. This
data is community driven and already under the control of the W3C LBD
community group, and as such not all use cases are necessarily of direct
relevance to the EeB domain. This is because the W3C group is interested in
all data generated across the BLC. Nonetheless, most of the use cases are
energy related, as SWIMing is currently the main driver of use case
contributions. More details on the guidelines will be available when D2.2 is
made available in M11 of the project.
# Data Collection Framework
As shown in the previous section a main outcome of SWIMing in terms of managed
and published data is to identify EeB business use cases which can benefit
from the application of both Building Information Modelling and Linked Open
Data (BIM-LD). Various EeB research projects will be reviewed, categorized and
brought together in order to facilitate knowledge sharing and to increase the
impact of project results. A main challenge of this data collection process is
to find a common methodology to describe and compare identified business use
cases. Thus, to be able to identify similarities and differences a common
framework is needed that enables to categorize and cluster business use case
developments.
The non-profit organization buildingSMART is developing open standards for the
AEC/FM industry supporting data sharing throughout the life-cycle of a
building. The open IFC standard (ISO 16739) is a main driver for the
implementation of the BIM approach and is an internationally accepted
reference for vendor-neutral exchange of building data. buildingSMART faces
challenges very similar to those of the SWIMing project, because tool vendors
are not able to support the whole IFC standard. Instead, they implement
subsets of IFC relevant to their specific application area. For instance, the
CAD application of an architect is typically not able to handle the
structural analysis data of the structural engineer, or a tool might be
limited to the early design stage and not support later detailed design. To
manage design processes based on use-case-specific tools and partial data
exchange, buildingSMART developed the IDM/MVD methodology. This methodology
has been adopted by the SWIMing project for the data collection process.
The IDM/MVD methodology defines how to specify business use cases and how to
coordinate the involved stakeholders with their tools and data requirements. A
prerequisite for this is to be clear about processes, actors, shared or
exchanged data, and the interfaces or data structures used. It provides a
framework for the specification of collaborative design scenarios, in
particular for Building Information Modelling (BIM). The next subsections
briefly introduce the IDM/MVD methodology and the types of data that are
collected from EeB projects.
## IDM/MVD methodology and its adoption in SWIMing
The IDM/MVD methodology is divided into two main parts:
1. Information Delivery Manual (IDM, orange parts in Figure 1)
2. Model View Definition (MVD, blue parts in Figure 1)
### Information Delivery Manual
The Information Delivery Manual method (IDM, [9]) focuses on knowledge
defined by domain experts. It defines processes and exchange requirements,
which answer what kind of tasks must be carried out, who is responsible, when
the tasks have to be carried out (order, dependencies) and what data needs to
be exchanged.
Two kinds of specifications are used:
1. Process Maps based on the Business Process Modelling Notation (BPMN)
2. Exchange Requirements typically collected in a table format
Process Maps define the various tasks to be carried out throughout the life-
cycle of a building. Each task is placed within a swim lane, which is assigned
to an actor role that is responsible for carrying out those tasks. Arrows
between tasks define data dependencies and are typically linked with data
exchange requirements. To make data exchanges more explicit, IDM introduces
dedicated swim lanes, which may carry additional information about the kind of
data source, such as BIM, drawings, regulations or other kinds of data. The
horizontal axis is tailored according to the life-cycle phases so that it is
visible whether a task has to be carried out in the feasibility stage, early
design, detailed design, commissioning, construction phase or another phase.
More details might be added to refine processes and deal with alternatives.
For instance, tasks might be subdivided into subtasks, decision gateways might
be introduced to control the data flow and to deal with iterative design
cycles, or messages might be added to show the expected communication between
actors. For SWIMing this level of detail is not relevant, as the main focus is
to agree on actor roles (domains & stakeholders), the design phases (building
life-cycle stages) and tasks (use cases).
Exchange Requirements specify the data that needs to be exchanged. As
mentioned above, this typically starts with identifying the main data sources
in terms of high-level data structures or domains. This information can be
represented in dedicated swim lanes and is detailed in the next step in order
to identify the required data, which is defined by objects, attributes and
relationships.
Figure 1 Overview of the IDM/MVD methodology
### Model View Definition
The Model View Definition translates Exchange Requirements into data
structures, which are used for implementation. For the IFC data structure this
means agreeing on a subset schema of the whole IFC specification and defining
additional constraints that need to be implemented by tool vendors and
finally certified by buildingSMART. This not only reduces the effort of
software implementation but also ensures a certain level of quality for
IFC-based data exchange.
MVD developments are not limited to IFC-based data exchange, although existing
specification and validation tools may then not be usable. In the context of
LBD scenarios, an MVD could be assigned to one or more (linked) ontologies
that are able to cover the expected data requirements. This is interesting
with respect to data requirements which go beyond BIM/IFC data, either by
including other application areas like geographical data (GIS) or by covering
a higher level of detail, for instance dealing with special material
properties for novel heat loss calculations.
## Adoption in SWIMing
SWIMing is using the IDM/MVD methodology as a reference framework to develop
and agree on the main criteria for collecting LBD use cases from EeB research
projects. These main criteria are:

* stakeholders (actor roles that are involved in tasks)
* building life-cycle stages (high-level definitions from feasibility studies to demolition)
* building domains (data exchange definitions using general descriptions)

These criteria make it possible to cluster and compare use cases at a high
level. For those use cases identified as having the greatest capability to
benefit from adopting BIM-LD technologies, refined versions will be developed
using BPMN models and more detailed exchange requirements to support the
process of converting to LD.
# Best Practices and Guidelines to Data Management in Relation to SWIMing
The Data Management Plan (DMP) describes what data the project will generate,
whether and how it will be exploited or made accessible for verification and
re-use, and how it will be curated and preserved. The beneficiaries are
expected to deposit the data in a research data repository and take measures
to make it possible for third parties to access, mine, exploit, reproduce and
disseminate [3]:

* the data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible;
* other data, including associated metadata, as specified and within the deadlines laid down in the data management plan.
The SWIMing project manages the generated data using the following web
platforms:
1. Google Drive (private, project internal use only)

Data shared within the project consortium is stored and managed in Google
Drive. This includes deliverable drafts, project management documents,
presentation slides, project-related literature, etc.
2. SWIMing website (public) _http://swiming-project.eu/_
The website provides information about the SWIMing project. It describes the
objectives, partners and results of the project, as well as all kinds of
upcoming events related to topics addressed by SWIMing. The site is hosted
using WordPress; comments can also be added to posts (e.g. events) and made
public with the permission of the SWIMing members.
3. W3C LBD Wiki (public) _https://www.w3.org/community/lbd/wiki/Seed_Use_Cases_
The data related to the results of the projects are stored and published on a
wiki platform. This includes the analyzed use cases, the data domain
categorization, etc., as described in section 2 and in deliverable D1.1. The
data is publicly available, and editing rights are already granted to
registered persons outside the SWIMing project consortium.
Accordingly, data management deals not only with public data but also with
project-internal policies. However, the main focus of this Data Management
Plan is publicly available data, in particular as SWIMing is actively
promoting the reuse of EeB project results.

The SWIMing project follows not only the guidelines on data management in
Horizon 2020 as recommended by the European Commission [3] but also the best
practices of the W3C communities [4, published as a draft in June 2015], both
of which SWIMing members are actively promoting through the dissemination
activities in WP3.
The Horizon 2020 guidelines address the following topics:
* _Data set reference and name._ In order to enable identification, search, and retrieval of the data, each data set is named and accessible through a URL. For instance, each business use case identified by SWIMing has its own URL and can thus be referenced as a web resource.
* _Data set description._ Each data set is described by some text, including its origin. In the W3C LBD Wiki, it can be seen who the authors of a certain page are, and the changes to the pages can be tracked. Furthermore, in the wiki contents it is also possible to hyperlink to related information or other resources like ontologies or available data sets. These can link to open data silos generated by the project or to existing external information sources.
* _Standards and metadata._ SWIMing provides metadata and standardized terms for the W3C LBD Wiki, so that ambiguities and clashes can be avoided. This gives the consumer a better understanding of the collected and enriched data. These terms also act as a matrix to compare use cases developed in different projects. The provided data follows the IDM/MVD framework developed by buildingSMART as an open standard for BIM-based use case developments (see section 3). Further details about the collected information are provided in deliverable D1.1.
* _Data sharing._ This describes how the data are shared, including access procedures, licensing, and the management of sensitive data. It is further explained in sections 4.2, 4.4, and 4.7.
* _Archiving and preservation._ This deals with the procedures for long-term preservation of the data: how long the data should be preserved, what its approximate end volume is, what the associated costs are and how these are planned to be covered. The implementation in the SWIMing project is described in section 4.4.
Since the SWIMing project uses web platforms to store and manage the generated
data, and in particular promotes the use of LBD, it has taken into
consideration the best practices for managing data on the web as recommended
by the W3C. These best practices cover the Data Management Guidelines issued
by the European Commission. The implementation of each best practice is
explained in the following sections.
Figure 2 Best practices addressing the challenges faced when publishing data
on the Web [4]
## Data Vocabularies and Metadata
This challenge is relevant to achieving semantic interoperability between data
producers and consumers. The solution proposed by the Semantic Web community
is to agree on a shared vocabulary and to make it available in an open format.
In the W3C LBD Wiki, a high-level data vocabulary has been developed to share
a common understanding of the collected data, not only among partners within
the SWIMing consortium but also among LBD community members and other external
data consumers. It is also intended to avoid ambiguity and clashes as much as
possible; this, however, remains a challenge due to the wide range and
diversity of covered topics. The specific challenge, then, is to find a good
compromise and to keep the vocabulary as comprehensible as possible.
At the time of this writing the following agreements are made:
1. _Seed Use Case template._ This provides a common structure or template to collect the business use cases. Each collected use case has to be described and presented in the same way following the template (see the sketch after this list). More information can be found on the wiki [2] and also in D1.1.
2. _Data domains categorization and taxonomy._ This is an agreed categorization of the data domains used by use cases collected from different EU research projects related to energy-efficient buildings. Each category is represented by a wiki page, which provides a short description, examples of the type of data and some existing RDF- and non-RDF-based data models.
3. _Building Life Cycle Stage._ This lists the agreed building life cycle stages considered for analyzing the business use cases, i.e. (i) Planning and Design; (ii) Construction, Commissioning; (iii) Operation; (iv) Retrofitting/ Refurbishment/ Reconfiguration; (v) Demolition/ Recycling.
4. _Stakeholders._ This is an agreed categorization of the actors involved in BLC stages, such as architects, owners and engineers. It includes not only human stakeholders but also organizations like energy suppliers or manufacturers and other non-human stakeholders like data providers, applications and devices.
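To make the template concrete, the following minimal sketch models a seed use
case as a plain data structure; the field names are illustrative assumptions,
not the wiki's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SeedUseCase:
    # Field names are illustrative; the authoritative template lives on the wiki [2].
    title: str
    originating_project: str
    description: str
    stakeholders: List[str] = field(default_factory=list)  # agreed stakeholder categories
    blc_stages: List[str] = field(default_factory=list)    # one or more of the five agreed stages
    data_domains: List[str] = field(default_factory=list)  # agreed data domain categories

uc = SeedUseCase(
    title="Building Energy Management System for Energy Efficient Operation",
    originating_project="(example EeB project)",
    description="Monitoring and optimising building energy use during operation.",
    stakeholders=["Facility manager", "Energy supplier"],
    blc_stages=["Operation"],
    data_domains=["Building Devices"],
)
```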
The SWIMing vocabulary was developed at the beginning of the project and has
gone through several steps of refinement. It has been discussed within the
LBD community and meanwhile provides a stable basis for our BIM-LD use case
collection. However, further refinements and extensions of the vocabulary are
very likely, to reflect new insights and to deal with requirements coming from
use case harmonization and in particular the further detailing of key use
cases. Extensions and adjustments will be documented on the W3C LBD Wiki to
reflect the latest state of the shared vocabulary.
Other agreements have been made for internal work and project management. For
instance, a simple folder structure based on the work breakdown structure of
the work packages is used in our shared Google Drive (see Figure 3). Each
work package folder contains subfolders corresponding to deliverables.
Additional folders are created for other documents like meeting minutes,
logos, budget-related documents, etc.
Figure 3 SWIMing Google Drive Folder Structure for internal data management
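As an aside, such a WBS-based layout can also be mirrored programmatically;
the sketch below recreates an illustrative subset of the folder tree on a
local disk (the actual folders are maintained manually in Google Drive, and
any folder names beyond D1.1 and D2.2 are assumptions).

```python
from pathlib import Path

base = Path("SWIMing")
structure = {
    "WP1": ["D1.1"],
    "WP2": ["D2.2"],
    "Meeting minutes": [],
    "Logos": [],
    "Budget": [],
}

for folder, subfolders in structure.items():
    (base / folder).mkdir(parents=True, exist_ok=True)
    for sub in subfolders:
        # each work package folder contains subfolders corresponding to deliverables
        (base / folder / sub).mkdir(exist_ok=True)
```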
## Sensitive Data
Sensitive data is any designated data or metadata that is used in limited ways
and/or intended for limited audiences. Sensitive data may include personal
data, corporate or government data, and mishandling of published sensitive
data may lead to damages to individuals or organizations.
To support best practices for publishing sensitive data, data publishers
should identify all sensitive data, assess the exposure risk, determine the
intended usage, data user audience and any related usage policies, obtain
appropriate approval, and determine the appropriate security measures needed
to be taken to protect the data. Appropriate security measures should also
account for secure authentication and use of HTTPS.
Any use cases generated during the SWIMing project are derived from publicly
available deliverables. Where additional data is elicited from EeB project
members, it will only be published on the W3C LBD Wiki with the full
permission of the project coordinator. Sensitive data in the form of contact
details is only shared through the internal Google Drive and will not be
shared without the permission of the appropriate party. Data gathered through
interviews and questionnaires will also be fully anonymized unless permission
is explicitly asked for and given. TCD has its own internal ethics committee,
which must review any questionnaire or survey before it is used to ensure it
complies with its own standards 1 and the standards of the EC 2 . These
set down strict policies for managing and anonymizing personal data.
## Data Formats
All collected and enriched use-case-related data is published on wiki HTML
pages that are accessible over the internet. Anyone can access this data,
although only members of the community can edit it. So far, the main audience
for this information is human readers, as the main aim of the data is to
trigger further discussion and information exchange within the LBD community.
Accordingly, the content of the wiki pages is mainly structured to meet layout
requirements. For further automatic evaluation, especially if the collected
data is consolidated and the amount of data increases, a machine-readable
format is needed. Ideally, the collected data will be offered as an RDF graph
based on an ontology derived from the vocabulary and agreements discussed in
section 4.1. The SWIMing consortium is discussing this option, but has not yet
come to a decision.
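Should the consortium decide to publish such an RDF serialization, a minimal
sketch of the idea (using the Python rdflib library) could look as follows;
the namespace and property names are assumptions, since no ontology has been
agreed yet.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace; no SWIMing/LBD ontology has been agreed at this point.
LBD = Namespace("https://www.w3.org/community/lbd/wiki/")

g = Graph()
g.bind("lbd", LBD)

uc = LBD["Building_Energy_Management_System_for_Energy_Efficient_Operation"]
g.add((uc, RDF.type, LBD.UseCase))
g.add((uc, RDFS.label, Literal("Building Energy Management System for Energy Efficient Operation")))
g.add((uc, LBD.blcStage, LBD["Category:Operation"]))           # assumed property name
g.add((uc, LBD.dataDomain, LBD["Category:Building_Devices"]))  # assumed property name

print(g.serialize(format="turtle"))
```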
Internal project data, for example deliverables and project management
documents, are written using Microsoft Office tools (Word, Excel, PowerPoint).
They are also exported to Google formats (Google Docs, Sheets, Slides), so
that everyone in the project consortium is able to read and edit the data
online. Some supporting data is represented in PDF format. All data formats
used have been selected to optimize data exchange and collaboration within the
SWIMing consortium; the choice is mainly driven by the tools and workflows
used, in order to reduce coordination overhead.
## Data Preservation
This section describes best practices related to data preservation:
* _The coverage of a dataset should be assessed prior to its preservation_ \- check whether all the resources used are either already preserved somewhere or provided along with the new dataset considered for preservation.
* _Data depositors willing to send a data dump for long term preservation must use a well-established serialization_ \- Web data is an abstract data model that can be expressed in different ways (RDF, JSON-LD, ...). Using a well-established serialization of this data increases its chances of re-use.
* _Preserved datasets should be linked with their "live" counterparts_ \- A link is maintained between the URI of a resource, the most up-to-date description available for it, and preserved descriptions. If the resource does not exist anymore the description should say so and refer to the last preserved description that was available.
All SWIMing data is to be stored on the wiki. Currently, there are no plans to
provide the data in other serializations than those provided by the wiki page.
## Feedback
The wiki page is open for the community to contribute and give feedback.
SWIMing project members are specifically asking for feedback regarding all EeB
project data (relevant to them) published on the wiki, and any recommendation
to adjust or change that data will be added to the W3C LBD wiki page as
received. Feedback is also being elicited through the use of questionnaires
and surveys. These are generated using Google Forms, which can then be sent to
the relevant parties. This data is stored on the shared internal Google Drive
as Google spreadsheets. Paper questionnaires and surveys have also been
distributed at workshops and events, and this data is entered into the same
spreadsheets. All feedback gathered during workshops and tutorials will be
documented in meeting minutes (e.g. Word or Google Docs) and stored on the
shared internal Google Drive, where it is analyzed by the Steering Board and
then published on the wiki.
## Data Enrichment
Data enrichment is defined as a set of processes that can be used to enhance,
refine or otherwise improve raw or previously processed data [4]. In the
SWIMing project, original project documents (deliverables, websites, and
specifications) provide the necessary input to extract, categorize and publish
the required use-case-related data. This is mainly a review process that
requires harmonizing information and, where information is not available,
enriching the data by getting feedback from project partners. References to
the resources used are always provided so that the original source of
information can be used for verification. The review process also includes an
assessment regarding the use of BIM-LD (benefits and challenges), which is
mainly done by the reviewer, as this information shall show the potential as
seen by an LBD expert. Other than this, there are no plans for additional
enrichment of the data sources generated within the project.
## Data License
A license is a legal document giving official permission to use the data
generated or used in a project. According to the type of license adopted by
the publisher, there might be more or fewer restrictions on sharing and
reusing data. In the context of data on the Web, the license of a dataset can
be specified within the data, or outside of it, in a separate document to
which it is linked. The SWIMing project will use open web based data and will
fully comply with any licenses associated with the data.
## Provenance and quality
Data provenance allows data providers to pass information about the data's
origin and history to data consumers. It is important to provide this
information if the data is shared between collaborators who might not have
direct contact with each other, so that the data consumers know the origin and
history of the data [4]. In the SWIMing project, the contact data of the
author and a link to the project homepage, i.e. where the use case originated
from, are provided on the use case wiki page. This allows data consumers to
access the original information sources from project home pages and to contact
the use case author if necessary.
Furthermore, the wiki platform offers a mechanism to track the changes of each
page. The data consumer can see who made the changes and when the changes were
made. The change tracking function is depicted in Figure 4. While the W3C
recommends the use of ontologies, e.g. the prov-o ontology 3 , to address
the challenge of data provenance, the current method of adding and changing
use cases on the wiki does not lend itself well to the application of the
prov-o ontology. As key use cases are identified and explored in greater
detail during the project, the recording of provenance through the prov-o
ontology may be applied to support machine readability (see also section 4.3).
Data quality affects the suitability of data for specific applications,
including applications other than the one for which the data was originally
generated. Documenting data quality significantly eases the process of dataset
selection, increasing the chances of re-use. Independently of domain-specific
peculiarities, the quality of data should be documented and known quality
issues should be explicitly stated in the metadata [4]. In the SWIMing
project, data quality is ensured by asking for feedback from the
authors/project owners. This will be directly visible in the author's field of
the collected use cases.
Figure 4 Change tracking on the W3C LBD Wiki
## Data versioning
Data on web collaboration platforms, such as wikis, changes over time. Version
information makes a dataset uniquely identifiable: it enables the data
consumer to understand how data has changed over time and to determine which
version of a dataset they are working with. Good data versioning enables
consumers to understand whether a newer version of a dataset is available.
Explicit versioning allows for repeatability in research, enables comparisons,
and prevents confusion [4]. The W3C LBD wiki provides a change log for each
wiki page. It can be seen who performed the changes, when the changes occurred
and what exactly the changes are (see Figure 4). Also, the project
deliverables will record specific snapshots of the wiki at different times,
and these can be further used to track different 'versions' of the use case
and data domain classifications and descriptions.
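Because the wiki is a standard MediaWiki installation, the change log can also
be read programmatically; the sketch below assumes the default api.php
endpoint is exposed (the exact URL for the W3C community wiki may differ).

```python
import requests

API = "https://www.w3.org/community/lbd/wiki/api.php"  # assumed endpoint location

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Seed_Use_Cases",
    "rvprop": "timestamp|user|comment",
    "rvlimit": 10,
    "format": "json",
}
data = requests.get(API, params=params, timeout=30).json()
for page in data["query"]["pages"].values():
    for rev in page.get("revisions", []):
        # who changed the page, when, and the edit summary
        print(rev["timestamp"], rev["user"], rev.get("comment", ""))
```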
## Data identification
The use of a common identification system helps data consumers to identify
the data and to perform comparisons on the data in a reliable way. The data
has to be discoverable and citable through time. In the SWIMing project, by
using the wiki platform, each page containing information about a use case, a
data domain category, or a building life cycle stage is accessible through a
URL. The URL represents the identifier of the corresponding data and shall not
be changed over time. The following are examples of URLs corresponding to a
use case, a data domain category, and a building life cycle stage:

* https://www.w3.org/community/lbd/wiki/Building_Energy_Management_System_for_Energy_Efficient_Operation
* https://www.w3.org/community/lbd/wiki/Category:Building_Devices
* https://www.w3.org/community/lbd/wiki/Category:Operation
## Data access
Data consumers usually require simple and near real-time access to data on
the web. The W3C LBD Wiki and the SWIMing project website are accessible from
anywhere via a web browser without any read protection. The SWIMing Google
Drive is also accessible from a web browser, but only by partners within the
consortium. Neither bulk download nor special APIs are provided; the data is
accessed through HTTP.
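In practice, any consumer can therefore retrieve a page with a plain HTTP GET;
a minimal example:

```python
import requests

url = "https://www.w3.org/community/lbd/wiki/Category:Operation"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.text[:200])  # raw HTML of the publicly readable wiki page
```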
## Conclusion of Best Practices and Guidelines
The previous sections introduced the guidelines on data management in Horizon
2020 as recommended by the European Commission [3] and the best practices of
the W3C communities [4], and addressed these with respect to the types of
data generated by the SWIMing project. This data consists of business use
cases, in particular those which can benefit from BIM-LD, and also guidelines
and best practices for converting building data to LD. This data will be
stored on the shared W3C portal and wiki and, as such, we do not at this stage
foresee the need for ontological descriptions of the data, in particular for
recording provenance, licensing etc. The projects which SWIMing is clustering
will, though, benefit from these same guidelines for their own types of data,
and so the project will be actively promoting their usage during events held
as part of WP3 dissemination and clustering. In the next section we examine
how SWIMing complements the CSA Ready4SmartCities, which has also looked at
the application of LD technologies in the Smart City domain.
# Comparison with Ready4SmartCities
The Ready4SmartCities project presented a set of guidelines for Linked Data
generation in the energy domain [5] aiming to address:
* The generation of Linked Data from tabular (SQL, XLS, or CSV) file formats, among others, which are the formats that are currently the most used in the energy domain.
* The issue of legal aspects, licenses, and data ownership, which is regarded as an important topic that could help lower the barrier to publishing data.
* The generation of static data, as well as dynamic data.
* Various means of obtaining and accessing the data, including data stored in files, which is in line with the specified requirements.
Figure 5 Ready4SmartCities - Steps of the guidelines for Linked Data
generation [5]
Figure 5 presents the generic steps for generating Linked Data as proposed by
Ready4SmartCities. Moreover, a set of requirements for the publication of
Linked Data in the energy domain has also been introduced by Ready4SmartCities
[6], provided in a consolidated way together with two available standards:
* the ISO/IEC 25012 standard (International Organization for Standardization) on Data Quality for the scope of Linked Open Data that provides some data quality indicators which are analyzed for quality requirements extraction, and
* the AENOR (La Asociación Española de Normalización y Certificación) PNE 178301 Spanish standard on Smart Cities and Open Data, which presents a set of metrics and indicators concerning the maturity of opening and publishing public sector data in order to facilitate its reuse for the scope of Smart Cities.
The overall requirements extracted by the research and survey analyses are
summarized into the categories presented in Figure 6. READY4SmartCities aimed
at identifying existing knowledge and data resources that are independent from
the Energy Management Systems (EMS) domain, as well as ontologies, datasets
and alignments specific for EMS interoperability [7]. For the collection of
ontologies and datasets, a special online catalogue [8] has been developed to
ensure that resources are collected and recorded in a standardized way.
The catalogue also allows for ease of understanding and use in terms of
submission of new content, visualization of existing resources and handling of
recorded items. For the collection of alignments, an alignment server offered
as a web service has been set up in order to identify and document links and
alignments among the identified ontologies and datasets.
Figure 6 Ready4SmartCities - Tasks for Linked Data publication [6]
While READY4SmartCities is mainly focused on identifying energy-related
ontologies and datasets, the SWIMing project has a complementary scope by
identifying and analyzing business use cases for Building Information
Modelling (BIM) and Linked Data. SWIMing further analyses their potential
extensions to better represent issues such as data modality and data format,
with the goal of enabling fully automatic discovery and consumption of
resources by Building Life Cycle Energy Management (BLCEM) systems.
# Conclusion
The Data Management Plan (DMP), which is a requirement for all projects
participating in the H2020 Pilot on Open Research Data, aims to maximize
access to and re-use of the research data generated during the course of the
project. The SWIMing project is a Coordination & Support Action and does not
actively conduct research on topics related to Energy-efficient Buildings
(EeB) and the use of Linked Building Data (LBD). The aim of SWIMing is rather
to extract and share knowledge generated by various EeB projects. The main
source of data generated by SWIMing is the LBD wiki, which provides a portal
for the community to access and contribute toward descriptions of business use
cases. Data will also be generated in the form of guidelines and best
practices for generating freely interlinked and semantically interoperable
BIM resources for meeting current and future application requirements within
the BLC.
For the structuring of use cases, this deliverable documents a standard
methodology to capture those use cases (IDM/MVD) and provides a description of
how the generated data is to be stored, made accessible for verification and
re-use, and how it is being curated and preserved via the shared W3C community
portal. The document also presents the best practices as set down by the W3C
on publishing data on the web and by the R4SC project, both of which address
the same concerns as the DMP guidelines. As a CSA, SWIMing will actively
promote these best practices amongst the wider EeB communities, and will
provide the expertise and tools to those projects that are unfamiliar with
these practices so that they may apply them to their own project-generated
data, thus supporting greater exploitation of their project results and
increasing the impact of their project outcomes.
0865_POWER2DM_689444.md
**Findable:** Assign persistent IDs, provide rich metadata, register in a
searchable resource, ...

**Accessible:** Retrievable by their ID using a standard protocol; metadata
remain accessible even if the data aren't, ...

**Interoperable:** Use formal, broadly applicable languages, use standard
vocabularies, qualified references, ...

**Reusable:** Rich, accurate metadata, clear licences, provenance, use of
community standards, ...
These follow the principles outlined at
_www.force11.org/group/fairgroup/fairprinciples_ .

The structure of the plan is as generated by the online tool DMP Online
( _https://dmponline.dcc.ac.uk/_ ), and the contents of the sections were
drafted according to the guidance offered by DMP Online.
# 2\. ADMINISTRATIVE DETAILS
Project Name: POWER2DM - Predictive model-based decision support for diabetes
patient empowerment

Project Identifier: POWER2DM

Grant Title: NUMBER — 689444 — POWER2DM

Principal Investigator / Researcher: Albert A. de Graaf (Coordinator)

Description: Data Management Plan for POWER2DM Health and Observations of
Daily Life data used for self-management by diabetes patients

Funder: European Commission (Horizon 2020)

Call Topic: PHC 28 – 2015: Self-management of health and disease and decision
support systems based on predictive computer modelling used by the patient him
or herself
# 3\. DATA SETS
A wide range of patients with diabetes might benefit from support through
POWER2DM. Two different patient populations with altered glucose metabolism
are targeted in POWER2DM: T1DM and T2DM in primary/secondary & tertiary care.
The Self-Management Support System (SMSS) developed in POWER2DM will be tested
in a pragmatic RCT, the **POWER2DM Evaluation Campaign** , to establish the
accuracy and utility of the POWER2DM DSS and APIs and to evaluate
effectiveness in a real-world setting. We will use three different centres
that specialize in (some or all of) these populations: the Reina Sofia
University Hospital in Spain (T1DM), the Leiden University Medical Centre and
Primary Care Research Network in the Netherlands (T1&2DM), and the Institut
für Diabetes “Gerhardt Katsch“ in Karlsburg, Germany (T1&2DM). Study
characteristics are as follows:

_Study design and Operation:_ The protocol for the POWER2DM Evaluation
Campaign will be a pragmatic randomised trial with 9 months of follow-up of
individual patients. Patients will be randomised to either POWER2DM support
(active arm) or usual care (control arm). Patients in the POWER2DM
intervention arm will follow an established protocol for the first 2 weeks in
order to monitor any problems in using the whole system. There will be
evaluation moments at baseline and after 3, 6 and 9 months.
_Endpoints:_ Primary outcome: HbA1c levels (%) before and after the
intervention, compared between the two arms (active versus control). Secondary
outcomes: generic quality of life (SF-36); patient utilities (EQ5D-L);
disease-specific quality of life (DSQLS); costs (CostQ); self-management
outcomes (heiQ: Health Education Impact Questionnaire, Summary of Diabetes
Self Care Activities, Diabetes Management Self-efficacy); lifestyle and
physical activity; and other process outcomes of the POWER2DM modules and
services for patients and care providers: reliability, usability, acceptance
and actual usage.
_Sample Size:_ Variable: the HbA1c level (%). Minimum detectable difference:
0.35% (standard deviation 1.0%). For an alpha error of 0.05 and a power of
80%, the minimum sample size needed is 129 subjects per group. **The POWER2DM
RCT will include 140 type 1 DM and 140 type 2 DM subjects, 280 patients in
total** , allowing us to accommodate a loss to follow-up of 8.5%. In
pre-specified subgroup analyses of patients with T1DM and T2DM we are able to
detect a difference of 0.5% with a sample size of 63 subjects per treatment
strategy per DM subtype (N=70 with 10% loss to follow-up).
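For reference, the figure of 129 subjects per group follows from the standard
two-sample formula for comparing means; a worked check with the parameters
above (two-sided alpha = 0.05, power 80%):

```latex
n \;=\; \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\Delta^{2}}
  \;=\; \frac{2\,(1.96 + 0.84)^{2}\,(1.0)^{2}}{(0.35)^{2}}
  \;\approx\; 128.1 \;\Rightarrow\; 129 \text{ subjects per group.}
```

The same formula with a detectable difference of 0.5% gives approximately 63
subjects per subgroup, matching the pre-specified subgroup analysis.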
_Statistical analysis:_ The primary outcome will be analysed using the Stata
13 xtmixed command for multi-level linear regression, adjusting for clustering
at GP level and for repeated measurements within a patient (StataCorp, College
Station, TX, USA). Strategy-by-time interactions will be assessed to detect
differences between the groups at particular time points. In addition,
strategy-by-time-by-DM-type interactions will be assessed to detect
differences in effects between the two DM subtypes.
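As an open-source sketch of the planned analysis (not the study's actual
code), the same kind of multi-level model can be expressed with Python's
statsmodels; the variable names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluation_campaign.csv")  # hypothetical export of the trial data

# Multi-level linear regression of HbA1c with a random intercept per GP cluster;
# 'arm * months * dm_type' encodes the strategy-by-time and DM-type interactions.
# (Repeated measurements within a patient would additionally be modelled,
# e.g. via nested random effects.)
model = smf.mixedlm(
    "hba1c ~ arm * months * dm_type",
    data=df,
    groups=df["gp_cluster"],
)
print(model.fit().summary())
```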
Data from participants will be recorded in the POWER2DM Personal Data Store.
Appropriate privacy and data security measures will be put in place according
to the pertinent regulations. Personal data will be stored with an anonymised
identifier. The keys that enable linking data to an individual person will be
stored in a secured fashion on separate servers. For the Open Research Data
Pilot, part of the contents of the Personal Data Store will be made available
for research purposes and transferred to a data repository according to the
informed consent provided by study participants.
# 4\. Data set description
POWER2DM will collect basic diabetes-related data, clinical measurements,
patient data (quality of life (QoL) questionnaires, self-management profile),
daily nutritional intake, exercise level, sleep quality, glucose measurements,
vital signs (pulse, temperature), medication intakes, etc. The provisional
list of measured parameters, subdivided into 4 categories, is as follows:
Table 1. Provisional list of measures in POWER2DM Evaluation Campaign dataset:
**Comment [Ad1]:** Description of the data that will be generated or
collected, its origin (in case it is collected), nature and scale and to whom
it could be useful, and whether it underpins a scientific publication.
Information on the existence (or not) of similar data and the possibilities
for integration and reuse. Questions to consider: What data will you create?
Guidance: Give a brief description of the data that will be created, noting
its content and coverage.

**Comment [Ad2]:** We may wish to add a 5th category containing model
predictions, and a 6th category containing use characteristics of the various
devices and POWER2DM features.
<table>
<tr>
<th>
Measure Category and Name (# items)
</th>
<th>
Code
</th> </tr>
<tr>
<td>
**Lifestyle and Daily Monitoring**
</td>
<td>
**LDM**
</td> </tr>
<tr>
<td>
Blood Glucose Level
</td>
<td>
1
</td> </tr>
<tr>
<td>
Dietary Intake
</td>
<td>
2
</td> </tr>
<tr>
<td>
Activity Tracker
</td>
<td>
3
</td> </tr>
<tr>
<td>
Sleep Tracker
</td>
<td>
4
</td> </tr>
<tr>
<td>
Sleep Quality VAS
</td>
<td>
5
</td> </tr>
<tr>
<td>
Relaxation/Stress
</td>
<td>
6
</td> </tr>
<tr>
<td>
Stress VAS (1)
</td>
<td>
7
</td> </tr>
<tr>
<td>
Emotional VAS (1)
</td>
<td>
8
</td> </tr>
<tr>
<td>
Diabetes Medication Treatment (Type/ Dosage/Frequency)
</td>
<td>
9
</td> </tr>
<tr>
<td>
**Questionnaires (# items)**
</td>
<td>
**Q**
</td> </tr>
<tr>
<td>
WHO-5 (5)
</td>
<td>
1
</td> </tr>
<tr>
<td>
PHQ-9 (9)
</td>
<td>
2
</td> </tr>
<tr>
<td>
GAD-7 (7)
</td>
<td>
3
</td> </tr>
<tr>
<td>
PSS (10)
</td>
<td>
4
</td> </tr>
<tr>
<td>
PAID (20)
</td>
<td>
5
</td> </tr>
<tr>
<td>
DSMQ-R (20)
</td>
<td>
6
</td> </tr>
<tr>
<td>
ADDQoL (28)
</td>
<td>
7
</td> </tr>
<tr>
<td>
HFS (27)*
</td>
<td>
8
</td> </tr>
<tr>
<td>
DEPS-R (14)*
</td>
<td>
9
</td> </tr>
<tr>
<td>
FCQ (15)*
</td>
<td>
10
</td> </tr>
<tr>
<td>
D-FISQ (21)*
</td>
<td>
6
</td> </tr>
<tr>
<td>
Gut Health (?)
</td>
<td>
11
</td> </tr>
<tr>
<td>
ASQ (3)
</td>
<td>
12
</td> </tr>
<tr>
<td>
**Clinical/Lab Tests**
</td>
<td>
**CLT**
</td> </tr>
<tr>
<td>
HbA1c
</td>
<td>
1
</td> </tr>
<tr>
<td>
Fasting Glucose
</td>
<td>
2
</td> </tr>
<tr>
<td>
Fasting insulin
</td>
<td>
3
</td> </tr>
<tr>
<td>
Insulin sensitivity (%HOMA-2 S): based on fasting glucose/insulin
</td>
<td>
4
</td> </tr>
<tr>
<td>
Beta cell function (%HOMA-2-B): based on fasting glucose/insulin
</td>
<td>
5
</td> </tr>
<tr>
<td>
Inflammation (mg/l hs-CRP)
</td>
<td>
6
</td> </tr>
<tr>
<td>
Tissue damage (TC, HDL-C, LDL-C,TG, liver damage blood markers, kidney damage
markers, neuropathy markers, smoking status)
</td>
<td>
7
</td> </tr>
<tr>
<td>
Non-esterified fatty acids
</td>
<td>
8
</td> </tr>
<tr>
<td>
**Patient Characteristics**
</td>
<td>
**PC**
</td> </tr>
<tr>
<td>
Anamnesis: Age / Gender / Height / Diabetes Type / Medical History (time since
diagnosis and complications) / AS4
</td>
<td>
1
</td> </tr> </table>
<table>
<tr>
<th>
Weight
</th>
<th>
2
</th> </tr>
<tr>
<td>
BMI (calculated from Weight and Height)
</td>
<td>
3
</td> </tr>
<tr>
<td>
Waist
</td>
<td>
4
</td> </tr>
<tr>
<td>
Blood pressure
</td>
<td>
5
</td> </tr> </table>
Note: * indicates that this measure will only be used if a patient engages in
an associated self-management task (e.g. only insulin users will be asked
about anxiety related to using insulin) or indicates associated problems in
other questionnaires (e.g. DEPS-R will be administered if the patient
indicates issues regarding eating).
# 5\. Data capture methods
The data of the different categories in Table 1 will be captured as follows:
* How will the data be created?
The following data sources are planned:
* PC category: measured by medical professional
* CLT category: measured in Clinical Chemistry lab
* Q category: patient self-evaluation
* LDM category: data will be registered by devices and sensors used by the patients, or entered by the patient in a mobile or web application

* What standards or methodologies will you use?
The following standards will be used:
* PC category: medical professional standards
* CLT category: Clinical Chemistry lab methods/standards
* Q category: established questionnaire methods/standards/scoring methodology;
* LDM category: standards and methods according to the specifications of the devices and sensors used

* How will you structure and name your folders and files?

Folders will be named according to pilot site name / measure category /
measure code (cf. Table 1). Files will be named according to the (anonymous)
subject identifier; the naming will be Subject_Site_Category_Code. Each entry
in a file specifies a value plus the associated date/time specification.

**Comment [Ad3]:** We still need to decide: 3 separate local labs in the 3
pilot regions, or a single central lab?

**Comment [Ad4]:** We need to decide whether the filling out of questionnaires
will be supervised or not.

* How will you ensure that different versions of a dataset are easily identifiable?

**Comment [Ad5]:** This is a suggestion. It will result in a large set of
files (~30 per patient). We do not at present envision different versions of
the dataset since only unprocessed data are collected.

**Comment [Ad6]:** Open for discussion. We may want to store some processed
data as well, e.g. weekly aggregates of physical activity, calorie intake,
sleep, stress, etc.
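A minimal sketch of the naming convention described above (the .csv extension
and the example values are assumptions):

```python
def data_folder(site: str, category: str, code: int) -> str:
    # pilot site name / measure category / measure code (cf. Table 1)
    return f"{site}/{category}/{code}"

def data_filename(subject: str, site: str, category: str, code: int) -> str:
    # Subject_Site_Category_Code, one file per subject and measure
    return f"{subject}_{site}_{category}_{code}.csv"

print(data_folder("Karlsburg", "LDM", 1))             # Karlsburg/LDM/1
print(data_filename("S0042", "Karlsburg", "LDM", 1))  # S0042_Karlsburg_LDM_1.csv
```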
# 6\. Metadata
* How will you capture / create the metadata?
We plan to capture the metadata in a “readme” text file. The strict file
naming convention should further allow unique identification of the data
contained in each file.
* Can any of this information be created automatically?
The “readme” text file will be created by hand. The data file names will be
created automatically.
* What metadata standards will you use and why?
POWER2DM will create a dataset of a diverse nature, not matching any of the
disciplinary metadata standards for Biology, Earth Science, Physical Science,
Social Science & Humanities, and General Research Data offered by the DCC on
their website _http://www.dcc.ac.uk/resources/metadata-standards_ .
We do not consider the development of a dedicated standard for POWER2DM
essential for the efficient dissemination of the project results. The proposed
metadata capture, based on the documentation in the “readme” text file
together with the strict file naming convention, will allow interested
researchers to re-use the data without much difficulty.
# 7\. Data sharing, repository and restrictions
The POWER2DM dataset concerns personal medical and behavioural data. Data
storage in the POWER2DM Personal Data Store will be subject to strict
privacy/security measures dictated by the ethics criteria that apply to the
project (Ethics Deliverables of Work Package 9). In transferring data to a
data repository for sharing, special care will be taken to preserve the same
standard of data privacy/security. This will be accomplished by properly
anonymising the data and ensuring that the keys linking data to patient
identity are not transferred. As an additional privacy precaution, data may be
aggregated to a certain extent depending on the requirements of the POWER2DM
models (i.e. still allowing the reproduction of the results).
As a guiding principle for sharing, study participants are considered the
owners of their personal data. Therefore, participants will be asked to
participate in the Open Research Data Pilot by giving informed consent to
their data being made publicly available after the proper anonymisation and
aggregation mentioned above. This consent will be asked for in a second,
separate consent form, in addition to the standard informed consent to have
their data made available to the project team for research purposes. While the
latter is required for participation in the study, the response to the Open
Research Data Pilot sharing consent form will not be part of the inclusion
criteria.
Method for data sharing:
* How will you make the data available to others?
The data will be stored in a data repository.

**Comment [Ad7]:** Open for discussion.

* With whom will you share the data, and under what conditions?

The data will be publicly available for any party without the requirement to
attribute the data to the POWER2DM Consortium (Open Access, Creative Commons
CC Zero License (cc-zero); see _http://ufal.github.io/public-license-selector/_ ).

**Comment [Ad8]:** Proposed.
* Are any restrictions on data sharing required? e.g. limits on who can use the data, when and for what purpose.
No restrictions on who can use the data and for what purpose apply.
* What restrictions are needed and why?

An embargo period of maximum 12 months after finalization of the project is
deemed required to allow sufficient time for publication of the results and
for the establishment of intellectual property (patent applications).

**Comment [Ad9]:** Suggestion.
* What action will you take to overcome or minimise restrictions?

Subjects participating in the study will be asked to give separate informed
consent to make their data publicly available for any purpose. Publications
and patent applications will be planned as early as realistically possible.
* Where (i.e. in which repository) will the data be deposited?
The data plus instruction files for usage will be deposited in the Zenodo
cost-free data repository for sharing ( _http://www.zenodo.org_ ) . The
Zenodo procedures for long-term preservation of the data will be put in place.
The duration of data preservation is still to be decided. The data is not of a
very complex nature. The approximated end volume will depend on several
factors including the number of participants willing to take part in the Open
Research Data Pilot, and the degree of aggregation to be applied, and as a
consequence is difficult to predict at the current time.
No associated costs will be involved with the data sharing.
# 8\. Preservation plan
The following applies:
* What is the long-term preservation plan for the dataset?
The dataset will be deposited in the Zenodo data repository.
* Will additional resources be needed to prepare data for deposit or meet charges from data repositories?
No. The data preparation for deposit is part of Work Package 7 (Dissemination)
and the deposit in Zenodo is free of charge. If possible, the physical
depositing of the data will be done before the end of the project, but the
embargo will remain in place until the end of the period required for
publications and the securing of intellectual property.
* What additional resources are needed to deliver your plan?
No additional resources are needed. Since the data will be publicly available
without restrictions, we do not need to keep a supervised data release system
in place.
* Is additional specialist expertise (or training for existing staff) required?
No additional specialist expertise is required. The readme files supplied with
the deposited data will contain all the information required to use the data.
* Do you have sufficient storage and equipment or do you need to cost in more?
This does not apply since the dat will be deposited in the Zenodo repository
* Will charges be applied by data repositories?
No. Zenodo is a cost-free repository.
* Have you costed in time and effort to prepare the data for sharing / preservation?
Yes. This is part of Work Package 7 (Dissemination).
0868_ABIOMATER_665440.md
# Responsibilities
The local coordinators will be responsible for the management of the research
data at their institutions throughout the life of the research project. Any
changes or issues related to the storage or sharing of data will be reported
to the project coordinator who is ultimately responsible for the data
management of the whole consortium.
# Storage
The storage will depend on the type of data and the needs for sharing and
access. All experimental files (e.g. unprocessed CCD imaging files, numerical
simulation data files) will be stored on the local servers/PCs attached to the
particular experimental set-up. Access to this data will be open only to the
individual researchers directly involved with the given experimental
work/simulations. All technical information and metadata (based on the
processed experimental files) will be stored on the personal computers of the
relevant consortium members as well as on a dedicated data server (University
of Exeter network drive), which will also be used for the consortium website.
Access to and sharing of this data will be available to all consortium
members. Public access (via the website) will be available for some of the
data and regulated according to the existing consortium IP protection and
commercial confidentiality policies. The Project Coordinator will delegate the
responsibilities of maintaining and updating the data to one of the research
associates, who will regularly report on the state of, and modifications to,
the website and the data.
# Backup
The backup of the raw experimental data files will be performed according to
the existing regulations at the relevant institutions. For example, in Exeter
this will be done on a daily basis via the centralised Backup Service. The
local coordinators will be responsible for taking the necessary actions and
monitoring the backup procedures throughout the life of the project. The
backup of the metadata on the consortium website will be carried out in the
same way as that for the experimental files at Exeter.
# Data type and formats
**Technical documentation/reports/publications** – will be produced using
standard office software such as Word/PowerPoint/Excel. Where necessary (for
sharing or use on the website) the documents will be converted into ‘.pdf’
files. Most images/diagrams/plots will be saved or converted into standard
formats such as ‘.jpeg’, ‘.tiff’ or ‘.bmp’.

**Experimental data** – will be produced as part of the experimental work or
numerical simulations. The format of the data will depend on the particular
process/experimental setup used to record the information. For example, in
imaging experiments the files will be saved in the format appropriate for the
given CCD camera, but later transformed into movies with standard file
formats, such as ‘.mp4’ or ‘.avi’. Numerical simulations and other
experimental work will produce standard ASCII-type data, saved with
appropriate extensions, such as ‘.dat’ or ‘.txt’, as typically used for the
given instrument/simulation model.
Data from each experiment will be stored in a separate folder containing a
text file detailing the experimental details, the file names of the raw data,
the imaging parameters used (e.g. optical setup, excitation intensities, scan
speeds) and, where appropriate, validatory analytical data sets. This will
allow future users of the data to access and comprehend the raw files.
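A minimal sketch of this per-experiment layout (all field names and values are
illustrative):

```python
from pathlib import Path

def new_experiment(base: Path, name: str, details: dict) -> Path:
    """Create an experiment folder with a text file describing its contents."""
    folder = base / name
    folder.mkdir(parents=True, exist_ok=True)
    lines = [f"{key}: {value}" for key, value in details.items()]
    (folder / "experiment_details.txt").write_text("\n".join(lines) + "\n")
    return folder

new_experiment(
    Path("data"),
    "imaging_run_example",
    {
        "raw data files": "frame_0001.raw ... frame_0500.raw",
        "optical setup": "example microscope configuration",
        "excitation intensity": "example value",
        "scan speed": "example value",
    },
)
```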
# Data sharing
Publication of peer-reviewed outputs will take place as soon as possible,
during the course of the research project or within 12 months of the end of
the funding period. Publications will be in accordance with the University of
Exeter’s Open Access policy and, where possible, made openly available via the
Gold (pay-to-publish) route, or otherwise via the Green route (institutional
repository ERIC,
https://as.exeter.ac.uk/library/resources/researchoutputrepositoryeric/repositorypolicy/
) or similar schemes available at the partner institutions. Re-use of raw data
will be facilitated by making it available upon request alongside the relevant
contextual information. The host institution will raise awareness of the
available data through the conferences and workshops highlighted in the case
for support.
# Proprietary data
Potential commercial exploitation of the techniques and materials developed
during the project will be fully explored through the University of Exeter’s
Research & Knowledge Transfer department. All data will be made freely
available unless this department advises that, due to the proprietary nature
of the data, it should be withheld from the public domain.
0869_LUCA_688303.md
**3) Data Management**
This section will be subject to change as the project progresses, and reflects
the current status within the consortium regarding the primary data that will
be generated. The sub-sections below provide detailed information on the data
sets, standards and metadata, and the respective data sharing, archiving and
preservation procedures for the data sets collected at each partner
institution:
# a. Data sets collected at ICFO
Four types of data will be collected at ICFO:
1. “Component data”: Design drawings (subsystems and LUCA system); (opto-) electronics board and component designs and specifications.
2. “Sub-system data”: research laboratory data (test results of components), sub-systems and the LUCA system; research application data (dynamic range, sensitivity, repeatability, accuracy and other parameters defined in **WP4** ).
3. “Evaluation data”: Evaluation data which are the results from the end-user tests in clinics.
4. “Exploratory data”: Exploratory data generated mainly within **WP5** by ICFO Knowledge & Technology Transfer unit and the Medical Optics group together (market, IP etc. analysis reports).
## i. Data set descriptions
<table>
<tr>
<th>
_**What data** will be **generated or collected** ? _
</th>
<th>
“Component data”: The ICFO group will mainly be in charge of the components
related to the diffuse correlation spectroscopy (DCS) sub-system. As such, we
will generate design drawings and specifications for (a) the source/laser,
(b) the detector (a single-photon counting avalanche photodiode), and (c) the
correlator unit.
“Sub-system data”: The ICFO group will generate test results associated with
the components – electrical, optical, physical – and with the DCS subsystem in
its integrated form as a stand-alone system. The DCS subsystem will be tested
for its dynamic range (in intensity and in correlation decay times),
sensitivity to small changes in scatterer motion, repeatability over time and
accuracy. Finally, the integrated LUCA system will be tested, and we will
focus on the DCS subsystem in its integrated form in the full LUCA platform.
“Evaluation data”: The ICFO group will be involved in the evaluation of the
data measured in the clinics by the end-users. The ICFO group will be in
charge of pre-processing, fitting, presentation and interpretation of the DCS
data.
“Exploratory data”: The ICFO Knowledge & Technology Transfer unit (ICFO-KTT)
will work mainly with the ICFO Medical Optics group, but also with others, to
carry out a market analysis, freedom-to-operate analysis and others. This data
will be generated and managed at ICFO.
We note that all these actions are collaborative and we expect significant
overlaps and data sharing between partners.
</th> </tr> </table>
<table>
<tr>
<th>
_What is its **origin**?_
</th>
<th>
“Component data” and “Sub-system data” will be internal to the group and to
the project. The measurements will be carried out at ICFO by ICFO.
“Evaluation data” will be generated at IDIBAPS in close collaboration with
IDIBAPS.
“Exploratory data” will be generated at ICFO-KTT using external databases,
studies and sources.
</th> </tr>
<tr>
<td>
_What are its **nature, format and scale**?_
</td>
<td>
A wide range of data formats and scales will be generated.
1. Drawings and designs will use industry standard software and will, primarily, be confidential in nature. We will, as much as possible, generate publicly accessible versions for dissemination purposes. These will be stored in forward compatible, time-tested formats. Specifics will arise by M18.
2. Research application data on the testing of LUCA will follow non-standard formats common to each laboratory doing the testing (in this case ICFO) and will be stored in binary and text files. They will be associated with an electronic notebook which will include links to analysis scripts (Matlab, R, Excel, custom software). The processed data will be saved in a report format and will be publicly available once cleared in terms of IP and exploitation issues by the appropriate committee in the LUCA project, as foreseen by the description of action.
3. Clinical data will be stored in electronic report forms, in formats that are to be designed and specified in LUCA tasks appropriate to the agreed rules on the system. The raw data will be associated with appropriate electronic notebooks; it will be anonymized as described in the ethical procedures, and parts pertaining to identifiable patient information will be destroyed according to the ethical procedures and approvals that are due M24. This is a task of IDIBAPS. The processed data will be publicly available in summary as well as for individual subjects and shared through the LUCA web-site. Details will depend on the final system and the outputs that are tasks to be completed by M24.
4. Market analysis data will be confidential and will be shared within the consortium as reports and numbers. A summary will be published as part of the appropriate project deliverables.
5. Supporting data used in academic peer-reviewed publications will be made available, after publication, via a recognised suitable data sharing repository (e.g. Zenodo or a national repository if available). This policy will be followed unless a partner or the IEC can show that disseminating this data will compromise IP or other commercial advantage, as detailed below. The project will use the metadata standards and requirements of the repository used for sharing the data.
At the ICFO group, long-term access is ensured by the following measures:
1. Forward compatible, time-tested formats such as text files (comma-separated values, open-source formats such as R data tables), and/or open-source binary formats (such as open document spreadsheets, open document text) and/or custom-made binary formats (with definition files stored in standard text formats) will be utilized with associated descriptive documentation.
2. All data will be stored on a secure hard drive that is backed up every night by an incremental back-up script (rsbackup) to an external drive (a sketch of such a snapshot scheme is given after this table). Both drives are regularly replicated and upgraded at roughly three-year intervals.
3. All desktop computers used by the ICFO personnel involved in the project are centrally managed by the ICFO information technology (ICFO-IT) department, which utilizes secure folders on the ICFO servers that are backed up automatically and internally by ICFO-IT.
4. All instrument control computers are kept off the internet and are “ghosted” after every major upgrade. “Ghost” copies are kept by ICFO-IT in open-source formats.
5. All electronic designs are stored, managed and accessed through the ICFO electronics workshop and are assigned unique identifiers.
We note that the ICFO Medical Optics group has a proven track record in
long-term data storage and access going back to the PI’s earlier work from the
late 1990s.
</td> </tr>
<tr>
<td>
_**To whom** could it be **useful** ? _
</td>
<td>
“Component data”: In the short-term, this type of data is only useful for the
internal LUCA partners. In the medium-term, it will be useful for our other
projects and some of these components are expected to become products. Some
information may be used in scientific publications and presentations as
described below.
“Sub-system data” and “evaluation data” are useful both internally for our
developments and upgrades but also for scientific publications. The data will
be useful to the end-user community and the biophontonics community and will
also be of interest to endocrinologists, the biomedical optics community, the
ultrasonics community, radiologists, and biomedical engineers. “Exploratory
data” is mainly useful internally and, in the medium-term, may be useful for
industrial partners for exploitation purposes, e.g. for fund-raising. It will
also be useful for future grant applications where higher TRL levels are
foreseen.
</td> </tr>
<tr>
<td>
_Do **similar data sets exist**? Are there possibilities for **integration and
reuse** ? _
</td>
<td>
This is a unique device and a data-set. There are possibilities to combine
processed data for review papers on optics + ultrasound combinations in
biomedicine as well as for reviews on applications of diffuse optics in
cancer.
</td> </tr> </table>
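Measure 2 in the table above mentions a nightly incremental back-up by the
rsbackup script. The Python sketch below shows one way such a snapshot scheme
can work, using rsync's `--link-dest` hard-linking so that unchanged files
cost no additional space; the paths are hypothetical and the actual ICFO
script may differ.

```python
# Illustrative sketch of a nightly incremental snapshot in the spirit of
# the rsbackup script described above. Requires the rsync command-line tool;
# source and backup paths are hypothetical.
import subprocess
from datetime import date
from pathlib import Path

SOURCE = Path("/data/luca")            # secure working drive (hypothetical)
BACKUPS = Path("/mnt/external/luca")   # external backup drive (hypothetical)

def nightly_snapshot() -> None:
    today = BACKUPS / date.today().isoformat()
    latest = BACKUPS / "latest"
    cmd = ["rsync", "-a", "--delete"]
    if latest.exists():
        # Unchanged files become hard links into the previous snapshot,
        # so each nightly run stores only what actually changed.
        cmd.append(f"--link-dest={latest.resolve()}")
    cmd += [f"{SOURCE}/", str(today)]
    subprocess.run(cmd, check=True)
    if latest.is_symlink():
        latest.unlink()
    latest.symlink_to(today)

nightly_snapshot()
```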
## ii. Standards and metadata
<table>
<tr>
<th>
_How will the **data be collected/generated**?_
</th>
<th>
“Component data” and “sub-system data” will be generated by laboratory tests
using test equipment and using design software.
“Evaluation data” will be generated mainly from ex vivo phantom measurements
and by data acquired from the subjects.
“Exploratory data” will be generated by studies of external databases,
interviews with end-users and others.
Details are described in the specific work-packages.
</th> </tr>
<tr>
<td>
_Which community **data standards or methodologies** (if any) will be used at this stage?_
</td>
<td>
The lack of community data standards is one of the points that we explicitly
discuss and attempt to contribute to in the LUCA project. Here, we mean the
community of biomedical optics researchers using diffuse optical methods.
Standards of a second community, the ICFO community, will be used. As
mentioned above, there are standard methods internal to the ICFO Medical
Optics group, those handled by ICFO-IT, those handled by the ICFO electronics
workshop and those handled by ICFO-KTT.
</td> </tr>
<tr>
<td>
_How will the data be **organised during the project?**_
</td>
<td>
“Component data” and “sub-system data” generated by ICFO will follow a
convention whereby the acronym of each component – stored in a shared
bill-of-materials document – plus the date and time will be used to uniquely
identify the data set (see the sketch after this table). Each data set will be
associated with an electronic notebook kept in an open-source data format as
described above. All software and main texts will be kept in a Subversion
repository managed by ICFO-IT for version control.
“Evaluation data” will follow the conventions defined jointly by IDIBAPS, HEMO
and ECM who are the main drivers of the clinical studies and the final
software suites. ICFO Group will follow their naming conventions.
</td> </tr>
<tr>
<td>
_**Metadata** should be created to describe the data and aid discovery. **How
will you capture this information?** _
</td>
<td>
This will be captured in electronic notebooks, in header files in open-source
format (described above) and in case-report files. The exact details are being
defined as the systems mature.
</td> </tr>
<tr>
<td>
_**Where will it be recorded?** _
</td>
<td>
All internal data will be kept according to the different units at ICFO and
their standard practices. We will work collectively with the other LUCA
partners to arrange the external data in standard formats. As explained above,
every dataset is associated with an electronic notebook, appropriate header
file and comments. These will be recorded in the storage system(s) described
above.
</td> </tr> </table>
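A minimal Python sketch of the identifier convention described above
(component acronym plus date and time); the separator and timestamp format are
assumptions, as the text does not fix them.

```python
# Illustrative sketch: unique data-set identifiers built from the component
# acronym (as listed in the shared bill-of-materials document) plus the date
# and time. The separator and timestamp format are assumptions.
from datetime import datetime

def dataset_id(component_acronym: str, when=None) -> str:
    when = when or datetime.now()
    return f"{component_acronym}_{when:%Y%m%d_%H%M%S}"

print(dataset_id("DCS-LASER"))  # e.g. 'DCS-LASER_20160421_093015'
```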
## iii. Data Sharing
<table>
<tr>
<th>
_**Where and how** will the data be made available and **how can they be
accessed** ? Will you share data via a data repository, handle data requests
directly or use another mechanism? _
</th>
<th>
Internal to the project, the ICFO data will be shared using generic cloud
storage (mainly Dropbox) wherever appropriate, e.g. when the shared data is
not very sensitive or would be incomprehensible to an intruder. Otherwise, it
will be shared as encrypted files (PGP encryption) using ICFO’s own cloud
system, which is managed by ICFO-IT (see the sketch after this table). Brief
reports, spreadsheets and such will be shared via the TEAMWORK framework set
up by EIBIR.
be shared by the TEAMWORK framework set by EIBIR.
Externally, we will use the project web-site as the main gateway for sharing
data. We will post, after IP clearance, appropriate data sets alongside
publications on journal web-sites.
</th> </tr>
<tr>
<td>
_**To whom** will the data be made available? _
</td>
<td>
The bulk of the data will be widely accessible to end-users; however, some
data, such as market studies and IP portfolios, will only be shared with
entities and people involved in the exploitation activities.
</td> </tr>
<tr>
<td>
_What are the **technical mechanisms for dissemination** and necessary **software** or other tools for enabling **re-use** of the data?_
</td>
<td>
We will use the LUCA web-site for all dissemination. The processed data will
be presented in a way that is cross-platform and software-independent to the
best of our abilities. If some software or dataset we generate becomes of
value for the general biomedical optics community, we will consider developing
a dedicated web-site for this purpose.
</td> </tr>
<tr>
<td>
_Are any **restrictions on data sharing** required and **why**?_
</td>
<td>
There will be restrictions based on the need for securing publications prior
to public release and for exploitation purposes. These are defined in the
project DOA.
</td> </tr>
<tr>
<td>
_What **strategies** will you apply **to overcome or limit restrictions**?_
</td>
<td>
We will utilize procedures such as embargo until publication, anonymising and
simplification.
</td> </tr>
<tr>
<td>
_**Where (i.e. in which repository)** will the data be deposited? _
</td>
<td>
As mentioned above, there are no community-defined standards for the
biomedical diffuse optics community. Therefore, we will utilize the project
website, possibly dedicated websites for specific outputs, and journal
websites.
</td> </tr> </table>
## iv. Archiving and preservation (including storage and backup)
<table>
<tr>
<th>
_What procedures_
_will be put in place for**long-term preservation of the data** ? _
</th>
<th>
As described above and repeated below, there is a set of procedures for
ICFO-generated data. At the ICFO group, long-term access is ensured by the
following measures:
1. Forward compatible, time-tested formats such as text files (comma-separated values, open-source formats such as R data tables), and/or open-source binary formats (such as open document spreadsheets, open document text) and/or custom-made binary formats (with definition files stored in standard text formats) will be utilized with associated descriptive documentation.
2. All data will be stored on a secure hard drive that is backed up every night by an incremental back-up script (rsbackup) to an external drive. Both drives are regularly replicated and upgraded at roughly three-year intervals.
3. All desktop computers used by the ICFO personnel involved in the project are centrally managed by the ICFO information technology (ICFO-IT) department, which utilizes secure folders on the ICFO servers that are backed up automatically and internally by ICFO-IT.
4. All instrument control computers are kept off the internet and are “ghosted” after every major upgrade. “Ghost” copies are kept by ICFO-IT in open-source formats.
5. All electronic designs are stored, managed and accessed through the ICFO electronics workshop and are assigned unique identifiers.
We note that the ICFO Medical Optics group has a proven track record in
long-term data storage and access going back to the PI’s earlier work from the
late 1990s.
</td> </tr>
<tr>
<td>
_**How long will the data be preserved** and what will its **approximated end
volume** be? _
</td>
<td>
Apart from certain aspects of the clinical datasets, which will be managed by
IDIBAPS, there are no limitations on the preservation of the data. We will
follow academic standards and aim for a ten-year preservation of the data. As
mentioned above, the PI is able to access, re-use and re-analyse data from the
late 1990s.
The approximate end-volume of this data will be less than one terabyte.
</td> </tr>
<tr>
<td>
_Are**additional resources and/or is specialist expertise** needed? _
</td>
<td>
No. We are all experts in the management of datasets of this size. Internally,
ICFO-IT manages the general policies, makes suggestions on good practices and
ensures security against intrusions.
</td> </tr>
<tr>
<td>
_Will there be any**additional costs** for archiving? _
</td>
<td>
The costs are budgeted within the project and internally.
</td> </tr> </table>
# b. Data sets collected at POLIMI
Three types of data will be collected by POLIMI:
1. “Component data”: specification and designs of laser sources, detectors and timing electronics, including the electronic boards for operating them.
2. “Sub-system data”: research laboratory data (test results of components), sub-systems and the LUCA system; research application data (dynamic range, sensitivity, repeatability, accuracy and other parameters defined in **WP4** ).
3. “Evaluation data”: Evaluation data that are the results from the end-user tests in clinics.
## i. Data set descriptions
<table>
<tr>
<th>
_**What data** will be **generated or collected** ? _
</th>
<th>
“Component data”: POLIMI will be in charge of the components related to the
time-resolved spectroscopy (TRS) sub-system. As such, POLIMI will generate
specifications and design drawings for (a) laser sources, (b) detectors,
namely SPADs (Single-Photon Avalanche Diodes) or SiPMs (Silicon
PhotoMultipliers), and (c) timing electronics (TDC, Time-to-Digital
Converter).
“Sub-system data”: POLIMI will generate test results associated with the
components – electrical, optical, physical – and with the TRS subsystem in its
integrated form as a stand-alone system. The TRS subsystem will be tested for
performance assessment. Finally, the integrated LUCA system will be tested,
and we will focus on the TRS subsystem in its integrated form in the full LUCA
platform.
“Evaluation data”: POLIMI will be involved in the evaluation of the data
measured in the clinics by the end-users, in particular for pre-processing,
fitting, presentation and interpretation of the TRS data.
We note that all these actions are collaborative and we expect significant
overlaps and data sharing between partners.
</th> </tr> </table>
<table>
<tr>
<th>
_What is its **origin**?_
</th>
<th>
“Component data” and “Sub-system data” will be generated within the group and
the project. The measurements will be carried out at POLIMI by POLIMI.
“Evaluation data” will be generated at IDIBAPS.
</th> </tr>
<tr>
<td>
_What are its **nature, format and scale**?_
</td>
<td>
A wide range of data formats and scales will be generated.
1. Drawings and designs will use industry standard software and will, primarily, be confidential in nature. We will, as much as possible, generate publicly accessible versions for dissemination purposes. These will be stored in forward compatible, time-tested formats. Specifics will arise by M18.
2. Research application data on the testing of LUCA will follow non-standard formats common to each laboratory doing the testing (in this case POLIMI) and will be stored in binary and text files. Matlab/Excel scripts will be provided for reading these files. The processed data will be saved in a report format and will be publicly available once cleared in terms of IP and exploitation issues by the appropriate committee in the LUCA project, as foreseen by the description of action.
3. Clinical data will be stored in electronic report forms, in formats that are to be designed and specified in LUCA tasks appropriate to the agreed rules on the system. The raw data will be anonymized as described in the ethical procedures, and parts pertaining to identifiable patient information will be destroyed according to the ethical procedures and approvals that are due M24. This is a task of IDIBAPS. The processed data will be publicly available in summary as well as for individual subjects and shared through the LUCA web-site. Details will depend on the final system and the outputs that are tasks to be completed by M24.
4. Supporting data used in academic peer-reviewed publications will be made available, after publication, via a recognised suitable data sharing repository (e.g. Zenodo or a national repository if available). This policy will be followed unless a partner or the IEC can show that disseminating this data will compromise IP or other commercial advantage, as detailed below. The project will use the metadata standards and requirements of the repository used for sharing the data.
At POLIMI, long-term access is ensured by the following measures:
1. Forward compatible, time-tested formats such as text files (comma-separated values), and/or open-source binary formats (such as open document spreadsheets, open document text) and/or custom-made binary formats (with definition files stored in standard text formats) will be utilized with associated descriptive documentation.
2. All data will be stored on secure hard drives in a redundant system (RAID 5) that is backed up every week by an incremental backup script (rsbackup) to other external servers. The data servers are located in the basement of the Physics Department and the DEIB department of Politecnico di Milano, in a restricted-access area. Access to the data servers is controlled by passwords, and they are part of a VLAN with no access from outside the POLIMI institution. The VLAN, to which not only the data servers but all the PCs used for this project are connected, is part of an institutional network protected by a firewall.
3. All instrument control computers are kept off the internet.
4. All electronic designs are stored, managed and accessed through the POLIMI electronics workshops and are assigned unique identifiers.
We note that the POLIMI group has a proven track record in long-term data
storage and access going back to the 1980s.
</td> </tr>
<tr>
<td>
_**To whom** could it be **useful** ? Does it underpin a scientific
publication? _
</td>
<td>
“Component data”: In the short-term, this type of data is only useful for the
internal LUCA partners. In the medium-term, it will be useful for our other
projects, and some of these components are expected to become products. Some
information may be used in scientific publications and presentations as
described below.
“Sub-system data” and “evaluation data” are useful both internally, for our
developments and upgrades, and for scientific publications. We submit articles
to target journals for the end-user community (e.g. Journal of Clinical
Endocrinology and Nutrition, European Journal of Endocrinology, Clinical
Endocrinology and Thyroid in the endocrinology field, and Radiology, European
Journal of Radiology and American Journal of Radiology in the radiology field)
and for the biophotonics community (e.g. Biophotonics, Applied Optics,
Biomedical Optics Express, Journal of Biomedical Optics, Nature Photonics).
This is a multidisciplinary project and we expect that the range of journals
will expand as the project progresses and may include endocrinology,
biomedical optics, ultrasonics, radiology, biomedical engineering and others.
</td> </tr>
<tr>
<td>
_Do **similar data sets exist**? Are there possibilities for **integration and
reuse** ? _
</td>
<td>
This is a unique device and a data-set. There are possibilities to combine
processed data for review papers on optics+ultrasound combinations in
biomedicine as well as for reviews on applications of diffuse optics in
cancer.
</td> </tr> </table>
## ii. Standards and metadata
<table>
<tr>
<th>
_How will the **data be collected/generated**?_
</th>
<th>
“Component data” will be generated by laboratory tests using test equipment
and using design software.
“Subsystem data” will be generated mainly from ex vivo phantom measurements
and by data acquired from the subjects.
</th> </tr>
<tr>
<td>
_Which community **data standards or methodologies** (if any) will be used at
this stage? _
</td>
<td>
The lack of community data standards is one of the points that we explicitly
discuss and attempt to contribute to in the LUCA project. Here, we mean the
community of biomedical optics researchers using diffuse optical methods.
POLIMI has already participated in other EU multidisciplinary projects in
which the exchange of data in different formats was crucial. Standard Matlab
scripts were prepared in order to read data in the POLIMI format and convert
it into other formats.
</td> </tr>
<tr>
<td>
_How will the data be **organised during the project?**_
</td>
<td>
“Component data” generated by POLIMI will be stored in folders and files
within a root folder (named after the project, “LUCA”) that will contain all
the information concerning the project. Each component will have a dedicated
folder, and the various releases of the component data will have progressive
numbering.
“Sub-system data” generated by POLIMI will follow the standard convention
applied by the Biomedical Optics Group, where the files are stored in a folder
with the name of the project and organized in subfolders indicating the
different experiments/WP activities. The name of each file is composed of
three parts: a three-letter identifier indicating the experiment/activity, a
letter indicating the nature of the file (e.g. “m” in-vivo experimental
measurement, “p” phantom measurement, “s” instrument response function
measurement) and a progressive number (see the sketch after this table). In
the header of the file, all the other information needed to uniquely identify
the data set is stored. An extensive description of the experiment and the
details of each file are also written in the logbook of the laboratory
involved.
“Evaluation data” will follow the conventions defined jointly by IDIBAPS, HEMO
and ECM, who are the main drivers of the clinical studies and the final
software suites. The POLIMI group will follow their naming conventions.
</td> </tr>
<tr>
<td>
_**Metadata** should be created to describe the data and aid discovery. **How
will you capture this information?** _
</td>
<td>
Metadata will be captured in text files describing how the data are stored in
files and folders, how and when the data have been collected, the importance
of the data, etc.
</td> </tr>
<tr>
<td>
_**Where will it be recorded?** _
</td>
<td>
All internal data will be kept according to the different units at POLIMI and
their standard practices. We will work collectively with the other LUCA
partners to arrange the external data in standard formats. These will be
recorded in the storage system(s) described above. Additionally, the data will
also be stored on laptop and desktop computers routinely used in laboratory
activities.
</td> </tr> </table>
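A minimal Python sketch of the file-naming convention described above; the
example three-letter identifier and the zero-padding of the progressive number
are assumptions.

```python
# Illustrative sketch: compose and parse POLIMI-style file names consisting
# of a three-letter experiment/activity identifier, one letter for the nature
# of the file and a progressive number. Padding width is an assumption.
import re

NATURE = {"m": "in-vivo measurement",
          "p": "phantom measurement",
          "s": "instrument response function"}

def make_name(activity: str, nature: str, number: int) -> str:
    assert len(activity) == 3 and nature in NATURE
    return f"{activity}{nature}{number:04d}"

def parse_name(name: str) -> dict:
    match = re.fullmatch(r"([a-z]{3})([mps])(\d+)", name)
    if not match:
        raise ValueError(f"not a recognised file name: {name!r}")
    activity, nature, number = match.groups()
    return {"activity": activity,
            "nature": NATURE[nature],
            "number": int(number)}

print(make_name("thy", "p", 12))  # 'thyp0012' ('thy' is a hypothetical code)
print(parse_name("thyp0012"))
```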
## iii. Data Sharing
<table>
<tr>
<th>
_**Where and how** will the data be made available and **how can they be
accessed** ? Will you share data via a data repository, handle data requests
directly or use another mechanism? _
</th>
<th>
Internal to the project, the POLIMI data will be shared using cloud-storage
systems (such as OneDrive) via encrypted files.
Brief reports, spreadsheets and such will be shared by the TEAMWORK framework
set by EIBIR.
Externally, we will use the project web-site as the main gateway for sharing
data. We will post, after IP clearance, appropriate data sets alongside
publications on journal web-sites.
</th> </tr>
<tr>
<td>
_**To whom** will the data be made available? _
</td>
<td>
Data describing the details of the developed components will be restricted to
the consortium partners working on related topics.
General data describing the performance of the developed components and how to
exploit them will be widely accessible.
</td> </tr>
<tr>
<td>
_What are the **technical mechanisms for dissemination** and necessary
**software** or other tools for enabling **re-use** of the data? _
</td>
<td>
We will use the LUCA web-site for dissemination. The processed data will be
presented in a way that is cross-platform and software-independent to the best
of our abilities. If some software or dataset that we generate becomes of
value for the general biomedical optics community, we will consider developing
a dedicated web-site for that purpose.
</td> </tr>
<tr>
<td>
_Are any **restrictions on data sharing** required and **why**?_
</td>
<td>
There will be restrictions based on the need to secure publications prior to
public release and for exploitation purposes. These are defined in the project
DOA. Furthermore, any patient data that could be used to identify the patients
will be properly anonymized prior to sharing, and the link between the patient
ID and the dataset will be permanently destroyed after an appropriate time
based on the approved ethical protocols and procedures. This is IDIBAPS’s
responsibility, and the POLIMI group will receive data that is already
anonymized according to these principles.
</td> </tr>
<tr>
<td>
_What **strategies** will you apply **to overcome or limit restrictions**?_
</td>
<td>
We will utilize procedures such as embargo until publication.
</td> </tr>
<tr>
<td>
_**Where (i.e. in which repository)** will the data be deposited? _
</td>
<td>
As mentioned above, there are no well-established community-defined standards
for the biomedical diffuse optics community. Therefore, we will utilize the
project web-site, possibly dedicated web-sites for specific outputs, and
journal web-sites.
</td> </tr> </table>
## iv. Archiving and preservation (including storage and backup)
<table>
<tr>
<th>
_What procedures_
_will be put in place for**long-term preservation of the data** ? _
</th>
<th>
As described above and repeated below, long-term access is ensured by the
following measures:
1. Forward compatible, time-tested formats such as text files (comma-separated values), and/or open-source binary formats (such as open document spreadsheets, open document text) and/or custom-made binary formats (with definition files stored in standard text formats) will be utilized with associated descriptive documentation.
2. All data will be stored on secure hard drives in a redundant system (RAID 5) that is backed up every week by an incremental backup script (rsbackup) to other external servers. The data servers are located in the basement of the Physics Department and the DEIB department of Politecnico di Milano, in a restricted-access area. Access to the data servers is controlled by passwords, and they are part of a VLAN with no access from outside the POLIMI institution. The VLAN, to which not only the data servers but all the PCs used for this project are connected, is part of an institutional network protected by a firewall.
3. All instrument control computers are kept off the internet.
4. All electronic designs are stored, managed and accessed through the POLIMI electronics workshops and are assigned unique identifiers.
We note that the POLIMI group has a proven track record in long-term data
storage and access going back to the 1980s.
</td> </tr>
<tr>
<td>
_**How long will the data be preserved** and what will its **approximated end
volume** be? _
</td>
<td>
Apart from certain aspects of the clinical datasets, which will be managed by
IDIBAPS, there are no limitations on the preservation of the data. We will
follow academic standards and aim for a ten-year preservation of the data. As
mentioned above, the POLIMI group is able to access, re-use and re-analyse
data from the early 1990s.
The approximate end-volume of this data will be less than one terabyte.
<tr>
<td>
_Are**additional resources and/or is specialist expertise** needed? _
</td>
<td>
No. We are all experts in the management of datasets of this size. Internally,
POLIMI IT managers make suggestions on good practices and ensure security
against intrusions.
</td> </tr>
<tr>
<td>
_Will there be any**additional costs** for archiving? _
</td>
<td>
The costs are budgeted within the project and internally.
</td> </tr> </table>
# c. Data sets collected at IDIBAPS
Two types of data will be collected at IDIBAPS:
1. “Clinical data”: Clinical data that is related to healthy volunteers and patients included as participants in **WP5** .
2. “Evaluation data”: Evaluation data that are the results from the end-user tests in clinics.
## i. Data set descriptions
<table>
<tr>
<th>
_**What data** will be **generated or collected** ? _
</th>
<th>
“Clinical data”: IDIBAPS will be involved in the recruitment of healthy
volunteers and patients included in the pilot study as participants in **WP5**
. Data will be related to medical history, physical examination, laboratory
and ultrasound parameters.
“Evaluation data”: Evaluation data that are the results from the end-user
tests in clinics.
We note that all these actions are collaborative and we expect significant
overlaps and data sharing between partners.
</th> </tr>
<tr>
<td>
_What is its **origin**?_
</td>
<td>
All data will be generated within the project. Some will reflect the
confidential know-how of an individual partner; other data will be generated
in collaboration.
“Clinical data” will be generated at IDIBAPS. Data storage will be performed
maintaining the anonymity of volunteers and following current legislation. No
biological samples related to the study will be stored. Once analyzed, the
collected samples will be destroyed according to the existing protocols in the
CDB (Centre de Diagnòstic Biomèdic) of the Hospital Clinic of Barcelona. The
encoding list will be destroyed once all the participants have been measured
with the LUCA device
and the data analyzed, to ensure no extra information is required.
“Evaluation data” will be generated at IDIBAPS.
</th> </tr>
<tr>
<td>
_What are its **nature, format and scale**?_
</td>
<td>
A wide range of data formats and scales will be generated.
1. Research application data on the testing of LUCA will follow non-standard formats common to each laboratory doing the testing and will be stored in binary and text files. They will be associated with an electronic notebook which will include links to analysis scripts (Matlab, R, Excel, custom software). The processed data will be saved in a report format and will be publicly available once cleared in terms of IP and exploitation issues by the appropriate committee.
2. Clinical data: regarding personal data, the standard regulatory guidelines will be followed at the national and international level: Spanish law and Directive 95/46/EC of the European Union on the protection of personal data. The only sensitive data that will be collected and/or processed are related to health and ethnicity. A database will be created with the variables of interest of the participants, both volunteers and patients. This database is only available to a member of the Hospital Clínic (Dr. Mireia Mora). The variables collected to register and treat patients' vital information will be included in another database associated with the code number of the participant. These variables include: name, date of birth and medical record number. This database will only be available to the members of the Hospital Clinic, since it is responsible for the clinical patients in routine clinical practice. The other members of the project will not have the data of the participants, only the assigned code number and the study variables for their analysis. It is not expected that the immediate results of this research project have important ethical implications.
3. Evaluation data will be stored in electronic report forms, in formats that are to be designed and specified in LUCA tasks appropriate to the agreed rules on the system. The raw data will be associated with appropriate electronic notebooks; it will be anonymized as described in the ethical procedures, and parts pertaining to identifiable patient information will be destroyed according to the ethical procedures and approvals that are due M24. The processed data will be publicly available in summary as well as for individual subjects and shared through the LUCA web-site. Details will depend on the final system and the outputs that are tasks to be completed by M24.
4. Conformity data will be generated and stored according to the industry standards and will be mainly public. It will be shared as a report.
5. Market analysis data will be confidential and will be shared within the consortium as reports and numbers. A summary will be published as part of the appropriate project deliverables.
6. Supporting data used in academic peer-reviewed publications will be made available, after publication, via a recognised suitable data sharing repository (e.g. Zenodo or a national repository if available). This policy will be followed unless a partner or the IEC can show that disseminating this data will compromise IP or other commercial advantage, as detailed below. The project will use the metadata standards and requirements of the repository used for sharing the data.
</td> </tr>
<tr>
<td>
_**To whom** could it be **useful** ? Does it underpin a scientific
publication? _
</td>
<td>
“Clinical data” (anonymized) and “evaluation data” are useful both internally,
for our developments and upgrades, and for scientific publications. The data
will be of interest to the end-user community, the biophotonics community,
endocrinologists, the biomedical optics community, the ultrasonics community,
radiologists, and biomedical engineers. “Exploratory data” is mainly useful
internally and, in the medium-term, may be useful for industrial partners for
exploitation purposes, e.g. for fund-raising. It will also be useful for
future grant applications where higher TRL levels are foreseen.
</td> </tr>
<tr>
<td>
_Do **similar data sets exist**? Are there possibilities for **integration and
reuse** ? _
</td>
<td>
This is a unique device and a data-set. There are possibilities to combine
processed data for review papers on optics + ultrasound combinations in
biomedicine as well as for reviews on applications of diffuse optics in
cancer.
</td> </tr> </table>
## ii. Standards and metadata
<table>
<tr>
<th>
_How will the **data be collected/generated**?_
</th>
<th>
“Clinical data” will be collected from healthy volunteers and patients who
agree to participate. Healthy participants will be selected among those who
have participated in previous work on the thyroid with diffuse optics. They
will be asked whether they want to participate again in this project, on a
completely voluntary basis. Patients will be selected from those who are
followed by the endocrinology department of the Hospital Clinic of Barcelona
and who, because of their condition, will be surgically treated with total
thyroidectomy. Data will be generated from the medical history, physical
examination, laboratory and ultrasound parameters obtained from clinical
practice.
“Evaluation data” will come from subject measurements with the LUCA device.
“Exploratory data” will be generated by studies of external databases,
interviews with end-users and others.
Details are described in the specific work-packages.
</th> </tr>
<tr>
<td>
_Which community **data standards or methodologies** (if any) will be used at
this stage? _
</td>
<td>
Data storage, and where applicable sharing, will be performed maintaining the
anonymity of volunteers and following current legislation. No biological
samples related to the study will be stored. Once analyzed, the collected
samples will be destroyed according to the existing protocols in the CDB
(Centre de Diagnòstic Biomèdic) of the Hospital Clinic of Barcelona. Data
pertaining to this study (clinical, laboratory and imaging) are not included
in the conventional medical record; they will be kept in a separate file in a
protected location. Medical images, such as ultrasounds and MRIs, will be
stored in a picture archiving and communication system (PACS), which allows
images to be stored and transferred in DICOM format.
</td> </tr>
<tr>
<td>
_How will the data be **organised during the project?**_
</td>
<td>
“Clinical data” will follow the standard procedure in accordance with the
guidelines outlined in the Declaration of Helsinki, and will comply with the
national legislation currently in effect in Spain, specifically the Law of
Biomedical Research (_Ley de Investigación Biomédica_) enacted in 2007. This
law regulates the ethical evaluation of research projects in Spain that
involve human subjects, and it designates and authorizes the local Clinical
Research
Ethics Committees to review all types of research projects involving humans,
as well as the handling of personal data. In this sense, the LUCA study in
Spain will fulfill all national and European ethical requirements.
Participants will be codified using “LUCA” followed by “CO” for controls or
“CA” for cases, followed by a number established by the order of evaluation,
for example LUCA_CO_1, LUCA_CO_2, LUCA_CA_1, etc. (see the sketch after this
table). All the information obtained and written in the clinical protocol will
be entered into the database using both categorical and numeric variables as
suitable. Excel and SPSS databases will be used with restricted access.
“Evaluation data” will follow the conventions defined jointly by IDIBAPS, HEMO
and ECM who are the main drivers of the clinical studies and the final
software suites.
</td> </tr>
<tr>
<td>
_**Metadata** should be created to describe the data and aid discovery. **How
will you capture this information?** _
</td>
<td>
This will be captured in electronic notebooks, in header files in open-source
format (described above) and in case-report files. The exact details are being
defined as the systems mature.
</td> </tr>
<tr>
<td>
_**Where will it be recorded?** _
</td>
<td>
All internal data will be kept according to the different units at IDIBAPS and
their standard practices. We will work collectively with the other LUCA
partners to arrange the external data in standard formats. As explained above,
every data-set is associated with an electronic notebook, appropriate header
file and comments. These will be recorded in the storage system(s) described
above.
</td> </tr> </table>
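A minimal Python sketch of the participant-coding scheme described above; only
the code format (LUCA_CO_n, LUCA_CA_n, numbered in order of evaluation) comes
from the text, the rest is illustrative.

```python
# Illustrative sketch: hand out participant codes in order of evaluation,
# 'CO' for controls and 'CA' for cases, as described above.
from itertools import count

class ParticipantCoder:
    """Generates LUCA_CO_1, LUCA_CO_2, ... and LUCA_CA_1, LUCA_CA_2, ..."""

    def __init__(self) -> None:
        self._counters = {"CO": count(1), "CA": count(1)}

    def next_code(self, group: str) -> str:
        assert group in ("CO", "CA"), "CO = control, CA = case"
        return f"LUCA_{group}_{next(self._counters[group])}"

coder = ParticipantCoder()
print(coder.next_code("CO"))  # LUCA_CO_1
print(coder.next_code("CO"))  # LUCA_CO_2
print(coder.next_code("CA"))  # LUCA_CA_1
```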
## iii. Data Sharing
<table>
<tr>
<th>
_**Where and how** will the data be made available and **how can they be
accessed** ? Will you share data via a data repository, handle data requests
directly or use another mechanism? _
</th>
<th>
Data storage, and where applicable sharing, will be performed maintaining the
anonymity of volunteers and following current legislation. No biological
samples related to the study will be stored. Once analyzed, the collected
samples will be destroyed according to the existing protocols in the CDB
(Centre de Diagnòstic Biomèdic) of the Hospital Clinic of Barcelona. The
encoding list will be destroyed once all the participants have been measured
with the LUCA device and the data analyzed, to ensure no extra information is
required (a sketch of such an encoding list and its destruction follows after
this table). At the latest, this will take place upon the completion of the
project. The realization of this project will involve the voluntary
participation of unpaid volunteers. Any use of data or samples follows local
and international regulations, especially the Declaration of Helsinki (World
Medical Association), as amended in 2000, and the European Convention on Human
Rights and Dignity of the Human Being with regard to the Application of
Biology and Medicine (Oviedo, April 1997). The partners involved in these
aspects are committed to reporting all aspects of the studies to the project
committees. This includes written informed consent documentation, part of the
protocol for human research studies.
Supporting data used in academic peer-reviewed publications will be made
available, after publication, via a recognised suitable data sharing
repository (e.g. Zenodo or a national repository if available). This policy
will be followed
unless a partner or the IEC can show that disseminating this data will
compromise IP or other commercial advantage. The project will use the metadata
standards and requirements of the repository used for sharing the data.
Brief reports, spreadsheets and such will be shared via the project internal
collaboration platform Teamwork.
Externally, we will use the project website as the main gateway for sharing
data approved for dissemination. We will post, after IP clearance, appropriate
data sets alongside publications on journal web-sites.
</td> </tr>
<tr>
<td>
_**To whom** will the data be made available? _
</td>
<td>
We aim to make the bulk of the data widely accessible; however, there may be
some data, such as market studies and IP portfolios, that will only be shared
with entities and people related to the exploitation activities. Clinical data
of subjects will remain internal to IDIBAPS and will not be shared.
</td> </tr>
<tr>
<td>
_What are the **technical mechanisms for dissemination** and necessary
**software** or other tools for enabling **re-use** of the data? _
</td>
<td>
We will use the LUCA website for all dissemination. The processed data will be
presented in a way that is cross-platform and software-independent to the best
of our abilities. If some software or dataset that we generate becomes of
value for the general biomedical optics community, we will consider developing
a dedicated web-site for that purpose.
</td> </tr>
<tr>
<td>
_Are any **restrictions on data sharing** required and **why**?_
</td>
<td>
There will be restrictions based on the need to secure publications prior to
public release and for exploitation purposes. These are defined in the project
DOA. Furthermore, any patient data that could be used to identify the patients
will be properly anonymized prior to sharing, and the link between the patient
ID and the dataset will be permanently destroyed after an appropriate time
based on the approved ethical protocols and procedures. This is IDIBAPS’s
responsibility, and the ICFO group will receive data that is already
anonymized according to these principles.
</td> </tr>
<tr>
<td>
_What **strategies** will you apply **to overcome or limit restrictions**?_
</td>
<td>
We will utilize procedures such as embargo until publication, anonymising and
simplification.
</td> </tr>
<tr>
<td>
_**Where (i.e. in which repository)** will the data be deposited? _
</td>
<td>
As mentioned above, we will utilize the project web-site, possibly dedicated
web-sites for specific outputs, and journal web-sites.
Within the LUCA consortium we will use the project management platform
Teamwork, where data files can be uploaded and downloaded in folders organised
by WP and/or specific topics, with version management and the possibility to
restrict access and add tags.
</td> </tr> </table>
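A minimal Python sketch of the encoding-list handling described above: the
identity-to-code pairs live in a separate, access-restricted file, which is
deleted once all participants have been measured and the data analysed. The
file name and fields are hypothetical.

```python
# Illustrative sketch: maintain the encoding list (participant identity ->
# study code) in a separate restricted file, then destroy it to permanently
# break the link between patients and study data. Names are hypothetical.
import csv
import os

ENCODING_LIST = "encoding_list.csv"  # kept in a restricted location

def record_participant(identity: str, code: str) -> None:
    """Append one identity/code pair to the separately stored encoding list."""
    new_file = not os.path.exists(ENCODING_LIST)
    with open(ENCODING_LIST, "a", newline="") as fh:
        writer = csv.writer(fh)
        if new_file:
            writer.writerow(["identity", "code"])
        writer.writerow([identity, code])

def destroy_encoding_list() -> None:
    """Delete the encoding list once measurements and analysis are complete."""
    if os.path.exists(ENCODING_LIST):
        os.remove(ENCODING_LIST)

record_participant("medical record 12345", "LUCA_CA_1")
destroy_encoding_list()
```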
## iv. Archiving and preservation (including storage and backup)
<table>
<tr>
<th>
_What procedures will be put in place for **long-term preservation of the data**?_
</th>
<th>
Data storage will be performed maintaining the anonymity of volunteers and
following current legislation. No biological samples related to the study will
be stored. Once analyzed, the collected samples will be destroyed according to
the existing protocols in the CDB (Centre de Diagnòstic Biomèdic) of the
Hospital Clinic of Barcelona. The encoding list will be destroyed once all the
participants have been measured with the LUCA device and the data analyzed, to
ensure no extra information is required. At the latest, this will take place
upon the completion of the project. Any use of data or samples follows local
and international regulations, especially the Declaration of Helsinki (World
Medical Association), as amended in 2000, and the European Convention on Human
Rights and Dignity of the Human Being with regard to the Application of
Biology and Medicine (Oviedo, April 1997). The partners involved in these
aspects are committed to reporting all aspects of the studies to the project
committees. This includes written informed consent documentation, part of the
protocol for human research studies.
Regarding personal data, the standard regulatory guidelines will be followed
at the national and international level: Spanish law and Directive 95/46/EC of
the European Union on the protection of personal data. The only sensitive data
that will be collected and/or processed are related to health and ethnicity. A
database will be created with the variables of interest of the participants,
both volunteers and patients. This database is only available to a member of
the Hospital Clínic (Dr. Mireia Mora). The variables collected to register and
treat patients' vital information will be included in another database
associated with the code number of the participant. These variables include:
name, date of birth and medical record number. This database will only be
available to the members of the Hospital Clinic, since it is responsible for
the clinical patients in routine clinical practice. The other members of the
project will not have the data of the participants, only the assigned code
number and the study variables for their analysis. It is not expected that the
immediate results of this research project have important ethical
implications. Data pertaining to this study (clinical, laboratory and imaging)
are not included in the conventional medical record; they will be kept in a
separate file in a protected location. Medical images, such as ultrasounds and
MRIs, will be stored in a picture archiving and communication system (PACS),
which allows images to be stored and transferred in DICOM format.
</td> </tr>
<tr>
<td>
_**How long will the data be preserved** _
_and what will its**approximated end volume** be? _
</td>
<td>
According to the Law of Biomedical Research, there is no need to preserve the
data. However, we aim to preserve the data for at least five years.
The approximate end-volume of this data will be less than one terabyte.
</td> </tr>
<tr>
<td>
_Are**additional resources and/or is specialist expertise** needed? _
</td>
<td>
No. We are all experts in the management of datasets of this size.
</td> </tr>
<tr>
<td>
_Will there be any**additional costs** for archiving? _
</td>
<td>
The costs are budgeted within the project and internally.
</td> </tr> </table>
# d. Data sets collected at HEMO
Four types of data will be collected at HemoPhotonics:
1. “Component data”: Design drawings (subsystems and LUCA system); Firmware and Software for micro-controllers etc.; (opto-) electronics boards; component designs and specifications.
2. “Sub-system data”: laboratory evaluation data (test results of components) for sub-systems and the LUCA system; device application data (dynamic range, sensitivity, repeatability, accuracy and other parameters defined in **WP4** ); compliance testing and documentation.
3. “Evaluation data”: Evaluation data that are the results from the end-user tests in clinics.
4. “Exploratory data”: Exploratory data generated mainly within exploitation plan (market reports; market & IP strategy, IP analysis reports etc.).
## i. Data set descriptions
<table>
<tr>
<th>
_**What data** will be **generated or collected** ? _
</th>
<th>
“Component data”: HemoPhotonics will mainly provide or contribute to
components related to the diffuse correlation spectroscopy (DCS) sub-system,
and will develop internal control electronics, specific firmware, as well as
operation and control software. We will therefore generate schematic and
design drawings, software and firmware code, application documentation,
specifications, etc.
“Sub-system data”: HemoPhotonics will generate or contribute to test results
associated with the components – electrical, optical, physical – and with the
DCS subsystem in its integrated form as a stand-alone system. Furthermore,
HemoPhotonics will perform and document functional and compliance tests at the
sub-system level as well as for the integrated LUCA system.
“Evaluation data”: HemoPhotonics will be involved in some aspects of the
evaluation of the data measured in the clinics by the end-users. In
particular, HemoPhotonics will generate evaluation code for optical data, in
collaboration with ICFO and POLIMI, for the LUCA device implementation based
on clinical evaluations. Furthermore, end-user feedback, e.g. on the usability
of the LUCA device in clinical settings, will be collected.
“Exploratory data”: In collaboration mainly with ICFO and the industrial
partners, HemoPhotonics will contribute to the exploitation aspects of LUCA
such as market analysis, exploitation strategy, freedom-to-operate analysis,
etc.
</th> </tr>
<tr>
<td>
_What is its **origin**?_
</td>
<td>
“Component data” will be generated by HemoPhotonics.
“Sub-system data” will be generated by HemoPhotonics, ICFO, POLIMI, VERMON and
ECM.
“Evaluation data” will be generated at IDIBAPS in collaboration with ICFO.
Specific evaluation code to be developed for implementation in the LUCA system
will be generated by HemoPhotonics.
“Exploratory data” will be mainly generated at ICFO-KTT using external
databases, studies and sources.
</td> </tr>
<tr>
<td>
_What are its **nature, format and scale**?_
</td>
<td>
A wide range of data formats and scales will be generated.
1. Drawings and designs will use industry standard software and will, primarily, be confidential in nature. As much as possible, publicly accessible versions will be generated for dissemination purposes. These will be stored in forward compatible, time-tested formats. Specifics will arise by M18.
2. Software and firmware code will be developed in standard development suites for C++, VHDL on a dedicated computer system. Codes will be confidential.
3. Application data on the testing of LUCA will follow non-standard formats in binary and text files and will be evaluated with internal scripts based on common software tools (Excel, Matlab, etc.) on a dedicated computer system. The processed data will be saved in a report format and will be publicly available once cleared in terms of IP and exploitation issues by the appropriate committee in the LUCA project, as foreseen by the description of action.
4. Exploitation strategy, market analysis, freedom-to-operate analysis etc. data will be confidential and will be shared within the consortium as reports and numbers. A summary will be published as part of the appropriate project deliverables.
Long-term access is ensured by the following measures:
1. All data will be stored on a secure hard drive that is backed up bi-weekly to an external drive. Both drives will be regularly replicated and upgraded at roughly three-year intervals.
2. All developed intermediate and released firmware and software code will be stored under proper consecutive version assignments.
3. All mechanical and electronic design files will be stored, managed with assignment of unique identifiers.
</td> </tr>
<tr>
<td>
_**To whom** could it be **useful** ? Does it underpin a scientific
publication? _
</td>
<td>
“Component data”: In the short-term, this type of data is only useful for the
internal LUCA partners. In the medium-term, it will be useful for our other
projects and when some of these components become products.
“Sub-system data” and “evaluation data” are useful internally for our
developments and upgrades. They may occasionally support scientific
publications.
“Exploratory data” is mainly useful internally and, in the medium-term, for
product exploitation as well as, e.g., fund-raising purposes addressing higher
technology readiness levels.
</td> </tr>
<tr>
<td>
_Do **similar data sets exist**? Are there possibilities for **integration and
reuse** ? _
</td>
<td>
This is a unique device and a data-set.
</td> </tr> </table>
## ii. Standards and metadata
<table>
<tr>
<th>
_How will the **data be collected/generated**?_
</th>
<th>
“Component data” and “sub-system data” will be generated by laboratory tests
using test equipment and by design and development software.
“Evaluation data” will be generated mainly from ex vivo phantom measurements
and by data acquired from the subjects.
“Exploratory data” will be generated by studies of external databases,
interviews with end-users and others.
Details are described in the specific work-packages.
</td> </tr>
<tr>
<td>
_Which community **data standards or methodologies** (if any) will be used at
this stage? _
</td>
<td>
Community data standards in this area of research do not presently exist, but
the LUCA project attempts to contribute to future standardization.
</td> </tr>
<tr>
<td>
_How will the data be **organised during the project?**_
</td>
<td>
“Component data” and “sub-system data” generated by HemoPhotonics will follow
a convention whereby the acronym of each component – stored in a shared
bill-of-materials document – plus the date and time will be used to uniquely
identify the data set. All software and main texts will be kept in a
Subversion repository.
“Evaluation data” will follow the conventions defined jointly by IDIBAPS,
HemoPhotonics and ECM, who are the main drivers of the clinical studies and
the final software suites.
</td> </tr>
<tr>
<td>
_**Metadata** should be created to describe the data and aid discovery. **How
will you capture this information?** _
</td>
<td>
This will be captured in header files in open-source format. The exact details
are being defined as the systems mature.
</td> </tr>
<tr>
<td>
_**Where will it be recorded?** _
</td>
<td>
Every data-set is associated with an electronic notebook, an appropriate header
file and comments, and will be recorded in the storage system described above.
</td> </tr> </table>
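To make the naming convention and header-file metadata described above concrete, here is a minimal Python sketch; the bill-of-materials entries, helper names and JSON header layout are illustrative assumptions, not the project's actual tooling:

```python
from datetime import datetime, timezone
import json

# Hypothetical bill-of-materials lookup: component acronym -> description.
BOM = {"LD01": "laser driver board", "DET02": "photon-counting detector"}

def dataset_name(component: str) -> str:
    """Build a unique data-set identifier from the component acronym,
    the date and the time, following the convention described above."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    return f"{component}_{stamp}"

def write_header(name: str, component: str, path: str) -> None:
    """Capture minimal metadata in an open, text-based header file."""
    header = {
        "dataset": name,
        "component": component,
        "component_description": BOM.get(component, "unknown"),
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(header, f, indent=2)

name = dataset_name("LD01")  # e.g. "LD01_20240101_120000"
write_header(name, "LD01", name + ".header.json")
```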
3. **Data Sharing**
<table>
<tr>
<th>
_**Where and how** will the data be made available and **how can they be
accessed**? Will you share data via a data repository, handle data requests
directly or use another mechanism?_
</th>
<th>
Internal to the project, the HemoPhotonics data will be shared using generic
cloud-storage (mainly Dropbox) wherever appropriate, e.g. when the shared data
is not sensitive or is incomprehensible to outsiders. Brief reports,
spreadsheets and the like will be shared via the Teamwork framework set up by
EIBIR.
Externally, we will use the project web-site as the main gateway for sharing
data. We will post, after IP clearance, appropriate data sets alongside
publications on journal web-sites.
</th> </tr>
<tr>
<td>
_**To whom** will the data be made available? _
</td>
<td>
Apart from the dissemination-related activities of WP6, most
HemoPhotonics-generated data is restricted to internal use.
</td> </tr>
<tr>
<td>
_What are the**technical mechanisms for dissemination** and necessary
**software** or other tools for enabling **re-use** of the data? _
</td>
<td>
We will use the LUCA web-site for all dissemination. The processed data will
be presented in a way that is cross-platform and software-independent to the
best of our abilities.
</td> </tr>
<tr>
<td>
_Are any**restrictions on data sharing** required and **why** ? _
</td>
<td>
For most of the data generated by HemoPhotonics, restrictions on device
technology (hardware and software) are required to allow a successful
exploitation of the developments in future products.
</td> </tr>
<tr>
<td>
_What **strategies** will you apply **to overcome or limit restrictions**?_
</td>
<td>
Where appropriate, IP protection measures will be implemented.
</td> </tr>
<tr>
<td>
_**Where (i.e. in which repository)** will the data be deposited? _
</td>
<td>
Where appropriate, we will use the project website to make data available.
Supplementary data will be accessible in publications available on journal
websites and the project website.
</td> </tr> </table>
4. **Archiving and preservation (including storage and backup)**
<table>
<tr>
<th>
_What procedures will be put in place for**longterm preservation of the data**
? _
</th>
<th>
As described above, to ensure long-term access, HemoPhotonics will implement
the following measures:
1. Forward compatible, time-tested formats such as text files (comma-separated values, open-source formats), and/or open-source binary formats (such as open document spreadsheets, open document text) and/or custom-made binary formats (with definition files stored in standard text formats) will be utilized with associated descriptive documentation (a small illustration follows this table).
2. Codes are based on standard languages with long-term availability (e.g. C++, VHDL).
3. All data will be stored on a secure hard-drive that is backed up bi-weekly to an external drive. Both drives are regularly replicated and upgraded at roughly three-year intervals.
4. All mechanical and electronic designs are stored and assigned unique
identifiers.
</th> </tr>
<tr>
<td>
_**How long will the data be preserved** and what will its **approximate end
volume** be?_
</td>
<td>
We aim for a ten year preservation of the data.
The approximate end-volume of this data will be less than one terabyte.
</td> </tr>
<tr>
<td>
_Are**additional resources and/or is specialist expertise** needed? _
</td>
<td>
No.
</td> </tr>
<tr>
<td>
_Will there be any**additional costs** for archiving? _
</td>
<td>
The costs are budgeted within the project and internally.
</td> </tr> </table>
# e. Data sets collected at VERMON
Three types of data will be collected at VERMON:
1. “Component data”: Design drawings of the probe; components, mechanical parts and specifications.
2. “Sub-system data”: research laboratory data (test results of components, images), research application data (US probe performance, mechanical and safety validation and other specification validation as defined in **WP3**).
3. “Exploitation data”: Market and competition assessment data, cost models, pre-product datasheet. Patent list (competition, FTO and patents resulting from LUCA) in relation to **WP7** activities.
## i. Data set descriptions
<table>
<tr>
<th>
_**What data** will be **generated or collected** ? _
</th>
<th>
“Component data”: VERMON will be mainly in charge of the components related to
the multimodal probe. As such, we will generate design drawings,
specifications and process definitions.
“Sub-system data”: VERMON will generate test results associated with the
probe's compliance to specifications. Test data will deal with mechanical assessment,
process validation, US component performance and safety compliance.
“Exploitation data”: VERMON will contribute to the data collection necessary
to set up a thorough exploitation plan. This includes market data forecasts,
potential end-user identification and manufacturing costs. Patent datasets
will be created to assess competition and FTO as well as to monitor IP
protection of the LUCA project results.
We note that all these actions are collaborative and we expect significant
overlaps and data sharing between partners.
</th> </tr>
<tr>
<td>
_What is its**origin** ? _
</td>
<td>
“Component data” and “Sub-system data” will be generated with VERMON’s
internal design and test tools.
“Exploitation data” will be essentially derived from market studies, patent
database extraction and more generally from the web.
</td> </tr>
<tr>
<td>
_What are its**nature, format and scale** ? _
</td>
<td>
A wide range of data formats and scales will be generated.
1. Drawings and designs will use industry standard software and will, primarily, be confidential in nature. We will, as much as possible, generate publicly accessible versions for dissemination purposes. These will be stored in forward compatible, time-tested formats.
2. Research application data on the testing of the LUCA probe will follow formats dependent on the test workbenches. Usually, the measurement data will be stored in Matlab or Excel formats. The data files are small (a few tens of kilobytes).
3. Market analysis data will be confidential and will be shared within the consortium as reports and numbers. A summary will be published as part of the appropriate project deliverables.
These data will be stored on VERMON's data servers. The IS infrastructure is
based on redundant hard-drives with weekly and monthly backups.
</td> </tr>
<tr>
<td>
_**To whom** could it be **useful** ? Does it underpin a scientific
publication? _
</td>
<td>
“Component data”: In the short-term, this type of data is only useful for the
proper interaction between LUCA partners and to keep the internal knowledge
within VERMON. In the long-term, if further developments and designs occur,
this type of data will be shared on a business-to-business basis.
“Sub-system data” are useful both internally for our developments and upgrades
but also for assessing the performance indicators of the LUCA solution.
Generic performance data can be public for dissemination purposes towards
possible end-users and customers.
“Exploitation data” is company confidential by default. For proper
coordination of exploitation in the LUCA consortium some data subsets or
aggregated data can be shared.
</td> </tr>
<tr>
<td>
_Do**similar data sets exist** ? Are there possibilities for **integration and
reuse** ? _
</td>
<td>
This is a unique device and a data-set. There are possibilities to combine
processed data for review papers on optics + ultrasound combinations in
biomedicine as well as for reviews on applications of diffuse optics in
cancer.
</td> </tr> </table>
2. **Standards and metadata**
<table>
<tr>
<th>
_How will the **data be collected/generated**?_
</th>
<th>
Data will be generated from different sources, ranging from internal tools and
test benches to data accessible on the web.
</th> </tr>
<tr>
<td>
_Which community**data standards or methodologies** _
_(if any) will be used at this stage?_
</td>
<td>
Not applicable for VERMON.
</td> </tr>
<tr>
<td>
_How will the data be**organised during the project?** _
</td>
<td>
VERMON has an internal methodology to keep track of the data generated by each
project/product, based on several data management tools:
* A project management tool keeps track of project development. This internally developed database records the project responsibilities and all project step validations.
* Design and measurement files are stored on a dedicated server following a common folder structure. Each probe has a root folder with subfolders for “Specifications”, “Design History Files (DHF)” and “Preliminary study” (a small illustration follows this table). The DHF folder has a standard organisation covering each development step of the project and the history of each processed probe, with quality check sheets. Most of the documents have dedicated templates, allowing a formal and easy check of their version and level of approval.
* Mechanical design files are managed by our design tool (TopSolid, Missler Software), giving access to each part and sub-part with versioning and user-rights management.
</td> </tr>
<tr>
<td>
_**Metadata** should be created to describe the data and aid discovery. **How
will you capture this information?** _
</td>
<td>
Not applicable for VERMON.
</td> </tr>
<tr>
<td>
_**Where will it be recorded?** _
</td>
<td>
All internal data will be kept according to the IS infrastructure in VERMON
and with its standard practices. We will work collectively with the other LUCA
partners to arrange the external data in standard formats.
</td> </tr> </table>
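A minimal sketch of how the per-probe folder structure described above could be created; the server root and probe identifier are hypothetical, and VERMON's actual tooling is its internal quality system rather than this script:

```python
from pathlib import Path

# Hypothetical server root and standard subfolder names.
ROOT = Path("/srv/probe-data")
SUBFOLDERS = ["Specifications", "Design History Files (DHF)", "Preliminary study"]

def create_probe_folders(probe: str) -> None:
    """Create the root folder for a probe with its standard subfolders."""
    for sub in SUBFOLDERS:
        (ROOT / probe / sub).mkdir(parents=True, exist_ok=True)

create_probe_folders("LUCA-probe-01")  # hypothetical probe identifier
```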
3. **Data Sharing**
<table>
<tr>
<th>
_**Where and how** will the data be made available and **how can they be
accessed**? Will you share data via a data repository, handle data requests
directly or use another mechanism?_
</th>
<th>
Internal data is stored internally at VERMON with no access from outside the
company network.
Externally, we will use the project web-site as the main gateway for sharing
data. We will post, after IP clearance, appropriate data sets alongside
publications on journal web-sites.
</th> </tr>
<tr>
<td>
_**To whom** will the data be made available? _
</td>
<td>
We aim to make the bulk of the data widely accessible; however, some data,
such as market studies and IP portfolios, will only be shared with entities
and people involved in the exploitation activities.
Specific data will be shared among the LUCA consortium to ensure the proper
advancement of the project. Different levels of sharing may be considered:
only one person, several people belonging to one partner, a group of partners
(WP group, topic group,…) or to the whole consortium.
</td> </tr>
<tr>
<td>
_What are the**technical mechanisms for dissemination** and necessary
**software** or other tools for enabling **re-use** of the data? _
</td>
<td>
We will use the LUCA web-site for all dissemination. The processed data will
be presented in a way that is cross-platform and software-independent to the
best of our abilities. If some software or dataset that we generate becomes of
value for the general biomedical optics community, we will consider developing
a dedicated web-site for that purpose.
</td> </tr>
<tr>
<td>
_Are any**restrictions on data sharing** required and **why** ? _
</td>
<td>
There will be restrictions based on the need for securing publications prior
to public release and for exploitation purposes. These are defined in the
project DOA.
</td> </tr>
<tr>
<td>
_What**strategies** will you apply **to overcome or limit restrictions** ? _
</td>
<td>
Data which has been approved for public release, after confidentiality and IP
clearance, either on the project website or in dissemination documents, will
deliberately be without limitations. Possible access restrictions to scientific
publications may be dictated by the publication editors. Whenever possible, we
will target editors which offer free access.
</td> </tr>
<tr>
<td>
_**Where (i.e. in which repository)** will the data be deposited? _
</td>
<td>
As mentioned above, we will utilize the project web-site, possibly dedicated
websites for specific outputs, and journal web-sites.
Within the LUCA consortium we will use the project management platform where
data files can be uploaded/downloaded in folders organised by WP and/or
specific topics with a version management and the possibilities to restrict
the access and add tags.
</td> </tr> </table>
4. **Archiving and preservation (including storage and backup)**
<table>
<tr>
<th>
_What procedures will be put in place for**longterm preservation of the data**
? _
</th>
<th>
VERMON internal infrastructure has been designed for long-term data storage
and retrieval. We use dedicated internal servers for each tool. These servers
are mirrored with a RAID infrastructure located in a separate room with
regular storage backup (daily/weekly/monthly).
This IS infrastructure cannot be accessed from outside of VERMON’s network.
</th> </tr>
<tr>
<td>
_**How long will the data be preserved** and what will its **approximate end
volume** be?_
</td>
<td>
Internally at VERMON, project archives up to 15 years old can be retrieved
quickly and completely.
</td> </tr>
<tr>
<td>
_Are**additional resources and/or is specialist expertise** needed? _
</td>
<td>
VERMON has two people dedicated to IS management.
</td> </tr>
<tr>
<td>
_Will there be any**additional costs** for archiving? _
</td>
<td>
The costs are budgeted within the project and internally.
</td> </tr> </table>
# f. Data sets collected at ECM
Four types of data will be collected at ECM:
1. “Component data”: Ultrasound beamformer specifications, electronic boards schematics and design, processing software specification and source code, FPGA firmware specifications and source code, mechanical drawings.
2. “Sub-system data”: Ultrasound probe integration test reports, ultrasound image evaluation test reports, integration test reports of the LUCA demonstrator, and an integration report of the communication protocol between the ultrasound and optical components.
3. “Evaluation data”: Evaluation data which are the results from the end-user tests in clinics.
4. “Exploratory data”: Market and competition analysis reports, cost structure, commercial product datasheet, business plan.
## i. Data set descriptions
<table>
<tr>
<th>
_**What data** will be **generated or collected** ? _
</th>
<th>
“Component data”: ECM will provide data related to the ultrasound beamformer
hardware, firmware and software. Generated data will consist of mechanical
drawings, electronic schematics, software and firmware source codes and
specification documents.
“Sub-system data”: ECM will generate test results associated with ultrasound
system performance including probe integration, image quality assessment,
interaction with the optical components, functional and compliance test
reports at the sub-system level and for the integrated LUCA system.
“Evaluation data”: ECM will be involved in the evaluation of the data measured
in the clinics by the end-users. ECM will be in charge of generation of the
ultrasound image and display of the optical measurements results. End-user
feedback on the LUCA device performance in clinical settings will be
collected.
“Exploratory data”: In collaboration mainly with ICFO and the industrial
partners, ECM will contribute to the exploitation aspects of LUCA, such as
market analysis, exploitation strategy, freedom-to-operate analysis, etc.
</th> </tr>
<tr>
<td>
_What is its**origin** ? _
</td>
<td>
“Component data” will be generated by ECM.
“Sub-system data” will be generated by ECM, HemoPhotonics, ICFO, POLIMI,
VERMON.
“Evaluation data” will be generated at IDIBAPS in collaboration with ICFO. ECM
will be involved in supporting the clinical investigators with the ultrasound
subsystem performance.
“Exploratory data” will be mainly generated from market analysis reports,
potential customers need analysis using external databases, studies and
available reports.
</td> </tr>
<tr>
<td>
_What are its**nature, format and scale** ? _
</td>
<td>
A wide range of data formats and scales will be generated.
1. Drawings and designs will use industry standard software and will, primarily, be confidential in nature. As much as possible, publicly accessible versions will be generated for dissemination purposes. These will be stored in forward compatible, time-tested formats. Specifics will arise by M18.
2. Software and firmware code will be developed in standard development suites for C++, VHDL on a dedicated computer system. Codes will be confidential.
3. Application data on the testing of LUCA will follow non-standard formats in binary and text files and will be evaluated with internal scripts based on common software tools (Excel, Matlab, etc.) on a dedicated computer system. The processed data will be saved in a report format and will be made publicly available once cleared in terms of IP and exploitation issues by the appropriate committee in the LUCA project, as foreseen by the description of action.
4. Exploitation strategy, market analysis, freedom-to-operate analysis etc. data will be confidential and will be shared within the consortium as reports and numbers. A summary will be published as part of the appropriate project deliverables.
Long-term access is ensured by the following measures:
1. All data will be stored in ECM data server secured by a redundant hard drive system that is totally backed up once a week and incrementally backed up on a daily basis.
2. All developed firmware and software code will be stored under proper consecutive version assignments.
3. All mechanical and electronic design files will be stored, managed with assignment of unique identifiers according to ECM Quality system requirements.
</td> </tr>
<tr>
<td>
_**To whom** could it be **useful** ? Does it underpin a scientific
publication? _
</td>
<td>
“Component data”: In the short-term, this type of data is only useful for the
internal LUCA partners. In the medium-term, it will be useful for our other
projects and when some of these components might become products.
“Sub-system data” and “Evaluation data” are useful internally for our
developments and upgrades. They may support occasionally scientific
publications.
“Exploratory data” is company confidential by default. For proper coordination
of exploitation in the LUCA consortium, some data subsets or aggregated data
can be shared.
</td> </tr>
<tr>
<td>
_Do**similar data sets exist** ? Are there _
_possibilities for**integration and reuse** ? _
</td>
<td>
This is a unique device and a data-set. There are possibilities to combine
processed data for review papers on optics + ultrasound combinations in
biomedicine as well as for reviews on applications of diffuse optics in cancer.
</td> </tr> </table>
2. **Standards and metadata**
<table>
<tr>
<th>
_How will the **data be collected/generated**?_
</th>
<th>
“Component data” and “sub-system data” will be generated by laboratory tests
using test equipment, using design and development software.
“Evaluation data” will be generated mainly from ex vivo phantom measurements
and by data acquired from the subjects.
“Exploratory data” will be generated by studies of external databases,
interviews with end-users and others.
Details are described in the specific work-packages.
</th> </tr>
<tr>
<td>
_Which community **data standards or methodologies** (if any) will be used at
this stage?_
</td>
<td>
Not applicable for ECM.
</td> </tr>
<tr>
<td>
_How will the data be**organised during the project?** _
</td>
<td>
“Component data” and “sub-system data” generated by ECM will be managed
according to the existing quality procedure related to documentation control
under the requirements of ISO 13485 standard.
“Evaluation data” will follow the conventions defined jointly by IDIBAPS,
HemoPhotonics and ECM who are the main drivers of the clinical studies and the
final software suites.
</td> </tr>
<tr>
<td>
_**Metadata** should be created to describe the data and aid discovery. **How
will you capture** _
_**this information?** _
</td>
<td>
Not applicable for ECM.
</td> </tr>
<tr>
<td>
_**Where will it be recorded?** _
</td>
<td>
Every data-set will be recorded in the ECM storage system described above.
</td> </tr> </table>
3. **Data Sharing**
<table>
<tr>
<th>
_**Where and how** _
_will the data be made available and**how can they be accessed** ? Will you
share data via a data repository, handle data requests directly or use another
mechanism? _
</th>
<th>
Data are stored internally on ECM servers with no access from outside the
company network.
Externally, we will use the project web-site as the main gateway for sharing
data.
</th> </tr>
<tr>
<td>
_**To whom** will the data be made available? _
</td>
<td>
We aim to make the bulk of the data widely accessible; however, some data,
such as market studies and IP portfolios, will only be shared with entities
and people involved in the exploitation activities.
Specific data will be shared among the LUCA consortium to ensure the proper
advancement of the project. Different levels of sharing may be considered:
only one person, several people belonging to one partner, a group of partners
(WP group, topic group) or to the whole consortium.
</td> </tr>
<tr>
<td>
_What are the**technical mechanisms for dissemination** and necessary
**software** or other tools for enabling **re-use** of the data? _
</td>
<td>
We will use the LUCA web-site for all dissemination. The processed data will
be presented in a way that is cross-platform and software-independent to the
best of our abilities. If some software or dataset that we generate becomes of
value for the general biomedical optics community, we will consider developing
a dedicated web-site for that purpose.
</td> </tr>
<tr>
<td>
_Are any**restrictions on data sharing** required and **why** ? _
</td>
<td>
There will be restrictions based on the need for securing publications prior
to public release and for exploitation purposes. These are defined in the
project DOA.
</td> </tr>
<tr>
<td>
_What**strategies** will you apply **to overcome or limit restrictions** ? _
</td>
<td>
We will use procedures such as embargo until publication in order to implement
IP protection measures.
</td> </tr>
<tr>
<td>
_**Where (i.e. in which repository)** will the data be deposited? _
</td>
<td>
As mentioned above, we will utilize the project web-site, possibly dedicated
websites for specific outputs, and journal web-sites.
Within the LUCA consortium we will use the project management platform where
data files can be uploaded/downloaded in folders organised by WP and/or
specific topics with a version management and the possibilities to restrict
the access and add tags.
</td> </tr> </table>
4. **Archiving and preservation (including storage and backup)**
<table>
<tr>
<th>
_What procedures will be put in place for**longterm preservation of the data**
? _
</th>
<th>
As described above, to ensure long-term access, ECM will implement the
following measures:
1. Forward compatible, time-tested formats such as text files (comma-separated values, open-source formats), and/or open-source binary formats (such as open document spreadsheets, open document text) and/or custom-made binary formats (with definition files stored in standard text formats) will be utilized with associated descriptive documentation.
2. Codes are based on standard languages with long-term availability (e.g. C, C#, VHDL).
3. All data will be stored in ECM data server secured by a redundant hard drive system that is totally backed up once a week and incrementally backed up on a daily basis.
4. All mechanical and electronic designs are stored and assigned unique identifiers according to the ECM Quality procedure related to Documentation Control, under the requirements of ISO 13485 standard.
</th> </tr>
<tr>
<td>
_**How long will the data be preserved** and what will its _
_**approximated end volume** be? _
</td>
<td>
We aim for a ten year preservation of the data.
The approximate end-volume of this data will be less than one terabyte.
</td> </tr>
<tr>
<td>
_Are**additional resources and/or is specialist expertise** needed? _
</td>
<td>
No.
</td> </tr>
<tr>
<td>
_Will there be any**additional costs** for archiving? _
</td>
<td>
The costs are budgeted within the project and internally.
</td> </tr> </table>
# g. Data sets collected at UoB
One type of data will be collected at UoB:
1. “Simulated data”: data produced using numerical models for evaluation using
phantoms.
## i. Data set descriptions
<table>
<tr>
<th>
_**What data** will be **generated or collected**?_
</th>
<th>
“Simulated data”: the UoB group will be mainly in charge of the computational
tools that predict physical systems. Only data from these computational
models will be generated.
We note that all these actions are collaborative and we expect significant
overlaps and data sharing between partners.
</th> </tr>
<tr>
<td>
_What is its**origin** ? _
</td>
<td>
“Simulated data” will be internal to the group and to the project.
</td> </tr>
<tr>
<td>
_What are its**nature, format and scale** ? _
</td>
<td>
A wide range of data formats and scales will be generated.
1. Research application data on the testing of LUCA will follow non-standard formats common to each laboratory doing the modelling (in this case UoB) and will be stored in binary and text files. They will be associated with an electronic notebook which will include links to analysis scripts (Matlab, Excel, custom software). The processed data will be saved in a report format and will be made publicly available once cleared in terms of IP and exploitation issues by the appropriate committee in the LUCA project, as foreseen by the description of action.
2. Supporting data used in academic peer reviewed publications will be made available, after publication, via a recognised suitable data sharing repository. This policy will be followed unless a partner or IEC can show that disseminating this data will compromise IP or other commercial advantage as detailed below. The project will use the metadata standards and requirements of the repository used for sharing the data.
At UoB, long-term access is ensured by the following measures:
1. Forward compatible, time-tested formats such as text files (comma-separated values, open-source formats such as R data-tables), and/or open-source binary formats (such as open document spreadsheets, open document text) and/or custom-made binary formats (with definition files stored in standard text formats) will be utilized with associated descriptive documentation.
2. All data will be stored on a secure hard-drive that is backed up every night by an incremental back-up script (rsbackup) to an external drive. Both drives are regularly replicated and upgraded at roughly three-year intervals (a small illustration follows this table).
3. All desktop computers used by the UoB personnel involved in the project are centrally managed by the UoB information technology department, which utilizes secure folders on servers that are backed up automatically.
</td> </tr>
<tr>
<td>
_**To whom** could it be **useful** ? Does it underpin a scientific
publication? _
</td>
<td>
“Simulated data”: In the short-term, this type of data is only useful for the
internal LUCA partners. In the medium-term, it will be useful for our other
projects. Some information may be used in scientific publications and
presentations as described below.
The data will be interesting to the end-user community and the biophotonics
community. We will submit articles to target journals for these communities
(e.g. Biophotonics, Applied Optics, Biomedical Optics Express, Journal of
Biomedical Optics, Nature Photonics).
</td> </tr>
<tr>
<td>
_Do **similar data sets exist**? Are there possibilities for **integration and
reuse**?_
</td>
<td>
There are possibilities to combine simulated data for review papers on optics
+ ultrasound combinations in biomedicine as well as for reviews on
applications of diffuse optics in cancer.
</td> </tr> </table>
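To illustrate the nightly incremental back-up described above, here is a minimal mtime-based sketch in Python; the paths are hypothetical, and the UoB group's actual script (rsbackup) is not reproduced here:

```python
import shutil
import time
from pathlib import Path

# Hypothetical source and backup locations.
SRC = Path("/data/luca")
DST = Path("/backup/luca")
STATE = DST / ".last_backup_time"

def incremental_backup() -> None:
    """Copy only files modified since the previous run, mirroring the
    nightly incremental back-up idea (illustration only)."""
    last = float(STATE.read_text()) if STATE.exists() else 0.0
    for src_file in SRC.rglob("*"):
        if src_file.is_file() and src_file.stat().st_mtime > last:
            target = DST / src_file.relative_to(SRC)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, target)  # copy2 preserves timestamps
    STATE.parent.mkdir(parents=True, exist_ok=True)
    STATE.write_text(str(time.time()))

incremental_backup()
```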
2. **Standards and metadata**
<table>
<tr>
<th>
_How will the **data be collected/generated**?_
</th>
<th>
“Simulated data” will be generated using computational models.
Details are described in the specific work-packages.
</th> </tr>
<tr>
<td>
_Which community**data standards or methodologies** (if any) will be used at
this stage? _
</td>
<td>
The lack of community data standards is one of the points that we explicitly
discuss and attempt to address in the LUCA project. Here, we mean the community
of biomedical optics researchers using diffuse optical methods.
</td> </tr>
<tr>
<td>
_How will the data be**organised during the project?** _
</td>
<td>
“Simulated data” will follow the conventions defined jointly by IDIBAPS, HEMO
and ECM, who are the main drivers of the clinical studies and the final
software suites. The UoB group will follow these naming conventions.
</td> </tr>
<tr>
<td>
_**Metadata** should be created to describe the data and aid discovery. **How
will you capture this information?** _
</td>
<td>
This will be captured in electronic notebooks, in header files in open-source
format (described above) and in case-report files. The exact details are being
defined as the systems mature.
</td> </tr>
<tr>
<td>
_**Where will it be recorded?** _
</td>
<td>
All internal data will be kept according to the different units at UoB and
their standard practices. We will work collectively with the other LUCA
partners to arrange the external data in standard formats. As explained above,
every dataset is associated with an electronic notebook, appropriate header
file and comments. These will be recorded in the storage system(s) described
above.
</td> </tr> </table>
3. **Data Sharing**
<table>
<tr>
<th>
_**Where and how** will the data be made available and **how can they be
accessed**? Will you share data via a data repository, handle data requests
directly or use another mechanism?_
</th>
<th>
Internal to the project, the UoB data will be shared using generic cloud-
storage (mainly Dropbox) wherever appropriate, e.g. when the shared data is
not very sensitive or is incomprehensible to an intruder. Otherwise, it will
be shared as encrypted files (PGP encryption; a small illustration follows
this table) using UoB’s own cloud system that is managed by its IT department.
Brief reports, spreadsheets and the like will be shared via the TEAMWORK
framework set up by EIBIR.
Externally, we will use the project web-site as the main gateway for sharing
data. We will post, after IP clearance, appropriate data sets alongside
publications on journal web-sites.
</th> </tr>
<tr>
<td>
_**To whom** will the data be made available? _
</td>
<td>
The bulk of the data will be widely accessible; however, some data, such as
market studies and IP portfolios, will only be shared with entities and
people involved in the exploitation activities.
</td> </tr>
<tr>
<td>
_What are the**technical mechanisms for dissemination** and necessary
**software** or other tools for enabling **re-use** of the data? _
</td>
<td>
We will use the LUCA web-site for all dissemination. The processed data will
be presented in a way that is cross-platform and software-independent to the
best of our abilities. If some software or dataset that we generate becomes of
value for the general biomedical optics community, we will consider developing
a dedicated web-site for that purpose.
</td> </tr>
<tr>
<td>
_Are any**restrictions on data sharing** required and **why** ? _
</td>
<td>
There will be restrictions based on the need for securing publications prior
to public release and for exploitation purposes. These are defined in the
project DOA. Furthermore, any patient data that could be used to identify the
patients will be properly anonymized prior to sharing, and the link between the
patient ID and the dataset will be permanently destroyed after an appropriate
time, based on the approved ethical protocols and procedures (an illustrative
sketch appears at the end of this section). This is the responsibility of
IDIBAPS, and the UoB group will receive data that is already anonymized
according to these principles.
</td> </tr>
<tr>
<td>
_What**strategies** will you apply **to overcome or limit restrictions** ? _
</td>
<td>
We will utilize procedures such as embargo until publication, anonymisation and
simplification.
</td> </tr>
<tr>
<td>
_**Where (i.e. in which repository)** will the data be deposited? _
</td>
<td>
As mentioned above, there are no community-defined standards for the
biomedical diffuse optics community. Therefore, we will utilize the project
web-site, possibly dedicated web-sites for specific outputs, and journal web-sites.
</td> </tr> </table>
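For the PGP-encrypted file exchange mentioned in the sharing table above, here is a minimal sketch (Python wrapping the GnuPG command line; the file name and recipient are hypothetical, and the recipient's public key is assumed to already be in the local keyring):

```python
import subprocess

# Hypothetical file and recipient address.
INFILE = "phantom_measurements.csv"
RECIPIENT = "[email protected]"

# Encrypt for the recipient; gpg writes INFILE + ".gpg" alongside the input.
subprocess.run(["gpg", "--encrypt", "--recipient", RECIPIENT, INFILE], check=True)

# The recipient later decrypts with their private key, e.g.:
# subprocess.run(["gpg", "--decrypt", "--output", INFILE, INFILE + ".gpg"], check=True)
```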
4. **Archiving and preservation (including storage and backup)**
<table>
<tr>
<th>
_What procedures will be put in place for**long-term preservation of the
data** ? _
</th>
<th>
At the UoB group, long-term access is ensured by the following measures:
1. Forward compatible, time-tested formats such as text files (comma-separated values, open-source formats such as R data-tables), and/or open-source binary formats (such as open document spreadsheets, open document text) and/or custom made binary formats (with definition files stored in standard text formats) will be utilized with associated descriptive documentation.
2. All data will be stored in a secure hard-drive that is backed up every night by an incremental back-up script. Both drives are regularly replicated and upgraded at roughly three year intervals.
3. All desktop computers used by the UoB personnel involved in the project are
centrally managed by the UoB IT department.
</th> </tr>
<tr>
<td>
_**How long will the data be preserved** and what will its **approximated end
volume** be? _
</td>
<td>
Apart from certain aspects of the clinical datasets – which will be managed by
IDIBAPS – there are no limitations on the preservation of the data. We will
follow academic standards and aim for a ten-year preservation of the data.
The approximate end-volume of this data will be less than one terabyte.
</td> </tr>
<tr>
<td>
_Are**additional resources and/or is specialist expertise** needed? _
</td>
<td>
No. We are all experts in the management of datasets of this size. Internally,
UoB-IT manages the general policies, makes suggestions on good practices and
ensures security against intrusions.
</td> </tr>
<tr>
<td>
_Will there be any**additional costs** for archiving? _
</td>
<td>
The costs are budgeted within the project and internally.
</td> </tr> </table>
H2020 SME 672570 — MammaPrint
Data Management Plan 31 AUG 2015
_Analysis:_
The tumor samples (FFPE blocks or slides) will be shipped to Agendia for
MammaPrint analysis purposes. The West German Study Group (WSG) will provide
the clinical and pathological information necessary for the execution of the
trial.
_Archival:_
Any remaining material will be returned to WSG Biobank in Hannover (Germany)
at the end of the project.
# Data storage at Agendia
Every step of the research process, from data collection to data
transformations, variable creation and the final analyses, is documented and
stored in a secure centralized location.
Data in the EDC is stored on the web-server in a secure database, which is
replicated for backup purposes. Data sent to, and retrieved from, the web-
servers is encrypted using SSL (Secure Sockets Layer) if so required. Only the
principal investigators and the ICT Director and Operations Director at the
company responsible for creating and maintaining the study database will have
access to the data entered. All externally involved individuals have executed
a Confidentiality Agreement to ensure that data is kept private.
The MammaPrint index data, the result of the genomic profile analysis, are
stored in a secure database according to the applicable SOPs that are in
place at Agendia.
Procedures with respect to electronic data storage, managing and monitoring
access control, systems and password security and backup procedures have been
written to ensure data integrity.
From the data warehouse, datasets will be made available for statistical
analysis, which will be performed by Agendia.
# Data accessibility
The gene expression profile data, translated into a MammaPrint index and
obtained for each patient in the four clinical trials, together with the
clinical and pathological information, are the source data for peer-reviewed
publications in (inter)national journals and for presentations at congresses
and symposia.
The algorithm to translate the microarray results into a MammaPrint index is
proprietary and cannot be accessed or shared with others.
Related to the strategy for knowledge management and protection, Agendia will
give open access to the scientific publications through open access
publishing.
Where allowed by the academic partner or collaborator, the data collected will
be open to other research groups. Any qualified researcher interested in using
the MammaPrint project data may apply for access by submitting a proposal.
Applications will be reviewed and approved by Agendia.
# Data archiving
All data generated during the course of the projects will be stored according
to the applicable SOPs and Compliance Manual in place at Agendia.
Procedures have been written on retention of records and materials, electronic
data storage, backup procedures and electronic, paper and study material
archiving.
At the end of the project all data will be integrated in a data warehouse.
Before the EDC and clinical database are closed, closure checks and a quality
assurance audit will be performed to verify the integrity and completeness of
the data collected and to assure data quality.
# EVER-EST Data Management Plan
## Document scope
This document presents the EVER-EST project Data Management Plan, describing
how EVER-EST Virtual Research Communities data is made Findable, Accessible,
Interoperable and Reusable (FAIR), taking into account the VRCs' specific
requirements in relation to openness and protection of scientific information,
commercialisation and Intellectual Property Rights (IPR), privacy concerns,
security and long-term preservation needs.
The EVER-EST DMP was added to the list of EVER-EST deliverables following the
amendment of the Grant Agreement signed on 16 August 2016, to provide a sound
data management plan as an essential part of the Earth Science research life
cycle and its best practices.
## Document structure
The overall structure of this document is based on the guidelines on FAIR Data
Management in Horizon 2020 (Version 3.0, dated 26 July 2016) and the new DMP
template included in those guidelines. The DMP will be updated over the course
of the project whenever significant changes arise (as foreseen by the
guidelines and in line with the periodic evaluation/assessment reviews of the
EVER-EST project).
# Data Management Plan Components
## Data management plan data summary
EVER-EST will provide earth scientists with the means to seamlessly manage
both the data involved in their computationally intensive disciplines and the
scientific methods applied in their observations and modelling, which lead to
the specific results that need to be attributable, validated and shared within
the community, e.g. in the form of scholarly communications. Such data
management capabilities will be augmented with the models, techniques and
tools necessary for the preservation of scientific methods and their
implementation in computational forms such as scientific workflows, which are
increasingly used in the Earth Science domain. Central to this approach is the
concept of Research Objects (ROs), semantically rich aggregations of resources
that bring together data, methods and people in scientific investigations. ROs
enable the creation of digital artefacts that can encapsulate scientific
knowledge and provide a mechanism for sharing and discovering assets of
reusable research. The scientific community involves multi-disciplinary
scientists in all Earth Science disciplines and policy impact areas. Policy
makers are responsible for defining the main Earth health indicators, disaster
risk management actions and investments.
EVER-EST follows a user-centric approach driven by four pre-selected
communities in Earth Sciences:
* Sea Monitoring;
* Geo Hazard Supersites;
* Land Monitoring;
* Natural Hazards.
For each of these communities, the relevant data summary is provided in the
following subchapters, highlighting:
* The purpose of the data collection/generation for the specific community;
* Types and formats of data that are generated and/or collected by the community;
* Existing data re-use and how the data are re-used;
* Origin of the data;
* Expected size of the data;
* The 'data utility'.
A detailed description of the pre-selected communities' input data needs and
generated output data is provided by AD[3].
### Sea Monitoring Virtual Research Community
The Sea Monitoring VRC focuses on finding new ways to measure the quality of
the maritime environment. It is quite wide and heterogeneous, consisting of
multi-disciplinary scientists such as biologists, geologists, oceanographers
and GIS experts, as well as agencies and authorities (e.g. ARPA or the Italian
Ministry of Environment). The scientific community has the main role of
assessing the best criteria and indicators for defining the Good Environmental
Status descriptors defined by the Marine Strategy Framework Directive (MSFD).
The indicator derivation process includes the following:
* **Datasets:** Raster data for seafloor bathymetry, backscatter and hydrodynamic models; vector data for the coral occurrences; jellyfish occurrences from citizen science monitoring (video, photo, reports, social media information); Mediterranean Sea physical and biogeochemical variables from satellite data platforms (Copernicus, _http://marine.copernicus.eu/services-portfolio/access-to-products/_ , AVISO, _http://www.aviso.altimetry.fr/en/data/products.html_ , etc.); Posidonia meadows habitat mapping.
* **Software:** ArcGIS tools for deriving environmental variables and geospatial/statistical analysis, .xls matrix, R, Maxent.
* **Documents:** Published abstracts, PPT presentations of the models, MSFD documents on habitat extent and invasive species distribution, previous papers on habitat suitability models.
### GeoHazards Supersites Virtual Research Community
The Geohazard Supersites and Natural Laboratories (GSNL) is a collaborative
initiative supported by GEO (Group on Earth Observations) within the Disasters
Resilience Benefit Area. The goal of GSNL is to facilitate a global
collaboration between Geohazard monitoring agencies, satellite data providers
and the Geohazard scientific community to improve scientific understanding of
the processes causing geological disasters and to better estimate geological
hazards. The Geohazards presently addressed in the GSNL initiative are all
hazards linked to earthquakes and volcanic eruptions (e.g. seismic shaking,
ground deformation, seismically triggered landslides, ash fall, pyroclastic
flow, lava flow). The monitoring of these hazards is done via Permanent
Supersites, which deal with prevention activities (i.e. science to support
seismic and volcanic hazard assessment), and Event Supersites, which have a
limited duration and are dedicated to intensive scientific research on
specific eruptions or earthquakes. In EVER-EST, the activity of the
Geohazard VRC is focused on Permanent volcanic Supersites (Mount Etna,
Icelandic volcanoes, Campi Flegrei/Vesuvio). The main activities of this VRC
need the following resources:
* **Datasets:** geophysical parameters describing seismic and volcanic processes and phenomena (e.g. ground displacement and velocity, gas composition, atmospheric water content, ash particle density, etc.), SAR and optical satellite data (e.g. Sentinel 1 & 2, COSMO-SkyMed, TerraSAR-X, Radarsat 2, ALOS 2, MODIS, MSG, Pleiades, etc.), GPS data.
* **Software/Models:** Scientific modelling codes used to simulate the effects of the phenomena and processes. They are used to generate space/time representations of geophysical phenomena (e.g. measures of surface deformation, models of ash dispersal, models of the magmatic reservoir). Commercial image analysis software for SAR and optical data (SARSCAPE). Commercial software for data analysis (Matlab, ENVI/IDL, Fortran, Python, etc.). Geographic Information System software (ArcGIS).
* **Documents:** Publications in journals or conference proceedings, validation reports, reports on research results, Research Objects including scientific results, workflows, bibliography, topical discussions, etc.
### Land Monitoring Virtual Research Community
The European Union Satellite Centre (SatCen) represents, in the framework of
EVER-EST and in line with the Secure Societies Horizon 2020 Societal
Challenge, the stakeholders involved in the decision-making process of the
EU in the field of the Common Foreign and Security Policy (CFSP).
Land Monitoring is key in providing useful information to those entities that
have to:
* Make informed decisions on the monitoring of urban, built-up and natural environments;
* Identify certain features and anomalies or changes over areas of interest as well as of natural resources;
* Monitor feature/change condition and exploitation to address related environmental, scientific, humanitarian, health, political and security issues, as well as to adopt sustainable management practices.
Thus the Land Monitoring community can be described as composed of
institutional and operational entities as well as scientific and research
entities, potentially having different final goals but using the same space
assets and similar services/techniques.
The Land Monitoring VRC data generation process includes:
* **Datasets**: Satellite images (e.g. Sentinel 1 and other data from the Copernicus programme and third party missions), other geotagged data (structured and unstructured) coming from social, commercial, open and other sources (e.g. social media information and newsfeeds);
* **Software**: Data ingestion tools (from catalogues such as the ESA Sentinels Scientific Hub); pre-processing and processing tools (e.g. calibration, co-registration, change detection) from open software (e.g. SNAP), open libraries (e.g. GDAL) and custom developed algorithms (mainly written in Java); these tools might be readapted for use in the frame of the EVER-EST project (a small illustration follows this list);
* **Documents**: Documentation on the data (e.g. Sentinels' guidebooks or data provenance) and the (pre-)processing algorithms ingested (e.g. reference papers), as well as validation procedures and reports (e.g. description of possible methods to validate the whole processing chain).
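The sketch below (Python with GDAL bindings; the file name is hypothetical) shows the kind of ingestion step the tools above perform: opening a satellite raster and reading a band into an array for further processing such as calibration or change detection:

```python
from osgeo import gdal

gdal.UseExceptions()  # raise Python exceptions instead of returning None

# Hypothetical pre-processed Sentinel-1 GeoTIFF.
dataset = gdal.Open("S1_scene_20240101.tif")
print("Size:", dataset.RasterXSize, "x", dataset.RasterYSize,
      "bands:", dataset.RasterCount)

# Read the first band into a NumPy array for downstream processing
# (e.g. calibration, co-registration, change detection).
band = dataset.GetRasterBand(1)
pixels = band.ReadAsArray()
print("Mean pixel value (digital numbers):", pixels.mean())
```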
### Natural Hazards Virtual Research Community
The Natural Hazards Partnership (NHP) is a group of 17 collaborating public
sector organisations comprising government departments, agencies and research
organisations. The NHP provides a mechanism for providing co-ordinated advice
to government and those agencies responsible for civil contingency and
emergency response during natural hazard events.
The NHP provides daily assessments of hazard status via the Daily Hazard
Assessment (DHA) to the UK responder and resilience communities,
pre-prepared science notes providing descriptions of all relevant UK hazards,
and input to the National Risk Assessment. In addition, the NHP has set up a
Hazard Impact Model (HIM) group tasked with modelling the impact of a range of
UK hazards within a common framework and with operational delivery of the model
outputs. Initially they are concentrating on modelling the impact of 3 key
hazards – surface water flooding, land instability and high winds – on people,
their communities and key assets such as road, rail and utility networks. The
partners share scientific expertise, data and knowledge on hydrological
modelling, meteorology, engineering geology, GIS and data delivery and
modelling of socio-economic impacts.
The HIM data generation process includes (a small illustration of the hazard
footprint format follows this list):
* **Dataset:** Impact Library, a repository of pre-calculated impact data for each HIM; a surface water flooding hazard footprint generated, using the G2G modelling process, in ASCII grid format; county level reporting areas generated by the Flood Forecasting Centre in ESRI shapefiles;
* **Software/Methods:** R and Python scripting languages used for modelling impacts of hazards based on hazard footprint data and the Impact Library; ArcGIS geoprocessing tools for generation of polygonised impact outputs.
* **Documentation:** Impact results that require summary and presentation to end users, including an interpretation of the risk when forecast data are used in the initial stages of the modelling. Guidelines on running hazard impact modelling scenarios and schematic descriptions of the hazard impact modelling workflows. Hazard Impact Framework report enabling standards across different hazard scenarios. Related conference presentations, papers and proceedings, as well as peer-reviewed papers authored by NHP partners and their individual institutions.
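Since the surface water flooding hazard footprint above is delivered in ASCII grid format, a minimal Python sketch of reading such a file (the file name is hypothetical) may help clarify what downstream impact scripts consume:

```python
# Minimal reader for an ESRI ASCII grid hazard footprint (hypothetical file).
def read_ascii_grid(path):
    """Read an ESRI ASCII grid: six 'key value' header lines (ncols, nrows,
    xllcorner, yllcorner, cellsize, NODATA_value) followed by cell values."""
    header, values = {}, []
    with open(path) as f:
        for _ in range(6):
            key, value = f.readline().split()
            header[key.lower()] = float(value)
        for line in f:
            values.extend(float(v) for v in line.split())
    return header, values

header, cells = read_ascii_grid("swf_footprint.asc")
nodata = header.get("nodata_value")
flooded = [v for v in cells if v != nodata and v > 0]
print(len(flooded), "flooded cells of", int(header["ncols"] * header["nrows"]))
```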
## Data management plan scope
Earth Science communities using the EVER-EST infrastructure during the
research life cycle generate scientific peer-reviewed publications to which
the open access obligations in Horizon 2020 apply. The underlying research
data and products within the scope of this data management plan are
heterogeneous, as summarized in the previous chapter, and can be grouped in:
* Research data collected or processed/generated as part of the VRC research life cycle, and intermediate products, as preliminarily identified in [AD4] and summarized in chapter 2.1;
* Research objects.
# Findable, Accessible, Interoperable and Reusable (FAIR) Data
The research object concepts, technologies and methodologies enable the vision
of 'FAIR' (Findable, Accessible, Interoperable and Re-usable) data management
practices while supporting the VRCs' specific requirements in relation to both
openness and protection of scientific information, commercialisation and IPR,
privacy concerns, security and long-term preservation needs.
The research object paradigm, life cycle model and technology support FAIR
data management recommendations related to the sharing, documentation and
communication of scientific knowledge as well as the reproducibility of
scientific results, including:
* Documenting best practices (WFs, analysis methods, monitoring methods, etc.).
* Providing long-term preservation of scientific knowledge (how data are analysed, how results are validated, etc.).
* Providing long-term preservation of end-user stories (demonstrating scientist/end-user interactions), also for public dissemination.
* Executing “standard” workflows for data analysis/modelling in order to validate results and generate “standard” products (e.g. deformation maps) as mass products.
* Testing algorithms and data, either modifying the workflow to execute new analysis methods/models on the same dataset, or executing the original workflow on different datasets.
* Supporting long-term data series and historical science based on past observations and the validation of models with actual data.
Research Objects for the EVER-EST VRCs can encapsulate the following
data/product information:
* **Workflows:** High-level flowcharts and formal workflow descriptors (e.g. Taverna bundles). Also, metadata such as text files describing the general workflow, including all information needed by scientists to choose this workflow for other use cases (assumptions, usage issues, etc.).
* **Documentation:** ranging from scientific papers, bibliography and user manuals to impact results, reports, etc.
* **Data:** Input data (for processing and for validation), output data (intermediate non-validated and final validated) and a report on use case data and results.
* **Processing components:** Software, web services, configuration setup, hardware requirements.
* **Products**: results obtained using workflow-centric ROs or external processing tools. These results may be preliminary or not yet published, but need to be encapsulated in ROs for scientific purposes or for risk management purposes. Usually accompanied by explanatory text files.
At this stage of the project, for the scope of this data management plan the
following RO types, as described in [AD4 and AD5], have been identified:
* **Workflow-centric RO**: contains a workflow, whether a Taverna WF bundle or just executable code and/or Fortran, Matlab, etc. source code, executable not only on the VRE.
* **Data-centric RO**: contains references to a dataset or observation (normally many of them). Depending on the scope it may be static or be a live RO to which further data are added periodically.
* **Research Product-centric RO**: contains the (normally validated) results of one or more processing runs (e.g. a workflow for source modelling). It could instead contain the result of qualitative interpretations (e.g. a map of geomorphological features).
In addition, the following RO type, not under the scope of this DMP, has been
identified:
* Documentation and bibliographic Research Objects.
## Making data findable, including provisions for metadata
EVER-EST includes activities aiming at the definition and harmonization of
metadata for the VRE as part of the RO model definition. The detailed
description of these activities can be found in [AD4, AD5]. This work is
intended to harmonize, over the course of the project, the data and research
objects produced using the VRE, and the VRC communities have already started
to benefit from internal training-by-doing, generating and using ROs. VRCs
taking part in the project might have their own community-specific metadata
schemes. However, the overall aim of the EVER-EST data management policy at
the start of the project was to encourage the use of these schemes and
documentation methods while progressing on the harmonization of the metadata
and ontologies, taking into account the specific needs of the VRCs. The use of
suitable international standards (e.g. the INSPIRE directive, the RDA Metadata
Standards Directory, metadata standards for long-term data preservation) has
been assessed. Data produced and used during the project will be identifiable
by means of a standard identification mechanism (e.g. persistent and unique
identifiers such as Digital Object Identifiers); the Registry of Research Data
Repositories and repositories such as Zenodo, OpenAIRE and CERN are currently
under assessment.
## Making data openly accessible
All EVER-EST project results that are open to use for any purpose will be
appropriately licensed using an open licensing policy (e.g. Creative Commons
4.0 BY or similar). Unless required by the Consortium Agreement or VRC-specific
IPR, all openly accessible EVER-EST data products will be discoverable
(i.e. via metadata harvesting access) within a reasonable time after data
collection and/or generation. The default time for this is six (6) months from
the end of result generation. It is to be noted that the data provided by the
VRCs may not be fully open, depending on the specific licence and conditions
of use of the input data. This may apply, for instance, to some satellite data
(e.g. COSMO-SkyMed or Radarsat 2) or to some in situ datasets.
For many datasets produced, the storage and access management will be
implemented using the Research Objects environment and the EVER-EST VRE and
VRC repositories, if applicable. Access will be provided to Commission
officials and their appointed reviewers. Access to IPR-sensitive data will be
adequately controlled. A detailed description of the data access
infrastructure and of the data set and RO catalogues is provided by
[AD4, AD5, AD6].
## Making data interoperable
As part of the project objectives, work is ongoing to assess the
interoperability of VRC research data and research objects. Metadata
vocabularies, standards and commonly used ontologies are being assessed to
facilitate inter-disciplinary cross-fertilisation of results. At this
stage of the project, the research object model has been updated and
extended as follows (a small annotation sketch, using rdflib, follows the
list):
* Included the required vocabulary terms for describing geographic and time information, data access policies and intellectual property.
* Updated and aligned the research object core ontology and required extensions with the latest model of the Annotation Ontology, called Open Annotation Ontology (and since July 2016 Web Annotation Ontology, W3C Candidate Recommendation).
* Cleaned and properly annotated all the ontologies with provenance and metadata information.
* Adaptation and integration of existing Earth Observation metadata specifications.
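The following sketch illustrates, with the rdflib library, the kind of provenance and metadata annotation described above; the ontology IRI and the literal values are hypothetical.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, OWL, RDF, XSD

# Hypothetical IRI standing in for one of the extended RO ontologies.
onto = URIRef("http://example.org/ro/ever-est-extension")
PROV = Namespace("http://www.w3.org/ns/prov#")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("prov", PROV)

# Annotate the ontology with provenance and metadata information.
g.add((onto, RDF.type, OWL.Ontology))
g.add((onto, DCTERMS.created, Literal("2016-07-01", datatype=XSD.date)))
g.add((onto, DCTERMS.creator, Literal("EVER-EST consortium")))
# Record the alignment with the Web Annotation Ontology.
g.add((onto, PROV.wasDerivedFrom, URIRef("http://www.w3.org/ns/oa")))

print(g.serialize(format="turtle"))
```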
## Increase data re-use
Each VRC is currently assessing how to license its data to permit the
widest possible reuse, and to clearly identify any requirements for data
embargo and, where applicable, the length of time for which the data will
remain usable. Data quality assurance processes are being implemented
within the checklist embedded in the Research Object.
## Allocation of resources, long-term data preservation
Each community is responsible for its VRC-specific data storage
requirements. The EVER-EST project will provide services for dataset
storage, sharing and backup as described in [AD6]. Data selected for
long-term preservation will be included in the VRC-specific long-term
preservation requirements. The preservation decision will consider the
following aspects:
1. re-usability of the data (including metadata);
2. resources needed for long-term storage (size, access);
3. expected storage period;
4. possibility of external data storage using non-project-related repositories.
Dataset storage, curation and maintenance costs during the project lifetime
are valid EVER-EST costs. The resources needed for long-term preservation
and storage will be considered in the sustainability plan. It should be
noted that the adoption of the research object paradigm includes additional
metadata in the form of checklists that monitor and diagnose potential
decay, derived e.g. from issues with the availability or accessibility of
the data due to platform downtime or data format changes, either as a fork
at the VRE or as a reference to the original dataset on the side of the
data provider.
## Data security
Data recovery, secure storage and transfer of sensitive data are being
addressed at the architectural design level [AD11] and will be described in
detail in the next release of the plan. Basic access control to the content
of research objects, particularly for third parties accessing them, is
currently under implementation.
## Ethical aspects
As stated in the Grant Agreement, the data sets collected or generated in
EVER-EST raise no ethical concerns.
## Other: Licensing and IPR
Ownership of the data and results produced throughout the project activities
is defined in the Consortium Agreement and by the VRC-specific IPR
regulations. The following requirements on functionalities, related to the
research object paradigm and impacting the EVER-EST architecture design, are
under implementation:
* Citation and attribution: sharing of data and methods, particularly at a point in time before an actual paper is published by a team of scientists, so that data and methods are fully referenceable, e.g. as a research object with its own DOI.
* Licensing mechanisms: allow scientists to define the terms under which their research objects can be used. This would help create confidence in the research object and establish an etiquette for acknowledgement that supports the previous point.
0873_EarthServer-2_654367.md
# Introduction
The EarthServer-2 project is itself built around concepts of data management
and accessibility. Its aim is to implement enabling technologies to make large
datasets accessible to a varied community of users. The intention is not to
create new datasets but to make existing datasets (identified at the start of
the project) easier to access and manipulate, encouraging data sharing and
reuse. Additional datasets will be added during the life of the project as
they become available and the DMP will be updated as a “live” document to
reflect this.
# Data Organisation, Documentation and Metadata
Data will be accessible through the Open Geospatial Consortium (OGC) Web
Coverage Processing Service (WCPS) and Web Coverage Service (WCS)
standards. EarthServer-2 will establish data/metadata integration on a
conceptual level (by integrating array queries with known metadata search
techniques such as tabular search, full text search, ontologies etc.) and on a
practical level (by utilizing this integrated technology for concrete
catalogue implementations based on standards like ISO 19115, ISO 19119 and ISO
19139 depending on the individual service partner needs).
# Data Access and Intellectual Property
Data access restrictions and intellectual property rights will remain as set
by the dataset owners (see Section 6). The datasets identified for the initial
release have no access restrictions.
# Data Sharing and Reuse
The aim of EarthServer-2 is to make data available for sharing and reuse
without requiring users to download the entire (huge) dataset. Data will be
available through the OGC WCPS and WCS standards, allowing users to filter
and process data at source before transferring the results back to the
client. Access will be simplified by the provision of data services
(Marine, Climate, Earth Observation, Planetary and Landsat) that will
provide web portals with a user-friendly interface to filtering and
analysis tools, as required by the application domain.
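To illustrate the filter-at-source model, the sketch below sends a WCPS ProcessCoverages request that trims a coverage in space and time on the server and returns only the extracted values. The endpoint URL and coverage name are hypothetical placeholders for the services listed in the data register.

```python
import requests

# Hypothetical service endpoint; the actual access URLs are listed
# in the data register tables below.
ENDPOINT = "http://example.org/rasdaman/ows"

# WCPS query: trim the (hypothetical) coverage in space and time and
# encode the result as CSV, so only the filtered values leave the server.
wcps_query = (
    'for $c in (OC_CCI_chlor_a) '
    'return encode($c[Lat(40:60), Long(-20:0), '
    'ansi("2010-01":"2010-12")], "csv")'
)

r = requests.get(ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps_query,
})
r.raise_for_status()
print(r.text[:200])  # first bytes of the server-side processed result
```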
# Data Preservation and Archiving
EarthServer-2 will not generate new data; preservation and archiving will be
the responsibility of the upstream projects from which the original data was
obtained.
# Data Register
The data register will be maintained as a “live” document; a snapshot will be
created for each DMP release (see 6.1 and following sections).
The data register will be based upon information and restrictions supplied
by the upstream data provider, matched to the Horizon 2020 guidelines as
below (in _italics_):
* **Data set reference and name**
_Identifier for the data set to be produced._
* **Data set description**
_Descriptions of the data that will be generated or collected, its origin (in
case it is collected), nature and scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration and reuse._
* **Standards and metadata**
_Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created._
* **Data sharing**
_Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling reuse, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.). In case the
dataset cannot be shared, the reasons for this should be mentioned (e.g.
ethical, rules of personal data, intellectual property, commercial, privacy-
related, security-related)._
* **Archiving and preservation (including storage and backup)** _Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered._
Currently within EarthServer-2, the original data are held by upstream
providers who have their own policies. In this case, archiving and
preservation responsibility will remain with the upstream project.
## Marine Science Data Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**ESA OC-CCI v2**
</th> </tr>
<tr>
<td>
**Organisation**
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
The ESA Climate Change Initiative (CCI) programme is generating a set of
validated, error characterised, Essential Climate Variables (ECVs) from
existing satellite observations. The Ocean Colour ECV is providing ocean
colour data, with a focus on Case 1 waters, which can be used by climate
change prediction and assessment models. The dataset is created by band-
shifting and bias-correcting MERIS and MODIS data to match SeaWiFS data,
merging the datasets and computing per-pixel uncertainty estimates. See
http://www.esa-oceancolourcci.org/?q=webfm_send/496 for full details.
</td> </tr>
<tr>
<td>
**Standards**
</td>
<td>
Data will be made available through the OGC WCPS and WCS standard.
</td> </tr>
<tr>
<td>
**Spatial extent**
</td>
<td>
Global
</td> </tr>
<tr>
<td>
**Temporal extent**
</td>
<td>
1981-2013
</td> </tr>
<tr>
<td>
**Project Contact**
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**Upstream Contact**
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**Limitations**
</td>
<td>
None
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Free
</td> </tr>
<tr>
<td>
**Constraints**
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Data Format**
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
**Access URL**
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
Data is part of long term ESA CCI project and the original copy is maintained
there.
</td> </tr> </table>
_**Table 1: Data set description for the MSDS.** _
## Climate Science Data Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**ECMWF ERA Reanalysis**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
FP7 Era-Clim2
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS and WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1900-2010
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Dick Dee (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free, but no redistribution
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GRIB
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://apps.ecmwf.int/datasets/
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Stored in MARS archive - original data will be kept without time limit
</td> </tr> </table>
_**Table 2: Data set description for the CSDS.** _
## Earth Observation Data Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**MOD 04 - Aerosol Product; MOD 05 - Total Precipitable**
**Water; MOD 06 - Cloud Product; MOD 07 - Atmospheric Profiles; MOD 35 - Cloud
Mask**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**NASA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
There are three MODIS Level 3 Atmosphere Products, each covering a different
temporal scale: Daily, 8-Day, and Monthly. Each of these Level 3 products
contains statistics derived from over 100 science parameters from the Level 2
Atmosphere products: Aerosol, Precipitable Water, Cloud, and Atmospheric
Profiles. A range of statistical summaries (scalar statistics and 1- and
2-dimensional histograms) are computed, depending on the Level 2 science
parameter. Statistics are aggregated to a 1° x 1° equal-angle global grid. The
daily product contains ~700 statistical summary parameters. The 8-day and
monthly products contain ~900 statistical summary parameters.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS and WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2000 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
http://modaps.nascom.nasa.gov/services/user/
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
The distribution of the MODAPS data sets is funded by NASA's Earth-Sun System
Division (ESSD). The data are not copyrighted; however, in the event that you
publish data or results using these data, we request that you include the
following acknowledgment:
_"The data used in this study were acquired as part of the NASA's Earth-Sun
System Division and archived and distributed by the MODIS Adaptive Processing
System_
_(MODAPS)."_
We would appreciate receiving a copy of your publication, which can be
forwarded to [email protected].
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF (generated from HDF)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of Level-2 MODIS Atmosphere Products
</td> </tr> </table>
_**Table 3: First data set description for the EODS.** _
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**MOD 08 - Gridded Atmospheric Product; MOD 11 - Land Surface Temperature and
Emissivity**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**NASA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
There are three MODIS Level 3 Atmosphere Products, each covering a different
temporal scale: Daily, 8-Day, and Monthly. Each of these Level 3 products
contains statistics derived from over 100 science parameters from the Level 2
Atmosphere products: Aerosol, Precipitable Water, Cloud, and Atmospheric
Profiles. A range of statistical summaries (scalar statistics and 1- and
2-dimensional histograms) are computed, depending on the Level 2 science
parameter. Statistics are aggregated to a 1° x 1° equal-angle global grid. The
daily product contains ~700 statistical summary parameters. The 8-day and
monthly products contain ~900 statistical summary parameters.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS and WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2000 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
http://modaps.nascom.nasa.gov/services/user/
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
The distribution of the MODAPS data sets is funded by NASA's Earth-Sun System
Division (ESSD). The data are not copyrighted; however, in the event that you
publish data or results using these data, we request that you include the
following acknowledgment:
_"The data used in this study were acquired as part of the NASA's Earth-Sun
System Division and archived and distributed by the MODIS Adaptive Processing
System_
_(MODAPS)."_
We would appreciate receiving a copy of your publication, which can be
forwarded to [email protected].
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF (generated from HDF)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of Level-3 MODIS Atmosphere Products
</td> </tr> </table>
_**Table 4: Second data set description for the EODS.** _
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
SMOS Level 2 Soil Moisture
(SMOS.MIRAS.MIR_SMUDP2); SMOS Level 2 Ocean Salinity (SMOS.MIRAS.MIR_OSUDP2)
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
ESA's Soil Moisture Ocean Salinity (SMOS) Earth Explorer mission is a radio
telescope in orbit, but pointing back to Earth not space. Its Microwave
Imaging Radiometer using Aperture Synthesis (MIRAS) radiometer picks up faint
microwave emissions from Earth's surface to map levels of land soil moisture
and ocean salinity.
These are the key geophysical parameters, soil moisture for hydrology studies
and salinity for enhanced understanding of ocean circulation, both vital for
climate change models.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS and WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
12-01-2010 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
EO-Support (https://earth.esa.int/web/guest/contact-us)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
https://earth.esa.int/web/guest/data-access/how-to-access-eodata/earth-observation-data-distributed-by-esa
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF (generated from measurements geo-located in an equal-area grid system
ISEA 4H9)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of Level-2 SMOS Products
</td> </tr> </table>
_**Table 5: Third data set description for the EODS.** _
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Landsat8 L1T**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Level 1 T- Terrain Corrected
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS and WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
European
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2014 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
EO-Support (https://earth.esa.int/web/guest/contact-us)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
Terms and Conditions for the Utilisation of Data under ESA’s Third Party
Missions scheme
</td> </tr>
<tr>
<td>
License
</td>
<td>
Open and Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
Acceptance of Terms and Conditions
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
ESA is an International Co-operator with USGS for the
Landsat-8 Mission. Data is downlinked via Kiruna and Matera (KIS and MTI)
stations whenever the satellite passes over Europe, starting from November
2013. Typically the stations will each receive 2 or 3 passes per day, and
there will be some new scenes for each path, in accordance with the overall
mission acquisition plan.
The Neustrelitz data are available on the portal from May 2013 to December
2013. Data will be processed to either the L1T or L1Gt product format as
soon as it is downlinked. The target is for scenes to be available for
download within 3 hours of reception. https://landsat8portal.eo.esa.int/faq/
</td> </tr> </table>
_**Table 6: Fourth data set description for the EODS.** _
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Sentinel2**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Level-1C
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS and WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
Q3 2015
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free and Open
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
Registration
A maximum of 2 concurrent downloads per user is allowed in order to ensure a
download capacity for all users.
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
Sentinel Standard Archive Format for Europe (SAFE) format, including image
data in JPEG2000 format, quality indicators, auxiliary data and metadata
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
Sentinels Scientific Data Hub: https://scihub.esa.int
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of ESA Earth Observation Long Term Data Preservation (LTDP)
Programme
</td> </tr> </table>
_**Table 7: Fifth data set description for the EODS.** _
## Planetary Science Data Service
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
MGS MOLA GRIDDED DATA RECORDS
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**Jacobs University**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Mars Orbiter Laser Altimeter (MOLA)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS and WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
Not Applicable (gridded from multiple experiment data records)
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://ode.rsl.wustl.edu/mars/pagehelp/quickstartguide/index.html?mola.htm
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long term NASA PDS archives and the original copies are
maintained there
</td> </tr> </table>
_**Table 8: First data set description for the PSDS.** _
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
MRO-M-CRISM-3-RDR-TARGETED-V1.0
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**Jacobs University**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
TRDR - Targeted Reduced Data Records contain data calibrated to radiance or
I/F.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS and WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Local
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
Variable
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://ode.rsl.wustl.edu/mars/pagehelp/quickstartguide/index.html?crism.htm
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long term NASA PDS archives and the original copies are
maintained there
</td> </tr> </table>
_**Table 9: Second data set description for the PSDS.** _
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
MRO-M-CRISM-5-RDR-MULTISPECTRAL-V1.0
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**Jacobs University**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
MRDR - Multispectral Reduced Data Records contain multispectral survey data
calibrated, mosaicked, and map projected.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS and WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Regional/global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
Not Applicable, derived data from multiple acquisition times
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://ode.rsl.wustl.edu/mars/pagehelp/quickstartguide/index.html?crism.htm
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long term NASA PDS archives and the original copies are
maintained there
</td> </tr> </table>
_**Table 10: Third data set description for the PSDS.** _
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
LRO-L-LOLA-4-GDR-V1.0
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**Jacobs University**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
LRO LOLA GRIDDED DATA RECORD
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
Not Applicable (gridded from multiple experiment data records)
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://ode.rsl.wustl.edu/moon/pagehelp/quickstartguide/index.html?lola.htm
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long term NASA PDS project and the original copies are
maintained there
</td> </tr> </table>
_**Table 11: Fourth data set description for the PSDS.** _
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
MEX-M-HRSC-5-REFDR-DTM-V1.0
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**Jacobs University**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Mars Express HRSC topography
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Local
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
Variable
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
ftp://psa.esac.esa.int/pub/mirror/MARS-EXPRESS/HRSC/MEX-M-HRSC-5-REFDR-DTMV1.0/DOCUMENT/
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long term ESA PSA project and the original copies are
maintained there
</td> </tr> </table>
_**Table 12: Fifth data set description for the PSDS.** _
## Landsat Data Cube Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Landsat**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ANU/NCI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The Australian Reflectance Grid (ARG)
http://geonetwork.nci.org.au/geonetwork/srv/eng/metadata.show?id=24&currTab=simple
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Longitude: 108 to 155, Latitude: -10 to -45, Universal Transverse Mercator (UTM)
and Geographic Lat-Lon
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1997-now
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Commonwealth of Australia (Geoscience Australia) 2015.
Creative Commons Attribution 4.0 International Australia License.
https://creativecommons.org/licenses/by/4.0/
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
Commonwealth of Australia (Geoscience Australia) 2015.
Creative Commons Attribution 4.0 International Australia License.
https://creativecommons.org/licenses/by/4.0/
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF [NetCDF-CF conversion currently underway]
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://dap.nci.org.au/thredds/remoteCatalogService?catalog=http://dapds00.nci.org.au/thredds/catalog/rs0/catalog.xml
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
This data collection is part of the Research Data Storage Infrastructure
program, which aims for long-term preservation.
</td> </tr> </table>
_**Table 13: Data set description for the LDCS.** _
0877_U_CODE_688873.md
# Executive Summary
In this report the initial Data Management Plan (DMP) for the U_CODE project
is presented. The report outlines how research data will be handled during and
after the project duration. It describes what data will be collected,
processed or generated with which methodologies and standards, whether and how
this data will be shared or made open, and how it will be curated and
preserved.
The Data Management Plan (DMP) describes the data management life cycle for
all data sets. The purpose of the DMP is to provide an analysis of the main
elements of the data management policy that will be used in U_CODE with
regard to all data sets that will be generated by the project. The data
collected and generated by the different U_CODE partners will have multiple
formats. In general, four different types are generated and processed:
1) text-based data, 2) visual data sets, 3) models, and 4) software/source
code data sets.
The Data Management Plan provides information on the following points:
* Data set description
* Data set reference and name
* Data sharing
* Standards and metadata
* Archiving and preservation (including storage and backup)
The DMP gives a first overview of the diversity, scale and amount of data
which will be handled during the U_CODE project. While the project is
ongoing, conjectPM is used as the collaboration platform for the management
of U_CODE data.
The DMP is not a fixed document, but evolves during the lifespan of the
project.
# Applied Methodology
The methodology applied for drafting this initial DMP of U_CODE is based on
guidelines of the European Commission. According to these guidelines, all
U_CODE partners were asked to list and describe their datasets. The
compiled list is presented in Attachment 1 at the end of this document. The
tables give details about the datasets generated in the project. These
various datasets are stored at conjectPM for (internal) use during the
project duration. Which data sets will be stored for open access will be
decided later in the project.
This list addresses the main points on a dataset by dataset basis and reflects
the current status of discussion and reflection within the consortium about
the data that is going to be produced within the U_CODE project. This list
will evolve and develop over the lifetime of the project and will be kept up
to date on the U_CODE collaborative platform conjectPM.
## Data set description
The data collected and generated by the different U_CODE partners will have
multiple formats and vary in size from a few MB to several GB. The formats
range from interview transcripts, survey results, protocols, pictures and
visual recordings up to software prototypes and test data. So far, four
types of general data sets have been identified:
* **text-based data** : interviews, surveys (scientific), publications, reports
* **visual data** : logfiles, graphs, visual protocols, pictures, UML diagrams
* **models** : models, digital models, conceptual frameworks
* **software data** : prototypes, software prototypes, test data, source code
The Initial DMP template asked the U_CODE partners to describe their different
data sets according to the following items:
_DATA SET – name; DATA SET - nature of data; Lead; WP; Task/ Deliverable, time
in which data is generated/collected, type of data, data format, publication
date, source of data, how is the data generated/collected, how is the data
processed; restriction on using the data; standards; metadata; data sharing;
preservation and backup; duration of preservation (short-term, long-term,
...), related dataset; underpins scientific publication; License_
<table>
<tr> <th>**Template field**</th> <th>**Explanation & filling examples**</th> </tr>
<tr> <td>Nr</td> <td>Data set 1, Data set 2, …</td> </tr>
<tr> <td>DATA SET ‐ name</td> <td></td> </tr>
<tr> <td>DATA SET Type ‐ nature of data</td> <td>e.g. interviews, survey results, software prototypes, software, publications, production, test data, conceptual frameworks, models</td> </tr>
<tr> <td>Lead</td> <td>TU Dr, TU De, ISEN, CONJ, OPT, SilSax, GMP</td> </tr>
<tr> <td>WP</td> <td>1…8</td> </tr>
<tr> <td>Task/Deliv.</td> <td></td> </tr>
<tr> <td>Time in which data is generated/collected</td> <td></td> </tr>
<tr> <td>Type of data</td> <td>audio, video, text, pictures, code, models, …</td> </tr>
<tr> <td>Data format</td> <td>xls, docx, jpeg, pdf, ppt, mp3, …</td> </tr>
<tr> <td>Publication date</td> <td></td> </tr>
<tr> <td>Source of data</td> <td></td> </tr>
<tr> <td>How is the data generated/collected</td> <td></td> </tr>
<tr> <td>How is the data processed</td> <td></td> </tr>
<tr> <td>Restriction on using the data, suggestions by now</td> <td>open access, open to qualified researchers, confidential ‐ only for U_CODE members</td> </tr>
<tr> <td>Audience, if yet known</td> <td>e.g. other research groups, users of …</td> </tr>
<tr> <td>Standards</td> <td>Reference to existing suitable standards of the discipline</td> </tr>
<tr> <td>Metadata</td> <td>If standards do not exist, an outline on how and what metadata will be created</td> </tr>
<tr> <td>Data sharing</td> <td>Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related).</td> </tr>
<tr> <td>Preservation and backup</td> <td>Description of the procedures that will be put in place for long-term preservation of the data</td> </tr>
<tr> <td>Duration of preservation (short-term, long-term, …)</td> <td>Indication of how long the data should be preserved, what its approximate end volume is, what the associated costs are and how these are planned to be covered</td> </tr>
<tr> <td>Related dataset</td> <td></td> </tr>
<tr> <td>Underpins scientific publication</td> <td></td> </tr>
<tr> <td>License</td> <td></td> </tr>
</table>
Fig. 1: Data Management Template
Because data collection and creation is an ongoing process, questions such
as the detailed description of the nature of the data, its exact scale, to
whom the data may be useful, or whether the data underpin a scientific
publication, will be answered in updated versions of the DMP. Moreover, the
questions on the existence or non-existence of similar data and on the
possibilities for integration and reuse have not yet been finally agreed
between the U_CODE partners and will be reported later.
## Data set reference and names
A first collection of datasets has been compiled in Attachment 1 at the end
of this document. A comprehensive pattern for naming the produced datasets
of the project that are to be published open access is going to be
developed. As an example, one approach could be the following:
UCODE_Data_"WPNo"."DatasetNo"_"DatasetTitle" (e.g.
UCODE_Data_WP1.1_UserGeneratedContent). This also depends on the long-term
data sharing platform to be chosen.
conjectPM is used to organize, manage and monitor the collected and
generated data sets of the U_CODE project. Due to the structure of the
collaboration platform conjectPM (for a detailed explanation see Section
2.4), a unified name structure is not necessary to handle the various data
sets during the project duration.
## Data sharing
conjectPM is used to share and manage the collected and generated data sets
within the U_CODE project. It provides a well-organized structure that
makes it easy for research teams to find, understand and reuse the various
data by creating a consistent and well-structured research data pool (see
also Section 2.4).
**Open access policy:** By default all of the created data in U_CODE shall be
made available open access. Reasons for not making the data open will derive
from
* **legal properties** (e.g. missing copyrights, participant confidentiality, consent agreements, or intellectual property rights)
* **scientific** and/or **business** reasons (e.g. pending publications, exploitation aspects)
* **technical issues** (e.g. incomplete data sets).
The collected and generated data can be classified into two categories: 1)
**short-term** intermediate **data** (stored at conjectPM), and 2)
**long-term data** (stored in repositories, such as ZENODO or OpARA). The
long-term data have different levels of open accessibility:
* data with restricted access to the U_CODE partner creating the data set;
* data with restricted access to U_CODE project partners;
* data that is to be published and shared as open source to researchers only;
* data that is to be published and shared as open source to everyone.
The decisions on data publication and the level of accessibility will be taken
per dataset and by the responsible U_CODE partner who created the dataset.
This will be documented in this (or future) versions of the data management
plan. The updated version of the DMP shall detail the information on data
sharing, including access procedures, embargo periods, and outlines of
technical mechanisms for dissemination for open accessible data sets.
Strategies to limit restrictions may include: anonymising or aggregating data.
Questions to be considered when further developing the open access policy in
U_CODE are:
* How do we make the data available to others?
* With whom are we sharing the data, and under what conditions?
* What kind of restrictions are needed and why?
* What actions are we planning to minimise these restrictions?
## Standards and metadata
The U_CODE project will create diverse data to detail the project content
and, moreover, create the data needed to enable other researchers to use
and regenerate output data in a systematic way. The documentation can take
the form of publications, manuals and README files on how to use the
software, in addition to scripts for running the software.
To enable a consistent description of all datasets provided by the project,
a template table is used to describe the metadata of each dataset,
including title, author, description, formats, etc. (see Attachment 1).
U_CODE partners collect and create data sets on their own or by co-creating
these data sets together. Due to the diversity of the project partners
involved, no community data standards have been identified yet.
The collaboration platform – conjectPM – used for the management of U_CODE
enforces the categorization of any document uploaded in order to impose a
common structure in the metadata of the document repository. On project
initialization, participants agreed on the following mandatory categories to
be assigned to a document: Company, Document Type, Topic and Work Package. The
values assignable to the respective categories are shown in the following
screens.
Fig. 2: Upload Dialog with mandatory categories
Fig. 3: Upload dialog with company category selections
Fig. 4: Upload dialog with document type category selection
Fig. 5: Upload dialog with Topic category selections
Fig. 6: Upload dialog with Work Package category selections
Fig. 7: Upload dialog with selected category choices (example)
Document retrieval can then be conducted on the basis of categories, as shown
in the Advanced Search dialog.
Fig. 8: search options (example)
Category settings can be adapted during the project if necessary. However,
the addition or removal of categories is not downward compatible and might
render existing documents invisible via category search; hence the removal
of existing categories is not advisable. Adding more choices to any
existing category, by contrast, is entirely non-critical. Besides the
project-specific categories, the default categories Owner/created by,
Creation date, Path in the document tree and Keywords are always in place.
Keyword search is made available by full-text scanning of the entire
document on upload.
## Archiving and preservation
### Storage, backup, replication and versioning in U_CODE
Intermediate data generated by the U_CODE partners will be stored in the
U_CODE collaborative platform conjectPM. This repository can be easily
accessed by all partners. It includes all the publications, raw data, reviews,
all Deliverables and the management of the U_CODE project.
### Data Security at conject Data Centres
The conjectPM systems are located in two self-sufficient and geographically
separated facilities. During normal operation the system load is balanced
across the two locations. In the unlikely event that either one of the data
centres becomes unavailable the remaining one can take over full operation and
guarantee the availability of all customer data. The conjectPM file system
consists of an array of independent storage units. It maintains at least three
copies of each file spread across the two locations. Failures of storage units
are automatically detected and handled by recreating the data on other storage
units. Storage units can be added or replaced while the system stays fully
operable, ensuring that sufficient capacity is always available when required.
Core system components are secured against failure by duplication of power
supplies, CPUs, storage devices and network connections. All hardware
components have secondary devices in place for failover contingency. Due to
the high levels of resiliency in place conjectPM guarantees a 99.5%
availability SLA to all of their clients around the globe.
The U_CODE project on the conjectPM platform has been configured to match the
overall U_CODE work package structure. Access rights to documents have been
set according to the work package leader. A general section in the project
folder structure is set up for administrative purposes and information
exchange between U_CODE partners.
### Long term data sharing platform
Selected data from the conjectPM repository will be shared publicly during or
after the life time of the project. All long term data collected or generated
will be deposited in a repository. If required, the entire information content
of the U_CODE project can be stored on disk for archiving. This functionality
can also be used to transfer U_CODE content to another system. The final
repository has not been chosen yet. The choice of repository will depend on:
* location of repository
* research domain
* costs
* open access options
* prospect of long-term preservation.
**ZENODO repository:**
One of the repositories considered is ZENODO ( _https://zenodo.org/_ ).
This is an online, free-of-charge repository created through the European
Commission’s OpenAIREplus project and hosted at CERN, Switzerland. It
encourages open access deposition of any data format, but also allows
deposits of content under restricted or embargoed access. Contents
deposited under restricted access are protected against unauthorized access
at all levels. Access to metadata and data files is provided over standard
protocols such as HTTP and OAI-PMH.
Data files are kept in multiple replicas in a distributed file system,
which is backed up to tape every night. Data files are replicated in the
online system of ZENODO. Data files have versions attached to them, whilst
records are not versioned. Derivatives of data files are generated, but the
original content is never modified. Records can be retracted from public
view; however, the data files and records are preserved. Uploaded data is
archived as a Submission Information Package in ZENODO. Files stored in
ZENODO have an MD5 checksum of their content, against which they are
checked to ensure that the file content remains correct. Items in ZENODO
will be retained for the lifetime of the repository, which is also the
lifetime of the host laboratory CERN; CERN currently has an experimental
programme defined for the next 20 years. Each dataset can be referenced at
least by a unique persistent identifier (DOI), in addition to other forms
of identification provided by ZENODO.
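The fixity check that ZENODO performs can also be reproduced locally before and after deposit; a minimal sketch (file name and recorded checksum are hypothetical):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 checksum of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file and previously recorded checksum.
recorded = "9e107d9d372bb6826bd81d3542a419d6"
if md5_of("ucode_dataset_wp1.1.zip") != recorded:
    raise RuntimeError("File content has changed since deposit")
```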
**OpARA repository:**
Another option is provided by the Technische Universität Dresden, which is
currently setting up an institutional, inter-disciplinary repository with
long-term archive in the project OpARA. It will provide open access long-term
storage of data, including metadata and will go into production in 2017.
Other institutional and thematic repositories will be considered and evaluated
in the next months.
# Budget
The costs of preparing the data and documentation will be borne by the project
partners. This is already budgeted in the personnel costs included in the
project budget.
The permanent costs of preserving datasets on the ZENODO repository will be
free of charge as long as no single dataset exceeds the 2 GB maximum.
The permanent costs of preserving datasets on the OpARA repository are
planned to be free of charge for TUD members, but the final decision on
costs has not yet been taken.
# Attachment 1: Initial Datasets in U_CODE
Initial Datasets in U_CODE sorted by U_CODE partners
Initial Datasets in U_CODE sorted by DATA SET Type
U_CODE **Data Management Plan Template (by Partner)**
<table>
<tr>
<th>
**Nr**
</th>
<th>
**DATA SET ‐ name**
</th>
<th>
**DATA SET Type ‐ nature of data**
</th>
<th>
**Lead**
</th>
<th>
**WP**
</th>
<th>
**Task/ Deliv.**
</th>
<th>
**time in which data is generated/collected**
</th>
<th>
**Type of data**
</th>
<th>
**data format**
</th>
<th>
**Publication Date**
</th>
<th>
**Source of data**
</th>
<th>
**how is the data generated/collected**
</th>
<th>
**how is the data processed**
</th>
<th>
**Restriction on using the data, suggestions by now**
</th>
<th>
**audience, if yet known**
</th>
<th>
**standards**
</th>
<th>
**metadata**
</th>
<th>
**data sharing**
</th>
<th>
**preservation and backup**
</th>
<th>
**duration of preservation (short‐term, long‐term, ...)**
</th>
<th>
**related dataset**
</th>
<th>
**underpins scientific publication**
</th>
<th>
**License**
</th> </tr>
<tr>
<td>
**Explanation & filling examples **
</td>
<td>
</td>
<td>
**e.g.**
**interviews, survey results, software prototypes, software, publications,
production, test data, conceptual frameworks, models**
</td>
<td>
TU Dr, TU De, ISEN, CONJ, OPT, SilSax, GMP
</td>
<td>
**1…8**
</td>
<td>
</td>
<td>
</td>
<td>
**audio, video, text, pictures, code, models…**
</td>
<td>
**xls, docx, jpeg, pdf, ppt, mp3, …**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
**open access, open to qualified researchers, confidential ‐ only for**
**U_CODE members**
</td>
<td>
**e.g. other research groups, users of …**
</td>
<td>
**Reference to existing suitable standards of the discipline,**
</td>
<td>
**If standards do not exist, an outline on how and what metadata will be
created.**
</td>
<td>
**Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling re‐use, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository**
**(institutional, standard repository for the discipline, etc.).**
**In case the dataset cannot be shared, the reasons for this should be
mentioned (e.g. ethical, rules of personal data, intellectual property,
commercial, privacy‐related, security‐related).**
</td>
<td>
**Description of the procedures that will be put in place for long‐term
preservation of the data.**
</td>
<td>
**Indication of how long the data should be preserved, what is its
approximated end volume, what the associated costs are and how these are
planned to be covered**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 1
</td>
<td>
Review report on kick-off meeting in Dresden
</td>
<td>
report
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP1
</td>
<td>
1.2
</td>
<td>
02/03/2016‐04/03/2016
</td>
<td>
text, pictures, visual protocols
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
workshop with U_CODE partners
</td>
<td>
workshop
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 2
</td>
<td>
Review report on GA meeting in Dresden
</td>
<td>
report
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP1
</td>
<td>
1.2
</td>
<td>
01/06/2016‐03/06/2016
</td>
<td>
text, pictures, visual protocols
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
workshop with U_CODE partners
</td>
<td>
workshop
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 3
</td>
<td>
ideagrams of workshop
</td>
<td>
logfiles of discussions/interviews
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP1
</td>
<td>
1.2
</td>
<td>
02/03/2016‐31/07/2019
</td>
<td>
picture, graph
</td>
<td>
</td>
<td>
</td>
<td>
workshop with U_CODE partners
</td>
<td>
workshop
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 4
</td>
<td>
photo documentation of meetings & ws
</td>
<td>
report
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP1
</td>
<td>
1.2
</td>
<td>
02/03/2016‐31/07/2019
</td>
<td>
pictures
</td>
<td>
jpeg, img
</td>
<td>
</td>
<td>
workshop with U_CODE partners
</td>
<td>
workshop
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 5
</td>
<td>
netplans
</td>
<td>
graphs/pictures
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP2
</td>
<td>
1.3
</td>
<td>
02/03/2016‐31/07/2020
</td>
<td>
pictures
</td>
<td>
jpeg, img
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 6
</td>
<td>
expert talk Hamburg/Reschke
</td>
<td>
interview
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP2
</td>
<td>
2.4
</td>
<td>
11.04.2016
</td>
<td>
audio
</td>
<td>
mpg
</td>
<td>
</td>
<td>
workshop with CONJECT
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 7
</td>
<td>
technical & financial quarterly reports
</td>
<td>
report
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP1
</td>
<td>
1.2
</td>
<td>
02/03/2016‐31/07/2019
</td>
<td>
text,
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
U_CODE partners
</td>
<td>
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 8
</td>
<td>
Design Heuristics and Design Decision-making process
</td>
<td>
models
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP2
</td>
<td>
T2.4
</td>
<td>
01/02/2016‐30/11/2016
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 9
</td>
<td>
literature review of social media and communication workflows in urban
planning
</td>
<td>
(scientific) publications
</td>
<td>
**TU Dr MC**
</td>
<td>
WP2
</td>
<td>
2.2
</td>
<td>
01/04‐ ongoing
</td>
<td>
text, pictures
</td>
<td>
pdf, docx, ppt
</td>
<td>
</td>
<td>
(scientific) literature
</td>
<td>
</td>
<td>
stored at conject pm
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 10
</td>
<td>
review of existing crowdsourcing and gaming approaches in urban planning
</td>
<td>
(scientific) publications
</td>
<td>
**TU Dr MC**
</td>
<td>
WP2
</td>
<td>
2.2
</td>
<td>
01/04‐ ongoing
</td>
<td>
text, pictures
</td>
<td>
pdf, docx, ppt
</td>
<td>
</td>
<td>
(scientific) literature
</td>
<td>
</td>
<td>
stored at conject pm
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 11
</td>
<td>
Functionality scheme of a communication system
</td>
<td>
publication
</td>
<td>
**TUDr‐MC**
</td>
<td>
WP2
</td>
<td>
D2.2
</td>
<td>
01/02/2016‐31/12/2017
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 12
</td>
<td>
Revised functional specifications
</td>
<td>
publication
</td>
<td>
**TUDr**
</td>
<td>
WP2
</td>
<td>
D2.4
</td>
<td>
01/02/2016‐30/11/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 13
</td>
<td>
Usability Testing
</td>
<td>
test data
</td>
<td>
**TUDr‐MC**
</td>
<td>
WP6
</td>
<td>
T6.2
</td>
<td>
01/08/2016‐30/11/2018
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 14
</td>
<td>
Mopo24 – Morgenpost Sachsen
</td>
<td>
articles of daily newspaper
</td>
<td>
**TUDr‐AL**
</td>
<td>
WP2
</td>
<td>
II.3
</td>
<td>
2014–2016
</td>
<td>
text
</td>
<td>
xml
</td>
<td>
</td>
<td>
https://mopo24.de/share/sitemap.xml
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
</td>
<td>
TEI XML
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
short‐term preservation (test file)
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 15
</td>
<td>
Presseschau Dresden
</td>
<td>
articles of different newspapers about Dresden and surrounding area
</td>
<td>
**TUDr‐AL**
</td>
<td>
WP2
</td>
<td>
II.3
</td>
<td>
2007–2016
</td>
<td>
text
</td>
<td>
xml
</td>
<td>
</td>
<td>
daily newsletter sent via E‐mail
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
</td>
<td>
TEI XML
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
short‐term preservation (test file)
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 16
</td>
<td>
Semantic / Sentiment Analysis in Social Media
</td>
<td>
models
</td>
<td>
**TUDr ‐AL**
</td>
<td>
WP2
</td>
<td>
T2.3
</td>
<td>
01/02/2016‐30/11/2016
</td>
<td>
software
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 17
</td>
<td>
Interview Collection
</td>
<td>
interviews
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
3.1‐3.3
</td>
<td>
01/04/2016‐01/12/2016
</td>
<td>
text, pictures, audio records
</td>
<td>
pdf, docx, MP3, JPEG
</td>
<td>
</td>
<td>
</td>
<td>
ethnographic observation, semistructured interviews
</td>
<td>
stored at PC
</td>
<td>
confidential
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 18
</td>
<td>
Pictures
</td>
<td>
pictures
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
3.1‐3.3
</td>
<td>
01/04/2016‐01/12/2016
</td>
<td>
pictures,
</td>
<td>
pdf, JPEG
</td>
<td>
</td>
<td>
</td>
<td>
ethnographic observation, semistructured interviews
</td>
<td>
stored at PC
</td>
<td>
confidential
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 19
</td>
<td>
Moderated Models
</td>
<td>
MoM
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
3.1‐3.3
</td>
<td>
01/04/2016‐01/12/2016
</td>
<td>
text, pictures,
</td>
<td>
pdf, docx, MP3, JPEG
</td>
<td>
</td>
<td>
</td>
<td>
ethnographic observation, semistructured interviews
</td>
<td>
stored at PC
</td>
<td>
confidential
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 20
</td>
<td>
Initial report on co‐design sessions, ethnographic study and interviews
</td>
<td>
publication
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
T3.1/D3.1
</td>
<td>
01/02/2016‐30/09/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 21
</td>
<td>
Interaction Formats between professionals and citizens
</td>
<td>
publication
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
D3.2/M11
</td>
<td>
01/02/2016‐30/10/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 22
</td>
<td>
Functional specifications of U_CODE and use case description
</td>
<td>
publication
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
D3.3
</td>
<td>
01/02/2016‐30/10/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 23
</td>
<td>
Functional description of the U_CODE tool
</td>
<td>
publication
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
D3.4
</td>
<td>
01/02/2016‐30/10/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
interview partners: …
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 24
</td>
<td>
Roadmap for implementation and a validation test plan
</td>
<td>
publication
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
D3.5
</td>
<td>
01/02/2016‐28/02/2017
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 25
</td>
<td>
OTD NL Valkenburg CONFIDENTIAL
</td>
<td>
interviews
</td>
<td>
**TUDe**
</td>
<td>
WP7
</td>
<td>
7.1
</td>
<td>
01/04‐01/06/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
May 10, 2016
</td>
<td>
Interview project leaders of Location Valkenburg
</td>
<td>
interview
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE members only
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 26
</td>
<td>
Legal Framework NL
</td>
<td>
interviews + models
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
2.1
</td>
<td>
01/04‐01/06/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
TBD
</td>
<td>
interview, + publications
</td>
<td>
interview + publications
</td>
<td>
to be stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 27
</td>
<td>
LEF Report
</td>
<td>
interview
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
2.1
</td>
<td>
01/04‐01/06/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
May 27, 2016
</td>
<td>
interview LEF expert
</td>
<td>
interview
</td>
<td>
stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 28
</td>
<td>
Phases presentation TUDelft
</td>
<td>
conceptual framework
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
2.1
</td>
<td>
01/04‐01/06/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
Jun 7, 2016
</td>
<td>
interview, + publications
</td>
<td>
interview + publications
</td>
<td>
stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 29
</td>
<td>
Workshop I&M report
</td>
<td>
workshop + observations
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
2.1
</td>
<td>
01/04‐20/04/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
TBD
</td>
<td>
workshop at Dutch ministry I&M
</td>
<td>
workshop report
</td>
<td>
to be stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 30
</td>
<td>
O‐Testbed Description Workshop ‐ report
</td>
<td>
workshop report
</td>
<td>
**TUDe**
</td>
<td>
WP2, 3, 7
</td>
<td>
7.1
</td>
<td>
01/04‐01/05/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
Apr 27, 2016
</td>
<td>
workshop with U_CODE members
</td>
<td>
workshop report
</td>
<td>
stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 31
</td>
<td>
Co‐Design methodologies in urban design (initial version)
</td>
<td>
survey
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
T2.1/D2.1
</td>
<td>
01/02/2016‐30/11/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
research and review of existing tools for urban planning (e.g. Poldering in
NL)
</td>
<td>
use of linguistic data from social networks
</td>
<td>
stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 32
</td>
<td>
Co‐Design methodologies in urban design
</td>
<td>
survey
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
T2.1/D2.3
</td>
<td>
01/02/2016‐31/12/2017
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
interview partners: …
</td>
<td>
</td>
<td>
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 33
</td>
<td>
Assessment report and testbed report
</td>
<td>
publication
</td>
<td>
**TU Delft**
</td>
<td>
WP7
</td>
<td>
T7.2/D7.1/M38
</td>
<td>
01/05/2016‐01/02/2019
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 34
</td>
<td>
cross‐cultural comparison study
</td>
<td>
publication
</td>
<td>
**TU Delft**
</td>
<td>
WP7
</td>
<td>
D7.2/M38
</td>
<td>
01/05/2016‐01/02/2019
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 35
</td>
<td>
exemplary Model Data for OPTIS WP4 virtual space implementation
</td>
<td>
digital models in the standardized format IFC. Different disciplines (partial
models) and sizes
</td>
<td>
**CONJECT/OPT**
</td>
<td>
WP4
</td>
<td>
D4.4
</td>
<td>
20.06.2016
</td>
<td>
Digital Building Model. Format: IFC part 21 physical file (ISO 10303‐21)
</td>
<td>
ifc
</td>
<td>
June 2016
</td>
<td>
freely available ifc sources
</td>
<td>
conject sample files, internet no copyrights
</td>
<td>
stored at conject pm. OPTIS will use them for trials in the virtual environment
</td>
<td>
open access
</td>
<td>
U_CODE, in special for OPTIS testing purposes
</td>
<td>
ISO 16739 ‐ Industry Foundation Classes (IFC) for data sharing in the
construction and facility management industries
</td>
<td>
</td>
<td>
available on conject PM, project U_CODE
</td>
<td>
subject to the conject PM versioning, backup and security procedures
</td>
<td>
subject to conject PM long‐term preservation policy (i.e. hard disc image of
the project including entire project document set)
</td>
<td>
following in later stages of U_CODE:
annotations to models (participant feedback) in BCF format
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 36
</td>
<td>
Use Case Framework
</td>
<td>
Formalization methods for WP7 Testbed Assessment Reports
</td>
<td>
**CONJECT/TUDe**
</td>
<td>
WP7
</td>
<td>
D7.1
</td>
<td>
29.04.2016
</td>
<td>
power point presentation
</td>
<td>
pptx
</td>
<td>
Apr 16
</td>
<td>
created by the author
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
Office Open XML
</td>
<td>
</td>
<td>
available on conject PM, project U_CODE
</td>
<td>
subject to the conject PM versioning, backup and security procedures
</td>
<td>
subject to conject PM long‐term preservation policy (i.e. hard disc image of
the project including entire project document set)
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 37
</td>
<td>
U_CODE Sales Presentation
</td>
<td>
SPIN ‐ Presentation for potential U_CODE customers
</td>
<td>
**CONJECT/TUDr KA**
</td>
<td>
WP8
</td>
<td>
D8.1
</td>
<td>
07.06.2016
</td>
<td>
power point presentation
</td>
<td>
pptx
</td>
<td>
June 2016
</td>
<td>
created by the author
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
Office Open XML
</td>
<td>
</td>
<td>
available on conject PM, project U_CODE
</td>
<td>
subject to the conject PM versioning, backup and security procedures
</td>
<td>
subject to conject PM long‐term preservation policy (i.e. hard disc image of
the project including entire project document set)
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 38
</td>
<td>
Agile Methodology
</td>
<td>
Introduction into agile methods and tools
</td>
<td>
**CONJECT/TUDr KA**
</td>
<td>
WP1
</td>
<td>
T1.1
</td>
<td>
stopped due to line problems;
will be resumed as a live presentation in Toulon
</td>
<td>
webinar
</td>
<td>
</td>
<td>
June 2016
</td>
<td>
created by the author
</td>
<td>
</td>
<td>
to be done after successful presentation
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 39
</td>
<td>
introduction into UML
</td>
<td>
webinar on UML methodology, first UML diagram types and how to use them in
U_CODE
</td>
<td>
**CONJECT/TUDe**
</td>
<td>
WP1
</td>
<td>
D7.1
</td>
<td>
01.04.2016
</td>
<td>
webinar
</td>
<td>
</td>
<td>
Apr 16
</td>
<td>
created by the author
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 40
</td>
<td>
UMLDiagrams
</td>
<td>
first UML diagrams
</td>
<td>
**CONJECT**
</td>
<td>
WP1
</td>
<td>
D7.1
</td>
<td>
01/04/2016‐01/12/2017
</td>
<td>
visual graphs
</td>
<td>
e.g. vpp
</td>
<td>
Apr 16
</td>
<td>
created by the author
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 41
</td>
<td>
Project Information Model
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
T5.1/D5.1
</td>
<td>
01/06/2016‐31/12/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 42
</td>
<td>
Data Space structure (cloud server)
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
T5.2/D5.2
</td>
<td>
01/07/2016‐31/12/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 43
</td>
<td>
Co‐design space
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
T5.3/D5.3/M32
</td>
<td>
01/08/2016‐31/08/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 44
</td>
<td>
Social Media component
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
T5.4/D5.4/M32
</td>
<td>
01/08/2016‐31/08/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 45
</td>
<td>
Toolkit for design
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
T5.6/D5.5/M34
</td>
<td>
01/09/2016‐31/12/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 46
</td>
<td>
Exchange information architecture (HUB)
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
D5.6
</td>
<td>
01/09/2016‐31/12/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 47
</td>
<td>
Functionality Testing
</td>
<td>
test data
</td>
<td>
**CONJECT**
</td>
<td>
WP6
</td>
<td>
D6.1/M24
</td>
<td>
01/07/2016‐31/12/2018
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 48
</td>
<td>
Integration and standardisation
</td>
<td>
test data
</td>
<td>
**CONJECT**
</td>
<td>
WP6
</td>
<td>
T6.3/D6.2
</td>
<td>
01/05/2018‐31/12/2018
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 49
</td>
<td>
Natural interface development
</td>
<td>
Software prototypes
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
D4.1‐D4.5
</td>
<td>
Month 06 ‐> 36
</td>
<td>
Software application, 3D visualization, Natural interfaces
</td>
<td>
exe, docx, pdf, pptx
</td>
<td>
</td>
<td>
WP3 deliverables
</td>
<td>
</td>
<td>
Stored at OPTIS headquarter
</td>
<td>
Confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 50
</td>
<td>
Technical specifications of interface development
</td>
<td>
publication
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
T4.1/D4.1/M14
</td>
<td>
01/07/2016‐31/12/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 51
</td>
<td>
Public project space (interface for front‐end design, version 1+2)
</td>
<td>
prototype
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
T4.2/D4.2/D4.5
</td>
<td>
01/07/2016‐28/02/2017
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 52
</td>
<td>
Public project space (interface for front‐end design) with 3D (version 1)
</td>
<td>
prototype
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
T4.3/D4.3
</td>
<td>
01/08/2016‐30/06/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 53
</td>
<td>
Exchange data functionality
</td>
<td>
prototype
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
T4.4/D4.4
</td>
<td>
01/08/2016‐30/06/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 54
</td>
<td>
Public project space (interface for front‐end design) with 3D (version 2)
</td>
<td>
prototype
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
D4.6
</td>
<td>
01/08/2016‐31/12/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 55
</td>
<td>
Reports on end‐users feedback and enhanced functional requirements
</td>
<td>
publication
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
D4.7/M36
</td>
<td>
01/07/2016‐31/12/2018
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 56
</td>
<td>
Exploitation, Dissemination and Communication
</td>
<td>
report
</td>
<td>
**SilSax / TUDr KA**
</td>
<td>
WP8
</td>
<td>
8.3
</td>
<td>
02/03/2016‐31/07/2019
</td>
<td>
text, picture
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
U_CODE members only
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 57
</td>
<td>
Collection of future customers
</td>
<td>
report
</td>
<td>
**SilSax / TUDr KA**
</td>
<td>
WP8
</td>
<td>
8.4
</td>
<td>
02/03/2016‐31/07/2020
</td>
<td>
text, picture
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
U_CODE members only
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 58
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
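Data sets 14 and 15 in the table above are newspaper-article corpora; Data set 14 names the public mopo24 sitemap as its source. The snippet below is a minimal, illustrative sketch of how such a sitemap could be harvested into a list of article URLs, assuming the file follows the standard sitemaps.org XML schema; the function name and the limit of ten printed URLs are our own choices, not part of the plan.

```python
# Illustrative only: harvesting article URLs from the public sitemap named as
# the source of Data set 14. Assumes the file follows the standard
# sitemaps.org schema; all names here are hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://mopo24.de/share/sitemap.xml"
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def fetch_article_urls(sitemap_url):
    """Download a sitemap and return the URLs listed in its <loc> elements."""
    with urllib.request.urlopen(sitemap_url) as response:
        root = ET.parse(response).getroot()
    return [el.text for el in root.iter(SITEMAP_NS + "loc")]

if __name__ == "__main__":
    for url in fetch_article_urls(SITEMAP_URL)[:10]:
        print(url)
```

Articles harvested this way would still need conversion into the TEI XML structure named in the metadata column before reuse.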
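Data set 35 above consists of digital building models stored as IFC part 21 physical files (ISO 10303-21, schema per ISO 16739). As a sketch of what consuming such a file involves, the snippet below reads only the STEP header and reports the declared schema identifier; the file name is a placeholder and the parsing is deliberately minimal, not a full STEP parser.

```python
# Illustrative only: peeking at the header of an IFC file, i.e. an ISO
# 10303-21 "STEP physical file" as described for Data set 35. The header
# keywords (ISO-10303-21, FILE_SCHEMA) come from the standard itself;
# the file name below is a hypothetical placeholder.
import re

def read_ifc_schema(path):
    """Return the schema identifier (e.g. 'IFC4') declared in a STEP file header."""
    with open(path, "r", encoding="iso-8859-1", errors="replace") as f:
        header = f.read(4096)  # the HEADER section sits at the top of the file
    if not header.lstrip().startswith("ISO-10303-21;"):
        raise ValueError("not an ISO 10303-21 (STEP) file")
    match = re.search(r"FILE_SCHEMA\s*\(\s*\(\s*'([^']+)'", header)
    return match.group(1) if match else None

# print(read_ifc_schema("sample_model.ifc"))  # hypothetical sample file
```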
<table>
<tr>
<th>
U_CODE
</th>
<th>
**Data Management Plan Template (by Data Set Type)**
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Nr**
</td>
<td>
**DATA SET ‐ name**
</td>
<td>
**DATA SET Type ‐ nature of data**
</td>
<td>
**Lead**
</td>
<td>
**WP**
</td>
<td>
**Task/ Deliv.**
</td>
<td>
**time in which data is generated/collected**
</td>
<td>
**Type of data**
</td>
<td>
**data format**
</td>
<td>
**Publication Date**
</td>
<td>
**Source of data**
</td>
<td>
**how is the data generated/collected**
</td>
<td>
**how is the data processed**
</td>
<td>
**Restrictions on using the data (suggestions as of now)**
</td>
<td>
**audience, if yet known**
</td>
<td>
**standards**
</td>
<td>
**metadata**
</td>
<td>
**data sharing**
</td>
<td>
**preservation and backup**
</td>
<td>
**duration of preservation (short‐term, long‐term, ...)**
</td>
<td>
**related dataset**
</td>
<td>
**underpins scientific publication**
</td>
<td>
**License**
</td> </tr>
<tr>
<td>
**Explanation & filling examples**
</td>
<td>
</td>
<td>
**e.g.**
**Interviews, survey results, software prototypes, software, publications,
production, test data, conceptual framework, models**
</td>
<td>
TU Dr, TU De, ISEN, CONJ, OPT, SilSax, GMP
</td>
<td>
**1…8**
</td>
<td>
</td>
<td>
</td>
<td>
**audio, video, text, pictures, code, models…**
</td>
<td>
**xls, docx, jpeg, pdf, ppt, mp3 ...**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
**open access, open to qualified researchers, confidential ‐ only for**
**U_CODE members**
</td>
<td>
**e.g. other research groups, users of …**
</td>
<td>
**Reference to existing suitable standards of the discipline,**
</td>
<td>
**If standards do not exist, an outline on how and what metadata will be
created.**
</td>
<td>
**Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling re‐use, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository**
**(institutional, standard repository for the discipline, etc.).**
**In case the dataset cannot be shared, the reasons for this should be
mentioned (e.g. ethical, rules of personal data, intellectual property,
commercial, privacy‐related, security‐related).**
</td>
<td>
**Description of the procedures that will be put in place for long‐term
preservation of the data.**
</td>
<td>
**Indication of how long the data should be preserved, what is its
approximated end volume, what the associated costs are and how these are
planned to be covered**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 6
</td>
<td>
expert talk Hamburg/Reschke
</td>
<td>
interview
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP2
</td>
<td>
2.4
</td>
<td>
11.04.2016
</td>
<td>
audio
</td>
<td>
mpg
</td>
<td>
</td>
<td>
workshop with CONJECT
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 27
</td>
<td>
LEF Report
</td>
<td>
interview
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
2.1
</td>
<td>
01/04‐01/06/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
May 27, 2016
</td>
<td>
interview LEF expert
</td>
<td>
interview
</td>
<td>
stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 17
</td>
<td>
Interview Collection
</td>
<td>
interviews
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
3.1‐3.3
</td>
<td>
01/04/2016‐01/12/2016
</td>
<td>
text, pictures, audio records
</td>
<td>
pdf, docx, MP3, JPEG
</td>
<td>
</td>
<td>
</td>
<td>
ethnographic observation, semistructured interviews
</td>
<td>
stored at PC
</td>
<td>
confidential
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 25
</td>
<td>
OTD NL Valkenburg CONFIDENTIAL
</td>
<td>
interviews
</td>
<td>
**TUDe**
</td>
<td>
WP7
</td>
<td>
7.1
</td>
<td>
01/04‐01/06/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
May 10, 2016
</td>
<td>
Interview project leaders of Location Valkenburg
</td>
<td>
interview
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE members only
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 26
</td>
<td>
Legal Framework NL
</td>
<td>
interviews + models
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
2.1
</td>
<td>
01/04‐01/06/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
TBD
</td>
<td>
interview, + publications
</td>
<td>
interview + publications
</td>
<td>
to be stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 3
</td>
<td>
ideagrams of workshop
</td>
<td>
logfiles of discussions/interviews
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP1
</td>
<td>
1.2
</td>
<td>
02/03/2016‐31/07/2019
</td>
<td>
picture, graph
</td>
<td>
</td>
<td>
</td>
<td>
workshop with U_CODE partners
</td>
<td>
workshop
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 35
</td>
<td>
exemplary Model Data for OPTIS WP4 virtual space implementation
</td>
<td>
digital models in the standardized format IFC. Different disciplines (partial
models) and sizes
</td>
<td>
**CONJECT/OPT**
</td>
<td>
WP4
</td>
<td>
D4.4
</td>
<td>
20.06.2016
</td>
<td>
Digital Building Model. Format: IFC part 21 physical file (ISO 10303‐21)
</td>
<td>
ifc
</td>
<td>
June 2016
</td>
<td>
freely available ifc sources
</td>
<td>
conject sample files, internet no copyrights
</td>
<td>
stored at conject pm. OPTIS will use them for trials in the virtual environment
</td>
<td>
open access
</td>
<td>
U_CODE, in special for OPTIS testing purposes
</td>
<td>
ISO 16739 ‐ Industry Foundation Classes (IFC) for data sharing in the
construction and facility management industries
</td>
<td>
</td>
<td>
available on conject PM, project U_CODE
</td>
<td>
subject to the conject PM versioning, backup and security procedures
</td>
<td>
subject to conject PM long‐term preservation policy (i.e. hard disc image of
the project including entire project document set)
</td>
<td>
following in later stages of U_CODE:
annotations to models (participant feedback) in BCF format
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 8
</td>
<td>
Design Heuristics and Design Decision-making process
</td>
<td>
models
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP2
</td>
<td>
T2.4
</td>
<td>
01/02/2016‐30/11/2016
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 16
</td>
<td>
Semantic / Sentiment Analysis in Social Media
</td>
<td>
models
</td>
<td>
**TUDr ‐AL**
</td>
<td>
WP2
</td>
<td>
T2.3
</td>
<td>
01/02/2016‐30/11/2016
</td>
<td>
software
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 19
</td>
<td>
Moderated Models
</td>
<td>
MoM
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
3.1‐3.3
</td>
<td>
01/04/2016‐01/12/2016
</td>
<td>
text, pictures,
</td>
<td>
pdf, docx, MP3, JPEG
</td>
<td>
</td>
<td>
</td>
<td>
ethnographic observation, semistructured interviews
</td>
<td>
stored at PC
</td>
<td>
confidential
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 40
</td>
<td>
UMLDiagrams
</td>
<td>
first UML diagrams
</td>
<td>
**CONJECT**
</td>
<td>
WP1
</td>
<td>
D7.1
</td>
<td>
01/04/2016‐01/12/2017
</td>
<td>
visual graphs
</td>
<td>
e.g. vpp
</td>
<td>
Apr 16
</td>
<td>
created by the author
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 18
</td>
<td>
Pictures
</td>
<td>
pictures
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
3.1‐3.3
</td>
<td>
01/04/2016‐01/12/2016
</td>
<td>
pictures,
</td>
<td>
pdf, JPEG
</td>
<td>
</td>
<td>
</td>
<td>
ethnographic observation, semistructured interviews
</td>
<td>
stored at PC
</td>
<td>
confidential
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 5
</td>
<td>
netplans
</td>
<td>
graphs/pictures
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP2
</td>
<td>
1.3
</td>
<td>
02/03/2016‐31/07/2020
</td>
<td>
pictures
</td>
<td>
jpeg, img
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 41
</td>
<td>
Project Information Model
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
T5.1/D5.1
</td>
<td>
01/06/2016‐31/12/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 42
</td>
<td>
Data Space structure (cloud server)
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
T5.2/D5.2
</td>
<td>
01/07/2016‐31/12/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 43
</td>
<td>
Co‐design space
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
T5.3/D5.3/M32
</td>
<td>
01/08/2016‐31/08/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 44
</td>
<td>
Social Media component
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
T5.4/D5.4/M32
</td>
<td>
01/08/2016‐31/08/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 45
</td>
<td>
Toolkit for design
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
T5.6/D5.5/M34
</td>
<td>
01/09/2016‐31/12/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 46
</td>
<td>
Exchange information architecture (HUB)
</td>
<td>
prototype
</td>
<td>
**CONJECT**
</td>
<td>
WP5
</td>
<td>
D5.6
</td>
<td>
01/09/2016‐31/12/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 51
</td>
<td>
Public project space (interface for front‐end design, version 1+2)
</td>
<td>
prototype
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
T4.2/D4.2/D4.5
</td>
<td>
01/07/2016‐28/02/2017
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 52
</td>
<td>
Public project space (interface for front‐end design) with 3D (version 1)
</td>
<td>
prototype
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
T4.3/D4.3
</td>
<td>
01/08/2016‐30/06/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 53
</td>
<td>
Exchange data functionality
</td>
<td>
prototype
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
T4.4/D4.4
</td>
<td>
01/08/2016‐30/06/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 54
</td>
<td>
Public project space (interface for front‐end design) with 3D (version 2)
</td>
<td>
prototype
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
D4.6
</td>
<td>
01/08/2016‐31/12/2018
</td>
<td>
software
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
dataset cannot be shared due to IP
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 11
</td>
<td>
Functionality scheme of a communication system
</td>
<td>
publication
</td>
<td>
**TUDr‐MC**
</td>
<td>
WP2
</td>
<td>
D2.2
</td>
<td>
01/02/2016‐31/12/2017
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 12
</td>
<td>
Revised functional specifications
</td>
<td>
publication
</td>
<td>
**TUDr**
</td>
<td>
WP2
</td>
<td>
D2.4
</td>
<td>
01/02/2016‐30/11/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 20
</td>
<td>
Initial report on co‐design sessions, ethnographic study and interviews
</td>
<td>
publication
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
T3.1/D3.1
</td>
<td>
01/02/2016‐30/09/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 21
</td>
<td>
Interaction Formats between professionals and citizens
</td>
<td>
publication
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
D3.2/M11
</td>
<td>
01/02/2016‐30/10/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 22
</td>
<td>
Functional specifications of U_CODE and use case description
</td>
<td>
publication
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
D3.3
</td>
<td>
01/02/2016‐30/10/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 23
</td>
<td>
Functional description of the U_CODE tool
</td>
<td>
publication
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
D3.4
</td>
<td>
01/02/2016‐30/10/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
interview partners: …
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 24
</td>
<td>
Roadmap for implementation and a validation test plan
</td>
<td>
publication
</td>
<td>
**ISEN**
</td>
<td>
WP3
</td>
<td>
D3.5
</td>
<td>
01/02/2016‐28/02/2017
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 33
</td>
<td>
Assessment report and testbed report
</td>
<td>
publication
</td>
<td>
**TU Delft**
</td>
<td>
WP7
</td>
<td>
T7.2/D7.1/M38
</td>
<td>
01/05/2016‐01/02/2019
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 34
</td>
<td>
cross‐cultural comparison study
</td>
<td>
publication
</td>
<td>
**TU Delft**
</td>
<td>
WP7
</td>
<td>
D7.2/M38
</td>
<td>
01/05/2016‐01/02/2019
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 50
</td>
<td>
Technical specifications of interface development
</td>
<td>
publication
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
T4.1/D4.1/M14
</td>
<td>
01/07/2016‐31/12/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 55
</td>
<td>
Reports on end‐users feedback and enhanced functional requirements
</td>
<td>
publication
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
D4.7/M36
</td>
<td>
01/07/2016‐31/12/2018
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 1
</td>
<td>
Review report on kick-off meeting in Dresden
</td>
<td>
report
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP1
</td>
<td>
1.2
</td>
<td>
02/03/2016‐04/03/2016
</td>
<td>
text, pictures, visual protocols
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
workshop with U_CODE partners
</td>
<td>
workshop
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 2
</td>
<td>
Review report on GA meeting in Dresden
</td>
<td>
report
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP1
</td>
<td>
1.2
</td>
<td>
01/06/2016‐03/06/2016
</td>
<td>
text, pictures, visual protocols
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
workshop with U_CODE partners
</td>
<td>
workshop
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 4
</td>
<td>
photo documentation of meetings & workshops
</td>
<td>
report
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP1
</td>
<td>
1.2
</td>
<td>
02/03/2016‐31/07/2019
</td>
<td>
pictures
</td>
<td>
jpeg, img
</td>
<td>
</td>
<td>
workshop with U_CODE partners
</td>
<td>
workshop
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 7
</td>
<td>
technical & financial quarterly reports
</td>
<td>
report
</td>
<td>
**TUDr‐KA**
</td>
<td>
WP1
</td>
<td>
1.2
</td>
<td>
02/03/2016‐31/07/2019
</td>
<td>
text,
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
U_CODE partners
</td>
<td>
</td>
<td>
stored at conject pm
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 14
</td>
<td>
Mopo24 – Morgenpost Sachsen
</td>
<td>
articles of daily newspaper
</td>
<td>
**TUDr‐AL**
</td>
<td>
WP2
</td>
<td>
II.3
</td>
<td>
2014–2016
</td>
<td>
text
</td>
<td>
xml
</td>
<td>
</td>
<td>
https://mopo24.de/share/sitemap.xml
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
</td>
<td>
TEI XML
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
short‐term preservation (test file)
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 15
</td>
<td>
Presseschau Dresden
</td>
<td>
articles of different newspapers about Dresden and surrounding area
</td>
<td>
**TUDr‐AL**
</td>
<td>
WP2
</td>
<td>
II.3
</td>
<td>
2007–2016
</td>
<td>
text
</td>
<td>
xml
</td>
<td>
</td>
<td>
daily newsletter sent via E‐mail
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
</td>
<td>
TEI XML
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
short‐term preservation (test file)
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 36
</td>
<td>
Use Case Framework
</td>
<td>
Formalization methods for WP7 Testbed Assessment Reports
</td>
<td>
**CONJECT/TUDe**
</td>
<td>
WP7
</td>
<td>
D7.1
</td>
<td>
29.04.2016
</td>
<td>
power point presentation
</td>
<td>
pptx
</td>
<td>
Apr 16
</td>
<td>
created by the author
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
Office Open XML
</td>
<td>
</td>
<td>
available on conject PM, project U_CODE
</td>
<td>
subject to the conject PM versioning, backup and security procedures
</td>
<td>
subject to conject PM long‐term preservation policy (i.e. hard disc image of
the project including entire project document set)
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 37
</td>
<td>
U_CODE Sales Presentation
</td>
<td>
SPIN ‐ Presentation for potential U_CODE customers
</td>
<td>
**CONJECT/TUDr KA**
</td>
<td>
WP8
</td>
<td>
D8.1
</td>
<td>
07.06.2016
</td>
<td>
power point presentation
</td>
<td>
pptx
</td>
<td>
June 2016
</td>
<td>
created by the author
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
Office Open XML
</td>
<td>
</td>
<td>
available on conject PM, project U_CODE
</td>
<td>
subject to the conject PM versioning, backup and security procedures
</td>
<td>
subject to conject PM long‐term preservation policy (i.e. hard disc image of
the project including entire project document set)
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 38
</td>
<td>
Agile Methodology
</td>
<td>
Introduction into agile methods and tools
</td>
<td>
**CONJECT/TUDr KA**
</td>
<td>
WP1
</td>
<td>
T1.1
</td>
<td>
stopped due to line problems;
will be resumed as a live presentation in Toulon
</td>
<td>
webinar
</td>
<td>
</td>
<td>
June 2016
</td>
<td>
created by the author
</td>
<td>
</td>
<td>
to be done after successful presentation
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 39
</td>
<td>
introduction into UML
</td>
<td>
webinar on UML methodology, first UML diagram types and how to use them in
U_CODE
</td>
<td>
**CONJECT/TUDe**
</td>
<td>
WP1
</td>
<td>
D7.1
</td>
<td>
01.04.2016
</td>
<td>
webinar
</td>
<td>
</td>
<td>
Apr 16
</td>
<td>
created by the author
</td>
<td>
</td>
<td>
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 56
</td>
<td>
Exploitation, Dissemination and Communication
</td>
<td>
report
</td>
<td>
**SilSax / TUDr KA**
</td>
<td>
WP8
</td>
<td>
8.3
</td>
<td>
02/03/2016‐31/07/2019
</td>
<td>
text, picture
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
U_CODE members only
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 57
</td>
<td>
Collection of future customers
</td>
<td>
report
</td>
<td>
**SilSax / TUDr KA**
</td>
<td>
WP8
</td>
<td>
8.4
</td>
<td>
02/03/2016‐31/07/2020
</td>
<td>
text, picture
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
U_CODE members only
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 49
</td>
<td>
Natural interface development
</td>
<td>
Software prototypes
</td>
<td>
**OPTIS**
</td>
<td>
WP4
</td>
<td>
D4.1‐D4.5
</td>
<td>
Month 06 ‐> 36
</td>
<td>
Software application, 3D visualization, Natural interfaces
</td>
<td>
exe, docx, pdf, pptx
</td>
<td>
</td>
<td>
WP3 deliverables
</td>
<td>
</td>
<td>
Stored at OPTIS headquarter
</td>
<td>
Confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
yes
</td> </tr>
<tr>
<td>
Data set 31
</td>
<td>
Co‐Design methodologies in urban design (initial version)
</td>
<td>
survey
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
T2.1/D2.1
</td>
<td>
01/02/2016‐30/11/2016
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
research and review of existing tools for urban planning (e.g. Poldering in
NL)
</td>
<td>
use of linguistic data from social networks
</td>
<td>
stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 32
</td>
<td>
Co‐Design methodologies in urban design
</td>
<td>
survey
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
T2.1/D2.3
</td>
<td>
01/02/2016‐31/12/2017
</td>
<td>
text, pictures
</td>
<td>
pdf, docx,
</td>
<td>
</td>
<td>
interview partners: …
</td>
<td>
</td>
<td>
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 13
</td>
<td>
Usability Testing
</td>
<td>
test data
</td>
<td>
**TUDr‐MC**
</td>
<td>
WP6
</td>
<td>
T6.2
</td>
<td>
01/08/2016‐30/11/2018
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 47
</td>
<td>
Functionality Testing
</td>
<td>
test data
</td>
<td>
**CONJECT**
</td>
<td>
WP6
</td>
<td>
D6.1/M24
</td>
<td>
01/07/2016‐31/12/2018
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
short‐term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 48
</td>
<td>
Integration and standardisation
</td>
<td>
test data
</td>
<td>
**CONJECT**
</td>
<td>
WP6
</td>
<td>
T6.3/D6.2
</td>
<td>
01/05/2018‐31/12/2018
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
confidential ‐ only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
conject platform
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 29
</td>
<td>
Workshop I&M report
</td>
<td>
workshop + observations
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
2.1
</td>
<td>
01/04‐20/04/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
TBD
</td>
<td>
workshop at Dutch ministry I&M
</td>
<td>
workshop report
</td>
<td>
to be stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 30
</td>
<td>
O‐Testbed Description Workshop ‐ report
</td>
<td>
workshop report
</td>
<td>
**TUDe**
</td>
<td>
WP2, 3, 7
</td>
<td>
7.1
</td>
<td>
01/04‐01/05/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
Apr 27, 2016
</td>
<td>
workshop with U_CODE members
</td>
<td>
workshop report
</td>
<td>
stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr>
<tr>
<td>
Data set 9
</td>
<td>
literature review of social media and communication workflows in urban planning
</td>
<td>
(scientific) publications
</td>
<td>
**TU Dr MC**
</td>
<td>
WP2
</td>
<td>
2.2
</td>
<td>
01/04‐ ongoing
</td>
<td>
text, pictures
</td>
<td>
pdf, docx, ppt
</td>
<td>
</td>
<td>
(scientific) literature
</td>
<td>
</td>
<td>
stored at conject pm
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 10
</td>
<td>
review of existing crowdsourcing and gaming approaches in urban planning
</td>
<td>
(scientific) publications
</td>
<td>
**TU Dr MC**
</td>
<td>
WP2
</td>
<td>
2.2
</td>
<td>
01/04‐ ongoing
</td>
<td>
text, pictures
</td>
<td>
pdf, docx, ppt
</td>
<td>
</td>
<td>
(scientific) literature
</td>
<td>
</td>
<td>
stored at conject pm
</td>
<td>
only for U_CODE members
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
long term preservation
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data set 28
</td>
<td>
Phases presentation TUDelft
</td>
<td>
conceptual framework
</td>
<td>
**TUDe**
</td>
<td>
WP2
</td>
<td>
2.1
</td>
<td>
01/04‐01/06/2016
</td>
<td>
text, pictures
</td>
<td>
PDF
</td>
<td>
Jun 7, 2016
</td>
<td>
interview, + publications
</td>
<td>
interview + publications
</td>
<td>
stored at conject pm
</td>
<td>
open access
</td>
<td>
U_CODE
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
no
</td> </tr> </table>
**Introduction**
All Health and Social Care (HSC) organisations must ensure that, when sharing
HSC data for non-direct care (secondary purposes), assurances are provided by
the requesting organisations that they comply with the Data Protection Act
(1998) and that staff are aware of the relevant DPA Policies and Procedures in
place.
Researchers undertaking studies who require access to patient-identifiable
information and/or anonymous HSC data should follow the research protocol
(Research Governance Framework for Health and Social Care in Northern
Ireland).
Please be aware that it may be more appropriate to make use of the Honest
Broker Service (HBS) rather than completing a Data Access Agreement. The HBS
enables the provision of anonymised, aggregated and in some cases
pseudonymised health and social care data to the DHSSPS and HSC organisations,
and of anonymised data for ethically approved health and social care related
research.
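For readers unfamiliar with the distinction drawn above, pseudonymisation replaces direct identifiers with stable tokens that only the data holder can map back to individuals. The sketch below shows one common way this is done (keyed hashing); it is purely illustrative and is not a description of the HBS's actual mechanism. The key, field names, and truncation length are all assumptions.

```python
# Minimal sketch of keyed pseudonymisation, as distinct from anonymisation or
# aggregation. Nothing here is prescribed by the HBS; the key handling and
# field choice are illustrative assumptions only.
import hmac
import hashlib

SECRET_KEY = b"held-by-the-data-holder-only"  # hypothetical key, never shared with recipients

def pseudonymise(identifier):
    """Replace a direct identifier (e.g. an HSC number) with a stable pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"hsc_number": "1234567890", "diagnosis_code": "E11"}
record["hsc_number"] = pseudonymise(record["hsc_number"])
# The same input always maps to the same pseudonym (so records can still be
# linked), but the mapping cannot be reversed without the key.
print(record)
```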
Arrangements for access to personal data may already be covered by a contract
(e.g. a contract for supplier support on an information system); organisations
therefore need to be clear that any proposed data sharing is either covered
adequately by that contract, or else make sure that a Data Access Agreement is
completed.
The following Data Access Agreement must be completed by any organisation
wishing to access
HSC Trust data. It must be considered for approval and signed by the supplier
organisation’s Personal Data Guardian.
In the event of a breach of this agreement which results in a financial
penalty, claim or proceedings, the parties agree to co-operate to identify and
apportion responsibility for the breach and the defaulting party will accept
responsibility for any such claim.
Please refer to Appendix 2, ‘Principles Governing Information Sharing’ for
guidance.
The form is divided into Sections (A-I) as detailed below:
<table>
<tr>
<th>
**Section A** :
</th>
<th>
Details of Requesting Organisation
</th> </tr>
<tr>
<td>
**Section B:**
</td>
<td>
Commissioning Organisation
</td> </tr>
<tr>
<td>
**Section C:**
</td>
<td>
Details of data items requested
</td> </tr>
<tr>
<td>
**Section D:**
</td>
<td>
Consent issues
</td> </tr>
<tr>
<td>
**Section E:**
</td>
<td>
Data Protection
</td> </tr>
<tr>
<td>
**Section F:**
</td>
<td>
Measures to prevent disclosure of Personal Identifiable Information
</td> </tr> </table>
**Section G:** Data Retention
**Section H:** Declaration: Requesting Organisation
**Section I:** Declaration: Owner Organisation
**Appendix 1:** Data Destruction Notification and checklist
**Appendix 2:** Principles Governing Information Sharing
Please ensure that this form is returned to: _____________________________
_____________________________
_____________________________
_____________________________
Internal Reference: _______________________
Internal Contact:
Name ___________________________________
IAO_____________________________________
Service Group (if relevant):__________________
<table>
<tr>
<th>
Title of Agreement
</th>
<th>
</th> </tr>
<tr>
<td>
Date of Request
</td>
<td>
</td> </tr> </table>
Please state if this is an update of a previous agreement or a new request for
personal identifiable information
Date Access Begins: _______________________
Date Access Ends: ________________________
Review date if on-going agreement:_____________
An update of an earlier extract / New application
<table>
<tr>
<th>
**(A) Details of Requesting Organisation**
</th> </tr>
<tr>
<td>
Name of Requesting Organisation: Please note that the Data Access Agreement
will be immediately returned unless the requesting organisation has signed
section H.
</td> </tr>
<tr>
<td>
Name of Authorised Officer
Requesting Access to Trust Data (please print)
</td>
<td>
</td> </tr>
<tr>
<td>
Position/Status
</td>
<td>
</td> </tr>
<tr>
<td>
Address
Postcode
</td>
<td>
</td> </tr>
<tr>
<td>
Sector of the requesting organisation e.g. Voluntary, Public, Private etc.
</td>
<td>
</td> </tr>
<tr>
<td>
Telephone Number
</td>
<td>
</td> </tr>
<tr>
<td>
Email Address
</td>
<td>
</td> </tr>
<tr>
<td>
Name and Telephone Number of
Requesting Organisation or Personal
Data Guardian
</td>
<td>
</td> </tr> </table>
If you require the data to carry out work on behalf of another organisation,
please complete section (B) below. If not, please go straight to section (C).
<table>
<tr>
<th>
**(B) Commissioning Organisation**
</th> </tr>
<tr>
<td>
Name of Commissioning Organisation
</td>
<td>
</td> </tr>
<tr>
<td>
Contact Name
</td>
<td>
</td> </tr>
<tr>
<td>
Title
</td>
<td>
</td> </tr>
<tr>
<td>
Contact Number
</td>
<td>
</td> </tr>
<tr>
<td>
Email Address
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**(C) Details of ‘Data Items’ Required:**
</th>
<th>
**Rationale for data Items**
</th> </tr>
<tr>
<td>
Please provide a list and description of the data to which the request
applies, e.g. include all identifier attributes (e.g. Name, Address, Postcode,
Date of Birth, Gender, HSC Number, Diagnosis Code, Religion, etc.)
</td>
<td>
Please indicate the reasons for requiring each of these data items
</td> </tr>
<tr>
<td>
1 _________________________________
2___________________________________
3 ___________________________________
4___________________________________
5___________________________________
6___________________________________
7___________________________________
8___________________________________
</td>
<td>
1__________________________________
2 __________________________________
3___________________________________
4 ___________________________________
5___________________________________
6 ___________________________________
7___________________________________
8 ___________________________________
</td> </tr>
<tr>
<td>
Please state in as much detail as possible, the purpose for which the data are
required by the organisation named in section (A) including any record linking
or matching to other data sources.
Please continue on a separate sheet if necessary or attach any relevant
documentation.
</td> </tr>
<tr>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**Processing of Data**
</th> </tr>
<tr>
<td>
Please indicate how you propose to process the data once received (e.g. to
extract and anonymise Service User information; for auditing and monitoring of
Service User care and treatment).
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
System(s) from which Data is to be extracted (If Known) Please include sites
or Geographical locations (If Known)
</th> </tr>
<tr>
<td>
For example PAS, RVH
</td> </tr>
<tr>
<td>
Is the Data to be Viewed only (V); or Viewed and Updated (U); or Transferred
and Viewed (T)?
</td>
<td>
Please specify: _______
</td> </tr>
<tr>
<td>
Will Data contain Client Identifiable Details?
</td>
<td>
**(Please Tick)**
Yes No
</td> </tr>
<tr>
<td>
If you have answered “No” to the question above have you considered whether
the data could be released via the Honest Broker Service?
</td>
<td>
**Yes**
</td>
<td>
</td>
<td>
No
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Frequency of transfers
</td>
<td>
Once Only
Other (Please specify)
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
**(D) Consent Issues**
</td> </tr>
<tr>
<td>
Do you have the individuals’ consent?
</td>
<td>
Yes No
</td> </tr>
<tr>
<td>
If yes, please provide a copy of the consent form, i.e. explicit consent
should be obtained for the processing of sensitive personal data.
</td>
<td>
</td> </tr>
<tr>
<td>
If no, why is it not practical to obtain consent?
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**(E) Data Protection (of Requesting Organisation)**
</th> </tr>
<tr>
<td>
Do you have a confidentiality / privacy policy which complies with the Data
Protection Act 1998?
</td>
<td>
Yes
</td>
<td>
No
</td> </tr>
<tr>
<td>
Are confidentiality clauses included within contracts of all staff with access
to the person identifiable information?
</td>
<td>
Yes
</td>
<td>
No
</td> </tr>
<tr>
<td>
Are all staff trained in and aware of their responsibilities under the Data
Protection Act 1998, and do they adhere to the eight Data Protection Act
Principles?
</td>
<td>
Yes
</td>
<td>
No
</td> </tr>
<tr>
<td>
Provide details /copy of your ICT security policy for your organisation
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Provide confirmation that your organisation has Data Protection notification
for purposes of analysis.
Please provide your ICO notification/registration number
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Have you conducted a Privacy Impact Assessment?
If yes please include a copy with this form.
</td>
<td>
Yes
</td>
<td>
No
</td> </tr> </table>
<table>
<tr>
<th>
**(F) Measures to Prevent Disclosure of Person Identifiable Information (of
Requesting Organisation)**
</th> </tr>
<tr>
<td>
Will this data be accessed or transferred by you to another organisation?
</td>
<td>
Yes No
(If Yes, please give details including in what country it will be stored)
</td> </tr>
<tr>
<td>
If Yes, has your Data Controller/Data Processor granted permission for onward
disclosure?
</td>
<td>
</td> </tr>
<tr>
<td>
How will you secure the information provided while it is being transferred?
</td>
<td>
</td> </tr>
<tr>
<td>
If applicable, how will you secure information that is transferred by you to
another organisation?
</td>
<td>
</td> </tr>
<tr>
<td>
Describe the physical security arrangements for the location where person
identifiable data is to be:
* processed; and
* stored _(if different to above)._
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**System Information**
</th> </tr>
<tr>
<td>
Provide details of the access and/or firewall controls implemented on the
system, and the encryption measures in place.
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**(G) Data Retention (of requesting Organisation)**
</th> </tr>
<tr>
<td>
Please state the date by which you will be finished using the data.
</td>
<td>
</td> </tr>
<tr>
<td>
If the retention period for which you require the data is greater than one
year, please indicate the reasons.
(The maximum data retention period is 2 years; after this time a review of
this agreement is required.)
</td>
<td>
</td> </tr>
<tr>
<td>
Describe the method of data destruction you will employ when you have
completed your work using person identifiable data
</td>
<td>
</td> </tr> </table>
**Please ensure that the Data Destruction Notification (Appendix 1) is
completed within the specified retention period and returned to the contact
person on the front of the form.**
<table>
<tr>
<th>
**(H) Declaration: Requesting Organisation**
</th> </tr>
<tr>
<td>
**Data Protection Undertaking on Behalf of the Organisation Wishing to Access
the Data**
My organisation requires access to the data specified and will conform to the
Data Protection Act 1998 and the guidelines issued by the DHSSPS Executive in
January 2009 in _“The Code of Practice on Protecting the Confidentiality of
Service User Information”._
I confirm that the information requested, and any information extracted from
it,
</td> </tr> </table>
<table>
<tr>
<th>
* Is relevant to and not excessive for the stated purpose
* Will be used only for the stated purpose
* Will be stored securely
* Will be held no longer than is necessary for the stated purpose
* Will be disposed of fully and in such a way that it is not possible to reconstitute it.
* All measures will be taken to ensure personal identifiable data will not be disclosed to third parties.
* The Health and Social Care organisation will be informed of the data being deleted / destroyed.
I _(name: printed)_ ______________________________, as the Authorised Officer
of _(Organisation)_ _________________________________, declare that I have
read and understand my obligations and adhere to the conditions contained in
this Data Access Agreement.
**______________________________________________________ Signed:**
**(Personal Data Guardian)**
**Signed: (IAO/SIRO)**
**Date:**
______________________________________________________
</th> </tr>
<tr>
<td>
**(I) Declaration – Owner Organisation**
</td> </tr>
<tr>
<td>
**DATA ACCESS AGREEMENT I CONFIRM THAT:**
1. Southern Health and Social Care Trust consents to the disclosure of the data specified, to the organisation identified in Section A of this form.
The disclosure of the data conforms to the guidelines issued by the DHSSPS NI
Code of Practice on Protecting Confidentiality of Service User Information,
2012.
2. The data covered by this agreement are: **(*delete as appropriate)**
Either data which are exempt from the Data Protection Act 1998, or
</td> </tr>
<tr>
<td>
Are notified under the Data Protection Act 1998 and their disclosure
conforms to the current notification under The Act.
**Signed:** _____________________________________________________
**(Personal Data Guardian) OR (Senior Information Risk Owner SIRO)**
**Date:** _____________________________________________________
</td> </tr> </table>
**Please note that this organisation has the right to inspect the premises and
processes of the requesting organisation to ensure that they meet the
requirements set out in the agreement.**
**Any loss, theft or corruption of the shared data by the requesting
organisation must be immediately reported to the Personal Data Guardian of the
owning organisation. Please also note that any serious breaches, data loss,
theft or corruption should also be reported to the ICO by the Data
Controller.**
**Appendix 1**
**Data Destruction Notification and checklist**
Authorised users of the person identifiable data have, under the terms and
conditions of the Data Access Agreement, a requirement to destroy the data on
or before the retention date stated in Section (G).
This form should be completed on destruction of the data and returned to the
Personal Data Guardian at:
**ENTER ADDRESS**
<table>
<tr>
<th>
**Data Destruction Notification**
</th> </tr>
<tr>
<td>
Name of Organisation
</td>
<td>
</td> </tr>
<tr>
<td>
Name of Authorised Officer (please print)
</td>
<td>
</td> </tr>
<tr>
<td>
Position/Status
</td>
<td>
</td> </tr>
<tr>
<td>
Address
</td>
<td>
</td> </tr>
<tr>
<td>
Telephone Number
</td>
<td>
</td> </tr>
<tr>
<td>
Mobile Number (Optional)
</td>
<td>
</td> </tr>
<tr>
<td>
Fax Number
</td>
<td>
</td> </tr>
<tr>
<td>
Email Address
</td>
<td>
</td> </tr>
<tr>
<td>
Title of Agreement
</td>
<td>
</td> </tr>
<tr>
<td>
Date Declaration Signed
</td>
<td>
</td> </tr>
<tr>
<td>
Date Data Received
</td>
<td>
</td> </tr>
<tr>
<td>
Date Data Destroyed
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
Signature
</th>
<th>
</th> </tr>
<tr>
<td>
Date
</td>
<td>
</td> </tr> </table>
**Health and Social Care Checklist**
<table>
<tr>
<th>
**Termination of Data Access Agreement - Trust Checklist**
</th> </tr>
<tr>
<td>
Name of Internal Trust Contact
</td>
<td>
</td> </tr>
<tr>
<td>
Position/Status
</td>
<td>
</td> </tr>
<tr>
<td>
IAO
</td>
<td>
</td> </tr>
<tr>
<td>
Telephone Number
</td>
<td>
</td> </tr>
<tr>
<td>
Mobile Number (Optional)
</td>
<td>
</td> </tr>
<tr>
<td>
Email Address
</td>
<td>
</td> </tr>
<tr>
<td>
Title of Agreement
</td>
<td>
</td> </tr>
<tr>
<td>
Can you confirm that the data flow has stopped?
</td>
<td>
</td> </tr>
<tr>
<td>
Have you advised IT to stop facilitating the transfer?
</td>
<td>
</td> </tr>
<tr>
<td>
Have you received confirmation from receiving organisation that all
information has been destroyed or returned
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
Signature
</th>
<th>
</th> </tr>
<tr>
<td>
Date
</td>
<td>
</td> </tr> </table>
**Once the Destruction Notification Form and the Organisation Checklist have
been completed, please return both to the contact person detailed on the
agreement.**
Horizon 2020
This project has received funding from the _European Union’s Horizon 2020
research and innovation programme_ under grant agreement No 687228
## Appendix 2 - Principles Governing Information Sharing 1
<table>
<tr>
<th>
**Code of Practice 8 Good Practice**
**Principles 2 **
</th>
<th>
**DPA Principles**
</th>
<th>
**Caldicott Principles 3 **
</th> </tr>
<tr>
<td>
1. All organisations seeking to use confidential service user information should provide information to service users describing the information they want to use, why they need it and the choices the users may have.
2. Where an organisation has a direct relationship with a service user then it should be aiming to implement procedures for obtaining the express consent of the service user.
3. Where consent is being sought this should be by health and social care staff who have a direct relationship with the individual service user.
4. ‘Third Party’ organisations seeking information other than for direct care should be seeking anonymised or pseudonymised data.
5. Any proposed use must be of clear general good or of benefit to service users.
6. Organisations should not collect secondary data on service users who opt out by specifically refusing consent.
7. Service users and/or service user organisations should be involved in the development of any project involving the use of confidential information and the associated policies.
8. To assist the process of pseudonymisation, the Health and Care Number should be used wherever possible.
</td>
<td>
1. Data should be processed fairly and lawfully.
2. Data should be processed for limited, specified and lawful purposes and not further processed in any manner incompatible with those purposes.
3. Processing should be adequate, relevant and not excessive.
4. Data must be accurate and kept up to date.
5. Data must not be kept longer than necessary.
6. Data must be processed in line with the data subject’s rights (including confidentiality rights and rights under article 8 of the Human Rights Act).
7. Data must be kept secure and protected against unauthorised access.
8. Data should not be transferred to other countries without adequate protection.
</td>
<td>
1. Justify the purpose(s) for using confidential information.
2. Only use it when absolutely necessary.
3. Use the minimum that is required.
4. Access should be on a strict need-to-know basis.
5. Everyone must understand his or her responsibilities.
6. Understand and comply with the law.
</td> </tr> </table>
1. These principles must be followed by health and social care organisations when considering use and disclosure of service user information.
2. Code of Practice, paragraph 3.17.
3. PDG Principles are adopted from the Caldicott Principles established in England and Wales.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0882_REMINDER_687931.md
|
**Introduction**
# 1.1 Description of the deliverable content and purpose
The purpose of the present Document is to support the data management life
cycle for the data that will be collected, processed or generated by the
project. The Data Management Plan outlines how research data will be handled
during a project and after it is completed. It represents, at the same time, a
reference document for the project partners and an external means for the
evaluation of the project policy. It can be updated during the execution of
the project to reflect major changes or minor modifications in the management
of data.
# 1.2 Identification of the Data
The final goal of the project, as reflected in the Grant Agreement, is the
development of a demonstrator of an embedded DRAM solution focused on IoT. In
order to reach this goal, the project Work Plan includes the investigation of
different memory cells, the selection of the most promising one and the design
of the memory matrix. These tasks, during the course of the project, will
produce experimental data and simulation results. However, these datasets are
meant as a guide for the development and the design; they are not meant as
long-term datasets which can be useful in later stages of the project or after
its termination. Therefore, the main categories of data that will be produced
in the project will be the scientific papers and the deliverables:
* Scientific publications represent an important part of the project dissemination effort, as detailed in the description of Work Package 6 (“Publication of our original results in scientific journals and international conferences (Europe, USA, Asia) will be stimulated, after Intellectual Property issues have been cleared”) and in the separate Dissemination Plan Document, available at the REMINDER project website. As required by the call, to ensure wider access to such publications, an open access model will be employed.
* On the other hand, the work performed according to the project Work Plan and the necessary decisions taken during the course of the project will be documented in the different Deliverables included in the Work Plan.
**Data Management Plan**
# 2.1 Expected data to be managed
We plan to manage and make available the primary analyzed data produced in
this project. These data are to be prepared and published promptly in the form
of peer-reviewed journal articles, book chapters and other print or electronic
publishing formats. As required by the Grant Agreement, the publications will
be provided in an open access form, so that their availability is guaranteed.
The work executed in the course of the project and the choices made in order
to achieve the project goals will also be documented through the different
deliverables described in the project Work Plan. Preliminary or raw data,
drafts of scientific papers, plans for future research, peer reviews,
communications with colleagues and physical samples are not included in this
plan, nor is confidential information intended for possible commercial
exploitation.
# 2.2 Data formats
All the documents will be available electronically in pdf format; moreover,
depending on the publisher and the journal, the scientific papers could also
be available in print.
# 2.3 Data access and sharing
Scientific papers will be published according to the open access guidelines.
Therefore an electronic version will be available either at the publisher's
website or at an institutional repository of a partner listed in the table
below. All of these are validated repositories according to the OpenAIRE
website ( _www.openaire.eu_ ).
<table>
<tr>
<th>
**Repository**
</th>
<th>
**URL**
</th> </tr>
<tr>
<td>
**Digibug (UGR)**
</td>
<td>
_digibug.ugr.es_
</td> </tr>
<tr>
<td>
**Enlighten (Univ. of Glasgow)**
</td>
<td>
_eprints.gla.ac.uk_
</td> </tr>
<tr>
<td>
**HAL-CEA (CEA)**
</td>
<td>
_hal-cea.archives-ouvertes.fr_
</td> </tr>
<tr>
<td>
**HAL (CNRS, INPG)**
</td>
<td>
_hal.archives-ouvertes.fr_
</td> </tr> </table>
The project Deliverables will be available in the private area of the REMINDER
website ( _reminder.eu_ ) and, once approved, also in the public part.
A copy of all the documents generated during the project will also be backed
up in a file server facility available at the research group of the lead
beneficiary of WP 6 - Management Dissemination, Exploitation, and
Communication (UGR).
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0883_EISCAT3D_PfP_672008.md
|
# Long term preservation plan
Retained documents will be stored for at least 10 years. The long term
preservation plan needs to be decided by the end of this project.
# Sharing policy
Deliverables and milestone documents are openly shared, unless specifically
determined to be confidential. The stored project documents such as tendering
documents, manufacturing contracts, etc., are shared with EISCAT associates.
The archive emails are not shared.
# Responsible person
Deliverables, milestone documents and other general project documents are
prepared and managed by Dr. Sathyaveer Prasad from EISCAT Scientific
Association.
# Resources used
The resources of the EISCAT Scientific Association, such as the project
website and the project staff, are used to some extent in this project. The
project also plans to hire employees and thus to establish a project office
for implementing the EISCAT_3D system.
**2\. ENGINEERING LEVEL SOFTWARE**
# Data Collected
Low-level engineering software will be developed to operate and control the
sub-array, process the data from individual antennas and generate the phased-
array data products. The developed software will be as similar as possible to
that to be used for the EISCAT_3D system. This software is considered data in
the general sense in this document.
# Collection method
The data collection will be done during the project and mostly in work package
(WP) 5 of this project. The data (software codes) are collected using web-
based systems.
**Metadata and documentation**
Documented by in-line comments and documentation principles of each IT partner
in question.
**Ethical and privacy issues**
None
# IPR issues
All produced software will remain the property of the EISCAT Scientific
Association. However, it has to be licensed under an open software license, as
described in the grant agreement.
**Storage and backup project time**
A computer will be used for storage with a regular internal backup system.
# Access management
The developed software can be openly accessed by project staff and EISCAT
associates via the project website, whereas controlled access will be provided
to outside users. However, the access practicalities depend on the software
platform.
# Retention and preservation
Draft and final versions of the developed software codes are stored and
preserved for usage in the EISCAT_3D system.
# Long term preservation plan
A method will be defined by the end of this project for long term preservation
of the developed software.
**Sharing policy**
All the final versions of the developed software products will be shared
publicly.
# Responsible persons
The software engineer is responsible for version control, documentation and
initial storage. The chief engineer is responsible for the overall software
produced in WP5 of this project and for ensuring that the final versions are
made available.
# Resources used
Project staff and other resources (computers, software and hardware) from
EISCAT scientific association will be used to develop the software products.
**3\. SUB-ARRAY TEST DATA**
The testing of the sub-array will produce a low-level data product of the
digitized signal voltages, comparable to the data products at the beginning of
the scientific data chains used by incoherent scatter radar facilities around
the world. Hence, the data processing and storage requirements of this project
are much smaller than those of the full EISCAT_3D system.
# Data Collected
Data sets collected from sub-array testing and system calibration are of only
engineering interest, and do not have much additional value.
# Collection method
The data is collected by performing internal interference testing of various
subsystems and radar system performance testing. A detailed test data
collection method will be defined during the WP6 of this project.
# Metadata and documentation
The local metadata, i.e. the results of the subsystem tests, are reported only
in deliverables; the actual testing data sets are not specifically documented.
**Ethical and privacy issues**
None
**IPR issues**
EISCAT scientific association has ownership to all the test data produced.
**Storage and backup project time**
Individual storage options will be considered during testing with regular
backup of test data.
**Access management**
Only to WP in question.
# Retention and preservation
The director of EISCAT scientific association will make the final decision on
storing such test data. If the data is determined to be stored, it will then
be included in the documentation process and will be stored according to
standard EISCAT data storage policy as defined in the “Blue Book” [2].
# Long term preservation plan
The test data is usually not retained; if it is retained, this will be done
according to the standard EISCAT data storage policy as defined in the
“Blue Book” [2].
# Sharing policy
The sharing will be done only within the WP. However, the retained test data
is openly available and will be shared according to EISCAT scientific
association's sharing policy.
# Responsible persons
The electrical engineer, software engineer and chief engineer will be the
responsible persons for sub-array testing. The decision about data retention
will be taken by the director of the EISCAT Scientific Association.
# Resources used
The hardware procured from vendors, the low-level engineering software
developed in WP5 of this project and the project staff of the EISCAT
Scientific Association will be used to perform the sub-array testing.
**CONCLUSIONS**
The DMP has impact on both the project and the stakeholders.
# Impact on Project
This is the initial version of the DMP and it is clear from this document that
it needs to be further developed, detailed and updated during the project
period. However, it gives the overall data management plan of the EISCAT3D_PfP
project and also the most likely data types to be collected.
# Impact on Stakeholders
Data management plan actions are important for research infrastructures
because such a document will guide the project personnel in accessing the
produced documents, software and key data sets in a standard manner.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0884_TBO-MET_699294.md
|
# 1 Executive Summary
The Data Management Plan (DMP) of the project TBO-Met is presented in this
document. Its target audience is the SESAR Joint Undertaking and the
consortium members: Agencia Estatal de Meteorología (Spain), MeteoSolutions
GmbH (Germany), University of Salzburg (Austria), University Carlos III of
Madrid (Spain), and University of Seville (Spain, consortium coordinator).
TBO-Met addresses the topic SESAR-04-2015 - Environment and Meteorology in
ATM, of the call H2020-SESAR-2015-1; in particular Meteorology. The overall
objective of the project is threefold: 1) to advance in the understanding of
the effects of meteorological uncertainty in TBO; 2) to develop methodologies
to quantify and reduce the effects of meteorological uncertainty in TBO; and
3) to pave the road for a future integration of the management of
meteorological uncertainty into the air traffic management system.
The DMP is intended to describe the data management life cycle for all
datasets to be collected, processed or generated by an H2020 project. The DMP
should address, at least, the data needed to validate the results presented in
scientific publications, including information on:
* the handling of research data during and after the end of the project,
* what data will be collected, processed and/or generated,
* what methodology and standards will be applied,
* whether data will be shared/made open access, and
* how data will be curated and preserved (including after the end of the project).
These elements are described in this document.
The input data used in the project is collected and identified in the
Appendix.
## 2 Introduction 1
According to the Guidelines on FAIR Data Management in H2020 [1], the Data
Management Plan (DMP) is intended to describe the data management life cycle
for all datasets to be collected, processed or generated by an H2020 project.
The DMP should address, at least, the data needed to validate the results
presented in scientific publications, including information on:
* the handling of research data during and after the end of the project,
* what data will be collected, processed and/or generated,
* what methodology and standards will be applied,
* whether data will be shared/made open access, and
* how data will be curated and preserved (including after the end of the project).
The TBO-Met project does not participate in the extended Open Research Data
Pilot; however, the delivery of a DMP was foreseen in the Grant Agreement on a
voluntary basis, because a DMP is a key element of good data management, and
implementing good data management is considered to be a research best
practice.
This DMP describes a data management policy in line with the consortium
agreements on data management (see [2]), and consistent with exploitation and
Intellectual Property Rights (IPR) requirements. In particular, the data
management policy is based on making the research data findable, accessible,
interoperable and reusable (FAIR) in order to enhance knowledge discovery and
innovation, and subsequent data and knowledge integration and reuse.
The data handled in TBO-Met project is classified into three different
categories: Research data generated within the project, research data used
within the project and survey data. According to this classification, this
document is organized as follows. Next in this section, a list of acronyms is
given. A short explanation on generated research data is included in Section
3, whereas the used research data is described in Section 4. In Section 5, the
exclusion of survey data from data management is pointed out. Some provisions
for updating the DMP during TBO-Met project life cycle are given in Section 6
(the input data used in the project is collected and identified in the
Appendix). Finally, references are listed in Section 7.
# 3 Generation of Research Data
The TBO-Met project will not generate any research data, but it will develop
methodologies for trajectory planning under meteorological uncertainties, and
for sector demand analysis under meteorological uncertainties. In fact, one of
the expected outcomes of the project is to develop methodologies to quantify
the impact of meteorological uncertainty in TBO.
# 4 Use of Research Data
The TBO-Met project will make use of the following pre-existing research data:
1. EPS data (provided by Met Offices).
2. Nowcast data (also provided by Met Offices).
3. Aircraft model data (provided by Eurocontrol).
### 4.1 Data Description
Data used (EPS, Nowcast, and aircraft model) constitute the input of the
project, as they are needed to define methodologies for both trajectory
planning under meteorological uncertainty (WP4) and sector demand analysis
under meteorological uncertainty (WP5), and to perform the evaluation and
assessment of those methodologies (WP6).
#### EPS Data
In this project, EPS data are the output data of the global ensemble forecast
system ECMWF-EPS (ENS) and of the Grand Limited Area Model Ensemble Prediction
System GLAMEPS. AEMET as a project partner has access to data of these two
ensemble systems. In D2.1 [3], a more detailed description of ECMWF-EPS and
GLAMEPS, and detailed information about data type, format, and processing
technique can be found. Relevant information is excerpted below for
completeness.
The meteorological information will include wind, temperature, and convection
indicators (or other variables that allow for computation of convection
indicators). The data concerning wind, temperature, and two convection
indicators will be obtained from ECMWF-EPS whereas the data concerning one of
these convection indicators and some temperatures that allow for the
computation of the other convection indicator will be obtained from GLAMEPS.
The data will be retrieved from the ECMWF MARS (Meteorological Archive and
Retrieval System) data base. The data will be downloaded as files in GRIB
format which contain meteorological parameters on a regular latitude-longitude
grid and in hybrid vertical coordinates for the desired forecast times. The
data will be extracted from the model grid to cover only the desired analysis
region in time and space for the flights to be examined. This is done with the
purpose of reducing the data amount and thus computation time in all further
data processing because the raw EPS output has a large or even global coverage
(ENS). The region to be extracted is defined by the minimum and maximum of
latitude/longitude, the pressure level where the flights will take place. The
extraction of the above defined sub grid can be realized by defining certain
request files for the MARS database interface.
#### Nowcast Data
In the 1st Steering Board (SB) Meeting, the definition of the nowcast data
to be used was set as an internal milestone. Therefore, this section will
subsequently be updated (see schedule in Section 6).
#### Aircraft Model Data
In the Kick-off Meeting of the TBO-Met project, it was decided to use the Base
of Aircraft DAta (BADA), from Eurocontrol. According to Eurocontrol [4], BADA
is made of two components: the model specifications, which provide the
theoretical fundamentals used to calculate aircraft performance parameters, and
the datasets containing the aircraft-specific coefficients necessary to
perform calculations. It includes:
* Aircraft operating parameters.
* Aerodynamic model.
* Fuel consumption model.
* Available thrust model.
### 4.2 FAIR data management scheme
#### Making data findable
In order to make the data used in the project identifiable and locatable,
unique identifiers must be defined. In particular, the following is proposed:
* EPS datasets will be identified by a string containing the following attributes: name, issuing office, date, delivery time, time step, coverage area, spatial grid resolution, barometric altitude, and variable name (a hypothetical construction of such a string is sketched after this list).
* Nowcast datasets. The identification of the nowcast data will be done once the data is defined (this section will be updated according to the schedule in Section 6).
* Datasets from BADA will be identified by the aircraft code and the BADA version.
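As an illustration of the EPS identifier proposed in the first bullet, the following hypothetical Python sketch assembles such a string from the listed attributes; the example values and the underscore separator are assumptions, since the DMP does not prescribe an exact format.

```python
# Hypothetical construction of the EPS dataset identifier string. The DMP
# lists the attributes but fixes no format; field values and the separator
# below are illustrative assumptions only.
from dataclasses import dataclass, astuple

@dataclass
class EpsDatasetId:
    name: str              # e.g. "ENS" or "GLAMEPS"
    issuing_office: str    # e.g. "ECMWF"
    date: str              # issue date, e.g. "20160601"
    delivery_time: str     # e.g. "00Z"
    time_step: str         # e.g. "6h"
    coverage_area: str     # e.g. "60N15W30S20E"
    grid_resolution: str   # e.g. "0p5deg"
    altitude: str          # barometric altitude, e.g. "250hPa"
    variable: str          # e.g. "u-wind"

    def as_string(self) -> str:
        return "_".join(astuple(self))

ds_id = EpsDatasetId("ENS", "ECMWF", "20160601", "00Z", "6h",
                     "60N15W30S20E", "0p5deg", "250hPa", "u-wind")
print(ds_id.as_string())
# ENS_ECMWF_20160601_00Z_6h_60N15W30S20E_0p5deg_250hPa_u-wind
```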
#### Making data accessible
Each dataset used in the TBO-Met project is obtained from a pre-existing
database whose access is restricted to registered users. Some TBO-Met project
partners have been granted access to those databases, but they have accepted
terms and conditions of use including non-disclosure clauses. Therefore, the
datasets cannot be made openly available or shared. However, obtaining access
to these databases is not difficult (at least for researchers in the ATM
community), so data accessibility remains good.
For certain datasets, a specific software tool can be used to automate access
to the data. In particular, for the data retrieved from the ECMWF MARS
database, ECMWF has developed an application program interface (called
GRIB-API) to provide an easy and reliable way of encoding and decoding GRIB
messages. With the GRIB-API library, which is written entirely in C, some
command line tools are provided that give a quick way to manipulate GRIB data.
Moreover, a Fortran 90 interface is available, giving access to the main
features of the C library. Further information on GRIB-API can be found in
[5].
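As a decoding illustration, the sketch below reads GRIB messages in Python using the ecCodes bindings, the successor packaging of the GRIB-API library described above; the file name and the keys read are illustrative.

```python
# Minimal decoding sketch, assuming the ecCodes Python bindings (successor
# packaging of GRIB-API). File name and keys are illustrative.
import eccodes

with open("ens_subgrid.grib", "rb") as f:
    while True:
        gid = eccodes.codes_grib_new_from_file(f)  # next message, None at EOF
        if gid is None:
            break
        short_name = eccodes.codes_get(gid, "shortName")       # e.g. "t", "u", "v"
        level = eccodes.codes_get(gid, "level")                # pressure level (hPa)
        member = eccodes.codes_get(gid, "perturbationNumber")  # ensemble member
        values = eccodes.codes_get_values(gid)                 # field as 1-D array
        print(short_name, level, member, float(values.mean()))
        eccodes.codes_release(gid)                 # free the message handle
```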
#### Making data interoperable
To facilitate interoperability of the data used, standard vocabulary from ATM
and MET disciplines will be used throughout the project. No uncommon or
project-specific ontologies or vocabularies will be generated.
#### Making data reusable
There are no restrictions on data re-use by third parties, other than the fact
that these third parties must have been granted access to the aforementioned
pre-existing databases.
### 4.3 Data storage
To store all the data used in the project, a private space in the TBO-Met
website will be created, where all the unique identifiers will be collected.
# 5 Survey Data
To help in achieving TBO-Met project objectives, a survey among the
stakeholders involved (airlines,
ANSPs and Network Manager) is to be performed (WP3). The goals of the survey
are to ensure TBO-Met is aligned with their current meteorological practices in
aviation (particularly any issue regarding meteorological uncertainty), and to
understand future expectations regarding meteorological uncertainty
management. The survey will provide information on the type of meteorological
services/products being used; the common understanding of meteorological
uncertainty; how the different actors provide robustness to the systems; the
desired values of predictability; and the efficiency cost they are willing to
pay.
The collected and processed survey data will provide a first-hand expert
description of current practice and future expectations, which will serve as a
valuable reference for the project activities. However, the survey data are
(anonymized) personal data, not research data. This fact has two immediate
consequences.
On one hand, two ethics requirements (that the TBO-Met project must comply
with) related to the protection of these personal data were identified in the
project Grant Agreement [6], and included as deliverables D8.2 and D8.3 (Refs.
[7, 8]). In deliverable D8.2, detailed information on the procedures that will
be implemented for survey data collection, storage, protection, retention and
destruction and confirmation that they comply with national and EU legislation
is provided. In deliverable D8.3, detailed information on the informed consent
procedures that will be implemented is provided.
On the other hand, the survey data are excluded from the data management
process outlined in this document, because they follow the procedures for the
protection of the personal data.
# 6 Updating the Data Management Plan
Following the Guidelines on FAIR Data Management in H2020 [1], updates in the
DMP are foreseen as new input data are used in the project. This does not
exclude the possibility of updating the DMP whenever unexpected significant
changes arise, such as changes in consortium policies, and changes in
consortium composition and external factors, _inter alia_ . Furthermore, the
DMP will be updated in time with the periodic evaluation of the project (that
is, every six months), and in time with the final review. Hence, the timetable
for DMP review can be summarized as follows:
Table 6.1. DMP Updating Timetable
<table>
<tr>
<th>
**Updating cause**
</th>
<th>
**Due date**
</th> </tr>
<tr>
<td>
Include data used in WP4
</td>
<td>
T0+12 (31/05/2017)
</td> </tr>
<tr>
<td>
Include data used in WP5
</td>
<td>
T0+18 (30/11/2017)
</td> </tr> </table>
### Updates
The DMP has been updated on 01 March 2018, after finishing all the technical
tasks that have required new input data. This update includes an Appendix
where all the input data used in the project is collected and identified.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0885_GRACeFUL_640954.md
|
1 Introduction
2 Data management plan
2.1 General
2.2 Attributes of datasets
2.3 Data set reference and name - file naming convention
2.4 Quality assurance and validation
2.5 Access to data and permissions, facilities for storage, storage after project end
3 Overview / Expected data
4 List of data sets, including plan per data-set
4.1.1 Numerical input data for software developed within the project
4.1.2 Numerical parameters included in software developed within the project
4.1.3 Numerical output data of software developed within the project
4.1.4 Narrative information
4.1.5 Numerical statistics obtained based on narrative information
4.1.6 Source code
# 1 Introduction
This document concerns the Data Management Plan (DMP) of the H2020 GRACeFUL
project. “The purpose of the Data Management Plan (DMP) is to provide an
analysis of the main elements of the data management policy that will be used
by the applicants with regard to all the datasets that will be generated by
the project. The DMP is not a fixed document, but evolves during the lifespan
of the project. The DMP should address the points below on a dataset by
dataset basis and should reflect the current status of reflection within the
consortium about the data that will be produced” ( 1 ).
The DMP has a close relation to the open publishing policy, as can be elicited
from the presentation “Open Access and the Open Research Data Pilot in H2020”
2 : “Aim to deposit at the same time the research data needed to validate the
results” (see copies of slides in figure 1).
**Figure 1: Key slides of the European Commission on Open Access to data.**
Data is referred to in both GRACeFUL proposal and in the GRACeFUL consortium
agreement. The former refers to this DMP being a formal deliverable. The
latter states for each partner: “No data, know-how or information of [partner]
shall be needed by another Party for implementation of the Project (Article
25.2 Grant Agreement) or exploitation of that other Party’s Results (Article
25.3 Grant Agreement). This represents the status at the time of signature of
this Consortium Agreement.”
However, the project concerns the development of Rapid Assessment Tools
(RATs), and the expectation is that numerical data will be required to
populate the different tools which are developed. As a general rule, the data
to populate RATs will be provided by third parties, e.g. digital maps of the
City of Dordrecht, populated hydrodynamic models, etc.. Graceful will leave
the data disclosure and management with the data owner, ensure that proper
references are made and will, whenever applicable ensure that no data are
published without the owner’s consent. In addition, the project will elicit
data and information from stakeholders in the Climate Resilient Urban Design
case study. This may include responses to questionnaires and recordings of
meetings.
Referring to the slide “Pilot on Open Research Data (3)”, the project may need
to opt out of the pilot due to the aforementioned ownership of data and
privacy of results.
The remainder of this document follows the template provided in Guidelines on
Data Management in Horizon 2020 Version 1.0 ( 1 ).
# 2 Data management plan
## 2.1 General
In this GRACeFUL DMP we consider the following items as part of ‘data’:
1. Numerical input data for software developed within the project.
2. Numerical parameters included in software developed within the project.
3. Numerical output data of software developed within the project.
4. Narrative information
5. Numerical statistics obtained based on narrative information.
6. Source code
The general GRACeFUL policy for data not owned by third parties is as follows:
1. Generally speaking, data will be made accessible via the GRACeFUL website in a suitable format, protected by a login facility, including a disclaimer and only after agreeing terms of use.
2. Numerical data which are made accessible will contain meta-data. Wherever possible/feasible, the INSPIRE meta-data model will be used ( 3 ).
3. Narrative data, in particular results from stakeholder interventions will be published in suitable file formats and respecting the agreements on publishing by stakeholders as will be elicited in using Informed Consent Forms (GRACeFUL Deliverable 2.7).
4. Numerical external input data, that is data which are not developed by the project are not managed nor distributed without prior consent by GRACeFUL . Reference to the data-source will be made.
5. Numerical results of pre-processing of data, in other words pre-processed data will be managed internally to the project. Only procedures and reference to source data will be published for re-use and peer-review.
6. Source code of software will be properly managed. Unless there are weighty arguments, such as potential commercialisation or inclusion of proprietary code, software will be published for inspection and re-use.
7. Models and tools are defined as populated software, including input data and parameter settings. They will be properly managed. Unless there are weighty arguments models and tools will be published for inspection and re-use.
8. In case any pre-existing software relevant for the case study is used, data management of data used by this software (input / parameters/ output) will be determined case by case.
For purpose of keeping an overview, the data management plan is tabularized on
landscape, from chapter 4 “List of data sets, including plan per data-set”,
page 10 onwards. The items discussed are in the header row, and defined in
chapter 2.2.
## 2.2 Attributes of datasets
“Table 1: Attributes of datasets” provides an overview of items that will be
included for each dataset.
**Table 1: Attributes of datasets**
<table>
<tr>
<th>
**Item**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
Identifier for the data set to be produced.
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Description of the data that will be generated or collected, its origin (in
case it is collected), nature and scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration and reuse.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for
dissemination and necessary software and other tools for enabling re-use, and
definition of whether access will be widely open or restricted to specific
groups. Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.). In case the
dataset cannot be shared, the reasons for this
</td> </tr>
<tr>
<td>
</td>
<td>
should be mentioned (e.g. ethical, rules of personal data, intellectual
property, commercial, privacy-related, security-related).
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td>
<td>
Description of the procedures that will be put in place for long-term
preservation of the data. Indication of how long the data should be preserved,
what is its approximated end volume, what the associated costs are and how
these are planned to be covered.
</td> </tr> </table>
## 2.3 Data set reference and name - file naming convention
Data will be uploaded to a repository. The naming convention for files will be
WP##_[Description]_v##
where ## stands for a numerical value and v for version.
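A small, hypothetical sketch of how file names could be checked against this convention follows; the regular expression is one possible reading of the rule and is not mandated by the project.

```python
# Hypothetical check of file names against the convention
# WP##_[Description]_v## (## numeric, v for version). The regex is one
# possible reading of the convention, not a project-mandated validator.
import re

NAME_RE = re.compile(r"^WP(\d{2})_(.+)_v(\d{2})$")

def parse_name(filename: str):
    """Return (work package, description, version) or raise ValueError."""
    m = NAME_RE.match(filename)
    if m is None:
        raise ValueError(f"not in WP##_[Description]_v## form: {filename}")
    wp, description, version = m.groups()
    return int(wp), description, int(version)

print(parse_name("WP02_StakeholderCausalLoopDiagrams_v01"))
# (2, 'StakeholderCausalLoopDiagrams', 1)
```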
## 2.4 Quality assurance and validation
No explicit quality assurance and validation of third-party data will be
carried out. If information on the quality of data is readily and publicly
available, this information will be referred to.
Data and software developed within the project will be subject to internal
review and testing. The methodology will be determined case by case.
## 2.5 Access to data and permissions, facilities for storage, storage after
project end.
The GRACeFUL consortium intends to make data publicly and digitally available
through the most appropriate means, for example on a secure section on the
website or via third party open access servers. The following aspects will be
taken into account when selecting the most appropriate means:
1. Ability to store data, source codes and publications.
2. Ability to store data, source codes and publications for some time under embargo, to ensure that GRACeFUL partners have the ability to publish first.
3. Ability to track downloads / users;
4. Ability to guarantee the users have read the terms of use (including indemnifying the project partners from consequences of using the data and tools);
5. Ability to guarantee privacy aspects of the users;
6. Cost during project lifetime;
7. Sustainability / cost after project termination.
The GRACeFUL consortium is looking into appropriate solutions.
# 3 Overview / Expected data
Table 2 provides an initial overview of data used per WP. Typically data are
either input or output, but in some cases generated input data such as causal
loop diagrams are both an input to other parts of the project and an output.
**Table 2: Overview of data**
<table>
<tr>
<th>
**Responsible**
**WP**
</th>
<th>
**Expected input data (including type and spatio-temporal resolution if
applicable)**
</th>
<th>
**Input/**
**Output**
**/Both**
</th>
<th>
**Origin (internal/external)**
</th>
<th>
**Openness of output data (yes/no/TBD [to be decided])**
</th> </tr>
<tr>
<td>
WP1
</td>
<td>
_None_
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
Data included in the
“Climate Adaptation
Support Tool (CAST)”
</td>
<td>
Input
</td>
<td>
external
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
Stakeholder based causal loop diagrams including individual / group weights
</td>
<td>
Both
</td>
<td>
internal
</td>
<td>
Yes (depersonalized)
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP6
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
# 4 List of data sets, including plan per data-set
### 4.1.1 Numerical input data for software developed within the project.
<table>
<tr>
<th>
WP/task
</th>
<th>
Data set reference and name
</th>
<th>
Data set description
</th>
<th>
Standards and metadata
</th>
<th>
Quality assurance status
</th>
<th>
Data sharing
</th>
<th>
Archiving and preservation
(including storage and backup)
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
### 4.1.2 Numerical parameters included in software developed within the
project.
<table>
<tr>
<th>
WP/task
</th>
<th>
Data set reference and name
</th>
<th>
Data set description
</th>
<th>
Standards and metadata
</th>
<th>
Quality assurance status
</th>
<th>
Data sharing
</th>
<th>
Archiving and preservation
(including storage and backup)
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
### 4.1.3 Numerical output data of software developed within the project.
<table>
<tr>
<th>
WP/task
</th>
<th>
Data set reference and name
</th>
<th>
Data set description
</th>
<th>
Standards and metadata
</th>
<th>
Quality assurance status
</th>
<th>
Data sharing
</th>
<th>
Archiving and preservation
(including storage and backup)
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
### 4.1.4 Narrative information
<table>
<tr>
<th>
WP/task
</th>
<th>
Data set reference and name
</th>
<th>
Data set description
</th>
<th>
Standards and metadata
</th>
<th>
Quality assurance status
</th>
<th>
Data sharing
</th>
<th>
Archiving and preservation
(including storage and backup)
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
### 4.1.5 Numerical statistics obtained based on narrative information
<table>
<tr>
<th>
WP/task
</th>
<th>
Data set reference and name
</th>
<th>
Data set description
</th>
<th>
Standards and metadata
</th>
<th>
Quality assurance status
</th>
<th>
Data sharing
</th>
<th>
Archiving and preservation
(including storage and backup)
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
### 4.1.6 Source code
<table>
<tr>
<th>
WP/task
</th>
<th>
Data set reference and name
</th>
<th>
Data set description
</th>
<th>
Standards and metadata
</th>
<th>
Quality assurance status
</th>
<th>
Data sharing
</th>
<th>
Archiving and preservation
(including storage and backup)
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0887_TANDEM_654206.md
|
# Introduction
This document, _D1.3 – Data Management Plan (DMP)_ is a deliverable of the
TANDEM project, which is funded by the European Union’s Horizon 2020 Programme
under Grant Agreement #654206. TANDEM aims at supporting dialogue between the
EU and African Research and Education Networks, with special attention to
Western and Central Africa, which at e-Infrastructure level is coordinated by
the Western and Central African Research and Education Network (WACREN). The
scope of the project is to promote cooperation by exploiting the
interconnection between the European research and education network (GEANT)
and the established African regional networks.
Research data are as important as the publications they support. Hence the
importance for the TANDEM PROJECT of defining a data management policy.
This document introduces the first version of the project Data Management Plan
(DMP).
The TANDEM PROJECT DMP primarily lists the different datasets that will be
produced by the project, the main exploitation perspectives for each of those
datasets, and the major management principles the project will implement to
handle those datasets.
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the consortium with regard to all
the datasets that will be generated by the project.
The DMP is not a fixed document; on the contrary, it will evolve during the
lifespan of the project. This first version of the DMP includes an overview of
the datasets to be produced by the project, and the specific conditions that
are attached to them. The next version of the DMP will go into more detail and
describe the practical data management procedures implemented by the TANDEM
PROJECT. The data management plan will cover the whole data life cycle.
_Figure 1: Steps in the data life cycle. Source: From University of Virginia
Library, Research Data Services_
# Data set reference and name
<table>
<tr>
<th>
</th>
<th>
RESPONSIBILITY FOR THE DATA
</th> </tr>
<tr>
<td>
Person in charge of the data during the project :
</td>
<td>
Damien Alline [email protected]_
Institut de Recherche pour le Développement (France)
</td> </tr> </table>
# Data set description
All TANDEM PROJECT partners have identified the dataset that will be produced
during the different phases of the project. The list is provided below, while
the nature and details for each dataset are given in the subsequent sections.
This list is indicative and allows estimating the data that TANDEM PROJECT
will produce – it may be adapted (addition/removal of datasets) in the next
versions of the DMP to take into consideration the project developments.
<table>
<tr>
<th>
**#**
</th>
<th>
Dataset (DS) name
</th>
<th>
Responsible partner
</th>
<th>
Related WP(s)
</th> </tr>
<tr>
<td>
1
</td>
<td>
DS1_Subscribers_WACREN_Collaborative_platform
</td>
<td>
WACREN
</td>
<td>
4
</td> </tr>
<tr>
<td>
2
</td>
<td>
DS2_Tandem_Newsletter_Subscribers
</td>
<td>
SIGMA
</td>
<td>
5
</td> </tr>
<tr>
<td>
3
</td>
<td>
DS3 _Tandem-Survey
</td>
<td>
BRUNEL
</td>
<td>
3
</td> </tr>
<tr>
<td>
4
</td>
<td>
DS4_End_users_mailing_list
</td>
<td>
WACREN
</td>
<td>
3
</td> </tr>
<tr>
<td>
5
</td>
<td>
DS5 Project Deliverables
</td>
<td>
IRD
</td>
<td>
1
</td> </tr> </table>
# General principles
## Participation in the Pilot on Open Research Data
The TANDEM PROJECT participates in the Pilot on Open Research Data launched by the European Commission along with the Horizon 2020 programme. The consortium strongly believes in the concepts of open science and in the benefits that the European innovation ecosystem and economy can draw from allowing data to be reused at a larger scale. Therefore, all data produced by the project can potentially be published with open access, though this objective will obviously need to be balanced against the other principles described below.
## IPR management and Security
Project partners naturally hold Intellectual Property Rights (IPR) on their technologies and data, on which their economic sustainability relies. The TANDEM PROJECT consortium will therefore protect these data and consult the partner(s) concerned before publishing any data.
Another effect of IPR management is that, the data collected through TANDEM PROJECT being of high value, all measures should be taken to prevent them from leaking or being hacked. This is another key aspect of TANDEM PROJECT data management. Hence, all data repositories used by the project will include secure protection of sensitive data.
A holistic security approach will be undertaken to protect the three main pillars of information security: confidentiality, integrity, and availability. The security approach will consist of a methodical assessment of security risks followed by an impact analysis. This analysis will be performed on the personal information and data processed by the proposed system, their flows, and any risk associated with their processing.
## Personal Data Protection
For some of the activities to be carried out by the project, it may be necessary to collect basic personal data (e.g. full name, contact details, background), even though the project will avoid collecting such data unless deemed necessary.
Such data will be protected in compliance with the EU's Data Protection Directive 95/46/EC 1 on the protection of personal data. National legislation applicable to the project, such as the Italian Personal Data Protection Code 2 , will also be strictly followed.
All data collection by the project will take place only after data subjects have been given full details of the activities to be conducted and have provided signed informed consent forms.
# Data Management Plan
## DATASET 1:
<table>
<tr>
<th>
DS1_Subscribers_WACREN_Collaborative_platform
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset contains the posts and the contact details of all subscribers to
the WACREN collaborative platform
</td> </tr>
<tr>
<td>
Source
</td>
<td>
The WACREN collaborative platform is available at this URL:
community.wacren.net
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
WACREN
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
WACREN
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
WACREN
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
WACREN
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, Task 4.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to, a CSV, TXT or Excel file (see the sketch after this table).
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
This dataset is the result of collaborative work between NREN End-User communities
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Posts and contact details are available only to the members of the communities
registered on the WACREN collaborative platform
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Users have control over the visibility of their personal data
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The dataset will be preserved in WACREN infrastructure.
</td> </tr> </table>
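Several of the dataset tables in this plan note that the data can be imported from, and exported to, CSV, TXT or Excel files. A minimal sketch of such a round-trip is shown below; it is only an illustration, and the column names are hypothetical rather than the actual WACREN platform export schema.

```python
# Minimal sketch of the CSV/TXT/Excel round-trip mentioned in the dataset
# tables. The column names are hypothetical examples, not the real schema.
import pandas as pd

# Hypothetical subscriber records as a platform might export them.
subscribers = pd.DataFrame(
    {
        "name": ["Ada Lovelace", "Alan Turing"],
        "email": ["[email protected]", "[email protected]"],
        "registered": ["2016-01-12", "2016-02-03"],
    }
)

# Export to the three formats named in the tables.
subscribers.to_csv("subscribers.csv", index=False)
subscribers.to_csv("subscribers.txt", sep="\t", index=False)  # TXT as TSV
subscribers.to_excel("subscribers.xlsx", index=False)  # requires openpyxl

# Import back from CSV for further processing.
print(pd.read_csv("subscribers.csv").head())
```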
## DATASET 2:
<table>
<tr>
<th>
DS2_Tandem_Newsletter_Subscribers
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Mailing list containing email addresses and names of all subscribers to the
Tandem’s newsletter
</td> </tr>
<tr>
<td>
Source
</td>
<td>
This dataset is automatically generated when visitors sign up to the
newsletter form available on the project website.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
SIGMA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5, Task 5.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to, a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The mailing list will be used for disseminating the project newsletter to a
targeted audience. An analysis of newsletter subscribers may be performed in
order to assess and improve the overall visibility of the project
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
As it contains personal data, access to the dataset is restricted to the TANDEM consortium.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
The mailing list contains personal data (names and email addresses of
newsletter subscribers). People interested in the project voluntarily
register, through the project website, to receive the project newsletter. They
can unsubscribe at any time.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The dataset will be preserved in SIGMA’s server.
</td> </tr> </table>
## DATASET 3:
<table>
<tr>
<th>
DS3_Tandem-Survey
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Dataset containing answers of people who have participated in the Tandem
Survey
</td> </tr>
<tr>
<td>
Source
</td>
<td>
The survey is built using LimeSurvey and is hosted at http://wacren.net/surveys/index.php/survey/index/sid/865334/newtest/Y/lang/en
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
BRUNEL
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
BRUNEL
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
BRUNEL
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
BRUNEL
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3, Task 3.2
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to, a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
This dataset will be used to produce an analytical report on the most
important NREN services expected by the End Users (Deliverable 3.2 of the
project)
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
As it contains personal data, access to the dataset is restricted to the TANDEM consortium.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
The survey specifically asks whether participants are happy to share their details. If so, they indicate this in the survey document and add their details.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The dataset will be preserved in WACREN infrastructure.
</td> </tr> </table>
## DATASET 4:
<table>
<tr>
<th>
DS4_End_users_mailing_list
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset contains the email addresses of all NREN End Users (researchers,
students, teachers) known by the TANDEM partners.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Archives of TANDEM partners
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
WACREN
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
WACREN
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
WACREN
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
WACREN
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2, Task 2.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to, a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
This dataset is used to disseminate the information about the TANDEM survey
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
As it contains personal data, access to the dataset is restricted to the TANDEM consortium.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The dataset will be preserved in WACREN infrastructure.
</td> </tr> </table>
## DATASET 5:
<table>
<tr>
<th>
DS5 Project Deliverables
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The deliverables of the project.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Generated by WP leaders.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
IRD
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
IRD
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
IRD
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
EC
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP1, Task 1.2
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset is a combination of Word/PDF documents.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
This dataset presents the outcomes of the project
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level : confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
This dataset does not contain confidential information; access to it is therefore public (except for the financial information).
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
The dataset contains personal data: names of people included in the attendee
list of the workshops.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
SIGMA tool of the EC
</td> </tr> </table>
# Timescale
# Conclusion
This Data Management Plan provides an overview of the data that TANDEM PROJECT will produce, together with the related challenges and constraints that need to be taken into consideration.
The analysis contained in this report makes it possible to anticipate the procedures and infrastructures to be implemented by TANDEM PROJECT to manage efficiently the data it will produce.
Nearly all project partners will be owners and/or producers of data, which implies the specific responsibilities described in this report.
The TANDEM PROJECT Data Management Plan puts a strong emphasis on the appropriate collection of metadata (and on its publication, should the data be published), storing all the information necessary for the optimal use and reuse of those datasets.
Specific attention will be given to ensuring that the data made public violates neither partner IPR rules nor the regulations and good practices related to personal data protection. For this latter point, personal data will be systematically anonymized.
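As one hedged illustration of what such systematic anonymization could look like in practice, direct identifiers can be dropped or replaced by salted hashes before a dataset is released. The column names, salt handling and hashing choice below are assumptions made for the sake of the example, not the documented TANDEM procedure.

```python
# Illustrative anonymization step: replace direct identifiers with salted
# hashes and drop name/email columns before publication. Column names are
# hypothetical; a real release would follow a documented, reviewed procedure.
import hashlib
import os

import pandas as pd

SALT = os.environ.get("ANON_SALT", "change-me")  # keep the real salt secret

def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible pseudonym for a personal identifier."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

records = pd.DataFrame(
    {
        "name": ["Ada Lovelace"],
        "email": ["[email protected]"],
        "answer": ["Better eduroam coverage"],
    }
)

records["respondent_id"] = records["email"].map(pseudonymize)
public_release = records.drop(columns=["name", "email"])
print(public_release)
```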
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0888_FREME_644771.md
|
# EXECUTIVE SUMMARY
This deliverable provides the final version of the FREME data management plan. It outlines how the research data collected or generated during the FREME action has been handled.
This document follows the template provided by the European Commission in the
Participant Portal.
# 1 FREME DMP
## 1.1 PURPOSE OF THE FREME DATA MANAGEMENT PLAN (DMP)
The FREME DMP describes the types of data that have been generated or gathered during the project, the standards that have been used, the ways in which the data has been exploited and shared for verification or reuse, and how the data will be preserved.
FREME is an H2020 project participating in the Open Research Data Pilot, a part of the Open Access to Scientific Publications and Research Data programme in H2020 1 . The goal of the programme is to foster access to data generated in H2020 projects. This document has been produced following these guidelines.
This document is the final version of the DMP; earlier versions were delivered in M6 and M13 of the project. Information about the background and objectives of the FREME DMP, including the metadata schemes used for data management, is provided in D7.5 Data Management Plan II 2 .
# 2 DATA DESCRIPTION
## 2.1 FREME DMP: DATA SETS USED AND CONVERTED DURING FREME
FREME uses several datasets which are listed and described below.
### 2.1.1 Datasets integrated and currently used in FREME
This list provides details about the datasets used in the FREME project. Almost all of these datasets are linked to the e-Entity service; some of them are also linked to e-Link and e-Terminology (a generic access sketch is given after Table 1).
These datasets were already created and open sourced, so FREME has no responsibility for their creation or curation. For this reason, they are only listed in this DMP and no further details are provided. To find more information about them, a link to datahub has been added to each table entry.
<table>
<tr>
<th>
**DATA SET NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**FREME USED**
</th>
<th>
**USED IN SERVICE**
</th>
<th>
**LICENSE**
</th>
<th>
**LINK TO DATAHUB**
</th> </tr>
<tr>
<td>
**DBPEDIA**
</td>
<td>
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia and to link the different datasets on the Web to Wikipedia data. The hope is that this work will make it easier for the huge amount of information in Wikipedia to be used in new and interesting ways, and that it might inspire new mechanisms for navigating, linking and improving the encyclopaedia itself.
</td>
<td>
Used
</td>
<td>
e-Entity; also available in e-Link
</td>
<td>
CC-BY
</td>
<td>
https://datahub.io/dataset/dbpedia
</td> </tr>
<tr>
<td>
**ONLD**
</td>
<td>
The NCSU Organization Name Linked Data (ONLD) is based on the NCSU Organization Name Authority, a tool maintained by the Acquisitions & Discovery department to manage the variant forms of name for journal and e-resource publishers, providers, and vendors in E-Matrix, our locally-developed electronic resource management system (ERMS).
</td>
<td>
Used
</td>
<td>
e-Entity
</td>
<td>
Creative
Commons
CC0
</td>
<td>
https://datahub.io/dataset/ncsu-organization-name-linked-data
</td> </tr>
<tr>
<td>
**VIAF**
</td>
<td>
VIAF (Virtual International Authority File) is an OCLC dataset that virtually
combines multiple LAM (Library Archives Museum) name authority files into a
single name authority service. Put simply it is a large database of people and
organizations that occur in library catalogues.
</td>
<td>
Used
</td>
<td>
e-Entity
</td>
<td>
Open Data
Commons
Attribution
</td>
<td>
https://datahub.io/dataset/viaf
</td> </tr>
<tr>
<td>
**GEOPOLITICAL ONTOLOGY**
</td>
<td>
The FAO geopolitical ontology and related services have been developed to
facilitate data exchange and sharing in a standardized manner among systems
managing
information about countries and/or regions.
</td>
<td>
Used
</td>
<td>
e-Entity
</td>
<td>
tbd
</td>
<td>
https://datahub.io/dataset/fao-geopolitical-ontology
</td> </tr>
<tr>
<td>
**AGROVOC**
</td>
<td>
AGROVOC is a controlled vocabulary covering all areas of interest of the Food
and Agriculture Organization (FAO) of the United Nations, including food,
nutrition, agriculture, fisheries, forestry, environment etc. It is published
by FAO and edited by a community of experts.
</td>
<td>
Used
</td>
<td>
e-Terminology
</td>
<td>
CC BY-SA 4.0
</td>
<td>
https://datahub.io/dataset/agrovoc-skos
</td> </tr>
<tr>
<td>
**EUROPEANA**
</td>
<td>
Europeana.eu is an internet portal that acts as an interface to millions of
books, paintings, films, museum objects and archival records that have been
digitised throughout Europe.
</td>
<td>
Used
</td>
<td>
e-Entity
</td>
<td>
CC0; but certain subsets depend on
their
provider
</td>
<td>
http://www.europea
na.eu/portal/en
</td> </tr>
<tr>
<td>
**GWPP**
**GLOSSARY**
</td>
<td>
A set of scientific terms and their definitions that are used inside the Global Water Pathogen Project online book. This dataset is crowdsourced by a large number of researchers and engineers in the fields of water sanitation and environmental sciences.
</td>
<td>
Used
</td>
<td>
e-Entity
</td>
<td>
CC Attribution 4.0 International
</td>
<td>
http://www.waterpathogens.org/glossary
</td> </tr> </table>
**Table 1 Datasets currently used in FREME**
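As a generic illustration of the access pattern behind these datasets (several of them, DBpedia in particular, are consumed through SPARQL endpoints), the sketch below looks up the English label of a resource on the public DBpedia endpoint. It uses the SPARQLWrapper library and is not the FREME e-Entity implementation itself; the endpoint, query and resource are example choices only.

```python
# Generic example of querying DBpedia (one of the datasets listed above)
# over SPARQL. This illustrates the access pattern, not the FREME
# e-Entity service itself.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery(
    """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
      <http://dbpedia.org/resource/Berlin> rdfs:label ?label .
      FILTER (lang(?label) = "en")
    }
    """
)
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["label"]["value"])  # e.g. "Berlin"
```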
### 2.1.2 Datasets converted and used by FREME 2
During the FREME project, several datasets have been adapted for usage in FREME. Since these datasets have been created by FREME, we provide a detailed description below using a combination of the META-SHARE and DataID metadata schemes. The first two columns of each table show the fields according to each scheme; the third column shows the metadata value.
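To make the mechanics concrete, the sketch below uses rdflib to emit a minimal DCAT-style dataset description in Turtle, loosely in the spirit of the DataID records referenced in the tables. The subject URI and the exact property choices are illustrative assumptions, not the actual DataID vocabulary used by FREME.

```python
# Minimal, illustrative DCAT-style metadata record built with rdflib. The
# exact DataID properties used by FREME may differ; this only demonstrates
# how such a description can be produced in Turtle.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

dataset = URIRef("https://example.org/datasets/orcid-2014")  # hypothetical URI
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("ORCID 2014 Dataset (RDF conversion)")))
g.add((dataset, DCTERMS.license,
       URIRef("http://creativecommons.org/publicdomain/zero/1.0/")))
g.add((dataset, DCTERMS.publisher,
       Literal("AKSW/KILT, INFAI, Leipzig University")))

print(g.serialize(format="turtle"))
```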
#### 1 ORCID
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE**
**NAME**
</td>
<td>
**DATASET**
**REFERENCE AND**
**NAME:**
</td>
<td>
ORCID 2014 Dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-project.eu/datasets/orcid/orcid-dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
http://datahub.io/dataset/orcid-dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**MAINTAINER:**
</th>
<th>
AKSW/KILT, INFAI, Leipzig University
</th> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
ORCID (Open Researcher and Contributor ID) is a non-proprietary alphanumeric code used to uniquely identify scientific and other academic authors. This dataset contains an RDF conversion of the ORCID dataset. The current conversion is based on the 2014 ORCID data dump, which contains around 1.3 million JSON files amounting to 41GB of data.
The converted RDF version is 13GB large (uncompressed); it is modelled with well-known vocabularies such as Dublin Core, FOAF, schema.org, etc., and it is interlinked with GeoNames.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
Open Researcher and Contributor ID (ORCID) - http://orcid.org/
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
ORCID is a useful resource for interlinking general datasets with research and scientific information. Users profiting from ORCID are open data developers, SMEs and researchers in data science and NLP, especially entities from the research domain.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
The CORDIS dataset.
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
The CORDIS dataset can be integrated into other datasets and re-‐used for
data enrichment and mashup purposes.
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
N-Triples
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
N-Triples, compressed in x-bzip2 (see the streaming sketch after this table)
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA**
**DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and kept up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES**
**AND**
**ONTOLOGIES:**
</td>
<td>
Dublin Core, FOAF, schema.org
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
CC0 1.0 Public Domain Dedication
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
http://creativecommons.org/publicdomain/zero/1.0/
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
ORCID is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
ORCID needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/orcid-dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the ORCID dataset is guaranteed by archiving old versions of the scripts used for its creation and by referencing the source data. Preservation is also guaranteed by archiving the old converted ORCID versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversions of newer, richer versions of ORCID.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-project.eu/datasets/orcid/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
13GB (uncompressed)
</td> </tr> </table>
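Because the converted ORCID dump is distributed as bzip2-compressed N-Triples and is around 13GB uncompressed, it is normally processed as a stream rather than loaded into memory at once. A minimal sketch, assuming a hypothetical local file name:

```python
# Stream a bzip2-compressed N-Triples dump line by line instead of loading
# the whole 13GB file into memory. The file name is a placeholder.
import bz2

triple_count = 0
with bz2.open("orcid-2014.nt.bz2", mode="rt", encoding="utf-8") as dump:
    for line in dump:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        triple_count += 1
        if triple_count <= 3:
            print(line)  # show a few raw triples

print(f"{triple_count} triples read")
```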
#### 2 CORDIS FP7
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE**
**NAME**
</td>
<td>
**DATASET**
**REFERENCE AND**
**NAME:**
</td>
<td>
CORDIS FP7 Dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-project.eu/datasets/cordis/cordis-dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/cordis-corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
CORDIS (Community Research and Development Information Service) is the European Commission's core public repository providing dissemination information for all EU-funded research projects. This dataset contains an RDF conversion of the CORDIS FP7 dataset, which provides descriptions of projects funded by the European Union under the Seventh Framework Programme for research and technological development (FP7) from 2007 to 2013. The converted dataset contains over 1 million RDF triples with a total size of around 200MB in the N-Triples RDF serialization format.
The dataset is modelled with well-known vocabularies such as Dublin Core, FOAF, the DBpedia ontology, DOAP, etc., and it is interlinked with DBpedia.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
European Commission
https://open-data.europa.eu/en/data/dataset/cordisfp7projects
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
CORDIS FP7 is a useful resource for interlinking general datasets with research and scientific information. Users profiting from it are open data developers, SMEs and researchers in data science and NLP, especially entities from the research domain.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
The ORCID dataset.
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
The CORDIS dataset can be integrated with other research datasets and reused
for data enrichment purposes.
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
N-Triples
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
N-Triples, compressed in x-bzip2
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and kept up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES**
**AND**
**ONTOLOGIES:**
</td>
<td>
Dublin Core, FOAF, DBpedia ontology, DOAP
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
http://ec.europa.eu/geninfo/legal_notices_en.htm
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
n/a
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
CORDIS is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
CORDIS needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/cordis-corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the CORDIS dataset is guaranteed by archiving old versions of the scripts used for its creation and by referencing the source data. Preservation is also guaranteed by archiving the old converted CORDIS versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversions of newer, richer versions of CORDIS.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-project.eu/datasets/cordis/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
over 1 million RDF triples
</td> </tr> </table>
#### 3 DBpedia Abstracts
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATA SET**
**REFERENCE AND**
**NAME:**
</td>
<td>
DBpedia Abstracts Dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-project.eu/datasets/dbpedia-abstracts/dbpedia-abstracts-dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/dbpedia-abstract-corpus
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**PUBLISHER:**
</th>
<th>
AKSW/KILT, INFAI, Leipzig University
</th> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
This corpus contains a conversion of Wikipedia abstracts in six languages (Dutch, English, French, German, Italian and Spanish) into the NLP Interchange Format (NIF). The corpus contains the abstract texts, as well as the position, surface form and linked article of every link in the text. As such, it contains entity mentions manually disambiguated to Wikipedia/DBpedia resources by native speakers, which predestines it for NER training and evaluation. Furthermore, the abstracts represent a special form of text that lends itself to more sophisticated tasks, such as open relation extraction. Their encyclopaedic style, following Wikipedia guidelines on opening paragraphs, adds further interesting properties: the first sentence puts the article in a broader context, and most anaphora refer to the original topic of the text, making them easier to resolve. Finally, should the same string occur with different meanings, Wikipedia guidelines suggest that the new meaning should again be linked for disambiguation. In short: this type of text is highly interesting. A sketch of reading these annotations is given after this table.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
Wikipedia - https://www.wikipedia.org/
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
DBpedia Abstracts is a useful multilingual resource for learning various NLP tasks, e.g. training named entity recognition models, relation extraction, and similar.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
The WikiNER dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
The DBpedia abstracts dataset can be integrated with other similar training
corpora and reused for training various NLP tasks.
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT**
</td>
<td>
Turtle
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE**
</td>
<td>
Turtle, compressed in x-gzip
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and kept up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES**
**AND**
**ONTOLOGIES:**
</td>
<td>
NIF
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
CC-BY
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
http://purl.org/net/rdflicense/cc-by-sa3.0
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
DBpedia Abstracts is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
DBpedia Abstracts needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/dbpedia-abstract-corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the DBpedia Abstracts corpus is guaranteed by archiving old versions and referencing the source data. Preservation is also guaranteed by archiving the old converted DBpedia Abstracts versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversions of newer, richer versions of the dataset and at extending it to other languages.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
http://wiki-link.nlp2rdf.org/abstracts/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
743 million RDF triples
</td> </tr> </table>
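As mentioned in the table above, the NIF representation records, for every link in an abstract, its surface form, character offsets and the linked resource. The sketch below reads such annotations with rdflib; the inline Turtle sample imitates the corpus structure but is not an actual excerpt from it.

```python
# Illustrative parsing of NIF link annotations with rdflib. The inline
# Turtle sample mimics the corpus structure; it is not a real excerpt.
from rdflib import Graph, Namespace

NIF = Namespace("http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#")
ITSRDF = Namespace("http://www.w3.org/2005/11/its/rdf#")

sample = """
@prefix nif: <http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#> .
@prefix itsrdf: <http://www.w3.org/2005/11/its/rdf#> .

<http://example.org/abstract#char=0,6>
    a nif:Phrase ;
    nif:anchorOf "Berlin" ;
    nif:beginIndex 0 ;
    nif:endIndex 6 ;
    itsrdf:taIdentRef <http://dbpedia.org/resource/Berlin> .
"""

g = Graph()
g.parse(data=sample, format="turtle")

# For each annotated mention, print its surface form and linked resource.
for mention in g.subjects(predicate=ITSRDF.taIdentRef):
    surface = g.value(mention, NIF.anchorOf)
    target = g.value(mention, ITSRDF.taIdentRef)
    print(f"{surface} -> {target}")
```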
#### 4 Global airports
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE**
**NAME**
</td>
<td>
**DATA SET**
**REFERENCE AND**
**NAME:**
</td>
<td>
Global airports in RDF
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-project.eu/datasets/global-airports/global-airports-dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/global-airports-in-rdf
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
DFKI and AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
DFKI and AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION:**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
This corpus contains an RDF conversion of the Global Airports dataset, which was retrieved from openflights.org. The dataset contains information about airport names, locations, codes, and other related details.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
openflights.org
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
Global Airports is a useful resource for interlinking and enriching content that contains information about airports and related topics.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
DBpedia
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
The Global Airports dataset can be reused for data enrichment purposes and
integrated with other relevant datasets such as DBpedia.
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
Turtle
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
Turtle, text/turtle
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA**
**DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and kept up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES**
**AND ONTOLOGIES:**
</td>
<td>
DBpedia ontology, SKOS, schema.org
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
Open Database License - for more see:
http://openflights.org/data.html#license
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
http://purl.oclc.org/NET/rdflicense/odbl1.0
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
Global airports is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
Global airports needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/global-airports-in-rdf
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the Global Airports dataset is guaranteed by archiving old versions and referencing the source data. Preservation is also guaranteed by archiving the old converted Global Airports versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversions of newer, richer versions of the dataset.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-project.eu/datasets/global-airports/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
74K RDF triples
</td> </tr> </table>
#### 5 GRID
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE**
**NAME**
</td>
<td>
**DATA SET**
**REFERENCE AND**
**NAME:**
</td>
<td>
GRID dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
http://api.freme-project.eu/datasets/grid/grid-dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/grid_dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
GRID is a free, openly accessible database of research institution identifiers which enables users to make sense of their data. It does so by minimising the work required to link datasets together using a unique and persistent identifier. This is the RDF version of the dataset.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
GRID - https://www.grid.ac/downloads
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
GRID is a useful resource for the enrichment of various kinds of content related to research institutions.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
The CORDIS, ORCID and PermID datasets.
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
The GRID dataset can be integrated with other relevant datasets and reused for
data enrichment purposes.
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
N-Triples
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE**
</td>
<td>
N-Triples, compressed in x-bzip2
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and kept up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES AND ONTOLOGIES:**
</td>
<td>
Dublin Core, DBpedia Ontology, FOAF, VCARD, SKOS
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
CC BY Creative Commons
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
http://purl.oclc.org/NET/rdflicense/cc-by3.0
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
GRID is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
GRID needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/grid_dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the GRID dataset is guaranteed by archiving old versions of the scripts used for its creation and by referencing the source data. Preservation is also guaranteed by archiving the old converted GRID versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversions of newer, richer versions of the dataset and at converting additional ones.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-project.eu/datasets/grid/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
581K RDF triples
</td> </tr> </table>
#### 6 GWPP
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATA SET**
**REFERENCE AND**
**NAME:**
</td>
<td>
GWPP dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-project.eu/datasets/gwpp/gwpp-dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/gwpp-glossary
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
The GWPP glossary is a set of scientific terms and their definitions that are used inside the Global Water Pathogen Project online book. This dataset is crowdsourced by a large number of researchers and engineers in the fields of water sanitation and environmental sciences.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
GWPP - http://www.waterpathogens.org/glossary
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
GWPP is a useful terminology resource for the enrichment of various kinds of content related to water sanitation and environmental sciences.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
AGRIS, AGROVOC
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
The GWPP dataset can be integrated with other relevant datasets and reused for
data enrichment/annotation purposes.
</td> </tr>
<tr>
<td>
**RESOURCE**
**TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
N-Triples
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE**
</td>
<td>
N-Triples, compressed in x-bzip2
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and kept up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES AND ONTOLOGIES:**
</td>
<td>
RDFS
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
CC Attribution 4.0 International
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
http://purl.oclc.org/NET/rdflicense/cc-by4.0
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
GWPP is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
GWPP needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/gwpp-glossary
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the GWPP dataset is guaranteed by archiving old versions of the scripts used for its creation and by referencing the source data. Preservation is also guaranteed by archiving the old converted GWPP versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversions of newer, richer versions of the dataset and at converting additional ones.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-project.eu/datasets/gwpp/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
346 terms
</td> </tr> </table>
### 2.1.3 Datasets converted but not being used by FREME.
During the FREME project, the Statbel dataset was created but not used by the project.
#### Statbel corpus
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATA SET REFERENCE AND NAME:**
</td>
<td>
Statbel corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-project.eu/datasets/statbel/statbel-dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/statbel-corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
This corpus contains an RDF conversion of datasets from Statistics Belgium (also known as Statbel), which aims at collecting, processing and disseminating relevant, reliable and commented data on Belgian society.
http://statbel.fgov.be/en/statistics/figures/
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
Currently, the corpus contains three datasets:
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Belgian house price index dataset: measures the inflation on the residential property market in Belgium. The data for conversion was obtained from http://statbel.fgov.be/en/statistics/figures/economy/construction_industry/house_price_index/
<tr>
<td>
</td>
<td>
•
</td>
<td>
Employment, unemployment, labour market structure dataset: data on employment, unemployment and the labour market from the labour force survey conducted among Belgian households. The data for conversion was obtained from http://statbel.fgov.be/en/statistics/figures/labour_market_living_conditions/employment/
<tr>
<td>
</td>
<td>
•
</td>
<td>
Unemployment and additional indicators dataset: contains unemployment-related statistics about Belgium and its regions. The data for conversion was obtained from http://statbel.fgov.be/en/modules/publications/statistics/marche_du_travail_et_conditions_de_vie/unemployment_and_additional_indicators_2005-2010.jsp
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
Statbel - http://statbel.fgov.be/en/statistics/figures/
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
Statbel is a useful statistics resource for the enrichment of various kinds of content related to Belgium and Belgian society.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
The UNdata
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
tbd
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and kept up to date using this metadata. Vocabularies and ontologies: Data Cube.
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
tba
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
n/a
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
Statbel is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
Statbel needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/statbel-‐corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the Statbel corpus is guaranteed by archiving old versions of the scripts used for its creation and by referencing the source data. Preservation is also guaranteed by archiving the old converted Statbel versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversions of newer, richer versions of the datasets and at converting additional ones.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-project.eu/datasets/statbel/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
A few thousand triples
</td> </tr> </table>
### 2.1.4 Other datasets used in FREME
The following list includes datasets that have been used by some partners in the context of FREME but are not used as part of the general FREME framework.
<table>
<tr>
<th>
**DATA SET NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**FREME USE?**
</th>
<th>
**USED IN**
**SERVICE**
</th>
<th>
**LICENSE**
</th>
<th>
**LINK**
</th> </tr>
<tr>
<td>
**WAND FINANCE AND INVESTMENT TAXONOMY - WAND INC**
</td>
<td>
A taxonomy with specific topics and entities related to Finance and Investment
</td>
<td>
Uploaded for testing
</td>
<td>
No
</td>
<td>
Evaluation License
</td>
<td>
www.wandinc.com/wand-finance-and-investment-taxonomy.aspx
</td> </tr>
<tr>
<td>
**CIARD RING**
</td>
<td>
The CIARD RING is a global directory of web-based information services and datasets for agricultural research for development. It is the principal tool created through the CIARD initiative (http://www.ciard.net) to allow information providers to register their services and datasets in various categories and so facilitate the discovery of sources of agriculture-related information across the world.
</td>
<td>
Not part of the framework yet
</td>
<td>
No
</td>
<td>
CC Attribution
</td>
<td>
https://datahub.io/dataset/the-ciard-ring, http://ring.ciard.info/rdf-store
</td> </tr>
<tr>
<td>
**AGRIS**
</td>
<td>
International Information System for Agricultural Science and Technology
</td>
<td>
Used to validate FREME services
</td>
<td>
e-Terminology
</td>
<td>
No clear license available yet; one will be soon.
</td>
<td>
https://datahub.io/dataset/agris
</td> </tr>
<tr>
<td>
**LIBRARY OF**
**CONGRESS**
</td>
<td>
The dataset provides
access to authority data at the Library of
Congress.
</td>
<td>
Uploaded for testing purposes
</td>
<td>
e-Entity
</td>
<td>
See terms of service (http://id.loc.gov/about/)
</td>
<td>
http://id.loc.gov/descriptions/
</td> </tr>
<tr>
<td>
**GETTY**
</td>
<td>
Provides structured terminology for art and other material culture, archival
materials, visual surrogates, and bibliographic materials.
</td>
<td>
Uploaded for testing purposes
</td>
<td>
e-Entity
</td>
<td>
Open Data
Commons
Attribution
</td>
<td>
http://vocab.getty.edu/
</td> </tr>
<tr>
<td>
**LINKEDGEODATA**
</td>
<td>
LinkedGeoData uses the information collected by the OpenStreetMap project and makes it available as an RDF knowledge base according to the Linked Data principles.
</td>
<td>
Used via e-Link
</td>
<td>
e-Link
</td>
<td>
Open
Database
License
</td>
<td>
http://linkedgeodata.org/About
</td> </tr>
<tr>
<td>
**GEONAMES**
</td>
<td>
The GeoNames geographical database covers all countries and contains over eleven million placenames that are available for download free of charge.
</td>
<td>
Used via e-Link
</td>
<td>
e-Link
</td>
<td>
Creative Commons
attribution
</td>
<td>
http://www.geonames.org/
</td> </tr> </table>
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0889_FREME_644771.md
|
# EXECUTIVE SUMMARY
This deliverable provides a final version of FREME data management plan. The
deliverable outlines how the research data collected or generated has been
handled during the FREME action.
This document follows the template provided by the European Commission in the
Participant Portal.
# 1 FREME DMP
## 1.1 PURPOSE OF THE FREME DATA MANAGEMENT PLAN (DMP)
The FREME DMP describes the types of data that have been generated or gathered
during the project, the standards which have been used, the ways how the data
has been exploited and shared for verification or reuse, and how the data will
be preserved.
FREME is a H2020 project participating in the Open Research Data Pilot a part
of the Open Access to Scientific Publications and Research Data programme in
H2020 1 . The goal of the programme is to foster access to data generated in
H2020 projects. This document has been produced following these guidelines.
This document is a final version of the DMP, delivered in M6 and M13 of the
project. Information about the background of FREME DMP and objectives,
including the metadata schemes used for data management, has been provided in
D7.5 Data Management Plan II 2 .
# 2 DATA DESCRIPTION
## 2.1 FREME DMP: DATA SETS USED AND CONVERTED DURING FREME
FREME uses several datasets which are listed and described below.
### 2.1.1 Datasets integrated and currently used in FREME
This list provides details about the datasets used in the FREME project.
Almost all these datasets are linked to e-‐Entity service, some of them are
linked also to e-‐link and e-‐terminology.
These datasets were already created and open sourced, so FREME has no
responsibility for their creation or curation. For this reason, they are just
listed on this DMP and no further details are provided. To find more
information about them, a link to datahub has been added to the table entries.
<table>
<tr>
<th>
**DATA SET NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**FREME USED**
</th>
<th>
**USED IN SERVICE**
</th>
<th>
**LICENSE**
</th>
<th>
**LINK TO DATAHUB**
</th> </tr>
<tr>
<td>
**DBPEDIA**
</td>
<td>
Dbpedia is a crowd-‐sourced community effort to extract structured
information from wikipedia and make this information available on the web.
Dbpedia allows you to ask sophisticated queries against wikipedia, and to link
the different data sets on the web to wikipedia data. we hope that this work
will make it easier for the huge amount of information in wikipedia to be used
in some new interesting ways. furthermore, it might inspire new mechanism for
navigating, linking and improving the encyclopaedia itself.
</td>
<td>
Used
</td>
<td>
e-‐Entity and also available
in e-‐Link
</td>
<td>
CC-‐BY
</td>
<td>
https://datahub.io/d ataset/dbpedia
</td> </tr>
<tr>
<td>
**ONLD**
</td>
<td>
The NCSU Organization Name Linked Data (ONLD) is based on the NCSU
Organization Name Authority, a tool maintained by the Acquisitions & Discovery
department to manage the variant forms of name for journal and e-‐resource
publishers, providers, and vendors in E-‐Matrix, our locally-developed
electronic resource management system (ERMS).
</td>
<td>
Used
</td>
<td>
e-‐Entity
</td>
<td>
Creative
Commons
CC0
</td>
<td>
https://datahub.io/da
taset/ncsu-organization-‐name-‐
linked-‐data
</td> </tr>
<tr>
<td>
**VIAF**
</td>
<td>
VIAF (Virtual International Authority File) is an OCLC dataset that virtually
combines multiple LAM (Library Archives Museum) name authority files into a
single name authority service. Put simply it is a large database of people and
organizations that occur in library catalogues.
</td>
<td>
Used
</td>
<td>
e-‐Entity
</td>
<td>
Open Data
Commons
Attribution
</td>
<td>
https://datahub.io/d
ataset/viaf
</td> </tr>
<tr>
<td>
**GEOPOLITICAL ONTOLOGY**
</td>
<td>
The FAO geopolitical ontology and related services have been developed to
facilitate data exchange and sharing in a standardized manner among systems
managing
information about countries and/or regions.
</td>
<td>
Used
</td>
<td>
e-‐Entity
</td>
<td>
tbd
</td>
<td>
https://datahub.io/d ataset/fao-‐
geopolitical-‐ontology
</td> </tr>
<tr>
<td>
**AGROVOC**
</td>
<td>
AGROVOC is a controlled vocabulary covering all areas of interest of the Food
and Agriculture Organization (FAO) of the United Nations, including food,
nutrition, agriculture, fisheries, forestry, environment etc. It is published
by FAO and edited by a community of experts.
</td>
<td>
Used
</td>
<td>
e-‐terminology
</td>
<td>
CC4.0 BY-SA
</td>
<td>
https://datahub.io/d ataset/agrovoc-‐skos
</td> </tr>
<tr>
<td>
**EUROPEANA**
</td>
<td>
Europeana.eu is an internet portal that acts as an interface to millions of
books, paintings, films, museum objects and archival records that have been
digitised throughout Europe.
</td>
<td>
Used
</td>
<td>
e-‐Entity
</td>
<td>
CC0; but certain subsets depend on
their
provider
</td>
<td>
http://www.europea
na.eu/portal/en
</td> </tr>
<tr>
<td>
**GWPP**
**GLOSSARY**
</td>
<td>
A set of scientific terms and their definitions that are used inside the
Global Water Pathogen Project online book. This dataset is crowdsourced by a
large number of researchers and engineers on the fields of water sanitation
and environmental sciences.
</td>
<td>
Used
</td>
<td>
e-‐Entity
</td>
<td>
CC
Attribution
4.0
Internation
al
</td>
<td>
http://www.waterpa thogens.org/glossary
</td> </tr> </table>
**Table 1 Datasets currently used in FREME**
### 2.1.2 Datasets converted and used by FREME 2
During the FREME project, several datasets that have been adapted for usage in
FREME. Since these datasets have been created by FREME, below we provide a
detailed description using a combination of the META-‐SHARE and DATAID
metadata schemes. The first two columns of each table show the fields
according to each scheme. The third column shows the metadata value.
#### 1 ORCID
<table>
<tr>
<th>
**META-‐**
**SHARE**
**FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE**
**NAME**
</td>
<td>
**DATASET**
**REFERENCE AND**
**NAME:**
</td>
<td>
ORCID 2014 Dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-‐project.eu/datasets/orcid/orcid-‐dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
http://datahub.io/dataset/orcid-‐dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**MAINTAINER:**
</th>
<th>
AKSW/KILT, INFAI, Leipzig University
</th> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
ORCID (Open Researcher and Contributor ID) is a non-‐proprietary alphanumeric
code to uniquely identify scientific and other academic authors. This dataset
contains RDF conversion of the ORCID dataset. The current conversion is based
on the 2014 ORCID data dump, which contains around 1.3 million JSON files
amounting to 41GB of data.
The converted RDF version is 13GB large (uncompressed) and it is modelled with
well-known vocabularies such as Dublin Core, FOAF, schema.org, etc., and it
is interlinked with GeoNames.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
Open Researcher and Contributor ID (ORCID) -‐ http://orcid.org/
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
ORCID is a useful resource for interlinking general datasets with research and scientific information. Users profiting from ORCID are open data developers, SMEs and researchers in data science and NLP, especially entities from the research domain.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
the CORDIS dataset.
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-‐USE AND INTEGRATION:**
</td>
<td>
The ORCID dataset can be integrated into other datasets and re-used for data enrichment and mashup purposes.
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
N-Triples
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
N-Triples, compressed in x-bzip2
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA**
**DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and maintained up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES**
**AND**
**ONTOLOGIES:**
</td>
<td>
Dublin Core, FOAF, schema.org
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
CC0 1.0 Public Domain Dedication
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
http://creativecommons.org/publicdomain/zero/1.0/
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
ORCID is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
ORCID needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/orcid-‐dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the ORCID dataset is guaranteed by archiving old versions of the scripts used for its creation and by referencing the source data. Preservation is also guaranteed by archiving the old converted ORCID versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversion of the newer, richer versions of ORCID.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-‐project.eu/datasets/orcid/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
13 GB (uncompressed)
</td> </tr> </table>
#### 2 CORDIS FP7
<table>
<tr>
<th>
**META-‐**
**SHARE**
**FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE**
**NAME**
</td>
<td>
**DATASET**
**REFERENCE AND**
**NAME:**
</td>
<td>
Name: CORDIS FP7 Dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-‐project.eu/datasets/cordis/cordis-‐dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/cordis-‐corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
CORDIS (Community Research and Development Information Service) is the European Commission’s core public repository providing dissemination information for all EU-funded research projects. This dataset contains an RDF conversion of the CORDIS FP7 dataset, which provides descriptions of projects funded by the European Union under the seventh framework programme for research and technological development (FP7) from 2007 to 2013. The converted dataset contains over 1 million RDF triples with a total size of around 200MB in the N-Triples RDF serialization format.
The dataset is modelled with well-known vocabularies such as Dublin Core, FOAF, the DBpedia ontology, DOAP, etc., and it is interlinked with DBpedia.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
European Commission
https://open-‐data.europa.eu/en/data/dataset/cordisfp7projects
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
CORDIS FP7 is a useful resource for interlinking general datasets with research and scientific information. Users profiting from CORDIS are open data developers, SMEs and researchers in data science and NLP, especially entities from the research domain.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
The ORCID dataset.
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-‐USE AND INTEGRATION:**
</td>
<td>
The CORDIS dataset can be integrated with other research datasets and reused
for data enrichment purposes.
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
N-Triples
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
N-Triples, compressed in x-bzip2
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and maintained up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES**
**AND**
**ONTOLOGIES:**
</td>
<td>
Dublin Core, FOAF, DBpedia ontology, DOAP
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
http://ec.europa.eu/geninfo/legal_notices_en.htm
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
n/a
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
CORDIS is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
CORDIS needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/cordis-‐corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the CORDIS dataset is guaranteed by archiving old versions of the scripts used for its creation and by referencing the source data. Preservation is also guaranteed by archiving the old converted CORDIS versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversion of the newer, richer versions of CORDIS.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-‐project.eu/datasets/cordis/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
over 1 million RDF triples
</td> </tr> </table>
#### 3 DBpedia Abstracts
<table>
<tr>
<th>
**META-‐**
**SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATA SET**
**REFERENCE AND**
**NAME:**
</td>
<td>
DBpedia Abstracts Dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-‐project.eu/datasets/dbpedia-‐abstracts/dbpedia-‐abstracts-dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/dbpedia-‐abstract-‐corpus
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**PUBLISHER:**
</th>
<th>
AKSW/KILT, INFAI, Leipzig University
</th> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
This corpus contains a conversion of Wikipedia abstracts in six languages
(Dutch, English, French, German, Italian and Spanish) into the NLP Interchange
Format (NIF). The corpus contains the abstract texts, as well as the position,
surface form and linked article of all links in the text. As such, it contains
entity mentions manually disambiguated to Wikipedia/DBpedia resources by
native speakers, which predestines it for NER training and evaluation.
Furthermore, the abstracts represent a special form of text that lends itself
to be used for more sophisticated tasks, like open relation extraction. Their
encyclopaedic style, following Wikipedia guidelines on opening paragraphs adds
further interesting properties. The first sentence puts the article in broader
context. Most anaphora will refer to the original topic of the text, making
them easier to resolve. Finally, should the same string occur in different
meanings, Wikipedia guidelines suggest that the new meaning should again be
linked for disambiguation. In short: The type of text is highly interesting.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
Wikipedia -‐ https://www.wikipedia.org/
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
DBpedia Abstracts is a useful multilingual resource for learning various NLP tasks, e.g. training named entity recognition models, relation extraction, and similar.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
The Wikiner dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-‐USE AND INTEGRATION:**
</td>
<td>
The DBpedia abstracts dataset can be integrated with other similar training
corpora and reused for training various NLP tasks.
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT**
</td>
<td>
Turtle
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE**
</td>
<td>
Turtle, compressed in x-gzip
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and maintained up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES**
**AND**
**ONTOLOGIES:**
</td>
<td>
NIF
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
CC-‐BY
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
http://purl.org/net/rdflicense/cc-‐by-‐sa3.0
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
DBpedia Abstracts is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
DBpedia Abstracts needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/dbpedia-‐abstract-‐corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the DBpedia Abstracts dataset is guaranteed by archiving old versions and by referencing the source data. Preservation is also guaranteed by archiving the old converted DBpedia Abstracts versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversion of the newer, richer versions of dataset
and its extension to other languages.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
http://wiki-‐link.nlp2rdf.org/abstracts/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
743 million RDF triples
</td> </tr> </table>
#### 4 Global airports
<table>
<tr>
<th>
**META-‐**
**SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE**
**NAME**
</td>
<td>
**DATA SET**
**REFERENCE AND**
**NAME:**
</td>
<td>
Global airports in RDF
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-‐project.eu/datasets/global-‐airports/global-‐airports-‐dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/global-‐airports-‐in-‐rdf
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
DFKI and AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
DFKI and AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION:**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
This corpus contains an RDF conversion of the Global airports dataset, which was retrieved from openflights.org. The dataset contains information about airport names, locations, codes, and other related details.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
openflights.org
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
Global airports is a useful resource for interlinking and enrichment of content that contains information about airports and related topics.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
DBpedia
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-‐USE AND INTEGRATION:**
</td>
<td>
The Global Airports dataset can be reused for data enrichment purposes and
integrated with other relevant datasets such as DBpedia.
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
Turtle
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
Turtle, text/turtle
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA**
**DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and maintained up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES**
**AND ONTOLOGIES:**
</td>
<td>
DBpedia ontology, SKOS, schema.org
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
Open Database License; for more see http://openflights.org/data.html#license
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
http://purl.oclc.org/NET/rdflicense/odbl1.0
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
Global airports is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
Global airports needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/global-‐airports-‐in-‐rdf
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the Global airports dataset is guaranteed by archiving old versions and by referencing the source data. Preservation is also guaranteed by archiving the old converted Global airports versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims at providing conversion of the newer, richer versions of the
dataset.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-‐project.eu/datasets/global-‐airports/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
74K RDF triples
</td> </tr> </table>
#### 5 GRID
<table>
<tr>
<th>
**META-‐**
**SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE**
**NAME**
</td>
<td>
**DATA SET**
**REFERENCE AND**
**NAME:**
</td>
<td>
GRID dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
http://api.freme-‐project.eu/datasets/grid/grid-‐dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/grid_dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
GRID is a free, openly accessible database of research institution identifiers
which enables users to make sense of their data. It does so by minimising the
work required to link datasets together using a unique and persistent
identifier. This is the RDF version of the dataset.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
GRID -‐ https://www.grid.ac/downloads
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
GRID is a useful resource for enrichment of various kinds of content related to research institutions.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
the CORDIS, ORCID, PermID
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-‐USE AND INTEGRATION:**
</td>
<td>
The GRID dataset can be integrated with other relevant datasets and reused for
data enrichment purposes.
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
N-Triples
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE**
</td>
<td>
N-Triples, compressed in x-bzip2
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and maintained up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES AND ONTOLOGIES:**
</td>
<td>
Dublin Core, DBpedia Ontology, FOAF, VCARD, SKOS
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
CC BY Creative Commons
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
http://purl.oclc.org/NET/rdflicense/cc-‐by3.0
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
GRID is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
GRID needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/grid_dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the GRID dataset is guaranteed by archiving old versions of the scripts used for its creation and by referencing the source data. Preservation is also guaranteed by archiving the old converted GRID versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims to provide conversions of newer, richer versions of the dataset and to convert additional ones.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-‐project.eu/datasets/grid/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
581K RDF triples
</td> </tr> </table>
#### 6 GWPP
<table>
<tr>
<th>
**META-‐**
**SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATA SET**
**REFERENCE AND**
**NAME:**
</td>
<td>
GWPP dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-‐project.eu/datasets/gwpp/gwpp-‐dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/gwpp-‐glossary
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
The GWPP glossary is a set of scientific terms and their definitions that are used inside the Global Water Pathogen Project online book. This dataset is crowdsourced by a large number of researchers and engineers in the fields of water sanitation and environmental sciences.
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
GWPP -‐ http://www.waterpathogens.org/glossary
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
GWPP is a useful terminology resource for enrichment of various kinds of content related to water sanitation and environmental sciences.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
AGRIS, AGROVOC
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-‐USE AND**
**INTEGRATION:**
</td>
<td>
The GWPP dataset can be integrated with other relevant datasets and reused for
data enrichment/annotation purposes.
</td> </tr>
<tr>
<td>
**RESOURCE**
**TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
N-Triples
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE**
</td>
<td>
N-Triples, compressed in x-bzip2
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and maintained up to date using this metadata.
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES AND ONTOLOGIES:**
</td>
<td>
RDFS
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
CC Attribution 4.0 International
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
http://purl.oclc.org/NET/rdflicense/cc-‐by4.0
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
GWPP is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
GWPP needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/gwpp-‐glossary
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the GWPP dataset is guaranteed by archiving old versions of the scripts used for its creation and by referencing the source data. Preservation is also guaranteed by archiving the old converted GWPP versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims to provide conversions of newer, richer versions of the dataset and to convert additional ones.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-‐project.eu/datasets/gwpp/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
346 terms
</td> </tr> </table>
### 2.1.3 Datasets converted but not being used by FREME.
During the FREME project, the Statbel dataset was created but has not been used by the project.
#### Statbel corpus
<table>
<tr>
<th>
**META-‐SHARE**
**FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATA SET REFERENCE AND NAME:**
</td>
<td>
Statbel corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
https://api.freme-‐project.eu/datasets/statbel/statbel-‐dataid.ttl
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
https://datahub.io/dataset/statbel-‐corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
AKSW/KILT, INFAI, Leipzig University
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
This corpus contains RDF conversions of datasets from "Statistics Belgium" (also known as Statbel), which aims at collecting, processing and disseminating relevant, reliable and commented data on Belgian society.
http://statbel.fgov.be/en/statistics/figures/
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
Currently, the corpus contains three datasets:
</th> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Belgian house price index dataset: measures inflation in the residential property market in Belgium. The data for conversion was obtained from http://statbel.fgov.be/en/statistics/figures/economy/construction_industry/house_price_index/
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Employment, unemployment, labour market structure dataset: data on employment, unemployment and the labour market from the labour force survey conducted among Belgian households. The data for conversion was obtained from http://statbel.fgov.be/en/statistics/figures/labour_market_living_conditions/employment/
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Unemployment and additional indicators dataset: contains unemployment-related statistics about Belgium and its regions. The data for conversion was obtained from http://statbel.fgov.be/en/modules/publications/statistics/marche_du_travail_et_conditions_de_vie/unemployment_and_additional_indicators_2005-2010.jsp
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
Statbel -‐ http://statbel.fgov.be/en/statistics/figures/
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
Statbel is a useful statistics resource for enrichment of various kinds of content related to Belgium and Belgian society.
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
The UNdata
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-‐USE AND INTEGRATION:**
</td>
<td>
tbd
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
Done in linked data using DataID, a metadata description vocabulary based on DCAT. DMP reports are automatically generated and maintained up to date using this metadata. Vocabularies and ontologies: Data Cube.
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
tba n/a
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
n/a
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
Statbel is an open dataset
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
Statbel needs no additional software to be used.
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
https://datahub.io/dataset/statbel-‐corpus
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
Preservation of the Statbel dataset is guaranteed by archiving old versions of the scripts used for its creation and by referencing the source data. Preservation is also guaranteed by archiving the old converted Statbel versions on the archive server.
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
FREME aims to provide conversions of newer, richer versions of the dataset and to convert additional ones.
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
https://api.freme-‐project.eu/datasets/statbel/
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
A few thousand RDF triples
</td> </tr> </table>
### 2.1.4 Other datasets used in FREME
The following list includes datasets that have been used by some partners in the context of FREME, but which are not used as part of the general FREME framework.
<table>
<tr>
<th>
**DATA SET NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**FREME USE?**
</th>
<th>
**USED IN**
**SERVICE**
</th>
<th>
**LICENSE**
</th>
<th>
**LINK**
</th> </tr>
<tr>
<td>
**WAND FINANCE AND INVESTMENT TAXONOMY - WAND INC**
</td>
<td>
A taxonomy with specific topics and entities related to Finance and Investment
</td>
<td>
Uploaded for testing
</td>
<td>
No
</td>
<td>
Evaluation License
</td>
<td>
www.wandinc.com/wand-finance-and-investment-taxonomy.aspx
</td> </tr>
<tr>
<td>
**CIARD RING**
</td>
<td>
The CIARD RING is a global directory of web-based information services and datasets for agricultural research for development. It is the principal tool created through the CIARD initiative (http://www.ciard.net) to allow information providers to register their services and datasets in various categories and so facilitate the discovery of sources of agriculture-related information across the world.
</td>
<td>
Not part of the framework yet
</td>
<td>
No
</td>
<td>
CC Attribution
</td>
<td>
https://datahub.io/dataset/the-ciard-ring, http://ring.ciard.info/rdf-store
</td> </tr>
<tr>
<td>
**AGRIS**
</td>
<td>
International
Information System for the Agricultural Science and Technology
</td>
<td>
Validate FREME service
</td>
<td>
e-Terminology
</td>
<td>
No clear license available yet; it will be available soon
</td>
<td>
https://datahub.io/dataset/agris
</td> </tr>
<tr>
<td>
**LIBRARY OF**
**CONGRESS**
</td>
<td>
The dataset provides access to authority data at the Library of Congress.
</td>
<td>
Uploaded for testing purposes
</td>
<td>
e-Entity
</td>
<td>
See terms of service (http://id.loc.gov/about/)
</td>
<td>
http://id.loc.gov/descriptions/
</td> </tr>
<tr>
<td>
**GETTY**
</td>
<td>
Provides structured terminology for art and other material culture, archival
materials, visual surrogates, and bibliographic materials.
</td>
<td>
Uploaded for testing purposes
</td>
<td>
e-Entity
</td>
<td>
Open Data
Commons
Attribution
</td>
<td>
http://vocab.getty.edu/
</td> </tr>
<tr>
<td>
**LINKEDGEODATA**
</td>
<td>
LinkedGeoData uses the information collected by the OpenStreetMap project and makes it available as an RDF knowledge base according to the Linked Data principles.
</td>
<td>
Used via e-Link
</td>
<td>
e-Link
</td>
<td>
Open
Database
License
</td>
<td>
http://linkedgeodata.org/About
</td> </tr>
<tr>
<td>
**GEONAMES**
</td>
<td>
The GeoNames geographical database covers all countries and contains over eleven million placenames that are available for download free of charge.
</td>
<td>
Used via e-Link
</td>
<td>
e-Link
</td>
<td>
Creative Commons Attribution
</td>
<td>
http://www.geonames.org/
</td> </tr> </table>
# EXECUTIVE SUMMARY
This deliverable provides an update of the first version of the FREME data management plan (D7.4). It outlines how the research data collected or generated is handled during the FREME action, and describes which standards and methodologies for data collection and generation are used in FREME, and whether and how data is and will be shared.
This document follows the template provided by the European Commission in the
Participant Portal.
# 1 DMP IN H2020
## 1.1 PURPOSE OF THE FREME DATA MANAGEMENT PLAN (DMP)
FREME is an H2020 project participating in the Open Research Data Pilot, which is part of the Open Access to Scientific Publications and Research Data programme in H2020. The goal of the programme is to foster access to data generated in H2020 projects.
Open Access refers to the practice of giving online access to scholarly information from all disciplines free of charge to the end user. In this way data becomes re-usable, and the benefit of public investment in research is improved.
The EC provided a document with guidelines for projects participating in the pilot. The guidelines address aspects like research data quality, sharing and security. According to the guidelines, participating projects need to develop a DMP.
The DMP describes the types of data that will be generated or gathered during the project, the standards that will be used, the ways in which the data will be exploited and shared for verification or reuse, and how the data will be preserved.
This document has been produced following these guidelines. It is an update of the first version of the DMP, delivered in M6 of the project. The DMP will be updated once more and documented in deliverable D7.6 (M24).
## 1.2 BACKGROUND OF THE FREME DMP
The FREME DMP has been written with reference to Article 29.3 of the Model Grant Agreement, called “Open access to research data” (research data management). Project participants must deposit their data in a research data repository and take measures to make the data available to third parties. The third parties should be able to access, mine, exploit, reproduce and disseminate the data. This should also help to validate the results presented in scientific publications. In addition, Article 29.3 suggests that participants will have to provide information, via the repository, about the tools and instruments needed for the validation of project outcomes.
The DMP is important for tracking all data produced during the FREME project. Article 29 states that project beneficiaries do not have to ensure access to parts of the research data if such access would put the project’s goals at risk. In such cases, the DMP must contain the reasons for not providing access. According to the abovementioned DMP guidelines, it is planned that research data management in projects funded under H2020 will receive support through the Research Infrastructures Work Programme 2014-15 (call 3, e-Infrastructures). Full support services are expected to be available only to research projects funded under H2020, with preference given to those participating in the Open Research Data Pilot.
# 2 FREME DMP
## 2.1 OBJECTIVES OF THE FREME PROJECT
One of FREME's general objectives is to build an open, innovative and commercial-grade framework of e-services for multilingual and semantic enrichment of digital content. We understand digital content as any type of content that exists in digital form and in various formats. FREME will improve existing processes of digital content management by taking in vast amounts of structured and unstructured multilingual datasets and reusing them in our enrichment services.
By enrichment we mean annotation of content with additional information. We
focus on semantic and multilingual enrichment. One aim of FREME is to
transform unstructured content into a structured representation.
In terms of data and tooling, FREME will produce the following:
- Six e-Services realised as Web services for semantic and multilingual enrichment of digital content;
- Access to the e-Services via APIs (and GUIs); see the illustrative sketch after this list;
- Access to existing data sets for enrichment;
- Conversion of selected data sets into a standardised, linked data representation to make them suitable for enrichment;
- Facilities for FREME users to convert their own data sets into linked data for usage in enrichment scenarios.
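To illustrate the API access mentioned in the list above, the following sketch posts a short text to an enrichment e-Service over HTTP. It is an illustration in Python using the requests library; the base URL, endpoint path and query parameters are assumptions modelled on the public FREME API documentation and should be verified against the current docs.

```python
# Illustrative sketch of calling a FREME enrichment e-Service over HTTP.
# The endpoint path and parameters are assumptions modelled on the public
# FREME API documentation; verify them against the current docs.
import requests

API_BASE = "https://api.freme-project.eu/current"  # assumed base URL

response = requests.post(
    f"{API_BASE}/e-entity/freme-ner/documents",    # assumed e-Entity path
    params={
        "language": "en",          # language of the submitted text
        "dataset": "dbpedia",      # dataset used for entity linking
        "informat": "text/plain",  # plain-text input
        "outformat": "turtle",     # enriched output as RDF (NIF/Turtle)
    },
    data="Berlin is the capital of Germany.".encode("utf-8"),
)
response.raise_for_status()
print(response.text)  # entity annotations linked to DBpedia resources
```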
The design of the FREME e-Services, APIs and GUIs, and the selection of data sets is driven by the FREME business case partners, working on four business scenarios:
- BC 1: Authoring and publishing multilingually and semantically enriched eBooks
- BC 2: Integrating semantic enrichment into multilingual content in localisation
- BC 3: Enhancing cross-language sharing and access to open data
- BC 4: Empowering personalised content recommendation
One crucial aspect of FREME success will be to provide new business
opportunities for these partners. Hence, the requirements on data management
depend on the context of each business case and must not hinder the business
opportunities.
## 2.2 FREME DMP: A BRIDGE BETWEEN LANGUAGE AND DATA TECHNOLOGIES
FREME is building bridges between two communities: Language technologies and
data technologies.
### 2.2.1 META-SHARE
META-SHARE belongs to the language technology community. In terms of EC funding, the current focus of language technology is ICT 17. The ICT 17 project that is most relevant for data management is CRACKER. CRACKER adopts and promotes methodologies developed within the META-NET initiative. With its “Cracking the Language Barrier” initiative, CRACKER is promoting a collaboration that includes, among others, projects funded through ICT 17 and ICT 15. FREME signed the corresponding Memorandum of Understanding and is participating in this collaboration. As part of this effort, FREME will make available the metadata of the existing datasets it uses, using the META-SHARE template provided by CRACKER.
<table>
<tr>
<th>
**RESOURCE NAME**
</th>
<th>
**COMPLETE TITLE OF THE RESOURCE**
</th> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**CHOOSE ONE OF THE FOLLOWING VALUES:**
**LEXICAL/CONCEPTUAL RESOURCE, CORPUS, LANGUAGE DESCRIPTION**
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**THE PHYSICAL MEDIUM OF THE CONTENT REPRESENTATION, E.G. VIDEO, IMAGE, TEXT,
NUMERICAL DATA, N-‐GRAMS, ETC.**
</td> </tr>
<tr>
<td>
**LANGUAGE (S)**
</td>
<td>
**THE LANGUAGE(S) OF THE RESOURCE CONTENT**
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**THE LICENSING TERMS AND CONDITIONS UNDER WHICH THE TOOL/SERVICE CAN BE
USED**
</td> </tr>
<tr>
<td>
**DISTRIBUTION MEDIUM**
</td>
<td>
**THE MEDIUM I.E. THE CHANNEL USED FOR DELIVERY OR PROVIDING ACCESS TO THE
RESOURCE, E.G. ACCESSIBLE THROUGH INTERFACE, DOWNLOADABLE, CD/DVD, ETC.**
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**FORESEEN USE OF THE RESOURCE FOR WHICH IT HAS BEEN PRODUCED**
</td> </tr>
<tr> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE OF THE RESOURCE WITH REGARD TO A SPECIFIC SIZE UNIT MEASUREMENT IN FORM
OF A NUMBER**
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**A BRIEF DESCRIPTION OF THE MAIN FEATURES OF THE RESOURCE**
</td> </tr> </table>
##### Table 1 META-‐SHARE schema for datasets description
META-SHARE is an open resource exchange infrastructure, i.e. a sustainable network of repositories for language resources (including data sets, tools, technologies and web services), documented with high-quality metadata and aggregated in inventories allowing for uniform search and access to resources. Data and tools can be both open and with restricted access rights, free and for-a-fee. META-SHARE targets existing but also new and emerging language data, tools and systems required for building and evaluating new technologies, products and services. This infrastructure started with the integration of nodes and centres represented by the partners of the META-NET initiative. META-SHARE is gradually being extended to encompass additional nodes and centres, and to provide more functionality.
As described above, FREME will follow META-SHARE practices for data documentation, verification and distribution, as well as for curation and preservation, ensuring the availability of the data and enabling access, exploitation and dissemination.
An example of a dataset description according to the META-‐SHARE schema:
<table>
<tr>
<th>
**RESOURCE NAME**
</th>
<th>
**DBPEDIA 2014 DATASET**
</th> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**LEXICAL/CONCEPTUAL RESOURCE**
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**LINKED DATA**
</td> </tr>
<tr>
<td>
**LANGUAGE (S)**
</td>
<td>
**126 LANGUAGES**
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**CC-‐BY-‐SA 3.0**
</td> </tr>
<tr>
<td>
**DISTRIBUTION MEDIUM**
</td>
<td>
**HTTP://DOWNLOADS.DBPEDIA.ORG/2014/DATAID.TTL#DATASET**
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**DBPEDIA IS AN OPEN DATASET**
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**1.200.000.000 TRIPLES**
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DBPEDIA IS A CROWD-‐SOURCED COMMUNITY EFFORT**
</td> </tr> </table>
**Table 2 DBpedia as an example of META-‐SHARE schema for datasets
description**
### 2.2.2 DATAID
DataID is a machine-readable metadata format put forward by the community of linguistic linked data, representing the data technology community. DataID is used in the DBpedia community and in the ALIGNED project. The FREME consortium partner InfAI is also a partner in ALIGNED.
The effort around DataID comes with a tool called the DMP generator. The generator takes a DataID file as input and produces an HTML report that can be used as-is. Currently the generator is in an early prototype stage.
The DataID model establishes a system for describing dataset metadata. This system improves on the forms of datahub.io, a data management platform by the Open Knowledge Foundation, by adding richer semantics in several properties relevant to LOD datasets, while remaining compliant with datahub.io.
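To make the DMP-generator idea concrete, the simplified sketch below derives one HTML report row per described dataset from a DataID file with a SPARQL query. It is not the actual generator: it assumes Python with rdflib, a local DataID file named orcid-dataid.ttl, and relies on DataID's DCAT grounding for the queried properties.

```python
# Simplified illustration of the DMP-generator idea: query a DataID
# file with SPARQL and emit one HTML table row per described dataset.
# Not the actual FREME tool; property choices assume DataID's DCAT basis.
from rdflib import Graph

g = Graph()
g.parse("orcid-dataid.ttl", format="turtle")  # assumed local DataID file

QUERY = """
PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dct:  <http://purl.org/dc/terms/>
SELECT ?title ?license WHERE {
    ?dataset a dcat:Dataset ;
             dct:title ?title .
    OPTIONAL { ?dataset dct:license ?license }
}
"""

rows = [
    f"<tr><td>{title}</td><td>{licence or 'n/a'}</td></tr>"
    for title, licence in g.query(QUERY)
]
print("<table>{}</table>".format("".join(rows)))
```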
<table>
<tr>
<th>
**DATA SET REFERENCE AND NAME**
</th>
<th>
**NAME**
**METADATA URI**
**HOMEPAGE**
**PUBLISHER**
**MAINTAINER**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td>
<td>
**DESCRIPTION**
**PROVENANCE**
**USEFULNESS** **SIMILAR DATA**
**RE-‐USE AND INTEGRATION**
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td>
<td>
**METADATA DESCRIPTION**
**VOCABULARIES AND ONTOLOGIES**
</td> </tr>
<tr>
<td>
**DATA SHARING**
</td>
<td>
**LICENSE**
**ODRL LICENSE DESCRIPTION**
**OPENNESS**
**SOFTWARE NECESSARY**
**REPOSITORY**
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION**
</td>
<td>
**PRESERVATION**
**GROWTH**
**ARCHIVE**
**SIZE**
</td> </tr> </table>
##### Table 3 DATAID schema for datasets description
In the last two years, the data community has gathered, in the LIDER project (which ended in December 2015), a group of stakeholders around LLD. LLD means the representation of language resources using linked data principles. One outcome of LIDER is a set of guidelines on working with linguistic linked data. FREME will adopt the general guideline on how to include data in the linguistic linked data cloud and the specific guidelines on using the DataID metadata format.
DataID provides a bridge to the DBpedia community and the DBpedia association.
DataID is also used in other H2020 projects, especially the ALIGNED project 8
. The tools for creating DataID metadata records will also be used in FREME.
An example of a dataset description according to the DATAID schema:
<table>
<tr>
<th>
**DATA SET REFERENCE AND NAME**
</th>
<th>
**NAME: DBPEDIA 2014 DATASET**
**METADATA URI:**
**HTTP://DOWNLOADS.DBPEDIA.ORG/2014/DATAID.TTL#DATASET**
**HOMEPAGE: HTTP://DBPEDIA.ORG/**
**PUBLISHER: DBPEDIA ASSOCIATION**
**MAINTAINER: DBPEDIA ASSOCIATION**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td>
<td>
**DESCRIPTION: DBPEDIA IS A CROWD-‐SOURCED COMMUNITY EFFORT TO EXTRACT
STRUCTURED INFORMATION FROM WIKIPEDIA AND MAKE THIS INFORMATION AVAILABLE ON
THE WEB. DBPEDIA ALLOWS YOU TO ASK SOPHISTICATED QUERIES AGAINST WIKIPEDIA,
AND TO LINK THE DIFFERENT DATA SETS ON THE WEB TO WIKIPEDIA DATA. WE HOPE THAT
THIS WORK WILL MAKE IT EASIER FOR THE HUGE AMOUNT OF INFORMATION IN WIKIPEDIA
TO BE USED IN SOME NEW INTERESTING WAYS.**
**PROVENANCE: WIKIPEDIA (WIKIMEDIA FOUNDATION)**
**USEFULNESS: DBPEDIA IS A USEFUL RESOURCE FOR INTERLINKING GENERAL DATASETS
WITH ENCYCLOPEDIC KNOWLEDGE. USERS PROFITING FROM DBPEDIA ARE OPEN DATA
DEVELOPERS, SMES AND RESEARCHERS IN DATA SCIENCE AND NLP**
**SIMILAR DATA: FREEBASE OR YAGO PROVIDE SIMILAR DATASETS**
**RE-‐USE AND INTEGRATION: HTTP://DATAHUB.IO/DATASET/DBPEDIA**
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td>
<td>
**METADATA DESCRIPTION IS DONE IN LINKED DATA USING DATAID, A METADATA
DESCRIPTION VOCABULARY BASED ON DCAT. DMP REPORTS ARE**
**AUTOMATICALLY GENERATED AND MAINTAINED UP TO DATE USING THIS**
**METADATA.**
**VOCABULARIES AND ONTOLOGIES:**
**HTTP://DOWNLOADS.DBPEDIA.ORG/2014/DBPEDIA_2014.OWL**
</td> </tr>
<tr> </tr>
<tr>
<td>
**DATA SHARING**
</td>
<td>
**LICENSE: CC-‐BY-‐SA 3.0**
**ODRL LICENSE DESCRIPTION: HTTP://PURL.ORG/NET/RDFLICENSE/CC-‐BY-‐**
**SA3.0DE**
**OPENNESS: DBPEDIA IS AN OPEN DATASET**
**SOFTWARE NECESSARY: DBPEDIA NEEDS NO ADDITIONAL SOFTWARE TO BE**
**USED. DBPEDIA PROVIDES COMPLEMENTARY SOFTWARE FOR EXTRACTION,**
**DATA MANAGEMENT AND ENRICHMENT UNDER**
**HTTP://DATAHUB.IO/DATASET/DBPEDIA**
**REPOSITORY: HTTP://DATAHUB.IO/DATASET/DBPEDIA**
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION**
</td>
<td>
**PRESERVATION: PRESERVATION OF THE DBPEDIA IS GUARANTEED BY ARCHIVAL OF OLD
VERSIONS ON THE ARCHIVE SERVER, THE INTENT OF THE DBPEDIA ASSOCIATION TO KEEP
THE PROJECT RUNNING, AS WELL AS THE DBPEDIA LANGUAGE CHAPTERS AND THE DBPEDIA
COMMUNITY**
**GROWTH: DBPEDIA IS AN ONGOING OPEN-‐SOURCE PROJECT. GOAL OF THE PROJECT IS
THE EXTRACTION OF THE WIKIPEDIA, AS COMPLETE AS POSSIBLE. CURRENTLY 126
LANGUAGES ARE BEING EXTRACTED. IN THE FUTURE**
**DBPEDIA WILL TRY TO INCREASE ITS IMPORTANCE AS THE CENTER OF THE LOD CLOUD
BY ADDING FURTHER EXTERNAL DATASETS**
**ARCHIVE: HTTP://DOWNLOADS.DBPEDIA.ORG**
**SIZE: 1.200.000.000 TRIPLES**
</td> </tr> </table>
**Table 4 DBpedia as an example of DATAID schema for datasets description**
# 3 FREME DATA DESCRIPTION
## 3.1 FREME DMP: DATA SETS USED AND CONVERTED DURING FREME
The approach of the FREME data management plan is to provide its metadata information by combining the schemas provided by the two communities described in Section 2.2 of this document.
The current FREME version 0.5 works with different kinds of datasets, all of which are listed and described below.
### 3.1.1 LIST OF DATASETS CURRENTLY USED IN FREME
**Datasets currently used in FREME.** This list details the datasets currently used in the FREME project. Almost all of these datasets are linked to the e-Entity service; some of them are also linked to e-Link and e-Terminology.
These datasets were already created and open sourced, so FREME has no responsibility for their creation or curation. For this reason they are only listed in this DMP and no more detailed information is provided. To find more information about them, a link to datahub.io has been added to the table.
<table>
<tr>
<th>
**DATA SET NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**FREME USED**
</th>
<th>
**USED IN SERVICE**
</th>
<th>
**LICENSE**
</th>
<th>
**LINK TO DATAHUB**
</th> </tr>
<tr>
<td>
**DBPEDIA**
</td>
<td>
**DBPEDIA IS A CROWD-‐SOURCED COMMUNITY**
**EFFORT TO EXTRACT STRUCTURED**
**INFORMATION FROM WIKIPEDIA AND MAKE**
**THIS INFORMATION AVAILABLE ON THE WEB.**
**DBPEDIA ALLOWS YOU TO ASK SOPHISTICATED**
**QUERIES AGAINST WIKIPEDIA, AND TO LINK**
**THE DIFFERENT DATA SETS ON THE WEB TO**
**WIKIPEDIA DATA. WE HOPE THAT THIS WORK**
**WILL MAKE IT EASIER FOR THE HUGE AMOUNT**
**OF INFORMATION IN WIKIPEDIA TO BE USED**
**IN SOME NEW INTERESTING WAYS.**
**FURTHERMORE IT MIGHT INSPIRE NEW**
**MECHANISM FOR NAVIGATING, LINKING AND**
**IMPROVING THE ENCYCLOPEDIA ITSELF.**
</td>
<td>
**USED**
</td>
<td>
**E-‐ENTITY AND ALSO AVAILABLE IN E-‐**
**LINK**
</td>
<td>
**CC-‐BY**
</td>
<td>
**HTTPS://DATAHUB.IO/DA**
**TASET/DBPEDIA**
</td> </tr> </table>
<table>
<tr>
<th>
**ONLD**
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
**THE NCSU ORGANIZATION NAME LINKED**
**DATA (ONLD) IS BASED ON THE NCSU**
**ORGANIZATION NAME AUTHORITY, A TOOL**
**MAINTAINED BY THE ACQUISITIONS & **
**DISCOVERY DEPARTMENT TO MANAGE THE**
**VARIANT FORMS OF NAME FOR JOURNAL AND**
**E-‐RESOURCE PUBLISHERS, PROVIDERS, AND**
**VENDORS IN E-‐MATRIX, OUR LOCALLY-DEVELOPED ELECTRONIC RESOURCE MANAGEMENT
SYSTEM (ERMS).**
</th>
<th>
**USED**
</th>
<th>
**E-‐ENTITY**
</th>
<th>
**CREATIVE**
**COMMON**
**S CC0**
</th>
<th>
**HTTPS://DATAHUB.IO/DA**
**TASET/NCSU-‐**
**ORGANIZATION-‐NAME-‐**
**LINKED-‐DATA**
</th> </tr>
<tr>
<td>
**VIAF**
</td>
<td>
**VIAF (VIRTUAL INTERNATIONAL AUTHORITY**
**FILE) IS AN OCLC DATASET THAT VIRTUALLY**
**COMBINES MULTIPLE LAM (LIBRARY**
**ARCHIVES MUSEUM) NAME AUTHORITY FILES**
**INTO A SINGLE NAME AUTHORITY SERVICE.**
**PUT SIMPLY IT IS A LARGE DATABASE OF**
**PEOPLE AND ORGANIZATIONS THAT OCCUR IN LIBRARY CATALOGS.**
</td>
<td>
**USED**
</td>
<td>
**E-‐ENTITY**
</td>
<td>
**OPEN**
**DATA**
**COMMON**
**S**
**ATTRIBUTI**
**ON**
</td>
<td>
**HTTPS://DATAHUB.IO/DA TASET/VIAF**
</td> </tr>
<tr>
<td>
**GEOPOLITICA L ONTOLOGY**
</td>
<td>
**THE FAO GEOPOLITICAL ONTOLOGY AND RELATED SERVICES HAVE BEEN DEVELOPED TO
FACILITATE DATA EXCHANGE AND SHARING IN A STANDARDIZED MANNER AMONG SYSTEMS
MANAGING INFORMATION ABOUT COUNTRIES AND/OR REGIONS.**
</td>
<td>
**USED**
</td>
<td>
**E-‐ENTITY**
</td>
<td>
**TBD**
</td>
<td>
**HTTPS://DATAHUB.IO/DA**
**TASET/FAO-‐**
**GEOPOLITICAL-‐ONTOLOGY**
</td> </tr>
<tr>
<td>
**AGROVOC**
</td>
<td>
**AGROVOC IS A CONTROLLED**
**VOCABULARY COVERING ALL AREAS OF**
**INTEREST OF THE FOOD AND AGRICULTURE**
**ORGANIZATION (FAO) OF THE UNITED**
**NATIONS, INCLUDING FOOD, NUTRITION,**
**AGRICULTURE, FISHERIES, FORESTRY,**
**ENVIRONMENT ETC. IT IS PUBLISHED BY**
**FAO AND EDITED BY A COMMUNITY OF EXPERTS.**
</td>
<td>
**USED**
</td>
<td>
**E-‐TERMINOLOGY**
</td>
<td>
**CC4.0 BY-‐SA**
</td>
<td>
**HTTPS://DATAHUB.IO/DA**
**TASET/AGROVOC-‐SKOS**
</td> </tr> </table>
**Table 5 List of Datasets currently used in FREME**
### 3.1.2 LIST OF DATASETS CONVERTED IN FREME
**Datasets converted by FREME.** During the FREME action some datasets have been converted to be used in the project. Since these datasets were created by FREME, they are described in detail using a combination of the META-SHARE and DataID schemas of dataset description. The first two columns of each table detail the fields according to each schema. The result is a combination of schemas that makes visible the differences between the two systems of description.
**Detailed description of Datasets converted in FREME:**
###### 1\. ORCID
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATASET REFERENCE AND**
**NAME:**
</td>
<td>
**ORCID 2014 DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
**HTTP://RV2622.1BLU.DE/DATASETS/ORCID/ORCID-‐DATAID.TTL**
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
**HTTP://DATAHUB.IO/DATASET/ORCID-‐DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
**AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
**AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
**ORCID (OPEN RESEARCHER AND CONTRIBUTOR ID) IS A NONPROPRIETARY**
**ALPHANUMERIC CODE TO UNIQUELY IDENTIFY SCIENTIFIC AND OTHER ACADEMIC
AUTHORS. THIS DATASET CONTAINS RDF CONVERSION OF THE ORCID DATASET. THE
CURRENT CONVERSION IS BASED ON THE 2014 ORCID DATA DUMP, WHICH CONTAINS AROUND
1.3 MILLION JSON FILES AMOUNTING TO 41GB OF DATA.**
**THE CONVERTED RDF VERSION IS 13GB LARGE (UNCOMPRESSED) AND IT IS MODELLED
WITH WELL KNOWN VOCABULARIES SUCH AS DUBLIN CORE, FOAF, SCHEMA.ORG, ETC., AND
IT IS INTERLINKED WITH GEONAMES.**
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
**OPEN RESEARCHER AND CONTRIBUTOR ID (ORCID) -‐ HTTP://ORCID.ORG/**
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
**ORCID IS A USEFUL RESOURCE FOR INTERLINKING GENERAL DATASETS WITH RESEARCH
AND SCIENTIFIC INFORMATION. USERS PROFITING FROM ORCID ARE OPEN DATA
DEVELOPERS, SMES AND RESEARCHERS IN DATA SCIENCE AND NLP, ESPECIALLY ENTITIES
FROM RESEARCH DOMAIN.**
</td> </tr>
<tr>
<td>
**SIMILAR DATA:**
</td>
<td>
**THE CORDIS DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-‐USE AND INTEGRATION:**
</td>
<td>
**THE ORCID DATASET CAN BE INTEGRATED INTO OTHER DATASETS AND RE-USED FOR DATA ENRICHMENT AND MASHUP PURPOSES.**
</td> </tr> </table>
<table>
<tr>
<th>
**RESOURCE TYPE**
</th>
<th>
**FORMAT:**
</th>
<th>
**N-TRIPLES**
</th> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
**N-‐TRIPLES -‐ COMPRESSED IN X-‐BZIP2**
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA**
**DESCRIPTION:**
</td>
<td>
**DONE IN LINKED DATA USING DATAID, A METADATA DESCRIPTION VOCABULARY BASED ON
DCAT. DMP REPORTS ARE AUTOMATICALLY GENERATED AND MAINTAINED UP TO DATE USING
THIS METADATA.**
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES**
**AND ONTOLOGIES:**
</td>
<td>
**DUBLIN CORE, FOAF, SCHEMA.ORG**
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
**CC0 1.0 PUBLIC DOMAIN DEDICATION**
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
**HTTP://CREATIVECOMMONS.ORG/PUBLICDOMAIN/ZERO/1.0/**
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
**ORCID IS AN OPEN DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
**SOFTWARE NECESSARY:** **ORCID NEEDS NO ADDITIONAL SOFTWARE TO BE USED.**
<table>
<tr>
<th>
</th>
<th>
**REPOSITORY:**
</th>
<th>
**HTTPS://DATAHUB.IO/DATASET/ORCID-‐DATASET**
</th> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
**PRESERVATION OF THE ORCID IS GUARANTEED BY ARCHIVAL OF OLD VERSIONS ON THE
SCRIPTS USED FOR ITS CREATION AND REFERENCING TO THE SOURCE DATA. ALSO,
PRESERVATION IS GUARANTEED BY ARCHIVAL OF THE OLD ORCID CONVERTED VERSIONS ON
THE ARCHIVE SERVER.**
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
**FREME AIMS AT PROVIDING CONVERSION OF THE NEWER, RICHER VERSIONS OF**
**ORCID.**
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
**HTTP://RV2622.1BLU.DE/DATASETS/ORCID/**
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
**SIZE** **SIZE:** **13GB LARGE**
**Table 6 ORCID dataset description**
###### 2\. CORDIS FP7
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATASET REFERENCE AND**
**NAME:**
</td>
<td>
**NAME: CORDIS FP7 DATASET**
</td> </tr>
<table>
<tr>
<th>
</th>
<th>
**METADATA URI:**
</th>
<th>
**HTTP://RV2622.1BLU.DE/DATASETS/CORDIS/CORDIS-‐DATAID.TTL**
</th> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
**HTTPS://DATAHUB.IO/DATASET/CORDIS-‐CORPUS**
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
**AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
**AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
**CORDIS (COMMUNITY RESEARCH AND DEVELOPMENT INFORMATION SERVICE), IS THE
EUROPEAN COMMISSION’S CORE PUBLIC REPOSITORY PROVIDING DISSEMINATION
INFORMATION FOR ALL EU-‐FUNDED RESEARCH PROJECTS. THIS DATASET CONTAINS RDF
OF THE CORDIS FP7 DATASET WHICH PROVIDES DESCRIPTIONS FOR PROJECTS**
**FUNDED BY THE EUROPEAN UNION UNDER THE SEVENTH FRAMEWORK PROGRAMME FOR
RESEARCH AND TECHNOLOGICAL DEVELOPMENT (FP7) FROM 2007 TO 2013\. THE CONVERTED
DATASET CONTAINS OVER 1 MILLION OF RDF TRIPLES WITH A TOTAL SIZE OF AROUND
200MB IN THE N-‐TRIPLES RDF SERIALIZATION FORMAT.**
**THE DATASET IS MODELLED WITH WELL KNOWN VOCABULARIES SUCH AS DUBLIN CORE,
FOAF, DBPEDIA ONTOLOGY, DOAP, ETC., AND IT IS INTERLINKED WITH**
**DBPEDIA.**
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
**EUROPEAN COMMISSION**
**HTTPS://OPEN-‐DATA.EUROPA.EU/EN/DATA/DATASET/CORDISFP7PROJECTS**
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
**CORDIS FP7 IS A USEFUL RESOURCE FOR INTERLINKING GENERAL DATASETS WITH**
**RESEARCH AND SCIENTIFIC INFORMATION. USERS PROFITING FROM ORCID ARE OPEN
DATA DEVELOPERS, SMES AND RESEARCHERS IN DATA SCIENCE AND NLP, ESPECIALLY
ENTITIES FROM RESEARCH DOMAIN.**
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
**THE ORCID DATASET.**
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-‐USE AND INTEGRATION:**
</td>
<td>
**THE CORDIS DATASET CAN BE INTEGRATED WITH OTHER RESEARCH DATASETS AND REUSED
FOR DATA ENRICHMENT PURPOSES.**
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
**N-TRIPLES**
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
**N-‐TRIPLES -‐ COMPRESSED IN X-‐BZIP2**
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
**DONE IN LINKED DATA USING DATAID, A METADATA DESCRIPTION VOCABULARY BASED ON
DCAT. DMP REPORTS ARE AUTOMATICALLY GENERATED AND MAINTAINED UP TO DATE USING
THIS METADATA.**
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES AND ONTOLOGIES:**
</td>
<td>
**DUBLIN CORE, FOAF, DBPEDIA ONTOLOGY, DOAP**
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
**HTTP://EC.EUROPA.EU/GENINFO/LEGAL_NOTICES_EN.HTM**
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
**N/A**
</td> </tr>
<tr>
<td>
**USAGE:**
</td>
<td>
**OPENNESS:**
</td>
<td>
**CORDIS IS AN OPEN DATASET**
</td> </tr>
<tr>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
**CORDIS NEEDS NO ADDITIONAL SOFTWARE TO BE USED.**
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
**HTTPS://DATAHUB.IO/DATASET/CORDIS-‐CORPUS**
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
**PRESERVATION OF THE CORDIS IS GUARANTEED BY ARCHIVAL OF OLD VERSIONS OF THE**
THE**
**SCRIPTS USED FOR ITS CREATION AND REFERENCING TO THE SOURCE DATA. ALSO,
PRESERVATION IS GUARANTEED BY ARCHIVAL OF THE OLD CORDIS CONVERTED VERSIONS ON
THE ARCHIVE SERVER.**
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
**FREME AIMS AT PROVIDING CONVERSION OF THE NEWER, RICHER VERSIONS OF**
**CORDIS.**
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
**HTTP://RV2622.1BLU.DE/DATASETS/CORDIS/**
</td> </tr> </table>
**SIZE** **SIZE:** **OVER 1 MILLION RDF TRIPLES**
**Table 7 CORDIS Dataset description**
###### 3\. DBpedia Abstracts
<table>
<tr>
<th>
**META-‐SHARE**
**FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATA SET REFERENCE AND NAME:**
</td>
<td>
**DBPEDIA ABSTRACTS DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
**HTTP://RV2622.1BLU.DE/DATASETS/DBPEDIA-‐ABSTRACTS/DBPEDIA-‐ABSTRACTS-‐**
**DATAID.TTL**
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
**HTTPS://DATAHUB.IO/DATASET/DBPEDIA-‐ABSTRACT-‐CORPUS**
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
**AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
**AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr> </table>
<table>
<tr>
<th>
**DESCRIPTION**
</th>
<th>
**DESCRIPTION:**
</th>
<th>
**THIS CORPUS CONTAINS A CONVERSION OF WIKIPEDIA ABSTRACTS IN SIX LANGUAGES
(DUTCH, ENGLISH, FRENCH, GERMAN, ITALIAN AND SPANISH) INTO THE NLP INTERCHANGE
FORMAT (NIF). THE CORPUS CONTAINS THE ABSTRACT TEXTS, AS WELL AS THE POSITION,
SURFACE FORM AND LINKED ARTICLE OF ALL LINKS IN THE TEXT. AS**
**SUCH, IT CONTAINS ENTITY MENTIONS MANUALLY DISAMBIGUATED TO
WIKIPEDIA/DBPEDIA RESOURCES BY NATIVE SPEAKERS, WHICH PREDESTINES IT FOR NER
TRAINING AND EVALUATION.**
**FURTHERMORE, THE ABSTRACTS REPRESENT A SPECIAL FORM OF TEXT THAT LENDS
ITSELF TO BE USED FOR MORE SOPHISTICATED TASKS, LIKE OPEN RELATION EXTRACTION.
THEIR ENCYCLOPEDIC STYLE, FOLLOWING WIKIPEDIA GUIDELINES ON OPENING PARAGRAPHS
ADDS FURTHER INTERESTING PROPERTIES. THE FIRST SENTENCE PUTS THE ARTICLE IN
BROADER CONTEXT. MOST ANAPHORA WILL REFER TO THE ORIGINAL TOPIC OF THE TEXT,
MAKING THEM EASIER TO RESOLVE. FINALLY, SHOULD THE SAME STRING OCCUR IN
DIFFERENT MEANINGS, WIKIPEDIA GUIDELINES SUGGEST THAT THE NEW MEANING SHOULD
AGAIN BE LINKED FOR DISAMBIGUATION. IN SHORT: THE TYPE OF**
**TEXT IS HIGHLY INTERESTING.**
</th> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
**WIKIPEDIA -‐ HTTPS://WWW.WIKIPEDIA.ORG/**
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
**DBPEDIA ABSTRACTS IS A USEFUL MULTILINGUAL RESOURCE FOR LEARNING VARIOUS NLP
TASKS. E.G. LEARNING NAMED ENTITY RECOGNITION MODELS, RELATION**
**EXTRACTION, AND SIMILAR.**
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
**THE WIKINER DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
**THE DBPEDIA ABSTRACTS DATASET CAN BE INTEGRATED WITH OTHER SIMILAR TRAINING
CORPORA AND REUSED FOR TRAINING VARIOUS NLP TASKS.**
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT**
</td>
<td>
**TURTLE**
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE**
</td>
<td>
**TURTLE - COMPRESSED IN X-GZIP**
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
**DONE IN LINKED DATA USING DATAID, A METADATA DESCRIPTION VOCABULARY BASED ON
DCAT. DMP REPORTS ARE AUTOMATICALLY GENERATED AND MAINTAINED UP TO DATE USING
THIS METADATA.**
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES AND ONTOLOGIES:**
</td>
<td>
**NIF**
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
**CC-BY**
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
**HTTP://PURL.ORG/NET/RDFLICENSE/CC-BY-SA3.0**
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
**DBPEDIA ABSTRACTS IS AN OPEN DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
**DBPEDIA ABSTRACT NEEDS NO ADDITIONAL SOFTWARE TO BE USED.**
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
**HTTPS://DATAHUB.IO/DATASET/DBPEDIA-ABSTRACT-CORPUS**
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
**PRESERVATION OF THE DBPEDIA ABSTRACTS IS GUARANTEED BY ARCHIVAL OF OLD VERSIONS AND BY REFERENCING THE SOURCE DATA. ALSO, PRESERVATION IS GUARANTEED BY ARCHIVAL OF THE OLD DBPEDIA ABSTRACTS CONVERTED VERSIONS ON THE ARCHIVE SERVER.**
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
**FREME AIMS AT PROVIDING CONVERSION OF THE NEWER, RICHER VERSIONS OF DATASET
AND ITS EXTENSION TO OTHER LANGUAGES.**
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
**HTTP://WIKI-LINK.NLP2RDF.ORG/ABSTRACTS/**
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
**743 MILLION RDF TRIPLES**
</td> </tr> </table>
**Table 8 DBpedia Abstracts dataset description**
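Since the corpus is distributed as Turtle compressed with x-gzip, a consumer can decompress it on the fly before handing it to an RDF parser. Below is a minimal, hedged sketch in Python; the file name is a placeholder, and because rdflib loads the whole graph into memory, this approach is only practical for a slice of the corpus rather than all 743 million triples.

```python
# Sketch: load a gzip-compressed Turtle file with rdflib.
# "abstracts_en.ttl.gz" is a placeholder file name, not the real
# distribution name. rdflib keeps the graph in memory, so this is
# only practical for a slice of the corpus.
import gzip
from rdflib import Graph

g = Graph()
with gzip.open("abstracts_en.ttl.gz", "rt", encoding="utf-8") as handle:
    g.parse(handle, format="turtle")

print(f"Parsed {len(g)} triples")
```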
###### 4\. Global airports
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATA SET REFERENCE AND NAME:**
</td>
<td>
**GLOBAL AIRPORTS IN RDF**
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
**HTTP://RV2622.1BLU.DE/DATASETS/GLOBAL-AIRPORTS/GLOBAL-AIRPORTS-DATAID.TTL**
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
**HTTPS://DATAHUB.IO/DATASET/GLOBAL-AIRPORTS-IN-RDF**
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
**DFKI AND AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
**DFKI AND AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr>
<tr>
<td>
**DESCRIPTION:**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
**THIS CORPUS CONTAINS AN RDF CONVERSION OF THE GLOBAL AIRPORTS DATASET, WHICH WAS RETRIEVED FROM OPENFLIGHTS.ORG. THE DATASET CONTAINS INFORMATION ABOUT AIRPORT NAMES, LOCATIONS, CODES, AND OTHER RELATED INFORMATION.**
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
**OPENFLIGHTS.ORG**
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
**GLOBAL AIRPORTS IS A USEFUL RESOURCE FOR INTERLINKING AND ENRICHMENT OF CONTENT THAT CONTAINS INFORMATION ABOUT AIRPORTS AND RELATED TOPICS.**
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
**DBPEDIA**
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
**THE GLOBAL AIRPORTS DATASET CAN BE REUSED FOR DATA ENRICHMENT PURPOSES AND
INTEGRATED WITH OTHER RELEVANT DATASETS SUCH AS DBPEDIA.**
</td> </tr>
<tr>
<th>
**RESOURCE TYPE**
</th>
<th>
**FORMAT:**
</th>
<th>
**TURTLE**
</th> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
**TURTLE, TEXT/TURTLE**
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
**DONE IN LINKED DATA USING DATAID, A METADATA DESCRIPTION VOCABULARY BASED ON
DCAT. DMP REPORTS ARE AUTOMATICALLY GENERATED AND**
**MAINTAINED UP TO DATE USING THIS METADATA.**
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES AND ONTOLOGIES:**
</td>
<td>
**DBPEDIA ONTOLOGY, SKOS, SCHEMA.ORG**
</td> </tr>
<tr>
<td>
**LICENSE:**
</td>
<td>
**LICENSE:**
</td>
<td>
**OPEN DATABASE LICENSE - FOR MORE SEE: HTTP://OPENFLIGHTS.ORG/DATA.HTML#LICENSE**
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
**HTTP://PURL.OCLC.ORG/NET/RDFLICENSE/ODBL1.0**
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
**GLOBAL AIRPORTS IS AN OPEN DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
**GLOBAL AIRPORTS NEEDS NO ADDITIONAL SOFTWARE TO BE USED.**
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
**HTTPS://DATAHUB.IO/DATASET/GLOBAL-AIRPORTS-IN-RDF**
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
**PRESERVATION OF THE GLOBAL AIRPORTS IS GUARANTEED BY ARCHIVAL OF OLD VERSIONS AND BY REFERENCING THE SOURCE DATA. ALSO, PRESERVATION IS GUARANTEED BY ARCHIVAL OF THE OLD GLOBAL AIRPORTS CONVERTED VERSIONS ON THE ARCHIVE SERVER.**
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
**FREME AIMS AT PROVIDING CONVERSION OF THE NEWER, RICHER VERSIONS OF**
**THE DATASET.**
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
**HTTP://RV1460.1BLU.DE/DATASETS/GLOBAL-AIRPORTS/**
</td> </tr>
**SIZE** **SIZE:** **74K RDF TRIPLES**
**Table 9 Global airports dataset description**
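As a hedged illustration of the enrichment use case, the following Python sketch runs a SPARQL query over the Turtle file with rdflib. The file name and the class and property terms (dbo:Airport, rdfs:label) are assumptions based on the vocabularies listed in the table, not verified against the actual data.

```python
# Sketch: query the Global airports graph for airport labels.
# File name and the dbo:Airport / rdfs:label terms are assumptions
# based on the vocabularies listed above (DBpedia ontology).
from rdflib import Graph

g = Graph()
g.parse("global-airports.ttl", format="turtle")

query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?airport ?label
WHERE {
  ?airport a dbo:Airport ;
           rdfs:label ?label .
}
LIMIT 10
"""

for row in g.query(query):
    print(row.airport, row.label)
```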
###### 5\. Grid
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATA SET REFERENCE AND NAME:**
</td>
<td>
**GRID DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
**HTTP://RV2622.1BLU.DE/DATASETS/GRID/GRID-DATAID.TTL**
</td> </tr>
<tr>
<th>
</th>
<th>
**HOMEPAGE:**
</th>
<th>
**HTTPS://DATAHUB.IO/DATASET/GRID_DATASET**
</th> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
**AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
**AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr>
<tr>
<td>
**DESCRIPTION**
</td>
<td>
**DESCRIPTION:**
</td>
<td>
**GRID IS A FREE, OPENLY ACCESSIBLE DATABASE OF RESEARCH INSTITUTION IDENTIFIERS WHICH ENABLES USERS TO MAKE SENSE OF THEIR DATA. IT DOES SO BY MINIMISING THE WORK REQUIRED TO LINK DATASETS TOGETHER USING A UNIQUE AND PERSISTENT IDENTIFIER. THIS IS THE RDF VERSION OF THE DATASET.**
</td> </tr>
<tr>
<td>
</td>
<td>
**PROVENANCE:**
</td>
<td>
**GRID - HTTPS://WWW.GRID.AC/DOWNLOADS**
</td> </tr>
<tr>
<td>
</td>
<td>
**USEFULNESS:**
</td>
<td>
**GRID IS A USEFUL RESOURCE FOR ENRICHMENT OF VARIOUS KINDS OF CONTENT RELATED TO RESEARCH INSTITUTIONS.**
</td> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
**THE CORDIS, ORCID, PERMID**
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
**THE GRID DATASET CAN BE INTEGRATED WITH OTHER RELEVANT DATASETS AND REUSED
FOR DATA ENRICHMENT PURPOSES.**
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
**N-TRIPLES**
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE**
</td>
<td>
**N-TRIPLES - COMPRESSED IN X-BZIP2**
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
**DONE IN LINKED DATA USING DATAID, A METADATA DESCRIPTION VOCABULARY BASED ON
DCAT. DMP REPORTS ARE AUTOMATICALLY GENERATED AND**
**MAINTAINED UP TO DATE USING THIS METADATA.**
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES AND ONTOLOGIES:**
</td>
<td>
**DUBLIN CORE, DBPEDIA ONTOLOGY, FOAF, VCARD, SKOS**
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
**CC BY CREATIVE COMMONS**
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
**HTTP://PURL.OCLC.ORG/NET/RDFLICENSE/CC-BY3.0**
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
**GRID IS AN OPEN DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
**GRID NEEDS NO ADDITIONAL SOFTWARE TO BE USED.**
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
**HTTPS://DATAHUB.IO/DATASET/GRID_DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
**PRESERVATION OF THE GRID IS GUARANTEED BY ARCHIVAL OF OLD VERSIONS OF THE SCRIPTS USED FOR ITS CREATION AND BY REFERENCING THE SOURCE DATA. ALSO, PRESERVATION IS GUARANTEED BY ARCHIVAL OF THE OLD GRID CONVERTED VERSIONS ON THE ARCHIVE SERVER.**
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
**FREME AIMS AT PROVIDING CONVERSION OF THE NEWER, RICHER VERSIONS OF THE DATASETS AND AT CONVERTING ADDITIONAL ONES.**
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
**HTTP://RV1460.1BLU.DE/DATASETS/GRID/**
</td> </tr>
<tr>
<td>
**SIZE:**
</td>
<td>
**SIZE:**
</td>
<td>
**581K RDF TRIPLES**
</td> </tr> </table>
**Table 10 Grid dataset description**
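Because N-Triples is a line-based format, the bzip2-compressed dump can be processed as a stream without an RDF library. A minimal sketch (the file name is a placeholder) that counts the triples:

```python
# Sketch: count triples in a bzip2-compressed N-Triples dump.
# N-Triples places one triple per line, so plain line counting works;
# "grid.nt.bz2" is a placeholder file name.
import bz2

count = 0
with bz2.open("grid.nt.bz2", "rt", encoding="utf-8") as handle:
    for line in handle:
        line = line.strip()
        if line and not line.startswith("#"):  # skip blanks and comments
            count += 1

print(f"{count} triples")
```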
### 3.1.3 LIST OF DATASETS CONVERTED IN FREME BUT NOT BEING USED
**Datasets converted in FREME but not being used.** During the FREME project, some datasets may be created but never used by the project. This is the case for the Statbel dataset: it was requested by an economic newspaper publisher during the FREME action, but the publisher had already lost interest in the project by the time the dataset was created, so the Statbel dataset was never used in FREME.
**Detailed description of Datasets converted but not being used in FREME:**
###### 1\. Statbel corpus
<table>
<tr>
<th>
**META-SHARE FIELD**
</th>
<th>
**DATAID FIELD**
</th>
<th>
**VALUE**
</th> </tr>
<tr>
<td>
**RESOURCE NAME**
</td>
<td>
**DATA SET REFERENCE AND NAME:**
</td>
<td>
**STATBEL CORPUS**
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA URI:**
</td>
<td>
**HTTP://RV2622.1BLU.DE/DATASETS/STATBEL/STATBEL-DATAID.TTL**
</td> </tr>
<tr>
<td>
</td>
<td>
**HOMEPAGE:**
</td>
<td>
**HTTPS://DATAHUB.IO/DATASET/STATBEL-CORPUS**
</td> </tr>
<tr>
<td>
</td>
<td>
**PUBLISHER:**
</td>
<td>
**AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr>
<tr>
<td>
</td>
<td>
**MAINTAINER:**
</td>
<td>
**AKSW/KILT, INFAI, LEIPZIG UNIVERSITY**
</td> </tr> </table>
**DESCRIPTION** **DESCRIPTION:** **THIS CORPUS CONTAINS RDF CONVERSIONS OF DATASETS FROM "STATISTICS BELGIUM" (ALSO KNOWN AS STATBEL), WHICH AIMS AT COLLECTING, PROCESSING AND DISSEMINATING RELEVANT, RELIABLE AND COMMENTED DATA ON BELGIAN SOCIETY (HTTP://STATBEL.FGOV.BE/EN/STATISTICS/FIGURES/).**
**CURRENTLY, THE CORPUS CONTAINS THREE DATASETS:**
- **BELGIAN HOUSE PRICE INDEX DATASET: MEASURES THE INFLATION ON THE RESIDENTIAL PROPERTY MARKET IN BELGIUM. THE DATA FOR CONVERSION WAS OBTAINED FROM HTTP://STATBEL.FGOV.BE/EN/STATISTICS/FIGURES/ECONOMY/CONSTRUCTION_INDUSTRY/HOUSE_PRICE_INDEX/**
- **EMPLOYMENT, UNEMPLOYMENT, LABOUR MARKET STRUCTURE DATASET: DATA ON EMPLOYMENT, UNEMPLOYMENT AND THE LABOUR MARKET FROM THE LABOUR FORCE SURVEY CONDUCTED AMONG BELGIAN HOUSEHOLDS. THE DATA FOR CONVERSION WAS OBTAINED FROM HTTP://STATBEL.FGOV.BE/EN/STATISTICS/FIGURES/LABOUR_MARKET_LIVING_CONDITIONS/EMPLOYMENT/**
- **UNEMPLOYMENT AND ADDITIONAL INDICATORS DATASET: CONTAINS UNEMPLOYMENT-RELATED STATISTICS ABOUT BELGIUM AND ITS REGIONS. THE DATA FOR CONVERSION WAS OBTAINED FROM HTTP://STATBEL.FGOV.BE/EN/MODULES/PUBLICATIONS/STATISTICS/MARCHE_DU_TRAVAIL_ET_CONDITIONS_DE_VIE/UNEMPLOYMENT_AND_ADDITIONAL_INDICATORS_2005-2010.JSP**
<table>
<tr>
<th>
</th>
<th>
**PROVENANCE:**
</th>
<th>
**STATBEL - HTTP://STATBEL.FGOV.BE/EN/STATISTICS/FIGURES/**
</th> </tr>
<tr>
<th>
</th>
<th>
**USEFULNESS:**
</th>
<th>
**STATBEL IS A USEFUL STATISTICS RESOURCE FOR ENRICHMENT OF VARIOUS KINDS OF CONTENT RELATED TO BELGIUM AND BELGIAN SOCIETY.**
</th> </tr>
<tr>
<td>
</td>
<td>
**SIMILAR DATA:**
</td>
<td>
**THE UNDATA**
</td> </tr>
<tr>
<td>
</td>
<td>
**RE-USE AND INTEGRATION:**
</td>
<td>
**TBD**
</td> </tr>
<tr>
<td>
**RESOURCE TYPE**
</td>
<td>
**FORMAT:**
</td>
<td>
</td> </tr>
<tr>
<td>
**MEDIA TYPE**
</td>
<td>
**MEDIA TYPE:**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**METADATA DESCRIPTION:**
</td>
<td>
**DONE IN LINKED DATA USING DATAID, A METADATA DESCRIPTION VOCABULARY BASED ON DCAT. DMP REPORTS ARE AUTOMATICALLY GENERATED AND MAINTAINED UP TO DATE USING THIS METADATA.**
</td> </tr>
<tr>
<td>
</td>
<td>
**VOCABULARIES AND ONTOLOGIES:**
</td>
<td>
**DATA CUBE**
</td> </tr>
<tr>
<td>
**LICENSE**
</td>
<td>
**LICENSE:**
</td>
<td>
**TBA**
</td> </tr>
<tr>
<td>
</td>
<td>
**ODRL LICENSE DESCRIPTION:**
</td>
<td>
**N/A**
</td> </tr>
<tr>
<td>
**USAGE**
</td>
<td>
**OPENNESS:**
</td>
<td>
**STATBEL IS AN OPEN DATASET**
</td> </tr>
<tr>
<td>
</td>
<td>
**SOFTWARE NECESSARY:**
</td>
<td>
**STATBEL NEEDS NO ADDITIONAL SOFTWARE TO BE USED.**
</td> </tr>
<tr>
<td>
</td>
<td>
**REPOSITORY:**
</td>
<td>
**HTTPS://DATAHUB.IO/DATASET/STATBEL-CORPUS**
</td> </tr>
<tr>
<td>
</td>
<td>
**PRESERVATION:**
</td>
<td>
**PRESERVATION OF THE STATBEL IS GUARANTEED BY ARCHIVAL OF OLD VERSIONS OF THE SCRIPTS USED FOR ITS CREATION AND BY REFERENCING THE SOURCE DATA. ALSO, PRESERVATION IS GUARANTEED BY ARCHIVAL OF THE OLD STATBEL CONVERTED VERSIONS ON THE ARCHIVE SERVER.**
</td> </tr>
<tr>
<td>
</td>
<td>
**GROWTH:**
</td>
<td>
**FREME AIMS AT PROVIDING CONVERSION OF THE NEWER, RICHER VERSIONS OF THE DATASETS AND AT CONVERTING ADDITIONAL ONES.**
</td> </tr>
<tr>
<td>
</td>
<td>
**ARCHIVE:**
</td>
<td>
**HTTP://RV1460.1BLU.DE/DATASETS/STATBEL/**
</td> </tr>
<tr>
<td>
**SIZE**
</td>
<td>
**SIZE:**
</td>
<td>
**A FEW THOUSAND RDF TRIPLES**
</td> </tr> </table>
**Table 11 Statbel dataset description**
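The Statbel conversion uses the RDF Data Cube vocabulary, in which each statistical figure becomes a qb:Observation attached to a qb:DataSet. A hedged sketch of what one observation might look like follows; all example.org URIs and the refPeriod/housePriceIndex terms are invented for illustration, and only the cube namespace itself is standard.

```python
# Sketch: one RDF Data Cube observation for a house-price-index figure.
# All example.org URIs and the refPeriod/housePriceIndex terms are
# invented for illustration; only the qb# namespace is standard.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/statbel/")

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

obs = EX["obs/hpi-2013"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX["dataset/house-price-index"]))
g.add((obs, EX.refPeriod, Literal("2013", datatype=XSD.gYear)))
g.add((obs, EX.housePriceIndex, Literal("104.3", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```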
### 3.1.4 LIST OF OTHER DATASETS USED IN FREME
**Other Datasets used in FREME.** To conclude the overview of datasets in FREME, the following list includes datasets that have been used by individual partners in the context of FREME but are not themselves used by the project.
**Other Datasets used in FREME:**
<table>
<tr>
<th>
**DATA SET NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**FREME USE?**
</th>
<th>
**USED IN SERVICE**
</th>
<th>
**LICENSE**
</th>
<th>
**LINK**
</th> </tr>
<tr>
<td>
**WAND FINANCE AND INVESTMENT TAXONOMY - WAND INC**
</td>
<td>
**A TAXONOMY WITH SPECIFIC TOPICS AND ENTITIES RELATED TO FINANCE AND
INVESTMENT**
</td>
<td>
**UPLOAD FOR TESTING**
</td>
<td>
**NO**
</td>
<td>
**EVALUATION LICENSE**
</td>
<td>
**WWW.WANDINC.COM/WAND-FINANCE-AND-INVESTMENT-TAXONOMY.ASPX**
</td> </tr>
<tr>
<td>
**CIARD RING**
</td>
<td>
**THE CIARD RING IS A GLOBAL DIRECTORY OF WEB-BASED INFORMATION SERVICES AND DATASETS FOR AGRICULTURAL RESEARCH FOR DEVELOPMENT. IT IS THE PRINCIPAL TOOL CREATED THROUGH THE CIARD INITIATIVE (HTTP://WWW.CIARD.NET) TO ALLOW INFORMATION PROVIDERS TO REGISTER THEIR SERVICES AND DATASETS IN VARIOUS CATEGORIES AND SO FACILITATE THE DISCOVERY OF SOURCES OF AGRICULTURE-RELATED INFORMATION ACROSS THE WORLD.**
</td>
<td>
**NOT YET**
</td>
<td>
**NO**
</td>
<td>
**CC ATTRIBUTION**
</td>
<td>
**_HTTPS://DATAHUB.IO/DATASET/THE-CIARD-RING_, _HTTP://RING.CIARD.INFO/RDF-STORE_**
</td> </tr>
<tr>
<td>
**AGRIS**
</td>
<td>
**INTERNATIONAL INFORMATION SYSTEM FOR THE AGRICULTURAL SCIENCE AND TECHNOLOGY**
</td>
<td>
**VALIDATE FREME SERVICE**
</td>
<td>
**NO**
</td>
<td>
**NO CLEAR LICENSE AVAILABLE YET; IT WILL BE AVAILABLE SOON.**
</td>
<td>
**_HTTPS://DATAHUB.IO/DATASET/AGRIS_**
</td> </tr> </table>
**Table 12 Other Datasets used in FREME**
# Article 1. Introduction
The present Data Management Plan (DMP) concerns data management and
intellectual property rights with respect to the EC Horizon 2020 CSA Project
BENEFIT (Grant Agreement No. 635973). This document should be considered in
combination with:
▪ Articles 8.2.2, 8.2.3, 9.1, 9.2, Attachment 1 and Attachment 3 of the Consortium Agreement
▪ Section 3 (Articles 23, 24, 25, 26, 27, 28, 29, 30 and 31) of the Grant Agreement No. 635973
The Plan is organised per project task in order to concretely describe the
contribution of each project partner to the final outcome as well as the spin-
off potential of each activity.
The scope of the BENEFIT project as described in its proposal and subsequent
grant agreement is as follows:
<table>
<tr>
<th>
BENEFIT takes an innovative approach by analysing funding schemes within an
inter-related system. Funding schemes are successful (or not) depending on the
Business Model that generates them. The performance of the Business Model is affected by the implementation and the transport mode context. It is matched successfully (or not) by a financing scheme. Relations between actors are described by a governance model (contracting arrangements). These are key elements in Transport Infrastructure Provision, Operation and Maintenance, as illustrated in Figure 1.
Figure 1: BENEFIT Key Elements in Transport Infrastructure Provision,
Operation and Maintenance
Success is a measure of the appropriate matching of elements. Within BENEFIT
funding and financing schemes are analysed in this respect. Describing these
key elements through their characteristics and attributes and clustering each
of them into **typologies** is the basis of, first, developing a generic
framework. Identifying best matches in their inter-relations ( **matching principles** ) makes it possible to move from a generic framework to a powerful decision-making one ( **Decision Matching Framework** ), developed to provide _policy makers and providers of funding_ (and financing) with _extensive comparative information on the advantages and limitations of different funding schemes for transport infrastructure projects and improve the awareness of policy makers on the needs of projects serving an efficient and performing transport network within the horizon 2050._
</th> </tr> </table>
Besides, the framework allows policy makers to identify changes that may be
undertaken in order to improve the potential of success, such as improving the
value proposition of the business model.
In developing this framework, BENEFIT takes stock of case studies known to its
partners in combination with a meta-analysis of relevant EC funded research
and other studies carried out with respect to funding schemes for transport
(and other) infrastructure and direct contact with key stakeholder groups.
More specifically, BENEFIT uses the **published** case study descriptions of
**seventy-five** transport infrastructure projects funded and financed by
public and private resources from **nineteen** European and **four**
non–European Countries covering all modes of transport. It also exploits
**twenty-four** European country profiles with respect to contextual issues
(institutions, regulations, macroeconomic and other settings) influencing
funding and financing of transport infrastructure. This data has been produced
within the framework of activities undertaken by the OMEGA Centre for Mega
Projects in Transport and Development and the COST Action TU1001 on Public
Private Partnerships in Transport: Trends and Theory. In addition, BENEFIT,
through its partnership and respective experts, consolidates almost **twenty**
years of successful European Commission research with respect to issues
related to transport infrastructure and planning, assessment and pricing of
transport services. Therefore, its approach is supported by the **tacit**
knowledge and insights of the BENEFIT partnership with respect to
infrastructure projects in transport.
By applying the Decision Matching Framework, BENEFIT will undertake:
* An ex-post analysis and assessment of _alternative funding schemes (public, PPP and other) based on existing experiences in different transport sectors and geographical areas and their assessment with respect to economic development, value for public money, user benefits, lifecycle investment, efficiency, governance and procurement modalities, etc_ .; and, provide _lessons learned, identification of the limitations of the various schemes and the impact of the economic and financial crisis_ 1 .
* An ex-ante (forward) analysis and assessment of _the potential of transport investments and the related funding schemes, including innovative procurement schemes still in a pilot phase, to contribute to economic recovery, growth and employment, in view of future infrastructure needs with a 2050 horizon_ for modern infrastructure, smart pricing and funding.
Finally, the BENEFIT partnership covers twelve EU countries and includes
fourteen partner institutes. Eleven of these partner institutes are members of
the Management Committee and Working Groups of the abovementioned COST Action
TU1001 (more specifically, the chair, vicechair and the working group
leaders). BENEFIT also benefits from the contribution of three more partners
(2 transport consultancy SMEs), who extend and support its expertise and
competence to enrich the partnership with new insights and market views.
Besides, an International Advisory board of prominent academics and
international institutions provide guidance and support. Public sector
authorities responsible for transport infrastructure, financiers, transport
operators and sponsors, innovation providers will also be consulted throughout
this Coordination and Support Action.
BENEFIT is concluded within **twenty one months** and bears the following
innovative aspects:
<table>
<tr>
<th>
•
</th>
<th>
**Transport infrastructure business models** and their **project** **rating:**
Improved value propositions lead to funding schemes with enhanced
creditworthiness enabling viable financing, balancing of project
</th> </tr>
<tr>
<td>
</td>
<td>
financing and funding risks, increasing the value basis of stakeholders and
highlighting the _potential of transport investments_ .
</td> </tr>
<tr>
<td>
•
</td>
<td>
**Transferability of findings** with respect to _lessons learned, limitations
and the impact of the economic_
</td> </tr>
<tr>
<td>
</td>
<td>
_and financial crisis_ through the introduction of typologies _._
</td> </tr>
<tr>
<td>
•
</td>
<td>
**Open-access case study database** in a wiki format, allowing for continuous
updates and providing a
</td> </tr>
<tr>
<td>
</td>
<td>
knowledge base serving both practitioners and researchers.
</td> </tr> </table>
The project concept has been developed by the project coordinator and
published in: Athena Roumboutsos (2015) “Case studies in transport Public
Private Partnerships: transferring Lessons learnt”, TRB 2015, Washington DC.
# Article 2. DMP of WP 1: Management and Coordination
This work package concerns the management and coordination of all BENEFIT
Project activities. No data issues and property rights are related to this
work package.
# Article 3. DMP of Task 2.2: BENEFIT Database
The BENEFIT database is a combination of existing and new data collected
describing case studies. Additional data is also collected with respect to
existing case study data.
## 3.1 Data types
Data generated and used in this project include the following data types.
### 3.1.1 Existing Data
<table>
<tr>
<th>
**Dataset**
**Description**
</th>
<th>
COST Action TU1001
</th>
<th>
COST Action TU1001
</th>
<th>
COST Action TU1001
</th>
<th>
OMEGA Center, UCL
</th> </tr>
<tr>
<td>
**Contact**
</td>
<td>
Athena
Roumboutsos, Un.
of the Aegean
</td>
<td>
Koen Verhoest, Un. of Antwerp
</td>
<td>
Champika
Liyanage,
UCLAN
</td>
<td>
OMEGA Center, UCL
</td> </tr>
<tr>
<td>
**Data**
**Volume**
</td>
<td>
49 Case study descriptions
</td>
<td>
23 Country
Profiles
</td>
<td>
30 Case study performance assessments
</td>
<td>
13 Case studies
</td> </tr>
<tr>
<td>
**Data Format**
</td>
<td>
Hard/electronic copy (word doc) in templates
</td>
<td>
Hard/electronic copy (word doc) in templates
</td>
<td>
Electronic copy (xls) in templates
</td>
<td>
Hard/electronic copy (word doc) and additional support materials (eg. reports
and interview data)
</td> </tr>
<tr>
<td>
**Delivery Date**
</td>
<td>
2013 - 2014
</td>
<td>
2013 - 2014
</td>
<td>
2013-2014
</td>
<td>
2006 - 2011
</td> </tr>
<tr>
<td>
**Preservation Plan**
</td>
<td>
Transfer to electronic database.
</td>
<td>
Transfer to electronic database.
</td>
<td>
Transfer to electronic database.
</td>
<td>
\-
</td> </tr> </table>
<table>
<tr>
<th>
**Public**
**Availability**
</th>
<th>
The narratives of all case studies are published in:
Roumboutsos, A.,
Farrell, S.,
Liyanage, C. L. and Macário, R. (2013) _COST_
_Action TU1001_
_Public Private_
_Partnerships in_
_Transport: Trends_
_ & Theory P3T3, _
_2013 Discussion_
_Papers Part II_
_Case Studies_ ,
ISBN 978-88-
97781-61-5 Available at _http://www.ppptra_
_nsport.eu_
Roumboutsos, A.,
Farrell, S.,
Verhoest, K.
(2014) (Eds.)
(2014). _COST_
_Action TU1001 –_
_Public Private_
_Partnerships in_
_Transport: Trends & Theory: 2014 _
_Discussion Series:_
_Country Profiles_
_ & Case Studies _ ;
ISBN 978-88-
6922-009-8,
COST Office, Brussels Available at:
_http://www.ppptransport.eu_
</th>
<th>
The narratives of all country profiles are published in:
Verhoest K.,
Carbonara N.,
Lember V.,
Petersen O.H., Scherrer W. and van den Hurk M (eds)., _COST_
_Action TU1001_
_Public Private_
_Partnerships in_
_Transport: Trends_
_ & Theory P3T3, _
_2013 Discussion_
_Papers Part I_
_Country Profiles_ ,
ISBN: 978-88-
97781-60-8,
COST Office, Brussels Available at:
http://www.ppptransport.eu.
Roumboutsos, A.,
Farrell, S.,
Verhoest, K.
(2014) (Eds.)
(2014). _COST_
_Action TU1001 –_
_Public Private_
_Partnerships in_
_Transport: Trends & Theory: 2014 _
_Discussion Series:_
_Country Profiles_
_ & Case Studies _ ;
ISBN 978-88-
6922-009-8,
COST Office, Brussels Available at:
_http://www.ppptransport.eu_
</th>
<th>
Data is published in journals and other open access documents by COST Action
TU1001 working group
(performance) members.
</th>
<th>
The narratives of all case studies, in summary and full report, are publicly available at:
http://www.omegacentre.bartlett.ucl.ac.uk
</th> </tr>
<tr>
<td>
**Issues**
</td>
<td>
Case studies “owners” are to be referenced based on the above publications.
The case study owners have reserved no further rights.
Contact is a member of the BENEFIT Project Consortium and obliged to share
this data in the project.
</td>
<td>
Case studies “owners” are to be referenced based on the above publications.
The case study owners have reserved no further rights.
Contact is a member of the BENEFIT Project Consortium and obliged to share
this data in the project.
</td>
<td>
Case studies “owners” are to be referenced based on COST Action TU1001
publications. The case study owners have reserved no further rights.
Contact is a member of the BENEFIT Project Consortium and obliged to share
this data in the project.
</td>
<td>
Acknowledgment of IP.
Contact is a member of the BENEFIT Project Consortium and obliged to share
this data in the project.
</td> </tr> </table>
**3.1.2 New data generated in the course of the BENEFIT project**
New data generated concerns the following:
1. New case studies and other information collected based on BENEFIT requirements. New data will be collected by BENEFIT partners. This data will be inserted into the BENEFIT Database, part of the BENEFIT Portal operated by the University of the Aegean.
2. Updating of existing data to the BENEFIT project requirements.
Existing data will be updated to include information required by the BENEFIT
Project. Existing data will be updated by the data “owners”. If data “owners”
are not part of the BENEFIT project consortium, data will be updated by
members of the consortium. If this is not possible, data for these cases will
be used in their existing form and version. The OMEGA Center case studies will
be transferred to the structure of the database to secure compatibility.
Data collected (case study data) will always belong to the “owner” (provider
of the data). The data “owner” contributes the data to the BENEFIT project by
supplying the BENEFIT database. “Owner” Name and Affiliation are registered in
the database.
Data storage and back-up strategy follows the general rules for data security
followed by the University of the Aegean network services.
## 3.2 Data Organisation, Documentation and Metadata
Data is organized in a database and documented in a standardized way to
register:
1. “Owners”
2. Revisions and Updates
Values and indicators included in the case study description may be aggregated
into indicators required by the BENEFIT project. These indicators will be
included in the dataset accompanying the particular case study entry.
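As a purely illustrative sketch of how such a case study entry could be represented (all field names are assumptions; the actual BENEFIT database schema is not specified in this plan), a single record with owner and revision metadata might look like the following:

```python
# Illustrative record structure for a BENEFIT case-study entry.
# Field names are assumptions; the actual database schema is not
# specified in this plan.
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List, Tuple

@dataclass
class CaseStudyEntry:
    title: str
    owner_name: str            # data "owner" as registered in the database
    owner_affiliation: str
    narrative: str
    indicators: Dict[str, float] = field(default_factory=dict)  # aggregated BENEFIT indicators
    revisions: List[Tuple[date, str, str]] = field(default_factory=list)  # (date, editor, note)

entry = CaseStudyEntry(
    title="Example motorway concession",
    owner_name="Jane Doe",
    owner_affiliation="Example University",
    narrative="Case study narrative...",
)
entry.revisions.append((date(2015, 6, 1), "Jane Doe", "Initial submission"))
```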
## 3.3 Data Access and Intellectual Property
Access to the database will be provided to only one member per partner. Access
to the database and downloads or export of datasets will be automatically
monitored by the University of the Aegean.
This information will be released to the BENEFIT consortium on a monthly basis. It will be used for self-regulation of usage, safeguarding against abuse.
A narrative of the new data collected will be:
1. Published in an edited ISBN e-book and made available on the BENEFIT portal free of charge.
2. Contributed to the BENEFIT wiki and made available on the BENEFIT portal.
Existing data and their updates/revised narratives will be contributed to the
BENEFIT wiki.
Permissions with respect to “owners” of existing data belonging to the BENEFIT
project consortium and new data “owners” are not required as this is part of
the BENEFIT grant agreement.
Permissions with respect to “owners” of existing data not belonging to the
BENEFIT project consortium will be requested. If permission is not granted,
this data (case studies) will not feature in the BENEFIT wiki.
Finally, permissions with respect to the collection of data (eg. permission to
use interview data as requested by the EC) will be uploaded in the dataset per
case.
## 3.4 Data sharing and Reuse
All partners will have access to data during the project period as described
in section 3.3.
Following the completion of the BENEFIT project, rights will be established
based on each “owner’s” contribution to the database. These rights also
include the rights of “owners” who are not members of the BENEFIT project
consortium. The concrete algorithm with respect to ownership rights will be
established once the data collection period is concluded.
Two months before the end of the project period, a business plan will be
prepared to analyse and structure the potential exploitation of the database.
Potential reuse of data may be for future and further research and educational
purposes.
“Owners” or groups of “Owners” may use their own data to produce/publish academic research papers or for teaching purposes. In all cases they are obliged to:
1. Reference the publication where the data was first published (COST Action TU1001 Discussion Series, BENEFIT e-book, etc.)
2. Reference the BENEFIT project in accordance with EC Grant rules
3. Reference the BENEFIT Database and the University of the Aegean.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 3.5 Data Preservation and Archiving
The database will be accessible through the BENEFIT portal for five years
following the end of the project. During this period, unless otherwise decided
by the consortium members under section 3.4, the database functionality will
remain the same as during the project duration.
Following this 5-year period, and if no other decision has been made with respect to the database by the consortium members, the datasets will be attached to the BENEFIT wiki for open access.
# Article 4\. DMP Task 2.2: Funding Schemes & Business Models
This is a theory-consolidating task. Dimensions, Types and Indicators
generated concern the contributions of partners as follows:
1. business models – UAEGEAN, UT
2. funding schemes - TIS
3. transport mode context – UA
4. implementation context – UA, IBDIM, UCLAN.
**4.1 Data types**
No data is generated in this task.
## 4.2 Data Organisation, Documentation and Metadata
Theory generated in this task is reported in the respective BENEFIT project
deliverable. As noted in the Quality Assurance Plan, Quality is controlled by
the Task Leader, the Work Package Leader and the Project Coordinator. The task
deliverable is “Public” and will feature on the BENEFIT Portal.
## 4.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the
DMP.
With respect to intellectual property rights these are shared as follows per
subtasks:
1. business models – UAEGEAN: 50%, UT:50%
2. funding schemes – TIS: 100%
3. transport mode context – UA :100%
4. implementation context – UA:50%, IBDIM:20%, UCLAN:30%.
The task leader, based on actual contribution may propose a different share of
ownership. This share should be approved by task members.
## 4.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 4.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 5. DMP of Task 2.3: Financing Schemes
This is a theory-consolidating task. Dimensions, Types and Indicators
generated concern the contributions from the following partners: KIT, UAEGEAN,
UCL, TRT, FCE.
**5.1 Data types**
No data is generated in this task.
## 5.2 Data Organisation, Documentation and Metadata
Theory generated in this task is reported in the respective BENEFIT project
deliverable. As noted in the Quality Assurance Plan, Quality is controlled by
the Task Leader, the Work Package Leader and the Project Coordinator. The task
deliverable is “Public” and will feature on the BENEFIT Portal.
## 5.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the
DMP.
With respect to intellectual property rights these are shared equally between
partners (20% each). The task leader, based on actual contribution may propose
a different share of ownership. This share should be approved by task members.
## 5.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 5.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 6\. DMP of Task 2.4: Governance, Procurement and Contractual
Agreement
This is a theory-consolidating task. Dimensions, Types and Indicators
generated concern the contributions from the following partners: Univ. of
Twente, UCL, UCLAN, IBDIM, IST and KIT.
**6.1 Data types**
No data is generated in this task.
## 6.2 Data Organisation, Documentation and Metadata
Theory generated in this task is reported in the respective BENEFIT project
deliverable. As noted in the Quality Assurance Plan, Quality is controlled by
the Task Leader, the Work Package Leader and the Project Coordinator. The task
deliverable is “Public” and will feature on the BENEFIT Portal.
## 6.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the
DMP.
With respect to intellectual property rights these are shared equally between
partners. The task leader, based on actual contribution may propose a
different share of ownership. This share should be approved by task members.
## 6.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 6.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 7. DMP of Task 3.1: Matching Principles
This is a theory-building task. Contributors are:
Framework Building: UCL and UAEGEAN (key contributors); IST, TIS, ULPGC
(contribution)
Receiving/ Giving Input: UA, KIT, UT, TRT, UCLAN
**7.1 Data types**
No data is generated in this task.
## 7.2 Data Organisation, Documentation and Metadata
Theory generated in this task is reported in the respective BENEFIT project
deliverable. As noted in the Quality Assurance Plan, Quality is controlled by
the Task Leader, the Work Package Leader and the Project Coordinator. The task
deliverable is “Public” and will feature on the BENEFIT Portal.
## 7.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the
DMP.
With respect to intellectual property rights these are shared as follows:
UCL: 35%; UAEGEAN: 35%; IST: 10%; TIS: 10%; ULPGC: 10%
The task leader, based on actual contribution may propose a different share of
ownership. This share should be approved by task members.
## 7.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 7.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 8\. DMP of Task 3.2: Policy Tool & Rating Methodology
This is a theory-building task. Contributors are:
Framework Building/ Methodology: UAEGEAN and UCL
Receiving/ Giving Input: KIT, UT, TRT
**8.1 Data types**
No data is generated in this task.
## 8.2 Data Organisation, Documentation and Metadata
Theory generated in this task is reported in the respective BENEFIT project
deliverable. As noted in the Quality Assurance Plan, Quality is controlled by
the Task Leader, the Work Package Leader and the Project Coordinator. The task
deliverable is “Public” and will feature on the BENEFIT Portal.
## 8.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the
DMP.
With respect to intellectual property rights these are shared as follows:
UAEGEAN: 50% and UCL: 50%.
The task leader, based on actual contribution may propose a different share of
ownership. This share should be approved by task members.
## 8.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 8.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 9. DMP of Task 4.1: Lessons Learned
This is a study task. Contributors are: TRT, UAEGEAN, UA, OULU, CEREMA, KIT,
IBDIM, IST, TIS, ULPGC, FCE, and UCL.
**9.1 Data types**
No data is generated in this task.
## 9.2 Data Organisation, Documentation and Metadata
The study is reported in the respective BENEFIT project deliverable. As noted
in the Quality Assurance Plan, Quality is controlled by the Task Leader, the
Work Package Leader and the Project Coordinator. The task deliverable is
“Public” and will feature on the BENEFIT Portal.
## 9.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the
DMP.
With respect to intellectual property rights these are equally shared. The
task leader, based on actual contribution may propose a different share of
ownership. This share should be approved by task members.
## 9.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 9.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 10\. DMP of Task 4.2: Limitations and Tolerance to Change
This is a study task and a pilot test to the matching principles framework
(task 3.1). Contributors are: UCLAN, UAEGEAN, UA, OULU, CEREMA, TRT, IST, TIS,
ULPGC, UCL, FCE.
**10.1 Data types**
No data is generated in this task.
## 10.2 Data Organisation, Documentation and Metadata
The study is reported in the respective BENEFIT project deliverable. As noted
in the Quality Assurance Plan, Quality is controlled by the Task Leader, the
Work Package Leader and the Project Coordinator. The task deliverable is
“Public” and will feature on the BENEFIT Portal.
## 10.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the
DMP.
With respect to intellectual property rights these are equally shared. The
task leader, based on actual contribution may propose a different share of
ownership. This share should be approved by task members.
## 10.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 10.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 11\. DMP of Task 4.3: Effects of Recent Economic and Financial
Crisis
This is a study task and a pilot test to the matching principles framework
(task 3.1). Contributors are: FCE, UAEGEAN, UA, OULU, CEREMA, KIT, TRT, IBDIM,
IST, ULPGC, UCLAN, UCL
**11.1 Data types**
No data is generated in this task.
## 11.2 Data Organisation, Documentation and Metadata
The study is reported in the respective BENEFIT project deliverable. As noted
in the Quality Assurance Plan, Quality is controlled by the Task Leader, the
Work Package Leader and the Project Coordinator. The task deliverable is
“Public” and will feature on the BENEFIT Portal.
## 11.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the
DMP.
With respect to intellectual property rights these are equally shared. The
task leader, based on actual contribution may propose a different share of
ownership. This share should be approved by task members.
## 11.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 11.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 12\. DMP of Task 5.1: Potential of Investments in Transport
Infrastructure
This is a study task and an application of the policy tool (task 3.2).
Contributors are: UT, UAEGEAN, UA, OULU, CEREMA, KIT, TRT, IBDIM, IST, TIS,
UCL.
**12.1 Data types**
No data is generated in this task.
## 12.2 Data Organisation, Documentation and Metadata
The study is reported in the respective BENEFIT project deliverable. As noted
in the Quality Assurance Plan, Quality is controlled by the Task Leader, the
Work Package Leader and the Project Coordinator. The task deliverable is
“Public” and will feature on the BENEFIT Portal.
## 12.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the
DMP.
With respect to intellectual property rights these are equally shared. The
task leader, based on actual contribution may propose a different share of
ownership. This share should be approved by task members.
## 12.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 12.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 13. DMP of Task 5.2: Policy Dialogues
This is a consulting task. Contributors are: IST, UAEGEAN, UA, OULU, CEREMA,
KIT, TRT, UT, TIS, ULPGC.
**13.1 Data types**
Policy opinions are registered.
## 13.2 Data Organisation, Documentation and Metadata
Opinions are reported in the respective BENEFIT project deliverable. As noted
in the Quality Assurance Plan, Quality is controlled by the Task Leader, the
Work Package Leader and the Project Coordinator. The task deliverable is
“Public” and will feature on the BENEFIT Portal.
## 13.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the
DMP.
With respect to intellectual property rights these are equally shared. The
task leader, based on actual contribution may propose a different share of
ownership. This share should be approved by task members.
## 13.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 13.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 14\. DMP of Task 5.3: Policy Guidelines and Recommendations
This is a study task. Contributors are: UAEGEAN, UA, OULU, CEREMA, KIT, TRT,
UT, IST, TIS, ULPGC.
**14.1 Data types**
No data is generated.
## 14.2 Data Organisation, Documentation and Metadata
The study is reported in the respective BENEFIT project deliverable. As noted
in the Quality Assurance Plan, Quality is controlled by the Task Leader, the
Work Package Leader and the Project Coordinator. The task deliverable is
“Public” and will feature on the BENEFIT Portal.
## 14.3 Data Access and Intellectual Property
There is no issue with respect to access to data.
With respect to intellectual property rights, contributors are encouraged to
publish their work following the publication schedule in section 16 of the DMP.
With respect to intellectual property rights these are equally shared. The
task leader, based on actual contribution may propose a different share of
ownership. This share should be approved by task members.
## 14.4 Data sharing and Reuse
There is no data sharing issue. Results are made publicly available through the BENEFIT report.
Publications or any other use of this output should reference the BENEFIT project in accordance with EC Grant rules and the respective deliverable report.
The publication schedule is outlined in section 16 of the Data Management
Plan.
## 14.5 Data Preservation and Archiving
The respective report and all other publications generated within this
activity will be made available on the BENEFIT Portal. The portal will be
active for 5 years following the end of the project.
# Article 15. DMP of WP 6: Dissemination & Exploitation
This work package concerns the dissemination of all BENEFIT Project findings.
No data issues and property rights are related to this work package. Issues of
exploitation are addressed in the respective sections.
# Article 16. Publications and Publication Schedule
The major foreseen output of the project are scientific publications.
Consortium members are encouraged to publish BENEFIT project findings as well
as spin-off concepts that develop from the research within the BENEFIT
project.
The partners are entitled to publish research results and development results
obtained from BENEFIT in the usual scientific form. However, all publications
must be put on the website and submitted to all Partners together with a
request for permission to publish. Requests for such permission to publish
shall be responded to within one month of receipt thereof. Agreement is
considered to have been granted, if no objection is raised within a period of
one month after submission of the manuscript to all partners. Such permission
shall not be withheld longer than it is needed to enable arising intellectual
property to be protected and, in any case, not longer than 6 months from the
date of the request to publish. Finally, the participating academic Partners
are entitled to use knowledge or results from the project that either have
been published or have been declassified for research and teaching purposes.
Appropriate reference will be made to the project’s funding (Horizon 2020).
The same applies to the use of knowledge in consultancy studies.
Scheduled publications and dissemination concern:
* Newsletters: Brief quarterly e-newsletter including BENEFIT findings, addressing the points of interest as well as relevant trends and evolutions in various parts of Europe and the world. The BENEFIT work programme is set up so as to allow quarterly reporting. Brief surveys will be conducted to monitor the level of satisfaction with the content and its usefulness, so as to allow for improvements over the course of the project.
In support of this activity:
* Task leaders will provide highlights of their findings
* Advisory and Consultation Groups will be asked for news they would like to have posted
* Abstracts/highlights of scientific publications (relevant and generated from the BENEFIT project) will be provided
* Briefs of Targeted reports
* Future activities will be announced.
* Targeted Reports: BENEFIT is setup to address key issues in the White Paper and Horizon 2020 Strategy. To this effect it will provide reports targeting specific issues of interest such as
on transport infrastructure charging, promotion of the adoption of innovation
in infrastructure, new financing instruments, project rating and means of
enhancement and others. These are envisaged to be produced at the end of each
task to provide highlights to policy makers, providers of funding and finance,
EC officials, institutions and the relevant consultation group members.
BENEFIT will seek feedback so as to improve the usability of these reports.
* Publications (scientific articles, publications, press releases, conference papers etc) will be archived on the BENEFIT portal. Furthermore, disseminating key BENEFIT findings to the academic community will form a special issue in the “Case Studies in Transport Policy”, Elsevier Journal. Publications are expected to be produced through-out the course of BENEFIT.
* Final book publication “Post-Crisis PPP models”
# Article 17\. Patents & Protection of Intellectual Property
No patents or other form of intellectual property protection are expected to
be produced by the BENEFIT project. However, should such opportunity arise,
each partner is obligated to fully inform the Dissemination/exploitation
manager and the project coordinator of the filing of protection applications
of knowledge or results created in the field of the project within two weeks
of the date of filing. Results (resulting from the project) shall be made
available free of charge to BENEFIT partners of the consortium for the
implementation of the project, following the common rules of acknowledging the
project source, authors and EC funding. Results (resulting from the project)
owned by one or more of the partners shall be licensed to other partners of the consortium on fair and reasonable conditions, if needed to enable these partners to exploit their own results, following the procedures in the BENEFIT
Grant Agreement and Consortium Agreement. Use of Results for non-commercial
research shall be royalty free.
# Article 18. Updates and Revision
The DMP may be updated at mid-term and at project closing. It may also be updated and revised if issues arise that have not been foreseen.
# Article 19. Miscellaneous
## 19.1 Language
This Data Management Plan is drawn up in English, which language shall govern
all documents, notices, meetings, arbitral proceedings and processes relative
thereto.
## 19.2 Applicable law
This Data Management Plan shall be construed in accordance with and governed
by the laws of Belgium excluding its conflict of law provisions.
## 19.3 Settlement of disputes
The parties shall endeavour to settle their disputes amicably.
All disputes arising out of or in connection with this Data Management Plan,
which cannot be solved amicably, shall be finally settled under the Rules of
Arbitration of the International Chamber of Commerce by one or more
arbitrators appointed in accordance with the said Rules.
The place of arbitration shall be Brussels if not otherwise agreed by the
conflicting Parties.
The award of the arbitration will be final and binding upon the Parties.
Nothing in this Data Management Plan shall limit the Parties' right to seek
injunctive relief in any applicable competent court.
# Article 20. Signatures
**AS WITNESS:**
The Parties have caused this Data Management Plan to be duly signed by the
undersigned authorised representatives (e-signature), the Project Coordinator
and the person responsible for enforcing the Data Management Plan.
Dr. Athena Roumboutsos, BENEFIT project Coordinator
Signature
Name
Title
Date
Dr. Thierry Vaneslander, responsible within the consortium for enforcing the
DMP
Signature
Name
Title
Date
**PANEPISTHMIO AIGAIOU [UAEGEAN],**
Signature(s)
Name(s)
Title(s)
Date
**UNIVERSITEIT ANTWERPEN [UA]**
Signature(s)
Name(s)
Title(s)
Date
**OULUN YLIOPISTO [OULUN YLIOPISTO]**
Signature(s)
Name(s)
Title(s)
Date
**CENTRE D'ETUDES ET D'EXPERTISE SUR LES RISQUES, L'ENVIRONNEMENT, LA MOBILITE ET L'AMENAGEMENT [CEREMA]**
Signature(s)
Name(s)
Title(s)
Date
**KARLSRUHER INSTITUT FUER TECHNOLOGIE [KIT]**
Signature(s)
Name(s)
Title(s)
Date
**TRT TRASPORTI E TERRITORIO SRL [TRT]**
Signature(s)
Name(s)
Title(s)
Date
**UNIVERSITEIT TWENTE [UNIVERSITEIT TWENTE]**
Signature(s)
Name(s)
Title(s)
Date
**INSTYTUT BADAWCZY DROG I MOSTOW [IBDIM]**
Signature(s)
Name(s)
Title(s)
Date
**INSTITUTO SUPERIOR TECNICO [IST]**
Signature(s)
Name(s)
Title(s)
Date
**TIS PT, CONSULTORES EM TRANSPORTES, INOVACAO E SISTEMAS, SA [TISPT]**
Signature(s)
Name(s)
Title(s)
Date
**UNIVERSIDAD DE LAS PALMAS DE GRAN CANARIA [UNIVERSIDAD DE LAS PALMAS DE GRAN CANARIA]**
Signature(s)
Name(s)
Title(s)
Date
**UNIVERSITY COLLEGE LONDON [UNIVERSITY COLLEGE LONDON]**
Signature(s)
Name(s)
Title(s)
Date
**UNIVERSITY OF CENTRAL LANCASHIRE [UCLAN]**
Signature(s)
Name(s)
Title(s)
Date
**FACULTY OF CIVIL ENGINEERING [FACULTY OF CIVIL ENGINEERING]**
Signature(s)
Name(s)
Title(s)
Date
0898_ADMONT_661796.md
# Chapter 1 “Introduction”
This deliverable briefly describes the data management plan and the policy for
exploitation and protection of results. The DMP is based on Article 29 of the
Grant Agreement (GA) and Article 8 of the Consortium Agreement. The ADMONT DMP
follows the Horizon 2020 data management guidelines of 16 December 2013.
Article 29 of our GA, “Dissemination of Results, Open Access and Visibility of
Support”, covers the obligation to disseminate results (Article 29.1) and open
access to scientific publications (Article 29.2). Article 29.3, “Open access
to research data”, is not applicable to ADMONT.
More specifically, the DMP outlines how data will be handled, what methodology
and standards will be used, whether and how the data will be exploited or made
accessible for verification and re-use, and how it will be curated and
preserved during and even after the ADMONT project. The DMP can be considered
a checklist for the future, as well as a reference for the resource and budget
allocations related to data management.
To explain why a DMP is elaborated during the lifespan of a research project:
the European Commission’s vision is that information already paid for by the
public purse should not be paid for again each time it is accessed or used.
Other European companies should thus benefit from this already performed
research.
To be more specific, “data” refers to information, in particular facts or
numbers, collected to be examined and considered as a basis for reasoning,
discussion or calculation. In a research context, examples of data include
statistics, results of experiments, measurements, observations resulting from
fieldwork, survey results, interview recordings and images. The focus is on
research data that is available in digital form.
The DMP is not a fixed document. It will evolve and gain more precision and
substance during the lifespan of the ADMONT project.
Article 8 of the ADMONT Consortium Agreement describes the rules and policy
for ownership, rights, transfer and dissemination of results. Article 9
implements access rights for use and exploitation of results, including
specific provisions for access rights to software.
In chapter 2, we provide our data management plan (DMP) and policy to ensure
open access to data from scientific publications.
In chapter 3, we provide our policy for ownership, rights and dissemination
from our Consortium Agreement.
# Chapter 2 “Data Management Plan (DMP)”
According to Grant Agreement Article 29.2, we have to ensure open access to
data from scientific publications. We follow the Guidelines on Data Management
in Horizon 2020 of 16 December 2013, using Annex 1, “Data Management Plan
template”. The term ‘data management’ stands for an extensive strategy
targeting data availability to target groups within an organized and
structured process converted into practice. Before data is made available to
the public, the published data needs to be defined, collected, documented and
addressed properly.
## 2.1 Data set reference and name
In our multi-KET pilot line project, we generate data along the value chain
during pilot production and demonstrator preparation. During material, process
and module development, we also produce different kinds of data and metadata.
Normally, the partners deliver a set of standard data to the customer after
wafer processing, test or packaging. This data package is defined in the
business model along the pilot line. Table 1 summarises the standard data and
data formats. This data set covers the normal foundry and subcontracting
business.
<table>
<tr>
<th>
**Quality & Process Data**
</th>
<th>
**Data Format**
</th> </tr>
<tr>
<td>
PCM measurement data
</td>
<td>
csv
</td> </tr>
<tr>
<td>
Wafer map data
</td>
<td>
Customer format
</td> </tr>
<tr>
<td>
AVI map data
</td>
<td>
Customer format
</td> </tr>
<tr>
<td>
In-line data
</td>
<td>
csv
</td> </tr>
<tr>
<td>
Test data
</td>
<td>
Customer format
</td> </tr>
<tr>
<td>
Shipment information
</td>
<td>
Word, pdf
</td> </tr>
<tr>
<td>
Packaging information
</td>
<td>
Word, pdf
</td> </tr>
<tr>
<td>
Process deviation, findings
</td>
<td>
Word, pdf
</td> </tr> </table>
Table 1: Standard Data and Data Format for Customer
In addition to the standard data, the following data are produced:

* Lab analysis data (electrical, physical, chemical, optical)
* Characterization data (material, devices, modules, systems)
* Reliability data (devices, IP, modules, systems)
* Simulation and modelling data (passive, active and parasitic devices, module and system data)
* Mask and design data
* Field application data
* Product data sheets and application manuals
* PDK (Process Design Kit) data

This list will be extended during the project lifespan.
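As a simple illustration of how such a customer data package could be checked for completeness before delivery, the following Python sketch verifies that a delivery folder contains files for the csv- and document-based items of Table 1. The folder layout and file-name patterns are illustrative assumptions, not part of the ADMONT business model.

```python
from pathlib import Path

# Expected customer data package items, following Table 1. The file-name
# patterns are illustrative assumptions; items delivered in customer-specific
# formats (wafer maps, test data) are omitted because their formats vary.
EXPECTED_PACKAGE = {
    "PCM measurement data": ("*.csv",),
    "In-line data": ("*.csv",),
    "Shipment information": ("*.doc", "*.docx", "*.pdf"),
    "Packaging information": ("*.doc", "*.docx", "*.pdf"),
    "Process deviation, findings": ("*.doc", "*.docx", "*.pdf"),
}

def missing_items(delivery_folder: str) -> list:
    """Return the Table 1 items for which no matching file was found."""
    root = Path(delivery_folder)
    missing = []
    for item, patterns in EXPECTED_PACKAGE.items():
        found = any(list(root.glob(pattern)) for pattern in patterns)
        if not found:
            missing.append(item)
    return missing

# Example use with a placeholder folder name:
# print(missing_items("deliveries/lot_A123"))
```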
## 2.2 Data set description
In all modern FABs, many different data types with different data structures
need to be collected and processed; they are used to control material flows,
to determine process quality and to trigger preventive actions in case of
abnormal behavior.
For the virtual pilot line, some data sets are needed to ensure and control
the next processing steps in the “next” FAB.
Typical data sets are:

* Electrical data from micro- or nanotechnology devices
* Design data from micro- or nanotechnology devices
* Mask data from micro- or nanotechnology devices
* Material analysis data with concentration, composition, distribution and morphology
* Reliability data for lifetime estimation, failure rate calculation and parameter degradation
* Outgoing maps to indicate yield and ink positions (final maps)
* Technology and device simulation, process simulation, system simulation, mechanical stress simulation and reliability simulation data
* Electrical test data from modules, systems and sub-systems
* Field application reports

This list will be extended during the project lifespan.
## 2.3 Standards and metadata
For design and mask data in micro and nano technology, the GDSII format is
common. PCM test and electrical test data are provided in csv format. All
modern lab measurement and analysis tools support data transfer in
international standard formats; transfer in user- or customer-specific formats
is also common. In general, we follow international standards when generating
and collecting data and metadata.
The data exchange in the virtual pilot lines needs to be standardized. Some
data sources already produce industry-standard formats, such as GDSII data for
design and mask data.
For other data sources a standard data format will be declared in detail. The
baselines for this standardization are:
* WIP, PCM test and electrical test data will be exchanged as ASCII-Files.
* Wafer and substrate mapping will be exchanged in SEMI-Standard E142.
* Wafermaps will be exchanged in SEMI-Standard G85-0703.
To support these data formats, the virtual pilot line partners need to
implement internal and external format converters and interfaces for data
reading and delivery.
An overall decentralized data exchange mechanism needs to be implemented to
ensure reliable data exchange.
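As a minimal sketch of such a format converter, the following Python snippet reads PCM test data from a csv file and rewrites it as a plain ASCII exchange file with a small identifying header. The column names (`wafer_id`, `parameter`, `value`, `unit`) and the header layout are hypothetical placeholders; a real converter would implement the exchange format agreed between the pilot line partners.

```python
import csv

def pcm_csv_to_ascii(csv_path, ascii_path, lot_id):
    """Convert PCM test data from csv to a simple ASCII exchange file.

    The column names and header lines are illustrative assumptions only.
    """
    with open(csv_path, newline="") as src, open(ascii_path, "w") as dst:
        reader = csv.DictReader(src)
        # Minimal header so the receiving FAB can identify the shipment.
        dst.write("FORMAT PCM-ASCII-DRAFT\n")
        dst.write("LOT_ID %s\n" % lot_id)
        for row in reader:
            # One whitespace-separated record per measurement.
            dst.write("%s %s %s %s\n" % (
                row["wafer_id"], row["parameter"], row["value"], row["unit"]))

# Example use with placeholder file names:
# pcm_csv_to_ascii("pcm_lot_A123.csv", "pcm_lot_A123.txt", "A123")
```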
## 2.4 Data sharing
Basically, the ADMONT consortium agreed to follow the instructions of GA 29.2
on open access to scientific publications. The consortium is aware of the
importance of providing access to generated data in order to advance science
and maximise the return on research investments. Data sharing is an important
issue within the ADMONT consortium, as is sharing data with consortium-external
interest groups.
The project-internal data sharing is regulated by our CA (Chapter 3) and
realized via our password-protected SVN data management system. Every project
partner has open access to all project-related collected and archived data. By
signing the GA and CA, all partners agreed to and accepted instruction GA 29
on data sharing. Every partner is responsible for guaranteeing open access to
scientific data.
The project consortium will incorporate interim project results into
scientific publications and present them at fairs, workshops and conferences.
The level of detail will be defined in coordination with the coordinator. For
scientific publications, our CA foresees the acceptance of the project
partners before publication. Basically, the consortium-internal golden rule
for making data available to project-external parties is that the publication
of the data must not negatively impact the project goals and outcomes.
All project results and deliverables are classified with a dissemination level
according to our DOA, differentiating between confidential and public.
Confidential project data will only be available to consortium members,
including the Commission Services, while public results will be published on
the project website and will be downloadable by all stakeholders. As the
project website will be maintained even after the project lifetime, it is
assured that the data will still be available after the project's end.
**In particular, each beneficiary must:**
1. As soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications;
Moreover, the beneficiary must aim to deposit at the same time the research
data needed to validate the results presented in the deposited scientific
publications.
2. Ensure open access to the deposited publication — via the repository — at the latest:
1. On publication, if an electronic version is available for free via the publisher, or
2. Within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.
3. Ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication.
The bibliographic metadata must be in a standard format and must include all
of the following:
* the terms “ECSEL”, “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and the length of the embargo period if applicable; and
* a persistent identifier.
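As a hedged sketch of how this mandatory bibliographic metadata could be assembled and checked before deposit, consider the following Python snippet. The field names are illustrative (a repository would map such a record onto a standard schema such as Dublin Core); the grant number is taken from this deliverable's identifiers, while the full action name is left as a placeholder.

```python
REQUIRED_TERMS = ["ECSEL", "European Union (EU)", "Horizon 2020"]

def build_publication_metadata(title, authors, publication_date,
                               persistent_identifier, embargo_months=0):
    """Assemble the mandatory bibliographic metadata for a deposited paper."""
    record = {
        "title": title,
        "authors": authors,
        "funding_terms": REQUIRED_TERMS,
        "action_name": "<full ADMONT action name>",  # placeholder
        "acronym": "ADMONT",
        "grant_number": "661796",
        "publication_date": publication_date,
        "embargo_months": embargo_months,                # 0 = no embargo
        "persistent_identifier": persistent_identifier,  # e.g. a DOI
    }
    # Reject records with empty mandatory fields.
    incomplete = [key for key, value in record.items()
                  if value in (None, "", [])]
    if incomplete:
        raise ValueError("missing mandatory metadata: %s" % incomplete)
    return record
```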
## 2.5 Data archiving and preservation
Generally, the partners believe that it won’t be necessary to destroy any
data. However, some confidential data may need to be restricted; this will be
decided on a case-by-case basis. At this early stage, some partners could not
yet identify whether destroying data will be necessary at all, as this also
depends on the software and hardware targets that still need to be decided.
On our ADMONT webpage, the data will be stored for three years beyond the
project lifespan. After this time, data access is possible via the author, or
via the institute or company the author was working for up to the submission
of the scientific publication. The standard storage time for data is 10 years
after generation. Every partner follows its own data management and data
security policy.
As the project progresses, it will be agreed which data will be kept and which
data will be destroyed. This will be done according to the ADMONT project
rules, agreements and discussions within the consortium. So far, the partners
have already expressed that data relevant for scientific evaluation and
publication should certainly be kept.
The data generated will serve as a basis for future scientific research work
and projects. For the consortium, it is clear that foreseeable research uses
of the data include, for instance, performance comparisons, in ADMONT
particularly with future systems and other hardware and software. Furthermore,
the data may even define the starting point for new standards and provide
benchmarks for research.
Regarding the retention and preservation of the data, ADMONT partners will
retain and/or preserve the produced data for several years, three years at
least.
As to the location of the storage, the ADMONT partners prefer to hold data in
internal repositories and/or servers. The data can further be held in
marketing repositories. Another option indicated by the partners is storage on
public or institutional websites. Furthermore, it has been suggested to
establish a commodity cloud by using internal cloud infrastructure or,
depending on the confidentiality, an external platform.
For ADMONT, costs for data storage and archiving will occur, in particular for
server provision (infrastructure) and maintenance. Technikon has already
foreseen this in the project budget. At a later stage of the project, it can
be better assessed whether further costs for data storage will occur. These
costs will then be covered by the partners with their own resources.
# Chapter 3 “Ownership, rights and dissemination of results”
Article 8 in the ADMONT Consortium Agreement describes the rules and policy
for ownership, rights, transfer and dissemination of results and article 9
describes access rights for use and exploitation of results, including
specific provisions for access rights to software.
### Copy from ADMONT Consortium Agreement
Even though IPR issues mainly arise during the project lifetime, or even after
the project end, due to the dissemination (scientific and non-scientific
publications, conferences, etc.) and exploitation (licensing, spin-offs, etc.)
of project results, the ADMONT consortium considered the handling of IPR right
from the very beginning, already during the project planning phase. Therefore,
a Consortium Agreement (CA) clearly states the background, foreground and
sideground of each partner and defines rules regarding patents, copyrights,
(un-)registered designs and other similar or equivalent forms of statutory
protection.
Within the ADMONT project, most data will be generated in internal processes
at partner level through measurement analysis. Close cooperation within the
consortium may lead to jointly generated data, whose IPR aspects are clearly
handled in the CA.
No third-party data is reused in the current project phase. In case
third-party data is reused, confidentiality restrictions might apply in
specific cases, which will be analyzed in detail case by case.
No time lag or restriction for publication of results is planned. Publishable
data will be posted and published in due course.
**Section 8: Results**
For the application of the present article and for clarification purposes
regarding this Agreement as such, a third party with a legal link to a
beneficiary (e.g. in case of Joint Research Units) is considered as a third
party with the related rights and obligations according to the Grant
Agreement. It does not have the same rights according to this Consortium
Agreement as a Beneficiary who is Party to this Consortium Agreement.
**8.1 Ownership of Results**
Results are owned by the Party that generates them.
#### 8.2 Joint ownership
Where Results are generated from work carried out jointly by two or more
Parties and it **is not** possible to separate such joint invention, design or
work for the purpose of applying for, obtaining and/or maintaining the
relevant patent protection or any other intellectual property right, the
Parties shall have joint ownership of this work. The joint owners shall,
within a six (6) month period as from the date of the generation of such
Results, establish a written separate joint ownership agreement regarding the
allocation of ownership and terms of exercising, protecting, the division of
related costs and exploiting such jointly owned Results on a case by case
basis. However, until the time a joint ownership agreement has been concluded
and as long as such rights are in force, such Results shall be jointly owned
in shares according to their share of contribution (such share to be
determined by taking into account in particular, but not limited to, the
contribution of a joint owner to an inventive step, the person months or costs
spent on the respective work etc.) to the Results by the joint owners
concerned.
Unless otherwise agreed:

* each of the joint owners shall be entitled to use the jointly owned Results for non-commercial research activities on a royalty-free basis, and without requiring the prior consent of the other joint owner(s); and
* each of the joint owners shall be entitled to otherwise Exploit the jointly owned Results and to grant non-exclusive licenses to third parties (without any right to sub-license), if the other joint owners are given:
  * at least forty-five (45) calendar days advance notice; and
  * Fair and Reasonable compensation.
The joint owners shall agree on all protection measures and the division of
related cost in advance.
#### 8.3 Transfer of Results
8.3.1 Each Party may transfer ownership of its own Results following the
procedures of the Grant Agreement Article 30.
8.3.2 It may identify specific third parties it intends to transfer the
ownership of its Results to in Attachment (3) to this Consortium Agreement.
The other Parties hereby waive their right to prior notice and their right to
object to a transfer to listed third parties according to the Grant Agreement
Article 30.1.
8.3.3 The transferring Party shall, however, at the time of the transfer,
inform the other Parties of such transfer and shall ensure that the rights of
the other Parties will not be affected by such transfer.
Any addition to Attachment (3) after signature of this Agreement requires a
decision of the Governing Council.
8.3.4 The Parties recognize that in the framework of a merger or an
acquisition of an important part of its assets, it may be impossible under
applicable EU and national laws on mergers and acquisitions for a Party to
give the full 45 calendar days prior notice for the transfer as foreseen in
the Grant Agreement.
8.3.5 The obligations above apply only for as long as other Parties still have
- or still may request- Access Rights to the Results.
#### 8.4 Dissemination
8.4.1 Dissemination of own Results
8.4.1.1 During the Project and for a period of 1 year after the end of the
Project, the dissemination of own Results by one or several Parties including
but not restricted to publications and presentations, shall be governed by the
procedure of Article 29.1 of the Grant Agreement subject to the following
provisions.
Prior notice of any planned publication shall be given to the other Parties at
least 45 calendar days before the publication. Any objection to the planned
publication shall be made in accordance with the Grant Agreement in writing to
the Coordinator and to the Party or Parties proposing the dissemination within
30 calendar days after receipt of the notice. If no objection is made within
the time limit stated above, the publication is permitted.
8.4.1.2 An objection is justified if
1. the protection of the objecting Party's Results or Background would be adversely affected; or
2. the objecting Party's legitimate academic or commercial interests in relation to the Results or Background would be significantly harmed.
The objection has to include a precise request for necessary modifications.
8.4.1.3 If an objection has been raised the involved Parties shall discuss how
to overcome the justified grounds for the objection on a timely basis (for
example by amendment to the planned publication and/or by protecting
information before publication) and the objecting Party shall not unreasonably
continue the opposition if appropriate measures are taken following the
discussion.
The objecting Party can request a publication delay of not more than 90
calendar days from the time it raises such an objection. After 90 calendar
days the publication is permitted, provided that Confidential Information of
the objecting Party has been removed from the Publication as indicated by the
objecting Party.
8.4.2 Dissemination of another Party’s unpublished Results or Background
A Party shall not include in any dissemination activity another Party's
Results or Background without obtaining the owning Party's prior written
approval, unless they are already published.
8.4.3 Cooperation obligations
The Parties undertake to cooperate to allow the timely submission,
examination, publication and defence of any dissertation or thesis for a
degree which includes their Results or Background subject to the
confidentiality and publication provisions agreed in this Consortium
Agreement.
8.4.4 Use of names, logos or trademarks
Nothing in this Consortium Agreement shall be construed as conferring rights
to use in advertising, publicity or otherwise the name of the Parties or any
of their logos or trademarks without their prior written approval.
# Chapter 4 “Summary and conclusion”
This data management plan outlines the handling of data generated within the
ADMONT project, during and after the project lifetime. This document will be
kept as a living document and regularly updated by the consortium. The
partners have put into writing their plans and guarded expectations regarding
valuable and publishable data. The DMP is based on Article 29 of the GA and
Article 8 of the Consortium Agreement, and follows the Horizon 2020 data
management guidelines of 16 December 2013.
Article 29 of our GA, “Dissemination of Results, Open Access and Visibility of
Support”, covers the obligation to disseminate results (Article 29.1) and open
access to scientific publications (Article 29.2). Article 29.3, “Open access
to research data”, is not applicable to ADMONT.
The ADMONT consortium is aware of proper data documentation requirements and
will rely on each partner's competence in appropriate citation etc. The
Consortium Agreement (CA) forms the legal basis for dealing with IPR issues
and contains clear rules for the dissemination and exploitation of project
data. Besides the ADMONT public website, which targets a broad interest group,
marketing flyers and the SVN repository will also be used as tools to provide
data.
With regards to the retention and preservation of the data, ADMONT partners
will retain and/or preserve the produced data for several years, three years
at least.
**The ADMONT consortium is convinced that this data management plan ensures
that project data will be made available for further use in a timely manner
and in adequate form, taking into account the IPR restrictions of the
project.**
0899_ESMERALDA_642007.md
# Preface
WP6 has as its main objective the effective promotion and dissemination of
ESMERALDA research across stakeholders and the general public. To ensure
effective communication, both external and internal, Pensoft has produced a
number of promotional tools and materials as part of the project branding.
The following report describes these tools, the process of their discussion
with the consortium (more detail available in MS30) and their approval, as
well as their current and future implementation within the project
communication strategy.
# Summary
As a foundation for future effective communication activities, it is crucial
that a sound set of working dissemination tools and materials is established
within the first months of the project. Accordingly, a project logo and a web
platform comprising an external website and an Internal Communication Platform
(ICP) were developed in the first 3 months to form the backbone of both
project-internal communication and public visibility.
In addition, various dissemination materials, such as an ESMERALDA brochure
and a poster, were produced in high-quality print versions for raising
awareness at events. The material has also been uploaded to the Media Centre
of the website, to be available to anyone interested.
Templates were also produced and uploaded to the ICP, available to the
consortium to facilitate future dissemination and reporting activities such as
letters, milestone and deliverable reports, PowerPoint presentations, policy
briefs, etc.
Accounts have also been set up on 4 social media channels (Twitter, Facebook,
Google+, and LinkedIn) to ensure the widest possible impact and outreach of
ESMERALDA-related results, news and events, and to engage the interested
parties in a virtual community.
The longer term impact of the project's results will be secured by maintaining
the website for a minimum of 5 years after the closure of the project.
## 1\. Project branding and promotional materials
### 1.1. Project logo
Several versions of the logo were designed by Pensoft to reflect a concept
developed by the project coordinator and his team, and were subsequently
passed on for online discussion to the project’s Executive Board and the
broader consortium before final approval (Fig. 1). The logo is designed to
help the external audience easily identify ESMERALDA and contributes to the
project visibility by providing a corporate identity from the very beginning
of the project.
**Figure 1: Current ESMERALDA project logo (above), including previous
suggestions (below).**
### 1.2. Project sticker
The ESMERALDA logo was used to create a promotional sticker, distributed for
the first time to project partners at the Kick‐off meeting in order to
increase visibility of the project and to promote it in the community (Fig.
2).
**Figure 2: ESMERALDA laptop sticker**
### 1.3. ESMERALDA brochure
The ESMERALDA brochure is designed to capture the attention of the different
target groups and increase awareness of the project. It explains the rationale
behind the project: its objectives, the activities and main tasks planned, as
well as the expected results (Fig. 3). The brochure was created to reflect the
conceptual design of the project logo and website, and was the subject of
multiple online and in-person discussions and improvements together with the
project consortium.
**Figure 3: ESMERALDA project brochure.**
### 1.4. ESMERALDA poster
The ESMERALDA poster was produced at the beginning of the project with an
eye-catching design to introduce the project at conferences and meetings. The
poster reflects the main ESMERALDA design concept, keeping the project
branding consistent and making the project easily recognizable (Fig. 4). The
poster was the subject of an online discussion with the consortium.
**Figure 4: ESMERALDA project poster.**
### 1.5. Project corporate identity templates
ESMERALDA corporate identity templates were designed at the very beginning of
the project implementation. These include:
* Milestone reports
* Deliverable reports
* Policy and technical briefs
* Power point presentation
* Meeting agenda and minutes
* Letterhead template for official project letters
Each template is specifically tailored to the information the document is
required to contain. The templates incorporate several important elements in
common:

* the ESMERALDA project logo
* an outline of the information necessary to be included in the specific document
All templates are available through the Internal Online Library in the ICP and
easy to access and use for all partners.
## 2\. ESMERALDA Content Management System (CMS)
The ESMERALDA website platform has been created to serve as a Project Content
Management System (CMS) on two levels: (i) internal communication within the
consortium and (ii) external communication and dissemination of the project
objectives and results. The two main components developed by Pensoft are a
public website (www.esmeralda-project.eu) and the Internal Communication
Platform (ICP), accessible only by authorised users and designed specifically
to facilitate communication within the consortium.
### 2.1. ESMERALDA external website
The ESMERALDA public website (Fig. 5) was developed by the Pensoft team in
close cooperation with the coordination team. It is designed to act as an
information hub about the project’s aims, goals, activities and results. The
website serves as the prime public dissemination tool, making available the
project deliverables and published materials. The events organized by
ESMERALDA, or of relevance to the project, are also announced through the
website.
The website comprises separate information pages with project background
information, news, events, products, publications, contact details, etc. It is
regularly updated to keep the audience informed and to ensure the continued
interest of already attracted visitors. The website's main pages are:
▪ Homepage featuring:
* Highlights: 3 recent news stories of relevance
* Live Tweet feed
* Member login area
* Feedback, RSS and Newsletter subscription forms
▪ The project: introducing the rationale and aims of the project
* Main outcomes: introducing the project objectives and expected results
* Work Packages: Introducing the WPs and their focus of involvement in the project
* Partners: presenting the different project partners
* Online library: dedicated to all ESMERALDA deliverables and other documents of interest
* News: introducing the project news other news of relevance
* Events: specific section to display the upcoming project events and other events of relevance
* Media Center: a place where all outreach materials are made available and can be freely downloaded
* Partner posters
* Posters
* Brochures
* Press releases
* Logo
* Newsletter
* Links: URL links to websites of interest and useful materials
* Contacts: listing the coordination team with their contact details
The website also provides direct links to the ESMERALDA social network
profiles on Facebook, Twitter, Google+ and LinkedIn.
RSS feed links enable visitors to subscribe and receive project news, project
event announcements and project results releases directly in their mailbox.
## 3\. ESMERALDA Internal Communication Platform (ICP)
The ICP of ESMERALDA was developed by the Pensoft IT team to serve as a
communication hub and content management system of the ESMERALDA consortium.
A login button allows easy access to the restricted area for all registered
users. The ICP serves for exchange of various types of information such as:
documents related to the project management, datasets, results, coordination
decisions, timetables, presentations, and materials, and for reporting among
partners.
The ICP provides convenient and appropriate mechanisms to facilitate the free
flow of all sorts of information. At a glance, it has the following main
features:
* **Mailing module** : Users can send emails to one or more project participants after logging in the system. Users are assigned to one or more mailing groups depending on their role in the project. Collective emails can be sent to various selections of one or more mailing groups and individual users. All emails are archived.
* **Internal Online Library** : all registered users can upload files to the internal library, where all internal documents related to the activities of the project are stored. Files placed in the Internal Online Library can be used only by project members and are inaccessible to external visitors of the website.
* **Users** : this section contains the profiles of all project members that are granted access to the ICP, with their portrait photo, the affiliation, contact details and additional information.
* **Internal events:** a regularly updated time schedule for the work within the different work packages is placed on a prominent location of the Intranet pages. It contains information on the events (deliverables and milestones) to be delivered during the whole project lifetime - type and title of event, due date, description, participants and contact information.
* **Calendar:** the purpose of this section is to enable the visitors to easily spot and access the latest project information.
* Upload of **News** , **Events** and documents for the **external Online Library**
* **Dissemination Report Forms** – designed to facilitate the reporting of the ESMERALDA dissemination activities and make the intermediate results progressively available.
### 3.1. Log in
All project members will be registered in the ICP of ESMERALDA and provided
with a username and password. New members can be registered by the system
administrators upon request from the team leaders, WP leaders or the
Coordination team (Fig. 6).
**Figure 6: ESMERALDA Log in, located in the upper right corner**
### 3.2. Mailing Module
Users can send emails to one or more project participants after logging into
the system. There is a list of all participants arranged alphabetically.
Recipients can be easily selected by ticking the box next to their names.
Mailing groups have been created for each work package, as well as for the
case studies, WP Leaders, etc. (Fig. 7).
**Figure 7: ESMERALDA mailing groups.**
### 3.3. Upload of files, news and events
There are two types of libraries storing the documents resulting from the
project activities: (1) an internal library, visible only to consortium
members after login; and (2) an external library, accessible to anyone
visiting the website. To see all internal documents, click on the Library
button.
#### 3.3.1. Internal Document Library
All internal documents are stored in the Internal Document Library; the
resulting view is shown in Fig. 8.
The Internal Library is reserved for documents with restricted access,
intended only for consortium members (for example administrative documents,
documents related to the project implementation, various sorts of documents
from project meetings, deliverables intended only for internal use,
presentations, etc.). There are no limitations on the common file formats for
upload. Every user can upload files to the internal library.
#### 3.3.2. External Document Library
Publications (project-derived scientific publications and publications that
are not project-derived but of interest to the ESMERALDA participants) and
other information open to the public (deliverables with public access) can be
uploaded to the Online Library section of the website. This can be done by
pressing the “ADD EXTERNAL DOCUMENT” button. For more information on how to
upload files to the External Document Library, see the ICP guidelines prepared
by Pensoft.
#### 3.3.3. News
All project members are encouraged to post information that would be of
interest to the general public and the consortium in particular. This could
include article alerts, forthcoming meetings and other activities relevant to
ESMERALDA. Users can attach up to 3 files and an image. Outdated news can be
deleted by the person who uploaded it or by the administrator of the website.
All posted news goes automatically to the Facebook and Twitter profiles of
ESMERALDA (and to their followers) and to all RSS feed subscribers. For more
information on how to upload news, see the ICP guidelines prepared by Pensoft.
#### 3.3.4. Events and Calendar
Information about forthcoming meetings, workshops, seminars, training courses,
etc. can be posted on the website by clicking on the ADD EVENT button. All
project participants are encouraged to submit information on meetings or other
events related to the project. It is also possible to attach documents (venue
location, agenda, list of participants, etc.). This information will become
visible on the project homepage.
#### 3.3.5. Internal events
The Internal Events module helps keep track of every main activity in the
project by providing the following concise information: title, due date,
nature, description, participants and contact information (responsible person
and email address). For more information on how to upload internal events, see
the ICP guidelines prepared by Pensoft.
### 3.4. Dissemination report forms
With the aim of facilitating the reporting of the ESMERALDA dissemination
activities and making the intermediate results progressively available, three
online dissemination report forms were created and made available in the ICP
(left menu) (Fig. 9):
* **Symposia & meetings** – for any scientific event where an ESMERALDA presentation is given;
* **General dissemination** – for publications other than the scientific ones (e.g. publications in newspapers, magazines, web publications, etc.), TV and radio broadcasts, various outreach
materials, press releases, policy briefs, PhD and master theses, etc.;
* **Scientific publications** – for reporting of ESMERALDA derived scientific publications.
**Figure 9: Symposia & Meeting form **
## 4\. ESMERALDA Social Media Accounts
To increase the project visibility and to promote ESMERALDA-related news and
results, Pensoft has also created accounts on 4 major social networks, namely
Facebook, Twitter, Google+ and LinkedIn (Figs. 10, 11, 12, 13). The ESMERALDA
accounts have been created to reflect the general project branding in an
engaging and interactive way. Each account targets a different group of users,
reflecting the specificities of the network itself.
The ESMERALDA social media groups are fully operational and in the process of
gaining popularity and member participation. All news and events are posted
through RSS feeds to the Twitter and Facebook accounts, while posts and
discussions are specifically tailored for Google+ and LinkedIn.
Buttons displayed on the project homepage link directly to the relevant social
networks.
### 4.1. Twitter
Twitter provides short, fast and easy communication. This social network is
popular and has a high number of users. Twitter is increasingly used
professionally as a means of fast communication of organization-specific news
and events.
**Figure 10: Screenshot of ESMERALDA twitter account**
### 4.2. Facebook
Facebook remains one of the most popular social networks, despite being less
often used for professional purposes. Facebook has the advantage of providing
a community-like space, where news, links, photos and videos are easily
shared.
**Figure 11: ESMERALDA Facebook page**
### 4.3. Google +
Although still comparatively small in size, Google+ is a growing network,
which displays growing popularity in technical fields. Among the advantages of
Google+ are its ease and convenience in sharing media, as well as its
resemblance to a blog space, though with limited capabilities.
**Figure 12: ESMERALDA Google+ account**
### 4.4. LinkedIn
LinkedIn provides a predominantly professional network, creating potential for
networking across ESMERALDA members. LinkedIn provides an opportunity to start
and participate in professional and fruitful group discussions on important
ESMERALDA-related topics.
**Figure 13: ESMERALDA LinkedIn account**
## 5\. Data Management Plan
The Data Management Plan (DMP) is designed to describe the data life cycle
throughout the project and to regulate management policies for standalone
datasets created during the project, and for data that underpin scientific
articles resulting from the project.
To the maximum possible extent, important datasets will be deposited in
internationally recognized repositories, with all rights to access, mine,
exploit, reproduce and disseminate them free of charge for any user. Although
data are often copyright-free, in some cases they can be protected if they
contain sensitive information; in such cases, justification for not making the
data public will be provided. All public data will have extended metadata
descriptions to facilitate discoverability, access and re-use.
Usage rights will be an important part of the metadata. Whenever possible,
ESMERALDA will aim at publishing data under a public domain dedication (CC0).
The deposited data will be structured in compliance with community-agreed,
domain-specific data standards (when available) to ensure interoperability and
re-use beyond the original purpose for which they were created. Information on
the tools and instruments needed to use the data or to reproduce and validate
results produced from them will be provided via the repository.
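The snippet below sketches what such a metadata record might look like. The field names loosely follow the DataCite schema, and all values (DOI, title, keywords, year) are invented placeholders rather than actual ESMERALDA records.

```python
# A minimal, hypothetical metadata record for a deposited ESMERALDA dataset.
# Field names loosely follow the DataCite schema; all values are placeholders.
dataset_metadata = {
    "identifier": "10.5281/zenodo.0000000",  # placeholder DOI
    "title": "Example ecosystem services mapping dataset",
    "creators": ["ESMERALDA consortium partner"],
    "publicationYear": 2015,
    "rights": "CC0 1.0 Universal (public domain dedication)",
    "rightsURI": "https://creativecommons.org/publicdomain/zero/1.0/",
    "subjects": ["ecosystem services", "mapping", "assessment"],
    "descriptions": ["Extended description supporting discoverability, "
                     "access and re-use beyond the original purpose."],
    "tools": ["Notes on the tools and instruments needed to reproduce "
              "and validate results derived from the data."],
}
```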
To secure long-term digital preservation, ESMERALDA will encourage all
partners to use the guidelines of the EU infrastructure OpenAIRE and to link
to global initiatives in data archiving, such as the Dryad Digital Repository,
Pangaea and others. ESMERALDA will benefit from the existing novel workflows
of Pensoft’s peer-reviewed open access journals Biodiversity Data Journal,
Nature Conservation, and Research Ideas and Outcomes (RIO) for publishing
important datasets in the form of “data papers”. Data papers are a novel
instrument that provides a scientific record and a citable publication for
data creators, as well as motivating experts to engage with data curation
after the expiration of the project.
Data sharing and interoperability of ESMERALDA outputs with various
established EU platforms, such as OPPLA (the OpenNESS/OPERAs common platform),
BISE and the ESP visualization tool, will be ensured. A series of meetings
(M06, M12, M24 and M30) under MS31 will provide the necessary links with
stakeholders and ensure the transferability of project results via these
platforms.
0900_DISIRE_636834.md
# 1 Introduction
## 1.1 Summary
As part of Horizon 2020, the DISIRE project participates in a pilot action on
open research data. The aim is to provide indications as to what kind of data
the project will collect, how the data will be preserved, and which sharing
policies will be adopted to make these data readily available to the research
community.
## 1.2 Purpose of document
This Data Management Plan (DMP) details what kind of research data will be
created during the project's lifespan and prescribes how these data will be
made available - and thus re-usable and verifiable - to the larger research
community. The project's efforts in the area of open research data are
outlined, giving particular attention to the following issues:
* The types of open and non-open data that will be generated or collected by the consortium, via experimental campaigns and research, during the project's lifespan;
* The technologies and infrastructures that will be used to securely preserve the data long-term;
* The standards used to encode the data;
* The data exploitation plans;
* The sharing/access policies applied to each data-set.
The plan can be considered as a checklist for the future and as a reference
for the resource and budget allocations related to data management.
## 1.3 Methodology
The content of this document builds upon the input of the project's industrial
partners and all the peers of work packages 5, 6, 7 and 8. A short
questionnaire, outlining the DMP's objectives and requesting the required
information in a structured manner, was prepared by LTU and disseminated to
the partners. The compiled answers have been integrated into a coherent plan.
The present DMP will evolve as the project progresses, in accordance with the
project's efforts in this area. At any time, the DMP will reflect the current
state of the consortium's agreements regarding data management, exploitation
and protection of rights and results.
## 1.4 Outline
For each partner involved in the collection or generation of research data, a
short technical description is given, stating the context in which the data
has been created. The different data sets are identified by project-wide
unique identifiers and categorized through additional metadata such as, for
example, the sharing policy attached to them.
The considered storage facilities are outlined and tutorials are provided for
their use (submitting and retrieving the research data). A further appendix
lists the format standards that will be used to encode the data and provides
references to technical descriptions of these formats.
## 1.5 Partners involved
**Partners and Contribution**
<table>
<tr>
<th>
**Short Name**
</th>
<th>
**Contribution**
</th> </tr>
<tr>
<td>
LTU
</td>
<td>
Coordinating and integrating inputs from partners
</td> </tr> </table>
# 2 Data sharing, access and preservation
The digital data created by the project will be diversely curated depending on
the sharing policies attached to it. For both open and non-open data, the aim
is to preserve the data and make it readily available to the interested
parties for the whole duration of the project and beyond.
## 2.1 Non-Open research data
The non-open research data will be archived and stored long-term in the
REDMINE portal administered by LTU. The REDMINE platform is currently being
employed to coordinate the project's activities and to store all the digital
material connected to DISIRE.
## 2.2 Open research data
The open research data will be archived on the Zenodo platform
( _http://www.zenodo.org_ ). Zenodo is an EU-backed portal built around the
Digital Object Identifier (DOI) system ( _http://www.doi.org_ ) and integrated
with the well-established GIT version control system ( _https://git-scm.com_ ).
The portal's aims are inspired by the same principles that the EU sets for the
pilot; Zenodo thus represents a very suitable and natural choice in this
context.
The repository services offered by Zenodo are free of charge and enable peers
to share and preserve research data and other research outputs of any size and
format: datasets, images, presentations, publications and software. The
digital data and the associated metadata are preserved through
well-established practices such as mirroring and periodic backups.
Each uploaded data-set is assigned a unique DOI, rendering each submission
uniquely identifiable and thus traceable and referenceable.
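As an illustration of how a data-set might be deposited programmatically, the following Python sketch uses Zenodo's REST deposition API to create a deposition, upload a file and attach minimal metadata. The access token, file name and metadata values are placeholders, and the endpoint details should be checked against Zenodo's current API documentation.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "YOUR-ZENODO-ACCESS-TOKEN"  # placeholder personal access token

# 1. Create an empty deposition.
deposition = requests.post(
    ZENODO_API, params={"access_token": TOKEN}, json={}
).json()

# 2. Upload the data file into the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("disire_dataset_example.csv", "rb") as fp:  # placeholder file
    requests.put(
        bucket_url + "/disire_dataset_example.csv",
        data=fp,
        params={"access_token": TOKEN},
    )

# 3. Attach minimal metadata; a DOI is assigned when the deposition is
#    published (via a POST to the deposition's 'publish' action).
metadata = {
    "metadata": {
        "title": "DISIRE example data-set",  # placeholder title
        "upload_type": "dataset",
        "description": "Example open research data-set from DISIRE.",
        "creators": [{"name": "DISIRE consortium partner"}],
    }
}
requests.put(
    "%s/%s" % (ZENODO_API, deposition["id"]),
    params={"access_token": TOKEN},
    json=metadata,
)
```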
# 3 List of the data-sets
This section lists the data-sets produced within the DISIRE project.
0901_LAW-TRAIN_653587.md
# Executive summary
This document discusses the methodologies and procedures that will be followed
for sharing the research data collected by the LAW-TRAIN partners during the
execution of the project. It provides information regarding the type of data
that will be generated, the standards used, how this data will be made
accessible for verification and re-use, and how it will be curated and
preserved.
It is important to mention that the LAW-TRAIN project involves several law-
enforcement forces and that given the nature of the project, some of the
research data generated will be confidential and therefore not shared with the
public.
This deliverable enables adherence to the Guidelines on Data Management in
Horizon 2020 provided by the European Commission.
# Introduction
The LAW-TRAIN project is a multi-national research collaboration funded by the
European Commission. The focus of this project is to develop a simulation for
training law enforcement personnel in joint interrogations, which will be
accessed from various places and used in multiple languages. Since this
project has a specific target group, namely law enforcement personnel who need
training in joint interrogations, end-users have been integrated from the
beginning. These end-users have been made partners of the project and will
provide insights on their training routine.
LAW-TRAIN will be developed over a period of three years. Hence, different
work packages have been created in the proposal. This document focuses on a
deliverable from WP2 – User Requirements and Specifications/Structure and API.
The objective of this WP is to create a framework through user and technical
requirements, a technical structure, an ethical manual and a data management
plan for the entire project and all following work packages, deliverables and
tasks. Therefore, the developed and established settings of WP2 have to be
considered and evaluated carefully.
## Aim of this deliverable
This deliverable aims to detail all the information regarding the recovery,
generation, treatment and publication of research data obtained by the
LAW-TRAIN partners throughout the project execution, and its curation during
and after the project.
In combination with this deliverable, an online version of the Data Management
Plan (DMP), accessible only by project partners, will be generated in the
DMPONLINE portal developed by the Digital Curation Centre (DCC). The DCC is a
world-leading centre of expertise in digital information curation, with a
focus on building capacity, capability and skills for research data management
across the UK's higher education research community.
## Document updates
This document is considered a **“live” document**, as it will be updated
constantly to reflect the inclusion of newly generated data sets. The DMP will
be updated at least by the mid-term and final reviews of the project, in order
to fine-tune it to the data generated and the uses identified by the
consortium, since not all data or potential uses are clear from the start.
## Tasks in this deliverable
The DMP covers all the research material generated during the whole project
execution. Nevertheless, within the work plan of the LAW-TRAIN project, a
specific task has been foreseen to consolidate the efforts dedicated to
curating and sharing the generated research data. **Task 2.5 Preparation of
Ethical Guidelines and Data Management Plan** (led by USECON, with IDENER as
the partner responsible for the DMP and the rest of the partners as
contributors) covers the following specific issues:
* Ethical guidelines and procedures that all consortium members need to adhere to during the entire research, development and testing of the LAW-TRAIN system. These ethical guidelines focus on the related consent and confidentiality procedures for the end-users of the LAW-TRAIN system, as well as the protection of any data collected. They have been published in D2.3.
* A Data Management Plan (DMP) describing the data management life cycle for all datasets that will be collected, processed or generated by the research project. This document outlines how research data will be handled during LAW-TRAIN and after its completion, describing what data will be collected, processed or generated, following what methodology and standards, whether and how this data will be shared and/or made open, and how it will be curated and preserved. IDENER will be responsible for the creation and maintenance of the DMP.
## Scope of this deliverable
Deliverable 2.4 – Data Management Plan includes the initial version of the DMP
for the LAW-TRAIN project. This document will be updated constantly to include
the required updates regarding the information being collected.
The scope of this deliverable is twofold: on the one hand, it aims to
establish specific rules and methodologies that will be followed by all
partners when generating research data; on the other hand, it provides
specific information about the generated research data, including naming and
references, data set descriptions, lists of standards and metadata used, and
data sharing policies.
## Structure of the deliverable
**Section 1** provides an introduction to the project, the deliverable and the
tasks within the deliverable. It further provides the cooperating partners
with detailed information concerning the structure of the deliverable.
**Section 2** describes the Data Management Plan designed for the LAW-TRAIN
project. It begins with some general information about the DMP and some
guidelines, followed by datasheets describing each produced research dataset.
**Section 3** provides guidelines on how to handle the research data during
the project and after it. **Section 4** includes information about how the
archiving and preservation of the research data will be carried out.
**Section 5** summarises sections 1 to 4 by mentioning the most important
issues; it aims to serve as a quick reference sheet for project partners to
follow the rules established for data generation, treatment and curation.
**Section 6** provides references to the documents used for the generation of
the current deliverable.
## Project partners
As mentioned previously, LAW-TRAIN is a project with multi-national partners
funded and supported by the European Commission. In the three-year course of
the project, ten partners will participate and contribute to creating the
training system simulation. Six of these ten partners are responsible for the
realization by contributing expert knowledge and methods, while the remaining
four partners provide insights as LAW-TRAIN's end users.
The following parties participate as partners in LAW-TRAIN:
<table>
<tr>
<th>
**Short Name**
</th>
<th>
**Partner**
</th>
**Country**
</th>
<th>
**Position**
</th> </tr>
<tr>
<td>
**BIU**
</td>
<td>
Bar-Ilan University
</td>
<td>
Israel
</td>
<td>
Partner
</td> </tr>
<tr>
<td>
**KU**
</td>
<td>
KU Leuven (University of Leuven)
</td>
<td>
Belgium
</td>
<td>
Partner
</td> </tr>
<tr>
<td>
**INESC ID**
</td>
<td>
Instituto de Engenharia de Sistemas e Computadores, Investigação e
Desenvolvimento em Lisboa
</td>
<td>
Portugal
</td>
<td>
Partner
</td> </tr>
<tr>
<td>
**IDENER**
</td>
<td>
Optimización orientada a la sostenibilidad S.L.
</td>
<td>
Spain
</td>
<td>
Partner
</td> </tr>
<tr>
<td>
**USECON**
</td>
<td>
USECON: the Usability Consultants
</td>
<td>
Austria
</td>
<td>
Partner
</td> </tr>
<tr>
<td>
**COMPEDIA**
</td>
<td>
Compedia
</td>
<td>
Israel
</td>
<td>
Partner
</td> </tr>
<tr>
<td>
**MINTERIOR**
</td>
<td>
Guardia Civil
</td>
<td>
Spain
</td>
<td>
Partner
(End-User)
</td> </tr>
<tr>
<td>
**MINPUBSEC**
</td>
<td>
Ministry of Public Security / Israel National Police
</td>
<td>
Israel
</td>
<td>
Partner
(End-User)
</td> </tr>
<tr>
<td>
**SPFJ**
</td>
<td>
Le Service Public Federal Justice
</td>
<td>
Belgium
</td>
<td>
Partner
(End-User)
</td> </tr>
<tr>
<td>
**MJ-PJ**
</td>
<td>
Ministério da Justiça - Polícia Judiciária
</td>
<td>
Portugal
</td>
<td>
Partner
(End-User)
</td> </tr> </table>
_Table 1: LAW-TRAIN project's partners_
## Open Access
The LAW-TRAIN project has been funded under the Horizon 2020 framework's topic
FCT-07-2014: Law enforcement capabilities topic 3: Pan-European platform for
serious gaming and training. This topic is not part of the Open Data Pilot,
especially since many of its deliverables have been marked as Confidential by
the security review and by the Consortium. Nevertheless, the programme
specifies the following condition:
Open access must be granted to all scientific publications resulting from
Horizon 2020 actions, and proposals must refer to measures envisaged. Where
relevant, proposals should also provide information on how the participants
will manage the research data generated and/or collected during the project,
such as details on what types of data the project will generate, whether and
how this data will be exploited or made accessible for verification and re-
use, and how it will be curated and preserved.
Open access (EC, Open Access pilot, 2013) can be defined as the practice of
providing on-line access to scientific information that is free of charge to
the end-user and that is re-usable. In the context of research and innovation,
'scientific information' can refer to (i) peer-reviewed scientific research
articles (published in scholarly journals) or (ii) research data (data
underlying publications, curated data and/or raw data).
1. Open access to scientific publications refers to free of charge online access for any user. Legally binding definitions of “open access” and “access” in this context do not exist, but authoritative definitions of open access can be found in key political declarations on this subject. These definitions describe 'access' in the context of open access as including not only basic elements such as the right to read, download and print, but also the right to copy, distribute, search, link, crawl, and mine.
2. Open access to research data refers to the right to access and re-use digital research data under the terms and conditions set out in the Grant Agreement. Openly accessible research data can typically be accessed, mined, exploited, reproduced and disseminated free of charge for the user.
There are two main routes towards open access to publications:
1. Self-archiving (also referred to as 'green' open access) means that the published article or the final peer-reviewed manuscript is archived (deposited) by the author - or a representative - in an online repository before, alongside or after its publication. Repository software usually allows authors to delay access to the article (‘embargo period’).
2. Open access publishing (also referred to as 'gold' open access) means that an article is immediately provided in open access mode as published. In this model, the payment of publication costs is shifted away from readers paying via subscriptions. The business model most often encountered is based on one-off payments by authors. These costs (often referred to as Article Processing Charges, APCs) can usually be borne by the university or research institute to which the researcher is affiliated, or by the funding agency supporting the research. In other cases, the costs of open access publishing are covered by subsidies or other funding models.
In the context of research funding, Open Access (OA) requirements in no way
imply an obligation to publish results. The decision on whether or not to
publish lies entirely with the funded organisations. Open access becomes an
issue only if publication is elected as a means of dissemination. Moreover, OA
does not interfere with the decision to exploit research results commercially,
e.g. through patenting. Indeed, the decision on whether to publish open access
must come after the more general decision on whether to publish directly or to
first seek protection. This is illustrated in the graphic representation of
open access to scientific publication and research data in the wider context
of dissemination and exploitation at the end of this section.
_Figure 1: Open Access description_
# Data Management Plan
The following figure represents the life cycle that will be followed in the
LAW-TRAIN project for the management of the research data that will be
generated during the project.
_Figure 2: Data Management Plan life-cycle_
The project is currently in the Data Plan stage, in which the project partners
analyse what information will be generated and under what conditions it can be
shared. In the meantime, the design of the public research repository that
IDENER will put at the project's disposal is being carried out.
## Global guidelines
This section sketches the global methodology to be used by project partners
for handling research data.
### Data format
On the website _http://5stardata.info_ , Tim Berners-Lee (the inventor of the
Web and initiator of Linked Data) suggests a five-star deployment scheme for
open data:
<table>
<tr>
<th>
★
</th>
<th>
make your stuff available on the Web (whatever format) under an open licence
</th> </tr>
<tr>
<td>
★★
</td>
<td>
make it available as structured data (e.g. Excel instead of a scan of a table)
</td> </tr>
<tr>
<td>
★★★
</td>
<td>
use non-proprietary formats (e.g. CSV instead of Excel)
</td> </tr>
<tr>
<td>
★★★★
</td>
<td>
use URIs to denote things, so that people can point at your stuff
</td> </tr>
<tr>
<td>
★★★★★
</td>
<td>
link your data to other data to provide context
</td> </tr> </table>
In the LAW-TRAIN project, partners are encouraged to observe the rules above
and to try (as far as possible, and always considering their research needs
first) to provide their research data in a non-proprietary format, including
enough metadata to allow interpretation of the content and linking with other
data sources.
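As an illustration of these recommendations (a minimal sketch only; file names, fields and values are assumptions, not prescribed by this plan), a partner could export a data set as three-star open data, i.e. a non-proprietary CSV file accompanied by a small metadata descriptor:

```python
# Minimal sketch: research data as non-proprietary CSV plus a JSON metadata
# descriptor. All file names, fields and values are illustrative assumptions.
import csv
import json
from datetime import date

rows = [
    {"session_id": 1, "duration_min": 42.5},
    {"session_id": 2, "duration_min": 37.0},
]

# Three-star open data: structured, non-proprietary format (CSV, not Excel).
with open("LT_IDENER_RD_example.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["session_id", "duration_min"])
    writer.writeheader()
    writer.writerows(rows)

# Enough metadata to interpret the content and link it with other sources.
metadata = {
    "title": "Example measurement sessions",
    "creator": "IDENER",
    "created": date.today().isoformat(),
    "format": "text/csv",
    "description": "Duration in minutes of two example sessions.",
    "funding": "European Union Horizon 2020, LAW-TRAIN project",
}
with open("LT_IDENER_RD_example.metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```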
### Resource location
In addition, in order to locate and identify the generated information more
easily, IDENER is currently analysing the different possibilities for
assigning a Digital Object Identifier to each of these research data sets. A
digital object identifier (DOI) is a unique alphanumeric string assigned by a
registration agency (under the International DOI Foundation) to identify
content and provide a persistent link to its location on the Internet. The
publisher assigns a DOI when an article is published and made available
electronically.
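For illustration only (the DOI below belongs to the DOI Handbook and is used as a well-known placeholder; no LAW-TRAIN DOI has been assigned yet), a DOI can be resolved to its current location through the doi.org proxy:

```python
# Resolve a DOI to its current landing page via the https://doi.org/ proxy.
# Placeholder DOI only; LAW-TRAIN data sets have no DOI assigned yet.
from urllib.request import urlopen

doi = "10.1000/182"  # the DOI Handbook, used here as a well-known example
with urlopen(f"https://doi.org/{doi}") as response:
    # urlopen follows the redirect chain issued by the registration agency,
    # so geturl() reports the persistent location currently on record.
    print(f"DOI {doi} resolves to {response.geturl()}")
```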
### Publication
The LAW-TRAIN project is not part of the Open Research Data Pilot of H2020.
Nevertheless, project partners are encouraged to follow the Open Access
approach when publishing their data. In any case, partners should not publish
their results when any of the following conditions apply:
* They will exploit or protect their research.
* The publication of the research data will be in violation of the security restrictions of the project.
* The publication of the research data will be in violation of the privacy restrictions of the project.
* If the achievement of the action’s main objective would be jeopardized by making those specific parts of the research data openly accessible.
Where a partner does not publish or share their research data under one of the
above conditions, a justification explaining this should be included.
### Confidential deliverables
The following deliverables of the project are marked as confidential:
D2.1, D2.2, D3.2, D3.3, D4.2, D4.3, D4.4, D4.7, D5.1, D5.2, D6.1, D6.2 and
D7.2
### Ethics
The information stored in the project repository and, in general, all the
research conducted in the LAW-TRAIN project will comply with the Ethics
Guidelines & Procedures (Deliverable D2.3).
### Archiving and preservation
All LAW-TRAIN partners are encouraged to store their contributions and to
provide access to them. In addition to this suggestion, IDENER will provide a
research repository as described in section 5.
### Licensing
While practice varies from discipline to discipline, there is an increasing
trend towards the planned release of research data. The need for data
licensing arises directly from such releases, so the first question to ask is
why research data should be released at all.
A significant number of research funders now require that data produced in the
course of the research they fund should be made available for other
researchers to discover, examine and build upon. Opening up the data allows
for new knowledge to be discovered through comparative studies, data mining
and so on; it also allows greater scrutiny of how research conclusions have
been reached, potentially increasing research quality. Some journals are
taking a similar stance, requiring that authors deposit their supporting data
either with the journal itself or with a recognized data repository.
There are many additional reasons why releasing data can be in a researcher’s
interests. The discipline of working up data for eventual release helps in
ensuring that a full and clear record is preserved of how the conclusions were
reached from the data, protecting the researcher from potential challenges. A
culture of openness deters fraud, encourages learning from mistakes as well as
from successes, and breaks down barriers to interdisciplinary and ‘citizen
science’ research. The availability of the data, alongside associated tools
and protocols, increases the efficiency of research by reducing both data
collection costs and the possibility of duplication. It also has the potential
to increase the impact of the research, not only academically, but also
economically and socially.
In order to license their research, partners can opt for any licence they
want: prepared licences, bespoke licences, standard licences, multiple
licensing, etc. For a more detailed guide to licensing options, please see
_http://www.dcc.ac.uk/resources/how-guides/license-research-data_ .
Within the LAW-TRAIN project we recommend the use of Creative Commons
licenses.
## Data sets template
The following template will be followed by each partner to describe the
research data that they will generate and to provide information about its
nature, the metadata and standards used, and the sharing policy. Explanations
of each of the expected fields are provided here and should be replaced by the
specific content. If a field is not yet determined, TBD (to be determined)
should be entered instead. If an aspect is provisional, [*] should be included
at the beginning of the field.
**Data set information**
<table>
<tr>
<td>
**Data set reference**
</td>
<td>
Reference of the data set using the following nomenclature:
‘LT_’PartnerShortName’_’UniqueIndex’, where PartnerShortName is the one
present in Table 1 and UniqueIndex is an incremental number (starting at 1)
that will be used by each partner to number their data subsets.
</td> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Names will follow the nomenclature ‘LT_’PartnerShortName’_’Type’_’Title’,
where Type will be one of the following options, depending on the content:
‘RD’ (Research Data), ‘PM’ (Publishable Material, e.g. papers) or ‘O’
(Others), and Title will be set by the researcher to describe the data set
briefly (up to 20 words, separated by underscores (‘_’) instead of spaces).
</td> </tr>
<tr>
<td>
**Data set brief description (up to 100 words)**
</td>
<td>
Up to 100 words describing the main content of the research data as well as
the objective of such research.
</td> </tr>
<tr>
<td>
**D.O.I. (if available)**
</td>
<td>
D.O.I. provided by your organization (if available) or by IDENER, should an
agreement with a D.O.I. registration office be reached.
</td> </tr>
<tr>
<td>
**Researcher – Main organization**
</td>
<td>
Name of your organization. Please put the short name first (e.g. BIU –
Bar-Ilan University).
</td> </tr>
<tr>
<td>
**Researchers involved (include name, mail address and organization info)**
</td>
<td>
Involved researchers, the first one being the main researcher.
</td> </tr>
<tr>
<td>
**Generation date**
</td>
<td>
Date of data collection or of document generation, etc.
</td> </tr>
<tr>
<td>
**Publication date**
</td>
<td>
Date of document publication (if applicable).
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**Technical description**
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
Description of the data generated or collected, its origin (in case it is
collected), nature and scale
</td> </tr>
<tr>
<td>
**Possible applications**
</td>
<td>
Description of whom this information could be useful and how it could be used
or integrated for reuse
</td> </tr>
<tr>
<td>
**Usage in publication**
</td>
<td>
Should the results of researching this data have been published, reference
(DOI, citation) to the resulting document
</td> </tr>
<tr>
<td>
**Non previously existent data**
</td>
<td>
Analysis of whether previously existing information could be used in its
place. Should a new research data set be necessary, a justification of why.
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Format of the files (type of file) shared. The recommendations of Tim
Berners-Lee described in section 3.1.1 should be taken into account, and open
data formats should be used instead of commercial ones.
</td> </tr>
<tr>
<td>
**Standards**
</td>
<td>
CEN (European Committee for Standardisation)
_http://standards.cen.eu/index.html_ can be used to identify suitable
standards.
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
Metadata included in the research data. In general, CERIF (Common European
Research Information Format) should be used, in addition to any specific
metadata standards typical in the area of work.
Indications on adequate metadata can be found at:
_http://www.dcc.ac.uk/resources/metadata-standards_
References to the funding under H2020 should be included in the metadata.
</td> </tr>
<tr>
<td>
**Documents naming convention**
</td>
<td>
All files within a research data set should be included in a folder whose name
should match the data set reference. Inside the folder the names of the files
should provide clues about their content (as much as possible but without
interfering with a normal operation of the research activities).
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
**Data shared**
</td>
<td>
Yes or no.
</td> </tr>
<tr>
<td>
**Justification to not share**
</td>
<td>
Should the answer to the previous question, justification of it. Reasons not
to publish it can be found in section 3.1.3.
</td> </tr>
<tr>
<td>
**Sharing methodology**
</td>
<td>
Sharing policies among the following options:
* Green Open Access
* Gold Open Access
* Non-shared
* Publish in pay journal
* Other (Describe)
</td> </tr>
<tr>
<td>
**Embargo**
</td>
<td>
Description of the embargo that will be held on the research data (e.g. prior
publication in a subscription journal followed by later open publication) and
the period that this embargo will entail.
</td> </tr>
<tr>
<td>
**Necessary equipment / software**
</td>
<td>
Equipment and/or software that would be required to:
* re-create the research data (validation)
* Integrate the data in other research projects (reuse)
</td> </tr>
<tr>
<td>
**Licensing**
</td>
<td>
Licensing (if any) for the data
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
Description of the procedures that will be put in place for long-term
preservation of the data: an indication of how long the data should be
preserved, its approximate final volume, the associated costs, and how these
are planned to be covered.
</td> </tr> </table>
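To make the naming rules in the template concrete, here is a small helper, offered purely as an illustrative sketch (the function names are hypothetical, not project-supplied tooling):

```python
# Sketch of the data set nomenclature above; helper names are hypothetical.
VALID_TYPES = {"RD", "PM", "O"}  # Research Data, Publishable Material, Others

def data_set_reference(partner: str, index: int) -> str:
    """'LT_' + PartnerShortName + '_' + UniqueIndex, with UniqueIndex >= 1."""
    if index < 1:
        raise ValueError("UniqueIndex is an incremental number starting at 1")
    return f"LT_{partner}_{index}"

def data_set_name(partner: str, type_code: str, title: str) -> str:
    """'LT_' + PartnerShortName + '_' + Type + '_' + Title (<= 20 words)."""
    if type_code not in VALID_TYPES:
        raise ValueError(f"Type must be one of {sorted(VALID_TYPES)}")
    words = title.split()
    if len(words) > 20:
        raise ValueError("Title is limited to 20 words")
    return f"LT_{partner}_{type_code}_" + "_".join(words)

print(data_set_reference("KU", 2))  # -> LT_KU_2
print(data_set_name("KU", "PM",
                    "Publication on literature research from WP3"))
# -> LT_KU_PM_Publication_on_literature_research_from_WP3
```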
## Data set 1
<table>
<tr>
<th>
</th>
<th>
**Technical description**
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
Existing closed case files that were tried by the Federal Prosecutor’s Office
in Belgium (confidential)
</td> </tr>
<tr>
<td>
**Possible applications**
</td>
<td>
No application
</td> </tr>
<tr>
<td>
**Usage in publication**
</td>
<td>
[*] No publication
</td> </tr>
<tr>
<td>
**Non previously existent data**
</td>
<td>
No previously existent available data that could replace it
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Confidential – the case files are on paper and need to be consulted at the
Federal Prosecutor’s Office
</td> </tr>
<tr>
<td>
**Standards**
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Documents naming convention**
</td>
<td>
Confidential – these files will not be made public to other consortium members
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
**Data shared**
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Justification to not share**
</td>
<td>
Confidential case files, only accessible for KU Leuven consortium members
</td> </tr>
<tr>
<td>
**Sharing methodology**
</td>
<td>
Non-shared
</td> </tr>
<tr>
<td>
**Embargo**
</td>
<td>
No embargo
</td> </tr>
<tr>
<td>
**Necessary equipment / software**
</td>
<td>
No equipment or software
</td> </tr>
<tr>
<td>
**Licensing**
</td>
<td>
No licensing
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
No preservation of the data set – only accessible for consultation at the
Federal Prosecutor’s Office
</td> </tr> </table>
## Data set 2
**Data set information**
<table>
<tr>
<td>
**Data set reference**
</td>
<td>
LT_KU_2
</td> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
LT_KU_PM_Publication_on_literature_research_from_WP3
</td> </tr>
<tr>
<td>
**Data set brief description (up to 100 words)**
</td>
<td>
Depending on the possibilities (since the deliverables in WP3 are
confidential), KU Leuven would like to publish an article on the best
practices in joint interrogations or on the newly developed methodology for
joint interrogations.
</td> </tr>
<tr>
<td>
**D.O.I. (if available)**
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Researcher – Main organization**
</td>
<td>
KU – KU Leuven
</td> </tr>
<tr>
<td>
**Researchers involved (include name, mail address and organization info)**
</td>
<td>
Prof. dr. Geert Vervaeke
Dr. Emma Jaspaert
Ma. Ricardo Nieuwkamp
</td> </tr>
<tr>
<td>
**Generation date**
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Publication date**
</td>
<td>
TBD
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**Technical description**
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Possible applications**
</td>
<td>
Publication useful for police practitioners and other actors involved in
transnational police or judicial cooperation in criminal matters
</td> </tr>
<tr>
<td>
**Usage in publication**
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Non previously existent data**
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Standards**
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Documents naming convention**
</td>
<td>
TBD
</td> </tr> </table>
**Data Sharing**
<table>
<tr>
<th>
**Data shared**
</th>
<th>
Yes (but not yet shared at the moment)
</th> </tr>
<tr>
<td>
**Justification to not share**
</td>
<td>
/
</td> </tr>
<tr>
<td>
**Sharing methodology**
</td>
<td>
[*] Gold Open Access (to be confirmed whether LAW-TRAIN has budgeted for open access)
</td> </tr>
<tr>
<td>
**Embargo**
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
**Necessary equipment / software**
</td>
<td>
No equipment /software
</td> </tr>
<tr>
<td>
**Licensing**
</td>
<td>
No licensing
</td> </tr>
<tr>
<td>
**Archiving and preservation**
**(including storage and backup)**
</td>
<td>
The article will be stored on a secure server of the KU Leuven (no costs) in
addition to the common project repository
</td> </tr> </table>
## Other Data Sets
Additional data sets will be added in the subsequent internal version of the
DMP, before M12.
# Guidelines for handling data
## During the LAW-TRAIN project
The LAW-TRAIN project involves cooperation with several law-enforcement forces
and therefore some of the research data generated should be considered
security sensitive. Even without having been marked as completely confidential
by the EC, some of the deliverables (and therefore some of the associated
research data) will have restricted access due to security issues.
To guarantee the security of these documents, proper guidelines for access
restriction, encryption and other measures are provided in the Project Manual
published in deliverable D1.2.
Moreover, some of the research conducted in the LAW-TRAIN project involves
dealing with personal information. Guidelines on how to anonymise and protect
such information have been previously defined and published in Deliverable 2.3
Ethical Guidelines and Procedures.
During the project, and for the deliverables and other research data, a
private online repository has been set up at USECON's facilities and has been
in use since the beginning of the project. This private repository has been
set up using OWNCLOUD technology, paying special attention to encryption and
security issues.
## After completion of the project
Each partner will be responsible for keeping the research data available up to
three years after the completion of the LAW-TRAIN project. During this period
they will have to keep applying the security measures provided during the
project.
In addition, the research data published in IDENER’s research repository will
be maintained for a minimum of three years after the completion of the
project, although the company intends to keep this information stored for a
longer period so as to maximize the dissemination and re-use possibilities of
the research results. These statements do not override any requirement
regarding the privacy of the research subjects, and the deletion of the
personal data involved will be carried out in accordance with the ethical
guidelines.
# Archiving and preservation
All the research data generated during the project will be centralized and
stored in a research repository provided by IDENER, which is currently under
development. This research repository will comply with the following policies:
* Data will be stored in a secure way, avoiding unauthorized access or modification to it.
* Data will be backed up on a regular and frequent basis. Specifically, incremental backups will take place at least weekly and usually on a daily basis (a sketch of an integrity check for such backups follows this list).
* Two backups will be generated. The first one will be stored in the IDENER online servers and the second one will be stored locally in IDENER main office.
* IDENER’s IT department will be responsible for the backup of data stored on such systems.
* On submission of a paper, the raw data must be submitted to IDENER’s IT department for archiving, and the data set description should be included following the template of section 3. It may include:
  * Information about the paper (title, journal, authors, etc.)
  * The raw data (if access can be granted)
  * An index to the data files, if required
  * A description of the structure of the data files
  * Explanation of how to reproduce the experiments that led to such results, if applicable
* IDENER will securely (encrypted) store archived data for the required period of time and will make it available as defined in this document and complying with the sharing policy of each data subset.
* The data will be stored on Linux systems that will be kept up to date with security patches and updates. These systems will require a password for access. Redundancy measures such as hard disks in RAID configuration will be used to protect against the failure of a single system.
* The backup systems will comply with the same security and redundancy requirements as the main repository.
* The total space currently reserved for project research data is fixed at 3 TB. Should this need to be expanded, the systems will be upgraded to satisfy the requirement.
* Steps will be taken to publish IDENER’s research repository in the re3data.org index.
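As referenced in the backup item above, the following is a hedged sketch of one way such backups could be integrity-checked; the paths, manifest format and function names are illustrative assumptions, not part of IDENER's specified tooling:

```python
# Sketch: verify archived files against a stored SHA-256 manifest.
# Paths and manifest layout are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def checksum_manifest(root: str) -> dict:
    """Map every file under root to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(root).rglob("*")) if p.is_file()
    }

def verify(root: str, manifest_file: str) -> list:
    """Return files whose current digest differs from the stored one."""
    stored = json.loads(Path(manifest_file).read_text())
    current = checksum_manifest(root)
    return [p for p, digest in stored.items() if current.get(p) != digest]
```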
The information stored in the project repository will comply with the Ethics
Guidelines & Procedures (Deliverable D2.3 of the project). In addition any
personal information held by the project will be:
* Processed fairly and lawfully
* Processed for one or more specified and lawful purposes, and not further processed in any way that is incompatible with the original purpose
* Adequate, relevant and not excessive
* Accurate and, when necessary, kept up to date
* Kept for no longer than is necessary for the purpose for which it is being used
* Processed in line with the rights of individuals
* Kept secure with appropriate technical and organizational measures taken to protect the information.
* Not transferred outside the European Economic Area unless there is adequate protection for the personal information being transferred.
## Associated infrastructure and costs
In order to comply with the abovementioned policies, IDENER is designing an
architecture including new services and equipment to complement its current
infrastructure.
Three different data servers will be used: servers A and B will be hosted on
OVH dedicated servers (in two different European data centres) and server C
will be located in IDENER’s main office.
The estimated cost of implementing this infrastructure and executing the DMP
is around €6,000 over the duration of the project. This includes rental of the
abovementioned data servers and the acquisition of equipment and software for
IDENER’s offices. Arrangements to cover other costs, such as registration fees
for D.O.I.s, are being scheduled.
# Summary
This document represents the Data Management Plan that will be followed during
the LAW-TRAIN project to manage the generated research data and results. In
addition to this deliverable an online version of the DMP will be generated
using the DMPONLINE portal developed by the Digital Curation Centre (DCC).
This document will be updated continually to reflect the identification of new
data sets and changes to any of those already included.
The LAW-TRAIN project is not part of the Horizon 2020 Open Data Pilot
initiative. Nevertheless, partners are encouraged to follow some of the
guidelines of this initiative. Specifically, partners should try to publish
their results (e.g. peer-reviewed papers) and the associated research data
following the Open Access practice.
Tables to be filled in by each partner are provided to describe the different
research data sets to be generated during the project. These tables cover all
the key information to be defined, from naming and referencing to licensing
issues.
Details about the policies that IDENER will apply to set up a research data
repository, which project partners will use to store and publish their
research information, are provided, together with an estimate of the
associated costs.
0903_DIVERSITY_636692.md
# INTRODUCTION
The EC Horizon 2020 project DIVERSITY (Grant Agreement No. 636692) consortium
decided to take part in the Pilot on Open Research Data in Horizon 2020 on a
voluntary basis by expressing this intention in the project proposal. This is
the associated Data Management Plan (DMP) in the form of a deliverable.
This document should be considered in combination with Section 3 of the Grant
Agreement, “RIGHTS AND OBLIGATIONS RELATED TO BACKGROUND AND RESULTS”, which
concerns intellectual property, ownership, exploitation and dissemination of
the results, and access rights to the results of the project. This DMP is
intended as a first draft, keeping in mind what is stated in the _**Guidelines
on data management in Horizon 2020**_ : _“The DMP is not a fixed document, but
evolves during the lifespan of the project”_. Any further developments will be
structured as additional deliverables if issues arise that have not been
foreseen.
Hereafter are listed the datasets that have been considered relevant to the
project to date, together with a description of their possible evolution. Each
dataset is examined following the template given by the _**Guidelines for Data
Management in Horizon 2020.** _
# SCIENTIFIC PUBLICATIONS
<table>
<tr>
<th>
**Data set description**
</th>
<th>
_Description and origin:_ Text and images. Scientific articles produced under
DIVERSITY and based on data collected during the project and on the
experiences with project business partners and business cases.
_To whom it would be useful:_ to researchers and companies interested in
carrying out research programmes on PSS and lean engineering.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
PDF, txt
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Standard: ISO 19005-1 PDF/A, because these documents are intended for
long-term archiving.
Basic metadata included:
* Title
* Author
* Subject
* Keywords
* Created
* Modified
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Open access via project website www.diversity-project.eu; a copy of the
publication may be made available on zenodo.org and on the publisher's
website.
<tr>
<td>
**Archiving and preservation**
</td>
<td>
For long-term preservation of the text data, both PDF and txt versions of the
file will be stored. This way, should the PDF no longer be readable, the
information will still be obtainable from the txt.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
# GENERIC ONTOLOGY
<table>
<tr>
<th>
**Data set description**
</th>
<th>
_Description:_ The DIVERSITY General Ontology describes the components that
build the DIVERSITY prototype and the relations among them. It also describes
a general scheme that is widely applicable, with no limitation imposed by a
specific context.
This dataset will be produced as soon as a draft version of the ontology is
defined. It will evolve during the project and will reach a final version at
the end of DIVERSITY.
_To whom it would be useful:_ to researchers and companies interested in
carrying out research programmes on PSS and lean engineering.
_Indications on the existence of similar data:_
Example of ontology stored in zenodo with open access
https://zenodo.org/record/16493#.VaePafntmko
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
RDF/XML, OWL/XML
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
XML Schema and OWL as defined by W3C.
Basic metadata included:
* Title
* Author
* Subject
* Keywords
* Created
* Modified
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Open access via project website www.diversity-project.eu
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
For long-term preservation of the text data, both PDF and txt versions of the
file will be stored. This way, should the PDF no longer be readable, the
information will still be obtainable from the txt.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
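Since the ontology is shared in RDF/XML and OWL/XML, a re-user can inspect it with standard tooling. Below is a brief sketch, assuming the third-party rdflib package and a hypothetical file name; the DMP itself prescribes no particular tooling:

```python
# Load the openly shared ontology and list its OWL classes.
# File name is hypothetical; requires the third-party rdflib package.
from rdflib import Graph
from rdflib.namespace import OWL, RDF

g = Graph()
g.parse("diversity_generic_ontology.owl", format="xml")  # RDF/XML serialization

print(f"Ontology contains {len(g)} triples")
for cls in g.subjects(RDF.type, OWL.Class):
    print("Class:", cls)
```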
# SPECIFIC ONTOLOGY
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Description: The DIVERSITY Specific Ontology describes the components that
build the DIVERSITY prototype and the relations among them. Each file will
describe an ontology related to a specific business case. The ontologies will
be developed in cooperation with DIVERSITY’s industrial partners.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
RDF/XML, OWL/XML
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
XML Schema and OWL as defined by W3C.
Basic metadata included:
* Title
* Author
* Subject
* Keywords
* Created
* Modified
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Private dataset: it contains confidential information related to each specific
business case company.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
For long-term preservation of the text data, both PDF and txt versions of the
file will be stored. This way, should the PDF no longer be readable, the
information will still be obtainable from the txt.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
# GENERAL REQUIREMENTS
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Description: Text and images. Lists of requirements that are guiding the
development of the full prototype.
These are the lists underlying deliverable D1.2. The requirements described
are of the “top level” type and are extracted from the more detailed
requirements associated with each business case.
This list can be useful to software developers and researchers interested in
applications for PSS development and management.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
PDF, txt
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Standard: ISO 19005-1 PDF/A, because these documents are intended for
long-term archiving.
Basic metadata included:
* Title
* Author
* Subject
* Keywords
* Created
* Modified
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Open access via project website www.diversity-project.eu, a copy of
publication may be made available on zenodo.org
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
For long-term preservation of the text data, both PDF and txt versions of the
file will be stored. This way, should the PDF no longer be readable, the
information will still be obtainable from the txt.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
# SPECIFIC REQUIREMENTS
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Text and images. Lists of requirements derived from the business cases as a
direct description of the needs of each industrial partner. These requirements
will lead to the development of the specific ontologies described in paragraph 4.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
PDF, txt
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Standard: ISO 19005-1 PDF/A, because these documents are intended for
long-term archiving.
Basic metadata included:
* Title
* Author
* Subject
* Keywords
* Created
* Modified
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
No Open access and no public distribution due to confidential information
concerning industrial partners.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
For long-term preservation of the text data, both PDF and txt versions of the
file will be stored. This way, should the PDF no longer be readable, the
information will still be obtainable from the txt.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
# BUSINESS CASES DESCRIPTIONS
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Description and origin: Text and images. A set of documents which describe the
introduction of DIVERSITY tools and software into the companies' environments.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
PDF, .txt, .ppt
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Standard: ISO 19005-1 PDF/A, because these documents are intended for
long-term archiving.
Basic metadata included:
* Title
* Author
* Subject
* Keywords
* Created
* Modified
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
No Open access due to the presence of confidential information on industrial
partners.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
For long-term preservation of the text data, both PDF and txt versions of the
file will be stored. This way, should the PDF no longer be readable, the
information will still be obtainable from the txt.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
# GENERIC LEAN RULES
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Description and origin: Text and images. These rules are currently being
developed by DIVERSITY as stated in D1.3 System Concept and they “will
facilitate the transformation of PSS design considerations in terms of
customer requirements and technical constraints into usable design
guidelines.”
Set of rules that will make reusable the knowledge acquired in a process of
PSS development. These rules occur at two different levels: Content Design
level and Development process level.
This dataset can be useful for industries and organizations that aim at the
introduction of a lean environment for PSS development.
This dataset does not contain confidential information on industrial partners.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
PDF, .txt
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Standard: ISO 19005-1 PDF/A, because these documents are intended for
long-term archiving.
Basic metadata included:
* Title
* Author
* Subject
* Keywords
* Created
* Modified
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Open access via project website www.diversity-project.eu
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
For long-term preservation of the text data, both PDF and txt versions of the
file will be stored. This way, should the PDF no longer be readable, the
information will still be obtainable from the txt.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
# DEMO VIDEO OF PROTOTYPES (ONE FOR EACH COMPANY)
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Description and origin: Demo videos associated with each full prototype (Lean
Design and Visualization tool, PSS engineering environment, Context Sensitive
tool for search, Stakeholders feedback analysis and KPI) and its test case
within an industrial partner.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
The format is to be defined but will probably be Audio-Video Interleave (AVI)
container and MPEG-4 codec, or any similar format.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Standard: standard associated to MPEG-4, ISO/IEC 14496-14:2003
Metadata to be defined but basic information about Product name, Author,
Copyright, Version, Language will definitely be included.
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Open access via project website www.diversity-project.eu where a link for
direct download will be provided. The demo version will be presented and
disseminated during various scientific events at the end of the project.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Measures for long-term archiving will be taken once the dataset is defined.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
# SOFTWARE: FULL PROTOTYPE
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Executable application and installation package of the full prototypes (Lean
Design and Visualization tool, PSS engineering environment, Context Sensitive
tool for search, Stakeholders feedback analysis and KPI).
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
The format is to be defined.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Although standards and metadata may vary depending on the language of the
development, basic metadata included will be:
Product name, Author, Copyright, Version, Language
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Open access via project website www.diversity-project.eu.
A copy may be made available on zenodo.org
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
The first version will be archived on institutional servers at UNIBG and other
safe locations of the consortium. Measures for long-term preservation will be
taken once the format is clearly defined.
</td> </tr> </table>
# SOURCE CODE
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Source code package associated to the full prototype. It will contain all the
files needed to generate the full prototype.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
Format to be defined. The extension of single files varies depending on the
development language chosen.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Although standards and metadata may vary depending on the development
language, the basic metadata included will be:
* Typology
* Version
* Copyright
* Dimension /volume
* Language
* Product Name
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Open access via www.diversity-project.eu, a copy may be made available on
zenodo.org and on publisher website.
The consortium is currently evaluating the licensing policy considering the
following licenses: Creative Commons CC BY-NCND, GNU GPL or Mozilla Public
License.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
The first version will be archived on institutional servers at UNIBG and other
safe locations of the consortium. Measures for long-term preservation will be
taken once the format is clearly defined. The project portal will be active
for 5 years following the end of the project.
</td> </tr> </table>
# PROTOTYPE USERS’ ROLES
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Text. Roles associated with specific user behaviour in approaching the
software prototypes. This dataset will be the basis for producing the user
manual.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
PDF, .txt other formats to be defined.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Standard: ISO 19005-1 PDF/A, because these documents are intended for
long-term archiving.
Basic metadata included:
* Title;
* Author;
* Subject;
* Keywords;
* Created;
* Modified.
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Open access via project website www.diversity-project.eu; sensitive
information will be cleaned from the dataset.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
For long-term preservation of the text data, both PDF and txt versions of the
file will be stored. This way, should the PDF no longer be readable, the
information will still be obtainable from the txt.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
# USERS' FEEDBACK ON TEST CASES
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Text. Feedback gathered during the application test phase through a
questionnaire. It will include a detailed description of each test case of the
full prototype and the associated profile of the user undertaking the test.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
PDF, .txt
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Standard: ISO 19005-1 PDF/A, because these documents are intended for
long-term archiving.
Basic metadata included:
* Title
* Author
* Subject
* Keywords
* Created
* Modified
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Open access via project website www.diversity-project.eu, with sensitive
information cleaned out from the dataset.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
For long-term preservation of the text data, both PDF and txt versions of the
file will be stored. This way, should the PDF no longer be readable, the
information will still be obtainable from the txt.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
# MULTIMEDIA
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Promotional demo videos created specifically for advertising purposes. These
videos (probably integrated with interactive features) will show the features
developed by DIVERSITY and the benefits they bring to PSS development. They
are intended to be shown at fairs, conferences, etc.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
Audio-Video Interleave (AVI) container and MPEG-4 codec.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Standard: standard associated to MPEG-4, ISO/IEC 14496-14:2003
Basic Metadata:
* Title
* Author
* Copyright
* More Info
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Open access via project website www.diversity-project.eu.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
The AVI container has been chosen even though it is proprietary, because it is
widely used and well supported by open-source and other tools. The MPEG-4
codec has been chosen as a good compromise between compression for archiving
and loss of information.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
# SOCIAL NETWORK POSTS
<table>
<tr>
<th>
**Data set description**
</th>
<th>
Comments and issues posted on social networks, both internal and external to
partner companies. Immediately after the distribution of the DIVERSITY
software, a series of questions regarding user experience will be asked on the
internal social networks. Users' posts will be collected and examined in order
to evaluate their satisfaction and any issues observed.
</th> </tr>
<tr>
<td>
**Format**
</td>
<td>
The format will probably be JSON (JavaScript Object Notation), with an
associated .csv translation.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Standard associated with the JSON interchange format: ECMA-404. Metadata
fields will be defined later in the project.
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
Partial data sharing due to privacy-related issues and the presence of
sensitive information. Posts will be made anonymous. This dataset will be
stored on project website repository www.diversity-project.eu.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Although JSON is already well suited to long-term preservation, being a
completely language-independent text format, the .json file will be stored
together with a txt or csv version; this way, should the .json no longer be
readable, the information will still be retrievable from the associated
.csv/.txt.
Data will be stored on UNIBG server, kept in a secure and fireproof location.
Server administration assures daily and monthly backups of the data files. The
project portal will be active for 5 years following the end of the project.
</td> </tr> </table>
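A minimal sketch of the planned JSON-to-CSV translation for this dataset (field and file names are illustrative assumptions; the actual metadata fields will be defined later in the project):

```python
# Translate anonymised social network posts from JSON to CSV so the
# information stays readable even if JSON tooling becomes unavailable.
# Field and file names are illustrative assumptions.
import csv
import json

FIELDS = ["post_id", "date", "text"]  # hypothetical post attributes

with open("posts_anonymised.json") as f:
    posts = json.load(f)  # expected: a list of post objects

with open("posts_anonymised.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    for post in posts:
        writer.writerow({key: post.get(key, "") for key in FIELDS})
```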
0904_ROADART_636565.md
# 1 Types of Data
For the project following types of data will be generated and used:
1. Measurement data
2. Design descriptions
3. Input data/models for simulations
4. Data from numerical simulations
5. Computer code
6. Text based data: Reports, newsletter, research presentations, protocols
These are described in the following sections.
# 1.1 Measurement data
As part of the project, measurements will be performed and the results will be
stored. Measurements conducted during the course of the project include i) EM
far-field measurements and S-parameters of antennas, and ii) radio channel
characterization measurements. For case (i), the volume of measured data will
not be large (Megabyte range) and can be handled using conventional formats.
Occasionally, measurement data will be shared between partners. The data will
be stored, but are potentially of no use outside the
project. For case (ii), the measured data is of medium size (less than 10
Terabytes). Radio Channel Measurement data includes calibration data, raw data
(as acquired by the measurement campaign), as well as processed data (after
the application of calibration and radio channel extraction algorithms). Radio
Channel measurement data will be used to support scientific publications and,
occasionally, data will be shared between partners. For validation and
dissemination purposes, selected parts of processed data may be publicly
available with the consent and agreement of all partners. Due to the
importance of radio vehicular measurements for the scientific community, the
data will be stored to exploit possibility of reuse in other relevant research
activities.
# 1.2 Design descriptions
Occasionally, components such as antennas and antenna arrays on PCB and other
technologies will have to be designed. This may be done in collaboration with
partners, so design plans may be shared between partners. The size of data
created is small (Megabyte range). The data will underpin scientific
publications. The data will be stored and are potentially of use to interested
parties outside the project. Publication of the data will depend on whether
this is possible under existing NDAs with partners.
# 1.3 Input data/models for simulations
The project has a large simulation component: IMST, TNO and UPRC perform
simulations (with EMPIRE XPU) of antennas inside truck models provided by MAN,
and TNO performs simulations of CACC. A representative set of designs will be
stored.
data will underpin scientific publications. The data will be stored and are
potentially of use to interested parties outside the project, except from the
truck models. Publication of the data will depend on whether this is possible
under existing NDAs with partners. Moreover, UPRC will perform extended
simulations using in-house developed simulation engines (in MATLAB/OCTAVE) for
the performance evaluation of transmission and diversity schemes under
standardized radio networks using channel models. Input data/models for the
simulation includes i) measurements conducted within the project, ii) radio
protocol standards and iii) state-of-the art radio channel models available in
the literature. The size of data created is of medium size (Gigabyte range)
and will underpin scientific publications. The data will be stored and are
potentially of use to interested parties outside the project.
# 1.4 Data from numerical simulations
As part of the project EM numerical simulations will be performed and the
results will be stored; occasionally, numerical data will be shared between
partners. The size of data created is potentially huge (Terabyte range). The
data will underpin scientific publications. The data will be stored, but may
have limited use outside the project.
# 1.5 Computer code
Work in this project includes the development of several software components
for various objectives of the project. With the partners’ agreement, UPRC
intends to publicly distribute, in source form, the implementation of the
radio channel model, after its validation through scientifically reviewed
publications as part of dissemination activities, in order for it to be used
by interested parties outside the project. The size of data is small (Megabyte
range). In addition, during the integration activities, UPRC will exchange
developed computer software (either in source or executable form) with other
project partners for the scope of developing the diversity-enabled radio modem
and an improved GNSS localization system. The size of data is small (Megabyte
range). The specific software will not be publicly available and data sharing
will be managed by existing NDAs among partners.
# 1.6 Text based data: Reports, newsletter, research presentations, protocols
These data are produced for communication within the project; some of them
will be made public on the web page or through conference presentations, while
other parts are only for internal use. The data size is moderate (Megabyte to
Gigabyte range).
# Data collection / generation
## Methodologies for data collection / generation
Data can be stored on servers and dedicated memory repositories, stationed
across the partners’ premises, all connected to the LAN. Some data can also be
stored locally on PC hard discs. The main project data will be stored on a
special project repository that will host all the data from the project. This
will only be accessible to certain work groups of partner employees and
researchers. The project repository is backed up on a regular basis.
## Data quality and standards
There are existing standards from ETSI, which describe the design and
functionality of the ITS G5 stack. For the data exchange between our partners
we use the Data Distribution Service (DDS). This is a standard defined by the
Object Management Group (OMG). In this case we exchange our data in real-time,
defined for system-relevant-messages.
Each stored and shared data set should be accompanied by metadata files that
(if applicable) should include details for the scope, origin and
conditions/circumstances related with the data. Metadata should include: i)
time stamp for the creation date of the data set, ii) time stamp and revision
for each modification of the data set, iii) generation source (simulation or
data), iv) description of the data set, v) (for antenna measurements) antenna
configurations, vi) (for measurement data) measurement location, vii) (for
mobile measurements) GPS stamp of the measurements, viii) (for code sharing)
code revision and revision notes, ix) author and developer names, affiliation
and contact data. The manager, creator or developer of each data set is
responsible to generate and include the metadata in a text descriptor or an
open-standard format of choice (e.g. UML, JSON etc.).
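As an illustration (JSON being one of the open-standard options named above; all field values are assumptions and only the fields applicable to a measurement data set are shown), a data set creator could emit the required descriptor as follows:

```python
# Sketch of a metadata descriptor covering the fields listed above;
# all values are illustrative assumptions.
import json
from datetime import datetime, timezone

metadata = {
    "created": datetime.now(timezone.utc).isoformat(),  # i) creation stamp
    "revisions": [],                      # ii) time stamp + revision per change
    "source": "measurement",              # iii) simulation or data
    "description": "Radio channel sounding run",         # iv) data set summary
    "measurement_location": "test track (illustrative)", # vi) where measured
    "gps_stamp": None,                    # vii) for mobile measurements
    "authors": [{"name": "A. Author", "affiliation": "UPRC",
                 "contact": "[email protected]"}],        # ix) author details
}

with open("channel_run_001.metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```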
# Data management, documentation and curation
## Managing, storing and curating data
The main project data will be stored on a special project repository that will
host all the data from the project. This is dedicated to data of all ongoing
Work Packages and is only accessible to certain categories and work groups of
partner employees and researchers. The project repository is backed up on a
regular basis.
## Metadata standards and data documentation
Data sets will contain metadata with information specific to the origins of
the data (e.g. measurement or simulation). As the sources of data vary
significantly, and to ensure that the data can be manipulated at a later date,
this level of metadata will remain intact, and further details will be
provided within a text descriptor or an open-standard format of choice that
also holds a unique link to the data on the project server. An outline of the
metadata for data sets relevant to this project is presented in section 2.2.
## Data preservation strategy and standards
Data identified as requiring long-term preservation (i.e. publications or
machine code) will be compressed and archived on mechanical hard drives which
will be held locally. An estimate of the storage costs for this specific
project cannot be given, because the long-term storage units are shared
house-wide across all our projects and these costs cannot be broken down. It
is also unclear at this point in the project how much data will ultimately be
stored.
Sufficient storage is already in place to cover the short term, and due to the
low cost of suitable storage media the additional storage can easily be met
through currently available budgets.
0905_FIRES_649378.md
# Executive summary
The purpose of this document is to describe the data management life cycle for
all data sets that will be collected, processed or generated by the FIRES
project. This document provides a general overview of the nature of the
research data that will be collected and generated within the project and
outlines how these data will be handled during the project and after its
completion. This first version of the DMP serves as a starting point and
guideline for the researchers in the FIRES project. More elaborate versions
will be uploaded at later stages of the project, whenever relevant.
# Prepare
## Data Collection
Databases generated from the project will be submitted to the EC as part of
the deliverables planned in the Project:
D3.2 Pan European Database on Related Variety at NUTS-2 level
D4.2 Pan European Database Time Series GEDI at National Level
D4.4 Pan European Database REDI at Regional Level
D5.1 Database on Start up Processes
Data necessary for these deliverables will be collected mainly from public and
proprietary data sources and through surveys.
In particular, the following data will be collected or generated in the FIRES
project:
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
**Data type**
</th>
<th>
**Description of data**
</th>
<th>
**Origin/collection source**
</th>
<th>
**File**
**Format**
</th>
<th>
**Scale**
</th> </tr>
<tr>
<td>
D3.2 Pan
European
Database on
Related
Variety at
NUTS-2 level
</td>
<td>
Numerical data at national and regional NUTS-2.
</td>
<td>
Consists of a number European regions and countries of a certain number of
years.
</td>
<td>
The data will be collected from different sources of which the GEM and the
Skill-relatedness data of Neffke & Henning (2013) are two.
</td>
<td>
STATA
(.dta)
</td>
<td>
Not known yet.
</td> </tr>
<tr>
<td>
D4.2 Pan
European
Database
Time Series
GEDI at
National
Level
</td>
<td>
Numerical data at national level from
2002 to 2014
</td>
<td>
The database includes institutional and individual indicators that
characterize the national system of
entrepreneurship and refer on the performance of entrepreneurships in the
involved countries.
</td>
<td>
Individual data: GEM; institutional data: various sources
(World Economic
Forum, UN, UNESCO,
Transparency
International,
Heritage
Foundation/World
Bank, OECD, KOF,
EMLYON Business
</td>
<td>
Excel
(xlsx)
</td>
<td>
7,5 MB
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
School, IESE Business School). As compared to previous GEDI data collection
the Coface risk measurement has been replaced by OECD indicator.
Owner: GEDI
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
D4.4 Pan
European
Database
REDI at
Regional
Level
</td>
<td>
Numerical data in
NUTS-1 and/or
NUTS-2 (if feasible
– requires sufficient sample size)
</td>
<td>
This only refers to the entrepreneurship indicators that feed into REDI.
Approximately 125 region cells for two time periods: 2007-2011 and 2012-2014.
In case this is not feasible: 125 regions for one time period: 2010-
2014
</td>
<td>
Researchers are GEM members and have access to the data
</td>
<td>
.xlsx (Excel) and
.dta
(Stata)
</td>
<td>
Limited size
</td> </tr>
<tr>
<td>
D5.1
Database on
Start up
Processes
</td>
<td>
Mostly quantitative (numerical) data and some
qualitative
(interview quotes) data at corporate level that can, inter alia, be sorted by
country and industry (via NACE, NAICS, US
SIC codes)
</td>
<td>
Venture creation processes of 800 start-up companies in the US, UK, Germany
and Italy.
Dataset is restricted to alternative energy and ICT companies. The sample is
based on external database Orbis.
</td>
<td>
Via CATIs (computer-assisted telephone interviews) with the support of an
external call center. UU will be the owner.
</td>
<td>
.xlsx /
.sav
</td>
<td>
60 MB
</td> </tr> </table>
**Requirements for access to existing datasets (previously collected data):**
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
**Description/summary**
</th>
<th>
**Data owner/source**
</th>
<th>
**Access issues (requirements to access existing data)**
</th> </tr>
<tr>
<td>
Global
Entrepreneurship
Monitor (GEM)
</td>
<td>
Data based on adult population surveys in European countries
</td>
<td>
GEM
</td>
<td>
GEM members (including some FIRES members) have access to the micro data;
regional indicators can be compiled and published with the mutual consent of
the GEM National Teams concerned
</td> </tr>
<tr>
<td>
Perfect Timing (PT)
Database
</td>
<td>
Venture creation processes of 420 start-up companies in the US, Germany and
the Netherlands.
</td>
<td>
Utrecht University:
Andrea
</td>
<td>
PI (Andrea Herrmann) is the owner of the data
</td> </tr>
<tr>
<td>
</td>
<td>
Dataset is restricted to alternative energy and ICT companies. The sample is
based on external database Orbis.
</td>
<td>
Herrmann
</td>
<td>
</td> </tr> </table>
## Data Documentation
The aim of the FIRES project is to document data in a way that will enable
future users to easily understand and reuse it.
All datasets are delivered as data files and will be labelled with a
persistent identifier received upon depositing the dataset. Each dataset will
be accompanied by a separate report, stored alongside the data, describing the
collection in detail and presenting the descriptive statistics and data
manipulations for each data series in the dataset.
Common _**metadata**_ that apply at study level to all studies in the FIRES
project will include name, description, authors, date, subproject, persistent
identifier, accompanying publications, etc. For such generic metadata the
Dublin Core or DDI metadata standard will be used. For D3.2 a new metadata
template must be developed; D4.2 and D4.4 can follow the practice developed
for the GEDI and REDI indicators; whereas D5.1 can rely on earlier work by
Dr. Andrea Herrmann in her earlier Marie Curie project, where she collected
exactly the same type of data in Germany and the US.
_**File naming and folder structure:**_
In order to better organize the data and save time, a file naming convention
will be used so that folders, documents and records are titled in a
consistent and logical way. The data will be available under filenames
composed of the project acronym and the deliverable number, for example:
FIRESProjectD32.dta, FIRESProjectD42.dta, FIRESProjectD44.dta and
FIRESProjectD51.dta. Reports will be stored under corresponding names.
Furthermore, specific project/data identifiers will be assigned. All variables
are given logical three-letter codes and a complete codebook is provided, with
definitions and descriptive statistics.
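As an illustration of how such a convention can be enforced in practice, the
following is a minimal Python sketch; the pattern and function names are ours,
for illustration only, and are not part of the FIRES deliverables.

```python
import re

# Illustrative pattern for the convention: project acronym plus deliverable
# number, e.g. FIRESProjectD32.dta for deliverable D3.2 (extension .dta for
# data files, .pdf assumed here for the accompanying reports).
FILENAME_PATTERN = re.compile(r"^FIRESProjectD(?P<wp>\d)(?P<num>\d)\.(dta|pdf)$")

def deliverable_filename(wp: int, num: int, ext: str = "dta") -> str:
    """Build a filename for deliverable D<wp>.<num> following the convention."""
    return f"FIRESProjectD{wp}{num}.{ext}"

def is_valid_filename(name: str) -> bool:
    """Check whether a filename conforms to the naming convention."""
    return FILENAME_PATTERN.match(name) is not None

if __name__ == "__main__":
    assert deliverable_filename(3, 2) == "FIRESProjectD32.dta"
    assert is_valid_filename("FIRESProjectD51.dta")
    assert not is_valid_filename("fires_d51_final_v2.dta")
```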
# Handling research data
## Data Storage and Back-up
_**Raw data**_ will be stored on secure university fileservers, and backup
versions will be saved on external portable storage devices (CDs) and on the
personal computers of the responsible researchers.
For the duration of the project, the research data _**master files**_ will be
stored on the university fileserver of the partner institution of the
responsible PI, in order to ensure long-term and secure storage. From the
master file location, _**backups**_ will be made and stored on local drives,
i.e. on the personal laptops of the responsible researchers. _**Working
copies**_ will be accessible on cloud storage (Dropbox), which gives
researchers access to the data and provides an editing environment. The
updated working copies will be synchronized regularly (after every edit) with
the _master copy_ location. The person responsible for the synchronization
will be the responsible researcher (the researcher who is responsible for
generating the data, i.e. the Deliverable coordinator).
_**Version control**:_ Both the master copy and backup versions will use the
same identifier for newer versions, to ensure the authenticity of the data
and to avoid working with outdated versions of files. Version codes of the
form V1.00, V1.01, V2.01 etc. will be used, with the integer part indicating
major and the decimals minor changes. The original and definitive copy will
be retained. During the research the intermediate major versions will also be
retained, making it possible to go back to earlier versions if needed.
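A minimal sketch, assuming the convention above (integer part for major
changes, two decimals for minor changes), of how such version codes could be
parsed and incremented; the exact bump logic is our illustrative assumption,
not something specified by the plan.

```python
import re

# V<major>.<two-digit minor>, e.g. V1.00, V1.01, V2.01
VERSION_RE = re.compile(r"^V(?P<major>\d+)\.(?P<minor>\d{2})$")

def parse(code: str) -> tuple[int, int]:
    """Split a version code such as 'V1.01' into (major, minor)."""
    m = VERSION_RE.match(code)
    if m is None:
        raise ValueError(f"not a valid version code: {code!r}")
    return int(m.group("major")), int(m.group("minor"))

def bump(code: str, major: bool = False) -> str:
    """Return the next code: V1.01 -> V1.02, or V2.00 on a major change."""
    maj, mino = parse(code)
    return f"V{maj + 1}.00" if major else f"V{maj}.{mino + 1:02d}"

if __name__ == "__main__":
    assert bump("V1.00") == "V1.01"
    assert bump("V1.07", major=True) == "V2.00"
```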
STATA also allows for do-files that script all manipulations of the data. All
datasets generated in STATA will thus be presented as a collection of _raw
source files_ (with reference) and a series of _.do-files_ that allow for
exact replication of the aggregation, manipulation and analysis of the data.
These .do-files are published with the raw and final cleaned data files.
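The same raw-data-plus-script pattern can be reproduced outside STATA. The
sketch below shows it in Python with pandas, purely for illustration: the file
names, column names and cleaning step are hypothetical, not taken from the
FIRES datasets.

```python
import pandas as pd

RAW = "raw/FIRESProjectD51_raw.csv"    # raw source file, never edited by hand
CLEAN = "clean/FIRESProjectD51.csv"    # cleaned output, regenerated by script

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Documented manipulations: drop incomplete rows, normalise a code column."""
    df = df.dropna(subset=["firm_id"])                    # hypothetical column
    df["country"] = df["country"].str.strip().str.upper() # hypothetical column
    return df

if __name__ == "__main__":
    # Re-running this script replicates the cleaned file exactly from the raw
    # file, mirroring the role of the published .do-files.
    clean(pd.read_csv(RAW)).to_csv(CLEAN, index=False)
```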
## Data Access and Security
Within the duration of the project only the directly responsible researchers
have _access_ to the data files. They are thereby also responsible for the
integrity of the datasets and required to carefully document collection and
any manipulations made to the data. Data will be made public only after
publication of the reports and deliverables. For privacy reasons, raw
microdata in D5.1 will remain under restricted access after the project, as
will the proprietary parts of the data used in D4.2 and D4.4. We will publish
the data required for the reproduction of analyses. Principal investigators
will control the data up to the delivery of the deliverables.
Ownership of the data generated in the project lies with the beneficiary (or
beneficiaries) that generates them, as stated in the FIRES Consortium
Agreement. In case of joint owners of the data, these shall agree on all
protection measures of the data.
The data collected through the survey in D5.1 will be anonymized. No
privacy-sensitive data are involved in the project.
# Preserve and Share
## Data Preservation and Archiving
All data generated by the project should be preserved permanently. They will
be preserved in Stata .dta and .do formats as well as in simpler database
formats. Together with the data, reports in .pdf and STATA .do-files will be
stored as supporting documentation. For the purposes of long-term sustainable
archiving of the data, a suitable archiving system will be chosen in the
course of the project.
## Data Sharing and Reuse
Possible audiences identified for reuse of the data are mainly students and
scholars. In order to ensure that the data and its metadata can be easily
found, reused and cited, and can be retrieved even if at some point its
location changes, all data generated by the project will be deposited in a
public research data repository. A suitable repository that allows the
assignment of a persistent identifier as well as long-term storage and open
access will be chosen through _re3data.org_, a registry of discipline-specific
repositories. In order to create clarity for potential users about the use of
the data, suitable licenses will be assigned to the data, using Creative
Commons licenses (mostly CC-BY).
Once delivered to the European Commission and approved, the data files will
also be made public on the website of the project. The data for deliverables
D4.2 and D4.4 are proprietary, but aggregated data can be made public. Micro-
data for D5.1 will not be made public until all reports foreseen in the
project have been published.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0906_I3U_645884.md
|
## Introduction
This section describes the data exchange protocol used in the I3U project. The
purpose of the protocol is to describe the computer format in which data is
made available for use within the project. The data exchange protocol is
binding for all partners. It must be used to submit all data to the project
database in a common format. The common project database will consist of a
number of files organized according to this protocol.
## File format and file names
The file format is the Microsoft Excel file format, version 2007 or later
(Office Open XML). This is the default Excel file format and has the file
extension .xlsx (OpenOffice can also produce this format).
The files will be named as follows: _“WPXXnameYYYY.xlsx”_
where _“XX”_ is the number of the workpackage that produced the file, _“name”_
is the name of a group of indicators, and _“YYYY”_ is the version number of
the data, where all digits are used. Version numbers are not consecutive but
indicate major steps in the data construction process.
We expect several updates during the project's lifetime, so several version
numbers will be saved.
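A minimal sketch of building and checking protocol-conforming file names,
assuming (our assumptions, not stated in the protocol) that the work package
number is zero-padded to two digits and that the indicator group name
contains letters only.

```python
import re

# "WPXXnameYYYY.xlsx": XX = work package number, name = indicator group,
# YYYY = four-digit version number (all digits used).
I3U_FILE_RE = re.compile(
    r"^WP(?P<wp>\d{2})(?P<name>[A-Za-z]+)(?P<version>\d{4})\.xlsx$"
)

def i3u_filename(wp: int, name: str, version: int) -> str:
    """Build a protocol-conforming file name, e.g. WP03innovation0100.xlsx."""
    return f"WP{wp:02d}{name}{version:04d}.xlsx"

if __name__ == "__main__":
    fn = i3u_filename(3, "innovation", 100)
    assert fn == "WP03innovation0100.xlsx"
    assert I3U_FILE_RE.match(fn)
```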
## Data documentation
Use the Calibri font, 11 point, throughout the file.
The leftmost worksheet in the file should be named documentation, and should
contain a description of all data contained in the file. Column A in this
sheet should be set to column width 100 and text wrapping on. Cell A1 contains
a general description of the data contained in this file. It should cover
approximate definitions (what phenomenon is measured by these indicators?). If
this description needs several paragraphs, use one cell per paragraph, and
continue using as many rows as are needed. Leave one row empty after the
general description is completed.
Start reporting formal variable definitions on the next row, starting with the
variable name, in bold, followed by the formal definition. Use one cell per
variable, and leave one row empty after the last definition.
Start reporting sources on the next row, starting with text “source for
variable name”, in bold, followed by a description of the source. Use one cell
per variable. Leave one row empty after the last variable.
On the next row, write any messages about permissions for data use and/or
attribution of efforts in collecting the data. Mention the I3U project in the
attribution.
## Data presentation
The worksheets following the documentation sheets contain the actual data. Use
one worksheet per variable, and name the worksheet by the exact variable name
(used in the documentation sheet).
The top row of a worksheet containing data documents the units to which the
data refer (countries, sector, regions, etc.; we refer to these as labels in
this document), and the years for which data is available. Start with label
country in column A, and use subsequent columns for additional labels in the
database (such as sector or region). Use as many columns as there are label
types (e.g., 3 columns if there are countries, regions and sectors). Document
the first year for which data are available in the column following this, and
continue years after this. Freeze panes at the 2nd row below the first year.
Adjust column width according to the data format and labels, but do not make
columns any smaller than width 3, nor wider than width 15 (including columns
for labels). Left align label columns, right align data columns.
Always provide text for any label column that is used (do not leave any cells
empty below a label), and set
the cell format to General for all labels. Use full country names as used on
the Eurostat website (see below for selected countries). For any other labels
than countries, provide a separate worksheet explaining the labels used (see
below).
Provide the data below the years, and set the cell format to Number for all
cells containing data. Use an appropriate fixed number of decimals throughout
the worksheet for a single variable, but implement this as a display format,
not as actual rounding (provide full decimals in the actual writing of
variables). Use two dots (..) for missing data (also right align these), and 0
for values that are actually 0.
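As an illustration of the layout described above, the following is a minimal
sketch using the openpyxl library. The variable ("gdp"), the values and the
documentation text are illustrative only; note that the whole definition cell
is bolded here, whereas the protocol bolds only the variable name (partial
bolding within a cell requires rich text, which is omitted for simplicity).

```python
from openpyxl import Workbook
from openpyxl.styles import Alignment, Font

wb = Workbook()

# Leftmost sheet: the documentation sheet, column A width 100, wrapped text.
doc = wb.active
doc.title = "documentation"
doc.column_dimensions["A"].width = 100
c = doc.cell(row=1, column=1, value="Gross domestic product indicators (illustrative description).")
c.alignment = Alignment(wrap_text=True)
doc.cell(row=3, column=1, value="gdp: GDP at market prices, million EUR (illustrative definition).").font = Font(bold=True)

# Data sheet named exactly after the variable.
ws = wb.create_sheet("gdp")
ws.cell(row=1, column=1, value="country")            # label column A
for j, year in enumerate((2010, 2011, 2012), start=2):
    ws.cell(row=1, column=j, value=year)             # years across the top row
rows = [("Belgium", 363.1, 375.0, 386.2), ("Bulgaria", 38.2, 41.3, None)]
for i, (country, *values) in enumerate(rows, start=2):
    ws.cell(row=i, column=1, value=country).alignment = Alignment(horizontal="left")
    for j, v in enumerate(values, start=2):
        cell = ws.cell(row=i, column=j, value=v if v is not None else "..")
        cell.alignment = Alignment(horizontal="right")
        if v is not None:
            cell.number_format = "0.0"               # display format, not rounding
ws.column_dimensions["A"].width = 15
ws.freeze_panes = "B2"                               # keep labels and years visible

wb.save("WP03gdp0100.xlsx")
```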
## Notes to individual datapoints
In case your data has any notes (e.g., to indicate exceptions to definitions,
breaks in definitions or sources, etc.), include a separate sheet for every
variable for which such notes exist, and name this sheet “variable name –
notes”. Insert the sheet to the immediate right of the sheet with data.
The notes sheet has exactly the same format as the actual datasheet, except
that the cells where the data are in the data sheet will contain the notes.
Set the format to General for these cells, but keep them right aligned.
## Aggregations for sectors and EU
When possible, provide EU totals for all variables that you supply. When
appropriate, these totals are weighted averages, using the natural weights
that lead to a value that spans the entire country set.
## Labels for countries, sectors, regions and other dimensions
### Countries
Use full country names (as specified below); for non-EU countries, use
official country names as specified in _this UN document_.
The following table provides country memberships of the EU12, EU15, EU25, EU27
and EU28 groups:
<table>
<tr>
<th>
**Country**
</th>
<th>
**Remarks**
</th>
<th>
**EU12**
</th>
<th>
**EU15**
</th>
<th>
**EU25**
</th>
<th>
**EU27**
</th>
<th>
**EU28**
</th> </tr>
<tr>
<td>
**Belgium**
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Bulgaria**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Czech Republic**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Denmark**
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Germany**
</td>
<td>
For data until 1990 use former territory of the FRG, indicate this in notes if
any data for 1990 or before are included
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Estonia**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Ireland**
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Greece**
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Spain**
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**France**
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Croatia**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Italy**
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Cyprus**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Latvia**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Lithuania**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Luxembourg**
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Hungary**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Malta**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr> </table>
<table>
<tr>
<th>
**Netherlands**
</th>
<th>
</th>
<th>
Yes
</th>
<th>
Yes
</th>
<th>
Yes
</th>
<th>
Yes
</th>
<th>
Yes
</th> </tr>
<tr>
<td>
**Austria**
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Poland**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Portugal**
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Romania**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Slovenia**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Slovakia**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Finland**
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**Sweden**
</td>
<td>
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**United Kingdom**
</td>
<td>
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr> </table>
**Table 1: country memberships of the EU12, EU15, EU25, EU27 and EU28 groups**
The official candidate countries are: Iceland, Montenegro, the former
Yugoslav Republic of Macedonia, Albania, Serbia, Turkey.
### Regions
The project uses the _NUTS 2013_ classification, with NUTS-2 as the default
level of disaggregation. Whenever NUTS-3 data exist, these can be provided,
but in any case NUTS-2 (when available) must be provided. Use NUTS codes to
indicate regions.
### Sectors
The project uses the _NACE classification, Rev. 2._ Data availability will
determine the level of disaggregation. Use NACE codes to indicate sectors.
### Other labels
When other labels are necessary, use an official classification, and provide
details of this classification by referencing an official document.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0907_WeHubs_645497.md
|
# Executive Summary
WeHubs is a coordination and support action (CSA) aimed at providing a strong
support to women web entrepreneurs (existing and potential) in Europe. WeHubs
seeks to create a favourable environment for women web entrepreneurs, by
linking together local fragmented web entrepreneurship ecosystem nodes into
the **first European Network for Women Web Entrepreneurs Hubs** . The WeHubs
network will facilitate knowledge sharing between relevant stakeholders,
develop dedicated services for women web entrepreneurs and offer access to
relevant events, platforms and support structures. In this respect, the
project will foster the creation and scaling of web start-ups created or
co-founded by women, strengthen the existing web entrepreneurship ecosystems
through networking and complementary services, and support the emergence of a
dynamic European ecosystem for women web entrepreneurs, thereby contributing
to the formulation of relevant policies, supporting the implementation of the
Startup Europe initiative and reinforcing the wider European digital sector.
In order to achieve the WeHubs goals, we will deliver activities during the
lifetime of the project through which we will gather different types of data.
To ensure compliance with the strict requirements of the European Commission,
this Data Management Plan will serve as a manual describing the process of
gathering, preserving and archiving the information.
The main findings that will be subject to this DMP refer to:
* 2 questionnaires linked with the following tasks:
  * T2.1 Questionnaire targeting women entrepreneurship ecosystems
  * T2.2 Questionnaire targeting women web entrepreneurs
* T2.2 in-depth interviews, identified based on mapping the initiatives
* T2.5 face-to-face interviews with the participants of the events, in order to gain knowledge on the added value of WeHubs for them
* T3.5 ideas competition for women:
  * the reach-out to potential applicants will be guaranteed through the mapping actions of T2.2
This DMP will be updated during the lifetime of the project and, if needed,
more activities which are sensitive in terms of data management might be
added.
# Introduction
A Data Management Plan (DMP) describes the data management life cycle for all
data sets that will be collected, processed or generated by a research
project. The document aims at outlining the way the consortium will collect
and manage data and information from the involved stakeholders. It should
contain:
* The nature of the information gathered;
* The methodology used for the collection;
* The procedure used for the analysis;
* The main findings and conclusion;
* The way findings and data will be shared and preserved.
The DMP is not a fixed document, as it evolves during the lifetime of the
project. The consortium foresees providing an updated version at the end of
the first year of the project and, in any case, once the main research
conducted in the project framework has been completed.
The main findings that will be subject to this DMP refer to D2.1 – needs
analysis of local ecosystems and women web entrepreneurs, leading to the
creation of the WeHubs Gender Scorecard.
The abovementioned needs analysis refers to tasks 2.1, 2.2 and 2.5 of the
description of action. These activities require highlighting excellent
practices and biographies, but also the gender dimension and indicators of
starting up and growing a digital business. The report will describe the
various web entrepreneurship ecosystems across Europe and analyse their
needs: state of the art, good practices, criticalities, knowledge & tools
gaps and needs.
# Data set
## Description of the nature of the information
The main set of information to be collected and analysed in the framework of
the project will come from WP2 – setting up a European Network of women web
entrepreneurs hubs. In Task 2.1 stakeholders from the business incubators
networks and organizations providing support to start ups will be mapped and
interviewed for an assessment of current offers and how they could be made
more gender sensitive. Respondents will be identified based on mapping the
existing networks and other EU level research and mapping initiatives. In Task
2.2, individual start-uppers or women joining teams, at various stages of
business development, will be interviewed, including some with failed
experiences, in order to highlight gaps and hindering factors. Specific
women's experiences,
needs, and difficulties they had to face in the various steps of business
start-up will be collected sensitively by experienced interviewers and with
their permission within the context of better understanding the reasons for
failure. Part of the task 2.2 are also in depth interviews. This detailed
approach thanks to depth interviews will help us to set the ground both to the
design and provision of services to women web entrepreneurs and the ecosystem
stakeholders. In Task 2.5. Face-to face interviews to the participants of the
events will be conducted. Finally, in Task 3.5 we will reach out to potential
applicants through the mapping actions T2.2 in order to award the ideas
competition for women.
As agreed with the PO and stated in the DoA, for all the foreseen research
activities the WeHubs consortium will apply the following procedures which
will be better specified and defined:
1. Non-disclosure and privacy protection statements complying with EU and national legislation will be signed by task leaders and WP leaders and privacy/non-disclosure statements and information on data collection and data treatment will be made available to all respondents to all research actions.
2. Collected data will be: textual answers to an online survey; audio taped interviews and/or textual in depth interviews (T. 2.1, 2.2 and 2.5); pictures, audio and/or videotaped interviews and/or pictures (T 4.2)
3. All collected data will be treated anonymously and identified numerically in T 2.1, 2.2 and 2.5 (a minimal sketch of this step follows this list).
4. Task leaders will be responsible and accountable for data storage and treatment after the end of the project. Collected raw information in T 2.1 and 2.2 (texts and audio files of interviews as well as answers to questionnaires) will not be published online and/or made available to the public anyway.
5. For the specific release and dissemination of success stories in T 4.2 identified respondents will be involved in the contents design of their ‘story’ and will have to provide written and signed authorization to the release and publication of any audio/video/textual contents related to their own personal experiences.
6. All original, irreplaceable electronic project data and electronic data from which individuals might be identified will be stored on Task Leaders supported media and secure server space or similar; such data will never be stored on temporary storage media nor cloud services, unless fully compliant with national and EU regulation on data protection.
7. All research data will be accessible to partners of the project being in charge of the specific research action, as well as the Ethics auditors.
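As announced in point 3 above, the following is a minimal sketch of the
anonymisation step, assuming survey answers are exported as one record per
respondent; the field names and the key-file path are hypothetical and for
illustration only.

```python
import csv
from pathlib import Path

def anonymize(responses, key_path="restricted/id_key.csv"):
    """Replace name/email with a numeric ID; keep the mapping in a restricted file."""
    Path(key_path).parent.mkdir(parents=True, exist_ok=True)
    key_rows, anonymized = [], []
    for n, r in enumerate(responses, start=1):
        # The identification key stays in restricted storage, never published.
        key_rows.append({"id": n, "name": r["name"], "email": r["email"]})
        anonymized.append({"id": n, **{k: v for k, v in r.items()
                                       if k not in ("name", "email")}})
    with open(key_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "name", "email"])
        writer.writeheader()
        writer.writerows(key_rows)
    return anonymized

if __name__ == "__main__":
    out = anonymize([{"name": "A. B.", "email": "[email protected]", "q1": "yes"}])
    assert out == [{"id": 1, "q1": "yes"}]
```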
At the moment, and since the project started, partners are carrying out T2.1
and T2.2, and the first version of D1.2 will refer mainly to the results of
T2.1.
## Methodology of collection and analysis
The instrument chosen for collecting the set of information in both T2.1 and
T2.2 is an online questionnaire available on the WeHubs website (please find
the two questionnaires in the Annex).
Concerning the T2.1 Questionnaire, we have organized the questions in several
segments:
* Name and status of the node and the origin of its revenues. Basically, we are looking to understand the impact of private initiative and the influence of public money on their women-oriented offerings.
* The kind of services the node offers, trying to understand its core activity, because many nodes have differentiated themselves and are trying to innovate in new activities to attract start-ups. It is important to understand their global business model and see if, how and how much women are part of this business model (what it costs to attract women, which action or service best fits which segment of women, and what return these organizations see in relation to women).
* We have asked several questions to try to understand which stage of web start-up development they are interested in addressing with their services, and on which criteria they select the start-ups they would support. We try to understand at what stage of a start-up's development women would need more support to become web entrepreneurs.
* Having a better overview of the different services offered will give us a better understanding of the dynamics of the ecosystems regarding women web entrepreneurs and of which levers can be triggered in the local ecosystems to attract more women into web entrepreneurship.
* We have dedicated a specific set of questions to the actions/offerings concerning women more directly, trying to understand the motivations of the organization, their approach to evaluating results, their strengths regarding women entrepreneurship, what they think is needed to improve women entrepreneurship, and what they are willing to do in that direction.
* We also ask the organizations what main obstacles they see to women entrepreneurship in their day-to-day activities and how to solve them through individual, local or more global actions.
Our focus in Task 2.1 is to understand how the nodes in an ecosystem address
women entrepreneurs' needs when they offer a set of services, resources,
facilities and different kinds of aid to women entrepreneurs to accomplish
their projects.
For the analysis of the questionnaires we have proceeded with a bottom-up
approach. This approach meant we tackled the nodes through a survey at the
highest level of granularity, to understand and characterize their activities
regarding web entrepreneurs. The approach is based on complementing the
survey's data with observations, experience, and identification of the real
practices in the field among the nodes, the stakeholders of the start-up
ecosystems.
Regarding Task 2.2 and the survey targeting women web entrepreneurs, the
research tools used were a questionnaire administered through an online
survey and in-depth interviews.
_Table n°1._
<table>
<tr>
<th>
_**Method and technical support** _
</th>
<th>
_**Aim** _
</th>
<th>
_**Target** _
</th>
<th>
_**Achieved target by 30/09/2015** _
</th> </tr>
<tr>
<td>
Questionnaire (multiple choices + open questions) – _respondents asked to
identify themselves and fill in name/surname and emails._
Operated online through
Google Forms
</td>
<td>
Draw an overall picture of how women are experiencing web entrepreneurship and
support services in Europe (no statistical sampling)
</td>
<td>
Reach out to 600, collecting minimum
450 answers
</td>
<td>
137 complete answers; valid ones: 95.
</td> </tr>
<tr>
<td>
In depth interviews
Held on the phone-skype or GoToMeeting and
audiorecorded
</td>
<td>
Collecting more detailed and fine grained individual experiences of women web
entrepreneurs as well as suggestions on how to make the European start up
scene and ecosystems more gender inclusive.
</td>
<td>
Min. 50
</td>
<td>
29
</td> </tr> </table>
The questionnaire has been structured into 4 sections, for a total of 46
questions (see _Full Questionnaire_ _at this link_). Design of the
questionnaire has been guided mainly by two intentions:
1. using some indicators already present within the GEM Report on Women Entrepreneurs, as highlighted in Chapter 1, in order to try and assess peculiarities of web entrepreneurship when compared to already existing cross-sectoral studies on entrepreneurship
2. aligning to categories and terminology already used in the T 2.1 Survey, with the goal of making the analysis of women’s need an interesting knowledge source for triggering reflections and debate within the ecosystems communities and business support organizations in particular and stimulate the setting up of the WeHubs Network.
* The first section “ _Your business story and motivation_ ” (Q3 to Q12) aims at gathering information on funding sources (Q3; Q4), eventual previous mergers, restructuring or termination of previous companies (Q5; Q7), main motivations driving women web entrepreneurs (Q6; Q8), self confidence perception and fear of failure as well as the importance of referring to other women as role models.
* The second section of the questionnaire “ _Your experience with support services to entrepreneurs”_ has been designed in view of assessing women’s feedback on services in startup ecosystems, from difficulties in identifying the right organizations for any specific needs they had (Q15), to perceptions of the quality of services they accessed (Q13; Q14; Q16), the degree the same services have actually met their needs (Q18), and if /any of those were targeted at women only asking for a title-description (open question Q19). The typology of structures and services has been kept the same as for 2.1 survey, incorporating some elements from EBN internal membership survey for Business Support Organizations.
* In addition, we have asked respondents to express their opinions about the capacity of services to startups and digital businesses to reach out to women (Q20), the suitability of offered services in terms of work life balance and the availability of family friendly services. We have also focused on perceived usefulness of possible gender oriented changes into startup ecosystems
(Q23; Q24), based on the WeHubs Conceptual Map for gendered transformations
(See Chapter 1, Figure 1), in order to test them against women web
entrepreneurs’ opinions before using them as guiding dimensions for the WeHubs
Scorecard. We also aimed at collecting good examples and proposals for
“services, measures, tools, campaigns, media actions for supporting women’s
start-up” from women themselves (Q25; Q26).
* The third section, titled "Your experience as a woman in web entrepreneurship", was aimed at assessing some of the gender dimensions of web entrepreneurship: to this purpose a first question was dedicated to identifying the main challenges in business creation (Q27), continuing with the experience of gender-biased treatment (Q28), its frequency (Q29), and the agents behind it (Q30), including descriptions of direct experiences (Q31). Furthermore, this section explored work-life balance issues and the reasons why this is or is not a problematic area for women web entrepreneurs (Q32).
* In the fourth section “Your Business” we asked for basic background information about businesses’ age and sectors (Q34; Q33 respectively), growth rate in the last business year (Q35) team and staff composition (Q36; Q37; Q38; Q39) and geographical market scope (Q40).
* Finally the last section was dedicated to respondents’ demographics asking for place of birth/residency for identifying -migration-mobility factors (Q41; Q42), education levels and household features (Q44; Q45).
* Based on the same dimensions and indicators, in depth interviews had the objective of getting more fine grained knowledge about women’s experiences in starting up and running web businesses, views and experiences on the ecosystems, their services and proposals for improvement, collecting insiders’ opinion about the lack of women among web entrepreneurs as well as concrete suggestions- on how to promote change (see attached structure for the interview Annex II).
Other Tasks where data collection is extensively foreseen are T.2.5 which
foresees in presence interviews with participants to events and T4.2 based on
video interviews with selected successful entrepreneurs.
Moreover, the WeHubs consortium ensures that all the information provided
through the questionnaire will be kept anonymous, data will be treated and
shared by partners responsible for this action only. All collected data will
be published in an aggregated way on the project's website and fully respect
the privacy and data protection rights, according to EU regulations complying
with Directive 95/46 EC and Ethics guidelines in data storage and treatment
within H2020.
## Findings
Findings of T2.1 refer to what services entrepreneurship ecosystems offer, to
whom, when, how they manage their resources and how they reach their goals,
whether financial or not. Data were analyzed and presented in an aggregated
way, fully anonymized, in D2.1.1.
Findings from T2.2 explored the needs of women web entrepreneurs also in
regard to their experiences with business support organizations and their
perceptions about how to make ecosystems more women friendly. Data were
anonymized and analyzed in an aggregate fashion in D2.1.2, and made available
on the WeHubs Web Site in a summarized version through 4 main infographics.
Findings from T2.5 will be part of D.2.4 Sustainability Plan.
Findings from the success stories video interviews will be part of the D.4.2
Success Stories Report and the video interviews will be showcased on line on
the WeHubs Website.
# Standards and metadata
Together with the analysis of the questionnaires, Partners reviewed existing
document and literature on the topics. A this stage, and mainly for the
analysis of T2.1, partners reviewed:
* EU Entrepreneurship Action Plan 2020
* Statistical Data on Women Entrepreneurs in Europe, September 2014, EU commission
* Evaluation on Policy: Promotion of Women Innovators and Entrepreneurship, July 2008, EU commission
* The Economist: Tech Startup, a Cambrian Moment
* The Accelerator Assembly
* The Startup Genome report (2)
For T2.2 a comprehensive literature review on the issues at stake was
conducted, which was made available in the references list of D2.1.2.
Among the most relevant documents providing inputs for elaborating qualitative
indicators we can mention:
* Ahl, H. (2006). Why research on women entrepreneurs needs new directions. _Entrepreneurship Theory and Practice_, _30_(5), 595-621.
* European Commission (2013b). Study on Women Active in the ICT Sector. _Publications Office of the European Union_, Luxembourg.
* European Commission (2008). Evaluation on Policy: Promotion of Women Innovators and Entrepreneurship. _DG Enterprise and Industry_ , Brussels.
* European Commission (2004). Promoting Entrepreneurship amongst women. _Best Report n°2, DG Enterprise_ , Brussels
* Hughes, K.D. Jennings, J. Carter, S. & J. Brush (2012). Extending Women's Entrepreneurship Research in New Directions. _Entrepreneurship Theory & Practice, _ vol. 36, Issue 3, pp. 429-442 _._
* Kelley, D. J., Brush, C.G., Greene, P. G. & J. Litovsky (2012). Global Entrepreneurship
Monitor. _2012 Women’s Report, Global Entrepreneurship Research Association_ .
Retrieved from _www.gemconsortium.org_
* OECD (2014). Enhancing women’s economic empowerment and business leadership in
OECD countries. _OECD Publication_
# Data sharing
Data will be publicly shared as aggregated information about the main
findings and answers in deliverable D2.1. The consortium also has access to
the individual answered questionnaires via an online tool (Google Forms)
until the end of the project.
While working on the reports, partners share the answers in a shared Dropbox
folder, which is the main project management tool for sharing materials and
serves as a document repository. The online Excel sheet containing all
replies to the questionnaires and the related contacts was/is available to
Task Managers at the related Google Forms link, and it will be kept open in
the upcoming months to increase the number of respondents. Downloaded Excel
files with updated questionnaires were shared among all partners in the
common Dropbox folder.
The audiotaped in-depth interviews were and still are stored, among partners
only, in the shared Dropbox folder.
# Archiving and preservation
The consortium at this stage is planning to make an electronic copy of the
answers at the end of the project and delete all online information from
Google Forms. Further solution will be discussed with the project officer
during the project lifetime.
# Conclusions
As anticipated, this version of the DMP is a draft that will be finalized
once the consortium has a clearer idea of the amount of findings in its
possession and of the complete analysis.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0909_CRACKER_645357.md
|
# Executive Summary
This document describes the Data Management Plan (DMP) adopted within CRACKER
and provides information on CRACKER’s data management policy and key
information on all datasets that have been and will be produced within
CRACKER, as well as resources developed by the “Cracking the language barrier”
federation of projects (also known as the “ICT-17 group of projects”) and
other projects who wish to follow a common line of action, as provisioned in
the CRACKER Description of Action.
This second version includes the principles according to which the plan is
structured, the standard practices for data management that are being
implemented, and the description of the actual datasets produced within
CRACKER. The final update of the CRACKER DMP document will be provided in M36
(December 2017).
The document is structured as follows:
* Background and rationale of a DMP within H2020 (section 2)
* Implementation of the CRACKER DMP (section 3)
* Collaboration of CRACKER with other projects and initiatives (section 4)
* Recommendations for a harmonized approach and structure for a Data Management Plan to be optionally adopted by the “Cracking the language barrier” federation of projects (section 5).
# Background
The use of a Data Management Plan (DMP) is required for projects participating
in the Open Research Data Pilot, which aims to improve and maximise access to
and re-use of research data generated by projects. The elaboration of DMPs in
Horizon 2020 projects is specified in a set of guidelines applied to any
project that collects or produces data. These guidelines explain how projects
participating in the Pilot should provide their DMP, i.e. detail the types of
data that will be generated or gathered during the project and after it is
completed, the metadata and standards that will be used, the ways in which
these data will be exploited and shared for verification or reuse, and how
they will be preserved.
In principle, projects participating in the Pilot are required to deposit the
research data described above, preferably into a research data repository.
Projects must then take measures, to the extent possible, to enable third
parties to access, mine, exploit, reproduce and disseminate this research
data free of charge.
The guidance for DMPs calls for clarifications and analysis regarding the main
elements of the data management policy within a project. The respective
template identifies in brief the following five coarse categories 1 :
1. **Data set reference and name** : an identifier for the data set; use of a standard identification mechanism to make the data and the associated software easily discoverable, readily located and identifiable.
2. **Data set description** : details describing the produced and/or collected data and associated software and accounting for their usability, documentation, reuse, assessment and integration (i.e., origin, nature, volume, usefulness, documentation/publications, similar data, etc.).
3. **Standards and metadata** : related standards employed or metadata prepared, including information about interoperability that allows for data exchange and compliance with related software or applications.
4. **Data sharing** : procedures and mechanisms enabling data access and sharing, including details about the type or repositories, modalities in which data are accessible, scope and licensing framework.
5. **Archiving and preservation (including storage and backup)** : procedures for long-term preservation of the data including details about storage, backup, potential associated costs, related metadata and documentation, etc.
# The CRACKER DMP
## Introduction and Scope
For its own datasets, CRACKER follows META-SHARE's
(_http://www.meta-share.eu/_) best practices for data documentation,
verification and distribution, as well as for curation and preservation,
ensuring the availability of the data throughout and beyond the runtime of
CRACKER and enabling access, exploitation and dissemination, thereby also
complying with the standards of the Open Research Data Pilot 2 .
META-SHARE is a pan-European infrastructure bringing together online the
providers and consumers of language data, tools and services. It is organized
as a network of repositories that store language resources (data, tools and
processing services) documented with high-quality metadata, aggregated in
central inventories allowing for uniform search and access. It serves as a
component of a language resource marketplace for researchers, developers,
professionals and industrial players, catering for the full development cycle
of language resources and technology, from research through to innovative
products and services [Piperidis, 2012].
Language resources in META-SHARE span the whole spectrum from monolingual and
multilingual data sets, both structured (e.g., lexica, terminological
databases, thesauri) and unstructured (e.g., raw text corpora), as well as
language processing tools (e.g., part-of-speech taggers, chunkers, dependency
parsers, named entity recognisers, parallel text aligners, etc.). Resources
are described according to the META-SHARE metadata schema [Gavrilidou et al.
2012], catering in particular for the needs of the HLT community, while the
META-SHARE model licensing scheme has a firm orientation towards the creation
of an openness culture respecting, however, legacy and less open, or
permissive, licensing options.
META-SHARE has been in operation since 2012, and it is currently in its 3.0.3
version, released in May 2016. It currently features 29 repositories set up
and maintained by 37 organisations in 25 countries of the EU. The observed
usage as well as the number of nodes, resources, users, queries, views and
downloads are all encouraging and considered as supportive of the choices made
so far [Piperidis et al., 2014]. Resource sharing in CRACKER will build upon
and extend the existing META-SHARE resource infrastructure, its specific MT-
dedicated repository ( _http://qt21.metashare.ilsp.gr_ ) as well as editing
and annotation tools in support of translation evaluation and translation
quality scoring (e.g., _http://www.translate5.net/_ ).
This infrastructure, together with its bridges, provides support mechanisms
for the identification, acquisition, documentation and sharing of MT-related
data sets and language processing tools.
## Dataset Reference and Name
CRACKER opts for a standard identification mechanism to be employed for each
data set, in addition to the identifier used internally by META-SHARE itself.
Reference to a dataset ID can optionally be made with the use of an ISLRN
(_International Standard Language Resource Number_), the most recent
universal identification schema for LRs, which provides LRs with unique
identifiers using a standardized nomenclature, ensuring that LRs are
identified, and consequently recognized, with proper references (cf. Figures
1 and 2).
**Figure 1. An example resource entry from the ISLRN website indicating the
resource metadata, including the ISLRN,
_http://www.islrn.org/resources/060-785-139-403-2/_.**
**Figure 2. Examples of resources with the ISLRN indicated, from the ELRA
(left) and the LDC (right) catalogues.**
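As an illustration, the format of an ISLRN such as the one in Figure 1 can be
checked mechanically. The sketch below validates only the digit grouping
(3-3-3-3-1, as in 060-785-139-403-2); it does not implement the ISLRN
check-digit algorithm.

```python
import re

# Format check only: four groups of three digits plus a final digit.
ISLRN_RE = re.compile(r"^\d{3}-\d{3}-\d{3}-\d{3}-\d$")

def looks_like_islrn(value: str) -> bool:
    """Return True if the string matches the ISLRN digit grouping."""
    return ISLRN_RE.match(value) is not None

if __name__ == "__main__":
    assert looks_like_islrn("060-785-139-403-2")
    assert not looks_like_islrn("060785139403-2")
```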
## Dataset Description
In accordance with META-SHARE, CRACKER is addressing the following resource
and media types:
* **corpora** (text, audio, video, multimodal/multimedia corpora, n-gram resources),
* **lexical/conceptual resources** (e.g., computational lexicons, ontologies, machine-readable dictionaries, terminological resources, thesauri, multimodal/ multimedia lexicons and dictionaries, etc.)
* **language descriptions** (e.g., computational grammars)
* **technologies** (tools/services) that can be used for the processing of data resources
Several datasets that have been and will be produced (test data, training
data) by the WMT, IWSLT and QT Marathon events and, later on, extended with
information on the results of their respective evaluation and benchmarking
campaigns (documentation, performance of the systems etc.) will be documented
and made available through META-SHARE.
A list of CRACKER resources with brief descriptive information is provided
below. This list is only indicative of the resources to be included in CRACKER
and more detailed information and descriptions will be provided in the course
of the project.
### R#1 WMT Test Sets
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT Test Sets
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
The core languages are German-English and Czech-English; other guest language
pairs will be introduced each year.
For 2015 the guest language was Romanian. We also included Russian, Turkish
and Finnish, with funding from other sources.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The source data are crawled from online news sites and carry the respective
licensing conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
For tuning and testing MT systems.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
3000 sentences per language pair, per year.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the test sets for the WMT shared translation task. They are small
parallel data sets used for testing MT systems, and are typically created by
translating a selection of crawled articles from online news sites.
WMT15 test sets are available at _http://www.statmt.org/wmt15/_
WMT16 test sets are available at
_http://data.statmt.org/wmt16/translation-task/test.tgz_
</td> </tr> </table>
### R#2 WMT Translation Task Submissions
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT Translation Task Submissions
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
They match the languages of the test sets.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Preferably CC BY 4.0.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Research into MT evaluation. MT error analysis.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
The 2015 tarball is 25M
The 2016 tarball is 44M
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the submissions to the WMT translation task from all teams. We
create a tarball for use in the metrics task, but it is available for future
research in MT evaluation.
The WMT15 version is available at _http://www.statmt.org/wmt15/_ The WMT16
version is available at
_http://data.statmt.org/wmt16/translation-task/wmt16-submitted-datav2.tgz_
</td> </tr> </table>
### R#3 WMT Human Evaluations
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT Human Evaluations
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Pairwise rankings of MT output.
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Numerical data (in csv)
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
N/a
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Preferably CC BY 4.0
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
In conjunction with the WMT Translation Task Submissions, this can be used for
research into MT evaluation.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
For 2014, it was 0.5MB
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the pairwise rankings of the translation task submissions.
The WMT15 versions are available at _http://www.statmt.org/wmt15/_
The WMT16 versions will be available at _http://www.statmt.org/wmt16/_ .
They will be made available in time for the workshop in August 2016.
</td> </tr> </table>
### R#4 WMT News Crawl
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT News Crawl
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English, German, Czech plus variable guest languages.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The source data are crawled from online news sites and carry the respective
licensing conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Building MT systems
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
For 2014, it was 5.3G (compressed)
The WMT16 version was 4.8G
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
This data set consists of text crawled from online news, with the html
stripped out and sentences shuffled.
For WMT15 it is available at _http://www.statmt.org/wmt15/_
For WMT16 it is available at
_http://data.statmt.org/wmt16/translationtask/training-monolingual-news-
crawl.tgz_
</td> </tr> </table>
### R#5 Quality Estimation Datasets
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
Quality Estimation Datasets
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Bilingual corpora labelled for quality at phrase-level
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
German-English, English-German and one of the challenging language pairs
addressed in WMT (either Romanian or Latvian)
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Creative Commons
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Other researchers working on quality estimation or evaluation of machine
translation
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
At least 1,000 machine translations will be annotated for quality to train and
test quality estimation systems for each language pair.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
The corpus will consist of source segments in English, their machine
translation, a segmentation of these translations into phrases and a binary
score given by humans indicating the quality of these phrases.
</td> </tr> </table>
### R#6 WMT 2016 Automatic Post-editing data set
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2016 Automatic Post-editing data set
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English to German
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
TAUS Terms of Use
(https://lindat.mff.cuni.cz/repository/xmlui/page/licence-TAUS_QT21). TAUS
grants to QT21 User access to the WMT Data Set with the following rights:
i) the right to use the target side of the translation units in a commercial
product, provided that the QT21 User may not resell the WMT Data Set as if it
were its own new translation;
ii) the right to make Derivative Works; and
iii) the right to use or resell such Derivative Works commercially and for
the following goals: i) research and benchmarking; ii) piloting new
solutions; and iii) testing of new commercial services.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Training of Automatic Post-editing and Quality Estimation components
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
1294 kb
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Training, development and test data (the same used for the Sentence-level
Quality Estimation task) consist of English-German triplets (_source_,
_target_ and _post-edit_) belonging to the Information Technology domain and
already tokenized. Training and development respectively contain 12,000 and
1,000 triplets, while the test set contains 2,000 instances. Target sentences
are machine-translated with the KIT system. Post-edits are collected by
Text&Form from professional translators. All data is provided by the EU
project QT21 (_http://www.qt21.eu/_).
</td> </tr> </table>
## Standards and Metadata
CRACKER follows META-SHARE’s best practices for data documentation. The basic
design principles of the META-SHARE model have been formulated according to
specific needs identified, namely: (a) a typology for language resources (LR)
identifying and defining all types of LRs and the relations between them; (b)
a common terminology with as clear semantics as possible; (c) minimal schema
with simple structures (for ease of use) but also extensive, detailed schema
(for exhaustive description of LRs); (d) interoperability between descriptions
of LRs and associated software across repositories.
In answer to these needs, the following design principles were formulated:
* expressiveness, i.e., cover any type of resource;
* extensibility, allowing for future extensions and catering for combinations of LR types for the creation of complex resources;
* semantic clarity, through a bundle of information accompanying each schema element;
* flexibility, by employing both exhaustive and minimal descriptions;
* interoperability, through mappings to widely used schemas (DC, Clarin Concept Registry (which has taken over the ISOcat DCR)).
The central entity of the META-SHARE ontology is the Language Resource. In
parallel, LRs are linked to other satellite entities through relations,
represented as basic elements. The interconnection between the LR and these
satellite entities depicts the LR's lifecycle from production to use:
reference documents related to the LR (papers, reports, manuals etc.),
persons/organizations involved in its creation and use (creators, distributors
etc.), related projects and activities (funding projects, activities of usage
etc.), accompanying licenses, etc. CRACKER will follow these standard
practices for data documentation, in line with their design principles of
expressiveness, extensibility, semantic clarity, flexibility and
interoperability.
The META-SHARE metadata can also be represented as linked data following the
work being done in Task 3.3 of the CRACKER project, the LD4LT group
(https://www.w3.org/community/ld4lt/), and the LIDER project, which has
produced an OWL version of the META-SHARE metadata schema
(http://purl.org/net/def/metashare). Such representation can be generated by
the mapping process initiated by the above tasks and initiatives.
As an example, a subset of the META-SHARE metadata records has been converted
to Linked Data and is accessible via the Linghub portal (
_http://linghub.liderproject.eu_ ).
Included in the conversion process to OWL was the legal rights module of the
METASHARE schema ( _http://purl.org/NET/ms-‐rights_ ), taking into account
the ODRL model & vocabulary v.2.1
(https://www.w3.org/community/odrl/model/2.1/).
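To illustrate how such a Linked Data representation could be consumed, the
sketch below queries a SPARQL endpoint for resource titles and languages. This
is a minimal sketch only: the endpoint path and the use of plain Dublin Core
terms are assumptions, and the actual Linghub interface and OWL META-SHARE
vocabulary may differ.

```python
# A minimal sketch, assuming a SPARQL endpoint and Dublin Core terms;
# the endpoint path and property choices are illustrative, not confirmed.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = "http://linghub.liderproject.eu/sparql"  # assumed endpoint path
sparql = SPARQLWrapper(endpoint)
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?resource ?title ?language WHERE {
        ?resource dct:title ?title ;
                  dct:language ?language .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["resource"]["value"], "-", binding["title"]["value"])
```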
## Data Sharing
As stated above, resource sharing will build upon META-SHARE. CRACKER will maintain
and release an improved version of the META-SHARE software.
For its own data sets, CRACKER will continue to apply, whenever possible, the
permissive licensing and open sharing culture which has been one of the key
components of META-SHARE for handling research data in the digital age.
Consequently, for the MT/LT research and user communities, sharing of all
CRACKER data sets will be organised through META-SHARE. The metadata schema
provides components and elements that address copyright and Intellectual
Property Rights (IPR) issues, restrictions imposed on data sharing and also
IPR holders. These together with an existing licensing toolkit can serve as
guidance for the selection of the appropriate licensing solution and creating
the respective metadata. In parallel, ELRA/ELDA has recently implemented a
licensing wizard 3 , helping rights holders in defining and selecting the
appropriate license under which they can distribute their resources. The
wizard may be integrated into, or linked from, META-SHARE.
## Archiving and Preservation
All datasets produced will be provided and made sustainable through the
existing META-SHARE repositories, or new repositories that partners may choose
to set up and link to the META-SHARE network. Datasets will be locally stored
in the repositories’ storage layer in compressed format.
# Collaboration with Other Projects and Initiatives
CRACKER has created an umbrella initiative that includes all currently running
and recently completed EU-supported projects working on technologies for a
multilingual Europe, namely the Cracking the Language Barrier initiative 4 .
This federation of projects is set up around a short multi-lateral Memorandum
of Understanding (MoU) 5 .
The MoU contains a non-exhaustive list of general areas of collaboration, and
all projects and organisations that sign this document are invited to
participate in these collaborative activities.
At the time of writing (June 2016), the MoU has been signed by 10
organisations and 23 projects (including service contracts):
* _Organisations:_ CITIA, CLARIN, ELEN, EFNIL, GALA, LT-Innovate, META-NET, NPLD, TAUS, W3C.
* _Projects:_ ABUMATRAN, CRACKER, DLDP, ELRC, EUMSSI, EXPERT, Falcon, FREME, HimL, KConnect, KRISTINA, LIDER, LT_Observatory, MixedEmotions, MLi, MMT, MultiJEDI, MultiSensor, Pheme, QT21, QTLeap, SUMMA, XLiMe
Additional organisations and projects have been approached for participation
in the initiative. The group of members is constantly growing.
# Recommendations for Harmonised DMPs for the ICT-17 Federation of Projects
One of the areas of collaboration included in the CRACKER MoU refers to data
management and repositories for data, tools and technologies; thus, all
projects and organisations participating in the initiative are invited to join
forces and to collaborate on harmonising data management plans (metadata, best
practices etc.) as well as data, tools and technologies distribution through
open repositories.
At the kick-off meeting of the ICT-17 group of projects on April 28, 2015,
CRACKER offered support to the “Cracking the language barrier” federation of
projects by proposing a Data Management Plan template with shared key
principles that can be applied, if deemed helpful, by all projects, again
advocating an open sharing approach whenever possible (see also D1.2). This
plan has been included in the overall communication plan, and it will inform
the working group that will maintain and update the roadmap for European MT
research.
In future face-to-face or virtual meetings of the federation, we propose to
discuss the details about metadata standards, licenses, or publication types.
Our goal is to prepare a list of planned tangible outcomes of all projects,
i.e., all datasets, publications, software packages and any other results,
including technical aspects such as data formats. We would like to stress that
the intention is not to provide the primary distribution channel for all
projects’ data sets but to provide, in addition to the channels foreseen in
the projects’ respective Descriptions of Actions, one additional, alternative
common distribution platform and approach for metadata description for all
data sets produced by the “Cracking the language barrier” federation of
projects.
<table>
<tr>
<th>
**In this respect, the activities that the participating projects may
optionally undertake are the following:**
1. Participating projects may consider using META-SHARE as an additional, alternative distribution channel for their tools or data sets, using one of the following options:
1. projects may set up a project or partner specific META-SHARE repository, and use either open or even restrictive licences;
2. projects may join forces and set up one dedicated “Cracking the language barrier” META-SHARE repository to host the resources developed by all participating projects, and use either open or even restrictive licences (as appropriate).
2. Participating projects may wish to use the META-SHARE repository software 6 for documenting their resources, even if they do not wish to link to the network.
</th> </tr> </table>
As mentioned above, the collaboration in terms of harmonizing data management
plans and recommending distribution through open repositories forms one of the
six areas of collaboration indicated in the _“Cracking the Language Barrier”
MoU_ . Participation in one or more of the potential areas of collaboration in
this joint community activity is optional.
An example of a harmonised DMP is that of the _FREME_ project. FREME signed the
corresponding Memorandum of Understanding and is participating in this
initiative. As part of the effort, FREME will make available its metadata from
existing datasets that are used by FREME, using a combined metadata scheme:
this covers both the META-SHARE template provided by CRACKER, as well as the
DataID schema 7 . FREME will follow both META-SHARE and DataID practices for
data documentation, verification and distribution, as well as for curation and
preservation, ensuring the availability of the data and enabling access,
exploitation and dissemination. Further details as well as the actual dataset
descriptions have been documented in the FREME Data Management Plan 8 . See
section 3.1.2 of that plan for an example of the combined approach.
## Recommended Template of a DMP
As pointed out already, the collaboration in terms of harmonizing data
management plans is considered an important aspect of convergence within the
group of projects. In this respect, any project that is interested in and
intends to collaborate towards a joint approach for a DMP may follow the
proposed structure of a DMP template. The following section describes a
recommended template, while the previous section (3) has provided a concrete
example of such an implementation, i.e. the CRACKER DMP. It is, of course,
expected that any participating project may accommodate its DMP content
according to project-specific aspects and scope. These DMPs are also expected
to be gradually completed as the project(s) progress into their
implementation.
<table>
<tr>
<th>
**I. The ABC Project DMP**
i. **Introduction/Scope**
ii. **Data description**
iii. **Identification mechanism**
iv. **Standards and Metadata**
v. **Data Sharing**
vi. **Archiving and preservation**
</th> </tr> </table>
**Figure 3. The recommended template for the implementation and structuring of
a DMP.**
### Introduction and Scope
Overview and approach on the resource sharing activities underpinning the
language technology and machine translation research and development within
each participating project and as part of the “Cracking the language barrier”
initiative of projects.
### Dataset Reference and Name
It is recommended that a standard identification mechanism should be employed
for each data set, e.g., (a) a PID (Persistent Identifier as a long-lasting
reference to a dataset) or (b) _ISLRN_ (International Standard Language
Resource Number).
### Dataset Description
It is recommended that the following resource and media types are addressed:
* **corpora** (text, audio, video, multimodal/multimedia corpora, n-gram resources),
* **lexical/conceptual resources** (e.g., computational lexicons, ontologies, machine-readable dictionaries, terminological resources, thesauri, multimodal/ multimedia lexicons and dictionaries, etc.)
* **language descriptions** (e.g., computational grammars)
* **technologies** (tools/services) that can be used for the processing of data resources
To support the resource identification of the “Cracking the language barrier”
initiative and to obtain a first rough estimation of the resources’ number,
coverage and other core characteristics, CRACKER will circulate two templates
dedicated to datasets and associated tools and services respectively. Projects
that wish and decide to participate in this uniform cataloguing are invited to
fill in these templates with brief descriptions of the resources they estimate
to be produced and/or collected. The templates are as follows (also in the
Appendix):
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
Complete title of the resource
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Choose one of the following values:
Lexical/conceptual resource, corpus, language description (missing values can
be discussed and agreed upon with CRACKER)
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
The physical medium of the content representation, e.g., video, image, text,
numerical data, n-grams, etc.
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
The language(s) of the resource content
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The licensing terms and conditions under which the LR can be used
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
The medium, i.e., the channel used for delivery or providing access to the
resource, e.g., accessible through interface, downloadable, CD/DVD, hard copy
etc.
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Foreseen use of the resource for which it has been produced
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
Size of the resource with regard to a specific size unit measurement in form
of a number
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A brief description of the main features of the resource (including url, if
any)
</td> </tr> </table>
**Table 1. Template for datasets description**
<table>
<tr>
<th>
**Technology Name**
</th>
<th>
Complete title of the tool/service/technology
</th> </tr>
<tr>
<td>
**Technology Type**
</td>
<td>
Tool, service, infrastructure, platform, etc.
</td> </tr>
<tr>
<td>
**Function**
</td>
<td>
The function of the tool or service, e.g., parser, tagger, annotator, corpus
workbench etc.
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
The physical medium of the content representation, e.g., video, image, text,
numerical data, n-grams, etc.
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
The language(s) that the tool/service operates on
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The licensing terms and conditions under which the tool/service can be used
</td> </tr>
<tr>
<td>
**Distribution Medium**
</td>
<td>
The medium, i.e., the channel used for delivery or providing access to the
tool/service, e.g., accessible through interface, downloadable, CD/DVD, etc.
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Foreseen use of the tool/service for which it has been produced
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A brief description of the main features of the tool/service
</td> </tr> </table>
**Table 2. Template for technologies description**
### Standards and Metadata
Participating projects are recommended to deploy the META-SHARE metadata
schema for the description of their resources and provide all details
regarding their name, identification, format, etc.
Providers of resources wishing to participate in the initiative will be able
to request and get assistance through dedicated helpdesks on questions
concerning (a) the metadata-based LR documentation at
_[email protected]_ , (b) the use of licences, rights of use, IPR issues, etc. at
[email protected]_ and (c) the repository installation and use at
[email protected]_ .
### Data Sharing
It is recommended that all datasets (including all relevant metadata records)
produced by the participating projects be made available under licenses which
are as open and as standardised as possible, as well as established as best
practice. Any interested provider can consult the META-SHARE licensing options
and pose related questions to the respective helpdesk.
### Archiving and Preservation
As regards the procedures for long-term preservation of the datasets, two
options may be considered:
1. As part of the further development and maintenance of the META-SHARE infrastructure, a project that participates in the “Cracking the language barrier” initiative may opt to set up its own project or partner specific META-SHARE repository and link to the META-SHARE network, with CRACKER providing all support necessary in the installation, configuration and set up process.
2. Alternatively, one dedicated “Cracking the language barrier” META-SHARE repository can be set up to host the resources developed by all participating projects, with CRACKER catering for procedures and mechanisms enabling long-term preservation of the datasets.
It should be repeated at this point that following the META-SHARE principles,
the curation and preservation of the datasets, together with the rights of
their use and possible restrictions, are under the sole control and
responsibility of the data providers.
0911_ESiWACE_675191.md
## 1\. Executive Summary
The Data Management Plan (DMP) of ESiWACE gives an overview of available
research data, access and the data management and terms of use. The DMP
reflects the current state of the discussions, plans and ambitions of the
ESiWACE partners, and will be updated as work progresses.
## 2\. Introduction
**Why a Data Management Plan (DMP)?**
It is a well-known phenomenon that the amount of data is increasing while the
use and re-use of data to derive new scientific findings is more or less
stable. This does not imply, that the data currently unused are useless - they
can be of great value in future. The prerequisite for meaningful use, re-use
or recombination of data is that they are well documented according to
accepted and trusted standards. Those standards form a key pillar of science
because they enable the recognition of suitable data.
To ensure this, agreements on standards, quality level and sharing practices
have to be negotiated. Strategies have to be fixed to preserve and store the
data over a defined period of time in order to ensure their availability and
re-usability after the end of ESiWACE.
**What kind of data are considered in the DMP?**
The main purpose of a Data Management Plan (DMP) is to describe _Research
Data_ with the metadata attached to make them _discoverable_ , _accessible_ ,
_assessable_ , _usable beyond the original purpose_ and _exchangeable_ between
researchers.
According to the “Guidelines on Open Access to Scientific Publication and
Research Data in Horizon 2020” (2015) _:_
“ _Research data_ refers to information, in particular facts or numbers,
collected to be examined and considered and as a basis for reasoning,
discussion, or calculation. In a research context, examples of data include
statistics, results of experiments, measurements, observations resulting from
fieldwork, survey results, interview recordings and images. The focus is on
research data that is available in digital form."
However, the overall objective of ESiWACE is to improve the efficiency and
productivity of numerical weather and climate simulations on HPC systems by
enhancing the scalability of numerical models, fostering the usability of
community-wide tools and pursuing the exploitability of model output.
Thus ESiWACE focuses more on the production process and tools than on the
production of research or observation data, and so the amount of _Research
Data_ which ESiWACE intends to produce is limited, at least at this stage of
the project.
**What can be expected from ESiWACE DMP?**
In the following we will describe the lifecycle, responsibilities and review
processes and data management policies of research data, produced in ESiWACE.
The DMP reflects the current status of discussion within the consortium about
the data that will be produced. It is not a fixed document, but evolves during
the lifespan of the project.
The target audience of the DMP is all project members and research
institutions using the data and data produced.
## 3\. Register of numerical data sets generated or collected in ESiWACE
The register has to be understood as a living document, which will be updated
regularly during the project lifetime. The intention of the DMP is to describe
numerical model or observation datasets collected or created by ESiWACE during
the runtime of the project.
The information listed below reflects the conception and design of the
individual work packages at the beginning of the project. Because the
operational phase of the project started in January 2016, no datasets have
been generated or collected by the delivery date of this DMP.
The data register will deliver information according to Annex 1 of the Horizon
2020 guidelines (2015) (in _italics)_ :
* **Data set reference and name:** _Identifier for the data set to be produced._
* **Data set description:** _Descriptions of the data that will be generated or collected, its origin (in case it is collected), nature and scale and to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse._
* **Standards and metadata** _: Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created._
* **Data sharing** _: Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.)._
_In case the dataset cannot be shared, the reasons for this should be
mentioned (e.g. ethical, rules of personal data, intellectual property,
commercial, privacy-related, security-related)._
* **Archiving and preservation (including storage and backup)** : _Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered_
### 3.1 Datasets collected within WP1
<table>
<tr>
<th>
**WP1 Governance, Engagement and long-term sustainability**
</th>
<th>
</th> </tr>
<tr>
<td>
**What types of data will the project generate/collect?**
</td>
<td>
WP1 is not going to generate numerical data sets.
</td> </tr> </table>
### 3.2 Datasets collected within WP2
<table>
<tr>
<th>
**WP2 Scalability**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set reference and name**
</td>
<td>
EC-Earth model output and performance data
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
EC-Earth high-resolution model output will be generated for test runs.
Furthermore, performance data will be collected.
</td> </tr>
<tr>
<td>
</td>
<td>
Constraints: IFS data may not be used for commercial purposes.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Model output will be in NetCDF and GRIB.
No metadata is automatically generated by the model. CMIP6-compliant metadata
generation may become available during the course of the project (a minimal
sketch of attaching such metadata follows this table).
No quality check is applied automatically. If necessary, CMIP6-compliant
quality checking may be applied.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
EC-Earth model data and performance data will be shared (if useful):
* Within the ESiWACE project, particularly WP2
* Within the EC-Earth consortium
* Within the ENES community, particularly the IS-ENES2 project
Data sharing will generally be through access to the HPC systems or data
transfer to shared platforms.
If common experiments are run in the context of other projects (e.g.
PRIMAVERA, CMIP6), data publication may be through ESGF.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
**(including storage and backup)**
</td>
<td>
Long-term data storage will most likely not be needed for the data created in
this project, the exception being potential common experiments with other
projects. In the latter case, data storage will be provided by the respective
projects.
</td> </tr>
<tr>
<td>
**Reported by**
</td>
<td>
Uwe Fladrich ([email protected])
</td> </tr> </table>
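The EC-Earth entry above notes that model output is written in NetCDF without
automatically generated metadata. The sketch below illustrates how
CF/CMIP6-style global attributes could be attached to an existing NetCDF file
with the netCDF4 library; the file name, attribute values and contact address
are hypothetical, and the attribute list is only a small illustrative subset
of what full CMIP6 compliance requires.

```python
# A minimal sketch, assuming the netCDF4 library and a hypothetical file;
# all attribute values below are illustrative, not actual EC-Earth metadata.
from netCDF4 import Dataset

with Dataset("ec_earth_test_run.nc", "a") as ds:  # open for in-place edits
    ds.Conventions = "CF-1.6"                        # CF convention version
    ds.source = "EC-Earth high-resolution test run"  # data origin
    ds.institution = "EC-Earth consortium"
    ds.experiment_id = "esiwace-scalability-test"    # hypothetical identifier
    ds.contact = "[email protected]"            # hypothetical contact
```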
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
BSC Performance Analysis
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
In WP2, BSC will carry out performance analysis and modifications to the
source code of the earth system models so that they can run under other
programming models (such as OmpSs).
While the modified model code is not data to be described here, the
performance analysis will produce trace outputs that contain the information
of an execution of the model. In this case, size can be a constraint: on
many-core systems, the traces generated by a complex model can be very large
(more than hundreds of gigabytes), which can make sharing this information
between partners a problem. The integration and reuse of this information
would not be a problem if the different actors agree in advance on the tools
to be used in these performance analyses.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
All the tools to trace executions provide information about the format of the
outputs and how to read them. Moreover, some of these tools can convert
formats to improve the compatibility.
Data can be in a raw binary or text format. In this last case, CSV or XML are
usual formats to deal with the information.
In the case of Paraver tool, in each trace there is a file describing which
events are in the trace. This file usually contains a code and a text
description for each event.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
For the traces, a repository allowing the distribution of big files must be
implemented. If the distribution is individual and sporadic, a solution like
an FTP server can fit the requirement. If we want to set up a repository with
all the traces for further analyses, another solution must be deployed. That
solution will have to classify data by model run, platform and configuration,
which can lead to a big number of different combinations.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
**(including storage and backup)**
</td>
<td>
Codes will be stored in the GitLab for as long as the partners consider it
convenient; but for the traces, due to the high volume of the data generated,
another strategy has to be designed. A long-term storage solution (such as
tapes) could be suitable: traces are usually a collection of big files well
suited to a tape archive.
</td> </tr>
<tr>
<td>
**Reported by**
</td>
<td>
Kim Serradell ([email protected])
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
IFS and OpenIFS model output.
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
IFS and OpenIFS model integrations will be run and standard meteorological and
computing performance data output will be generated. Both will be run at
ECMWF, and only performance data will be made available to the public. The
meteorological output will be archived in MARS, as it is standard research
experiment output. The data will be used for establishing research and test
code developments, and will enter project reports and generally accessible
publications.
The IFS will not be made available, OpenIFS is available through a dedicated
license.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
IFS meteorological output (incl. metadata) and format follows WMO standards.
Compute performance (benchmark) output will be stored and documented
separately. Data will be in ASCII and maintained locally. The output will be
reviewed internally, and the ECMWF facilities allow reproduction of this
output if necessary.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
All output can be shared within the ESiWACE consortium, and is primarily
located in the ECMWF archiving system MARS.
Data provision to the public is limited for meteorological output, and it
adheres to the ECMWF data policy. Access can be granted in individual cases.
Computing performance output can be made publicly available. This output can
be managed by the ESiWACE website.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
**(including storage and backup)**
</td>
<td>
As no large quantities of data will be produced, there are no requirements for
long-term data management. The experiment output is stored in MARS that is
backed up regularly.
Volumes and cost are negligible.
</td> </tr>
<tr>
<td>
**Reported by:**
</td>
<td>
Peter Bauer ([email protected])
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
WP2 will extend the benchmark suite for coupling technologies
</td> </tr>
<tr>
<td>
</td>
<td>
currently developed in IS-ENES2 to target new platforms with O(10K-100K) cores
accessible during the ESiWACE longer timeframe. OASIS, OpenPALM, ESMF, XIOS
and YAC will be considered.
Benchmark suites for I/O libraries and servers will have to be built from
scratch. The inter-comparison will include XIOS, ESMF and CDI-pio.
A subset of the results of these benchmarks for specific technologies on
specific computing platforms will be collected and made available as a
reference.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The data per se will be just text files containing numbers (e.g. the
communication time for a specific coupling exchange as a function of the
number of cores used to run the coupled components) and will not adhere to any
specific standard.
The metadata attached to the data will contain the revision number of the
benchmark sources that will be managed under SVN or GIT and a description of
the parameters tested for a specific set of results (e.g. number of cores,
number of coupling fields, etc.). The metadata will appear also as a text file
(in the form of a Readme file) available in the data directory.
The results of the benchmarks will be reviewed by the participating
IS-ENES2 partners and reported in ESiWACE D2.1
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
The benchmark sources (managed under SVN or GIT) and subset of results will be
freely accessible to all. The description of how to access the sources and
results will be available on the ESiWACE web site.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
**(including storage and backup)**
</td>
<td>
The subset of benchmark results and associated metadata will be uploaded to a
data centre (e.g. DKRZ) and assigned a standard data DOI. Specific subsets of
results will be curated and preserved for O(10) years as a reference for
people who want to run the benchmark themselves to compare against, and will
be regularly replaced by new subsets from new tests on new platforms.
</td> </tr>
<tr>
<td>
**Reported by:**
</td>
<td>
Sophie Valcke ([email protected])
</td> </tr> </table>
### 3.3 Datasets collected within WP3
<table>
<tr>
<th>
**WP3 Usability**
</th>
<th>
</th> </tr>
<tr>
<td>
**What types of data will the project generate/collect?**
</td>
<td>
WP3 is not going to generate typical numerical data sets; WP3 is going to
produce papers and reports, and to some extent software code.
</td> </tr> </table>
### 3.4 Datasets collected within WP4
<table>
<tr>
<th>
**WP4 (Exploitability)**
</th>
<th>
</th> </tr>
<tr>
<td>
**What types of data will the project generate/collect?**
</td>
<td>
WP4 (Task 4.3) will generate semantic mappings between metadata standards. The
mappings will be made available through a SPARQL server and curated at STFC
and ECMWF.
</td> </tr> </table>
### 3.5 Datasets collected within WP5
<table>
<tr>
<th>
**WP5 Management and Dissemination**
</th>
<th>
</th> </tr>
<tr>
<td>
**What types of data will the project generate/collect?**
</td>
<td>
WP5 is not going to generate numerical data sets.
</td> </tr> </table>
0912_QT21_645452.md
**1 Executive Summary** 1
This Data Management Plan (DMP) reports on the current state (as of month 6)
of the data QT21 will use and generate during its life. The DMP will be
updated during the project with new releases in months 18 and 30.
This document follows the structure recommended for all Horizon 2020 DMPs:
first the **data selection methodology** is described, then the data itself.
The formal data description starts **with a name and a reference** to the
data, followed by a **description of the content** of the data. Further,
**standards** , data **sharing** and data **archiving** have to be addressed.
QT21 will make use of four data sets. Two of them will be produced by QT21,
two in conjunction with CRACKER for WMT. This document therefore presents four
DMPs.
The first DMP is organised around the data used and produced by the Workshop
on Machine Translation (WMT, _http://www.statmt.org_ ) for training SMT
engines. This data set will be used by Work Packages 1 and 2 (WP1, WP2), see
section 2.
Two other DMPs are defined with respect to the work done in WP3. The related
two data sets are new and will be produced by QT21. As these data sets are
also of a new type, the section on data selection methodology goes into
details.
The first DMP for WP3 (section 3) deals with human annotations (human
post-editions and human error-annotations). This data set is made of 50.000
(resp.
25.000) 3-tuples {source;reference;human-post-edition} in 4 language pairs 2
(resp. in the 2 language pairs 3 ). From these 3-tuples, 1.000 for each
language pair are extended to 4-tuples by adding error-annotations to each
segment {source;reference;human-post-edition;human-error-annotation}.
To accompany the human post-editions and human error-annotations that will be
produced, guidelines have been written; they are appended to this deliverable.
These guidelines are meant to harmonise and coordinate (and serve as a means
of standardising) the human post-edition and error-annotation processes.
The second DMP for WP3 (section 4) covers the data set that will be generated
in order to train a Statistical Machine Translation system on two different
domains: Information Technology (IT) and Pharmacy. The data set will be
produced out of a mixture of in-domain data and similar-to-in-domain data
extracted from a generic corpus using a cross-entropy based filter.
Last but not least, under the WP4 DMP new data will be produced for 3 WMT
“translation tasks” that will run in 2016, 2017 and 2018. This data production
will be organised jointly (as a shared task) with the EC-funded project
CRACKER (see section 5).
Each data set described here is referred to as a whole. For documentation's
sake, we give further an indication of the data split we will make use of,
separating training data from evaluation data (see the respective “data split”
sub-section in sections 2 and 3). For development test sets, it has been
decided not to impose them and to leave it to each partner to extract what
they need from the training data set.
2. **Data Plan for WP1-WP2**
1. **Introduction**
WP1 and WP2 are mainly focused on improving technology for the language pairs
considered. Neither WP has specific requirements on data. As a consequence,
both WPs will rely on existing data sets.
2. **Data selection process – methodology**
The main issue for these WPs is comparability of results between different
technologies and methods used to improve MT and push the State-of-the-Art in
MT. Therefore partners have agreed to work on pre-defined training sets whose
domains are known (for the blind test sets see section 5).
3. **Data description: WP1-WP2**
#### 2.3.1 Data set reference and name
<table>
<tr>
<th>
**Language**
**Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
QT21-EN-DE-wmt
</td>
<td>
WMT Data
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
QT21-EN-CS-wmt
</td>
<td>
WMT Data
</td> </tr>
<tr>
<td>
EN🡪LV
</td>
<td>
QT21-EN-LV-EP
</td>
<td>
Europarl Corpus
</td> </tr>
<tr>
<td>
EN🡪RO
</td>
<td>
QT21-EN-RO-EP
</td>
<td>
Europarl Corpus
</td> </tr>
<tr>
<td>
DE🡪EN
</td>
<td>
QT21-DE-EN-wmt
</td>
<td>
WMT Data
</td> </tr>
<tr>
<td>
CS🡪EN
</td>
<td>
QT21-CS-EN-wmt
</td>
<td>
WMT News Data
</td> </tr> </table>
**Table 2-1 – WP1-WP2-Training data: Reference set for each language pair**
#### 2.3.2 Data set description
For the German-English and Czech-English, well-established test and training
sets are available from the Workshop on Statistical Machine Translation. Using
these data sets, we are not only able to compare the performance within the
project, but also within the research community.
The data consists of translated news articles from different languages.
For English-Romanian and English-Latvian, we agree to use the Europarl domain
to evaluate the techniques developed within this work package.
In order to concentrate on method comparison, the training data is limited to
the data available for the WMT Evaluations. During the three years of the
project life, QT21 will follow the constraints given by WMT.
For the German–English pair, the parallel data consist of the Europarl Corpus
version 7, the News commentary corpus v10 and the Common Crawl Corpus. In
addition, monolingual news data is available.
For the Czech-English pair, in addition to the corpora referred to above for
the German-English pair, the CzEng 1.0 4 ( _http://ufal.mff.cuni.cz/czeng_ )
can be used to train the models.
All this data is downloadable from
_http://www.statmt.org/wmtXX/translation-task.html_ , XX being the year of the
WMT campaign.
For Latvian and Romanian to English, we will use the freely available Europarl
corpus to train the SMT systems.
<table>
<tr>
<th>
**Language**
**Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
QT21-EN-DE-train
</td>
<td>
Europarl V7 + News Commentary Corpus V10 + Common Crawl as defined in
_http://www.statmt.org/wmt15/translation-task.html_
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
QT21-EN-CS-train
</td>
<td>
Europarl V7 + News Commentary Corpus V10 + Common Crawl + CzEng 1.0 as defined
in
_http://www.statmt.org/wmt15/translation-task.html_
</td> </tr>
<tr>
<td>
EN🡪LV
</td>
<td>
QT21-EN-LV-train
</td>
<td>
_http://www.statmt.org/europarl/_
</td> </tr>
<tr>
<td>
EN🡪RO
</td>
<td>
QT21-EN-RO-train
</td>
<td>
_http://www.statmt.org/europarl/_
</td> </tr>
<tr>
<td>
DE🡪EN
</td>
<td>
QT21-DE-EN-train
</td>
<td>
Europarl V7 + News Commentary Corpus V10 + Common Crawl as defined in
_http://www.statmt.org/wmt15/translation-task.html_
</td> </tr>
<tr>
<td>
CS🡪EN
</td>
<td>
QT21-CS-EN-train
</td>
<td>
Europarl V7 + News Commentary Corpus V10 + Common Crawl + CzEng 1.0 as defined
in
_http://www.statmt.org/wmt15/translation-task.html_
</td> </tr> </table>
**Table 2-2 – WP1-WP2-Training: Reference set for each language pair**
#### 2.3.3 Standards and metadata
The data is collected over several years and available in standard formats. An
exact description can be found at _http://www.statmt.org/wmt15/translation-
task.html_ . See Annex A for an example of the format used.
**2.3.4 Data sharing**
The data is freely available at _http://www.statmt.org/wmt15/translation-
task.html_ .
**2.3.5 Archiving**
Following the rules set by _http://www.statmt.org_ .
**2.3.6 Data Split**
No data split. This data will be released in one shot as described above.
3. **Data Plan for WP3-Human annotations**
**3.1 Introduction**
The main goal of WP3 is the development of translation techniques that are
aware of the impact of specific error types on machine translation and can be
efficiently improved by learning from human feedback and corrections of
specific error types.
The success of WP3 is hence connected to the availability of large quantities of
data containing human feedback in the form of Human Post-Edition (HPE) and/or
Human Error Annotation (HEA) of MT errors. HPE is about the “what” is wrong:
it corrects translations and provides insight into what text is corrected. HEA
is about the “why” it is wrong: it identifies and names specific errors and is
thus useful for understanding why corrections are made and what types of
errors are made. HEA is 5 to 6 times more expensive than HPE.
This data also needs to contain the translated reference so that the other
work packages can work on it as well. Table 3-1 shows for each QT21
language pair the volume of human generated data (Post Editions and Error
Annotations) that the project will produce.
<table>
<tr>
<th>
**Language Pairs**
</th>
<th>
**Post Edition Volume**
</th>
<th>
**Error Annotation Volume**
</th>
<th>
**Data Set Label**
</th> </tr>
<tr>
<td>
EN-DE
</td>
<td>
50.000
</td>
<td>
1.000
</td>
<td>
Set A
</td> </tr>
<tr>
<td>
EN-CS
</td>
<td>
50.000
</td>
<td>
1.000
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
EN-LV
</td>
<td>
25.000
</td>
<td>
1.000
</td>
<td>
Set B
</td> </tr>
<tr>
<td>
EN-RO
</td>
<td>
25.000
</td>
<td>
1.000
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
DE-EN
</td>
<td>
50.000
</td>
<td>
1.000
</td>
<td>
Set C
</td> </tr>
<tr>
<td>
CS-EN
</td>
<td>
50.000
</td>
<td>
1.000
</td> </tr> </table>
#### Table 3-1 – WP3- QT21 language pairs and related HPE and HEA volumes in
number of segments
During the three years of the project, WP3 will use the two typologies to
generate two sets of human-annotated data with the following content:
1. HPE sentences: for each source sentence, the MT output, the reference and the post-edited sentence obtained by the work of professional translators will be made available;
2. HEA information: the source, target, reference and post-edited sentences will be enriched with error-annotations provided by professional translators using the harmonised error metric developed in WP3 (an illustrative record layout is sketched after this list).
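To make the two record types concrete, the sketch below shows one hypothetical
HPE record and its HEA extension in a JSON-like form. All field names,
sentences and the error label are invented for illustration; the actual QT21
annotation schema is the XML format described in section 3.3.3.2.

```python
# A hypothetical record layout for the tuples described above; all field
# names, sentences and labels are illustrative, not the project's schema.
hpe_record = {
    "source": "Click the Save button.",
    "mt_output": "Klicken Sie die Taste Sichern.",
    "reference": "Klicken Sie auf die Schaltfläche Speichern.",
    "human_post_edition": "Klicken Sie auf die Schaltfläche Speichern.",
}

# The HEA record extends the HPE record with error annotations.
hea_record = {
    **hpe_record,
    "human_error_annotation": [
        # one illustrative MQM-style issue on a span of the MT output
        {"span": "Taste Sichern", "issue_type": "Terminology"},
    ],
}
```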
The next section describes the processes required to select the appropriate TM
data from which to create HPE and HEA data.
**3.2 Generation of HPE and HEA data – methodology**
#### 3.2.1 Corpus selection
The goal of WP3 is to develop new translation techniques leveraging and
learning from human feedback.
The efficiency of learning from human feedback depends very much on the
quality of the human feedback. We need a good balance of high quality MT
generated output (though not perfect) and lower quality 5 : the less
ambiguous is the human-annotation (on the MT output), the clearer the message
to the learning system.
Further, as the methods to be developed in WP3 are statistical methods, the
efficiency of these methods (learning from human feedback) also depends on the
number of similar annotations (messages) the learning system will observe: The
more repetition of error types the better. This can be best achieved when
working on a specific domain from which one can expect a higher repetitions of
errors.
The latter point has also the advantage that the consortium will be working on
data that reflects the kinds of data managed on a daily basis by Language
Service Providers and professional translators.
The WP3 data selected has to reflect the following minimal constraint set:
* Data contains source and reference segments 6
* Data is within a narrow domain 7
* Data can be shared and referenced within the research community
* Data covers the six QT21 language pairs (Table 3-1)
* Data should contain, for each language pair, at least 50k clean and high quality source-reference segments pairs.
The largest data set we have found that covers these constraints is that of
the TAUS Data Association (TDA). Table 3-2 gives the number of words for
translation memories available within TDA in different domains that are of
interest for WP3.
This set allowed us to define three data sets. Since within each set the
source segments are the same for each language-pair, two-by-two language-pair
comparison is possible:
**Set A** comprises bilingual segments in EN (US) - CS and EN (US) – DE in the
domain of Computer Software. The content creator is Adobe. The total number of
segments in the selected corpora is 6.5 Mio for EN-DE and nearly 1 Mio for
EN-CS.
**Set B** comprises bilingual segments in EN (UK) – LV and EN (UK) – RO in
the domain of Pharmaceuticals and Biotechnology. The content creator is the
European Medicines Agency. For both corpora, the number of available segments
is about 450k.
**Set C** comprises bilingual segments in DE – EN (UK) and CS – EN (UK) in
the domain of Pharmaceuticals and Biotechnology. The content creator is the
European Medicines Agency. For both corpora, the number of available segments
is about 450k.
<table>
<tr>
<th>
**Language Pairs**
</th>
<th>
**Computer**
**Hardware**
**(# Words)**
</th>
<th>
**Computer**
**Software**
**(# Words)**
</th>
<th>
**Pharma (# Words)**
</th>
<th>
</th>
<th>
**Data Set Label**
</th> </tr>
<tr>
<td>
EN-DE
</td>
<td>
24.166.846
</td>
<td>
83.001.203
</td>
<td>
412.397
</td>
<td>
</td>
<td>
Set A
</td> </tr>
<tr>
<td>
EN-CS
</td>
<td>
2.731.003
</td>
<td>
12.470.776
</td>
<td>
0
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
EN-LV
</td>
<td>
371
</td>
<td>
198.405
</td>
<td>
</td>
<td>
5.812.284
</td>
<td>
</td>
<td>
Set B
</td> </tr>
<tr>
<td>
EN-RO
</td>
<td>
1.119.292
</td>
<td>
545.915
</td>
<td>
</td>
<td>
5.556.027
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
DE-EN
</td>
<td>
6.298.559
</td>
<td>
1.211.718
</td>
<td>
</td>
<td>
6.385.014
</td>
<td>
</td>
<td>
Set C
</td> </tr>
<tr>
<td>
CS-EN
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
5.842.314
</td>
<td>
</td> </tr> </table>
**Table 3-2 – WP3-Domain Selection within TAUS Data, based on number of words.
We had three domains to choose from; the selected data sets are those labelled
Set A, Set B and Set C in the last column.**
#### 3.2.2 Segment selection
To ensure good post-editions and good annotations, it is important that the
segments on which the MT engines will run are clean and sentence-like. For
this reason, we looked at the data provided by TAUS by adding the following
constraints:
1. To ensure comparability between language pairs, we select source segments that are identical across the language pairs (in other words, the same text has been translated into several languages)
2. Each source segment contains between 3 and 50 words 8 .
3. Both the source and the target segment end with a punctuation mark. The five selected punctuation marks are the following (see also Table 1):
1. Full stop ‘.’
2. Colon ‘:’
3. Semicolon ‘;’
4. Question mark ‘?’
5. Exclamation mark ‘!’
4. The data does not contain duplicate bilingual segments: it is sorted-unique on bilingual segments (a filtering sketch implementing constraints 2-4 is given after Table 3-3).
Constraints number 1 and 2 above each reduced the size of the corpora by
about 15% in relative terms. Constraint number 3 contributed most to the size
reduction of the corpora we are working on (by about 30% in relative terms).
Table 3-3 shows how punctuation is used to classify segments as sentence-like
or not. If the last character of a segment is within that character set, it is
considered a sentence. This definition can be applied to both source and
target segments or only to one of them (e.g., only to the source segment) or
to neither source nor target.
For example, the data set extracted from the TAUS data that follows the
“Punct_5” labelled punctuation is a data set where both source and target
segments end with a character within the “Punct_5” set.
It has been observed that the data sets following the “Source_Punct_5” or
“Target_Punct_5” definitions are very small in size, suggesting the TAUS Data
is very clean. For this reason we will consider only the two disjoint data
sets “Punct_5” and “No_Punct_5”.
<table>
<tr>
<th>
**Punctuation**
**Character**
**Set**
</th>
<th>
**Label**
</th>
<th>
**Source Segment ends in the punctuation set**
</th>
<th>
**Target Segment ends in the punctuation set**
</th> </tr>
<tr>
<td>
. ; : ? !
</td>
<td>
Punct_5
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
No_Punct_5
</td>
<td>
No
</td>
<td>
No
</td> </tr>
<tr>
<td>
Source_Punct_5
</td>
<td>
Yes
</td>
<td>
No
</td> </tr>
<tr>
<td>
Target_Punct_5
</td>
<td>
No
</td>
<td>
Yes
</td> </tr> </table>
**Table 3-3 – WP3-Punctuation: Punctuation sets labelled according to on which
data type it is applied (source or target).**
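To make the selection procedure concrete, the sketch below implements
constraints 2-4 together with the Punct_5 test on (source, target) pairs. This
is a minimal illustration, not the actual extraction tooling; constraint 1
(identical source segments across language pairs) requires the multilingual
corpus and is omitted.

```python
# A minimal sketch of constraints 2-4 above; not the actual extraction tooling.
PUNCT_5 = (".", ":", ";", "?", "!")

def ends_sentence_like(segment: str) -> bool:
    """Punct_5 test: the segment ends with one of the five punctuation marks."""
    return segment.rstrip().endswith(PUNCT_5)

def select_segments(pairs):
    """Yield (source, target) pairs that pass constraints 2-4."""
    seen = set()
    for source, target in pairs:
        if not 3 <= len(source.split()) <= 50:          # constraint 2
            continue
        if not (ends_sentence_like(source) and
                ends_sentence_like(target)):            # constraint 3 (Punct_5)
            continue
        if (source, target) in seen:                    # constraint 4: unique
            continue
        seen.add((source, target))
        yield source, target
```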
Applying the constraint set above, we obtained the high-quality set of
segments shown in Table 3-4 (WP3 high-quality TM, based on the punctuation
set Punct_5) that can be used for post-editions and annotations, and from
which the 50k segments to be post-edited and annotated will be randomly
extracted 9 .
<table>
<tr>
<th>
**Data Set**
</th>
<th>
**Language**
**Pair**
</th>
<th>
**Punctuation Set**
</th>
<th>
**Number of**
**Segments**
</th>
<th>
**Domain**
</th>
<th>
**Data Provider**
</th> </tr>
<tr>
<td>
Set A
</td>
<td>
EN(US)-DE
</td>
<td>
Punct_5
</td>
<td>
80.874
</td>
<td>
IT-Soft
</td>
<td>
Adobe
</td> </tr>
<tr>
<td>
EN(US)-CS
</td>
<td>
Punct_5
</td>
<td>
81.352
</td>
<td>
IT-Soft
</td>
<td>
Adobe
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Set B
</td>
<td>
EN(UK)-LV
</td>
<td>
Punct_5
</td>
<td>
177.795
</td>
<td>
Pharma
</td>
<td>
European Medicines Agency
</td> </tr>
<tr>
<td>
EN(UK)-RO
</td>
<td>
Punct_5
</td>
<td>
179.285
</td>
<td>
Pharma
</td>
<td>
European Medicines Agency
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Set C
</td>
<td>
DE-EN(UK)
</td>
<td>
Punct_5
</td>
<td>
193.637
</td>
<td>
Pharma
</td>
<td>
European Medicines Agency
</td> </tr>
<tr>
<td>
CS-EN(UK)
</td>
<td>
Punct_5
</td>
<td>
193.516
</td>
<td>
Pharma
</td>
<td>
European Medicines Agency
</td> </tr> </table>
**Table 3-4 - WP3-High Quality TM (based on the punctuation set Punct_5) that
can be used for Post Editions and Annotations**
#### 3.2.3 HPE and HEA production
Different SMT systems will be tested, and the system with the best overall
BLEU score will be selected to produce the MT segments needed (a selection
sketch is given at the end of this section). The selection of the final
50.000 segments (resp. 25.000 segments for set B) will be done so as to cover
a variety of quality levels. The resulting data set is referenced in Table 3-5.
Professional human translators will follow the guidelines described under
section 3.3.3 to produce the HPE and HEA the work package will work on.
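As a sketch of the system selection step described above, the snippet below
scores candidate systems with BLEU and keeps the best one. The sacrebleu
library is used here as a stand-in scorer (the DMP does not name a specific
BLEU implementation), and the system names and sentences are invented.

```python
# A minimal sketch, assuming the sacrebleu library as the BLEU scorer;
# system names and sentences are illustrative only.
import sacrebleu

references = [["Klicken Sie auf die Schaltfläche Speichern."]]  # one ref stream
candidate_systems = {
    "smt_baseline": ["Klicken Sie die Taste Sichern."],
    "smt_domain_adapted": ["Klicken Sie auf die Schaltfläche Speichern."],
}

scores = {
    name: sacrebleu.corpus_bleu(hypotheses, references).score
    for name, hypotheses in candidate_systems.items()
}
best_system = max(scores, key=scores.get)
print(f"selected {best_system} with BLEU {scores[best_system]:.1f}")
```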
**3.3 Data description: WP3-HPE and HEA**
#### 3.3.1 Data set reference and name
##### 3.3.1.1 Translation memories
<table>
<tr>
<th>
**Language Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
TAUS-EN-DE-TM4HA-IT
</td>
<td>
Contact TAUS and ask for it
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
TAUS-EN-CS-TM4HA-IT
</td>
<td>
Contact TAUS and ask for it
</td> </tr>
<tr>
<td>
EN🡪LV
</td>
<td>
TAUS-EN-LV-TM4HA-Pharma
</td>
<td>
Contact TAUS and ask for it
</td> </tr>
<tr>
<td>
EN🡪RO
</td>
<td>
TAUS-EN-RO-TM4HA-Pharma
</td>
<td>
Contact TAUS and ask for it
</td> </tr>
<tr>
<td>
DE🡪EN
</td>
<td>
TAUS-DE-EN-TM4HA-Pharma
</td>
<td>
Contact TAUS and ask for it
</td> </tr>
<tr>
<td>
CS🡪EN
</td>
<td>
TAUS-CS-EN-TM4HA-Pharma
</td>
<td>
Contact TAUS and ask for it
</td> </tr> </table>
**Table 3-5 – WP3-Translation Memories for human annotation: Reference set for
each language pair**
##### 3.3.1.2 HPE and HEA segments
Both data sets will be made available when both HPE and HEA processes will be
finalised.
<table>
<tr>
<th>
**Language Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
HPE-EN-DE-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
HPE-EN-CS-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪LV
</td>
<td>
HPE-EN-LV-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪RO
</td>
<td>
HPE-EN-RO-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
DE🡪EN
</td>
<td>
HPE-DE-EN-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
CS🡪EN
</td>
<td>
HPE-CS-EN-Pharma
</td>
<td>
TBA
</td> </tr> </table>
###### Table 3-6 – WP3-HPE data: Reference set for each language pair
<table>
<tr>
<th>
**Language Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
HEA-EN-DE-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
HEA-EN-CS-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪LV
</td>
<td>
HEA-EN-LV-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪RO
</td>
<td>
HEA-EN-RO-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
DE🡪EN
</td>
<td>
HEA-DE-EN-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
CS🡪EN
</td>
<td>
HEA-CS-EN-Pharma
</td>
<td>
TBA
</td> </tr> </table>
**Table 3-7 – WP3- HEA data: Reference set for each language pair**
#### 3.3.2 Data set description
**Table 3-8** describes the TM segments from which HPE and HEA will be
generated.
<table>
<tr>
<th>
**Data Set**
</th>
<th>
**Language**
**Pair**
</th>
<th>
**Punctuation Set**
</th>
<th>
**Number of Segments**
</th>
<th>
**Domain**
</th>
<th>
**Data Provider**
</th> </tr>
<tr>
<td>
Set A
</td>
<td>
EN(US)-DE
</td>
<td>
Punct_5
</td>
<td>
50,000
</td>
<td>
IT-Soft
</td>
<td>
Adobe
</td> </tr>
<tr>
<td>
</td>
<td>
EN(US)-CS
</td>
<td>
Punct_5
</td>
<td>
50,000
</td>
<td>
IT-Soft
</td>
<td>
Adobe
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Set B
</td>
<td>
EN(UK)-LV
</td>
<td>
Punct_5
</td>
<td>
25.000
</td>
<td>
Pharma
</td>
<td>
European
Medicines Agency
</td> </tr>
<tr>
<td>
EN(UK)-RO
</td>
<td>
Punct_5
</td>
<td>
25.000
</td>
<td>
Pharma
</td>
<td>
European
Medicines Agency
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Set C
</td>
<td>
DE-EN(UK)
</td>
<td>
Punct_5
</td>
<td>
50,000
</td>
<td>
Pharma
</td>
<td>
European
Medicines Agency
</td> </tr>
<tr>
<td>
CS-EN(UK)
</td>
<td>
Punct_5
</td>
<td>
50,000
</td>
<td>
Pharma
</td>
<td>
European
Medicines Agency
</td> </tr> </table>
**Table 3-8 - WP3-High Quality TM (based on the punctuation set Punct_5) that
can be used for Post Editions and Annotations**
#### 3.3.3 Standards and metadata
The TM are produced along the standards defined in section 5.3.3 that can also
be seen at _http://www.statmt.org/wmt15/translation-task.html_ .
For HPE and HEA data, guidelines have been developed to help professional
translators producing consistent annotations between themselves. These
guidelines are based on the experience from TAUS and the work done on
QTLaunchPad.
These guidelines will evolve over time as the project learns from MT-related
errors and/or language-related errors (Latvian and Romanian are new languages
to the consortium).
##### 3.3.3.1 Post-Edition
The post-edition guidelines (see Annex B) are aimed at helping QT21
participants (project managers, post-editors, evaluators) set clear
expectations and can be used as a basis on which to instruct post-editors. It
is not practical to present a set of guidelines that will cover all scenarios.
It’s better if these are used as baseline guidelines and are tailored as
required for the given purpose. Generally, these guidelines assume bi-lingual
post-edition that is ideally carried out by a paid translator but that might
in some scenarios be carried out by bilingual domain experts or volunteers.
While the QT21 project will aim at delivering 15.000 segments in six language
pairs, the guidelines presented here are not system- or language-specific and
thus can be applied throughout the whole project.
##### 3.3.3.2 Error Annotation
In QT21, annotation will always be made on segments that are also post-edited
10 . This means the HEA and HPE guidelines have to be harmonised, which leads
to more precise guidelines for the post-edition process when the segment has
also been annotated for errors: the specific error-annotation and related
post-edition guidelines are described in Annex C, page 24.
For error-annotations, an XML form developed in the QTLaunchPad project will
be used. It groups together the results of multiple annotators and provides a
number of features.
The permissible elements and attributes are defined in the schema
(annotations.xsd) included in Annex D page 40 of this document.
The XSLT stylesheet included in Annex E page 41 can be used to convert the XML
format into an HTML output format.
Annex F page 42 gives the example of an XML file containing one annotated
segment.
Annex G page 43 gives a prose description of the XML basic elements and
attributes.
#### 3.3.4 Data sharing
We have two types of data to share: the TMs from TAUS and the QT21-generated
annotations.
##### 3.3.4.1 Translation memories
In order to access data from TAUS, researchers and affiliates have to register
with TAUS. Once identified as belonging to an academic institution, they can
access TAUS Data for free according to the TAUS academic membership plan and
policy.
In order to ease access to and reference of the data used within QT21, the
data originating from TAUS will be marked with the labels/names defined in
Table 3-5.
This means each person registered in TAUS Data can have direct access,
according to their membership plan, to exactly the same data as used during
the life of QT21, without ambiguity.
##### 3.3.4.2 Human post-editions and error-annotations
HPE and HEA data will be made available on Meta-Share:
_http://www.meta-share.eu/_
#### 3.3.5 Archiving
For the TM, the TAUS infrastructure is used.
For the HPE and HEA data, we use the Meta-Share infrastructure to make the
newly generated data available over time. _http://www.meta-share.eu/_
#### 3.3.6 Data Split
The consortium will release the data as the project advances, first releasing
2/3 of the data for training purposes and the last 1/3 of the data for
development and three different evaluation campaigns (2016, 2017 and 2018).
The data for the evaluation campaigns will be agreed between the translation
and quality estimation shared tasks (WP4 and WMT).
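An illustrative sketch of such a release schedule is given below. The even
division of the held-out third among development data and the three campaigns
is an assumption made here for illustration; as stated above, the actual
evaluation sets will be agreed with WMT.

```python
# A minimal sketch of the 2/3 training / 1/3 held-out release plan;
# the even dev/campaign division of the held-out part is an assumption.
def split_releases(segments):
    n_train = 2 * len(segments) // 3
    train, held_out = segments[:n_train], segments[n_train:]
    quarter = len(held_out) // 4
    releases = {
        "dev": held_out[:quarter],
        "eval2016": held_out[quarter:2 * quarter],
        "eval2017": held_out[2 * quarter:3 * quarter],
        "eval2018": held_out[3 * quarter:],
    }
    return train, releases
```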
##### 3.3.6.1 Training data
<table>
<tr>
<th>
**Language Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
HPE-TRAIN-EN-DE-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
HPE-TRAIN-EN-CS-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪LV
</td>
<td>
HPE-TRAIN-EN-LV-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪RO
</td>
<td>
HPE-TRAIN-EN-RO-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
DE🡪EN
</td>
<td>
HPE-TRAIN-DE-EN-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
CS🡪EN
</td>
<td>
HPE-TRAIN-CS-EN-Pharma
</td>
<td>
TBA
</td> </tr> </table>
###### Table 3-9 – WP3-HPE TRAIN data: Reference set for each language pair
<table>
<tr>
<th>
**Language Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
HEA-TRAIN-EN-DE-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
HEA-TRAIN-EN-CS-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪LV
</td>
<td>
HEA-TRAIN-EN-LV-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪RO
</td>
<td>
HEA-TRAIN-EN-RO-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
DE🡪EN
</td>
<td>
HEA-TRAIN-DE-EN-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
CS🡪EN
</td>
<td>
HEA-TRAIN-CS-EN-Pharma
</td>
<td>
TBA
</td> </tr> </table>
**Table 3-10 – WP3- HEA-TRAIN data: Reference set for each language pair**
##### 3.3.6.2 Evaluation data
<table>
<tr>
<th>
**Language Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
HPE-EVAL-EN-DE-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
HPE-EVAL-EN-CS-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪LV
</td>
<td>
HPE-EVAL-EN-LV-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪RO
</td>
<td>
HPE-EVAL-EN-RO-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
DE🡪EN
</td>
<td>
HPE-EVAL-DE-EN-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
CS🡪EN
</td>
<td>
HPE-EVAL-CS-EN-Pharma
</td>
<td>
TBA
</td> </tr> </table>
###### Table 3-11 – WP3-HPE evaluation data: Reference set for each language
pair
<table>
<tr>
<th>
**Language Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
HEA-EVAL-EN-DE-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
HEA-EVAL-EN-CS-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪LV
</td>
<td>
HEA-EVAL-EN-LV-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪RO
</td>
<td>
HEA-EVAL-EN-RO-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
DE🡪EN
</td>
<td>
HEA-EVAL-DE-EN-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
CS🡪EN
</td>
<td>
HEA-EVAL-CS-EN-Pharma
</td>
<td>
TBA
</td> </tr> </table>
**Table 3-12 – WP3- HEA-EVAL data: Reference set for each language pair**
# 4 Data Plan for WP3 – Domain-specific training data
## 4.1 Introduction
A crucial aspect of producing the post-edited segments and the
error-annotations is the creation of domain-specific data to train an SMT
system. This is necessary because a generic translation system will not be
able to correctly translate domain-specific terms or expressions, and would
thus produce target sentences with too many errors. Professional translators
would then be likely to rewrite the translations from scratch, producing
post-editions that resemble reference translations and making
error-annotation impossible.
## 4.2 Data selection process – methodology
The best data for training a domain-specific MT engine is domain-specific
(in-domain) data. To start with, we will select in-domain TMs from the IT and
Pharma domains. This data will come from TAUS and from OPUS. As the amount of
in-domain data gathered (see Table 4-4 and Table 4-5) is not sufficient to
train good domain-specific MT engines, data selection techniques will also be
applied to identify, from a large collection of generic (out-of-domain)
parallel datasets, those segments that are closest to the specific domains.
This process, referred to as data selection, applies techniques borrowed from
Information Retrieval, such as the TF-IDF ranking used by [Lu et al. 2007],
to rank each element of the large pool of data according to its similarity in
terms of topic or style to the in-domain data.
For QT21, we propose to use cross-entropy-based selection for monolingual data
[Moore and Lewis, 2010] and its extended version for bilingual texts proposed
by [Axelrod et al. 2011]. Project partners (including FBK and USFD) have prior
experience with these techniques.
Originally proposed by [Gao and Zhang, 2002], perplexity-based approaches
consist of computing the perplexity score of each sentence of a generic corpus
against an in-domain language model, and doing the same against a language
model trained on the generic corpus itself. The sentences are then ranked
according to the difference between their two perplexity scores (in-domain and
generic). Once all the generic sentences have been ranked, the size of the
subset to extract is determined by minimising the perplexity of a development
set against language models trained on increasing amounts of the sorted
corpus. According to [Moore and Lewis, 2010], perplexity decreases when less
but more relevant data is used.
All of those methods [Gao and Zhang, 2002], [Moore and Lewis, 2010] and
[Axelrod et al. 2011] have been implemented in XenC [Rousseau, 2013], a freely
available open-source tool developed during the MateCat project.
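To make the scoring and ranking logic concrete, here is a minimal sketch in
the spirit of [Moore and Lewis, 2010]; the unigram models, the add-one
smoothing and the toy sentences are simplifications introduced purely for
illustration, whereas the actual selection in QT21 is performed with XenC and
proper n-gram language models.

```python
# Illustrative Moore-Lewis-style ranking with unigram language models.
# Real pipelines (e.g. XenC) use proper n-gram LMs; this sketch only shows
# the cross-entropy difference scoring and the ranking step.
import math
from collections import Counter

def train_unigram(sentences):
    counts = Counter(w for s in sentences for w in s.split())
    total = sum(counts.values())
    vocab = len(counts) + 1
    # Add-one smoothing so unseen words get non-zero probability.
    return lambda w: (counts[w] + 1) / (total + vocab)

def cross_entropy(model, sentence):
    words = sentence.split()
    return -sum(math.log2(model(w)) for w in words) / max(len(words), 1)

def rank_generic(generic, in_domain):
    lm_in = train_unigram(in_domain)
    lm_gen = train_unigram(generic)
    # Lower H_in(s) - H_gen(s) means the sentence looks more in-domain.
    scored = [(cross_entropy(lm_in, s) - cross_entropy(lm_gen, s), s)
              for s in generic]
    return [s for _, s in sorted(scored)]

in_domain = ["click the install button", "restart the application"]
generic = ["the weather is nice today", "press the install button now"]
print(rank_generic(generic, in_domain))  # most in-domain-like first
```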
For each language pair, the in-domain corpus will be selected from the
resources listed in Table 3-4, together with generic corpora obtained from
large collections available on the Web (e.g. OPUS, Europarl; the exact
collections still need to be defined).
## 4.3 Data description: WP3 domain-specific data
#### 4.3.1 Data set reference and name
We have three sources of data on which to train the SMT engine that will
generate the MT segments used as the basis for HPE and HEA-EVAL: the
Train-TAUS corpora (for the IT and Pharma domains), the Train-OPUS corpora
(for the IT domain) and the Train-Auto-Extract corpora that we will generate
following the methodology described in section 4.2.
<table>
<tr>
<th>
**Language**
**Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
TAUS-QT21-EN-DE-train-IT
</td>
<td>
Contact TAUS and ask for it
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
TAUS-QT21-EN-CS-train-IT
</td>
<td>
Contact TAUS and ask for it
</td> </tr> </table>
##### Table 4-1 – WP3-Train-TAUS: Reference set for each language pair
<table>
<tr>
<th>
**Language**
**Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
OPUS-EN-DE-train-Gnome
</td>
<td>
_http://opus.lingfil.uu.se/GNOME.php_
</td> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
OPUS-EN-DE-train-KDE4
</td>
<td>
_http://opus.lingfil.uu.se/KDE4.php_
</td> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
OPUS-EN-DE-train-KDEdoc
</td>
<td>
_http://opus.lingfil.uu.se/KDEdoc.php_
</td> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
OPUS-EN-DE-train-OpenOffice3
</td>
<td>
_http://opus.lingfil.uu.se/OpenOffice3.php_
</td> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
OPUS-EN-DE-train-OpenOffice
</td>
<td>
_http://opus.lingfil.uu.se/OpenOffice.php_
</td> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
OPUS-EN-DE-train-PHP
</td>
<td>
_http://opus.lingfil.uu.se/PHP.php_
</td> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
OPUS-EN-DE-train-Ubuntu
</td>
<td>
_http://opus.lingfil.uu.se/Ubuntu.php_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
OPUS-EN-CS-train-Gnome
</td>
<td>
_http://opus.lingfil.uu.se/GNOME.php_
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
OPUS-EN-CS-train-KDE4
</td>
<td>
_http://opus.lingfil.uu.se/KDE4.php_
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
OPUS-EN-CS-train-PHP
</td>
<td>
_http://opus.lingfil.uu.se/PHP.php_
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
OPUS-EN-CS-train-Ubuntu
</td>
<td>
_http://opus.lingfil.uu.se/Ubuntu.php_
</td> </tr> </table>
##### Table 4-2 – WP3-Train-OPUS for IT Domain: Reference set for each
language pair
Table 4-3 lists the corpora that will be created by automatic extraction of
domain-like data from general public and open corpora.
<table>
<tr>
<th>
**Language Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
EN🡪DE
</td>
<td>
QT21-EN-DE-train-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪CS
</td>
<td>
QT21-EN-CS-train-IT
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪LV
</td>
<td>
QT21-EN-LV-train-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
EN🡪RO
</td>
<td>
QT21-EN-RO-train-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
DE🡪EN
</td>
<td>
QT21-DE-EN-train-Pharma
</td>
<td>
TBA
</td> </tr>
<tr>
<td>
CS🡪EN
</td>
<td>
QT21-CS-EN-train-Pharma
</td>
<td>
TBA
</td> </tr> </table>
**Table 4-3 – WP3-Train-auto-extract: Reference set for each language pair**
#### 4.3.2 Data set description
Table 4-4 describes in more detail the domain-specific TMs QT21 receives
from TAUS for the IT domain.
<table>
<tr>
<th>
**Data Set**
</th>
<th>
**Language**
**Pair**
</th>
<th>
**Constraint on segments**
</th>
<th>
**Number of Segments**
</th>
<th>
**Domain**
</th>
<th>
**Data Provider**
</th> </tr>
<tr>
<td>
Set A
</td>
<td>
EN-DE
</td>
<td>
No
</td>
<td>
7.974.430
</td>
<td>
IT-Soft+HW
</td>
<td>
Various except for Adobe
</td> </tr>
<tr>
<td>
EN-CS
</td>
<td>
No
</td>
<td>
1.260.696
</td>
<td>
IT-Soft +
HW
</td>
<td>
Various except for Adobe
</td> </tr> </table>
##### Table 4-4 - WP3-TAUS training data (all data)
Table 4-5 describes more precisely the additional domain-specific data QT21
will obtain from OPUS, for both the IT and Pharma domains. This data is
complementary to that in Table 4-4. Sets B and C (Pharma domain) are provided
by the European Medicines Agency.
<table>
<tr>
<th>
**Data Set**
</th>
<th>
**Language**
**Pair**
</th>
<th>
**Constraint on segments**
</th>
<th>
**Number of Segments**
</th>
<th>
**Domain**
</th>
<th>
**Data Provider**
</th> </tr>
<tr>
<td>
Set A
</td>
<td>
EN-DE
</td>
<td>
No
</td>
<td>
310.285
</td>
<td>
IT
</td>
<td>
OPUS
</td> </tr>
<tr>
<td>
EN-CS
</td>
<td>
No
</td>
<td>
125.309
</td>
<td>
IT
</td>
<td>
OPUS
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Set B
</td>
<td>
EN-LV
</td>
<td>
No
</td>
<td>
1.005.272
</td>
<td>
Pharma
</td>
<td>
European
Medicines Agency
</td> </tr>
<tr>
<td>
EN-RO
</td>
<td>
No
</td>
<td>
969.499
</td>
<td>
Pharma
</td>
<td>
European
Medicines Agency
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Set C
</td>
<td>
DE-EN
</td>
<td>
No
</td>
<td>
1.058.752
</td>
<td>
Pharma
</td>
<td>
European
Medicines Agency
</td> </tr>
<tr>
<td>
CS-EN
</td>
<td>
No
</td>
<td>
1.003.385
</td>
<td>
Pharma
</td>
<td>
European
Medicines Agency
</td> </tr> </table>
##### Table 4-5 - WP3-OPUS+EMEA training data
In the next version of this deliverable we will be able to describe the data
obtained with the data selection technique described in section 4.2.
#### 4.3.3 Standards and metadata
The TMs are produced according to the standards defined in section 5.3.3,
which can also be found at _http://www.statmt.org/wmt15/translation-task.html_ .
#### 4.3.4 Data sharing
To access data from TAUS, researchers and affiliates have to register with
TAUS. Once identified as belonging to an academic institution, they can access
TAUS Data for free according to the TAUS academic membership plan and policy.
To ease access to, and reference of, the data used within QT21, the data
originating from TAUS will be marked with the labels/names defined in
Table 4-1.
This means each person registered in TAUS Data has direct access, according
to their membership plan and without ambiguity, to exactly the same data as
used during the life of QT21.
The other data are available through Meta-Share _http://www.meta-share.eu/_ .
#### 4.3.5 Archiving
For the TAUS Data, TAUS has its own archiving system.
For the other data, we use the Meta-Share infrastructure to make the newly
generated data available over time _http://www.meta-share.eu/_ .
#### 4.3.6 Data Split
There is no data split: this data will be released in a single batch, as described above.
# 5 Data Plan for WP4
## 5.1 Introduction
This data plan concerns only the shared tasks with WMT. It covers only the
WMT test sets that WP1, WP2 and WP3 will use for their blind evaluations.
In WP4 we organise together with CRACKER three annual shared task campaigns: a
translation shared task, a quality estimation shared task, and a metrics
shared task. These tasks continue a successful series of shared tasks held
with the Workshop on Statistical Machine Translation (WMT) in previous years.
We aim to create around 6000 sentences of human-translated text for each year
of the translation task, in two language pairs. This text will be used as an
evaluation set or be split into separate sets for system development and
evaluation.
Collaboration with other projects such as CRACKER will enable us to cover more
than just two languages in the shared tasks. The core language pairs are
German-English and Czech-English, but other challenging language pairs will be
introduced each year. We typically have three to five language pairs for WMT
shared tasks.
## 5.2 Data selection process – methodology
We crawl monolingual sources from online news sites. We then create manual
translations of crawled monolingual data to be used as test sets for the
shared tasks.
## 5.3 Data description: Evaluation data WP4
### 5.3.1 Data set reference and name
For WMT’16, the data set will be defined during the project meeting prior to
WMT’15.
<table>
<tr>
<th>
**Language Pair**
</th>
<th>
**Name**
</th>
<th>
**Reference**
</th> </tr>
<tr>
<td>
TBA
</td>
<td>
WMT Test Sets
</td>
<td>
Will be _http://www.statmt.org/wmt16/_
</td> </tr> </table>
**Table 5-1 – WP4-Test: Reference set for each language pair will be
announced**
### 5.3.2 Data set description
For German and Czech, we will use the yearly official blind evaluation sets of
the WMT evaluations, which are produced jointly by QT21 and CRACKER. These are
small parallel data sets used for testing MT systems, typically created by
translating a selection of crawled articles from online news sites.
For Latvian and Romanian, new evaluation test sets will be defined and
created, probably together with WMT for WMT’16 and WMT’17.
### 5.3.3 Standards and metadata
WMT test sets are typically distributed in an SGML format which is compatible
with common machine translation evaluation tools such as the NIST scoring tool
(mteval-v13a.pl). The text encoding is Unicode (UTF-8).
Metadata such as language codes and document identifiers are provided in the
SGML documents. See Annex A for an example of the format used.
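As a rough illustration of what such a test set looks like, the sketch below
extracts segments from a minimal WMT-style SGML snippet; the set and document
identifiers are invented here, and the authoritative format example is the one
in Annex A.

```python
# Parse segment text out of a minimal WMT-style SGML snippet (illustrative;
# see Annex A for the authoritative format description).
import re

sgml = """<srcset setid="example-set" srclang="en">
<doc docid="doc1">
<seg id="1">This is the first source sentence.</seg>
<seg id="2">This is the second source sentence.</seg>
</doc>
</srcset>"""

segments = {m.group(1): m.group(2)
            for m in re.finditer(r'<seg id="(\d+)">(.*?)</seg>', sgml)}
print(segments)  # {'1': 'This is the first ...', '2': 'This is the second ...'}
```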
### 5.3.4 Data sharing
The data will be made available from the appropriate WMT website (i.e.
_http://www.statmt.org/wmt15/_ for 2015).
### 5.3.5 Archiving
The data will remain available for download from _http://www.statmt.org/_ .
This website is currently hosted at the University of Edinburgh.
0913_ACTTiVAte_691473.md
# DATA SUMMARY
ACTTiVAte will focus its efforts on setting up strategies that allow clusters
to lead the engagement of SMEs in activities intended to create new services
and products, and thereby to generate new value chains and emerging
industries across Europe. To that end, an appropriate data collection and
generation process will support better decision making and increase project
impact.
ACTTiVAte’s data will be generated, disclosed and stored in both digital and
printed formats. The main types of data produced are the following:
* Project deliverables, which will collect the progress of ACTTiVAte.
* Data and information related to SMEs projects: viability analysis, business plan, prototype’s development data, innovation processes and technology achievements regarding the state of the art, marketing and commercial plans, industrialisation plans, etc.
* Data from surveys, questionnaires and face-to-face interviews conducted with project’s key stakeholders and target groups.
* Data that Consortium partners bring to ACTTiVAte as background IP, that have been described in the Consortium Agreement.
* External communication documents: all the public documents to communicate ACTTiVAte and the SMEs projects’ results, such as technology and business analysis, presentations, guidelines and manuals, articles, papers, newsletters, etc.
* Internal communication documents: these are aimed at keeping all data generated during the project, mainly data related to communication among Consortium partners, such as agendas and minutes of meetings, emails, proceedings, agreements, etc.
Existing data will also feed ACTTiVAte, at both technical and strategic
level. Among them are relevant national and international activities linked
to the project, whose main outcomes will serve as inputs to ACTTiVAte.
Furthermore, the project will make use of any existing guidebook produced by
the European Commission or any other policy-maker focused on clusters, social
economy and entrepreneurship.
The data gathered by ACTTiVAte will be stored mainly in the BAL.PM software,
with restricted access and a back-up feature. All partners will have access to
the system according to their roles in the project, and the Coordinator will
be in charge of managing access permissions, usage procedures and basic
training.
All activities performed under ACTTiVAte, and therefore the public data
generated during the project, are focused on providing results to SMEs.
Nevertheless, other stakeholders will be able to make use of them, such as
clusters, RTD organisations, Regional Development Agencies, enterprise
associations, private investor groups and policy makers, among others.
ACTTiVAte’s public data will therefore be openly accessible to any third party
interested in them.
# FAIR DATA
## MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA
Efficient data management is vital for the success of projects. This is even
more important for large-scale integrating projects like ACTTiVAte, with an
exceptional number of participants and tasks demanding adequate reporting
procedures and collaboration support.
The basic methodology will set out how the data and information created
during the project are to be managed, precisely describing all procedures for
keeping and disseminating ACTTiVAte’s results at their different stages of
execution, according to their usage and the way the audience accesses the
information. The data related to project deliverables, SMEs’ projects, and
external and internal communication documents will follow specific procedures
developed by ACTTiVAte’s Quality Manager and defined in the Quality Assurance
Plan. These types of data will be stored in a secured software platform, which
will be accessible to all partners and, when needed, to entities external to
the Consortium. This methodology will cover:
* Codification
* Templates
* Versions control
* Approval process for the documents generated during the project
* Storage procedures for the documents in the secured-web based platform (folder structures)
As mentioned before, to support the data management process ACTTiVAte puts in
place a software tool: a configurable network-based environment built on SQL
and business-intelligence databases, to be deployed on the internet (global
access) or on restricted intranets (company networks) and extranets (company
networks with restricted external access). The main purpose of this tool is to
maintain a central information source whose most current data are accessible
and maintainable by different distributed users. These users will have
different rights to view or edit the information maintained in this context,
according to their roles within the project.
The data management software offers a sophisticated search mechanism to find
documents stored in the database. When clicking the menu item “Document
search”, a search window opens (Figure 1). It consists of three major
areas:
* Project tree (1)
* Result view (2)
* Search area (3)
To initiate a search, the request has to be entered in the line under “Search
terms”. The simplest possibility is to enter one or more words, which means
that each document found contains at least one of the words. In order to
submit a more detailed query, the following special expressions may be used:
* “+” in front of a word means that it must be contained in the documents found.
* “-” in front of a word means that it must not be contained.
* “AND” connects two words and means that both have to be present.
* “OR” means that one or both words have to be present.
* “NOT” means that the word must not be present.
* “(“ and “)” may be used to change the precedence of expressions.
* Enclosing an expression in quotation marks searches for the enclosed string as a whole.
The search can be restricted to certain document types which can be selected
under “Search only in”.
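The example queries below (invented for this document; the tool’s user manual
remains the authoritative reference) illustrate how these operators combine:

```python
# Example search expressions for the document search described above
# (queries are illustrative only; consult the tool's user manual for details).
example_queries = [
    'cluster SME',                 # documents containing "cluster" or "SME"
    '+cluster -aerospace',         # must contain "cluster", must not contain "aerospace"
    'cluster AND agrifood',        # both words required
    'cluster OR agrifood',         # at least one of the two words
    'cluster NOT draft',           # "cluster" present, "draft" absent
    '(cluster OR SME) AND plan',   # parentheses change precedence
    '"business plan"',             # quoted string matched as a whole phrase
]
```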
The result set is shown in the middle part of the window (Figure 2). In the
project tree, all documents are connected in a hyperbolic tree whose nodes
represent the project, work packages, etc. and their related documents.
Dragging the nodes with the mouse gives a more detailed view of certain areas.
A double click on the black square re-centres the tree.
The entity tree (Figure 3) uses the same principle but organises the result
set around key words that the documents contain. Despite this difference, the
tree behaves in exactly the same way.
The third result view is a list that might be ordered with respect to a score
that is assigned to the documents (Figure 4). The score is set automatically
in relation with the precision of the matching between document and search
expression.
In the hyperbolic tree, the following functionality is available:
* Positioning the mouse pointer on a document shows the document summary (Figure 5).
* A click with the right mouse button opens a context menu (Figure 6):
  * Show/Hide document’s metadata: expands the node (or collapses it again) by showing document metadata such as author, date, file name etc.
  * Show/Hide document’s categories: expands the node (or collapses it again) by showing document-related projects
  * Show/Hide document’s entities: expands the node (or collapses it again) by showing document-related key words
  * Open document: launches the assigned viewer application and opens the document inside it
  * Download document: the document is downloaded and stored locally
  * Find more like this: documents with similar content are shown
The entity and project nodes contain similar context menus which enable the
user to expand the tree and browse the search result by following interesting
key words or concepts (Figures 7 and 8).
The entry “Additional options” offers the user some more parameters to
configure the query (Figure 9):
* Similarity: the default value is “Exact”, which means that only documents matching the query without any spelling deviations are found. “Similar” and “Fuzzy” also find documents with deviating spellings. The purpose of this function is to find documents even if the exact spelling is not known or might be ambiguous.
* Project: only find documents belonging to the mentioned projects
* Uploaded by: only find documents uploaded by certain persons
* Uploaded from/until: only find documents that have been uploaded after and/or before a certain date
**Figure 9 Document search, additional options**
## MAKING DATA OPENLY ACCESSIBLE
As described in section 2 “DATA SUMMARY” there are several kinds of data that
will be generated during ACTTiVAte, whose access rules will be:
* Project deliverables: according to section 1.3.2. “WT2 list of deliverables” of the Grant Agreement.
* Data and information related to SMEs projects: data generated during ACTTiVAte’s call for proposals process, as well as data gathered during the SMEs projects’ development phase, will be confidential and for internal use, unless the concerned parties give their express consent. Only data stated as “public” may be openly accessible.
* Data from surveys, questionnaires and face-to-face interviews: according to the applicable data protection laws.
* Data that Consortium partners bring to ACTTiVAte as background IP: according to what is stated in Attachment 1: “Background included” of the Consortium Agreement.
* External communication and dissemination material: several criteria may be taken into account depending on the data to be disclosed. According to their applicability, criteria may be:
1. Data protection laws
2. Section 29.1 “Obligation to disseminate results” of the Grant Agreement
3. Section 29.2 “Open access to scientific publications” of the Grant
Agreement
4. Section 29.3 “Open access to research data” of the Grant Agreement
* Internal communication documents: according to applicable data protection laws.
ACTTiVAte’s data will be stored mainly in a secured software platform
accessible to the entire Consortium according to the access permissions
previously agreed by the parties. The system consists of pre-configured
software and SQL database modules for implementation on project-specific
internet and intra/extranets. Users will be able to access the platform by
signing in with their username and password. The Coordinator will be in charge
of managing access permissions to the platform.
All the procedures needed to use the software tools efficiently will be
described in the user manual, available to all members of the Consortium. The
platform will work as an ASP solution, running on BAL.PM web servers,
specifically configured for ACTTiVAte and for 42 months (36 months plus 6
months for final reporting).
The software tool will be configured together with all related databases.
Technical maintenance of functions and databases will be handled by BAL.PM.
The Consortium will be responsible for all project-specific content and
content management; for this task the Consortium will nominate a project
secretary (a member of the coordination team). Planned system availability is
99.9% per year, and a backup server will be available that can replace the
BAL.PM server within less than one working day.
For the duration of the project and also afterwards, BAL.PM is committed to
using all data in the database only for contract-related purposes. One month
after the termination of the contract, the Consortium will receive a copy of
all data in the databases (SQL format) and a copy of all static webpages on
CD-ROM or DVD. In addition, BAL.PM will delete all project data after having
handed over copies of that data to the Consortium.
Initially, ACTTiVAte does not expect to set up a data access committee; the
focal point for any data management issue that may arise during the project
will be the Quality Manager, a member of the coordination team.
## MAKING DATA INTEROPERABLE
Data produced in ACTTiVAte may be exchanged and re-used between researchers,
institutions, organisations, etc. Although project data will be interoperable
mainly at Consortium level, by means of the software tool and according to the
features described in section 3.1 (data sets, metadata vocabularies,
standards, methodologies, etc.), some data gathered during the project may be
used after its completion. Among them, it is worth highlighting data coming
from expert interviews, deliverable development and SMEs’ projects. As for
this latter case, only very specific information (e.g. know-how matters) will
not be allowed to be interoperable and re-used.
In general, there will be three main sources of project’s public data that
will be accessible to any third party interested in the subject:
* European Cluster Collaboration Platform
* ACTTiVAte’s Cluster Collaboration Platform
* ACTTiVAte’s website and social networks
## INCREASE DATA RE-USE (THROUGH CLARIFYING LICENCES)
ACTTiVAte does not envisage any data licensing; disclosable data, according to
the agreed disclosure rules, will be freely accessible. ACTTiVAte’s data are
expected to be kept for five years after project completion, a period in which
they may be re-used.
Project’s Quality Assurance Plan establishes how documentation requirements,
procedures, records and other documents are maintained and controlled,
including retention periods, during ACTTiVAte’s lifecycle.
# ALLOCATION OF RESOURCES
The costs of making data FAIR in ACTTiVAte are those allocated to the software
tool under the partners’ “other direct costs”, with the concept “Cost of
Software Licenses”. The budget for platform implementation and maintenance is
22.968€, allocated among the Consortium partners in proportion to each
partner’s budget in the project.
The person responsible for coordinating ACTTiVAte’s data management process
will be the Quality Manager, and the General Assembly will be the ultimate
decision-making body of the Consortium and responsible for taking major
strategic decisions with respect to data management if necessary. It will also
promote consensus in case of conflict and, if no consensus can be found, it
will take decisions according to the procedures and rules defined in the
Consortium Agreement.
# DATA SECURITY
For applications distributed over the internet, there is a high risk of
downloading foreign code. Therefore, ACTTiVAte’s software tool is digitally
signed. The first time users start the application they will be asked whether
they trust this signature. If any parts have been exchanged for foreign code,
the application will not start and an error message will be displayed.
The platform always establishes a secured connection via HTTPS with its web
service. In case this is not possible, an unsecured connection via HTTP is
used, and the user will see the message “Unsecure connection established”
after login; in addition, symbols on the status bar on the left side indicate
the connection status: “secured” or “unsecured”. The platform also has a
back-up function to keep project data safe during project execution.
# ETHICAL ASPECTS
There are no ethical or legal issues that could have an impact on data
sharing in ACTTiVAte.
# OTHER
The guideline document “FAIR Data Management in Horizon 2020” has been used
for the development of ACTTiVAte’s Data Management Plan.
0914_ENviSION_645791.md
## 1\. Introduction
The purpose of this deliverable is to provide a data management plan for
ENVISION. The data management plan describes how data is collected, stored,
documented, shared and reused during and after the project. Various types of
(big) data will be collected in ENVISION: survey, case study and log data.
We consider the data management plan to be a living document that has to be
updated over the course of the project. Some decisions in this document are
tentative since they require further specification of the tools and platform
in WP2 and WP3. The data management plan has interdependencies with the
research database (D5.1) and the informed consent forms (D4.3, D5.5).
# 2\. Data collection
The following data collections will take place in the different Work Packages
of the ENVISION project.
### WP2 Flexible BMI Tooling
Both WP2 and WP3 will include three scopes of development. In the first scope,
tooling from WP2 will mainly involve downloadable templates, and hence will
not generate any log data.
In the second scope, the tooling will include interactive features such as
storing and sharing business model ideas. At that stage, users will store
their own data, which may be sensitive for competitiveness. For sharing
business model ideas with other users, access rights will have to be defined.
Specific measures on how to collect, manage and archive data for WP2 will be
defined during the development of the second scope.
### WP3 Development BMI Platform
In the first scope, the platform from WP3 will collect log data on who visits
the platform through Google Analytics. The number of unique users and
returning users will be collected. Data will be collected on where users come
from, length of sessions, time spent on a page, on which page a user leaves
the platform, browser type, device type etc. Privacy sensitive data like IP
addresses are not being collected. The log data will be used for research
purposes, i.e. to compute descriptive statistics on platform usage and
realized dissemination levels. For these purposes, data will be aggregated on
such a level that they cannot be traced back to the level of an individual
user.
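A minimal sketch of the kind of aggregation meant here, assuming the logs were
exported into a pandas DataFrame; the column names and the weekly granularity
are illustrative assumptions, not the project’s actual pipeline.

```python
# Aggregate raw visit logs into per-week, per-country counts so that no
# record can be traced back to an individual user (columns are illustrative).
import pandas as pd

logs = pd.DataFrame({
    "visit_date": pd.to_datetime(["2015-06-01", "2015-06-02", "2015-06-08"]),
    "country": ["NL", "FI", "NL"],
    "session_seconds": [120, 45, 300],
})

weekly = (logs
          .groupby([pd.Grouper(key="visit_date", freq="W"), "country"])
          .agg(visits=("session_seconds", "size"),
               mean_session=("session_seconds", "mean"))
          .reset_index())
print(weekly)
```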
Users will be able to log in to the platform. Mandatory data to register are
date of birth and email address. Further information can be entered but is not
mandatory. These login data will _not_ be used for research purposes. They
will also _not_ be disseminated as open data to outsiders.
The first scope of the platform will include an _Idea challenge_ , where users
are challenged to solve a BMI problem for an SME. Depending on whether this is
based on real cases, data management issues will have to be addressed in the
design of the _Idea challenge_ .
Specific measures on how to collect, manage and archive data for WP3 during
the second and third scope are to be further defined.
Assuming the foreseen adoption rate of 400,000 SMEs, the size of the data will
grow exponentially during the project.
### WP4 Quantitative longitudinal research
Three waves of survey data will be collected through a European market
research agency. The collection of data in the longitudinal survey follows the
common practices of research agencies and/or the Dutch and Finnish statistical
offices involved in the project. Exact procedures for data collection are
specified in the quantitative protocol (D4.1) and informed consent forms
(D4.3). Survey data will comprise answers to the survey instrument developed
in WP4. Background demographics will be collected on gender, nationality,
industry sector etc. It should be taken into account that these background
demographics are on such level of aggregation that they cannot be traced back
to individuals. Data collected will be anonymized by the research agency and
made available to the researchers. After the project open data that will be
shared with the outside world will not be traceable to individuals or
companies, even due to small industry segments and dominance of individual
firms in a small industry sector. Data will be collected in SPSS and Excel
files.
### WP5 Action design case research
BMI case descriptions will be collected in various formats. Exact procedures
for data collection are specified in the qualitative protocol (D5.1) and
informed consent forms (D5.5). Raw data will include interview tapes,
transcripts, videos, annual reports, photos and so on. The data has to be
available for qualitative analyses with tools such as ATLAS.ti for WP5
researchers. Some of the data will also be utilised by other WPs. Data
heterogeneity will be an issue.
For the 60 short cases, pre-existing data, collected in an Excel-based
database, will be reused.
Figure 1 describes how WP5 qualitative research data builds up and responds to
differing purposes.
**Figure 1.** ENVISION research material and access rights.
# 3\. Data Storage and Back-up
### WP3 Development BMI Platform
For the first scope, EVO’s subcontractor will host the platform and data.
For the second scope and later, EVO will host the platform and store the data
itself on its self-administered server.
The second scope of platform development should specify how long data from
SMEs will be stored.
### WP4 & WP5 Quantitative research & Case research
Datasets from both work packages will be stored in a research database to be
developed by the University of Turku (UTU) in task T5.1. The database defines
the access and use rights for the author, WP5 researchers and the consortium.
Public access will be defined later.
Prototype version 1.0 is being developed and hosted by the University of Turku
at _http://envision.utu.fi_ . It will be available to WP5 researchers for
testing in early June 2015. By autumn 2015, version 2.0 will be available to
the ENVISION consortium partners. The database and associated website will be
maintained continuously for the duration of the project. Technical and
structural modifications will be made continuously in accordance with the
Grant Agreement, internal consortium requirements, the research protocols from
WP4 and WP5, and database needs.
The development environment for the metadata database is an open-source
environment based on the Raspberry Pi testing platform, a Linux Debian server,
a MySQL database, phpMyAdmin, PHP and the WordPress web platform at the
University of Turku.
The public environment runs on a web hotel of the University of Turku’s IT
Services. It contains a MariaDB database (MySQL compatible), phpMyAdmin and
the WordPress web platform on a Linux Debian server.
For the WP4 survey data, personal information will not be collected by the
consortium but by the research agency, and will thus not be stored.
For WP5, the case studies may be fully open for dissemination purposes, based
on the rules formulated with regard to informed consent, in which case
personal data from participants is stored and disseminated. Case studies may
also be anonymized, in which case personal data is not to be disseminated.
Personal data from the case studies will be stored encrypted.
# 4\. Data Documentation
All involved researchers are aware that generating metadata is highly
important during data collection.
**WP2 & WP3 BMI Tooling & Platform **
To be defined during the second scope of developing tooling and platform.
### WP4 Quantitative survey research
Metadata will be collected in the form of labels in the SPSS files. Variable
names will include question numbers that are identifiable in the questionnaire
document. For computing statistics in SPSS, syntax files will be stored,
including a short textual description of what is being computed. For
conducting SEM analyses, researchers should maintain a logbook describing the
steps taken so that they can be reproduced. The logbook should be stored in a
separate folder on the Google Drive.
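For reuse outside SPSS, such variable labels can also be read
programmatically; a minimal sketch assuming the open-source pyreadstat library
and a hypothetical file name:

```python
# Read an SPSS file and inspect its variable labels (metadata), assuming
# the pyreadstat library; "wave1.sav" is a hypothetical file name.
import pyreadstat

df, meta = pyreadstat.read_sav("wave1.sav")
# Map of variable names (e.g. question numbers) to their descriptive labels.
for name, label in meta.column_names_to_labels.items():
    print(name, "->", label)
```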
### WP5 Action design case research
Each ENVISION case study contains multiple data documents (such as voice
recordings, memos, videos, photos, transcripts, case reports etc.). The access
rights to each document have to be defined by the responsible researcher, so
that it is accessible either to the general public, to the consortium, to WP5
researchers or to the responsible researcher only.
However, we need to provide meta-level information about all research
documents so that consortium members can search the database and identify
interesting documents in the whole database (even if they do not have access
to certain documents).
The metadata entity-relationship diagram in Figure 2 describes the meta-level
data.
The ER diagram of the metadata database v1.0 comprises six entities: Users of
the database (user id, user role, name, organisation); Cases (case id, name,
description, responsible researcher, case type, driver of BMI, BMI tool, main
message towards SMEs, suggestions for usage of the case material, lessons
learned, use rights); Companies (company id, name, address, turnover, balance
sheet total, number of employees, industry, confidentiality); Companies in
Cases, linking cases to companies; Research Documents (doc id, name, version,
date, case id, description, author, language, format, size, location of master
as an upload link, use rights); Document classification (doc id, keyword); and
Contacts/Interviewees (contact person id, case id, company id, name and
address, informed consent). Use rights take the values author, WP5
researchers, consortium or public.
**Figure 2.** Entity relationship Diagram of the metadata database v1.0
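A deliberately abbreviated relational sketch of the Figure 2 entities, using
SQLite purely for illustration (the production database is MySQL/MariaDB and
the full column sets are richer):

```python
# Abbreviated sketch of the Figure 2 metadata schema, using SQLite for
# illustration (the production database is MySQL/MariaDB).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users     (user_id INTEGER PRIMARY KEY, user_role TEXT,
                        name TEXT, organisation TEXT);
CREATE TABLE companies (company_id INTEGER PRIMARY KEY, company_name TEXT,
                        industry TEXT);
CREATE TABLE cases     (case_id INTEGER PRIMARY KEY, case_name TEXT,
                        case_type TEXT, use_rights TEXT,
                        responsible_id INTEGER REFERENCES users(user_id));
CREATE TABLE documents (doc_id INTEGER PRIMARY KEY, doc_name TEXT,
                        doc_version TEXT, use_rights TEXT,
                        case_id INTEGER REFERENCES cases(case_id),
                        author_id INTEGER REFERENCES users(user_id));
CREATE TABLE companies_in_cases (
    case_id INTEGER REFERENCES cases(case_id),
    company_id INTEGER REFERENCES companies(company_id));
""")
print("schema created")
```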
UTU will develop the metadata database structure according to Figure 2. The
database will be accessible at envision.utu.fi. The web site contains forms
for adding and editing data and a search engine for browsing the data. During
the summer, the web site will be tested and adjusted according to the
accumulated research data, which will be added to the database by the
responsible researchers.
# 5\. Data Access
In general, data ownership is jointly shared among consortium partners.
Commercial exploitation of data is not foreseen.
**WP2 & WP3 BMI Tooling & Platform **
Data will be accessible to EVO project participants only through
username/password.
### WP4 & WP5 Quantitative research & Case research
Data access, sharing and change rights are assigned according to the ENVISION
project’s management decisions.
The responsible researcher of each case will take care of adding the
meta-level information to the database using the ‘add new data’ forms at
envision.utu.fi.
The responsible researcher uploads the respective original research documents
to envision.utu.fi. If this is not possible (for instance because the research
data was not collected during the ENVISION project, or is owned by someone who
is not part of the ENVISION project), then at least the metadata has to be
provided, with information on where the original, full document is located.
The naming of files has to be distinct in order to achieve a clear structure
in the database. Each file name begins with a short name given to the case,
referring to the organisation, such as “SmartScope”. After this comes the
content-related part of the file name, such as “CEO interview transcript” or
“short case description”, then the version information referring to the day,
month and year of the last alteration of the file, followed by the
affiliation, the version status (dr = draft, fv = final version) and reviewer
initials if needed:
ENVISION_WP5_”case”_”content”_”date”_”affiliation”_dr#_”reviewer initials”.filetype
E.g. ENVISION_WP5_Rauma_Owner Interview_25052015_UTU_dr1.doc
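A small helper, written for this document only (the project does not prescribe
any file-naming tooling), shows how the convention composes:

```python
# Build an ENVISION WP5 file name following the convention above
# (illustrative helper; not part of the project tooling).
def envision_filename(case, content, date, affiliation,
                      version="dr1", reviewer="", filetype="doc"):
    parts = ["ENVISION", "WP5", case, content, date, affiliation, version]
    if reviewer:
        parts.append(reviewer)
    return "_".join(parts) + "." + filetype

print(envision_filename("Rauma", "Owner Interview", "25052015", "UTU"))
# -> ENVISION_WP5_Rauma_Owner Interview_25052015_UTU_dr1.doc
```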
Both draft and final versions can be stored in the database; remember to
include dr# or fv in the name.
The responsible researcher must ensure that the access rights to each document
are correct. The access rights are maintained via envision.utu.fi.
The ENVISION consortium partners can search the data using the search tools
provided at envision.utu.fi.
The search results will show the metadata and provide an ‘upload’ link to the
original research documents.
# 6\. Data Sharing and Reuse
**WP2 & WP3 BMI Tooling & Platform **
To be decided by the General Assembly, since this was not defined in the grant
agreement.
### WP4 Quantitative survey research
Data gathered in the survey will be made openly available after the project
has finished and scientific papers have been published, once it has been
anonymized in such a way that it cannot be traced back to individual
respondents, directly or indirectly.
These data will be stored and made available in the 3TU.Datacentrum, which
complies fully with H2020 requirements. 3TU.Datacentrum is a Trusted Digital
Repository for technical-scientific research data in the Netherlands and
located at the TU Delft Library.
Data that we do not produce in the project (e.g. existing cases, existing
survey data, existing data from statistical offices) will not be made openly
available.
### WP5 Case studies
Research data that is not privacy sensitive will be made available open access
through the data centre mentioned above, after the project has finished and
scientific papers have been published.
Data gathered in the case studies will be made openly available as long as it
does not harm privacy or competitiveness of the business being studied. This
will likely imply that we will make interview summary reports available, but
not interview recordings.
## 7\. Governance
To safeguard compliance with all aforementioned data management decisions, the
following governance measures are applied.
WP leaders are responsible for adhering to the above specifications for their
respective work package. For the overall project, TUD will be responsible for
complying with the data management plan. All consortium partners are
responsible for making sure personnel working on the project have read the
data management plan and internalized the principles. Data management will be
on the agenda in all monthly executive board Skype meetings as of September
2015.
The data management plan is considered a living document. As specified at
various points in this deliverable, some decisions cannot be taken yet because
they require further specifications of for instance the WP3 platform and WP2
tools. Updates to the data management plan are to be made and circulated
within the consortium in M12, M18, M24, M30 and M36. Major changes in the way
data is being managed in the WPs should be specified then. Major changes or
discussion points that cannot wait shall be addressed in the executive board
meetings.
To evaluate the efficacy of the data management plan, we will conduct an
evaluation in M12. The evaluation will at least include:
* WP2: Is the metamodel still consistent with what is being done in WP4 and WP5? Is updating the metamodel required?
* WP3: Is usage data of the platform being collected? Is it sufficiently aggregated to preserve anonymity?
* WP4: Is the survey data from the first wave anonymized correctly and issued with a unique identifier? Is the survey data being stored safely in the WP5 UTU database? Does the survey data include meaningful metadata (i.e. labels) that are understandable for outsiders? Do the informed consent forms (D4.3) align with the data management plan?
* WP5: Is the instantiated WP5 UTU database consistent with the specifications in this document? Is data and metadata from the first cases being generated correctly and understandably for outsiders? Do the informed consent forms (D5.5) align with the data management plan?
* WP5: Should we update the metamodel for data collection (i.e. Figure 2 in this deliverable) to include the method used to collect data, time period covered by the data, geographical area covered by the data? Should the metamodel for data collection be updated to meet standards like CERIF (Common European Research Information Format) or Dublin Core metadata standard?
About ENVISION – _Empowering SME Business Model Innovation_

In the current tough economic environment, business model innovation can be
the key to becoming or staying competitive. To support European
competitiveness and job creation, the ENVISION project aims at activating
small and medium sized enterprises (SME) across Europe to re-think and
transform their business models with the help of an easy-to-use, open-access
web platform. Through this platform, every small or medium company, regardless
of the country, sector or industry, will be guided in selecting the right
tools for their business makeover. The platform is being built for the use of
20 million European SMEs.

The ambitious goal of the ENVISION project is pursued by a consortium of nine
partners from seven countries: Delft University of Technology (The
Netherlands), University of Turku (Finland), Innovalor Ltd (The Netherlands),
evolaris next level Ltd (Austria), University of Maribor (Slovenia),
University of Murcia (Spain), AcrossLimits Ltd (Malta), bgator Ltd (Finland),
Kaunas University of Technology (Lithuania).

Website: http://www.envisionproject.eu
Facebook: http://www.facebook.com/InnovateBusinessModels
Twitter: _https://twitter.com/InnovateBM_ (@innovateBM)
0917_EWIT_641660.md
<table>
<tr>
<th>
</th>
<th>
The collection thematic area contains information on the volumes of e-waste
that is collected from households in the selected metropolitan areas,
collection methods that are harnessed in collecting municipal solid waste
(including e-waste), financing of municipal solid waste collection activities
and the involvement of the informal sector in collecting e-waste from
municipal solid waste management facilities.
The technology thematic area describes the technology that is being used to
treat obsolete e-waste, quantities of e-waste dismantled into components
(including dismantling technology used) and refurbished per annum.
The closed loop thematic area summarises the e-waste market in each
metropolitan area by identifying the number of market participants in the
e-waste market, the marketing routes for e-waste fractions, imports and
exports of e-waste fractions, the local downstream and recycling options
currently in use for different fractions, second-hand practices, and reuse
and refurbishment practices.
The finance and legislation thematic area identifies the financing (taxes,
fees, costs) and legislative provisions that impact on municipal solid waste
and e-waste in each of the eight metropolitan areas in Africa and Europe. The
assessment tools used to collect the data in the first instance will be
preserved for comparison of data across metropolitan areas and for future
reference. Separate files will be kept: one for freshly collected data from
the selected metropolitan areas and another for data that has been uploaded
to the information portal.
</th> </tr> </table>
<table>
<tr>
<th>
**Data quality & standards **
</th>
<th>
While the project team is making significant efforts to uphold high data
collection standards, the quality of the data produced will be affected by
the e-waste context (demographic, economic and infrastructural) of each
metropolitan area. Large and better-developed metropolitan areas in Africa and
Europe produce higher-quality quantitative data than smaller and less
developed areas.
The use of the standardised assessment tool in collecting data across the
eight metropolitan areas will ensure that comparable data are gathered from
the selected cities. The adoption of the WEEE Directive for EEE categorisation
ensures that standardised data on e-waste generation and collection by
category is obtained across the eight metropolitan areas.
The sources of the data, including the method of data provision (e.g.
official statistics, studies, expert guesses, others), will be documented
together with the data.
</th> </tr>
<tr>
<td>
**Data access & sharing **
</td>
<td>
Policy makers in central government, municipalities, industrial users, the
informal sector, R&D organisations, universities, NGOs, and participants and
stakeholders in the e-waste markets are the main user groups targeted to
access the e-waste information portal for decision-making purposes. Data from
the eight cities will be formatted, transformed and documented in a common way
that makes them comparable across the selected cities.
No online raw data will be distributed outside the consortium, and only
Delivery Partners (DP) will have access to the raw data. Only public documents
(PU) will be disseminated to the general public, unless otherwise agreed by
the Project Board (PB). A summary of document types and the document sharing
envisaged is shown below.
</td> </tr>
<tr>
<td>
**Intellectual property**
</td>
<td>
The database rights, copyrights and patents with regard to the information
contained in the information portal belong to the EWIT consortium.
Reasonable steps will be taken to protect the security and confidentiality of
the information contained in the portal. Written agreements between
metropolitan areas in Africa and Europe and interested stakeholders
(universities, research & development institutions) will be required in cases
where information, retrieved from the portal is re-used for planning, research
and development purposes.
</td> </tr>
<tr>
<td>
**Data archiving & preservation **
</td>
<td>
E-waste information contained in the portal will be preserved and archived to
ensure availability and access to such information in the long term. The
digital information could be deposited with a trusted digital archive where it
will be curated and handled according to good practices in digital
preservation. In addition to the distribution of the data through the e-waste
information portal, future long term use of the data will be ensured by
placing a copy of the data into a repository that safeguards the files.
The preserved information can be retrieved from the archives and be
electronically filtered and sorted using variables such as the metropolitan
area where it originated from, waste electrical and electronic equipment
(WEEE) category it falls under and the dates indicating when the information
was collected.
</td> </tr>
<tr>
<td>
**Main risks to data security**
</td>
<td>
The EWIT Project Board, together with the management team, needs to develop a
framework to manage data access on the e-waste information portal and to
enhance data security. The Project Board and management team will decide,
among other issues, how to enforce permissions, restrictions and embargoes on
information. The team will also consider further data security issues, such as
the publication of sensitive data, the appropriateness of off-network storage,
and the downloading and storage of information on devices such as personal
computers and laptops.
The main risks to data security envisaged in this project are:
* Unauthorised downloading and re-use of information retrieved from the e-waste information portal
* Release of data from within metropolitan municipalities before being checked for accuracy and authenticity
* Accidental damage or malicious modification of e-waste data
</td> </tr> </table>
0918_CRACKER_645357.md
# Executive Summary
This document describes the Data Management Plan (DMP) to be adopted within
CRACKER and provides information on CRACKER’s data management policy and key
information on all datasets to be produced within CRACKER, as well as
resources developed by the “Cracking the language barrier” federation of
projects (also known as the “ICT-17 group of projects”) and other projects who
wish to follow a common line of action, as provisioned in the CRACKER
Description of Action.
This first version includes the principles according to which the plan is
structured and the standard practices for data management that will be
implemented. Updates of the CRACKER DMP document will be provided in M18 (June
2016) and M36 (December 2017) respectively. In these next versions, more
detailed information on the actual datasets and their management will be
provided.
The document is structured as follows:
* Background and rationale of a DMP within H2020 (section 2)
* Implementation of the CRACKER DMP (section 3)
* Collaboration of CRACKER with other projects and initiatives (section 4)
* Recommendations for a harmonized approach and structure for a Data Management Plan to be optionally adopted by the “Cracking the language barrier” federation of projects (section 5).
# Background
The use of a Data Management Plan (DMP) is required for projects participating
in the Open Research Data Pilot, which aims to improve and maximise access to
and re-use of research data generated by projects. The elaboration of DMPs in
Horizon 2020 projects is specified in a set of guidelines applied to any
project that collects or produces data. These guidelines explain how projects
participating in the Pilot should provide their DMP, i.e. detail the types of
data that will be generated or gathered during the project and after it is
completed, the metadata and standards that will be used, the ways in which
these data will be exploited and shared for verification or reuse, and how
they will be preserved.
In principle, projects participating in the Pilot are required to deposit the
research data described above, preferably into a research data repository.
Projects must then take measures, to the extent possible, to enable third
parties to access, mine, exploit, reproduce and disseminate this research data
free of charge.
The guidance for DMPs calls for clarification and analysis of the main
elements of a project’s data management policy. The respective template
briefly identifies the following five coarse categories 1 :
1. **Data set reference and name** : an identifier for the data set; use of a standard identification mechanism to make the data and the associated software easily discoverable, readily located and identifiable.
2. **Data set description** : details describing the produced and/or collected data and associated software and accounting for their usability, documentation, reuse, assessment and integration (i.e., origin, nature, volume, usefulness, documentation/publications, similar data, etc.).
3. **Standards and metadata** : related standards employed or metadata prepared, including information about interoperability that allows for data exchange and compliance with related software or applications.
4. **Data sharing** : procedures and mechanisms enabling data access and sharing, including details about the type or repositories, modalities in which data are accessible, scope and licensing framework.
5. **Archiving and preservation (including storage and backup)** : procedures for long-term preservation of the data including details about storage, backup, potential associated costs, related metadata and documentation, etc.
# The CRACKER DMP
## Introduction and Scope
For its own datasets, CRACKER follows META-SHARE’s
( _http://www.meta-share.eu/_ ) best practices for data documentation, verification and
distribution, as well as for curation and preservation, ensuring the
availability of the data throughout and beyond the runtime of CRACKER and
enabling access, exploitation and dissemination, thereby also complying with
the standards of the Open Research Data Pilot 2 .
META-SHARE is a pan-European infrastructure bringing together online providers
and consumers of language data, tools and services. It is organized as a
network of repositories that store language resources (data, tools and
processing services) documented with high-quality metadata, aggregated in
central inventories allowing for uniform search and access. It serves as a
component of a language resource marketplace for researchers, developers,
professionals and industrial players, catering for the full development cycle
of language resources and technology, from research through to innovative
products and services [Piperidis, 2012].
Language resources in META-SHARE span the whole spectrum, from monolingual and
multilingual data sets, both structured (e.g., lexica, terminological
databases, thesauri) and unstructured (e.g., raw text corpora), to language
processing tools (e.g., part-of-speech taggers, chunkers, dependency
parsers, named entity recognisers, parallel text aligners, etc.). Resources
are described according to the META-SHARE metadata schema [Gavrilidou et al.
2012], catering in particular for the needs of the HLT community, while the
META-SHARE model licensing scheme has a firm orientation towards the creation
of an openness culture respecting, however, legacy and less open, or
permissive, licensing options.
META-SHARE has been in operation since 2012, and it is currently in its 3.0.1
version, released in January 2013. It currently features 29 repositories set
up and maintained by 37 organisations in 25 countries of the EU. The observed
usage as well as the number of nodes, resources, users, queries, views and
downloads are all encouraging and considered as supportive of the choices made
so far [Piperidis et al., 2014]. Resource sharing in CRACKER will build upon
and extend the existing META-SHARE resource infrastructure, its specific MT-
dedicated repository ( _http://qt21.metashare.ilsp.gr_ ) as well as editing
and annotation tools in support of translation evaluation and translation
quality scoring (e.g., _http://www.translate5.net/_ ).
This infrastructure, together with its bridges, will provide support
mechanisms for the identification, acquisition, documentation and sharing of
MT-related data sets and language processing tools.
## Dataset Reference and Name
CRACKER will opt for a standard identification mechanism to be employed for
each data set, in addition to the identifier used internally by META-SHARE
itself. Two options will be considered for the dataset ID: a PID (Persistent
Identifier, a long-lasting reference to a dataset) or the ISLRN
( _International Standard Language Resource Number_ ), the most recent
universal identification schema for LRs, which assigns LRs unique names using
a standardized nomenclature, ensuring that they are identified and
consequently recognized with proper references (cf. Figures 1 and 2).
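As an illustration of the second option, the sketch below checks that a string matches the printed ISLRN layout (four groups of three digits plus a final digit, as in Figure 1). It is a format check only, assuming nothing about ISLRN check-digit rules, and the function name is ours.

```python
import re

# Printed ISLRN pattern: XXX-XXX-XXX-XXX-X (13 digits in total), e.g.
# "060-785-139-403-2" from Figure 1. Format check only; this does not
# validate any check digit.
ISLRN_RE = re.compile(r"^\d{3}-\d{3}-\d{3}-\d{3}-\d$")

def looks_like_islrn(identifier: str) -> bool:
    """Return True if the string follows the printed ISLRN layout."""
    return bool(ISLRN_RE.match(identifier))

print(looks_like_islrn("060-785-139-403-2"))  # True
print(looks_like_islrn("10.5281/zenodo.1"))   # False (a DOI-style PID)
```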
**Figure 1. An example resource entry from the ISLRN website indicating the
resource metadata, including the
ISLRN,_http://www.islrn.org/resources/060-785-139-403-2/_ . **
**Figure 2. Examples of resources with the ISLRN indicated, from the ELRA
(left) and the LDC (right) catalogues.**
## Dataset Description
In accordance with META-SHARE, CRACKER will address the following resource and
media types:
* **corpora** (text, audio, video, multimodal/multimedia corpora, n-gram resources),
* **lexical/conceptual resources** (e.g., computational lexicons, ontologies, machine-readable dictionaries, terminological resources, thesauri, multimodal/ multimedia lexicons and dictionaries, etc.)
* **language descriptions** (e.g., computational grammars)
* **technologies** (tools/services) that can be used for the processing of data resources
Several datasets (test data, training data) produced by the WMT, IWSLT and
QT Marathon events, later extended with information on the results of their
respective evaluation and benchmarking campaigns (documentation, performance
of the systems, etc.), will be documented and made available through
META-SHARE.
A preliminary list of CRACKER resources with brief descriptive information is
provided below. This list is only indicative of the resources to be included
in CRACKER and more detailed information and descriptions will be provided in
the course of the project.
### R#1
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT Test Sets
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
The core languages are German-English and Czech-English; other guest language
pairs will be introduced each year.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The source data are crawled from online news sites and carry the respective
licensing conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
For tuning and testing MT systems.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
3000 sentences per language pair, per year. There are typically five language
pairs (not all funded by CRACKER).
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the test sets for the WMT shared translation task. They are small
parallel data sets used for testing MT systems, and are typically created by
translating a selection of crawled articles from online news sites. They are
made available from the appropriate WMT website (i.e.
_http://www.statmt.org/wmt15/_ for 2015)
</td> </tr> </table>
### R#2
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT Translation Task Submissions
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
They match the languages of the test sets.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Preferably CC BY 4.0.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Research into MT evaluation. MT error analysis.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
The 2015 tarball is 25 MB.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the submissions to the WMT translation task from all teams. We
create a tarball for use in the metrics task, but it is available for future
research in MT evaluation. Again it is available from the WMT website (
_http://www.statmt.org/wmt15/_ )
</td> </tr> </table>
### R#3
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT Human Evaluations
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Pairwise rankings of MT output.
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Numerical data (in csv)
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Preferably CC BY 4.0
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
In conjunction with the WMT Translation Task Submissions, this can be used for
research into MT evaluation.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
For 2014, it was 0.5 MB.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the pairwise rankings of the translation task submissions. They will
also be available from the WMT website (e.g., _http://www.statmt.org/wmt15/_ )
</td> </tr> </table>
### R#4
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT News Crawl
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English, German, Czech plus variable guest languages.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The source data are crawled from online news sites and carry the respective
licensing conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Building MT systems
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
For 2014, it was 5.3 GB (compressed).
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
This data set consists of text crawled from online news, with the HTML
stripped out and the sentences shuffled.
They will also be available from the WMT website (e.g.,
_http://www.statmt.org/wmt15/_ )
</td> </tr> </table>
## Standards and Metadata
CRACKER will follow META-SHARE’s best practices for data documentation. The
basic design principles of the META-SHARE model have been formulated according
to specific needs identified, namely: (a) a typology for language resources
(LR) identifying and defining all types of LRs and the relations between them;
(b) a common terminology with as clear semantics as possible; (c) minimal
schemas with simple structures (for ease of use) but also extensive, detailed
schemas (for exhaustive description of LRs); (d) interoperability between
descriptions of LRs and associated software across repositories.
In answer to these needs, the following design principles were formulated:
* expressiveness, i.e., cover any type of resource;
* extensibility, allowing for future extensions and catering for combinations of LR types for the creation of complex resources;
* semantic clarity, through a bundle of information accompanying each schema element;
* flexibility, by employing both exhaustive and minimal descriptions;
* interoperability, through mappings to widely used schemas (DC, ISOcat DCR).
The central entity of the META-SHARE ontology is the Language Resource. In
parallel, LRs are linked to other satellite entities through relations,
represented as basic elements. The interconnection between the LR and these
satellite entities depicts the LR’s lifecycle from production to use:
reference documents related to the LR (papers, reports, manuals etc.),
persons/organizations involved in its creation and use (creators, distributors
etc.), related projects and activities (funding projects, activities of usage
etc.), accompanying licenses, etc. CRACKER will follow these standard
practices for data documentation, in line with their design principles of
expressiveness, extensibility, semantic clarity, flexibility and
interoperability.
The META-SHARE metadata can also be represented as linked data following the
work being done in Task 3.3 of the CRACKER project, the LD4LT group
(https://www.w3.org/community/ld4lt/), and the LIDER project. Such
representation can be generated by the mapping process initiated by the above
tasks and initiatives.
As an example, a subset of the META-SHARE metadata records has been converted
to Linked Data, accessible via the Linghub portal
( _http://linghub.lider-project.eu_ ).
Included in the conversion process to OWL 3 was the legal rights module of
the META-SHARE schema, taking into account the ODRL model & vocabulary v.2.1
(https://www.w3.org/community/odrl/model/2.1/).
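As a toy illustration of such a mapping (our sketch, not the actual LD4LT output), the following snippet builds a minimal linked-data record for a resource with rdflib and Dublin Core terms; the namespace and resource URI are placeholders, and the real mapping of the META-SHARE schema is considerably richer.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# Placeholder namespace and resource URI, for illustration only.
MS = Namespace("http://example.org/metashare#")
resource = URIRef("http://example.org/resources/wmt15-test-sets")

g = Graph()
g.bind("dcterms", DCTERMS)
g.add((resource, RDF.type, MS.LanguageResource))
g.add((resource, DCTERMS.title, Literal("WMT Test Sets")))
g.add((resource, DCTERMS.license, Literal("see source licensing conditions")))

# Emit the record as Turtle, the common Linked Data serialisation.
print(g.serialize(format="turtle"))
```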
## Data Sharing
As noted above, resource sharing will build upon META-SHARE. CRACKER will
maintain and release an improved version of the META-SHARE software.
For its own data sets, CRACKER will continue to apply, whenever possible, the
permissive licensing and open sharing culture which has been one of the key
components of META-SHARE for handling research data in the digital age.
Consequently, for the MT/LT research and user communities, sharing of all
CRACKER data sets will be organised through META-SHARE. The metadata schema
provides components and elements that address copyright and Intellectual
Property Rights (IPR) issues, restrictions imposed on data sharing, and IPR
holders. These, together with an existing licensing toolkit, can serve as
guidance for selecting the appropriate licensing solution and creating the
respective metadata. In parallel, ELRA/ELDA has recently implemented a
licensing wizard 4 , which helps rights holders define and select the
appropriate license under which to distribute their resources. The wizard may
be integrated into, or linked from, META-SHARE.
## Archiving and Preservation
All datasets produced will be provided and made sustainable through the
existing META-SHARE repositories, or through new repositories that partners
may choose to set up and link to the META-SHARE network. Datasets will be
stored locally in the repositories’ storage layer in compressed format.
# Collaboration with Other Projects and Initiatives
CRACKER will pursue close collaboration with the Coordination and Support
Action project LT-Observatory in coordinating their respective activities
regarding documentation, sharing, annotation and filtering of machine
translation related language resources.
The two projects plan to use the META-SHARE and CLARIN infrastructures
respectively. META-SHARE/META-NET and CLARIN have a long-standing
Collaboration Agreement, which was initially realised in terms of building
bridges and mapping services between their metadata models, the META-SHARE MD
schema 5 and the CLARIN CMDI 6 . Furthermore, the two infrastructures can
engage in mutual harvesting of their metadata inventories using standard
protocols that both have now implemented.
In parallel, the two-year service contract CEF.AT, which aims at the
collection of data produced by public sector bodies in the EU for the CEF
Automated Translation Digital Infrastructure, is another excellent opportunity
for collaboration with CRACKER. CRACKER will discuss the possibility of
storing, or providing links to, and curating the open datasets that will be
collected within CEF.AT.
# Recommendations for Harmonised DMPs for the ICT-17 Federation of Projects
One of CRACKER’s main goals is to bring together all actions funded through
H2020-ICT-17 ( _QT21_ , _HimL_ , _TraMOOC_ , _MMT_ , _LT_Observatory_ ),
together with the FP7 project _QT-Leap_ and other related projects (the
“Cracking the language barrier” federation of projects), and to find synergies
and establish information channels between them, including a suggested
approach towards harmonised Data Management Plans that share the same set of
key principles.
At the kick-off meeting of the ICT-17 group of projects on April 28, 2015,
CRACKER offered support to the “Cracking the language barrier” federation of
projects by proposing a Data Management Plan template with shared key
principles that can be applied, if deemed helpful, by all projects, again
advocating an open sharing approach whenever possible (also see D1.2). This
plan will be included in the overall communication plan and it will inform the
working group that will maintain and update the roadmap for European MT
research.
In future face-to-face or virtual meetings of the federation, we propose to
discuss the details about metadata standards, licenses, or publication types.
Our goal is to prepare a list of planned tangible outcomes of all projects,
i.e., all datasets, publications, software packages and any other results,
including technical aspects such as data formats. We would like to stress that
the intention is not to provide the primary distribution channel for all
projects’ data sets but to provide, in addition to the channels foreseen in
the projects’ respective Descriptions of Actions, one additional, alternative
common distribution platform and approach for metadata description for all
data sets produced by the “Cracking the language barrier” federation of
projects.
<table>
<tr>
<th>
**In this respect, the activities that the participating projects may
optionally undertake are the following:**
1. Participating projects may consider using META-SHARE as an additional, alternative distribution channel for their tools or data sets, using one of the following options:
1. projects may set up a project or partner specific META-SHARE repository, and use either open or even restrictive licences;
2. projects may join forces and set up one dedicated “Cracking the language barrier” META-SHARE repository to host the resources developed by all participating projects, and use either open or even restrictive licences (as appropriate).
2. Participating projects may wish to use the META-SHARE repository software 7 for documenting their resources, even if they do not wish to link to the network.
</th> </tr> </table>
The collaboration in terms of harmonizing data management plans and
recommending distribution through open repositories forms one of the six areas
of collaboration indicated in the _Multilateral Memorandum of Understanding,
“Cracking the Language Barrier”_ . This MoU document was initiated by CRACKER
upon the decision of the representatives of all European projects funded
through Horizon 2020, ICT-17, in Riga in April 2015. All projects have been
invited to sign the MoU, whose goal is to establish a federation that
contributes to the overall strategic objective of “cracking the language
barrier”. Participation in one or more of the potential areas of collaboration
in this joint community activity is optional.
## Recommended Template of a DMP
As pointed out already, the collaboration in terms of harmonizing data
management plans is considered an important aspect of convergence within the
group of projects. In this respect, any project that is interested in and
intends to collaborate towards a joint approach for a DMP may follow the
proposed structure of a DMP template. The following section describes the
recommended template, while the previous section (3) has provided a concrete
example of its implementation, i.e. the CRACKER DMP. It is, of course,
expected that any participating project may adapt its DMP content to
project-specific aspects and scope. These DMPs are also expected to be
completed gradually as the projects progress with their implementation.
<table>
<tr>
<th>
**I. The ABC Project DMP**
i. **Introduction / Scope**
ii. **Data description**
iii. **Identification mechanism**
iv. **Standards and Metadata**
v. **Data Sharing**
vi. **Archiving and preservation**
</th> </tr> </table>
**Figure 3. The recommended template for the implementation and structuring of
a DMP.**
### Introduction and Scope
Overview of, and approach to, the resource sharing activities underpinning the
language technology and machine translation research and development within
each participating project and as part of the “Cracking the language barrier”
initiative of projects.
### Dataset Reference and Name
It is recommended that a standard identification mechanism should be employed
for each data set, e.g., (a) a PID (Persistent Identifier as a long-lasting
reference to a dataset) or (b) _ISLRN_ (International Standard Language
Resource Number).
### Dataset Description
It is recommended that the following resource and media types are addressed:
* **corpora** (text, audio, video, multimodal/multimedia corpora, n-gram resources),
* **lexical/conceptual resources** (e.g., computational lexicons, ontologies, machine-readable dictionaries, terminological resources, thesauri, multimodal/ multimedia lexicons and dictionaries, etc.)
* **language descriptions** (e.g., computational grammars)
* **technologies** (tools/services) that can be used for the processing of data resources
To support the identification of resources within the “Cracking the language
barrier” initiative and to obtain a first rough estimate of their number,
coverage and other core characteristics, CRACKER will circulate two templates,
dedicated to datasets and to associated tools and services respectively.
Projects that decide to participate in this uniform cataloguing are invited to
fill in these templates with brief descriptions of the resources they expect
to produce and/or collect. The templates are as follows (also in the
Appendix):
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
Complete title of the resource
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Choose one of the following values:
Lexical/conceptual resource, corpus, language description (missing values can
be discussed and agreed upon with CRACKER)
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
The physical medium of the content representation, e.g., video, image, text,
numerical data, n-grams, etc.
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
The language(s) of the resource content
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The licensing terms and conditions under which the LR can be used
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
The medium, i.e., the channel used for delivery or providing access to the
resource, e.g., accessible through interface, downloadable, CD/DVD, hard copy
etc.
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Foreseen use of the resource for which it has been produced
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
Size of the resource with regard to a specific size unit measurement in form
of a number
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A brief description of the main features of the resource (including url, if
any)
</td> </tr> </table>
**Table 1. Template for datasets description**
<table>
<tr>
<th>
**Technology Name**
</th>
<th>
Complete title of the tool/service/technology
</th> </tr>
<tr>
<td>
**Technology Type**
</td>
<td>
Tool, service, infrastructure, platform, etc.
</td> </tr>
<tr>
<td>
**Function**
</td>
<td>
The function of the tool or service, e.g., parser, tagger, annotator, corpus
workbench etc.
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
The physical medium of the content representation, e.g., video, image, text,
numerical data, n-grams, etc.
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
The language(s) that the tool/service operates on
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The licensing terms and conditions under which the tool/service can be used
</td> </tr>
<tr>
<td>
**Distribution Medium**
</td>
<td>
The medium, i.e., the channel used for delivery or providing access to the
tool/service, e.g., accessible through interface, downloadable, CD/DVD, etc.
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Foreseen use of the tool/service for which it has been produced
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A brief description of the main features of the tool/service
</td> </tr> </table>
**Table 2. Template for technologies description**
### Standards and Metadata
Participating projects are recommended to deploy the META-SHARE metadata
schema for the description of their resources and to provide all details
regarding their name, identification, format, etc.
Providers of resources wishing to participate in the initiative will be able
to request and get assistance through dedicated helpdesks on questions
concerning (a) metadata-based LR documentation at
_[email protected]_ , (b) the use of licences, rights of use,
IPR issues, etc. at _[email protected]_ , and (c) repository
installation and use at _[email protected]_ .
### Data Sharing
It is recommended that all datasets (including all relevant metadata records)
produced by the participating projects be made available under licenses that
are as open and as standardised as possible, and that are established as best
practice. Any interested provider can consult the META-SHARE licensing options
and pose related questions to the respective helpdesk.
### Archiving and Preservation
As regards the procedures for long-term preservation of the datasets, two
options may be considered:
1. As part of the further development and maintenance of the META-SHARE infrastructure, a project that participates in the “Cracking the language barrier” initiative may opt to set up its own project or partner specific META-SHARE repository and link to the META-SHARE network, with CRACKER providing all support necessary in the installation, configuration and set up process.
2. Alternatively, one dedicated “Cracking the language barrier” META-SHARE repository can be set up to host the resources developed by all participating projects, with CRACKER catering for procedures and mechanisms enabling long-term preservation of the datasets.
It should be repeated at this point that following the META-SHARE principles,
the curation and preservation of the datasets, together with the rights of
their use and possible restrictions, are under the sole control and
responsibility of the data providers.
# Introduction
The SHEER database gathers a large amount of interdisciplinary data: data
collected from seven independent episodes, research data, and data for the
dissemination of project results. In order to properly manage such a large
volume and variety of data, a Data Management Plan (DMP) for the SHEER project
has been prepared.
A data management plan describes the data management life cycle for all
datasets to be collected, processed or generated by a research project. It
must cover: 1) the handling of research data during and after the project;
2) what data will be collected, processed or generated; 3) what methodology
and standards will be applied; 4) whether data will be shared / made open
access, and how; 5) how data will be curated and preserved.
When significant changes arise during the project (e.g. new data sets, changes
in consortium policies, external factors), the Data Management Plan should be
updated.
In this deliverable the following issues are presented: a description of the
data types gathered within the SHEER project, the schedule for data sharing,
standards for data format and content, policies for data stewardship and
preservation, and procedures for providing access to data. The document
reflects the current state of project realization and can therefore be
expected to evolve together with the project.
# Type of data and information created
One of the main objectives of the SHEER project is to develop a probabilistic
methodology to assess and mitigate the short- and the long-term environmental
risks associated with the exploration and exploitation of shale gas. To this
end, the SHEER project will use monitoring data available in the literature
together with the monitoring data acquired during the project in the Wysin
shale gas exploration site in Poland.
From the perspective of the _Data Management Plan_ the following types of
SHEER data will be created during the project:
2.1 _Site data_ \- a database consisting of seismicity, data on the state of
water and air, and operational data collected from a shale gas site during the
project, together with the corresponding types of data gathered from past case
studies. The database will also include data from conventional hydrocarbon
exploration and enhanced geothermal fields involving fluid injection, which
will be used as a proxy;
2.2 _Research data_ \- developed methodology to assess environmental impacts
and risks across the different operational phases of shale gas exploitation;
proposal of best practices for the monitoring and assessment of environmental
impacts associated with shale gas exploration and exploitation; guidelines for
risk management of shale gas exploitation induced environmental impacts;
2.3 _Data for the SHEER results dissemination process_ – papers, leaflets,
posters, presentations, interviews, reports, photos, newsletters, etc.
Due to the multidisciplinary nature of the problem undertaken in the SHEER
project, the data collected will be heterogeneous. Furthermore, the data
gathered from past case studies do not conform to a single format. Therefore,
one of the objectives of this task is to homogenize and harmonize data coming
from different research fields (geophysical, geochemical, geological,
technological, etc.) and to create and provide access to an advanced database
of environmental impact indicators associated with shale gas exploitation.
This requires the development of an over-arching structure for higher-level
data integration. In addition, descriptive and readable metadata should be
added to support the database.
## Site data
During the SHEER project a new multidisciplinary environmental database from
the on-site monitoring of shale gas exploration operation at the Wysin Site in
Poland will be created. Moreover, the SHEER database will compile existing
multidisciplinary data from past shale gas exploitation test sites, processing
procedures, results of data interpretation and recommendations, as well as
other documents describing the state of the art. The database is also planned
to include data from proxy sites: conventional hydrocarbon exploration and
enhanced geothermal fields that used fluid injection.
Following the EPOS WG10 nomenclature (Lasocki et al., 2014), the basic unit of
the SHEER database is the _episode_ . The episode is a comprehensive data
description of a geophysical (e.g. deformation) process, induced or triggered
by human technological activity in the field of exploration and exploitation
of georesources, which under certain circumstances can become hazardous for
people, infrastructure and/or the environment. Each episode consists of a
time-correlated collection of geophysical data representing the geophysical
process, technological data representing the technological activity (which is
the cause of this process), and all other relevant geo-data describing the
environment in which the technological activity and its result or by-product,
the geophysical process, take place.
The SHEER episodes are:
1. Unique data sets from shale gas operation sites in Lubocino (Poland) and Preese Hall (UK);
2. Conventional oil and gas production sites (the Groningen site in the Netherlands and the Beckingham site in the UK);
3. Sites where stimulation for geothermal energy production and geothermal experiments took place. They will be included in the SHEER database due to their close analogy to the mechanisms of shale gas stimulation and induced seismicity problems. For this reason these data will be used as a proxy (The Geysers site in California, USA and the Gross Schönebeck experimental site in Germany);
4. A unique component of the SHEER database is represented by the monitoring activity performed during the project in one active shale gas exploration site in Wysin, Pomerania, Poland. In this site, the seismicity, water conditions and air pollution are being monitored in the direct vicinity of newly drilled wells with horizontal stimulation. The monitoring activity commenced in the pre-operational phase, in order to determine the key baseline of the monitored parameters. Afterwards, the assessment of both the exploratory vertical drillings and the horizontal fracking phases will be performed. Finally, in order to assess experimentally protracted environmental effects, the monitoring activity will continue after the end of the exploration and appraisal operation phases.
The list of the SHEER episodes is provided in Table 1. The types of data which
are planned to be integrated within the SHEER database are summarized in
Table 2.
**Table 1 List of the SHEER database episodes.**
<table>
<tr>
<th>
**Inducing technology**
</th>
<th>
**Name**
</th>
<th>
**Case type**
</th> </tr>
<tr>
<td>
**Unconventional hydrocarbon extraction**
</td>
<td>
WYSIN Shale Gas
</td>
<td>
Present case study
</td> </tr>
<tr>
<td>
LUBOCINO Shale Gas
</td>
<td>
Past case study
</td> </tr>
<tr>
<td>
PREESE HALL Shale Gas
</td>
<td>
Past case study
</td> </tr>
<tr>
<td>
**Conventional hydrocarbon extraction**
</td>
<td>
BECKINGHAM SITE conventional hydrocarbon production
</td>
<td>
Past case study
</td> </tr>
<tr>
<td>
GRONINGEN FIELD conventional hydrocarbon production
</td>
<td>
Past case study
</td> </tr>
<tr>
<td>
**Geothermal energy production**
</td>
<td>
GROSS SCHÖNEBECK geothermal
energy production experiment
</td>
<td>
Past case study
</td> </tr>
<tr>
<td>
THE GEYSERS geothermal energy production
</td>
<td>
Past case study
</td> </tr> </table>
**Table 2 List of data types which are planned to be integrated within SHEER
database.**
<table>
<tr>
<th>
**Episode Name**
</th>
<th>
**Section Type**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
**Data relevant for the considered hazards**
</th>
<th>
**Industrial data**
</th>
<th>
**Geodata**
</th> </tr>
<tr>
<th>
**Seismic data**
</th>
<th>
**Water**
**quality data**
</th>
<th>
**Air quality data**
</th>
<th>
**Satellite data**
</th> </tr>
<tr>
<td>
**LUBOCINO**
**Shale Gas**
</td>
<td>
</td>
<td>
**X**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**PREESE HALL**
**Shale Gas**
</td>
<td>
**X**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
**X**
</td>
<td>
**X**
</td> </tr>
<tr>
<td>
**BECKINGHAM**
**SITE conventional hydrocarbon production**
</td>
<td>
**X**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
**X**
</td>
<td>
</td> </tr>
<tr>
<td>
**GRONINGEN FIELD**
**conventional hydrocarbon production**
</td>
<td>
**X**
</td>
<td>
</td>
<td>
</td>
<td>
**X**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**GROSS**
**SCHÖNEBECK**
**geothermal energy production experiment**
</td>
<td>
**X**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
**X**
</td>
<td>
**X**
</td> </tr>
<tr>
<td>
**THE GEYSERS**
**geothermal energy production**
</td>
<td>
**X**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
**X**
</td>
<td>
</td> </tr>
<tr>
<td>
**WYSIN Shale Gas**
</td>
<td>
**X**
</td>
<td>
**X**
</td>
<td>
**X**
</td>
<td>
**X**
</td>
<td>
**X**
</td>
<td>
**X**
</td> </tr> </table>
During the first phase of the SHEER project, the SHEER database inquiries were
sent to the data owners in order to collect information about data
availability and comprehensiveness. The information required is:
* availability verification of data planned to be integrated;
* completeness verification of episode data (mandatory availability of data relevant for the considered hazards and technological data);
* completeness verification of seismic data (mandatory availability of catalogue and signals);
* assessment of available data in terms of additional processing and preparation;
* assessment of available data formats in terms of format homogenization.
Data providers are also obliged to prepare information about the observation
network as an inventory.xml file (SeisComp standard).
Ground subsidence satellite data will be collected for the Wysin and Groningen
Field episodes within a defined time window.
## Research data
Research data, i.e. data processed, organized, structured or presented in a
given context, are generated through different processes and can be divided
into different categories. The categories considered in the SHEER project are:
1. Simulations: data generated from test models where model and metadata are also important;
2. Derived or compiled information: e.g. text and data mining, compiled database, 3D models;
3. Reference data: methodologies to assess environmental impacts and risks across the different operational phases of shale gas exploitation and exploration, best practices. This type of data will include: text or word documents, spreadsheets, questionnaires, photographs, slides, samples, collection of digital objects acquired and generated during the research process, data files, models, algorithms, scripts, contents of an application such as input, output, log files for analysis software, simulation software, schemas, methodologies and workflows; standard operating procedures and protocols.
The document repository of the Local Data Center of IG PAS (see Section 2.4),
which will be available through TCS AH at the end of the project, is ready to
collect the research data produced during the project; these data will also be
partially available on the SHEER website.
## Data for the dissemination process of SHEER results
This group of data will include the SHEER communication strategy and tools for
a comprehensive policy of dissemination and integration in the following
areas: inside the SHEER Consortium, to the wider scientific and technical
community, and to external stakeholders.
All the information is uploaded to the SHEER website
( _http://www.sheerproject.eu/_ ) in the dedicated “Dissemination” section,
where external users can download presentations, scientific publications and
media articles (Figure 1).
**Fig. 1 SHEER website and Dissemination section.**
The information is also communicated through social media tools (Twitter and
Facebook), newsletters and blogs, leaflets (available in eleven languages),
interviews, photos, etc.
All the aspects related to the SHEER communication strategy and tool are
detailed in the deliverable D8.4 “Dissemination plan and guidelines for SHEER
reports”.
Dissemination data produced during the project will be also stored in the
document repository (see Section 2.4) of the Local Data Center of IG PAS.
## SHEER database
All the data collected and produced within the SHEER project are stored in the
Local Data Centre (LDC), which operates at IG PAS. Additionally, during the
project data will be systematically published on the TCS AH research platform
(see Chapter 3).
The LDC has been constructed using the CIBIS software – the system created to
store, manage, configure, verify and describe data in the IS-EPOS project.
CIBIS has separate modules to manage:
* Users (which allows permissions to be set for groups of users);
* Episodes (which allows directories to be grouped into episodes regardless of the storage they are assigned to);
* Storages (which allows directories to be physically placed on the server);
* Directories, with an internal structure created on the basis of a data scheme (which allows not only adding files and directories but also copying, downloading, compressing, unpacking, renaming, moving or viewing file contents. All versions of uploaded files are stored in the system and these versions can be managed.);
* Schemes (which are used to create structure trees inside directories and to define metadata rules and values);
* Configurations (which are used to manage validators and converters). Raw data can be easily converted to the chosen format using the available converters, e.g. the ASCII to miniSEED converter. The format of integrated data can be validated using the chosen validators. New converters or validators can also be added to the system;
* Data for publication (which is used to set metadata values for selected directories based on the various schema and rules files defined in the ‘Schemes’ module).
Modules arranged for the SHEER database are:
* Storage (Figure 2),
* Episodes according to the SHEER proposal (Figure 3),
* Schema supporting the accepted data structure and rules for metadata (Figure 4),
* Document repository (Figure 5).
**Data storage from past case studies**
Raw data delivered by data providers are stored in separate directories named:
“buffer + name of the institution providing data”. Complete, verified and
homogenized data are stored in final directories listed in Figure 2.
**Data storage from Wysin site**
Currently, raw data and documents from the WYSIN episode are stored on
SHEERWER, a local server operating at IG PAS, dedicated to the SHEER project
and open to external users (sheerwer.igf.edu.pl). Processed data in
homogenized formats, except waveforms, are available in the LDC.
Waveforms, which are stored in the SeisComp system on SHEERWER, are available
via the Arclink protocol for registered users only (see Chapter 5).
**Fig. 2 Screenshot of CIBIS service of Local Data Centre for SHEER data
collection and management. This service enables to load, delete, copy and
upload data, validate and run conversions of data, set or change source of the
data.**
**Fig. 3 Screenshot of CIBIS service of Local Data Centre for episode data
management.**
**Fig. 4 Screenshot of CIBIS service of Local Data Centre for schemes.
Metadata are created for files and directories using xml format.**
**Fig. 5 Screenshot of CIBIS service of Local Data Centre for document
repository.**
## Data collection
**Past case studies**
The collection and preparation of data from past case studies was carried out
in five stages:
1. **Revision of the data:** The data providers were asked to fill in inquiries in the SHEER database, in order to evaluate the content and the quality of the data for each episode.
2. **Determination of data accuracy and limitations:** This stage comprises: analysis of the SHEER database inquiries; definition of the types of data available and necessary for the database compilation; evaluation of the comprehensiveness and completeness of the datasets.
3. **Data delivery:** The data providers of each episode uploaded data to temporary directories in the LDC (named “buffer + name of the institution providing data”). Each data provider has access only to the buffer directory assigned to its own institution (see Chapter 5).
4. **Homogenization of data formats:** Delivered data were standardized according to .mat and .xlsx formats (see Chapter 3) and placed in the episode data structure of the final directories (Figures 2, 5).
5. **Metadata preparation:** Metadata were prepared following the XML standard format for metadata developed within the IS-EPOS 1 project.
**Present case study (Wysin Site)**
The collection and preparation of air and water quality data from the Wysin
Site for the LDC is carried out continuously according to the following
stages:
1. **Data delivery:** Data providers update raw data files available on SHEERWER with most recent registrations.
2. **Homogenization of data formats:** Delivered data are standardized according to .mat and .xlsx formats (see Chapter 3) and placed in the episode data structure of LDC final directories (Figures 2, 5).
3. **Metadata preparation:** Metadata are prepared following the XML standard format for metadata developed within the IS-EPOS project.
As mentioned above, seismic data are transmitted online in real time from 16
short-period stations to SHEERWER and stored in the SeisComp system.
# Expected schedule for data sharing
All data gathered within the SHEER project are available to all Consortium
members via CIBIS (past case studies) and SHEERWER (Wysin site) in both the
standard .mat and .xlsx formats (see Chapter 3). In the meantime, data are
being prepared for integration on the TCS AH 2 platform developed in the
IS-EPOS and EPOS IP projects. All episodes will be available on the TCS AH
platform by 30/04/2018. Before that date, SHEER episodes will be accessible
only to SHEER project members; access will be restricted on the basis of the
affiliation assigned to the platform user. After 30/04/2018, SHEER episodes
will be accessible to all registered platform users according to a data and
services policy document, which is being prepared during the project. This
document will be based on, and completely consistent with, the data and
services policy developed within the IS-EPOS project and implemented on the
TCS AH platform and in the EPOS IP 3 project.
TCS AH is a research platform which integrates distributed research
infrastructures (RI) to facilitate and stimulate research on anthropogenic
hazards (AH), especially those associated with the exploration and
exploitation of georesources. The innovative element is the uniqueness of the
integrated RI, which comprises two main deliverables:
* Exceptional datasets, called ’episodes’, which comprehensively describe a geophysical process, induced or triggered by human technological activity, posing a hazard to populations, infrastructure and/or the environment;
* Problem-oriented, bespoke services uniquely designed for the discrimination and analysis of correlations between technology, geophysical response and resulting hazard.
These objectives will be achieved through the Science - Industry Synergy built
by WG10 in EPOS PP 4 , ensuring bi-directional information exchange,
including unique and previously unavailable data furnished by industrial
partners. The Episodes and services to be integrated have been selected using
strict criteria during the EPOS PP. The data are related to a wide spectrum of
inducing technologies, with seismic/aseismic deformation and production
history as a minimum dataset requirement, and the quality of the software
services is confirmed and referenced in the literature.
All data from past case studies gathered in the LDC are already available to
all Consortium members via CIBIS. Data uploaded by data providers are
verified, prepared and homogenized within two weeks of the delivery date and
published in the final directory for all project members.
_Availability of data from Wysin site:_
* waveforms: available in real time via the Arclink protocol for registered users (see the sketch after this list),
* processed air and water quality data: data in standard format available via CIBIS within a maximum of two weeks from each data update,
* raw data and documents: available via SHEERWER for all Consortium members.
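A minimal waveform-retrieval sketch follows, assuming an ObsPy release that still bundles the ArcLink client (it has been removed from recent versions); the host, credentials and network/station/channel codes are placeholders, not the actual Wysin configuration.

```python
from obspy import UTCDateTime
from obspy.clients.arclink import Client  # available in older ObsPy releases

# Placeholder host and codes; registered SHEER users would substitute the
# real SHEERWER endpoint and their own credentials.
client = Client(host="sheerwer.igf.edu.pl", port=18001, user="[email protected]")
t = UTCDateTime("2017-06-01T00:00:00")

# Fetch one minute of vertical-component data for one station.
stream = client.get_waveforms("XX", "WYS01", "", "EHZ", t, t + 60)
print(stream)
```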
# Standards for format and content
All format and content standards used in the SHEER project have been adopted
from solutions developed within the IS-EPOS and EPOS-IP projects in order to
preserve the consistency of all data integrated on the TCS AH platform and to
ensure compatibility with already implemented IT solutions.
## Database structure
Verified and homogenized data in LDC are gathered within final episode
directories (Figure 2). Each directory has the same internal structure of
subdirectories (Figure 6):
* **Data relevant for the considered hazards:**
  * Seismic data (e.g. catalogue, signals, seismic / ground motion network);
  * Water quality data (e.g. physicochemical water properties, piezometric levels and abstraction rates);
  * Air quality data (e.g. air properties, air stations);
  * Satellite data;
* **Industrial data** (e.g. drilling data, fracture data);
* **Geodata** (e.g. velocity model).
**Fig. 6 Structure of data for an episode. The green rectangle represents the
episode and the blue ones the directories.**
### Other data
The main aim of the SHEER project is to develop methodologies to assess the
environmental impacts/risks associated with the exploitation and exploration
of shale gas.
The possible impacts of a shale gas project may include _primary impacts_,
associated mainly with environmental issues (such as groundwater, air and
surface water), and _secondary impacts_, related mainly to the _disruption_
caused to the community (including the built environment and society) or to
the ecosystem in the surroundings of a shale gas development project. The
pathways resulting in primary impacts can be considered almost entirely from a
physical point of view, whereas those leading to secondary impacts pertain not
only to the physical domain but also to the socio-economic domain. For this
reason, in parallel with the assessment of effects on physical elements,
secondary impacts can also embrace socio-economic effects.
Therefore, the assessment of secondary impacts requires the collection of
“other” types of data needed to characterize both the vulnerability and the
exposure of a community surrounding a shale gas site.
For example, assessing the potential damage to the built environment due to
the seismicity induced by shale gas exploration and exploitation requires the
typological characterization of the portfolio of buildings and infrastructures
at the site. Building (and infrastructure element) typologies are defined on
the basis of an expected common seismic behaviour, i.e. the probability of
reaching a certain damage state for a given seismic input can vary depending
on construction material, structural configuration and several other
construction details. A convenient way of defining typological seismic
vulnerability is the use of fragility curves, which provide the probability of
exceeding a certain damage state threshold conditional on a selected seismic
input parameter.
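As a minimal numerical sketch (our illustration, not a SHEER or SYNER-G calibration), fragility curves are commonly modelled as lognormal cumulative distribution functions of the seismic input parameter; the median capacity and dispersion values below are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def fragility(im, theta, beta):
    """P(damage >= damage state | IM = im) for a lognormal fragility curve.

    theta is the median capacity (same unit as im) and beta the lognormal
    standard deviation; both values here are illustrative only.
    """
    return norm.cdf(np.log(np.asarray(im) / theta) / beta)

# Probability of exceeding the damage state at a few intensity levels,
# e.g. peak ground acceleration in g (placeholder values).
pga = np.array([0.05, 0.10, 0.20, 0.40])
print(fragility(pga, theta=0.30, beta=0.60))
```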
Fragility curves may be defined for selected building typologies, grouping
together structures which are expected to have a similar seismic behaviour.
When developing a building classification, the choice is between usability and
accuracy. An overly detailed subdivision may lead to very specific results but
may be impractical both in use (i.e. requiring much information to assign a
building class) and in derivation (many curves to develop). On the other hand,
an overly simplistic classification may group together buildings with
completely different seismic behaviour. Geometry, material properties,
morphological features, age, seismic design level, anchorage of the equipment,
soil conditions and foundation details are among the usual typology
descriptors/parameters and represent the most important factors affecting the
seismic response of a building.
The knowledge of the inventory of specific structures in a region and the
capability to create building classes are among the main challenges when
carrying out a seismic risk assessment at city scale, where it is practically
impossible to perform the assessment at the level of individual buildings.
Thus, the derivation of appropriate fragility curves for any type of structure
depends entirely on the creation of a reasonable taxonomy that is able to
classify the different kinds of structures in any system exposed to seismic
hazard. Different taxonomies have been developed in past research projects in
Europe (e.g. the RISK-UE and LESSLOSS EU projects) and the USA (e.g. HAZUS or
ALA); these have been reviewed and updated in the SYNER-G project in order to
develop a unique typology for all elements at risk. In SYNER-G a great effort
was made to create a coherent and comprehensive taxonomy from which European
typologies for the most important elements at risk are defined (Pitilakis et
al., 2014). The main categories of the classification scheme proposed for
buildings are: force resisting mechanism (FRM), force resisting mechanism
material (FRMM), plan regularity (P), elevation regularity (E), cladding (C),
detailing (D), floor system (FS), roof system (RS), height level (H) and code
level (CL), as summarized in the following table.
**Table 3 Data inventory for the vulnerability characterization of buildings
of a specific site**
<table>
<tr>
<th>
**Buildings (SYNER-G taxonomy)**
</th>
<th>
**Available (yes/no)**
</th> </tr>
<tr>
<td>
Localization
</td>
<td>
Write a file format, e.g. GIS. Select yes or no.
</td> </tr>
<tr>
<td>
Age
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Type of construction
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Force resisting mechanism
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Force resisting mechanism material
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Plan Regularity
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Elevation Regularity
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Cladding
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Detailing
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Floor system
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Roof system
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Height Level
</td>
<td>
Select yes or no
</td> </tr>
<tr>
<td>
Code level
</td>
<td>
Select yes or no
</td> </tr> </table>
Up to now, such information has not been available for the case studies of the
SHEER project; therefore, standards and formats have not been specified for
this kind of data. However, should such information become available for the
case studies, specific information regarding the data inventory, formats and
standards will be provided in the case study deliverables (WP7), and specific
folders will be added to the platform for the dissemination of the data.
## Data formats
There are seven categories of data in the LDC with respect to their standard
format (Table 4).
**Table 4 Standard formats of different data categories.**
<table>
<tr>
<th>
**Data category**
</th>
<th>
**Standard format**
</th> </tr>
<tr>
<td>
Seismic / ground motion catalogue
</td>
<td>
mat
</td> </tr>
<tr>
<td>
Seismic signals
(seismogram / accelerogram)
</td>
<td>
miniSEED / SEED
</td> </tr>
<tr>
<td>
Seismic / ground motion network
</td>
<td>
SeisComp inventory xml
</td> </tr>
<tr>
<td>
Water quality, air quality, industrial data and geodata*
</td>
<td>
GDF (mat), xlsx
</td> </tr>
<tr>
<td>
Other geodata
</td>
<td>
geotiff / shapefile
</td> </tr>
<tr>
<td>
Satellite data
</td>
<td>
proposed: GDF, shapefile, geotiff
</td> </tr>
<tr>
<td>
Documents
</td>
<td>
pdf / graphic and video formats / presentation formats / other formats
</td> </tr> </table>
* Water quality, air quality, industrial data and geodata are stored in the Local Data Centre in two formats: Generic Data Format, GDF (a mat structure), and xlsx. The structure of a GDF file depends on the data type and its complexity. Xlsx files are created automatically from GDF files and stored in zip packages with the same names as the corresponding GDF files.
### Seismic / ground motion catalogue
Seismic and ground motion catalogues are stored in the LDC in the standard
.mat format. A catalogue .mat file contains only one variable, with no
predefined name. The variable is a structure array with three defined fields:
* **field** – name of the field in the catalogue (text value); 123 predefined values are available (Table 5, Appendix 1);
* **type** – type of the field in the catalogue and the way of displaying it (numeric value, Appendix 1);
* **val** – column array of values. For text fields the column is a cell array of text values; for other fields it is a numeric column.
If some values of catalogue fields are not calculated, they are filled with
NaN (if the column should contain a numeric value) or null [] (if the column
should contain a text value). The obligatory catalogue fields are "ID",
"Time" and "Mw" or "ML".
**Table 5 Names of the first 15 seismic catalogue fields with a short
description. The rest of the list, together with the list of ground motion
catalogue fields, is available in Appendix 1.**
<table>
<tr>
<th>
**Name of**
**field**
</th>
<th>
**Description of the field**
</th>
<th>
**Data format**
</th>
<th>
**Number**
**of data type** 3
</th>
<th>
**Unit**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
**ID**
</td>
<td>
Event ID
</td>
<td>
text
</td>
<td>
3
</td>
<td>
</td>
<td>
required field
</td> </tr>
<tr>
<td>
**Time**
</td>
<td>
Matlab serial numerical time
</td>
<td>
double
</td>
<td>
5
</td>
<td>
days
</td>
<td>
required field
</td> </tr>
<tr>
<td>
**Lat**
</td>
<td>
Latitude
</td>
<td>
double
</td>
<td>
24,25
</td>
<td>
[o] – North positive
</td>
<td>
</td> </tr>
<tr>
<td>
**Long**
</td>
<td>
Longitude
</td>
<td>
double
</td>
<td>
24,25,34,
35
</td>
<td>
[o] – East positive
</td>
<td>
</td> </tr>
<tr>
<td>
**Depth**
</td>
<td>
Hypocentre depth measured from the ground level
</td>
<td>
double
</td>
<td>
11-13
</td>
<td>
[km]
</td>
<td>
</td> </tr>
<tr>
<td>
**Elevation**
</td>
<td>
Hypocentre elevation measured over the see
level
</td>
<td>
double
</td>
<td>
10
</td>
<td>
[m]
</td>
<td>
</td> </tr>
<tr>
<td>
**X**
</td>
<td>
Original Coordinate
</td>
<td>
</td>
<td>
10
</td>
<td>
</td>
<td>
Original coordinates if other than geographical.
</td> </tr>
<tr>
<td>
**Y**
</td>
<td>
</td>
<td>
10
</td>
<td>
</td> </tr>
<tr>
<td>
**Z**
</td>
<td>
</td>
<td>
10
</td>
<td>
</td> </tr>
<tr>
<td>
**EPI_err**
</td>
<td>
epicentral error
</td>
<td>
double
</td>
<td>
10
</td>
<td>
[m]
</td>
<td>
</td> </tr>
<tr>
<td>
**Depth_err**
</td>
<td>
depth error
</td>
<td>
</td>
<td>
10
</td>
<td>
[m]
</td>
<td>
</td> </tr>
<tr>
<td>
**Nl**
</td>
<td>
No of stations used in the localisation
</td>
<td>
</td>
<td>
2
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**M0**
</td>
<td>
Scalar moment
</td>
<td>
</td>
<td>
7
</td>
<td>
[Nm]
</td>
<td>
</td> </tr>
<tr>
<td>
**Mw**
</td>
<td>
moment magnitude
</td>
<td>
double
</td>
<td>
4
</td>
<td>
</td>
<td>
Mw or ML is a required field
</td> </tr>
<tr>
<td>
**ML**
</td>
<td>
local magnitude
</td>
<td>
double
</td>
<td>
4
</td>
<td>
</td>
<td>
Mw or ML is a required field
</td> </tr> </table>
### Seismic signals
There are two standard formats allowed for seismic signal storage in the SHEER
database:
* SEED format: used to store triggered seismic signals in the LDC;
* miniSEED format: used to store seismic signals from continuous registration in the SeisComp structure. It is also possible to receive signals in SEED format if the Arclink protocol is used for data transfer.
The Standard for the Exchange of Earthquake Data (SEED) is a data format
intended primarily for the archival and exchange of seismological time series
data and related metadata. The format is maintained by the _International
Federation of Digital Seismograph Networks_ and documented in the SEED Manual
(PDF format). Originally designed in the late 1980s, the format has been
enhanced and refined a number of times and remains in widespread use. A
so-called full SEED volume combines time series values with comprehensive
metadata; in essence, it is the combination of miniSEED with a matching
dataless volume in a single file.
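A minimal reading sketch with ObsPy is shown below; the file name is hypothetical.

```python
from obspy import read

# Read a miniSEED (or full SEED) volume; the file name is hypothetical.
stream = read("wysin_example.mseed")
print(stream)  # one Trace per channel / continuous segment

trace = stream[0]
print(trace.stats.network, trace.stats.station, trace.stats.sampling_rate)
```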
### Seismic / ground motion network
Seismic and ground motion networks are saved in standard SeisComp inventory
xml files. SeisComP is seismological software for data acquisition,
processing, distribution and interactive analysis, developed by the GEOFON
Program at the Helmholtz Centre Potsdam, GFZ German Research Centre for
Geosciences, and gempa GmbH.
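A minimal parsing sketch, assuming ObsPy's reader for the SeisComp (SC3ML) inventory flavour and a hypothetical file name:

```python
from obspy import read_inventory

# Hypothetical file name; format="SC3ML" selects ObsPy's SeisComp reader.
inventory = read_inventory("inventory.xml", format="SC3ML")

# Walk the network -> station hierarchy and list station coordinates.
for network in inventory:
    for station in network:
        print(network.code, station.code, station.latitude, station.longitude)
```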
### GDF
The GDF format is universal and easy to use. It contains information about the
coordinate system in which the data are stored, the time zone in which time is
expressed, and information about the stored data, such as units, data types,
and names of variables with descriptions. The format is based on data
structures that can easily be saved to a file and manipulated. The structure
contains nine variables, of which d is the most essential one, because it
contains the data that can be further processed. The other variables are used
for a complete description of the data – units, coordinate system, fields,
etc. A general description of GDF is provided in Table 6.
**Table 6 Main structure of Generic Data Format (GDF).**
<table>
<tr>
<th>
**Variable name**
</th>
<th>
**Type**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**FormatName**
</td>
<td>
char
</td>
<td>
Name of data format GDF (Generic Data Format).
</td> </tr>
<tr>
<td>
**FormatVersion**
</td>
<td>
real
</td>
<td>
The version is incremented when the format is changed or extended. It can have
one digit after the decimal point.
</td> </tr>
<tr>
<td>
**CRS**
</td>
<td>
char
</td>
<td>
Coordinate Reference System: EPSG code (or a local mapping/surveying code, see
http://epsg.io); the standard is WGS84 (EPSG: 4326)
</td> </tr>
<tr>
<td>
**TimeZone**
</td>
<td>
char
</td>
<td>
Acronym of Time Zone
(http://en.wikipedia.org/wiki/List_of_time_zone_abbreviations), normally UTC
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
char
</td>
<td>
The text description of the data contained in the file
</td> </tr>
<tr>
<td>
**FieldDescription**
</td>
<td>
cell array
</td>
<td>
Description of the fields. An array contains two columns: the first contains
the name of the field/column of data, the second contains a description of
them. All data must be specified
</td> </tr>
<tr>
<td>
**FieldUnit**
</td>
<td>
cell array of char
</td>
<td>
Description of units for individual data, e.g. m/s. An array contains two
columns: the first contains the name of the field/column of data, the second
contains the unit. All data must be specified.
</td> </tr>
<tr>
<td>
**FieldType**
</td>
<td>
cell array of char
</td>
<td>
Description of data type, e.g. real. An array contains two columns: the first
contains the name of the field/column of data, the second contains the data
type description. All data must be specified. Allowed types:
time – Matlab serial numerical time;
deg – angle in degrees, as for geographical coordinates, positive values for N
and E;
int – integer number;
real – real number;
char – variable/parameter name
</td> </tr>
<tr>
<td>
**d**
</td>
<td>
struct array or cell array
</td>
<td>
The variable containing the data. The data may be as a single variable, a
vector or an array.
</td> </tr> </table>
GDF files can be easily opened with both Matlab and the open source Octave
software and converted to ASCII (CSV) format with a homogeneous structure.
The following data types are currently available in LDC in the form of GDF
files:
* hydrochemical data – site visit
* hydrochemical data – continuous measurement
* hydrochemical data 2 – site visit
* barometric data
* well path
* station network
* water table
* air quality
* radon 222 concentration
* proppant concentration
* injection rate
* cumulative injection
* wellhead pressure
* bottomhole pressure
* flowback bottomhole pressure
* flowback rate
* flowback volume
* injected volume
* velocity model
Detailed structures of all GDF files listed above are presented in Appendix 2.
### Xlsx files
Xlsx files are created automatically from GDF files and stored in zip packages
with the same names as the original GDFs. Xlsx files are created in order to
make the data available to people who do not normally work with Matlab or
Octave software. Each zip package contains at least two xlsx files:
* info file, where each worksheet presents information from a different GDF variable except variable d. There is always one info file in each zip package.
* data file, where all data from the GDF d structure is stored. Depending on the complexity of the d variable, each zip package can include one or more xlsx data files.
Date and time in xlsx files are stored as numeric values according to Excel
rules and can easily be converted to a time format.
### Geotiff / shapefile
Geodata, which is frequently available in the form of maps and therefore
cannot be converted into GDF format, is stored as georeferenced data in
geotiff (raster) or shapefile (vector) formats.
### Satellite data
Satellite data will be available in LDC in three formats: GDF, shapefile and
geotiff.
### Documents
Documents, which are stored in Document Repository of LDC, are provided in
various data formats depending on their contents: pdf, graphic formats, video
formats, presentation formats or other formats.
## Metadata
### Episodes, directories and files
SHEER database metadata are prepared according to the guidelines of the IS-
EPOS project. Various metadata fields are required depending on the object
type (episode, directory or file). Metadata values are inherited down through
the structure of data in the episode. For example, it is enough to set the
value of the metadata field ‘episode name’ for an episode; all directories and
files belonging to this episode will then have the same value of the field
‘episode name’ as the episode. The same rule applies to directories. An
inherited metadata value can, of course, be overridden if needed. The metadata
set for each file is saved in LDC in xml format. Detailed descriptions of the
metadata are provided separately for each object type: episode, directory and
file (Tables 7-9). In addition to the metadata presented in the tables, the
metadata field ‘date’ is always published. All dataTypes available within the
SHEER project are presented in Table 10.
**Table 7 Detailed description of episode metadata.**
<table>
<tr>
<th>
**Metadata Name**
</th>
<th>
**Description and option**
</th>
<th>
**Required metadata**
</th> </tr>
<tr>
<td>
**episodeName**
</td>
<td>
Name of episode
</td>
<td>
x
</td> </tr>
<tr>
<td>
**episodeCode**
</td>
<td>
Code of episode
</td>
<td>
x
</td> </tr>
<tr>
<td>
**path**
</td>
<td>
Episode path
</td>
<td>
x
</td> </tr>
<tr>
<td>
**itemType**
</td>
<td>
Type of object = ‘episode’
</td>
<td>
x
</td> </tr>
<tr>
<td>
**episodeOwner**
</td>
<td>
Owner of episode
list: ‘IG PAS’, ‘KeU’, ‘KNMI’, ‘AMRA’
</td>
<td>
x
</td> </tr>
<tr>
<td>
**description**
</td>
<td>
Long episode content description
</td>
<td>
</td> </tr>
<tr>
<td>
**text**
</td>
<td>
Short episode content description
</td>
<td>
x
</td> </tr>
<tr>
<td>
**country**
</td>
<td>
Episode localization (country) list: ‘Poland’, ‘United Kingdom’,
‘Netherlands’, ‘Germany’, ‘USA/California’
</td>
<td>
x
</td> </tr>
<tr>
<td>
**region**
</td>
<td>
Episode localization (region) list: ‘Pomerania’, ‘Lancashire’, ‘Beckingham’,
‘Groeningen’, ‘Gross Schoenebeck’, ‘Geysers’
</td>
<td>
</td> </tr>
<tr>
<td>
**positionType**
</td>
<td>
Type of positioning of the episode list: ‘point’, ‘polygon’
</td>
<td>
x
</td> </tr>
<tr>
<td>
**coordinateSystem**
</td>
<td>
Coordinate system of positioning of the episode (e.g. ‘WGS-84’)
</td>
<td>
x
</td> </tr>
<tr>
<td>
**longitude**
</td>
<td>
Episode localization (longitude)
</td>
<td>
x
</td> </tr>
<tr>
<td>
**latitude**
</td>
<td>
Episode localization (latitude)
</td>
<td>
x
</td> </tr>
<tr>
<td>
**start**
</td>
<td>
Start time of episode
</td>
<td>
</td> </tr>
<tr>
<td>
**end**
</td>
<td>
End time of episode
</td>
<td>
</td> </tr>
<tr>
<td>
**impactingFactor**
</td>
<td>
Technology impacting the environment – one of: conventional hydrocarbon
extraction, unconventional hydrocarbon extraction, geothermal energy
production
</td>
<td>
x
</td> </tr>
<tr>
<td>
**allowedDownload**
</td>
<td>
Permission to download the data
list: ‘SHEER’, ‘EPOS-IP’,’SHEER, EPOS-IP’, ‘all’, ‘affiliated’
</td>
<td>
x
</td> </tr>
<tr>
<td>
**allowedVisibility**
</td>
<td>
Permission to see the data
list: ‘SHEER’, ‘EPOS-IP’,’SHEER, EPOS-IP’, ‘all’, ‘affiliated’
</td>
<td>
x
</td> </tr>
<tr>
<td>
**dataPolicy**
</td>
<td>
Data policy (e.g. ‘embargo_20180431’)
</td>
<td>
x
</td> </tr> </table>
**Table 8 Detailed description of directory metadata.**
<table>
<tr>
<th>
**Metadata Name**
</th>
<th>
**Description and option**
</th>
<th>
**Required metadata**
</th> </tr>
<tr>
<td>
**name**
</td>
<td>
Name of directory
</td>
<td>
x
</td> </tr>
<tr>
<td>
**path**
</td>
<td>
Directory path
</td>
<td>
x
</td> </tr>
<tr>
<td>
**itemType**
</td>
<td>
Type of object = ‘directory’
</td>
<td>
x
</td> </tr>
<tr>
<td>
**text**
</td>
<td>
Short directory content description
</td>
<td>
</td> </tr>
<tr>
<td>
**description**
</td>
<td>
Long directory content description
</td>
<td>
</td> </tr>
<tr>
<td>
**type**
</td>
<td>
Type of data section
list: ‘data relevant for the considered hazards’, ‘seismic’, ‘water
quality’, ‘air quality’, ‘satellite’, ‘industrial’, ‘geodata’
</td>
<td>
x
</td> </tr>
<tr>
<td>
**region**
</td>
<td>
Directory localization (region) list: ‘Pomerania’, ‘Lancashire’, ‘Beckingham’,
‘Groeningen’, ‘Gross Schoenebeck’, ‘Geysers’
</td>
<td>
</td> </tr>
<tr>
<td>
**start**
</td>
<td>
Start time of directory
</td>
<td>
</td> </tr>
<tr>
<td>
**end**
</td>
<td>
End time of directory
</td>
<td>
</td> </tr>
<tr>
<td>
**dataType**
</td>
<td>
Type of data
</td>
<td>
</td> </tr>
<tr>
<td>
**coordinateSystem**
</td>
<td>
Coordinate system of positioning of the directory (e.g. ‘WGS84’)
</td>
<td>
</td> </tr> </table>
**Table 9 Detailed description of file metadata.**
<table>
<tr>
<th>
**Metadata Name**
</th>
<th>
**Description and option**
</th>
<th>
**Required metadata**
</th> </tr>
<tr>
<td>
**name**
</td>
<td>
Name of file
</td>
<td>
x
</td> </tr>
<tr>
<td>
**path**
</td>
<td>
File path
</td>
<td>
x
</td> </tr>
<tr>
<td>
**itemType**
</td>
<td>
Type of object = ‘file’
</td>
<td>
x
</td> </tr>
<tr>
<td>
**text**
</td>
<td>
Short file content description
</td>
<td>
</td> </tr>
<tr>
<td>
**description**
</td>
<td>
Long file content description
</td>
<td>
</td> </tr>
<tr>
<td>
**dataType**
</td>
<td>
Type of data
</td>
<td>
x
</td> </tr>
<tr>
<td>
**start**
</td>
<td>
Start time of data
</td>
<td>
</td> </tr>
<tr>
<td>
**end**
</td>
<td>
End time of data
</td>
<td>
</td> </tr>
<tr>
<td>
**eventID**
</td>
<td>
ID number of seismic event
</td>
<td>
</td> </tr>
<tr>
<td>
**region**
</td>
<td>
File localization (region) list: ‘Pomerania’, ‘Lancashire’, ‘Beckingham’,
‘Groeningen’, ‘Gross Schoenebeck’, ‘Geysers’
</td>
<td>
</td> </tr>
<tr>
<td>
**coordinateSystem**
</td>
<td>
Coordinate system of positioning of the file (e.g. ‘WGS-84’)
</td>
<td>
</td> </tr>
<tr>
<td>
**auxiliary**
</td>
<td>
Auxiliary type of data: if auxiliary then = ‘1’
</td>
<td>
</td> </tr>
<tr>
<td>
**allowedVisibility**
</td>
<td>
Permission to see the data: list: ‘SHEER’, ‘EPOS-IP’, ‘SHEER, EPOS-IP’, ‘all’,
‘affiliated’
</td>
<td>
</td> </tr> </table>
**Table 10 List of dataTypes available within SHEER project.**
<table>
<tr>
<th colspan="4">
**dataType name**
</th> </tr>
<tr>
<td>
**air measurement points**
</td>
<td>
flowback rate
</td>
<td>
physicochemical water properties
</td>
<td>
stratigraphy
</td> </tr>
<tr>
<td>
**barometric data continuous**
</td>
<td>
geotiff
</td>
<td>
production parameters
</td>
<td>
tectonics
</td> </tr>
<tr>
<td>
**barometric measurement points**
</td>
<td>
ground motion catalogue
</td>
<td>
proppant inf
</td>
<td>
underground water level
</td> </tr>
<tr>
<td>
**bottomhole pressure**
</td>
<td>
ground motion network
</td>
<td>
radon 222 content
</td>
<td>
velocity model
</td> </tr>
<tr>
<td>
**catalogue**
</td>
<td>
hydro borehole path
</td>
<td>
ray tracing angles
</td>
<td>
water lab analyses
</td> </tr>
<tr>
<td>
**chemical air properties**
</td>
<td>
injection parameters
</td>
<td>
rock parameters
</td>
<td>
water level
</td> </tr>
<tr>
<td>
**chemical water properties**
</td>
<td>
injection pressure
</td>
<td>
seismic network
</td>
<td>
water measurement points
</td> </tr>
<tr>
<td>
**episode image**
</td>
<td>
injection rate
</td>
<td>
seismic profile
</td>
<td>
waveform
</td> </tr>
<tr>
<td>
**episode logo**
</td>
<td>
injection volume
</td>
<td>
shear modulus
</td>
<td>
well path
</td> </tr>
<tr>
<td>
**episode xml**
</td>
<td>
physical air properties
</td>
<td>
shp file
</td>
<td>
well position
</td> </tr>
<tr>
<td>
**flowback bottomhole pressure**
</td>
<td>
physical water properties
</td>
<td>
signal
</td>
<td>
wellhead pressure
</td> </tr>
<tr>
<td>
**flowback volume**
</td>
<td>
physical water properties continuous
</td>
<td>
signal accelerogram
</td>
<td>
</td> </tr> </table>
### Documents
All documents stored in the document repository have to be described with
metadata. Each document type is described with a different set of metadata.
Currently, two types of documents are defined: articles and reports. As the
project develops and new needs appear, more document types will be added. A
detailed description of the metadata set for each document type is presented
in Table 11.
**Table 11 Metadata sets for articles and reports stored in document
repository.**
<table>
<tr>
<th rowspan="2">
Metadata field
</th>
<th rowspan="2">
Metadata field description
</th>
<th colspan="2">
Required metadata
</th> </tr>
<tr>
<th>
Article
</th>
<th>
Report
</th> </tr>
<tr>
<td>
Abstract
</td>
<td>
</td>
<td>
x
</td>
<td>
</td> </tr>
<tr>
<td>
Additional information
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Corporate creators
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Creators
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Creators e-mail
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Creators Family Name
</td>
<td>
</td>
<td>
x
</td>
<td>
x
</td> </tr>
<tr>
<td>
Creators Given Name/Initials
</td>
<td>
</td>
<td>
x
</td>
<td>
x
</td> </tr>
<tr>
<td>
Creators Name
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Date of document
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
DOI
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
id
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Institution
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
ISSN
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Journal or Publication Title
</td>
<td>
</td>
<td>
x
</td>
<td>
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Number
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Owner of document
</td>
<td>
</td>
<td>
x
</td>
<td>
x
</td> </tr>
<tr>
<td>
Page range
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Place of publication
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Publication Details
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Publisher
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Title
</td>
<td>
</td>
<td>
x
</td>
<td>
x
</td> </tr>
<tr>
<td>
Type
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
URL
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Volume
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
# Policies for stewardship and preservation
All data gathered within the SHEER database are stored on the servers of the
Local Data Centre (LDC), which belong to IG PAS. The software used for data
storage is CIBIS, which was developed within the IS-EPOS project. It is
software dedicated to the storage and management of large datasets. Two major
advantages of CIBIS are: (1) it enables ordered storage and management of all
uploaded data versions together with detailed information about each version,
and (2) it offers a wide range of authorities which can be used to easily
manage different user profiles. The second point is discussed in detail in
Chapter 5.
Data backups are stored at the CYFRONET Computer Centre of the AGH University
of Science and Technology in Krakow and are updated every day. Only files are
backed up. Raw data from data providers are verified and homogenized by
Episode Adaptation Centre (EAC) staff working in IG PAS. For data quality
control of the SHEER database, a task management system based on the open-
source Redmine application is used. The following path of data gathering,
verification, homogenization and quality control for the SHEER database has
been established in EAC (Figure 7):
* Data upload: Raw data are uploaded to the proper ‘buffer’ subdirectory by the data provider. Each data provider has access to a dedicated buffer directory in CIBIS. An ‘Administrator’ for the episode and its ‘Control group’ in EAC are assigned.
* QC1: Administrator sets episode as a new task in SHEER database task management system. The Control group roles are distributed and the workflow Observer is appointed (20%).
* Data conversion and validation: Data are verified, converted (if needed) and homogenized by people assigned by Administrator.
* QC2: The completeness and quality of prepared data are checked (50% of workflow).
* Data transfer, metadata preparation and publication: Correct data are placed in final directories (Figure 3) according to episode data structure (Figure 6). Now data are visible to all SHEER database users who have permission to see this episode. All data files are described with sets of metadata prepared according to rules described in Chapter 3\. Episode can be published to TCS AH platform if needed. Metadata are also visible for SHEER database users.
* QC3: Administrator checks metadata sets and accepts episode as correct (100%).
**Fig. 7 Episode Quality Control Workflow.**
# Procedures to provide access
## During the project
All data gathered within SHEER are currently available to project members via
CIBIS or SHEERWER. As mentioned earlier (see Chapter 2), during the project
data will be systematically uploaded to the TCS AH platform, where they will
also be available only to affiliated SHEER members.
This will enable users to analyse gathered data using services already
implemented on TCS.
### SHEER database
The SHEER database is protected by the IG PAS firewall. However, in order to
provide access to LDC for SHEER project members, the firewall needed to be
opened for the external IP addresses used by the institutions that form the
SHEER Consortium.
Access to LDC has been provided to each SHEER project member by EAC staff
through the creation of a personal CIBIS account with the relevant
authorities. Two classes of users have been defined for each institution:
* Data downloader: a user in this class can read and download data from all episodes (final directories – Figure 2). Additionally, the user can browse the document repository and download all published documents. The user can read databases in the ‘Data for publication’ module, where all metadata assigned to episodes are visible (Figures 8, 10).
* Data provider: a user in this class has the same authorities as a data downloader and additionally has access to the buffer directory of their institution, where they can upload and manage raw data. A data provider can also read and run the ‘Configurations’ module, where they can validate and convert raw data (Figures 9, 11).
In each institution only 1-2 persons have the authorities of a Data provider.
The remaining project members are assigned to the Data downloader group in
order to avoid database disorder.
EAC staff, who are responsible for data verification, validation, conversion
and homogenization, are assigned to the **Data manager** group. Members of
this group have all authorities available in CIBIS (Figure 12).
**Fig. 8 Authorities of users in Data downloader class (CIBIS: Admin Panel).**
**Fig. 9 Authorities of users in Data provider class (CIBIS: Admin Panel).**
**Fig. 10 The view of SHEER database for user from Data downloader class.**
**Fig. 11 The view of SHEER database for user from Data provider class.**
**Fig. 12 The view of SHEER database for user from Data manager class.**
### SHEERWER
Each institution which is part of the SHEER Consortium has unlimited access
to SHEERWER via a dedicated account. Data available on SHEERWER can be browsed
and downloaded via an internet browser interface (Figure 13).
The Arclink protocol can be used specifically for seismic data download. Using
Arclink it is possible to download seismic data for any predefined time
period and station group in miniSEED or SEED format. For every user interested
in continuous seismic data, a separate login and password have been prepared
in order to safely download data from SeisComp.
Detailed instructions for using the Arclink protocol for seismic data download
are available in Appendix 3.
**Fig. 13 The view of SHEERWER for every user.**
## After the project
After the end of the project all SHEER episodes will be integrated on the TCS
AH platform and will become available to all registered platform users. Data
quality control, maintenance and safety will therefore be fully provided by
TCS. The rules for funding TCS maintenance and operational costs will be
prepared in cooperation with EPOS-ERIC.
# FurnIT-SAVER project introduction
The traditional nature of the furniture industry and the limited incorporation
of ICT tools have reduced the ability of SMEs in the sector to innovate and
respond to the competition coming from larger companies. These specialised
furniture shops and small furniture manufacturers have been unable to compete
with the economies of scale advantages that larger furniture retailers can
offer.
On the other hand, smaller furniture companies can offer higher levels of
personalization and quality in customized goods that truly meet customers'
preferences and needs, which represents a potential competitive advantage over
larger furniture providers. Nevertheless, as it is impossible to envisage how
the furniture will look and fit into the customer's home, customised furniture
also bears an expensive risk if the final piece of furniture does not meet the
customer's needs or does not complement other furniture. Furthermore, these
customised services are predominantly provided on a face-to-face basis in
local and fragmented markets, which prevents small manufacturers from
benefiting from ecommerce growth and limits their international reach.
The FURNIT-SAVER project makes use of innovative ICT solutions based on a
combination of Virtual and Augmented Reality (VR/AR) technologies,
recommendation engines and ecommerce solutions, to produce a smart marketplace
for furniture customisation. Customers will be able to select from an
extensive catalogue of furniture and properties and virtually try the selected
pieces in their rooms in three very simple steps: (1) creating an accurate
3D virtual representation of their place, (2) trying furniture of different
manufacturers in this virtual scenario and getting recommendations, according
to their preferences, from a wide range of properties and pieces, and (3)
visualizing the fit of the chosen products in their place using augmented
reality.
# Scope of the document
FurnIT-SAVER project is participating in the Horizon2020 Open Research Data
Pilot. As such, this Data Management Plan is produced to provide an analysis
of the main elements of the data management policy that will be used by the
partners with regard to all the datasets that will be generated or collected
by the project. This analysis includes an identification of the type of data
the project will generate or collect (type and purpose) as well as an outline
of how this data will be handled during the lifespan of the project and after
it is completed. This will have to be done without compromising any
Intellectual Property Rights (IPR) and commercial plans of the participants.
This document will be updated during the project in order to clearly identify
the data that will be shared, the channels through which this data will be
made available to third parties and the access regimes that are foreseen.
This document has been created following the _Guidelines on Data Management in
Horizon 2020_ issued by the DG Research and Innovation of the European
Commission (version 2.0 from October 30th 2015) and with the support of online
tools such as the DMP online web from the Digital Curation Centre in UK
(http://dmponline.dcc.ac.uk).
# Type of data the project generates/collects
The work detailed in the proposal can be anticipated to produce or collect
three broad categories of data: subjective test data, computer software and
digital models. The subjective test category includes analyzed data from
market survey carried out for user requirements definition (WP1) and feedback
forms and video recordings from beta testers during the validation phase
(WP4). The computer software category consists of mobile and web applications
and services including the different modules of the FurnIT-SAVER platform (WP2
and WP3). The digital models category includes the digital furniture pieces
provided by the partners and other stakeholders in order to populate the
platform with real available furniture products (WP3 and WP4).
The following table details the type of data generated or collected during the
project, its type and estimated expected size:
<table>
<tr>
<th>
**Project phase (WP)**
</th>
<th>
**Specification of type of research data**
</th>
<th>
**Software choice**
</th>
<th>
**Indicative data size**
</th> </tr>
<tr>
<td>
User Requirements definition (WP1)
</td>
<td>
Online anonymous survey to potential users
</td>
<td>
Word/Excel/Acrobat
</td>
<td>
10MB
</td> </tr>
<tr>
<td>
Video files for functional simulation
</td>
<td>
Webex/Powtoon
</td>
<td>
20MB
</td> </tr>
<tr>
<td>
System Development,
Integration and Testing (WP3)
</td>
<td>
FurnIT-SAVER platform
</td>
<td>
Various programming languages
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
System Validation (WP4)
</td>
<td>
Anonymized user information and preferences
</td>
<td>
Web
</td>
<td>
10MB
</td> </tr>
<tr>
<td>
2D/3D furniture models
</td>
<td>
2D/3D modelling software
</td>
<td>
1GB
</td> </tr>
<tr>
<td>
Semi-structured interviews and Focus groups
</td>
<td>
Word/Video
</td>
<td>
2GB
</td> </tr>
<tr>
<td>
Project management and dissemination (WP5,WP6)
</td>
<td>
Deliverables and other public documentation
</td>
<td>
Word/Acrobat
</td>
<td>
50MB
</td> </tr>
<tr>
<td>
High quality project video
</td>
<td>
Multimedia software
</td>
<td>
250-500MB
</td> </tr> </table>
The research objectives of the project require qualitative data that are not
available from other sources. Some data exist that can be used to situate the
findings of the proposed research and that will supplement data collected as
part of the proposed research. Nevertheless, in their current form, they would
not permit the research questions to be properly addressed. Therefore,
additional activities are organised in the relevant work packages to collect
the required data. These activities include the organisation of online
surveys, semi-structured interviews with individuals and focus groups.
* Online surveys: Close to 200 people participated in an online survey to collect feedback about the project concept and functional requirements. This information has been included as part of D1.1.
* Semi-structured interviews with individuals: The consortium anticipates undertaking 25-50 semi-structured interviews in Spain and Slovenia with individual users and furniture experts. Data will be collected and stored using digital audio/video recording (e.g. MP3) where the interviewees permit. In case they do not, interviews will be undertaken in pairs to enable detailed note-taking. Interview notes will be typed up according to agreed formats and standards.
* Focus group discussions matched to profiles: The sample frame for the focus group participants will be derived from public data such as market studies and qualitative data from the project (i.e. online surveys). The final number of focus groups will depend on geographical and other variations in patterns, how quickly a robust pattern of findings emerges, and the scope for identifying and convening the appropriate groups. Whether recorded or not, the event will be transcribed or documented using agreed formats and standards for handling the issues of multiple voices, interruptions, labelling of participatory and visual activities, and so on.
# Roles and use of the data
The following table shows who is responsible for collecting each type of data
and who is using or analysing it.
<table>
<tr>
<th>
**Type of research data**
</th>
<th>
**Who is providing the data**
</th>
<th>
**Who is using/analysing the data**
</th> </tr>
<tr>
<td>
Online surveys
</td>
<td>
All partners in the project will be involved in the organisation of online
surveys for user requirements definition
</td>
<td>
CENFIM to lead the user requirements definition. WIC and the pilot
coordinators to shape their business cases and pilot scenarios. ACS and
Eurecat as a feedback for the platform definition.
</td> </tr>
<tr>
<td>
Video files for functional simulation
</td>
<td>
CENFIM and Eurecat will elaborate a set of materials to simulate the
functioning of the platform.
</td>
<td>
These resources will be used by all partners to support surveys and interviews.
</td> </tr>
<tr>
<td>
FurnIT-SAVER platform
</td>
<td>
ACS and Eurecat are in charge of the platform development (WP3)
</td>
<td>
The platform will be used by pilot coordinators to validate the project
concept and business hypothesis.
</td> </tr>
<tr>
<td>
Anonymized user information and preferences
</td>
<td>
ACS will elaborate a user quiz to be filled in by users of the platform,
following the industrial partners' guidance.
</td>
<td>
This information will be mainly used by Eurecat for the implementation of the
recommender.
</td> </tr>
<tr>
<td>
2D/3D furniture models
</td>
<td>
GONZAGA, WWING, CENFIM, WIC
will gather this data from the furniture manufacturers.
</td>
<td>
These data represent the main asset of the platform and will be used by ACS
and Eurecat in the different modules of the platform.
</td> </tr>
<tr>
<td>
Semi-structured interviews and Focus groups
</td>
<td>
The pilot coordinators will gather this information: WWING, CENFIM, GONZAGA,
WIC.
</td>
<td>
This information will be analysed in order to compare the platform functions
against the user requirements and validate business hypothesis.
</td> </tr>
<tr>
<td>
Deliverables and other public documentation
</td>
<td>
All partners are involved in the production of such data
</td>
<td>
This data will be used as evidence of the work done and effort invested as
well as for dissemination
</td> </tr>
<tr>
<td>
High quality project video
</td>
<td>
CENFIM will lead the elaboration of a project video
</td>
<td>
This video will be used as a representation of the work done in validation and
for dissemination.
</td> </tr> </table>
# Exploitation and sharing of data
The results of the research performed under this project will be disseminated
primarily through the public publication of deliverables and conference
presentations. The documentation will be available to interested parties upon
request and will be transmitted electronically via e-mail. On the other hand,
all the computer software generated represents the main exploitable result of
the project, and hence its source code will not be made public, as this would
compromise the IPR and commercial plans of the participants. Furniture
manufacturers are the sole owners of the furniture models provided; these will
therefore be stored with access restricted from other manufacturers
participating in the pilot phase and from third parties outside the
consortium, and only their web representation will be available for use in
validation.
The consortium has identified one document deserving a higher degree of
dissemination owing to its relevance to the sector and to potential further
research in ICT technologies applied to traditional business sectors: the
_D5.4 FurnIT-SAVER White Paper_. The consortium will search for relevant open
access repositories, relevant resource databases and, in general, available
dissemination channels to make it widely available and increase its impact.
All other produced data and information will be self-archived and preserved
according to the details provided in the following section.
# Archiving and preservation (including storage and backup)
To ensure the safety of the data, the involved participants will use available
local file servers to periodically create backups of the relevant materials.
A Structured Query Language (SQL) database will be created to locally store
the back-end digital information belonging to the computer software and models
category, according to the database structure defined in Section 5 of the
_D2.1 System Architecture_ document.
Additionally, all other relevant documentation created during the project,
such as deliverables or ancillary material, will be self-archived and
preserved in the collaboration tool made available by the project coordinator
to the project, called PROCEMM.
PROCEMM is an open source internet-enabled system with project management
applications that acts as information repository. In FurnIT-SAVER project this
tool is used for document management and project control. The tool is used as
a website with restricted user access for confidentiality reasons. Therefore,
the public documentation and other information declared public by the
consortium will be stored and available upon request in this tool.
Figure 1: FurnIT-SAVER project website, with access to the project repository
(PROCEMM) highlighted.
All of the research data and material will remain in place for at least the 5
years prescribed by the European Commission audit services, as well as for the
foreseeable future after that, according to the agreements reached by the
consortium by the end of the project. The costs associated with PROCEMM
maintenance and the external hosting of the project website will be assumed by
the project coordinator both during and after the lifespan of the project.
# Data Summary
**What is the purpose of the data collection/generation and its relation to
the objectives of the project?**
The overall objective of 3D-Forensics/FTI is to launch a product based upon
research and development results delivered in the previous Framework 7 (FP7)
3D-Forensics project, in which the consortium developed a 3D scanner and
analysis software to improve the capturing and investigation of footwear and
tyre traces left at crime scenes. This will require taking the already
demonstrated prototype from the Technology Readiness Level (TRL) 6 to TRL9 and
product launch.
Three of the project’s specific objectives (SO) are related to the purpose of
data collection/generation:
* Complete the last development step for the hardware and software to enable the marketing of the forensic product. (SO3 in the Description of the Action (DoA))
* Testing and evaluation of advanced product prototypes with 6 forensic public end users 1 , including reproducibility “round robin” testing and pilot testing. (SO4 in DoA)
* A body of evidence (data and its analysis) demonstrating the validity of the 3D-system for the purpose of providing evidence to criminal justice systems. (SO5 in DoA)
Experimental tests by the consortium and pilot testing by end users are
foreseen to provide feedback on how to further improve the forensic product
and to complete the last development steps before market launch. “Round robin”
testing and validation of the 3D-Forensics system are foreseen to prove the
system’s reproducibility of results and to support the acceptance of evidence
based on results from the system in court. The validation in an accredited
process will be a selling point when the product is launched to the market.
There are the following tasks for data collection/generation in
3D-Forensics/FTI:
**Table 1:** Purposes of data collection/generation in 3D-Forensics/FTI
<table>
<tr>
<th>
**Task**
</th>
<th>
**Tasks (DoA)**
</th>
<th>
**Who**
</th>
<th>
**Time**
**(_plan_ ) **
</th>
<th>
**Purpose**
</th>
<th>
**Open access**
</th> </tr>
<tr>
<td>
**Technical development**
</td>
<td>
2200
3300
</td>
<td>
Consortium participants
</td>
<td>
Month 1
–
Month 18
</td>
<td>
Evaluate the progress in the technical development of the system and
production of demo datasets
</td>
<td>
Partly
</td> </tr>
<tr>
<td>
**Familiarisatio n Testing**
</td>
<td>
4200
</td>
<td>
End users
</td>
<td>
Month 5
–
Month 10
</td>
<td>
Initial training and familiarisation with the new technology and identifying
of improvements
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Round Robin Testing**
</td>
<td>
4300
</td>
<td>
End users
</td>
<td>
Month 8
–
Month 13
</td>
<td>
Verify the performance and demonstrate the reproducibility of the technology
in a controlled situation (test bed)
</td>
<td>
Partly
</td> </tr>
<tr>
<td>
**Pilot testing**
</td>
<td>
4400
</td>
<td>
End users
</td>
<td>
Month 14
–
Month 20
</td>
<td>
Evaluate the system in (nearly) real crime scene situations and identify
further improvements
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Validation I**
</td>
<td>
4500
</td>
<td>
End users
</td>
<td>
Month 14
–
Month 25
</td>
<td>
Evaluate the usability and underpin the performance of the
system as well as its judicial acceptance
</td>
<td>
Partly
</td> </tr>
<tr>
<td>
**Validation II**
</td>
<td>
4600
</td>
<td>
End users
(at least one)
</td>
<td>
Month 20
–
Month 27
</td>
<td>
Evaluate the usability and performance of the system as well as its judicial
acceptance in an accredited process
</td>
<td>
Yes
</td> </tr> </table>
The consortium wants to publish results illustrating the performance of the
system and its validation in an accredited process. Data generated for this
purpose is planned to be completely openly accessible. This data collection is
foreseen to take place in the last part of the project, around mid-2018. Other
data collections are planned to be opened in part, mainly for demonstration
purposes.
**What types and formats of data will the project generate/collect?**
The 3D-Forensics system consists of a mobile 3D-scanner to capture footwear
and tyre traces and 3D analysis software to investigate the datasets.
**First** , data generation means the production of 3D scans of the following
kinds of objects:
* Footwear impressions in different undergrounds
* Tyre impressions in different undergrounds
* Soles of footwear
* Profile of tyres
* Specimens for 3D sensors
This data will contain a set of raw 3D scan results captured by the mobile
3D-scanner. Each raw scan will contain the following output data:
**Table 2:** Raw output data for each acquired dataset
<table>
<tr>
<th>
**Raw output data**
</th>
<th>
**Details**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_Ordered 3D-pointcloud_
</td> </tr>
<tr>
<td>
**1a**
</td>
<td>
3D-coordinates of points
</td>
<td>
XYZ-coordinates in unit meter
</td> </tr>
<tr>
<td>
**1b**
</td>
<td>
Row and column index of 3D-points
</td>
<td>
Each 3D-point is dedicated to a pixel
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Textures of 3D-pointcloud mapped by row/column index_
</td> </tr>
<tr>
<td>
**2a**
</td>
<td>
Quality values
</td>
<td>
Each 3D-point has a quality based on the brightness and reflectivity of the
scanned surface at this point
</td> </tr>
<tr>
<td>
**2b**
</td>
<td>
Grey image
</td>
<td>
8-Bit grey values
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
_Textures of 3D-pointcloud mapped by relative calibration_
</td> </tr>
<tr>
<td>
**3a**
</td>
<td>
Colour image of external colour camera
</td>
<td>
24-Bit colour values
</td> </tr>
<tr>
<td>
**3b**
</td>
<td>
Calibration parameters
of external camera in OpenCV format
</td>
<td>
Extrinsic and intrinsic calibration parameters relative to 3D-point cloud
(including distortion)
</td> </tr> </table>
The raw data is saved in ASCII format (row, column, X, Y, Z, quality, grey
value) as a txt-file and as a separate image file (jpg or cr2) with
calibration parameters in xml format (camera calibration model by OpenCV 2).
For those datasets which will be openly accessible, the output data can be
converted into the E57 format, which is a more common standard and can be
imported by most 3D software packages.
**Second**, data generation means the processing and analysis of 3D raw data
with the 3D-Forensics software “R3 Forensic”. The software includes the
following tools (the raw data itself is never changed hereby):
* Shading of scans
* Alignment of single scans, e.g. tyre tracks
* Cropping / Masking of points
* Meshing of points
* Colour mapping of external camera image onto 3D-pointcloud and/or mesh
* Determination of reference plane
* Flipping
* Extraction of solid images
* Determination of linear measures
* Assignment of class characteristics, e.g. shoe type
* Assignment of identification marks
* Extraction of sections through 3D-pointcloud
* Determination of a colour coded height map
Throughout this processing, additional data beyond the original raw scan data
may be produced:
**Table 3:** Processed output data
<table>
<tr>
<th>
**Processed output data**
</th>
<th>
**Details**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_Transformation matrixes_
</td>
<td>
Rotation and translation parameters and information on mirroring along one
coordinate axis (4 x 4 matrix for each 3Dpointcloud)
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Meshes_
</td>
<td>
Mesh of a 3D-pointcloud, may include colour mapping
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
_Solid images_
</td>
<td>
Virtual view onto the 3D data
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
_Sections_
</td>
<td>
2D data within one section place through the 3D data
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
_Annotations_
</td>
<td>
Annotations onto class and identification characteristics as well as
distances, including their position within the 3D data
</td> </tr> </table>
The easiest way to access this data is through the 3D-Forensics system
software “R3 Forensic”, which saves it all in one project. The consortium will
discuss the possibility of providing a demo version of “R3 Forensic” with
limited functionality for use with data made openly accessible. An alternative
is to convert the data into more common formats (e.g. xml format for
transformations and annotations, vrml format for meshes).
**Will you re-use any existing data and how?**
As a result of the previous FP7 3D-Forensics project a small data collection
already exists. The results of those experiments have been taken into account
for the further improvement of the system. However this existing data will not
reflect the final technical state of the system and will therefore not be
openly accessible.
**What is the origin of the data?**
The raw data will be captured using (advanced) prototypes of the 3D-Forensics
scanner (Figure 1) and the processed data will be produced by the (advanced)
prototype of the 3D analysis software. Both capturing and processing will be
performed by partners of the consortium as well as associated end users.
**Figure 1** : 3D-Forensics scanner and analysis software
The format conversion for openly accessible data will be made using the
analysis software “R3 Forensic”.
**What is the expected size of the data?**
The size of a single raw scan of the 3D-scanner is about 100 MB. The size of
an analysis project is very dependent on the used analysis tools in the range
of 100 MB to 500 MB per scan. It is expected that about 100 datasets (raw and
processed output data) will be openly accessible. The consortium expects an
overall size of <50 GB.
**To whom might it be useful ('data utility')?**
The data collection in the project 3D-Forensics/FTI has the main objectives of
verifying the performance of the 3D-Forensics system and of validating the new
technology under forensic aspects. All data is focussed on the trace types:
footwear and tyre impressions. Thus the data is foreseen to be primarily aimed
at crime scene investigators and forensic experts as well as public
prosecutors who handle evidence in court. The data collection is foreseen to
support the product launch on the market.
Further, the data collection may be useful for the general usability of 3D
data in forensic applications, such as 3D data of other trace types /
situations or 3D data captured by other 3D scanners.
# FAIR data
## Making data findable, including provisions for metadata
**Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?**
Data captured by a 3D-Forensics scanner is foreseen to be usable in court as
well. Thus, it already respects the necessity of metadata and unique
identification. For each raw scan the following metadata is saved
automatically:
* Unique identification number for each single scan
* Unique identification number for groups of scans (e.g. scans that belong to one tyre track)
* Date and time of scan
* Scan settings (actual scan mode and brightness setting)
* User ID
* Device serial number
Further additional information on the scans is saved separately:
* Object type (e.g. underground material for the impression, type of shoe sole, etc.)
* Optionally: Location of object
During the processing and analysing of data the unique identification is still
traceable.
**What naming conventions do you follow?**
The files connected to raw scan datasets are named by the unique
identification number (+ file extension). All metadata connected to the raw
scans is saved in a separate project file in xml format. Processed data will
be named, so that the connection to the original raw data is clear.
**Figure 2:** Cut-out of the xml file containing meta data on raw scans
**Will search keywords be provided that optimize possibilities for re-use?**
Possible keywords could be: type of object / underground, scan mode, time,
etc. These are saved in the metadata xml file. However, the data collection
will probably not be very large. The consortium will discuss the necessity of
a keyword search and report on this question in the next update.
**Do you provide clear version numbers?**
Datasets belonging to one version will be grouped in one zipped file. The
version number is part of the zip file name.
**What metadata will be created? In case metadata standards do not exist**
There exists no standard for the kind of metadata described. Please see the
answer on the previous page.
## Making data openly accessible
**Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.**
In relation to Table 1, there are the following reasons for sharing / not
sharing the data: **Table 4:** Reasons for sharing / not sharing data in
3D-Forensics/FTI
<table>
<tr>
<th>
**Task**
</th>
<th>
**Open access**
</th>
<th>
**Reason for sharing / not sharing**
</th> </tr>
<tr>
<td>
**Technical development**
</td>
<td>
Partly
</td>
<td>
Voluntary restriction
</td> </tr>
<tr>
<td>
**Familiarisation Testing**
</td>
<td>
No
</td>
<td>
Voluntary restriction
</td> </tr>
<tr>
<td>
**Round Robin Testing**
</td>
<td>
Partly
</td>
<td>
Voluntary restriction
</td> </tr>
<tr>
<td>
**Pilot testing**
</td>
<td>
No
</td>
<td>
Voluntary restriction
</td> </tr>
<tr>
<td>
**Validation I**
</td>
<td>
Partly
</td>
<td>
Voluntary restriction
</td> </tr>
<tr>
<td>
**Validation II**
</td>
<td>
Yes
</td>
<td>
No restriction
</td> </tr> </table>
**Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their data closed if relevant provisions are made in the
consortium agreement and are in line with the reasons for opting out.**
This is not relevant for 3D-Forensics/FTI because all generated data is
foreseen to be common property of the consortium.
**How will the data be made accessible (e.g. by deposition in a repository)?**
To make the data accessible the consortium will evaluate two options until the
next update:
* Deposition on the project website _www.3D-Forensics.eu_
* Deposition in a repository (which specific one would be chosen later)
**What methods or software tools are needed to access the data?**
To access the data it is recommended to use the 3D analysis software R3
Forensic. The raw scan data is given in format E57 which can be imported into
other 3D software packages as well (including free software). The processed
scan data can be imported partly in other software packages (including free
software). The metadata of the raw scans and parts of the processed data can
be accessed by a simple text editor.
**Is documentation about the software needed to access the data included?**
The documentation of the used standard formats are public available (e.g. E57
on _http://www.libe57.org_ ) . However most 3D software packages allow
importing this format without the need of any documentation. Documentation
about the metadata (given in xml format) and the processed data will be given.
**Is it possible to include the relevant software (e.g. in open source
code)?**
There is the possibility to provide the software R3 Forensic as binary (not
open source code) in trial mode. The consortium will decide until the next
update which software tool is provided. Open source code will not be provided.
**Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.**
Please see the answer on the previous page.
**Have you explored appropriate arrangements with the identified repository?**
The consortium has understood the possibilities offered by its project website
well. The consortium has not explored potential appropriate external
arrangements yet. This will be done after the deposition location is decided.
**If there are restrictions on use, how will access be provided?**
Restriction on use of the data will be discussed by the consortium and will be
reported in the next update.
**Is there a need for a data access committee?**
There is the need of a data access committee which decides what data will be
openly accessible. This data access committee is given by the General Assembly
of the project consortium.
**Are there well described conditions for access (i.e. a machine readable
license)?**
There are no conditions for access defined yet.
**How will the identity of the person accessing the data be ascertained?**
This will be done after the deposition location is decided.
## Making data interoperable
**Are the data produced in the project interoperable, that is allowing data
exchange and reuse between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different
origins)?**
The raw scan data is provided in the standard format E57 which can be imported
by most 3D software packages. Also parts of the processed scan data will be in
standard formats. The interoperability of the data is assured.
**What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?**
The raw scan data itself is provided in the standard format E57. Parts of the
processed data will be in standard formats, too. For the metadata no standard
exists. It will be given in an xml file format.
**Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?**
There exists no standard vocabulary for the accessible data types.
**In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?**
The consortium will try to provide mappings to more commonly used ontologies.
## Increase data re-use (through clarifying licences)
**How will the data be licensed to permit the widest re-use possible?**
The consortium will discuss the licensing and report its policy in the next
update.
**When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.**
There is in principle no need for an embargo for any specific time. However,
the consortium and its user testers will first analyse all data themselves
before making it openly accessible.
**Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the re-use of some data is
restricted, explain why.**
The generated data is important to underpin the performance of the
3D-Forensics system and to demonstrate the validity of the data regarding
aspects, connected with its admissibility in court as a basis for expert
opinion evidence. Those issues are important for the time after the product
launch as well. Under these considerations, no restriction after the end of
the project is necessary.
**How long is it intended that the data remains re-usable?**
The data needs to be re-usable at least for the product lifetime of the
3D-Forensics system, which is expected to be >5 years after the end of the
project.
**Are data quality assurance processes described?**
All consortium partners will review the openly accessible data and assure its
quality. This quality assurance process will be described in detail in the
documentation of the openly accessible data.
# Allocation of resources
**What are the costs for making data FAIR in your project?**
Costs are foreseen to arise mainly in the form of personnel time to collect,
convert and document the open data. The costs are estimated to be not less
than one person month.
**How will these be covered? Note that costs related to open access to
research data are eligible as part of the Horizon 2020 grant (if compliant
with the Grant Agreement conditions).**
The costs will be covered as personnel costs in the WPs given in Table 1.
**Who will be responsible for data management in your project?**
All consortium partners are involved in the data management. The partner
Fraunhofer IOF has the main responsibility.
**Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?**
The topic of long term preservation has not yet been discussed. This will be
reported in a future update.
# Data security
**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?**
At least one backup copy of all data will be stored by the consortium to avoid
any data loss. No sensitive data is foreseen to be generated.
**Is the data safely stored in certified repositories for long term
preservation and curation?**
The data will be stored safely with the consortium's own resources. However,
the option of long term preservation and curation in a certified repository
will be discussed by the consortium and reported in a future update.
# Ethical aspects
**Are there any ethical or legal issues that can have an impact on data
sharing? These can also be discussed in the context of the ethics review. If
relevant, include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).**
The data collection covers scans of footwear and tyre impressions of anonymous
shoes and tyres. No personal data is foreseen to be acquired, and as such no
ethical issues should arise.
**Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?**
As stated above, informed consent is foreseen to be unnecessary as no personal
data is foreseen to be generated. However, the project will prepare an
informed consent form as part of deliverable D6.1, so that it is available in
case of unexpected need, and the above issues are foreseen to be taken into
consideration.
# Other issues
**Do you make use of other national/funder/sectorial/departmental procedures
for data management? If yes, which ones?**
At the time of writing, no national/funder/sectorial/departmental procedures
are foreseen to be applied for data management in this project.
# Executive Summary
ODYSSEA intends to develop, operate and demonstrate an interoperable and cost-
effective platform that fully integrates networks of observing and forecasting
systems across the Mediterranean basin, addressing both the open sea and the
coastal zone.
The platform is prepared to deliver a set of services focused on different
coastal users' needs (navigation safety, port operations, water pollution
prevention and response, eutrophication risks, search and rescue missions,
etc.), enabling the exploitation of the added value of integrated Earth
Observation (EO) technologies (satellite, airborne and ground based), the
Copernicus Marine Service and ICT to deliver customized and ready-to-use
information. These services will provide an easy way to access in-situ data,
local high-resolution forecasts, and products and services (e.g. meteo-
oceanographic conditions at specific locations, identification of optimum or
critical working windows, support to sea pollution response actions, etc.) for
a broad range of different users.
Taking into consideration that this platform will gather a large number of
diverse data sets (from existing networks and platforms and the ODYSSEA-
produced data consisting of operational numerical models data, data from in-
situ sensors and remote sensing data), the issues of data management and data
quality control assume a central concern. One goal of the platform is to
ensure that data from different and diverse data providers are readily
accessible and useable to a wider community. To achieve that, the strategy is
to move towards an integrated data system within ODYSSEA that harmonizes work
flows, data processing and distribution across different systems.
The value of standards is clearly demonstrable. In oceanography, there have
been many discussions on how to process data and information. Many useful
ideas have been developed and put into practice, but there have been few
successful attempts to develop and implement international standards for
managing data.
This document intends to provide an overview of the best practices concerning
these aspects and define the guidelines to be followed in themes such as
catalogues, metadata, data vocabulary, data standards and data quality control
procedures. This implies taking actions at different levels:
* Adopt proper data management procedures to implement metadata, provide an integrated access to data in order to facilitate the integration in existing systems and assure the adoption of proper data quality control.
* Enable integration of more data, improve the enhancement of the services (viewing, downloading, traceability and monitoring) to users and providers, facilitate the discovery of data through a catalogue based on ISO standards, provide OGC services (SOS, WMS, WFS, etc.) to facilitate development; and facilitate the visibility of existing data and the identification of gaps.
This deliverable will provide the guidelines and the first version of the Data
Management Plan (DMP) for the ODYSSEA Platform including strategies for
improving data management, data privacy and data quality control. As an
EU-supported, Mediterranean-focused platform, the data management will be
tuned to specific aspects of ocean data management within the European
context, such as existing data networks and the requirements of European
industry and other end-users.
An updated version of the DMP will be delivered in M18 of the project. It will
describe the implementation process, lessons learned and barriers overcome in
data management while deploying the ODYSSEA Platform. It will further
elaborate aspects of specific relevance to ODYSSEA, namely the "new" data
series approach using SOS and its further integration into the ODYSSEA
Platform, among others.
# Introduction
ODYSSEA aims to provide a set of services focused on different coastal users'
needs (navigation safety, ports operations, water pollution prevention and
response, eutrophication risks, search and rescue missions, etc.), enabling
exploitation of the added value of integrated Earth Observation (EO)
technologies (satellite, airborne and ground based), Copernicus Marine Service
and ICT to deliver customized and ready-to-use information. These services
will provide an easy way to access in-situ data, local high-resolution
forecasts, and products and services (e.g. meteo-oceanographic conditions at
specific locations, identification of optimum or critical working windows,
support to sea pollution response actions, etc.) for a broad range of
different users.
This report describes the strategies to be implemented to improve data
management, data privacy and data quality control of ODYSSEA services. The
strategy has four main components:
* Catalogues, vocabulary and metadata;
* Data integration and fusion;
* Data quality control;
* Data privacy policy.
The issue of metadata, vocabulary and catalogues is of prime importance to
assure the interoperability and easy discovery of data. Proper data
management following widely accepted standards also contributes to reducing
the duplication of efforts among agencies and to improving the quality and
reducing the costs related to geospatial information, thus making
oceanographic data more accessible to the public and helping to establish key
partnerships to increase data availability. Aiming to contribute to these
objectives, ODYSSEA will
adopt the procedures already proposed in the most relevant EU initiatives such
as CMEMS, EMODNet and SeaDataNet, especially the standards in relation to
vocabularies, metadata and data formats. In practice the gridded data sets
addressing either dynamic data sets (similar to CMEMS) or static data sets
(similar to EMODnet) will follow procedures similar to the ones adopted by
these two services. Regarding the time series data, SeaDataNet procedures will
represent the main guidelines and NetCDF-CF format will be the standard to be
adopted. However, ODYSSEA will go one step further and will use these NetCDF
files to feed an SOS service, supported by 52°North software, to provide the
interface to the users of the platform.
The capability of serving time series through a standard protocol, such as
SOS, will represent a step forward in relation to the existing services,
although, as a pioneering effort, it is foreseen that some barriers will have
to be overcome. The service will be tested in the ODYSSEA platform V0 Edition
and will later be the subject of a detailed assessment in Deliverable D3.3 or
in an updated version of this deliverable.
The data integration and fusion policies to be adopted in ODYSSEA are another
relevant issue of the project. Data integration and fusion deal with the best
strategies to adopt when merging datasets obtained from different data
sources, building the best available datasets, or fusing different data
sources to produce aggregated data. Although this is not easy ground,
properly addressing this issue can make a valuable contribution to improving
data accuracy and the robustness of models' initial and boundary conditions,
and to providing users with comprehensive data that merge different data sets
based on reliable criteria.
Data quality control, whether related to the quality of observed in-situ
data (e.g. tidal gauges, wave buoys, weather stations) or to the modelled
forecasts, is another relevant aspect that will be addressed by ODYSSEA. For
locally acquired data, automatic procedures will run regularly to detect
and remove anomalous values from observed datasets. For the models, the
results will be automatically compared with observations (e.g., buoys and
CMEMS gridded observation products) and the statistical analysis will be
provided on a daily basis to the end users.
Regarding data privacy (data protection and the rights of platform end-users,
customers and business contacts), ODYSSEA will ensure that personal data are
handled in accordance with the General Data Protection Regulation (GDPR)
(Regulation (EU) 2016/679), which replaces Directive 95/46/EC on May 25,
2018. ‘Personal data’ means any information, private or professional, which
relates or can be related to an identified or identifiable natural person
(for the full definition, see Article 2(a) of EU Directive 95/46/EC).
In the following sections, a more detailed overview of both the state of the
art and the procedures to be adopted in ODYSSEA is provided. Note that at
this stage this document focuses mostly on defining the guidelines to be
followed throughout ODYSSEA platform development; it does not yet reflect a
practical implementation of these guidelines, which will be the subject of a
later document.
# Ocean Data Management: the European context
Delivery of data to users requires common data transport formats, which
interact with other standards (vocabularies, data quality control).
Several initiatives exist within Europe for ocean data management, which are
now coordinated under the umbrella of EuroGOOS. EuroGOOS is the network
committed to develop and advance the operational oceanography capacity of
Europe, within the context of the intergovernmental Global Ocean Observing
System (GOOS). The scope of EuroGOOS is wide and its needs are partially
addressed by the on-going development within Copernicus, SeaDataNet and other
EU initiatives.
Therefore, to improve the quantity, quality and accessibility of marine
information, to support decision making and to open up new economic
opportunities in the marine and maritime sectors of Europe for the benefit of
European citizens and the global community, it was agreed at the annual
EuroGOOS meeting in 2010 that it is essential to meet the following needs
(AtlantOS, 2016):
* Provision of easy access to data through standard generic tools; where “easy” means the direct use of data without concerns on data quality and processing and that adequate metadata are available to describe how the data were processed by the data provider.
* To combine in situ-observation data with other information (e.g., satellite images or model outputs) to derive new products, build new services or enable better-informed decision-making.
The ocean data management and exchange process within EuroGOOS intends to
reduce the duplication of efforts among agencies; to improve the quality and
reduce costs related to geospatial information, thus making oceanographic data
more accessible to the public and helping to establish key partnerships to
increase data availability. In addition, a EuroGOOS data management system
intends to deliver a system that will meet European needs, in terms of
standards and respecting the structures of the contributing organizations.
The structure will include:
* Observation data providers, which can be operational agencies, marine research centres, universities, national oceanographic data centres and satellite data centres.
* Integrators of marine data, such as the Copernicus in-situ data thematic centre (for access to near real-time data acquired by continuous, automatic and permanent observation networks) or the SeaDataNet infrastructure (for quality controlled, long-term time series acquired by all ocean observation initiatives, missions, or experiments), ICES and EurOBIS for biodiversity observations, and the new European Marine Observation and Data Network (EMODnet) portals. The integrators that will support both data providers, willing to share their observation data, and users requesting access to oceanographic data (historic, real-time and forecasts). Integrators develop new services to facilitate data access and increase the use of both existing and new observational data.
* Links with international and cross-disciplinary initiatives, such as GEOSS (Global Earth Observation System of Systems), both for technical solutions to improve harmonization in an interdisciplinary global context.
## Towards an integrated EU data system
ODYSSEA aims to contribute to improving data availability across the
Mediterranean basin, addressing both the open sea and the coastal zone. One
goal is to ensure that data from different and diverse in-situ observing
networks and forecasting models are readily accessible and useable. To achieve
this, the strategy is to move towards an integrated data system that
harmonizes work flows, data processing and data distribution across the in-
situ observing network systems, and integrates in-situ observations into
existing European and international data infrastructures (the so-called
“Integrators”). These include the Copernicus In Situ TAC, SeaDataNet NODCs,
EMODnet, EurOBIS, and GEOSS.
The targeted integrated system deals with data management challenges that must
be met to provide efficient and reliable data service to users. These include:
* Common quality control for heterogeneous and near real time data;
* Standardization of mandatory metadata for efficient data exchange;
* Interoperability of Network and Integrator data management systems.
## Industry requirements
Presently, there is a need to change the way marine observatories and public
data-sharing initiatives engage with industry and users. The Columbus project
proposes a set of recommendations designed to overcome some of the most
important gaps and barriers still faced by private data users. Taken together,
they represent the basic components of a strategy to open significant
opportunities for maritime industry to both benefit from and engage with
public marine data initiatives. This can ensure the optimum return of public
investments in the marine data sector, notably in support of meeting key EU
policy goals under the Blue Growth Strategy, the Marine Strategy Framework
Directive and the Maritime Spatial Planning Directive. Some barriers require
further analysis and discussion, but there are already many actions that can
be undertaken to improve the situation in the short and medium term (Columbus,
2017):
* _Industry representatives should be included in the governance and take part in the entire cycle of decision making, development and operation of marine observation and data-sharing initiatives._
* _There is a need for marine data-sharing initiatives to take a more pro-active approach and move out of the comfort zone of the traditional oceanographic marine monitoring and observing communities. This involves, among others, developing a more “service oriented approach”, learning new communication skills and language, being present and more visible in fora that attract industry and to exploit creative technologies._
* _Data, products and services offered by marine observation and data initiatives should be presented in a user-friendly, attractive and intuitive way which is adapted to the target users. If users from different communities or sectors are targeted, options to adjust the interface depending on the visitor should be considered._
* _Clear, succinct and open communication is critical: it should be instantly clear for industry what data, products and services are offered and what may be made available in the future. Equally important is to provide information on what is not available, and the limitations of the resources offered._
* _More efforts should be made to build upon early achievements and successes: presenting use case examples can trigger interest where there may previously have been none._
* _There is a significant role for maritime clusters in connecting marine data initiatives with industry and vice versa. Maritime clusters are an important bridge between private and public sector as they deal with both and have a good understanding of their culture, language, needs and concerns._
* _At European level there is a need for defragmentation of the plethora of marine observation and data and information sharing initiatives, as well as the online data portals. In the longer term, there is a need for a joint roadmap, agreed by the responsible coordinating and funding bodies including at the European Commission level, to set out the strategic framework._
* _Dedicated data-sharing policies to incentivise the private sector and address their specific needs should be developed. Ways forward could include: stating clearly the added-value or benefits of sharing data, moratorium on commercially sensitive data, provision of services in return for data which could support in-house data management, the development of a data-sharing ‘green label’ in recognition of corporate social responsibility. It is clear that implementation of the recommendations will require increased commitment and investment of time and resources, both from industry and from marine observation and data initiatives, but should provide both with significant returns over time_
## The ODYSSEA approach
Regarding data management, the procedures to follow in ODYSSEA will
preferentially draw on the examples of CMEMS, EMODnet and SeaDataNet. In
practice, two major data types will be addressed: gridded data and time
series data.
The gridded data may address dynamic data sets (similar to CMEMS) or static
data sets (similar to EMODnet). In both cases the procedures to follow will be
similar to the ones adopted by these two services.
Regarding the time series data, SeaDataNet procedures will represent the main
guidelines, and the NetCDF-CF format will be the standard to be adopted.
However, ODYSSEA will go one step further and use these NetCDF files to feed
a Sensor Observation Service (SOS), supported by 52°North software, to
provide the interface to the users; a client-side sketch is given below.
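To make the intended interface concrete, the sketch below shows how a client could retrieve a time series from such an SOS endpoint using the OWSLib Python library. The endpoint URL, offering name and observed property are placeholders for illustration, not actual ODYSSEA identifiers.

```python
# Minimal client-side sketch (not ODYSSEA code): pulling a time series from an
# SOS endpoint with OWSLib. URL, offering and property names are placeholders.
from owslib.sos import SensorObservationService

SOS_URL = "https://data.example.org/sos"  # hypothetical endpoint

sos = SensorObservationService(SOS_URL, version="2.0.0")

# Inspect what the service offers before requesting data.
for offering_id, offering in sos.contents.items():
    print(offering_id, offering.observed_properties)

# Request observations for one (hypothetical) offering/property pair;
# the response is raw O&M XML to be parsed downstream.
response = sos.get_observation(
    offerings=["wave_buoy_01"],
    observedProperties=["sea_surface_wave_significant_height"],
    responseFormat="http://www.opengis.net/om/2.0",
)
print(response[:500])
```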
# Data quality control
The issue of data quality control will be addressed following the state of the
art recommendations of different projects such as SeaDataNet or AtlantOS.
SeaDataNet produced a comprehensive document presenting a set of guidelines to
be followed in marine data quality control. According to this document, part
of which is reproduced below, data quality control essentially and simply
has the following objective: “ _To ensure the data consistency within a single
data set and within a collection of data sets and to ensure that the quality
and errors of the data are apparent to the user who has sufficient information
to assess its suitability for a task_ ”. If done well, quality control brings
about several key advantages (SeaDataNet, 2010):
* _**Maintaining Common Standards** : There is a minimum level to which all oceanographic data should be quality controlled. There is little point banking data just because they have been collected; the data must be qualified by additional information concerning methods of measurement and subsequent data processing to be of use to potential users. Standards need to be imposed on the quality and long-term value of the data that are accepted (Rickards, 1989). If there are guidelines available to this end, the end result is that data are at least maintained to this degree, keeping common standards to a higher level. _
* _**Acquiring Consistency** : Data within data centres should be as consistent to each other as possible. This makes the data more accessible to the external user. Searches for data sets are more successful as users are able to identify the specific data they require quickly, even if the origins of the data are very different on a national or even international level. _
* _**Ensuring Reliability** : Data centres, like other organisations, build reputations based on the quality of the services they provide. To serve a purpose to the research community and others their data must be reliable, and this can be better achieved if the data have been quality controlled to a ‘universal’ standard. Many national and international programmes or projects carry out investigations across a broad field of marine science which require complex information on the marine environment. Many large-scale projects are also carried out under commercial control such as those involved with oil and gas and fishing industries. Significant decisions are made, and theories formed, on the assumption that data are reliable and compatible, even when they come from many different sources. _
The data flux of ODYSSEA services will be managed automatically by the ODYSSEA
platform. Data quality control will start with the execution of automatic
procedures (independently of the adoption of more complex procedures). The
data quality control methodology will focus on in-situ observations and
modelled forecasts, and it will be addressed from two perspectives: data
**Quality Assurance** and **Quality Control** .
Quality Assurance (QA) is a set of review and audit procedures implemented by
personnel or an organization (ideally) not involved with normal project
activities to monitor and evaluate the project to maximize the probability
that minimum standards of quality are being attained. With regard to data, QA
is a system to assure that the data generated is of known quality and well-
described data production procedures are being followed. This assurance relies
heavily on the documentation of processes, procedures, capabilities, and
monitoring. Reviews verify that data quality objectives are being met within
the given constraints. QA is inherently a human-in-the-loop effort and
substantial documentation must accompany any QA action. QA procedures may
result in corrections to data. Such corrections shall occur only upon
authorized human intervention (e.g., marine operator, product scientist,
quality analyst, principal investigator) and the corrections may either be
applied in bulk (i.e., all data from an instrument during a deployment period)
or to selective data points. The application of QA corrections will
automatically result in the reflagging of data as ‘corrected’.
Quality Control (QC) is a process of routine technical operations, to measure,
annotate (i.e., flag) and control the quality of the data being produced.
These operations may include spike checks, out-of-range checks, missing data
checks, as well as others. QC is designed to:
* Provide routine and consistent checks to ensure data integrity, correctness, and completeness;
* Identify and address possible errors and omissions;
* Document all QC activities.
QC operations include automated checks on data acquisition and calculations by
the use of approved standardized procedures. Higher-tier QC activities can
include additional technical review and correction of the data by human
inspection. QC procedures are important for:
* Detecting missing mandatory information;
* Detecting errors made during the transfer or reformatting;
* Detecting duplicates;
* Detecting remaining outliers (spikes, out of scale data, vertical instabilities, etc);
* Attaching a quality flag to each numerical value to indicate the corrected observed data points.
A guideline of recommended QC procedures has been compiled by the SeaDataNet
project after reviewing NODC schemes and other known schemes (e.g. WGMDM
guidelines, World Ocean Database, GTSPP, Argo, WOCE, QARTOD, ESEAS, SIMORC,
etc.). The guideline at present follows the QC methods proposed by SeaDataNet
for CTD (temperature and salinity profiles), current meter data (including
ADCP), wave data and sea level data. SeaDataNet is also working to extend the
guideline with QC methods for surface underway data, nutrients, geophysical
data and biological data.
ANNEX I provides a detailed description of the implementation process
procedure to be followed for QA/QC in ODYSSEA.
## Quality Control Flags
According to EuroGOOS (2016), an extensive use of flags to indicate the data
quality is recommended, since the end user will select data based on quality
control flags, amongst other criteria. These flags should always be included
in any data transfer (e.g., from ODYSSEA Observatories to the central ODYSSEA
platform) maintaining standards and ensuring data consistency and reliability
( _see Table 1_ ). The same flag scale is also recommended by SeaDataNet.
TABLE 1: QUALITY FLAG SCALE (REPRODUCED FROM EUROGOOS, 2016)
<table>
<tr>
<th>Code</th>
<th>Definition</th>
</tr>
<tr><td>0</td><td>No QC was performed</td></tr>
<tr><td>1</td><td>Good data</td></tr>
<tr><td>2</td><td>Probably good data</td></tr>
<tr><td>3</td><td>Bad data that are potentially correctable</td></tr>
<tr><td>4</td><td>Bad data</td></tr>
<tr><td>5</td><td>Value changed</td></tr>
<tr><td>6</td><td>Below detection limit</td></tr>
<tr><td>7</td><td>In excess of quoted value</td></tr>
<tr><td>8</td><td>Interpolated value</td></tr>
<tr><td>9</td><td>Missing value</td></tr>
<tr><td>A</td><td>Incomplete information</td></tr>
</table>
* _Data with QC flag = 0 should not be used without a quality control performed by the user._
* _Data with a QC flag different from 1 on either position or date should not be used without additional control from the user._
* _If the date and position QC flag = 1, only measurements with QC flag = 1 can be used safely without further analysis._
* _If QC flag = 4, the measurements should be rejected._
* _If QC flag = 2, the data may be good for some applications, but the user should verify this._
* _If QC flag = 3, the data are not usable, but the data centre may be able to correct them in delayed mode._
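As an illustration of these rules, the short sketch below (in Python, with assumed column names that are not a fixed ODYSSEA schema) filters a dataset according to the flag scale.

```python
# Illustrative sketch of applying the EuroGOOS flag scale when consuming data;
# the column names below are assumptions for the example, not a fixed schema.
import pandas as pd

df = pd.DataFrame({
    "time_qc":     [1, 1, 1, 1],
    "position_qc": [1, 1, 1, 1],
    "value":       [2.1, 2.3, 19.7, 2.2],
    "value_qc":    [1, 2, 4, 1],
})

# Safe default: keep only measurements whose date/position and value are
# all flagged 1 ("good data").
good = df[(df["time_qc"] == 1) & (df["position_qc"] == 1) & (df["value_qc"] == 1)]

# Application-dependent: flag 2 ("probably good") may be acceptable after
# the user has verified it; flag 4 ("bad data") is always rejected.
usable = df[df["value_qc"].isin([1, 2])]
print(good, usable, sep="\n\n")
```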
## In situ observations quality control
The quality control of observations may be done in two phases. During the
download of in-situ observations, automatic checks should be performed such as
those proposed by SeaDataNet (2010) (e.g. global range test, date and time
checks). After quality control, only the valid data is stored in the
database. In a second phase, a tool may be run periodically to perform a
scientific quality control check (SeaDataNet, 2010). This quality control
aims to detect spikes, filter high-frequency noise (e.g. using a moving
average or P50 filter), identify data with abnormal variability in time, etc.
Specific tools will run automatically for this purpose.
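A minimal sketch of what such automatic checks could look like is given below, assuming NumPy arrays and the flag scale of Table 1; the range limits and spike threshold are illustrative values only, not ODYSSEA configuration.

```python
# Minimal sketch of two automatic checks named above: a global range test and
# a simple spike test, each writing QC flags. Thresholds are illustrative.
import numpy as np

GOOD, BAD = 1, 4

def global_range_test(values, vmin, vmax):
    """Flag 4 ("bad data") any value outside the physically plausible range."""
    flags = np.full(values.shape, GOOD, dtype=int)
    flags[(values < vmin) | (values > vmax)] = BAD
    return flags

def spike_test(values, threshold):
    """Flag a point whose deviation from its neighbours' mean exceeds threshold."""
    flags = np.full(values.shape, GOOD, dtype=int)
    neighbours = (values[:-2] + values[2:]) / 2.0
    spikes = np.abs(values[1:-1] - neighbours) > threshold
    flags[1:-1][spikes] = BAD
    return flags

temperature = np.array([14.2, 14.3, 29.9, 14.4, 14.1])  # synthetic example
flags = np.maximum(
    global_range_test(temperature, vmin=-2.5, vmax=40.0),  # illustrative limits
    spike_test(temperature, threshold=5.0),
)
print(flags)  # -> [1 1 4 1 1]: the spike at index 2 is rejected
```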
## Forecasts quality control
The modelled forecasts quality control may be done by comparing time-series
forecasts with in situ observations (e.g., wave buoys, tidal gauge, weather
stations, etc.) through automatically-run algorithms. Also, gridded data
forecasts may be compared automatically with observations (e.g., CMEMS gridded
data observations). As a result, several statistical parameters may be
computed (e.g., correlation coefficient, bias, RMSE, skill, etc.) to assess
the quality of forecasts.
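The statistics mentioned above could be computed along the following lines; the arrays are synthetic stand-ins for matched model-observation pairs, not project data.

```python
# Sketch of the daily model-observation comparison statistics mentioned above
# (bias, RMSE, correlation), applied to paired series of equal length.
import numpy as np

def validation_stats(model, obs):
    """Return bias, RMSE and Pearson correlation for paired series."""
    bias = float(np.mean(model - obs))
    rmse = float(np.sqrt(np.mean((model - obs) ** 2)))
    corr = float(np.corrcoef(model, obs)[0, 1])
    return {"bias": bias, "rmse": rmse, "correlation": corr}

forecast = np.array([1.2, 1.5, 1.9, 2.4, 2.0])  # e.g. significant wave height [m]
buoy     = np.array([1.1, 1.6, 1.8, 2.6, 2.1])
print(validation_stats(forecast, buoy))
```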
# Data integration and fusion
## Low level data integration and fusion
The question of the best strategies to adopt when merging datasets obtained
from different data sources, building the best available datasets, or fusing
different data sources to produce aggregated data, indices and products is
not an easy one. A possible solution, when different solutions with different
resolutions exist for the same area, is to fuse these data and offer a unique
integrated dataset. Another option is to provide all datasets separately,
with an optional integrated solution. Whatever solution is adopted, the final
objective of data integration and fusion is to contribute to improving data
accuracy and the robustness of models' initial and boundary conditions, and
to providing users with comprehensive data that merge different data sets
based on reliable criteria.
For example, if a user interested in wave data for a specific site finds
that, for the period of interest, different time series from different wave
buoys exist, the user may want a single time series that merges and
harmonizes the different data. This process may require complex actions
regarding the levels of accuracy of the different measuring devices, the
sampling rates, the units, etc.
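A simplified sketch of such a merge, assuming pandas time series and hypothetical buoy names, is shown below; a production system would additionally weight sources by their stated accuracy rather than averaging them equally.

```python
# Sketch, under assumed names, of merging two wave-buoy series with different
# sampling rates onto a common hourly axis before averaging them.
import pandas as pd

buoy_a = pd.Series(
    [1.2, 1.4, 1.3],
    index=pd.to_datetime(["2018-03-01 00:00", "2018-03-01 01:00",
                          "2018-03-01 02:00"]),
    name="hs_buoy_a",
)
buoy_b = pd.Series(
    [1.25, 1.35, 1.30, 1.28],
    index=pd.to_datetime(["2018-03-01 00:00", "2018-03-01 00:30",
                          "2018-03-01 01:00", "2018-03-01 01:30"]),
    name="hs_buoy_b",
)

# Resample the higher-rate series to the common hourly rate, then average
# the two sources where both report a value.
merged = pd.concat(
    [buoy_a, buoy_b.resample("1h").mean()], axis=1
).mean(axis=1).rename("hs_merged")
print(merged)
```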
## Semantic Information Integration and Fusion
Capacity for integration and fusion of semantic information will be provided
through the ODYSSEA platform. Semantic information is made of several
information items, potentially coming from different semantically rich
information sources. The main use of this capacity is for semantic network
enrichment and query.
The information processed is expressed through graphs of entities related with
each other and contains semantic metadata. The fusion is adapted to the domain
of application. This application domain is described through an ontology of
the domain. The fusion process is also adapted to the quality of the
information items, through the use of fusion heuristics.
The fusion heuristics integrate domain knowledge and user preferences. They
are the intelligent part of the semantic fusion system. They are
end-user-defined functions used to express the confidence the users have in the
information sources, as well as specific strategies that must be followed in
order to fuse information coming from different sources. The two main semantic
information integration functionalities are:
* Insertion of new information in a semantic information network (Synthesis),
* Query for information in a semantic information network (Mining).
# Data management
## Providers code for data
Following the procedures adopted by AtlantOS, the institutions providing data
to the ODYSSEA platform should be reported and acknowledged through the EDMO
code recorded in the data file and in the ODYSSEA platform catalogue. EDMO is
the European Directory of Marine Organizations, developed under SeaDataNet,
and it can be used to register any marine organization involved in the
collection of datasets (operators, funders, data holders, etc.). It delivers
a code for the organization to be included in the data or metadata, leading
to the harmonization of information (compared to free text) and the
optimization of dataset discovery. EDMO is coordinated by MARIS.
For EU Countries new entries are added by the National Data Centres (NODCs).
Through ODIP (Ocean Data Interoperability Platform) cooperation, there is also
a point of contact with the USA, Australia and some other non-EU countries.
The rest of the world is managed by MARIS, which also moderates the first
entrance in EDMO of new entries.
The request for a new entry in EDMO is sent to MARIS (current contact: Peter
Thijsse, [email protected]), who verifies whether the institution is already
registered. If a new entry is needed, the basic entry is made by MARIS, after
which the appropriate NODC is responsible for updating further details and
managing changes.
## Data vocabulary
Use of common vocabularies in all meta-databases and data formats is an
important prerequisite towards consistency and interoperability with existing
Earth Observing systems and networks. Common vocabularies consist of lists of
standardised terms of reference covering a broad spectrum of disciplines of
relevance to the oceanographic and wider community. Using standardised terms
of reference the problem of ambiguities related to data structure,
organization and format is solved and therefore, common algorithms for data
processing may be applied. This allows the interoperability of datasets in
terms of their manipulation, distribution and long-term reuse.
ODYSSEA will adopt an Essential Variables list of terms (aggregated level)
that was defined and published in June 2016 on the NERC/BODC Vocabulary
Server. This new vocabulary is mapped to the standards recommended for
ODYSSEA parameter metadata: P01 (parameter), P07 (CF variable) and P06
(units) from the SeaDataNet controlled vocabularies managed by NERC/BODC, and
the internationally assured AphiaID from the World Register of Marine Species
(WoRMS).
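Terms served by the NERC/BODC Vocabulary Server can also be resolved programmatically. The sketch below fetches one illustrative unit concept from the P06 collection, assuming the server honours standard content negotiation for RDF/XML; the concept URI is given only as an example.

```python
# Sketch of resolving a SeaDataNet controlled-vocabulary term from the
# NERC/BODC Vocabulary Server; the concept URI below is illustrative.
import requests

concept_uri = "http://vocab.nerc.ac.uk/collection/P06/current/UPAA/"
resp = requests.get(
    concept_uri,
    headers={"Accept": "application/rdf+xml"},  # assumes content negotiation
    timeout=30,
)
resp.raise_for_status()
print(resp.text[:400])  # SKOS record: prefLabel, definition, mappings, ...
```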
## Metadata
Metadata refers to the description of datasets and services in a compliant
form as it has been defined by the Directive 2007/2/EC (INSPIRE) and
Commission Regulation No 1205/2008.
Metadata is the **data about the data** . Metadata describes how, when and by
whom a particular set of data or a service was collected or prepared, and how
the data is formatted or the service is made available. Metadata is essential
for understanding the stored information and has become increasingly
important. Metadata is structured information that describes, explains,
locates, or otherwise makes it easier to retrieve, use, or manage an
information resource; it is often called “data about the data” or
“information about information”.
Metadata is also data about services. Metadata describes the content, quality,
condition, and other characteristics of a data set or the capabilities of a
service. Creating metadata or data documentation for geospatial datasets is
crucial to the data development process. Metadata is a valuable part of a
dataset and can be used to:
* **Organize** data holdings (Do you know what you have?).
* Provide **information about** data holdings (Can you describe to someone else what you have?).
* Provide information **to data users** (Can they figure out if your data are useful to them?).
* **Maintain the value** of your data (Can they figure out if your data are useful 20 years from now?).
In the geographical domain we can have a description of spatial data (
**spatial data** metadata), a service ( **service** metadata) or a special
analysis process ( **process** metadata). Most of the standardization work
has been done for data metadata; however, service and process metadata are
becoming increasingly important. Metadata is used in discovery mechanisms to bring
spatial information providers and users together. The following mechanisms are
recognized:
* **Discovery** : which data source contains the information that I am looking for?
* **Exploration (or evaluation)** : do I find within the data sources the right information to suit my information needs?
* **Exploitation (use and access)** : how can I obtain and use the data sources?
Each mechanism has its own use of metadata. The selected standards should
fulfil the needs to carry out services using these mechanisms. Metadata is
required to provide information about an organisation’s data holdings. Data
resources are a major national asset, and information of what datasets exist
within different organisations, particularly in the public sector, is required
to improve efficiencies and reduce data duplication. Data catalogues and data
discovery services enable potential users to find, evaluate and use that data,
thereby increasing its value. This is also becoming important at the European
level. In addition, metadata received from an external source may require
further information to be supplied to allow easy processing and
interpretation.
In this context, for all types of data the following information is required
(SeaDataNet, 2010); a compact example record follows the list:
* **Where** the data were collected: location (preferably as latitude and longitude) and depth/height;
* **When** the data were collected (date and time in UTC or clearly specified local time zone);
* **How** the data were collected (e.g., sampling methods, instrument types, analytical techniques). How do we organize the data (e.g., in terms of station numbers, cast numbers);
* **Who** collected the data, including name and institution of the data originator(s) and the principal investigator;
* **What** has been done to the data (e.g., details of processing and calibrations applied, algorithms used to compute derived parameters);
* **Watch** points for other users of the data (e.g., problems encountered and comments on data quality).
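Expressed as a simple record, the checklist above might be captured as follows; the field names and values are illustrative and do not define an ODYSSEA schema.

```python
# Compact sketch of the minimum acquisition metadata listed above, expressed
# as a plain record; the field names are illustrative, not an ODYSSEA schema.
acquisition_metadata = {
    "where": {"latitude": 40.85, "longitude": 25.87, "depth_m": 12.0},
    "when": "2018-03-01T10:30:00Z",  # UTC
    "how": {"instrument": "CTD", "method": "vertical profile, 1 dbar bins"},
    "who": {"originator": "Example Marine Institute", "pi": "A. Researcher"},
    "what": "Salinity derived from conductivity; 2018-01-15 calibration applied",
    "watch": "Possible sensor drift after 2018-02-20; check flags before use",
}
print(acquisition_metadata["where"], acquisition_metadata["when"])
```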
The ICES Working Group on Data and Information Management (WGDIM) has
developed a number of data type guidelines which itemize the elements
required for thirteen different data types. These Data
Type Guidelines have been developed using the expertise of the oceanographic
data centres of ICES Member Countries. They have been designed to describe the
elements of data and metadata considered as important to the ocean research
community. These guidelines are targeted towards most physical-chemical-
biological data types collected on oceanographic research vessel cruises. Each
guideline addresses the data and metadata requirements of a specific data
type. This covers three main areas:
* What the data collector should provide to the data centre (e.g., collection information, processing, etc.);
* How the data centre handles data supplied (e.g., value added, quality control, etc.);
* What the data centre can provide in terms of data, referral services and expertise back to the data collector. A selection of these guidelines, in particular for those data types that are not yet dealt with in detail here, are included in Appendix 1 of this document.
This document summarizes the concept of metadata that is intended to be
adopted by ODYSSEA data platform, following the commonly agreed INSPIRE data
specification template in its relevant parts, i.e., dataset-level, services
metadata and data quality. It also contains detailed technical documentation
on the XML source-code level and therefore provides specific guidelines to
correctly create and maintain metadata in the XML format.
## Metadata Catalogue Service
A **Metadata Catalogue Service** is a mechanism for storing and accessing
descriptive metadata and allows users to query for data items based on desired
attributes. The catalogue service stores descriptive information (metadata)
about logical data items. The Open Geospatial Consortium (OGC) has created
the **Catalogue Service for Web (CSW) standard** to enable the easy data
discovery from a catalogue node. Catalogue services support the ability to
publish and search metadata for data, services, and related information.
Metadata in catalogues can be queried and presented for evaluation and further
processing by both humans and software. Catalogue services (and other
resources such as bibliographic resources, datasets, etc.) are required to
support the discovery and binding to published web map services. The CSW
standard is extremely rich. In addition to supporting a query from a user, it
can support distributed queries (one query that searches many catalogues) and
the harvesting of metadata from node to node.
Catalogue services support the ability to publish and search collections of
descriptive information (metadata) for data, services, and related information
objects. Metadata in catalogues represent resource characteristics that can be
queried and presented for evaluation and further processing by both humans and
software. Catalogue services are required to support the discovery and binding
to registered information resources within an information community.
The International Organisation for Standardisation (ISO) includes ISO/TC
211, an international technical committee for the standardisation
of geographical information. TC 211 has created a strong, globally implemented
set of standards for geospatial metadata: the baseline ISO 19115; ISO 19139
for implementation of data metadata and the ISO 19119 for services metadata.
These open standards define the structure and content of metadata records and
are essential for any catalogue implementation. ISO 19115 describes all
aspects of geospatial metadata and provides a comprehensive set of metadata
elements. It is designed for electronic metadata services, and the elements
are designed to be searchable wherever possible. It is widely used as the
basis for geospatial metadata services. However, because of the large number
of metadata elements and the complexity of their data model, implementation of
ISO 19115 is difficult.
The INSPIRE DIRECTIVE applies these standards and specifications in its
implementation. INSPIRE makes use of three catalogues for unique IDs
management: **(1) SeaDataNet, (2) ICES and (3) CMEMS.** ICES catalogue has a
geospatial component not present in the SeaDataNet catalogue while CMEMS
provides the reference to model results.
### Catalogue Service for Web (CSW)
This section describes briefly the Open GIS Consortium (OGC) specification for
catalogue services. According to this specification: “ _Catalogue services
support the ability to publish and search collections of descriptive
information (metadata) for data, services, and related information objects;
Metadata in catalogues represent resource characteristics that can be queried
and presented for evaluation and further processing by both humans and
software. Catalogue services are required to support the discovery and binding
to registered information resources within an information community_ ".
The Inspire initiative uses the CSW protocol and the ISO metadata application
profile (AP) for the specification and implementation of the Inspire Discovery
Service. In ODYSSEA, the ODYSSEA ISO metadata profile will be developed and
used as described in this document’s metadata sections.
The diagram presented below illustrates a generic view of the CSW protocol and
architecture.
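In client code, a CSW discovery request could look like the following OWSLib sketch; the catalogue endpoint and the search term are placeholders, not ODYSSEA identifiers.

```python
# Minimal sketch of discovering records from a CSW catalogue with OWSLib;
# the endpoint URL is a hypothetical placeholder.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("https://data.example.org/csw")  # hypothetical

# Full-text-style search on any queryable text property.
query = PropertyIsLike("csw:AnyText", "%wave height%")
csw.getrecords2(constraints=[query], maxrecords=10)

for identifier, record in csw.records.items():
    print(identifier, "-", record.title)
```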
### Harvesting
Harvesting is the procedure of collecting metadata records from other
(external) catalogues and synchronizing the local catalogue with the collected
information.
In the majority of the cases the harvesting process is scheduled and
automatically executed once or at pre-defined intervals. It is usually also
possible to execute a harvesting procedure on-demand, i.e., executed by human
request.
The diagram below depicts an example of how the harvesting procedures could
operate between the ODYSSEA platform catalogue and other external catalogues.
Note that, within INSPIRE, the harvesting procedure uses the CSW protocol.
The catalogue responses to the harvesting requests contain collections of
metadata records, using the model described in this document (i.e., INSPIRE
Datasets and Services).
## Guidelines on using metadata elements
### Lineage
Following the ISO 19113 Quality principles, if a data provider has a procedure
for quality validation of their spatial datasets then the data quality
elements, listed in Chapter 2, should be used. If not, the Lineage metadata
element (defined in Regulation 1205/2008/EC) should be used to describe the
overall quality of a spatial dataset.
According to Regulation 1205/2008/EC, lineage “is a statement on process
history and/or overall quality of the spatial dataset. Where appropriate it
may include a statement whether the dataset has been validated or quality
assured, whether it is the official version (if multiple versions exist), and
whether it has legal validity. The value domain of this metadata element is
free text”.
Apart from describing the process history, if feasible within a free text, the
overall quality of the dataset (series) should be included in the Lineage
metadata element. This statement should contain any quality information
required for interoperability and/or valuable for use and evaluation of the
dataset (series).
### Temporal reference
According to Regulation 1205/2008/EC, at least one of the following temporal
reference metadata elements shall be provided: temporal extent, date of
publication, date of last revision, date of creation.
If feasible, the date of the latest revision of a spatial dataset should be
reported using the date of latest revision in a metadata element.
### Topic category
The topic categories defined in Part D 2 of the INSPIRE Implementing Rules for
metadata are derived directly from the topic categories defined in B.5.27 of
ISO 19115. Regulation 1205/2008/EC defines the INSPIRE data themes to which
each topic category is applicable, i.e., oceanography is the INSPIRE theme for
which the Geoscientific information topic category is applicable.
### Keyword
Regulation 1205/2008/EC requires that, for a spatial dataset or a spatial
dataset series, “at least one keyword shall be provided from the General
Environmental Multi-lingual Thesaurus (GEMET) describing the relevant spatial
data theme, as defined in Annex I, II or III to Directive 2007/2/EC”. Keywords
should be taken from the GEMET – General Multilingual Environmental Thesaurus
where possible.
# ODYSSEA datasets
This section describes the structure and the content of the proposed ODYSSEA
metadata profile on the dataset-level and includes general guidelines for the
metadata from two points of view – the first one is the ODYSSEA metadata,
while the second represents ODYSSEA data quality issues.
The structure described in this document is compliant with the existing ISO
standards for metadata – i.e., especially ISO EN 19115 and ISO 19139. The
full list of used ISO standards can be found in the List of References at the
end of this document. The primary goal of this deliverable is to develop a
metadata profile for ODYSSEA geographic datasets and time-series datasets,
within the framework of these ISO standards, to support the interoperability
between the different metadata and/or GIS platforms.
The metadata model to be adopted in ODYSSEA is described in more detail in
Annex I.
## Dataset-level metadata
Metadata can be reported for each individual spatial object (spatial object-
level metadata) or once for a complete dataset or dataset series (dataset-
level metadata). If data quality elements are used at spatial object level,
the documentation shall refer to the appropriate definition in the Data
Quality Info section of this document. This section only specifies the
dataset-level metadata elements.
For some dataset-level metadata elements, in particular on data quality and
maintenance, a more specific scope can be specified. This allows the
definition of metadata at sub-dataset level, e.g., separately for each spatial
object type. When using ISO 19115/19139 to encode the metadata, the following
rules should be followed:
* The scope element (of type DQ_Scope) of the DQ_DataQuality subtype should be used to encode the scope.
* Only the following values should be used for the level element of DQ_Scope: series, dataset, featureType.
* If the level is featureType, then the levelDescription/MD_ScopeDescription/features element (of type Set <GF_FeatureType>) shall be used to list the feature type names.
Mandatory or conditional metadata elements are specified in the next sub-
section, while optional metadata elements are specified in subsequent sub-
Section. The tables describing the metadata elements contain the following
information:
* The first column provides a reference to a more detailed description.
* The second column specifies the name of the metadata element.
* The third column specifies the multiplicity.
* The fourth column specifies the condition, under which the given element becomes mandatory (only for the first and second tables).
In **Annex I** a detailed description of the metadata is presented.
## Service-level metadata
This section describes the structure and the content of the proposed ODYSSEA
metadata profile on the service-level and includes general guidelines for
ODYSSEA metadata from two points of view – the first one is the ODYSSEA-
specific metadata, while the second represents quality issues of the data
published by the services.
The structure described in this document is compliant with the existing ISO
standards for metadata – i.e., especially ISO EN 19115, EN ISO 19119 and ISO
19139 (the full list of ISO standards used can be found in the List of
References at the end of this document). The primary goal of this deliverable
is to
develop a metadata profile for ODYSSEA geographical data services, within the
framework of these ISO standards, to support interoperability between
instances of discovery services and different metadata and/or GIS platforms as
well.
Metadata can be reported for each individual spatial object (spatial object-
level metadata) or once for a complete dataset or dataset series (dataset-
level metadata). On the other hand, metadata can also be reported for the
services that are publishing ODYSSEA data – i.e., especially INSPIRE view and
download services. This section only specifies service-level metadata
elements.
For some service-level metadata elements, in particular for data quality, a
more specific scope can be specified. This allows the definition of metadata
at sub-dataset level, e.g., separately for each spatial object type. When
using ISO 19115/19139 to encode the metadata, the following rules should be
followed:
* The scope element (of type DQ_Scope) of the DQ_DataQuality subtype should be used to encode the scope.
* Only the following value should be used for the level element of DQ_Scope: service.
Mandatory or conditional metadata elements are specified in the ANNEX I.
Optional metadata elements are specified in the subsequent sub-section of this
ANNEX.
## Data format standards
### Ocean Data View data model and netCDF Format
As part of the ODYSSEA services, data sets will be accessible via download
services. Delivery of data to users requires common data transport formats,
which interact with other standards (vocabularies, data quality control).
In SeaDataNet it was decided that Ocean Data View (ODV) and NetCDF format are
mandatory.
ODYSSEA will follow the SeaDataNet (2017) procedures; the main concepts of
that document are reproduced in the following paragraphs. ODYSSEA will also
follow the fundamental data model underlying the ODV format which, in
practice, is composed of a collection of rows, each having the same fixed
number of columns. In this model there are three different types of columns:
* _The metadata columns;_
* _The primary variable data columns (one column for the value plus one for the qualifying flag);_
* _The data columns._
The metadata columns are stored at the left-hand end of each row, followed by
the primary variable columns and then the data columns. There are three
different types of rows:
* _The comment rows;_
* _The column header rows;_
* _The data rows._
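Purely as an illustration of this row/column model, a fragment of an ODV-style spreadsheet file might look as follows. The values are synthetic; columns are tab-separated in practice, and each data column is paired with a quality-flag column.

```
// Comment rows describe the file and its provenance (synthetic example)
Cruise  Station  Type  yyyy-mm-ddThh:mm:ss.sss  Longitude [degrees_east]  Latitude [degrees_north]  Depth [m]  QV:SEADATANET  Temperature [degC]  QV:SEADATANET
EX01    ST01     C     2018-03-01T10:30:00.000  25.870                    40.850                    5          1              14.2                1
EX01    ST01     C     2018-03-01T10:30:00.000  25.870                    40.850                    10         1              14.0                1
```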
The CF metadata conventions (http://cf-pcmdi.llnl.gov/) are designed to
promote the processing and sharing of files created with the NetCDF API. The
conventions define metadata that provide a definitive description of what the
data in each variable represents, and the spatial and temporal properties of
the data. This enables users of data from different sources to decide which
quantities are comparable, and facilitates building applications with powerful
extraction, re-gridding, and display capabilities.
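As a small, hedged example of these conventions, the sketch below writes a CF-1.6-style time-series variable with the Python netCDF4 library; the variable names, attribute values and data are illustrative only.

```python
# Minimal sketch of a CF-convention time-series variable written with the
# netCDF4 library; names, units and values are illustrative.
import numpy as np
from netCDF4 import Dataset

with Dataset("sea_level_example.nc", "w", format="NETCDF4") as nc:
    nc.Conventions = "CF-1.6"
    nc.createDimension("time", None)  # unlimited time dimension

    time = nc.createVariable("time", "f8", ("time",))
    time.standard_name = "time"
    time.units = "seconds since 1970-01-01T00:00:00Z"
    time[:] = np.array([0.0, 3600.0, 7200.0])

    ssh = nc.createVariable("sea_surface_height", "f4", ("time",))
    ssh.standard_name = "sea_surface_height_above_geoid"  # CF standard name
    ssh.units = "m"
    ssh[:] = np.array([0.12, 0.15, 0.11], dtype="f4")
```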
The standard is both mature and well-supported by formal governance for its
further development. The standard is fully documented by a PDF manual
accessible from a link from the CF metadata homepage (http://cf-
pcmdi.llnl.gov/). Note that CF is a developing standard and consequently
access via the homepage rather than through a direct URL to the document is
recommended to ensure that the latest version is obtained. The current version
of this document was prepared using version 1.6 of the conventions dated 5
December 2011.
The approach taken with the development of the SeaDataNet profile based on CF
1.6 was to classify data on the basis of feature types and produce a
SeaDataNet specification for storage of each of the following:
* _**Point time series** , such as current meter or sea level data, have row_groups made up of measurements from a given instrument at different times. The metadata date and time are set to the time when the first measurement was made. The primary variable is time (UT) encoded either as: _
* _A real number representing the Chronological Julian Date, which is defined as the time elapsed in days from 00:00 on January 1st 4713 BC. If this option is chosen, then the column must have the heading ‘Chronological Julian Date [days]’._
* _A string containing the UT date and time to sub-second precision corresponding to ISO8601 syntax (YYYY-MM-DDThh:mm:ss.sss), for example 2009-02-12T11:21:10.325. If this option is chosen, the column must have the heading ‘time_ISO8601’. If the time is not known to sub-second precision, then use the ISO8601 form appropriate to the known precision. For example, a timestamp to the precision of one hour would be represented by 2009-02-12T11:00 and a time stamp to a precision of a day by 2009-02-12._ (A sketch converting between these two encodings is given after this list.)
_Rows within the row_group are ordered by increasing time. Note that the z co-
ordinate (e.g., instrument depth), essential for many types of time series
data, needs to be stored as a data variable and could have the same value
throughout the row_group._
* _**Profile data** , such as CTD or bottle data, have row_groups made up of measurements at different depths. The metadata date and time are set to the time when the profile measurement started. The primary variable is the ‘z co-ordinate’, which for SeaDataNet is either depth in metres or pressure in decibars. Rows within the row_group are ordered by increasing depth. _
* _**Trajectories** , such as underway data, have row_groups made up of a single measurement, making the metadata time and positions the spatio-temporal co-ordinate channels. The primary variable is the ‘z co-ordinate’, which for SeaDataNet is standardised as depth in metres. Rows within the row_group are ordered by increasing time; _
* _**TimeSeriesProfile** (x, y, z fixed; t variable) but some variables can be measured at different depths at the same time var=f(t, z). The specification given is for storage of time series profiles such as moored ADCP. _
* _**TrajectoryProfile** (x, y, z, t all variable) but some variables can be measured at different depths at the same time var=f(t, z). The specification given is for storage of trajectory profiles such as shipborne ADCP. _
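As referenced in the time-encoding options above, the sketch below converts between Chronological Julian Date (CJD, days since 00:00 on 1 January 4713 BC) and ISO 8601 strings; the fixed offset used is exact for modern (proleptic Gregorian) dates.

```python
# Sketch of converting between Chronological Julian Date and ISO 8601 strings.
from datetime import datetime, timedelta

ORDINAL_TO_CJD = 1721425.0  # offset between datetime.toordinal() days and CJD

def iso_to_cjd(iso_string):
    """'2009-02-12T11:21:10.325' -> Chronological Julian Date (float days)."""
    dt = datetime.fromisoformat(iso_string)
    seconds = dt.hour * 3600 + dt.minute * 60 + dt.second + dt.microsecond / 1e6
    return dt.toordinal() + ORDINAL_TO_CJD + seconds / 86400.0

def cjd_to_iso(cjd):
    """Chronological Julian Date -> ISO 8601 string (millisecond precision)."""
    days = int(cjd - ORDINAL_TO_CJD)
    frac_seconds = (cjd - ORDINAL_TO_CJD - days) * 86400.0
    dt = datetime.fromordinal(days) + timedelta(seconds=frac_seconds)
    return dt.isoformat(timespec="milliseconds")

print(iso_to_cjd("2009-02-12T11:21:10.325"))  # ~2454875.47
print(cjd_to_iso(iso_to_cjd("2009-02-12T11:21:10.325")))
```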
The specification was then developed through discussions on a collaborative
e-mail list involving participants in SeaDataNet, MyOcean, USNODC, NCAR and
AODN. The working objective focussed on producing profiles with the following
properties:
* _CF 1.6 conformant;_
* _Have maximum interoperability with CF 1.6 implementations in use by MyOcean (OceanSITES conventions), USNODC (USNODC NetCDF templates) and two contributors to AODN (IMOS and METOC);_
* _Include storage for all labels, metadata and standardised semantic mark-up that were included in the SeaDataNet ODV format files for the equivalent feature type._
Significant list discussion focussed on the version of NetCDF that should be
used for SeaDataNet. The conclusion was that NetCDF 4 should be used wherever
possible, but that NetCDF 3, although strongly discouraged, should not be
totally forbidden.
On ANNEX II some examples of the structure of these files are presented.
### Static data (Bathymetric, Chemical, Geologic, Geophysical, Biological,
Biodiversity data)
ODYSSEA will also adopt the SeaDataNet-proposed standards for marine chemistry
(to support the EMODnet Chemistry pilot), bathymetry (to support the EMODnet
Hydrography and Seabed Mapping pilots), geology and geophysics (to support
the Geo-Seas project and the EMODnet Geology pilot), and marine biology.
Based on an analysis of the present situation, and currently existing
biological data standards and initiatives, such as the Ocean Biogeographic
Information System (OBIS), Global Biodiversity Information Facility (GBIF),
Working Group on Biodiversity Standards (TDWG) and World Register of Marine
Species (WoRMS) standards, SeaDataNet proposed a format for data exchange of
biological data.
Key issues that steered the format development were (SeaDataNet III,
publishable summary):
* _Requirements posed by the intended use and application of the data format (data flows, density calculations, biodiversity index calculations, community analysis, etc…)_
* _Availability of suitable vocabularies (World Register of Marine Species, SeaDataNet Parameter list, SeaDataNet Unit list, etc…)_
* _Requirements for compatibility with existing tools and software (WoRMS taxon match services,_
_EurOBIS QC services, Lifewatch workflows, Ocean Data View, etc…)_
The requirements of the extended ODV format for biological data were defined
as follows:
* _The format should be a general and higher level format without necessarily containing all specifics of each data type, but rather focusing on common information elements for marine biological data._
* _At the same time the format needs to be sufficiently flexible/extendable to be applicable for at least part of the variety of biological data the NODC’s are managing._
* _It should be possible to derive OBIS or Darwin Core compatible datasets from the format._
* _The format should be self-describing, in the sense that all information needed to interpret the data should be included in the file format or be available through links to vocabularies or term lists that are part of the format._
A specific ODV extended format for biological data has been defined for
different types of files such as (see for details SeaDataNet deliverable
D8.4):
* _macrobenthos community with density and biomass values;_
* _zooplankton community with samples from different depths;_
* _demersal fish population with densities for different size classes and individual fish measurements;_
* _pollutant concentrations in biota specimens._
### Open source semantic information
Semantic information may be useful for a myriad of services to the end users.
However, the sources providing semantically rich information are very
heterogeneous. Semantically rich information can be found on Wikipedia and
Wikidata for instance. EMODnet, through the “Human activities” data sets also
provides some semantically rich information.
As one can see, the sources of semantically rich information are very
heterogeneous in availability, reliability and format. Furthermore, they
provide heterogeneous and partially redundant information. No standard model
exists for this type of information, as its variability is very high.
However, as one of the ODYSSEA platform's aims is to integrate and fuse this
kind of information, a shared format is needed in order to analyze and make
use of it.
Within the services that will be developed in ODYSSEA, a domain ontology will
be used in order to enable the integration of semantic information sources.
For each ODYSSEA use case, and for each ODYSSEA product relying on semantic
information analysis and integration, end users of the products will have to
develop, together with ODYSSEA technical partners, an ontology defining the
concepts of interest of the use case. This ontology will be the pivot language
and representation format used to integrate heterogeneous open information
sources.
_Figure 1: example of an ontology defining the main concepts used to analyze
the impact of port structures on the quality of bathing waters and fish
production_
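As a sketch of how such a use-case ontology could be expressed in code, the fragment below builds a few of the Figure 1 concepts with the rdflib library; the namespace, class and property names are hypothetical placeholders, not the project's agreed ontology.

```python
# Sketch of expressing a few Figure 1 concepts with rdflib; the namespace,
# class and property names are hypothetical placeholders.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

ODY = Namespace("http://example.org/odyssea/ontology#")  # hypothetical
g = Graph()
g.bind("ody", ODY)

# Declare the core classes of the use case.
for cls in ("PortStructure", "BathingWater", "FishProduction", "WaterQuality"):
    g.add((ODY[cls], RDF.type, OWL.Class))

# One object property linking port structures to the water quality they impact.
g.add((ODY.impacts, RDF.type, OWL.ObjectProperty))
g.add((ODY.impacts, RDFS.domain, ODY.PortStructure))
g.add((ODY.impacts, RDFS.range, ODY.WaterQuality))

print(g.serialize(format="turtle"))
```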
# Data privacy policy
## General principles
Basic principles laid down by data protection legislation will be observed,
namely:
* ODYSSEA will only hold the personal data necessary to offer the services provided by its platform.
* Data will only be used for the purposes described in the Data Protection Register Form and the Informed Consent Form.
* Personal data will only be held for as long as necessary. Once data are no longer needed, they will be deleted from ODYSSEA records by the ODYSSEA platform administrator (namely the CLS Chief Technical Officer (CTO) / IT platform manager). More specifically, if a certain period (one year) passes without an end-user accessing the platform, CLS will alert them through a standardized electronic message about the destruction of their personal data.
* Personal data storage will be secured to ensure that data are not accessible to unwanted third parties and are protected against disaster and risk.
* ODYSSEA will regularly email website news and information updates only to those end-users and customers who have specifically subscribed to our email service. All subscription emails sent by the ODYSSEA platform will contain clear information on how to unsubscribe from our email service.
* In any event, no personal data will be shared with any third party for direct marketing. ODYSSEA will never sell, rent or exchange mailing lists of personal data.
* All ODYSSEA partners shall comply with the data protection and privacy laws applicable in their country of origin, including their national laws applicable to exporting data into the EU.
  * ODYSSEA partners from non-EU countries have provided signed declarations that they will meet all relevant H2020 ethical standards and regulations. _Exporting personal data from the EU to non-EU countries must comply with the applicable EU rules on cross-border transfer of personal data._
  * In accordance with the Privacy and Electronic Communications (EC Directive) Regulations 2003, ODYSSEA will never send bulk unsolicited emails (popularly known as spam) to any email addresses.
  * ODYSSEA may send emails to existing end-users and customers, or to prospective end-users and customers who have enquired or registered on the ODYSSEA platform, regarding products or services directly provided by the ODYSSEA platform.
  * All emails sent by ODYSSEA will be clearly marked as originating from this platform. All such emails will also include clear instructions on how to unsubscribe from ODYSSEA email services. Such instructions will either include a link to a page to unsubscribe or a valid email address to which the user should reply, with "unsubscribe" as the email subject heading.
## Use of Cookies
Cookies are small text files which are placed on your computer by websites
that you visit. They are widely used in order to make websites work, or work
more efficiently, as well as providing information to the owner of the site.
ODYSSEA’s platform may generate cookies in order to work more efficiently.
These will enhance features such as platform search and optimized page
loading.
ODYSSEA may use Google Analytics to collect quantitative information on
platform’s performance and end-users’ interaction with the platform. ODYSSEA
will use this information to improve the service and experience offered by the
platform. The Social Media buttons on some of the pages link to third-party websites and services, like Facebook and Twitter, and these also create cookies when the button is clicked. Privacy policies will be available for all of these services, and users should read them to be informed on how their information is being used and how they can opt out, should they wish to.
## 1. Executive Summary

ODYSSEA intends to develop, operate and demonstrate an interoperable and cost-effective platform that fully integrates networks of observing and forecasting systems across the Mediterranean basin, addressing both the open sea and the coastal zone.

The platform is prepared to deliver a set of services focused on different coastal users' needs (navigation safety, port operations, water pollution prevention and response, eutrophication risks, search and rescue missions, etc.), enabling the exploitation of the added value of integrated Earth Observation (EO) technologies (satellite, airborne and ground based), the Copernicus Marine Service and ICT to deliver customized and ready-to-use information. These services will provide an easy way to access in-situ data, local high-resolution forecasts, and products and services (e.g. meteo-oceanographic conditions at specific locations, identification of optimum or critical working windows, support to sea pollution response actions, etc.) for a broad range of different users and stakeholders.

Taking into consideration that this platform will gather a large number of diverse data sets (from existing networks and platforms, plus the ODYSSEA-produced data consisting of results from operational numerical models, data from in-situ sensors and remotely sensed data), the issues of data management and data quality control assume a central concern. One goal of the platform is to ensure that data from different and diverse data providers are readily accessible and useable to the wider oceanographic community. To achieve that, the strategy is to move towards an integrated data system within ODYSSEA that harmonizes work flows, data processing and distribution across different systems.

The value of standards is clearly demonstrable. In oceanography, there have been many discussions on processing data and information. Many useful ideas have been developed and put into practice, but there have been few successful attempts to develop and implement international standards in managing data.

This document intends to provide an overview of the best practices concerning these aspects and to define the guidelines to be followed in themes such as catalogues, metadata, data vocabulary, data standards and data quality control procedures. This implies taking actions at different levels:

* Adopt proper data management procedures to implement metadata, provide integrated access to data in order to facilitate integration into existing systems, and assure the adoption of proper data quality control.
* Enable the integration of more data; improve the enhancement of the services (viewing, downloading, traceability and monitoring) to users and providers; facilitate the discovery of data through a catalogue based on ISO standards; provide OGC services (SOS, WMS, WFS, etc.) to facilitate development; and ensure the visibility of existing data and the identification of gaps.

As ODYSSEA is an EU-supported, Mediterranean-focused platform, the data management plan will tune in to all specific aspects of ocean data management within the European context, such as existing networks of data and the requirements of European industry and other end-users. This deliverable is the final and updated version of the Data Management Plan (DMP) for the ODYSSEA Platform, including strategies for improving data management, data privacy issues and data quality control procedures. In this updated document, problems faced during the implementation process, barriers and lessons learnt are discussed. It further elaborates specific aspects relevant to ODYSSEA, namely the "new" data series approach using SOS and its further integration into the ODYSSEA Platform, among others.
## 2. Introduction

ODYSSEA aims to provide a set of services focused on different coastal users' needs (navigation safety, port operations, water pollution prevention and response, eutrophication risks, search and rescue missions, etc.), allowing the exploitation of the added value of integrated Earth Observation (EO) technologies (satellite, airborne and ground based), the Copernicus Marine Service and ICT to deliver customized and ready-to-use information. These services will provide an easy way to get in-situ data, local high-resolution forecasts, and products and services (e.g. meteo-oceanographic conditions at specific locations, identification of optimum or critical working windows, support to sea pollution response actions, etc.) to a broad range of different users.

This report describes the strategies to be implemented to improve data management, data privacy and data quality control of ODYSSEA services. The strategy has four main components:

* Catalogues, vocabulary and metadata;
* Data integration and fusion;
* Data quality control;
* Data privacy policy.

The issue of metadata, vocabulary and catalogues is of prime importance to assure the interoperability and easy discovery of data. A proper data management plan following widely accepted standards also contributes to the reduction of the duplication of efforts among agencies. Likewise, the plan is to improve the quality and reduce the costs related to geospatial data processing, thus making oceanographic data more accessible to the broader public while helping to establish key partnerships to increase data availability. Aiming to contribute to these objectives, ODYSSEA will adopt the procedures already proposed by the most relevant EU initiatives such as CMEMS, EMODnet and SeaDataNet, especially the standards in relation to vocabularies, metadata and data formats. In practice, the gridded data sets addressing either dynamic data sets (similar to CMEMS) or static data sets (similar to EMODnet) will follow procedures similar to the ones adopted by these two services. Regarding the time-series data, SeaDataNet procedures will represent the main guidelines and the NetCDF-CF format will be the standard to be adopted. However, **ODYSSEA will go one step further and will use these NetCDF files to feed an SOS service, supported by North 52 software, to assure the interface with the users of the platform.**

The capability of serving time series through a standard protocol, such as SOS, will represent a step forward from the existing services although, as a pioneer, it is foreseen that ODYSSEA will be required to overcome some barriers. The service has been tested in the ODYSSEA platform V0 Edition and it is one of the subjects to be discussed and showcased in this updated/final version of the ODYSSEA DMP.

The data integration and fusion policies to be adopted in ODYSSEA are also relevant issues of the project. Data integration and fusion deals with the best strategies to adopt when merging datasets obtained from different data sources, building the best available datasets or fusing different data sources to produce aggregated data (i.e., secondary parameters and indicators). Although not easy, addressing this issue properly may represent a valuable contribution to improving data accuracy and the robustness of models' initial and boundary conditions, as well as providing users with comprehensive data that merge different data sets based on reliable criteria.

The data quality control, either related to the quality of observed in-situ data (e.g. tidal gauges, wave buoys, weather stations, etc.) or to the modelled forecasts, is another relevant aspect that will be addressed by ODYSSEA's DMP. In the case of locally acquired data, automatic procedures will run regularly to detect and remove anomalous values from observed datasets. In the case of the models, the results will be automatically compared with observations (e.g., buoys and CMEMS grid observation products) and the statistical analysis will be provided on a daily basis to the end users.

Regarding data privacy (data protection and the rights of platform end-users, customers and business contacts), it is clear that ODYSSEA will respect personal data under the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679), which substituted Directive 95/46/EC on May 25, 2018. 'Personal data' means any information, private or professional, which relates or can be related to an identified or identifiable natural person (for the full definition, see Article 2(a) of EU Directive 95/46/EC).

In the following paragraphs a more detailed overview, both of the "state of the art" and of the procedures to be adopted in ODYSSEA, will be provided.
## 3. Ocean Data Management: the European context

Delivery of data to users requires common data storage and transfer formats, which interact with other standards (vocabularies, data quality control). Several initiatives exist within Europe for ocean data management, which are now coordinated under the umbrella of EuroGOOS. EuroGOOS is the network committed to developing and advancing the operational oceanography capacity of Europe, within the context of the intergovernmental Global Ocean Observing System (GOOS). The scope of EuroGOOS is wide and its needs are partially addressed by the on-going development within Copernicus, SeaDataNet and other EU initiatives.

Therefore, to improve the quantity, quality and accessibility of marine information, to support decision making and to open up new economic opportunities in the marine and maritime sectors of Europe for the benefit of European citizens and the global community, it was agreed at the annual EuroGOOS meeting in 2010 that it is essential to meet the following needs (AtlantOS, 2016):

* Provide easy access to data through standard generic tools, where "easy" means the direct use of data without concerns about data quality and processing, and with adequate metadata available to describe how the data were processed by the data provider.
* Combine in-situ observation data with other information (e.g., satellite images or model outputs) to derive new products, build new services or enable better-informed decision-making.

The ocean data management and exchange process within EuroGOOS intends to reduce the duplication of efforts among agencies as well as to improve the quality and reduce the costs related to geospatial information, thus making oceanographic data more accessible to the public and helping to establish key partnerships to increase data availability. In addition, the EuroGOOS data management system intends to deliver a system that will meet European needs in terms of standards while respecting the structures of the contributing organizations.

The structure will include:

* Observation data providers, which can be operational agencies, marine research centres, universities, national oceanographic data centres and satellite data centres.
* Integrators of marine data, such as the Copernicus in-situ data thematic centre (for access to near real-time data acquired by continuous, automatic and permanent observation networks) or the SeaDataNet infrastructure (for quality controlled, long-term time series acquired by all ocean observation initiatives, missions, or experiments), ICES and EurOBIS for biodiversity observations, and the new European Marine Observation and Data Network (EMODnet) portals. The integrators will support both data providers willing to share their observation data and users requesting access to oceanographic data (historic, real-time and forecasts). Integrators develop new services to facilitate data access and increase the use of both existing and new observational data.
* Links with international and cross-disciplinary initiatives, such as GEOSS (Global Earth Observation System of Systems), for technical solutions that improve harmonization in an interdisciplinary global context.
### 3.1. Towards an integrated EU data system

ODYSSEA aims to contribute to improving data availability for end-users and stakeholders across the Mediterranean basin, addressing both the open sea and the coastal zone. One goal is to ensure that data from different and diverse in-situ observing networks and forecasting models are readily accessible and useable. **To achieve this, the strategy is to move towards an integrated data system that harmonizes work flows, processes data according to existing standards and disseminates data produced by the in-situ observing and modelling network system, while integrating in-situ observations into existing European and international data infrastructures** (the so-called "Integrators"). Such Integrators include: the Copernicus INS TAC, SeaDataNet NODCs, EMODnet, EurOBIS, and GEOSS.

The targeted integrated system deals with data management challenges that must be met to provide an efficient and reliable data service to users. These include:

* Common quality control for heterogeneous and near real-time data;
* Standardization of mandatory metadata for efficient data exchange;
* Interoperability of Network and Integrator data management systems.
### 3.2. Industry requirements

Presently, there is a need to change the way marine observatories and public data-sharing initiatives engage with industry and users. The _Columbus project_ (funded by the EU under H2020, which ended this year) proposes a set of recommendations designed to overcome some of the most important gaps and barriers still faced by private data users. Taken together, they represent the basic components of a strategy to open significant opportunities for the maritime industry to both benefit from and engage with public marine data initiatives. This can ensure the optimum return of public investments in the marine data sector, notably in support of meeting key EU policy goals under the Blue Growth Strategy, the Marine Strategy Framework Directive and the Maritime Spatial Planning Directive. Some barriers require further analysis and discussion, but there are already many actions that can be undertaken to improve the situation in the short and medium term (Columbus, 2017):

* Industry representatives should be included in the governance and take part in the entire cycle of decision making, development and operation of marine observation and data-sharing initiatives.
* There is a need for marine data-sharing initiatives to take a more pro-active approach and move out of the comfort zone of the traditional oceanographic marine monitoring and observing communities. This involves, among others, developing a more "service-oriented approach", learning new communication skills and language, being present and more visible in fora that attract industry, and exploiting creative technologies.
* Data, products and services offered by marine observation and data initiatives should be presented in a user-friendly, attractive and intuitive way which is adapted to the target users. If users from different communities or sectors are targeted, options to adjust the interface depending on the visitor should be considered.
* Clear, succinct and open communication is critical: it should be instantly clear to industry what data, products and services are offered and what may be made available in the future. Equally important is to provide information on what is not available, and on the limitations of the resources offered.
* More efforts should be made to build upon early achievements and successes: presenting use-case examples that can trigger interest where there may previously have been none.
* There is a significant role for maritime clusters in connecting marine data initiatives with industry and vice versa. Maritime clusters are an important bridge between the private and public sector as they deal with both and have a good understanding of their culture, language, needs and concerns.
* At the European level there is a need for defragmentation of the plethora of marine observation, data and information-sharing initiatives, as well as of the online data portals. In the longer term, there is a need for a joint roadmap, agreed by the responsible coordinating and funding bodies, including at the European Commission level, to set out the strategic framework.
* Dedicated data-sharing policies to incentivise the private sector and address its specific needs should be developed. Ways forward could include: stating clearly the added value or benefits of sharing data, a moratorium on commercially sensitive data, provision of services in return for data which could support in-house data management, and the development of a data-sharing 'green label' in recognition of corporate social responsibility. It is clear that implementation of the recommendations will require increased commitment and investment of time and resources, both from industry and from marine observation and data initiatives, but it should provide both with significant returns over time.
### 3.3. The ODYSSEA approach

The procedures to follow in ODYSSEA regarding data management will preferentially follow the examples of CMEMS, EMODnet and SeaDataNet. In practice, two major data types will be addressed: the gridded data produced by ODYSSEA models and the time-series data reported by the ODYSSEA static systems. Similarly, for the spatio-temporal data produced via sensors integrated into the ODYSSEA gliders, the SeaDataNet netCDF data standards for profiling along trajectories will be adopted.

The gridded data may address dynamic data sets (similar to CMEMS) or static data sets (similar to EMODnet). In both cases the procedures to follow will be similar to the ones adopted by these two services.

Regarding the time-series data, SeaDataNet procedures will represent the main guidelines and the netCDF-CF format will be the standard to be adopted. However, **ODYSSEA will go one step further and will use these netCDF files to feed an SOS service, supported by North 52 software, to assure the interface with the users.**
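As an illustration of this pipeline, the sketch below reads a netCDF-CF time series with the netCDF4 Python library and shapes it into observation records of the kind a transactional SOS accepts. The file name and variable names are assumptions for the example; real SeaDataNet files identify their variables through CF metadata.

```python
# Sketch: read a netCDF-CF time series and prepare observation records
# for insertion into an SOS service. File and variable names are assumed.
from netCDF4 import Dataset, num2date

with Dataset("station_timeseries.nc") as nc:       # hypothetical file
    time_var = nc.variables["TIME"]
    times = num2date(time_var[:], units=time_var.units)
    values = nc.variables["TEMP"][:]                # e.g. sea temperature

observations = [
    {"phenomenonTime": t.isoformat(), "result": float(v)}
    for t, v in zip(times, values)
]
# Each record could then be submitted to a transactional SOS endpoint
# (e.g. as an InsertObservation request) to expose the series to users.
print(observations[:3])
```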
## 4. Data quality control

The issue of data quality control will be addressed following the state-of-the-art recommendations of different projects such as SeaDataNet and AtlantOS. SeaDataNet produced a comprehensive document presenting a set of guidelines to be followed in marine data quality control. According to this document, quoted below, data quality control essentially and simply has the following objective: "_To ensure the data consistency within a single data set and within a collection of data sets and to ensure that the quality and errors of the data are apparent to the user who has sufficient information to assess its suitability for a task_". If done well, quality control brings about several key advantages (SeaDataNet, 2010):

* _**Maintaining Common Standards**: There is a minimum level to which all oceanographic data should be quality controlled. There is little point banking data just because they have been collected; the data must be qualified by additional information concerning methods of measurement and subsequent data processing to be of use to potential users. Standards need to be imposed on the quality and long-term value of the data that are accepted (Rickards, 1989). If there are guidelines available to this end, the end result is that data are at least maintained to this degree, keeping common standards to a higher level._
* _**Acquiring Consistency**: Data within data centres should be as consistent with each other as possible. This makes the data more accessible to the external user. Searches for data sets are more successful as users are able to identify the specific data they require quickly, even if the origins of the data are very different on a national or even international level._
* _**Ensuring Reliability**: Data centres, like other organisations, build reputations based on the quality of the services they provide. To serve a purpose to the research community and others, their data must be reliable, and this can be better achieved if the data have been quality controlled to a 'universal' standard. Many national and international programmes or projects carry out investigations across a broad field of marine science which require complex information on the marine environment. Many large-scale projects are also carried out under commercial control, such as those involved with the oil, gas and fishing industries. Significant decisions are made, and theories formed, on the assumption that data are reliable and compatible, even when they come from many different sources._

The data flux of ODYSSEA services will be managed automatically by the ODYSSEA platform. The data quality control will start with the execution of automatic procedures (independently of the adoption of more complex procedures). The data quality control methodology will focus on in-situ observations and modelled forecasts and it will be addressed from two perspectives: **Quality Assurance** and **Quality Control**.

Quality Assurance (QA) is a set of review and audit procedures implemented by personnel or an organization (ideally) not involved with normal project activities to monitor and evaluate the project to maximize the probability that minimum standards of quality are being attained. With regard to data, QA is a system to assure that the data generated are of known quality and that well-described data production procedures are being followed. This assurance relies heavily on the documentation of processes, procedures, capabilities, and monitoring. Reviews verify that data quality objectives are being met within the given constraints. QA is inherently a human-in-the-loop effort and substantial documentation must accompany any QA action. QA procedures may result in corrections to data. Such corrections shall occur only upon authorized human intervention (e.g., marine operator, product scientist, quality analyst, principal investigator) and the corrections may either be applied in bulk (i.e., all data from an instrument during a deployment period) or to selective data points. The application of QA corrections will automatically result in the reflagging of data as 'corrected'.

Quality Control (QC) is a process of routine technical operations to measure, annotate (i.e., flag) and control the quality of the data being produced. These operations may include spike checks, out-of-range checks, missing data checks, as well as others. QC is designed to:

* Provide routine and consistent checks to ensure data integrity, correctness, and completeness;
* Identify and address possible errors and omissions;
* Document all QC activities.

QC operations include automated checks on data acquisition and calculations through the use of approved standardized procedures. Higher-tier QC activities can include additional technical review and correction of the data by human inspection. QC procedures are important for:

* Detecting missing mandatory information;
* Detecting errors made during the transfer or reformatting;
* Detecting duplicates;
* Detecting remaining outliers (spikes, out-of-scale data, vertical instabilities, etc.);
* Attaching a quality flag to each numerical value to indicate the corrected observed data points.

A guideline of recommended QC procedures has been compiled by the SeaDataNet project after reviewing NODC schemes and other known schemes (e.g. WGMDM guidelines, World Ocean Database, GTSPP, Argo, WOCE, QARTOD, ESEAS, SIMORC, etc.). The guideline at present follows the QC methods proposed by SeaDataNet for CTD (temperature and salinity profiles), current meter data (including ADCP), wave data and sea level data. SeaDataNet is also working to extend the guideline with QC methods for surface underway data, nutrients, geophysical data and biological data.

ANNEX I provides a detailed description of the implementation procedure to be followed for QA/QC in ODYSSEA.

### 4.1. Quality Control Flags

According to EuroGOOS (2016), an extensive use of flags to indicate data quality is recommended, since the end user will select data based on quality control flags, amongst other criteria. These flags should always be included in any data transfer (e.g., from ODYSSEA Observatories to the central ODYSSEA platform), maintaining standards and ensuring data consistency and reliability (_see Table 1_). The same flag scale is also recommended by SeaDataNet.
**TABLE 1: QUALITY FLAG SCALE (REPRODUCED FROM EUROGOOS, 2016).**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Definition**
</th> </tr>
<tr>
<td>
0
</td>
<td>
No QC was performed
</td> </tr>
<tr>
<td>
1
</td>
<td>
Good data
</td> </tr>
<tr>
<td>
2
</td>
<td>
Probably good data
</td> </tr>
<tr>
<td>
3
</td>
<td>
Bad data that are potentially correctable
</td> </tr>
<tr>
<td>
4
</td>
<td>
Bad data
</td> </tr>
<tr>
<td>
5
</td>
<td>
Value changed
</td> </tr>
<tr>
<td>
6
</td>
<td>
Below detection limit
</td> </tr>
<tr>
<td>
7
</td>
<td>
In excess of quoted value
</td> </tr>
<tr>
<td>
8
</td>
<td>
Interpolated value
</td> </tr>
<tr>
<td>
9
</td>
<td>
Missing value
</td> </tr>
<tr>
<td>
A
</td>
<td>
Incomplete information
</td> </tr> </table>
* Data with QC flag = 0 should not be used without quality control performed by the user.
* Data with a QC flag different from 1 on either position or date should not be used without additional control from the user.
* If the date and position QC flag = 1, only measurements with QC flag = 1 can be used safely without further analysis.
* If QC flag = 4, then the measurements should be rejected.
* If QC flag = 2, the data may be good for some applications, but the user should verify this.
* If QC flag = 3, the data are not usable, but the data centre may be able to correct them in delayed mode.
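These selection rules translate directly into code. The following is a minimal sketch, using pandas and illustrative column names, of how a user could apply them:

```python
# Sketch: selecting usable measurements according to the EuroGOOS flag
# scale of Table 1. Column names and values are illustrative.
import pandas as pd

df = pd.DataFrame({
    "value":       [12.1, 12.3, 99.9, 12.2, None],
    "value_qc":    [1,    2,    4,    1,    9],
    "position_qc": [1,    1,    1,    1,    1],
    "date_qc":     [1,    1,    1,    1,    1],
})

# Safe without further analysis: position, date and value all flagged 1
safe = df[(df.position_qc == 1) & (df.date_qc == 1) & (df.value_qc == 1)]

# Flag 2: may be acceptable for some applications, subject to user checks
maybe = df[df.value_qc == 2]

# Flag 4: reject outright
rejected = df[df.value_qc == 4]

print(len(safe), "safe;", len(maybe), "to verify;", len(rejected), "rejected")
```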
### 4.2. In-situ observations quality control

The quality control of observations can be done in two phases. During the download of in-situ observations, automatic checks should be done, such as those proposed by SeaDataNet (2010) (e.g. global range test, date and time). After quality control, only the valid data are stored in the database. In the second phase, a tool may be run periodically to perform a scientific quality control check (SeaDataNet, 2010). This quality control aims to detect spikes, filter high-frequency noise (e.g. moving average or P50), data with abnormal variability in time, etc. Specific tools will run automatically for this purpose.
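As an illustration, the sketch below implements two such automatic checks, a global range test and a simple spike test, returning flags on the scale of Table 1. The thresholds are illustrative assumptions, not SeaDataNet's published limits.

```python
# Sketch of two automatic checks in the spirit of SeaDataNet (2010).
# Thresholds below are illustrative, not the project's operational values.
import numpy as np

def global_range_test(values, vmin, vmax):
    """Flag 4 (bad) any value outside the physically plausible range."""
    flags = np.ones(len(values), dtype=int)
    flags[(values < vmin) | (values > vmax)] = 4
    return flags

def spike_test(values, threshold):
    """Flag 4 a point deviating from the mean of its neighbours by > threshold."""
    flags = np.ones(len(values), dtype=int)
    for i in range(1, len(values) - 1):
        if abs(values[i] - 0.5 * (values[i - 1] + values[i + 1])) > threshold:
            flags[i] = 4
    return flags

temps = np.array([14.2, 14.3, 25.0, 14.4, 14.2])   # synthetic series
flags = np.maximum(global_range_test(temps, -2.0, 35.0), spike_test(temps, 3.0))
print(flags)   # -> [1 1 4 1 1]: the spike is flagged as bad data
```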
### 4.3. Forecasts quality control

The quality control of modelled forecasts can be done by comparing time-series forecasts with in-situ observations (e.g., wave buoys, tidal gauges, weather stations, etc.) through automatically-run algorithms. Similarly, gridded data forecasts may be compared automatically with observations (e.g., CMEMS gridded data observations). As a result, several statistical parameters may be computed (e.g., correlation coefficient, bias, RMSE, skill, etc.) to assess the quality of forecasts.
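A minimal sketch of such a comparison, with synthetic numbers standing in for co-located forecast and observation series, could look as follows:

```python
# Sketch: daily statistics comparing model forecasts against co-located
# observations. The input arrays are synthetic, for illustration only.
import numpy as np

obs  = np.array([1.10, 1.25, 1.40, 1.30, 1.15])   # e.g. wave height (m)
fcst = np.array([1.05, 1.30, 1.35, 1.40, 1.10])

bias = np.mean(fcst - obs)                          # systematic offset
rmse = np.sqrt(np.mean((fcst - obs) ** 2))          # overall error magnitude
corr = np.corrcoef(obs, fcst)[0, 1]                 # correlation coefficient

print(f"bias={bias:.3f} m  rmse={rmse:.3f} m  r={corr:.3f}")
```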
**QA/QC procedures will be followed as the data from local Observatory PCs reach the central ODYSSEA platform. An extensive analysis of ODYSSEA QA/QC procedures is provided in Section 7.3 "Quality Control and Data Processing Functionality" of Deliverable 2.3.**
## 5. Data integration and fusion

### 5.1. Low-level data integration and fusion

Adopting the best strategies for merging datasets obtained from different data sources, in order to build the best available datasets or fuse different data sources to produce aggregated data, indices and products, is not simple. A possible solution, when we have different datasets with different resolutions for the same area, is to fuse these data and offer a unique integrated dataset. Another option is to provide all datasets separately, with the option of an integrated solution. No matter which solution is adopted, the final objective of data integration and fusion is to contribute to the improvement of data accuracy and the robustness of models' initial and boundary conditions, as well as to provide users with comprehensive data that merge different data sets based on reliable criteria.

For example, if a user is interested in operational wave data for a specific site and realizes that, for the period of interest, there exist different time series from different wave buoys, he may be interested in getting a unique time series by merging the different time-series data and making them compatible. This process may require complex actions regarding the levels of accuracy of the different measuring devices, the measuring time rate and units, etc.
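As a sketch of the simplest case, assuming units and datums have already been harmonized, two overlapping buoy series could be merged with pandas as follows; the preference for one buoy where timestamps coincide is an illustrative criterion:

```python
# Sketch: merging wave time series from two buoys into one homogeneous
# series. Buoy A is assumed more accurate where timestamps coincide.
import pandas as pd

buoy_a = pd.Series([1.2, 1.3, 1.4],
                   index=pd.to_datetime(["2020-01-01 00:00",
                                         "2020-01-01 01:00",
                                         "2020-01-01 02:00"]))
buoy_b = pd.Series([1.25, 1.35],
                   index=pd.to_datetime(["2020-01-01 00:30",
                                         "2020-01-01 01:00"]))

# Prefer buoy A where both report, fill gaps from buoy B, then resample
# to a common time base and interpolate the remaining holes.
merged = buoy_a.combine_first(buoy_b).resample("30min").mean().interpolate()
print(merged)
```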
### 5.2. Semantic Information Integration and Fusion

Capacity for the integration and fusion of semantic information will be provided through the ODYSSEA platform. Semantic information is composed of several pieces of information, potentially coming from different semantically rich information sources. The main use of this capacity is for semantic network enrichment and query.

The information processed is expressed through graphs of entities related to each other and contains semantic metadata. The fusion is adapted to the domain of application. This application domain is described through an ontology of the domain. The fusion process is also adapted to the quality of the information items, through the use of fusion heuristics.

The fusion heuristics integrate domain knowledge and user preferences. They are the intelligent part of the semantic fusion system. They are end-user-defined functions used to express the confidence the users have in the information sources, as well as the specific strategies that must be followed in order to fuse information coming from different sources.

The two main semantic information integration functionalities are:

* Insertion of new information into a semantic information network (Synthesis);
* Query for information in a semantic information network (Mining).

5.2.1. Information sources

Many valuable open information sources can be used and integrated in order to provide a broad and always up-to-date overview of the ongoing situation in specific zones. For instance, we will use information provided by the Wikipedia encyclopaedia, the Wikidata information base and social networks such as Twitter. For scientific data sets, bases such as the EMODnet platform can be used to provide data on main port activities, quality of bathing waters, etc.
#### Wikipedia pages

The Wikipedia encyclopaedia is a wide, collaborative and constantly up-to-date source of information. Integrating information provided by the Wikipedia community enables providing end-users with a very rich semantic source of information. Regarding domains such as tourism, locations and marine species, the information available on Wikipedia is particularly abundant.

#### Wikidata elements

Wikidata is another valuable source of semantic information. Contrary to the information stored on Wikipedia, Wikidata is a knowledge base, thus the semantic aspect of the information is contained in the information source itself. Concepts and instances are defined in Wikidata, and semantic relations among these objects are specified. Therefore, Wikidata is an open source of information of valuable importance.

#### EMODnet Human Activities

Among providers, EMODnet offers several data portals for marine data. Some of the portals, such as the Human Activities portal, provide information with rich semantics and a high level of interpretation. This source may be of great interest for integration with other sources of information, providing different perspectives on the marine situation of the different zones of interest.

#### Twitter

Social media provide a wealth of crowd-sourced data on easily observable physical phenomena in settings which lack traditional methods of monitoring, and can even prove to be more rapid and flexible. However, the challenge lies in mining actionable data from the millions of tweets posted every hour. Hence a collection/filtering algorithm will be written in order to collect tweets which are contextually and geographically pertinent.

The Twitter API may be used to retrieve raw data for further manipulation and processing in order to extract useful information. The raw data supplied by a Twitter API call consists of JSON objects, which contain a large number of categories of information (an attribute followed by human-readable text).
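A minimal sketch of such a collection/filtering step is shown below; the JSON shape is simplified and the keyword and location lists are illustrative assumptions:

```python
# Sketch: filtering raw tweet JSON for contextually and geographically
# relevant messages. Keyword/location lists and the JSON shape are
# illustrative; real Twitter payloads contain many more fields.
import json

KEYWORDS  = {"oil spill", "jellyfish", "algal bloom"}
LOCATIONS = {"#thessaloniki", "#saronikos"}          # hypothetical zone tags

def is_relevant(raw_tweet: str) -> bool:
    tweet = json.loads(raw_tweet)
    text = tweet.get("text", "").lower()
    has_topic = any(k in text for k in KEYWORDS)
    # Accept either explicit coordinates or a location keyword in the body
    has_place = bool(tweet.get("coordinates")) or any(l in text for l in LOCATIONS)
    return has_topic and has_place

sample = ('{"text": "Large oil spill spotted near the port #thessaloniki",'
          ' "coordinates": null}')
print(is_relevant(sample))   # -> True
```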
5.2.2. Need for information quality enhancement algorithms

While most of the encyclopaedic sources of information are reliable and complete, social media information is of very low quality. However, using social media is important if we want to involve citizens in the monitoring and protection of our environment. For example, in ODYSSEA we are building the so-called "models chain", where each model is run in a predetermined order and the data produced by one model are used as boundary conditions for the next. In this chain, several uncertainties may occur, for example in the initiation of the oil spill model when an oil spill accident occurs. **Through the ODYSSEA semantic information analysis, and more specifically through algorithms searching and harvesting Twitter for relevant information, a more instant response to a disastrous event might occur.** Thus, the Oil Spill Model could be initiated at the exact location, as early as possible after the release of oil into the marine environment. Similar applications might include extreme meteorological/hydrographic events (e.g., storms), eutrophication/bloom incidents, jellyfish outbursts, etc.

One of the issues of using Twitter, for instance, is that due to increasing digital data privacy restrictions, Twitter users have to actively consent to provide their exact geolocation when posting their tweets. Otherwise, the alternative is using the self-reported location of the profile associated with a given tweet, but this is usually very generalized (e.g. Europe) and hence unfit for this purpose. Hence, the only reliable option for gleaning geographic information from the tweet is by using keywords within the body of the tweet (e.g. @laplaya, #Madrid, etc.).

To overcome this limitation and be able to use citizen information provided through Twitter, it is necessary to deeply analyse the meanings (semantics) of the texts in order to understand them. Regarding locations, for instance, it is required to make use of a Named Entity Extraction engine in order to extract the location of the events reported in the tweets, rather than the location of the phones that were used to tweet.

Furthermore, authors of tweets have very different levels of reliability regarding specific issues. Average citizens won't know the species of a specific jellyfish that they see on a beach, for instance. They may not be able to report properly, and with all its characteristics, an oil spill they witness. More serious issues may be encountered with authors who spread rumours or even create false information. A possible action to overcome such limitations is to build lists of referenced Twitter accounts for each use case, based on the accounts that the ODYSSEA end-users trust. For instance, maritime governmental organisation accounts will be followed and analysed.
## 6. Data management

### 6.1. Provider codes for data

Following the procedures adopted by AtlantOS, the institutions providing data to the ODYSSEA platform should be reported and acknowledged following the EDMO code recorded in the data file and in the ODYSSEA platform catalogue. EDMO is the European Directory of Marine Organisations, developed under SeaDataNet, and it can be used to register any marine organisation involved in the collection of datasets (operators, funders, data holders, etc.). It delivers a code for the organisation to be included in the data or metadata, leading to the harmonization of information (compared to free text) and the optimization of dataset discovery. EDMO is coordinated by MARIS.

For EU countries, new entries are added by the National Oceanographic Data Centres (NODCs). Through the ODIP (Ocean Data Interoperability Platform) cooperation, there is also a point of contact with the USA, Australia and some other non-EU countries. The rest of the world is managed by MARIS, which also moderates the first entrance into EDMO of new entries.

The request for a new entry in EDMO is sent to MARIS (current contact: Peter Thijsse, [email protected]), who verifies whether the institution is already registered. If a new entry is needed, the basic entry is made by MARIS, after which the appropriate NODC is responsible for updating further details and managing changes.
### 6.2. Data vocabulary

The use of common vocabularies in all meta-databases and data formats is an important prerequisite for consistency and interoperability with existing Earth Observing systems and networks. Common vocabularies consist of lists of standardised Terms of Reference covering a broad spectrum of disciplines of relevance to the oceanographic and wider community. Using standardised ToR, the problem of ambiguities related to data structure, organization and format is solved and, therefore, common algorithms for data processing may be applied. This allows the interoperability of datasets in terms of their manipulation, distribution and long-term reuse.

ODYSSEA will adopt an Essential Variables list of terms (aggregated level) that has been defined and was published in June 2016 on the NERC/BODC Vocabulary Server.

This new vocabulary is mapped to the standards recommended for ODYSSEA parameter metadata: P01 (parameter), P07 (CF variable) and P06 (units) from the SeaDataNet controlled vocabularies managed by NERC/BODC, and the internationally assured AphiaID from the World Register of Marine Species (WoRMS).
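As an illustration, vocabulary collections and terms can be resolved programmatically from the NERC Vocabulary Server, which serves content-negotiated RDF; the exact collection addressed below is an assumption made for the example:

```python
# Sketch: fetching the description of a SeaDataNet controlled-vocabulary
# collection from the NERC Vocabulary Server. The collection URL used
# here is an assumption for illustration.
import requests

url = "http://vocab.nerc.ac.uk/collection/P06/current/"   # units collection
resp = requests.get(url, headers={"Accept": "application/rdf+xml"}, timeout=30)
resp.raise_for_status()

print(resp.headers.get("Content-Type"))
print(resp.text[:200])   # inspect the returned RDF describing the collection
```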
### 6.3. Metadata

Metadata refers to the description of datasets and services in a compliant form as defined by Directive 2007/2/EC (INSPIRE) and Commission Regulation No 1205/2008.

Metadata is the **data about the data**. Metadata describes how, when and by whom a particular set of data or a service was collected or prepared, and how the data is formatted or the service is made available. Metadata is essential for understanding the information stored and has become increasingly important. Metadata is structured information that describes, explains, locates, or otherwise makes it easier to retrieve, use, or manage an information resource; it is often called "data about the data, or information about information".

Metadata is also data about services. Metadata describes the content, quality, condition, and other characteristics of a data set or the capabilities of a service. Creating metadata or data documentation for geospatial datasets is crucial to the data development process. Metadata is a valuable part of a dataset and can be used to:

* **Organize** data holdings (Do you know what you have?).
* Provide **information about** data holdings (Can you describe to someone else what you have?).
* Provide information **to data users** (Can they figure out if your data are useful to them?).
* **Maintain the value** of your data (Can they figure out if your data are useful 20 years from now?).

In the geographical domain we can have a description of spatial data (**spatial data** metadata), of a service (**service** metadata) or of a special analysis process (**process** metadata). Most of the standardization work is done for data metadata; however, service and process metadata are becoming increasingly important. Metadata is used in discovery mechanisms to bring spatial information providers and users together.

The following mechanisms are recognized:

* **Discovery**: which data source contains the information that I am looking for?
* **Exploration (or evaluation)**: do I find within the data sources the right information to suit my information needs?
* **Exploitation (use and access)**: how can I obtain and use the data sources?

Each mechanism has its own use of metadata. The selected standards should fulfil the needs to carry out services using these mechanisms. Metadata is required to provide information about an organisation's data holdings. Data resources are a major national asset, and information on what datasets exist within different organisations, particularly in the public sector, is required to improve efficiencies and reduce data duplication. Data catalogues and data discovery services enable potential users to find, evaluate and use that data, thereby increasing its value. This is also becoming important at the European level. In addition, metadata received from an external source may require further information to be supplied to the metadata to allow easy processing and interpretation.

In this context, for all types of data the following information is required (SeaDataNet, 2010); a minimal machine-readable example follows the list:

* **Where** the data were collected: location (preferably as latitude and longitude) and depth/height;
* **When** the data were collected (date and time in UTC or a clearly specified local time zone);
* **How** the data were collected (e.g., sampling methods, instrument types, analytical techniques) and how the data are organized (e.g., in terms of station numbers, cast numbers);
* **Who** collected the data, including the name and institution of the data originator(s) and the principal investigator;
* **What** has been done to the data (e.g., details of processing and calibrations applied, algorithms used to compute derived parameters);
* **Watch** points for other users of the data (e.g., problems encountered and comments on data quality).
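A minimal machine-readable rendering of these six elements might look as follows; the field names and values are illustrative, not a formal ISO 19115 encoding:

```python
# Sketch: the where/when/how/who/what/watch elements captured as a
# simple record. Field names and values are illustrative only.
import json

metadata = {
    "where": {"latitude": 40.63, "longitude": 22.93, "depth_m": 5.0},
    "when": "2020-06-01T10:30:00Z",
    "how": {"instrument": "CTD", "method": "vertical profile"},
    "who": {"originator": "Example Institute", "pi": "J. Doe"},
    "what": "Calibrations applied; salinity derived from conductivity",
    "watch": "Possible sensor drift after mid-deployment; check flags",
}

print(json.dumps(metadata, indent=2))
```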
The ICES Working Group on Data and Information Management (WGDIM) has developed a number of data type guidelines which itemize the elements required for thirteen different data types (see table below). These Data Type Guidelines have been developed using the expertise of the oceanographic data centres of ICES Member Countries. They have been designed to describe the elements of data and metadata considered important to the ocean research community. These guidelines are targeted towards most physical-chemical-biological data types collected on oceanographic research vessel cruises. Each guideline addresses the data and metadata requirements of a specific data type.

This covers three main areas:

* What the data collector should provide to the data centre (e.g., collection information, processing, etc.);
* How the data centre handles the data supplied (e.g., value added, quality control, etc.);
* What the data centre can provide in terms of data, referral services and expertise back to the data collector. A selection of these guidelines, in particular for those data types that are not yet dealt with in detail here, is included in Appendix 1 of this document.

This document summarizes the concept of metadata that is intended to be adopted by the ODYSSEA data platform, following the commonly agreed INSPIRE data specification template in its relevant parts, i.e., dataset-level and services metadata and data quality. It also contains detailed technical documentation at the XML source-code level and therefore provides specific guidelines to correctly create and maintain metadata in the XML format.
### 6.4. Metadata Catalogue Service

A **Metadata Catalogue Service** is a mechanism for storing and accessing descriptive metadata that allows users to query for data items based on desired attributes. The catalogue service stores descriptive information (metadata) about logical data items. The Open Geospatial Consortium (OGC) has created the **Catalogue Service for the Web (CSW) standard** to enable easy data discovery from a catalogue node.

Catalogue services support the ability to publish and search collections of descriptive information (metadata) for data, services, and related information objects. Metadata in catalogues represent resource characteristics that can be queried and presented for evaluation and further processing by both humans and software. Catalogue services (and other resources such as bibliographic resources, datasets, etc.) are required to support the discovery of, and binding to, registered information resources within an information community, including published web map services. The CSW standard is extremely rich: in addition to supporting a query from a user, it can support distributed queries (one query that searches many catalogues) and the harvesting of metadata from node to node.

The International Organisation for Standardisation (ISO) includes ISO/TC 211, an international technical committee for the standardisation of geographical information. TC 211 has created a strong, globally implemented set of standards for geospatial metadata: the baseline ISO 19115; ISO 19139 for the implementation of data metadata; and ISO 19119 for services metadata. These open standards define the structure and content of metadata records and are essential for any catalogue implementation. ISO 19115 describes all aspects of geospatial metadata and provides a comprehensive set of metadata elements. It is designed for electronic metadata services, and the elements are designed to be searchable wherever possible. It is widely used as the basis for geospatial metadata services. However, because of the large number of metadata elements and the complexity of their data model, implementation of ISO 19115 is difficult.

The INSPIRE Directive applies these standards and specifications in its implementation. INSPIRE makes use of three catalogues for unique ID management: **(1) SeaDataNet, (2) ICES and (3) CMEMS.** The ICES catalogue has a geospatial component not present in the SeaDataNet catalogue, while CMEMS provides the reference to model results.

6.4.1. Catalogue Service for the Web (CSW)

This section briefly describes the Open GIS Consortium (OGC) specification for catalogue services. According to this specification: "_Catalogue services support the ability to publish and search collections of descriptive information (metadata) for data, services, and related information objects; Metadata in catalogues represent resource characteristics that can be queried and presented for evaluation and further processing by both humans and software. Catalogue services are required to support the discovery and binding to registered information resources within an information community_".

**FIGURE 6.1: GENERIC VIEW OF THE CSW PROTOCOL AND ARCHITECTURE**

The INSPIRE initiative uses the CSW protocol and the ISO metadata application profile (AP) for the specification and implementation of the INSPIRE Discovery Service. In ODYSSEA, the ODYSSEA ISO metadata profile will be developed and used as described in the metadata sections of this document.
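As an illustration of discovery against such a service, the sketch below queries a CSW 2.0.2 endpoint with OWSLib; the endpoint URL is a placeholder assumption, and any INSPIRE-compliant discovery service could be queried the same way:

```python
# Sketch: discovering datasets through a CSW endpoint with OWSLib.
# The endpoint URL is hypothetical.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("https://catalogue.odyssea.example/csw")

# Full-text constraint over all queryable fields
query = PropertyIsLike("csw:AnyText", "%sea surface temperature%")
csw.getrecords2(constraints=[query], maxrecords=10)

for ident, record in csw.records.items():
    print(ident, "-", record.title)
```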
6.4.2. Harvesting

Harvesting is the procedure of collecting metadata records from other (external) catalogues and synchronizing the local catalogue with the collected information. In the majority of cases the harvesting process is scheduled and automatically executed once or at pre-defined intervals. It is usually also possible to execute a harvesting procedure on demand, i.e., executed by human request.

The diagram below depicts a sample of how the harvesting procedures could be arranged between the ODYSSEA platform catalogue and other external catalogues. It should be noted that, within INSPIRE, the harvesting procedure uses the CSW protocol. The catalogue responses to the harvesting requests contain collections of metadata records, using the model described in this document (i.e., INSPIRE Datasets and Services).

**FIGURE 6.2: SAMPLE HARVESTING PROCEDURES BETWEEN ODYSSEA PLATFORM CATALOGUE AND EXTERNAL CATALOGUES.**
### 6.5. Guidelines on using metadata elements

6.5.1. Lineage

Following the ISO 19113 quality principles, if a data provider has a procedure for the quality validation of their spatial datasets, then the data quality elements listed in Chapter 2 should be used. If not, the Lineage metadata element (defined in Regulation 1205/2008/EC) should be used to describe the overall quality of a spatial dataset.

According to Regulation 1205/2008/EC, lineage "is a statement on process history and/or overall quality of the spatial dataset. Where appropriate it may include a statement whether the dataset has been validated or quality assured, whether it is the official version (if multiple versions exist), and whether it has legal validity. The value domain of this metadata element is free text".

Apart from describing the process history, if feasible within a free text, the overall quality of the dataset (series) should be included in the Lineage metadata element. This statement should contain any quality information required for interoperability and/or valuable for the use and evaluation of the dataset (series).

6.5.2. Temporal reference

According to Regulation 1205/2008/EC, at least one of the following temporal reference metadata elements shall be provided: temporal extent, date of publication, date of last revision, date of creation. If feasible, the date of the latest revision of a spatial dataset should be reported using the date-of-last-revision metadata element.

6.5.3. Topic category

The topic categories defined in Part D.2 of the INSPIRE Implementing Rules for metadata are derived directly from the topic categories defined in B.5.27 of ISO 19115. Regulation 1205/2008/EC defines the INSPIRE data themes to which each topic category is applicable; i.e., oceanography is the INSPIRE theme for which the Geoscientific information topic category is applicable.

6.5.4. Keyword

Regulation 1205/2008/EC requires that, for a spatial dataset or a spatial dataset series, "at least one keyword shall be provided from the General Environmental Multi-lingual Thesaurus (GEMET) describing the relevant spatial data theme, as defined in Annex I, II or III to Directive 2007/2/EC". Keywords should be taken from the GEMET – General Multilingual Environmental Thesaurus where possible.
## 7. ODYSSEA datasets

This section describes the structure and content of the proposed ODYSSEA metadata profile at the dataset level and includes general guidelines for the metadata from two points of view – the first one is the ODYSSEA metadata, while the second represents ODYSSEA data quality issues.

The structure described in this document is compliant with the existing ISO standards for metadata – i.e., especially ISO EN 19115 and ISO 19139. The full list of ISO standards used can be found in the List of References at the end of this document. The primary goal of this part of the deliverable is to develop a metadata profile for ODYSSEA geographic datasets and time-series datasets, within the framework of these ISO standards, supporting interoperability between the different metadata and/or GIS platforms.

The metadata model to be adopted in ODYSSEA is described in more detail in Annex I.

### 7.1. Dataset-level metadata

Metadata can be reported for each individual spatial object (spatial object-level metadata) or once for a complete dataset or dataset series (dataset-level metadata). If data quality elements are used at the spatial object level, the documentation shall refer to the appropriate definition in the Data Quality Info section of this document. This section only specifies the dataset-level metadata elements.

For some dataset-level metadata elements, in particular on data quality and maintenance, a more specific scope can be specified. This allows the definition of metadata at sub-dataset level, e.g., separately for each spatial object type. When using ISO 19115/19139 to encode the metadata, the following rules should be followed:

* The scope element (of type DQ_Scope) of the DQ_DataQuality subtype should be used to encode the scope.
* Only the following values should be used for the level element of DQ_Scope: series, dataset, featureType.
* If the level is featureType, then the levelDescription/MD_ScopeDescription/features element (of type Set <GF_FeatureType>) shall be used to list the feature type names.

Mandatory or conditional metadata elements are specified in the next sub-section, while optional metadata elements are specified in the subsequent sub-section. The tables describing the metadata elements contain the following information:

* The first column provides a reference to a more detailed description;
* The second column specifies the name of the metadata element;
* The third column specifies the multiplicity;
* The fourth column specifies the condition under which the given element becomes mandatory (only for the first and second tables).

In **Annex I** a detailed description of the metadata is presented.
### 7.2. Service-level metadata

This section describes the structure and content of the proposed ODYSSEA metadata profile at the service level and includes general guidelines for ODYSSEA metadata from two points of view – the first one is the ODYSSEA-specific metadata, while the second represents quality issues of the data published by the services.

The structure described in this document is compliant with the existing ISO standards for metadata – i.e., especially ISO EN 19115, EN ISO 19119 and ISO 19139 (the full list of ISO standards used can be found in the List of References at the end of this document). The primary goal of this section is to explain the development of the metadata profile of ODYSSEA geographical data services, within the framework of these ISO standards. Through this process, the principle of interoperability is supported and data are easily harvested and exchanged between various discovery services and different metadata and/or GIS platforms.

Metadata can be reported for each individual spatial object (spatial object-level metadata) or once for a complete dataset or dataset series (dataset-level metadata). On the other hand, metadata can also be reported for the services that are publishing ODYSSEA data – i.e., especially INSPIRE view and download services. This section only specifies service-level metadata elements.

For some service-level metadata elements, in particular for data quality, a more specific scope can be specified. This allows the definition of metadata at sub-dataset level, e.g., separately for each spatial object type. When using ISO 19115/19139 to encode the metadata, the following rules should be followed:

* The scope element (of type DQ_Scope) of the DQ_DataQuality subtype should be used to encode the scope.
* Only the following value should be used for the level element of DQ_Scope: service.

Mandatory or conditional metadata elements are specified in ANNEX I. Optional metadata elements are specified in the subsequent sub-section of that ANNEX.
### 7.3. Data format standards
7.3.1. Ocean Data View data model and netCDF Format
As part of the ODYSSEA services, data sets will be accessible via download
services. Delivery of data to users requires common data transfer formats,
which interact with other standards (vocabularies, data quality control).
In SeaDataNet it was decided that the Ocean Data View (ODV) and netCDF formats
are mandatory.

ODYSSEA will follow the SeaDataNet (2017) procedures, as the main concepts of that
document are reproduced in the following paragraphs. ODYSSEA will also follow
the fundamental data model underlying the ODV format which, in practice, is composed
of a collection of rows, each having the same fixed number of columns.

In this model there are three different types of columns:
* The metadata columns;
* The primary variable data columns (one column for the value plus one for the qualifying flag);
* The data columns.

The metadata columns are stored at the left-hand end of each row, followed by
the primary variable columns and then the data columns.

There are three different types of rows:
* The comment rows;
* The column header rows;
* The data rows.
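A toy rendering of this layout may help picture it; this is a sketch only, in which the '//' comment-row prefix, column names and values are illustrative (actual file examples are given in Annex II).

```python
# Illustrative only: a toy rendering of the ODV row/column model described
# above. Column names and values are invented, not SeaDataNet-conformant.
comment_rows = ["//ODV-style comment row (illustrative)"]
header_row = ["Cruise", "Station", "Time",            # metadata columns
              "Depth [m]", "QV:SEADATANET",           # primary variable + flag
              "Temperature [degC]", "QV:SEADATANET"]  # data column + flag
data_rows = [
    ["C1", "S1", "2009-02-12T11:21:10.325", "0.0", "1", "13.2", "1"],
    ["C1", "S1", "2009-02-12T11:21:10.325", "5.0", "1", "12.9", "1"],
]
lines = comment_rows + ["\t".join(header_row)] + ["\t".join(r) for r in data_rows]
print("\n".join(lines))
```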
The CF metadata conventions (http://cf-pcmdi.llnl.gov/) are designed to promote
the processing and sharing of data files created with the netCDF API. The
conventions define metadata that provide a definitive description of what the data
in each variable represent, and the spatial and temporal properties of the data.
This enables users of data from different sources to decide which quantities are
comparable, and facilitates building applications with powerful extraction, re-
gridding, and display capabilities.

The standard is both mature and well supported by formal governance for its
further development. The standard is fully documented by a PDF manual
accessible from a link from the CF metadata homepage (http://cf-
pcmdi.llnl.gov/). Note that CF is a developing standard and consequently
access via the homepage rather than through a direct URL to the document is
recommended to ensure that the latest version is obtained. The current version
of this document was prepared using version 1.6 of the conventions, dated 5
December 2011.
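The following minimal sketch, using the widely available netCDF4 Python package, shows what a CF-1.6-style time coordinate looks like in practice; the variable name and attribute values are illustrative and do not reproduce the SeaDataNet profile itself.

```python
# A minimal sketch of a CF-style time coordinate (pip install netCDF4);
# names and values are illustrative, not the SeaDataNet profile.
from netCDF4 import Dataset

with Dataset("example.nc", "w") as nc:
    nc.Conventions = "CF-1.6"                       # global attribute
    nc.createDimension("TIME", None)                # unlimited time dimension
    t = nc.createVariable("TIME", "f8", ("TIME",))
    t.standard_name = "time"                        # CF standard name
    t.units = "days since 1950-01-01T00:00:00Z"     # CF time units
    t[:] = [21592.0, 21592.5]                       # two time steps
```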
The approach taken with the development of the SeaDataNet profile based on CF
1.6 was to classify data on the basis of feature types and produce a
SeaDataNet specification for storage of each of the following:

* **Point time series**, such as current meter or sea level data, have row_groups made up of measurements from a given instrument at different times. The metadata date and time are set to the time when the first measurement was made. The primary variable is time (UT), encoded either as:
  * A real number representing the Chronological Julian Date, which is defined as the time elapsed in days from 00:00 on January 1st 4713 BC. If this option is chosen, then the column must have the heading 'Chronological Julian Date [days]'.
  * A string containing the UT date and time to sub-second precision corresponding to ISO 8601 syntax (YYYY-MM-DDThh:mm:ss.sss), for example 2009-02-12T11:21:10.325. If this option is chosen, the column must have the heading 'time_ISO8601'. If the time is not known to sub-second precision, then use the ISO 8601 form appropriate to the known precision. For example, a timestamp to the precision of one hour would be represented by 2009-02-12T11:00 and a timestamp to a precision of a day by 2009-02-12. (A small conversion sketch follows this list.)

  Rows within the row_group are ordered by increasing time. Note that the z co-ordinate (e.g., instrument depth), essential for many types of time series data, needs to be stored as a data variable and could have the same value throughout the row_group.

* **Profile data**, such as CTD or bottle data, have row_groups made up of measurements at different depths. The metadata date and time are set to the time when the profile measurement started. The primary variable is the 'z co-ordinate', which for SeaDataNet is either depth in metres or pressure in decibars. Rows within the row_group are ordered by increasing depth.
* **Trajectories**, such as underway data, have row_groups made up of a single measurement, making the metadata time and positions the spatio-temporal co-ordinate channels. The primary variable is the 'z co-ordinate', which for SeaDataNet is standardised as depth in metres. Rows within the row_group are ordered by increasing time.
* **TimeSeriesProfile** (x, y, z fixed; t variable), where some variables can be measured at different depths at the same time, var = f(t, z). The specification given is for storage of time series profiles such as moored ADCP.
* **TrajectoryProfile** (x, y, z, t all variable), where some variables can be measured at different depths at the same time, var = f(t, z). The specification given is for storage of trajectory profiles such as shipborne ADCP.
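As a worked example of the two time encodings above (an illustrative sketch, not project code), the following converts a UT timestamp to the Chronological Julian Date, using the fact that 1970-01-01T00:00 UT corresponds to CJD 2440588.0 (CJD equals the astronomical Julian Date plus 0.5, so days start at midnight).

```python
# Convert a UT timestamp to the Chronological Julian Date defined above.
from datetime import datetime, timezone

CJD_UNIX_EPOCH = 2440588.0  # CJD at 1970-01-01T00:00 UT

def to_cjd(ts: datetime) -> float:
    """Chronological Julian Date in days for a UT timestamp."""
    return CJD_UNIX_EPOCH + ts.timestamp() / 86400.0

ts = datetime(2009, 2, 12, 11, 21, 10, 325000, tzinfo=timezone.utc)
print(ts.isoformat(timespec="milliseconds"))  # 2009-02-12T11:21:10.325+00:00
print(round(to_cjd(ts), 5))                   # 2454875.47304
```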
The specification was then developed through discussions on a collaborative e-mail
list involving participants in SeaDataNet, MyOcean, USNODC, NCAR and AODN. The
working objective focused on producing profiles with the following properties:
* CF 1.6 conformant;
* Have maximum interoperability with CF 1.6 implementations in use by MyOcean (OceanSITES conventions), USNODC (USNODC netCDF templates) and two contributors to AODN (IMOS and METOC);
* Include storage for all labels, metadata and standardised semantic mark-up that were included in the SeaDataNet ODV format files for the equivalent feature type.

Significant list discussion focused on the version of netCDF that should be
used for SeaDataNet. The conclusion was that netCDF 4 should be used wherever
possible, but that netCDF 3, although strongly discouraged, should not be
totally forbidden.
In Annex II some examples of the structure of these files are presented.
7.3.2. Static data (Bathymetric, Chemical, Geological, Geophysical, Biological,
Biodiversity data)

ODYSSEA will also adopt the SeaDataNet proposed standards for marine chemistry
(to support the EMODNet Chemistry pilot), bathymetry (to support the EMODNet
Hydrography and Seabed Mapping pilots), geology and geophysics (to support
the Geo-Seas project and the EMODNet Geology pilot), and marine biology.
Based on an analysis of the present situation, and currently existing biological
data standards and initiatives, such as the Ocean Biogeographic Information System
(OBIS), Global Biodiversity Information Facility (GBIF), Working Group on
Biodiversity Standards (TDWG) and World Register of Marine Species (WoRMS)
standards, SeaDataNet proposed a format for data exchange of biological data.
Key issues that steered the format development were (SeaDataNet III,
publishable summary):
* Requirements posed by the intended use and application of the data format (data flows, density calculations, biodiversity index calculations, community analysis, etc.);
* Availability of suitable vocabularies (World Register of Marine Species, SeaDataNet Parameter list, SeaDataNet Unit list, etc.);
* Requirements for compatibility with existing tools and software (WoRMS taxon match services, EurOBIS QC services, LifeWatch workflows, Ocean Data View, etc.).

The requirements for the extended ODV format for biological data were defined as follows:
* The format should be a general and higher-level format, not necessarily containing all specifics of each data type, but rather focusing on common information elements for marine biological data.
* At the same time, the format needs to be sufficiently flexible/extendable to be applicable to at least part of the variety of biological data the NODCs are managing.
* It should be possible to derive OBIS- or Darwin Core-compatible datasets from the format.
* The format should be self-describing, in the sense that all information needed to interpret the data should be included in the file format or be available through links to vocabularies or term lists that are part of the format.

A specific extended ODV format for biological data has been defined for
different types of files such as (see for details SeaDataNet deliverable
D8.4):
* macrobenthos community with density and biomass values;
* zooplankton community with samples from different depths;
* demersal fish population with densities for different size classes and individual fish measurements;
* pollutant concentrations in biota specimens.
7.3.3. Open source semantic information

Semantic information may be useful for a myriad of services to the end users.
However, the sources providing semantically rich information are very
heterogeneous. Semantically rich information can be found on Wikipedia and
Wikidata, for instance. EMODnet, through the "Human activities" data sets, also
provides some semantically rich information.

As one can see, the sources of semantically rich information are very heterogeneous
in their availability, reliability and format. Furthermore, they provide
heterogeneous and partially redundant information. No standard model exists for
that type of information, as its variability is very high. However, as one of
the ODYSSEA platform's aims is to integrate and fuse this kind of information, one must
rely on a shared format in order to analyze and make use of it.

Within the services that will be developed in ODYSSEA, a domain ontology will
be used in order to enable the integration of semantic information sources. **For
each ODYSSEA use case, and for each ODYSSEA product relying on semantic
information analysis and integration, end users of the products will have to
develop, together with ODYSSEA technical partners, an ontology defining the
concepts of interest of the use case.** This ontology will be the pivot
language and representation format used to integrate heterogeneous open
information sources.

**FIGURE 7.1: EXAMPLE OF AN ONTOLOGY DEFINING THE MAIN CONCEPTS USED TO
ANALYZE THE IMPACT OF PORT STRUCTURES ON THE QUALITY OF BATHING WATERS AND
FISH PRODUCTION**
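As a hedged illustration of what such a pivot ontology could look like, the concepts of Figure 7.1 could be expressed with the rdflib Python package; the namespace, class and property names below are hypothetical and are not the ODYSSEA ontology itself.

```python
# A hypothetical sketch with rdflib (pip install rdflib); all names are
# invented for illustration.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

ODY = Namespace("http://example.org/odyssea#")  # hypothetical namespace
g = Graph()
g.bind("ody", ODY)

# concepts of interest for the use case of Figure 7.1
for cls in (ODY.PortStructure, ODY.BathingWater, ODY.FishProduction):
    g.add((cls, RDF.type, OWL.Class))

# a relation linking two of the concepts
g.add((ODY.impacts, RDF.type, OWL.ObjectProperty))
g.add((ODY.impacts, RDFS.domain, ODY.PortStructure))
g.add((ODY.impacts, RDFS.range, ODY.BathingWater))

print(g.serialize(format="turtle"))
```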
## 8\. Data privacy policy
### 8.1. General principles
Basic principles regulated by the Data Protection Act will be observed, namely:
* ODYSSEA will only hold the personal data which are necessary to offer the services provided by its platform.
* Data are only used for the purposes described in the Data Protection Register Form and the Informed Consent Form.
* Personal data will only be held for as long as necessary. Once data are no longer needed, they will be deleted from ODYSSEA records by the ODYSSEA platform Administrator (namely the CLS Chief Technical Officer (CTO) / IT platform manager). More specifically, in case a certain period (one year) passes without the entry of an end-user into the platform, CLS will alert them through a standardized electronic message on the destruction of their personal data.
* Personal data storage will be secured to ensure that data are not accessible to unwanted third parties and are protected against disaster and risk.
* ODYSSEA will regularly email website news and information updates only to those end-users and customers who have specifically subscribed to our email service. All subscription emails sent by the ODYSSEA platform will contain clear information on how to unsubscribe from our email service.
* In any event, no personal data will be shared with any third party for direct marketing. ODYSSEA will never sell, rent or exchange mailing lists of personal data.
* All ODYSSEA partners shall comply with the data protection and privacy laws applicable in their country of origin, including their national laws applicable to exporting data into the EU.
* ODYSSEA partners from non-EU countries have provided signed declarations that they will meet all relevant H2020 ethical standards and regulations. _Exporting personal data from the EU to non-EU countries must comply with the applicable EU rules on cross-border transfer of personal data._
* In accordance with the Privacy and Electronic Communications (EC Directive) Regulations 2003, ODYSSEA will never send bulk unsolicited emails (popularly known as spam) to any email addresses.
* ODYSSEA may send emails to existing end-users and customers, or prospective end-users and customers having inquired or registered on the ODYSSEA platform, regarding products or services directly provided by the ODYSSEA platform.
* All emails sent by ODYSSEA will be clearly marked as originating from this platform. All such emails will also include clear instructions on how to unsubscribe from ODYSSEA email services. Such instructions will either include a link to a page to unsubscribe or a valid email address to which the user should reply, with "unsubscribe" as the email subject heading.

Details on the protection of end-users' personal data and the privacy rules to
be followed by ODYSSEA, the participation of non-EU countries and the Informed
Consent Procedures are provided in Deliverable 1.1.
### 8.2. Use of Cookies
Cookies are small text files which are placed on your computer by websites
that you visit. They are widely used in order to make websites work, or work
more efficiently, as well as to provide information to the owner of the site.
The ODYSSEA platform may generate cookies in order to work more efficiently. These
will enhance features such as platform search and optimized page loading.

ODYSSEA may use Google Analytics to collect quantitative information on the
platform's performance and end-users' interaction with the platform. ODYSSEA will
use this information to improve the service and experience offered by the platform.
The Social Media buttons on some of the pages link to third-party websites and
services, like Facebook and Twitter, and also create cookies. These services use
cookies when the button is clicked. Privacy policies will be available for all
these services and users should be able to read them to be informed on how
their information is being used, and how they can opt out, should they wish to.
**1\. INTRODUCTION**
**1.1 Purpose of ATTRACkTIVE DMP**
The Data Management Plan (DMP) is a live document that describes the data
management life cycle for the data to be collected, processed and/or generated
by a Horizon 2020 project. As part of making research data findable,
accessible, interoperable and re-usable (FAIR), a DMP should include
information on:
* The handling of research data during and after the end of the project
* What data will be collected, processed and/or generated
* Which methodology and standards will be applied
* Whether data will be shared/made open access
* How data will be curated and preserved (including after the end of the project)
These data can be produced either by the partners of the project or collected
from third parties. The latter case applies if, for example, the data of
travel service providers are needed to prove the outcome of the project
along use cases.
The DMP also provides an analysis of the main elements of the data management
policy that are going to be used by the Consortium. The policy can concern
the:
* Dissemination policies of data
* Collection, processing and review restrictions on data
* Interaction with the IT2Rail Lighthouse project to perform an efficient synchronisation
* Intellectual property protection of data produced and used by partners.
This document should be considered in combination with:
* Section 9 of the Consortium Agreement: “Access Rights”
* Chapter 4/ Section 3 of the Grant Agreement No. 730822: “Rights and Obligations related to Background and Results”
In this final version of the document, the focus is on data security in
chapter 2 and on the Travel Companion together with GDPR issues in chapter 3,
as part of each subsection.
**1.2 Background of ATTRACkTIVE Project**
In order to better understand the data used or generated by the project, a
brief overview of the project structure and objectives of each work package
(WP) is given below:
**Figure 1 – ATTRACkTIVE Project Structure**
• **WP1: Trip Tracking**
The Trip Tracking (TT) work package deals with the specification, design and
implementation of the system in charge of collecting travel information from
multiple sources, to detect and handle transport events, to analyse the impact
of disruptions for all modes and to provide alternatives if necessary and
possible. In this sense, so called partial Trip Trackers (pTT) will be treated
by a Tracking Orchestrator (TO) to inform and propose the user’s individual
solution for their current situation.
* The Tracking Orchestrator is responsible for tracking a whole journey and to inform travellers about any occurrences that may happen during a travel. It takes a beforehand selected journey, looks for appropriate partial Trip Trackers and instructs them to track this journey. Thereafter it waits for any information that the pTT may provide. It checks and combines this information to relevant information and forwards this to the traveller.
* Several partial Trip Trackers may coexist, processing events and providing the Orchestrator with impacts that will affect the tracked journey, accounting traveller preferences.
• **WP2: Travel Companion**
The Travel Companion work package aims to specify, design, and implement the
required techniques and tools to design novel forms of travel experiences.
This includes an advanced Personal Application running on Android devices as
well as allocated cloud based services to store private user specific
information. The system will be able to handle points of interests (POI),
provide navigation assistance and hide complex operations to deal with
different modes of transport.
* The Personal Application is the client which a traveller can use to access the whole ecosystem. This way, users are able to access all services through a homogenized user interface, allowing them to leverage all the capabilities of the system. Furthermore, Location Based Experiences are integrated to present entertainment, provide point of interests or any other information that might enrich the journey. In addition, Indoor/Outdoor Navigation will be presented to guide the traveller throughout their journey.
* The online counterpart Cloud Wallet serves as the secured repository for the users’ personal information. Storing this information in the Cloud allows the user to not only access information multiple times but enables them to use different devices. Cloud Wallet also acts as a bridge between the Personal Application and all external services, allowing travellers to receive information affecting their journey and providing them with ubiquitous access to travel rights in electronic wallets.
* **WP3: Technical Coordination**
The Technical Coordination work package will assure coordination amongst the
activities of the partners within ATTRACkTIVE and as well coordinate with the
other Technology Demonstrators inside the IP4 program in particular, IT2Rail,
Co-Active (CO-Modal Journey Re-Accommodation on Associated Travel Services),
ST4RT (Semantic Transformations for Rail Transportation) and GoF4R (Governance
of the Interoperability Framework for Rail and Intermodal Mobility). It will
also be in charge of integrating and testing WP1 and WP2 technical results and
organising evaluation sessions with end-users to collect feedback and new
requirements for the next releases.
* **WP4: Dissemination and Communication**
The Dissemination and Communication work package will put in place
communication tools and channels to guarantee seamless exchange between
partners and ensure that the outcomes of the project will be produced on time
and to high quality standards. Moreover, public events will also be organized
and conducted to share the acquired experience.
* **WP5: Project Management**
The Project Management work package will guarantee the efficient coordination
of the project work package and tasks, ensuring not only effective consortium
management, but overall administrative and financial management of the
project. Considering its nature, there will be no data produced by this WP
suitable for inclusion within this DMP.
**1.4 Reference Documents**
<table>
<tr>
<th>
[R1]
</th>
<th>
ATTRACkTIVE Grant Agreement – N° 730822
</th>
<th>
05/08/2016
</th> </tr>
<tr>
<td>
[R2]
</td>
<td>
ATTRACkTIVE Consortium Agreement
</td>
<td>
14/07/2016
</td> </tr>
<tr>
<td>
[R3]
</td>
<td>
Quality Plan (updated version)
</td>
<td>
07/07/2017
</td> </tr>
<tr>
<td>
[R4]
</td>
<td>
Guidelines on FAIR Data Management in Horizon 2020 (v3.0)
_http://ec.europa.eu/research/participants/data/ref/h2020/grants_ma
nual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf_
</td>
<td>
26/07/2016
</td> </tr>
<tr>
<td>
[R5]
</td>
<td>
ATTRACkTIVE Public Website
_http://projects.shift2rail.org/s2r_ip4_n.aspx?p=ATTRACKTIVE_
</td>
<td>
</td> </tr> </table>
# Table 2: Reference Documents
2. **DATA MANAGEMENT AT PROJECT LEVEL**
This section describes general data management that applies to the whole
project and all data generated by the project.
**2.1 Typologies of Data**
The following categories of outputs will be provided by ATTRACkTIVE
Consortium, in order to fulfil the H2020 requirements of making it possible
for third parties to access, mine, exploit, reproduce and disseminate the
results contained therein:
* (Public) Deliverables,
* Conference/Workshop presentations (which may, or may not, be accompanied by papers, see below),
* Conference/Workshop papers and articles for specialist magazines,
* Research Data and Meta Data.
**2.2 Data Collection & Definition**
The responsibility to define and describe all non-generic data sets specific
to an individual work package shall belong to the WP leader. The WP leaders
shall formally review and update the data sets related to their WP.
All modifications/additions to the data sets shall be provided to the
ATTRACkTIVE Coordinator (HaCon) for inclusion in the DMP.
**2.3 Dataset Naming**
All dataset materials generated or used within the project will be named and
referred to in this DMP with the following codification, in order to have
unambiguous identification and ease of access:
<table>
<tr>
<th>
**WP (3 characters)**
</th>
<th>
**Number (3 digits)**
</th>
<th>
**Version**
</th> </tr>
<tr>
<td>
WP1
</td>
<td>
001
</td>
<td>
1
</td> </tr>
<tr>
<td>
WP2
</td>
<td>
001
</td>
<td>
1
</td> </tr>
<tr>
<td>
WP3
</td>
<td>
001
</td>
<td>
1
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
001
</td>
<td>
1
</td> </tr> </table>
# Table 3: Dataset Codification
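As an illustration of this codification (a hypothetical helper, not part of the project tooling), dataset codes can be generated and validated as follows:

```python
# Build and check dataset codes following the WPx-NNN-V codification above.
import re

CODE_PATTERN = re.compile(r"^WP[1-4]-\d{3}-\d+$")

def make_code(wp: int, number: int, version: int) -> str:
    return f"WP{wp}-{number:03d}-{version}"

assert make_code(1, 1, 1) == "WP1-001-1"
assert CODE_PATTERN.match("WP2-001-1")
assert not CODE_PATTERN.match("WP5-001-1")  # WP5 produces no datasets here
```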
**2.4 Archiving and Preservation**
Open access to public deliverables/reports, publication or presentation will
be achieved in ATTRACkTIVE by depositing the data into the Cooperation Tool
(CT), and activating their publication to the project Website [R5].
These documents will be available for at least 3 years following the project
completion.
**2.5 Data Security**
ATTRACkTIVE does not intend to use or produce any confidential or sensitive
data that would require setting up specific measures for secure storage or
transfer because it develops a technical demonstrator. It is not planned to
run this system productively in the market.
The current status of the project does not change this position, but these
aspects will be monitored over time and taken into account during development
of the system. This ensures that, once the system goes live, the data are
handled according to data security principles.
* **Travel Companion Personal Application**
Personal data will be stored in the Personal Application (PA) for caching
purposes in case of network connectivity losses. It will be a mirror of the
data stored in the Cloud Wallet and does not need any backup as this cache can
be retrieved at will. The storing of the data on the device will comply with
the operating systems rules, namely iOS and Android, which are secured through
encryption.
The PA will send (indirectly through the Cloud Wallet and the Tracking
Orchestrator) sensitive data to an identified partial Trip Tracker. All
sensitive data transfer will take place using secured encrypted channels (e.g.
HTTPS connections) and authorization mechanism based on temporal token
generation.
Once a traveller with a valid journey stored in their secured personal data
set intends to be tracked, several actions take place:
* the Cloud Wallet is enabled to send push notifications to the travellers PA to inform users about any kind of obstacles
* the Tracking Orchestrator will subscribe this journey by using the User ID and the Offer ID representing the journey
* the journey is sent with a subscription ID anonymously to all partial Trip Trackers
* whenever an Event results in a relevant notification for the traveller the Tracking Orchestrator sends this notification to the Cloud Wallet which sends a push notification to the traveller.
The generated subscription IDs and tokens are valid until the journey has
finished or until the traveller terminates the tracking mode.
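A minimal sketch of such a time-limited, anonymous token could look as follows; this is illustrative only and does not reproduce the actual ATTRACkTIVE implementation.

```python
# Illustrative sketch: an anonymous subscription token that becomes invalid
# once the journey is over (TTL) or tracking is terminated by the traveller.
import secrets
import time

class SubscriptionToken:
    def __init__(self, ttl_seconds: float):
        self.value = secrets.token_urlsafe(32)   # anonymous, unguessable ID
        self.expires_at = time.time() + ttl_seconds
        self.terminated = False

    def is_valid(self) -> bool:
        return not self.terminated and time.time() < self.expires_at

token = SubscriptionToken(ttl_seconds=3 * 3600)  # e.g. a three-hour journey
assert token.is_valid()
token.terminated = True                          # traveller stops tracking
assert not token.is_valid()
```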
* **Partial Trip Tracker Repository**
Partial Trip Trackers do not store any personal data in their repositories.
The Partial Trip Trackers are not masters of the data they manage: the stored
data come from sources that control the data and the associated life cycles.
The storing is for caching purposes only; in case of a system crash, the data
will be re-acquired.
* **Cloud Wallet Repository**
Cloud Wallet data will be stored on a relational database (PostgreSQL)
protected by a user and password. The access to this data will only be allowed
to administration users. Currently, the access to the data has two ways:
* Through VPN connecting directly to the servers: in this way, a username and a password are needed in addition to the mandatory VPN certificate. The user then has direct access to the databases.
* Through services using a special admin account: this requires an administrator account registered in the Identity module and login with this account in the system to use a temporary token. Expiration time in temporary tokens is configurable.
The database will be located on the Microsoft Azure Cloud. Microsoft Azure
has completed a set of independent third-party ISO and Cloud Security Alliance
(CSA) audits to expand its certification portfolio, offering broad compliance
coverage that enables customers to meet a wide range of regulatory obligations.
The following table summarizes the Azure certifications:
<table>
<tr>
<th>
**Certification**
</th>
<th>
**Azure**
</th> </tr>
<tr>
<td>
CSA STAR Certification
</td>
<td>
√
</td> </tr>
<tr>
<td>
ISO 27001:2013
</td>
<td>
√
</td> </tr>
<tr>
<td>
ISO 27017:2015
</td>
<td>
√
</td> </tr>
<tr>
<td>
ISO 27018:2014
</td>
<td>
√
</td> </tr>
<tr>
<td>
ISO 20000-1:2011
</td>
<td>
√
</td> </tr>
<tr>
<td>
ISO 22301:2012
</td>
<td>
√
</td> </tr>
<tr>
<td>
ISO 9001:2015
</td>
<td>
√
</td> </tr> </table>
The access to this server will only be allowed using a Virtual Private Network
(VPN) protected by username and password that will only be available to
administration users. Only the owner of the cloud environment is able to add
new admin users if this is necessary.
This server will only be accessible using specific ports, blocking the access
using the most common ports used in hacking attacks, such as HTTP.
A database backup will be executed daily to generate a snapshot of the data
stored in the repository, and the backup will be stored on Azure
infrastructure to avoid data loss in case of a
server crash. This backup execution will be the responsibility of Indra as
well as the recovery in case of data loss.
A server backup will be configured in Azure daily for being able to restore
the full server in case of disaster.
All passwords stored physically in the database or in files will be stored
cyphered using a state of the art algorithm. The algorithm used is **bcrypt**
, a password hashing function based on the Blowfish cipher. This algorithm
incorporates a salt to protect against rainbow table attacks and it is also an
adaptive function: over time, the iteration count can be increased to make it
slower, so it remains resistant to brute-force search attacks even with
increasing computation power.
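As an illustration, using the common Python bcrypt package (the project itself may use a different binding), hashing and verification work as follows:

```python
# Hash a password with bcrypt (salted, adaptive cost) and verify it.
import bcrypt

password = b"correct horse battery staple"
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))  # salt + cost factor

# the cost factor ("rounds") can be raised over time to keep brute-force
# attacks expensive on faster hardware
assert bcrypt.checkpw(password, hashed)
assert not bcrypt.checkpw(b"wrong password", hashed)
```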
**2.6 Ethical Aspects**
The developed procedures and findings generated in ATTRACkTIVE are centred on
embedded systems topics and do not foresee any research on areas of ethical
relevance such as research on humans, animals, medical applications, genetics
or bio/nano-electronics.
During the project all possibly needed data will be simulated (replica data)
and thus constitute no personal data. In cases where personal data can or will
be used after the end of the project, this data will be secured and protected
by design.
None of the ethical issues that have been named in section 4 “Ethics issues
table” of the proposal submission forms are relevant for the ATTRACkTIVE
proposal.
**3\. DATA MANAGEMENT PLAN OF ATTRACKTIVE PROJECT**
• **FAIR Principles**
In order to have well-organized data management, the FAIR Principles should
be applied, which means making the research data:
* **F** indable,
* **A** ccessible,
* **I** nteroperable,
* **R** eusable
for internal and external stakeholders of the project. The data architecture
of ATTRACkTIVE takes these rules into account for its specific parts, keeping
in mind that this project is one of several within Shift2Rail IP4. The Open
Call project My-TRAC deals with regulations according to the GDPR, and parts
of its outcomes are reflected within this document.

Furthermore, data produced by work packages 1 and 2 are partly targeted at one
specific person and constitute sensitive personal data, which are not intended
to be openly accessible; there is therefore no need to enable an exchange of
data or to make them findable for external usage. This applies especially to
the Travel Companion. It is guaranteed that data generated within the project
will only be reused within corresponding projects of IP4. For this usage, data
security, especially for sensitive private data, is ensured by anonymizing
them, so that data of this type would only be reused in an aggregated form.
Data generated during the project are only used for the development of new
features to reach the mentioned objectives and will be deleted after the
project is finalized.
**3.1 DMP of WP1: Trip Tracking**
**3.1.1 _Data Summary_ **
The WP1 of ATTRACkTIVE project deals with the TD4.4 – Trip Tracker. The Trip
Tracker is designed to detect any kind of disruptions or obstacles of a
traveller’s itinerary and to provide users with alternative routes in case of
them.
The Trip Tracker deals with a wide range of multi-source and multi-format
information through direct links with various transport providers and data
providers. Therefore interfaces to support emerging and established standard
protocols such as VDV TRIAS, NeTEx, SIRI, GTFS static and real time will be
developed. Additionally information from urban OAS (Operational Assistance
Systems), ITS, suburban rail management systems, signalling infrastructure and
road traffic data will be taken into consideration. On top of that, data from
social networks related to the ongoing travels will be collected. The aim is
to analyse how social network information could feed the trip tracker and help
to enrich the trip tracking functions. This information can be complemented by
other information sources such as weather information that could affect common
operation. Finally the Travel Companion will be used as a data source for real
time information. Therefore this task also aims to interface it to the TC to
collect traveller information. The Trip Tracker will be fed with data from the
semantic web of transportation through the interoperability framework once the
S2R ecosystem is fully established.
The aim of collecting data for the Trip Tracker is to retrieve all
information required to enable tracking activation on journeys to be tracked.
This relates to the objective of collecting planned and real-time
data for all modes, including personal transport, which is essential for all
follow-up calculations and assistance.
The collected data from Public Transport includes mixed reference data
(network data, time tables, planned journeys) as well as mixed dynamic data
(passing times, vehicle location, operating information,
situational/contextual information).
On the one hand, reference data is “real data” provided by Transport Services
Providers (e.g. STIB in Brussels). On the other hand, dynamic data is
“simulated data” from the simulator implemented within the ATTRACkTIVE project
to generate real time events in relation with the corridor/scenario that will
be defined in the final demonstrator. Real time events generated will be
linked with existing reference data in order to be as close as possible to
reality and respond to all situations that can happen within a network.
The consideration of relevant standards within the project, as well as the
existence of the Shift2Rail interoperability framework, will contribute to
tearing down barriers and obstacles that stakeholders may encounter when
joining the Shift2Rail ecosystem. This prevents competitors in the
transportation marketplace from isolating themselves instead of participating
in Shift2Rail, in the assumption that their market share would be higher.
The collected data from personal application components is extracted from
traveller’s mobile device sensors and traveller’s reported events. This data
will provide the ability to identify events based on user inputs and behaviour
without identifying that user.
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Set**
</th>
<th>
**Description**
</th>
<th>
**Origin**
</th>
<th>
**Type**
</th>
<th>
**Size**
</th>
<th>
**Personal**
**Data**
</th>
<th>
**Access/**
**License**
</th> </tr>
<tr>
<td>
WP1-
001-1
</td>
<td>
Weather
</td>
<td>
The Yahoo Weather API allows you to get current weather information for your
location. It makes use of YQL (Yahoo Query Language), a SQL-like language that
allows you to obtain meteorological information. The API is exposed as a REST
service and returns the information in a JSON data structure. The data are
updated every 2 seconds.
</td>
<td>
Yahoo
Weather
API
</td>
<td>
</td>
<td>
</td>
<td>
No
</td>
<td>
open access; terms of use could be checked at _https://policie_
_s.yahoo.com/ us/en/yahoo/t erms/product_
_-_
_atos/apiforyd n/index.htm_
</td> </tr>
<tr>
<td>
WP1-
002-1
</td>
<td>
Planning data for Madrid
</td>
<td>
Feeds for Indra’s Urban TSP with the planning data of the urban transit in
Madrid (CRTM)
</td>
<td>
CRTM
</td>
<td>
GTFS
</td>
<td>
</td>
<td>
No
</td>
<td>
CRTM
(Consorcio
Regional de Transportes de Madrid)
</td> </tr>
<tr>
<td>
WP1-
002-2
</td>
<td>
Planning data for Barcelona
</td>
<td>
Feeds for Indra’s Urban TSP with the planning data of the urban transit in
Barcelona (TMB)
</td>
<td>
TMB
</td>
<td>
GTFS
/RES
T API
</td>
<td>
</td>
<td>
No
</td>
<td>
_https://develo per.tmb.cat/d ocs/termsconditions_
</td> </tr> </table>
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Set**
</th>
<th>
**Description**
</th>
<th>
**Origin**
</th>
<th>
**Type**
</th>
<th>
**Size**
</th>
<th>
**Personal**
**Data**
</th>
<th>
**Access/**
**License**
</th> </tr>
<tr>
<td>
WP1-
003-1
</td>
<td>
STIB
GTFS
</td>
<td>
Open data from STIBMIVB, The Brussels Intercommunal
Transport Company. The Files API contains one operation returning the GTFS
Files. The GTFS files are updated every two weeks.
We will retrieve:
* Stops with their geolocation
* Lines and their routes
* Details of every stop on a line
* Theoretical timetables at every
stop
</td>
<td>
STIB
</td>
<td>
GTFS
</td>
<td>
~25
MB
</td>
<td>
No
</td>
<td>
https://opend ata.stibmivb.be/store
/license
</td> </tr>
<tr>
<td>
WP1-
004-1
</td>
<td>
STIB Opera-
tion
Monitoring API
</td>
<td>
Open data from STIBMIVB, The Brussels
Intercommunal
Transport Company.
The Operation
Monitoring API provides real-time information including:
* Waiting times at stops
* Vehicle positions
This API will not be used for demonstration purposes, where SIRI SX simulated data is better suited.
</td>
<td>
STIB
</td>
<td>
REST
API
</td>
<td>
N/A
</td>
<td>
No
</td>
<td>
https://opend ata.stibmivb.be/store
/license
</td> </tr>
<tr>
<td>
**Code**
</td>
<td>
**Data Set**
</td>
<td>
**Description**
</td>
<td>
**Origin**
</td>
<td>
**Type**
</td>
<td>
**Size**
</td>
<td>
**Personal**
**Data**
</td>
<td>
**Access/**
**License**
</td> </tr>
<tr>
<td>
WP1-
005-1
</td>
<td>
RT Data
VDV
Based
Data for
VBB
</td>
<td>
Non-open data provided by VBB (Berlin-Brandenburg public transport
association). Data comprise plan data and real-time data; the latter are used
in the ATTRACkTIVE Trip Tracker
</td>
<td>
VBB
</td>
<td>
VDV 454
V2.1 Progr am-
status Real
</td>
<td>
N/A
</td>
<td>
No
</td>
<td>
Individual bilateral
</td> </tr>
<tr>
<td>
WP1-
006-1
</td>
<td>
Planned
and RT
Data from public transport in the Netherlands
</td>
<td>
Data source to be evaluated for prognosis events; based on feeds created from
open data files published by the transit agencies under open license in the
Netherlands
</td>
<td>
OVapi
</td>
<td>
GTFS
/GTF
S-RT
</td>
<td>
</td>
<td>
No
</td>
<td>
_http://gtfs.ova pi.nl/nl/_ _http://gtfs.ova pi.nl/READM_
_E_
</td> </tr> </table>
# Table 4: WP1 - Data summary
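As an illustrative sketch of how GTFS feeds such as those listed above are consumed (the file name below is a placeholder), the stops and their geolocation can be read directly from the feed's zip archive, since a GTFS feed is a zip of CSV text files:

```python
# Read stops with their geolocation from a GTFS feed (zip of CSV files).
import csv
import io
import zipfile

with zipfile.ZipFile("gtfs_feed.zip") as feed:          # placeholder path
    with feed.open("stops.txt") as f:
        reader = csv.DictReader(io.TextIOWrapper(f, encoding="utf-8-sig"))
        for stop in reader:
            print(stop["stop_id"], stop["stop_name"],
                  stop["stop_lat"], stop["stop_lon"])
```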
**3.1.2 _FAIR Principles_ **
In this chapter the data used and created in the Trip Tracker is listed
according to each of the FAIR categories.
• **Findable aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Meta Data, Comments**
</th> </tr>
<tr>
<td>
WP1-001-1
</td>
<td>
Data can be accessed according to open access terms;
Comment: It was decided within the course of the project not to implement
weather forecast conditions
</td> </tr>
<tr>
<td>
WP1-002-1
</td>
<td>
Data has been obtained from the open data portal of the CRTM (
_http://datacrtm.opendata.arcgis.com/_ ) containing the GTFS files for Metro,
Buses, Coach, Tram and Train and this information is imported in the Indra’s
Urban TSP.
</td> </tr>
<tr>
<td>
WP1-002-2
</td>
<td>
According to its non-open status, access is granted according to the license.
From a developer portal you can access GTFS and real-time data.
</td> </tr>
<tr>
<td>
WP1-003-1
</td>
<td>
Data has been obtained from the open data portal of the STIB :
https://opendata.stib-mivb.be
</td> </tr>
<tr>
<td>
WP1-004-1
</td>
<td>
Data could be obtained from the open data portal of the STIB :
https://opendata.stib-mivb.be
</td> </tr>
<tr>
<td>
WP1-005-1
</td>
<td>
According to its non-open status access is granted according to license
</td> </tr>
<tr>
<td>
WP1-006-1
</td>
<td>
Data has been obtained from the RESTful API publicly available. It contains
GTFS and GTFS-RT feed related to some Netherlands public transport agencies
</td> </tr> </table>
# Table 5: WP1 - Findable aspects
• **Accessible aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Public/Private**
</th>
<th>
**Specific Restrictions**
</th>
<th>
**Access**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
WP1-
001-1
</td>
<td>
Public
</td>
<td>
</td>
<td>
Not applicable
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP1-
002-1
</td>
<td>
Public
</td>
<td>
</td>
<td>
Not applicable
</td>
<td>
Stored in the Indra’s Urban TSP repository
</td>
<td>
</td> </tr>
<tr>
<td>
WP1-
002-2
</td>
<td>
According to license regulations
</td>
<td>
Specific regulations for
Real Time Data
</td>
<td>
Accessible through INDRA account.
</td>
<td>
Accessible for Shift2Rail Projects
</td> </tr>
<tr>
<td>
WP1-
003-1
</td>
<td>
Public
</td>
<td>
</td>
<td>
Specific open data license
</td>
<td>
Accessible through free account
</td>
<td>
</td> </tr>
<tr>
<td>
WP1-
004-1
</td>
<td>
Public
</td>
<td>
</td>
<td>
Specific open data license
</td>
<td>
Accessible through free account
</td>
<td>
</td> </tr>
<tr>
<td>
WP1-
005-1
</td>
<td>
According to license regulations
</td>
<td>
Specific regulations for
Real Time Data
</td>
<td>
</td>
<td>
Accessible for Shift2Rail Projects
</td> </tr>
<tr>
<td>
WP1-
005-1
</td>
<td>
Non open data
</td>
<td>
Individual regulations
</td>
<td>
Access according to the individual regulations
</td>
<td>
Accessible for
Shift2Rail
Proects
</td> </tr> </table>
# Table 6: WP1 - Accessible aspects
• **Interoperable aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
WP1-001-1
</td>
<td>
</td> </tr>
<tr>
<td>
WP1-002-1
</td>
<td>
The GTFS data can be combined with GTFS data from other providers to have a
complete multimodal environment covering multiple regions.
</td> </tr>
<tr>
<td>
WP1-002-2
</td>
<td>
The GTFS data can be combined with API/REST data to have a complete multimodal
environment.
</td> </tr>
<tr>
<td>
WP1-003-1
</td>
<td>
Based on GTFS standard
</td> </tr>
<tr>
<td>
WP1-004-1
</td>
<td>
Specific API
</td> </tr>
<tr>
<td>
WP1-005-1
</td>
<td>
VDV454 is a German standard used in public transport for data exchange
</td> </tr>
<tr>
<td>
WP1-006-1
</td>
<td>
Based on GTFS/GTFS-RT standard
</td> </tr> </table>
# Table 7: WP1 - Interoperable aspects
• **Reusable aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
WP1-001-1
</td>
<td>
</td> </tr>
<tr>
<td>
WP1-002-1
</td>
<td>
The GTFS data can be used to feed the Travel Expert Repository and build the
Meta Network through the Meta Network Builder.
</td> </tr>
<tr>
<td>
WP1-002-2
</td>
<td>
The GTFS data can be used to feed the Travel Expert Repository and build the
Meta Network through the Meta Network Builder.
</td> </tr>
<tr>
<td>
WP1-003-1
</td>
<td>
The GTFS data are used to feed the pTT. They are available for other potential
purposes.
</td> </tr>
<tr>
<td>
WP1-004-1
</td>
<td>
Not applicable
</td> </tr>
<tr>
<td>
WP1-005-1
</td>
<td>
Real Time Data is by its nature not reusable, as it becomes invalid once the
specific travel segment lies in the past.
</td> </tr>
<tr>
<td>
WP1-006-1
</td>
<td>
Not applicable
</td> </tr> </table>
# Table 8: WP1 - Reusable aspects
**3.1.3 _GDPR Issues_ **
Within chapter 2.5 (Data Security) it is explained that anonymous tokens
(subscription IDs) are generated to enable partial Trip Trackers to forward
collected Events to a specific traveller. These tokens become invalid after
the journey is over or tracking is terminated manually by the traveller. In
this respect, the system works according to all requirements described in the
GDPR.

The Real Time Datasets described in WP1-002-1 to WP1-006-1, used to receive
Events, are completely independent of human beings. They reflect only
technical situations during operation, independent of any specific journey.
GDPR issues therefore do not arise for them.
**3.1.4 _Specific Consideration_ **
No specific considerations regarding data within this WP.
**3.2 DMP of WP2: Travel Companion**
**3.2.1 _Data Summary_ **
The WP2 of ATTRACkTIVE Project deals with the TD4.5 – Travel Companion. The
Travel Companion aims to act as the "face to the customer". It is an
application running on the traveller's smart device. This application needs a
server-side counterpart as well as storage in the cloud.
In order to offer some meaningful capabilities to the traveller, the Travel
Companion has to store a profile per user, containing their preferences as
well as historical data. This private data will be stored in the cloud
component of the Travel Companion, in a cloud database to be accessible from
all of the Shift2Rail components. Of course, access to this data will be
controlled through state of the art authentication mechanisms.
Moreover, one component of the personal application will collect user data,
another one will collect the information generated by the mobile device
sensors. The data generated will be anonymously sent to a dedicated partial
Trip Tracker. The pTT will analyse the data and use it to detect disruptions
if any. This will allow the Trip Tracker to better analyse traffic, temporary
accessibility issues, finally providing other users with more complete and
more up to date information.
All the other data will be handled in the Travel Companion either in the
Personal Application or in the corresponding Cloud Wallet.
For the time being there is no data to be listed in the Travel Companion.
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data**
**Set**
</th>
<th>
**Description**
</th>
<th>
**Origin**
</th>
<th>
**Types/ Format**
</th>
<th>
**Size**
</th>
<th>
**Personal**
**Data**
</th>
<th>
**Access/**
**License**
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
# Table 9: WP2 - Data summary
**3.2.2 _FAIR Principles_ **
In this chapter the data used and created in the Travel Companion is to be
listed according to each of the FAIR categories.
As no data are listed, the FAIR categories need not be detailed.
**3.2.3 _GDPR Issues_ **
Collected data as well as Cloud Wallet data are not intended to be openly
accessible. They are generated within the project and could only be reused in
IP4 projects if needed. For this usage, data security, especially for
sensitive private data, is ensured by anonymizing them, so that data of this
type would only be reused in an aggregated form. In any case, data generated
during the project are only used for the development of new features to reach
the mentioned objectives and will be deleted after the project is finalized.
**3.2.4 _Specific Consideration_ **
No specific considerations regarding data within this WP.
**3.3 DMP of WP3: Technical Coordination**
**3.3.1 _Data Summary_ **
WP3 handles all activities regarding technical coordination within the
Consortium and manages interaction with other Shift2Rail complementarity
projects.
In particular WP3 will carry out integration and testing activities of the TDs
developed within WP1 and WP2 and will mostly rely on data generated amongst
them.
It will produce deliverables in the form of integration and synchronization
reports that will be disseminated at public level.
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Set**
</th>
<th>
**Description**
</th>
<th>
**Origin**
</th>
<th>
**Types/ Format**
</th>
<th>
**Size**
</th>
<th>
**Perso nal**
**Data**
</th>
<th>
**Access/**
**License**
</th> </tr>
<tr>
<td>
WP3-
001-1
</td>
<td>
OSM
tiles
</td>
<td>
This is the Open Street Map's standard tile layer that can be used to display
maps in testing environments. Distributed experiences should use other
services to comply with the Tile Usage Policy.
</td>
<td>
OSM
</td>
<td>
TMS
</td>
<td>
N/A
</td>
<td>
No
</td>
<td>
https://operati ons.osmfoun dation.org/pol icies/tiles/
</td> </tr>
<tr>
<td>
WP3-
002-1
</td>
<td>
Mapbox
Maps
API
</td>
<td>
The experience engine also offers to use MapBox APIs to display maps in
testing or production environments.
</td>
<td>
Mapbo
x
</td>
<td>
WMTS
</td>
<td>
N/A
</td>
<td>
No
</td>
<td>
Commercial agreements
https://www. mapbox.com/
pricing/
</td> </tr> </table>
# Table 10: WP3 - Data summary
**3.3.2 _FAIR Principles_ **
In this chapter the data used and created in Technical Coordination are listed
according to each of the FAIR categories.
• **Findable aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Meta Data, Comments**
</th> </tr>
<tr>
<td>
WP3-001-1
</td>
<td>
Testing data can be obtained from the open street map website (
_https://www.openstreetmap.org_ ). Editors are available to create custom
maps.
</td> </tr>
<tr>
<td>
WP3-002-1
</td>
<td>
Testing data has been obtained from the mapbox website
( _https://www.mapbox.com_ ). It provides map design tools to customize maps
to suit the experience authors want to provide.
</td> </tr> </table>
# Table 11: WP3 - Findable aspects
• **Accessible aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Public/Private**
</th>
<th>
**Specific Restrictions**
</th>
<th>
**Access**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
WP3-
001-1
</td>
<td>
Public
</td>
<td>
Licensed under the Open
Data Commons Open
Database License (ODbL)
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP3-
002-1
</td>
<td>
Public
</td>
<td>
According to commercial agreements _https://www.mapbox.com/pricing/_
</td>
<td>
Accessible through registered account.
</td>
<td>
</td> </tr> </table>
# Table 12: WP3 - Accessible aspects
• **Interoperable aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
WP3-001-1
</td>
<td>
Not Applicable
</td>
<td>
</td> </tr>
<tr>
<td>
WP3-002-1
</td>
<td>
Not Applicable
</td>
<td>
</td> </tr> </table>
# Table 13: WP3 - Interoperable aspects
• **Reusable aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
WP3-001-1
</td>
<td>
Maps from open street map can be used for any kind of experiences.
</td> </tr>
<tr>
<td>
WP3-002-1
</td>
<td>
The maps used to create the testing and demonstration experiences are reusable
only for testing and demonstration purposes.
</td> </tr> </table>
# Table 14: WP3 - Reusable aspects
**3.3.3 _Specific Consideration_ **
No specific considerations regarding data within this WP.
**3.4 DMP of WP4: Dissemination and Communication**
**3.4.1 _Data Summary_ **
This work package communicates the project's vision and results and ensures
that the partners of the related projects within IP4 interact in a seamless
way by exchanging all relevant information. An essential part is to organize
expert and user groups, not only to inform relevant stakeholders, but also to
collect their advice and take it into account during the development of the
system.
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data**
**Set**
</th>
<th>
**Description**
</th>
<th>
**Origin**
</th>
<th>
**Types/ Format**
</th>
<th>
**Size**
</th>
<th>
**Personal**
**Data**
</th>
<th>
**Access/**
**License**
</th> </tr>
<tr>
<td>
WP4-
001-1
</td>
<td>
News-
letter
</td>
<td>
ATTRACkTIVE
Project
Newsletter
</td>
<td>
Produced by the consortium
</td>
<td>
PDF
</td>
<td>
<1M
</td>
<td>
No
</td>
<td>
Public
</td> </tr>
<tr>
<td>
WP4-
002-1
</td>
<td>
Project Identity and website
</td>
<td>
ATTRACkTIVE
Project Identity and website
</td>
<td>
Produced by the consortium
</td>
<td>
PDF
</td>
<td>
<1M
</td>
<td>
No
</td>
<td>
Public
</td> </tr> </table>
# Table 15: WP4 - Data summary
**3.4.2 _FAIR Principles_ **
In this chapter the data used and created in dissemination and communication
is listed according to each of the FAIR categories.
• **Findable aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Meta Data, Comments**
</th> </tr>
<tr>
<td>
WP4-001-1
</td>
<td>
This document is a deliverable of the project, disseminated at public level.
Under the form of a newsletter, it details the project progress and status.
</td> </tr>
<tr>
<td>
WP4-002-1
</td>
<td>
This document is a deliverable of the project, disseminated at public level.
Its purpose is to describe the setup of the project website.
</td> </tr> </table>
# Table 16: WP4 - Findable aspects
• **Accessible aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Public/Private**
</th>
<th>
**Specific Restrictions**
</th>
<th>
**Access**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
WP4-001-
1
</td>
<td>
Public
</td>
<td>
read only
</td>
<td>
This document has been published over the project website and is accessible in
the deliverable section.
</td>
<td>
</td> </tr>
<tr>
<td>
WP4-002-
1
</td>
<td>
Public
</td>
<td>
read only
</td>
<td>
This document has been published over the project website and is accessible in
the deliverable section.
</td>
<td>
</td> </tr> </table>
# Table 17: WP4 - Accessible aspects
• **Interoperable aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
WP4-001-
1
</td>
<td>
Not applicable
</td>
<td>
</td> </tr>
<tr>
<td>
WP4-002-
1
</td>
<td>
Not applicable
</td>
<td>
</td> </tr> </table>
# Table 18: WP4 - Interoperable aspects
• **Reusable aspects**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
WP4-001-1
</td>
<td>
This document can be reused as a dissemination/communication tool to share
information regarding the ATTRACkTIVE project.
</td> </tr>
<tr>
<td>
WP4-002-1
</td>
<td>
No reusability is expected for this kind of document
</td> </tr> </table>
# Table 19: WP4 - Reusable aspects
**3.4.3 _Specific Consideration_ **
No specific considerations regarding data within this WP.
**4\. CONCLUSION**
The Data Management Plan has the following characteristics:
* It is a document outlining how all the research data generated will be handled during the project's lifetime and after its completion, describing whether and how these datasets will be shared or re-used, and how they allow validation of the results presented in scientific publications generated by the project.
* It is a document outlining how all the research data and non-scientific documents generated during the lifetime of the project will be handled in terms of sharing policies, archiving, storage and preservation time.
# 1 Introduction
PROSEQO participates in the Open Research Data Pilot in Horizon 2020, thus
contributing to improving and maximizing access to and re-use of research data
generated by the project. This deliverable describes datasets that are planned
to be generated and released during the project. Further data may however
arise through the lifetime of the project and will lead to updates of the Data
Management Plan.
Following the Guidelines on FAIR Data Management in Horizon 2020, the
description of each dataset (DS) includes the following information, as far as
appropriate and applicable for the respective data:
* __Data set reference and name_ _
Identifier for the data set to be produced
* __Data set description_ _
Description of the data that will be generated or collected
* __Fair data: Findable_ _
Indication of the metadata, documentation or other supporting information that
will accompany the data for it to be interpreted correctly
* __Fair data: Accessible_ _
Information on whether and how it is planned to make data openly available
* __Fair data: Interoperable_ _
Information on how the interoperability of the dataset will be guaranteed
* __Fair data: Re-usable_ _
Information on how it will make sure to increase the data re-use
# 2 Datasets: overview
The consortium has identified 12 datasets to be generated and released
during the project implementation. The table below gives an overview of the
datasets:
<table>
<tr>
<th>
**ID**
</th>
<th>
**Name**
</th>
<th>
**Responsible partner**
</th>
<th>
**PROSEQO Task(s)**
</th> </tr>
<tr>
<td>
DS#1
</td>
<td>
Data needed to validate results in scientific publications
</td>
<td>
UPSud
</td>
<td>
All tasks
</td> </tr>
<tr>
<td>
DS#2
</td>
<td>
Scientific publications
</td>
<td>
UPSud
</td>
<td>
All tasks
</td> </tr>
<tr>
<td>
DS#3
</td>
<td>
DNA_sequence
</td>
<td>
AB Analitica
</td>
<td>
Task 5.2
</td> </tr>
<tr>
<td>
DS#4
</td>
<td>
Research data
</td>
<td>
ALACRIS
</td>
<td>
Task 5.1-2
</td> </tr>
<tr>
<td>
DS#5
</td>
<td>
Analysis algorithms
</td>
<td>
ALACRIS
</td>
<td>
Task 5.4
</td> </tr>
<tr>
<td>
DS#6
</td>
<td>
Proof of concept using a standard (Rayleigh limited) focused beam for optical
trapping
</td>
<td>
UB
</td>
<td>
Task 4.1
</td> </tr>
<tr>
<td>
DS#7
</td>
<td>
Design new microfluidic chambers
</td>
<td>
UB
</td>
<td>
Task 2.3
</td> </tr>
<tr>
<td>
DS#8
</td>
<td>
Study of the translocation of DNA and Protein through the nano capillarities
using electrical measurements
</td>
<td>
UB
</td>
<td>
Task 4.2
</td> </tr>
<tr>
<td>
DS#9
</td>
<td>
Low speed polymer translocation
</td>
<td>
UB
</td>
<td>
Task 4.2
</td> </tr>
<tr>
<td>
DS#10
</td>
<td>
Plasmonic trap
</td>
<td>
UB
</td>
<td>
Task 4.4
</td> </tr>
<tr>
<td>
DS#11
</td>
<td>
Polymer translocation control via surface plasmon
</td>
<td>
UB
</td>
<td>
Task 4.5
</td> </tr>
<tr>
<td>
DS#12
</td>
<td>
Spectroscopy of DNA sequence
</td>
<td>
IIT
</td>
<td>
Task 3.3
</td> </tr> </table>
# 3 Dataset Management Tables
We established an online form for data management. It addresses the
elements for data management listed in Section 1. The form supports the
process of data collection, alignment of data collection with the workplan,
communication across partners, and data publication. In the following, the
data sets expected to be collected during the runtime of PROSEQO are listed in
tabular form. One dataset is listed per page.
## 3.1 Dataset # 1
<table>
<tr>
<th>
Data set name
</th>
<th>
Data needed to validate results in scientific publications
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
UPSud
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Data needed to validate results in scientific publications
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
All tasks
</td> </tr>
<tr>
<td>
File formats
</td>
<td>
Various
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Protocols
</td> </tr>
<tr>
<td>
Access
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Institutional: University repository
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
As defined in the Consortium Agreement
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
Directly to all partners
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
As long as publication platform runs
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
Everybody
</td> </tr> </table>
## 3.2 Dataset # 2
<table>
<tr>
<th>
Data set name
</th>
<th>
Scientific publications
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
UPSud
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Public access version of scientific publication
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
All tasks
</td> </tr>
<tr>
<td>
File formats
</td>
<td>
Pdf
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Link to original publication
</td> </tr>
<tr>
<td>
Access
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Institutional: University repository
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
UPSud
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
Directly after publication of the manuscript
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
As long as publication platform runs
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
Everybody
</td> </tr> </table>
## 3.3 Dataset # 3
<table>
<tr>
<th>
Data set name
</th>
<th>
DNA_sequence
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
AB Analitica
</td> </tr>
<tr>
<td>
Description
</td>
<td>
* Data originating from an NGS platform (Illumina MiSeq)
* Large computer files
* Similar data can be obtained with other instrumentation starting from the same bio-material (DNA/RNA)
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
Task 5.2
</td> </tr>
<tr>
<td>
File format
</td>
<td>
.FASTQ
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
The data will concern genetic information obtained from nucleic acid
sequencing. No labels are foreseen at present
</td> </tr>
<tr>
<td>
Access
</td>
<td>
PROSEQO consortium only
Justification: Data strictly related to the technology development. To be
shared only within the consortium
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Data repository shared folder
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
Suitable software will be developed to read and interpret the data
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
The data will be generated in a standard and well-known format (.FASTQ).
No standard vocabulary will be used.
No mapping will be provided to more commonly used ontologies.
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
No copyright and IPR issues are expected
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
No data embargo is expected
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
Project duration
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
All partners may be interested in re-using the data
</td> </tr> </table>
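Since DS#3 will be released in the standard .FASTQ format, a minimal Python sketch of the kind of reader the planned analysis software might build on is given below. This is illustrative only: the file name is hypothetical, and the sketch assumes uncompressed, four-line-per-record FASTQ files such as those produced by an Illumina MiSeq.

```python
# Minimal FASTQ reader sketch (illustrative only; assumes uncompressed,
# four-line-per-record FASTQ files).
from typing import Iterator, Tuple

def read_fastq(path: str) -> Iterator[Tuple[str, str, str]]:
    """Yield (identifier, sequence, quality) tuples from a FASTQ file."""
    with open(path) as handle:
        while True:
            header = handle.readline().rstrip()
            if not header:
                break  # end of file
            sequence = handle.readline().rstrip()
            handle.readline()  # '+' separator line, ignored
            quality = handle.readline().rstrip()
            yield header.lstrip("@"), sequence, quality

# Hypothetical usage: count reads and mean read length in one pass.
if __name__ == "__main__":
    n, total = 0, 0
    for _, seq, _ in read_fastq("example_run.fastq"):  # hypothetical file name
        n += 1
        total += len(seq)
    print(f"{n} reads, mean length {total / max(n, 1):.1f}")
```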
## 3.4 Dataset # 4
<table>
<tr>
<th>
Data set name
</th>
<th>
Research data
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
ALACRIS
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Experimental results in the form of measurements, images and accompanying
description files, possibly including sequencing data
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
Task 5.1-2
</td> </tr>
<tr>
<td>
File format
</td>
<td>
Data in the form of facts and experimental results, provided in a presentation
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Experiment description
</td> </tr>
<tr>
<td>
Access
</td>
<td>
PROSEQO consortium only – Justification: commercial
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Institutional: a common repository folder on an internal IIT-server
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
Standard vocabulary for all data types present in the data set will be used to
allow inter-disciplinary interoperability
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
According to the CA IP definition
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
Upon publication
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
Project duration + 5 years
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
Project partners and external researchers (after being published)
</td> </tr> </table>
## 3.5 Dataset # 5
<table>
<tr>
<th>
Data set name
</th>
<th>
Analysis algorithms
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
ALACRIS
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Program for sequencing data analysis
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
Task 5.4
</td> </tr>
<tr>
<td>
File format
</td>
<td>
File with a program code
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Accompanying file with program description; presentation of program
performance
</td> </tr>
<tr>
<td>
Access
</td>
<td>
PROSEQO consortium only – Justification: commercial
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Institutional: a common repository folder on an internal IIT-server
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
Standard vocabulary for all data types present in the data set will be used to
allow inter-disciplinary interoperability
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
According to the CA IP definition
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
Upon publication
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
Project duration + 5 years
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
Project partners and external researchers (after being published)
</td> </tr> </table>
## 3.6 Dataset # 6
<table>
<tr>
<th>
Data set name
</th>
<th>
Proof of concept using a standard (Rayleigh limited) focused beam for optical
trapping
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
UB
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Use of an infrared trap at the entrance of the single-nanopore structure,
tested with several beads and molecules: first with DNA, then with RNA and
proteins.
Use of existing methods that avoid adsorption of molecules onto beads
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
Task 4.1
</td> </tr>
<tr>
<td>
File formats
</td>
<td>
Notebooks
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Notebooks, web pages, papers
</td> </tr>
<tr>
<td>
Access
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Project website
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
Via e-mail contact.
Standard vocabulary for all data types present in the data set will be used to
allow inter-disciplinary interoperability
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
The data obtained in this set-up will be open, and no license is necessary
because many references and papers have already been published on this topic
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
Indefinite
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
n/a
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
Project partners and external researchers related to the field
</td> </tr> </table>
## 3.7 Dataset # 7
<table>
<tr>
<th>
Data set name
</th>
<th>
Design new microfluidic chambers
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
UB
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Design of new microfluidic chambers to control the translocation through the
nano pipette using our mini tweezers set up
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
Task 2.3
</td> </tr>
<tr>
<td>
File formats
</td>
<td>
Notebooks, web pages, papers
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Notebooks, web pages, papers
</td> </tr>
<tr>
<td>
Access
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Project website
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
Standard vocabulary for all data types present in the data set will be used to
allow inter-disciplinary interoperability
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
The data obtained in this set-up will be open, and no license is necessary
because many references and papers have already been published on this topic
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
Indefinite
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
n/a
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
Project partners and external researchers related to the field
</td> </tr> </table>
## 3.8 Dataset # 8
<table>
<tr>
<th>
Data set name
</th>
<th>
Study of the translocation of DNA and Protein through the nano capillarities
using electrical measurements
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
UB
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Characterizing the translocation of DNA and protein through the nano
capillarities by means of electrical signal measurements
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
Task 4.2
</td> </tr>
<tr>
<td>
File formats
</td>
<td>
Notebooks, web pages, papers
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Notebooks, web pages, papers
</td> </tr>
<tr>
<td>
Access
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Project website
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
Standard vocabulary for all data types present in the data set will be used to
allow inter-disciplinary interoperability
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
The data obtained in this set-up will be open, and no license is necessary
because many references and papers have already been published on this topic
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
Indefinite
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
n/a
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
Project partners and external researchers related to the field
</td> </tr> </table>
## 3.9 Dataset # 9
<table>
<tr>
<th>
Data set name
</th>
<th>
Low speed polymer translocation
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
UB
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Use an optical fiber laser mechanically coupled to a wiggler to produce a
steerable beam. Verify the translocation of the polymer via V-clamp signal
measurement
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
Task 4.2
</td> </tr>
<tr>
<td>
File formats
</td>
<td>
Report
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Notebooks, web pages, papers
</td> </tr>
<tr>
<td>
Access
</td>
<td>
PROSEQO consortium only – Justification: these data sets will be novel
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Project website
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
Standard vocabulary for all data types present in the data set will be used to
allow inter-disciplinary interoperability
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
Copyright held by the project partners
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
One year
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
As long as publication platform runs
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
Project partners
</td> </tr> </table>
## 3.10 Dataset # 10
<table>
<tr>
<th>
Data set name
</th>
<th>
Plasmonic trap
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
UB
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The nanotrap will be generated by a second nanostructure illuminated at a
secondary wavelength at the entrance of the nanochannel. Design of the
second nanostructure and testing of its trapping capabilities with microspheres
of a few tens to hundreds of nanometres
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
Task 4.4
</td> </tr>
<tr>
<td>
File formats
</td>
<td>
Reports
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Notebooks, web pages, papers
</td> </tr>
<tr>
<td>
Access
</td>
<td>
PROSEQO consortium only – Justification: these data sets will be novel
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Project website
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
Standard vocabulary for all data types present in the data set will be used to
allow inter-disciplinary interoperability
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
Copyright held by the project partners
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
One year
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
As long as publication platform runs
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
Project partners
</td> </tr> </table>
## 3.11 Dataset # 11
<table>
<tr>
<th>
Data set name
</th>
<th>
Polymer translocation control via surface plasmon
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
UB
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Developing a plasmonic trap device to control the biomolecule translocation
through a nano capillarity
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
Task 4.5
</td> </tr>
<tr>
<td>
File formats
</td>
<td>
Reports
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Notebooks, web pages, papers
</td> </tr>
<tr>
<td>
Access
</td>
<td>
PROSEQO consortium only – Justification: these data sets will be novel
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Project website
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
Standard vocabulary for all data types present in the data set will be used to
allow inter-disciplinary interoperability
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
Copyright held by the project partners
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
One year
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
As long as publication platform runs
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
Project partners
</td> </tr> </table>
## 3.12 Dataset # 12
<table>
<tr>
<th>
Data set name
</th>
<th>
Spectroscopy of DNA sequence
</th> </tr>
<tr>
<td>
Responsible partner
</td>
<td>
IIT
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Optical data recorded as wavelength spectrum
</td> </tr>
<tr>
<td>
PROSEQO Task
</td>
<td>
Task 3.3
</td> </tr>
<tr>
<td>
File formats
</td>
<td>
Spectrum
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Each spectrum will have an ID
</td> </tr>
<tr>
<td>
Access
</td>
<td>
PROSEQO consortium only – Justification: novel data
</td> </tr>
<tr>
<td>
Data repository
</td>
<td>
Project repository folder
</td> </tr>
<tr>
<td>
Supporting tools
</td>
<td>
Computer software for spectrum reading
</td> </tr>
<tr>
<td>
Interoperability
</td>
<td>
The data are generic, i.e. intensity versus wavelength / energy
</td> </tr>
<tr>
<td>
Copyright and IP issues
management
</td>
<td>
IIT
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
A data embargo can be expected for IP reasons
</td> </tr>
<tr>
<td>
Duration
</td>
<td>
n/a
</td> </tr>
<tr>
<td>
Expected re-use
</td>
<td>
The entire related scientific community
</td> </tr> </table>
**0937_AfriAlliance_689162.md**
# Executive Summary
The overall objective of this deliverable is to provide an updated Data
Management Plan that describes what data are generated during the project
execution, including formats and structure, and how the data (including
metadata) are collected, stored, and made accessible. This deliverable is
mandatory since AfriAlliance participates in the Pilot initiative from the
European Commission on Open Data. The deliverable follows the guidelines on
FAIR Data Management in Horizon 2020, which prescribes the inclusion of
specific elements in the plan, including: 1) a summary of the data being
collected; 2) methods for making sure data are FAIR (findable, accessible,
interoperable, re-usable); 3) resources to be allocated; 4) security of data,
as well as any other aspects. The document describes the _initial_ plans for
Data Management and will be revised as soon as additional elements regarding
Data Management have been identified in the course of the implementation of
the AfriAlliance project. In addition, the deliverable considers the new
General Data Protection Regulation (EU) 2016/679 (GDPR), which entered into
force on 25 May 2018.
# 1 AfriAlliance Data Summary
AfriAlliance is a Coordination and Support Action project which nevertheless
consists of several distinct research activities to achieve its objectives,
such as studies into the motivations to participate in Working Groups in an
African context (WP1), specific short-term social innovation needs (WP2), the
barriers for online knowledge sharing (WP3), an inventory of current
monitoring and forecasting efforts (WP4) and the creation of Social Innovation
Factsheets on specific societal needs (WP5).
As a Coordination and Support Action, one of the main objectives of the
project is to share as broadly as possible any results generated by the
project with the broad water sector community, in particular with experts and
organizations active in the field of water and climate. This applies to both
data and metadata.
The Updated Data Management Plan deliverable complements the Project
Information Strategy deliverable, with the understanding that data generated
during the project are a subset of the overall information that will be
managed during the project (ref. D6.3, page 11). In particular, the scope of
the Data Management Plan concerns a subset of information mentioned in Table 1
of Deliverable 6.3, an (updated) extract of which is repeated below:
## Table 1 AfriAlliance Information (Data) (extract from Deliverable D6.3)
<table>
<tr>
<th>
**Type of**
**Information**
</th>
<th>
**Owner**
</th>
<th>
**Access Rights**
</th>
<th>
**Repository**
</th>
<th>
**Format Used**
</th>
<th>
**Standards Used**
</th>
<th>
**Quality Control**
</th>
<th>
**Purpose / Use**
</th> </tr>
<tr>
<td>
Input Data (e.g. survey information)
</td>
<td>
Task
Leaders
</td>
<td>
Partners
</td>
<td>
AA GDrive
</td>
<td>
Different
</td>
<td>
Customized format (AA identity)
</td>
<td>
Content and format by WP leaders, with advice from the project management team (PMT)
</td>
<td>
Raw data for processing into Task deliverables
</td> </tr>
<tr>
<td>
Output Data (reports, papers, policy notes) (*)
</td>
<td>
Task
Leaders
</td>
<td>
Open Access
</td>
<td>
AA GDrive, Website
</td>
<td>
MS Word, html, PDF, printed copies
</td>
<td>
Customized format (AA identity)
</td>
<td>
Content and format by WP leaders, with advice from the PMT
</td>
<td>
AfriAlliance information to be shared within the platform and to the broad
water sector (government staff, practitioners, researchers, etc.)
</td> </tr> </table>
Ethical aspects concerning the plan are covered in the Ethical aspects
deliverables (D7.1–D7.3).
To comply with the Horizon 2020 Open Research Data Pilot, AfriAlliance makes
available data potentially useful for others as well as all aspects that are
needed to replicate the undertaken research. In this context, the following
types of data can be distinguished (see Table 2).
## Table 2 Summary of AfriAlliance Data
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Description**
</th>
<th>
**AfriAlliance WP/tasks**
</th> </tr>
<tr>
<td>
Empirical data
</td>
<td>
The data (set) needed to validate results of scientific efforts.
</td>
<td>
WP1: data from survey of motivations to participate in
Working Groups and data from surveys for the Social Network Analysis
WP2: data from interviews and Focus Group on short-term social innovation needs
WP3: data from investigation of barriers and obstacles for online knowledge
sharing
WP4: inventory of current monitoring and forecasting efforts
</td> </tr>
<tr>
<td>
Associated metadata
</td>
<td>
The dataset’s creator, title, year of publication, repository, identifier etc.
based on the ISO 19157 standard.
</td>
<td>
WP1-WP4
Questionnaire, interviews and user-driven metadata entry through geodata
portal
</td> </tr>
<tr>
<td>
Documentation
</td>
<td>
Such as code books (concept definitions), informed consent forms, etc.: these
aspects are domain-dependent and important for understanding the data and
combining them with other data sources.
</td>
<td>
WP1-WP4
Questionnaire, interviews and user-driven metadata entry through the AA online
platform
</td> </tr>
<tr>
<td>
Methods & tools
</td>
<td>
(Information about) the software, hardware, tools, syntax queries, machine
configurations – i.e. domain-dependent aspects that are important for using
the data.
</td>
<td>
Data collection instruments
WP1: questionnaire and software to analyse and visualise the relationships
between stakeholders and their level of connectedness (SNA Analysis)
WP2: questionnaire (incl. via the AA online platform), Focus Group Discussion
protocol
WP3: questionnaire, Focus Group Discussion protocol
WP4: search terms and
questionnaire, interviews and user-driven metadata entry and search keywords
through the AA online platform.
</td> </tr> </table>
All data generated according to Table 2 is treated in compliance with the EU
GDPR regulation.
All generated data uses widely adopted data formats, including but not limited
to:
* Basic Data formats: CSV, XLS, XML
* Aggregated Data / metadata: PDF, HTML, MS Office files
Concerning Monitoring and Forecasting tools (WP4), the project makes extensive
use of existing data and repositories. In fact, the essence of the data
management concerning M&F tools is a more effective / more comprehensive use
of existing data rather than the generation of new (source) data.
Existing data to be used for this purpose stems from many different sources,
much of it generated locally in Africa.
# 2 AfriAlliance FAIR Data
AfriAlliance follows the FAIR approach to data, i.e. data is managed in order
to make them:
* Findable
* Accessible
* Interoperable
* Reusable
## 2.1 Making data findable, including provisions for metadata
### 2.1.1 Discoverability of Data
Data generated in AfriAlliance is available (for external use) via the
following resources (ref Table 1):
* AfriAlliance online platform: https://afrialliance.org/
* Akvo RSR (Really Simple Reporting) tool: https://afrialliance.akvoapp.org/en/projects/
* Web Catalogue Service (WCS, _https://www.opengeospatial.org/standards/wcs_), implemented with the GeoNetwork tool (_https://geonetwork-opensource.org/_)
* Akvopedia portal: https://akvopedia.org/wiki/Handbook_on_Data_Collection
The Website includes most of the (aggregated and summarised) data generated
during the project, including links to the AA web catalogue which uses
existing data.
The Akvo RSR tool provides overall and summarised information about
AfriAlliance Action Groups, including their results and impact. The tool is
compliant with the International Aid Transparency Initiative (IATI) standard
for reporting.
The WCS will contain in particular all metadata information concerning
existing data used by the foreseen improved monitoring and forecasting tool.
### 2.1.2 Identifiability of Data
AfriAlliance makes use of repositories assigning persistent IDs to data to
allow easy finding (and citing) of AfriAlliance data.
### 2.1.3 Naming Conventions
All AfriAlliance data are named according to the following conventions (an
illustrative sketch follows the list):
* Basic Data: AA WPx <name of data>-<date generated>-version
* Metadata: AfriAlliance <Descriptive Name of Data>-<date generated>-version
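As a purely illustrative sketch of the basic-data convention above (the work package number, data name and file extension are hypothetical examples, not actual AfriAlliance datasets):

```python
# Sketch of a filename builder for the basic-data naming convention above.
# The work package, data name and extension below are hypothetical examples.
from datetime import date

def basic_data_name(wp: int, name: str, version: int, ext: str) -> str:
    """Build 'AA WPx <name of data>-<date generated>-version' style names."""
    today = date.today().isoformat()  # e.g. '2018-06-15'
    return f"AA WP{wp} {name}-{today}-v{version}.{ext}"

print(basic_data_name(2, "social_innovation_survey", 1, "csv"))
# -> e.g. 'AA WP2 social_innovation_survey-2018-06-15-v1.csv'
```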
### 2.1.4 Keywords
Data is assigned relevant keywords to make them findable (e.g. through
internet browsing). Such keywords may vary depending on the Work Package where
data belong to.
### 2.1.5 Versioning
All data (and datasets) clearly mention the version (indicated both in the
name and within the information included in the data) as well as contact
information (the owner of the generated or aggregated dataset).
### 2.1.6 Standards
Data, and in particular metadata, follow an identified standard for metadata
creation. Although there are many different standards, the initial preference
of the consortium is to follow ISO 19157 as it is specifically adopted to
ensure the quality of geographic information, which is the core of
AfriAlliance data (used by the foreseen WCS). Several ISO standards exist and
ISO 19157 is a recent one, also adopted by INSPIRE (Infrastructure for Spatial
Information in Europe) Directive and national implementations, and includes
metadata quality control mechanisms. Project data stored in Akvo RSR makes use
of the IATI standard.
## 2.2 Making Data Openly Accessible
### 2.2.1 Data Openly Accessible
AfriAlliance makes all data generated by the project available, with the
exception of basic data with ethics constraints which are kept within the
consortium and are only available on the AfriAlliance GDrive. WP4 data, the
WCS and the geoportal will be freely available with open access to all the
metadata and workflows. It must be noted that the WCS will contain little
(only sample) real hard data.
### 2.2.2 Modalities for Sharing Data
All data generated are available in the resources mentioned in 2.1.1. In
particular, data is made available with the following modalities:
Website: all generated data will have an easily identified section on the new
version of the AfriAlliance website where most of the data will be posted. The
website will also include reference to project data available through Akvo
RSR, and will therefore also be the main source from which to retrieve general
project data. Moreover, an easily findable reference will be made to access the
WCS tool.
The WCS tool, being a web-based application, will also exist as a “standalone”
resource (with a clear reference to the AfriAlliance project), designed to
attract as many hits as possible through the most common web browsing
modalities.
Data for internal use (information sharing among partners) uses an intranet
site (Google Site).
### 2.2.3 Methods and tools needed to access data
Apart from widely known access methods (internet search based), it is
important to specifically mention that the WCS software source code will be
made available in an open source repository. The initial selection of the
consortium for this purpose is the Github resource.
Search terms and user-driven metadata entry and search key-words will be made
available through the AA WP4 geoportal. Entry search keywords will be rather
simple words, such as monthly rainfall or country, and other water- and
climate-related searches, available from pre-coded drop-down menus.
### 2.2.4 Data repositories
Most of the data generated will be stored on the internal GDrive. The WP4
geoportal will contain only metadata, which are web-based information on data
sources, data quality, etc.
### 2.2.5 Access to Data
No restrictions will apply to access to AA outputs. Access to programme
specific sources data (i.e. data from questionnaires) is restricted according
to the Ethics requirements as well as the GDPR regulations.
## 2.3 Making data interoperable
Interoperability of data is very important in AfriAlliance, especially in
relation to the geoportal.
The interoperability principle behind WP4 data is based on the principles and
standards of the Open Geospatial Consortium (OGC). The project includes the
concept of “volunteered geographic information” (VGI), which is the harnessing
of tools to create, assemble, and disseminate geographic data provided
voluntarily by individuals (Goodchild, 2007). VGI is a special case of a
broader phenomenon known as user-generated content. Common standards and
methodologies following the general principle will be adopted, and will be
further specified in updated revisions of the plan.
## 2.4 Owners and Access Rights
### 2.4.1 Data & Software Licences
Most of the data generated in AfriAlliance is open source, licenced under the
Creative Commons Attribution License (CC-BY), version 4.0, in order to make it
possible for others to mine, exploit and reproduce the data.
The WP4 geoportal WCS will be open source licenced using the GNU General
Public License Version 2 (GPL v2) (June 1991). The GeoNetwork opensource
software as used for the WCS is released under the GPL v2 license and can be
used and modified free of charge.
The portal user guide documentation will be provided and licensed under the
Creative Commons Attribution-NonCommercial 3.0 License. Minor changes can be
adopted where required by certain partners' needs/regulations; those cases
will be properly documented.
### 2.4.2 Data Re-use
No restrictions apply to the re-use of data, nor any restriction in time.
### 2.4.3 Third Parties Usage
AfriAlliance will make data publicly available to Third Parties, under the
condition that the source is referenced according to indications provided in
the data.
### 2.4.4 Data Quality Assurance
Generally speaking, AfriAlliance will follow the quality assurance guidelines
provided in Deliverable 6.3 (Project Information Management strategy) to
ensure proper quality of data. With particular reference to quality of
metadata, the ISO19157 standard guidelines will be followed.
### 2.4.5 Availability in Time
In principle, data will be available indefinitely.
# 3 Allocation of Resources for Data Management
## 3.1 Data Management Costs
Costs related to generating, storing, and distributing data are properly
taken into consideration in the respective Work Packages where the data
specified in Table 2 will be collected.
In WP1, data generated from the network analysis as well as Action Groups
results are covered by both staff time and other direct costs directly
allocated to those activities.
In WP2, data generated from interviews, workshops and surveys are covered by
both staff time and other direct costs directly allocated to those activities.
Dissemination material, which can be considered a particular subset of output
data in a CSA, has a specific budget line allocated to the respective WP
leader.
As regards data managed in WP4, Web Services and associated resources like
dissemination packages, and other production costs, have been allocated a
substantial budget (ref. DoA AfriAlliance for details).
## 3.2 Data Ownership
Ownership of data is largely determined by Work Package Ownership. A more
specific attribution of ownership is indicated in Table 1 above.
## 3.3 Long Term Preservation Costs
Long term preservation costs relate to costs for server/hosting, and time for
updating data formats. Those costs are being included in the concerned WP
budgets.
# 4 Data Security
Data Security aspects are covered in D7.1-3 (ethics).
# 5 Other
The AfriAlliance Data Management Plan follows largely the guidelines (and
template) recommended by Horizon 2020 in the framework of the Open Data
programme of the European Commission as well as the GDPR regulations as of 25
May 2018.
In addition, it is worth mentioning that any additional internal guidelines in
terms of Information Management practices and IPR policies that are currently
followed (or will be approved in the future) in the Coordinator’s organization
(IHE Delft) will be integrated, as appropriate, as part of the plan, after
prior discussion and agreement with the consortium members. Equally, any
regulations or policies prevailing in any organization of the consortium, and
any additional external practice/policy/standard that becomes relevant for
the plan, will be integrated in further revisions of the plan.
**0940_InnoChain_642877.md**
# 2.1. Objective and Approach to Data management
Innochain aims to promote and expand models for inter-sector collaboration
and feedback. The activities are positioned between research in academia and
practice. Innochain furthermore aims to improve communication across
disciplines by developing new interdisciplinary methods that integrate
design-led simulation. This position at the crossing between disciplines and
sectors in the building profession places Innochain and the 15 ESR projects
in an interesting situation, where not only the results but also the
underlying datasets are of interest to a potentially large group of
scientists and professionals. This group will naturally be widespread in
terms of discipline, profession, location and cultural background. Hence the
approach towards the publication of datasets has to be open, easy and
sustainable. Access to datasets is also of interest for reasons internal to
Innochain, as the publication of datasets will allow for synergies and
pick-up between the Innochain projects.
Innochain follows the internationally established FAIR principles 1 –
findable, accessible, interoperable and reusable. In the following chapters
we analyse the datasets produced in Innochain and describe how the FAIR
principles are implemented in the project.
Innochain takes place in a network of many industrial and academic partners,
and the data used is in part the property of partners or beneficiaries, or
could disclose their business secrets to third parties. Such data can hence
not be shared with the public. The same applies to data that is of commercial
interest or could lead to new intellectual property. At the same time,
researchers need to be able to evaluate the results and base their own
research on the knowledge and data generated in Innochain.
User generated analytics, if collected, will not be
shared unless it is strictly necessary, and then only in
a reduced and anonymised variant with all personal
details and other sensitive information stripped.
Innochain places the final decision on which datasets to publish in the hands
of the researcher in charge. The Data Management Plan and the implementation
of the related infrastructure within the projects provide guidelines and
tools for deciding which datasets should be published, and how.
# 2.2. Datasets in Innochain
The Innochain project, by its nature, covers a diverse
range of projects with a diverse range of data
requirements and data outputs. To put this into
context, some projects actively use photography as a
documentation and analysis technique, while others rely
exclusively on written software. All the expected data
types are enumerated in the table below.
Of note, specific data formats have specific archival needs and require
different approaches to storage and retrieval. Nevertheless, an index of all
datasets will be centralised and made available on the project’s webpage
(see Section 3.1).
Given the diverse nature of archived material, it is
difficult to estimate the final size of the complete
dataset. For example, images and video recordings will
take up much more space than code repositories. We cautiously estimate that
the total will fit within a 100 GB – 1 TB bracket.
The data is mostly generated by the ESRs themselves,
and perhaps in some specific cases, is collected through
questionnaires and other polling mechanisms from various
case studies. The actual future usability of this data
depends on the datasets themselves, but one can expect
future experiments and research basing itself on the
provided datasets, as well as offering the possibility
of reproducing and verifying research results by other
parties.
The following table classifies the main types of data
that the researchers have either already produced or
expect to produce as part of their individual
projects. The three columns provide a summary of the
actual data, the specific format file the data comes
into and, most importantly its utility.
The utility of data is a classifier that we have
defined by weighing several aspects, namely:
* Does this data set help in reproducing research outputs?
* Can different experiments be built up on this data set?
* Does this data set contain sensitive information, and if yes, how easy is it to anonymise it?
* Does this data set require proprietary software/hardware (not open source) to be used, and if yes, how easy is it to transform it into a format that would allow free and open source software to be used?
The utility column provides a summary of the above,
which has been defined in collaboration with a
representative cross section of the researchers involved
in the Innochain project.
We will prioritise the archival and the indexing of datasets marked with the
“high” qualifier. Other datasets will be made available as well, even if they
are in proprietary formats or of “medium” or “low” utility, if deemed
necessary by the researcher. Sensitive data will not be released unless
properly anonymised or an agreement is reached with the parties involved
(see Section 3.2 for a more in-depth explanation of the management of
sensitive data and IP).
<table>
<tr>
<th>
**Data** **Description**
</th>
<th>
**Data** **Format**
</th>
<th>
**Utility**
</th> </tr>
<tr>
<td>
3d scan data
</td>
<td>
.fls, .xyz, .xyb, .e57, .las, .ptx,
.fws
</td>
<td>
**High** , for other
experiments/replicability
</td> </tr>
<tr>
<td>
3d models (mesh)
</td>
<td>
.obj, .stl, .fbx, .vrml
</td>
<td>
**Medium** , for other experiments/replicability
</td> </tr>
<tr>
<td>
3d models (NURBS)
</td>
<td>
.iges, .step
</td>
<td>
**Medium** , for other experiments/replicability
</td> </tr>
<tr>
<td>
3d models (proprietary)
</td>
<td>
.3dm, .blend
</td>
<td>
**Low** , for other
experiments/replicability
</td> </tr>
<tr>
<td>
G-code
</td>
<td>
.nc, .mod
</td>
<td>
**Low**, highly machine-specific. Cannot be used for
reproduction of results
</td> </tr>
<tr>
<td>
Scripts
</td>
<td>
.py, .sh, .bat, .gh
</td>
<td>
**Medium** , may be useful for reproduction of results, but
can be also environment specific.
</td> </tr>
<tr>
<td>
Software Code
</td>
<td>
code repositories (.git)
</td>
<td>
**High** , useful for both other enterprises, future
experiments and replicability
</td> </tr>
<tr>
<td>
Database files
</td>
<td>
.xml, .json, .csv, .vtk
</td>
<td>
**High** , may contain highly
confidential and personal information
</td> </tr>
<tr>
<td>
Notes, and Temporal files
</td>
<td>
.txt, .xml, .json
</td>
<td>
**Low** , useful only for following procedural steps
</td> </tr>
<tr>
<td>
Simulation Datasets and Config Files
</td>
<td>
.csv, .vtk
</td>
<td>
**High** , useful for
reproducing experimental
steps and for using different analysis techniques in other
experiments
</td> </tr>
<tr>
<td>
Survey Data
</td>
<td>
.csv
</td>
<td>
**High** , may contain personal information. Useful for
reproducing results and informing future research.
</td> </tr> </table>
# 3. FAIR data
## 3.1. Making data findable, including provisions for metadata
Innochain’s aim, under the scope of the Horizon 2020
open research data guidelines, is to maximise the
reusability, impact and reach of the open data. An
important aspect here is the discoverability. The use of
Zenodo as the main repository of data ensures
adhesion to well established standards of data
identification and discovery. Zenodo assigns unique Digital Object
Identifiers (DOI) and rich metadata (compliant with DataCite’s Metadata
Schema 2 ) to every record published on the platform, and indexes this
metadata at both Zenodo and DataCite servers to make it searchable.
Further to Zenodo’s provisions for discoverability,
Innochain will also maintain a central index of all
published datasets at the Innochain website to make
the data more easily discoverable by researchers that
relate to the projects. This central index will link
to the Zenodo repository and cite the DOI of each
dataset. The metadata that is produced by Zenodo 3 is also harvestable, using
the Open Archives Initiative’s Protocol for Metadata Harvesting (OAI-PMH),
making it retrievable by search and discovery services outside of Zenodo,
DataCite and Innochain’s website.
2 https://schema.datacite.org/
3 https://www.openarchives.org/OAI/openarchivesprotocol.html
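To indicate what such harvesting can look like in practice, the sketch below queries Zenodo’s public OAI-PMH endpoint for Dublin Core records and prints the record titles. The endpoint URL and verbs follow the OAI-PMH standard; error handling and paging via resumption tokens are omitted for brevity, so this is an illustrative sketch rather than a complete harvester.

```python
# Sketch: harvesting Dublin Core metadata from Zenodo via OAI-PMH.
# Endpoint and verbs follow the OAI-PMH standard; paging is omitted.
import requests
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://zenodo.org/oai2d"
DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

response = requests.get(OAI_ENDPOINT, params={
    "verb": "ListRecords",
    "metadataPrefix": "oai_dc",  # Dublin Core, supported by Zenodo
}, timeout=30)
root = ET.fromstring(response.content)
for title in root.iter(DC + "title"):
    print(title.text)
```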
Search keywords that increase the discoverability of the datasets will also
be used, in close relation to the search keywords used on the Innochain
website. Further to keywords, Zenodo allows datasets to be associated with
specific grants, and thus all published datasets will be linked to the
Innochain EC Grant (642877) to promote the dissemination of all the
Innochain projects.
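Because Zenodo also exposes a public REST search API, records linked to the grant could in principle be retrieved programmatically. The sketch below is an assumption-laden illustration: the `grants.code` query field syntax is assumed from Zenodo’s public search API and may need adjusting.

```python
# Sketch: searching Zenodo's REST API for records linked to the Innochain grant.
# The query field syntax is an assumption based on Zenodo's public search API.
import requests

resp = requests.get(
    "https://zenodo.org/api/records",
    params={"q": 'grants.code:"642877"', "size": 10},
    timeout=30,
)
for hit in resp.json().get("hits", {}).get("hits", []):
    print(hit["metadata"]["title"], "->", hit.get("doi", "no DOI"))
```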
The datasets produced by Innochain may in some cases contain more complex
data, such as, for example, the
case of software code or scripts. In these cases,
file-level metadata will be generated whenever possible to
make internal data structures more easily identifiable
and discoverable. Naming conventions are also inherent to
management of such complex data structures and each
project will employ project-specific naming conventions on
the published datasets to promote the discoverability of
that data. Wherever standard naming conventions exist, such
as in programming languages, they will be followed.
Software code that is openly published through Innochain will be released
through GitHub repositories. GitHub and Zenodo offer a seamless integration
(see Section 3.2) which facilitates the version control and maintenance of
software code along with the discoverability and accessibility of an open
data repository. Thus, each dataset of software code will reside as a
repository at GitHub but will be given a unique DOI and rich metadata through
the Zenodo platform, to be made discoverable and searchable as a dataset.
## 3.2. Making data openly accessible
As a general rule, Innochain strives to provide open
and easy access to datasets where possible. Most data
that is published (i.e. which supports or is cited
in publications from the research) will be made openly
available as default. Certain datasets will not be able
to be shared because of partner NDAs or because they
come from proprietary / industry sources. In these
cases, datasets will either be abstracted, anonymized,
or withheld, depending on the nature of the data and
the wishes of the owner of the data. These will
be handled on a case-by-case basis but the research
will strive to publish the involved data openly or
use datasets which are not restricted.
To this end, data will be collected and uploaded to
two main online, publicly-accessible resources. Github will
be used as the primary means of sharing source code
from the Innochain projects and Zenodo will be used
to host larger datasets such as models, point clouds,
simulation datasets, etc. These will be described and
linked to from the main Innochain project website (innochain.net). Data
organization, description, and supporting
documentation will reside on the Innochain website, with
direct links to the datasets on either of the two
storage platforms. Datasets will be uploaded to the
Zenodo open-access research data repository. This will
ensure open and fair access, and longevity of the datasets beyond the
Innochain timeframe. Innochain already possesses a Github account which is
being actively used by the research projects.
The reasons for using Github and Zenodo are integration
and openness. The integration of GitHub with Zenodo
allows code and software to be citable and easily
found. Both are well-established online repositories with
built-in redundancy and high usage, and can be expected
to remain operational for the 5-year period during which
these datasets will be made available. Their high visibility
and familiarity to the general public and community
members means that easy access to the data is
guaranteed. In the case of Github, this also allows
derivative projects and code forking to happen within
the same platform. Both storage solutions are also
accessed primarily through web browsers and popular
version control protocols such as Git. Datasets and
other uploads are enriched with descriptions, keywords,
author information, and other metadata, enabling them to
be found easily.
The relevant formats and their archival solution are
listed in the table below:
<table>
<tr>
<th>
**Data** **Description**
</th>
<th>
**Data** **Format**
</th>
<th>
**Archival** **Solution**
</th> </tr>
<tr>
<td>
3d scan data
</td>
<td>
.fls, .xyz, .xyb, .e57, .las, .ptx,
.fws
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
3d models (mesh)
</td>
<td>
.obj, .stl, .fbx, .vrml
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
3d models (NURBS)
</td>
<td>
.iges, .step
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
3d models (proprietary)
</td>
<td>
.3dm, .blend
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
G-code
</td>
<td>
.nc, .mod
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
Scripts
</td>
<td>
.py, .sh, .bat, .gh
</td>
<td>
github.com
</td> </tr>
<tr>
<td>
Software Code
</td>
<td>
code repositories (.git)
</td>
<td>
github.com
</td> </tr>
<tr>
<td>
Database files
</td>
<td>
.xml, .json,
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
Notes, and Temporal files
</td>
<td>
.txt, .xml, .json
</td>
<td>
Zenodo
</td> </tr>
<tr>
<td>
Simulation Datasets
</td>
<td>
.csv, .vtk
</td>
<td>
Zenodo
</td> </tr> </table>
In the same way, datasets will be converted to open
formats as much as possible, except where it may
result in a degradation or limitation of the dataset’s
use. Examples include proprietary formats for specific,
specialist programs. It is therefore inevitable that some
specialist knowledge will be needed to access and use
the included data, since most projects deal with very
specific knowledge domains and accompanying specialist
software. However, links to or descriptions of the required software and
knowledge could be provided with the dataset as a starting point. Software
itself will be included in the data repository as far as the licensing rules
allow for it. In cases where this is not possible, we will instead provide
descriptions of the software, data formats, software versions and links to
the software vendor. However, as described in chapter _2 Data Summary_, the
data formats used in Innochain are in most cases industry standard; global
use, wide support in other software and hence longevity of the formats can
be expected.
In terms of licensing, the data will be released under a Creative Commons
license (_https://wiki.creativecommons.org/wiki/CC0_), in line with the aim
of providing publicly accessible open data. A data access committee will
therefore not be needed.
Securing the right to openly publish data that may
come from industrial partners or other proprietary
sources in an open way will be the responsibility of
each researcher, otherwise data that is proprietary or
belonging to a third party will not be published.
These conditions for access and licensing will be
outlined on the Innochain website. Each dataset will
also carry the relevant conditions and terms of access, if applicable. In
general, however, Zenodo and GitHub both allow unrestricted access (without
the need for an authentication procedure) to the datasets that they host.
Means of identifying the accessor of the data have
not been discussed. Since the Plan aims at democratic
and open access, this is not considered a high
priority.
## 3.3. Making data interoperable
The assessment of the data produced within Innochain shows that two general
classes of data exist, each with individual challenges and approaches
towards interoperability:
### Data in standard formats of the building industry
The data produced in Innochain follows established standards and formats.
These are either open source and hence well documented, or so widespread in
communities of researchers and professionals in architecture that an
understanding and interoperability of the data is certain. This applies to
current as well as future practice. Metadata to identify the file type and
the origin of the data is implemented in the file headers. The data is
generated by well-established software, such as Rhino3D or Sofistik, which
embeds the metadata automatically in the file header.
Some of the formats are created in programs which make use of dependencies.
In some cases, such as the popular McNeel Rhino / Grasshopper software
environment, this amounts to an extensive set of plugins and libraries. This
constellation is problematic, as plugins and libraries change quickly, and
after a short while it is no longer possible to recreate the original setup.
In Innochain, dependencies are hence packaged, and the resulting zip with all
dependencies is placed alongside the original file.
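A minimal sketch of such a packaging step is shown below; all file and plugin names are hypothetical examples, not actual Innochain project files.

```python
# Sketch: packaging a Grasshopper definition together with its plugin
# dependencies into a zip placed alongside the original file.
# All paths below are hypothetical examples.
import zipfile
from pathlib import Path

def package_with_dependencies(original: Path, dependencies: list[Path]) -> Path:
    """Zip the original file and its dependencies next to the original."""
    archive = original.with_suffix(original.suffix + ".deps.zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(original, arcname=original.name)
        for dep in dependencies:
            zf.write(dep, arcname=f"dependencies/{dep.name}")
    return archive

# Hypothetical usage:
package_with_dependencies(
    Path("facade_study.gh"),
    [Path("plugins/kangaroo.gha"), Path("libs/geometry_utils.dll")],
)
```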
### Code and data in novel formats
Interoperability of code and data in novel formats, such as the formats
generated with Speckle 4 , the open source project which emerged in ESR 5,
follows well-established practices of software engineering in terms of
metadata. It is documented using the OpenAPI Standard 5 and is both machine-
as well as ‘human’-readable.
## 3.4. Increase data reuse (through clarifying licences)
Wherever possible, data will be made open and freely
available to promote dissemination and reuse. If
confidentiality or privacy issues exist, the data may
be protected with licenses of minimum possible restriction,
such as non-commercial or non-derivative creative commons.
Data will be made available as it is created to
promote reuse internally and externally both within and
after the course of the project. If a project is
seeking patent or a publication is pending, the data
will be made available as soon as the patent
application is filed or the publication is published.
In general, our intent is to maximise the dissemination reach and reusability
of the projects; we therefore aim to impose minimum restrictions on the data
produced. The least restrictive licenses, such as Creative Commons
Attribution (CC-BY 4.0), will be used where possible.
Innochain will make provisions for data storage and
maintenance on its website for at least 5 years from
the end of the project, with a possibility to
extend that period if the data is found to
be useful to the public. All datasets that are
published through Zenodo will follow the repository’s
retention period which is at least 20 years from now.
4 https://speckleworks.github.io/SpeckleOpenApi/#schemas
5 https://www.openapis.org/
It is in the interest of every researcher to
publish high quality data sets, so we expect each
dataset to be quality assured by the project that
generates the data. We are initially not planning to
centrally assure the quality of the disseminated data
sets, but we will evaluate the quality assurance provisions
in the next revision of the data plan.
# Allocation of resources
The Innochain project has set aside sufficient funds
for covering all the aspects regarding data storage and
ensuring its long term accessibility. The following costs
are estimated and presented only as guidelines; nevertheless, they provide
sufficient insight into what is to be expected:
* Github.com: code repository. **Cost**: Free
* Zenodo (will also mirror Github repositories): data repository. **Cost**: Free
* Innochain website hosting. **Cost**: €6.60/month. **Total**: €400 (for five years)
* Innochain.net domain name registration. **Cost**: €20/year. **Total**: €100 (for five years)
Uploads, classification and indexing of datasets are the
responsibility of the individual researchers. This will
nevertheless follow the prescribed classification procedures
offered by the Zenodo repository service, thus ensuring
a clear taxonomy and both machine and human readable
metadata.
# 5. Data security
Sensitive (personal) data will not be shared, and, as such, security will be
focused only on the intact preservation of the datasets and their
anonymisation if necessary. Data security shall be provided by
well-established and up-to-date web technologies and is the responsibility
of the specific service providers (i.e., Github/Zenodo).
Github provides an indefinite long-term availability of
the code repositories. Furthermore, Zenodo guarantees a 20-year retention
period. The indexing, searchability and discovery of these resources on the
innochain.net website will be guaranteed for five years from the end of the
project by paying the domain name and hosting costs upfront.
**0943_ELENA_722149.md**
**ELENA**
Low energy **ELE**ctron driven chemistry for the advantage of emerging **NA**no-fabrication methods
**Data Management Plan**
**ORDP – Open Research Data Pilot**
March 31st 2017 (M06)
# DATA collection and storage
ELENA is a research and training based project that will develop new research
data through the conduct of 15 collaborative PhD research projects. There are
three broad categories of data that will be acquired during the period of the
grant:
1. Data derived from laboratory-based instruments.
2. Data derived from theoretical studies, computational models and simulations.
3. Data from commercial facilities.
Each of these categories has its own types of data, different reduction
procedures and archiving requirements.
Categories (i) and (ii) are predominantly expected to provide fundamental data
that may be published in open access publications whilst data derived in the
third category using commercial instrumentation may be subject to some
restrictions due to commercial sensitivities and IPR issues.
_Data derived from laboratory-based instruments._ There are many different
types of data generated by the wide range of laboratory instruments available
across the ELENA Consortium. All have their own associated, often written
in-house, data reduction and processing pipelines, but the raw data are
always preserved and archived.
Raw data from the instruments are produced as ASCII files. These are usually
read into a spreadsheet program, such as Excel, for reduction, from where they
are transferred into plotting or other display software (SigmaPlot etc.). The
exception to this protocol occurs when specialist control software associated
with the instrument takes in raw data and reduces it internally. The raw data
however, are still available for off-line manipulation. Software is generally
Excel, or other proprietary software that can read ASCII files. Because the
raw data are ASCII files, there are no issues associated with reading them.
Raw data are stored on the PC that controls each instrument. These are
regularly backed up to host server systems. Data processing is only applied to
copies of the raw files.
Records of all analyses are preserved by a combination of written and
computer-generated records for each instrument. There is usually no
proprietary period associated with data derived from laboratory-based
instrumentation. Data are preserved for a minimum period as required by local
protocols; indeed, the raw data are usually preserved indefinitely, even after
staff/PDRA/students have left the host institution. During the ELENA project a
summary of such protocols for each Institution will be assembled and a guide
to good practice prepared between the partners.
There is usually a large amount of context information associated with any
specific measurement and therefore data that have value to others are
generally those that have been reduced through, e.g. background corrections,
calibration factors and standardisation and have a raft of supporting
information. Documentation of the data reduction processes and relevant
contextual information is usually maintained alongside the data, as text
files. Such reduced data are stored by individuals on their desktop PC and
usually automatically backed up to an institutional server system.
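As a purely illustrative sketch of such a documented reduction step (the file names, two-column layout and calibration factor are hypothetical, not taken from any specific ELENA instrument), applied only to a copy of the raw ASCII data and logged to an accompanying text file:

```python
# Illustrative reduction step: background correction and calibration applied
# to a copy of a raw two-column ASCII data file, with a text log kept
# alongside the reduced data. File names and factors are hypothetical.
import numpy as np

raw = np.loadtxt("raw_scan_001.txt")  # hypothetical columns: energy, counts
background = raw[:, 1].min()          # crude background estimate
calibration = 1.25                    # hypothetical calibration factor

reduced = raw.copy()                  # the raw file itself is never modified
reduced[:, 1] = (reduced[:, 1] - background) * calibration
np.savetxt("reduced_scan_001.txt", reduced)

# Keep the contextual information alongside the reduced data, as a text file.
with open("reduced_scan_001.log", "w") as log:
    log.write("source: raw_scan_001.txt\n")
    log.write(f"background subtracted: {background}\n")
    log.write(f"calibration factor: {calibration}\n")
```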
_Data derived from theoretical studies, computational models and simulations._
Data derived from theoretical studies, computational models and simulations
follows many of the same procedures as that derived from laboratory
instrumentation. Such data is often generated on dedicated workstations
accessing institutional computing services, though data may also be generated
through access to larger external computational facilities (supercomputers
and cloud-based services). Such studies may lead to the generation of large
amounts of metadata and many generations of models, simulations and
theoretical formalisms, not all of which are traditionally archived; rather,
records are maintained of the input data that allow such models, simulations
and theoretical formalisms to be recreated.
Product data are stored directly on the PC/workstation that initiates the
theoretical study, computational model or simulation, but duplicates are
commonly stored on the accessed computational facility (cluster,
supercomputer). These are regularly backed up by server systems.
files. There is usually no proprietary period associated with data derived
from theoretical, computational models and simulations. Data are preserved for
a minimum period as required by local protocols, indeed the raw data are
usually preserved indefinitely, even after staff/PDRA/students have left the
host institution. During the ELENA project a summary of such protocols for
each Institution will be assembled and a guide to good practice prepared
between the partners.
When new programmes are written and/or derived, these are also archived both on
the PC/Workstation and the enabling facility (Cluster/supercomputer). It is a
requirement of most institutions that relevant contextual information is
maintained alongside the programmes together with any source software.
Similarly, where data is produced using different versions/generations of
software, older (replaced/upgraded) versions of such software are often
archived. Protocols for sharing good practice in archiving software and
programmes as well as the data produced will be discussed within the ELENA
consortium.
As for laboratory-derived ‘raw’ data, generated data may be subsequently
analysed by being read into a spreadsheet program, such as Excel, from where
they are transferred into plotting or other display software (SigmaPlot etc.).
Processed data are stored on the PC/workstation on which analysis is
performed. These data are regularly backed up to host server systems.
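As a scripted counterpart to the Excel-to-SigmaPlot route just described (the file and column names are hypothetical), the same load-and-plot step might look like this in Python:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load a reduced data set exported from the instrument as CSV.
df = pd.read_csv("scan_042_reduced.csv")

fig, ax = plt.subplots()
ax.plot(df["energy_eV"], df["cross_section"], marker="o", linestyle="-")
ax.set_xlabel("Electron energy (eV)")
ax.set_ylabel("Cross section (arb. units)")
ax.set_title("Scan 042 (reduced data)")
fig.savefig("scan_042.png", dpi=300)
```

A script of this kind can itself be archived alongside the processed data, which keeps a record of exactly how each plot was produced.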
_Data from commercial facilities._ Whilst data management procedures for data
collected on commercial instrumentation are similar to those described above
for laboratory instrumentation and for theoretical studies, computational
models and simulations, it is recognised that commercial companies may
develop their own software and data analysis tools that are not openly
available. Similarly, collected and derived data may be commercially
sensitive, subject to IPR and/or subject to proprietary periods. Protocols
and procedures for accessing, storing and disseminating such data will be
outlined where necessary/appropriate in ESR projects, including secondments.
# DATA Access
The ELENA project shall facilitate open access to all of its generated data
except where commercially sensitive or declared IPR issues apply. This is in
accord with the ELENA dissemination and outreach plan and the ELENA
Memorandum of Understanding for the ‘Promotion of commercial exploitation’ of
results.
Data will be published through recognised outlets including journals,
conference abstracts, proceedings, reviews and books. These published data
are anticipated to be analysed, reduced product data and, where practical,
will be produced in accordance with open access protocols. Such data may also
be stored in consortium members’ repositories (several members have on-line
repositories of published work from which freely-available records of all
published work, including unformatted versions of manuscripts prior to final
publication, are downloadable). In addition, the ELENA website
_https://elena-eu.org_ will build a publicly available data repository that
will list tables of data that have been published (including on-line
supporting material), linking to the published articles.
The raw data will not be directly accessible without prior request, because
each set of data has its own custom-designed pipeline of reduction,
calibration and standardisation; however, the consortium may provide raw and
processed data to interested parties upon reasonable request.
One of the issues facing the curation and archiving of reduced data is
keeping a record of the processing of the raw data, especially if the data
have been acquired as part of a collaboration or consortium and may therefore
have been generated in several different institutions. Procedures for
collating such reduced data on the basis of named ESR projects will be
explored during the ELENA project.
_Data Sharing._ Data (raw and processed) may be reused for further research
and analysis upon satisfying criteria agreed by the members. Data transfer
between consortium partners to further the conduct of ESR projects and to
support secondments and training is expected. In the event of any concerns or
declared conflicts of interest between members, the ELENA Supervisory Board
shall be responsible for resolving such issues in accord with the managerial
processes declared in the Grant Agreement.
# DATA Management resources
## Support for Data Management
At the present time, the ELENA project does not provide any additional
resources for data management and access, as these are included as part of
the ELENA consortium members’ infrastructure. Each of the ELENA consortium
members is developing its Research Data Management capabilities in line with
new procedures and protocols for data management being enacted at national
and international level; for example, many are based on guidelines
established by the Digital Curation Centre ( _http://www.dcc.ac.uk/_ ).
Several members (particularly HEIs) have a dedicated Research Data Manager
who advises their staff and students.
As part of their training, ELENA ESR students will be lectured on _Research
Data Management_ , and part of this training will involve completion of a
Data Management Plan (DMP), again following the guidance developed by the
Digital Curation Centre ( _https://dmponline.dcc.ac.uk/_ ).
# DATA Security
All raw data are secured on PC/workstation/cluster/supercomputer data
archives, which are regularly backed up according to host institutional and
facility data management protocols and processes. In many cases, raw data are
preserved indefinitely, even after staff/PDRA/students have left the
institution; indeed, many of the consortium institutions ensure that when a
member of staff, a PDRA or a student leaves, their data have been archived
and curated appropriately, and this will be detailed to ELENA-employed ESRs.
Written records (laboratory notebooks, online notebooks etc.) are maintained
in institutional repositories, and nominated managerial staff (e.g. Directors
of Research) have access to all datasets maintained on the department and
university servers.
Analysed, published data are archived both by the publishers and in
institutional repositories and databases, as described in section 2 above,
ensuring long-term data curation and storage.
# ETHICAL ASPECTS
None of the data to be collected and/or analysed in the ELENA programme is
subject to any ethical issues as defined in the Grant Agreement.
# ADMIN DETAILS
**Project Name** : MASS spectrometry TRaining in Protein Lipoxidation ANalysis
for Inflammation
**Principal Investigator / Researcher:** Corinne Spickett
**Funder:** European Commission’s REA with the H2020 MSCA
**Institution:** Aston University
# DATA COLLECTION
**What data will you collect or create?**
Liquid chromatography mass spectrometry (LC-MS) data, including fragmentation
(MS/MS) data as raw data in the proprietary vendor format and as derived peak
lists in a standard format.
SDS-PAGE and western blots for protein separation and oxidation as image
files.
Activity assays for purified and oxidant-treated enzymes.
Chemical analytical data.
**How will the data be collected or created?**
LC-MS/MS analyses
SDS-PAGE and western blotting
Spectrophotometric or fluorimetric assays
HPLC assays
Chemical analysis as appropriate to the samples
# DOCUMENTATION AND METADATA
**What documentation and metadata will accompany the data?**
Sample name and treatment type, basic methodological information as
appropriate for experiments in vitro.
For clinical samples, only the condition, severity and sample handling
information, without any patient or personal information.
For MS data, the metadata will be reported according to the applicable
standards and controlled vocabularies of the established Human Proteome
Organization’s Proteomics Standards Initiative (HUPO PSI).
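One benefit of adhering to these PSI standards is that third parties can read the standardised files back programmatically. A sketch, assuming the open-source pyteomics library and a hypothetical mzML (a PSI standard format) copy of a raw vendor file:

```python
from pyteomics import mzml  # open-source reader for the PSI mzML standard

# Hypothetical file name: the standard-format copy of a raw vendor file.
with mzml.read("sample_01.mzML") as reader:
    for spectrum in reader:
        if spectrum.get("ms level") == 2:       # fragmentation (MS/MS) scans
            mz = spectrum["m/z array"]
            intensity = spectrum["intensity array"]
            print(spectrum["id"], len(mz), "peaks")
```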
# ETHICS AND LEGAL COMPLIANCE
**How will you manage any ethical issues?**
Ethical issues relating to the analysis of clinical samples may arise in the
second half of the project. These will be handled by anonymising any data
from patient and volunteer samples to ensure that they cannot be linked to
individuals in any way; such data will only be made publicly available if
this is approved by the relevant ethics committees. Data that cannot be
separated from personal data or clinical records will not be made publicly
available.
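A minimal sketch of the kind of one-way pseudonymisation that supports this; the salt handling, field names and code format are illustrative assumptions, not the project's actual procedure:

```python
import hashlib
import secrets

# A study-wide secret salt, generated once and kept separately from the data
# (shown inline here only for the sketch); without it the codes cannot be
# regenerated or reversed.
SALT = secrets.token_hex(16)

def pseudonymise(patient_id: str, salt: str = SALT) -> str:
    """Replace a patient identifier with an irreversible study code."""
    digest = hashlib.sha256((salt + patient_id).encode("utf-8")).hexdigest()
    return "SUBJ-" + digest[:12]

record = {"patient_id": "hospital-12345", "condition": "inflammation", "severity": "moderate"}
shared = {"subject_code": pseudonymise(record.pop("patient_id")), **record}
print(shared)  # contains the study code but no direct identifier
```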
**How will you manage copyright and Intellectual Property Rights (IPR)
issues?**
This will be handled by Aston University’s legal team, if applicable.
IPR would be divided between the applicants and researcher, according to
intellectual input. Access to MS data that is to be disseminated via
established online repositories (ProteomeXchange for proteomics data, and
MetaboLights for small molecule data) is to be free to all users, as per the
licensing policy of the European Bioinformatics Institute. See also below.
The green open access route is likely to be used for any publication of the data.
# STORAGE AND BACKUP
**How will the data be stored and backed up during the research?**
Data that will be deposited in public repositories will be stored and backed
up in these repositories. In addition, and exclusively for data that falls
under ethical or privacy regulations, the data will be stored on local
infrastructure at the site of acquisition.
**How will you manage access and security?**
Data will be on secure Aston University computers and at this stage will only
be accessed by members of staff. After manuscript submission and during peer
review, data that are not subject to ethical or privacy rules will be
privately shared with the journal editor and anonymous peer reviewers through
the established public repositories. After publication of the associated
manuscript, all data in established public repositories will become publicly
available.
# SELECTION AND PRESERVATION
**Which data are of long-term value and should be retained, shared, and/or
preserved?**
Currently, all MS data have intrinsic long-term value, as evidenced by
several published studies based on data mining and/or re-analysis of public
data sets. Moreover, the data sets to be acquired during this project will be
of considerable interest, as oxidized biomolecules (peptides and small
molecules) are not yet well represented in these repositories.
**What is the long-term preservation plan for the dataset?**
The publicly available data will be disseminated through the established
repositories. Copies of the data, as well as data that are subject to ethical
and privacy regulations, will also be archived locally for at least 7 years.
# DATA SHARING
**How will you share the data?**
MS data sharing will happen through the established, standard repositories
(ProteomeXchange for proteomics data; MetaboLights for metabolite data) hosted
at the European Bioinformatics Institute (EMBL-EBI). Exceptions apply to data
sets that cannot be made publicly available due to applicable ethical or
privacy regulations.
All data that is deposited in the abovementioned, established repositories
will be publicly accessible without restrictions for re-use as per the
licenses employed by EMBL-EBI for all data in its public repositories.
**Are any restrictions on data sharing required?**
Possibly, if patentable compounds or materials are produced. Note that these
potential restrictions are compatible with the free access to the data
deposited in public repositories because a specific clause in their licenses
states that users of the data should ensure that they do not violate any
patent rights held by the original data submitter.
# RESPONSIBILITIES AND RESOURCES
**Who will be responsible for data management?**
Prof Andrew R. Pitt
**What resources will you require to deliver your plan?**
Facilities for storage of large data sets at Aston University. Submission
support from the relevant public data repositories. Software to convert the
data into standardized form and to provide the required metadata annotation.
Software to aid the submission of large volumes of data to the repository.
6\. Addendum 2 – Data Management Plan for P2 - UAVR
# ADMIN DETAILS
**Project Name** : MASS spectrometry TRaining in Protein Lipoxidation ANalysis
for Inflammation
**Principal Investigator / Researcher:** Pedro Domingues
**Funder:** European Commission’s REA with the H2020 MSCA
**Institution:** Aveiro University
# DATA COLLECTION
**What data will you collect or create?**
Liquid chromatography mass spectrometry (LC-MS) data, including fragmentation
(MS/MS) data as raw data in the proprietary vendor format and as derived peak
lists in a standard format.
Western blots for protein oxidation as image files.
Thin layer chromatography for lipid oxidation as image files.
Inflammatory panel of samples.
**How will the data be collected or created?**
LC-MS/MS analyses
Western blotting
Spectrophotometric assays
Chemical analysis as appropriate to the samples
# DOCUMENTATION AND METADATA
**What documentation and metadata will accompany the data?**
Sample name and treatment type, basic methodological information as
appropriate.
For MS data, the metadata will be reported according to the applicable
standards and controlled vocabularies of the established Human Proteome
Organization’s Proteomics Standards Initiative (HUPO PSI) and Lipid Maps.
# ETHICS AND LEGAL COMPLIANCE
**How will you manage any ethical issues?**
No ethical issues that we are aware of.
**How will you manage copyright and Intellectual Property Rights (IPR)
issues?**
This will be handled by Aveiro University’s legal team, if applicable.
IPR would be divided between the applicants and researcher, according to
intellectual input. Access to MS data that is to be disseminated via
established online repositories (ProteomeXchange for proteomics data, and
MetaboLights for small molecule data) is to be free to all users, as per the
licensing policy of the European Bioinformatics Institute. See also below.
The green open access route is likely to be used for any publication related
to the data.
# STORAGE AND BACKUP
**How will the data be stored and backed up during the research?**
Data that will be deposited in public repositories will be stored and backed
up in these repositories. In addition, and exclusively for data that falls
under ethical or privacy regulations, the data will be stored on local
infrastructure at the site of acquisition.
**How will you manage access and security?**
Data will be on secure Aveiro University computers and at this stage will only
be accessed by members of staff. After manuscript submission and during peer
review, data that are not subject to ethical or privacy rules will be
privately shared with the journal editor and anonymous peer reviewers through
the established public repositories. After publication of the associated
manuscript, all data in established public repositories will become publicly
available.
# SELECTION AND PRESERVATION
**Which data are of long-term value and should be retained, shared, and/or
preserved?**
Currently, all MS data have intrinsic long-term value, as evidenced by
several published studies based on data mining and/or re-analysis of public
data sets. Moreover, the data sets to be acquired during this project will be
of considerable interest, as oxidized biomolecules (peptides, lipids and
small molecules) are not yet well represented in these repositories.
**What is the long-term preservation plan for the dataset?**
The publicly available data will be disseminated through the established
repositories. Copies of the data, as well as data that are subject to ethical
and privacy regulations, will also be archived locally for at least 2 years.
# DATA SHARING
**How will you share the data?**
MS data sharing will happen through the established, standard repositories
(ProteomeXchange for proteomics data; MetaboLights for metabolite data) hosted
at the European Bioinformatics Institute (EMBL-EBI). Exceptions apply to data
sets that cannot be made publicly available due to applicable ethical or
privacy regulations.
All data that is deposited in the abovementioned, established repositories
will be publicly accessible without restrictions for re-use as per the
licenses employed by EMBL-EBI for all data in its public repositories.
**Are any restrictions on data sharing required?**
Possibly, if patentable compounds or materials are produced. Note that these
potential restrictions are compatible with the free access to the data
deposited in public repositories because a specific clause in their licenses
states that users of the data should ensure that they do not violate any
patent rights held by the original data submitter.
# RESPONSIBILITIES AND RESOURCES
**Who will be responsible for data management?**
Dr Pedro Domingues
**What resources will you require to deliver your plan?**
Facilities for storage of large data sets at Aveiro University. Submission
support from the relevant public data repositories. Software to convert the
data into standardized form and to provide the required metadata annotation.
Software to aid the submission of large volumes of data to the repository.
7\. Addendum 3 – Data Management Plan for P3 - ULEI
# ADMIN DETAILS
**Project Name** : MASS spectrometry TRaining in Protein Lipoxidation ANalysis
for Inflammation
**Principal Investigator / Researcher:** Maria Fedorova
**Funder:** European Commission’s REA with the H2020 MSCA
**Institution:** Leipzig University
# DATA COLLECTION
**What data will you collect or create?**
Liquid chromatography mass spectrometry (LC-MS) data, including fragmentation
(MS/MS) data as raw data in the proprietary vendor format and as derived peak
lists in a standard format.
Western blots for protein oxidation as image files
Microscopy data as image files
Chemical analytical data
**How will the data be collected or created?**
LC-MS/MS analyses
Western blotting
Microscopy imaging
Chemical analysis as appropriate to the samples
# DOCUMENTATION AND METADATA
**What documentation and metadata will accompany the data?**
Sample name and treatment type, basic methodological information as
appropriate.
For MS data, the metadata will be reported according to the applicable
standards and controlled vocabularies of the established Human Proteome
Organization’s Proteomics Standards Initiative (HUPO PSI).
# ETHICS AND LEGAL COMPLIANCE
**How will you manage any ethical issues?**
Ethical issues relating to the analysis of clinical samples will be handled
by anonymising any data from patient and volunteer samples to ensure that
they cannot be linked to individuals in any way; such data will only be made
publicly available if this is approved by the relevant ethics committees.
Data that cannot be separated from personal data or clinical records will not
be made publicly available.
**How will you manage copyright and Intellectual Property Rights (IPR)
issues?**
This will be handled by Leipzig University’s legal team, if applicable.
IPR would be divided between the applicants and researcher, according to
intellectual input. Access to MS data that is to be disseminated via
established online repositories (ProteomeXchange for proteomics data, and
MetaboLights for small molecule data) is to be free to all users, as per the
licensing policy of the European Bioinformatics Institute. See also below.
The green open access route is likely to be used for any publication of the data.
# STORAGE AND BACKUP
**How will the data be stored and backed up during the research?**
Data that will be deposited in public repositories will be stored and backed
up in these repositories. In addition, and exclusively for data that falls
under ethical or privacy regulations, the data will be stored on local
infrastructure at the site of acquisition.
**How will you manage access and security?**
Data will be on secure Leipzig University computers and at this stage will
only be accessed by members of staff. After manuscript submission and during
peer review, data that are not subject to ethical or privacy rules will be
privately shared with the journal editor and anonymous peer reviewers through
the established public repositories. After publication of the associated
manuscript, all data in established public repositories will become publicly
available.
# SELECTION AND PRESERVATION
**Which data are of long-term value and should be retained, shared, and/or
preserved?**
Currently, all MS data have intrinsic long-term value, as evidenced by
several published studies based on data mining and/or re-analysis of public
data sets. Moreover, the data sets to be acquired during this project will be
of considerable interest, as oxidized biomolecules (lipids, peptides and
small molecules) are not yet well represented in these repositories.
**What is the long-term preservation plan for the dataset?**
The publicly available data will be disseminated through the established
repositories. Copies of the data, as well as data that are subject to ethical
and privacy regulations, will also be archived locally for at least 10 years.
# DATA SHARING
**How will you share the data?**
MS data sharing will happen through the established, standard repositories
(ProteomeXchange for proteomics data; MetaboLights for metabolite data) hosted
at the European Bioinformatics Institute (EMBL-EBI). Exceptions apply to data
sets that cannot be made publicly available due to applicable ethical or
privacy regulations.
All data that is deposited in the abovementioned, established repositories
will be publicly accessible without restrictions for re-use as per the
licenses employed by EMBL-EBI for all data in its public repositories.
**Are any restrictions on data sharing required?**
Possibly, if patentable compounds or materials are produced. Note that these
potential restrictions are compatible with the free access to the data
deposited in public repositories because a specific clause in their licenses
states that users of the data should ensure that they do not violate any
patent rights held by the original data submitter.
# RESPONSIBILITIES AND RESOURCES
**Who will be responsible for data management?**
Dr Maria Fedorova
**What resources will you require to deliver your plan?**
Facilities for storage of large data sets at Leipzig University. Submission
support from the relevant public data repositories. Software to convert the
data into standardized form and to provide the required metadata annotation.
Software to aid the submission of large volumes of data to the repository.
8\. Addendum 4 – Data Management Plan for P4 - UMIL
# ADMIN DETAILS
**Project Name** : MASS spectrometry TRaining in Protein Lipoxidation ANalysis
for Inflammation
**Principal Investigator / Researcher:** Giancarlo Aldini
**Funder:** European Commission’s REA with the H2020 MSCA
**Institution:** University of Milan
# DATA COLLECTION
**What data will you collect or create?**
LC-ESI-MS raw data generated by Thermo instruments.
Gel electrophoresis images acquired using a Molecular Imager VersaDoc (Bio-Rad).
Data analyses generated using GraphPad software and Proteome Discoverer.
**How will the data be collected or created?**
Data will be created and collected automatically by LC-ESI-MS instruments.
Data will be created by analysing raw data using data analysis software.
# DOCUMENTATION AND METADATA
All the activities and procedures will be recorded in a bound notebook which
will be signed daily by the scientist and lab manager. Raw data and analysed
data will be classified according to the day and time of data generation,
sample name and treatment type, basic methodological information as
appropriate.
# ETHICS AND LEGAL COMPLIANCE
**How will you manage any ethical issues?**
No ethical issues that we are aware of.
**How will you manage copyright and Intellectual Property Rights (IPR)
issues?**
This will be handled by the University of Milan’s legal team, if applicable.
IPR would be divided between the applicants and researcher, according to
intellectual input.
# STORAGE AND BACKUP
**How will the data be stored and backed up during the research?**
Public data will be uploaded, stored and backed up in repositories according
to their rules. Data that will not be made publicly available will be stored
on local infrastructure of the University of Milan and suitably backed up.
Data will also be stored for the full duration of MASSTRPLAN on the QNAP
systems available in the data room located in the lab managed by Giancarlo
Aldini.
**How will you manage access and security?**
Data will be stored on secure University of Milan computers and storage
systems. Dedicated QNAP backup systems, which are password protected and
encrypted, will also be used. Each member of staff will have access to the
data through an individual encrypted password. Access will be recorded in a
log file, and backups will be performed every day.
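Purely as an illustrative sketch (the paths are hypothetical and the log format is an assumption), a daily backup that also writes a per-file checksum to a log could be scripted as follows:

```python
import hashlib
import logging
import shutil
from datetime import date
from pathlib import Path

logging.basicConfig(filename="backup.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def backup_daily(src_dir: str, backup_root: str) -> None:
    """Copy the data directory into a dated folder and log a checksum per file."""
    dest = Path(backup_root) / date.today().isoformat()
    shutil.copytree(src_dir, dest, dirs_exist_ok=True)
    for f in sorted(dest.rglob("*")):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            logging.info("backed up %s sha256=%s", f, digest[:16])

backup_daily("lab_data", "/mnt/qnap/masstrplan_backups")  # hypothetical paths
```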
# SELECTION AND PRESERVATION
**Which data are of long-term value and should be retained, shared, and/or
preserved?** All the generated raw data and analysed data will be stored
long-term in the University of Milan storage system.
**What is the long-term preservation plan for the dataset?**
The publicly available data will be disseminated through the established
repositories. Copies of the data, as well as data that are subject to ethical
and privacy regulations, will also be archived locally for at least 5 years.
# DATA SHARING
**How will you share the data?**
MS data sharing will happen through the established, standard repositories
(ProteomeXchange for proteomics data; MetaboLights for metabolite data) hosted
at the European Bioinformatics Institute (EMBL-EBI). Exceptions apply to data
sets that cannot be made publicly available due to applicable ethical or
privacy regulations. Data generated in the lab managed by Giancarlo Aldini
that will not be made publicly available will be shared among the other
MASSTRPLAN members through a Synology cubestation system.
**Are any restrictions on data sharing required?**
Possibly, if patentable compounds or materials are produced.
# RESPONSIBILITIES AND RESOURCES
**Who will be responsible for data management?**
Dr. Giancarlo Aldini
**What resources will you require to deliver your plan?**
Facilities for storage of large data sets at the University of Milan (Big
Data facility). QNAP systems (8 terabytes) and a backup system are already
available in the data centre of the research lab managed by Giancarlo Aldini.
9\. Addendum 5 – Data Management Plan for P6 - CSIC
# ADMIN DETAILS
**Project Name** : MASS spectrometry TRaining in Protein Lipoxidation ANalysis
for Inflammation
**Principal Investigator / Researcher:** Dolores Pérez-Sala
**Funder:** European Commission’s REA with the H2020 MSCA
**Institution:** Consejo Superior de Investigaciones Científicas
# DATA COLLECTION
**What data will you collect or create?**
We will collect data in the form of text files (.dat or .txt) or Excel
spreadsheets generated by fluorescence and absorbance plate readers,
spectrofluorometers, analytical ultracentrifuges, scattering and other
specialized instruments. We will also collect .jpg and .tif images generated
by fluorescence and electron microscopes and by scans of SDS-PAGE gels. We
will create, as well, text files (.dat or .txt) or Excel spreadsheets
corresponding to modelling of the raw data, produced using specific analysis
software. We will also generate data from proteomic analysis in the form of
datasheets of peaks from MALDI-TOF MS and MS/MS.
**How will the data be collected or created?**
Directly by the instruments and by the software used for data fitting or
simulation. Western blotting followed by scanning of the blots. Fluorescence
microscopy-generated TIFF images or videos.
# DOCUMENTATION AND METADATA
Images from fluorescence microscopy and from proteomic analysis will be
identified by date of acquisition, user name and a code related to sample
identity. Likewise, data files for biophysical and biochemical experiments
will be identified by date, user and sample identity codes.
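A small sketch (the user and sample code shown are hypothetical) of how such date/user/sample identifiers can be composed into consistent file names:

```python
from datetime import date

def data_filename(user: str, sample_code: str, extension: str) -> str:
    """Build a file name encoding acquisition date, user and sample identity."""
    return f"{date.today():%Y%m%d}_{user}_{sample_code}.{extension}"

print(data_filename("mgarcia", "SAMPLE-OX-07", "tif"))
# e.g. 20240115_mgarcia_SAMPLE-OX-07.tif
```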
# ETHICS AND LEGAL COMPLIANCE
Ethical issues could arise from the use of primary cells derived from
patients suffering from genetic diseases, obtained from the Coriell Institute
for Biomedical Research (NIH, USA). These cells will be used according to the
conditions established by the Coriell Repository. This institution already
takes care of anonymization; therefore, no data from donors are associated
with the use of these cells. Any potential human samples will be obtained
from official biobanks and will be used according to their regulations,
subject to approval by their committees and by the CSIC committee for
Bioethical and Biosafety issues.
**How will you manage copyright and Intellectual Property Rights (IPR)
issues?**
All data generated at CSIC and/or by CSIC researchers belong to the
institution (the copyright holder). IPR matters are dealt with by the CSIC
Office of Transfer of Technology. Therefore, in principle, all data generated
are considered confidential. We foresee sharing with other ITN teams specific
sets of data obtained during secondments, and those directly related to
collaborative publications. Data will not, in principle, be available to the
general audience before publication, which we will try to do in Open Access
form.
# STORAGE AND BACKUP
**How will the data be stored and backed up during the research?**
Data will be stored on computers and external hard drives. Upon publication,
accepted manuscripts are posted in the Digital CSIC repository.
**How will you manage access and security?**
Access to computers will generally be by password. External hard drives will
be kept safely in the lab. All drives are equipped with software for password
protection of their contents.
# SELECTION AND PRESERVATION
**Which data are of long-term value and should be retained, shared, and/or
preserved?** Image and other experimental data may be subject to re-analysis
in the mid-to-long term to assess parameters different from those originally
analysed. Data from proteomic analysis may prove valuable in broader contexts
or meta-analyses. Therefore, the metadata for MS data will be reported
according to the applicable standards and controlled vocabularies of the
established Human Proteome Organization’s Proteomics Standards Initiative
(HUPO PSI).
**What is the long-term preservation plan for the dataset?**
The minimal storage time for data obtained at CSIC is 5 years. During that
time, laboratory notebooks and hard drives with security copies of data will
be kept at CIB-CSIC.
# DATA SHARING
**How will you share the data?**
Among teams, data will be shared under a confidentiality basis. Data will be
available to the public upon publication. MS data from proteomic
identifications will be shared through the established, standard repositories
(ProteomeXchange for proteomics data) hosted at the European Bioinformatics
Institute (EMBL-EBI). For other data, and when applicable, we will follow the
recommendations of the journals regarding the presentation of raw data or of
deposit in public repositories
(http://www.nature.com/sdata/data-policies/repositories). For instance, image
data may be shared through Figshare (https://figshare.com/). For some
journals, deposit is made at the time of submission and data are made public
upon manuscript acceptance. In addition, all published works will be
available in the accepted author version at the open access repository of our
institution, Digital CSIC (https://digital.csic.es/). This is mandatory for
our institution. Non-published data will remain confidential.
**Are any restrictions on data sharing required?**
Yes. Results or materials can be transferred on a royalty-free basis only for
research purposes; they will not be used for commercial purposes, will be
subject to confidentiality, and their use will require the signature of an
MTA with CSIC.
# RESPONSIBILITIES AND RESOURCES
**Who will be responsible for data management?**
Dr. Dolores Pérez-Sala
**What resources will you require to deliver your plan?**
Facilities for temporary storage of image data are available at CIB-CSIC. We
will increase our storage capacity through the acquisition of external hard
drives and computers protected by passwords.
10\. Addendum 6 – Data Management Plan for P7 - CCM
# ADMIN DETAILS
**Project Name** : MASS spectrometry TRaining in Protein Lipoxidation ANalysis
for Inflammation
**Principal Investigator / Researcher:** Cristina Banfi
**Funder:** European Commission’s REA with the H2020 MSCA
**Institution:** Centro Cardiologico Monzino
# DATA COLLECTION
**What data will you collect or create?**
Clinical data of patients recruited at Centro Cardiologico Monzino.
Specifically, data are related to full clinical assessment, including
pulmonary function and lung diffusion for carbon monoxide (DLco), maximal
cardiopulmonary exercise test and measurements of circulating proteins,
including immature and mature forms of SP-B. Liquid chromatography mass
spectrometry (LC-MS) data, including fragmentation (MS/MS) data as raw data in
the proprietary vendor format and as derived peak lists in a standard format.
**How will the data be collected or created?**
All clinical data will be treated in confidence. Any information relating to
subjects recruited in the study will be acquired and used solely for the
purposes described in the disclosure and in a manner consistent with current
legislation on the protection of personal data (Legislative Decree 196/03).
By signing the informed consent form, the subject gives permission for direct
access to his or her medical records. The doctor who follows the study will
identify each subject with a code: the data collected during the study, with
the exception of the name, will be recorded, processed and stored together
with this code, the date of birth, sex, weight and height. Only the doctor
and authorized entities may link this code to the name. Surname(s), first
name(s) and date of birth of the patient will be present only in the source
document (medical records). Any information or biological material will be
identified by a unique code that will make it possible to associate clinical
data with laboratory results but not, in any case, with the patient's
identity.
Other data will be collected by LC-MS/MS analyses and biochemical assays on
clinical samples.
# DOCUMENTATION AND METADATA
**What documentation and metadata will accompany the data?**
Sample name and treatment type, basic methodological information as
appropriate.
For MS data, the metadata will be reported according to the applicable
standards and controlled vocabularies of the established Human Proteome
Organization’s Proteomics Standards Initiative (HUPO PSI).
# ETHICS AND LEGAL COMPLIANCE
**How will you manage any ethical issues?**
The study will be conducted in full respect for human dignity and fundamental
human rights as dictated by the Declaration of Helsinki, as amended, by the
standards of Good Clinical Practice (GCP) issued by the European Community,
and in accordance with all laws and local rules regarding clinical trials.
**How will you manage copyright and Intellectual Property Rights (IPR)
issues?**
This will be handled by CCM’s legal team, if applicable.
IPR would be divided between the applicants and researcher, according to
intellectual input. Access to MS data that is to be disseminated via
established online repositories (ProteomeXchange for proteomics data, and
MetaboLights for small molecule data) is to be free to all users, as per the
licensing policy of the European Bioinformatics Institute.
# STORAGE AND BACKUP
**How will the data be stored and backed up during the research?**
Sensitive data, managed on computer, will be treated according to
legislation, and the personal data will then be separated from those that
describe a patient's medical condition. The original data will be kept for
seven years by the investigators. The reference that links clinical data to a
patient will be kept in a password-protected Microsoft Excel file on the
personal computer of the doctor responsible for the project. The patient
demographics are not of interest for the trial in question.
LC-MS/MS data will be deposited in public repositories and will be stored and
backed up in these repositories.
**How will you manage access and security?**
Data will be held on secure Centro Cardiologico Monzino computers and at this
stage will only be accessed by members of staff with personal accounts
protected by passwords. After manuscript submission and during peer review,
data that are not subject to ethical or privacy rules will be privately
shared with the journal editor and anonymous peer reviewers through the
established public repositories. After publication of the associated
manuscript, all data in established public repositories will become publicly
available.
# SELECTION AND PRESERVATION
**Which data are of long-term value and should be retained, shared, and/or
preserved?**
Currently, all MS data have intrinsic long-term value, as evidenced by
several published studies based on data mining and/or re-analysis of public
data sets. Moreover, the data sets to be acquired during this project will be
of considerable interest, as oxidized biomolecules (peptides and small
molecules) are not yet well represented in these repositories. In addition,
clinical data, in anonymous form, will be shared in order to correlate
oxidised biomolecule levels with clinical parameters.
**What is the long-term preservation plan for the dataset?**
The publicly available data will be disseminated through the established
repositories. Copies of the data, as well as data that are subject to ethical
and privacy regulations, will also be archived locally for at least 7 years.
# DATA SHARING
**How will you share the data?**
The data, processed using electronic tools, will be disclosed only in
strictly anonymous form, for example through scientific papers, statistics
and scientific conferences.
MS data sharing will happen through the established, standard repositories
(ProteomeXchange for proteomics data; MetaboLights for metabolite data) hosted
at the European Bioinformatics Institute (EMBL-EBI). Exceptions apply to data
sets that cannot be made publicly available due to applicable ethical or
privacy regulations.
All data that is deposited in the abovementioned, established repositories
will be publicly accessible without restrictions for re-use as per the
licenses employed by EMBL-EBI for all data in its public repositories.
**Are any restrictions on data sharing required?**
Possibly, if patentable compounds or materials are produced. Note that these
potential restrictions are compatible with the free access to the data
deposited in public repositories because a specific clause in their licenses
states that users of the data should ensure that they do not violate any
patent rights held by the original data submitter. Data that falls under
ethical or privacy regulations will be shared only in anonymous form.
# RESPONSIBILITIES AND RESOURCES
**Who will be responsible for data management?**
Prof Piergiuseppe Agostoni, director of the Heart Failure Unit, will be
responsible for the clinical data, and Dr Cristina Banfi for the scientific
data.
**What resources will you require to deliver your plan?**
Facilities for storage of large data sets at CCM. Submission support from the
relevant public data repositories. Software to convert the data into
standardized form and to provide the required metadata annotation. Software to
aid the submission of large volumes of data to the repository.
11\. Addendum 7 Data Management Plan for P8 - CHUC
# ADMIN DETAILS
**Project Name** : MASS spectrometry TRaining in Protein Lipoxidation ANalysis
for Inflammation
**Principal Investigator / Researcher:** Artur Paiva
**Funder:** European Commission’s REA with the H2020 MSCA
**Institution:** Centro Hospitalar e Universitário de Coimbra
# DATA COLLECTION
**What data will you collect or create?**
Flow cytometry data in the format provided by FACSDiva software (Becton
Dickinson Biosciences). Gene expression data in the format provided by
LightCycler software (Roche Diagnostics).
**How will the data be collected or created?**
Flow cytometry.
Real time polymerase chain reaction.
# DOCUMENTATION AND METADATA
Sample name, patient diagnosis, cell stimulation conditions, and basic
methodological information as appropriate.
# ETHICS AND LEGAL COMPLIANCE
**How will you manage any ethical issues?**
To comply with legal and ethics requirements, no patient data will be shared,
nor any data that can be linked to patients or their personal and medical
history. However, analytical data that have been anonymised and cannot be
linked to individual patients will be shared.
**How will you manage copyright and Intellectual Property Rights (IPR)
issues?**
This will be handled by Centro Hospitalar e Universitário de Coimbra’s legal
team, if applicable.
IPR would be divided between the applicants and researchers, according to
intellectual input. The green open access route is likely to be used for any
publication of the data.
# STORAGE AND BACKUP
**How will the data be stored and backed up during the research?**
The data will be stored on local infrastructure at the site of acquisition.
**How will you manage access and security?**
Data will be on secure Centro Hospitalar e Universitário de Coimbra computers
and at this stage will only be accessed by members of staff. After manuscript
submission and during peer review, data that are not subject to ethical or
privacy rules will be privately shared with the journal editor and anonymous
peer reviewers through the established public repositories. After publication
of the associated manuscript, all data in established public repositories will
become publicly available.
# SELECTION AND PRESERVATION
**Which data are of long-term value and should be retained, shared, and/or
preserved?** All data is considered worthy of archiving.
**What is the long-term preservation plan for the dataset?**
Copies of the data, as well as data that are subject to ethical and privacy
regulations, will be archived locally for at least 5 years.
# DATA SHARING
**How will you share the data?**
Relevant data will be shared among the MASSTRPLAN beneficiaries.
Publication related flow cytometry and RT-PCR data sharing will happen through
public repositories. Exceptions apply to data sets that cannot be made
publicly available due to applicable ethical or privacy regulations.
**Are any restrictions on data sharing required?**
Possibly, if patentable compounds or materials are produced. Note that these
potential restrictions are compatible with the free access to the data
deposited in public repositories because a specific clause in their licenses
states that users of the data should ensure that they do not violate any
patent rights held by the original data submitter.
# RESPONSIBILITIES AND RESOURCES
**Who will be responsible for data management?**
Dr. Artur Paiva
**What resources will you require to deliver your plan?**
Facilities for data storage at Centro Hospitalar e Universitário de Coimbra.
12\. Addendum 8 Data Management Plan for P9 - MOL
# ADMIN DETAILS
**Project Name** : MASS spectrometry TRaining in Protein Lipoxidation ANalysis
for Inflammation
**Principal Investigator / Researcher:** John Wilkins
**Funder:** European Commission’s REA with the H2020 MSCA
**Institution:** Mologic Ltd, Bedford, UK
# DATA COLLECTION
**What data will you collect or create?**
Laboratory research notes on the development of antibodies, synthetic peptides
and immunochromatographic assays and formats.
Comparative ELISA or Lateral Flow (LF) assay data, obtained using
spectrophotometers or lab readers, with commercial or in-house assays.
Experimental data, such as HPLC and mass spectrometric (MS) data, as raw data
files in proprietary instrument manufacturer format, or as summary
spreadsheets.
**How will the data be collected or created?**
Significant data will be summarised and presented in Mologic research reports
and MASSTRPLAN documents.
MS files are archived and kept at Mologic for 5 years.
# DOCUMENTATION AND METADATA
**What documentation and metadata will accompany the data?**
Sample name and treatment type, plus basic methodological information, as
appropriate. For MS data, the metadata will be reported according to the
applicable standards and controlled vocabularies of the established Human
Proteome Organization’s Proteomics Standards Initiative (HUPO PSI).
# ETHICS AND LEGAL COMPLIANCE
**How will you manage any ethical issues?**
Human clinical samples may be required for method development and validation
of commercial assays, e.g. using urines or blood samples from healthy local
volunteers, and from patients of known disease/health status. Mologic adheres
to ethical procedures (informed consent, patient anonymity etc.), and is in
the process of applying for an HTA licence (UK Human Tissue Authority) to
allow local sample storage.
**How will you manage copyright and Intellectual Property Rights (IPR)
issues?**
Mologic will identify any IP opportunities that arise during the research
conducted at the Mologic premises. IP rights will be allocated equitably
between consortium members according to intellectual input.
# STORAGE AND BACKUP
**How will the data be stored and backed up during the research?**
Mologic will ensure that laboratory data and reports are archived locally and
held for a minimum of 5 years.
**How will you manage access and security?**
Data will be held on the secure Mologic computer system, and will only be
accessed by authorised members of staff.
# SELECTION AND PRESERVATION
**Which data are of long-term value and should be retained, shared, and/or
preserved?**
After any IP opportunities have been evaluated and protected, significant MS
data (i.e. of scientific interest or novelty) will be summarised and presented
in MASSTRPLAN documents and in scientific publications. These published data
will be made publicly available.
**What is the long-term preservation plan for the dataset?**
The data generated at Mologic will be archived locally for at least 5 years.
# DATA SHARING
**How will you share the data?**
Mologic MS data will be shared with consortium members. Data related to
Mologic’s proprietary method development will be kept private, until IP
opportunities have been explored.
**Are any restrictions on data sharing required?**
Data related to Mologic’s proprietary method development will be kept private,
until IP opportunities have been explored and resolved.
# RESPONSIBILITIES AND RESOURCES
**Who will be responsible for data management?**
Dr John Wilkins
**What resources will you require to deliver your plan?**
No additional resources will be required, as Mologic already has computer data
and laboratory notebook archive systems.
13\. Addendum 9 Data Management Plan for P10 - THERMO
# ADMIN DETAILS
**Project Name** : MASS spectrometry TRaining in Protein Lipoxidation ANalysis
for Inflammation
**Principal Investigator / Researcher:** Dr Ken Cook
**Funder:** European Commission’s REA with the H2020 MSCA
**Institution:** Thermo Fisher Scientific
# DATA COLLECTION
**What data will you collect or create?**
Chromatography and mass spectrometry data from various protein and lipid
samples.
**How will the data be collected or created?**
Data will be in Chromeleon and Xcalibur file formats. There will be
presentation material in PowerPoint and publications in Word/PDF.
# DOCUMENTATION AND METADATA
Data will be collected and stored via the instrument data collection software.
This will include instrument settings, run time and date, sample information
and injection amounts. The collected data from the instrument will include raw
data from all detection systems and the final report.
# ETHICS AND LEGAL COMPLIANCE
**How will you manage any ethical issues?**
We will not be collecting data from patients or any other source involving
ethical issues unless in collaboration with another institute, in which case
the applicable regulations of the partner site will be followed.
**How will you manage copyright and Intellectual Property Rights (IPR)
issues?**
We do not anticipate any IPR issues. We would hope to publish findings from
the project.
# STORAGE AND BACKUP
**How will the data be stored and backed up during the research?**
Data will be stored on local computers and backed up on the Thermo Fisher
network.
**How will you manage access and security?**
Thermo Fisher has its own secure network protocols. Useful data will be
shared through the consortium. Publishable data will be made publicly
available via established repositories (ProteomeXchange and MetaboLights).
# SELECTION AND PRESERVATION
**Which data are of long-term value and should be retained, shared, and/or
preserved?**
Any publishable results or presentation worthy material will be shared and
preserved. We will also retain material which led to such discoveries.
**What is the long-term preservation plan for the dataset?**
Data will be shared with the consortium and will also be stored on the local
computer and the Thermo Fisher network. Publishable data will be stored and
made publicly available via the established repositories in the field
(ProteomeXchange and MetaboLights), in addition to long-term storage with the
consortium.
# DATA SHARING
**How will you share the data?**
Data will be made available to the consortium, and a shared facility can be
used by consortium members who request it. Any published data will then be
publicly available.
**Are any restrictions on data sharing required?**
Only for data that may be used in an upcoming publication, until it is
submitted, after which free access will be available. Such data may also be
shared early with consortium members if useful and needed.
# RESPONSIBILITIES AND RESOURCES
**Who will be responsible for data management?**
Dr Ken Cook, Dr Madalina Oppermann and the appointed Student
**What resources will you require to deliver your plan?** None other than
those already available.
# Executive Summary
This document, D12.3 Data Management Plan (DMP), is a deliverable of the
SALUTE project launched under the ENGINES ITD programme, which is funded by
the European Union’s H2020 through the Clean Sky 2 Programme under Grant
Agreement N° 821093.
The main objectives of ENGINES are to deliver substantial improvements in
engine technology. In particular, the following challenges are addressed:
* Developing full engine and major engine system solutions that can deliver a step change reduction in emissions.
* Taking a step-by-step approach to progressing the technology’s maturity or "Technology Readiness Level" (TRL), utilising design studies and rig tests to explore and understand the technologies under development, their system interactions and the risks associated with their implementation.

The ultimate goal of the project is to achieve TRL4, supporting maturation of
promising solutions toward TRL6.
These objectives will be achieved through the development of innovative
engine subsystems that will incrementally improve the performance and
efficiency of the engine itself, including the reduction of its noise
emission.
Indeed, modern aircraft propulsion is mostly based on high-bypass turbofan
engines. In this architecture, the gas turbine is used to operate the fan,
which provides a significant part of the thrust, especially at approach. The
new geometry and rotation speed produce new low-frequency noise that needs to
be treated. The main acoustic treatment technologies currently used on
in-service turbofan engines are no longer efficient enough to absorb UHBR fan
noise, due to depth constraints. Indeed, these liners perform poorly at low
frequencies, whereas low-frequency absorption is a key requirement for UHBR
engines. New liner technologies are therefore needed, and the present project
will focus on Active/Adaptive Acoustic Impedance treatments.
_Figure 1: active SDOF liner prototype developed in the frame of the ENOVAL
project_
The concept of the Electroacoustic Resonator has recently been down-sized to
acoustic liner applications within the frame of the ENOVAL project, where an
array of 3x10 locally controlled (electrodynamic) Electroacoustic Resonators
was developed and assessed in a dedicated Acoustic Flow Duct facility.
Although local acoustic impedance control appears to allow efficient sound
absorption over a wide range of frequencies, the optimal organization of
individual active impedances can significantly extend the performance,
especially towards the low-frequency range. Inspired by acoustic metamaterial
concepts, where the array size rather than the individual unit size rules the
low-frequency bound, distributed control strategies can be proposed. These
recent concepts still need to be developed and tested within a Distributed
Active Electroacoustic Liner configuration.
The main objective of this project is therefore first to reach TRL 3 with a
2D liner implementing local and distributed active acoustic impedance
control, then to reach TRL 4 with 3D liner integrations, and finally to
assess their performance in a realistic experimental test facility.
To tackle all the associated challenges, the project is organized around 4
poles:

* Smart components and technologies screening, integration and development
* 2D Liner Design, manufacturing and Characterization: TRL3
* 3D Liner Design, manufacturing and Characterization: TRL4
* Advanced modelling and simulations
_Figure 2: Project overall description_
# Data management and responsibility
## DMP Internal Consortium Policy
The SALUTE project is engaged in the Open Research Data (ORD) pilot which aims
at improving and maximising access to and re-using of research data generated
by Horizon 2020 projects and takes into account the need to balance openness
and pro-tection of scientific information, commercialisation and Intellectual
Property Rights (IPR), privacy concerns, security as well as data management
and preservation ques-tions.
The management of the project data/results requires decisions about the
sharing of the data, the format/standard, the maintenance, the preservation,
etc.
Thus the Data Management Plan (DMP) is a key element of good data management
and is established to describe how the project will collect, share and protect
the data produced during the project. As a living document, the DMP will be
updated over the lifetime of the project whenever necessary.
## Data management responsibilities
In this frame, the following policy for data management and responsibility
has been agreed for the SALUTE project:
* **The SALUTE Management Team (ECL-LMFA, ECL-LTDS, LAUM, FEMTO, EPFL) and the topic manager (SAE)** analyse the results of the SALUTE project and will decide the criteria used to select the data for which to make the OPT-IN. They designate for all the datasets a responsible person (the Data Management Project Responsible, DMPR) who will ensure dataset integrity and compatibility for internal and external use during the programme lifetime, etc. They also decide where to upload the data, when to upload it, how often to update it, etc.
* **The Data Management Project Responsible (DMPR)** is in charge of the integrity of all the datasets, their compatibility, the criteria for data storage and preservation, the long-term access policy, the maintenance policy, quality control, etc. He will of course discuss and validate these points with the SALUTE Management Team (ECL-LMFA, ECL-LTDS, LAUM, FEMTO, EPFL) and the topic manager (SAE).
<table>
<tr>
<th>
**Data management Project Responsible (DMPR)**
</th>
<th>
**Manuel COLLET**
</th> </tr>
<tr>
<td>
DMPR Affiliation
</td>
<td>
Ecole Centrale de Lyon
</td> </tr>
<tr>
<td>
DMPR mail
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
DMPR telephone number
</td>
<td>
**+33 (0)4 72 18 64 84**
</td> </tr> </table>
* **The Data Set Responsibles (DSR)** are in charge of their single Dataset and should be the partner possessing the data: validation and registration of datasets and metadata, updates and management of the different versions, etc. The contact details of each DSR will be provided in each data set document presented in the annex I of the DMP.
## Data nature, link with previous data and potential users
In the next section, “1.4 Data summary”, the SALUTE Management Team
(ECL-LMFA, ECL-LTDS, LAUM, FEMTO, EPFL) and the topic manager (SAE) have
listed the project’s data/results that will be generated by the project and
have identified which data will be open. This section also describes the link
with previous data and the potential users.
The basic rule is that only data needed to validate the results presented in
scientific publications will be made accessible to third parties. Research
data linked to exploitable results will not be put into the open domain if
this would compromise their commercialisation prospects or if they have
inadequate protection; this is an H2020 obligation.
## Data summary
The next table (Table 1) presents the different data collections generated by
the SALUTE project. For each data collection that will be open to the public,
a dedicated dataset document will be completed in Annex I once the data are
generated.
_Explanation of the columns:_
* **Nature of the data** : experimental data, numerical data, documentation, software code, hardware, etc.
* **WP generation** : work package in which the database is generated
* **WP using** : work package in which data are reused in the SALUTE project
* **Data producer** : partner who generates the data
* **Data user** : partners and the topic manager who can use data in the project or for internal research.
* **Format** : can be .pdf / .step / .txt / .bin, etc.
* **Volume** : expected size of the data
* **Purpose / objective** : purpose of the dataset and its relation to the objectives of the project.
* **Confidentiality level** : some data associated with results may have potential for commercial or industrial protection and thus will not be made accessible to a third party (“confidential” confidentiality level); other data needed for the verification of results published in scientific journals can be made accessible to third parties (“public” confidentiality level).
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Nature of the data**
</th>
<th>
**WP**
**generation**
</th>
<th>
**WP using**
</th>
<th>
**Data producer**
</th>
<th>
**Data user**
</th>
<th>
**Format**
</th>
<th>
**Volume**
</th>
<th>
**Purpose/objecti ves**
</th>
<th>
**Confidentiality level**
</th> </tr>
<tr>
<td>
**1\. 2D Liners specification ‐ demonstrators data**
</td>
<td>
CAD/Plan
</td>
<td>
WP 2
</td>
<td>
WP 3,4,7
</td>
<td>
ECL-LMFA
</td>
<td>
ALL
</td>
<td>
.pdf, .step
</td>
<td>
1 GB
</td>
<td>
* Contains plans and CAD of test vehicles.
* Provides necessary information for test bench implementation and numerical simulation.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**2\. UHBR Liners specification ‐ demonstrators data**
</td>
<td>
Metrology
</td>
<td>
WP 2
</td>
<td>
WP 3,7,8
</td>
<td>
ECL-LMFA
</td>
<td>
ALL
</td>
<td>
.txt, .bin
</td>
<td>
1 GB
</td>
<td>
* Contains sensor calibration and position data, test-bench qualification tests, test logs, etc.
* Provides necessary information on the measurements and the 2D test bench setup.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**3\. components screening data**
</td>
<td>
Experimental measurements
</td>
<td>
WP 3
</td>
<td>
WP 4,7,8
</td>
<td>
EPFL
</td>
<td>
ALL
</td>
<td>
.txt, .bin
</td>
<td>
1 TB
</td>
<td>
* Contains all measurements in measured primary units (generally volts), including steady and unsteady pressure and sound pressure.
* Provides measurements ready to be converted into physical units.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**4\. 2D prototypes panels design data**
</td>
<td>
Metrology
</td>
<td>
WP 4
</td>
<td>
WP 5, 7
</td>
<td>
LAUM
</td>
<td>
FEMTO
TM
</td>
<td>
.txt, .bin
</td>
<td>
1 GB
</td>
<td>
* Contains sensor calibration and position data, test-bench qualification tests, test logs, etc.
* Provides necessary information on the measurements and the 3D test bench setup.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**5\. 2D prototypes**
**Transducers design data**
</td>
<td>
Experimental measurements
</td>
<td>
WP 4
</td>
<td>
WP 5, 7
</td>
<td>
EPFL
</td>
<td>
FEMTO
TM
</td>
<td>
.txt, .bin
</td>
<td>
1 TB
</td>
<td>
* Contains all measurements in measured primary units (generally volts), including steady and unsteady pressure, LDA measurements, and sound pressure.
* Provides measurements ready to be converted into physical units.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**6\. 2D prototypes Hardware design data**
</td>
<td>
Experimental measurements
</td>
<td>
WP 5
</td>
<td>
WP 6, 7
</td>
<td>
FEMTO
</td>
<td>
ECL-LTDS
TM
</td>
<td>
.txt, .bin
</td>
<td>
1 TB
</td>
<td>
* Contains only validated
measurements in physical units.
* Provides measurements for the analysis step.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**7\. 2D Prototypes Hardware panels prototypes and software codes**
</td>
<td>
Hardware and software codes
</td>
<td>
WP5
</td>
<td>
WP 6
</td>
<td>
ECL-LTDS,
EPFL, FEMTO and LAUM
</td>
<td>
ECL-LMFA
TM
</td>
<td>
NA
</td>
<td>
All 2D prototypes
</td>
<td>
\- Supplies 2D prototype hardware (electromechanical and electronic components) and software codes.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**8\. 2D Prototypes Validated experimental data**
</td>
<td>
Documentation
</td>
<td>
WP 6
</td>
<td>
WP 7,8
</td>
<td>
ECL-LTDS
</td>
<td>
ALL
</td>
<td>
.docx+.pdf
</td>
<td>
10 MB
</td>
<td>
* Contains
measurement
descriptions and the
operating conditions
from the validated experimental
database.
* Provides necessary information to
perform analysis of the validated
experimental database.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**9\. 2D Prototypes Published experimental data**
</td>
<td>
Experimental measurements
</td>
<td>
WP 6
</td>
<td>
WP 7, 8
</td>
<td>
ECL-LTDS
</td>
<td>
ALL
</td>
<td>
.docx+.pdf
</td>
<td>
100 MB
</td>
<td>
* Contains experimental data
used for publication purposes.
* Provides an experimental open-
access database for the research community.
</td>
<td>
Public
</td> </tr>
<tr>
<td>
**10\. Advanced modelling Codes**
</td>
<td>
Numerical simulation
</td>
<td>
WP 7
</td>
<td>
WP 5, 8
</td>
<td>
LAUM
</td>
<td>
ALL
</td>
<td>
.m .dat, ….
</td>
<td>
TB
</td>
<td>
* Contains numerical results of
simulations.
* Provides numerical results for the analysis step.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**11\. Modelling Documentation DATA**
</td>
<td>
Documentation
</td>
<td>
WP 7
</td>
<td>
WP 5, 8
</td>
<td>
LAUM
</td>
<td>
ALL
</td>
<td>
.docx+.pdf
</td>
<td>
10 MB
</td>
<td>
\- Contains the numerical strategy setup (excluding the mesh and all geometrical aspects).
\- Provides the necessary setup to initialise numerical simulations with the software used.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**12\. Published modelling DATA**
</td>
<td>
Documentation
</td>
<td>
WP 7
</td>
<td>
WP 5, 8
</td>
<td>
LAUM
</td>
<td>
ALL
</td>
<td>
.docx+.pdf
</td>
<td>
10 MB
</td>
<td>
* Contains numerical data used for publication purposes.
* Provides a numerical open-
access database for the research
community.
</td>
<td>
Public
</td> </tr>
<tr>
<td>
**13\. 3D prototypes details design data**
</td>
<td>
CAD/Plan
</td>
<td>
WP 8
</td>
<td>
WP 7, 9
</td>
<td>
FEMTO
</td>
<td>
ECL LMFA, ECL-
LTDS
TM
</td>
<td>
.docx+.pdf
</td>
<td>
TB
</td>
<td>
* Contains plans and CAD of test vehicles.
* Provides necessary information for test bench implementation and numerical simulation.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**14\. 3D prototypes integration data**
</td>
<td>
Metrology
</td>
<td>
WP 9
</td>
<td>
WP 7, 10
</td>
<td>
ECL-LMFA
</td>
<td>
ECL LMFA/LTDS
TM
</td>
<td>
.txt, .bin
</td>
<td>
1 TB
</td>
<td>
* Contains only validated
measurements in physical units.
* Provides measurements for the analysis step.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**15\. 3D Prototypes**
**Hardware panels prototypes and software**
**codes**
</td>
<td>
Hardware and software codes
</td>
<td>
WP9
</td>
<td>
WP 10
</td>
<td>
ECL-LTDS,
EPFL, FEMTO and LAUM
</td>
<td>
ECL-LMFA/LTDS
TM
</td>
<td>
NA
</td>
<td>
All 3D prototypes
</td>
<td>
\- Supplies 3D prototype hardware (electromechanical and electronic components) and software codes.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**16\. 3D Prototypes Validated experimental data**
</td>
<td>
Experimental measurements
</td>
<td>
WP 10
</td>
<td>
WP 7
</td>
<td>
ECL-LMFA
</td>
<td>
ECL
LMFA/LTDS
TM
</td>
<td>
.bin, .dat, .m
</td>
<td>
1 TB
</td>
<td>
* Contains measurement descriptions and the operating conditions from the validated experimental database.
* Provides necessary information to perform analysis of the validated experimental database.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**17\. 3D Prototypes**
**Published experimental data**
</td>
<td>
Documentation
</td>
<td>
WP.10
</td>
<td>
WP 7
</td>
<td>
ECL-LMFA
</td>
<td>
ALL
</td>
<td>
.docx+.pdf
</td>
<td>
1 GB
</td>
<td>
* Contains experimental data
used for publication purposes.
* Provides an experimental open-
access database for the research community.
</td>
<td>
Public
</td> </tr>
<tr>
<td>
**18\. Experimental Documentation DATA**
</td>
<td>
Documentation
</td>
<td>
WP 6, 10
</td>
<td>
WP 11
</td>
<td>
ECL-LTDS
</td>
<td>
ALL
</td>
<td>
.docx+.pdf
</td>
<td>
10 MB
</td>
<td>
* Contains the experimental
strategy setup, all plan
* Provides the necessary setup to realize experiments.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**19\. Miniaturized & integrated liner design data **
</td>
<td>
CAD / Plan Documentation
</td>
<td>
WP 11
</td>
<td>
NA
</td>
<td>
EPFL
</td>
<td>
ALL
</td>
<td>
.pdf, .step
</td>
<td>
1 GB
</td>
<td>
* Contains plans and CAD engine integration.
* Provides necessary information for future implementation and numerical simulation.
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
**20\. Innovative tunable demonstrator and results**
</td>
<td>
Hardware and software codes
Experimental measurements
</td>
<td>
WP11
</td>
<td>
NA
</td>
<td>
ECL-LTDS,
EPFL, FEMTO and LAUM
</td>
<td>
ECL-LMFA/LTDS
TM
</td>
<td>
NA
</td>
<td>
All Innovative tunable demonstrator
</td>
<td>
\- Supplies the innovative tunable demonstrator hardware (electromechanical and electronic components) and software codes.
\- Provides necessary information relative to performance and integration tests.
</td>
<td>
Confidential
</td> </tr> </table>
_Table 1: Datasets generated by the SALUTE project_
# FAIR Data
**2.1 Making data findable**
## Public database (datasets 9, 12 and 17)
The databases generated in the project will be identified by means of a
Digital Object Identifier linked to the published paper, and archived on the
ZENODO searchable data repository together with pertinent keywords. As part of
the attached documentation, the file naming convention will be specified on a
case-by-case basis. In case of successive versions of a given dataset, version
numbers will be used. Where relevant, the databases will be linked to metadata
such as movies or sound recordings.
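As an illustration only, the sketch below shows how such an archived dataset could be registered programmatically through Zenodo's REST deposit API. The access token, file name and all metadata values are hypothetical placeholders, and the current API details should be verified against https://developers.zenodo.org.

```python
# Minimal sketch (not the project's actual tooling) of archiving a dataset on
# Zenodo via its REST deposit API. ACCESS_TOKEN, the file name and all
# metadata values are hypothetical placeholders.
import requests

ACCESS_TOKEN = "..."  # personal token created in the Zenodo account settings
BASE = "https://zenodo.org/api"

# 1. Create an empty deposition; Zenodo reserves a DOI for it.
r = requests.post(f"{BASE}/deposit/depositions",
                  params={"access_token": ACCESS_TOKEN}, json={})
r.raise_for_status()
dep = r.json()

# 2. Upload the data file into the deposition's file bucket.
with open("salute_published_data.dat", "rb") as fh:
    requests.put(f"{dep['links']['bucket']}/salute_published_data.dat",
                 data=fh,
                 params={"access_token": ACCESS_TOKEN}).raise_for_status()

# 3. Attach descriptive metadata: the keywords that make the dataset findable
#    and the CC-BY licence named in this plan (licence id as currently used
#    by Zenodo; check the live vocabulary).
metadata = {"metadata": {
    "title": "SALUTE published experimental data (placeholder)",
    "upload_type": "dataset",
    "description": "Validation benchmark data (placeholder description).",
    "creators": [{"name": "Collet, Manuel",
                  "affiliation": "Ecole Centrale de Lyon"}],
    "keywords": ["acoustic liner", "aeroacoustics", "UHBR"],
    "license": "cc-by-4.0",
}}
requests.put(dep["links"]["self"], params={"access_token": ACCESS_TOKEN},
             json=metadata).raise_for_status()
```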
## Confidential database
Confidential databases comprise both the methods (databases 1, 2, 4, 7 and 10) and the results (databases 3, 5, 6, 8 and 9). Each owner is responsible for its database repository and has to guarantee access for the other partners for its use during the project. Only datasets linked to 3D implementations have restricted access.
All raw measurement data are identified by a unique identifier. Each measurement is recorded in the test log using this identifier together with the measurement information. Validated measurement data (databases 7, 14) use the same identification as the corresponding raw data. The main information on each measurement is reported in the experimental data guide (database 16).
Each numerical run (databases 9 and 11) corresponds to a unique identifier recorded in the corresponding data guide (database 10).
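For illustration, one possible implementation of such an identifier-plus-test-log scheme is sketched below; the identifier pattern and the CSV log layout are assumptions, not the project's actual convention.

```python
# Sketch of a unique-identifier scheme for raw measurements and the matching
# test-log entry. The ID pattern (work package, date, counter) and the CSV
# log layout are illustrative assumptions only.
import csv
from datetime import date
from itertools import count

_counter = count(1)

def new_measurement_id(campaign: str) -> str:
    """Return an identifier such as 'WP6-20240115-0001'."""
    return f"{campaign}-{date.today():%Y%m%d}-{next(_counter):04d}"

def log_measurement(logfile: str, meas_id: str, operator: str, notes: str) -> None:
    """Append one measurement record to the test log."""
    with open(logfile, "a", newline="") as fh:
        csv.writer(fh).writerow([meas_id, operator, notes])

mid = new_measurement_id("WP6")
log_measurement("test_log.csv", mid, "operator-1", "liner config A, approach conditions")
```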
**2.2 Making data openly accessible**
## Public database (databases 9, 12 and 17)
By default, all scientific publications will be made publicly available with
due respect of the Green / Gold access regulations applied by each scientific
publisher. Whenever possible, the papers will be made freely accessible
through the project web site and the open access online repositories ArXiv and
HAL. The databases that will be selected to constitute the project validation
benchmarks will be archived on the ZENODO platform, and linked from the SALUTE
project website. The consortium has already used the ZENODO repository for a
previous project and is familiar with the associated procedure. ASCII-readable file formats will be preferred for small datasets, and binary encoding will be implemented for large datasets, using freely available standard formats (e.g. the CFD General Notation System, CGNS) for which the source and compiled import libraries are freely accessible. In the latter case, the structure of the binary records (headers) will be documented as part of the dataset. The SALUTE Consortium as a whole will examine the suitability of the datasets produced by the project for public dissemination, as well as their proper archival and documentation. Each dataset will be associated with the name of the partner responsible for its maintenance.
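To make the documented-header idea concrete, here is a minimal sketch of a self-describing binary record; the field layout is invented for this example, and in practice a standard format such as CGNS would be used instead.

```python
# Sketch of a self-describing binary record: a documented fixed-size header
# followed by a float64 payload. The field layout is invented for this
# example; actual project data would use a standard such as CGNS.
import struct

MAGIC = b"SALU"          # 4 bytes: file signature
HEADER_FMT = "<4sHIdd"   # magic, version, n_samples, sample_rate_hz, sensitivity

def write_record(path, samples, sample_rate_hz, sensitivity, version=1):
    with open(path, "wb") as fh:
        fh.write(struct.pack(HEADER_FMT, MAGIC, version,
                             len(samples), sample_rate_hz, sensitivity))
        fh.write(struct.pack(f"<{len(samples)}d", *samples))

def read_record(path):
    with open(path, "rb") as fh:
        magic, version, n, rate, sens = struct.unpack(
            HEADER_FMT, fh.read(struct.calcsize(HEADER_FMT)))
        assert magic == MAGIC, "not a SALU record"
        samples = struct.unpack(f"<{n}d", fh.read(8 * n))
    return {"version": version, "sample_rate_hz": rate,
            "sensitivity": sens, "samples": samples}
```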
## Confidential database
Each partner in charge of a confidential database has to allow access for use during the project and have its procedures validated by the project coordinator (ECL) and the ITD topic manager (SAE).
For everything concerning the PHARE implementation, ECL and the ITD topic manager are authorised to exchange all necessary data and to allow other partners to access the necessary materials. In the long term, the data generated by the project can be used for internal research.
**2.3 Making data interoperable**
## Public database (databases 9, 12 and 17)
The interoperability of the published datasets will be enforced by the
adoption of freely available data standards and documentation. Ad-hoc
interfaces will be developed and documented where needed. A common vocabulary
will be adopted for the definition of the datasets, including variable names
and units.
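Such a common vocabulary could, for instance, be captured in a small machine-readable dictionary shipped with each dataset; the variable names below are illustrative only, not the project's agreed vocabulary.

```python
# Illustrative variable dictionary accompanying a dataset: each published
# column name is mapped to a description and its SI unit. The specific
# variable names are examples, not the project's agreed vocabulary.
VARIABLES = {
    "p_static": {"description": "static pressure",          "unit": "Pa"},
    "p_total":  {"description": "total pressure",           "unit": "Pa"},
    "T_static": {"description": "static temperature",       "unit": "K"},
    "u_axial":  {"description": "axial velocity component", "unit": "m/s"},
    "spl":      {"description": "sound pressure level",     "unit": "dB re 20 uPa"},
}

def check_columns(columns):
    """Reject any column not defined in the shared vocabulary."""
    unknown = [c for c in columns if c not in VARIABLES]
    if unknown:
        raise ValueError(f"undocumented variables: {unknown}")
```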
## Confidential database
Validated databases (3, 6 and 9) are used for analysis. These databases are expressed directly in physical units (using the SI unit system). The necessary information about the results is recorded in the different data guides (10 and 15).
**2.4 Increase data re-use**
**Data licence**
Data from public databases are open access and use a Creative Commons licence (CC-BY).
## Public database (databases 9, 12 and 17)
Building on the impulse of the SALUTE project, the open-access databases can be used by other laboratories and industrial partners to make comparisons with other machines. The methods developed and the physical analyses become references for other test cases and improve the knowledge of the community.
## Confidential database
The experimental setup and the huge quantity of experimental and numerical data cannot be completely exploited within the SALUTE project; the project is the starting point of a long collaboration. At the end of the project, the re-use of the data and of the test bench may include:
* Analysis of data generated in the SALUTE project:
  * subsequent projects for consortium members and SAE;
  * additional academic partners working on not-yet-exploited data.
* Supplementary experimental measurements:
  * using the already installed adaptive liners under new operating conditions;
  * measurements of supplementary fields building on SALUTE project results;
  * investigation of new concepts of vibroacoustic control.
* Investigation of numerical prediction performance:
  * calibrating low-fidelity numerical methods using higher-fidelity methods;
  * high-fidelity simulation at other speeds.
For all these follow-on projects, the agreement of the topic manager (SAE) is necessary.
# Allocation of resources
**Costs related to the open access and data strategy:**
* Data storage in partner data repositories: Included in partners structural operating cost.
* Data archiving with ZENODO and MyCore data repositories: Free of charge.
**Data manager responsible during the project:**
The Project Coordinator (ECL) is responsible for the establishment, the
updates during the lifetime of the project and the respect of the Data
Management Plan. The relevant experimental data and the generated data from
numerical simulations during the SALUTE project will be made available to the
Consortium members within the frame of the IPR protection principles and the
present Data Management Plan.
**Responsibilities of partners:**
Each partner is responsible for the data it produces and must contribute
actively to the data management as set in the DMP.
_**Refer to Section 1.2, “Data management responsibilities”.**_
# Data security
## Public database (databases 9, 12 and 17)
* _Long-term preservation_ : ensured using the ZENODO and MyCore data repositories.
* _Data transfer_ : via the ZENODO and MyCore web platforms.
* _Intellectual property_ : all dataset contents are covered by a Creative Commons licence.
## Confidential Data
* _Long-term preservation_ : ensured by partner institutions’ data repositories.
* _Data Transfer_ : depending on the data volume:
* Small and medium-sized files are transferred via the partners’ secured data exchange platforms (Renater FileSender, OpenTrust MFT, ...).
* Very large files are transferred on an external hard disk during face-to-face meetings. This type of transfer is infrequent and only concerns the transfer of final databases from the partners to SAE.
* _Intellectual property_ : Data are confidential and need to strictly respect the intellectual property rights as set out in the Consortium and Implementation agreement.
# Ethical aspects
The data generated by the SALUTE project are not subject to ethical issues.
# Other
No other procedures for data management are used.
distinct test campaigns for two sets of cavity configurations. The rig tests
will also study the performance of the high-speed turbine stage at off-design
conditions by varying the leakage flow rates and the stage operating point.
# 2.1.3 SPLEEN - Technical Work Packages
The SPLEEN project is organized in two main technical work packages that
include the activities planned for the experimental campaigns in the linear
cascade rig (Work Package 1), and in the rotating turbine facility (Work
Package 2).
# 2.1.4 Purpose of the data collection/generation and its relation to the
SPLEEN Objectives
The SPLEEN project will mark a fundamental contribution to the progress of
high-speed low-pressure turbines by delivering unique experimental databases,
essential to characterize the time-resolved 3D turbine flow, and new critical
knowledge to mature the design of 3D technological effects.
# 2.1.5 Types and formats of data
In WP1, the large scale, high speed, low Reynolds number linear cascade
facility will host 2 different blade rows (Airfoil 1 and Airfoil 2). Each
blade row will be associated with a specific cavity type (A, B or C) (ejection
or ingestion) simulating the hub or shroud leakage patterns observed in a real
engine. A reference geometry of each cavity is complemented by 3 variants.
Finally, an innovative technology concept will be defined to limit as much as
possible the impact of the leakage on the secondary flows and the associated
losses. Those different concepts have to be integrated and tested in the
existing, modular S1/C wind tunnel.
In WP2, a 1.5 stage turbine will be installed in the Light Piston Isentropic
Compression tube facility (CT3) of the von Karman Institute. The design of the
3 blade rows and end-wall geometries of the turbine will be performed,
allowing modular modifications of the stator-rotor cavity geometries.
The testing programmes in the linear cascade (WP1) and in the rotating annular rig (WP2) will be conducted by varying the hub rim seal purge mass flow, the cavity geometry, and the thermodynamic speed and/or stage pressure ratio for the 1.5-stage turbine rig (WP2).
Upon completion of each turbine test, a first online verification of the overall validity of the various collected data (.csv, .txt, .xls, etc.) will be made by live monitoring of the acquired sensor traces and by verifying that the nominal turbine operating conditions have been established within a small repeatability band. An in-depth data reduction procedure will then be applied in order to transform the recorded electrical quantities into the physical quantities of interest (pressures, temperatures, flow angles, heat transfer, radial/axial gaps, etc.). Depending on the final purpose of the measurement, time-averaged or time-resolved procedures will be defined to look at the mean value of a signal or at its statistical moments. Space-time diagrams will help the interpretation and understanding of unsteady or periodic phenomena. The integration of several individual quantities (total and static pressures, flow angles, etc.) will provide the global impact, i.e. loss, on the cascade or turbine stage performance. The uncertainty analysis will be finalized and applied to all measurements issued from the testing phase.
Finally, the results obtained for variations of geometrical and flow
parameters (geometry, leakage mass flow, etc.) will carefully be compared.
This will allow drawing conclusions about sensitivities that should result in
design guidelines. The first application of the latter will be to propose and
implement an innovative technology to limit as much as possible the impact on
losses of the leakage flows. Its validation will be conducted in the high-
speed cascade facility within the work plan of WP1.
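In outline, the electrical-to-physical conversion and the time-averaged reduction described above could look like the following sketch; the calibration coefficients and array names are placeholders, not project values.

```python
# Outline of the data-reduction step: convert a recorded voltage trace to a
# physical quantity using a linear sensor calibration, then form the
# time-averaged value and its fluctuation level. Calibration numbers and
# array names are placeholders for illustration.
import numpy as np

def volts_to_pressure(v, sensitivity_pa_per_v, offset_pa=0.0):
    """Apply a linear calibration: p = sensitivity * v + offset."""
    return sensitivity_pa_per_v * np.asarray(v) + offset_pa

def reduce_trace(p):
    """Return the time-averaged value and fluctuation level of a trace."""
    p = np.asarray(p)
    return {"mean": p.mean(), "rms_fluctuation": p.std()}

raw_volts = np.random.default_rng(0).normal(2.0, 0.05, 20_000)  # fake trace
pressure = volts_to_pressure(raw_volts, sensitivity_pa_per_v=5000.0)
print(reduce_trace(pressure))
```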
# 2.1.6 Existing data re-use and their origin
Any existing data that can be useful to carry out efficiently the SPLEEN
project will be re-used. That includes numerical tools and data concerning the
safe wind tunnel operation obtained during previous experimental campaigns
performed in the linear cascade rig and in the rotating turbine facility at
the VKI.
# 2.1.7 Expected size of the data
The size of the data may range from several “Megabytes” to datasets of the
order of “Terabytes”. The size of the generated data during the entirety of
the SPLEEN project (numerical and experimental results, technical notes and
reports) is expected to be of the order of magnitude of several dozens of
“Terabytes”. Such size is estimated based on the expected output of the
project that will collect time-resolved signals sampled at high sampling rates
(between 20 kHz and 1 MHz) and the high measurement count, on the order of
500 measurement points per WP per test configuration.
# 2.1.8 Data utility
The data collected will mainly be useful for the scientific community involved
in the turbomachinery research area and the Low Pressure Turbine (LPT)
manufacturers. The SPLEEN project aims at demonstrating high-speed LPT designs
up to a Technology Readiness Level of 5.
The turbine experiments run in the high-speed linear cascade allow correct reproduction of the Strouhal numbers of incoming wakes, the flow coefficients, and the Reynolds and Mach numbers of engine turbines. Integration of engine cavity configurations into the cascade test section and simulation of purge or leakage flows with measurements of the 3D flow enable the turbine designs to be validated up to TRL 3-4. The project will also introduce a new
strategy for the mitigation of turbine losses induced by the unsteady
interactions between the secondary-air and leakage streams with the passage
flow (TRL 1-2). Such technology will be then brought to higher TRLs (3-4) by
means of laboratory tests in the linear cascade facility. The experimental
campaigns planned on the rotating turbine rig will demonstrate a TRL of 5 for
a fully-featured multi-row high-speed LPT stage.
# FAIR data
## Making data findable, including provisions for metadata
### Identification mechanism and keywords
The databases generated in the project will be identified by means of a
Digital Object Identifier, and archived on the secure SPLEEN data repository
(see Section 5) together with pertinent keywords. The choice of adequate
keywords will be included to promote and ease the discoverability of data.
These keywords will include a number of common keywords in the turbomachinery
area but also generic keywords that can help to attract researchers from other
research areas to use and adapt SPLEEN results to their scientific fields.
### Naming conventions
Documents generated during the course of the project are referenced following the convention “SPLEEN<Type>_<Title>_<Version>.<extension>”:
* <Type>:
  * MoM: Minutes of Meeting
  * KOM: Kick-off Meeting
  * TN: Technical Note (biweekly frequency)
  * DS: Data Set
  * DX.Y: Deliverable (with the associated deliverable number, e.g. “X.Y”)
  * FR: Flash Report
  * Meeting#0: Presentation during a technical meeting between VKI and the Topic Leader “VKI/Safran AE” (with the associated meeting number, e.g. “#0”)
  * CP: Conference Presentation
  * PU: Journal Publication
* <Title>: description of the document
* <Version>: see Section 3.1.3
* <extension>: depends on the document type
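For illustration, the convention could be enforced with a small validator such as the one below; the regular expression is one possible interpretation of the stated pattern, and the exact separators may differ in practice.

```python
# Sketch of a validator for the naming convention
# "SPLEEN<Type>_<Title>_<Version>.<extension>". The regular expression is one
# interpretation of the rule; the exact separators may differ in practice.
import re

TYPES = r"(MoM|KOM|TN|DS|D\d+\.\d+|FR|Meeting#\d+|CP|PU)"
PATTERN = re.compile(
    rf"^SPLEEN{TYPES}_(?P<title>[\w\- ]+)_(?P<version>\d+\.\d+)\.(?P<ext>\w+)$")

def check_name(filename: str) -> dict:
    """Return the parsed parts of a conforming filename, or raise ValueError."""
    m = PATTERN.match(filename)
    if not m:
        raise ValueError(f"non-conforming document name: {filename}")
    return m.groupdict()

print(check_name("SPLEENTN_LinerCalibration_1.0.pdf"))
```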
### Clear Versioning
Authors, approvers and modifiers of any kind of documents (deliverables,
technical notes,…) are recommended to use the Track changes functionality of
Word or PDF when making changes to any version of a document. Correction and
remarks can also be sent by email or directly discussed with the consortium
members.
If modifications are made between members of the same affiliation, the editable version changes from version 1.x to 1.x+1. Any, even minor, modification required by a member from a different affiliation implies a revision and hence the production of a new reference by incrementing the major version (i.e. version 1.x to 2.0). The approval mechanism should be repeated until the final formal approval of the document.
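Expressed as a small sketch, the versioning rule stated above reads as follows (the function name is ours, not part of the SPLEEN conventions):

```python
# Sketch of the SPLEEN versioning rule: a change by a member of the same
# affiliation bumps the minor number (1.x -> 1.x+1); any modification
# requested by a different affiliation produces a new revision (x.y -> x+1.0).
def bump_version(version: str, same_affiliation: bool) -> str:
    major, minor = (int(p) for p in version.split("."))
    if same_affiliation:
        return f"{major}.{minor + 1}"
    return f"{major + 1}.0"

assert bump_version("1.3", same_affiliation=True) == "1.4"
assert bump_version("1.3", same_affiliation=False) == "2.0"
```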
Modifications brought to the documents are identified in the “Document
history” section on the front page. The corresponding “Reason of change”
column details the origin of the modifications and summarizes the implemented
modifications.
<table>
<tr>
<th>
DOCUMENT HISTORY
</th>
<th>
</th> </tr>
<tr>
<td>
Version
</td>
<td>
Date
</td>
<td>
Changed by
</td>
<td>
Reason of change
</td> </tr>
<tr>
<td>
1.0
</td>
<td>
01.01.2019
</td>
<td>
A. Aaa
</td>
<td>
First version
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
### Type of metadata
Where relevant, the databases will be linked to metadata such as:
* _Descriptive metadata_ (describe a resource for purposes such as discovery and identification): it includes SPLEEN Identifier, Title, Abstract, Descriptive comments and Keywords.
* _Structural metadata_ (metadata about containers of data and indicates how compound objects are put together): Table of contents (for each delivered document) but also some Management Document describing the Types, Versions and Relationships between the SPLEEN digital materials (developed tools and experimental results)
* _Administrative metadata_ (provides information to help manage a resource): Author(s) and affiliation, Reviewer(s) and affiliation, Acceptance, Type of document, Dissemination Level, Document Status, Work Package, Estimated delivery date, Actual delivery date and Circulation List.
* _Process metadata_ (describe processes that collect, process, or produce data): Description of Calibration Procedure and Data Acquisition Method.
## Making data openly accessible
### Data produced and/or used in SPLEEN openly available as the default
By default, all SPLEEN scientific publications will be made publicly available
with due respect of the Green/Gold access regulations applied by each
scientific publisher. Whenever possible, the scientific publication will be
made freely accessible through the project web site (
_https://www.h2020-spleen.eu/_ ).
### Datasets to be shared under restrictions
The SPLEEN consortium as a whole (VKI in accordance with Safran AE and under
the access rights defined in the SPLEEN Implementation Agreement) will examine
the suitability of the datasets produced by the project for public
dissemination.
### Data accessibility
The databases that will be selected to be made openly accessible will be
archived on a data repository called “Zenodo” ( _https://zenodo.org/_ ) and
listed and linked from the SPLEEN project website and referred to in any
publications which contain and report such datasets.
### Methods or software tools needed to access the data
No specific methods or software tools are foreseen to access the SPLEEN data. VKI has built up strong experience using “Zenodo” and is familiar with all the associated procedures. VKI will deliver all the necessary instructions to invited members for proper use of “Zenodo” and access to the open data repositories.
### Location of the data and associated metadata, documentation and code
Being the only beneficiary of the SPLEEN project, VKI will generate the SPLEEN
data and associated metadata, documentation and code. VKI will therefore be
responsible for the generated dataset preservation and maintenance. The
datasets (including the data, metadata and documentation) will be stored on
the “Zenodo” platform and on the VKI server (only accessible from the VKI
network).
### Data restrictions on use and data access committee
Final decision concerning the data access and data restrictions on use will be
taken in accordance with the Topic Leader. Access to the SPLEEN datasets will
be granted under the responsibility and the supervision of the Project
Coordinator.
### Identification of the person accessing the data
The “Zenodo collaboration” does not track, collect or retain personal
information from users of “Zenodo”.
## Making data interoperable
The interoperability of the SPLEEN published datasets will be enforced by the
adoption of:
* generally used extensions, adopting well-established formats (whenever possible),
* clear metadata,
* keywords to facilitate discovery and integration of SPLEEN data for other purposes,
* detailed documentation (such as a user guide, for instance).
User interfaces will be developed and documented where needed. A clear and
common vocabulary will be adopted for the definition of the datasets,
including variable names, spatial and temporal references and units (complying
with SI standards).
## Increase data re-use
SPLEEN is expected to produce a considerable volume of novel data and
knowledge through experimental investigation that will be presented to the
outside world through a carefully designed set of dissemination actions (see
“SPLEEN-Deliverable 3.1: First plan for communication dissemination and
exploitation actions”).
The SPLEEN consortium will specify a license for all publicly available files
(see Section 3.2). A licence agreement is a legal arrangement between the
creator/depositor of the data set and the data repository, signifying what a
user is allowed to do with the data. An appropriate licence for the published
datasets will be selected by the SPLEEN consortium as a whole (VKI in
accordance with Safran AE) by using the standards proposed by Creative Commons
(2017) [2].
Open data will be made available and accessible at the earliest opportunity on
the “Zenodo” repository. This fast publication of data is expected to promote
the data re-use by other researchers and industrials active in the Low
Pressure Turbine field as soon as possible, thereby contributing to the
dissemination of SPLEEN methodology, developed tools and state-of the art
experimental results. Possible users will have to adhere with the “Zenodo”
Terms of Use and to agree with the licensing content.
The SPLEEN consortium plans to make its selected data accessible to third
parties up to a period of 5 years after the project completion.
All these aforementioned methods are expected to bring their contributions to
a long-term and efficient reuse of the SPLEEN data.
# Allocation of resources
Being the only beneficiary of the SPLEEN project, VKI is responsible for the proper archival of the SPLEEN data (for a period of up to 5 years after the project completion), as well as their curation, maintenance and documentation. The handling of the “Zenodo” repository, as well as all data management issues related to the project, falls under the responsibility of the Project Coordinator. Consequently, VKI will also be responsible for applying for reimbursement of costs related to making data accessible to others beyond the SPLEEN consortium.
Costs related to data management (dissemination, including open access and
protection of results) are eligible for reimbursement under the conditions
defined in the H2020 Grant Agreement, in particular Article 6 and Article 6.2.D.3. The effort associated with the archival, curation, documentation and maintenance of the SPLEEN datasets is estimated to be equivalent to about 1 person-month.
# Data security
## Transfer of sensitive data
The Project Coordinator launched a SPLEEN-store project site for information
and document exchange between the Beneficiary and the Topic Leader (VKI/Safran
AE) via an open source Enterprise Content Management (ECM) system called
“Alfresco”.
A so-called “Alfresco site” is an area where you can share content and
collaborate with other site members. The site creator becomes the Site Manager
by default, though additional or alternate managers can be added after this.
Each site has a visibility setting that marks the site as public, moderated,
or private. This setting controls who can see the site and how users become
site members. In the frame of the SPLEEN project, all created sites will be
private (i.e. only sites members can access the site and users must be added
to the site by a site manager).
An “Alfresco” site offers the following services:
* Online access to project-relevant documents like reports, minutes of meeting and deliverables.
* Track version functionality for documents,
* Upload, store and share documents such as CAD or experimental data files,
* Online notification on specific issue,
* A secure back-up system for final official document of the SPLEEN project.
All invited SPLEEN members will be granted personal access. Depending on the need, members will be assigned one specific role on the web-based data exchange site:
* Manager has full rights to all site content - what they have created themselves and what other site members have created,
* Collaborator has full rights to the site content that they own; they have rights to edit but not delete content created by other site members,
* Contributor has full rights to the site content that they own; they cannot edit or delete content created by other site members,
* Consumer has view-only rights in a site; they cannot create their own content.
The SPLEEN-store is registered under the following address:
_https://www.h2020-spleen.eu/share_
The site created on the Alfresco platform has been named
“SPLEEN_Data_Exchange”.
The home page of the SPLEEN-data repository looks as follows:
Figure 1: Screenshot of the SPLEEN-data repository
It contains five main folders containing all the shared SPLEEN-relevant
documents:
* Data Exchange (from Safran AE): contains all the documents from Safran AE to be shared with VKI,
* Data Exchange (from VKI): contains all the documents from VKI to be shared with Safran AE,
* Deliverables: contains both “Management” and “Technical” deliverables (only the final/approved versions),
* Meetings: contains files presented during VKI/Safran AE meetings as well as the related MoM (only the final approved versions),
* Reports: contains flash reports issued with a bi-weekly frequency.
## Data recovery and secure storage (VKI)
SPLEEN documents are stored on each team member’s computer (with a daily back-up of the data generated by each SPLEEN member and a weekly back-up of all SPLEEN documents by each SPLEEN member). Computers used in the frame of the
SPLEEN project are all password-protected and can only be used on VKI ground
or accessed remotely by secured password available exclusively to SPLEEN team
members.
Besides that, all the documents issued by VKI members (including draft
versions) are stored and shared on the so-called SPLEEN-network
(turbomachinery department server that can only be accessed by granted VKI
members). This constitutes a secure back-up system for all the SPLEEN
documents issued by VKI members.
The SPLEEN folder contains four sub-folders related to each Work Package:
Figure 2: Folder organisation (screenshot)
An “Alfresco site” has also been created for VKI-relevant final versions of
the SPLEEN documents. This includes:
* deliverables,
* minutes of meetings with the Topic leader,
* academic presentations and papers,
* useful data reduction documents,
* final CAD files,
* internal reports,
* minutes of internal meetings,
* purchases (quotations and invoices),
* management documents.
The created site is named “SPLEEN_VKI” and the documents are stored under the following address: _https://www.h2020-spleen.eu/share_
# Ethical aspects
The SPLEEN consortium complies with the ethical principles as set out in
Article 34 of the Grant Agreement, which states that all activities must be
carried out in compliance with:
1. Ethical principles (including the highest standards of research integrity – as set out, for instance in the European Code of Conduct for Research Integrity – and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct)
2. Applicable international, EU and national law.
These ethical principles also cover the data management activities. The data
generated in the frame of the SPLEEN project are not subject to ethical
issues.
# Other issues
The SPLEEN project does not make use of other
national/funder/sectorial/departmental procedures for data management.
# Introduction and aim
The main objective of the PREFER project is to strengthen patient-centric
decision making throughout the life cycle of medical products (a term which,
in the context of this project, includes medicinal products(1) and medical
devices(2)) by developing evidence-based recommendations to guide industry,
Regulatory Authorities, Health technology assessment (HTA) bodies and
reimbursement agencies on how and when patient-preference studies could be
performed, and how the results can be used to support and inform decision
making.
The PREFER Consortium Agreement indicates that a specific data management plan
(DMP) will be created. More specifically, the Consortium agreement indicated
in section 7.5.4 the following:
_‘As an exception to Clause 29.3 of the Grant Agreement, as provided for in
its last paragraph, certain Beneficiaries have indicated that their main
objective in the Action would be jeopardized by making all or specific parts
of the research data of the Action openly accessible. Beneficiaries have
therefore agreed to a data management plan, which describes how data will be
handled instead of open access, and which plan details the reasons for not
giving open access. Such data management plan is a deliverable of Work Package
1 and shall be added as Appendix 7 to this Consortium Agreement.’ (3)_
The DMP is an evolving document with the final DMP forming the Appendix 7 of
the Consortium Agreement, describing all aspects of how the data generated
within PREFER were managed.
The Description of Action (DoA) of PREFER (p.19-20) provides the general
framework regarding data management, data protection, data sharing, data
ownership, accessibility, and sustainability requirements.(4)
In this initial DMP the management of generated and collected individual-level
data is described, not the management of analyses and reports containing
aggregated data. These issues will be covered in the final DMP.
Overall, the DMP provides a description of the data management that will be
applied in the PREFER project including:
* a description of the data repositories, who is able to access the data, and who owns the data.
* the main DMP elements for each of the research projects (interviews, literature review, case study, etc.) contributing to PREFER, to be defined and provided to PREFER (Chapter 5).
* the time period for which data must be stored.
* the standards for data collection and evaluation.
* the possibilities of and conditions for sharing data.
* the implementation of data protection requirements.
As the DMP is an evolving document, some of the aspects may be described in a
later version of the DMP.
In summary, the PREFER DMP gives guidance and provides an oversight of general
data management, while each research project needs to provide specific data
management information including, but not limited to, data capture systems,
data analysis systems, data protection and data privacy measures, including
description of de-identification of data sets and access rules. In cases where the research results are not open access, a justification needs to be provided.
# General principles
This is the Initial DMP for PREFER. The DMP is a working document, that will
evolve during the PREFER project, and will be updated to reflect project
progress. Table 1 lists the deliverable version updates of the DMP for PREFER.
Additional updates will be done whenever important changes occur e.g. due to
the creation of new data sets.
Processes relating to the different data management plan aspects will be
worked out between M6 and M18 and explained further in the next version of the
DMP due in M18.
**Table 1** PREFER Data Management Plan (DMP) deliverables
<table>
<tr>
<th>
Del. no.*
</th>
<th>
Deliverable name
</th>
<th>
WP no.
</th>
<th>
Short name of lead participant
</th>
<th>
Type
</th>
<th>
Dissemination level
</th>
<th>
Delivery Date **
</th> </tr>
<tr>
<td>
1.3
</td>
<td>
Initial DMP
</td>
<td>
1
</td>
<td>
Actelion
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
6 (March 2017)
</td> </tr>
<tr>
<td>
1.6
</td>
<td>
Update DMP
</td>
<td>
1
</td>
<td>
Actelion
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
18 (March 2018)
</td> </tr>
<tr>
<td>
1.9
</td>
<td>
Final DMP
</td>
<td>
1
</td>
<td>
Actelion
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
60 (September 2020)
</td> </tr> </table>
DMP= Data Management Plan; WP= Work packages; R = Document, report (excluding
the periodic and final reports); DEC = Websites, patents filing, press & media
actions, videos, etc.; PU = Public, fully open, e.g. web; CO = Confidential,
restricted under conditions set out in Model Grant Agreement
* According to the Table 3.1c: List of deliverables of the PREFER Description of Action(4) ** Measured in months from the project start date (October 2016, Month 1)
The DMP provides practical instructions with respect to any requirements for
local exceptions to data management.
The DMP follows the principles that research data are findable, accessible,
interoperable and reusable (FAIR)(5) as well as being attributable, legible,
contemporaneous, original and accurate (ALCOA)(6).
The terminology used in this DMP is explained in the glossary (Chapter 11 of
this DMP).
The general principles on access rules are defined in the consortium agreement
(section 8) (3).
For research data generated as part of an ongoing medicinal product
development program within industry, there may be proprietary and privacy
concerns that will be acknowledged and agreements made with the respective
partners on data accessibility and data storage. To acknowledge potential
differences for industry or academic case studies the DMP will refer to “data
generated in industry-led studies” and “data generated in academic-led
studies”.
# Overview of data managers, data repositories and access rules
Three repositories / platforms are used in the PREFER project. The responsible
contacts are listed in table 2.
* The platform “Projectplace” is used as an interaction platform for PREFER members to _**store and exchange** _ _**reports and anonymous data** _ .
* The data repository at KU Leuven (Digital Vault for Private Data) is used to _**store and exchange** _ _**sensitive personal data** _ in a secure and protected environment during the conduct of PREFER.
* The data repository at Uppsala University (ALLVIS) will be used for _**long-term storage** _ of reports and anonymized data particularly after the end of the PREFER project.
The use of the KU Leuven repository is preferred for the storage of interviews and academic patient preference studies. Local national laws and requirements need to be applied and can result in deviations. For example, in the UK, the UK Sponsor and the Research Ethics Committee will determine where it is allowed to store data.
Data sets containing personal data can also be stored by the data owners in
their own repository for a fixed period of time, as defined in the applicable
laws or regulations, but this should be a secure repository. Copies of
datasets containing personal data in the possession of partners other than the
research data owner (see 5.2) must be destroyed at the end of the PREFER
project. Other non-public and public datasets not containing personal data
will be stored for at least 10 years from the end of the PREFER project in the
Uppsala data repository, to ensure their long-term availability to future
researchers.
**Table 2** Main contacts for data management aspects
<table>
<tr>
<th>
Responsibilities
</th>
<th>
Name
</th>
<th>
E-mail address
</th> </tr>
<tr>
<td>
Data Management compliance contact
</td>
<td>
Monika Brand
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Deputy Data Management Compliance contact
</td>
<td>
Eline van Overbeeke
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
ProjectPlace contact
</td>
<td>
Carl Steinbeisser
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
KU Leuven repository contact
</td>
<td>
Isabelle Huys
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
KU Leuven deputy repository contact
</td>
<td>
Eline van Overbeeke
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Uppsala repository contact
</td>
<td>
Mats Hansson
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Uppsala deputy repository contact
</td>
<td>
Head of the Department of
Public Health and Caring Sciences
</td>
<td>
</td> </tr> </table>
The WP1 data management team has the responsibility to update the names
related to the responsibilities, as people might change position.
All questions related to data management such as rules for uploading data
sets, request for access rights should be sent to the Data Management
Compliance contact and the Deputy. The processes and the role description of
the Data management compliance contact and its deputy will be worked out in
the next period between M6 and M18 and explained further in the next version
of the DMP due in M18.
WP leads are responsible for informing the Data Management Compliance contacts
about all generated data sets, in their research projects.
## Projectplace
Projectplace is the platform used by PREFER to facilitate collaboration
between PREFER members, to plan deliverables, to track progress of all tasks,
and to store meeting minutes and task reports. All PREFER members have an
account so they can access Projectplace.
## The KU Leuven (KUL) repository for personal data
A secured repository to store and to exchange sensitive personal data will be
provided by KUL and is known as a “digital vault for private data”. Within
this digital vault, researchers can keep personal data safe and apply strict
rules for data access. In addition, they can also anonymise information and
process it outside the digital vault without causing any data privacy risk.
The digital vault is a highly secure environment within a secure network.
Several vaults can be set up within this secure network, each for a different
project. Each vault consists of a protected server (Windows or Linux) and can
only be accessed by a well-defined user group.
The KUL repository will function as the virtual workplace to share and assess
the individual-level data as needed to fulfil the PREFER objectives. Processes
relating to the use of the KU Leuven repository will be worked out after M6
and explained further in the next version of the DMP due in M18.
### Specifications and costs of the KUL repository
The repository consists of:
* A **secure server and operating system** in the special, secure environment for private data:
* A virtual Windows server (1 CPU and 4 GB RAM) or a virtual Linux server (1 CPU and 2 GB RAM).
* An IP address, DNS entry and name for the virtual server.
* An ICTS-guaranteed licence for the operating system (Windows Server or Linux CentOS).
* Installation of the Windows or Linux operating system (including the latest upgrades and security patches) on the virtual server.
* Monthly maintenance of the Windows or Linux operating system, i.e. regular application of upgrades and security patches.
* Access to the virtual server via an RDP client (Remote Desktop Protocol) for Windows or an SSH client for Linux. A VPN connection must be established first.
* **Application software on the server** :
* Installation of SAS and SPSS on the virtual server.
* An ICTS-guaranteed SAS and SPSS software license.
* **Storage capacity for data** :
* 50 GB storage space for data (server back-end storage, type 1, with mirror).
* **Cost of the repository:** € 1.291,79 per year
### Procedures/tools for data accessibility / security
Details of all users of the digital vault must be registered with KUL.
External users must have minimal details registered. The user/requestor is
responsible for ensuring their registration. Access to identifiable personal
data on the secure ICTS server is restricted to a minimum number of people,
i.e. people whose task it is to decrypt or anonymise information. Anonymised
information is sufficient for the majority of researchers involved in a
project. These data can be processed outside the digital vault; therefore,
access to the digital vault is not necessary or even desirable for these
researchers. One person (the data owner, see chapter 5) per task will get
access to the data repository. If additional people require access to the
digital vault after its initial set up, this access must be requested by the
person responsible for the digital vault. This can be done by e-mailing
[email protected]_ . For PREFER the KUL repository manager is
listed in table 2. Access to the digital vault is only possible through a Luna
account (KU Leuven user ID and -password). The digital vault is only
accessible through the KU Leuven VPN solution. The user must authenticate when
setting up the VPN connection. A vault-specific VPN profile ensures that
access is possible only to the corresponding vault in the secure network.
Access to the secure network environment that houses all the vaults is
strictly protected. The secure network environment is protected from the
outside by a firewall, which only allows traffic:
* from the VPN solution (through a specific profile) to the servers and information in the corresponding vault;
* from the KUL ICTS management network (for system administration) from a central system console.
The server in the vault is managed by KUL ICTS and only KUL ICTS personnel
have administrator/root rights. KUL ICTS personnel are bound by the KUL ICT
code of conduct for staff.
### Duration of accessibility
Users with access to the digital vault only have user rights for access to the
data in their own vault. A service agreement for a “Digital vault for private
data” has a duration of 1 year, after which it tacitly renews each year unless
the IT manager responsible gives notice on the agreement by e-mail to
[email protected]_ , at the latest 3 months before the end of
the agreement. If notice is given on the digital vault agreement after the
project ends, the information will be irrevocably deleted and will become
irrecoverable. An agreement will be set up with KUL to guarantee access for 5
years, namely during the duration of the IMI PREFER project. Long term storage
after the end of the PREFER project are described in section 3.3.
### Data transfer
Data transfer files to be generated and uploaded to the digital vault can
directly be uploaded by the data owner in the secure environment. For this the
data owner needs to have access to the digital vault (see chapter 5). If the
data is not directly available to the data owner, the data can be transferred
to the data owner through a secure FTP (SFTP) or can be delivered to the data
owner via a physical medium (DVD/CD/USB).
### Back-up process
Stored data is backed up using “snapshot” technology, where all incremental
changes in respect of the previous version are kept online on a different
server at the KU Leuven. As standard, 10% of the requested storage is reserved
for backups using the following backup regime:
* An hourly backup (at 8 a.m., 12 p.m., 4 p.m. and 8 p.m.), the last 6 of which are kept.
* A daily backup (every day) at midnight, the last 6 of which are kept.
* A weekly backup (every week) at midnight between Saturday and Sunday, the last 2 of which are kept.
### Disaster Recovery
The repository has 50 GB storage space for data (server back-end storage, type
1), and a mirror storage system at a different building of the KU Leuven in
another part of the city is provided to enable disaster recovery.
## The Uppsala repository for long-term storage
The data repository ALLVIS at Uppsala university will be used to archive the
PREFER anonymized data used for publications as well as the PREFER
recommendation documents and all content form ProjectPlace. Mats G. Hansson is
the owner and responsible for the ALLVIS repository, listed in table 2. He is
deputized by the Head of the Department of Public Health and Caring Sciences,
if applicable.
### Specifications of ALLVIS
ALLVIS is a storage platform and the respective research data owner is
responsible for transferring anonymized data to ALLVIS for storage. The
process and timing for such storage will be further worked out after M6 and
detailed in the next version of the DMP due in M18. If stored data need to be transferred to a platform for processing again, the research data owner is responsible for the data transfer. ALLVIS will not release any data without
the agreement between the repository owner and the research data owner (see
section 5.2 for definition). However, the research data owner has to comply
with the principle of public access to official records.
The Principle of Public Access (Offentlighetsprincipen) in Sweden means that
activities of public authorities are open to the public and research
activities are no exception. Universities in Sweden are legally considered as
public authorities. Records of data and research results created in the
research process are subject to implementation of the Principle of Public
Access, regardless of the kind of research or source of funding.
Access can either be 1) public without restrictions, or 2) public but with restricted access regulated by the Secrecy Law. However, there might be working documents that do not fall under the public access rules.
### Archiving
Administrative records (e.g. Ethics approval) are stored by public authorities
with reference to Archive Law. During the course of the PREFER project,
administrative records and documents are stored in ProjectPlace. These
documents and records will be archived in the ALLVIS repository at Uppsala
University for 10 years after the end of the project. Once archived, records
are subject to the principle of public access. Uppsala University shall draw
up a description of this archive and a systematic archival inventory.
### Procedures/tools for data accessibility/security
Access to the file repository is granted via a Windows file share using SMB
v3. Outside Uppsala University, access is granted only through a secure VPN-
connection. Authentication against the VPN and authentication against the file
share is granted using a personal/identifiable user account from Uppsala
University. Authentication at Uppsala University is handled by a central user
database and is used by the VPN and file share. Access to the project area is
limited to the research data owner (e.g. Principal Investigator (PI)) and
users granted access. Data is stored by an enterprise-grade NAS-system, which
has been installed and configured in accordance with the supplier’s guidelines
and is hosted in an on-campus server hall.
### Back-up process
Backups are incrementally saved every night using an enterprise-grade backup system at a University-affiliated off-campus site.
### Disaster Recovery
Disaster recovery is in place and is handled on a per-case basis. Requests can be made either by phone or e-mail by contacting Uppsala University’s Servicedesk. The Servicedesk can be contacted on weekdays 08:00-21:00 and on weekends 14:00-17:30. Contact details: _http://uadm.uu.se/it/om/servicedesk._
# Overview of data types generated and collected in PREFER
The data generated and collected during the PREFER project can be divided into
two categories of decreasing confidentiality:
1. datasets containing personal data
2. datasets containing non-personal data
The data generated within the PREFER project are (a) primary data (original research) produced by different stakeholders, e.g. interviews and case studies, and (b) secondary data (re-use of existing data) such as database studies and literature reviews. Primary data sets are more likely to contain personal data, while secondary data sets are more likely to contain non-personal data.
Patient data will be generated and processed during the activities planned in
WP 2, WP 3 and WP 4 (table 3).
* **WP 2** will generate datasets containing literature reviews, recorded interviews, transcriptions of interviews, and review of reports in preference research
* **WP 3** will create datasets containing both aggregated and patient-level identified or de-identified data. These data can be created from historical case studies, prospective industry-led and academic-led case studies, surveys, as well as from simulation studies
* **WP 4** will generate datasets containing literature reviews and data resulting from expert panel discussions and consultation rounds.
Appropriate strategies have to be put in place by the individual research project owners to ensure (personal) data protection/privacy, and individual studies are asked to provide a small DMP as described in this DMP (Chapter 5). The processes will be worked out and implemented between M6 and M18, in collaboration with WP 3 task 3.1, to align the templates and to exploit synergies in the research project descriptions.
The WP1 Data Management Team will generate a meta-data repository of all
research projects in a format as outlined in table 3 and with the support from
the WP leads, or research project owners, respectively. This meta-data
repository will be updated regularly (at least on a quarterly basis) and is
the master file for more detailed information of each research project as
described in table 3.
The WP1 Data Management Compliance Contacts (table 2) together with the WP
leads will establish a process to ensure that all generated data sets, or
research projects, respectively, will be gathered as described in this DMP.
The data are expected to be useful for the PREFER project, especially for the
specific tasks that generate or collect or re-use the data, and the analyses
and reports will be useful to all stakeholders.
Table 3 will be updated with the unique identification numbers as described in
Chapter 5.
**Table 3** Summary of the PREFER-generated data
<table>
<tr><th>Task*</th><th>Objective</th><th>Design</th><th>Type</th><th>Format</th><th>Re-use**</th><th>Origin</th><th>Size</th><th>Ca***</th></tr>
<tr><td>2.1</td><td>Identifying desires, expectations, concerns and requirements of stakeholders about methodologies for PP elicitation and their use in decision making</td><td>Literature review</td><td>Born digital, reference</td><td>Textual</td><td>2.3</td><td>Secondary</td><td>TBA</td><td>2</td></tr>
<tr><td>2.1</td><td></td><td>Interviews</td><td>Born digital, observational</td><td>Multimedia + textual</td><td>2.3</td><td>Primary</td><td>TBA</td><td>1</td></tr>
<tr><td>2.2</td><td>Determine processes, conditions, contextual factors that influence the utility and role of PP studies</td><td>Literature review</td><td>Born digital, reference</td><td>Textual</td><td>2.3</td><td>Secondary</td><td>TBA</td><td>2</td></tr>
<tr><td>2.2</td><td></td><td>Interviews</td><td>Born digital, observational</td><td>Multimedia + textual</td><td>2.3</td><td>Primary</td><td>TBA</td><td>1</td></tr>
<tr><td>2.3</td><td>Identification of assessment criteria used at decision points throughout the DLC</td><td>Literature review</td><td>Born digital, reference</td><td>Textual</td><td>/</td><td>Secondary</td><td>TBA</td><td>2</td></tr>
<tr><td>2.3</td><td></td><td>Interviews</td><td>Born digital, observational</td><td>Multimedia + textual</td><td>/</td><td>Primary</td><td>TBA</td><td>1</td></tr>
<tr><td>2.4</td><td>Identification of preference elicitation methods</td><td>Literature review</td><td>Born digital, reference</td><td>Textual</td><td>2.6</td><td>Secondary</td><td>TBA</td><td>2</td></tr>
<tr><td>2.4</td><td></td><td>Interviews</td><td>Born digital, observational</td><td>Multimedia + textual</td><td>2.6</td><td>Primary</td><td>TBA</td><td>1</td></tr>
<tr><td>2.5</td><td>1. Identification of educational/gamified tools</td><td>Literature review</td><td>Born digital, reference</td><td>Textual</td><td>/</td><td>Secondary</td><td>TBA</td><td>2</td></tr>
<tr><td>2.5</td><td>2. Identification of psychological tools</td><td>Literature review</td><td>Born digital, reference</td><td>Textual</td><td>/</td><td>Secondary</td><td>TBA</td><td>2</td></tr>
<tr><td>2.5</td><td>3. Presentation of risks</td><td>Literature review</td><td>Born digital, reference</td><td>Textual</td><td>/</td><td>Secondary</td><td>TBA</td><td>2</td></tr>
<tr><td>2.7</td><td>Identification of candidate methodologies and criteria to assess empirical case and simulation studies</td><td>Interviews</td><td>Born digital, observational</td><td>Multimedia + textual</td><td>/</td><td>Primary</td><td>TBA</td><td>1</td></tr>
<tr><td>3.3</td><td>Identifying and assessing historical case studies from industry partners</td><td>Review of historical case studies</td><td>Born digital, reference</td><td>Textual</td><td>/</td><td>Secondary</td><td>TBA</td><td>2</td></tr>
<tr><td>3.3</td><td>Lessons learned survey of PREFER members with preference research experience</td><td>Survey</td><td>Born digital, reference</td><td>Textual</td><td>/</td><td>Primary</td><td>TBA</td><td>1</td></tr>
<tr><td>3.4</td><td>Identifying and supporting prospective case studies from industry partners</td><td>PP case study</td><td>Origin TBD, observational</td><td>Textual, numerical, multimedia</td><td>/</td><td>Primary</td><td>TBA</td><td>1</td></tr>
<tr><td>3.5-3.7</td><td>Empirical case studies and simulation studies</td><td>PP case study</td><td>Origin TBD, observational + simulation</td><td>Textual, numerical, multimedia, models</td><td>/</td><td>Primary</td><td>TBA</td><td>1</td></tr>
<tr><td>3.8</td><td>Additional case studies</td><td>PP case study</td><td>Origin TBD, observational</td><td>Textual, numerical, multimedia</td><td>/</td><td>Primary</td><td>TBA</td><td>1</td></tr>
<tr><td>4.3</td><td>Expert panels on recommendations</td><td>Interviews</td><td>Born digital, observational</td><td>Multimedia + textual</td><td>/</td><td>Primary</td><td>TBA</td><td>1</td></tr>
<tr><td>4.4</td><td>Consultation rounds on recommendations</td><td>Interviews</td><td>Born digital, observational</td><td>Multimedia + textual</td><td>/</td><td>Primary</td><td>TBA</td><td>1</td></tr>
</table>
PP= Patient preferences; DLC=Drug Life Cycle; Ca= Category; TBA= To be
announced
* According to the description of the tasks and different work packages in the PREFER DoA document of 16/07/17.
** Displays in which other tasks of WP2 and WP3 the data are used.
*** The data produced and used during the PREFER project can be divided into
two categories (Ca):
1. datasets containing (sensitive) personal data
2. datasets containing non-personal data
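Each row of Table 3 follows a fixed schema, which the WP1 meta-data repository can mirror directly. A minimal Python sketch of such a record follows; the field names are assumptions derived from the table's columns, not a prescribed format.

```python
from dataclasses import dataclass, asdict

@dataclass
class DatasetRecord:
    """One Table 3 row of the WP1 meta-data repository (field names assumed)."""
    task: str         # e.g. "2.1", per the PREFER DoA task numbering
    objective: str
    design: str       # literature review, interviews, survey, PP case study, ...
    data_type: str    # e.g. "Born digital, observational"
    data_format: str  # e.g. "Multimedia + textual"
    reuse: str        # other task(s) re-using the data, or "/"
    origin: str       # "Primary" or "Secondary"
    size: str         # "TBA" until known
    category: int     # 1 = (sensitive) personal data, 2 = non-personal data

example = DatasetRecord(
    task="2.1", objective="Stakeholder requirements for PP elicitation",
    design="Interviews", data_type="Born digital, observational",
    data_format="Multimedia + textual", reuse="2.3",
    origin="Primary", size="TBA", category=1,
)
print(asdict(example))  # serialisable, e.g. for the quarterly master file
```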
# Operational data management requirements for PREFER research projects
Each research project (interviews, literature review, surveys, case studies, etc.) needs to provide a short dataset-specific DMP, covering (but not limited to) data capture systems, data analysis systems, and data protection and data privacy measures, including a description of the de-identification of data sets and access rules. If the research results cannot be made open access, a justification needs to be provided.
## Requirements for the short dataset-specific DMP
All data owners need to fill in **Table 4** (available on ProjectPlace as a template), containing the metadata and describing the data management of the data sets. Metadata are specifications for data that provide the contextual information required to understand those data. Such specifications describe the structure, data elements, interrelationships and other characteristics of the data, as well as the data repository used, and need to be securely stored with the database.
These tables will be reviewed by the WP1 data management team for completeness, compliance with the DMP and compliance with the Consortium Agreement. The text in _blue italics_ gives guidance on what information should be provided and should be replaced.
As part of the DMP an evolving data governance document of the different study
types will be maintained (WP 1, Deliverables 1.3, 1.6 and 1.9, M6, M18, M60).
This data governance document (based on table 4) will be kept and maintained
in Projectplace and attached to the DMP at the given deliverables times.
Table 4 Metadata requested per dataset (adapted from the Data Management General Guidance of the DMP Tool)(7)
_This table will be made available on Projectplace as a template to fill in for every dataset/research project by the data owner. The text in blue italics gives guidance on what information should be provided and should be replaced._
<table>
<tr><th>General Overview</th><th></th></tr>
<tr><td>**Title**</td><td>_Name of the dataset_</td></tr>
<tr><td>**PREFER task**</td><td>_Mention to which (sub)task in PREFER this dataset belongs_</td></tr>
<tr><td>**Identifier**</td><td>_An identifier will be given to all datasets. Format: PREFER_#.#_L/I/P_yyyy-mm-dd (L, I, or P is chosen according to the design of the study: L = literature review, I = interviews, P = Patient Preference study, whatever design it takes). Example for the interviews of task 2.2: PREFER_2.2_I_2016-12-10_</td></tr>
<tr><td>**Research Data owner**</td><td>_Names and addresses of the responsible person and deputy of the organizations who created the data; preferred format for personal names is surname first (Format: Organization; Surname, First name)_</td></tr>
<tr><td>**E-mail address of the data owner**</td><td>_Please provide the e-mail address of the data owner_</td></tr>
<tr><td>**Start and end date**</td><td>_Project start and end date. Format: yyyy.mm.dd-yyyy.mm.dd_</td></tr>
<tr><td>**Method**</td><td>_How the data were generated, listing equipment and software used (including model and version numbers), formulae, algorithms, experimental protocols, and other things one might include in a lab notebook_</td></tr>
<tr><td>**Standards**</td><td>_Reference to existing suitable standards of the discipline can be made. If these do not exist, an outline on how and what metadata will be created. Depending on the type of data, different standards for collection exist, including but not limited to: a. systematic literature reviews: Cochrane and Joanna Briggs Institute standards; b. interviews: QUAGOL; c. focus group discussions: AMEE Guide 91; d. patient preference studies: depending on the type of method, e.g. the ISPOR guide for DCE_</td></tr>
<tr><td>**Type of data**</td><td>_1) datasets containing personal data, or 2) datasets containing non-personal data_</td></tr>
<tr><td>**Processing**</td><td>_How the data have been altered or processed_</td></tr>
<tr><td>**Source**</td><td>_Citations to data derived from other sources, including details of where the source data is held and how it was accessed_</td></tr>
<tr><td>**Funded by**</td><td>_Provide information regarding financial support such as research grants, or indicate that the data owner funds the study_</td></tr>
<tr><th>Content Description</th><th></th></tr>
<tr><td>**Data description**</td><td>_Keywords or phrases describing the dataset or content of the data. Indicate version number if applicable. Describe the nature and origin of the data_</td></tr>
<tr><td>**Language**</td><td>_All languages used in the dataset_</td></tr>
<tr><td>**Variable list**</td><td>_Description with variable name, length, type, etc. and code lists. Example: SEX, length of field (1 or more characters), values: F for female, M for male; DOB (date of birth), length of field (1 or more characters), values: yyyy.mm.dd_</td></tr>
<tr><td>**Data quality**</td><td>_This section should include a description of data quality standards and procedures to assure data quality_</td></tr>
<tr><td>**Code list**</td><td>_Explanation of codes or abbreviations used in either the file names or the variables in the data files (e.g. '999 indicates a missing value in the data')_</td></tr>
<tr><th>Technical Description</th><th></th></tr>
<tr><td>**Repository**</td><td>_Mention where the data is stored_</td></tr>
<tr><td>**File inventory**</td><td>_All files associated with the project, including extensions (e.g. 'NWPalaceTR.WRL', 'stone.mov')_</td></tr>
<tr><td>**File formats**</td><td>_Formats of the data, e.g. FITS, SPSS, HTML, JPEG, etc. No data standards are used in general in PREFER to enable interoperability of data, but the PREFER consortium is striving to use file formats that are interoperable, such as .txt, .csv, or .rtf files_</td></tr>
<tr><td>**File structure**</td><td>_Organization of the data file(s) and layout of the variables, where applicable_</td></tr>
<tr><td>**Checksum**</td><td>_A digest value computed for each file that can be used to detect changes; if a recomputed digest differs from the stored digest, the file must have changed_</td></tr>
<tr><td>**Necessary software**</td><td>_Names of any special-purpose software packages required to create, view, analyse, or otherwise use the data_</td></tr>
<tr><th>Access</th><th></th></tr>
<tr><td>**Rights**</td><td>_The data owner should indicate which access rights are applicable. Any known intellectual property rights, statutory rights, licenses, or restrictions on use of the data_</td></tr>
<tr><td>**Access information**</td><td>_Where and how your data can be accessed by other researchers_</td></tr>
<tr><td>**Sharing**</td><td>_Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared or made open access, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related)_</td></tr>
<tr><td>**Archiving and preservation (including storage and backup)**</td><td>_Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximated end volume is, what the associated costs are and how these are planned to be covered. This information should include the archiving procedure of the research project at the data owner's site and also whether the data can be archived at the UU repository ALLVIS - for a detailed description see chapter 6 of the DMP_</td></tr>
</table>
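Two of the fields in Table 4 are mechanical enough to script: the Identifier follows the fixed pattern PREFER_#.#_L/I/P_yyyy-mm-dd, and the Checksum is a per-file digest. The following minimal sketch illustrates both; the choice of SHA-256 is an assumption, as the DMP does not prescribe a digest algorithm.

```python
import hashlib
import re
from datetime import date
from pathlib import Path

ID_PATTERN = re.compile(r"^PREFER_\d+\.\d+_[LIP]_\d{4}-\d{2}-\d{2}$")

def make_identifier(task: str, design: str, created: date) -> str:
    """Build a PREFER_#.#_L/I/P_yyyy-mm-dd identifier (design: L, I or P)."""
    identifier = f"PREFER_{task}_{design}_{created.isoformat()}"
    assert ID_PATTERN.match(identifier), f"malformed identifier: {identifier}"
    return identifier

def file_checksum(path: Path) -> str:
    """SHA-256 digest of a file; re-compute and compare to detect changes."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(make_identifier("2.2", "I", date(2016, 12, 10)))
# -> PREFER_2.2_I_2016-12-10, matching the example given for task 2.2
```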
## Responsibilities of the data owner
Data owners per task are identified and described in table 5, which will be maintained throughout the project. The data owner of the respective research project must ensure, and is responsible for, compliance with all legal and ethical requirements for data collection, handling, protection and storage. This includes adherence to regulations and guidelines such as (but not limited to) the EU Clinical Trials Directive 2001/20/EC, Good Clinical Practice (GCP) and Good Pharmacoepidemiology Practice (GPP), as applicable. Only the research data owner will be granted access to the secure data repository of KU Leuven. The process for granting access to deputies will be worked out between M6 and M18.
All data protection rules described in chapter 7 of the DMP apply to the archiving of the results underlying PREFER publications and recommendation documents. Data generated in academic-led studies which cannot be fully anonymized, e.g. interviews and personal data, may only be stored in the KU Leuven repository described in chapter 3.
**Table 5** Overview of data owners and data repository used per task
This table will be further employed after M6 to update the research data owner
including additions of research owner deputies, as people might change
position. The updated table will be displayed in the next version of the DMP,
due in M18.
<table>
<tr><th>Task</th><th>Design</th><th>Data repository</th><th>Research Data Owner</th><th>E-mail address</th></tr>
<tr><td>2.1</td><td>Literature review</td><td>KU Leuven</td><td>Rosanne Janssens</td><td>[email protected]</td></tr>
<tr><td>2.1</td><td>Interviews</td><td>KU Leuven</td><td>Rosanne Janssens</td><td>[email protected]</td></tr>
<tr><td>2.2</td><td>Literature review</td><td>KU Leuven</td><td>Eline van Overbeeke</td><td>[email protected]</td></tr>
<tr><td>2.2</td><td>Interviews</td><td>KU Leuven</td><td>Eline van Overbeeke</td><td>[email protected]</td></tr>
<tr><td>2.3</td><td>Literature review</td><td>EUR</td><td>Chiara Whichello</td><td>[email protected]</td></tr>
<tr><td>2.3</td><td>Interviews</td><td>KU Leuven</td><td>Chiara Whichello</td><td>[email protected]</td></tr>
<tr><td>2.4</td><td>Literature review</td><td>EUR</td><td>Vikas Soekhai</td><td>[email protected]</td></tr>
<tr><td>2.4</td><td>Interviews</td><td>KU Leuven</td><td>Vikas Soekhai</td><td>[email protected]</td></tr>
<tr><td>2.5</td><td>Literature review 1</td><td>TBD</td><td>Sarah Verschueren</td><td>[email protected]</td></tr>
<tr><td>2.5</td><td>Literature review 2</td><td>TBD</td><td>Selena Russo</td><td>[email protected]</td></tr>
<tr><td>2.5</td><td>Literature review 3</td><td>TBD</td><td>Elisabeth Furberg</td><td>[email protected]</td></tr>
<tr><td>2.7</td><td>Interviews</td><td>KU Leuven</td><td>TBD</td><td></td></tr>
<tr><td>3.3</td><td>Review of historical case studies</td><td>KU Leuven</td><td>Leo Russo</td><td>[email protected]</td></tr>
<tr><td>3.3</td><td>Lessons Learned Survey</td><td></td><td>Rachel DiSantostefano or Jorien Veldwijk</td><td></td></tr>
<tr><td>3.4</td><td>PP case study</td><td>Industry*</td><td>TBD</td><td></td></tr>
<tr><td>3.5</td><td>PP case study</td><td>KU Leuven</td><td>TBD</td><td></td></tr>
<tr><td>3.6</td><td>PP case study</td><td>KU Leuven</td><td>TBD</td><td></td></tr>
<tr><td>3.7</td><td>PP case study</td><td>KU Leuven</td><td>TBD</td><td></td></tr>
<tr><td>3.8</td><td>PP case study</td><td>KU Leuven</td><td>TBD</td><td></td></tr>
<tr><td>4.3</td><td>Interviews</td><td>KU Leuven</td><td>TBD</td><td></td></tr>
<tr><td>4.4</td><td>Interviews</td><td>KU Leuven</td><td>TBD</td><td></td></tr>
</table>
TBD= To be discussed; EUR= Erasmus University Rotterdam
* The datasets containing survey data and/or recorded and transcribed interviews generated by the industry-led case studies are by definition to be regarded as personal data and require safe storage and handling in accordance with national and European regulatory frameworks. The industry partner responsible for conducting the case study will be responsible for the secure storage of the personal data.
# Sharing and secondary use of PREFER generated or collected data
## Procedures for making data findable
With the unique identifier of the individual dataset of PREFER and the
overview of data owners and data repository used per task (table 5) available
on Projectplace, the data owner can be identified and contacted.
## Re-use within the PREFER consortium
To achieve the objectives of PREFER, it is imperative to follow the
collaborative approach the partners agreed on when signing the consortium
agreement. This includes the necessity to share data from the individual
research projects while respecting data protection and intellectual property
of the partners’ work.
For those individual research projects within PREFER that need to use data generated in another PREFER task, table 5 contains the contact details of the data owner, whom a requester can approach to access the results.
## Re-use of PREFER results by third parties
Scientific organizations all over the world are promoting the principles of open science and the sharing of research data. Making data public prevents duplicate research, makes it possible to combine data, and saves money and time. The PREFER-generated data will be a valuable asset for further research.
For those external individual research projects wanting to use PREFER
generated or collected data during the course of PREFER, the Data Management
Compliance contact should be contacted (table 2). For those external
individual research projects wanting to use PREFER generated or collected data
when PREFER is completed, the Uppsala repository manager should be contacted
(table 2). Giving access to external parties will be considered by the
Steering Committee on a case by case basis. Access rules for the time after
PREFER termination will be worked out and described in the final DMP.
Data can be shared only when participants of e.g. patient preference studies or PREFER surveys have agreed via informed consent that their study results may be used for secondary research and the data are anonymous. To obtain the agreement of participants to use their data for secondary research, the following lines can be included in the consent form:
* _I understand the information collected about me will be stored in a secure database, which will be used for future research._
* _I authorise the research to use my anonymised study data for additional medical and/or scientific research projects._
# Protection of personal data
The collection of personal data will be conducted under the applicable international, IMI, and national laws and regulations and requires prior written informed consent by the individual; this also applies where data are shared with public and commercial entities and, if applicable, outside the EU in countries with lower data protection standards. To obtain the agreement of participants of e.g. patient preference studies or PREFER surveys to use their data for secondary research, the model consent lines given in the preceding chapter can be included in the consent form.
PREFER researchers commit to the highest standards of data security and
protection in order to preserve the personal rights and interests of study
participants. They will adhere to the provisions set out in the:
* General Data Protection Regulation (GDPR), foreseen to come into effect in 2018(8)
* Directive 2006/24/EC of 15 March 2006 on the retention of data generated or processed in connection with the provision of publicly available electronic communication services or of public communications networks(9)
* Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications)(10)
* Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data(11)
Prior to collecting, storing, and processing sensitive personal data, the
consortium will seek approval of the applicable local and/ or national data
protection authorities and work within the processes recommended in the
e-Health Task Force Report “Redesigning Health in Europe for 2020.”
Consent forms will contain information on how personal data will be managed.
To secure the confidentiality, accuracy, and security of data and data
management, the following measures will be taken:
* All personal data obtained within the academic-led case studies will be transmitted to partners within the consortium only after anonymization or pseudonymization (a minimal sketch follows after this list). Keys to identification numbers will be held confidentially within the respective research units. In situations where re-identification of study participants becomes necessary, for example for the collection of additional data, this will only be possible through the research unit and in cases where informed consent for such situations has been given.
* Personal data are entered into secure websites. Data are processed only for the purposes outlined in the patient information and informed consent forms of the respective case studies; use for other purposes will require explicit patient approval. Data are not transferred to any places outside the consortium without patient consent.
* Access to experimental data will be granted to partners in non-EU countries for restricted use within the PREFER project. Data handling in non-EU countries will be fully conforming to national laws and regulations and the European Directive 95/46/EC. In cases of contradiction, the tighter regulation shall prevail. The necessary and legally adequate measures will be taken to ensure that the data protection standards of the EU shall be complied with (see below). Transfer and subsequent use of PREFER data by partners in US will be governed in accordance with federal and state laws.
* None of the personal data will be used for commercial purposes, but the knowledge derived from the research using the personal data may be brought forward to such use as appropriate, and this process will be regulated by the Grant Agreement and the Consortium Agreement, in accordance with any generally valid legislation and regulations.
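As an illustration of the keyed pseudonymization described in the first bullet above, the following minimal sketch separates the shareable research records from the re-identification key; the identifier fields and file layout are illustrative assumptions.

```python
import csv
import secrets

def pseudonymise(records):
    """Replace direct identifiers with random study codes; return data and key."""
    key = {}   # study code -> original identifier; stays within the research unit
    shareable = []
    for rec in records:
        code = f"PS-{secrets.token_hex(4)}"   # e.g. "PS-9f1c22ab"
        key[code] = rec["participant_id"]     # assumed identifier field
        shareable.append({"study_code": code,
                          "age_band": rec["age_band"],
                          "responses": rec["responses"]})
    return shareable, key

def write_key(key, path):
    """Store the re-identification key in a separate, access-restricted file."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["study_code", "participant_id"])
        writer.writerows(key.items())

data, key = pseudonymise([{"participant_id": "X123", "age_band": "40-49",
                           "responses": [3, 5, 2]}])
print(data)  # only the pseudonymised records leave the research unit
```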
The following points to consider will guide the protection of data within the PREFER project:

(i) The entity providing personal data to the project shall verify that:

* the initial collection of these data has been compliant with the requirements of the original purpose
* the collection and the provision of the data to the project meet all legal requirements to which the entity is subject
* further storage and processing of the data after completion of the research project are in compliance with applicable law

(ii) The entity which provides personal data to the project shall document any restriction of use or obligation applicable to these data (e.g., the limited scope of purpose imposed by the consent form).

(iii) The entity which uses personal data in the project shall be responsible for ensuring that it has the right under the applicable data protection and other laws to perform the activities contemplated in the project.
Personal data shall always be collected, stored, and exchanged in a secure
manner, through secure channels.
# Ethical aspects
## General ethical aspects
The participants of PREFER are requested to adhere to all relevant international, IMI, and national legislation and guidelines relating to the conduct of prospective case studies, as detailed below.
All research activities within PREFER requiring approval on ethical and legal
grounds through responsible local or national Ethics Committees and Regulatory
Authorities will be conducted only after obtaining such approval. All ethics
approvals will be submitted to IMI before commencement of any prospective case
study. A report by the Ethics Advisory Board will be submitted to IMI within
the periodic reports.
The proposed research will comply with the highest ethical standards,
including those outlined in the Grant Agreement (Article 34 of the Model Grant
Agreement) and the European Code of Conduct for Research integrity. The
balance between the research objectives and the means used to achieve them
will be given special attention. To ensure this, PREFER is supported by its
Ethical Advisory Board. The Ethical Advisory Board will consist of four
experts on ethics, law, and drug development representing the key areas of the
project, including a patient representative. The Ethical Advisory Board will
monitor the progress of the project and ensure a high standard of research by
taking part in the annual General Assembly meetings. In addition, it will:
* provide expert support to the consortium in all relevant ethical questions
* ensure compliance with legislation and guidelines
* conduct regular project reviews
* issue recommendations to the consortium when appropriate
Researchers are requested to have appropriate training regarding Good
Scientific, Good Clinical, Good Pharmacoepidemiology Practice Guidelines and
the legal and regulatory framework described in the following sections.
## Interviews and patient preference studies
The methodologies for eliciting patient preferences will be tested in
prospective case studies. At this stage, it is not yet fully decided which
patient populations will be involved in the case studies, but we foresee the
possibility of approaching vulnerable patient populations, children, parents,
care givers, and healthy volunteers. Each patient preference study requires
approval from the relevant ethical review boards with adherence to
requirements related to informed consent and protection of privacy.
Our foremost principles for the conduct of any research involving human
participants within PREFER are:
* respect for the rights, integrity, and privacy of patients
* protection of vulnerable patients
* continuous monitoring of patients’ safety
* generation of meaningful, high-quality data
* timely publication of case study results
All research in PREFER involving human participants will be conducted under
the applicable international, IMI, and national laws and regulations and only
after obtaining approval by the applicable local or national Ethics Committees
and Regulatory Authorities. In particular, the consortium is committed to:
* the Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects
(Adopted by the 18th World Medical Association (WMA) General Assembly,
Helsinki, Finland, June
1964, and last amended by the 64th WMA General Assembly, Fortaleza, Brazil,
October 2013)(12)
* the standards of the International Conference on Harmonisation on Good Clinical Practice(13)
* the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine, ETS No. 164,
Oviedo, 4 April 1997; and the Additional Protocol on Biomedical Research (CETS
No. 195), 2005(14)
* the UNESCO Universal Declaration on Bioethics and Human Rights (2005)(15)
* Case studies have not yet been defined in detail so at this stage it is unclear which countries will be involved. Research with human participants will be conducted in the applicable countries in accordance with national and international regulations. Preference studies regarding future risks or investigating how to balance benefits and risks may cause psychological distress e.g., for vulnerable patient groups. This implies that all studies conducted within the PREFER project will have to take actions in order to be able to support/counsel patients appropriately. This will be one of the requirements assigned to each leader of a clinical case study.
* As mentioned above, PREFER will seek to include a broad selection of patient populations, including vulnerable patients if necessary. For ethical reasons it is important that perspectives from these patient groups are also included, and that patients who may experience difficulties in getting their voice heard have their preferences taken into account. Vulnerable patient populations may be identified in the field of neuromuscular disorders, where many of the diagnosed diseases are rare and the patients are not adults. This is also why the PREFER project has included a patient organization within this disease area, i.e. Muscular Dystrophy UK. They, as well as the other patient organisations, will be asked to give extra attention to the situation of vulnerable patients and how they are included in the case studies.
Patient Information and informed consent procedures will be approved by the
relevant national or local ethics boards. Data collectors collecting personal
data for a prospective collaborative research project will inform the study
participants about the project in an appropriate manner, including:
* the identity of the data controller
* the voluntariness of the collection of data
* the purposes of the processing
* the nature of the processed data, including its type (identifiable, coded, anonymised)
* the handling of the data
* the existence of the right of access to, and the right to rectify the data concerning themselves
* if the research project reasonably anticipates the sharing of data across research groups (including academic and commercial entities) and national borders (including information about potentially lower data protection standards outside EU)
* if the project involves collaboration with both academic and commercial partners
* that consent may be withdrawn and how this is done
The research conducted in PREFER does not have the potential for
malevolent/criminal/terrorist abuse. There are no other ethics issues
currently identified beyond those discussed above. Any potential issues that
arise during the project duration will be presented to the Ethics Advisory
Board who will ensure they are addressed by taking the appropriate
organisational, legal, and regulatory steps.
# Overview of data types generated and collected in BigData@Heart
A goal of BigData@Heart is to create a beyond-state-of-the-art, open-access informatics platform that will bridge the available knowledge on AF, ACS and HF, related comorbidities and current/emerging risk factors from observational and experimental studies. In this platform, the BigData@Heart consortium will combine a variety of resources such as biomedical knowledge, phenotyping algorithms and informatics tools for exploring and analyzing heterogeneous data from clinical, multi-omics and imaging sources.
To reach its ambitious goals, BigData@Heart will leverage national and
international research and innovation activities. BigData@Heart will exploit
the data from the various cohorts and registry studies described above.
Of note, certain specifications on the data sets provided by EFPIA partners
can only be provided after all confidentiality agreements (as part of the
grant agreement) are in place. Cohort and registry studies may include:
* ACS, AF, HF disease-based genetic collections (e.g. GENIUS-CHD, HERMES, AFGen, EPIC-CVD, UCORBIO, Myomarks, CARDIoGRAMplusC4D, BiomarCaRE MORGAM, German young MI GS, SMART-NL)
* Disease-based collections without omics (e.g. NICOR, ESC EORP HF Long Term, ESC EORP AF General LT, SwedeHF, SwedeHeart, Hamburg clinical cohorts, German young MI study)
* Hospital-based EHR data (e.g. HELIOS hospital group, UPOD, Farr Institute Scotland)
* Population-based cohorts (e.g. CALIBER, ABUCASIS, Mondriaan)
* Population-based consented cohorts (e.g. ERFC)
* Healthy population cohorts with omics (e.g. INTERVAL, UCLEB, blood donor cohorts, UK Biobank, LRGP, EPIC-NL)
* Trial data (e.g. EAST - AFNET 4, AXAFA - AFNET 5 & 6)
# Operational data management requirements for BigData@Heart research projects
Each research project (interviews, literature review, surveys, case studies,
etc.) needs to provide a short dataset-specific DMP, including but not limited
to data capture systems, data analysis systems, data protection and data
privacy measures, including description of de-identification of data sets and
access rules. If the research results cannot be made open access, a justification needs to be provided.
## Requirements for the short dataset-specific DMP
All data owners need to fill in **Table 1**, containing the metadata and describing the data management of the data sets. Metadata are specifications for
data that provide the contextual information required to understand those
data. Such specifications describe the structure, data elements,
interrelationships and other characteristics of data, the data repository
used, and need to be securely stored with the database.
These tables will be reviewed by the WP1 PMO team for completeness, compliance
with the DMP and compliance with the Consortium Agreement.
The completed descriptions for the subprojects (based on Table 1) will be kept
and maintained in Internal Workspace of the project and attached to the DMP as
Annexes as updated deliverable during the annual technical report.
**Table 1** Data requested per dataset
## Sources
**Source** _E.g. citations to data derived from other sources, including details of where the source data is held and how it was accessed._
**Funded by** _Provide information regarding financial support such as research grants, or indicate that the data owner funds the study._
## Content Description
**Data description** _Keywords or phrases describing the dataset or content of the data. Indicate version number if applicable. Describe the nature and origin of the data._
**Language** _Describe languages used in the dataset._
**Variable list** _Give a short description of the variables. Describe: variable name, length, and code lists._
**Data quality** _Please describe the applicable data quality standards and procedures to assure data quality._
**Contact person** _Please indicate who should be contacted for detailed explanation of e.g. file names, codes or abbreviations used in either the file names or the variables in the data files._
## Technical Description
**Repository** _Indicate where the data is stored._
**File inventory/formats/description** _Give a description of which files are stored, the data formats, and file structure._
**Necessary software** _Indicate if specific software is needed._
## Access
**Rights** _Please indicate which access rights are applicable according to the data owner. Any known intellectual property rights, statutory rights, licenses, or restrictions on use of the data._
**Access information** _Please indicate how the data can be accessed by other researchers, and what procedures exist._
**Sharing** _Please describe how the data can be shared, what procedures are relevant, if any embargo periods exist, and other information that is relevant for data sharing. If the dataset cannot be shared or made open access, please indicate the reasons._
**Archiving and preservation** _Please describe how and to what extent long-term preservation of the data is assured. This includes information on how this long-term preservation is supported._
# Responsibilities of the data owner
Data owners per task will be identified and described in Table 1/the Annexes, which will be maintained. The data owner of the respective research project must ensure, and is responsible for, compliance with all legal and ethical requirements for data collection, handling, protection and storage. This includes adherence to regulations and guidelines such as (but not limited to) the EU Clinical Trials Directive 2001/20/EC, Good Clinical Practice (GCP) and Good Pharmacoepidemiology Practice (GPP), as applicable.
# Sharing and secondary use of BigData@Heart generated or collected data
## Procedures for making data findable
With the overview of data owners and data repository used per task (Table 1)
available on the Internal Workspace and Annexes, the data owner can be
identified and contacted.
## Re-use within the BigData@Heart consortium
For the success of BigData@Heart, it is critical that partners adhere to the collaborative approach agreed in the consortium agreement. With the overview of data owners and data repositories used per task (Table 1) available on the Internal Workspace and Annexes, the data owner can be identified and contacted.
## Re-use of BigData@Heart results by third parties
When third parties want to use data that were generated or collected as part of the BigData@Heart project, the Consortium PMO office should be contacted via Linsey van Bennekom ([email protected]). Giving access to external parties will be considered by the Management Board and the Data Owner. Decisions are made on a case-by-case basis. The consortium strives for optimal access for third parties during the course of the project, while keeping in mind the overall objectives, goals and activities of the project consortium.
A separate procedure for accessing consortium data after the end of the
project will be described in the final version of the Data Management Plan.
# Protection of personal data
Personal data will be stored in accordance with relevant national and
international legislation and good practice. Only those data will be collected
that are of relevance for the proposed research, no excess data will be
stored. Data will only be processed for BigData@Heart research purposes. For
all studies in this proposal all data will be coded and de-identified, and
where possible fully anonymised.
BigData@Heart involves further processing or secondary use of existing data,
as well as of data that are being collected currently or during the project.
To ensure patient privacy, all datasets for researchers include subject unique
identification numbers that enable feedback about one subject to the data
manager but do not enable identification of that particular subject.
Importantly, we will comply with the General Data Protection Regulation, i.e. the Regulation of the European Parliament and of the Council (http://data.consilium.europa.eu/doc/document/ST-15039-2015-INIT/en/pdf) on the protection of individuals with regard to the processing of personal data and on the free movement of such data, with which all organisations must comply during the project lifetime.
All research is conducted in compliance with applicable EU (e.g. Directive
95/46/EC of the European Parliament and of the Council of 24 October 1995) and
national legislation, which includes:
* Compliance with the original study consent for which data were collected;
* Personally Identifiable Information (PII) is adequately protected;
* Ensure that anonymisation/de-identification is conducted appropriately;
* Ethical review is completed as required.
In general terms, the appropriate data protection principles will be observed,
including:
* Data are fairly and lawfully processed;
* Data are used only in ways that are compatible with the original consent;
* The amount of data collected is relevant and not excessive;
* All reasonable efforts are taken to ensure data accuracy;
* The data are used in accordance with the rights of the study participant;
* The data are stored securely;
* The relevant international and national guidance will be consulted.
The EU General Data Protection Regulation (Regulation [EU] 2016/679, revising
Directive 95/46/EC on Data Protection and Privacy) will apply to the project
from May 2018 and is taken into account to ensure continuing compliance (as
described in WP7). New techniques developed within the project shall comply
with the general principles of the EU General Data Protection Regulation such
as the data minimisation and privacy by design. Proposals for data handling
during the project will be presented to the independent ethics advisor for
ethical assessment.
# Ethical aspects
## General ethical aspects
To achieve BigData@Heart’s goals, data derived from clinical care and studies
with human participants will be used. Throughout BigData@Heart, the aim will
be to attain high ethical standards in the conduct of research involving
humans and the collection, handling and storage of data. The study will adhere
to fundamental ethical principles of respect for human dignity (including the
principles of non-exploitation, non-discrimination and non-instrumentalisation), respect for individual autonomy (entailing the giving of
free and informed consent, and respect for privacy and confidentiality of
personal data) and the principle of beneficence with regard to the improvement
and protection of health. The consortium is aware of international regulation,
conventions and declarations and will properly address any other currently
unforeseen ethical issue that may be raised by the proposed research.
An extensive strategy to ensure potential ethical issues are dealt with accordingly will be in place throughout the project and has a prominent role in WP7. Ethics issues will be actively monitored throughout the project, and if new issues arise, the European Commission will be notified immediately. The project's ethics support will:
* provide expert support to the consortium in all relevant ethical questions
* ensure compliance with legislation and guidelines
* conduct regular project reviews
* issue recommendations to the consortium when appropriate
Researchers are requested to have appropriate training regarding Good
Scientific, Good Clinical, Good Pharmacoepidemiology Practice Guidelines and
the legal and regulatory framework described in the following sections.
## Studies using human data
For all studies that involve humans, approval of the local and national ethics committees has been or will be sought. A portfolio of all relevant documents, such as ethical approvals, informed consent forms, information sheets, and policy documents concerning recruitment, handling of incidental findings, transfer of data and material etc., will be compiled. An analysis of
these documents will be performed – as part of WP7 – to create an overview of
current policies. This portfolio will be presented and discussed in the
Governance Committee (part of Task 7.3). In this committee, we will appoint an
independent Ethics Advisor. Any ethical issues arising from these documents
will be taken up by the partners from WP7.
Appropriate Informed consent from study participants has been and will be in
place prior to use of materials and prior to inclusion into the study.
Informed consent will be prepared according to EU standards and written in a
manner to enable laypersons to fully understand the aims of the studies, what
the study procedures are, which information will be used and for what purpose.
All potential participants will be informed about the relevance (with respect
to science and public health) and the content of the studies as well as about
the protection of their personal rights, data management and privacy. Copies
of the templates of Informed Consent and the ethical approvals which will
cover transfer of biological samples or personal data will be submitted to
IMI. Detailed information will be provided to the IMI on the procedures that
will be used for the recruitment of participants (e.g. number of participants,
inclusion/exclusion criteria, informed consent, direct/indirect incentives for
participation, the risks and benefits for the participants etc.). If
applicable, the applicants will demonstrate that human participants are
properly insured. All informed consent materials will be presented to the
independent Ethics Advisor for an ethical assessment.
## Human Cells and tissues
Medical and ethical approval for the gathering and use of the human blood
samples – the blood samples that will be used for BigData@Heart are left-over
material from routine exams as well as planned sampling according to cohort or
trial specifics. In cases where additional sampling is necessary for data
enrichment (WP4), the subject needs to undergo only minimal additional
procedures in order for us to procure the blood sample. In addition, we will
have access to the related patient files through pseudonymised procedures at the
relevant facility. The researchers involved in this project will not have
direct access to the patient’s identity but will obtain the required
information for their research. The material will be provided only if the
patient has signed an informed consent. The protocol, as well as the informed
consent, will describe how we will deal with retraction of permission, unsolicited findings, insurance, vulnerable subjects, and other ethically
sensitive issues. Specially trained hospital staff informs patients about
their voluntary consent and answers all possible questions in separate private
sessions with the patients. The rights, safety and welfare of the research
subjects override the interests of the study, society and science.
The infrastructure and management of blood sample collection and database
management of patient information has previously been established at all
relevant biobanks. All documents relevant to ethics approval, informed
consent, ethical study conduct, transfer of data, handling of incidental
findings etc. will be part of the portfolio described under Task 8.2.
In the case of human cells/tissues that are obtained within the project,
ethics approval will be provided to the IMI. In the case of human
cells/tissues that are obtained within another project, details on
cells/tissues type and authorisation by primary owner of data (including
references to ethics approval) will be provided to the IMI. In the case of
human cells/tissues stored in a biobank, details on cells/tissues type will be
provided, as well as details on the biobank and access to it.
# 1. PRINCIPLES
Data management is under the responsibility of the coordinator and is planned in agreement with beneficiary 14, UNITO. It is regulated by bilateral DTAs (see below, items 6 and 7) and follows the Imperial College and UNITO rules for _Data sharing, confidentiality and information governance_ (item 8).
# 2. GENERAL PLAN FOR DATA MANAGEMENT
The general scheme for data management has been agreed upon at the kick-off
meeting (D1.1) and it includes:
* The transfer of cohort data from single partners to (a) UNITO (with the exclusion of biomarkers) and (b) Imperial College for biomarkers. Both institutions have rules for data sharing, confidentiality and information governance
* The harmonization of relevant variables from all cohorts, depending on the needs of the WP, in particular in preparation of data analysis for the “decline phase” (Working Group 1, led by Stringhini - item 4 below), of the “build-up” phase (Working Group 2, led by Layte - item 5 below) and of the existing biomarkers (Working Group 3, led by Vineis).
* Harmonized variables will be made available to Work packages and Working Groups on request on the basis of the planned statistical analyses, reports and papers.
# 3. DEFINITION OF AGEING AND HARMONIZATION OF SES
The **workshop of WP7** (held on June 10, 2015) led to definitions/refinements of SES and healthy ageing that will be used in the consortium (a report has been prepared separately by M. Kelly-Irving in deliverable D7.1).
The following simplified definition of healthy ageing has been proposed as a
starting point: “ **life expectancy at age 65 without activity limitations**
”. We will use both hard indicators (death) and functional indicators
(activity limitations), though whenever possible we will emphasize the second.
## 1. Proposal for harmonization of adult SES variables (written by Fulvio Ricceri, Angelo d'Errico, Silvia Stringhini)
**EDUCATIONAL LEVEL:**
Variable1 (3 levels) – _all cohorts_ :
* primary or lower secondary school
* higher secondary school
* tertiary education (post-secondary)
Variable 2 (4 levels) – _not all cohorts_ :
* primary or lower secondary school
* vocational school
* higher secondary school
* tertiary education (post-secondary)
**EMPLOYMENT STATUS:**
Variable 1 (2 levels) – _all cohorts_ :
* employed
* not employed
Variable 2 (5 levels) – _not all cohorts_ :
* employed
* not employed: retired
* not employed: housewife
* not employed: unemployed
* not employed: disabled
**OCCUPATIONAL CLASS:**
Variable 1 (5 levels) – _not all cohorts:_
* higher professionals and managers (Class 1 ESEC – European Socio-economic
Classification – 9 classes)
* lower professionals and managers; higher clerical, services and sales workers (Class 2 and 3 ESEC)
* small employers and self-employed; farmers; lower supervisors and technicians (Class 4, 5, and 6 ESEC)
* lower clerical, services, and sales workers; skilled workers (Class 7 and 8 ESEC)
* semi- and unskilled workers (Class 9 ESEC)
**INCOME:**
Variable 1 (3 levels):
* tertiles within each cohort
Variable 2 (4 levels) – _if possible_ :
* quartiles within each cohort
Variable 3 (5 levels) – _if possible_ :
* quintiles within each cohort
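These recodings are simple to script. The following minimal pandas sketch illustrates the three-level education variable and the within-cohort income tertiles defined above; the column names and source codings are illustrative assumptions.

```python
import pandas as pd

df = pd.DataFrame({
    "cohort": ["A", "A", "A", "B", "B", "B"],
    "education_raw": ["primary", "higher_secondary", "tertiary",
                      "lower_secondary", "tertiary", "higher_secondary"],
    "income": [14000, 23000, 41000, 18000, 30000, 52000],
})

# Educational level, Variable 1 (3 levels), as defined for all cohorts
edu3 = {"primary": "primary/lower secondary",
        "lower_secondary": "primary/lower secondary",
        "higher_secondary": "higher secondary",
        "tertiary": "tertiary"}
df["education_3lvl"] = df["education_raw"].map(edu3)

# Income, Variable 1: tertiles computed within each cohort
df["income_tertile"] = (df.groupby("cohort")["income"]
                          .transform(lambda s: pd.qcut(s, 3, labels=[1, 2, 3])))
print(df)
```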
# 4. PROPOSAL FOR HARMONISED SOCIO-ECONOMIC MEASURES IN THE BUILD-UP PHASE (WRITTEN BY RICHARD LAYTE)
For both papers and for other work in the growth phase working group we will
need to produce comparative data and this requires a harmonisation template
that can be applied across all of the cohorts contributing data. Whilst the
growth phase group could produce its own harmonisation guide, it makes sense,
where possible, to adopt that being used by the decline phase working group.
This will mean that should the same data be used in the different workgroups
we will not be creating more work for ourselves. Earlier in the summer Silvia
circulated an initial harmonisation guide for SES variables which I have
attached here for reference. This sets out harmonised variables for education,
income and social class and provides two/three levels of variable which can be
adopted depending on the level of information available. This is important as
data structures vary significantly across the cohorts and we will be forced to
use the lowest common denominator if we are looking to maximise the number of
countries in comparisons. Overall I think this is a good schema to use for the
SES variables although there are some questions about how these schemas would
be implemented in different countries and in different cohorts that I would
like to explore.
## Education Variables
For education for example, in Ireland or the UK there is no analogue to the
‘vocational school’ listed although I fully recognise that there is a
differentiation between general and vocational tracks in other countries. In
the CASMIN schema (see attached) which has been used for a great deal of
social mobility research, there are higher and lower vocational qualifications
which are essentially analogues of lower secondary and higher secondary
educational qualifications. Should 'vocational school' be grouped with the latter in the LIFEPATH three-level variable?
There are similar issues around how to classify ‘tertiary education’. Many
countries have postsecondary courses in vocational subjects but these would
not be classed as tertiary education and indeed, do not lead to the advantage
that a bachelor’s degree would in the labour market. For example, nursing
qualifications or technical apprenticeships. In the CASMIN schema these are
classified as 2c_voc. Tertiary would usually include practically orientated
study programs like college technical diplomas and professional qualifications
like social workers.
A third issue is the amount of differentiation to be used depending on the age
of the cohort under investigation. Because of educational expansion in most
countries it is now quite rare to find a young person whose highest level of
education is primary. I imagine this is the reason why Silvia and colleagues
have collapsed primary and lower secondary levels in their schema. Among older
cohorts though (those prior to 1967 in Ireland), leaving school before
secondary education was far more common and this track had significant impacts
on life trajectory. This would suggest keeping these two levels separate among
older cohorts.
Can I suggest that we adopt the following using the CASMIN groups attached?
* Primary Education - 1a, 1b, 1c
* Lower Secondary School - 2a, 2b (‘Vocational School’ should be grouped here if education finished <=16)
* Primary and lower secondary can be grouped in younger cohorts.
* Higher Secondary School – 2c_gen, 2c_voc (‘Vocational School’ should be grouped here if education finished >16 & <=18)
* Tertiary Education (3a, 3b).
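This proposal amounts to a lookup from CASMIN codes, with vocational qualifications placed according to school-leaving age. A minimal sketch follows; apart from the age cut-offs stated above, the function signature and defaults are assumptions.

```python
CASMIN_TO_LIFEPATH = {
    "1a": "primary", "1b": "primary", "1c": "primary",
    "2a": "lower secondary", "2b": "lower secondary",
    "2c_gen": "higher secondary", "2c_voc": "higher secondary",
    "3a": "tertiary", "3b": "tertiary",
}

def classify_education(casmin, vocational_school=False, age_finished=None):
    """Map a CASMIN code to the proposed LIFEPATH education levels."""
    if vocational_school and age_finished is not None:
        # 'Vocational school' is placed by school-leaving age (<=16 vs 17-18)
        return "lower secondary" if age_finished <= 16 else "higher secondary"
    return CASMIN_TO_LIFEPATH[casmin]

print(classify_education("2c_gen"))                                       # higher secondary
print(classify_education("2b", vocational_school=True, age_finished=15))  # lower secondary
```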
## Occupational Class
For the social (occupational) class variable Silvia and colleagues have
suggested that we use an aggregated version of the European Socio-Economic
Classification (EsEC), a comparative schema created by David Rose based on the
Erikson/Goldthorpe/Portocarero schema from the early 1990s. EsEC is also close
to the ONS class scheme as used in the UK (which was also developed by David
Rose). This I think is a good choice as it is a theoretically based schema
that has proven to be a good predictor of outcomes (see
_https://irvapp.fbk.eu/sites/irvapp.fbk.eu/files/irvapp_seminar_2010_03_rose_harrison_slide.pdf_
). There are issues however in how teams are to allocate occupations to the
groups set out in the Harmonisation document. For example, there is likely to
be disagreement about which occupations are to be regarded as ‘professionals’
even within countries let alone across national borders and no clear way to
define ‘higher’ and ‘lower’ professionals. It is likely then that there would
be large discrepancies between the way different country teams would group
particular occupations. The usual response in comparative research is to apply
the International Standard Classification of Occupations (ISCO88, though there
is now a more recent version) and then group these on an agreed basis. It
looks from many of the submissions to Silvia that most studies do not ask for
occupational titles but instead ask respondents to allocate themselves to a
group at interview. In this situation we will have no choice but to apply a
different coding in each case and agree this across the team. However, I think
single occupation codes may be available in some cohorts and will check with
individual teams by email.
If we are to combine existing occupation/class groups could I suggest that we
adopt another aggregation of the EsEC classification that may lead to less
cross-national drift in allocation. The standard EsEC has 10 levels:
1. 'Large employers, higher mgrs/professionals' (owners with 25+ employees, lawyers, doctors and judges plus corporate managers)
2. 'Lower mgrs/professionals, higher supervisory/technicians' (secondary school teachers, academics, engineers, accountants)
3. 'Intermediate occupations' (clerical and administrative occupations as well as associate professionals like social workers, primary school teachers, Montessori teachers, secretaries, etc.)
4. 'Small employers and self-employed (non-agriculture)' (shop keepers, self-employed artisans etc.)
5. 'Small employers and self-employed (agriculture)' (small farmers)
6. 'Lower supervisors and lower technician occupations' (supervisors of manual occupations and equipment operators)
7. 'Lower sales, services and clerical' (cashiers, cooks, firemen, police officers and salespeople)
8. 'Lower technical' (skilled construction workers and other artisans)
9. 'Routine occupations' (unskilled manual labourers)
10. Never worked and long-term unemployed
I would suggest that we keep the two professional classes together, as they are
hard to differentiate and have quite similar outcomes anyway. The intermediate
occupations are often held by women, but these women tend to be married to men
in, and to share living standards with, the skilled manual and lower technical
groups, so I would argue that class 3 should be grouped with class 6. I would
argue for keeping classes 4 and 5 separate, as farmers vary hugely across
countries in terms of income and outcomes. It would also be good to
differentiate between skilled and unskilled manual occupations, so I would
suggest grouping classes 6, 7 and 8 and keeping 9 and 10 separate. This gives us:
1. Higher and lower professionals, large employers and higher technicians (1 + 2)
2. Small employers and self-employed (non-agricultural) (4)
3. Small employers and self-employed (agricultural) (5)
4. Intermediate occupations, manual supervisors, lower technical, sales and service (3, 6, 7, 8)
5. Routine occupations and never worked (9 + 10)
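A minimal sketch of this ten-to-five aggregation, using the class numbers as listed above (variable names are illustrative):

```python
import pandas as pd

# ESeC class (1-10, as numbered above) -> proposed five-group scheme.
ESEC_TO_GROUP = {1: 1, 2: 1, 4: 2, 5: 3, 3: 4, 6: 4, 7: 4, 8: 4, 9: 5, 10: 5}

GROUP_LABELS = {
    1: "Professionals, managers and large employers",
    2: "Small employers and self-employed (non-agricultural)",
    3: "Small employers and self-employed (agricultural)",
    4: "Intermediate, manual supervisors, lower technical, sales and service",
    5: "Routine and never worked",
}

esec10 = pd.Series([2, 5, 7, 10, 3])        # example ESeC codes
class5 = esec10.map(ESEC_TO_GROUP)
print(class5.map(GROUP_LABELS).tolist())
```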
## Income Categories
Ideally, each team would have access to a measure of household net income that
could be equivalised to take account of the number of people dependent on that
income, and then categorised into groups such as tertiles or quintiles. It
appears from the documents circulated that many teams only have income
categories, so, as with occupational class, we will need to agree how
these are grouped.
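A minimal sketch of the equivalisation and grouping step, assuming the OECD-modified scale (1.0 for the first adult, 0.5 per further adult, 0.3 per child); the memo does not mandate a particular scale:

```python
import pandas as pd

def equivalised_income(net_income, n_adults, n_children):
    # OECD-modified equivalence scale (an assumption, not a memo requirement).
    weight = 1.0 + 0.5 * max(n_adults - 1, 0) + 0.3 * n_children
    return net_income / weight

df = pd.DataFrame({"net_income": [30000, 52000, 18000, 75000, 41000],
                   "n_adults": [2, 2, 1, 2, 1],
                   "n_children": [1, 3, 0, 2, 1]})
df["eq_income"] = [equivalised_income(i, a, c)
                   for i, a, c in zip(df.net_income, df.n_adults, df.n_children)]
# Cut into quintiles (use 3 for tertiles).
df["quintile"] = pd.qcut(df.eq_income, 5, labels=False) + 1
```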
## 5\. DATA TRANSFER AGREEMENT (FACSIMILE) BETWEEN EACH PARTNER (COHORTS) AND UNITO
## DATA TRANSFER AGREEMENT
This Data Transfer Agreement ("Agreement") and the Memorandum of Understanding
(the “MOU”) included herein as Attachment 1 are between … (the ”Provider”) and
those who are acquiring Data (as defined hereinafter) under this Agreement,
namely the Lifepath network and the University of Torino, Department of
Clinical and Biological Sciences, Orbassano, Italy (the “Recipient”).

**I. Definitions:**
1. PROVIDER: Organization providing the DATA. The name and address of this party will be specified herein.
2. PROVIDER SCIENTIST: The name and address of this party will be specified herein.
3. RECIPIENT: Organization receiving the DATA. The name and address of this party will be specified herein.
4. RECIPIENT SCIENTIST: The name and address of this party will be specified herein.
5. DATA: Data collected by PROVIDER. It includes specified non-identifiable data on individuals, in electronic format.
6. MODIFICATIONS: New data generated as a result of the analyses of the DATA. New data are a result of the harmonization of Data collected from PROVIDERs.

**II. Terms and Conditions of this Agreement:**
1. The PROVIDER retains ownership of the DATA, including any DATA contained or incorporated in MODIFICATIONS.
2. The PROVIDER and RECIPIENT will have joint ownership of MODIFICATIONS (except that, the PROVIDER retains ownership rights to the DATA included therein).
3. The PROVIDER will only transfer DATA to the RECIPIENT in good standing and if the RECIPIENT has been approved by the PROVIDER.
4. The PROVIDER, the RECIPIENT, and the RECIPIENT SCIENTIST agree that the DATA and MODIFICATIONS:
a. are to be used solely for the agreed academic research purposes, as specified in the attached MOU;
b. will not be used for other than the agreed purposes without the prior written consent of the PROVIDER;
c. are to be used only at the RECIPIENT organization, and in the RECIPIENT SCIENTIST's department under the direction of the RECIPIENT SCIENTIST or others working under his/her direct supervision; and
d. will not be transferred to anyone else within the RECIPIENT organization or external to the RECIPIENT organization without the prior written consent of the PROVIDER.
5. Any DATA delivered pursuant to this Agreement is understood to be a complete and accurate copy of the data retained by the PROVIDER.
6. This agreement shall not be interpreted to prevent or delay publication of research findings resulting from the use of the DATA or the MODIFICATIONS. The RECIPIENT SCIENTIST agrees to provide appropriate acknowledgement of the source of the DATA in all publications. See MOU for further information.
7. The RECIPIENT agrees to use the DATA in compliance with all applicable statutes and regulations, including those relating to research involving the use of humans.
8. This Agreement will terminate on the earliest of the following dates:
a. on completion of the proposed research with the DATA, as described in the MOU, or
b. on one month's written notice by either party to the other, prior to completion of the project, provided that
i. if termination should occur under 8(a) above, the RECIPIENT will discontinue its use of the DATA and will, upon direction of the PROVIDER, retain the DATA for a period of 7 years or destroy it. The RECIPIENT, at their discretion, will retain the MODIFICATIONS for a period of 7 years.
ii. in the event the PROVIDER terminates this Agreement under 8(b), the RECIPIENT will discontinue its use of the DATA upon the effective date of termination and will, upon direction of the PROVIDER, return or destroy all DATA and modify the MODIFICATIONS by removal of the PROVIDER data only.
9. The DATA is provided at no cost.
10. The Parties agree to abide by the terms of this Data Transfer Agreement and the MOU incorporated herein as Attachment 1. In the event of conflict between this Data Transfer Agreement and the MOU, the terms of the Data Transfer Agreement will prevail.
11. This Data Transfer Agreement along with the MOU included as Attachment 1 constitutes the entire agreement between the parties and supersedes all communications, arrangements and agreements, either written or oral, between the parties with respect to the matter hereof, except where otherwise required in law. This agreement may be varied by exchange of letters between the parties. No variation or extension to this Data Transfer Agreement or MOU shall be binding upon either party unless in writing and acknowledged and approved by both parties in writing.
## (Signatures begin on the following page)
**Acknowledged and agreed to:**
_For RECIPIENT_
The Dept of Clinical and Biological Sciences, University of Torino, Orbassano,
agrees to the details of the collaboration described herein.
____________________________________________________
RECIPIENT SCIENTIST Signature Date: 26/06/2015
Name: Giuseppe Costa
Title: Professor
Address: Regione Gonzole n. 10, Orbassano (TO)
Phone: +39 0116705487
Fax: +39 0116705704
Email: [email protected]
_For PROVIDER_
…, as the person responsible for the study from which the data are being
provided, agrees to the details of the collaboration outlined herein.
________________________________________________________________________
Provider Scientist Signature Date
Name:
Title:
Address:
Phone:
Fax:
Email:
Attachment 1
## 6\. MEMORANDUM OF UNDERSTANDING
### 1\. Purpose
RECIPIENT and PROVIDER have agreed to collaborate on a pooled analysis project
under the auspices of the Lifepath Consortium.
This Memorandum of Understanding (MOU) and the Data Transfer Agreement (DTA)
describe the terms of the collaboration and the transfer of the data,
including intellectual property rights, publication, confidentiality, other
financial terms, and the specifics of the data and their transfer.
### 2\. Study
The LIFEPATH project answers the call “PHC1. Understanding Health, ageing and
disease: Determinants, risk factors and pathways; Scope Option (ii)”.
The specific and original objectives of LIFEPATH are:
a) To demonstrate that healthy ageing is strongly uneven in society, due to multiple environmental, behavioural and social circumstances that affect individuals’ life trajectories (text of the Scope of the Work Programme: “The identification of determinants and pathways characteristic of healthy and active ageing”).

b) To improve the understanding of the mechanisms through which healthy ageing pathways diverge by social circumstances, by investigating life-course biological pathways using omic technologies.

c) To provide evidence on the reversibility of the poorer ageing trajectories experienced by individuals exposed to the strongest adversities, by using an experimental approach (a "conditional cash transfer" experiment for poverty reduction in New York City), and to analyse the health consequences of the current economic recession in Europe (i.e. changes in social and economic circumstances).

d) To provide updated, relevant and innovative evidence for underpinning future policies.
The collaborative arrangements under this MOU are described below and will be
carried out in accordance with the terms and conditions described therein.
Neither party will deviate from the description of the project without an
exchange of documents explaining, acknowledging and approving the deviation.
### 3\. Contact information
The RECIPIENT receiving DATA shall advise the PROVIDER in writing of any change
in contact information. Upon receipt of DATA and MODIFICATIONS, the RECIPIENT
will retain responsibility for the security of the data and the scientific
rigour of any remaining statistical analyses to be performed.
### 4\. Data
The DATA needed for the project consist of SES and health data relevant to the
Lifepath consortium.
The DATA will be labelled with a unique subject identification number that
must be retained. The DATA will include documentation of the DATA including
names of the columns and values of each of the levels within a column.
### 5\. Data transfer
The PROVIDER will send the DATA in electronic format, via encrypted email or
CD-ROM, to… .
### 6\. Statistical analysis
Research will be conducted in accordance with the requirements of the
RECIPIENT's Institutional Review Board. Additionally, the approval of the
RECIPIENT's Institutional Review Board will be obtained prior to the receipt
of any data.
The analyses that will be performed will be based on de-identified datasets
and will include all the statistical analyses foreseen in the Lifepath DoA.
Data will be used to test the study hypotheses and estimate associations using
a variety of statistical techniques.
Any additional analyses must be proposed and agreed to in writing by all
parties.
### 7\. Publications
The Lifepath publication policy will be followed with respect to authorship on
any manuscript resulting from this project.
The collaborators will ensure the timely dissemination of research findings.
## 7\. DATA TRANSFER AGREEMENT (FACSIMILE) BETWEEN EACH PARTNER (COHORTS) AND IMPERIAL COLLEGE (BIOMARKER DATA)
**DATA TRANSFER AGREEMENT**
This Data Transfer Agreement ("AGREEMENT") is by and between
1. [name of providing institution] whose address is [address of supplying institution] (the “PROVIDER”); and
2. [name of receiving institution] whose address is [address of receiving institution] (the “RECIPIENT”).
1. **Definitions:**
1. PROJECT: The Horizon 2020 multi-party project entitled “LIFEPATH: Lifecourse biological pathways underlying social differences in healthy ageing”.
2. GRANT AGREEMENT: Grant Agreement No. 633666 for the Project which was signed by Provider and Recipient.
3. CONSORTIUM AGREEMENT: The Consortium Agreement for the Project which was signed by Provider and Recipient.
4. PROVIDER’s SCIENTIST: [Name and institutional address of this individual] who is supplying the DATA.
5. RECIPIENT’s SCIENTIST: [Name and institutional address of this individual] who is receiving the DATA.
6. DATA: Data collected by PROVIDER in electronic format which includes specified non-identifiable information on individuals. The PROVIDER’s SCIENTIST will send the DATA to the RECIPIENT’s SCIENTIST in electronic format via encrypted email or CD-ROM.
7. MODIFICATIONS: New data generated as a result of the analyses of the DATA, including data resulting from the harmonization of DATA collected from PROVIDER.
2. **Terms and Conditions:**
1. The PROVIDER retains ownership of the DATA including any DATA contained or incorporated in MODIFICATIONS.
2. The PROVIDER and RECIPIENT will have joint ownership of MODIFICATIONS except, as noted above, the PROVIDER retains ownership rights to the DATA contained or incorporated in any MODIFICATIONS.
3. The PROVIDER, the RECIPIENT, and the RECIPIENT’s SCIENTIST agree that the DATA and MODIFICATIONS:
a. are to be used solely for the PROJECT as specified in the GRANT AGREEMENT’s Annex 1;
b. will not be used for any other purpose without the prior written consent of the PROVIDER;
c. are to be used only at the RECIPIENT organization, and in the RECIPIENT SCIENTIST's department under the direction of the RECIPIENT’s SCIENTIST or others working under his/her direct supervision; and
d. will not be transferred to anyone else within the RECIPIENT organization or external to the RECIPIENT organization without the prior written consent of the PROVIDER.
4. The RECIPIENT and the RECIPIENT’s SCIENTIST shall acknowledge PROVIDER as the source of the DATA in any publication which mentions the DATA, unless requested otherwise by the PROVIDER.
5. This AGREEMENT will terminate on the earliest of the following dates:
a. on completion of the proposed research with the DATA as described in the GRANT AGREEMENT’s Annex 1, or
b. on one (1) month's written notice by either party to the other prior to completion of the PROJECT, provided that
i. if termination should occur under 5(a) above, the RECIPIENT will discontinue its use of the DATA and will, upon direction of the PROVIDER, either retain the DATA for a period of 5 years or destroy it. The RECIPIENT, at their discretion, will retain the MODIFICATIONS for a period of 5 years.
ii. in the event the PROVIDER terminates this Agreement under 5(b), the RECIPIENT will discontinue its use of the DATA upon the effective date of termination and will, upon direction of the PROVIDER, return or destroy all DATA and modify the MODIFICATIONS by removal of the PROVIDER data only.
6. The DATA is provided at no cost.
7. The DATA will be labelled with a unique subject identification number that must be retained. The DATA will include documentation of the DATA, including names of the columns and values of each of the levels within a column.
8. The parties agree to abide by the terms of this AGREEMENT, the GRANT AGREEMENT and the CONSORTIUM AGREEMENT.
9. This AGREEMENT along with the GRANT AGREEMENT and CONSORTIUM AGREEMENT constitutes the entire agreement between the parties. This agreement may be varied by exchange of letters between the parties. No variation or extension to this AGREEMENT shall be binding upon either party unless in writing and acknowledged and approved by authorised signatories of both parties.
10. This AGREEMENT may be executed in two or more counterparts, each of which will be deemed an original, but all of which together shall constitute one and the same AGREEMENT. The PROVIDER and RECIPIENT acknowledge that an original signature or a copy thereof transmitted by PDF shall constitute an original signature for the purposes of this AGREEMENT.
### (Signatures begin on the following page)
**AGREED** by the PROVIDER and RECIPIENT through their authorised
signatories:-
_For and on behalf of the **PROVIDER**_
Signed:
Name:
Title:
Date:
_For and on behalf of the **RECIPIENT**_
Signed:
Name:
Title:
Date:
_Acknowledged and understood by the_
### _PROVIDER’s SCIENTIST_
Signed:
Date:
_Acknowledged and understood by the_
### _RECIPIENT’s SCIENTIST_
Signed:
Date:
## 8\. DATA SHARING, CONFIDENTIALITY AND INFORMATION GOVERNANCE: IMPERIAL COLLEGE AND UNITO
Data sharing will be governed by multilateral Data Transfer Agreements
(template attached). The MRC-PHE Centre for Environment and Health at Imperial
College, where Lifepath is coordinated, has a strict policy on ethics, data
management and confidentiality (attached). Any studies initiated from within
the Centre are subject to national/international ethical review procedures. As
part of the Centre's research, considerable quantities of data on individuals
are held and analysed. In doing so the Centre complies with the **Data
Protection Act 1998 (UK)** and processes that information in accordance with
the eight Data Protection Principles set out in the Act. The Centre’s staff
includes the Data Protection Coordinator for the School of Public Health who
is responsible for maintaining a register of datasets and advising on
compliance. All PIs in the Centre have to undergo "information governance
training" and obtain a certificate. All data, whether held electronically or
manually, are securely stored. These rules apply to all partners in Lifepath.
In addition, all Lifepath data will be stored at the **Unito Center**
(University of Torino) after anonymization.
**_IT Policies – UNITO_**
The following IT policies apply to data generated within the Lifepath action
and stored on the UNITO-Epi computer infrastructure. Giuseppe Costa, Angelo
d’Errico, and a person still to be designated have user accounts with extended
rights on the UNITO-Epi server and will need to obtain user accounts with
extended rights on the FTPS server at Imperial College for standard use and
data management purposes.
### Logical User Access Rights and Identity Management
Each person who has access to the UNITO-Epi server has a unique username and
login credentials to access the server. This information is managed by
Microsoft Active Directory. Non-IT personnel are limited to their own login
and do not have administrative access to the server. Password requirements are
implemented and each user must change his/her password regularly. Failure to
do so results in lockout from the network. All administrative tasks (access
rights, account revocation, etc.) are performed by UNITO-Epi’s IT department.
Periodic review of logical access rights is done to ensure that the rights are
enforced over time.
### Network Security (WAN/LAN)
The UNITO-Epi network is separated into two distinct segments: internal (non-
public) and external (Regional public administration network: Rupar). The
external network is composed of fiber channel access to the Rupar network. Only
computers of the UNITO-Epi network have the ability to connect to the external
network; no personal device can connect to the external network. Both networks
(internal & external) are protected by redundant firewalls. Internal switches
and routers are inaccessible by regular users, are password-protected and can
only be managed internally by IT personnel. Periodic review of firewall logs
is performed. No remote desktop access is allowed. Administrator/root
passwords are changed on a periodic basis and are randomly generated, with a
minimum length and a mix of special and alphanumeric characters.
### UNITO-Epi internal IT Acceptable Use Policy
Every UNITO-Epi employee has signed the internal IT Policy document ensuring
data security and protection for the company and its business partners. In
this document, the following activities are rated as strictly prohibited, with
no exceptions:
* “Revealing your account password to others or allowing use of your account by others.”
* “Circumventing user authentication or security of any host, network or account.”
* “Distributing information deemed confidential by or under any agreement with UNITO-Epi or any agreement between UNITO-Epi and any other party.”
### Backup and Disaster Recovery
Three areas of concern in a disaster are data integrity, hardware availability
and physical infrastructure status. In the case of data integrity, data on the
server are backed up to tape once a month, with incremental backups nightly.
Moreover, the “shadow copy” service runs daily on the UNITO-Epi server.
Tape backups are off-site in secure, fireproof locations. Server restoration
is possible and periodic testing of system restores including data recovery is
performed to ensure hardware and data integrity. The server is under service
contract with an external company for its lifespan. A comprehensive impact
analysis and risk assessment has been performed.
### Data Exchange
Typically, customers of UNITO-Epi provide their data in one of the following
ways:

* Via secure HTTP (HTTPS) server
* Via secure FTP (FTPS) server
* Hand-delivered in person

In all cases, the data are handled only by IT personnel, in accordance with
the Project IT Policies.
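As an illustration of the FTPS route, a minimal upload sketch using only Python's standard library; the host, credentials and file names are placeholders, not details of the Lifepath or UNITO-Epi infrastructure:

```python
from ftplib import FTP_TLS

def upload_via_ftps(host, user, password, local_path, remote_name):
    """Upload one file over FTPS with an encrypted data channel."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)
    ftps.prot_p()  # switch the data connection to TLS, not just the login
    with open(local_path, "rb") as fh:
        ftps.storbinary(f"STOR {remote_name}", fh)
    ftps.quit()

# Hypothetical usage:
# upload_via_ftps("ftps.example.org", "provider01", "secret",
#                 "cohort_data.csv", "cohort_data.csv")
```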
| https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0955_P4SB_633962.md |
**Project no 633962 Initial Data Management Plan**
# P4SB data management plan
The data management plan will cover the exchange, storage, and use of data
generated in P4SB. The data management plan will be developed throughout the
project to accommodate the expected growing demand of data storage and
sharing. The overall aim is to effectively communicate with partners inside of
P4SB, with the scientific community, and with the general public.
This first version of the data management plan is based on an email survey to
the academic groups.
## General aspects
Effective coordination between the experimental, modelling, and analytics
tasks is pivotal for the success of the project, and we will use existing
data formats and standards wherever possible to benefit from and contribute to
existing resources. Several partners were partners in SYSMO and are hence
familiar with SysMO-DB ( _www.sysmo-db.org_ ). Explicitly, we will actively
use and contribute to any data handling platform either generated during or
recommended by the EU. Our immediate strategy for data handling and
standardization is outlined below.
## Data storage – repositories and standards
All data generated from funded activities in this project will be uploaded
into standard public repositories, where available: genetic information,
including full genomes, will be deposited in GenBank at NCBI. Microarray
experiments will be submitted to ArrayExpress at EBI and/or Gene Expression
Omnibus at NCBI. Both of these are MIAME-compliant (Minimum Information About
a Microarray Experiment) repositories. This concerns both raw data and data
interpretation.
Protein and proteome data will be communicated via scientific publications.
Chemical molecules identified from MS-experiments will be referenced by
PubChem identifier, SMILES string or MOL-file format.
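As an illustration of these identifier formats, a minimal sketch using RDKit (one possible toolkit; the project does not prescribe one):

```python
from rdkit import Chem

# 'CCO' is the SMILES string for ethanol; PubChem CID 702 identifies
# the same compound in that database.
mol = Chem.MolFromSmiles("CCO")
print(Chem.MolToMolBlock(mol))  # the same structure in MOL-file format
```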
Pathway models and metabolic networks can be described in SBML format and
offered to other researchers.
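Since SBML files are plain XML, any SBML-aware library can consume them; a minimal reading sketch with python-libsbml (an assumed choice, with a hypothetical file name):

```python
import libsbml

# Parse a hypothetical pathway model and report its size.
doc = libsbml.readSBML("pathway_model.xml")
if doc.getNumErrors() == 0:
    model = doc.getModel()
    print(model.getNumSpecies(), "species,",
          model.getNumReactions(), "reactions")
else:
    print("SBML parse errors:", doc.getNumErrors())
```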
## Internal communication
The project management tool EMDESK ( _www.emdesk.com_ ) is already
implemented for exchange of data, allowing model verification and result
dissemination between the partners.
The partners will use a common version-controlled file repository and project
management software to monitor progress via a ticket-based system. Project
partner RWTH is responsible for maintaining the repository and setting up user
accounts. The system will be used both for internal discussion and
documentation and outside presentation and publication of the project. The
internal area is restricted and password-protected. In addition, an effective
and simple communication platform will be set up to facilitate the web
services-based exchange of data between partners.
## Public outreach
The P4SB partners quickly established Facebook, LinkedIn, and Twitter accounts
and keep them active by communicating general information of interest,
relevant publications, news, and their own contributions. In addition, the
dissemination of the results of the project to the scientific community is
pursued in the form of publications, press releases, and conference
contributions. The partners have set up a webpage to enhance visibility, initiate
communication, and start interactions and collaborations within the scientific
community and the general public.
| https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0956_U-TURN_635773.md |
**EXECUTIVE SUMMARY**
This deliverable is the first version of U-TURN's Data Management Plan (DMP).
It includes the main elements foreseen in the European Guidelines for H2020
and the data management policy that will be used for all the datasets
generated by the project. U-TURN's DMP is driven by the project's pilots.
Specifically, this document describes the minimum datasets related to the
three U-TURN pilots: 1) Distribution of packaged goods from food manufacturers
to retail outlets located in urban areas (Greece), 2) Distribution of fresh
food from local producers in urban areas (Italy), 3) Food delivery from online
retailers to consumers in urban areas (UK). For each of these datasets, the
document presents a unified approach of the name and the description to be
used. Furthermore, the standards and metadata are presented as well as data
sharing options along with archiving and preservation details.
# 1 Introduction
**1.1 Introduction**
The purpose of the Data Management Plan (DMP) is to provide a single point of
reference on the policy that governs the data received and managed by the
U-TURN project, as well as any data sources to be generated and made available
to the public. This document will evolve during the lifespan of the project.
The first version is delivered in M6 of the project, and it will be revised at
least by the mid-term and final reviews to be fine-tuned to the data generated.
Following the specified template (EU, 2015), the document presents the
identifiers of the data and describes how they are generated, collected and
reused. Also, a reference to relevant standards and the metadata that will be
created is provided. Archiving, preservation and data sharing mechanisms are
identified.
**1.2 Document Composition**
This document is comprised of the following chapters:
**Chapter 2** – Initial naming of the datasets
**Chapter 3** – Description of the minimum datasets to be collected for each
pilot
**Chapter 4** – Standards and metadata
**Chapter 5** – Data sharing mechanisms to be followed by internal and external entities
**Chapter 6** – Archiving and preservation of the data
# 2 Data set reference and name
U-TURN is driven by three different pilots, i.e., 1) Distribution of packaged
goods from food manufacturers to retail outlets located in urban areas
(Greece), 2) Distribution of fresh food from local producers in urban areas
(Italy), 3) Food delivery from online retailers to consumers in urban areas
(UK). The teams working under these pilots have already initiated a series of
interviews with several industry partners to identify the minimum set of data
that is useful for enabling the simulation mechanism, the matching algorithm
and the economic assessment of the project. After this set of data is agreed,
the industry partners and possible end users of the platform will provide
their historical data to the project pilots.
The data sets required by each pilot differ from each other, since the pilots
cover alternative urban freight distribution channels. However, a similar
naming methodology will be followed. The partners will receive one or more
files (Excel or CSV) containing industrial data. The name of each file should
follow a specific structure, such as: TG_DS_PL_CM_FT_ND_D
* TG: Target Group, the target group for which the data are contained in the document (food producers, retailers, etc.)
* DS: DataSet, the set of data that is included for this target group (transport, delivery, vehicle). It can also take the value “ALL” if the file contains all the sets of data
* PL: PiLot, the name of the pilot (GR, IT, UK)
* CM: CoMpany, the name of the company from which the data were taken
* FT: FormaT, the format of the file of the data
* ND: The name of the original document
* D: The date of receiving the document or the date of creating this document
The folders used may follow a similar structure: PL_CM_FT_DT_D

* PL: PiLot, the name of the pilot (GR, IT, UK)
* CM: CoMpany, the name of the company from which the data were taken
* FT: FormaT, the format of the file of the data
* DT: The name of the original document
* D: The date of receiving the document
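A minimal sketch of a helper that composes file names following this convention; the date format and all example values are assumptions, since the plan does not fix them:

```python
from datetime import date

def dataset_filename(tg, ds, pl, cm, ft, nd, d=None):
    """Compose a name following the TG_DS_PL_CM_FT_ND_D convention."""
    stamp = (d or date.today()).strftime("%Y%m%d")  # assumed date format
    return "_".join([tg, ds, pl, cm, ft, nd, stamp])

# Hypothetical example: retailers' transport data from the Greek pilot.
print(dataset_filename("RETAILERS", "TRANSPORT", "GR", "ACME", "CSV",
                       "deliveries_2015", date(2015, 11, 3)))
# -> RETAILERS_TRANSPORT_GR_ACME_CSV_deliveries_2015_20151103
```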
# 3 Dataset Description
All three pilots have already interviewed a number of end users in their
specific areas. The target groups identified include:

Pilot 1:
* super markets
* 3PL companies
* suppliers

Pilot 2:
* food producers/farmers
* local markets, retailers, consumers, consumer aggregations
* transport and logistics operators

Pilot 3:
* retailers offering home deliveries of groceries

The specific target group is reflected also in the name of the file that
contains the data set.
For each of these target groups, several parameters have been identified by
each pilot. The data to be received from external sources will fill in these
parameters. Some of the parameters identified by the pilots are identical,
while others differ. For Pilot 1 and Pilot 2 the three basic parameters are:

* Transport: Data concerning the transportation of the goods
* Delivery: Data concerning the delivery points
* Vehicle: Data concerning the vehicles making the transportation
These parameters of course include several variables. An example is presented
below:
**Table 1 Variables for the three parameters of the datasets**
| Parameter | Type | Mandatory |
| --- | --- | --- |
| **TRANSPORTS** | | |
| Transport ID: The ID of the specific transportation, as it exists in the system | String | YES |
| Date of transport: The date that the transportation was held (DDMMYY) | Date | YES |
| Transport start point: The Postal Code of the transportation start point (Postal Code of the 3PL’s warehouse, etc.) | String | YES |
| Vehicle Code: Vehicle’s ID (e.g. license plate) | String | YES |
| Distance travelled (in km): The distance from the start point to the last point of delivery (no return data computed) | Numeric | YES |
| **DELIVERIES** | | |
| Transport ID: The ID of the specific transportation, as it exists in the system | String | YES |
| Delivery point: The Postal Code of the delivery points (if more than one delivery point exists, they are depicted as distinct records) | Numeric | YES |
| Carried load per delivery point (in kg): carried load from the start point to the specific delivery point | Numeric | NO |
| Carried load per delivery point (as volume): carried load from start point to the specific delivery point in cubic meters (m3) | Numeric | NO |
| Carried load per delivery point (in pallets): carried load from start point to specific delivery point in pallets; Load type: e.g. dry, refrigerated, etc. | Numeric | YES |
| **VEHICLES** | | |
| Vehicle ID: e.g. license plate | String | YES |
| Vehicle type: Owned or Public Deliveries | String | YES |
| Vehicle Engine Technology: e.g. Euro IV, Euro V | String | YES |
| Fuel type: e.g. diesel, bio-fuel, etc. | String | YES |
| Vehicle gross weight: Maximum vehicle weight (loaded) in kg | Numeric | NO |
| Vehicle payload: maximum load a vehicle can transfer (in kg) | Numeric | NO |
| Vehicle capacity in pallets | Numeric | YES |
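A minimal sketch of how a pilot team might check the mandatory variables of Table 1 on receipt of a file; the snake_case column names are assumptions, since the plan does not fix machine-readable names:

```python
import pandas as pd

# Mandatory columns per data set, derived from Table 1 (assumed names).
MANDATORY = {
    "TRANSPORTS": ["transport_id", "date_of_transport", "transport_start_point",
                   "vehicle_code", "distance_km"],
    "DELIVERIES": ["transport_id", "delivery_point", "carried_load_pallets"],
    "VEHICLES":   ["vehicle_id", "vehicle_type", "engine_technology",
                   "fuel_type", "capacity_pallets"],
}

def check_mandatory(path, dataset):
    """Load a received CSV and fail loudly if mandatory columns are absent."""
    df = pd.read_csv(path)
    missing = [c for c in MANDATORY[dataset] if c not in df.columns]
    if missing:
        raise ValueError(f"{dataset}: missing mandatory columns {missing}")
    return df
```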
Pilot 3 has also identified “Deliveries” as the basic parameter and several
variables.
**Table 2 Variables for the parameter "Deliveries" for Pilot 3**
| Parameter | Type | Mandatory |
| --- | --- | --- |
| **DELIVERIES** | | |
| Delivery point: The Postal Code of the delivery points (if more than one delivery point exists, they are depicted as distinct records) | String | YES |
| Spoke (which DC / warehouse is responsible for the delivery) | String | NO |
| Order index, i.e. the number of orders within a postcode relative to the average | Numeric | YES |
| Number of Orders | Numeric | YES |
| Average items | Numeric | NO |
| Carried load per delivery point (in kg): carried load from the start point to the specific delivery point | Numeric | NO |
| Carried load per delivery point (as volume): carried load from start point to the specific delivery point in cubic meters (m3) | Numeric | NO |
| Allocation of time windows of deliveries (day and hour) | String | NO |
| Allocation of order’s time (day and hour) | String | NO |
An example of the Excel files already retrieved from the industry partners in
Greece (Pilot 1) is depicted below for the parameters mentioned above.
**Figure 1 Transport Dataset Example Pilot 1**
| Postcode | Spoke | Order index | Orders | Average items | Average order volume (cubic cm) | Average order weight (kg) | Total items | Total volume (cubic cm) | Total weight |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CK784GY | ?????? | 0.3 | 2.9 | 53 | 80,855 | 35 | 154 | 235,553 | 101 |
# 4 Standards and Metadata
The U-TURN project relates to different pillars, i.e., transportation,
logistics, and environment. Several standards exist addressing
interoperability, adaptability and dynamicity issues of data in each of these
specific fields. This section presents the standards (BSI; GS1) that will be
taken into account in order for the project to produce aligned data structures
and data exchange services.
**4.1 CEN 16258:2012**
This European Standard establishes a common methodology for the calculation
and declaration of energy consumption and greenhouse gas (GHG) emissions
related to any transport service (of freight, passengers or both). It
specifies general principles, definitions, system boundaries, calculation
methods, apportionment rules (allocation) and data recommendations, with the
objective to promote standardised, accurate, credible and verifiable
declarations, regarding energy consumption and GHG emissions related to any
transport service quantified.
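At its core, the CEN 16258 methodology multiplies fuel consumed by fuel-specific energy and GHG factors and apportions the result across shipments; a minimal sketch of that idea follows, with assumed diesel factors rather than the standard's normative values:

```python
# Assumed illustrative factors for diesel (not taken from CEN 16258 itself).
DIESEL_WTW_KG_CO2E_PER_L = 3.24   # well-to-wheel GHG factor
DIESEL_MJ_PER_L = 35.9            # energy content

def trip_footprint(litres_diesel, load_share=1.0):
    """Apportion a trip's energy use and GHG emissions to one shipment
    via its share of the load, following the fuel-based approach."""
    energy_mj = litres_diesel * DIESEL_MJ_PER_L * load_share
    ghg_kg = litres_diesel * DIESEL_WTW_KG_CO2E_PER_L * load_share
    return energy_mj, ghg_kg

# A shipment taking a 25 % share of a 42-litre trip:
print(trip_footprint(42.0, load_share=0.25))
```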
**4.2 ISO 39001:2012**
ISO 39001:2012 specifies requirements for a road traffic safety (RTS)
management system to enable an organization that interacts with the road
traffic system to reduce death and serious injuries related to road traffic
crashes which it can influence. The requirements set by ISO 39001:2012 include
development and implementation of an appropriate RTS policy, development of
RTS objectives and action plans, which take into account legal and other
requirements to which the organization subscribes, and information about
elements and criteria related to RTS that the organization identifies as those
which it can control and those which it can influence.
**4.3 ISO 14001:2015**
ISO 14001:2015 specifies the requirements for an environmental management
system that an organization can use to enhance its environmental performance.
ISO 14001:2015 is intended for use by an organization seeking to manage its
environmental responsibilities in a systematic manner that contributes to the
environmental pillar of sustainability.
**4.4 ISO 9001:2015**
ISO 9001:2015 is the revised edition of ISO 9001:2008 which specifies
requirements for a quality management system when an organization needs to
demonstrate its ability to consistently provide products and services that
meet customer and applicable statutory and regulatory requirements, and aims
to enhance customer satisfaction through the effective application of the
system, including processes for improvement of the system and the assurance of
conformity to customer and applicable statutory and regulatory requirements.
**4.5 ISO 22000:2005**
ISO 22000:2005 specifies requirements for a food safety management system
where an organization in the food chain needs to demonstrate its ability to
control food safety hazards in order to ensure that food is safe at the time
of human consumption. It is applicable to all organizations, regardless of
size, which are involved in any aspect of the food chain and want to implement
systems that consistently provide safe products. The means of meeting any
requirements of ISO 22000:2005 can be accomplished through the use of internal
and/or external resources.
**4.6 BS OHSAS 18001**
BS OHSAS 18001 is an international standard which sets out the requirements
for occupational health and safety management good practice for any size of
organization. It provides guidance to help companies design their own health
and safety framework, allowing them to bring all relevant controls and
processes into one management system. This system is proven to enable the
business to be proactive rather than reactive, therefore more effectively
protecting the health and welfare of the workforce on an ongoing basis.
Each data file will be accompanied by uniquely specified metadata in order to
ease access and re-usability. The form to be followed is presented below.
| **Title** | |
| --- | --- |
| Document version | (The version of this document) |
| Description | (A description of the data included in the document) |
| Date | (The date of the creation of the document) |
| Keywords | (Some keywords describing the content) |
| Subject | (Small description of the data source) |
| **Creator** (Name of the creator of the data source – in case of anonymous data this can be empty) | |
| Sector of the provider | (Information on the sector that this provider belongs to) |
| Permissions | (The permissions of this document are mandatory to be mentioned here) |
| **Name of the Partner** (The name of the partner that collected the data and is responsible for it) | |
| Responsible person | (The name of the person within the partner, who is responsible for the data) |
| Pilot | (For which pilot the data will be used) |
| Scenario of data usage | (How the data are going to be used in this scenario) |
| **Description of the Data Source** | |
| File format | (The format of the data source provided) |
| File name/path | (The name of the file) |
| Storage Location | (In case a URI/URL exists for the data provider) |
| Data type | (Data type and extension of the file; e.g. Excel Sheet, .xlsx; standard if possible) |
| Standard | (Data standard, if existent, e.g. DATEX II, NaPTAN, etc.) |
| Data Size | (Total data size, if possible) |
| Time References of Data | Start Date – End Date |
| Availability | Start Date – End Date |
| Data collection frequency | (The time frequency in which the data is collected; e.g. hourly, every five minutes, on demand, etc.) |
| Data quality | (The quality of the data; is it complete, does it have the right collection frequency, is it available, etc.) |
| **Raw data sample** (textual copy of a data sample) | |
| **Number of Parameters included:** | |
| **Parameter #1:** Variables | (Name) / (Type) / (Mandatory) |
| **Parameter #2:** Variables | (Name) / (Type) / (Mandatory) |
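For illustration, one such metadata record expressed as a Python dict, so that it could be validated or serialised programmatically; every value below is invented:

```python
# Hypothetical filled-in metadata record following the form above.
metadata = {
    "title": "GR pilot transport data",
    "document_version": "0.1",
    "description": "Transport, delivery and vehicle data from a Greek 3PL",
    "date": "2015-11-03",
    "keywords": ["urban freight", "transport", "Greece"],
    "subject": "Historical distribution data for the simulation mechanism",
    "creator": {"sector": "3PL", "permissions": "consortium-only"},
    "partner": {"name": "...", "responsible_person": "...", "pilot": "GR"},
    "data_source": {
        "file_format": "csv",
        "data_type": "Excel Sheet (.xlsx), exported to .csv",
        "collection_frequency": "on demand",
        "time_reference": {"start": "2015-01-01", "end": "2015-06-30"},
    },
    "parameters": {"TRANSPORTS": {"Transport ID": ("String", True)}},
}
```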
# 5 Data Access and Sharing
The Data Access and Sharing Plan includes several aspects that have to be
identified (DCC; University). In line with the Consortium Agreement, access
to the data resulting from the project will be available for educational,
research and non-profit purposes. Also, according to the exploitation and
dissemination plan of the project, the outcomes will be accessible to the
public. These plans may include publication of the results in waves during the
project or at its end. In more detail, the issues regarding the data access
and sharing plan are presented below.
**5.1 Timeliness of Data Sharing**
Data sharing should occur in a timely fashion. This means that the data
resulting from the research conducted in the project should become available
close to the publication of the project results themselves. Furthermore, it is
reasonable to expect that the data will be released in waves, as they become
available or as main findings from waves of the data are published.
**5.2 IPRs and Privacy Issues**
Data access and sharing activities will be rigorously implemented in
compliance with the privacy and data collection rules and regulations, as they
are applied nationally and in the EU, as well as with the H2020 rules. Raw
data collected through the interviews from external to the consortium sources
may be available to the whole consortium or specific partners upon
authorization of the owners. This kind of data will not be available to the
public. Concerning the results of the project, these will become publicly
available based on the IPRs as described in the Consortium Agreement.
**5.3 Methods for Data Sharing**
Raw data or resulted data that are governed by any IPRs or confidentiality
issues will be added to a data enclave. Data enclaves are considered
controlled, secure environments for datasets that cannot be distributed to the
general public either due to participant confidentiality concerns or third-
party licensing or use agreements that prohibit redistribution.
An additional raw-data collection issue is the provision of data required
during the pilots of the project, such as basic data required for a use-case.
This kind of data will be inserted into the U-TURN platform either manually by
the user, or in batches using the defined system interfaces. Either way, the
confidentiality and integrity of these data will be guaranteed by the security
encryption scheme that will be defined in the respective deliverable regarding
the non-functional requirements of the platform.
On the other hand, data that are eligible for public distribution may be
disseminated through:

* Scientific papers
* Lectureships, in the case of universities
* Interest groups created by the partners of the project
* The dissemination and exploitation channels of the project, to attract more interested parties
Appropriate repositories will be used for storing the results of the project
and providing access to the scientific community, such as OpenAIRE.
# 6 Archiving and Preservation
**6.1 Short term**
We recognise two cases where raw data, generated data or metadata should be
preserved and archived. The first case refers to the requirements analysis
phase, where raw data are collected from industrial partners in a predefined
file format (Excel or CSV), with predefined fields. These data will provide
the system designers with a clear view of data availability and requirements,
shedding light on particular details of the industrial domain and user
requirements. After this phase is complete, such raw data will be archived in
their initial format and stored on INTRASOFT's infrastructure, online and
offline. Access to these datasets will be granted by the responsible person,
only on request and during the design phases of the project.
The second case refers to raw data, metadata, or data generated by the system
during the pilots of the project. All these kinds of data will be preserved in
a database (DB), the schema of which will be defined after the requirements
analysis phase and provided in the final version of this document. Back-ups of
the DB will be performed on a monthly basis. Both the DB server and the
back-ups will be stored on INTRASOFT's infrastructure, online and offline.

The entire storage data set will be archived at least until the end of the
project. A full schema of the database will be provided. The files containing
the datasets will be versioned over time. The datasets will also be
automatically backed up on a nightly and monthly basis, and the backups will
be stored on INTRASOFT’s infrastructure, online and offline.
**6.2 Long term**
The consortium partners will further examine platform solutions (e.g.
_https://joinup.ec.europa.eu/_ and _http://ckan.org/_ ) that will allow the
sustainable archiving of all the U-TURN datasets after the life span of the
project.
# 7 Conclusions
This deliverable presents the Data Management Plan of the U-TURN project,
which will be used as guidance for the data that will be collected for the
project’s purposes, for the data that will be generated, and for the metadata
that will accompany them. Specifically, it provides a way of describing the
data to be followed by the whole consortium, as well as guaranteeing
consistency towards any external users. It also presents a preliminary
framework for data sharing, access and preservation.
Of course, this version of the deliverable is only an initial one. It will
evolve during the lifespan of the project and will thus be treated as a living
document. The first version is delivered in M6 of the project, and it will be
revised at least by the mid-term and final reviews to be fine-tuned to the
data generated.
| https://phaidra.univie.ac.at/o:1140797 | Horizon 2020 | 0957_CITYLAB_635898.md |
# Executive summary
The objective of the CITYLAB project is to develop knowledge and solutions
that result in rollout, up-scaling and further implementation of cost
effective strategies, measures and tools for emission free city logistics. In
a set of living laboratories, promising logistics concepts will be tested and
evaluated, and the fundament for further roll-out of the solutions will be
developed. In Horizon 2020, the emphasis on data management and open data has
been increased compared to earlier framework programmes.
In order to properly assess the urban freight transport environment and to
understand the effects of the measures being implemented, CITYLAB deals with
several types of data:
* Living lab data: Data and knowledge concerning the living lab cities will be collected and analysed in WP 2 and WP 3. These include open statistical urban freight data reflecting traffic and freight flows, location of facilities, environmental status, and data stemming from interviews with stakeholders.
* Data in models: Data will be collected to perform a robust evaluation and impact assessment.
* Implementation data: For each implementation, data will be collected in WP 4 to allow for before/after comparisons. These data relate to effects and impacts of implementations, as well as the processes themselves.
* Behavioural data: The behavioural modelling and analysis of willingness to pay requires surveys where the priorities of different actors are mapped. These data are at a more general level and contain neither personal nor commercially sensitive data.
* Transferability data: Data on critical indicators will be collected to check a possible transferability of the concept to another city.
Specific data sets within each of these groups will be further specified
during the course of the project.
In this document CITYLAB establishes a first version of a data management plan
(DMP) to make sure that the project data are managed in an appropriate manner.
The DMP describes the data management life cycle for data sets that are
collected, processed or generated by the project and defines a registration
system for data sets that arise during the project, covering:
* A general description of data sets, including type of data, methods used to obtain them and file formats
* Plans for preserving and sharing data
* Storage and backup responsibilities
The basic principle is that data should be accessible to the public, and a
dedicated area of the CITYLAB web site will be used for sharing publicly
accessible data. Exceptions from access can be made when legitimate academic
or commercial interests exist.
In cases where personal data are collected, plans for anonymisation must be
defined before data collection takes place and informed consent has to be
obtained from respondents of interviews or surveys.
DMPs should not be considered as fixed documents, as they naturally evolve
during the lifespan of a project.
# Introduction
The objective of the CITYLAB project is to develop knowledge and solutions
that result in rollout, up-scaling and further implementation of cost
effective strategies, measures and tools for emission-free city logistics. In
a set of living laboratories, promising logistics concepts will be tested and
evaluated, and the foundation for further roll-out of the solutions will be
developed.
In Horizon 2020, the emphasis on data management and open data has been
increased compared to earlier framework programmes. Some projects participate
in the _Pilot on Open Research Data in Horizon 2020_, and these projects are
obliged to develop a
data management plan (DMP). CITYLAB is not amongst these projects, but
nevertheless develops a DMP to make sure that the project data are managed in
an appropriate manner.
Amongst the reasons for having a data management plan (DMP) are (Jones, 2011):
* It will be easier to find and understand the data we have in our possession, and we avoid reworking and re-collection of data
* Data sharing increases collaboration and advances research
* Increased visibility of the available data may increase the impact of the research project
* Data underlying publications are systematically maintained, allowing results to be validated
The DMP describes the data management life cycle for data sets that are
collected, processed or generated by the project. DMPs should not be
considered as fixed documents, as they naturally evolve during the lifespan of
a project (European Commission, 2013a).
The establishment of a data management plan (DMP) for CITYLAB underlines an
appreciation of the project’s responsibility to manage relevant data in an
appropriate manner. All CITYLAB partners have to collect, store and manage
data in line with local laws and to treat data in line with the guidelines of
this document.
Several principles have to be observed while dealing with research data,
amongst these are:

* Data protection and privacy have to be respected, and appropriate solutions for data storage and handling must be established
* Open access to data should be the main principle for projects funded by public money
* Data should be discoverable, accessible and interoperable to specific quality standards
* Integrity of the research depends on the quality of data and on data not being manipulated, and data should be assessable and intelligible.
In this document we set out a few principles for data management in CITYLAB;
the structure is inspired by _DMP online_ of the Digital Curation Centre,
which is also recommended by the Consortium of European Social Science Data
Archives (CESSDA).
The rest of this deliverable is organised as follows. Chapter 2 deals with
data collection, data sets that are dealt with, and metadata. Chapters 3 and 4
deal with ethical issues and procedures for management and storing of data,
respectively. Finally, Chapter 5 defines the additional data management
process and responsibilities.
# Data collection
## Data in CITYLAB
The European Commission (2013b) define research data as _“information, in
particular facts or numbers, collected to be examined and considered as a
basis for reasoning, discussion or calculation. In a research context,
examples of data include statistics, results of experiments, measurements,
observations resulting from fieldwork, survey results, interview recordings
and images. The focus is on research data that is available in digital form.”_
In order to properly assess the urban freight transport environment and to
understand the effects of the measures being implemented, CITYLAB deals with
several types of data:
* Living lab data: Data and knowledge concerning the living lab cities will be collected and analysed in WP 2 and WP 3. These include open statistical urban freight data reflecting traffic and freight flows, location of facilities, environmental status, and data stemming from interviews with stakeholders.
* Data in models: Data will be collected to perform a robust evaluation and impact assessment.
* Implementation data: For each implementation, data will be collected in WP 4 to allow for before/after comparisons. These data relate to effects and impacts of implementations, as well as the processes themselves.
* Behavioural data: The behavioural modelling and analysis of willingness to pay requires surveys where the priorities of different actors are mapped. These data are at a more general level and contain neither personal nor commercially sensitive data.
* Transferability data: Data on critical indicators will be collected to check a possible transferability of the concept to another city.
Specific data sets within each of these groups will be further specified
during the course of the project. A registration procedure is defined for data
sets in CITYLAB, see Section 5.2. To ensure that data sets are registered, the
regular reporting from each living lab will contain information on data sets
that are captured.
CITYLAB uses a harmonised approach for all living labs, which ensures
standardisation of data collected from the different locations and
implementations. This ensures interoperability of data and facilitates cross-
simulation of data for improved understanding. Next, CITYLAB builds on
previous projects and adapts parts of the evaluation frameworks of the FP7
projects STRAIGHTSOL and SMARTFUSION. By using similar indicator formats as
previous projects, we allow for cross-comparison also with other initiatives.
CITYLAB will follow established practice and international standards for data
collection and preservation.
## Metadata
Metadata can be defined as “structured or semi-structured information which
enables the creation, management and use of records [i.e. data] through time
and within and across domains” (Day, 2005). Metadata facilitates exchange of
data by making them more detectable, and makes it easier to organise,
reproduce and reuse data.
Metadata will be defined for data sets that are collected as part of the
project.
## Important data management issues
The European Commission (2013a) defines a set of data issues that should be
addressed for the data sets that are dealt with; these are summarised in Table 1.
**Table 1. Key data requirements and DMP questions.** _Source: European
Commission (2013a)._
<table>
<tr>
<th>
**Data requirements**
</th>
<th>
**DMP question**
</th> </tr>
<tr>
<td>
Discoverable
</td>
<td>
Are the data and associated software produced and/or used in the project
discoverable (and readily located), identifiable by means of a standard
identification mechanism (e.g. Digital Object Identifier)?
</td> </tr>
<tr>
<td>
Accessible
</td>
<td>
Are the data and associated software produced and/or used in the project
accessible and in what modalities, scope, licenses (e.g. licencing framework
for research and education, embargo periods, commercial exploitation, etc.)?
</td> </tr>
<tr>
<td>
Assessable
and intelligible
</td>
<td>
Are the data and associated software produced and/or used in the project
assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. are the minimal datasets handled
together with scientific papers for the purpose of peer review, are data
provided in a way that judgments can be made about their reliability and the
competence of those who created them)?
</td> </tr>
<tr>
<td>
Usable beyond the original purpose for which it was
collected
</td>
<td>
Are the data and associated software produced and/or used in the project
usable by third parties even a long time after the collection of the data
(e.g. is the data safely stored in certified repositories for long-term
preservation and curation; is it stored together with the minimum software,
metadata and documentation to make it useful; is the data useful for the wider
public needs and usable for the likely purposes of non-specialists)?
</td> </tr>
<tr>
<td>
Interoperable
to specific quality standards
</td>
<td>
Are the data and associated software produced and/or used in the project
interoperable allowing data exchange between researchers, institutions,
organisations, countries, etc. (e.g. adhering to standards for data
annotation, data exchange, compliant with available software applications, and
allowing combination with different datasets from different origins)?
</td> </tr> </table>
# Ethics and legal compliance
In cases where personal data are collected, plans for _anonymisation_ must be
defined before data collection takes place.
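As a minimal sketch of such anonymisation (assuming survey responses in a CSV file; the file and column names are hypothetical), direct identifiers can be dropped or replaced by salted pseudonyms before any analysis or sharing:

```python
import csv
import hashlib
import secrets

SALT = secrets.token_hex(16)  # keep secret; discard to make pseudonyms irreversible

def pseudonym(value: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

# Hypothetical input: survey.csv with columns name,email,answer
with open("survey.csv", newline="") as src, \
     open("survey_anonymised.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["respondent", "answer"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "respondent": pseudonym(row["email"]),  # stable pseudonym, no identifier
            "answer": row["answer"],                # name and email are dropped
        })
```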
For data that are collected or considered reused from existing sources, the
necessary rights to use the data have to be obtained. If data are planned to
be shared publicly, we have to make sure that we have the right to do so.
_Informed consent_ is crucial, where respondents of interviews or surveys are
made aware of the plans for use of the data and the rights they have to
withdraw, etc. Before data are collected, plans for future use have to be
discussed, so that participants in surveys and interviews may be informed on
these plans and agree to it. Appendix B contains a simple example template for
obtaining the right to use data.
Data are owned by the party that generates them; principles for intellectual
property rights are defined in the CITYLAB Consortium Agreement. Proprietary
data gathered by a consortium member remain in the care of that consortium
member, and will not be distributed to any other consortium member or any
party outside of the consortium. Processing and use of data will follow
Directive 95/46/EC (the Data Protection Directive) and the General Data
Protection Regulation (GDPR). In addition, each CITYLAB partner is obliged to
collect and manage data in line with national legislation.
The integrity of the research depends on the quality of the data and on the data
not being manipulated; all CITYLAB partners are therefore required to refrain
from such manipulation.
# Storage, preservation and data sharing
All non-public data will be stored in secure environments at the locations of
consortium partners with access privileges restricted to the relevant project
partners. Non-public data will not be stored through Dropbox, Google Docs or
other third party cloud-based services.
CITYLAB is committed to distribute results and publications via Open Access
publishing and has allocated dedicated resources for this. Consortium partners
will seek to publish results in open access journals to widen the target
audience of the project’s results. Consortium partners will publish results in
scientific journals that can assure such open access without restriction.
The basic principle is that data should be accessible to the public, and a
dedicated area of the CITYLAB web site will be used for sharing publicly
accessible data. Exceptions from access can be made when legitimate academic
or commercial interests exist, and such issues will be handled by the
Management Committee. One such example is financial implementation data where
protection of information revealing, for instance, industry partners’ general
cost structure or competitive conditions may be needed. Possible methods by
which proprietary data could be made publicly available include referring to
relative changes rather than absolute values, aggregation and anonymisation.
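The sketch below illustrates these two methods on hypothetical cost figures (the column names and values are invented for illustration):

```python
import pandas as pd

# Hypothetical before/after implementation costs per partner; absolute
# values are commercially sensitive and must not be published.
raw = pd.DataFrame({
    "partner": ["A", "A", "B", "B"],
    "phase":   ["before", "after", "before", "after"],
    "cost":    [1000.0, 850.0, 2000.0, 1800.0],
})

# 1) Relative change per partner instead of absolute values.
wide = raw.pivot(index="partner", columns="phase", values="cost")
relative = ((wide["after"] - wide["before"]) / wide["before"]).rename("relative_change")

# 2) Aggregation across partners, so no single cost structure is revealed.
aggregate = raw.groupby("phase")["cost"].mean()

print(relative)    # partner A: -0.15, partner B: -0.10
print(aggregate)
```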
In CITYLAB’s WP 2 it is planned to develop an observatory for urban logistics,
and this will be one mechanism for sharing data. The observatory will be
connected to the web site hosted by University of Southampton.
For many previous European projects, it has been difficult to reuse the
findings because the web sites have closed down after the projects’ end dates.
The CITYLAB web site will be planned in such a way that before the project
ends, a post-project phase version will be established to facilitate access to
project data.
# Process and responsibilities
This chapter describes the process for ongoing management of data in CITYLAB.
## Process overview and responsibilities
Each CITYLAB partner has to respect the policies set out in this data
management plan. Data sets have to be created, managed and stored
appropriately and in line with national legislation.
University of Southampton has a particular responsibility to ensure that data
shared through the CITYLAB web site are easily available, but also that
backups are performed and that proprietary data are secured.
Monitoring and registration of data sets is the responsibility of the partner
that generates the data. In Section 5.2 the template for registration of data
sets is described; the full template is available in Appendix A. When a
partner is ready to register a new data set, they should send the requested
information to the Project Coordinator who will update the template in
CITYLAB’s Sharepoint site. This can be done at any time, but it will also be
possible to inform about new data sets as part of the regular living lab
reporting.
The partner that generates the data is also responsible for obtaining the
necessary rights to use and share the data. Appendix B contains a simple
example template for obtaining the right to use data.
Quality control of the data is the responsibility of the relevant WP leader,
supported by the Project Coordinator.
If data sets are updated, the party that possesses the data has the
responsibility to manage the different versions and to make sure that the
latest version is available in the case of publicly available data. When
data sets are registered, a person with responsibility for the data set has to
be named. This can be changed later, for instance if the physical location of
the data is changed.
Table 2 summarises the main data management responsibilities.
**Table 2. Overview of data management responsibilities.**
<table>
<tr>
<th>
Activity
</th>
<th>
Responsible
</th> </tr>
<tr>
<td>
Registration of data sets
</td>
<td>
Partner generating the data set
</td> </tr>
<tr>
<td>
Ensure that rights to use (and if applicable share) the data are obtained
</td>
<td>
Partner generating/introducing the data set
</td> </tr>
<tr>
<td>
Keep overview of data sets at Sharepoint
</td>
<td>
Project Coordinator
</td> </tr>
<tr>
<td>
Quality control
</td>
<td>
Relevant WP leader
</td> </tr>
<tr>
<td>
Version control for files
</td>
<td>
Person defined to have data set responsibility (see Section 5.2.6).
</td> </tr>
<tr>
<td>
Backing up data
</td>
<td>
Organisation possessing the data. For data shared through the web site,
University of Southampton is responsible
</td> </tr>
<tr>
<td>
Security and protection of data
</td>
<td>
Organisation possessing the data. For data shared through the web site,
University of Southampton is responsible
</td> </tr> </table>
In the case of conflicts or issues that need discussion or voting, the
Management Committee will be consulted.
## Registration of data sets
Following the information in Chapter 2 and the specific advice on data management
from the European Commission (2013a), a template for registration of data sets
has been established. The template can be found in Appendix A. Below we
explain each of the elements that have to be described in the template.
The registration of data should not be a complicated or complex task, and we
have therefore made a short version of the template emphasising what we
believe is most crucial.
Completed templates should be sent to the Project Coordinator who will keep
the information on the CITYLAB data sets up to date.
### Data set reference and name
An identifier has to be included (data sets are numbered consecutively) as
well as an appropriate name for the data set. A data set can be defined as
_“a single database table, or a single statistical data matrix, where every
column of the table represents a particular variable, and each row corresponds
to a given member of the data set in question”_, or alternatively as _“data in
a collection of closely related tables, corresponding to a particular
experiment or event”_ (both definitions from Wikipedia). Depending on the
nature of the data or information covered, both alternatives can be applicable
in CITYLAB.
### Data set description
A proper description of the data should be included. The description should
cover what the data represent, its source (in case it is collected), nature
and scale and to whom it could be useful, and whether it underpins a
scientific publication. Information on the existence (or not) of similar data
and the possibilities for integration and reuse should be described if
applicable. If data from other sources are reused, this should be clearly
specified.
The data have to be properly described in terms of the following (a sketch of a completed registration entry is given after the list):
* Type of data (for example experimental, observational, raw or derived)
* Methods used to obtain them (for example manual collections, models, simulations)
* File format (for example text files, images, audio, etc.) and whether non-standard software is needed for further processing of data
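By way of illustration, a completed entry could look as follows (a minimal sketch; the identifier, name and values are hypothetical):

```python
# Hypothetical registration entry following the Appendix A template;
# all names and values below are illustrative, not actual CITYLAB data sets.
registration_entry = {
    "reference": "CITYLAB-DS-03",   # consecutive data set number
    "name": "Living lab stakeholder interviews",
    "description": "Transcribed interviews with urban freight stakeholders",
    "type": "observational, derived (transcriptions)",
    "method": "semi-structured interviews, manually transcribed",
    "format": "plain text (UTF-8); no non-standard software required",
}
```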
### Standards and metadata
Metadata are _“data that provide information about other data”_; they
describe the contents of data files and the context in which they have been
established. Several metadata standards exist (see
https://en.wikipedia.org/wiki/Metadata_standards). Proper metadata
facilitates use of the data by others, makes it easier to combine information
from different sources, and ensures transparency.
### Data sharing
Describe plans for sharing data. Describe how data will be shared (including
access procedures and embargo periods), outlines of technical mechanisms for
dissemination and necessary software and other tools for enabling re-use.
Please also define whether access will be widely open or restricted to
specific groups. Identify the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.).
If the dataset cannot be shared, the reasons for this should be elaborated
(e.g. ethical, rules of personal data, intellectual property, commercial,
privacy-related or security-related).
### Archiving and preservation (include storage and backup)
Describe procedures that will be put in place for long-term preservation of
the data. Indicate for how long the data should be preserved, and where the
data will be stored. If applicable, plans for destruction of data should be
described. This information should be available for each data set, but
procedures for backup will most likely be similar for multiple data sets
stored in the same location.
### Name of person responsible for data set
For each data set, a specific responsible person (and their institution)
has to be defined. This person will be responsible for version control, for
answering questions related to the data set, and for ensuring data security
and backup of the data. Responsibility for security and backup can be
transferred to other persons/organisations if appropriate, for instance
if a data set is shared through the web site.
# EXECUTIVE SUMMARY
The present document is a deliverable of the SatisFactory project, funded by
the European Union’s Horizon 2020 Research and Innovation programme (H2020).
It constitutes the fourth and final version of the project’s Data Management
Plan (DMP).
This version presents in detail the various datasets produced within the
project, as well as the strategy put in place around their storage, protection
and sharing among the project’s partners and beyond.
Throughout the project, the team needed to manage a large number of datasets,
generated and collected by various means, i.e. sensors, cameras, manual inputs
in IT systems and direct interactions with employees (e.g. interviews). By the
end of the project, 31 different datasets have been produced through the
SatisFactory’s technical activities, with almost all the partners being data
owners and/or producers.
All SatisFactory datasets have been handled considering the main data security
and privacy principles, respecting also the partners IPR policies. A dedicated
Data Management Portal, developed by the project, further supported the
efficient management, storage and sharing of the project’s datasets.
Finally, SatisFactory supports the Open Research Data Pilot (ORD) and believes
firmly in the concepts of open science. In this context, the team has taken
measures to ensure that the project’s results are used by other stakeholders,
such as researchers or industry actors, stimulating in this way the continuity
and transfer of the SatisFactory outputs to further research and other
initiatives, allowing others to build upon, benefit from and be influenced by
them - though this objective obviously needs to be balanced with IPR and data
privacy principles. Interested stakeholders will be able to access open
resources generated by the project, such as reports, publications and
datasets, through various platforms, even beyond the project’s duration. This
way, sustainability of the SatisFactory outcomes will be fostered.
# INTRODUCTION
The SatisFactory project aims to enhance and enrich the manufacturing working
environment towards attractive factories of the future that encompass key
enabling technologies, such as augmented reality, wearable and ubiquitous
computing, as well as customised social communication platforms, coupled with
gamification techniques for the efficient transfer of knowledge and experience
among employees.
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy used by the consortium regarding
all the datasets generated by the project.
The DMP has not been a fixed plan since the beginning; it has evolved during
the lifespan of the project. Its fourth and final version, presented in the
current document, includes:
* an updated overview of the datasets produced by the project, their characteristics and their management processes;
* additional information regarding the dissemination of the project’s open access knowledge and datasets, aiming to foster further exploitation of the SatisFactory’s results by the scientific community and industrial stakeholders.
# ACTIVITIES TIMELINE
The main activities planned and carried out during the project concerning the
SatisFactory data management are presented below, along with a high level time
plan (Figure 1):
* M6: Preliminary analysis and production of the first version of the Data Management Plan (submitted);
* M12: Refined analysis based on the progress in the development of the tools and the definition of the case studies, described in the second version of the Data Management Plan (submitted);
* M16: Drafting of the specifications for the project Data Management Portal (first included in D6.2 v3.0);
* M17-M19: Development of the Data Management Portal (completed by CERTH);
* M20: The Data Management Portal is operational;
* M24: Third version of the Data Management Plan, updated with procedures implemented by the project towards the pilot demonstrators, and preparing the sustainability of the data storage after the end of the project (submitted);
* M36: Final Data Management Plan, describing the plans implemented by
SatisFactory for sustainable storage and accessibility of the data.
**Figure 1 - Data management timeline**
# GENERAL PRINCIPLES
## IPR MANAGEMENT AND SECURITY
As an innovation action which is close to the market, SatisFactory covers
high-TRL technologies and aims at developing marketable solutions. The project
consortium includes many partners from the private sector, namely technology
developers (ABE, GLASSUP and REGOLA) and end-users (COMAU and SUNLIGHT).
These partners naturally hold Intellectual Property Rights on their
technologies and data, on which their economic sustainability depends.
Consequently, the SatisFactory consortium protects these data and cross-checks
with the partners concerned before every data publication.
Considering the above, as well as the fact that the data collected through
SatisFactory are of high value, every measure should be taken to prevent them
from being leaked or hacked. This is another key aspect of SatisFactory data
management, and therefore, every data repository used by the project is
effectively protected. A holistic security approach has been followed, in
order to protect the pillars of information security, i.e. confidentiality,
integrity and availability.
Security measures include the implementation of password-authenticated key
exchange (PAKE) protocols, such as the Secure Remote Password (SRP) protocol,
and protection from bots, such as CAPTCHA technologies.
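For illustration only, the following minimal sketch runs an SRP enrolment and authentication round using the third-party pysrp Python package (API names as documented for that package; this is not the project's actual implementation):

```python
import srp  # third-party "pysrp" package (pip install srp)

# Enrolment: the server stores only a salt and a verifier, never the password.
salt, vkey = srp.create_salted_verification_key("worker01", "example-password")

# Authentication: client and server prove knowledge of the password
# without ever sending it over the network.
usr = srp.User("worker01", "example-password")
uname, A = usr.start_authentication()        # client -> server: uname, A

svr = srp.Verifier(uname, salt, vkey, A)
s, B = svr.get_challenge()                   # server -> client: s, B

M = usr.process_challenge(s, B)              # client -> server: M
HAMK = svr.verify_session(M)                 # server -> client: HAMK
usr.verify_session(HAMK)

assert usr.authenticated() and svr.authenticated()
```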
Moreover, the industrial demo sites apply monitored and controlled procedures
related to the data collection, their integrity and protection. The data
protection and assurance of privacy of personal information include protective
measures against infiltration, as well as physical protection of core parts of
the systems and access control measures.
## PERSONAL DATA PROTECTION
SatisFactory’s activities involve the human factor, as the pilots are
conducted on real shop floors with actual workers. However, no personal data
of these workers were required or collected during the project. The team
generally avoided collecting even basic personal data (e.g. name, background,
contact details), unless it was really necessary (e.g. for managing external
participants of workshops). These data are protected in accordance with the
EU's _Data Protection Directive 95/46/EC_ “on the protection of individuals
with regard to the processing of personal data and on the free movement of
such data”. National legislations are also applicable, such as the _Italian
Personal Data Protection Code_.
The industrial pilot sites also implement health and safety management
standards (BS OHSAS 18001:2007) and are compliant with the regulations on
managing personal information of their employees.
# DATA MANAGEMENT PLAN
## DATASET LIST
SatisFactory partners identified and later updated the data produced in the
different project activities. The datasets list is provided in the table
below, while the nature and details for each dataset are presented in the next
sub-section. The datasets added in this version of the report are marked with
“new M36”, while the updated ones with “updated M36”.
**Table 1 - Datasets tracking**
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset Name**
</th>
<th>
**Status**
</th> </tr>
<tr>
<td>
1
</td>
<td>
DS.CERTH.01.IncidentDetection
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
2
</td>
<td>
DS.CERTH.02.ProcessField
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
3
</td>
<td>
DS.CERTH.03.SocialCollaborationPlatform
</td>
<td>
“new M36”
</td> </tr>
<tr>
<td>
4
</td>
<td>
DS.COMAU.01.Accelerometer_jacket
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
5
</td>
<td>
DS.COMAU.01.Gyroscope_jacket
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
6
</td>
<td>
DS.COMAU.01.Cardio_jacket
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
7
</td>
<td>
DS.COMAU.02.RFID_torque_wrench
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
8
</td>
<td>
DS.COMAU.03.Work_bench_camera
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
9
</td>
<td>
DS.COMAU.04.Glasses
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
10
</td>
<td>
DS.COMAU.05.Digital_caliper_USB
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
11
</td>
<td>
DS.COMAU.06.Torque_wrench_USB
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
12
</td>
<td>
DS.COMAU.07.Dinamometer_USB
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
13
</td>
<td>
DS.COMAU.08.Micrometer_USB
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
14
</td>
<td>
DS.COMAU.09.Digital_dial_USB
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
15
</td>
<td>
DS.ISMB.01.FallDetection (previously named DS.ISMB.01.incidentDetection)
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
16
</td>
<td>
DS.ISMB.02.GestureDetection
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
17
</td>
<td>
DS.ISMB.03.PresenceDetection
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
18
</td>
<td>
DS.ISMB.04.VideoRecordingEvent
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
19
</td>
<td>
DS.ISMB.05.VideoRecording
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
20
</td>
<td>
DS.ISMB.06.LocalizationManager_VirtualFencing (previously named DS.ISMB.06.UWB_VirtualFencing)
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
21
</td>
<td>
DS.ISMB.07.UWB_Localization
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
22
</td>
<td>
DS.ISMB.08.Ergonomics_Data
</td>
<td>
“new M36”
</td> </tr>
<tr>
<td>
23
</td>
<td>
DS.ABE.01.IntegratedDSS
</td>
<td>
“updated M36”
</td> </tr>
<tr>
<td>
24
</td>
<td>
DS.FIT.01.UserRequirements
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
25
</td>
<td>
DS.Regola.01.ARModels
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
26
</td>
<td>
DS.Regola.02.TrainingData
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
27
</td>
<td>
DS.Sunlight.01.MotiveBatteriesAssembly
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
28
</td>
<td>
DS.Sunlight.02.Training&SuggestionsPlatform
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
29
</td>
<td>
DS.Sunlight.03.TempMonitoringInJarFormation
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
30
</td>
<td>
DS.Sunlight.04.MalfunctionIncidentManagement
</td>
<td>
“no change”
</td> </tr>
<tr>
<td>
31
</td>
<td>
DS.Sunlight.05.Handwashing
</td>
<td>
“new M36”
</td> </tr> </table>
## PLANS PER DATASET
<table>
<tr>
<th>
**DS.CERTH.01.IncidentDetection**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Dataset for incident detection, along with high-level activities and business
processes monitoring (e.g. activities occurring at the shop-floor, etc.),
obtained with thermal and depth cameras mounted at specific locations in the
shop-floor.
In particular, depth cameras will detect the following incidents: 1) human
falls, 2) falling items, 3) collisions between moving objects, 4) intrusions
to forbidden areas, while thermal cameras will detect overheated areas within
the shop-floor.
The depth and thermal images will be processed and not saved anywhere, while
only the metadata of the incident detection process will be stored. These data
will comprise anonymized alarms that include the type of the occurring incident
(e.g. human fall, collision, sudden heating of an electrical component, etc.),
accompanied by the corresponding timestamp and exact location of the event
(specific room and exact coordinates on the architectural map of the building).
Similar information (metadata of processed depth and thermal images) neither
exists nor is provided as freeware for shop-floor environments.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The dataset will be collected using thermal and depth cameras located at the
areas under interest.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
The device will be owned by the industrial plant (CERTH/CPERI, COMAU,
SUNLIGHT), where the data collection is going to be performed.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3 and more
specifically within activities of T3.3 and T4.3.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The dataset will be accompanied by a detailed documentation of its contents.
Indicative metadata include:
(a) description of the experimental setup (e.g. location, date, etc.) and
procedure that led to the generation of the dataset; (b) annotated incident,
activity, business process, state of the monitored activity and the involved
humans per time interval, etc.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data will be stored in XML format (CIDEM compatible) and are estimated to
be 35 MB per day.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The collected data will be used for the development of the activities analysis
and incident detection methods of the SatisFactory project and all the tasks,
activities and methods that are related to it.
Furthermore, the different parts of the dataset could be useful in the
benchmarking of a series of human detection and tracking methods, activity
detection focusing either on pose and gestures analysis and tracking, on high-
level activity recognition, on affect related human activity analysis and on
incident analysis and detection.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it.
Furthermore, if the dataset or specific portions of it (e.g. metadata,
statistics, etc.) are decided to become of widely open access, a data
management portal will be created that should provide a description of the
dataset and link to a download section. Of course, these data will be
anonymized, so as not to have any potential ethical issues with their
publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The full dataset will be shared with the consortium using a data management
portal that created and maintained by CERTH. The public version of the data
will be shared within the portal as well. Of course, the data management
portal will be equipped with authentication mechanisms, so as to handle the
identity of the persons/organizations that download them, as well as the
purpose and the use of the downloaded dataset.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Both full and public versions of the dataset will be accommodated at the data
management portal created and maintained by CERTH, while links to the portal
will exist at the SatisFactory website. Furthermore, in order to avoid data
losses, RAID and other common backup mechanism will be utilized ensuring data
reliability and performance improvement. The dataset will remain at the data
management portal for the whole project duration, as well as for at least 2
years after the end of the project. The volume of data is estimated to be
about 10 GB for all pilots.
Finally, after the end of the project, the portal is going to be accommodated
with other portals at the same server, so as to minimize the needed costs for
its maintenance.
</td> </tr> </table>
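As a purely illustrative sketch (the CIDEM-compatible schema is project-defined, so the element names below are hypothetical), one such anonymized alarm record could be serialised in Python as follows:

```python
import xml.etree.ElementTree as ET

# Hypothetical structure for one anonymized incident alarm; element names
# are illustrative only, not the actual CIDEM-compatible schema.
alarm = ET.Element("incidentAlarm")
ET.SubElement(alarm, "type").text = "human_fall"
ET.SubElement(alarm, "timestamp").text = "2017-03-21T10:42:17Z"
location = ET.SubElement(alarm, "location")
ET.SubElement(location, "room").text = "assembly-hall-2"
ET.SubElement(location, "x").text = "12.4"  # coordinates on the architectural map
ET.SubElement(location, "y").text = "7.9"

# No depth or thermal images are written; only the alarm metadata persist.
ET.ElementTree(alarm).write("alarm.xml", encoding="utf-8", xml_declaration=True)
```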
<table>
<tr>
<th>
**DS.CERTH.02.ProcessField**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Dataset for shop-floor information related to the status and condition of the
involved machinery and production-process system (e.g. field data related to
the status of a device, such as a pump or a motor, that workers interact
with at the shop floor).
The dataset will also include the human actions and the logging of commands
and activity during different conditions and states, to be used for the
decision support system and procedures. The dataset will include real-time
data and archived historical data of the involved process plants.
Similar raw information at the level of detail that the project needs neither
exists nor is provided as freeware in the literature for shop-floor environments.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The dataset will be collected through the automation systems that acquire the
signals from the respective field network of interest. The device managers
will communicate with the automation systems in order to transfer the selected
data to the Satisfactory repository.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
The device will be owned by CERTH/CPERI, where the data collection is going to
be performed.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
Initially, the static and persistent data, along with a set of dynamic data,
will be collected within activities of WP3 and more specifically within
activities of T3.3 and T3.5. The dynamic data will be updated during WP5 and
more specifically in T5.3.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The dataset will be accompanied with the respective documentation of its
contents. Indicative metadata include: (a) description of the experimental
setup (e.g. process system, date, etc.) and procedure which is related to the
dataset (e.g. proactive maintenance action, unplanned event, nominal
operation. etc.),
(b) scenario related procedures, state of the monitored activity and involved
workers, involved system etc.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data will be stored in XML format and are estimated to be 200-1000 MB per
month.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The raw field layer data will be enriched with semantics by the middleware
components and will be used for the sharing of information between the process
system and the involved actors at the shop-floor.
The collected data will be used by the integrated decision support system and
the event manager, in order to represent the input for the operation and the
maintenance procedures, along with the identification of unplanned incidents
at the shopfloor and to analyse the response and behaviour of the workers
through their interaction with the Satisfactory platform.
The initial set of dynamic data will be analysed during the development of the
Satisfactory platform and the nominal dynamic data will be used during the
deployment phase at the shop-floor.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it. Furthermore, if specific portions of it (e.g.
metadata, statistics, etc.) are to become of widely open access, a data
management portal will be created that should provide a description of the
dataset and link to a download section. Of course, these data will be
anonymized, so as not to have any potential correlation and identification of
the ethical issues with their publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The created dataset will be shared with the consortium using a data management
portal created and maintained by CERTH. The public version of the data will be
shared within the portal as well. Of course, the data management portal will
be equipped with authentication mechanisms, so as to handle the identity of
the persons/organizations that download them, as well as the purpose and the
use of the downloaded dataset.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Both full and public versions of the dataset will be accommodated at the data
management portal created and maintained by CERTH, while links to the portal
will exist at the SatisFactory website. Furthermore, in order to avoid data
losses, RAID and other common backup mechanism will be utilized ensuring data
reliability and performance improvement. The archiving system of CERTH/CPERI
will contain the initial data as sent to the Satisfactory repository.
The dataset will remain at the data management portal for the whole project
duration, as well as for at least 2 years after the end of the project.
The volume of data is estimated to be about 50 GB for all pilots. Finally,
after the end of the project, the portal is going to be accommodated with
other portals at the same server, so as to minimize the needed costs for its
maintenance.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.CERTH.03.SocialCollaborationPlatform**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data produced by the use of the platform such as a) posted text, images,
videos, b) posted questions and answers on the forum, c) notifications
generated by shop floor incidents, social activities, gamification events and
d) chat messages exchanged between users.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The data are generated in the device where users access the Social platform
front-end and are stored in the back-end of the platform.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
The device will be owned by the industrial plant (CERTH/CPERI, COMAU,
SUNLIGHT), where the data collection is going to be performed.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP2, WP3, WP4, WP5 and
more specifically within activities of T2.3, T2.4, T3.4, T4.5, T5.1, T5.3 and
T5.4.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The metadata used are the appropriate details for each type of data
(e.g. title, description, date, access qualifier for videos).
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data are stored in a MySQL database with the exception of multimedia
content (images, videos) that are stored in the filesystem of the server.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The collected data will be used for performing analytics through the Social
Platform’s dashboard view.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it.
Furthermore, if the dataset or specific portions of it (e.g. metadata,
statistics, etc.) are decided to become of widely open access, a data
management portal will be created that should provide a description of the
dataset and link to a download section. Of course, these data will be
anonymized, so as not to have any potential ethical issues with their
publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
Data is not currently shared
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data from the Social Collaboration Platform are stored in a database; the
administrator of the platform can decide if and how often a back-up is needed,
as well as the period for which back-ups should be kept.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.COMAU.01.Jacket**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data for ergonomic parameters monitoring, coming from sensors installed on a
jacket worn by the operators. This dataset will comprise several distinct sub-
datasets corresponding to each type of sensor, in order to simplify its
maintainability. Those sub-datasets, named
DS.COMAU.01.Accelerometer_jacket,
DS.COMAU.01.Gyroscope_jacket,
DS.COMAU.01.Cardio_jacket etc., will follow similar procedures but will be
managed independently.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The dataset will be collected by different sensors installed on jackets worn
by operators, namely: an accelerometer, a gyroscope, and a temperature sensor.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
COMAU
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
COMAU with ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata include the worker's posture (e.g. trunk bending
forward/backward) and a timestamp.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data format has not been defined yet; it will be defined by the technical
partners. The volume of data is estimated at less than 2 MB per day per worker.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Benchmarking of a series of human detection and tracking methods, activity
detection focusing either on pose and gestures analysis and tracking
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to become of widely open access, a data
management portal will be created that should provide a description of the
dataset and link to a download section. Of course, these data will be
anonymized, so as not to have any potential ethical issues with their
publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The created dataset could be shared by using open APIs through the middleware.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
All information belongs to the industrial partner that owns the shop floor.
All data will respect the partner policies. Data have to be stored in a
circular SQL database (not yet existing) that must retain data for at least
one year; each day, the oldest data (older than 365 days) should be deleted.
</td> </tr> </table>
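To illustrate the intended retention behaviour, here is a minimal sketch (using sqlite3 as a stand-in, since the SQL database does not yet exist; table and column names are hypothetical):

```python
import sqlite3

# Stand-in for the planned circular SQL database: keep one year of jacket
# sensor data and purge older rows once per day. Names are hypothetical.
db = sqlite3.connect("jacket_sensors.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS jacket_samples (
        worker_id TEXT,
        sensor    TEXT,   -- accelerometer / gyroscope / cardio
        ts        TEXT,   -- 'YYYY-MM-DD HH:MM:SS' (UTC)
        value     REAL
    )
""")

def daily_purge(conn: sqlite3.Connection) -> None:
    """Delete samples older than 365 days; intended to run once per day."""
    conn.execute("DELETE FROM jacket_samples WHERE ts < datetime('now', '-365 days')")
    conn.commit()

daily_purge(db)
```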
<table>
<tr>
<th>
**DS.COMAU.02.RFID_torque_wrench**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Sensors installed on the workbench where the operator normally works
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
RFID installed on the torque wrenches used by the operator.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
COMAU
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
COMAU with REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3 and
WP4
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata is yet to be defined by the solution provider.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Format will be defined by the technical partners; thus, a data volume
estimation will be provided later.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Production process recognition and help during the different production phases, avoiding mistakes
* Support of quality checks and production batches recalls
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to become of widely open access, a data
management portal will be created that should provide a description of the
dataset and link to a download section. Of course, these data will be
anonymized, so as not to have any potential ethical issues with their
publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The sharing of this data is yet to be decided in accordance to COMAU policies
and other partners’ requirements.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
All information belongs to the industrial partner that owns the shop floor.
All data will respect the partner policies. Data has to be stored on a SQL
Circular Database (not yet existing) till the end of life/warranty of the
produced component.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.COMAU.03.Work_bench_camera**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Sensors installed on the workbench where the operator normally works
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Camera installed on the workbench where the operator works. The type of camera
will be defined at a later stage based on the use-cases to be developed.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
COMAU with REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3 and
WP4
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata is yet to be defined by the solution provider.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Format will be defined by the technical partners; thus, a data volume
estimation will be provided later.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Production process recognition and help during the different production phases, avoiding mistakes
* Support of quality checks and production batches recalls
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to become of widely open access, a data
management portal will be created that should provide a description of the
dataset and link to a download section. Of course, these data will be
anonymized, so as not to have any potential ethical issues with their
publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The sharing of this data is yet to be decided in accordance to COMAU policies
and other partners’ requirements.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
All information belongs to the industrial partner that owns the shop floor.
All data will respect the partner policies. Data has to be stored on a SQL
Circular Database (not yet existing) till the end of life/warranty of the
produced component.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.COMAU.04.Glasses**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The data are collected by the sensors installed on the glasses developed by
GlassUp. Images, videos and sound form a dataset arising from actions
triggered through the glasses.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The dataset will be collected by different sensors installed on the GlassUp
glasses worn by operators during their daily activities. The actions supported
by the glasses that produce these data are remote assistance and remote
maintenance.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
GLASSUP
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
COMAU with GLASSUP
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
Middleware Manager with GLASSUP
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
Middleware Manager with GLASSUP
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3 and
WP4
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The dataset will be accompanied by a detailed documentation of its contents.
Indicative metadata include:
(a) description of the experimental setup (e.g. location, date and time,
serial number of the eyeglass, badge number of the user, code number of the
test that was being performed etc.) and procedure that led to the generation
of the dataset, (b) action that the operator will take using the mobile
application (i.e. recording video, sending pictures, tag the log of date for
the event + information/description of the event).
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data will be stored in XML format; the volume has not been estimated yet,
as it depends on the video format and the tests on use cases. At least
1 GB/day is expected.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Benchmarking of a series of human detection and tracking methods, activity detection focusing either on pose and gestures analysis and tracking
* For the cameras: Support for contextual data related on the machine/devices monitored, remote support for maintenance
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to become of widely open access, a data
management portal will be created that should provide a description of the
dataset and link to a download section. Of course, these data will be
anonymized, so as not to have any potential ethical issues with their
publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The sharing of this data is yet to be decided in accordance to COMAU policies
and other partners’ requirements.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The data will be stored daily on the mobile app and backed up daily on
the SatisFactory server. Furthermore, in order to avoid data losses,
RAID and other common backup mechanism will be utilized ensuring data
reliability and performance improvement.
The dataset will remain at the data management portal for the whole project
duration, as well as at least for 2 years after the end of the project.
Finally, after the end of the project, the portal is going to be accommodated
with other portals at the same server, so as to minimize the needed costs for
its maintenance.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.COMAU.05.Digital_caliper_USB**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Sensors installed on the workbench where the operator normally works
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
RFID installed on the torque wrenches used by the operator.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
COMAU
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
COMAU with REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3 and
WP4
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata is yet to be defined by the solution provider.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Format will be defined by the technical partners; thus, a data volume
estimation will be provided later.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Production process recognition and help during the different production phases, avoiding mistakes
* Support of quality checks and production batches recalls
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to become of widely open access, a data
management portal will be created that should provide a description of the
dataset and link to a download section. Of course, these data will be
anonymized, so as not to have any potential ethical issues with their
publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The sharing of this data is yet to be decided in accordance to COMAU policies
and other partners’ requirements.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
All information belongs to the industrial partner that owns the shop floor.
All data will respect the partner policies. Data has to be stored on a SQL
Circular Database (not yet existing) till the end of life/warranty of the
produced component.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.COMAU.06.Torque_wrench_USB**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Sensors installed on the workbench where the operator normally works
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
RFID installed on the torque wrenches used by the operator.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
COMAU
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
COMAU with REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3 and
WP4
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata is yet to be defined by the solution provider.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data format has not been defined yet; it will be defined by the technical
partners, thus a data volume estimation will be provided later.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Production process recognition and help during the different production phases, avoiding mistakes
* Support of quality checks and production batches recalls
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to become of widely open access, a data
management portal will be created that should provide a description of the
dataset and link to a download section. Of course, these data will be
anonymized, so as not to have any potential ethical issues with their
publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The sharing of this data is yet to be decided in accordance with COMAU
policies and the other partners’ requirements.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
All information belongs to the industrial partner that owns the shop floor.
All data will respect the partner's policies.
Data has to be stored on a SQL Circular Database (not yet existing) until the
end of life/warranty of the produced component.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.COMAU.07.Dinamometer_USB**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Sensors installed on the workbench where the operator normally works
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
RFID installed on the torque wrenches used by the operator.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
COMAU
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
COMAU with REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata is yet to be defined by the solution provider.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data format has not been defined yet; it will be defined by the technical
partners, and a data volume estimate will be provided later.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Recognition of the production process and guidance during the different production phases, helping to avoid mistakes
* Support for quality checks and production batch recalls
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential, and only the members of the consortium
will have access to it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to be made openly accessible, a data
management portal will be created providing a description of the dataset and a
link to a download section. These data will, of course, be anonymized so as to
avoid any potential ethical issues with their publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The sharing of this data is yet to be decided in accordance with COMAU
policies and the other partners’ requirements.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
All information belongs to the industrial partner that owns the shop floor.
All data will respect the partner's policies.
Data has to be stored on a SQL Circular Database (not yet existing) until the
end of life/warranty of the produced component.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.COMAU.08.Micrometer_USB**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Sensors installed on the workbench where the operator normally works
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
RFID installed on the torque wrenches used by the operator.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
COMAU
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
COMAU with REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata is yet to be defined by the solution provider.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data format has not been defined yet; it will be defined by the technical
partners, and a data volume estimate will be provided later.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Recognition of the production process and guidance during the different
production phases, helping to avoid mistakes.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential, and only the members of the consortium
will have access to it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to be made openly accessible, a data
management portal will be created providing a description of the dataset and a
link to a download section. These data will, of course, be anonymized so as to
avoid any potential ethical issues with their publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The sharing of this data is yet to be decided in accordance with COMAU
policies and the other partners’ requirements.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
All information belongs to the industrial partner that owns the shop floor.
All data will respect the partner's policies.
Data has to be stored on a SQL Circular Database (not yet existing) until the
end of life/warranty of the produced component.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.COMAU.09.Digital_dial_USB**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Sensors installed on the workbench where the operator normally works
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
RFID installed on the torque wrenches used by the operator.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
COMAU
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
COMAU with REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
REGOLA
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata is yet to be defined by the solution provider.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data format has not been defined yet; it will be defined by the technical
partners, and a data volume estimate will be provided later.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Recognition of the production process and guidance during the different production phases, helping to avoid mistakes
* Support for quality checks and production batch recalls
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential, and only the members of the consortium
will have access to it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to be made openly accessible, a data
management portal will be created providing a description of the dataset and a
link to a download section. These data will, of course, be anonymized so as to
avoid any potential ethical issues with their publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The sharing of this data is yet to be decided in accordance with COMAU
policies and the other partners’ requirements.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
All information belongs to the industrial partner that owns the shop floor.
All data will respect the partner's policies.
Data has to be stored on a SQL Circular Database (not yet existing) until the
end of life/warranty of the produced component.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.ISMB.01.FallDetection**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td>
<td>
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
This dataset contains information about fall events involving Smart Assembly
Station workers.
The information is obtained by the Gesture & Content Recognition Manager,
which performs a complex set of analyses on input video streams from a
composite device comprising a conventional colour camera and a time-of-flight
infrared sensor.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Gesture & Content Recognition Manager connected to a Kinect XBOX 360 depth
sensor located in the Smart Assembly Station
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Devices are owned by the industrial partner.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
The data is not currently stored
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP3: T3.3
</td> </tr>
<tr>
<td>
**Standards**
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The fall dataset includes:
* the identifiers of both Smart Assembly Station and shop floor where the event occurred
* the date when the event occurred
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The dataset XML format is defined by the Common Information Data Exchange
Model XML Schema Definition (see WorkAlertInformationType). Each event size is
normally about 2 KB of data.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The events triggered are used by SatisFactory ecosystem components to improve
actors’ reaction times in safety-related situations and to activate procedures
to avoid the recurrence of accidents.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The dataset is confidential and all information belongs to the industrial
partner that owns the shop floor.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The dataset is distributed using the LinkSmart middleware MQTT broker (a
minimal subscriber sketch follows this table).
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The data is not currently stored
</td> </tr> </table>
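As a concrete illustration of the distribution channel above, here is a minimal sketch of a consumer subscribing to fall-alert events on the LinkSmart MQTT broker. It assumes the paho-mqtt 1.x client API; the broker address, topic name and XML element names are illustrative guesses, since the actual CIDEM WorkAlertInformationType schema is defined elsewhere.

```python
import xml.etree.ElementTree as ET
import paho.mqtt.client as mqtt

BROKER_HOST = "linksmart.local"      # assumed broker address
TOPIC = "satisfactory/alerts/fall"   # assumed topic name

def on_message(client, userdata, msg):
    event = ET.fromstring(msg.payload)        # each event is ~2 KB of XML
    station = event.findtext("stationId")     # hypothetical element names
    floor = event.findtext("shopFloorId")
    date = event.findtext("date")
    print(f"Fall event at station {station} (shop floor {floor}) on {date}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```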
<table>
<tr>
<th>
**DS.ISMB.02.GestureDetection**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Through continuous monitoring of the Smart Assembly Station, the Gesture &
Content Recognition Manager can spot predefined worker gestures.
This dataset contains information about the detected gestures.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Gesture & Content Recognition Manager connected to a Kinect
XBOX 360 depth sensor located in the Smart Assembly
Station.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Devices are owned by the industrial partner.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
The data is not currently stored
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP3: T3.3
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The gesture data set includes:
* the identifiers of both Smart Assembly Station and shop floor where the event occurred
* the date when the event occurred
* the type of gestures (LeftHandSwipeRight,
RightHandSwipeLeft, BothHandsRaised,
RightArmRaisedLeftArmPointOut,
LeftArmRaisedRightArmPointOut)
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The dataset XML format is defined by the Common Information Data Exchange
Model XML Schema Definition (see
GestureType). Each event size is normally about 2 KB of data.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The events triggered are used to feed management toolkits and to manage
contactless applications where users do not have easy access to standard input
devices due to safety gear.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The dataset is confidential and all information belongs to the industrial
partner that owns the shop floor.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The dataset is distributed using the LinkSmart middleware MQTT broker.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The data is not currently stored
</td> </tr> </table>
<table>
<tr>
<th>
**DS.ISMB.03.PresenceDetection**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Through continuous monitoring of the Smart Assembly Station, the Gesture &
Content Recognition Manager can detect worker presence.
This dataset contains information about the number of workers detected.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Gesture & Content Recognition Manager connected to a Kinect
XBOX 360 depth sensor located in the Smart Assembly
Station.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Devices are owned by the industrial partner.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
The data is not currently stored
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP3: T3.3
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The presence data set includes:
* the identifiers of both Smart Assembly Station and shop floor where the event occurred
* the date when the event occurred
* the people count and previous people count
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data set XML format is defined by the Common Information Data Exchange
Model XML Schema Definition (see PresenceType). Each event size is normally
about 2 KB of data.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The events triggered are used to monitor the presence of workers in the Smart
Assembly Stations.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The dataset is confidential and all information belongs to the industrial
partner that owns the shop floor.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The dataset is distributed using the LinkSmart middleware MQTT broker.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data is not currently stored.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.ISMB.04.VideoRecordingEvent**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
When specific events such as fall alarms are triggered by the Gesture &
Content Recognition Manager, the Multiple Media Manager server automatically
encodes the video and uploads it to the central unit.
The data set gives information about the current phase of the encoding
process.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Multiple Media Manager installed on Smart Assembly Station core hardware (an
Intel Next Unit of Computing)
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Devices are owned by the industrial partner.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
The data is not currently stored
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP3: T3.3
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The video recording data set includes:
* the identifiers of both Smart Assembly Station and shop floor where the event occurred
* the date when the event occurred
* the referred alert event id
* the path of the produced video
* the information of the current phase about the encoding process
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data set XML format is defined by the Common Information Data Exchange
Model XML Schema Definition (see RecordingType). Each event size is normally
about 2 KB of data.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Data can be used by supervisors to correctly retrieve stored video
information.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The dataset is confidential and all information belongs to the industrial
partner that owns the shop floor
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The dataset is distributed using the LinkSmart middleware MQTT broker.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The data is not currently stored.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.ISMB.05.VideoRecording**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
When specific events such as fall alarms are triggered by the Gesture &
Content Recognition Manager, the Multiple Media Manager server automatically
encodes the video and uploads it to the central unit.
The data set is the encoded video containing the recording of the last minutes
before the fall event.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Multiple Media Manager installed on Smart Assembly Station core hardware (in
current deployment an Intel Next Unit of Computing)
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Devices are owned by the industrial partner.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP3: T3.3
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The video contained in the data set is enriched using auxiliary data from the
Gesture & Content Recognition Manager.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The videos in the dataset have the following characteristics:
* MP4 container format
* H.264 video encoding
* Overlay metadata
* A size of about 15 MB per incident
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data could allow the visual detection and isolation of specific conditions
and factors that could have contributed to an incident, in order to identify
the actions needed to avoid its recurrence.
Furthermore, the data analysis can provide mechanisms to search for incidents
that occurred previously in similar circumstances (e.g. workers without
protective equipment, with the same skills, or at the same process time) in
order to measure the effects of preventive procedures.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The dataset is confidential and all information belongs to the industrial
partner that owns the shop floor.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The videos are distributed by the Multiple Media Manager Central Unit using
HTTPS progressive download.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The videos are stored by the Multiple Media Manager Central Unit for a
configurable period of time (currently 7 days); a retention sketch follows
this table. The size of the data depends strictly on the number of incidents
that occurred in the configured period.
</td> </tr> </table>
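A minimal sketch of the kind of retention job implied above: deleting incident videos once they exceed the configured period (currently 7 days). The storage path and file layout are assumptions, not part of the actual deployment.

```python
import time
from pathlib import Path

RETENTION_DAYS = 7                     # the configurable retention period
VIDEO_DIR = Path("/var/mmm/videos")    # hypothetical storage location

cutoff = time.time() - RETENTION_DAYS * 86400
for video in VIDEO_DIR.glob("*.mp4"):
    if video.stat().st_mtime < cutoff:
        video.unlink()                 # drop videos past the retention period
```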
<table>
<tr>
<th>
**DS.ISMB.06.LocalizationManager_VirtualFencing**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Dataset reporting potential incidents based on workers’ locations on the shop
floor. A virtual fencing approach based on a point-in-polygon algorithm is
adopted (a minimal sketch of this test follows this table). The Localization
Manager (LM) detects whether a worker is approaching a dangerous area or is
already inside it. Moreover, the LM reports when new dynamic dangerous areas
are generated, i.e. when abnormal measurements from sensors on the shop floor
are detected. It also reports when a potential incident is no longer present,
as well as when a dynamic dangerous area has been deleted.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The data is generated by the Localization Manager, which runs on the UWB GW
device.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
The developed software is owned by ISMB whilst the stored data will be owned
by the industrial partners.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Industrial partners (CERTH, SUNLIGHT, COMAU)
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data has been collected within activities of WP3 (T3.3) and WP5.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata include: device Id, state of the worker (e.g., inside or
outside a dangerous area), name of the dangerous area, current location of the
worker in relative coordinates, alert Id, time stamp and event id.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The geofencing events can be accessed through MQTT APIs. The data is XML
formatted and it is compliant with the CIDEM specifications. The volume of
data generated per worker depends on the number of incidents detected. One
single event generates 2 KB of data.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The collected data is used for the development of the proactive incident
detection functionalities of the SatisFactory platform and for the pilot
demonstrations.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset is confidential and only the members of the consortium have
access to it. Furthermore, if the dataset or specific portions of it (e.g.
metadata, statistics, etc.) are to be made openly accessible, these could be
shared through the CIDEM. The data would, of course, be anonymized so as to
avoid any potential ethical issues with their publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The LocalizationManager VirtualFencing dataset could be shared using the APIs
of CIDEM. The public version of the data could be shared by using a suitable
authentication mechanism, so as to handle the identity of the
persons/organizations that access them.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data is stored in the CIDEM by the LinkSmart middleware, by means of the event
aggregator. The Localization Manager software has been installed on the
end-users’ premises for the industrial pilot demonstrators. The approximate
total volume will be 4 MB of data, and it will be stored for as long as CIDEM
operates at the industrial pilots.
</td> </tr> </table>
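The point-in-polygon test at the heart of the virtual-fencing approach can be sketched with the classic ray-casting method; the fence polygon and worker position below are illustrative, not real shop-floor coordinates.

```python
def point_in_polygon(x: float, y: float, polygon: list) -> bool:
    """Ray-casting test: True if (x, y) lies inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Illustrative fence polygon and worker position (relative coordinates).
dangerous_area = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(point_in_polygon(2.0, 1.5, dangerous_area))   # True -> raise an alert
```

The same test serves both static and dynamic dangerous areas: a dynamic area simply adds or removes a polygon from the set checked on every position update.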
<table>
<tr>
<th>
**DS.ISMB.07.UWB_Localization**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Dataset reporting, in real time, the current position of workers on the shop
floor by means of UWB-based wearable devices. In particular, a wearable device
continuously performs ranging (i.e. distance) measurements to fixed UWB
anchors on the shop floor and estimates the worker’s position by running a
localization algorithm (a least-squares sketch follows this table).
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The raw localization data is estimated by the UWB-based wearable devices
carried by workers. Wearable devices send the workers’ locations to a UWB GW,
which is connected to the SatisFactory infrastructure. Moreover, this data is
put into the CIDEM by the Localization Manager once per minute.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
The UWB-based wearable devices are owned by ISMB.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Industrial partners (CERTH, SUNLIGHT, COMAU)
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data has been collected within activities of WP3 (T3.3) and WP5.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata include: event id, device Id, estimated position of the
worker in relative coordinates, shop floor Id, anchor connectivity and time
stamp.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The localization data is stored in the CIDEM by the LinkSmart middleware. The
data is XML formatted and CIDEM compliant. One position per minute is
estimated for each worker. The volume of data generated depends on the number
of workers localized and for how long they are localized. One single event
generates 4 KB of data.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The raw data is used by the Localization Manager for the detection of
proactive incidents on the shop floor, based on the workers’ current
positions. Furthermore, the localization data can be exploited by other
components of the SatisFactory infrastructure, such as the Augmented Reality
and Collaboration Tools.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset is confidential and only the members of the consortium have
access to it. Furthermore, if the dataset or specific portions of it (e.g.
metadata, statistics, etc.) are to be made openly accessible, these could be
shared through the CIDEM. The data will, of course, be anonymized so as to
avoid any potential ethical issues with their publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The UWB Localization dataset could be shared using the CIDEM APIs. The public
version of the data could be shared through CIDEM by using a suitable
authentication mechanism so as to handle the identity of the
persons/organizations that access them.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data is stored in the CIDEM by LinkSmart, by means of the event aggregator.
The UWB infrastructure (i.e. UWB GW, UWB anchor nodes, UWB-based wearable
devices) has been installed on the end-users’ premises for the industrial
pilot demonstrators. The approximate total volume will be 100 MB of data,
retained for as long as CIDEM operates at the industrial pilots.
</td> </tr> </table>
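The plan does not specify the localization algorithm running on the wearable, but a common choice for anchor-based ranging is a linearized least-squares fit; the sketch below illustrates the idea for a 2-D position. Anchor coordinates and range measurements are illustrative (the true position here is roughly (3, 4)).

```python
import numpy as np

# Fixed anchor coordinates (metres) and one set of range measurements.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [10.0, 8.0]])
ranges = np.array([5.00, 8.06, 5.00, 8.06])

# Linearize by subtracting the first range equation from the others:
#   2 (a_i - a_0) . p = |a_i|^2 - |a_0|^2 + r_0^2 - r_i^2
x0, y0 = anchors[0]
A = 2 * (anchors[1:] - anchors[0])
b = (anchors[1:, 0] ** 2 - x0 ** 2 + anchors[1:, 1] ** 2 - y0 ** 2
     + ranges[0] ** 2 - ranges[1:] ** 2)
position, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"Estimated worker position: {position}")   # ~[3, 4]
```

One such fix per worker per minute, at 4 KB per event, is what accumulates to the 100 MB volume estimated above.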
<table>
<tr>
<th>
**DS.ISMB.08.Ergonomics_Data**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Dataset reporting, in real time, workers’ back attitude (posture) as well as
alerts related to incorrect postures (ergonomics alerts) adopted during daily
activities (a simple alerting sketch follows this table). This data set
supports ergonomics applications aiming to improve wellness on the shop floor.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Raw attitude data is estimated by the UWB-based wearable devices carried by
workers. In addition, these data and the ergonomics alerts are put into the
CIDEM by the UWB GW.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
The UWB-based wearable devices are owned by ISMB.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Industrial partners (CERTH, SUNLIGHT, COMAU)
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ISMB
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data has been collected within activities of WP3 (T3.3) and WP5.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata include: shop floor id, event id, device id, data type,
time stamp, space id, short description and pitch and roll measurements
(attitude data) or alert id and priority for ergonomics data.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The ergonomics data is XML formatted and CIDEM compliant. One attitude sample
per second is generated for each worker, while ergonomics alerts are generated
only when an event occurs. The volume of data generated depends on the number
of workers and on the alerts triggered per person during one working day. One
single event generates 1 KB of data.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Attitude data and ergonomics data are used by other SatisFactory components in
order to evaluate the ergonomics of workers.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset is confidential and only the members of the consortium have
access to it. Furthermore, if the dataset or specific portions of it (e.g.
metadata, statistics, etc.) are to be made openly accessible, these could be
shared through the CIDEM. The data will, of course, be anonymized so as to
avoid any potential ethical issues with their publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The dataset could be shared using the CIDEM APIs. The public version of the
data could be shared through CIDEM by using a suitable authentication
mechanism so as to handle the identity of the persons/organizations that
access them.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data is stored in the CIDEM by means of the CIDEM APIs. The approximate total
volume will be 865 MB of data, retained for as long as CIDEM operates at the
industrial pilots.
</td> </tr> </table>
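As an illustration of how the one-sample-per-second attitude stream could drive an ergonomics alert, here is a minimal sketch; the pitch threshold and sustained-posture window are assumed values, not project parameters.

```python
from collections import deque

PITCH_LIMIT_DEG = 20.0   # assumed "incorrect posture" threshold
WINDOW_SECONDS = 10      # assumed sustained-posture window (1 sample/second)

recent = deque(maxlen=WINDOW_SECONDS)

def on_attitude_sample(pitch_deg: float, roll_deg: float) -> bool:
    """Feed one per-second attitude sample; return True when an alert fires."""
    recent.append(abs(pitch_deg) > PITCH_LIMIT_DEG)
    # Fire only when the bad posture is sustained for the whole window,
    # avoiding alarms on momentary bends.
    return len(recent) == WINDOW_SECONDS and all(recent)

for t in range(15):
    if on_attitude_sample(pitch_deg=25.0, roll_deg=2.0):
        print(f"Ergonomics alert at t={t}s: sustained incorrect back posture")
```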
<table>
<tr>
<th>
**DS.ABE.01.IntegratedDSS**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The input dataset of the Shop Floor Feedback Engine and the iDSS is obtained
from the Smart Sensor Network. Thermal and depth cameras capture events on the
shop floor; processed and anonymised alarms are sent to the Shop Floor
Feedback Engine and the iDSS. The Device Manager, Gesture Recognition Context
Manager, Semantic Manager, AR In-Factory Platform and Gamification Adaptation
Interface create events and store them in the CIDEM, and the iDSS accesses
this data from the CIDEM through the LinkSmart middleware.
The output dataset of the iDSS includes tasks created on the Maintenance
Toolkit and their propagation to the Shop Floor Feedback Engine, Gesture
Recognition Context Manager, Semantic Manager and AR In-Factory Platform;
tasks are created based on the input data. The iDSS sends message data to the
Gamification Platform so that users gain points in the Maintenance game (an
illustrative task message is sketched after this table). Notification data,
such as push notifications and emails, is also output by the iDSS, and all
output data is communicated to the CIDEM repository.
The output data further feeds the Human Resources Re-Adaptation Toolkit and
the creation of automated schedules according to the tasks on the Maintenance
Toolkit. It is used by the Visual Training Data Analytics Toolkit to create
Key Performance Indicators that explain the knowledge gained by the
SatisFactory components, and it serves as input for the AR Glasses of the AR
In-Factory Platform.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Device Manager, Event Manager, Semantic Context Manager,
AR In-Factory Platform, Gamification Adaptation Interface, Gesture Recognition
Context Manager, iDSS, CIDEM,
LinkSmart Middleware, Shop Floor Feedback Engine
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
The device will be owned by the industry (COMAU, CERTH/CPERI, SUNLIGHT), where
the data collection is going to be performed.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Various partners related to the specific incident and/or operation
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
Various partners related to the specific incident and/or operation
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
ABE will store data related to integratedDSS (various partners can handle the
rest of the data)
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of WP3, WP4 and WP5.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata include: shop floor data, production results (including
timestamp, location), preventive maintenance schedules combined with
instruction and attachments. Metadata for Gamification platform contain
database entries such as ID, trades, taskID, task status. Notification
metadata also use database entries to create the notification.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Data can be available in XML or JSON format. The volume of data cannot be
reliably predicted in advance of real use of the technology at the shop floor
level, but an estimate of 25 MB is possible. This figure covers all data
created, including attachment files, which can be in different formats (PDF,
TXT, JPEG, MPEG, etc.). HTTP messages sent are also included in the estimate.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Collected data is used for operational purposes and the real-time
functionality of the iDSS. Data is used for processing, creating and
publishing tasks to different SatisFactory components and to improve
maintenance procedures on the shop floors. Real-time data should be available
for continuous operation of the iDSS as defined. Data produced by the iDSS is
also published and made available to other components for real-time use. The
AR Glasses will also use data in real time to show the created tasks on
screen.
Collected data will be used for a better understanding of the processes and
activities evolving on the shop floor, which, in conjunction with pre-defined
response policies and strategies, will provide actionable knowledge in the
form of a set of recommendations regarding both maintenance and manufacturing
operations. In addition, new knowledge will be extracted and exploited from
the Gamification Adaptation Interface; this knowledge comes from the social
collaboration of workers outside the working environment. The Visual Training
Data Analytics Toolkit (VTDAT) uses the available data to create
visualisations of KPIs and to quantify the knowledge gained.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access on it.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
If such a need arises, sharing of this data among consortium partners will be
decided and handled based on agreed terms with the respective industrial
partner.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data will be stored in a DB. RAID and other common backup mechanisms will be
utilized to ensure data reliability, improve performance and avoid data
losses.
</td> </tr> </table>
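Since the iDSS data can be exchanged as XML or JSON, the following is an illustrative JSON task message of the kind the iDSS might publish to other components; every field name here is a guess loosely based on the metadata listed above (IDs, task ID, task status, timestamps, attachments, gamification points), not the actual message schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical maintenance-task message; field names are illustrative only.
task = {
    "taskId": "MT-0042",
    "status": "CREATED",
    "shopFloorId": "SF-1",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "instructions": "Replace worn gasket on press 3",
    "attachments": ["gasket_drawing.pdf"],
    "gamificationPoints": 10,   # points granted in the Maintenance game
}
payload = json.dumps(task)
print(payload)   # a few hundred bytes per task message
```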
<table>
<tr>
<th>
**DS.FIT.01.UserRequirements**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that is collected in user workshops with the goal of understanding the
shop floor workers’ work environment and their needs
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Semi-structured interviews and other questioning techniques in user workshops
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Data not collected by device
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
The interviews will be conducted by COMAU, SUNLIGHT, CERTH/CPERI and FIT.
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
FIT
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
FIT
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
Requirements engineering is the focus of WP1.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The data collection process will be documented, and minutes of the workshops
will be taken.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Semi-structured interviews, questionnaires, shadowing, think-aloud prototypes,
Velcro modelling
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The collected data builds the foundation for all activities in the project.
The analysis will determine what SatisFactory shall achieve and thus the
actions for all WPs.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it. The results of the analysis will be accessible in the
public deliverables D1.1 and, partly, D1.2. For this, all data will be
anonymized.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The dataset will be stored in a restricted folder of the BSCW and will only be
shared with the partners.
It cannot be made available due to confidentiality agreements with the
interviewees themselves.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
Forever
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Included in the normal BSCW backup strategy
</td> </tr> </table>
<table>
<tr>
<th>
**DS.Regola.01.ARModels**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Dataset containing information captured at low level and representing human
activities in the shop floor context; graphic models describing the objects
processed during the activities; video and audio recorded during actions (e.g.
activities occurring at the shop floor) obtained with AR cameras mounted on
specific wearable devices; executable scripts containing step-by-step
instructions, dynamic help visualized through the glasses, etc.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The dataset will be collected using cameras integrated in the wearable device
and cameras located at the areas of interest.
The recordings will be in colour and high resolution.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
The device will be owned by the industry (COMAU, CERTH/CPERI, SUNLIGHT), where
the data collection is going to be performed.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Regola
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
Regola
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
Regola
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of T2.5 and of T4.3.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The dataset will be accompanied by a detailed documentation of its contents.
Indicative metadata include: (a) description of the working phase (e.g.
location, date, etc.) SOP and procedure that led to the generation of the
dataset, (b) description of involved objects.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
3D formats: 3DM, 3DS, DXF, DWG, IGES, Collada DAE, FBX, OBJ, PLY, ASC, RAW,
SKP, SLDPRT, STP, STEP, STL, WRL, VRML, SGF and SGP (proprietary scene-graph
file formats). Image formats: BMP, DIB, JPG, TGA, PNG, DDS, HDR. Audio
formats: WAV, MID, MP3. Video formats: AVP, MPG. Motion capture formats: BVH,
C3D, HTR, GTR. Original SOP formats: PDF, DOCX, XLS, XLSX, etc. R3D RT SOP
formats: RTS (proprietary XML-based file format). The data will be stored in
XML format and is estimated at 40 GB per day.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The collected data will be used for the analysis of operators' behaviour, the
development of scripts containing step-by-step instructions, and checks on the
correctness of activities.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access on it. Furthermore, if the dataset or specific portions of it
(e.g. metadata, statistics, etc.) are to become of widely open access, a data
management portal will be created that should provide a description of the
dataset and link to a download section. Of course, these data will be
anonymized, so as not to have any potential ethical issues with their
publication and dissemination.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The full dataset will be shared using a data management portal. The data
management portal will be equipped with authentication mechanisms, so as to
handle the identity of the persons/organizations that download them, as well
as the purpose and the use of the downloaded dataset.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Both the full and public versions of the dataset will be accommodated at the
data management portal; RAID and other common backup mechanisms will be
utilized to ensure data reliability and improve performance.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.Regola.02.TrainingData**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Dataset containing audit data captured by the Presentation Tool. The dataset
includes execution instances of the Training Procedures in the shop floor
context. The data are described in Annex B of deliverable D2.5. They include
parameters identifying the trainee and the procedure; the procedure under
execution; the time spent executing the procedure’s steps; the survey data
submitted at the end of the procedure; etc.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The dataset will be collected using the platform where the Presentation tool
is under execution: Smartphones; Smartglass; Tablets.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
The device will be owned by the industry (COMAU, CERTH/CPERI, SUNLIGHT), where
the data collection is going to be performed.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Regola
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ABE
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
The industry (COMAU, CERTH/CPERI, SUNLIGHT), where the data collection is
going to be performed.
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data are going to be collected within activities of T2.5.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The dataset is stored in XML format, so the metadata specifying it are already
part of the dataset.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The dataset is stored in XML format. The overall volume of the data depends on
the number of training sessions completed. A possible value can be computed as
10 KB × N, where N is the number of training sessions (see the sketch after
this table).
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The collected data will be used for further analysis by the Training Data
Analytics Tool currently under development by ABE. The aim of the analysis is
to assess the skills of the trainee and to gather training statistics, in
order to provide feedback able to improve the training procedures.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
The full dataset will be confidential and only the members of the consortium
will have access to it. If the dataset or portions of it are to be made openly
accessible, the data **must** be anonymized in order to avoid (a) privacy
issues under the applicable laws of the country where the data are gathered,
and (b) issues with the trade unions.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The dataset is not intended to be shared using a data management portal;
instead, it is intended to be managed by the Training Data Analytics tool.
Nevertheless, the dataset could easily be accessed and reused for further
applicable needs, taking into account how easily statistics about working
steps can be built from the bulk of the individual data gathered.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
The dataset will be stored in the CIDEM. The data lifetime is determined by
the applicable policies specified by the shop floors.
</td> </tr> </table>
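The volume formula above is easy to make concrete; a tiny sketch, assuming the stated 10 KB of audit data per completed training session.

```python
def training_data_volume_kb(sessions: int, per_session_kb: float = 10.0) -> float:
    """Estimated audit-data volume for N completed training sessions."""
    return sessions * per_session_kb

# e.g. 5,000 sessions -> 50,000 KB, i.e. roughly 50 MB
print(training_data_volume_kb(5000))
```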
<table>
<tr>
<th>
**DS.Sunlight.01.MotiveBatteriesAssembly**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Technical data for battery assembly and working instructions for assembly and
quality control.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Technical data for battery assembly will be provided by SAP. Working
instructions will be available from a database that will be accessed through
the internal company network.
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Sunlight will be the owner of the device.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
SUNLIGHT/CERTH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
Data will be collected for WP3, T3.4
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata that will be used are SAP codes, order numbers and drawing numbers.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data will be in text format including drawings and photos. The estimated
total volume will not exceed 1TB.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used only for the development of the Satisfactory
application.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
Data will be confidential; it cannot be shared because it includes customer
order details and technical know-how.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
Data cannot be shared.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data are stored in SAP, but an intermediate storage unit will be used in order
to avoid data losses and provide a backup.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.Sunlight.02.Training&SuggestionsPlatform**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
1. Battery assembly training data (e.g. assembly instructions, drawings, quality check instructions, procedures etc.).
2. Workers’ suggestions data
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Data are stored in an internal database which will be accessed through
internal LAN
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Sunlight will be the owner of the device.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
CERTH/FIT
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
CERTH/FIT
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
SUNLIGHT/CERTH/FIT
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
Data will be collected for WP3, T3.3 and T3.4
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata that will be used are battery assembly procedures and instructions,
and assembly drawings. For user suggestions, the metadata will be the
timestamp of the suggestion and, if the user has submitted it, the
identification of the user.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data will be in text format
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used only for the development of the Satisfactory
application.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
Data will be confidential because it includes technical know-how details.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
Data will be shared by using a data management portal.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data are stored on a storage device of a server or computer. A backup will be
stored on an external storage device.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.Sunlight.03.TempMonitoringInJarFormation**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Battery Temperature measurements
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Thermal cameras
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Sunlight will be the owner of the device.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
CERTH/FIT
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
CERTH/FIT
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
SUNLIGHT/CERTH/FIT
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
Data will be collected for WP3, T3.3
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata that will be used are production dates and the Jar formation
equipment code number.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data will be in text format.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used only for the development of the SatisFactory
application.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
Data will be available only for members of the Consortium and the Commission
Services
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
Data will be shared by using a data management portal.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data will be stored on the storage device of the developed system (computer).
A backup will be stored on an external storage device.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.Sunlight.04.MalfunctionIncidentManagement**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Malfunction Incidents
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Malfunction incidents are logged manually in an .xls file or via the Shop Floor
Feedback Engine.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Sunlight will be the owner of the device.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ABE
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ABE
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
SUNLIGHT/ABE
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
Data will be collected for WP3, T3.5
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata that will be used are the incident details (date, hour, place, etc.).
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data will be in text format.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used only for the development of the SatisFactory
application.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
Data will be available only for members of the Consortium and the Commission
Services
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
Data will be shared by using a data management portal.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data will be stored on the storage device of the developed system (computer).
A backup will be stored on an external storage device.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.Sunlight.05.Handwashing**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Handwashing frequency data
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Data are stored in an internal database which will be accessed through
internal LAN
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Sunlight will be the owner of the device.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
CERTH/FIT
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
CERTH/FIT
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
SUNLIGHT/CERTH/FIT
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
Data will be collected for WP3, T3.3 and T3.4
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata that will be used are handwashing frequencies and the highest number of
handwashes over the preceding days. No personal data are included.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data will be in text format.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used only for the development of the SatisFactory
application.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
Data will be confidential.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
Data will not be shared.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
Data are stored on a storage device of a server or computer. A backup will be
stored on an external storage device.
</td> </tr> </table>
# DATA MANAGEMENT PORTAL
This section describes the specifications of the SatisFactory Data Management
Portal, a web-based portal developed within the SatisFactory project to manage
the various datasets produced by the project and to support the exploitation
prospects of each of those datasets.
Based on the information provided in the previous sections, the Data
Management Portal needs to manage a large number of datasets, collected by
various devices such as sensors and cameras, but also entered manually into IT
systems and gathered through direct interactions with employees (e.g.
interviews).
Furthermore, the Data Management Portal will need to be flexible in terms of
which parts of the datasets are made publicly available. Special attention
will be given to ensuring that publicly released data violates neither the IPR
of the project partners nor the regulations and good practices around personal
data protection. To this end, personal data will be systematically anonymised;
a minimal sketch of one such approach follows.
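The sketch below, assuming a hypothetical keyed-hash scheme and placeholder record fields (not the project's actual anonymisation pipeline), illustrates one way personal identifiers can be removed before release:

```python
# A minimal sketch, not the project's actual pipeline: pseudonymise personal
# identifiers with a keyed hash before a dataset is released.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-project-secret"  # hypothetical key, kept private

def pseudonymise(user_id: str) -> str:
    """Replace a personal identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

records = [{"user": "j.smith", "posts": 4}, {"user": "a.jones", "posts": 7}]
public_records = [{**r, "user": pseudonymise(r["user"])} for r in records]
print(public_records)  # identifiers replaced, activity counts preserved
```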
## DATA MANAGEMENT PORTAL FUNCTIONALITIES
The portal offers a variety of functionalities to facilitate the management of
the data produced for the purposes of the SatisFactory project.
The Data Management Portal is implemented as a **web-based platform** which
enables its users to easily access and effectively manage the various datasets
created throughout the project. The portal **is connected to the SatisFactory
datasets,** as stored in CIDEM, through the CIDEM API (RESTful services),
providing access to static (i.e. shop-floor gbXML) as well as dynamic (i.e.
events) information; a client-side sketch of this pattern follows.
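Illustrative only: the host and endpoint paths below are assumptions, not the documented CIDEM API. The sketch shows the general pattern of pulling static and dynamic information over RESTful services:

```python
# Hypothetical CIDEM-style REST client; endpoint names are assumptions.
import requests

BASE_URL = "https://cidem.example.org/api"  # hypothetical host

def get_shopfloor_layout(site: str) -> str:
    """Fetch static information, e.g. a shop-floor gbXML document."""
    resp = requests.get(f"{BASE_URL}/static/gbxml", params={"site": site}, timeout=30)
    resp.raise_for_status()
    return resp.text

def get_events(since: str) -> list:
    """Fetch dynamic information, e.g. events recorded after a timestamp."""
    resp = requests.get(f"{BASE_URL}/events", params={"since": since}, timeout=30)
    resp.raise_for_status()
    return resp.json()
```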
Regarding the **user authentication** , as well as the respective permissions
and access rights, the following three user categories are foreseen:
* **Admin**
The Admin has access to all of the datasets and functionalities offered by
the portal and is able to determine and adjust the editing/access rights of the
registered members and users (open access area). Finally, the Admin is able to
access and extract the analytics concerning the visitors of the portal.
* **Member**
When someone successfully registers on the portal and is given access
permission by the Admin, he/she is then considered a “registered Member”.
All registered members will have access to, and be able to manage, most of
the collected datasets.
* **User**
The SatisFactory project is dedicated to knowledge sharing and, for this reason,
aims to provide a platform for the assessment of project outcomes and the
publication of material related to the understanding of smart factory
environments. As a result, apart from the admin and registered members’ areas,
an open access area will be available for users who do not need to register;
they will have access to specific datasets as well as to project outcomes
(e.g. demo datasets). A minimal sketch of this three-role permission model follows.
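The permission names in this sketch are illustrative assumptions; the portal's actual access-control scheme may differ:

```python
# Illustrative permission names only, not the portal's actual ACL scheme.
ROLE_PERMISSIONS = {
    "admin":  {"view_open", "view_member", "manage_datasets", "manage_users", "view_analytics"},
    "member": {"view_open", "view_member", "manage_datasets"},
    "user":   {"view_open"},  # open access area, no registration required
}

def is_allowed(role: str, action: str) -> bool:
    """Check whether a role may perform a given action."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("member", "manage_datasets")
assert not is_allowed("user", "view_member")
```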
**Figure 2 - Login Page of the Data Management Portal**
Each dataset available in the portal is accompanied by a short description.
Users are able to download the datasets in specific formats (e.g. XML, CSV)
for further analysis. Security measures (e.g. CAPTCHA technologies) will be
applied to avoid data exposure.
**Figure 3 - Data access page of the Data Management Portal**
Great emphasis will be placed on properly visualising the various data
collected within the project, so that members can easily and effectively
manage them. A variety of graphs, pie charts, etc. will be employed to help
members understand and work with the data.
## DATA MANAGEMENT PORTAL ARCHITECTURE & DESIGN
**Architecture**
The overall system architecture is shown in the figure below, presenting all
individual interfaces developed for the Data Management Portal, in a 3-tier
schema (database layer, application layer, client layer) where each layer
performs a specific function.
**Figure 4 - Data Management Portal Architecture**
**Data Tier**
The Data Tier includes the databases, the data management tools (CIDEM), as
well as the data access layer that encapsulates the recovery mechanisms for
historical data (CIDEM API). Through an Application Programming Interface
(API), the methods of data storage management are exposed to the Application
Tier, ensuring robust communication and continuous unobtrusive data flow.
**Application Tier**
The Application Tier implements the application logic, performing detailed
validation and editing of the data. It constitutes the intermediate level and
the heart of the analysis tool for extracting useful knowledge. Notably, this
level goes beyond simply presenting the data: it processes and analyses them
(e.g. running parallel simulation scenarios) in order to extract a variety of
useful and meaningful indicators and graphs.
**Client Tier**
This is the highest level of the application, with which the end user
interacts via the user interface elements. The way of presenting the
information is very important, given that the tool will be used by people who
may not have be familiar with technology. In this context, the development of
the application is designed in such a way that ensures the user-friendliness
and quick adaptation to the end-users.
# DISSEMINATION AND EXPLOITATION OF OPEN RESEARCH DATA
Data constitutes a strong asset of the SatisFactory project, since the several
applications developed and tested in real industrial environments throughout
the project have led to the production of 31 different datasets of
considerable volume. On top of that, considerable new applied knowledge has
been produced during the project, captured in the numerous SatisFactory
reports and scientific publications.
The consortium believes firmly in the concepts of open science and the large
potential benefits that European innovation and the economy can draw from
allowing research data to be reused at a larger scale. By ensuring that the
project’s results are used by other research stakeholders, we will stimulate
the continuity and transfer of SatisFactory outputs to further research and
other initiatives, allowing others to build upon, benefit from and be
influenced by them.
To this end, SatisFactory participates in the **Open Research Data Pilot
(ORD)** launched by the European Commission along with the Horizon 2020
programme. In this context, certain data produced by the project will be
published with open access – though this objective will obviously need to be
balanced with IPR and data privacy principles.
## SATISFACTORY OPEN RESEARCH DATA
The main openly exploitable data assets of the project take the following
forms:
* Open datasets;
* Public deliverables;
* Scientific publications.
_**Open datasets** _
Through the SatisFactory Social Collaboration Platform, which is piloted in the
three pilot industries (namely Comau, CERTH/CPERI and Sunlight), several types
of data on shop-floor activity are recorded, relating to the following:
* Posted content (image, text, video);
* Training sessions;
* AR glasses usage;
* Forum participation;
* Gamified procedures;
* Incidents that may occur;
* Ergonomics.
Such data can be anonymised and shared with open access in the form of
statistics, which can be analysed to evaluate activity in a workplace and
possibly extract knowledge from it. Each dataset can be accompanied by several
metadata fields (e.g. type, gender, age) which can support multiple kinds of
analysis on the historical data. Examples of how this kind of data is
currently analysed and presented through the SatisFactory Social Collaboration
Platform are shown in the following figure; a minimal aggregation sketch
appears after it.
**Figure 5 - Statistics of the activity on the SatisFactory Social
Collaboration Platform**
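Assuming hypothetical column names rather than the platform's real schema, the sketch below reduces raw activity logs to aggregated, identifier-free statistics suitable for open release:

```python
# Hypothetical column names; the platform's real schema may differ.
import pandas as pd

activity = pd.DataFrame({
    "user_group": ["operator", "operator", "engineer", "engineer"],
    "action":     ["post", "training", "post", "forum"],
    "count":      [3, 1, 5, 2],
})

# Aggregated counts carry no personal identifiers and can be released openly.
open_stats = activity.groupby(["user_group", "action"], as_index=False)["count"].sum()
print(open_stats)
```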
_**Public deliverables** _
The project has produced and updated more than 20 public reports which
incorporate public data and knowledge produced and integrated during the
3-year duration of the grant. This knowledge revolves around multiple research
fields and disciplines, such as:
* End-user needs analysis;
* Industrial application scenarios/use cases;
* ICT systems architecture;
* HR management;
* User experience optimisation;
* Gamification for industrial environments;
* On-the-job training;
* Semantics modelling;
* Social collaboration and information sharing in the workplace;
* Data aggregation/integration techniques;
* Evaluation methodologies;
* Industrial pilots;
* Dissemination and exploitation of results;
* etc.
An indicative example of the open and re-usable models disseminated by the
project is the SatisFactory data exchange model itself, namely CIDEM (Common
Information Data Exchange Model). CIDEM defines a shared and common vocabulary
that addresses the information needs not only of the SatisFactory project, but
of the modern factory in general. It considers both static information (e.g.
shop floor maps, assets, procedures) and dynamic data (e.g. alerts,
measurements, events) and translates them into a common, understandable
format, allowing heterogeneous information to be stored and retrieved while
supporting interoperability with common industry standards. Deliverable 1.3
describes the model in detail, allowing its further re-use and exploitation;
a purely illustrative record of this kind is sketched below.
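The field names below are assumptions chosen for illustration only; Deliverable 1.3 defines the actual CIDEM model:

```python
# Purely illustrative field names; Deliverable 1.3 defines the actual model.
event = {
    "type": "measurement",              # dynamic data, vs. static shop-floor maps
    "asset_id": "assembly-station-07",  # hypothetical asset identifier
    "timestamp": "2018-03-15T10:42:00Z",
    "payload": {"temperature_c": 41.8},
}
```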
_**Scientific publications** _
Multiple open access scientific publications have been produced in the
framework of the project, published either in conferences or in relevant
journals/books. These publications summarise the main achievements of the
project that can be further exploited by the scientific community.
## OPEN DATA DISSEMINATION PLATFORMS
Visibility of the above-mentioned assets is key to allowing other stakeholders
to be inspired by the project and re-use the produced data and knowledge, so
as to fuel the open data economy. To ensure visibility of open SatisFactory
resources, several platforms have been employed by the team, where other
researchers and the general public can find information on the project’s
results and can also download the project’s data and documents. These
platforms are listed below:
_**SatisFactory website and social media** _
The project’s website is regularly updated not only with news about the
project, but also with the project’s outputs themselves, i.e. SatisFactory
public reports and publications, which are freely accessible to visitors.
In addition, the project’s social media pages support the wide communication
of the project’s outcomes. In particular, through the SatisFactory YouTube
channel several demo videos are promoted, showcasing what the SatisFactory
solutions are able to do and how these can be implemented and utilised in real
industrial settings.
**Figure 6 - Project resources accessible on the website**
_**Zenodo** _
Zenodo is a widely used research data repository, allowing research
stakeholders to search and retrieve open data uploaded by other researchers.
The project team ensures that open project resources, such as public
deliverables, scientific papers and datasets, are regularly uploaded to
Zenodo; a minimal upload sketch against the public Zenodo API follows the
figure.
**Figure 7 - SatisFactory open data on Zenodo**
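This sketch uses the public Zenodo REST API (https://developers.zenodo.org/); the token, file name and metadata values are placeholders, and it is not the project's actual upload script:

```python
# Minimal deposit sketch; token, file name and metadata values are placeholders.
import requests

TOKEN = "YOUR-ZENODO-TOKEN"  # personal access token, kept secret
BASE = "https://zenodo.org/api"

# 1. Create an empty deposition.
dep = requests.post(f"{BASE}/deposit/depositions",
                    params={"access_token": TOKEN}, json={}).json()

# 2. Upload a dataset file into the deposition's file bucket.
with open("open_dataset.csv", "rb") as fh:
    requests.put(f"{dep['links']['bucket']}/open_dataset.csv",
                 params={"access_token": TOKEN}, data=fh)

# 3. Attach minimal metadata; the record can then be reviewed and published.
metadata = {"metadata": {"title": "Example open dataset",
                         "upload_type": "dataset",
                         "description": "Anonymised project statistics.",
                         "creators": [{"name": "Project Consortium"}]}}
requests.put(f"{BASE}/deposit/depositions/{dep['id']}",
             params={"access_token": TOKEN}, json=metadata)
```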
_**EFFRA Innovation Portal** _
The European Factories of the Future Research Association (EFFRA) is a
not-for-profit, industry-driven association promoting the development of new
and innovative production technologies. It is the official representative of
the private side in the 'Factories of the Future' public-private partnership.
The EFFRA Innovation Portal is a unique resource combining a project database
with community-building and ‘mapping’ functions, allowing users to map
projects onto the ‘Factories of the Future 2020’ priorities.
The project team makes sure the EFFRA database is updated with information
about the latest project outputs, including reports and demo material.
**Figure 8 - SatisFactory page on the EFFRA Innovation Portal**
_**The OpenAIRE platform** _
Dissemination and exploitation of the project’s open data is supported through
the EC’s OpenAIRE platform, where visitors can access all types of
SatisFactory data, searching by various keywords and metadata.
**Figure 9 - SatisFactory resources available on OpenAIRE**
_**The SatisFactory Data Management Portal** _
As explained in section 4, in order to promote exploitation of the
SatisFactory open data, the project has developed a dedicated Data Management
Portal, which, among others, allows visitors to access certain open datasets
uploaded by the project team.
# CONCLUSION
The present report constitutes the fourth and final version of the
SatisFactory Data Management Plan. It provides an updated description of the
datasets produced throughout the project, the strategy put in place for their
storage, protection and sharing, and the infrastructure implemented to manage
them efficiently. In addition, it presents the project’s measures for ensuring
visibility, sustainability and dissemination of the SatisFactory open research
data.
Throughout the project, the consortium needed to manage a large number of
datasets, collected by various means, i.e. sensors, cameras, manual inputs in
IT systems and direct interactions with employees (e.g. interviews). Almost
all the project partners have become SatisFactory data owners and/or
producers. Similarly, all the technical work packages of the project produced
data. All datasets have been handled in line with the main data security and
privacy principles, while also respecting the partners’ IPR policies.
As part of the Open Research Data Pilot (ORD), the project has taken measures
to promote the open data and knowledge produced by the project. Interested
stakeholders, such as researchers or industry actors, will be able to access
open resources generated by the project, through various platforms, even
beyond the project’s duration. This way, sustainability of the SatisFactory
outcomes will be fostered. However, particular attention needs to be paid on
ensuring that the data made openly available violates neither IPR of the
project partners, nor the regulations and good practices around personal data
protection. For this latter point, systematic anonymization of data is
necessary.
# Introduction
The present document constitutes the first issue of Deliverable D8.3
“ECO-Binder Data Management Plan” in the framework of the project titled
“Development of insulating concrete systems based on novel low-CO2 binders
for a new family of eco-innovative, durable and standardized energy efficient
envelope components” (Project Acronym: ECO-Binder; Grant Agreement No.:
637138).
## Purpose of the document
A novelty in Horizon 2020 is the Open Research Data Pilot, which aims to
improve and maximise access to, and re-use of, research data generated by
projects.
In Horizon 2020 a limited and flexible pilot action on open access to research
data will be implemented (see guidance on Open Access to Scientific
Publications and Research Data in Horizon 2020). Participating projects will
be required to develop a Data Management Plan (DMP), in which they will
specify what data will be open.
The Data Management Plan (DMP) details what data the project will generate,
whether and how it will be exploited or made accessible for verification and
re-use, and how it will be curated and preserved.
In this framework, a background document has been prepared to describe the
open access issues associated with the ECO-Binder project. On the one hand,
this document provides guidelines to maximise the dissemination of ECO-Binder
project results; on the other, it provides a management assurance framework
and processes that fulfil the data management policy in line with
confidentiality requirements.
Within this document the following aspects are reported:
* the rules set out to control and ensure the quality of project activities;
* how the material/data generated within the project will be managed effectively and efficiently;
* how data will be collected, processed, stored and managed.
The Data Management Plan (DMP) is not a fixed document; it will be updated
throughout the project by the coordinator and fine-tuned to the data generated
and the uses identified by the Consortium, since not all data or potential
uses are clear from the beginning.
New versions of the DMP should be created whenever important changes to the
project occur due to the inclusion of new data sets, changes in consortium
policies or external factors.
# Open Access and the Data Management Plan
## Overview on Open Access
Open Access is the immediate, online, free availability of research outputs
without restrictions on use commonly imposed by publisher copyright
agreements. Open Access includes the outputs that scholars normally give away
for free for publication; it includes peer-reviewed journal articles,
conference papers and datasets of various kinds.
Some advantages of the Open Access are:
* ACCESS CAN BE GREATLY IMPROVED
Access to knowledge, information, and data is essential in higher education
and research; and more generally, for sustained progress in society. Improved
access is the basis for the transfer of knowledge (teaching), knowledge
generation (research), and knowledge valorisation (civil society).
* INCREASED VISIBILITY AND HIGHER CITATION RATES
Open Access articles are much more widely read than those which are not freely
available on the Internet. Web-wide availability leads to increased use which,
in turn, raises citation rates, a fact that has been empirically supported by
several studies. Depending on the field in question, Open Access articles
achieve up to three times higher citation rates, and they are cited much sooner.
* FREE ACCESS TO INFORMATION
Open Access content is freely available worldwide, thus enabling people from
poorer countries to access and utilise scientific knowledge and information
which they would not otherwise be able to afford.
Open Access to data generated in projects funded by the European Commission is
key to lower barriers to accessing publicly-funded research, as well as to
demonstrate and share the potential of research activities supported with the
help of public funding (and finally, of the European citizens).
## Data Management Plan
This DMP deliverable is prepared according to the “ _**Guidelines on Data
Management in Horizon 2020** _ “.
References to research data management are included in Article 29.2 and 29.3
of the Model Grant Agreement (article applied to all projects participating in
the Pilot on Open Research Data in Horizon 2020):
### Article 29.2: Open access to scientific publications
_Each beneficiary must ensure open access (free of charge, online access for
any user) to all peer-reviewed scientific publications relating to its
results._
_In particular, it must:_
1. _as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications;_
_Moreover, the beneficiary must aim to deposit at the same time the research
data needed to validate the results presented in the deposited scientific
publications._
2. _ensure open access to the deposited publication — via the repository — at the latest:_
1. _on publication, if an electronic version is available for free via the publisher, or_
2. _within six months of publication (twelve months for publications in the social sciences and humanities) in any other case._
3. _ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication._
_The bibliographic metadata must be in a standard format and must include all
of the following:_
* _the terms ["European Union (EU)" and "Horizon 2020"] ["Euratom" and "Euratom research and training programme 2014-2018"];_
* _the name of the action, acronym and grant number;_
* _the publication date, and length of embargo period if applicable; and_
* _a persistent identifier._
### Article 29.3: open access to research data
_Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:_
4. _deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:_
1. _the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;_
2. _other data, including associated metadata, as specified and within the deadlines laid down in the data management plan (see Annex I);_
5. _provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves)._
## Dissemination and Communication strategy
The DMP and the actions derived are part of the overall ECO-Binder
Dissemination and Communication strategy, which will be included into the
Initial, Interim and Final Plan for Use and Exploitation of Foreground (PUEF)
as well as into the Final Dissemination Report.
[Figure: research results lead to a decision on IP protection (patenting or
other forms of protection), followed by dissemination (publication of research
results) and/or exploitation (commercialisation of research results),
resulting in either no open access or open access via the ‘green’ or ‘gold’
route.]

**Figure 1: Research results in the context of dissemination and exploitation**
As described in the Technical Annex, **all dissemination actions in the
project will pass through the DESB (Dissemination, Exploitation and
Standardisation Board)** .
The DESB is a _“consultant project body that shall assist and support the
coordinator and the SC in matter of exploitation of results issues and
disagreement resolution. It constitutes the central office co-ordinating all
contacts towards stakeholder communities and other dissemination and
communication target audiences, including the media (web, TV, newsletters,
etc.).[…] [The DESB]_ _will report to the SC_ (Steering Committee) _on issues
regarding project strategy relative to exploitation, dissemination and
standardisation, and they will be responsible to propose activities aiming at
maximizing the impact of the project. Among these activities, the DESB will
also evaluate scientific papers to be submitted in line with confidentiality
issues […] “_ .
The ECO-Binder DESB will therefore support the coordinator and the
Dissemination Task Leader in the implementation of actions described in the
Data Management Plan.
## Position of the project
The cement market is a highly conservative market with strictly regulated
cement and concrete products, where leakage of relevant information during the
R&D phase can severely affect the exploitation of results. In this respect, it
is worth recalling a statement already included in the Technical Annex
referring to the publishing of project information (including research data).
“[…] _it is worth mentioning that permission to publish any information
arising from the project will need to be submitted to the Management Board
which will ask advice to the Exploitation Committee to ensure that sensitive
material is not disclosed. In the first half of the project, dissemination of
the information about the project will remain limited to the distribution of
the publishable abstracts. This is in order not to endanger the commercial
interests of the industrial partners and the possible patenting of the ideas._
”
In addition to that, the Consortium believes that all of the data on the
mechanical, physical and chemical performance of the new binders must be kept
confidential at least until the end of the project, for the following reasons:
1. Cement performance test methods as used today were developed for Ordinary Portland cements and for Portland composite cements. The objective of such methods is to make a prediction of the long term behavior of cements based on a short term measurement. Many of the methods applied today to characterize the performance of such cements have been in use for many decades. Despite that, they do not always reliably predict the long term behavior of classical cements under ambient conditions. None of these cement test methods was conceived to characterize BYF-type cements. Consequently, they are not yet validated for BYF cements. The ECO-Binder project will help us validate the procedures. This will be done in the first project phase, but it is definitely too early yet to say exactly when they will be considered to be validated. This will induce some delay. It is however absolutely necessary that the procedures be validated before we use them to generate data to be published. As they will be used for future standardization activities, it is essential that the data generated and shared with the public, and in particular with standardization bodies, be meaningful and correct. Any data inconsistencies will seriously endanger our ultimate goal of standardizing BYF type cements for a large range of applications, a goal essential for successful commercialization of these new, sustainable cements.
2. As soon as we have validated the methods, these methods themselves can – in principle – be put into the data base; but the results of applying the test methods to the BYF binders that will be tested will still have to be kept confidential for at least 18 months in case they need to be used to file patent applications. This may well be the case for many of the key data; however, it is not yet possible to say which data will be needed
for IP-related activities and when these data will be considered non-
confidential. A decision will be made (for each data set) by the end of the
project.
Detail of this is shown in the list of expected project results (Chapter 3.3).
# Research Data
'Research data' refers to information, in particular facts or numbers,
collected to be examined and considered and as a basis for reasoning,
discussion, or calculation. In a research context, examples of data include
statistics, results of experiments, measurements, observations resulting from
fieldwork, survey results, interview recordings and images. The focus is on
research data that is available in digital form.
## Key principles for open access to research data
As indicated in Guidelines on Data Management in Horizon 2020 (European
Commission, Research & Innovation, 2013), scientific research data should be
easily:
<table>
<tr>
<th>
1) DISCOVERABLE
</th>
<th>
The data and associated software produced and/or used in the project should be
discoverable (and readily located), identifiable by means of a standard
identification mechanism (e.g. Digital Object Identifier).
</th> </tr>
<tr>
<td>
2) ACCESSIBLE
</td>
<td>
Information about the modalities, scope and licences (e.g. licensing framework
for research and education, embargo periods, commercial exploitation, etc.)
under which the data and associated software produced and/or used in the
project are accessible should be provided.
</td> </tr>
<tr>
<td>
3) ASSESSABLE and INTELLIGIBLE
</td>
<td>
The data and associated software produced and/or used in the project should be
easily assessable for, and intelligible to, third parties in contexts such as
scientific scrutiny and peer review (e.g. the minimal datasets are handled
together with scientific papers for the purpose of peer review; data is
provided in a way that allows judgments to be made about their reliability and
the competence of those who created them).
</td> </tr>
<tr>
<td>
4) USEABLE beyond the original purpose for which it was collected
</td>
<td>
The data and associated software produced and/or used in the project should be
useable by third parties even long after the collection of the data (e.g. the
data is safely stored in certified repositories for long-term preservation and
curation; it is stored together with the minimum software, metadata and
documentation to make it useful; the data is useful for wider public needs and
usable for the likely purposes of non-specialists).
</td> </tr>
<tr>
<td>
5) INTEROPERABLE to specific quality standards
</td>
<td>
The data and associated software produced and/or used in the project should be
interoperable, allowing data exchange between researchers, institutions,
organisations, countries, etc. (e.g. adhering to standards for data annotation
and data exchange, compliant with available software applications, and
allowing re-combinations with different datasets from different origins).
</td> </tr> </table>
## Roadmap for data sharing
**What and when to deposit:**
Projects participating in the open Research Data Pilot are required to deposit
the research data described below:
* The data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;
* Other data, including associated metadata, as specified and within the deadlines laid down in a data management plan (DMP).
At the same time, projects should provide information (via the chosen
repository) about tools and instruments at the disposal of the beneficiaries
and necessary for validating the results, for instance specialised software or
software code, algorithms, analysis protocols, etc. Where possible, they
should provide the tools and instruments themselves.
**How to manage the research data**
A table template for collecting the information generated during the project
is circulated periodically. Its purpose is to detail the research results
developed during the project’s life span, specifying the kind of results and
how they will be managed.
**Tag of the Eco-Binder project results**
According to Annex 1 all results will be tagged by the following information:
* **Data set reference and name:**
Identifier for the data set to be produced.
* **Data set description:**
Description of:
* the data that will be generated or collected,
* its origin (in case it is collected), nature and scale
* to whom it could be useful,
* whether it underpins a scientific publication.
Information on the existence (or not) of similar data and the possibilities
for integration and reuse.
* **Standards and metadata**
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created.
* **Data sharing**
* Description of how data will be shared, including access procedures, embargo periods (if any),
* outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups.
* Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In the case of ZENODO, these are the particular features of the applied data management (extracted from http://www.zenodo.org/policies). Data sharing conditions might be different if another repository is chosen.
* In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related).
* **Archiving and preservation (including storage and backup)**
* Description of the procedures that will be put in place for long-term preservation of the data.
* Indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered.
**Where to deposit**
Projects should deposit preferably in a research data repository and take
measures to enable third parties to access, mine, exploit, reproduce and
disseminate — free of charge for any user.
OpenAIRE (www.openaire.eu/) implements the Horizon 2020 Open Access mandate
for publications and its Open Research Data Pilot, and provides a Zenodo
repository (www.zenodo.org) that could be used for depositing data.
We could use the Zenodo repository, but it is important to underline that it
does not provide information about its users, since the data can be downloaded
without registration. Moreover, “Zenodo does not track, collect or retain
personal information from users of Zenodo, except as otherwise provided
retain personal information from users of Zenodo, except as otherwise provided
herein. In order to enhance Zenodo and monitor traffic, non-personal
information such as IP addresses and cookies may be tracked and retained, as
well as log files shared in aggregation with other community services (in
particular OpenAIREplus partners). User provided information, like corrections
of metadata or paper claims, will be integrated into the database without
displaying its source and may be shared with other services. Zenodo will take
all reasonable measures to protect the privacy of its users and to resist
service interruptions, intentional attacks, or other events that may
compromise the security of the Zenodo website.”
Alternative solutions are being investigated by the ECO-Binder Dissemination
Task Leader, together with the DESB (Dissemination, Exploitation and
Standardization Board) Chairman and the Project Coordinator, as for example
the identification of other repositories where registration is required or the
creation of a dedicated ECO-Binder Repository to be hosted on the project
website.
**Figure 2: Zenodo webpages**
## Expected project results and related research data
Expected project results, catalogued by task, are listed in the table below.
The table reports a short description of the contents, the format of the data,
and when they will tentatively be circulated. In particular, it records:
* Workpackage and task number originating the result
* The month within which the data related to the result is expected to be generated
* The partner leading the task that originates the data
* A description of the result and related data
* The expected format of the data linked with the result
* Relevant comments
This table template is circulated periodically in order to monitor the results
and set the strategy for their sharing.
**Table 1: table template for collection of project results and their sharing
strategy**
<table>
<tr>
<th>
**WP**
</th>
<th>
**Task**
</th>
<th>
**End Month**
</th>
<th>
**Leader**
</th>
<th>
**Contents**
</th>
<th>
**Format**
</th>
<th>
</th>
<th>
**Comments**
</th> </tr>
<tr>
<th>
**short description (metadata)**
</th>
<th>
**.xlsx**
</th>
<th>
**.pdf**
</th>
<th>
</th> </tr>
<tr>
<td>
**2**
</td>
<td>
**2.1**
</td>
<td>
**M12**
</td>
<td>
**VICAT**
</td>
<td>
Conduction calorimetry and setting times (EN 196-3) on pastes following EN
196-3
Heat of hydration on mortars.
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
to be kept confidential at least until the end of the project
</td> </tr>
<tr>
<td>
Relationships between time-dependent rheology and chemical admixture dosage at
various w/c ratios.
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
to be kept confidential at least until the end of the project
</td> </tr> </table>
<table>
<tr>
<th>
**WP**
</th>
<th>
**Task**
</th>
<th>
**End Month**
</th>
<th>
**Leader**
</th>
<th>
**Format**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<th>
**short description (metadata)**
</th>
<th>
**.xlsx**
</th>
<th>
**.pdf**
</th>
<th>
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Plastic shrinkage for each set of mortars.
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
to be kept confidential at least until the end of the project
</td> </tr>
<tr>
<td>
**2.2**
</td>
<td>
**M24**
</td>
<td>
**TECNALIA**
</td>
<td>
Results of tests for the optimization of the time dependent behaviour of BYF
based concrete mixes (e.g.:
Slump (ASTM C143 - 12 or EN 206), Setting time (ASTM
C403 – 08), Bleeding (ASTM C 232))
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
to be kept confidential at least until the end of the project
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**3.1**
</td>
<td>
</td>
<td>
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
to be kept confidential at least until the end of the project
</td> </tr>
<tr>
<td>
**3.2**
</td>
<td>
**M32**
</td>
<td>
**HC**
</td>
<td>
Results of compressive and flexural tests carried out on prismatic specimens
40 x40 x160 mm according to EN 1015 (Young’s Modulus)
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
to be kept confidential at least until the end of the project
</td> </tr>
<tr>
<td>
Internal loss factor using the resonance frequency method carried out on
prismatic specimens 500 x50 x40 mm.
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
to be kept confidential at least until the end of the project
</td> </tr> </table>
<table>
<tr>
<th>
**WP**
</th>
<th>
**Task**
</th>
<th>
**End Month**
</th>
<th>
**Leader**
</th>
<th>
**Format**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<th>
**short description (metadata)**
</th>
<th>
**.xlsx**
</th>
<th>
**.pdf**
</th>
<th>
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Thermal conductivity (λ) (ISO 8301:1991)
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
to be kept confidential at least until the end of the project
</td> </tr>
<tr>
<td>
**3.3**
</td>
<td>
**M36**
</td>
<td>
**DTI**
</td>
<td>
Results of microstructural characterisation
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td>
<td>
to be kept confidential at least until the end of the project
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**4.1**
</td>
<td>
**M6**
</td>
<td>
**LCR**
</td>
<td>
Durability testing protocols
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td>
<td>
to be kept confidential until the protocols have been validated
</td> </tr>
<tr>
<td>
**4.2**
</td>
<td>
**M27**
</td>
<td>
**BRE**
</td>
<td>
Results of fire testing on BYF concretes (at different scale)
</td>
<td>
x
</td>
<td>
x
</td>
<td>
x
</td>
<td>
to be kept confidential until the protocols have been validated
</td> </tr>
<tr>
<td>
**4.3**
</td>
<td>
**M48**
</td>
<td>
**BRE**
</td>
<td>
Results of long term performance and durability test
</td>
<td>
x
</td>
<td>
x
</td>
<td>
x
</td>
<td>
to be kept confidential at least until the end of the project
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**5.1**
</td>
<td>
**M24**
</td>
<td>
**NOVEL TECH**
</td>
<td>
Results of finishing technologies classification
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td>
<td>
</td> </tr>
<tr>
<td>
**WP**
</td>
<td>
**Task**
</td>
<td>
**End Month**
</td>
<td>
**Leader**
</td>
<td>
**Format**
</td>
<td>
</td>
<td>
**Comments**
</td> </tr>
<tr>
<td>
**short description (metadata)**
</td>
<td>
**.xlsx**
</td>
<td>
**.pdf**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**5.2**
</td>
<td>
**M24**
</td>
<td>
**NTUA**
</td>
<td>
Results of tests carried out on the novel finishing materials
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**M24**
</td>
<td>
**NOVEL TECH**
</td>
<td>
Insulation Materials Classification
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td>
<td>
</td> </tr>
<tr>
<td>
**M24**
</td>
<td>
**NUOVA TESI**
</td>
<td>
Results of tests carried out on insulating prefabricated samples
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**5.3**
</td>
<td>
**M27**
</td>
<td>
**NOVEL TECH**
</td>
<td>
Evaluation of optimal components – system
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td>
<td>
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
**6.1**
</td>
<td>
</td>
<td>
</td>
<td>
\-
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**6.2**
</td>
<td>
**M33**
</td>
<td>
**NUOVA TESI**
</td>
<td>
Results of quality control of the BYF cement based façade precast components
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td>
<td>
</td> </tr>
<tr>
<td>
**6.3 & **
**6.4**
</td>
<td>
**M45**
</td>
<td>
**ACCIONA**
</td>
<td>
Description of ECO-Binder envelope technologies demonstrators
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td>
<td>
</td> </tr>
<tr>
<td>
**6.5**
</td>
<td>
**M48**
</td>
<td>
**ACCIONA**
</td>
<td>
Results of monitoring analysis and evaluation of the new facade elements for
building envelopes in both new construction and renovation
</td>
<td>
x
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
**7.1**
</td>
<td>
**M36**
</td>
<td>
**GEO**
</td>
<td>
Results of LCA of the new concrete mix compositions at material level
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td>
<td>
</td> </tr>
<tr>
<td>
**7.2**
</td>
<td>
**M48**
</td>
<td>
**GEO**
</td>
<td>
Results of LCA of new products/constructions at application level (including
LCCA)
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td>
<td>
</td> </tr> </table>
# Scientific Publications
As reported in the Technical Annex, “the Consortium is willing to submit at
least 2-3 papers for scientific/industrial publication during the course of
the project”. In the framework of the dissemination plan agreed by the GA, R&D
partners are responsible for the preparation of the scientific publications,
while the DESB is responsible for review and final approval.
Here follows the tentative description of the approach towards scientific
publications in ECO-Binder.
## Selection of the publisher
As a general approach, the R&D partners are responsible for the scientific
publications as well as for the selection of the publisher(s) considered as
more relevant for the subject of matter.
Each publisher has its own policies on self-archiving. Basically, for Open
Access there are two main modalities:
* **Green open access:** researchers can deposit a version of their published work into a subject-based repository or an institutional repository.
* **Gold open access:** alternatively, researchers can publish in an open access journal, where the publisher of a scholarly journal provides free online access. Business models for this form of open access vary. In some cases, the publisher charges the author’s institution or funding body an article processing charge.
For Example:
<table>
<tr>
<th>
Repository
</th>
<th>
Self-archiving policy
</th> </tr>
<tr>
<td>
_http://www.springer.com/gp/_
</td>
<td>
_"Authors may self-archive the author’s accepted manuscript of their articles
on their own websites. Authors may also deposit this version of the article in
any repository, provided it is only made publicly available 12 months after
official publication or later. He/ she may not use the publisher's version
(the final article), which is posted on SpringerLink and other_
_Springer websites, for the purpose of self-archiving or_
</td> </tr>
<tr>
<td>
</td>
<td>
_deposit. Furthermore, the author may only post his/her version provided
acknowledgement is given to the original source of publication and a link is
inserted to the published article on Springer's website. The link must be
provided by inserting the DOI number of the article in the following sentence:
“The final publication is available at Springer via http://dx.doi.org/[insert_
_DOI]”."_
</td> </tr>
<tr>
<td>
_www.oasis-open.org/_
</td>
<td>
_Publishers can facilitate Open Access in two main ways. The publisher may, of
course, publish the work with free, online access in an Open Access journal or
as an Open Access monograph. Alternatively, if the publisher's business model
is to sell monographs or subscriptions to journals, then the publisher can
still facilitate Open Access by permitting the author to self-archive the work
in an institutional or subject repository_
</td> </tr>
<tr>
<td>
_www.sherpa.ac.uk/romeo/_
</td>
<td>
_Author's Pre-print: author can archive pre-print (ie prerefereeing)_
_Author's Post-print: author can archive post-print (ie final draft post-
refereeing)_
_Publisher's Version/PDF: author cannot archive publisher's version/PDF_
</td> </tr> </table>
As reported above, there are several policies for the publication of data and
for self-archiving. In this framework, all cases proposed by the relevant R&D
partners will be analysed and a strategy will be defined with the support of
the ECO-Binder Project Coordinator and the DESB.
In addition to the official repository Zenodo (the open research data
repository launched by CERN and OpenAIRE for open data generated by projects
in the H2020 framework) and the repositories listed above, **institutional
repositories** will be taken into account.
In particular, TECNALIA has developed its own institutional repository (i.e.
_www.dsp.tecnalia.com/_). All scientific publications with TECNALIA as an
author will be deposited in its institutional repository, regardless of
whether the ECO-Binder articles are also deposited in other repositories.
## Bibliographic metadata
For adequate identification of accessible data, all of the following metadata
information will be included (an illustrative record is sketched after the list):
Information about the grant number, name and acronym of the action:
* European Union (EU)
* Horizon 2020 (H2020), Innovation Action (IA)
* ECO-Binder [acronym]
* Grant Agreement: GA N° 637138
Information about the publication date and embargo period if applicable:
* Publication date
* (eventual) Length of embargo period
Information about the persistent identifier (for example a Digital Object
Identifier, DOI):
* Persistent identifier, if any, provided by the publisher (for example an ISSN number)
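A possible assembly of these fields into one record is sketched below; all values except the grant number are placeholders:

```python
# Illustrative record only; all values except the grant number are placeholders.
publication_metadata = {
    "funder": "European Union (EU)",
    "programme": "Horizon 2020 (H2020)",
    "action_type": "Innovation Action (IA)",
    "acronym": "ECO-Binder",
    "grant_agreement": "637138",
    "publication_date": "YYYY-MM-DD",              # filled at publication time
    "embargo_months": 0,                           # length of embargo period, if any
    "persistent_identifier": "doi:10.xxxx/xxxxx",  # DOI (or e.g. an ISSN number)
}
```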
# Conclusions
This document constitutes the first issue of the Data Management Plan. The aim
of the DMP is to provide preliminary guidelines for the management of the
project results during the life span of the project and beyond.
The DMP will be updated during the project to fine-tune it to the data
generated and the uses identified by the Consortium, since not all data or
potential uses are clear from the beginning.
# 1 INTRODUCTION
## 1.1 Purpose of the document
The URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation
Satellites) Data Management Plan describes the management of all data sets
that have been collected, processed or generated during the research project
using in-situ measurements, Earth Observation (EO) data analysis, and
Geographic Information Systems (GIS) analysis, processes and outputs. It
outlines how research data have been handled during the research project and
after its completion, describing what data have been collected, processed or
generated and following what methodology and standards, whether and how the
data have been shared and/or made open, and how they will be curated and
preserved.
# 2 DATA REPOSITORY
## 2.1 Infrastructure and Data types
The URBANFLUXES Consortium has chosen to participate on a voluntary basis in
the H2020 Pilot on Open Research Data. FORTH has developed and operates a web
server that hosts the Data Repository, the project website and the FTP server
for internal data and information exchange. The URBANFLUXES web server is a
PowerEdge R730xd server with an Intel Xeon CPU, 32 GB of RAM and 48 TB of HDDs
in a RAID 10 configuration with backup and monitoring. Of the 48 TB of
available storage space, 24 TB are available for use in the project and 24 TB
for backup. Two additional 300 GB HDDs host the OS and software and serve the
project website, through which all deliverables and publicly available
publications and data are accessible.
The URBANFLUXES Data Repository is a common place for the storage and
management of the data. The participants of URBANFLUXES and the potential
users of the products and outputs have access to the repository (see Section
4). Raw data, auxiliary data, products and their associated metadata,
documents and multimedia are stored in the repository. The URBANFLUXES
datasets and products can be distinguished into two main categories:
1. Spatial Data:
1. Vector Data (Figure 1).
2. Raster Data (Figure 2).
3. Collections of data in tables (netCDF, HDF, CSV - tabular format with values separated by commas, Matlab files).
2. Non-Spatial Data:
1. Reports
2. Dissemination material
3. Scientific publications
4. Deliverables
5. Multimedia files:
i. Photographic material ii. Videos for the promotion of the project /
Documentaries
Figure 1. Building blocks, building footprints and road network as vector data
(Heraklion).
Figure 2. WorldView II acquisition over the historic centre of Heraklion.
## 2.2 Structure
URBANFLUXES has arranged all available data in a folder management system on
the URBANFLUXES web server. The same structure is used for the data produced
during and after the end of the project. The data is accessible through the
URBANFLUXES website (Figure 3). The data can also be accessed through FTP
clients (FileZilla, SmartFTP, etc.), as shown in Figure 5. All URBANFLUXES
products related to publications are open and free after registration on the
URBANFLUXES website (see Section 4).
Figure 3. Access to the URBANFLUXES Data Repository.
The repository consists of 8 main folders, one folder for each partner:
* ALTERRA
* CESBIO
* DLR
* FORTH
* GEOK
* UNIBAS
* UoG
* UoR
Each partner retains full permission to store and modify the content of its
own folder, whereas it has only permission to read and download files (but not
save or modify content) from the folders of the other partners. Inside each
partner's folder there is one folder named PublicData, where each partner adds
datasets accompanied by the respective metadata files (see Section 3) in order
to make them publicly available on the URBANFLUXES website (see Section 4).
Figure 4. The folder-based data management scheme on the URBANFLUXES web
server.
Figure 5. The file structure of the URBANFLUXES Data Repository accessed by
the FileZilla FTP client software.
## 2.3 URBANFLUXES Datasets
2.3.1 Coordinate system, study area and grid
The UTM WGS84 projection is used as the project standard. When URBANFLUXES
products are made available to Local Authorities, they are re-projected to the
local coordinate system if requested. All data in the URBANFLUXES Data
Repository are converted to UTM, using the appropriate zone for each of the
three case study locations (Table 1). For each of the three cities a focus
area of interest has been selected and a reference grid of 100 m x 100 m
resolution has been created; a reprojection sketch follows Table 1.
Table 1. Coordinate systems of the study areas.
<table>
<tr>
<th>
</th>
<th>
Coordinate systems
</th> </tr>
<tr>
<th>
UTM and EPSG code
</th>
<th>
Local System
</th> </tr>
<tr>
<td>
London
</td>
<td>
WGS84 Zone 31N - (EPSG:32631)
</td>
<td>
</td> </tr>
<tr>
<td>
Basel
</td>
<td>
WGS84 Zone 32N - (EPSG:32632)
</td>
<td>
CH1903+ LV95 (EPSG 2056)
</td> </tr>
<tr>
<td>
Heraklion
</td>
<td>
WGS84 Zone 35N - (EPSG:32635)
</td>
<td>
GGRS87 / Greek Grid (EPSG 2100)
</td> </tr> </table>
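To make the convention concrete, the following is a minimal sketch (assuming the pyproj library) that converts a WGS84 longitude/latitude to the UTM zone of a case study, using the EPSG codes of Table 1, and snaps the result to the 100 m reference grid.

```python
# Minimal sketch: reproject a lon/lat point to the project-standard UTM zone
# (EPSG codes from Table 1) and snap it to the 100 m x 100 m reference grid.
from pyproj import Transformer

UTM_EPSG = {"London": "EPSG:32631", "Basel": "EPSG:32632", "Heraklion": "EPSG:32635"}

def to_grid_cell(city, lon, lat, cell_size=100.0):
    transformer = Transformer.from_crs("EPSG:4326", UTM_EPSG[city], always_xy=True)
    x, y = transformer.transform(lon, lat)
    # Snap to the lower-left corner of the containing 100 m grid cell.
    return (x // cell_size) * cell_size, (y // cell_size) * cell_size

print(to_grid_cell("Heraklion", 25.13, 35.34))  # example coordinates
```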
2.3.2 Earth Observation imagery and products
URBANFLUXES used multiple EO data sources for producing meaningful spatial
products to be used in the flux modeling approaches. The EO source data come
from:
* Sentinel 1 (SAR), 2 (HR) and 3 (MR) - Archived & new acquisitions
* ASTER custom night flights (HR) – New custom acquisitions
* LANDSAT mission (TM, ETM+, ETM+ SLC off and OLI/TIRS) (HR) - Archived & new acquisitions
* SPOT (HR) - Archived & new acquisitions
* WORLDVIEW II (VHR) – Archived & new acquisitions
* Aerial Imagery (VHR) and Lidar - Archived images
The main products derived from the EO data are:
* Land Cover Maps (VHR)
* Land Cover Fractions (100 m)
* Digital Surface Models (VHR)
* Urban surface morphometric parameters (100 m)
* Surface reflectance (EO data source resolution, 100m)
* Surface temperature (EO data source resolution, 100m)
* Surface emissivity (EO data source resolution, 100m)
* Leaf Area Index (EO data source resolution, 100m)
* Normalized Difference Vegetation Index (EO data source resolution, 100m)
* Surface albedo (EO data source resolution, 100m)
* Aerosol optical thickness (EO data source resolution, 100m)
* Cloud cover masks (EO data source resolution, 100m)
The information was extracted periodically, at specific time steps (e.g. yearly, monthly or seasonally), depending on the needs of the project's WPs. Raster data are stored in GeoTIFF format. GeoTIFF is a well-known, widely used uncompressed raster format; its only disadvantage is its large file size compared to other formats. Raw satellite images are stored separately, with their associated metadata files as provided by the image providers. The EO-derived products are described in detail in [R5].
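For illustration, a data user could inspect such a GeoTIFF product as in the minimal sketch below (assuming the rasterio library; the file name follows the naming convention of Section 5).

```python
# Minimal sketch: open an URBANFLUXES-style GeoTIFF and inspect its
# georeferencing before further processing.
import rasterio

with rasterio.open("LT8LULC20150430.tif") as src:  # example name from Section 5
    band = src.read(1)       # first band as a numpy array
    print(src.crs)           # expected: a UTM WGS84 code, as in Table 1
    print(src.transform)     # affine transform: origin and pixel size
    print(band.shape, src.nodata)
```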
Vector data have also been used for multiple purposes during the URBANFLUXES project. These include:
* Buildings and associated information (categories, height, building material)
* Building blocks and types
* Building footprints
* Road network and associated information (road type)
* Tree locations, canopy and height
The vector data are provided by the Local Authorities of the case studies and by other open data sources, such as Urban Atlas 2012 (GMES/Copernicus land monitoring services) and OpenStreetMap. Where these were outdated, update procedures were applied using remote sensing tools and methods. The ESRI shapefile has been selected as the vector format for data sharing. It is developed and regulated by Esri as an open specification for data interoperability among Esri and other GIS software products such as QGIS, ESA SNAP, etc. The shapefile format can spatially describe vector features: points, lines, and polygons, representing, for example, buildings, roads, and landmarks. Each item usually has attributes that describe it, such as name or elevation.
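A minimal sketch of reading such a shapefile, assuming the geopandas library and a hypothetical file name:

```python
# Minimal sketch: load a vector layer (e.g. building footprints) from an
# ESRI shapefile and check its attributes and coordinate system.
import geopandas as gpd

buildings = gpd.read_file("Heraklion_building_footprints.shp")  # hypothetical name
print(buildings.crs)            # should match the project UTM standard
print(list(buildings.columns))  # attribute fields, e.g. category, height, material
print(len(buildings), "features")
```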
2.3.3 In-situ measurements
Data from the in-situ measurements of the wireless networks of meteorological stations (air temperature, relative humidity, wind speed, wind direction, barometric pressure, precipitation), as well as measurements and products from the Eddy Covariance systems and scintillometers (turbulent heat fluxes), have been collected during URBANFLUXES, and the networks will remain active after the project terminates. Detailed time series of these data in dedicated formats (CSV - tabular format with values separated by commas) have been collected by the Partners responsible for the in-situ measurements in the URBANFLUXES case studies: Basel, London and Heraklion (UNIBAS, UoR and FORTH, respectively).
Figure 6. Access to weather station data for Heraklion by using the web-GIS
application of the URBANFLUXES web-site.
An online web-GIS tool has been developed during URBANFLUXES and is hosted on the URBANFLUXES web-site (Figure 6, Figure 7). It provides a real-time overview of, and data access to, the meteorological station network recordings of Basel and Heraklion. The data are sent automatically by the stations to the provider's cloud storage, and URBANFLUXES web-GIS internal procedures then download the data for storage in the data repository. A meteorological database has been developed for each case study, freely accessible by users for viewing and downloading the required data. The use of cloud storage and the URBANFLUXES repository ensures the accessibility, preservation and backup of the data. The online tool offers a real-time overview of the meteorological conditions as well as temporally aggregated time series and meteograms. London is equipped with several meteorological stations that are gathered in the London Urban Micromet data Archive (LUMA), managed by the University of Reading (UoR). There is also an in-house online tool for plotting the real-time data, while various meteorological parameters are available from multiple meteorological stations. Access to the meteorological data is available on demand after user registration to the LUMA Archive. Alternatively, all data gathered during the URBANFLUXES project are also stored in the URBANFLUXES repository and become available to registered URBANFLUXES users on demand.
Figure 7. Access to weather station data for Basel by using the web-GIS
application of the URBANFLUXES web-site.
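As an illustration of how a downloaded station time series could be processed, the following is a minimal sketch assuming the pandas library; the file name and column names are assumptions, since the exact CSV layout is station-specific.

```python
# Minimal sketch: load a meteorological station CSV and compute hourly means,
# similar to the temporally aggregated meteograms of the web-GIS tool.
# File and column names are assumptions.
import pandas as pd

df = pd.read_csv("heraklion_station_01.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp")
hourly = df[["air_temperature", "wind_speed"]].resample("1h").mean()
print(hourly.head())
```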
During the URBANFLUXES project, an Eddy Covariance system has been installed in the center of Heraklion. The Eddy Covariance system of Heraklion is connected to the network with real-time transmission of the measurements, and the full data archive is collected in the URBANFLUXES repository. The flux measurements can be viewed online by users through the online tool provided by the University of Basel (Figure 8), while the data are accessible on demand. Basel is equipped with three Eddy Covariance towers. Two are installed in the center of the city (BKLI and BAES) and one in a rural area (BLER). The Eddy Covariance towers are connected to the network with real-time transmission of the measurements, and the full data archive is collected in the URBANFLUXES repository. The flux measurements can be viewed online through the online tool of the University of Basel, and the data are accessible on demand. London has one Eddy Covariance tower (KSSW) and three scintillometry sites in the centre of the city. Flux data are collected in real time and stored in the London Urban Micromet data Archive (LUMA), managed by the University of Reading (UoR). There is an online tool for plotting the real-time data (Figure 8). Access to the meteorological data is available on demand after user registration to the LUMA Archive.
Figure 8. Online real-time graphs of the flux and meteorological measurements
by the Eddy Covariance system in Heraklion.
2.3.4 Urban Energy Balance Flux maps
During URBANFLUXES project, a series of UEB flux maps for each case study
using multiple methodologies have been developed. There have been several
estimates of fluxes which have been modified with advancements within the
project. The Partners responsible of the development of each UEB flux
methodology archived in their respective Data Repository folders the multiple
versions of UEB flux maps of each case study. The final versions are
considered the more reasonable, with evaluations presented in the respective
deliverables. These datasets are the products of the project and have been
produced after intense and innovative scientific developments. Thus, are
sensitive data and have been kept private until a formal scientific
publication occurred. The UEB flux maps will be kept in the URANFLUXES
repository, accessible to all partners for internal use and will become public
with the respective publications. A sample image of a UEB flux map of London
is shown in Figure 9.
Figure 9. ΔQS map on a clear summer day in London (19th of July, 11 am).
2.3.5 Data linked to publications
Final peer-reviewed manuscripts accepted for publication are deposited in the
repository for scientific publications (Publications Repository). This is done
at the latest upon publication, even where open access publishing is chosen in
order to ensure long-term preservation of the article. At the same time, the
research data used to verify the results presented in the deposited scientific
publications, are deposited into the Data Repository. The URBANFLUXES web-
server ensures open access to the deposited publications and underlying data.
Depending on each specific publication, either the self-archiving (green open
access), or the open access publishing (gold open access) option is selected.
In the former case the Consortium ensures open access to the publication
within a maximum of six months. In the latter case, open access is ensured
upon publication and the article processing charges incurred by beneficiaries
are eligible for reimbursement during the duration of the project. After the
end of the project, these costs may be covered by some partners’
Organizations. The URBANFLUXES web-server also ensures open access - via the
repository - to the bibliographic metadata that identify each deposited
publication. The bibliographic metadata are in a standard format and include:
the terms "European Union (EU)" and "Horizon 2020"; the name of the action;
the acronym and the grant number; the publication date; the length of embargo
period if applicable, and a persistent identifier, such as Digital Object
Identifier (DOI). URBANFLUXES makes publicly available all datasets linked with the scientific publications funded under this project. The DOIs of all project publications are linked with the corresponding datasets.
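For illustration only, the bibliographic metadata fields listed above could be collected in a simple record such as the following sketch; all values except the funding terms and grant number are placeholders.

```python
# Illustrative sketch of a bibliographic metadata record for a deposited
# publication; values marked as placeholders are assumptions.
publication_metadata = {
    "funding_terms": ["European Union (EU)", "Horizon 2020"],
    "action_name": "URBANFLUXES",
    "grant_number": "637519",
    "publication_date": "2017-06-01",          # placeholder
    "embargo_period_months": 0,                # length of embargo, if applicable
    "persistent_identifier": "10.xxxx/xxxxx",  # DOI placeholder
}
```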
# 3 METADATA
## 3.1 Spatial product metadata
A metadata standard, which is currently used by most of the project partners,
is adopted in URBANFLUXES for the spatial products (i.e. maps of heat fluxes).
A template has been developed according to the INSPIRE standards for the
spatial data while for the meteorological observations, a simple Excel form
with the necessary information has been created. URBANFLUXES partners use the
online editor and viewer for the INSPIRE metadata standard (Figure 10) which
can be found at: _http://inspire-geoportal.ec.europa.eu_ .
Figure 10. The interface for the INSPIRE metadata editor.
This editor contains a limited number of obligatory metadata fields and can be extended with much more information. It allows the design of a metadata template that fits the needs of URBANFLUXES, requiring only as much information as needed in order to reduce the workload, since metadata have to be created for each dataset (vector and raster). The metadata file can be exported as standard XML. It is also possible to use an offline INSPIRE metadata editor, such as GIMED or the ArcCatalog metadata editor, for more efficient metadata creation. It should be ensured that all relevant information for the different WPs and users (internal and external) is stored in the metadata.
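The sketch below illustrates only the XML export step in simplified form, using Python's standard library and the example field values from the Appendix; an actual INSPIRE record is produced by the online editor in the standard XML schema.

```python
# Simplified illustration of exporting metadata fields to an XML file.
# This is not the official INSPIRE schema, only a sketch of the export step;
# the real XML is produced by the INSPIRE metadata editor.
import xml.etree.ElementTree as ET

fields = {
    "ResourceTitle": "Sky-view factor (Basel)",   # example values from Section 6
    "Identifier": "Basel_SVF",
    "TopicCategory": "Geoscientific Information",
}

root = ET.Element("Metadata")
for name, value in fields.items():
    ET.SubElement(root, name).text = value
ET.ElementTree(root).write("Basel_SVF.xml", encoding="utf-8", xml_declaration=True)
```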
The information that the metadata can contain for the spatial data is the following:
1. Metadata on metadata:
1. Point of contact
2. Email
3. Metadata date
4. Metadata language
2. Identification:
1. Resource title
2. Identifier
3. Resource abstract
4. Resource locator
3. Classification:
1. Topic category
4. Keyword
1. Keyword from INSPIRE Data themes
2. Keywords from repositories
3. Free keywords
4. Originating controlled vocabulary
1. Title
2. Reference date
3. Data type
5. Geographic
1. Bounding box
2. Countries
6. Temporal reference
1. Temporal extent
1. Starting date
2. Ending date
2. Date of creation
3. Date of publication
4. Date of last revision
7. Quality and Validity
1. Lineage
2. Spatial resolution
1. Equivalent scale
2. Resolution distance
3. Unit of measure
8. Conformity
1. Specifications
2. Date
3. Data type
4. Degree
9. Constraints
1. Conditions applying to access and use
2. Limitations on public access
10. Responsible party
1. Organization name
2. Email
3. Responsible party role
These are the INSPIRE guidelines that can be applied to the spatial datasets
of the URBANFLUXES project. Table 2 contains the fields that are required for
the correct classification and description of the URBANFLUXES products, and
the respective fields of the INSPIRE directive.
Table 2. List of mandatory metadata fields for URBANFLUXES.
<table>
<tr>
<th>
</th>
<th>
Name of field
</th>
<th>
Name of the respective INSPIRE field
</th>
<th>
Visible in the web-site list
</th> </tr>
<tr>
<td>
1
</td>
<td>
Owner/Publisher
</td>
<td>
Metadata🡪 Organization name + email
Responsible Party 🡪 Organization name + email + role
</td>
<td>
</td> </tr>
<tr>
<td>
2
</td>
<td>
Title
</td>
<td>
Identification 🡪 Resource Title
</td>
<td>
YES
</td> </tr>
<tr>
<td>
3
</td>
<td>
File name
</td>
<td>
Identification 🡪 Identifier 🡪 Code
</td>
<td>
</td> </tr>
<tr>
<td>
4
</td>
<td>
Short Description
</td>
<td>
Identification 🡪 Resource abstract + Resource locator
</td>
<td>
</td> </tr>
<tr>
<td>
5
</td>
<td>
Topic category
</td>
<td>
Classification 🡪 Topic category
</td>
<td>
</td> </tr>
<tr>
<td>
6
</td>
<td>
INSPIRE keyword
</td>
<td>
Keyword 🡪 Keyword from INSPIRE Data themes
</td>
<td>
</td> </tr>
<tr>
<td>
7
</td>
<td>
Keywords
</td>
<td>
Keyword 🡪 Free keyword 🡪 Keyword value
</td>
<td>
</td> </tr>
<tr>
<td>
8
</td>
<td>
Geographic location
</td>
<td>
Geographic 🡪 Geographic bounding box
</td>
<td>
</td> </tr>
<tr>
<td>
9
</td>
<td>
Temporal Extent
</td>
<td>
Temporal 🡪 Temporal Extent
</td>
<td>
YES
</td> </tr>
<tr>
<td>
10
</td>
<td>
Reference Dates
</td>
<td>
Temporal 🡪 Date of Creation, Publication, last revision
</td>
<td>
</td> </tr>
<tr>
<td>
11
</td>
<td>
Process history
</td>
<td>
Quality&Validity 🡪 Lineage
</td>
<td>
</td> </tr>
<tr>
<td>
12
</td>
<td>
Spatial Resolution
</td>
<td>
Quality&Validity 🡪 Resolution distance + Unit of measure
</td>
<td>
YES
</td> </tr>
<tr>
<td>
13
</td>
<td>
Access and use
</td>
<td>
Constraints 🡪 Conditions applying to access and use + Limitations on public
access
</td>
<td>
</td> </tr>
<tr>
<td>
14
</td>
<td>
File size
</td>
<td>
_(automatic)_
</td>
<td>
YES
</td> </tr> </table>
## 3.2 Weather Station Metadata
For the in-situ measurements, different information is used in the metadata in order to ensure that the measurement instruments are described. In addition to the entries from the spatial metadata (excluding the spatial-specific entries 5 and 7), these are:
Sensor information
* Sensor type
* Manufacturer
* Sensor model
* Serial number
* Firmware version
* Measured variable identifier(s)
* Measurement unit of each variable
* Accuracy of each variable
* Raw sampling rate
* Transmission rate
Installation information
* Connection type / Transmission technology
* Position (X, Y information in WGS84)
* Height of the instrument above ground (m)
* Estimated height of surrounding buildings (m)
* Vertical and horizontal orientation of instrument (degrees)
* Instrument mounting description
* Data format
* Photograph(s) of the station and immediate surroundings after installation
The above data are stored in a dedicated form, named with the station's name and code (if available). A consistent set of variable names and measurement units for the weather stations was agreed upon by the URBANFLUXES Partners before the metadata were populated. Note that equipment may need to be replaced at a particular station; when this happens within the framework of the project, it will be clearly documented.
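A minimal sketch of how the sensor entries above could be captured as a structured record (field values are placeholders):

```python
# Minimal sketch: the sensor metadata entries listed above as a typed record.
# All example values are placeholders, not real station metadata.
from dataclasses import dataclass

@dataclass
class SensorMetadata:
    sensor_type: str
    manufacturer: str
    model: str
    serial_number: str
    firmware_version: str
    variables: dict          # variable identifier -> (unit, accuracy)
    raw_sampling_rate: str
    transmission_rate: str

example = SensorMetadata(
    sensor_type="air temperature", manufacturer="ACME", model="T100",
    serial_number="SN-0001", firmware_version="1.0",
    variables={"t_air": ("degC", "+/-0.2 degC")},
    raw_sampling_rate="10 s", transmission_rate="10 min",
)
```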
# 4 POLICY FOR RE-USE, ACCESS AND SHARING
According to the Grant Agreement [R1] and the Consortium agreement [R2],
URBANFLUXES participates on a voluntary basis in the H2020 Pilot on Open
Research Data. Open access to research data refers to the right to access and
re-use digital research data under the terms and conditions set out in the
Grant Agreement. Openly accessible research data can typically be accessed,
mined, exploited, reproduced and disseminated free of charge for the user. The
open access to research data is important to maximize the impact of the
project. URBANFLUXES partners have taken reasonable actions, defined in the
Consortium Agreement [R2] to protect the knowledge resulting from the project,
according to their own policy and legitimate interest and in observance of
their obligations under the Grant Agreement. According to the Consortium
Agreement, the knowledge is the property of the partner carrying out the work
leading to that knowledge and is subject to Intellectual Property Rights
(IPR). Therefore, the data access is free as long as the users credit
URBANFLUXES project and/or the data author for the original creation. To
ensure the proper distribution and re-use of URBANFLUXES data products, all
datasets in the URBANFLUXES repository are accompanied by metadata files that define the policy for re-use, access and sharing, along with the original data author and project.
## 4.1 Data Repository
The URBANFLUXES Data Repository is split into two segments:
* The Public Data Repository, where URBANFLUXES products become freely available to all after the provision of basic information [R2].
* The Private Data Repository, where raw data, commercial data, unpublished data, as well as all internal documents are available to the URBANFLUXES Consortium [R2].
## 4.2 Public Data Repository
After the publication of the scientific papers presenting the analysis methods developed in URBANFLUXES, the respective data and products become freely available through the URBANFLUXES web-site in the Public Data Repository (Figure 11). Any potential user of these datasets has free access, following the simple registration instructions given on the respective web-page. The user fills in a dedicated form with minimal information (name, email, etc.), similar to the forms used by several organisations (JRC, UN, EEA, etc.), and is then granted access to the datasets. The users have the possibility to access, mine, exploit, reproduce and disseminate (free of charge) the data, including associated metadata, needed to validate the results presented in scientific publications. As indicated in the respective metadata field of all URBANFLUXES datasets, the data are protected by Intellectual Property Rights. Thus, users are obliged to refer to the data source (URBANFLUXES: grant agreement No 637519) when reproducing or using the data in articles or reports. By following this procedure, the URBANFLUXES Consortium can monitor the diffusion of these products, as well as their reuse in other projects and publications, supporting in this way new scientific collaborations. There have been 120 subscriptions to the URBANFLUXES web-site granting access to the public data repository during the lifetime of the project. Most of the subscribers belong to the scientific community; only a few so far are from public administrations and private companies.
Figure 11. The Data Repository section of the URBANFLUXES website.
## 4.3 Private Data Repository
The Private Data Repository, hosted on the URBANFLUXES web-server, includes the raw data (satellite images, vector data from public sources, etc.), the unpublished results, and the data that have been classified as confidential according to the Consortium Agreement [R2]. Commercial EO imagery and products that are subject to access restrictions are also stored in the private data repository. The members of the URBANFLUXES Consortium (Table 3) have access by logging in with their credentials. Data that are used and produced during the project are also available in the repository, with the respective version numbers. Raw data and products or intermediate datasets are and will remain online for sharing with the partners for further exploitation. Raw data are available to the members of the URBANFLUXES Consortium according to the rules in the Consortium Agreement [R2].
Table 3. The current list of users with access to the Private Data Repository
<table>
<tr>
<th>
Name
</th>
<th>
Organization
</th> </tr>
<tr>
<td>
Nektarios Chrysoulakis
</td>
<td>
FORTH
</td> </tr>
<tr>
<td>
Zina Mitraka
</td>
<td>
FORTH
</td> </tr>
<tr>
<td>
Dimitris Poursanidis
</td>
<td>
FORTH
</td> </tr>
<tr>
<td>
Stavros Stagakis
</td>
<td>
FORTH
</td> </tr>
<tr>
<td>
Thomas Esch
</td>
<td>
DLR
</td> </tr>
<tr>
<td>
Wieke Heldens
</td>
<td>
DLR
</td> </tr>
<tr>
<td>
Mattia Marconini
</td>
<td>
DLR
</td> </tr>
<tr>
<td>
Jean-Philippe Gastellu-Etchegorry
</td>
<td>
CESBIO
</td> </tr>
<tr>
<td>
Ahmad Al Bitar
</td>
<td>
CESBIO
</td> </tr>
<tr>
<td>
Lucas Landier
</td>
<td>
CESBIO
</td> </tr>
<tr>
<td>
Sue Grimmond
</td>
<td>
UoR
</td> </tr>
<tr>
<td>
Simone Kotthaus
</td>
<td>
UoR
</td> </tr>
<tr>
<td>
Ben Crawford
</td>
<td>
UoR
</td> </tr>
<tr>
<td>
Andrew Gabey
</td>
<td>
UoR
</td> </tr>
<tr>
<td>
William Morrison
</td>
<td>
UoR
</td> </tr>
<tr>
<td>
Eberhard Parlow
</td>
<td>
UNIBAS
</td> </tr>
<tr>
<td>
Christian Feigenwinter
</td>
<td>
UNIBAS
</td> </tr>
<tr>
<td>
Roland Vogt
</td>
<td>
UNIBAS
</td> </tr>
<tr>
<td>
Andreas Wicki
</td>
<td>
UNIBAS
</td> </tr>
<tr>
<td>
Fredrik Lindberg
</td>
<td>
UoG
</td> </tr>
<tr>
<td>
Frans Olofson
</td>
<td>
UoG
</td> </tr>
<tr>
<td>
Fabio Del Frate
</td>
<td>
GeoK
</td> </tr>
<tr>
<td>
Daniele Latini
</td>
<td>
GeoK
</td> </tr>
<tr>
<td>
Judith Klostermann
</td>
<td>
ALTERRA
</td> </tr>
<tr>
<td>
Channah Betgen
</td>
<td>
ALTERRA
</td> </tr> </table>
# 5 PLANS FOR ARCHIVING AND PRESERVATION
The URBANFLUXES data repository will remain active after the project terminates. All users (registered and consortium members) will retain their credentials and will have access to the data. Moreover, the repository will be updated with new versions and up-to-date datasets when these are made available by the partners. The URBANFLUXES team remains committed to the research objectives of URBANFLUXES and will continue to publish high-quality research articles in scientific journals and attend major conferences and symposia disseminating URBANFLUXES achievements. The public data section of the repository is expected to grow as new scientific articles become public and the associated data are uploaded to the public repository. The in-situ measurement networks will also remain active, and data will be continuously uploaded to the web-server and archived in the data repository. Table 4 summarizes the data that will be preserved in the data repository after the end of the project, along with their access status. All commercial imagery that has been purchased by the project partners and is subject to distribution limitations will remain private. All data products and data collected through URBANFLUXES are and will remain public.
Table 4. Data preserved in the data repository after the end of the project
<table>
<tr>
<th>
Data
</th>
<th>
Resolution
</th>
<th>
Access
</th> </tr>
<tr>
<td>
Commercial EO imagery (raw)
</td>
<td>
VHR
</td>
<td>
Private
</td> </tr>
<tr>
<td>
Commercial EO-derived products
</td>
<td>
VHR
</td>
<td>
Private
</td> </tr>
<tr>
<td>
Project EO-derived products
</td>
<td>
100 m
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Meteorological measurements
</td>
<td>
point
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Eddy Covariance measurements
</td>
<td>
local
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Scintillometry measurements
</td>
<td>
local
</td>
<td>
Public
</td> </tr>
<tr>
<td>
UEB flux maps
</td>
<td>
100 m
</td>
<td>
Public
</td> </tr> </table>
The data products are archived in formats specified according to the needs of the project and the specific data type, as these evolved and were specified by the scientists of the project. The production date is always included in both the file name (e.g. LT8LULC20150430.tif) and the associated metadata (e.g. LT8LULC20150430.xml, LT8LULC20150430.txt). The version of each updated data product is retained in the data storage system, indicated in the folder name, filename and associated metadata. Frequent (monthly) backups of the data included in the data repository of the URBANFLUXES web-server are performed automatically by FORTH. In addition, incremental backups run each weekend for the project's large datasets. A RAID 10 system is used on the URBANFLUXES web-server, and 24 TB of storage space are available for this crucial step. Manual backups are made when necessary using external HDDs kept in safe storage. If the data produced by the URBANFLUXES project grow in volume and the current storage becomes insufficient for the security and backup of the data, additional storage space will be obtained; neither the additional data volume nor the server maintenance cost will be barriers to the long-term preservation and distribution of the data. In the long term, the high-quality final data products generated by the URBANFLUXES project will remain available for use by the research and policy communities in perpetuity.
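A minimal sketch of recovering the production date embedded in a product file name, as in the convention above (assuming a YYYYMMDD date stamp):

```python
# Minimal sketch: parse the production date from a product file name such as
# LT8LULC20150430.tif, assuming the date is stamped as YYYYMMDD.
import re
from datetime import datetime

def production_date(filename):
    match = re.search(r"(\d{8})", filename)
    if match is None:
        raise ValueError(f"no date stamp in {filename!r}")
    return datetime.strptime(match.group(1), "%Y%m%d").date()

print(production_date("LT8LULC20150430.tif"))  # -> 2015-04-30
```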
# 6 APPENDIX
Metadata File Creation Walkthrough
In this section, directions for metadata creation are given along with examples (asterisks* indicate that a field is already fixed in the template forms; see Section 6.15):
## 6.1 Owner/Publisher
In Metadata tab, fill in the fields:
* Organization name (i.e. _FORTH_ )
* E-mail (i.e. [email protected]_ )
Do the same for Responsible party tab:
* Organization name (i.e. _FORTH_ )
* E-mail (i.e. [email protected]_ )
* Responsible party role* (i.e. _Author_ )
Figure 12. Metadata tab and Responsible party tab.
## 6.2 Title
In Identification tab, fill in the fields:
\- Resource title (i.e. _Sky-view factor (Basel)_ )
This is the most important field, because it describes the content of the dataset and is visible to the users on the online portal. After the title, always put the city name in parentheses (already set in the templates).
## 6.3 File name
In Identification tab, fill in the fields:
\- Identifier Code (i.e. _Basel_SVF_ )
This code must be unique for each resource and is required by the INSPIRE Metadata Editor.
## 6.4 Short Description
In Identification tab, fill in the fields:
\- Resource abstract (i.e. _Sky-view factor is the fraction of sky visible from the ground level._ )
\- Resource locator* (i.e. _http://urbanfluxes.eu_ )
This is a short description of what the data refer to, technical specifications and/or a reference for the dataset.
Figure 13. Identification tab.
## 6.5 Topic category*
In Classification tab, fill in the fields:
\- Topic category* (i.e. _Geoscientific Information_ )
It is mandatory under the INSPIRE directive to select one of the high-level classification categories proposed by the Metadata Editor. It has been decided to use one category for all URBANFLUXES products (i.e. _Geoscientific Information_ ).
Figure 14. Classification tab.
## 6.6 INSPIRE Keyword
In Keyword tab, fill in the fields:
\- Keyword from INSPIRE Data themes (i.e. _Meteorological geographical
features_ )
It is mandatory to select one keyword from the INSPIRE Data themes. Some
relevant keywords are: Bio-geographical regions, Buildings, Elevation, Land
cover, Land use, Meteorological geographical features.
## 6.7 Keywords
In Keyword tab, fill in the fields:
\- Free keywords (i.e. _Basel SVF DSM_ )
The city name must always be one of the keywords (already set in the templates) in order for the dataset to be searchable in the online database. Other keywords can be added after the city name depending on the type of the dataset. Each keyword must be entered separately (not all together or comma-separated) in the _keyword value_ field, pressing _Apply_ after each keyword. The list of keywords is visible in the box at the top of the page. Any wrong keyword can be removed by pressing the "minus" sign next to it.
Figure 15. Keyword tab.
## 6.8 Geographic location*
In Geographic tab, fill in the fields:
\- Geographic bounding box* (i.e. _47.64 N, 7.72 E, 47.46 S, 7.44 W_ )
The geographic bounding box of the spatial dataset is required in decimal degrees with a precision of at least two decimals. For example, the full grid of Basel is 47.64 N, 7.72 E, 47.46 S, 7.44 W. When the degrees have been entered in the respective fields, the plus sign must be pressed in order to create the bounding box.
Figure 16. Geographic Location tab.
## 6.9 Temporal Extent
In Temporal tab, fill in the fields:
\- Temporal Extent (i.e. _2015-01-01, 2015-12-31_ )
The temporal extent defines the time period covered by the content of the resource. Individual dates, time intervals, or a mix of the two can be inserted. When referring to an individual date, the date is inserted in _Starting date_ and _Now_ is applied in _Ending date_ . When referring to a time interval, both _Starting_ and _Ending dates_ are completed.
## 6.10 Reference Dates
In Temporal tab, fill in the fields:
* Date of creation (i.e. _2015-12-04_ )
* Date of publication (i.e. _2016-02-02_ )
* Date of last revision (i.e. _2016-02-02_ )
The completion of the reference dates (creation, publication, last revision) is optional, yet it may be important in the future for keeping track of the published material. _Date of publication_ can be the same as the date the metadata file was created (i.e. _Metadata date_ in the _Metadata_ tab).
Figure 17. Temporal tab.
## 6.11 Process history
In Quality & Validity tab, fill in the fields:
* Lineage (i.e. _The sky view factor was created using two high resolution (1 m) Digital Surface Models, one for the buildings and another one for city trees. It was created using the approach of Lindberg, F., & Grimmond, C. S. B. (2010). Continuous sky view factor maps from high resolution urban digital elevation models. Climate Research, 42(3), 177–183. _http://doi.org/10.3354/cr00882_ _ _This project has received funding from the European Union’s Horizon 2020 research and innovation programme URBANFLUXES under grant agreement No 637519_ )
All the information regarding:
* the data sources,
* the methodology,
* the version of the dataset (in case a revision of the same dataset is uploaded in the future),
* the references,
* the quality and the validation (if available),
* the link of the dataset to a scientific publication (include the article DOI),
* the reference to the funding* (the sentence " _This project has received funding from the European Union's Horizon 2020 research and innovation programme URBANFLUXES under grant agreement No 637519_ " must be placed at the end of every Lineage field)
should be summarized in the _Lineage_ field.
## 6.12 Spatial Resolution
In Quality & Validity tab, fill in the fields:
\- Resolution distance (i.e. _1_ )
\- Unit of measure (i.e. _meters_ )
Figure 18. Quality&Validity tab.
## 6.13 Access and use*
In Constraints tab, fill in the fields:
* Conditions applying to access and use* (always: _Free access and use to registered URBANFLUXES users_ )
* Limitations on public access* (i.e. _Intellectual Property Rights_ )
Another mandatory field of the INSPIRE directive is the definition of the conditions and the limitations of the access and use of the data. As defined by [R1], [R2] and [R3], the users will have the possibility to access, mine, exploit, reproduce and disseminate (free of charge) the data, including associated metadata. The users gain free access to the data after online registration on the URBANFLUXES website. Therefore, the sentence " _Free access and use to registered URBANFLUXES users_ " is entered in the _Conditions applying to access and use_ field. Since URBANFLUXES data are protected by _Intellectual Property Rights_ [R1], [R2], [R3], the respective suggestion (e) in the _Limitations on public access_ field is chosen by pressing ENTER in the empty field.
Figure 19. Constraints tab.
## 6.14 File size
Not applicable within INSPIRE; it will appear automatically for URBANFLUXES data.
## 6.15 Use of Templates
To avoid filling in the same fields repeatedly, one can use a template according to the case study. Template XMLs for Basel, London and Heraklion have been created. When using a template, one only needs to fill in the fields below:
6.1 Owner/Publisher (Responsible party role is already set)
6.2 Title (City name in parenthesis is already set)
6.3 File name
6.4 Short description (Resource locator is already set)
6.6 INSPIRE keyword
6.7 Keywords (City name is already set as a keyword in the templates; only the remaining keywords need to be added)
6.9 Temporal extent
6.10 Reference dates
6.11 Process history (The last sentence is the funding reference and is already set)
6.12 Spatial resolution
# 2 SPICES project overview
SPICES is a research and innovation project under the 2014 H2020-EO-1-2014 call "New ideas for Earth-relevant space applications", running from June 2015 to May 2018. The main objective of SPICES is to develop new methods to retrieve sea ice parameters from existing (and imminent) satellite sensors to provide enhanced products for polar operators and prediction systems, specifically addressing extreme and unexpected conditions.
Automatic remote sensing products traditionally provide general information on sea ice conditions, such as ice extent and concentration. However, for ice charting, tactical navigation and the management of off-shore activities, it is much more important to know and avoid hazardous sea ice conditions. In general, sea ice hazards are related to sea ice thickness. More often than not, polar ships and off-shore platforms operate only during summer seasons and in certain regions. This is because they are designed to resist the typical forces induced by pack ice, but not extreme sea ice conditions.
Ongoing climate warming has manifested as shrinking and thinning of pack ice
in the Arctic. This is a primary driver of the increasing shipping, oil and gas exploration and mining activities in the Arctic. However, severe sea ice conditions still exist and, as a consequence, many locations are inaccessible to ship-based operations. Moreover, the year-to-year variability of sea ice is very large, and hazardous multiyear ice (MYI) floes sometimes appear even in typically seasonally ice-free regions.
In order to respond to the needs of increasing polar activities, we propose to focus on the detection of sea ice extremes and the automatic production of "sea ice warning" products. In particular, we aim at detecting MYI floes in areas composed mostly of first-year ice from synthetic aperture radar (SAR), heavily ridged ice regions from SAR, the thickest ice from radar altimeter (RA) thickness profiles, regional anomalies of thick or thin ice via passive microwave (PMW) data, sea ice areas vulnerable to wave action, and the early/late melting season, as well as at improving capabilities to forecast seasonal sea ice extremes.
# 3 Data Summary
This document describes the Data Management Plan for the H2020 SPICES project, including the data used and generated by the project, data access for verification and re-use by third parties, and the activities required to maintain the research data in the long term so that it remains available for re-use and preservation (data curation).
## 3.1 SPICES data overview
The SPICES sea ice products are based on a wide variety of Earth Observation (EO) data obtained from spaceborne sensors, and on numerical weather prediction (NWP) model data. For sea ice product development and validation, a wide variety of in-situ snow and sea ice data are used, as well as some airborne remote sensing data. Existing data repositories (including in-situ, satellite and model data) and infrastructure within the SPICES partners are utilized in the SPICES research work.
Data products generated in SPICES are stored in several existing data repositories, many of which are well-known. Thus, SPICES will not build a new e-infrastructure for data storage and preservation.
This EO data is available free of charge from many sources: Copernicus, ESA, EUMETSAT, JAXA, NOAA and NASA. In general, all needed EO data are stored by the SPICES consortium, with data storage shared among the partners, i.e. raw sensor data (level 1) are stored by one institute and the others have access to them. Within the project lifetime it may be possible to use the forthcoming ESA Thematic Exploitation Platforms (TEP), which are central facilities for EO data storage and product generation - EO data users can run their product algorithms at TEPs without the need to download the raw EO data.
Multi-parameter data from different in-situ observations (platforms) are
combined into co-located data sets per parameters (e.g. sea ice thickness,
snow depth, roughness, freeboard). The original data sets are collected from
the respective data source. The final co-located data sets are stored and
shared by SPICES. Data from autonomous platforms (buoys) are additionally
available through the international buoy networks IABP and IPAB.
Input and generated data are kept by the partners after the project for a period (at least five years) during which public interest and usage can be expected; for EO raw sensor data and pre-processed data (e.g. level 1 products requiring large storage space) this is not a necessity, as the EO data are also available from the EO satellite operators.
The total amount of data stored by SPICES is several tens of TBs (the largest part is satellite level 1 data).
All sea ice products (e.g. level 2 swath products and level 3 gridded products over the Arctic) generated in SPICES are freely available to the public, both during and after the SPICES project. The storage space required by the sea ice products will be several hundred GBs.
A SPICES sea ice product contains a sea ice variable (or multiple variables) and, at minimum, geolocation information. Depending on the sea ice variable, quality fields on the input data and variable values may be included. The typical product formats are ASCII text files, GeoTIFF images, netCDF files and shapefiles (vector polygon data). In general, product formats and standards follow those used by national Ice Services, EUMETSAT OSI SAF, and Copernicus CMEMS.
The SPICES sea ice products are expected to be useful for scientists working
on sea ice remote sensing or sea ice modelling and forecasting, Arctic climate
change, or developing sea ice products for the Arctic ship navigation. The
products are also of interest for shipping and off-shore companies operating
in the Arctic and needing near real time information in their operations, and
for Arctic policy makers.
### 3.1.1 SPICES input data
In the SPICES project following external datasets are used: 1) in-situ snow
and sea ice data, 2) satellite EO data, 3) airborne remote sensing data, and
4) NWP model data. The used data are described shortly below.
#### In-situ data
In-situ data are collected from different sources / platforms and processed
into common data formats to generate co-location data sets sorted by sea ice
parameters. The following data sets and sources are used by SPICES:
* Electromagnetic (EM) measurements of (total) sea ice thickness from ground based (EM31, GEM-2) and airborne (helicopter and airplane) applications. Such data sets are usually available from summer time icebreaker expeditions to Arctic and Antarctic sea ice. Sources: Pangaea.
* The EM data sets are accomplished by in-situ measurements of snow depth (survey data) and point measurements from drillings and stake measurements, as well as other physical properties of sea ice. Sources: Pangaea data sets, ITU & published literature & reports.
* Measurements from autonomous platforms (buoys) are coordinated through IABP and IPAB. These data provide time series of sea ice thickness, snow depth, air-snow-ice-water temperatures, drift speed and direction (derived from GPS positions) and other sea ice parameters. Sources: CRREL IMB web page, Meereisportal.de, Pangaea.
* Directional wave buoy data for validation of SAR-waves algorithms for extracting pancake ice thickness. Sources: various research cruises; data managed by CNR and UNIVPM.
* Visual ship-based sea ice observations following the Antarctic Sea Ice and Processes (ASPeCt) and the according Arctic (ASSIST) protocol: total and partial sea ice concentration (SIC) of the three thickest sea ice categories; for the latter also: sea ice thickness, snow depth, snow type, floe type and size, fraction of deformed ice, ridge height. Sources: ICDC data base at UB, the ASPECT home page http://www.aspect.aq, and data archived in Pangaea.
* Sea ice draft observations from Upward Looking Sonar (ULS), Weddell Sea (from PANGAEA:
http://doi.pangaea.de/10.1594/PANGAEA.785565, Behrendt et al., ESSD, 2013).
* Operation Ice Bridge data; see details at _http://nsidc.org/data/icebridge/_
* Norwegian Young Sea Ice Cruise (N-ICE2015); see details at _http://www.npolar.no/en/projects/nice2015.html_
#### Satellite EO data
Satellite EO data used in SPICES include SAR imagery, microwave radiometer data, microwave scatterometer data, radar altimeter (RA) data, and optical imagery. These EO data are available free of charge from many sources: Copernicus, ESA, EUMETSAT, JAXA, NOAA and NASA. The main time period for the EO data used is autumn 2010 onwards, due to the availability of CryoSat-2 radar altimeter data (processed to sea ice thickness), which is essential for many SPICES sea ice products. CryoSat-2 data have Arctic-wide coverage, unlike the earlier ENVISAT RA2 and ERS-1/2 radar altimeters. Below is a (non-exhaustive) list of the sensors used:
* SAR imagery: SENTINEL-1 SAR (source Copernicus), RADARSAT-2 SAR (Copernicus and national sources), ALOS-2 PALSAR-2 (ALOS-2 Research Announcement (RA) projects), COSMO-SkyMed (Copernicus, ESA Third Party Mission Program, AO projects), TerraSAR-X (Copernicus and AO projects), ENVISAT ASAR (ESA), ALOS PALSAR (MyOcean, RA projects).
* Microwave radiometer: SSMIS (NOAA/NASA), SMOS (ESA), SMAP (NASA), AMSR2 (JAXA).
* Microwave scatterometer: METOP ASCAT (EUMETSAT), SMAP (NASA), QuikSCAT, OSCAT.
* Radar altimeter: CryoSat-2 (ESA), SENTINEL-3 (Copernicus), SARAL / ALTIKA (CNES).
* Optical imagery: Landsat 5, 7, 8 (USGS), SENTINEL-2 (Copernicus), MODIS (NASA), VIIRS (NOAA), ENVISAT MERIS (ESA), SENTINEL-3 OLCI and SLTSR (Copernicus).
Some of the used EO data are from sensors no longer operating, like ENVISAT
ASAR and MERIS, and QuikSCAT.
In addition to the satellite data, the following derived sea ice products are also utilized:
* Melt pond fraction from MODIS: MODIS data (MODIS Surface Reflectance 1-Day L3 Global 500 m SIN Grid V005) of bands 1, 3 and 4 are used in an artificial neural network to obtain the melt pond cover fraction on the Arctic sea ice. The method uses the fact that the surface types melt pond, sea ice, snow, and open water yield different reflectance values in the above-mentioned MODIS frequency bands. An artificial neural network has been developed. The approach of Tschudi et al. (2008) has been used to obtain a training data set of typical reflectances for selected regions and typical stages of melt pond cover development. This data set was subsequently used to train the neural network. After evaluation of the training results it has been applied to MODIS reflectances of bands 1, 3, 4 projected onto a 500 m grid-cell size polar-stereographic grid to classify the above-mentioned surface types.
The surface type distribution obtained is analysed and converted into a 12.5 km x 12.5 km grid-cell size product, i.e. the melt-pond fraction per grid cell. In order to obtain a relative melt-pond fraction, i.e. relative to the actual SIC, the melt-pond fraction needs to be divided by SIC (here: 1 minus open water fraction); see the sketch after this list. The data set offered comprises, on the one hand, the
full set of melt pond fraction, its standard deviation, the open water
fraction and the number of usable 500 m grid cells per 12.5 km grid cell.
Those 12.5 km grid cells with less than 10% usable 500 m grid cells or more
than 85% open water fraction are flagged. On the other hand, we offer in
addition melt pond fraction, its standard deviation and the open water
fraction for almost clear-sky areas, i.e. 12.5 km grid cells with more than
90% usable 500 m grid cells; areas with more than 85% open water fraction are
again flagged. This is version v02 of this data set. It differs from version
v01 by a bias correction of the melt pond cover fraction and the open water
fraction which were biased by 8% and 3%, respectively in the old version.
* Melt pond fraction from MERIS: The current dataset consists of daily averages of the melt pond fraction and broadband albedo for May-September 2011 retrieved from MERIS (Medium Resolution Imaging Spectrometer) swath Level 1b data over the ice-covered Arctic Ocean using the MPD retrieval (Zege et al. 2015). The data is gridded on a 12.5 km polar stereographic grid. The melt pond area fraction is retrieved via inversion of a forward model (Malinka et al. 2016). The MPD retrieval has been validated against field, ship-based and airborne measurements (Istomina et al. 2015a). Case studies and weekly trends are presented by Istomina et al. (2015b).
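A minimal sketch of the relative melt-pond fraction computation and flagging described in the MODIS item above, assuming numpy arrays per 12.5 km grid cell (array values are illustrative):

```python
# Minimal sketch: relative melt-pond fraction = melt-pond fraction / SIC,
# with SIC = 1 - open water fraction; cells with <10% usable 500 m cells or
# >85% open water are flagged. Input arrays are illustrative placeholders.
import numpy as np

mpf = np.array([0.20, 0.10, 0.05])         # melt pond fraction per grid cell
open_water = np.array([0.30, 0.90, 0.50])  # open water fraction
usable = np.array([0.95, 0.60, 0.05])      # fraction of usable 500 m cells

sic = 1.0 - open_water
relative_mpf = np.where(sic > 0, mpf / sic, np.nan)
flagged = (usable < 0.10) | (open_water > 0.85)
relative_mpf[flagged] = np.nan
print(relative_mpf)
```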
#### Airborne remote sensing data
Airborne sea ice observations are available in the recent years from polar
research aircraft campaigns during spring time. Routinely collected data sets
are total (snow plus ice) thickness with airborne electromagnetic induction
sounding (AEM) and snow freeboard with airborne laser scanning (ALS). The
surveys are regionally focused on the western Arctic between the Fram Strait
as the eastern limit and the Beaufort/Chukchi Sea north of Alaska as the
western limit. European airborne activities are usually carried out in the
cold-month period between mid-March and early-May in the framework of
institute based funding and international collaboration (AWI, Environment
Canada, York University, University of Alaska, Fairbanks) and ESA supported
satellite validation activities. Data from polar research aircraft is
available since 2009 in irregular intervals and earlier also from helicopter
activities of more limited range back to in key regions (Lincoln Sea since
2004, Beaufort Sea since 2007).
Operation IceBridge provides access to their data sets (snow freeboard, snow
depth and derived thickness products) from their flight lines in the western
Arctic since 2009 (http://nsidc.org/icebridge/portal/). In SPICES we use
IceBridge L4 Sea Ice Freeboard, Snow Depth, and Thickness dataset (Version 1).
This data set contains derived geophysical data products including sea ice
freeboard, snow depth, and sea ice thickness measurements in Greenland and
Antarctica retrieved from IceBridge Snow Radar, Digital Mapping System (DMS),
Continuous Airborne Mapping By Optical Translator (CAMBOT), and Airborne
Topographic Mapper (ATM) data sets.
In summer, sea ice thickness of the snow free summer ice pack is available
from helicopter-borne AEM measurements in the Transpolar-Drift at irregular
intervals.
#### NWP model data
Atmospheric data for derivation of various sea ice products from the EO data
were extracted from the European Centre of Medium-Range Weather Forecast
(ECMWF) ERA-Interim reanalysis data.
# 4 FAIR data
Data management in SPICES will be carried out in accordance with guidelines
for FAIR data management in H2020. This means data collected or generated in
the project must be:
* F (Findable) – “making data findable, including provisions for metadata”
* A (Accessible) – “making data openly accessible”
* I (Interoperable) – “making data interoperable”
* R (Reusable) – “increase data re-use (through clarifying licenses)”
## 4.1 Making data findable, including provisions for metadata
A key element of making SPICES sea ice products findable is to ensure that all
datasets are accompanied with rich metadata describing the contents and how
data has been processed, as well providing a persistent identifier, if
possible, that uniquely identifies every dataset. Currently, we are not
planning to obtain Digital Object Identifiers for all sea ice products, but to
use mainly SPICES internal naming convention for the products. If DOIs are
required for some products, e.g. due a scientific publication, it is possible
to obtain them.
In the following the SPICES sea ice products are first described, and then the
standards and metadata for the products are introduced.
### 4.1.1 SPICES sea ice products
The data generated in SPICES include many intermediate products generated by one WP and used as input data in another WP, as well as the final sea ice products. Here we list and describe only those intermediate products which contain data easily usable outside the SPICES project. We do not plan to give public access to pre-processed (calibration, geocoding, noise reduction, etc.) satellite level 1/2 data, due to the large public storage space (e.g. an ftp-site) this would require. The pre-processing methods will be documented in SPICES deliverables, and thus any third party can pre-process level 1/2 data as SPICES does using his/her own software tools. In the following, the generated datasets are divided into intermediate products based on satellite EO data (and in some cases also on NWP data), those based on multiple datasets, and final SPICES sea ice products based on multiple datasets and sea ice models.
Similar datasets exist from operational services or from previous and current research projects (e.g. CryoSat-2 ice thickness data). SPICES products will be in common formats best suited for merging with different sea ice, NWP and sea ice model products.
All sea ice products underpin SPICES scientific publications, and the generation of some products requires the development of new methods, which will themselves be published in scientific journals.
The SPICES sea ice products are described in Tables 4.1-4.3 below. Access to the SPICES sea ice products is described in Section 4.2.
Table 4.1 SPICES datasets based on multiple input datasets.
<table>
<tr>
<th>
**Name**
</th>
<th>
**Deliverable**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Sea ice data along IMB buoy trajectories
</td>
<td>
D1.2
</td>
<td>
Time series of snow/ice parameters along buoy drift trajectories as ASCII
files in ESA CCI RRDP format. Parameters include snow thickness, snow density,
ice thickness, surface temperature, ice/snow interface temperature,
temperatures at standard levels in snow and ice.
</td> </tr>
<tr>
<td>
Sea ice data from OIB and CryoVex campaigns
</td>
<td>
D1.3
</td>
<td>
Time series of snow/ice parameters along ice drift trajectories as ASCII files
in ESA CCI RRDP format. Parameters snow thickness, snow density, ice
thickness, surface temperature, ice/snow interface temperature, temperatures
at standard levels in snow and ice.
</td> </tr>
<tr>
<td>
Co-located daily sea ice dataset along buoy and
ice drift tracks
</td>
<td>
D1.4
</td>
<td>
Time series of satellite and ERA Interim NWP data co-located with the buoy and
ice drift trajectories. Satellite data includes SMOS, ASCAT, IR, SMAP, OSCAT,
SSMIS, Sentinel-1, CryoSat-2, etc. NWP data every 6 hours including 2 m air
temperature, 10 m wind speed, radiation fluxes, etc.
</td> </tr> </table>
Table 4.2 SPICES datasets based on satellite EO and NWP datasets.
<table>
<tr>
<th>
**Name**
</th>
<th>
**Deliverable**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
SAR based sea ice
products
</td>
<td>
D2.4
</td>
<td>
Set of SAR based sea ice products (e.g. sea ice types, degree of deformation,
ice concentration) generated using developed novel algorithms for utilization
in other WPs.
</td> </tr>
<tr>
<td>
Arctic sea ice type
product from satellite RA
</td>
<td>
D3.4
</td>
<td>
Sea ice type classification (e.g. WMO sea ice types) based on radar altimeter waveform data (waveform shape parameters).
</td> </tr>
<tr>
<td>
Arctic large scale sea ice dataset at the end of
winter
</td>
<td>
D4.3
</td>
<td>
Estimates of snow and ice parameters from snap shots or time series of NWP and
satellite data. Snow/ice parameters include ice types, snow thickness, snow
density and snow/ice interface temperature. The dataset has Arctic wide
coverage for the month of May during several years. SPICES uses this dataset
in ice thickness retrieval and seasonal sea ice forecasting.
</td> </tr>
<tr>
<td>
Arctic summer time albedo, melt pond fraction and sea ice concentration data
</td>
<td>
D5.6
</td>
<td>
Arctic summer time albedo, melt pond fraction and ice concentration dataset
for at least three years. Based on MERIS (2002-2012), AMSR-E and SMOS
(starting on 2010) and starting on 2015 on Sentinel-3 (optical) and AMSR2 and
SMOS/SMAP observations.
</td> </tr> </table>
<table>
<tr>
<th>
Gridded product of SMOS and SMAP TB
</th>
<th>
D6.1
</th>
<th>
Gridded product of SMOS and SMAP brightness temperatures and uncertainties;
daily average, resolution 12-15 km. SMOS measures TB at a range of incidence
angles while SMAP uses a conical scan geometry and a constant incidence angle
at 40°. In order to generate a homogeneous SMOS/SMAP data product the SMOS TB
will be interpolated to the SMAP incidence angle of 40°.
</th> </tr>
<tr>
<td>
Gridded product of sea ice thickness from SMOS and SMAP
</td>
<td>
D6.3
</td>
<td>
The operational SMOS algorithm of UHAM will be adjusted for the use of SMOS
and SMAP TBs at a constant incidence angle. The ice thickness and its
uncertainty will be estimated from the TBs.
</td> </tr>
<tr>
<td>
Sea ice thickness from the SAR wave-spectrum
</td>
<td>
D6.6
</td>
<td>
Sea ice thickness estimation based on SAR wave-spectrum analysis will be applied to areas of frazil-pancake (FP) ice during periods of new ice formation and ice growth in regions of turbulence. Sentinel-1 C-band and COSMO-SkyMed X-band SAR images, in areas of the Arctic (Greenland Sea) and of Antarctica (Ross Sea), will be used.
</td> </tr>
<tr>
<td>
Improved mean sea-
surface height product from RA
</td>
<td>
D7.2
</td>
<td>
An intermediate product of CryoSat-2 data processing is a sea-surface height
product. This will be made publicly available for various external
applications, e.g. in oceanography.
</td> </tr>
<tr>
<td>
Sea ice freeboard and thickness from CryoSat-2 with weekly resolution
</td>
<td>
D7.4
</td>
<td>
Sea-ice freeboard and thickness from CryoSat-2 data, and co-located snow depth
data, with weekly resolution.
</td> </tr>
<tr>
<td>
Operational sea ice freeboard and thickness data from synthetic aperture radar
altimetry
</td>
<td>
D7.5
</td>
<td>
Operational data product of sea ice thickness and freeboard from SAR altimetry
data. This data product will concentrate on regions of high interest, and it
will provide highest possible spatial and temporal resolution.
</td> </tr> </table>
Table 4.3 SPICES compiled datasets and sea ice forecasting datasets.
<table>
<tr>
<th>
**Name**
</th>
<th>
**Deliverable**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Compilation of novel/improved sea ice products
</td>
<td>
D8.1
</td>
<td>
Compilation of improved/novel sea-ice products suitable for initialization and
evaluation of coupled sea ice forecasts. Contain e.g. L2 SMOS and CryoSat-2
thickness, CryoSat-2 along track sea ice concentration, sea ice drift and snow
thickness from EO data.
</td> </tr>
<tr>
<td>
Sea ice initial conditions using improved and novel products
</td>
<td>
\-
</td>
<td>
Sea ice initial conditions at the end of winter from improved and novel SPICES
sea ice products.
</td> </tr>
<tr>
<td>
SPICES coupled sea ice forecasts
</td>
<td>
\-
</td>
<td>
Coupled sea ice forecasts produced using improved SPICES initial conditions
data.
</td> </tr> </table>
### 4.1.2 Standards and metadata
A SPICES sea ice product contains a sea ice variable (or multiple variables) and, at minimum, geolocation information. Depending on the sea ice variable and data format, different information and quality fields on the input data and variable values may be included. The typical product formats are GeoTIFF images, NetCDF files, ASCII text files, and shapefiles (vector polygon data). In general, product formats and standards follow those used by national Ice Services, EUMETSAT OSI SAF, and Copernicus CMEMS. The NetCDF CF (Climate and Forecast) Metadata Conventions (http://cfconventions.org/) are used wherever applicable. The NetCDF files include search keywords to optimize re-use outside SPICES. All products have clearly stated version numbers.
As an example, a SPICES product in NetCDF format based on EO data can have the following product information and quality fields (metadata):
* Full name of the product.
* Geolocation information: details of the coordinate system (e.g. ellipsoid, reference longitude), corner coordinates in coordinate system, pixel size.
* List of satellite sensors and auxiliary datasets used.
* Full names of input data files, e.g. original EO data files.
* Version of input data.
* Product version.
* Product generation date and time in UTC.
* Product generation institution.
* Physical units of sea ice variables.
* Algorithm version for sea ice variable(s).
* Start and end times of satellite observations for the product.
* Acquisition time of satellite data for each pixel in the product.
* Contact information; email.
* Quality index for each pixel, if possible to derive: depends on the availability and coverage of the input datasets, quality of data processing, and weather and sea ice conditions. Quality parameter and its inputs depend on a sea ice variable.
* General quality parameter for the product.
For EO-based products in TIFF image format, the following information can be
given in the filename:
* Product name.
* Product version.
* Product generation date and time in UTC.
Metadata as for NetCDF can be included as TIFF format tags. For shapefile
(vector polygon) data the metadata is included as an XML file (e.g.
similar to the ICEMAR format).
## 4.2 Making data openly accessible
All the sea ice datasets described in Section 4.1 originate from the SPICES
project and are made openly accessible. The SPICES datasets are available at
the following existing data repositories:
* WP1 RRDP at _http://www.seaice.dk/SPICES/_
* WP2 SAR based sea ice products – Zenodo research data repository; for details contact [email protected]
* WP3 radar altimeter orbit data based products: sea ice types, freeboard, ice thickness, RIO index – NRT images of products at _http://ice.fmi.fi_ , RIO product files at _http://ice.fmi.fi/SPICES/d3.4/_ , for other data files contact [email protected]
* WP4 ice concentration and multi-year ice fraction from the optimal estimation algorithm, and snow thickness on sea ice from a regression model – NRT images are available through the DTU Java browser at _www.seaice.dk_ . At the moment (May 2018) the datasets are not available as netCDF or Geotiff files.
* WP5 albedo, melt pond fraction and SIC products are available at: _https://seaice.uni-bremen.de/data/meris/mpf2.0/_ . The optical and PM products are stored as separate files and are currently not merged together. The output grid is 12.5km NSIDC polar stereographic grid.
* WP6 SMOS and SMAP and brightness temperature and sea ice thickness products – ftp-site by UHH _ftp://ftp-projects.cen.uni-hamburg.de/seaice/Projects/SPICES/_
* WP7 CryoSat-2 products – www.meereisportal.de hosted by AWI, see Section 4.2.1 below.
* Inquiries on WP8 sea ice forecast datasets can be sent to Steffen Tietsche / ECMWF; [email protected]. Unfortunately, it is not possible to provide public access to the full output data sets, as they are very large.
These repositories have an open data policy and a data license for all
datasets. Some repositories may require registration.
The SPICES datasets can be easily read, processed and visualized using freely
available software tools (e.g. Python). A data user manual will accompany a
dataset, if needed.
A need for a data access committee was not foreseen in SPICES.
The licensing of the SPICES datasets is discussed in Section 4.4.
### 4.2.1 Sea-ice freeboard and thickness from CryoSat-2 and snow depth with
weekly resolution
The weekly CryoSat-2 sea ice product (D7.4) is distributed as gridded fields
with a spatial resolution of 25 km in netCDF v4 gridded files following the
Climate & Forecast (CF) conventions. Two versions of the dataset exist: a
non-time-critical (NTC) product with a timeliness of one month and a near-real-
time (NRT) product with a timeliness of two days. Both are based on different
ESA input datasets and are available on a password-protected ftp site:
<table>
<tr>
<th>
Server
</th>
<th>
ftp://data.meereisportal.de
</th> </tr>
<tr>
<td>
Login
</td>
<td>
user: altim pwd: altim
</td> </tr>
<tr>
<td>
CryoSat-2 weekly (NTC)
</td>
<td>
altim/sea_ice/product/north/cryosat2/cs2awi-v2.0/l3c_weekly/
</td> </tr>
<tr>
<td>
CryoSat-2 weekly (NRT)
</td>
<td>
altim/sea_ice/product/north/cryosat2/cs2awi-v2.0/Latest/l3c_weekly/
</td> </tr>
<tr>
<td>
File naming
</td>
<td>
l3c-awi-seaice-cryosat2-<nrt|ntc>-nh25kmEASE2-<start>-<end>-fv2.0.nc
</td> </tr>
<tr>
<td>
</td>
<td>
<start>, <end>: Start and end date in form of YYYYMMDD
</td> </tr>
<tr>
<td>
Parameters
</td>
<td>
sea ice thickness: `sea_ice_thickness` sea ice freeboard: `sea_ice_freeboard`
snow depth: `snow_depth`
</td> </tr> </table>
The content of the netCDF files can be parsed with several scripting languages
or tools, e.g. Panoply ( _https://www.giss.nasa.gov/tools/panoply/_ ) for data
visualization.
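As an illustration, the following minimal Python sketch fetches one weekly NTC file from the ftp server described in the table above and reads the `sea_ice_thickness` parameter. The concrete file name (its start and end dates) is a hypothetical instance of the documented naming pattern; list the directory first to see which files actually exist.

```python
# Minimal sketch: download one weekly CryoSat-2 file via ftp and read it.
import ftplib

from netCDF4 import Dataset

HOST = "data.meereisportal.de"
NTC_DIR = "altim/sea_ice/product/north/cryosat2/cs2awi-v2.0/l3c_weekly/"
# Hypothetical file name following the documented pattern:
# l3c-awi-seaice-cryosat2-<nrt|ntc>-nh25kmEASE2-<start>-<end>-fv2.0.nc
FILENAME = "l3c-awi-seaice-cryosat2-ntc-nh25kmEASE2-20170101-20170107-fv2.0.nc"

with ftplib.FTP(HOST) as ftp:
    ftp.login(user="altim", passwd="altim")  # credentials from the table above
    ftp.cwd(NTC_DIR)
    print(ftp.nlst()[:5])                    # inspect available files first
    with open(FILENAME, "wb") as f:
        ftp.retrbinary("RETR " + FILENAME, f.write)

with Dataset(FILENAME) as nc:
    sit = nc.variables["sea_ice_thickness"]
    print(sit.shape, getattr(sit, "units", "no units attribute"))
```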
## 4.3 Making data interoperable
All SPICES datasets described in Section 4.1 can be freely integrated with
other datasets and used freely in scientific studies and commercial activities
outside SPICES, i.e. full unrestricted re-use is allowed for any user.
The SPICES dataset formats and standards follow those used by national Ice
Services, EUMETSAT OSI SAF, and Copernicus CMEMS. NetCDF CF (Climate and
Forecast) Metadata Conventions (http://cfconventions.org/) are used wherever
applicable.
## 4.4 Increase data re-use (clarifying licences)
A SPICES dataset will be openly shared at the earliest when the related
deliverable has been accepted by the EC and made publicly available at the
SPICES web-site (https://www.h2020spices.eu/publications/). All datasets are
targeted to be available by the end of the SPICES project.
The reuse of the SPICES datasets is not restricted in any way after the SPICES
project has ended. Inquiries on the SPICES datasets can still be sent to the
SPICES scientists (contact information is in the metadata) after the end of
the project. The datasets remain re-usable as long as they are not
scientifically outdated (i.e. until better products become available due to
the development of satellite sensors, models, algorithms, etc.).
All SPICES datasets can be easily read, processed and visualized using freely
available software tools (e.g. Python). Common commercial software, such as
MATLAB, can also be used.
The quality of the SPICES datasets (e.g. absolute accuracy against validation
data) will be described in detail in the related SPICES deliverables and
scientific publications. We do not foresee any quality problems in the
pre-processing of satellite sensor data for sea ice parameter retrievals.
The Creative Commons Attribution 4.0 International license (CC BY 4.0) is used
by the SPICES project for all openly shared SPICES datasets. An end-user is
free to:
* Share — copy and redistribute the material in any medium or format.
* Adapt — remix, transform, and build upon the material for any purpose, even commercially.
SPICES cannot revoke these freedoms as long as the end-user follows the
license terms.
For details see: _https://creativecommons.org/licenses/by/4.0/legalcode_
# 5 Allocation of resources
The costs of making the SPICES datasets available in the formats described in
Section 4.1, and of depositing them in data repositories, are eligible costs
under the H2020 Grant Agreement. The respective SPICES Work Packages are
responsible for ensuring that their datasets are uploaded to the data
repositories.
General data management is part of WP9 (Management, Coordination and
Dissemination) and is led by the Finnish Meteorological Institute (project
coordinator).
In general, all SPICES open access datasets are targeted for long-term secured
storage (at least five years after the end of the project). If a data
repository requires a reduction in the storage space used by the SPICES
datasets, the SPICES consortium will decide which datasets are deleted first.
The costs for long-term storage are estimated to be negligible.
# 6 Data security
The SPICES datasets will be stored in established data repositories with
secured funding for long term preservation and curation.
In case of total data loss in the data repositories, the SPICES datasets can
be re-processed using the input datasets described in Section 3.1.1 and the
software developed in SPICES. It is assumed that the input datasets will
remain available for a long time, over ten years.
# 7 Ethical aspects
There are no ethical or legal issues that can have an impact on data sharing.
# **2\. DATA SUMMARY**
The main purpose of KINDRA's data collection/generation was the creation of an
inventory of groundwater research and knowledge that would make it more
visible and accessible for European scientists, practitioners and policy
makers, would allow for a gap and trends analysis to support the
implementation of the Water Framework Directive and the Groundwater Directive,
and would offer tools for a better protection of groundwater in Europe.
The project collected and generated the following types and formats of data:
* Dataset 1: European Inventory on Groundwater Research and Knowledge (EIGR). It includes metadata referring to scientific and other kinds of publications on groundwater. Format: RDF using the RDF Data Cube vocabulary, and GeoNetwork records based on ISO 19139. This is the primary dataset created in KINDRA (a parsing sketch follows this list).
* Dataset 2: Public documents generated during the project. It includes publications developed within the KINDRA project as a result of activities performed. Formats: pdf, PPT, Jpeg, .mov, .mp4 and AVI
* Dataset 3: Data for internal communication and information exchange. It includes a wide variety of documents useful to collaboratively perform the project works. Formats: Pdf, Word, PPT, JPEG, Excel.
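As an illustration of the RDF format used for Dataset 1, the following minimal Python sketch parses an exported EIGR metadata record with the rdflib library. The file name and the exact vocabulary terms are assumptions; only the generic RDF/XML parsing pattern is shown.

```python
# Minimal sketch: load an EIGR-style metadata record serialized as RDF/XML.
from rdflib import Graph

g = Graph()
g.parse("eigr_record.rdf", format="xml")  # an exported EIGR metadata record

# Print every triple; a real client would query for Data Cube or Dublin Core
# terms instead of dumping the whole graph.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```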
KINDRA re-used existing data on groundwater research and knowledge in Europe
to populate the EIGR with metadata on publications.
The data originate from public and private repositories and websites scattered
around Europe, known amongst a wide group of European experts working for
project partners or linked third parties.
By the end of the project, more than 2000 datasets were included in the EIGR,
but the aim is to increase this number substantially after the project's end
to boost the future exploitation of the EIGR.
The data are useful to groundwater scientists, practitioners and policy makers
working for research organisations, water body authorities, water companies,
NGOs, and public authorities at national and European level.
# **3\. DATA MANAGEMENT POLICY**
In compliance with the EU's guidelines regarding the DMP (European Commission,
2016), this document should address the following elements for each data set
collected, processed and/or generated in the project:
1. Data set reference and name
2. Data set description
3. Standards and metadata
4. Data sharing
5. Archiving and preservation
For each data set, the consortium developed a number of strategies that were
followed during the project and are to be followed after its closure in order
to address the above elements.
In this section, we provide a detailed description of these elements for every
data set collected.
# **4\. EUROPEAN INVENTORY ON GROUNDWATER RESEARCH AND KNOWLEDGE**
_1\. Data set reference and name_
DS1. European Inventory on Groundwater Research and Knowledge
# 2\. Data set description
Nature: The datasets included in the EIGR are metadata referring to scientific
and other kinds of publications on groundwater. The EIGR allows for the upload
of geographical and non-geographical datasets. The resources may or may not
refer to a territory, depending on the nature of the resource uploaded.
Scale: Concerning the spatial dimension, the inventory covers datasets
originating from European authors or institutions and/or concerning
groundwater issues in European countries. Europe is here understood as a
geographical area and includes, besides EU member states, Ukraine, Switzerland
and Serbia. Nonetheless, the inventory is suitable for use on much wider
scales. Concerning the temporal scale, the data sets uploaded to the EIGR
range from 2000 to 2017, a limitation applied exclusively during the project
for purposes connected with the execution of project tasks. After the
project's end this interval can be extended.
Target groups: the hydrogeological community as well as any linked discipline,
in order to be able to find, analyse and register research projects, research
outcomes or knowledge sources in this domain. This includes researchers,
practitioners, managers, interest groups, and policy makers.
_3\. Standards and metadata_
The EIGR is based on ISO 19139.
# 4\. Making data findable, including provisions for metadata
The EIGR follows the FAIR principles, among which is the promotion of the
discoverability of data by providing metadata on groundwater research and
knowledge, including so-called grey literature. The metadata include
references to all kinds of standard identification mechanisms, and where
available persistent and unique identifiers such as Digital Object Identifiers
are included.
Keywords have been identified and placed in the framework of an innovative
classification system, extensively described in WP1 deliverables: the
Harmonised Terminology and Methodology for classification and reporting of
hydrogeology-related research in Europe (HRC-SYS). It greatly facilitates the
search and analysis of data records.
Standards for metadata creation have been defined to ensure the proper
exploitation by users of the classification system and the inventory's
potential. They are laid down in a user guide that can be found at the KINDRA
website (http://kindraproject.eu/eigr/).
# 5\. Data access, distribution and sharing
The EIGR is accessible from the KINDRA website at _http://kindraproject.eu/_
or directly at the URL
_http://kindra.kindraproject.eu/geonetwork/srv/eng/main.home_ .
The EIGR has three types of users:
Administrators: able to see, analyse and modify all uploaded metadata sets as
well as the technical configuration of the platform.
Registered users: able to upload metadata sets and to see and analyse metadata
sets.
Everybody: able to see and analyse metadata sets.
The EIGR is freely accessible as far as search and analysis activities are
concerned, without any need for registration or login.
For the registration of new data sets, user credentials can be requested. They
comprise a user name and a randomly generated password. Registered users have
access to edit the metadata they have supplied themselves.
Copyright issues are not at stake, as the EIGR only contains metadata on
published documents, not the documents themselves.
Data sharing is possible, but limitations are to be considered due to the
customization of the scheme to the KINDRA Hydrogeological Research
Classification System (HRC-SYS), which includes information that is exclusive
to this classification system.
Data-harvesting possibilities are to be explored in the future to allow for an
easier and quicker upload of datasets from other databases.
Integration and reuse: aside from possible missing data due to differences in
the fields included (a consequence of the customization of the metadata
schema), the EIGR can be integrated or reused with similar GeoNetwork
catalogues. No licences are foreseen and no data embargo is applied, to permit
the widest reuse possible. All data produced and/or used in the project are
usable by third parties, taking into consideration that they are metadata, so
no proprietary issues are involved.
Quality assurance was performed during the project by a selected group of
experts, and this activity will be assured for 2018 by Sapienza University.
After 2018, a more stable institutional framework for EIGR exploitation
should be set up to take over this kind of task.
Possibilities for integration with other data sets and interoperability: to
facilitate interoperability, the EIGR is based on GeoNetwork and compatible
with other GeoNetwork or similar catalogues which follow the ISO 19139 scheme.
A thesaurus has been delivered based on 284 keywords, with URLs to the most
appropriate definition in various resources, especially GEMET. This allows the
groundwater thesaurus developed in the project to be easily linked to other
existing thesauri.
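As an illustration of this GeoNetwork compatibility, the following minimal Python sketch queries an ISO 19139 catalogue over CSW (the standard catalogue service exposed by GeoNetwork) using the OWSLib library. The CSW endpoint URL is an assumption derived from the GeoNetwork URL pattern given above; the EIGR installation may expose a different path.

```python
# Minimal sketch: full-text search against a GeoNetwork CSW endpoint.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# Assumed endpoint, based on the standard GeoNetwork URL layout.
CSW_URL = "http://kindra.kindraproject.eu/geonetwork/srv/eng/csw"

csw = CatalogueServiceWeb(CSW_URL)
query = PropertyIsLike("csw:AnyText", "%groundwater%")
csw.getrecords2(constraints=[query], maxrecords=10)

# Each record carries the ISO/Dublin Core metadata fields (title, etc.).
for record_id, record in csw.records.items():
    print(record_id, "-", record.title)
```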
# 6\. Data management, archiving and preservation
The EIGR has been archived and preserved during the KINDRA project by Agencia
de Medio Ambiente y Agua de Andalucía on a server hired by La Palma Research
Centre. After the project's closure the inventory will be transferred to a
server of the University of Rome La Sapienza that has become available for
this purpose. The transfer is foreseen for April 2018.
The University of Rome La Sapienza will assure its accessibility and
preservation for the forthcoming years, until a definitive allocation of the
EIGR has been found and realised in accordance with the bodies that have
expressed their interest and are currently assessing the technical,
administrative and financial modalities (see Deliverable 5.2).
# 7\. Allocation of resources
The EIGR is designed according to the FAIR principles, for which no additional
resources are needed. For the long-term preservation and up-scaling of the
EIGR into a widely usable tool, additional resources are needed; these have
been described, as far as currently known, in the Exploitation Plan (D5.2) and
will be further assessed during 2018.
# 8\. Personal data management
The personal data of registered users are stored and processed in compliance
with the General Data Protection Regulation (GDPR, Regulation (EU) 2016/679).
These data concern: first name, last name, e-mail, profession,
institution/company, and country.
Users give explicit permission for data storage and processing via a
registration form (informed consent), in which they also indicate whether they
agree to their data being used for the following purposes:
* allow the administrator of the EIGR to contact me in case any correction to the records I have uploaded becomes necessary (mandatory)
* allow the administrator of the EIGR, or whomever he delegates, to ask for my collaboration in user satisfaction and requirements inquiries, in order to gather knowledge for the improvement of the EIGR (mandatory)
* make and publish statistical evaluations on the profession, the type of institutions and the countries of EIGR editors, to improve the inventory's quality and promote its use (mandatory)
* send me information on events or opportunities concerning hydrogeology in Europe (optional)
Responsible for data storage and processing during KINDRA implementation is
Van Leijen Srl, Via Emilio Lami n° 7, Rome, Italy. The appointed Data
Protection Officer and Controller is Gertruud van Leijen.
Data processing can be outsourced by Van Leijen Srl to other entities, which
will be bound by the GDPR, the conditions established by the Controller, and
the given consent.
No data will be used or shared for any purpose other than those for which
explicit consent has been given above. Registered users are entitled to
request access to, correction of, or deletion of their data from the Data
Protection Officer at [email protected].
KINDRA reserves the right to cancel records that were inserted by editors who
have requested and obtained the cancellation of their personal data.
After the closure of the KINDRA project, the appointed Data Protection Officer
will remain in charge as long as needed during the transition period until a
definitive allocation of the EIGR has been found.
# **5\. PUBLIC DOCUMENTS GENERATED DURING THE PROJECT**
_1\. Data set reference and name_
DS2. Public documents generated during the project
# 2\. Data set description
Nature: the data set concerns publications developed within the KINDRA project
as a result of activities performed, comprising: presentations and posters
presented at conferences; publishable deliverables, including reports on the
results of technical activities and reports on workshops and conferences;
outreach materials such as brochures, "Did you know" items, short videos and
infographics; and news items and pictures of KINDRA events.
Scale: geographically the materials are not limited, although most of them
refer to European groundwater issues. Concerning the temporal dimension, all
materials were prepared and delivered between 1 January 2015 and 31 March
2018, the duration of the KINDRA project. After the KINDRA project's closure,
additional news may be published referring to after-project activities.
Target groups: the materials are differentiated to correspond to the needs of
different target groups. We refer to the Communication and Dissemination Plan
(D4.2).
Firstly, the hydrogeological community is targeted, including researchers,
managers, interest groups and policy makers, more precisely:
1. Representatives of European/international interest groups and bodies such as the European Innovation Partnership on Water (EIP), the Cluster of ICT and water management projects (ICT4water), the CIS Working Group on Groundwater (CIS WG-C), the Water supply and sanitation Technology Platform (WssTP), the European science-policy portal for water related research and innovation (WISE+RTD) and the Global Water Partnership (GWP);
2. Researchers and academic staff: the main focus will be on professional hydrogeologists, hydrologists, geologists and the members of the wider “Water Research Community of Europe”;
3. National member associations representing industry and agriculture: organisations using research results generated by EU and national research activities related to water, in particular hydrogeology (e.g. national waterworks companies, members of EurEAU, etc.);
4. Environmental NGOs dealing with the management and improvement of the water environment and/or directly active in safeguarding water and groundwater resources at a European and national level, such as the European Water Association (EWA);
5. Public bodies (including funding agencies and financiers such as national research councils): all those organisations who may influence policy support and implementation of water directives at a national level, such as the responsible ministries, relevant regional directorates, water boards, etc.
Secondly, the general public is targeted, with a particular focus on young
people, to make groundwater and EU funding visible.
Integration and reuse: the published data are freely accessible. In
particular, the reuse of the outreach material developed in the series "Did
you know" (2 brochures in different languages, an infographic and a video) is
encouraged by schools, in the framework of Researcher Nights and any other
educational context where groundwater issues can be promoted.
All materials acknowledge the EU funding and feature the EU emblem and the
KINDRA logo.
Moreover, where possible a disclaimer is included saying that the publication
reflects only the author's view and that the Agency is not responsible for any
use that may be made of the information it contains.
_3\. Standards and metadata_
pdf, PPT, Jpeg, .mov, .mp4 and AVI
# 4\. Data access, distribution, sharing and findability
The data set is freely accessible from the KINDRA website at
_http://kindraproject.eu/_ .
All materials in pdf are downloadable under the dedicated tab "downloads".
Use of the data is free of charge, but acknowledgement of the KINDRA project
and the EU funding is mandatory.
Publications in journals and conference books are also accessible on the
webpages of the respective publishers. Peer-reviewed publications are findable
by Digital Object Identifiers and deposited in public repositories.
# 5\. Archiving and preservation
During the project, the website is managed by La Palma Research Centre on its
system administrator's server. After the project's closure, by January 2019,
it will be migrated to a server of the University of Rome La Sapienza and
managed by the Department of Environmental Science under the responsibility of
Prof. Petitta.
# **6\. DATA FOR INTERNAL COMMUNICATION AND INFORMATION EXCHANGE**
_1\. Data set reference and name_
DS3. Data for internal communication and information exchange
# 2\. Data set description
Nature: the data set contains a wide variety of documents useful for
performing the project work collaboratively, such as: the Grant Agreement and
Partnership Agreement, the partner contact list, templates, the Quality
Assurance Plan, (draft) deliverables, agendas of meetings, (draft) minutes of
meetings, PPTs used in meetings, publications, PPTs and posters shown at
conferences, pictures, guidelines, data sources used for tasks, etc.
Target groups: the data set is only for internal use by the project partners.
The data set is in principle not intended for integration with other data sets
or for reuse. Publishable parts of the data set are reused in the data set
"Public documents generated during the project" (see above).
_3\. Standards and metadata_
Pdf, Word, PPT, JPEG, Excel.
# 4\. Data access, distribution and sharing
The modalities for data access and sharing are laid down in the Quality
Assurance Plan (D5.1). The partnership uses a file repository hosted on
_drive.google.com/KINDRA_ for internal communication and sharing of
documents. It is accessible only after permission is granted by Peter van der
Keur (GEUS). All participants in the KINDRA project can be granted access:
staff of project members to the full repository, JPE members and the European
Commission Review Panel and Project Officer to dedicated folders.
Data and documents are stored in an operational folder structure on
Drive/KINDRA/ and organized according to work packages, tasks, meetings,
official documents, references, deliverables and other relevant folders, which
can be created as needed. Documents are stored and can be edited online in
these folders, which has the advantage of maintaining one current document
rather than circulating various versions, which is time-consuming and prone to
error. A short document on how to create and use documents and data on
Drive/KINDRA/ is provided in the repository.
For documents that are subject to collaborative work and/or review, such as
deliverables, tables of contents, minutes, etc., the agreed working procedure
is as follows:
The document coordinator (prime author) uploads the draft document to the file
repository. Participants can insert their comments and corrections directly
online using the "suggestion mode" (so that changes can be detected and
inspected by others, in the same way as the MS Word change tracker). The
document coordinator then approves/cancels/comments on the suggestions. A new
consolidated version can then be uploaded by the document coordinator as vB
(vC, vD, etc.), leaving earlier versions available as archives so that the
origin of changes can be tracked later, if needed.
The partner contact list is not shared on the repository due to privacy
concerns, and is shared only by email.
# 5\. Archiving and preservation
The Google Drive repository has been adopted for its practical features,
acknowledging the potential risk of unauthorized access if standard procedures
are not followed. This risk is considered small under appropriate
and careful management, and acceptable in view of the fact that KINDRA does
not produce knowledge subject to property protection requirements.
Therefore, participants are also reminded not to share personal data on the
file repository, and the partner data list is shared only by email
distribution.
A backup of the repository is made weekly by Peter van der Keur.
The Coordinator keeps a full copy of all the relevant documentation on its
employee's computer, which will be retained until at least 5 years after
project closure.
# **7\. CONCLUSION**
The purpose of this document was to provide the plan for managing the data
generated and collected during the project: the Data Management Plan.
Specifically, the DMP described the data management life cycle for all
datasets to be collected, processed and/or generated by the KINDRA project. It
covered:
* the handling of data during and after the project;
* what data was collected, processed or generated;
* what methodology and standards were and are applied;
* whether data have been or will be shared/made open, and how;
* how data are and will be curated and preserved.
The DMP currently involves 3 data sets. All data sets except one are openly
provided to the public on web servers. Most of the data are published as
linked data using RDF and the RDF Data Cube vocabulary/GeoNetwork, while the
rest are published as CSV, PDF, .mov, .mp4, AVI and other formats.
After the project's closure, the public datasets DS1 and DS2 will be preserved
on a server owned and managed by the University of Rome La Sapienza to assure
their preservation and accessibility. In the framework of the foreseen
exploitation activities, DS1 is expected to be migrated after 2018 to a
definitive host organisation that is currently being identified. On that
occasion, the DMP will be revised and the part relative to DS1 handled by the
new host organisation.
# 1\. PROJECT INFORMATION
## Project name
Integrated and portable image cytometer for rapid response to _Legionella_ and
_Escherichia coli_ in industrial and environmental waters
Coordinator
Labaqua, S.A.
Grant Agreement Number
Nº 642356
Contact information
[email protected]
## Description
The CYTO-WATER project is an innovation project co-funded by the European
Union through the HORIZON 2020 initiative, call H2020-WATER-1-2014/2015:
Bridging the gap: from innovative water solutions to market replication. The
project was launched in June 2015 and is expected to be completed in May 2018.
The project is coordinated by Labaqua in partnership with CETaqua, ICFO,
Bertin Technologies, microTEC and Memteq Ventures LTD. The objective of the
project is to deploy, for the first time in industrial and environmental
waters, a new imaging cytometer platform for the detection and quantification
of microorganisms. This will allow quantifying the _Legionella_ and _E. coli_
population within 120 minutes of obtaining the sample, overcoming in this
way the main disadvantage of traditional laboratory methods, i.e. long times
to results, which can currently be up to 12 days in the case of _Legionella_
and 1 day for _E. coli_ . The tool will come in an easy-to-handle portable
form, which will increase its versatility and widen the possibilities for
on-site applications.
# 2\. DESCRIPTION OF DATA
The data generated may contain information that is commercially sensitive
or subject to patent or other IP applications. The CYTO-WATER consortium will
use reasonable efforts to ensure that as much data as possible are made
accessible to the research community; however, this will be mitigated by the
aforementioned commercial considerations.
The data will arrive as a continuous stream and be stored in raw form as
received from the individual partners. Given the heterogeneous sources of
CYTO-WATER data, the consortium will keep all data in text files to maximize
their usability across platforms and over time. Periodically, the existing
data store will be extracted, analyzed and archived, so that the overall data
set will be incremental in structure.
The expected data to be generated during this project include:
* Output data from CYTO-WATER project research activities;
* Software source created by consortium members; and
* Reports on consortium work, including publications, presentations, demonstrations, courses, documentation related to the commercial promotion of the project and the CYTO-WATER system, etc.
Additionally, the following type of data will also be stored:
* Externally-generated research data used as inputs for research activities;
It is recognised that during the project's development, other data sets may be
identified, assessed and stored by project participants. In such cases, the
DMP will be updated to reflect these additions.
The following section addresses all policies and activities planned for the
management and sharing of the data generated by the CYTO-WATER project.
# 3\. DATA MANAGEMENT
**Externally-generated research data (external to the consortium)**: Examples
of such data include laboratory data from researchers' experimental work and
raw data from other facilities. Partners' facilities will not be the initial,
primary, or sole storage location for such data. Consequently, no special
provisions for such data are planned; it is expected that the rules for
preservation, dissemination, and sharing of such data will be principally set
and managed by the organizations that are responsible for, and have a stake
in, the initial generation of the data.
**Output data (Raw data)**: Output data include experimental records
(statistics, results of experiments, measurements, observations from field
work, survey results, images, graphs), designs, congress presentations, etc.
The raw output data will be stored in a format designed to store and
organize large amounts of numerical data. The format will be chosen to be
supported by many commercial and non-commercial software platforms, including
Java, MATLAB/Scilab, Octave, IDL, Python, R, Julia and Microsoft Office. The
participants of CYTO-WATER will follow the research data management policy of
the institutions generating the data. If an institution cannot keep the data
to the same standard as required by the coordinator's data policy, the
coordinator will undertake to store the data under its policy terms.
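The description above (large numerical arrays readable from Java, MATLAB/Scilab, Octave, IDL, Python, R and Julia) matches a hierarchical container format such as HDF5. The following minimal Python sketch, written under that assumption with the h5py library, shows how output data and its metadata could be stored; the group, dataset and attribute names are illustrative only.

```python
# Minimal sketch: storing experimental output in an HDF5 container.
# All names below are illustrative assumptions, not the project's schema.
import h5py
import numpy as np

with h5py.File("cytowater_output_example.h5", "w") as f:
    run = f.create_group("experiment_001")
    run.attrs["partner"] = "Labaqua"          # provenance metadata
    run.attrs["date"] = "2017-06-01"
    counts = run.create_dataset(
        "cell_counts", data=np.random.randint(0, 500, size=(100,)))
    counts.attrs["units"] = "cells/mL"

# Reading back works the same way from any of the supported platforms.
with h5py.File("cytowater_output_example.h5", "r") as f:
    print(f["experiment_001/cell_counts"][:10])
```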
**Software** : Software will be preserved in much the same way as the output
data.
**Reports:** The analyzed results obtained from the raw data files will be
stored in reports (in Word, PowerPoint or PDF format). Sharing and long-term
availability of the data are guaranteed by the Project Coordinator (PC). Data
will initially be stored on local computers used during the measurements and
backed up in accordance with the procedures of the partner generating the
data. Additionally, at the discretion of the partner, the relevant
data/metadata may be uploaded to a publicly available online repository, such
as Zenodo (a website for scientific publications that is especially suitable
for EU project data).
Labaqua, as PC, will encourage consortium partners to report frequently and
widely on their activities. A project website has been created so that public
project reports and outputs can be made available to all interested
parties. In addition, as part of project outreach activities, a number of
demonstrations and similar activities will be conducted, and the content of
all of these results will be made available on the project website. Reports
will include project deliverables, validation reports, technical
specifications, manufacturing processes, modeling of these manufacturing
processes, and characterization of samples associated with these manufacturing
processes.
# 4\. DATA SHARING
Data will be shared between partners without delay. Outside the consortium,
relevant data may also be shared, subject to commercial considerations, in
order to promote the benefits of the developed technology to the scientific
community, the end-users, and potential collaborators for future product
development.
A significant part of the measurement results (including images, graphs, etc.)
can be shared publicly. They demonstrate the performance of the technology
developed in the project, and can be considered valuable input for other
researchers in the field, or relevant content for dissemination purposes.
Technical designs and methods cannot be shared publicly, because the
applicable intellectual property strategy is critical to securing a commercial
advantage for some of the project partners.
Table 1 summarizes the main kinds of data generated during the CYTO-WATER
project's life, the level of privacy, and the responsibility for the
information.
Table 1: Main kinds of data generated in the CYTO-WATER project.
<table>
<tr>
<th>
Level of privacy & access
</th>
<th>
Data generated
</th>
<th>
Short description
</th>
<th>
Diffusion
</th>
<th>
Responsibility
</th> </tr>
<tr>
<td>
Public
</td>
<td>
Validations reports
</td>
<td>
Testing of suitable Celltrap units for the project (Memteq)
</td>
<td>
CYTO-WATER website
</td>
<td>
Consortium members
</td> </tr>
<tr>
<td>
Results of end-users tests at the end of the project (Bertin)
</td> </tr>
<tr>
<td>
Results of experiments
</td>
<td>
Comparisons of the different Celltrap units showing flow rates, sample
volumes, pressures, etc.
(Memteq)
</td>
<td>
CYTO-WATER website
</td>
<td>
Consortium members
</td> </tr>
<tr>
<td>
Private
</td>
<td>
Technical specifications
</td>
<td>
Celltrap membrane specifications for various sample types (Memteq)
</td>
<td>
Consortium members
</td>
<td>
Consortium members
</td> </tr>
<tr>
<td>
Description of technical and operational issues regarding the integration of
the entire platform
(ICFO)
</td> </tr>
<tr>
<td>
Observations from field work
</td>
<td>
Environmental water samples vary in turbidity, color, solid contents and the
selection of Celltrap membrane types will become important (Memteq)
</td>
<td>
Consortium members
</td>
<td>
Consortium members
</td> </tr>
<tr>
<td>
Drawings of prototypes
</td>
<td>
Drawings available for new Celltrap units (Memteq)
</td>
<td>
Consortium members
</td>
<td>
Consortium members
</td> </tr>
<tr>
<td>
Fluidic chip (microTEC)
</td>
<td>
Consortium members
</td>
<td>
microTEC
</td> </tr>
<tr>
<td>
Mechanical manufacturing drawings of the concentrator and electronic cards
manufacturing plans of the concentrator (Bertin)
</td>
<td>
Consortium members
</td>
<td>
Bertin
</td> </tr>
<tr>
<td>
Prototype samples
</td>
<td>
Prototype samples (microTEC)
</td>
<td>
Consortium members
</td>
<td>
microTEC
</td> </tr>
<tr>
<td>
Images
</td>
<td>
Graphical data available (Memteq)
</td>
<td>
Consortium members
</td>
<td>
Consortium members
</td> </tr>
<tr>
<td>
Validation reports
</td>
<td>
Analysis of microTEC's cartridge material to assess its performance with
ICFO's reader (ICFO)
</td>
<td>
Consortium members
</td>
<td>
Consortium members
</td> </tr>
<tr>
<td>
Validation reports of the CYTO-WATER system (Labaqua)
</td>
<td>
Consortium members
</td>
<td>
Labaqua
</td> </tr>
<tr>
<td>
Survey results
</td>
<td>
Market study (Bertin)
</td>
<td>
Consortium members
</td>
<td>
Bertin
</td> </tr>
<tr>
<td>
Design reports
</td>
<td>
Interface studies and integration studies (Bertin)
</td>
<td>
Consortium members
</td>
<td>
Bertin
</td> </tr>
<tr>
<td>
Software development reports
</td>
<td>
Communications with other modules study (Bertin)
</td>
<td>
Consortium members
</td>
<td>
Bertin
</td> </tr>
<tr>
<td>
Results of experiments
</td>
<td>
Analysis of filter clogging tests, tests and analysis of biological and
physical recovery rates, and internal validation tests report of the
concentrator (Bertin)
</td>
<td>
Consortium members
</td>
<td>
Bertin
</td> </tr>
<tr>
<td>
Evaluation of ICFO's reader performance on water samples of different
turbidity levels provided by Memteq (ICFO)
</td>
<td>
Consortium members
</td>
<td>
Consortium members
</td> </tr>
<tr>
<td>
Manufacturing processes
</td>
<td>
Description of the assembly of ICFO's reader (ICFO)
</td>
<td>
Consortium members
</td>
<td>
Consortium members
</td> </tr>
<tr>
<td>
Results of experiments
</td>
<td>
Spreadsheet with all the raw data of the experiments performed
(Labaqua)
</td>
<td>
Consortium members
</td>
<td>
Labaqua
</td> </tr>
<tr>
<td>
Experiments protocols
</td>
<td>
Text document in which all the procedures performed in the laboratory are
recorded (Labaqua).
</td>
<td>
Consortium members
</td>
<td>
Labaqua
</td> </tr> </table>
## Public data
All participants in the project will publish the results of their work to the
extent that the commercial interests of project results are preserved
according to the exploitation and business plans to be agreed among project
partners. Papers will primarily be published in peer-reviewed journals and/or
conference proceedings. The results may also appear in books written in
English. Primary data and other supporting materials created or gathered in
the course of the work will be shared with other researchers upon reasonable
request and within a reasonable time of the request. Main research information
and reports will be published on the project website. Informative and
commercial publications focused on the marketing of the CYTO-WATER systems
will be written in Spanish, French, German and English on partner websites and
in other relevant outlets (national and international conferences, publicity
in online journals such as _i-ambiente_ and _i-agua_ with high diffusion at
national and international level, and brochures about the CYTO-WATER project).
The emphasis of data management will be on faithful and reproducible record
keeping, with an emphasis on transparency and accountability in the methods
used.
Results of the research will be made available in digital form in spreadsheet
tables, tab-delimited files, or image files. Images will be saved in standard
image formats such as JPEG, TIFF, or PNG. Main research products will be
available online in digital form. Manuscripts will appear as PDFs, and contain
text, calculations, drawings, plots, and images. The journals targeted for the
results of this research project provide a downloadable PDF copy of the
manuscript on the web. In addition, the PC will link to these journal
publications from the project website's "Publications" section. Validation
reports and technical specifications will be made public through the project
website. Details of the main research products will therefore appear as text,
tables, plots, and images in peer-reviewed journal articles and/or conference
proceedings. The results may also be included in book chapters. Patents will
be sought when relevant.
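As an illustration of these digital formats, the following minimal Python sketch reads a tab-delimited results file with pandas and exports a plot as a PNG image. The file and column names are hypothetical.

```python
# Minimal sketch: read a tab-delimited results table and save a PNG plot.
import matplotlib

matplotlib.use("Agg")  # render to file, no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical results file with hypothetical column names.
results = pd.read_csv("legionella_counts.tsv", sep="\t")
print(results.head())

fig, ax = plt.subplots()
ax.plot(results["time_min"], results["count_per_ml"], marker="o")
ax.set_xlabel("Time (min)")
ax.set_ylabel("Count per mL")
fig.savefig("legionella_counts.png", format="png")  # standard image format
```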
In summary, the main public data will be the externally-generated research
data (external to the consortium), most output data and reports (including
scientific reports, congress presentations, etc.), and validation reports and
technical specifications, which will be shown on the CYTO-WATER project
webpage.
## Private data
It is recognised by the CYTO-WATER partners that certain data may be
commercially sensitive, and thus the consortium will withhold general access
to those data which may compromise such commercial sensitivity. Intellectual
property issues restrict the sharing of some of the data (designs, methods).
Filing and subsequent publication of the corresponding patent applications
before making the data publicly available would be a way of limiting such
restrictions.
The main private data of this research project are the development of
manufacturing processes, the modeling of these manufacturing processes, and
the characterization of samples associated with these manufacturing processes.
# 5\. ACCESS TO DATA AND DATA SHARING PRACTICES AND POLICIES
## Period of data retention
Data will be made publicly available on the project website, but editing
rights will be granted only to the PC. Data will not be embargoed but will be
opened up for public use upon completion of each project step. This will allow
maximum transparency of the project and enable the maximum benefit to decision
makers, as one of the project goals is to improve public policy decision
making. Data will be made publicly available through the project site with
guides and/or visuals/charts to increase the ability of the public to consume
the results of individual project steps.
Public access to research products will be regulated by the consortium in
order to protect privacy and confidentiality concerns, as well to respect any
proprietary or intellectual property rights. Legal offices will be consulted
on a case-by-case basis to address any concerns, if necessary. Terms of use
will include proper attribution to the PC and authors along with disclaimers
of liability in connection with any use or distribution of the research data.
## Archiving and Preservation of Access
Research products will be made available immediately after publication.
Journal publications will be available online from the respective journal
websites and linked to by the CYTO-WATER website. All data generated as a
result of this project will be backed up daily to protect against loss of data
from hardware failures, fire, theft, etc.
Upon completion of the project, all data will be housed within the consortium
database and will be made available upon request in order to facilitate the
maintenance and availability of project results. For a period of 5 years
after the project, results will be available upon request, under the
responsibility of the project coordinator.
## Executive Summary
Deliverable 5.2, "Project initial exploitation and open data management
plan", aims to report a first analysis of the exploitation opportunities of
the FESTIVAL project, to introduce some initial concepts related to the
business model, and to define how the open (research) data will be managed
during the project. Considering the early stage of the project, the main
topics introduced in this document (exploitation, business model and open data
management) cannot yet be fully defined and analysed, but they will be updated
throughout the project's duration and reported in the subsequent
WP5 deliverables.
Deliverable 5.2 is structured in four main chapters. The first one,
"Initial exploitation plan", contains an initial analysis of the possible
exploitation opportunities for the FESTIVAL project; all the partners
contributed to this chapter. In particular, it identifies a list of existing
assets that can be reused in the project and describes how they will be
further improved during the activities. It also defines an initial list of
possible exploitable outcomes that FESTIVAL should produce; this includes not
only IT assets but also other types of items (e.g. methodologies, physical
environments, etc.). At the end of the chapter, some initial exploitation
intentions of the different project partners are presented.
Chapter 2 analyses the Experimentation as a Service (EaaS) ecosystem, which is
the basic approach of FESTIVAL and fundamental for the future definition of a
business model. The first section of the chapter describes the entities,
processes and stakeholders involved in a typical EaaS scenario; each of them
is described and related to the others. The last section presents a series of
existing initiatives related to the EaaS model and initial considerations
about FESTIVAL's exploitation and sustainability.
Chapter 3 is dedicated to Open Data, an important topic for the FESTIVAL
project, which has several activities in this field with a specific focus on
the open research data that will come from the experimentations. Section 3.1
analyses the adoption and maturity of the open data approach in different
countries of the world, in particular those involved in the FESTIVAL field
trials. The second section of the chapter gives a general overview of the Open
Data market and the potential business opportunities. The Open Data Management
plan is the focus of the last section of Chapter 3: in this early phase of the
project, it has been possible to define the processes that will be followed,
and the related outputs, in the management of open data and in particular open
research data.
The last chapter presents a roadmap of the future activities to be performed
to refine the exploitation plan and to define a first version of the concrete
Data Management Plan and of the business model, which will be included in the
next two WP5 deliverables, D5.3 "_First year update to communication,
dissemination, exploitation and open data management activities_" and D5.4
"_Experimentation as a Service business model analysis_".
## 1\. Initial exploitation plan
### 1.1. Project general scenario and impact
We live in a changing world: the European Union and Japan already face many
challenges, and more can be foreseen for the near future. The transforming
power of ICT is set to revolutionize our economies and lives as the new forms
of communication become the medium for organizing and managing the complex
ecosystems we depend on (energy, transport, industry, health, etc.).
The achievement of this vision, however, requires significant investment in
key infrastructures. Test-beds and experimental facilities, from small scale
up to city scale, will be an essential enabler of this vision. **Facilitating
access to these test-beds for a large community of experimenters in an
"Experimentation as a Service" approach is a key asset for the development of
the large and active community of application developers needed to address the
many challenges faced by European and Japanese societies.** A global and
far-reaching interoperability between applications is another essential
enabling element of the IoT vision; in that sense the project's approach of an
intercontinental federated test-bed will prove a key asset.
As presented in the description of work, the project targets important impact
not only from a scientific and technical perspective, but also at an economic
and societal level. While a significant aspect of the project is to study the
potential impacts of the project experiments and of the project's federated
approach (Work Package 4), this also translates into direct exploitation
opportunities that will be pursued jointly by the project partners:
* The Federation itself of the different experimentation environments made available in Europe and Japan, in an Experimentation as a Service model, can be expected to be the main and most tangible outcome of the project. The project will provide valuable experimentation facilities in particular for innovation creators (researchers, start-ups, SMEs) who do not have the necessary resources to set up and maintain large-scale experimentation facilities. In the long run, these services will allow the IoT ecosystem to bring robust, good-quality products to the market, while decreasing the time to market by diminishing the effort necessary for testing. The development of experimentation facilities, including large (city-size) operational test-beds, made available to a large community of stakeholders will be a key asset for the development of the IoT in Europe and Japan, whose economies will both strongly benefit from this technical leadership. **Maintaining and enhancing the provision of these EaaS services beyond the project lifespan and in a sustainable way will therefore be the main focus of the common exploitation work of the project.**
* A global and far-reaching interoperability between applications is another essential enabling element of the IoT vision; in that sense the project's approach of an intercontinental federated test-bed will prove a key asset. Thus, if sustaining EaaS access to the project testbeds is the first priority of the project, **a second common exploitation objective will be to maintain and extend the federation and interoperability beyond the project.** The project exploitation work will therefore identify possible structures to carry on the federation work beyond the project lifespan, and this deliverable already looks into potential possibilities.
In addition to these main common opportunities and objectives, the scientific
knowledge and expertise acquired in several domains throughout the project
will be a strong asset for the project participants and can be translated into
additional individual or joint exploitation opportunities:
* The experimentation organized during the project on the federation of testbeds, and the applications developed by the consortium and by third parties, will generate an important exploitation opportunity. These experimentations and applications, supported by the project through the organization of contests and through dedicated services (especially to handle relationships with end users and privacy and ethics issues), will generate business opportunities that some of the project partners will be able to pursue in close collaboration with the surrounding business ecosystem created thanks to the project.
* The project gives the opportunity to create a federated technological architecture that can be the base for a real service ecosystem to be delivered and maintained beyond the duration of the project. Thanks to the professional experience matured in the project, this will be an opportunity for all the industrial stakeholders involved to play a primary role in future technological standardisation/regulation in the domain of IoT platforms.
* Each individual organization involved in the project has a plan for exploiting the knowledge and expertise developed in the project, either as a competitive advantage for the provision of existing or new products and services, or as part of scientific education and dissemination for academic partners and research institutions.
### 1.2. Existing reusable assets
This section presents a set of existing assets that the different partners of
FESTIVAL have brought into the project as resources to be used in its concrete
activities. These assets, described in the following tables, consist of IT
artefacts (such as platforms or testbeds) but also of other types of
resources such as living labs and collaborative spaces; all these assets will
be involved in the project experimentations and will be accessible to the
different FESTIVAL stakeholders. Each asset is described in a table including
a general description, considerations about innovation and interoperability in
the project context, and a plan for possible further development of the asset
during the project implementation.
#### 1.2.1. SmartSantander platform
<table>
<tr>
<th>
**Asset name**
</th>
<th>
SmartSantander platform
</th> </tr>
<tr>
<td>
**Asset Overview**
</td>
<td>
</td> </tr> </table>
The SmartSantander platform tier is the core of the SmartSantander testbed. It
is the top layer within the SmartSantander architecture and is in charge of
managing and controlling the resources within the testbed. The description of
the resources deployed in SmartSantander, as well as the internal repositories
for the data generated in the testbed, belongs to this layer. Furthermore, the
SmartSantander core supports the integration of external services to be stored
and accessed using the SmartSantander APIs. Finally, all the functionalities
to federate SmartSantander with other existing testbeds (FI-WARE, Fed4Fire)
are within this layer.
**Figure 1 - SmartSantander platform**
<table>
<tr>
<th>
**Type**
</th> </tr>
<tr>
<td>
Software platform
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
The SmartSantander platform is a set of software tools ready to manage a
massive deployment of IoT devices in smart city scenarios. Currently, the
platform tier manages more than 12000 IoT sensors.
The platform is composed of several components, depending on the functionality
foreseen:
* IoT API: the main interface for accessing and injecting new data into the platform. All the resources deployed within the SmartSantander testbed use this interface to send data. Additionally, external information sources can use this interface to inject data into the platform. This interface implements all the authorization and authentication methods.
* Adapters/Wrappers: a set of software modules to integrate the SmartSantander platform into different federation approaches (FI-WARE, FED4FIRE).
* Internal data repository: a NoSQL data repository that keeps all the data injected from (external and internal) IoT devices.
* Resource Directory: this module manages and keeps track of the resources injecting data into the platform.
* nodeManager: this module is in charge of monitoring the infrastructure, looking for inactive nodes to be deactivated from the platform.
* Testbed runtime and OTAP tools: these software modules, implemented within the testbed platform and gateways, enable flashing nodes with new programs using Over The Air Programming protocols.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
The SmartSantander platform foresee the integration of the resources with
other testbeds by use of a RESTful interface (aforementioned IoT API) which
enable a homogenous access to all the resources within the testbed, including
management and data mining. Additionally, as part of the FI-‐WARE and
FED4FIRE initiatives, the platform is being federated following these two
approaches.
The use of IoT API is used to integrate external services into the
SmartSantander platform; therefore, different testbeds can inject generated
data into the platform and access it homogeneously. Furthermore, data injected
into the platform can easily be included as part of FED4FIRE and FI-‐WARE,
accessing a much wider testbeds federation.
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
On the one hand, the SmartSantander platform envisions federation with
other testbeds in Europe and Japan within FESTIVAL, enriching the access
possibilities for experimenters using SmartSantander. Enabling federation will
allow SmartSantander to access new resources not envisioned previously (e.g.
VMs with high-speed connectivity).
On the other hand, the smart shopping use cases in FESTIVAL will require new
software tools to manage smart-shopping-oriented sensors. Moreover, sensors
based on radio technologies such as Bluetooth are not yet part of the
SmartSantander platform, so specific management tools will also be
implemented.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
University of Cantabria
</td> </tr> </table>
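To make the role of the IoT API more concrete, the following minimal sketch shows how an external service might read and inject observations over a RESTful interface of this kind. The base URL, paths, token handling and payload fields are illustrative assumptions, not the actual SmartSantander API specification.

```python
import requests

# Hypothetical endpoint and token: the real SmartSantander IoT API paths
# and authentication scheme are defined by the platform, not shown here.
BASE_URL = "https://api.example-smartsantander.eu/v1"
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"

def fetch_latest_observations(sensor_id: str) -> list[dict]:
    """Fetch the latest observations for one sensor over a REST interface."""
    resp = requests.get(
        f"{BASE_URL}/sensors/{sensor_id}/observations",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"limit": 10},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def inject_observation(sensor_id: str, value: float, unit: str) -> None:
    """Inject an external observation into the platform, as external
    services are said to do through the IoT API."""
    resp = requests.post(
        f"{BASE_URL}/sensors/{sensor_id}/observations",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"value": value, "unit": unit},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    inject_observation("urn:x-iot:santander:temp:42", 21.5, "Cel")
    print(fetch_latest_observations("urn:x-iot:santander:temp:42"))
```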
#### 1.2.2. Santander IoT Infrastructure
<table>
<tr>
<th>
**Asset name**
</th>
<th>
Santander IoT Infrastructure
</th> </tr>
<tr>
<td>
**Asset Overview**
</td>
<td>
</td> </tr> </table>
The Santander IoT Infrastructure, which is shown in the following figure, is
currently composed of around 3,000 IEEE 802.15.4 devices, 200 devices with
GPS/GPRS capabilities and 2,000 joint RFID tag/QR code labels deployed both at
static locations (streetlamps, facades, bus stops) and on board public
vehicles (buses, taxis). It includes:
* static nodes, such as environmental sensors (temperature, noise, luminosity), parking sensor nodes, parks and gardens irrigation sensors (air temperature and humidity, soil moisture and temperature) and traffic sensors (road occupancy, number of vehicles, speed);
* mobile nodes, which measure specific environmental parameters such as CO, NO2, ozone and microscopic particles.
**Figure 2 - Santander IoT Infrastructure**
Additionally, in order to improve municipal services such as water and
waste management, different kinds of sensors (fixed and mobile) have been
deployed within the city.
In the case of waste management, sensors capable of measuring garbage levels
in bins and systems for identifying and monitoring litter bins (NFC and RFID
tags) have been installed at fixed positions, while fleet management systems
(GPS) have been deployed in vehicles, together with activity and environmental
sensors. Additionally, mobile operators will be provided with NFC tags and
GPS. The following figure shows some of the installed sensors.
All the information retrieved by these sensors will be stored in the
SmartSantander Platform and, after being processed, will be sent to the
corresponding actor. Several apps will be developed: for internal use (street
cleaning operators; bin and trash can maintenance) and for citizens (to
report incidents).
<table>
<tr>
<th>
In the case of water management, a pilot project has been developed in one area
of the city, whose main objective is to optimise the provision, management and
use of this resource. Several sensors have been installed in the water
provision network (in order to monitor it) and also in citizens' houses. The
information retrieved by these sensors, together with environmental
information, will be gathered in order to improve not only water management but
also service provision. Additionally, tools for accessing individual water
consumption and for reporting incidents on the network are available,
in order to improve the quality of service and optimise water consumption by
involving citizens as key actors in this process.
Due to the positive results obtained in this project, a new phase will be
developed in another area of the city.
The Santander IoT Infrastructure is being used in several research projects
that collaborate in the development of the future Smart City; it is used to
develop use cases within these projects and to generate genuinely new services
for citizens.
</th> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Infrastructure (Hardware Platform)
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
As mentioned previously by the University of Cantabria, the smart shopping use
case will rely on new sensors based on radio technologies such as Bluetooth;
these will have to be developed, deployed and integrated into the current
infrastructure.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
In order to support both experimentation and service provision, different
protocols are used, including standards-based ones such as IEEE 802.15.4 and
proprietary ones such as DigiMesh.
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
Deployed devices will also extend the SmartSantander testbed capabilities,
allowing external experimenters to access new kind of datasets with
information about positioning.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
Santander City Council
</td> </tr> </table>
#### 1.2.3. GIS Platform
<table>
<tr>
<th>
**Asset name**
</th>
<th>
GIS Platform
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
A GIS Platform, which uses ESRI technology, is provided by Santander City
Council in order to store and process geo-referenced data according to the
city's needs.
Some of the geo-referenced data that will be used in this project are hosted
on this platform. The following figure shows a simplified view of the
platform architecture:
**Figure 3 - Santander GIS platform**
ArcGIS Server is a powerful and flexible platform that provides not only a
variety of spatial data and services to GIS users and applications, but also
the ability for an organisation to implement server-based spatial
functionality for focused applications using the rich functionality of
ArcObjects. Building robust and scalable applications is not a simple task, so
proper application design is required.
The server is a distributed system whose components can be spread across
multiple machines. Each component plays a specific role in the processes of
managing, activating, deactivating and load-balancing the resources located on
a given server object or set of server objects.
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Software platform
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
The ArcGIS platform provides innovative features for managing geographic
information, such as presenting aggregated data as context-rich maps, which
gives organisations powerful new tools to proactively manage their operations.
It also provides field data collection tools that can be used on any mobile
device without additional software development, and it allows geo-located
information not only to be gathered but also to be managed. The server also
exposes the GIS functionality that a resource contains: for example, beyond
simply sharing a map through the server, a user can interact with it, such as
finding the closest hospital, restaurant or bank and then getting directions
to it from their location.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
The Santander GIS platform's data interoperability provides state-of-the-art
direct data access and data translation tools, in addition to the ability to
build complex spatial extraction, transformation and loading (ETL) tools for
data validation, migration and distribution.
The platform supports various proprietary formats and protocols as well as
standardised formats from OGC (such as GML, CityGML, WFS and KML/KMZ), ISO and
other GIS standards bodies, plus common formats such as CSV, CAD, JSON, XML
and RSS.
The GIS platform also delivers two APIs for interoperability: one RESTful and
one SOAP-based (a hedged example of querying the REST API follows this table).
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
This platform provides the geo-location information required by the Smart
Shopping use case.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
Santander City Council
</td> </tr> </table>
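As a concrete illustration of the RESTful access mentioned above, the following hedged sketch queries a feature layer through the standard ArcGIS Server REST API. The service URL, layer index and attribute names are placeholders; only the general query pattern (`where`, `outFields`, `f=json`) reflects the usual ArcGIS Server convention.

```python
import requests

# Hypothetical service URL: the layer path depends on the actual
# ArcGIS Server deployment of the Santander GIS platform.
LAYER_URL = ("https://gis.example-santander.es/arcgis/rest/services/"
             "CityAssets/MapServer/0")

def query_features(where: str = "1=1", out_fields: str = "*") -> list[dict]:
    """Query a feature layer through the standard ArcGIS Server REST API."""
    resp = requests.get(
        f"{LAYER_URL}/query",
        params={"where": where, "outFields": out_fields,
                "returnGeometry": "true", "f": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("features", [])

# Print the attributes of the first few matching features.
for feature in query_features(where="TYPE='bus_stop'")[:5]:
    print(feature["attributes"])
```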
#### 1.2.4. Santander Open Data Platform
<table>
<tr>
<th>
**Asset name**
</th>
<th>
Santander Open Data Platform
</th> </tr>
<tr>
<td>
**Asset Overview**
</td>
<td>
</td> </tr> </table>
Santander City Council has deployed an Open Data platform to offer citizens
all the public data residing in its internal databases, covering areas such as
transportation, demography and shops. One of the platform's main focuses is
companies and entrepreneurs, who can take advantage of this data to create
products and services on top of it, thereby fostering business opportunities
as well as job creation. The platform is also focused on proactively providing
citizens with data, seeking a better understanding of how an administration
works internally and reducing, or in some cases even eliminating, slow and
costly administrative procedures to access data that, although public, was
previously not readily available to citizens.
The architectural definition of the platform, which includes a front-end and
a back-end, is shown in the next picture and described below:
**Figure 4 - Santander Open Data platform**
The front-end may be defined as the graphical interface for the final user.
From the technological point of view, it is composed of three popular
components developed by the open source community, which have been reused and
adapted to the specific needs of the platform. These components are:
1. WordPress: a well-known CMS originally aimed at building blogs which, over time, has incorporated enough features to become the most widely used content management system on the web. This component implements the final graphical user interface that provides access to the data and to the Open Data portal.
2. CKAN: a CMS focused on Open Government projects, used and managed mainly by the United Kingdom Government and reused by the main open data portals
<table>
<tr>
<th>
worldwide. This component provides all the infrastructure for defining
datasets and resources and annotating them with metadata, and it supplies APIs
that allow developers to automate data consumption.
3. Virtuoso: a web tool that allows terminology definition for the creation
and promotion of the Semantic Web. Its role within the platform is to define
those specific terms associated with a municipality or local authority that
have not yet been defined by any standardisation body or other open data
platform.
The back-end subsystem is in charge of data-gathering tasks and of supporting
the SPARQL engine. This subsystem is composed of two components, also
developed by the open source community and adapted to the platform's
particular needs. These components are:
1. iCMS: a data-gathering system developed by the Government of Andalucía, whose main function is to keep the front-end constantly fed with updated data. It is important to highlight that the data published on the website are not mere snapshots of the data available at a given time: the system is constantly updated from the municipal databases in order to always provide current data. This component therefore plays a central role in enabling a real-time open data platform.
As an example, in the case of the census dataset, querying the number of
inhabitants on two different days will show the variation caused by the
registrations and cancellations that occurred in that interval.
This real-time updating is performed through ad hoc drivers developed for
every municipal data producer.
2. Marmota/Neogolsim: this tool is responsible for providing the SPARQL engine to the platform. The purpose of this engine is, on the one hand, to provide data in RDF format (the format of choice for reuse) and, on the other, to allow cross-data queries with other open data platforms worldwide, thus equipping the platform with advanced features that follow the Semantic Web path (a hedged SPARQL query example follows this table).
</th> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Software platform
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
The Open Data Platform is a key component of innovation in the city, providing
tools to generate new ideas from citizens and fostering crowd-sourcing. Many of
these ideas result in projects whose outputs are new services for citizens,
giving the city an additional, innovative channel for creating new services.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
The Santander Open Data Platform delivers open standards for data exchange
that are independent of any individual supplier. These standards are essential
for systems and data to be interoperable; without them, open data can realise
only a fraction of its value.
Because the Santander Open Data platform is built on open standards, it helps
data from different sources work together. It also ensures that users are
never “locked in”, because data and metadata can easily be harvested into a
different system.
Standards such as HTML, REST, CSV, XML, RDF, JSON-G, N3, TURTLE, ATOM, SHP and
WKT are available and ready for use in the platform.
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
The Open Data platform will provide the information required by the Smart
Shopping use case. Additionally, output information may be included as a new
category in the current Open Data catalogue, which may then be used by any
other user to develop a new application or service.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
Santander City Council
</td> </tr> </table>
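To illustrate the SPARQL access route mentioned above, the following hedged sketch runs a simple query against a SPARQL endpoint over HTTP. The endpoint URL and the vocabulary used are illustrative assumptions, not the actual configuration of the Santander platform.

```python
import requests

# Hypothetical SPARQL endpoint: the actual URL and graph layout are
# defined by the Marmota/Neogolsim deployment described above.
SPARQL_ENDPOINT = "https://datos.example-santander.es/sparql"

QUERY = """
SELECT ?dataset ?title
WHERE {
  ?dataset <http://purl.org/dc/terms/title> ?title .
}
LIMIT 10
"""

resp = requests.get(
    SPARQL_ENDPOINT,
    params={"query": QUERY, "format": "application/sparql-results+json"},
    timeout=10,
)
resp.raise_for_status()
# Iterate over the standard SPARQL JSON results structure.
for binding in resp.json()["results"]["bindings"]:
    print(binding["dataset"]["value"], "-", binding["title"]["value"])
```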
#### 1.2.5. Pedestrian Flow Analyzer platform
<table>
<tr>
<th>
**Asset name**
</th>
<th>
Pedestrian Flow Analyzer platform
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
The Pedestrian Flow Analyzer (in short, PFA) platform consists of Wi-Fi packet
sensors and a PFA server, and provides functionality to grasp the flow of
pedestrians carrying Wi-Fi-enabled devices in real time. The Wi-Fi packet
sensors collect Probe Request frames, which are periodically transmitted by
Wi-Fi-enabled devices to search for Wi-Fi access points, anonymise the MAC
address fields and then upload the collected data to the PFA server, where the
gathered data are used by a PFA engine to calculate the pedestrian flow in
real time. We aim to utilise the analysed data effectively for making disaster
prevention plans and for evacuation guidance.
**Figure 5 - Pedestrian Flow Analyser platform**
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Software platform
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
Pedestrian flow analysis based on Wi-Fi probe request frames attracts the
attention of researchers in its own right, and the technology has already been
incorporated into several commercial products. The innovative challenge in the
FESTIVAL project is to integrate the PFA functionality with existing testbeds
and to explore means of facilitating experiments that use collected data
containing personal information, following correct procedures that address
users' privacy concerns.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
The PFA platform is foreseen to be deployed on the JOSE testbed and widely
used by experimenters interested in using the real-time trajectories of
pedestrians for novel services. The PFA has been incorporating communication
functionality based on MQTT over SSL, and will be able to accommodate both
Wi-Fi packet sensors and IoT actuators using the MQTT protocol (a hedged
sketch of such an MQTT publisher follows this table).
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
We have so far been using the PFA platform for experiments in university
campuses, exhibition halls, shopping malls and underground shopping areas. It
is, however, difficult to open the collected data and analysis results widely,
because they involve personal information. We expect to improve this situation
by deploying the PFA platform onto the FESTIVAL testbeds and providing it to
various experimenters in a more controlled environment, in order to explore
efficient procedures for utilising pedestrian flow information in novel
services and applications. We also foresee incorporating functionality to
improve the PFA using BLE advertising packets.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
Ritsumeikan University
</td> </tr> </table>
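The following minimal sketch illustrates the kind of MQTT-over-SSL upload path described above, including a one-way hash standing in for the MAC anonymisation step. The broker hostname, topic layout, payload fields and salt are illustrative assumptions, not the actual PFA wire format.

```python
import hashlib
import json
import time

import paho.mqtt.client as mqtt  # paho-mqtt 1.x style API assumed

BROKER = "pfa.example-ritsumei.jp"   # hypothetical PFA server hostname
TOPIC = "pfa/sensors/sensor-01/observations"

def anonymize_mac(mac: str, salt: str = "per-deployment-secret") -> str:
    """One-way hash of a MAC address, mirroring the anonymisation step
    performed on the Wi-Fi packet sensors before upload."""
    return hashlib.sha256((salt + mac.lower()).encode()).hexdigest()

client = mqtt.Client()
client.tls_set()                     # MQTT over SSL/TLS, as described above
client.connect(BROKER, 8883)
client.loop_start()

observation = {
    "sensor": "sensor-01",
    "ts": time.time(),
    "device": anonymize_mac("AA:BB:CC:DD:EE:FF"),
    "rssi": -61,
}
client.publish(TOPIC, json.dumps(observation), qos=1)
client.loop_stop()
client.disconnect()
```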
#### 1.2.6. PIAX
<table>
<tr>
<th>
**Asset name**
</th>
<th>
PIAX
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
PIAX (Peer-to-peer Interactive Agent eXtensions) is an open source
framework that integrates a P2P structured overlay network with an agent
platform. PIAX is also the core of the PIAX Testbed.
The overlay network enables pervasive devices to communicate with each other
efficiently, while the agent platform on top of the overlay network encourages
the devices to cooperate with one another. Consequently, a scalable and
efficient federated system can be realised not only in ordinary environments
but also in large-scale distributed environments (e.g. pervasive or cloud
environments) where various kinds of data and processes are located on each
device.
**Figure 6 - PIAX framework**
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Software platform
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
PIAX is the core networking platform of the PIAX Testbed and provides
networking features including peer discovery and messaging.
PIAX consists of two layers: a P2P structured overlay network layer and an
agent platform layer. The P2P layer offers several overlay networks such as
DHT (Distributed Hash Table), LL-Net (Location-based Logical P2P Network) and
Skip Graph (a toy sketch of the DHT lookup idea follows this table). The agent
platform supports mobile agents that are processed on the nodes and move
across the overlay network.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
PIAX is a Java class library that integrates a mobile agent platform with a
P2P structured overlay network. PIAX can therefore be integrated into
Java-based projects, which can then benefit from its powerful networking
features. PIAX agent programs can also be tested on the PIAX Testbed before
being deployed to a real environment; on the PIAX Testbed, measurements from
sensors connected to the JOSE Testbed can be used for testing.
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
* A new version, PIAX 3.0.0, should be released soon.
* A PIAX Testbed based on PIAX 3.0.0 will be deployed in April/May 2015.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
Owner: NICT
Responsible partner: ACUTUS
</td> </tr> </table>
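To clarify the structured-overlay idea behind PIAX's DHT layer, the following toy sketch shows a consistent-hashing ring lookup: each key is owned by its clockwise successor peer on a circular identifier space. This is a language-agnostic illustration of the concept written in Python, not PIAX's actual Java API.

```python
import bisect
import hashlib

def key_hash(key: str) -> int:
    """Map a key (or peer id) onto a circular identifier space."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** 32)

class DHTRing:
    """Toy structured overlay: each key is owned by the first peer
    clockwise from the key's position on the ring."""

    def __init__(self, peers: list[str]):
        self._ring = sorted((key_hash(p), p) for p in peers)

    def lookup(self, key: str) -> str:
        point = key_hash(key)
        idx = bisect.bisect(self._ring, (point, ""))
        return self._ring[idx % len(self._ring)][1]

ring = DHTRing(["peer-A", "peer-B", "peer-C", "peer-D"])
print(ring.lookup("sensor/osaka/temperature"))  # deterministic owner peer
```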
#### 1.2.7. JOSE
<table>
<tr>
<th>
**Asset name**
</th>
<th>
JOSE
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
JOSE provides a Japan-wide open testbed consisting of a large number of
sensors, SDN capabilities and distributed cloud resources. The facilities of
JOSE are connected via a high-speed network with SDN features. JOSE will
accelerate field trials of large-scale smart ICT services essential for
building future smart societies. JOSE has the following four characteristics:
1. A huge amount of computation resources
2. Dedicated 'sensor networks' provided via SDN
3. "Takeout" sensor facilities for users' own experiments
4. Coexistence of many field trial experiments
**Figure 7 - JOSE testbed**
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
IoT experiment service
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
* Distributed compute resources, used for sensor data analysis:
  * 400 physical servers available at 3 locations
  * 10 VMs run on each server, so 12,000 VMs are provided
* Distributed storage resources, used for sensor data storage:
  * 10 servers available at each of 5 locations, i.e. 50 storage servers providing 500 VMs
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
JOSE supports IEEE 1888, whose protocol and data format are already
standardised. JOSE also supports a RESTful interface (a hedged sketch of
writing sensor data through such an interface follows this table).
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
Currently, JOSE does not support user-defined agent functions on FIAP Storage
for JOSE, an extension of an IEEE 1888 implementation that shares data among
multiple FIAP Storage instances. An approach to exploiting JOSE as a
distributed data-analysis backend will be investigated and improved, and how
to utilise the SDN functionality of JOSE will also be investigated.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
Owner: NICT
Responsible partner: ACUTUS
</td> </tr> </table>
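The following hedged sketch illustrates writing a timestamped sensor value through a RESTful interface of the kind JOSE exposes. The endpoint and JSON layout are illustrative assumptions that merely mimic the IEEE 1888 point/value data model; the actual JOSE paths and wire format are testbed-specific.

```python
import datetime

import requests

# Hypothetical endpoint: JOSE's actual REST paths are testbed-specific.
# The payload mimics the IEEE 1888 point/value data model (a point id
# plus timestamped values) without claiming the exact wire format.
ENDPOINT = "https://jose.example-nict.jp/api/points"

point = {
    "pointId": "http://example.org/festival/room1/temperature",
    "values": [
        {"time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
         "value": "22.5"},
    ],
}

resp = requests.post(ENDPOINT, json=point, timeout=10)
resp.raise_for_status()
print("stored:", resp.status_code)
```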
#### 1.2.8. Tuba Living Lab
<table>
<tr>
<th>
**Asset name**
</th>
<th>
Tuba Living Lab
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
The TUBA is located in a public area in front of the Part-Dieu train station
in Lyon. The site is strategic, as almost 500,000 people pass through the area
every day. This position allows TUBA to get in touch with a great variety of
people and profiles.
The events TUBA organises attract this public, which constitutes a large panel
that TUBA can mobilise for different experiments.
On the ground floor is the Tuba LAB: a 180 sqm showroom fully open to
citizens. Everybody is invited to discover what makes the city smarter, to
experiment with new ideas and even to propose some! The Tuba LAB exhibits new
services and prototypes leveraging the data exposed by the city and its
partners. The domains covered are well-being, transportation, health and
culture.
On the first floor is the TubaMIX: 420 sqm dedicated to project holders,
TUBA's partners and any other public or private entities involved in
innovation and the Smart City.
**Figure 8 - Tuba living lab**
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Service, Place
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
The Tuba LAB could exhibit applications and services from other partners,
using Tuba and/or federated resources. The TubaMIX could co-design
applications and services with remote partners.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
Interoperability will be achieved through a common methodology and the use of
Open Data standards.
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
During the FESTIVAL project, the methodology will evolve to provide access to
the federation. The Tuba/Métropole de Lyon Open Data repository will evolve to
streamline the use of external partners' data.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
SOPRA/TUBA
</td> </tr> </table>
#### 1.2.9. Lyon’s Open Data Platform
<table>
<tr>
<th>
**Asset name**
</th>
<th>
Open Data platform
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
TUBA has access to the Métropole of Lyon's Open Data infrastructure, allowing
the partners to:
* use existing Open Data sets published by the Métropole and its service providers;
* create and access private repositories to inject and use custom data for a specific experiment.
This Open Data repository leverages the following technologies: JSON, OGC, CSW
and KML, and makes use of credentials when possible.
**Figure 9 - Lyon's Open Data Platform**
The domains covered are: transportation, public services, geographical data,
culture, economy, environment, urbanisation, equipment, accessibility and
demography.
These data are also provided by the service contractors of the Métropole.
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Software Platform, Service
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
The Open Data infrastructure could interoperate with other Open Data
platforms, could obtain its data from external testbeds, and could interact
live with computational resources.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
Interoperability will be achieved through Open Data and API technologies (a
hedged example of fetching open data through an OGC interface follows this
table).
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
During the FESTIVAL project, methodology will evolve to find the right
architecture to access the resources and design complex applications/services.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
SOPRA/TUBA
</td> </tr> </table>
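As an illustration of the OGC-based access mentioned above, the following hedged sketch fetches features from a WFS 2.0 endpoint as GeoJSON. The base URL and layer name are placeholders; only the standard WFS request parameters are assumed.

```python
import requests

# Hypothetical WFS endpoint: Lyon's open data platform exposes OGC
# services, but the exact base URL and layer names are deployment-specific.
WFS_URL = "https://data.example-grandlyon.fr/wfs"

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "transport:bus_stops",   # hypothetical layer name
    "outputFormat": "application/json",
    "count": 5,
}

resp = requests.get(WFS_URL, params=params, timeout=10)
resp.raise_for_status()
# Print the properties of the returned GeoJSON features.
for feature in resp.json()["features"]:
    print(feature["properties"])
```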
#### 1.2.10. End user engagement methodology
<table>
<tr>
<th>
**Asset name**
</th>
<th>
End user engagement methodology
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
A methodology for end user involvement, including ethics and privacy
protections and an evaluation framework for Quality of Experience, has been
developed and deployed within PROBE-IT (TRL 5) and BUTLER (TRL 6). The
methodology includes:
* Analysis of IoT ethics, privacy and data protection issues (BUTLER)
* Informed consent procedures (BUTLER)
* Co-creation methodologies (BUTLER)
* Impact assessment methodologies (BUTLER)
* Security Risk Assessment Framework (BUTLER)
* Deployment evaluation methodology (PROBE-IT)
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Service / Methodology
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
The necessity of involving multiple stakeholders that are not used to working
together in new use cases is characteristic of IoT innovation and its ability
to disrupt existing models and value chains. To be well accepted, newly
deployed solutions must be well understood by all involved stakeholders,
including the end users and citizens they will affect. In this respect,
co-creation mechanisms and the engagement of stakeholders throughout all
phases of a deployment are a necessity.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
The methodology can be applied/adapted to other ICT / IoT projects.
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
The model will be extended and applied in FESTIVAL and will gain maturity. The
model and approach will be promoted to other deployments (TRL 7).
* Creation of communication material (factsheets) to present key aspects and methodologies of user involvement in a short and rapidly understandable way
* Evolution of the informed consent procedures
* Set-up of a rapid Privacy Impact Assessment evaluation framework
* Evolution based on external inputs
* Support for co-creation experiments
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
inno
</td> </tr> </table>
#### 1.2.11. The Lab. in Knowledge Capital
<table>
<tr>
<th>
**Asset name**
</th>
<th>
The Lab. in Knowledge Capital
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
The Lab. is a showcase where the general public, including researchers,
creators, artists, students, senior citizens, homemakers and children, can
experience the latest technologies and interact with exhibitors.
The Lab. constitutes a space that attracts prototypes and world-leading
technologies from around the globe, and is a hub from which the latest strains
of culture emanate. Visitors not only get to see and touch ingenious
inventions, but are also given the chance to participate in the creative
process, as befits the description of this space as a laboratory.
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Location / Service
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
Communicators are the specialists who introduce Knowledge Capital and
interlink people with other people, things and information. At The Lab., they
are the ones who approach visitors, stir up interaction, and encourage the
deepening of new encounters and experiences.
Communicators also gather the comments and reactions of visiting members of
the public and feed this information back to companies, researchers and other
event organisers.
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
The Lab. can interoperate with other experiments once the fundamental devices
they require are installed.
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
The Lab. will enhance its performance as the participation of the general
public, companies, universities and research institutes increases. Through the
implementation and dissemination of the FESTIVAL project, The Lab. aims to
attract more entities to run various kinds of experiments, which will
ultimately benefit both the participants and Knowledge Capital.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
Knowledge Capital
</td> </tr> </table>
#### 1.2.12. Validation framework for platform based services
<table>
<tr>
<th>
**Asset name**
</th>
<th>
Platform quality assessment framework
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
A methodology for the evaluation of platforms based on software components
(Enablers) has been developed in the context of the EU FIWARE initiative as
part of the health use case.
The methodology identifies several analysis dimensions (a toy scoring sketch
follows this table):
* Readiness of Enabler implementations for use in software applications: black-box testing (BBT) of Enablers with model-based test case generation.
* Willingness of developers to adopt software components beyond the project: developers' quality of experience (DQoE).
* Ability of Enablers to be used in software applications and services that target the healthcare sector: internal interoperability (IIO).
* Ability of Enablers to be appropriated in the healthcare sector: e-health interoperability (HIO) within healthcare sector activities.
* Preparation of Enablers for sustained provision and use: reuse readiness level (RRL) of the Enablers.
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Service / Methodology.
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
Platforms involve several stakeholders having different interests and thus
different expectations from the proposed platform services. At the same time,
collection of performance indicators should be undergone in a way to reduce
resources required to collect and analyse the information. An innovation
socio-‐technological alignment matrix has been proposed.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
The methodology can be applied/adapted to other components based platforms
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
The model will be extended and applied in FESTIVAL and will gain maturity. The
model and approach will be promoted to other deployments (TRL 6).
* Adaptation of the FISTAR model to FESTIVAL specificities
* Extension of the model to more domains (being health-focused today)
* Evaluation of its relevance in the Japanese context
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
EGM
</td> </tr> </table>
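To make the analysis dimensions above more tangible, the following toy sketch aggregates per-dimension scores into an overall indicator. The 0-to-1 scale and the equal weighting are illustrative assumptions, not part of the published methodology.

```python
from dataclasses import dataclass

@dataclass
class EnablerAssessment:
    """Toy aggregation of the five analysis dimensions listed above.
    The 0..1 scores and equal weighting are illustrative assumptions,
    not part of the published methodology."""
    bbt: float   # black-box testing pass rate
    dqoe: float  # developers' quality of experience
    iio: float   # internal interoperability
    hio: float   # e-health interoperability
    rrl: float   # reuse readiness level (normalised)

    def overall(self) -> float:
        dims = (self.bbt, self.dqoe, self.iio, self.hio, self.rrl)
        return sum(dims) / len(dims)

print(EnablerAssessment(0.9, 0.7, 0.8, 0.6, 0.75).overall())
```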
#### 1.2.13. Engineering FIWARE-Lab
<table>
<tr>
<th>
**Asset name**
</th>
<th>
Engineering FIWARE-‐Lab
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
The Engineering FIWARE-Lab (https://FI-WARE.eng.it) is a cloud instance of
FI-WARE that allows users to deploy, configure and execute a set of Generic
Enablers (GE). The FIWARE-Lab allows virtual resources to be managed in order
to deploy and execute the GEs: it is possible to assign sets of virtual
resources to different projects/users and to manage network configuration,
virtual images and their virtual resources (RAM, CPU, storage, volumes).
The cloud infrastructure, hosted in the Engineering data centre located in
Vicenza (Italy), is based on OpenStack, an open source software for creating
cloud platforms. This FIWARE-Lab instance is directly managed by Engineering
and offers a specific environment and functionalities dedicated to the
FESTIVAL stakeholders. For instance, a set of preconfigured VMs of Generic
Enablers is available for the partners to perform experiments related to the
FESTIVAL use cases: the GEs can be used in an as-a-service approach, executing
them directly in the cloud environment, or they can be downloaded and deployed
in other environments. The Engineering team will also offer support in the
usage and management of the infrastructure.
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Software platform
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
The FI-Lab is an example of open innovation, making available the full
potential of the different components developed by the FI-WARE project: the
Generic Enablers. The GEs offer general-purpose functionalities in innovative
areas such as the Internet of Things, security, cloud, data management and
network infrastructure. Providing GEs in new contexts, and in particular to
the Japanese partners, will allow innovative ways of using them to be found.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
FI-WARE provides a tool, called FI-OPS, that simplifies the deployment, setup
and operation of FI-WARE instances by platform providers. In particular, some
tools are dedicated to expanding the FI-Lab network through the federation of
additional nodes (data centres), allowing multiple platform providers to
cooperate. The OpenStack APIs allow users to launch server instances, create
images, assign metadata to instances and images, create containers and
objects, and complete other actions in an OpenStack cloud (a hedged
openstacksdk example follows this table).
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
The main improvement to be achieved through FESTIVAL is the federation between
the FIWARE-Lab and the other involved testbeds. In particular, the possibility
of directly deploying a FIWARE-Lab instance on different platforms (e.g. the
JOSE platform) will be explored.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
Engineering
</td> </tr> </table>
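As a concrete illustration of the OpenStack APIs mentioned above, the following hedged sketch boots a preconfigured GE virtual machine using the openstacksdk library. The cloud profile name, image, flavor and network IDs are placeholders for values a FIWARE-Lab user would obtain from their own account.

```python
import openstack  # pip install openstacksdk

# Assumes a clouds.yaml profile named "fiware-lab" with credentials for
# the OpenStack-based FIWARE-Lab; names and IDs below are placeholders.
conn = openstack.connect(cloud="fiware-lab")

server = conn.compute.create_server(
    name="festival-ge-instance",
    image_id="IMAGE_ID_OF_PRECONFIGURED_GE_VM",
    flavor_id="FLAVOR_ID",
    networks=[{"uuid": "NETWORK_ID"}],
)
# Block until the VM is active, then report its address.
server = conn.compute.wait_for_server(server)
print("GE instance running at:", server.access_ipv4)
```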
#### 1.2.14. PTL: experimentation area in CEA
<table>
<tr>
<th>
**Asset name**
</th>
<th>
PTL: experimentation area in CEA
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
The experimentation area of the PTL (connecting technologies platform) is
located in the heart of CEA Grenoble. It comprises a 150 sqm modular building
and 1,300 sqm of urban area.
**Figure 10 - PTL - Connecting technologies platform**
These areas allow its core partners to test and validate experiments and
prototypes in a close-to-real environment.
The modular building allows rooms to be reorganised for experiment-specific
needs, and already provides many sensors for building monitoring, such as
temperature and humidity, as well as some of the most popular communication
protocols like KNX, LON and ZigBee.
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Place equipped with IoT devices
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
The PTL testbed allows small and large enterprises to test their latest
products in close to real life conditions. FESTIVAL project’s Experimentation
as a Service model will bring an innovative approach to the existing
experimentation methodology by providing interoperability and possible
federation and replication with other testbeds. The approach will be validated
by deploying different use cases identified in the project.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
Interoperability will be provided via the sensiNact platform, which will be
connected to platforms deployed in other testbeds. Experimentation as a
Service model will also play an important role for obtaining the
interoperability among the testbeds.
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
The FESTIVAL project will allow PTL to improve its experimentation methodology
and setup, as well as its replicability, thanks to the Experimentation as a
Service model.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
CEA
</td> </tr> </table>
#### 1.2.15. sensiNact platform
<table>
<tr>
<th>
**Asset name**
</th>
<th>
sensiNact IoT Platform
</th> </tr>
<tr>
<td>
**Asset Overview**
</td> </tr>
<tr>
<td>
CEA's IoT platform (sensiNact) is a set of enablers and services that provide
the means for building context-aware applications on top of smart connected
objects. It provides generic APIs to access the resources offered by IoT
devices. The platform integrates different IoT devices and communication
technologies in order to provide homogeneous access to the underlying
heterogeneous networks. The main advantage of the platform is its simplicity
of use and its support for existing IoT protocols.
**Figure 11 - sensiNact IoT Platform**
</td> </tr>
<tr>
<td>
**Type**
</td> </tr>
<tr>
<td>
Platform
</td> </tr>
<tr>
<td>
**Innovation**
</td> </tr>
<tr>
<td>
The modular approach of the platform makes it easily extensible, so it can be
straightforwardly enhanced with connections to testbeds. Its service-oriented
architecture facilitates its integration with other platforms and the adoption
of the Experimentation as a Service model. The support of various IoT
protocols is an advantage for easy integration of the physical testbeds
available in the project, which are equipped with a variety of IoT devices.
Support for new IoT devices can be added by creating the necessary protocol
bridge with fairly small effort.
</td> </tr>
<tr>
<td>
**Interoperability**
</td> </tr>
<tr>
<td>
The platform supports various IoT protocols such as CoAP, ZigBee, BLE,
EnOcean, KNX and Sigfox, as well as protocols for remote access to the
platform. In this way, the platform provides an abstraction of physical
devices, allowing higher-level applications to access them without being aware
of their technical details. To access the services available on the gateway
remotely, different protocols can be used, such as HTTP REST, JSON-RPC, Web
services and MQTT (a hedged JSON-RPC sketch follows this table). These
different connection options provide interoperable connections to other
testbeds.
</td> </tr>
<tr>
<td>
**Foreseen Improvements**
</td> </tr>
<tr>
<td>
sensiNact will integrate the Experimentation as a Service model of FESTIVAL,
which will allow sensiNact to be used as a testing platform for IoT
applications. The FESTIVAL project will also be an opportunity to test the
replicability of the platform on other testbeds.
</td> </tr>
<tr>
<td>
**Owner/Responsible partner**
</td> </tr>
<tr>
<td>
CEA
</td> </tr> </table>
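To illustrate the remote-access options mentioned above, the following hedged sketch performs a generic JSON-RPC 2.0 call against a gateway. The gateway URL, method name and parameter names are illustrative assumptions, not sensiNact's actual remote API.

```python
import itertools

import requests

# Hypothetical gateway URL and method name: sensiNact's actual remote
# API (HTTP REST, JSON-RPC, MQTT, ...) defines its own methods and paths.
GATEWAY = "https://sensinact-gw.example-cea.fr/jsonrpc"
_ids = itertools.count(1)

def jsonrpc_call(method: str, params: dict) -> dict:
    """Minimal JSON-RPC 2.0 client call over HTTP."""
    payload = {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    }
    resp = requests.post(GATEWAY, json=payload, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if "error" in body:
        raise RuntimeError(body["error"])
    return body["result"]

# Read one resource of one service of one device, whatever the
# underlying protocol (ZigBee, BLE, KNX, ...) happens to be.
print(jsonrpc_call("getResourceValue",
                   {"provider": "sensor-42", "service": "weather",
                    "resource": "temperature"}))
```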
### 1.3. Exploitable project items: first thoughts
The first stage of an exploitation plan is to clearly define the outputs that
the project will produce during its lifetime: although some concrete items are
well defined in the project scope, many others can be discovered only during
project execution. A set of possible exploitable items has been identified at
this stage of the project and is listed in the following sections. The item
identification process will continue throughout the whole duration of the
project, and the following table will be updated in subsequent releases of the
exploitation and business model deliverables to include new items that can be
exploited after the project ends.
In the following table, each item includes a description, the innovation
aspects and the exploitation opportunities beyond the project.
<table>
<tr>
<th>
**Exploitable item name**
</th>
<th>
**Description**
</th>
<th>
**Innovation**
</th>
<th>
**Exploitation beyond the project**
</th> </tr>
<tr>
<td>
**SmartSantander SmartShopping deployment**
</td>
<td>
A set of devices equipped with presence, environmental and radio-‐based
sensors will be deployed within several shops in the city.
</td>
<td>
A novel indoor/outdoor testbed deployment within real scenarios to support
experimentation based on indoor/outdoor localisation in shops. It will also be
integrated within the SmartSantander testbed to allow experiments with both
indoor and outdoor sensors.
</td>
<td>
The deployment of the smart shopping devices will pursue two exploitation
aspects:
On the one hand, it will introduce novel Bluetooth iBeacon services in the
city of Santander, stimulating consumption in the city centre shops.
On the other hand, the new indoor/outdoor positioning deployment aims to
attract the scientific community to the SmartSantander testbed, pursuing novel
results and increased
</td> </tr> </table>
<table>
<tr>
<th>
**Exploitable item name**
</th>
<th>
**Description**
</th>
<th>
**Innovation**
</th>
<th>
**Exploitation beyond the project**
</th> </tr>
<tr>
<td>
**Pedestrian flow analysis using BLE advertising packets**
</td>
<td>
Pedestrian flow analysis based on Wi-Fi packets is to be enhanced by
incorporating functionality that uses BLE advertising packets, transmitted
from a variety of wearable devices, to further improve the accuracy of
pedestrian flow analysis.
</td>
<td>
Wi-Fi packet-based pedestrian flow analysis is suitable for approximately
grasping both the flow and the stagnant states of pedestrians with
Wi-Fi-enabled devices. By incorporating BLE advertising-based pedestrian flow
analysis functionality, we envisage improving the accuracy and the speed of
analysis in the coming age of wearable computing devices.
</td>
<td>
We plan to maintain the Wi-Fi packet sensors already installed in an
underground shopping mall in the Osaka area during and after the project
lifetime.
</td> </tr>
<tr>
<td>
**JOSE Sensing foundation deployment**
</td>
<td>
JOSE sensing foundation will add sensor handling features to PIAX and JOSE
Testbed.
</td>
<td>
The JOSE sensing foundation supports experiments on the PIAX and JOSE Testbeds
with sensor-handling features and with sensor data from pre-existing,
Japan-wide sensors.
</td>
<td>
We will continue to maintain and improve the JOSE sensing foundation for
future experiments after the project lifetime.
</td> </tr> </table>
<table>
<tr>
<th>
**Exploitable item name**
</th>
<th>
**Description**
</th>
<th>
**Innovation**
</th>
<th>
**Exploitation beyond the project**
</th> </tr>
<tr>
<td>
**Constructing an application and investigating an SDN function on JOSE testbeds**
</td>
<td>
KSU is planning to exploit the JOSE testbed to construct a prototype Smart
City application using low-cost sensors (e.g. for current weather reports), in
order to investigate the architecture required by Smart City applications.
KSU has also been developing Pub/Sub middleware based on PIAX. The middleware
provides robustness through P2P functionality and optimises the packet
transfer path through SDN, specifically OpenFlow functionality. The system can
serve as prototype middleware for investigating the mapping between abstracted
application requirements and network parameters.
</td>
<td>
The prototype Smart City application will provide us with the typical
requirements for supporting Smart City experiments.
The developed middleware will provide robustness and efficiency
simultaneously. It also has the potential to integrate SDNs operated by
multiple policy domains.
</td>
<td>
KSU will continue to apply and develop the architecture and middleware in
future
research projects.
</td> </tr>
<tr>
<td>
**Federated Open Data and resources**
</td>
<td>
The possibility of using federated resources to experiment with a new service
</td>
<td>
Complex services require multiple resources not
available on site
</td>
<td>
Propose this federated model as an offering to Tuba partners
</td> </tr>
<tr>
<td>
**End user engagement methodology**
</td>
<td>
A methodology for end user involvement, including ethics and privacy
protections
</td>
<td>
A dedicated methodology for IoT deployments.
</td>
<td>
Inno will continue to apply and develop the methodology in future
research projects.
The methodology will be published as a project work
(to be reused by others).
</td> </tr> </table>
<table>
<tr>
<th>
**Exploitable item name**
</th>
<th>
**Description**
</th>
<th>
**Innovation**
</th>
<th>
**Exploitation beyond the project**
</th> </tr>
<tr>
<td>
**Socio-economic impact assessment framework**
</td>
<td>
An evaluation framework to assess the potential socio economic impact of IoT
deployments
</td>
<td>
A dedicated methodology for IoT deployments.
</td>
<td>
Inno will continue to apply and develop the framework in future research
projects.
The methodology will be published as a project work
(to be reused by others).
</td> </tr>
<tr>
<td>
**Quality evaluation framework for EaaS built upon federated testbeds**
</td>
<td>
A set of KPIs for evaluating the relevance and quality of the EaaS offer
</td>
<td>
A dedicated, user-oriented methodology for platform evaluation
</td>
<td>
EGM will continue making use of the proposed methodology within other testbeds
</td> </tr>
<tr>
<td>
**Active Lab.**
</td>
<td>
This exhibition area introduces exciting technologies and activities from
corporations, universities, and other institutions.
</td>
<td>
Implementation and dissemination of the experimentation with feedbacks from
general public.
</td>
<td>
Knowledge Capital will accept the implementation of different kinds of
experiments, and the accumulated knowledge will be carried over and reused in
the future.
</td> </tr>
<tr>
<td>
**Active Studio**
</td>
<td>
A venue used for workshops, seminars, and other kinds of
public presentation.
</td>
<td>
Equipped with JGN-‐X and other devices that encourage interactive
communication with the visitors.
</td>
<td>
A number of events and workshops held in the Active Studio continuously
attract public attention, which should result in improvements to its
performance through the participation of other companies, organisations and
the general public during and beyond the project.
</td> </tr> </table>
<table>
<tr>
<th>
**Exploitable item name**
</th>
<th>
**Description**
</th>
<th>
**Innovation**
</th>
<th>
**Exploitation beyond the project**
</th> </tr>
<tr>
<td>
**xEMS system**
</td>
<td>
Low-latency, high-speed, reliable, secure, stable and interoperable Energy
Management Systems for various facilities (xEMS: x = Building, Community,
House, Factory, Datacenter, ...).
</td>
<td>
Integrating various existing local EMSs into an ASP-based, large-scale EMS to
further improve energy efficiency and cost reduction. The obtained EMS data
will be exploited as Open Data.
</td>
<td>
Applying our system to real-world EMSs based on the experimental results
obtained on the federated IoT testbeds.
</td> </tr>
<tr>
<td>
**SNS-‐like EMS system**
</td>
<td>
A novel EMS architecture that exploits the concept of an SNS to realise direct
device-to-device communication.
</td>
<td>
Various operations are conducted via “chatting” among system actors such as
home appliances, sensors and humans (operators and end users). A broker-based
pub/sub protocol (MQTT) is used, and devices communicate with each other
autonomously (a hedged sketch of such pub/sub chatting follows this table).
The obtained EMS data will be exploited as Open Data.
</td>
<td>
Applying our system to large-scale EMS environments such as data centres,
which have huge numbers of servers, power supplies, air conditioners and
various kinds of sensors/actuators.
</td> </tr> </table>
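The following toy sketch illustrates the SNS-like “chatting” idea from the table above: an actuator subscribes to a shared MQTT topic and reacts autonomously to posts from sensors. The broker, topic and message fields are illustrative assumptions, not the actual system's protocol.

```python
import json

import paho.mqtt.client as mqtt  # paho-mqtt 1.x style API assumed

# Hypothetical broker and topic: the SNS-like EMS "chatting" channel is
# sketched here as plain MQTT pub/sub among autonomous devices.
BROKER = "ems-broker.example.jp"

def on_message(client, userdata, msg):
    """An air conditioner 'reads the chat' and reacts autonomously."""
    post = json.loads(msg.payload)
    if post.get("sensor") == "room1/temperature" and post["value"] > 28.0:
        client.publish("ems/chat", json.dumps(
            {"actor": "aircon-1", "action": "cooling_on"}))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("ems/chat", qos=1)
client.loop_forever()  # devices keep chatting until stopped
```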
<table>
<tr>
<th>
**Exploitable item name**
</th>
<th>
**Description**
</th>
<th>
**Innovation**
</th>
<th>
**Exploitation beyond the project**
</th> </tr>
<tr>
<td>
**Big data analysis system**
</td>
<td>
An efficient distributed system for data mining. Personalisation is achieved
by clustering algorithms running on top of the distributed system.
</td>
<td>
We develop a data partitioning technique that reduces the communication cost
and balances the load among different cores/computers (a toy sketch of the
partitioning idea follows this table). We also develop an efficient technique
for reducing contention in parallel data mining.
<td>
Applying our system to other application domains, such as improving the
quality of care for patients with cognitive impairment.
</td> </tr>
<tr>
<td>
**Smart Shopping Santander app**
</td>
<td>
A mobile application will be developed in order to deliver offers and
discounts generated by shops in the city center.
</td>
<td>
The use of Bluetooth technology as communication channel, based on proximity
among users and shops.
</td>
<td>
From City Council point of view, the objective is double: -‐ Fostering and
reinforcing the consumption in the city centre, where due to several factors
including crisis and shopping centers located on the outskirts, by taking
advantage of new technological solutions, and -‐ second, getting citizens
involvement in this initiative, as key factor of the project.
Finally, this app would be integrated within the city app (SmartSantander
Augmented Reality), once
FESTIVAL project ends.
</td> </tr> </table>
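To make the partitioning idea from the big data item above more concrete, the following toy sketch splits a dataset across worker processes, computes communication-free local summaries and merges them in a single cheap step. It only illustrates the general pattern; the actual system's partitioning and contention-reduction techniques are not reproduced here.

```python
from multiprocessing import Pool

def partition(records, n_parts):
    """Assign each record to a partition, balancing load across workers."""
    parts = [[] for _ in range(n_parts)]
    for i, rec in enumerate(records):
        parts[i % n_parts].append(rec)   # round-robin keeps sizes balanced
    return parts

def local_summary(part):
    """Per-worker pass that needs no communication with other workers."""
    return sum(part), len(part)

if __name__ == "__main__":
    data = list(range(1_000_000))
    with Pool(4) as pool:
        sums_counts = pool.map(local_summary, partition(data, 4))
    total = sum(s for s, _ in sums_counts)
    count = sum(c for _, c in sums_counts)
    print("global mean:", total / count)   # single cheap merge step
```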
<table>
<tr>
<th>
**Exploitable item name**
</th>
<th>
**Description**
</th>
<th>
**Innovation**
</th>
<th>
**Exploitation beyond the project**
</th> </tr>
<tr>
<td>
**sensiNact platform**
</td>
<td>
An IoT platform which provides a communication hub among various IoT protocols
as well as a programming model for IoT
applications.
</td>
<td>
Its modular and service oriented approach allows easy integration of new
protocols, as well as rapid prototyping of
IoT applications.
</td>
<td>
FESTIVAL project will allow validating the usability and replicability of the
sensiNact platform in different testbeds using various new technologies. The
feedback from experimenters will allow improving sensiNact and adding new test
features to the platform.
</td> </tr>
<tr>
<td>
**Smart Image sensors**
</td>
<td>
A smart camera that can embed various image sensors and high-level image
processing. Already-developed image sensors can capture only the relevant
image features describing the content of an observed scene, while taking care
of privacy aspects.
</td>
<td>
The modularity of the smart camera and its capability in terms of embedded
processing allow a wide variety of applications. For instance, the camera can
be designed to be low power and privacy friendly. Alternatively, it can be
designed to provide high-‐level interpretation of the scene.
</td>
<td>
In the FESTIVAL project, a mock-up of a smart camera using off-the-shelf
components (i.e. a commercial image sensor, an FPGA and an embedded computer)
and smart imagers designed at CEA will be developed and deployed. The goal is
to test the smart camera architecture in different IoT testbeds of the
project, such as a public city area or a train station. This will allow the
validation of CEA's smart sensor approaches and the transfer of this
technology to industrial partners after the project.
</td> </tr>
<tr>
<td>
**Exploitable item name**
</td>
<td>
**Description**
</td>
<td>
**Innovation**
</td>
<td>
**Exploitation beyond the project**
</td> </tr>
<tr>
<td>
**Federated Open Data Catalogue**
</td>
<td>
To be implemented in WP2, this web portal will be a single point of access for
all the open data of the testbeds. The Federated Open Data Catalogue will also
provide services to access open data from external systems and applications in
a standard way.
</td>
<td>
The Federated Open Data Catalogue will allow data to be accessed and searched
in a federated way. The open data stored in different repositories will be
available through a single portal with common data models and standards.
</td>
<td>
This asset can be exploited after FESTIVAL in future projects, but it can also
be proposed as a business product for the public-sector market.
</td> </tr>
<tr>
<td>
**Testbed Federation API and technologies**
</td>
<td>
One of the main goals of the FESTIVAL project is to achieve technical
federation among different testbeds using common and homogeneous APIs and
technologies.
</td>
<td>
The main innovation of this item is the federation of completely different
testbeds and platforms.
</td>
<td>
The federation technologies defined in FESTIVAL can be proposed as specific
standards or exploited in the service and platform integration domain for the
development of commercial products.
</td> </tr> </table>
### 1.4. Single partners' exploitation plans
#### 1.4.1. CEA
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
CEA-LETI is one of the laboratories of the Technological Research Division of
CEA. Nearly 1,800 people serve innovation and the transfer of technology in
key domains such as ICT, wireless communications, security, and the creativity
and usage of new technologies. More than 85% of its activity is dedicated to
research carried out with external partners. The laboratory secures more than
170 patents every year and has already sparked the creation of nearly thirty
high-technology start-ups. CEA-LETI is today one of the main contributors to
European projects in the area of the Internet of Things and smart ICT
services, such as SENSEI (coordinator), IOT-A, IOT-I, SMART-SANTANDER, BUTLER
(technical coordinator), EXALTED, OUTSMART, and the EU-Japan project ClouT
(coordinator).
CEA is a multidisciplinary institute, several labs of which participate in the
project: i) a software lab that has been involved in and coordinated many
projects in the IoT domain, thus having broad experience of IoT protocols and
platforms in Europe and worldwide; ii) an experimentation and integration
platform lab, which provides a physical testbed for IoT applications; iii) a
dedicated imaging lab with extensive experience and competencies in smart
cameras and image sensors; and iv) an art & science division where artists and
designers meet scientists to produce breakthrough innovations responding to
the real expectations of our society.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
The FESTIVAL project will give CEA's IoT platform and imaging sensors the
opportunity to be validated on the project's different testbeds, each with its
particular specificities and use cases. The FESTIVAL project will also allow
the Experimentation as a Service model to be applied to CEA's PTL
experimentation testbed, whose aim is to speed up the development and
marketing of innovative products integrating advanced microelectronics
technologies in the emerging and strategic fields of health, housing and
transport, through the provision of technology platforms and associated
expertise.
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
The FESTIVAL project will primarily improve CEA's testing skills in the IoT
domain. CEA will learn from the real-life IoT deployment experiences of the
project partners, in addition to having the possibility of experimenting with
the partners' IoT testbeds. The project will also extend CEA's competencies
with the technologies used on those testbeds. Last but not least, CEA will
benefit from the project results obtained in terms of user involvement in IoT
experiments.
</td> </tr>
<tr>
<td>
**Individual exploitation**
**intentions**
</td>
<td>
CEA plans to exploit the platform commercially with its industrial partners.
The results of the evaluation will determine the robustness of the approach
and help better define its business plan, and they will be published at
scientific events. FESTIVAL's Experimentation as a Service model will give CEA
the opportunity to test its future offer on the reuse of these platforms by
regional and national SMEs that need such platforms to test their innovative
applications.
</td> </tr> </table>
#### 1.4.2. Engineering Ingegneria Informatica S.p.A.
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
Engineering Group is a global IT player, the largest in Italy, and a leader in
the provision of complete integrated services throughout the software value
chain. The group delivers IT innovation to more than 1,000 large clients, with
a complete offer combining system and business integration, outsourcing, cloud
services, consulting and proprietary solutions. Engineering's data centres
offer business continuity and IT infrastructure management for about 15,000
servers and 230,000 workstations.
Engineering holds different responsibilities within the international research
community, including the technical and overall coordination of large research
projects and consortia. In particular, the company is a core partner of EIT
ICT Labs in Italy (European Institute of Innovation and Technology), focused
on leveraging ICT for quality of life; a founding partner of the Future
Internet PPP initiative; and a member of the board of EOS (European
Organisation for Security). Engineering is one of the partners that built and
currently supports the FIWARE platform: in particular, it is working to build
the FIWARE Open Source Community to foster and support the evolution of
standards for Smart Cities and their spread worldwide. The FIWARE Open Source
Community is expected to be fully operational at the end of Q2 2015.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
The FESTIVAL project will be a concrete chance to improve the FIWARE platform
and the FIWARE-Lab by adding new components (i.e. Generic Enablers) or
extending the open specifications to support federation with different
testbeds and platforms.
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
The skills that will be improved through participation in the FESTIVAL
activities are mainly related to FIWARE platform deployment and integration
with the external testbeds. Other competences will also be acquired thanks to
the research on open data, defining new data models and standard APIs suitable
for federating information among the different pilot sites.
</td> </tr>
<tr>
<td>
**Individual exploitation**
**intentions**
</td>
<td>
ENG will take FESTIVAL as a real opportunity to consolidate its role within
the FIWARE world and to expand the use of FIWARE into the different domains
covered by the project experimentations. It is also interested in exploring
opportunities to interoperate with other platforms, highlighting FIWARE's
flexibility and enlarging the FIWARE ecosystem by involving Japanese
stakeholders.
</td> </tr> </table>
**1.4.3. University of Cantabria**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
The University of Cantabria and, concretely, its Network Planning and Mobile
Communications Laboratory group have a strong background in the research of
wireless technologies, such as data transmission techniques, mobile networks,
traffic engineering and network management. In recent years, the group has
increased its research in the IoT and smart city areas with projects like
SmartSantander, Lexnet or EAR-IT, creating, promoting and enhancing a
unique-in-the-world urban testbed of IoT-connected devices. The testbed is
also part of several federation initiatives seeking to reach as much of the
scientific IoT research community as possible. As part of FESTIVAL, the
University of Cantabria will be able to expand the SmartSantander testbed,
increasing international scientific collaborations. The University of
Cantabria will also support FESTIVAL's federation objective by sharing its
experience from previous federation projects.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
The University of Cantabria will increase the possibilities of the
SmartSantander testbed by federating it with other testbeds in Europe and
Japan. Additionally, as part of the Smart Shopping use cases, the University
of Cantabria expects to increase the SmartSantander testbed's capabilities and
services by integrating new sensor devices in real scenarios to experiment
with indoor/outdoor positioning.
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
Through the project results, the University of Cantabria will expand its
research to include a new field, indoor/outdoor positioning. Additionally, new
learning about external testbeds is expected as part of the federation work
carried out within FESTIVAL. This will make it possible to experiment with
more resources in new areas by using several testbeds within the project.
</td> </tr>
<tr>
<td>
**Individual exploitation**
**intentions**
</td>
<td>
The University of Cantabria pursues increasing the scientific production of
the institute by several means:
* Novel scientific papers, posters and conference contributions as a result of the work carried out in the project.
* Increased possibilities in research fields for students of the university, such as final-year degree projects, master's theses and
PhDs.
</td> </tr> </table>
**1.4.4. Ritsumeikan University**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
The members of the Ubiquitous Computing and Networking Laboratory at
Ritsumeikan University have a strong background in research on indoor
positioning technologies using Wi-Fi, BLE, PDA, and hybrids of them, as well
as wearable computing and pedestrian flow analysis methods using Wi-Fi packet
sensors. As a member of the FESTIVAL project, our research interest lies in
exploring how pedestrian flow information can be effectively utilized in
experimentation testbeds in the context of smart shopping and smart cities
with different cultural backgrounds.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
Ritsumeikan University expects to deploy its system onto the JOSE testbed in
Japan and, through possible federation with European testbeds, to further
integrate it with IoT infrastructure and investigate novel applications and
services.
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
Ritsumeikan University expects to acquire information about and experience
with IoT testbeds actively used in Europe, enabling further collaboration with
research partners worldwide during and after the FESTIVAL project.
</td> </tr>
<tr>
<td>
**Individual exploitation**
**intentions**
</td>
<td>
Ritsumeikan University intends to utilize experimentation opportunities using
a federated testbed between the EU and Japan to further improve the system
deployed in existing experimentation fields in the Osaka area and to produce
scientific results.
</td> </tr> </table>
**1.4.5. Acutus Software, Inc.**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
Acutus Software is a software development company. The company provides custom
software development services in the following areas:
* Video transmission for Android/iOS applications
* HDTV video transmission systems
* High-quality voice transmission systems
* Low-latency TV conference systems
* Network monitoring software
* Tuning of high-definition video for software CODECs
* P2P platforms
Acutus Software will provide support for experimentations on the PIAX and
JOSE testbeds and the development of the required software components.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
Acutus Software will increase the usability of the PIAX and JOSE testbeds by
developing the required software components and federating them with other
testbeds in Europe and Japan.
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
Through the project results, Acutus Software will improve its own skills and
knowledge of sensing platforms.
</td> </tr>
<tr>
<td>
**Individual exploitation intentions**
</td>
<td>
Acutus Software aims to improve the sensing features of the PIAX and JOSE
testbeds and to produce scientific papers in the area. In addition, Acutus
Software will collect IoT and sensing-platform know-how for future business
opportunities.
</td> </tr> </table>
**1.4.6. KSU**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
Kyoto Sangyo University (KSU) is one of the leading private universities in
Japan. KSU has a project developing a Software Defined Network (SDN) aware
Pub/Sub system. The Cyber Kansai Project (CKP) is a joint research consortium
of commercial sectors and academic entities in Japan; several CKP board
members belong to KSU. Its research topics are focused on leading-edge
technologies for the next generation Internet.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
KSU provides the network environment in the GFO (Grand Front Osaka) area and
connects it to testbeds such as JOSE and PIAX. KSU will construct a
fundamental Smart City application for investigating the JOSE testbed, and
will also investigate the SDN functions of the JOSE testbed by applying
existing Pub/Sub middleware.
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
KSU will gain significant experience in the following competences and skills:
* knowledge of IoT
* knowledge of JOSE and PIAX testbed
* IoT application use case analysis
* IoT testbed analysis
</td> </tr>
<tr>
<td>
**Individual exploitation intentions**
</td>
<td>
KSU will use the JOSE testbed to construct a prototype Smart City application
and to investigate the SDN functionality of JOSE.
</td> </tr> </table>
**1.4.7. SOPRA**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
Sopra Steria, a European leader in digital transformation, provides one of the
most comprehensive portfolios of end-to-end service offerings in the market:
consulting, systems integration, software development, infrastructure
management and business process services.
Sopra helps its customers in their digital transformation by designing,
building and operating key business services.
Sopra Steria is a key founder of the Tuba Living Lab and is part of a mixed
consortium of public and private entities: the Metropole of Lyon, the
Rhône-Alpes Region, major companies such as VEOLIA, KEOLIS, EDF, ERDF and SFR,
SMEs, competitiveness clusters, research laboratories and start-ups.
Sopra Steria works with the Tuba to experiment with new services related to
Health, Transportation and Public Services.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
Sopra Steria is dedicated to offering the best experimentation tools to its
partners and customers. Sopra Steria, the Tuba and the FESTIVAL partners can
therefore mutually benefit from:
* Federated Resources made accessible to experimenters, through interoperability
* User/Experimenter access made accessible to FESTIVAL partners
* A better testing/experimenting methodology and tools, based on the measure of performance done by Tuba
* The building and operating of an Experimentation as a Service instance
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
Sopra Steria provides, through the Tuba, its expertise and skills in
communication, project management and innovation management.
</td> </tr>
<tr>
<td>
**Individual exploitation**
**intentions**
</td>
<td>
Sopra Steria plans to use the FESTIVAL end-to-end federation to design better
services and optimize their time-to-market.
FESTIVAL resources could become part of the economic model of selected
projects for its customers and partners.
The cultural dimension of an international federation also helps in
understanding local requirements and issues with organizations and end users.
</td> </tr> </table>
**1.4.8. Inno**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
The inno group is a leading strategic management consultancy operating in
nearly all European countries, with offices in Karlsruhe (Inno AG), Rostock,
Berlin, Sophia-Antipolis and Stockholm. Inno offers a multi-national, highly
qualified team of more than 50 consultants. Over the last 20 years, inno has
combined highly specialized expertise, creativity and pragmatism to assist
more than 500 clients all over Europe. One of inno's core activities is to
provide management, dissemination and exploitation support to the scientific
leaders of complex inter-institutional and trans-national projects, with a
particular focus on ICT. This includes support in consortium and knowledge
management, IPR issues, dissemination and exploitation of research results,
marketing and public relations activities, event organization, and the
coordination of industrial case studies and the animation of working groups
and industrial panels. The inno group also runs a patent commercialization
office in Germany.
Inno has over 15 years of experience in the implementation of EU-wide
dissemination campaigns.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
* End user engagement tools
* Socio economic impact assessment framework
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
The following competences and skills will be improved through the project:
* End user engagement methodology
* Socio economic impact assessment methodologies
* Knowledge of the IoT and FIRE ecosystem
* Knowledge of Japanese IoT ecosystem
* Research and Innovation Project dissemination and communication
* Exploitation and support to innovation
* Project management
</td> </tr>
<tr>
<td>
**Individual exploitation**
**intentions**
</td>
<td>
Inno plans to reuse the knowledge and experience gained through participation
in the FESTIVAL project to reinforce its expertise in the Future Internet
experimentation innovation ecosystem and its potential socio-economic impact.
This will fuel future consultancy business development in helping public
authorities take up and support FI innovations.
</td> </tr> </table>
**1.4.9. Easy Global Market**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
EGM provides solutions and services to develop market confidence in
technologies, making the global market easy for companies pursuing
globalisation. EGM is specialised in validation, interoperability,
certification and label programmes, including for the FIRE and IoT areas. EGM
works with state-of-the-art testing and interoperability tools, validation
approaches and advanced techniques, drawing on the experience gained by EGM's
directors in more than 25 FP/H2020 projects and in designing more than 10
worldwide label or certification programmes. EGM is currently involved in 7
FP7 and H2020 projects, including IoT (SA Smart Action), H2020 U-TEST on
model-based testing for CPS, the FIRE Future Internet Experiment on IoT
testbeds (i.e. H2020 FIESTA), FIWARE (FICORE) and the FI-PPP use case project
FI-STAR. EGM is a founding member of the IoT Forum and leads one of its three
WGs, on "market confidence".
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
* Validation tools and methods for federated testbeds
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
The following competences and skills will be improved through the project:
* Interoperability events organisation
* Data and Semantic interoperability
* Knowledge of ongoing standardisation for IoT
</td> </tr>
<tr>
<td>
**Individual exploitation**
**intentions**
</td>
<td>
EGM intends to better identify the forthcoming standards of interest for IoT
on a worldwide scale and to support the development of tools and methods
related to their conformance and interoperability evaluation. These tools
would be used in the possible development of labels and certifications within
the IoT sphere.
</td> </tr> </table>
**1.4.10. Knowledge Capital**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
Knowledge Capital is a center for intellectual creation, where businesspeople,
researchers, creators, and ordinary people come together to create new value
by exchanging and combining knowledge and ideas. The center is fully equipped
with facilities for interpersonal exchange, such as offices of various sizes,
a salon, labs, showrooms, a theater, event spaces, and a convention center.
The name Knowledge Capital denotes the facilities, the organization, and the
activities themselves. Knowledge Capital goes beyond the conventional focus on
the economy to generate brand new activities that can emerge only through
human interaction, believing this is the way to create innovative culture,
ideas, goods, and services.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
Knowledge Capital offers the experimentation location for the FESTIVAL
project, along with close interaction and communication with the general
public and participants, and coordinates the use of other Knowledge Capital
facilities for the EU-Japan collaboration project, including other companies
involved in Grand Front Osaka. Through these contributions, Knowledge Capital
will be able to reinforce its partnership with participants and the general
public. The implementation of different kinds of experimentations, utilizing
the knowledge acquired during this project, will bring about further
engagement and interaction.
</td> </tr>
<tr>
<td>
**Individual exploitation**
**intentions**
</td>
<td>
Knowledge Capital intends to support and facilitate the implementation of
experimentation by realising the requirements of the other partners, in order
to secure the performance of the FESTIVAL experimentations.
</td> </tr> </table>
**1.4.11. Osaka University**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
The Osaka University research team is a leading expert group on Big Data
technology and Green ICT, including energy management, smart grids, and
information protocols. It promotes the communication interface
standardization project for interoperability supported by the Ministry of
Internal Affairs in Japan; one of the members of Osaka University has been
chairing this standardization effort. In addition, in terms of Big Data
technology, Osaka University has achieved world-class results on high-speed
graph mining algorithms and distributed processing platforms.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
Osaka University will develop a novel EMS protocol, based on a web-based
communication protocol, for realizing large-scale xEMS that integrates
existing EMS sites in buildings, datacenters, factories and homes. An SNS-like
EMS will also be constructed for direct communication among devices, sensors
and actuators, based on the existing MQTT protocol over the WebSocket protocol
(see the sketch after this table).
We will also develop two techniques for a Big Data analysis system and apply
the system to smart shopping applications. The first technique is a data
partitioning technique that reduces the communication cost and balances the
load among different cores/computers. The second one reduces contention in
parallel data mining processing.
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
Osaka University will gain significant experience and technical/non-technical
insight into IoT testbeds and their utilization for establishing smart energy
and smart shopping architectures. Osaka University will also build close
relationships with European research organizations in the FESTIVAL project
regarding IoT testbeds, for further research collaboration.
</td> </tr>
<tr>
<td>
**Individual exploitation intentions**
</td>
<td>
Osaka University will utilize the experimental experience and obtained results
to build real-world EMSs such as FEMS (Factory EMS), DEMS (Datacenter EMS) and
BEMS (Building EMS), as well as CEMS (Community EMS) encompassing them.
Standardization of the EMS communication protocol is also an important
exploitation goal for Osaka University.
In addition, Osaka University will run smart shopping experiments, evaluate
the effectiveness of the personalization and the efficiency of the Big Data
analysis system, and feed the results back into the system.
</td> </tr> </table>
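The SNS-like, MQTT-over-WebSocket messaging mentioned in the technical outcomes above can be pictured with a few lines of client code. The sketch below assumes the paho-mqtt 1.x Python API; the broker host, port and topic names are illustrative assumptions, not FESTIVAL or Osaka University systems.

```python
# A minimal sketch of device-to-device messaging over MQTT on top of WebSocket,
# in the spirit of the SNS-like EMS described above. Assumes the paho-mqtt 1.x
# API; broker host, port and topics are illustrative, not FESTIVAL systems.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # e.g. an actuator reacting to a reading published by a sensor it "follows"
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(transport="websockets")  # MQTT over WebSocket, not raw TCP
client.on_message = on_message
client.connect("broker.example.org", 9001)    # assumed WebSocket listener port
client.subscribe("ems/home1/sensors/#")       # "follow" a device, SNS-style
client.loop_start()

# a sensor publishing its state to its followers
client.publish("ems/home1/sensors/power", "1234 W")
```

Running MQTT over WebSocket rather than raw TCP allows such EMS clients to work from browsers and through ordinary HTTP infrastructure, which fits the direct device/sensor/actuator communication described above.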
**1.4.12. Japan Research Institute for Social Systems**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
JRISS is a private company dedicated to fundamental technologies applied to
civil engineering and information science, excelling in the development of
information systems as well as urban and transportation planning.
We provide a linked-data-oriented digital signage system with touch-panel
operation that is already installed at many railway stations. This signage
system provides information about railway services, digital maps of the areas
around stations and railway timetables, and also displays multimedia
advertisement messages when idle.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
Our main mission is project management. Our research interest lies in how to
apply IoT technology in actual urban development projects.
JRISS and Ritsumeikan University have data that can be shared in the FESTIVAL
project, gathered from Wi-Fi packet sensor experiments in traffic surveys on
car traffic flow and pedestrian flow analysis.
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
JRISS provides legal and economic problem-solving skills for installing IoT
technology in the real world.
</td> </tr>
<tr>
<td>
**Individual exploitation intentions**
</td>
<td>
We intend to utilize experimentation opportunities using a federated testbed
between the EU and Japan in order to explore its applicability in the real
world.
</td> </tr> </table>
**1.4.13. City of Santander**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
The city of Santander is the capital of the Cantabria region, located in the
north of Spain, with a current population of 174,000 inhabitants. The City
Council is strongly committed to innovation and is working to provide more
efficient, citizen-centred city management through the use of new
technologies.
The city participates in diverse initiatives related to smart cities. Among
them, the SmartSantander project has marked a turning point in the way
innovation is conceived and organised in the city. Thus, Santander is well
known as a unique living lab in which to experiment with new technologies,
applications and services. Currently, it is supporting other European
projects, such as ClouT and FESTIVAL.
A sustainability model has been developed based on the creation of a City
Platform, which will be fed with data coming from all the urban services. In
order to ensure this data provision, urban service tenders will include
innovation clauses, as has already occurred with the waste and water
management services.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
Santander Municipality will:
* Improve the existing mobile app through the smart shopping use case, by adding new functionalities,
* Include smart shopping use case outputs in the current Open Data catalogue, adding new categories which may be used to develop new applications and/or services,
* Improve the existing IoT infrastructure through federation with other EU and JP testbeds, which may lead to future projects,
* Increase the IoT infrastructure by integrating new sensors, which will be used not only in this project but also in future ones.
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
Santander Municipality expects to improve the following competences and
skills, related to providing more efficient, citizen-centred city management
through the use of new technologies:
* Improve the relationship with market associations and shopkeepers, listening to their needs and providing them with new tools to foster consumption in the city centre,
* Improve communication with citizens, reinforcing their involvement as key actors in the use case,
* Take advantage of the federation of EU and JP testbeds, which may provide not only new collaboration opportunities but also the development of new services/apps.
</td> </tr>
<tr>
<td>
**Individual exploitation**
**intentions**
</td>
<td>
The purpose of the field trials for Santander is to start delivering valuable
services to citizens. Santander City Council is currently drawing up the Smart
City strategy that it will follow over the next years. As mentioned above, the
main goal of this project is to foster consumption in the city centre shops,
together with securing citizen involvement.
</td> </tr> </table>
**1.4.14. West Japan Marketing Communications Inc.**
<table>
<tr>
<th>
**Partner Profile**
</th>
<th>
West Japan Marketing Communications Inc. (jcomm) is a Japanese advertising
agency and a subsidiary of West Japan Railway Company. We handle all types of
advertising and publicity, including for newspapers, magazines, TV, radio,
transportation and sales promotion. We contribute exclusive advertising
solutions at train stations and the buildings around them.
</th> </tr>
<tr>
<td>
**Technical outcomes**
</td>
<td>
Our main mission is to provide the experimental location for the FESTIVAL
project while protecting personal information in the common space; this policy
will also apply to the FESTIVAL experiments.
Jcomm and JRISS have succeeded in the joint development of a touch-panel
digital signage system. Taking advantage of this relationship, we will also
work on solutions for the FESTIVAL project.
</td> </tr>
<tr>
<td>
**Competence and skills to be improved**
</td>
<td>
Jcomm offers experimentation locations for the FESTIVAL project, for example
train stations and the buildings around them. In this area we have installed
many digital signage units for the provision of advertising and railway
information. The digital signage system uses Wi-Fi and WiMAX technologies. We
can thus help location owners try new communication systems.
</td> </tr>
<tr>
<td>
**Individual exploitation**
**intentions**
</td>
<td>
We intend to utilize experimentation opportunities using a federated testbed
between the EU and Japan and to explore its applicability at train stations
and the buildings around them.
</td> </tr> </table>
## 2\. Experimentation as a Service ecosystem
The FESTIVAL project is based on the Experimentation as a Service (EaaS)
approach. In order to define the exploitation opportunities and the business
model for the services and products that the project will provide as outcomes,
it is fundamental to define and study the possible EaaS-based ecosystem of
entities and processes. This section presents an initial definition of this
ecosystem, describing the processes necessary to collect the resources to
create federated testbeds and to manage and run experiments, as well as the
user roles and the concrete stakeholders that participate in the process. In
this version, our analysis is at an early stage and will be improved and
updated in the next deliverables related to business models.
### 2.1. EaaS processes and stakeholders
The first step in defining the "Experimentation as a Service" ecosystem is to
identify the processes and the entities involved in it. The following tables
give a brief description of the four main identified entities and the related
process groups, which will be presented in detail in the next sections. It is
important to note that the processes listed in this chapter are not only the
ones to be executed in the FESTIVAL project; they also include processes that
could be present in a generic Experimentation as a Service ecosystem.
<table>
<tr>
<th>
**Entities**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Resource**
</td>
<td>
A generic basic IT or non-IT resource that can be part of an asset. Examples
of IT resources are servers, virtual machines and network connections, but the
same category also includes human resources and physical items. Resources can
usually be dynamically assigned or released during an experiment.
</td> </tr>
<tr>
<td>
**Asset**
</td>
<td>
The asset represents a complex item that can be used to compose a testbed:
examples of assets are a software platform, a physical space, an open data
repository, etc.
</td> </tr>
<tr>
<td>
**Testbed**
</td>
<td>
The environment in which experiments can be executed. In the FESTIVAL case, a
testbed can be an IT infrastructure, a living lab or any other environment
suitable for experiment execution. Testbeds can be federated to create a
distributed environment.
</td> </tr>
<tr>
<td>
**Experiment**
</td>
<td>
This entity represents the experiment executed in the testbed or in a
federation of testbeds.
</td> </tr> </table>
**Table 1 - Entities in the FESTIVAL EaaS ecosystem**
Figure 12 shows an example of possible concrete entities and their
relationships in an EaaS ecosystem:
**Figure 12 - Resources, Assets, Testbed and Federation**
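To make the entity model of Table 1 and Figure 12 concrete, the following sketch expresses the four entities and their composition relationships as plain Python data types. All class and field names are illustrative assumptions for the purpose of this discussion, not FESTIVAL software.

```python
# A minimal sketch of the EaaS entity model described above (Table 1 / Figure 12).
# Names and fields are illustrative assumptions, not FESTIVAL code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    """Basic IT or non-IT resource (server, VM, network link, person...)."""
    name: str
    kind: str              # e.g. "vm", "storage", "human"
    assigned: bool = False

@dataclass
class Asset:
    """Complex item composed of resources (platform, space, open data repo)."""
    name: str
    resources: List[Resource] = field(default_factory=list)

@dataclass
class Testbed:
    """Environment composed of assets, in which experiments run."""
    name: str
    assets: List[Asset] = field(default_factory=list)

@dataclass
class Federation:
    """Distributed environment created by federating testbeds."""
    testbeds: List[Testbed] = field(default_factory=list)

# Example: a European testbed federated with a Japanese one
federation = Federation(testbeds=[
    Testbed("SmartSantander", assets=[Asset("IoT sensor network")]),
    Testbed("JOSE", assets=[Asset("SDN infrastructure")]),
])
```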
The next section describes the processes involved in the ecosystem: each
process is classified into a process group based on its application scope, as
reported in Table 2.
<table>
<tr>
<th>
**Process groups**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Resource scope**
</td>
<td>
The resource scope is the lowest-level scope of the FESTIVAL EaaS ecosystem.
It includes all the processes concerning the management of resources; their
main aim is the collection of the resources necessary for running
experiments.
</td> </tr>
<tr>
<td>
**Asset scope**
</td>
<td>
The asset scope includes all the processes related to the management of the
assets used to set testbeds up.
</td> </tr>
<tr>
<td>
**Testbed scope**
</td>
<td>
The testbed scope includes all the processes related to the management of the
testbeds used to set the federation up, in order to provide the
functionalities required to run the "Experimentation as a Service" platform.
</td> </tr>
<tr>
<td>
**Experiment scope**
</td>
<td>
The experiment scope includes all processes concerning the management of
experiments running on FESTIVAL's environments through the "Experimentation as
a Service" approach.
</td> </tr> </table>
**Table 2 - Process groups in the FESTIVAL EaaS ecosystem**
For each scope, the actors involved in the execution of the processes are
identified. Each actor is described in Section 2.1.5.
#### 2.1.1. Resource scope
The main aim of the processes in the resource scope is the management of the
resources used to run experiments. Resources can be ICT resources (physical or
virtualised: virtual machines, storage capacity, memory, computing capacity,
etc.) or human resources, for example people who perform specific tasks during
the execution of an experiment. The amount and type of resources involved in
an experiment can be adjusted during its execution in order to guarantee its
correct progress. The processes related to this scope are:
* **Resource discovery**: this process identifies the resources necessary for running an experiment, in order to guarantee suitable service performance for the experiment itself.
* **Resource provisioning**: this process includes all activities related to resource acquisition, for instance the deployment of the necessary virtual machines in compliance with the results of the resource discovery process.
* **Resource monitoring**: this process includes all activities devoted to maintaining the service level performance identified through the resource discovery process; this process is cyclical and is performed continuously while the experiment runs. Resource monitoring includes, for instance, activities such as adjusting the amount of storage capacity or changing the assignments of the people involved in the experiment.
* **Resource release**: this process includes all activities related to releasing resources, such as the
"un-deployment" of virtual servers, the freeing of storage space, etc.
**Figure 13 - Execution sequence of resource processes**
The processes of the resource scope are executed in sequence, as depicted in
Figure 13.
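This discover, provision, monitor and release sequence can be illustrated in code as below; the `Resource` type and all function names are illustrative assumptions, not FESTIVAL software.

```python
# A minimal sketch of the sequential resource lifecycle (Figure 13).
# The Resource type and all function names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List
import time

@dataclass
class Resource:
    name: str
    assigned: bool = False

def discover(requirements: dict) -> List[Resource]:
    """Resource discovery: identify resources matching the experiment's needs."""
    return [Resource(f"vm-{i}") for i in range(requirements["vms"])]

def provision(resources: List[Resource]) -> None:
    """Resource provisioning: acquire/deploy the discovered resources."""
    for r in resources:
        r.assigned = True

def monitor(resources: List[Resource], experiment_running: Callable[[], bool]) -> None:
    """Resource monitoring: cyclical check performed while the experiment runs."""
    while experiment_running():
        # here storage capacity would be adjusted, people reassigned, etc.
        time.sleep(1)

def release(resources: List[Resource]) -> None:
    """Resource release: un-deploy virtual servers, free storage, etc."""
    for r in resources:
        r.assigned = False

# The lifecycle executed in sequence, as in Figure 13
rs = discover({"vms": 2})
provision(rs)
monitor(rs, experiment_running=lambda: False)  # experiment already finished here
release(rs)
```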
#### 2.1.2. Asset scope
The main aim of the processes in the asset scope is the building of a new
testbed. To achieve this objective, assets are identified and integrated into
the testbed. The processes related to this scope are:
* **Testbed requirements identification**: the main aim of this process is the identification of the requirements for the experimental testbed; its output is the input of the asset identification process.
* **Asset identification**: this process includes all activities devoted to the identification of existing, potentially reusable assets, such as infrastructures, hardware/software platforms, HW/SW components, etc., that comply with the testbed requirements identified in the previous process.
* **Asset analysis**: through this process, the assets identified in the asset identification process are analysed; a deep analysis of the identified assets is necessary in order to understand their capabilities and limitations from the perspective of setting up the testbed. In this context, each asset should be well documented, and training materials should be available to support the analysis.
* **Asset selection**: the results of the asset analysis process are the starting point of the asset selection process, in which the assets of interest for setting up the testbed are selected (a small illustration of this step is sketched below). The selection is made on the basis of the testbed requirements and the results of the analysis of the identified assets; moreover, the asset selection process takes into account non-technical aspects, such as the IPR (intellectual property rights) rules associated with the assets, the available support resources, respect for ethics and privacy, an adequate level of quality, etc. The involved actor is the Testbed Manager.
* **Asset integration**: the asset integration process is the last macro-activity needed to build a testbed; in particular, this process includes all activities necessary to integrate the assets chosen in the asset selection process. In order to solve the technological and operational problems arising from the possible heterogeneity of the selected assets, this process includes the design and building of adaptation components. Moreover, the asset integration process addresses possible political constraints. The final result of the asset integration process is the realization of the testbed.
* **Asset monitoring**: this process includes all activities devoted to the maintenance of the assets integrated into the testbed, in order to guarantee the integrity (in terms of provided services and functionalities) of the testbed itself; this process is cyclical and is performed continuously. Moreover, through this process it is possible to identify possible enhancements to the integrated assets and, in general, to the whole testbed.
**Figure 14 - Execution sequence of asset processes**
Similarly to the resource scope, the processes of the asset scope are executed
in sequence, as depicted in Figure 14; their final result is the setup of the
testbed and the continuous monitoring of the integrated assets.
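As an illustration of how the asset selection process combines technical requirements with non-technical criteria such as IPR and privacy, consider the small filter below; the candidate assets and the criteria fields are illustrative assumptions.

```python
# A minimal sketch of the asset selection step: filtering identified assets
# against testbed requirements and non-technical criteria (IPR, privacy).
# All names and criteria fields are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class AssetCandidate:
    name: str
    capabilities: Set[str]
    ipr_compatible: bool      # IPR rules allow reuse in the testbed
    privacy_compliant: bool   # respects ethics/privacy constraints

def select_assets(candidates: List[AssetCandidate],
                  required_capabilities: Set[str]) -> List[AssetCandidate]:
    """Asset selection: keep candidates that satisfy both the technical
    requirements and the non-technical criteria described above."""
    return [a for a in candidates
            if required_capabilities <= a.capabilities
            and a.ipr_compatible and a.privacy_compliant]

candidates = [
    AssetCandidate("FIWARE platform", {"context-mgmt", "api"}, True, True),
    AssetCandidate("Closed legacy platform", {"api"}, False, True),
]
print(select_assets(candidates, required_capabilities={"api"}))
```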
#### 2.1.3. Testbed scope
The main aim of the processes in the testbed scope is the integration of a
testbed into the testbed federation; all actions needed to build the
federation among testbeds and make them interoperable are included in this
scope. To achieve this result, the processes in this scope cover not only
technical aspects but also non-technical ones, such as agreement subscription,
user privacy, policy definition, etc. The processes related to this scope are:
* **Standards and data models agreement subscription**: this process includes the actions devoted to the establishment of a set of commonly agreed standards and data models among the federated testbeds, in order to provide a homogeneous abstraction layer on top of the heterogeneous testbeds.
* **Testbed technical integration**: through this process a specific testbed is integrated into the federation of testbeds, according to the standards and data models agreed in the previous process. This process includes the implementation of the necessary adapters to enable the testbed to interoperate with the entire federation; in particular, the adapters are in charge of translating between data formats and providing interoperability among the different standards (a small adapter sketch follows Figure 15 below).
* **Testbed integration check**: the testbed integration check includes the actions to verify the correct integration of the testbed with the entire federation of testbeds and to ensure the expected results and performance.
* **Policies and conditions definition**: through this process, the policies and conditions of access to testbeds are defined, such as the way in which the testbed is used or what it is used for (e.g. commercial or non-commercial use).
* **End users privacy**: this sub-process defines the measures and actions necessary to protect end users against privacy concerns; the testbed must comply with the defined rules and actions.
* **Service Level Agreement**: this sub-process defines service level agreements for the testbed in order to guarantee suitable quality of service and quality of experience to experimenters and users.
* **Drafting and publication of experimenter guidelines**: the final result of this sub-process is the publication of documentation about the specific testbed, in particular a set of guidelines enabling experimenters to use the testbed and to "create" experiments. The involved actor is the Testbed Manager.
**Figure 15 - Execution sequence of testbed processes**
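The adapter role in the testbed technical integration process can be pictured as a small translation function between a testbed's native data format and the commonly agreed federation model. In the sketch below, both formats and all field names are illustrative assumptions.

```python
# A minimal sketch of a testbed adapter translating a native sensor reading
# into a commonly agreed federation data model. All field names are assumed.
from datetime import datetime, timezone

# Hypothetical native format of one testbed
native_reading = {"sensorId": "santander-0042", "temp_c": 21.5, "ts": 1430000000}

def to_federation_model(native: dict) -> dict:
    """Adapter: map testbed-specific fields onto the agreed data model."""
    return {
        "resource": native["sensorId"],
        "observation": {"type": "temperature", "value": native["temp_c"], "unit": "Cel"},
        "timestamp": datetime.fromtimestamp(native["ts"], tz=timezone.utc).isoformat(),
    }

print(to_federation_model(native_reading))
```

One such adapter per testbed is enough to give experimenters the homogeneous abstraction layer that the standards and data models agreement aims for.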
#### 2.1.4. Experiment scope
The main aim of the processes in the experiment scope is the management of
experiments, from their definition to the collection and analysis of the
results obtained from them. The processes related to this scope are:
* **Experiment definition**: this process represents the first step of experiment management; in particular, it enables experimenters to define both the main aim and the details of an experiment. The definition obtained from this process represents "what" the experiment wants to demonstrate or obtain.
* **Experiment setup**: the definition of an experiment is the input of this process; the experiment setup process includes all actions that enable the execution of the experiment, both technical (e.g. use of a web application, a mobile application or both) and non-technical (e.g. target end users, channels of communication, etc.). The output of this process represents "how" the experiment should obtain the expected results.
* **Experiment running**: this process includes all actions necessary to execute an experiment, as well as actions for collecting the obtained results; the experiment running process contains three main sub-processes:
  * **End users involvement**: this sub-process includes the actions needed to involve end users and to collect information.
  * **Experiment control**: this sub-process includes actions for controlling and managing the evolution and execution of the experiment.
  * **Experiment monitoring**: this sub-process includes actions for monitoring the experiment and evaluating its execution, in order to plan measures for its progression.
* **Evaluation of results**: the input of this process is the information obtained from the execution of an experiment through the experiment running process; the evaluation of results process includes actions for evaluating and analysing the collected information.
**Figure 16 - Execution sequence of experiment processes**
The processes of the experiment scope (Figure 16) are executed in sequence,
and their end point is the global result obtained from the execution of an
experiment.
#### 2.1.5. Roles
**Figure 17 - FESTIVAL's roles and relations among them**
* **Testbed Manager**: responsible for a testbed; its main assignment is to guarantee the efficiency of the testbed and its retention in the federation. It is also responsible for the management of the assets that compose the testbed, including their identification, evaluation, integration and maintenance. It manages the execution of experiments and provides support to experimenters in defining and managing experiments. Finally, it is responsible for the management of the resources (storage, memory, computing capacity, etc.) that enable the execution of experiments. To achieve this, it collaborates with the Federation Manager, Service Providers and Experimenters.
* **Federation Manager**: responsible for the entire federation of testbeds; in particular, it manages the federation in order to maintain both its efficiency levels and its functionalities. To achieve this, it works with the Testbed Managers.
* **End User**: an end user involved in an experiment; it uses the functionalities provided by an experiment and, in that way, supplies information to experimenters.
* **Service Provider**: provides the services necessary for the establishment and maintenance of a testbed; it provides these services to the Testbed Manager.
* **Experimenter**: a fundamental role in FESTIVAL's ecosystem, since it is the end user of the functionalities provided by the Experimentation as a Service ecosystem itself. With the support of the Testbed Manager, it plans, defines, designs and executes experiments and finally analyses the results.
The following table summarises the relation between roles and processes.
<table>
<tr>
<th>
**Role**
</th>
<th>
**Processes**
</th> </tr>
<tr>
<td>
Federation Manager
</td>
<td>
* Standards and data models agreement subscription
* Testbed integration check
* Policies and conditions definition
* Service Level Agreement
</td> </tr>
<tr>
<td>
Testbed Manager
</td>
<td>
* Resource discovery
* Resource provisioning
* Resource monitoring
* Resource release
* Testbed requirements identification
* Asset identification
* Asset analysis
* Asset selection
* Asset integration
* Asset monitoring
* Standards and data models agreement subscription
* Testbed technical integration
* Testbed integration check
* Policies and conditions definition
* End users privacy
* Service Level Agreement
* Drafting and publication of experimenter's guides
* Experiment definition
* Experiment setup
* Experiment running
</td> </tr>
<tr>
<td>
Service Provider
</td>
<td>
* Asset integration
* Testbed technical integration
* Testbed integration check
</td> </tr>
<tr>
<td>
Experimenter
</td>
<td>
* Experiment definition
* Experiment setup
* Experiment running
* End users involvement
* Experiment control
* Experiment monitoring
* Evaluation of results
</td> </tr>
<tr>
<td>
End User
</td>
<td>
* Experiment running
</td> </tr> </table>
**Table 3 - Roles / Processes summary table**
#### 2.1.6. Stakeholder analysis
The roles described in the previous section are now mapped to possible
concrete stakeholders: the stakeholders involved in the experimentation
ecosystem and interested in the FESTIVAL project results in general are
identified and described.
##### Testbed Manager
In FESTIVAL's ecosystem, the Testbed Manager is responsible for the management
of a specific testbed integrated in the federation. The Testbed Manager could
be represented by Enterprises, Research Centres and Universities, which are
the ideal candidates for holding this role because of its importance. The
Testbed Manager should hold specific skills, competences and expertise in
order to successfully manage all aspects of a testbed, from its conception to
its maintenance.
##### Federation Manager
Differently from the Testbed Manager, in FESTIVAL's ecosystem the Federation
Manager is responsible for the management of the entire federation of
testbeds; similarly to the Testbed Manager, however, it could be represented
by Enterprises, Research Centres and Universities. These three subjects hold
the skills, competences and expertise necessary to successfully manage all
aspects of the federation, from both technical and non-technical points of
view.
##### Experimenter
The Experimenter is the main stakeholder of the FESTIVAL ecosystem: it is the
crucial point around which the processes described in Chapter 2 and the
internal stakeholders of the ecosystem revolve. The Experimenter could be
represented by:
* **Research Center**, **University**, **Living Lab**: Research Centers, Universities and Living Labs could be interested in experimentations for validating and verifying applications (possibly in an early stage of development) or for undertaking studies in specific fields.
* **Researcher, Application Developer**, **Start-up**: similarly, Researchers, Application Developers and Start-ups could be interested in experimentations for validating and verifying applications or for undertaking studies in specific fields; in this particular case, they could be driven by business purposes in addition to scientific ones.
* **Public Administration**: a Public Administration could provide access to its applications and databases, such as the civil registry or the land registry, in order to enable the execution of experiments that involve them; this is mainly because a Public Administration can play both the Service Provider and the Experimenter roles: it can run experiments that involve its own services, in addition to making these services available to other experimenters.
* **Enterprise**: in general, an enterprise could be interested in the functionalities provided by FESTIVAL's Experimentation as a Service in order, for example, to test a prototype application or solution or to investigate new fields of business; enterprises can range from micro enterprises to large enterprises, such as industries.
##### Service Provider
The Service Provider supports the Testbed Manager in both the establishment
and the maintenance of the testbed by providing specific services. The Service
Provider could be represented by:
* **Research Center**, **University**: Research Centers and Universities could provide innovative services and functionalities in an early stage of development in order to take advantage of a larger test bench.
* **Public Administration**: a Public Administration could provide access to its applications and databases, such as the civil registry or the land registry, in order to enable the execution of experiments that involve them; as noted above, a Public Administration can play both the Service Provider and the Experimenter roles.
* **Other EaaS Projects or Initiatives**: other EaaS projects or initiatives could provide new and solid technologies and/or specific applications that are valuable for adding new functionalities to the testbed.
* **Enterprise**: in general, an Enterprise could provide robust and well-organized services and functionalities for building up the testbed; like a Public Administration, an Enterprise can play both the Service Provider and the Experimenter roles in FESTIVAL's ecosystem.
##### End User
The End User represents the final user of the applications and services
provided during the execution of the experiments. Specific end users should be
identified for each experiment that will run in the EaaS ecosystem; for
instance, in the context of the FESTIVAL project, end users could be citizens,
Art & Science performers, industrial users, etc. In general, specific
typologies of end users can be identified for each experiment.
#### 2.1.7. Experiment: from definition to results
This section describes the processes involved in a generic experimentation,
from the definition of the experiment to the results coming from its
execution (Figure 18 illustrates the execution flow).
The first process involved is "Experiment definition" (experiment scope): the
Testbed Manager and the Experimenter collaborate to define the new experiment,
pinpointing technical and non-technical aspects.
The results of "Experiment definition" are the input of the "Experiment setup"
process (experiment scope); in this process too, the Testbed Manager and the
Experimenter collaborate in order to set up the new experiment. Moreover, the
"Experiment setup" process includes two other processes, "Resource discovery"
and "Resource provisioning" (resource scope), in which only the Testbed
Manager is involved.
When the new experiment is ready to run, the "Experiment running" process
(experiment scope) starts; like "Experiment setup", this process includes
other processes: "End users involvement", "Experiment control" and "Experiment
monitoring", which belong to the experiment scope, and "Resource monitoring",
which belongs to the resource scope. These four processes start together when
"Experiment running" starts, run simultaneously, and stop when "Experiment
running" finishes its execution.
At the end of "Experiment running", the "Evaluation of results" process
(experiment scope) starts; in this process the Experimenter analyses the
information collected during the execution of the experiment. Once the
Experimenter obtains the expected results, "Evaluation of results" ends and
the last process starts: "Resource release" (resource scope). In this last
process, the Testbed Manager releases all resources involved in the experiment
and makes them available for other experiments.
**Figure 18 - Execution sequence of processes involved in a generic
experimentation**
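The flow just described can be summarised as straight-line code, with the four monitoring processes running in parallel during "Experiment running"; the sketch below uses threads for the parallel part, and every function name is an illustrative assumption.

```python
# A minimal sketch of the generic experimentation flow (Figure 18), with the
# parallel monitoring processes run as threads. All function names are assumed.
import threading

def define_experiment():      print("Experiment definition (Testbed Manager + Experimenter)")
def discover_and_provision(): print("Resource discovery and provisioning (Testbed Manager)")
def evaluate_results():       print("Evaluation of results (Experimenter)")
def release_resources():      print("Resource release (Testbed Manager)")

def run_experiment(stop: threading.Event):
    # "End users involvement", "Experiment control", "Experiment monitoring"
    # and "Resource monitoring" run simultaneously with the experiment.
    def watcher(name):
        while not stop.is_set():
            stop.wait(0.1)    # placeholder for real monitoring work
        print(f"{name} stopped")
    watchers = [threading.Thread(target=watcher, args=(n,)) for n in
                ("End users involvement", "Experiment control",
                 "Experiment monitoring", "Resource monitoring")]
    for w in watchers: w.start()
    print("Experiment running...")
    stop.set()                # experiment finished; monitoring stops with it
    for w in watchers: w.join()

define_experiment()
discover_and_provision()      # part of experiment setup
run_experiment(threading.Event())
evaluate_results()
release_resources()
```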
### 2.2. Existent Projects and initiatives in the context of EaaS
Based on this definition of what experimenters may expect from an EaaS offer,
we can look at related initiatives that either follow the EaaS model or
present similar characteristics that can answer some of the expectations
presented above.
#### 2.2.1. FIRE community
The FIRE community of projects is a natural first source for comparing the
FESTIVAL approach with other related approaches. The current FIRE ecosystem
groups 12 facility projects.
**Figure 19 - FIRE Ecosystem**
The FIRE facility projects are building a variety of network experimentation
infrastructures and tools with different characteristics. The CREW [1],
Fed4FIRE [2] and OneLab [3] projects provide free and/or paid access to
testbeds. Most projects also provide access to their facilities through open
competitive calls that are limited in time and scope. The FIRE Testbed Search
[4] references the facilities/testbeds involved in the projects and can be
used to get information on access to individual testbeds.
The individual facilities involved in the FIRE initiative, as well as the
initiative as a whole, demonstrate some of the characteristics we defined
above for the EaaS model. One of the main current limitations of the FIRE
community with regard to the EaaS model and expectations is the limited number
of cases of "on demand" availability: many testbed facilities still restrict
access to consortium members and participants in the open calls. On-demand
availability can be expected to increase in the coming years as other H2020
projects move closer and closer to the EaaS model, and FESTIVAL will have to
keep in touch with the community to see how others implement it.
With regard to post-project activities, although some independent facilities
are sustained on their own, the federations of testbeds and many of the past
project infrastructures are mostly sustained by new projects integrating
them.
#### 2.2.2. Technology platforms and labs
Various technology platform and lab initiatives that offer experimentation
possibilities to external users present characteristics of the EaaS model and
answer the expectations presented above. In most cases they link the
provisioning of experimentation services with other services, and we can
characterize them as follows (note that the proposed characterization
represents general trends rather than strict boundaries, and some
experimentation facilities or actors can be linked with more than one of these
trends):
* **Experimentation services linked with research and education services:** their scope and size can vary from EU-wide initiatives (such as the EU Research Infrastructures supported by DG Research [5]) to national initiatives (an outstanding example being the Fraunhofer institutes) and local initiatives (such as the Plateformes Technologiques in France [6]). Their main mission is usually to provide research services, which can include experimentation services and access to experimentation facilities for third parties. In most cases, they cannot be directly characterized as EaaS, but they provide services that answer some of the needs of potential EaaS users.
* **Experimentation services linked with end user access:** this is characteristic of the Living Labs movement. Here the focus is on the engagement of end users in the experimentation through demonstration, usage and/or co-creation activities. Their scope is usually local, limiting the possibility of the remote, on-demand access to experimentation capabilities that would be necessary to fully characterize an EaaS model.
* **Experimentation services linked with prototyping activities:** the FabLab movement is a notable example of such initiatives. Their focus is to provide experimentation and prototyping facilities that enable rapid prototyping activities. Their local scope, lack of scalability and (in most cases) lack of research-grade quality do not qualify them as EaaS, but these initiatives can be interesting sources of inspiration for defining the EaaS offering and broadening its reach.
* **Experimentation services linked with innovation support:** several innovation support initiatives can provide access to experimentation platforms; this is the case of the EIT KIC labs at European level, or of several economic cluster associations. This service provisioning is accompanied by support for innovation and business modelling activities.
The structures providing these services can be public, private or
public-private partnerships. They rely on different sources of funding to
sustain their activities.
In the case of publicly or partially publicly funded initiatives, the funding
of the experimentation service provisioning is conditional on benefits for
society. This can reflect various motivations: education opportunities
(exchanges between external experimenters and local universities/students),
support for economic development, innovation and competitiveness (by providing
experimentation and prototyping capabilities to local actors), or the support
of research excellence. The ability of the experimentation facility to
demonstrate some of these public benefits conditions the public funding, and
the set of evaluations and KPIs linked to these objectives can be important in
gaining public support.
In the case of privately or partially privately funded initiatives, two
funding models can be found (and can be used together): the **co-sponsorship
model** (where industrial and established actors participate in the funding of
the experimentation facility) and the **service-provisioning model** (where
experimenters pay directly per use of the experimentation services).
Motivations for industrial co-sponsoring of experimentation facilities range
from sharing the cost of the infrastructure with other actors, to controlling
their value chain and subcontractors, or supporting their business ecosystem.
It should be noted that, due to the cost of setting up and maintaining
research-level experimentation platforms, the service-provisioning model is in
most cases not used as the main funding source for experimentation facilities.
#### 2.2.3. Initial conclusions for FESTIVAL exploitation approach
This initial analysis of the EaaS requirements and existing initiatives
enables us to provide some initial conclusions on the direction that the
FESTIVAL project exploitation may take.
A first step for the FESTIVAL project is that it will enable the use of the
Experimentation as a Service model (or at least the fulfilment of most of the
requirements related to the EaaS approach) for the individual experimentation
facilities integrated in the project. This will be possible both on a
technical level (through the project's homogeneous access API and additional
tools and services) and on a business model level (through increased knowledge
of the possible business models and the set-up of an initial community of
users through task 3.4). Each individual experimentation facility of the
project has its own exploitation plan (defined in Section 1.3) and will
therefore be able to be financially sustainable individually.
The second step and challenge for the FESTIVAL project will be the
continuation of the federated approach beyond the project. The work of work
packages 4 and 5 will help to assess the benefits of the federation of
testbeds as well as the potential costs of maintaining the federation beyond
the project. Based on this evaluation, different options will be considered,
such as:
* **Break-up of the federation**: if the positive impacts of the federation are deemed insufficient to compensate for its costs, each testbed may continue on its own. On the basis of the currently available information, this is considered an unlikely option.
* **Integration in a larger initiative**: if a similar federation initiative providing "Experimentation as a Service" solutions emerges over the course of the project, a merger with it to gain visibility and traction will be considered. On the basis of the currently available information, this is considered a possible option.
* **Set-up of a non-profit association to maintain the federation**: once the federation is established and functioning, and as long as each individual platform is able to maintain itself on its own (based on its individual exploitation strategy), the cost of maintaining the federation should be limited. In that case, the set-up of a non-profit association between the consortium partners could be a good way to sustain the federation. The funding of the association could be based on membership fees and/or on commissions on the experimentation services sold to external experimenters by the individual testbeds that have adopted a service-provisioning business model. On the basis of the currently available information, this is considered a likely option.
* **Set-up of a commercial venture:** if the benefits of the federation provide a strongly valuable advantage and if the experimentation services provided can reach an audience able to generate significant revenues, the set-up of a commercial venture between the partners will be considered. The revenues would come from pay-per-use by experimenters using the federation and would be spread among the experimentation infrastructures based on their usage by experimenters. On the basis of the currently available information, this is considered a possible option.
## 3\. Open Data opportunities and management in FESTIVAL
Open Data is one of the most important topics in the FESTIVAL project.
Specific activities are dedicated to the analysis of the data produced during
the project and to its provision in an open way. In particular, two different
categories of Open Data will be managed during the project. The first category
includes the research data produced by the experimentations performed in the
FESTIVAL use cases using the federated testbeds: this type of data will be
managed following the guidelines of the European Commission regarding Open
Research Data in H2020 [7]. The second category involves other existing open
datasets that will be identified and collected in the different pilot sites
involved in the FESTIVAL federated ecosystem, in order to enrich the knowledge
base of the project and to improve reuse. The Open Data collected during the
project can represent not only a way to share the project results with the
research community, but also a concrete business opportunity for all FESTIVAL
stakeholders. In order to better identify the business potential offered by
Open Data provisioning and reuse in an international context, the following
section presents research and reports on the diffusion and impact of open
data in the world and on the related business market. We will use this
information as a starting point for the exploitation of Open Data in the
FESTIVAL business model.

The end of the chapter includes a first version of the Data Management Plan,
in terms of processes and outputs to be produced during the FESTIVAL project
to collect and manage Open Data in compliance with the H2020 guidelines.
### 3.1. Open Data in a federated scenario
This section presents a report on the diffusion and maturity of the open data
approach in different countries of the world, with a specific focus on the
countries directly involved in the FESTIVAL project experimentations. The
analysis is based on the data contained in the Open Data Barometer Global
Report 2015 [8], produced by the World Wide Web Foundation, which describes
the state of policies for the promotion of public datasets and open government
in the world.
The study recognizes the progress made with regard to the provision of public
information, such as the signing by the G8 leaders of the Open Data Charter in
2013, which promotes the release of public sector data, free of charge, in
open and reusable formats. This commitment was reiterated at the last G20, at
which the major industrial economies committed to promote open data as a tool
against corruption, and the United Nations has recognized the need for a "data
revolution" to achieve the global development goals.
The Open Data Barometer provides a snapshot of the state of open data around
the world. This type of analysis is very interesting in the context of the
FESTIVAL project, as it shows the importance of Open Data, its diffusion, and
how impacts from open data can best be secured in the different countries of
the world. Using a specific methodology [9] based on surveys and several
certified sources, the Open Data Barometer analyzed several factors related to
open data; in particular, it took into consideration the readiness to secure
benefits from open data, the implementation of open data practice, and the
impacts of open data. By calculating and aggregating the score for each of
these three key factors, the Open Data Barometer created a ranking of the
different countries of the world. In the following, only a few results of this
analysis are reported, namely those considered interesting for the FESTIVAL
context.
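To make the aggregation step concrete, the following minimal Python sketch
combines the three key factors into a composite value and ranks countries by
it. The country scores and the equal weights are invented for illustration,
not taken from the Barometer, whose actual weighting is defined in its
methodology [9].

```python
# Sketch of a Barometer-style ranking: each country has a 0-100 score for
# readiness, implementation and impact; a weighted aggregate is computed
# and countries are sorted by it. All values below are invented.
WEIGHTS = {"readiness": 1 / 3, "implementation": 1 / 3, "impact": 1 / 3}

countries = {
    "Country A": {"readiness": 90, "implementation": 80, "impact": 70},
    "Country B": {"readiness": 60, "implementation": 75, "impact": 40},
    "Country C": {"readiness": 45, "implementation": 30, "impact": 20},
}

def composite(scores):
    """Weighted aggregate of the three key factors, on a 0-100 scale."""
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

ranking = sorted(countries, key=lambda name: composite(countries[name]), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, round(composite(countries[name]), 1))
```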
**Figure 20 - Country clusters based on Open Data Barometer Readiness and Impact questions**
Based on an analysis of readiness and impact variables, the countries analyzed
in the study are classified into four groups:

**High capacity** – These are the most advanced countries in terms of open
data policies and adoption: they have a deep open data culture and adopt an
open data approach at different government levels. These countries also
promote the adoption of open licensing in order to maximize the impact of open
data on society and on a private sector that is ready to benefit from open
data. Countries included in this cluster are UK, US, Sweden, **France**, New
Zealand, Netherlands, Canada, Norway, Denmark, Australia, Germany, Finland,
Estonia, Korea, Austria, **Japan**, Israel, Switzerland, Belgium, Iceland and
Singapore.
**Emerging & advancing** – These countries have emerging or established open
data programs, often as dedicated initiatives, and sometimes built into
existing policy agendas. In particular, most of these countries are working on
developing open data adoption by enlarging the available datasets in different
contexts. This category contains countries with different levels of open data
maturity; many of them are currently working to promote open data practice
across government and institutions. Countries that are part of this group are
**Spain**, Chile, Czech Republic, Brazil, **Italy**, Mexico, Uruguay, Russia,
Portugal, Greece, Ireland, Hungary, Peru, Poland, Argentina, Ecuador, India,
Colombia, Costa Rica, South Africa, Tunisia, China, the Philippines and
Morocco.
**Capacity constrained** – The countries included in this category have small
or very limited open data initiatives. This is mainly due to limitations
regarding government processes, internet access and, in general, the
availability of technology and related knowledge. Countries included in this
cluster are Indonesia, Turkey, Ghana, Rwanda, Jamaica, Kenya, Mauritius,
Ukraine, Thailand, Vietnam, Mozambique, Jordan, Nepal, Egypt, Uganda,
Pakistan, Benin, Bangladesh, Malawi, Nigeria, Tanzania, Venezuela, Burkina
Faso, Senegal, Zimbabwe, Namibia, Botswana, Ethiopia, Sierra Leone, Zambia,
Yemen, Cameroon, Mali, Haiti and Myanmar.
**One-sided initiatives** – The countries included in this cluster are
characterized by limited freedoms: they have basic open data initiatives
(e.g. open data web portals) but with a very limited social impact. The
countries in this cluster are Malaysia, Kazakhstan, United Arab Emirates,
Saudi Arabia, Bahrain and Qatar.
Another important analysis performed by the barometer concerns the available
datasets. Fifteen categories of datasets have been taken into consideration,
and for each category the availability and openness are assessed based on a
10-point checklist and a weighted aggregation (further information about the
calculation technique can be found in [9]). The result is expressed as a score
of 0-100. The dataset categories are described in the following table, as
defined by the Open Data Barometer.
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Mapping data**
</td>
<td>
_“A detailed digital map of the country provided by a national mapping agency
and kept updated with key features such as official administrative borders,
roads and other important infrastructure. Please look for maps of at least a
scale of 1:250,000 or better (1cm = 2.5km).”_
</td> </tr>
<tr>
<td>
**Land ownership**
**data**
</td>
<td>
_“A dataset that provides national level information on land ownership. This
will usually be held by a land registration agency, and usually relies on the
existence of a national land registration database.”_
</td> </tr>
<tr>
<td>
**National statistics**
</td>
<td>
_“Key national statistics such as demographic and economic indicators (GDP,
unemployment, population, etc), often provided by a National Statistics
Agency. Aggregate data (e.g. GDP for whole country at a quarterly level, or
population at an annual level) is considered acceptable for this category.”_
</td> </tr>
<tr>
<td>
**Detailed budget**
**data**
</td>
<td>
_“National government budget at a high level (e.g. spending by sector,
department etc). Budgets are government plans for expenditure, (not details of
actual expenditure in the past which is covered in the spend category).”_
</td> </tr>
<tr>
<td>
**Government**
**spend data**
</td>
<td>
_“Records of actual (past) national government spending at a detailed
transactional level; at the level of month to month government expenditure on
specific items (usually this means individual records of spending amounts
under $1m or even under $100k). Note: A database of contracts awarded or
similar is not sufficient for this category, which refers to detailed ongoing
data on actual expenditure.”_
</td> </tr>
<tr>
<th>
**Company**
**registration data**
</th>
<th>
_“A list of registered (limited liability) companies in the country including
name, unique identifier and additional information such as address, registered
activities. The data in this category does not need to include detailed
financial data such as balance sheet etc.”_
</th> </tr>
<tr>
<td>
**Legislation data**
</td>
<td>
_“The constitution and laws of a country.”_
</td> </tr>
<tr>
<td>
**Public transport timetable data**
</td>
<td>
_“Details of when and where public transport services such as buses and rail
services are expected to run. Please provide details for both bus and rail
services if applicable. If no national data is available, please check and
provide details related to the capital city.”_
</td> </tr>
<tr>
<td>
**International trade data**
</td>
<td>
_“Details of the import and export of specific commodities and/or balance of
trade data against other countries.”_
</td> </tr>
<tr>
<td>
**Health sector performance data**
</td>
<td>
_“Statistics generated from administrative data that could be used to indicate
performance of specific services, or the healthcare system as a whole. The
performance of health services in a country has a significant impact on the
welfare of citizens. Look for ongoing statistics generated from administrative
data that could be used to indicate performance of specific services, or the
healthcare system as a whole. Health performance data might include: Levels of
vaccination; Levels of access to health care; Health care outcomes for
particular groups; Patient satisfaction with health services.”_
</td> </tr>
<tr>
<td>
**Primary and**
**secondary education performance data**
</td>
<td>
_“The performance of education services in a country has a significant impact
on the welfare of citizens. Look for ongoing statistics generated from
administrative data that could be used to indicate performance of specific
services, or the education system as a whole. Performance data might include:
Test scores for pupils in national examinations; School attendance rates;
Teacher attendance rates. Simple lists of schools do not qualify as education
performance data.”_
</td> </tr>
<tr>
<td>
**Crime statistics**
**data**
</td>
<td>
_“Annual returns on levels of crime and/or detailed crime reports. Crime
statistics can be provided at a variety of levels of granularity, from annual
returns on levels of crime, to detailed real-time crime-by-crime reports
published online and geolocated, allowing the creation of crime maps.”_
</td> </tr>
<tr>
<td>
**National environmental**
**statistics data**
</td>
<td>
_“Data on one or more of: carbon emissions, emission of pollutants (e.g.
carbon monoxides, nitrogen oxides, particulate matter etc.), and
deforestation. Please provide links to sources for each if available.”_
</td> </tr>
<tr>
<td>
**National election results data**
</td>
<td>
_“Results by constituency / district for all national electoral contests over
the last ten years.”_
</td> </tr>
<tr>
<td>
**Public contracting data**
</td>
<td>
_“Details of the contracts issued by the national government.”_
</td> </tr> </table>
**Table 4 - Open Data Barometer dataset categories**
For each category of data in each country, availability and openness have been
estimated based on a 10-point checklist; after a weighted aggregation, each
dataset is assigned a score of 0-100. The chart below shows the average scores
for each category across all countries surveyed.
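As an illustration of this scoring scheme, the sketch below computes a 0-100
score for a single dataset category from a 10-item checklist. The checklist
items and weights are placeholders invented for illustration; the real ones
are defined in the Barometer methodology [9].

```python
# Sketch of a 10-point checklist score for one dataset category, scaled to
# 0-100 after a weighted aggregation. Items and weights are placeholders.
checklist = {
    "exists": True, "available_online": True, "machine_readable": True,
    "bulk_download": False, "free_of_charge": True, "openly_licensed": False,
    "up_to_date": True, "sustainable": True, "easy_to_find": False,
    "linked_data": False,
}
weights = {item: 1.0 for item in checklist}  # assumed uniform weights...
weights["openly_licensed"] = 2.0             # ...except licensing, say

score = (100 * sum(w for item, w in weights.items() if checklist[item])
         / sum(weights.values()))
print(f"availability/openness score: {score:.0f}/100")  # -> 55/100 here
```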
**Figure 21 - Availability and openness of dataset categories**
This analysis shows a positive trend, with a general slow increase in openness
between 2013 and 2014 for most datasets. It is important to underline the
difference in availability between the categories of datasets: for example,
census datasets are widely available, while other information, for instance
about company registration or territory, is provided only to a limited extent.
In general, the research identified a high presence of data coming from
national statistical agencies, whereas a direct flow of data from government
to citizens would be necessary in order to provide updated and useful data for
real services. Other important considerations can be drawn from the datasets
about budgets and spending: governments usually make spending plans available,
but publish few datasets about the actual expenses incurred. This gap should
be filled in order to improve transparency and accountability at the different
government levels.
One of the main open issues related to open data is the real impact that the
availability of this information has on society. The barometer research tried
to quantify this impact by analyzing the possible use cases and success
stories reported by the media and academic literature (year 2013). The results
show an increase in the perceived use of open data by entrepreneurs to create
new services, while on other topics, for instance environmental sustainability
or the economy, only a limited impact can be identified. It is important to
underline that this is a global result that does not show the differences
between countries. Another analysis shows how the impact is strictly related
to open data readiness, and that it is unevenly distributed across the
different countries of the world.
**Figure 22 - Open Data impact**
The next figures show the results for the individual countries involved in the
FESTIVAL pilots: Japan, France, Spain and Italy. This is very interesting in
order to analyze the real open data maturity in these countries. For each
country, a radar chart is presented that aggregates the three main values of
the Open Data Barometer as global values:
* **Readiness**: measures whether the necessary political, economic and social conditions exist in the country to implement an open data strategy that can produce real benefits.
* **Implementation**: measures the level of provision of different key categories of open data. These categories are also aggregated in three clusters:
  * _Innovation_ (Map Data, Public Transport Timetables, Crime Statistics, International Trade Data, Public Contracts)
  * _Social Policy_ (Health Sector Performance, Primary or Secondary Education Performance Data, National Environment Statistics, Detailed Census Data)
  * _Accountability_ (Land Ownership Data, Legislation, National Election Results, Detailed Government Budget, Detailed Government Spend, Company Register)
* **Emerging impacts**: the level of perceived impact.
The purple line represents the 2014 data, the blue one the information coming
from the 2013 report:
**Figure 23 - Open data report - Japan**

**Figure 24 - Open data report - Spain**

**Figure 25 - Open data report - France**

**Figure 26 - Open data report - Italy**
The complete Open Data Barometer report shows the current situation of the
adoption and impact of the open data model in different countries of the
world. From this analysis it is clear that there is a big gap among countries
in terms of open data availability and, in general, in the open data approach:
this is often related to the technological maturity of the country, but also
to its economic and political situation. It was interesting to analyze the
specific rankings for the key values (readiness, implementation and impact)
and the availability of dataset types for Japan, Spain, France and Italy, the
countries directly involved in the project experimentation: the report showed
that these four countries perform well on some indicators, in particular
readiness, but at the same time they can improve the level of political,
economic and social impact. For these countries, FESTIVAL will be a way to
improve this key value through a federated approach to the open data model.
### 3.2. Analysis of Open Data business opportunities
The term “reuse” in the context of “public information” denotes the capability
to reprocess (i.e. modify, combine and transform) the data originally
collected (for example, by governments) for different purposes, in order to
make it more useful and interesting. Reuse implies the design of solutions
based on the use of open data by individual developers, companies, civil
society or other governments, in order to exploit the value of public
information, even commercially. The following sections report some studies
related to open data business opportunities and potential in an international
context.
#### 3.2.1. Public Sector Open Data Market
Governments collect and produce large amounts of data in carrying out their
activities. They can use this data for various purposes:

* financial and economic benefits for themselves or third parties;
* economic growth;
* management of internal policies;
* transparency and accountability;
* direct involvement of citizens in public services.
To achieve the above objectives, many governments have implemented initiatives
to make their data open, available, usable and machine-readable, so that
enterprises and citizens can access and use this data for their own purposes.
The collection and generation of data is an important asset for governments,
which can use this data for their own purposes; at the same time, the
collected data represents a “treasure trove”, because opening the data
generates a new services market, with new opportunities for economic growth
and jobs.
It is important to note that the European Commission published guidelines on
the use of public information to support the application by the member
countries of the PSI (Public Sector Information) Directive. In particular, the
guidelines focus on:

* the use of open standard licences;
* priorities for the publication of datasets;
* how to make the published datasets more easily reusable;
* the application of the marginal cost rule to define the cost of reusing information.
In order to achieve these results, the European Commission itself launched an
initiative named the ePSI platform [10] for promoting the PSI and Open Data
market. In particular, the ePSI platform provides a portal for publishing news
about PSI and Open Data developments, announcing events and workshops,
disseminating good practices and examples, etcetera.
The ePSI Platform provides a PSI scoreboard to evaluate the status of Open
Data and overall PSI reuse throughout the EU. The scoreboard is compiled on
the basis of internet research and expert advice. Moreover, the ePSI Platform
is open to feedback and suggestions for improving the accuracy of the
scoreboard; the scoreboard data is published online under the CC0 (Creative
Commons Zero) license.
**Figure 27 - The European open data reuse state**
The scoreboard takes into account seven aspects of PSI reuse; each of these
aspects includes one or more indicators, and on each aspect a country can
reach up to 100 points, for a maximum score of 700 points.
* Implementation of the PSI Directive; this aspect includes two indicators:
  * Implementation and absence of infringement procedures (50 points): concerns the correct transposition of the PSI Directive into national law and the absence of infringement procedures.
  * Exemptions granted (50 points): concerns the inclusion of one or more of the following in the implementation of the PSI Directive: national meteorological institute, cadastre, chamber of commerce and national repository for legal information.
* National re-use policy; this aspect includes five indicators:
  * General right of re-use (20 points): concerns possible obligations under national law for public sector bodies to allow re-use of PSI.
  * Distinction between commercial and non-commercial re-use (20 points): concerns the absence of distinctions between commercial and non-commercial re-use in national law.
  * Redress mechanisms (20 points): concerns the implementation of redress procedures for appeals against public sector bodies that deny requests for re-use.
  * Pro-active publishing of PSI for re-use (20 points): concerns the obligation for public sector bodies to be pro-active in the publication of PSI.
  * Availability of standard licences (20 points): concerns the availability of a standard licence under which public sector bodies are encouraged to publish PSI.
* Formats; this aspect includes four indicators:
  * Endorsement of “raw” data and open standards (20 points): concerns the existence of a body promoting or endorsing the publication of PSI for re-use in the form of “raw” data and in open standards.
  * Obligatory “raw” data and open standards (30 points): concerns the existence of an obligation for public sector bodies to publish PSI for re-use in the form of “raw” data in open standards.
  * Endorsement of “Linked Open Data” (20 points): concerns the existence of actions devoted to the promotion and endorsement of “Linked Open Data”.
  * Existence of national or regional data catalogue(s) (30 points): concerns the existence of a national or regional data catalogue or portal providing datasets available for re-use.
* Pricing; this aspect includes three indicators:
  * Cost-recovery model (cancelled out if 4.2 applies) (30 points): concerns the existence of a PSI pricing mechanism based on a cost-recovery model.
  * Marginal costing model (cancelled out if 4.1 applies) (50 points): concerns the existence of a PSI pricing mechanism based on a marginal costing model.
  * No exceptions to marginal costing model (50 points): concerns the existence of possible exceptions to the application of the marginal costing model for PSI re-use.
* Exclusive arrangements; this aspect includes three indicators:
  * Prohibition of exclusive arrangements (50 points): concerns the prohibition for PSI holders of granting exclusive rights to resell or re-use data to any legal entity.
  * Legal action against exclusive arrangements (30 points): concerns the existence of legal action by the Member State or a private party against public sector bodies granting exclusive agreements to third parties.
  * Ending exclusive arrangements (20 points): concerns the successful ending of at least two exclusive agreements by the Member State.
* Local and regional PSI availability and open data communities; this aspect includes three indicators:
  * Some local or regional PSI available and community activity (40 points): concerns the existence of at least two local or regional bodies publishing at least 10 PSI datasets for re-use and at the same time having active open data communities.
  * Moderate local or regional PSI available and community activity (40 points): the same with at least six local or regional bodies.
  * Considerable local or regional PSI available and community activity (20 points): the same with at least twelve local or regional bodies.
* Events and activities; this aspect includes three indicators:
  * Some national or inter-regional events (50 points): concerns the organization of at least four annual national or inter-regional events promoting Open Data and PSI re-use.
  * A moderate number of national or inter-regional events (25 points): the same with at least eight annual events.
  * A considerable number of national or inter-regional events (25 points): the same with at least twelve annual events.
Full description of indicators is available in [10].
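The point-based aggregation can be illustrated with the short Python sketch
below, which sums the points of the indicators a country satisfies, up to the
700-point maximum. The indicator names are abbreviated and the example awards
are invented; they are not the scores of any real country.

```python
# Sketch of the ePSI scoreboard aggregation: each aspect groups indicators
# worth a fixed number of points, for up to 100 points per aspect and 700
# overall. The example country below is fictional.
ASPECTS = {
    "psi_directive":  {"implementation": 50, "exemptions": 50},
    "reuse_policy":   {"general_right": 20, "no_distinction": 20,
                       "redress": 20, "proactive": 20, "std_licences": 20},
    "formats":        {"endorse_raw": 20, "obligatory_raw": 30,
                       "endorse_lod": 20, "catalogue": 30},
    # cost_recovery (30) and marginal_cost (50) cancel each other out,
    # so a country can hold at most one of the two.
    "pricing":        {"cost_recovery": 30, "marginal_cost": 50,
                       "no_exceptions": 50},
    "exclusivity":    {"prohibition": 50, "legal_action": 30, "ending": 20},
    "local_regional": {"some": 40, "moderate": 40, "considerable": 20},
    "events":         {"some": 50, "moderate": 25, "considerable": 25},
}

# Indicators that the fictional country satisfies, per aspect.
achieved = {
    "psi_directive": ["implementation"],
    "reuse_policy": ["general_right", "redress", "std_licences"],
    "formats": ["endorse_raw", "catalogue"],
    "pricing": ["marginal_cost", "no_exceptions"],
    "exclusivity": ["prohibition"],
    "local_regional": ["some"],
    "events": ["some", "moderate"],
}

total = sum(ASPECTS[aspect][ind] for aspect, inds in achieved.items() for ind in inds)
print(f"overall PSI score: {total}/700")  # -> 425/700 for this example
```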
**Figure 28 - PSI aggregated scoreboard**

**Figure 29 - PSI Overall Score**
Figure 28 and Figure 29 report the ePSI rankings for the different categories
presented above and in aggregated form. At the time of writing, the first
three positions were held by the United Kingdom (585 points), Spain (550
points) and France (535 points).
Other studies have been undertaken in Europe to measure different aspects of
PSI and its reuse, such as POPSIS (Pricing Of Public Sector Information Study)
[11] and Vickery [12]. POPSIS, undertaken by Deloitte Consulting Belgium,
measured the effects of PSI charging models on the market. It analyzed case
studies of public sector bodies and different PSI sectors across Europe, such
as meteorological data, geographical data, business registries and others.

Vickery (provided by Graham Vickery of Information Economics) indicated that
the size of the PSI reuse market was of the order of €28 billion in 2008, with
an annual growth rate of around 7%, increasing to €40 billion a year.
Moreover, an independent review of Public Sector Information [13] estimates
the direct economic benefits of public sector information at around £1.8bn a
year, with an overall impact, including direct and indirect benefits, of
around £6.8bn.
#### 3.2.2. Open Data Market
At the end of 2013, the consulting firm McKinsey published a report addressing
the value of open data and its ability to generate easily distributable
digital information. The report analyzes how open data creates economic value
in terms of revenue, cost reduction and time savings. The survey focuses on
the provision of open data by both governments and private institutions,
providing the basis for applications that require large volumes of data or for
producing innovative applications.

In particular, McKinsey evaluates the potential of open data in seven business
sectors: education, transportation, consumer products, energy, oil and gas,
health and consumer finance [14].
**Figure 30 - Open business potential**
McKinsey estimates that opening up data could produce significant additional
value globally across the seven sectors: more than $3 trillion a year, and as
much as $5 trillion a year. Moreover, McKinsey shows that open data makes it
possible:

* to give rise to hundreds of entrepreneurial businesses;
* to help established companies advance further in their marketplaces;
* to define new products and services;
* to improve the efficiency and effectiveness of operations.
In addition to McKinsey’s study, two other studies underline the importance of
the Open Data market. A study by Oxera estimates the Gross Value Added (GVA)
of the geospatial services sector at $150-$270 billion per year, 0.2% of
global GVA and approximately half the GVA of the global airline industry;
Oxera points to additional indirect benefits including $17 billion in time
savings, $5 billion in fuel savings and $13 billion in education [15]. In
addition, a report by Lateral Economics (commissioned by Omidyar Network)
estimates an international economic growth potential of $13 trillion and
presents an overview of findings starting from a survey of the international
and local Australian policy context for open government; in particular, it
explores the economic value of open data, providing case studies on its
impacts [16].
To illustrate the potential of the open data market, four success stories are
presented below. Zillow, Zoopla, Waze and The Climate Corporation created
business activities based on open data. The usage of open data allowed them to
grow economically and to expand their market for products and services. In
particular, two of them provide services in the field of real estate (Zillow,
Zoopla), while The Climate Corporation provides services in the field of
agriculture, combining weather data, agronomic modelling and weather
simulation; the last one (Waze) provides not only a service (a GPS-based
geographical navigation application), but also builds a community of drivers
that collaborate by providing information about routes, traffic, etcetera.
* **Zillow**: provides an online marketplace for homes and real estate to help homeowners and other people involved in this field (e.g. buyers, renters, sellers, agents, etcetera) manage their business; in particular, Zillow provides a large database of homes both for sale and for rent, as well as information about homes not on the market. Zillow has a market capitalisation of over $3 billion.
* **Zoopla**: similarly to Zillow, it provides services in the real estate field based on data from the UK Land Registry. Zoopla was launched in 2008 and at present has annual sales of £76m (most of which come from estate agents) and profits of £25m.
* **The Climate Corporation**: provides services in the field of agriculture on the basis of hyper-local weather monitoring, agronomic modelling, high-resolution weather simulations and data from third-party providers such as the US National Weather Service. The Climate Corporation was founded in 2006 and in October 2013 it was acquired by Monsanto (a multinational agrochemical and agricultural biotechnology corporation) for $930 million.
* **Waze**: provides a geographical navigation application for smartphones, with turn-by-turn information, and a social network for drivers. Through a game-based approach, drivers can report new roads, traffic, etcetera. In June 2013, Waze was acquired by Google for $1.3 billion. In the same year, it was awarded “Best Overall Mobile App” at Mobile World Congress.
All these success stories are based on a strong business model; possible
business models for open data are discussed below. In particular, two studies
are taken into account: the first by Deloitte [17], a network of professionals
providing audit, consulting, financial advisory and other services, and the
second [18] by “Osservatorio ICT - PIEMONTE” (Piedmont ICT Observatory), a
public authority of Regione Piemonte.

Deloitte highlighted five emerging business models based on open data,
representing five different approaches and referring to the UK market. The
five “archetype” business models are:
**Figure 31 - Five “archetype” business models: Suppliers, Aggregators, Developers, Enrichers, Enablers**
* **Suppliers**: although there is still no direct financial return, private companies, organizations and public authorities are starting to make their data available in an open format, allowing other subjects to reuse it. In this case, suppliers do not gain direct economic returns; the return consists of an increased level of engagement and loyalty among customers, citizens, etcetera. Moreover, as a consequence of customer engagement, private companies may gain greater revenues.
* **Aggregators**: public or private companies can collect and process open data in order to extract new value and information. In this case, the key factor is the relevance of the new information produced by the data analysis; success depends on the usefulness of the produced data and its marketability. Revenues can come from the aggregation of data from different sources and/or the correlation of different types of data (e.g. correlating geographic data with data on temperature trends); in addition, revenues can come from data access services such as APIs.
* **Developers**: developers can reuse open data to offer new services. For example, one of the fields of major development could be applications for mobile devices (such as smartphones, tablets, etcetera) about public transport, health, etcetera. In order to facilitate reuse of the published data, in turn generating useful applications, the published data must be of good quality, up to date and easy to reuse.
* **Enrichers**: companies can offer consulting services by aggregating open data with proprietary information held by large private companies, in order to offer new products and services. Moreover, companies can use open data to improve their existing services and products. In this particular case, revenues do not come from the open data itself, but open data can help to save money, because better products and services increase efficiency.
* **Enablers**: companies can provide platforms and technologies for commercial or personal use. Enablers represent a key factor of the Open Data ecosystem, because they provide services and infrastructures for data suppliers and data consumers, facilitating access to open data and its reuse. In this case, revenues come from access to data. Moreover, it is important to note that enablers act as catalysts, because they can offer cost-effective solutions for enterprises and organizations without funds to develop a proprietary platform.
On the other hand, [18] presents examples of generic business models that can
be adopted in the field of open data, derived from a survey. In particular,
eight different models are identified.
**Figure 32 - Examples of emerging business models: Premium Product / Service, Freemium Product / Service, Open Source, Infrastructural Razor & Blades, Demand-Oriented Platform, Supply-Oriented Platform, Free as Branded Advertising, White-Label Development**
* Premium Product / Service: products and services based on open data (presumably characterized by high intrinsic value) are supplied to the end market; the companies and organizations providing them typically require payment for access or consumption.
* Freemium Product / Service: products and services based on open data are supplied to the end market for free with limited functionality, while payment is required for the more advanced features.
* Open Source: products and services are supplied for free; in this case, revenues come from added-value services provided to specific customers, such as customization of products or services, implementation of particular functionalities, technical advice, etcetera.
* Infrastructural Razor & Blades: brokers facilitate access to PSI for developers; the strategy is that in a first stage the product/service is sold at a very low price in order to increase the subsequent demand for complementary goods, on which significant gains can be achieved. In the case of PSI, brokers could provide an API (Application Programming Interface) to access datasets and then charge for the computational power required to process incoming requests.
* Demand-Oriented Platform: in this model, large sets of PSI are stored, classified (e.g. through metadata), harmonized in terms of formats and exposed through APIs on intermediate, proprietary servers with high reliability, in order to facilitate their retrieval.
* Supply-Oriented Platform: in this model, a provider acts as a broker, supplying infrastructural services and providing PSI for free to developers; on the other hand, PSI holders apply a tariff to public administrations, which consequently become holders of data management platforms.
* Free, as Branded Advertising: this model is based on the concept of “service advertising”, a form of communication that aims at attracting an audience to a particular company or brand; the advertiser draws the attention of customers by providing them with services based on PSI; generally, the offered services do not produce revenues directly, but support other business lines in achieving the expected economic results.
* White-Label Development: in this case, third parties develop services and solutions on behalf of advertisers; services and solutions are developed in a white-label manner: third parties hide their own brand and give visibility to the brand of the advertiser.
It is important to note that governments have an important role in the growth
of the open data market. In promoting and enhancing its growth, governments
can play four roles: Supplier, Leader, Catalyst and User.
**Figure 33 - Roles involved in PSI management: Supplier, Leader, Catalyst, User**
* **Supplier**: subjects (e.g. governments, public administrations and other companies) that release data in order to increase economic growth and business innovation, steadily improving data quality and accessibility.
* **Leader**: subjects that encourage the release of data that is important for economic growth and business innovation. These subjects could be public institutions at regional and city level, state-owned enterprises and private companies providing important public services.
* **Catalyst**: subjects that promote the use of open data in order to develop a prosperous ecosystem of users, coders and application developers and, in general, to boost new data-driven businesses. It is important to note that the use of Open Data is a catalyst for business innovation in all sectors, not solely or primarily in the ICT sector.
* **User**: subjects that take advantage of published data and reuse it; in order to do so, they need to develop specific “Open Data skills”, such as interpretation, extraction, publication in machine-readable formats, ensuring personal privacy, and assisting users (business or not) in using the available data and in facing and solving possible problems, whether technical or legal.
### 3.3. Data management plan in FESTIVAL
The Data Management Plan described in the following sections defines the
process that will be followed to collect, manage, process and share the open
data produced during the project. The definition of a Data Management Plan is
required by the Guidelines on Open Access to Scientific Publications and
Research Data [19] and the Guidelines on Data Management in H2020 [7], in
particular for projects that, like FESTIVAL, participate in the Open Research
Data Pilot. FESTIVAL will provide the research data and the associated
metadata generated during the project as open data, using specific research
data repositories, and will enable all interested stakeholders to access and
reuse this data. This section represents a first version of the Data
Management Plan, which will be continuously updated in the following months.
In particular, specific activities related to open data and open research data
in general will be performed in task 2.3 “Federated Open Data”, where a
federated open data provision methodology, a data model and specific project
guidelines will be defined. The results of these activities will be reported
in deliverable D5.3 “First year update to communication, dissemination,
exploitation and open data management activities”. The following Data
Management Plan is an initial description of the phases of the open data
(research data and generic open data) life cycle in the FESTIVAL project. We
have identified four phases:
#### Data Identification
This is the initial phase, performed in order to identify the right data
sources and the specific information to be provided as open data. In this
phase, the types of data to be shared in the context of FESTIVAL will be
identified. The data typology will be strictly related to the experimentations
performed on the testbeds and to the existing data sources available at the
test sites. The analysis will be conducted taking into account the needs of
the federation scenario, in which the data provided should be linked among the
different testbeds/pilots. Privacy issues will also be analyzed in this
initial phase. Considering that much of the data produced by the
experimentations will relate to citizens’ information or behaviour, national
and international legislation will be studied in order to identify the types
of information that can be shared and the level of dissemination (consortium
level, public, etc.) that can be applied. It is also particularly important to
identify structured archives, directories and databases available in the
testbed pilot/area: available external open data can be collected and linked
to existing data sources to create new datasets.
It is desirable to define the data of interest for the FESTIVAL stakeholder
communities that will be involved in the testbeds. In the context of the
project, it is important to identify priorities for opening the data that
reflect the real interests of the community. In this sense, the definition of
priorities is an opportunity for discussion and debate with citizens and the
local community, consulting and involving them in the definition of the open
data. This activity is also related to the definition of the ecosystem and of
the specific stakeholders of the FESTIVAL community, which has been started in
this deliverable. The results of this activity will be included in Deliverable
2.3 and Deliverable 5.3.

OUTPUT: Data sources, existing datasets, privacy policies
#### Data Collection
This phase concerns the definition of standards for data formats and data
models, and the collection of the datasets identified in the previous step.
Once the type of data to be provided is defined, it is necessary to define the
standards and the data model that are most suitable for sharing the
information and for easy reuse by all interested stakeholders. In this sense,
during task 2.3, a federated approach for open data based on the definition of
a common data model among the different testbeds will be proposed. During this
activity, different factors that affect the quality of the individual data,
and the impact of these factors on the dataset, will be taken into account (a
minimal automated check is sketched after the list):
* Syntactic accuracy, i.e. the degree of proximity of the data value to a value in the syntactic definition domain;
* Semantic accuracy, i.e. the degree of the proximity of the data value to a value in the semantic definition domain;
* Topicality, that is the adequacy of the data value with respect to the timing requirements of the context of use;
* Completeness;
* Internal consistency, indicating the degree of coherence of the data contained in a data set related to the same entity;
* External consistency, indicating the degree of coherence between different but related data elements present in a dataset.
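As a minimal illustration, the sketch below implements automatable checks for
three of these dimensions: completeness, syntactic accuracy, and a simple
range check standing in for internal consistency. The field names, sample
records and validation rules are invented for illustration.

```python
# Toy quality checks on a small sensor-reading dataset. Field names and
# validation rules are invented for illustration.
import re

records = [
    {"sensor_id": "osaka-001", "temperature_c": 21.4},
    {"sensor_id": "lyon-017", "temperature_c": None},
    {"sensor_id": "bad id!", "temperature_c": 980.0},
]

def completeness(recs, field):
    """Share of records where the field is present and non-null."""
    return sum(r.get(field) is not None for r in recs) / len(recs)

def syntactic_accuracy(recs, field, pattern):
    """Share of values matching the expected syntactic format."""
    return sum(bool(re.fullmatch(pattern, str(r[field]))) for r in recs) / len(recs)

def plausible_range(recs, field, low, high):
    """Share of non-null values inside a plausible range (a crude proxy
    for internal consistency)."""
    values = [r[field] for r in recs if r[field] is not None]
    return sum(low <= v <= high for v in values) / len(values)

print("completeness:", round(completeness(records, "temperature_c"), 2))
print("syntactic accuracy:",
      round(syntactic_accuracy(records, "sensor_id", r"[a-z]+-\d+"), 2))
print("in-range values:",
      round(plausible_range(records, "temperature_c", -40, 60), 2))
```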
Some of the above-mentioned requirements can represent an obstacle to
providing an open dataset, but in other cases (for example, completeness) the
choice of opening the dataset can itself help to improve quality, possibly
through processes of community involvement.

After the analysis of dataset quality, it will be necessary to identify the
right standards and data formats for the different contexts in FESTIVAL. Given
the absence of clear international standards for representing key datasets,
the quality of certain kinds of data cannot be assessed against a common
yardstick; moreover, if users want to relate data from different countries or
reuse an application in a different open data context, they have to re-learn
and re-code their data. In order to avoid this situation, it will be necessary
to analyze the different national guidelines and standards for open data
provisioning, finding a common approach. It will also be useful to refer to
existing initiatives such as the Open Contracting Data Standard [20], launched
in November 2014, which is one experiment in providing both standards for
technical interoperability and criteria for assessing good contracting data
publication.
OUTPUT: Data models, standards, data formats, open research datasets
#### Publication and federation
This is the phase in which the different datasets will be concretely published
and shared in open formats. This activity will be performed throughout the
experimentation conducted on the federated testbeds. Part of this activity
will consist in the identification of repositories and tools to publish the
open datasets. In order to comply with [19], the open research data will be
published in specific research data repositories. In addition, all the open
data provided in FESTIVAL will be available through a Federated Open Data
Catalogue, a web portal that will be a single point of access for all the open
data produced in the testbeds. This web portal will be developed during task
2.3 and will provide a set of services and APIs allowing end users and
external systems to access the data. One important objective in FESTIVAL is to
provide data with a high level of openness, availability and reusability. In
order to measure these characteristics, the 5-star deployment model proposed
by Tim Berners-Lee [21] is followed. The model describes a gradual data
opening process composed of five steps, from raw unstructured data to Linked
Data (a toy classifier is sketched after the list):
1. The data is available in any format, but with an open license. It is embedded in unstructured documents and is therefore readable and interpretable only by humans (documents). In this case, no service can be built on the data contained in the documents without significant human extraction and processing effort.
2. The data is available in a format automatically readable by an agent. Typically, data at this level is in proprietary formats (e.g. Excel), which are readable by a program, although human intervention is still necessary for particular processing (raw data). In this case, the enabled services are inefficient, ad-hoc applications that use and incorporate the data within themselves.
3. The data has the same features as the previous level, but in a non-proprietary format (e.g. CSV instead of Excel).
4. The data has the same characteristics as the previous level, but is exposed using W3C standards, RDF and SPARQL, and is described semantically using metadata and ontologies (semantically enriched data). In this case, the enabled services and apps are very efficient.
5. The data has the same characteristics as the previous level, but is also connected to the data exposed by other people and organizations. Human intervention is minimized and sometimes even eliminated (semantically enriched and connected data). In this case, the services are very efficient and data mash-ups are enabled.
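The model can be read as a simple decision ladder; the toy Python classifier
below assigns a star rating to a dataset description along these lines. The
attribute names are invented for illustration and are not part of the model
itself.

```python
# Toy classifier assigning a star rating in the spirit of the 5-star
# deployment model. Attribute names are invented.
def star_rating(ds):
    if not ds.get("open_license"):
        return 0                    # not open data at all
    stars = 1                       # 1*: open licence, any format
    if ds.get("machine_readable"):
        stars = 2                   # 2*: structured but proprietary (e.g. Excel)
        if ds.get("non_proprietary_format"):
            stars = 3               # 3*: e.g. CSV instead of Excel
            if ds.get("uses_rdf_sparql"):
                stars = 4           # 4*: W3C standards, metadata, ontologies
                if ds.get("linked_to_other_data"):
                    stars = 5       # 5*: Linked Open Data
    return stars

example = {"open_license": True, "machine_readable": True,
           "non_proprietary_format": True, "uses_rdf_sparql": True,
           "linked_to_other_data": False}
print(star_rating(example))  # -> 4
```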
One of the challenges in FESTIVAL is to provide most datasets at the 4- or
5-star level in order to enable data federation among the different testbeds.
Achieving this objective depends on the specific skills required to create
datasets based on Linked Open Data, but also on the publication process: every
dataset should be accompanied by useful information (metadata) that makes its
contents understandable, makes some of its features explicit and makes it
easier to identify. For instance, to facilitate the availability of datasets
and their interoperability, it is important to use descriptive elements such
as Title, Description, Link, License, Validity Period, Managing Authority,
Format, etc. (a sketch of such a record follows below).

In this phase, the open data federation will be concretely implemented:
through the Federated Open Data Catalogue and the interoperable data model,
the open research data produced during the different experimentations will be
accessible through a single entry point, with a common way to search for the
needed, up-to-date information.
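As a minimal sketch of such a descriptive record, the snippet below builds a
catalogue entry carrying the elements listed above, loosely in the style of
vocabularies such as DCAT. All values, including the URL, are hypothetical.

```python
# Hypothetical catalogue entry carrying the descriptive elements named in
# the text above. All values, including the URL, are placeholders.
import json

dataset_entry = {
    "title": "Air quality measurements - pilot site",
    "description": "Hourly readings collected during a FESTIVAL "
                   "experiment on the federated testbeds.",
    "link": "https://example.org/catalogue/air-quality",
    "license": "CC-BY-4.0",
    "validity_period": {"from": "2015-09-01", "to": "2016-08-31"},
    "managing_authority": "Pilot site operator",
    "format": "CSV",
}

print(json.dumps(dataset_entry, indent=2))
```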
OUTPUT: Federated Open Data Catalogue, Federated Published datasets
#### Archiving and preservation
This phase includes all the activities needed to ensure that all the open
research data collected during the FESTIVAL project is correctly managed for
long-term preservation. In this sense, the FESTIVAL business model will
identify how to sustain the costs of data preservation beyond the project
duration. During this process, the technical infrastructure for data archiving
and the human resources needed for support and maintenance will also be
identified.
OUTPUT: Long-term preservation technical and business model
The next figure shows an overview of the data management processes and their
relationships; it is important to notice that “Data identification” and “Data
collection” are two iterative processes that can be executed several times
during the project lifetime, in order to identify new open data sources or to
refine standards and data models (see the sketch below).
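The sketch below renders this relationship as a simple control flow, with the
iterative identification/collection loop followed by the publication and
archiving phases. The function bodies are placeholders standing in for the
real project activities.

```python
# Placeholder control flow for the four phases: identification and
# collection iterate until no new sources appear, then publication and
# archiving follow.
def identify_data_sources(iteration):
    # pretend that new sources stop appearing after the second pass
    return ["experiment data", "city open data"] if iteration < 2 else []

def collect(sources):
    print("collecting and modelling:", sources)

def publish_and_federate():
    print("publishing to the Federated Open Data Catalogue")

def archive():
    print("archiving for long-term preservation")

iteration = 0
while True:
    sources = identify_data_sources(iteration)
    if not sources:
        break
    collect(sources)
    iteration += 1

publish_and_federate()
archive()
```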
**Figure 34 - Data management plan processes**
## 4\. Project exploitation and business model roadmap
This deliverable has started the definition of an exploitation approach and
the analysis of a business model to be applied in the context of
Experimentation as a Service in the FESTIVAL project. This analysis will
continue in the following months, to be refined with the results coming from
the project activities and with a clearer definition of the EaaS market. This
section presents a roadmap of the next activities to be performed in the field
of exploitation, business models and open data management. The next picture
summarizes the outcomes of these activities, which will be included in
Deliverable 5.3 “_First year update to communication, dissemination,
exploitation and open data management activities_”, to be released in month 12
(September 2015), and Deliverable 5.4 “_Experimentation as a Service business
model analysis_” (month 18, March 2016).
**Figure 35 - Project exploitation and business model roadmap**
A brief description of the results that will be included in D5.3 is given as
follows:
### **FESTIVAL SWOT analysis**

It will be important to identify the strong points that characterize the
FESTIVAL federated testbeds, the experimentations and, in general, the EaaS
model. A SWOT analysis will make it possible to identify, in a structured way,
the internal and external factors that are favourable and unfavourable to
achieving a specific objective. In this approach, the following will be taken
into consideration:

* Strengths: characteristics of the FESTIVAL project that give it an advantage over others.
* Weaknesses: characteristics that place the FESTIVAL project at a disadvantage relative to others.
* Opportunities: elements that FESTIVAL could exploit to its advantage.
* Threats: elements in the environment that could cause trouble for the FESTIVAL project.

The results of the SWOT analysis will also be an input to the definition of
the correct business model and will be updated with insights gained during the
project, together with a qualification of each of the items listed under the
SWOT. This qualification will serve as a further analysis and help to better
understand the critical points.
### **Products definition**
The definition of the FESTIVAL products will be fundamental to planning an
exploitation strategy. In particular, the FESTIVAL product value proposition
will be defined, i.e. the point where the product offer intersects with the
customers’ desires. In the context of FESTIVAL, the value proposition is how
the FESTIVAL EaaS approach meets the needs of its stakeholders. Defining the
value proposition is one of the fundamental elements of business models and is
the first step towards defining the FESTIVAL business model.
### **Expected impact analysis**
The impact assessment will be performed at different stages of the project, in
particular after the execution of the trials; nevertheless, in the initial
phase of FESTIVAL it will be important to identify the possible impact of the
project from a technological, economic and social point of view. In this
activity, starting from the initial project impact definition and the analysis
of the FESTIVAL ecosystem, the different types of project impact and the
stakeholders involved in them will be identified and classified.
### **Data management plan**

Starting from the processes planned in this deliverable, D5.3 will present a
first version of the Data Management Plan, covering the open research data
produced in the project. This plan is related to the activities performed in
task 2.3 and follows the specific guidelines and template proposed by [7].
The results of the activities related to the business model definition will be
included in the deliverable D5.4:
### **Updated stakeholder analysis**
The analysis of the stakeholders has started in this deliverable. The list of
stakeholders and the related information will be refined over the next months,
thanks to a more complete knowledge of the project ecosystem and to input from
the stakeholders themselves (for instance, collected through surveys). A more
detailed stakeholder analysis will make it possible to classify stakeholders
into specific target groups and, in particular, to identify their role in the
project through:

* Influence/Power – the ability of the stakeholder to affect the adoption of the FESTIVAL products/approach.
* Position – whether the stakeholder supports, opposes or is neutral about the FESTIVAL project and its enabled services.
* Interest – the stakeholder’s interest in, or concerns about, the adoption of the FESTIVAL products/approach.
### **Review of business model literature**
In order to define a business model for FESTIVAL, the fundamental elements of
business models will be explored, in particular to analyse the impact of ICT
technology and the Experimentation as a Service approach on these building
blocks. Starting from business reference models and templates, the basic
building blocks will be analysed, together with their relationship to the
elements present in the FESTIVAL ecosystem, such as products, infrastructures
(e.g. testbeds), customers (e.g. experimenters) and financial aspects.
### **Business model definition**
Using as input the different analyses previously presented, D5.4 will define a
first version of the FESTIVAL business model. This model will be updated
throughout the duration of the project, and the final version will be included
in D5.6, due in month 30.
## Conclusions
Deliverable 5.2 “Project initial exploitation and open data management plan”
reported an initial analysis and considerations about the exploitation
opportunities, the Experimentation as a Service ecosystem and the Open Data
management in the FESTIVAL project.

The first part of the document, focused on the exploitation topic, presented
several assets that will be used or exploited during and after the project:
the project partners who contributed to this analysis identified several items
that can be considered project outputs. From this analysis it is clear that
the project will produce and exploit not only IT assets (e.g. software
platforms, integration components) strictly related to the technical testbed
federation, but also methodologies, frameworks and standards. The list of
these exploitable items will be refined during the project and updated in the
next exploitation deliverables (i.e. Deliverable 5.3 “_First year update to
communication, dissemination, exploitation and open data management
activities_”).
Chapter 2 gave a first definition of the EaaS ecosystem, describing the
entities, processes and possible stakeholders involved in it. This analysis,
which can be considered an input to the business model to be defined in the
next phase of the project, showed the relationships between the categories of
stakeholders with different roles and the activities that can be performed in
the ecosystem, from the definition of the assets to the execution of an
experiment on the testbed. The same chapter also presented a series of
initiatives related to Experimentation as a Service, including projects as
well as commercial platforms and services, which is a relevant starting point
for the competitor analysis to be performed in the coming months.
Chapter 3 focused on Open Data, analysing the business opportunities related
to the adoption and reuse of the Open Data approach in different countries of
the world, and in particular the ones involved in the FESTIVAL project (Japan,
France, Spain, Italy): the sections of this chapter, which reported the
results of different international studies, underlined the business potential
of the open data market in the public sector but also in other business
domains (e.g. transportation, education, utilities). The results of the
presented studies will contribute to the FESTIVAL business plan development.
The last part of chapter 3 described the specific Open (Research) Data
management in the FESTIVAL project, showing the processes that will be
followed in the different phases of the open data life cycle during the
project, and in particular in the management of the data coming from the
experimentations. This analysis is the initial part of the Data Management
Plan that will be produced following the EU guidelines and that will be
included in Deliverable 5.3. The last part of the document presented the
roadmap for the next activities related to the exploitation and business plan,
describing how the analysis and information collected in D5.2 will be further
updated and in which deliverables the results will be included.
# Executive Summary
This deliverable is the third version of the Data Management Plan (DMP) of the
RAMCIP project, in accordance with the regulations of the Pilot action on Open
Access to Research Data of the Horizon 2020 programme (H2020). It contains the
final information about the data the project has generated, along with details
on how these data will be exploited or made accessible for verification and
re-use, and how they are curated and preserved.
To develop the first version of the deliverable (D9.3. RAMCIP Data Management
Plan – v1), a “data identification form” template was first drafted on the
basis of the H2020 guidelines for the development of projects’ Data Management
Plans. This was circulated to all project partners so as to collect all
relevant information concerning the datasets that are planned to be developed
in the course of the project. On the basis of all partners’ feedback, the
preliminary data management plan of the project was established during the
first project year. During the second project year, the preliminary DMP of the
project, as had been reported in the deliverable D9.3, has been iterated among
all project partners and was revised, in order to better depict the
Consortium’s plans following the project developments achieved so far.
As shown from the description of the project datasets provided herein, the
project at its present stage has already developed a series of datasets,
related to issues ranging from user requirements analysis, through to
evaluation of the algorithms and methods that enable the target skills of the
RAMCIP robot. Specifically, datasets have been collected towards developing
and evaluating the RAMCIP robot’s object recognition algorithms, home
environment modelling and monitoring ones, as well as its human activity,
behaviour and skills modelling and monitoring methods. Moreover, given the
focus of the project on advanced, dexterous manipulations inside the user’s
home environment, datasets are being established concerning the modelling of
objects and appliances that should be handled by the foreseen robot through
its manipulations, as well as ones related to simulating the robot’s
manipulator kinematics.
The datasets that have been collected in RAMCIP helped the development and
improvement of the skills of the RAMCIP robot, while they can also serve, for
example, as benchmarking datasets for the scientific community of the
RAMCIP-related research fields, once made public. Nevertheless, as some of the project’s
datasets involve data collection from human participants, the respective data
collection experiments, as well as the data analysis procedures that will be
employed should be carefully handled, under thorough consideration of ethical
and privacy issues involved in such datasets. In this line, the present
deliverable, in parallel to the deliverable D2.4 “Ethics Protocol”, pays due
attention to ethical and privacy issues related not only to the above, but
also to whether the foreseen datasets can be made public. For all the
identified RAMCIP datasets, specific parts that can be made publicly available
have been identified.
The public datasets of the RAMCIP project became available through a common
repository that has been formulated on the basis of the RAMCIP “data
management portal”; this is a dedicated space of the RAMCIP project website,
which can aggregate descriptions of all project public datasets, and provide
links to respective dataset download sections to the interested public, as
well as centralized data management functionalities to project partners. The
data management portal of the RAMCIP project has been developed and made fully
operational during the second project year, as reported in the present
deliverable.
The present deliverable has been formulated at the end of the project’s
lifespan, following the H2020 guidelines on Data Management Plans, and depicts
which of the datasets have been made publicly available and under which Data
Management framework.
# 1\. Introduction
The purpose of the Data Management Plan (DMP) deliverable is to provide
relevant information concerning the data that have been collected and used by
the partners of the RAMCIP project. These datasets were required for the
development and evaluation of the methods that have been researched, developed
and used to address the particular research problems of the project.
RAMCIP aims at developing a domestic service robot capable of providing
proactive and discreet assistance to elderly people with MCI and at early AD
stages in their everyday life at home. Such a robot should develop high-level
cognitive functions and advanced communication and manipulation skills in
order to interact with the patients as well as with its environment. The
process of training the robot to achieve such advanced skills requires
capturing a variety of datasets regarding, for instance, large and small scale
object detection, localization and human tracking, while it is of equal
importance to simulate the robot kinematics and the patients’ behaviour to
capture synthetic data, instead of relying exclusively on real patients such
as the ones of the primary RAMCIP end user groups.
In this scope, this deliverable extensively describes the RAMCIP consortium
plans for each dataset collected throughout the project’s duration. It
provides final information about the origin and nature of each dataset
acquired during the RAMCIP lifespan, its standards, any similar datasets and
corresponding publications, data access and preservation policies.
RAMCIP participates in the Pilot action on Open Access to Research Data, which
is part of the Horizon 2020 programme. Our goal is to provide, where possible,
accurate and high-quality data to the research community so that the project
will contribute to future advancements in the field of assistive robotics.
However, since data may contain personal information about human participants,
a focus is also given to possible ethical issues and access restrictions
regarding personal data so that no regulations on sensitive information are
violated.
The DMP is now considered fixed; compared to the previous two versions, all
the foreseen datasets have been recorded and uploaded to the Data Management
Portal. Hardly any deviations from the initially foreseen plan took place, and
all of them are justifiable by needs that arose in the due course of the
project. This third version of the RAMCIP DMP summarizes the direction of the
project regarding the collection of the data, as well as the project progress
made so far to this end.
The overall plan of RAMCIP related to data management along the project
duration is as follows:
* M6: Preliminary analysis and production of the first version of the Data Management Plan (D9.3).
* M16: Writing of the specifications for the project’s data management portal, where information over the project’s datasets and links to download locations shall be provided where applicable (e.g. where a publicly available version of a dataset exists).
* M17-M19: Development of the data management portal (to be carried out by CERTH), as a dedicated part of the RAMCIP website.
* M20: The data management portal is operational, as described in the present deliverable.
* M24: Second version of the Data Management Plan, describing actual, proven procedures implemented by the project during its data collection efforts, and preparing the sustainability of the data storage after the end of the project (updated Data Management Plan and developed Data Management Portal, as described in the present document – D9.8).
* M42: Final Data Management Plan, reflecting on the lessons learnt through the project, and describing the plans implemented by RAMCIP for sustainable storage and accessibility of the data (D9.9).
In Section 3 of the deliverable, a list of the datasets established during the
project is provided, including detailed descriptions following the
aforementioned specifications.
## 1.1 Deliverable structure
In the rest of the deliverable, Chapter 2 first summarizes key general
principles that are involved in the Data Management Plan of the RAMCIP
project, such as ones related to data security and personal data protection,
whereas it also provides a description of the project’s plans toward the
development of the data management portal.
Chapter 3 serves as the core chapter of the present deliverable, as it
describes, in detail level the datasets that have been acquired during the
RAMCIP project and their current status.
Chapter 4 describes the Data Management Portal that has been developed during
the second project year, maintained and expanded during the third year,
following the initial DMP specifications.
Finally, Chapters 5 and 6 provide a discussion on this 3rd version of the
RAMCIP Data Management Plan and draw the conclusions of the present
deliverable.
# 2\. General Principles
## 2.1 Participation in the Pilot on Open Research Data
RAMCIP participates in the Pilot on Open Research Data launched by the
European Commission along with the Horizon2020 programme. The consortium
believes firmly in the concepts of open science, and the large potential
benefits the European innovation and economy can draw from allowing reusing
data at a larger scale. Therefore, all data produced by the project may be
published with open access – though this objective will obviously need to be
balanced with the other principles described below.
## 2.2 Security
The datasets that have been collected through RAMCIP are of high value and may
contain sensitive personal data. Special care should be taken to prevent such
datasets from leaking or being compromised. This is another key aspect of
RAMCIP data management, and all data repositories used by the project will
include effective protection.
A holistic security approach will be followed, in order to protect the pillars
of information security (confidentiality, integrity, availability). The
security approach will consist of a methodical assessment of security risks
followed by their impact analysis. This analysis will be performed on the
personal information and data processed by the proposed system, their flows
and any risk associated to their processing.
Security measures will include the implementation of PAKE
(password-authenticated key exchange) protocols, such as the SRP protocol, and
protection against bots, such as CAPTCHA technologies.
Moreover, the pilot sites shall apply monitored and controlled procedures
related to the data collection, their integrity and protection. The data
protection and privacy of personal information will include protective
measures against infiltration as well as physical protection of core parts of
the systems and access control measures.
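For illustration only, the sketch below shows the core exchange of the SRP-6a
protocol mentioned above, in Python. It is a minimal didactic sketch, not the
implementation used by the project; the group parameters are the 1024-bit
group from RFC 5054, and a production deployment would rely on a vetted
library rather than hand-rolled code.

```python
import hashlib
import secrets

# 1024-bit group from RFC 5054 (Appendix A); g is its generator.
N = int(
    "EEAF0AB9ADB38DD69C33F80AFA8FC5E86072618775FF3C0B9EA2314C9C256576"
    "D674DF7496EA81D3383B4813D692C6E0E0D5D8E250B98BE48E495C1D6089DAD1"
    "5DC7D7B46154D6B6CE8EF4AD69B15D4982559B297BCF1885C529F566660E57EC"
    "68EDBC3C05726CC02FD4CBF4976EAA9AFD5138FE8376435B9FC61D2FC0EB06E3", 16)
g = 2

def H(*parts: bytes) -> int:
    """Hash byte strings into an integer (SHA-256)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return int.from_bytes(h.digest(), "big")

def I2B(x: int) -> bytes:
    return x.to_bytes((x.bit_length() + 7) // 8, "big")

k = H(I2B(N), I2B(g))  # SRP-6a multiplier parameter

# Registration: the server stores only (salt, verifier), never the password.
username, password = b"alice", b"correct horse battery staple"
salt = secrets.token_bytes(16)
x = H(salt, hashlib.sha256(username + b":" + password).digest())
v = pow(g, x, N)  # password verifier

# Login: both sides derive the same session secret without the password
# ever crossing the wire.
a = secrets.randbelow(N); A = pow(g, a, N)                # client ephemeral
b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N  # server ephemeral
u = H(I2B(A), I2B(B))                                     # scrambling parameter

S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)  # client side
S_server = pow(A * pow(v, u, N) % N, b, N)                # server side
assert S_client == S_server  # shared session secret
```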
## 2.3 Personal Data Protection
RAMCIP activities will involve human participants for various human activity
and behaviour analysis-related data collection purposes. Therefore, it is
clear that in some cases personal data will have to be collected. Such data
will be protected in accordance with the EU's Data Protection Directive
95/46/EC “on the protection of individuals with regard to the processing of
personal data and on the free movement of such data”. Further information on
how personal data collection and handling should be approached in the RAMCIP
project are provided in the deliverable D2.4 “Ethics Protocol” of the project.
All personal data collection efforts of the project partners will be
established after giving subjects full details on the experiments to be
conducted, and obtaining from them a signed informed consent form, following
the respective guidelines set in the D2.4 deliverable.
## 2.4 The RAMCIP Data Management Portal
RAMCIP has developed a data management portal as part of its website. This
portal will provide to the public, for each dataset that will become publicly
available, a description of the dataset along with a link to a download
section. The portal will be updated each time a new dataset has been collected
and is ready for public distribution. The portal will, however, not contain any
datasets that should not become publicly available.
The initial version of the portal became available during the 2nd year of the
project, in parallel to the establishment of the first versions of project
datasets that have been made publicly available. The RAMCIP data
management portal enables project partners to manage and distribute their
public datasets through a common infrastructure.
# 3\. Description of the established RAMCIP datasets
In this chapter, detailed information about the datasets that have been
captured by the partners of the RAMCIP project is provided. In order to meet
the requirements of the DMP according to the Pilot on Open Access to Research
Data of Horizon 2020, each partner provided the description of their datasets
using the template given in Annex I, which was formed by following the EC
guidelines on the dataset aspects that should be reported in DMPs of H2020
projects¹.
In the present, third version of the RAMCIP Data Management Plan, all partners
have revisited the initial descriptions of their datasets (as provided in
D9.3) and have made any necessary changes to reflect the current status of
their DMPs, so that the uploaded data precisely coincide with their
description herein.
**Datasets Naming Conventions**
Concerning the convention followed for naming the RAMCIP datasets, it should
be noted that the name of each dataset comprises: (a) a prefix 'DS' indicating
a dataset, along with its unique identification number, e.g. “DS1”; (b) the
name(s) of the partner(s) responsible for collecting it, e.g. CERTH, along
with an identifier denoting the internal numbering of the dataset for that
partner, e.g. -01; and (c) a short title of the dataset summarizing its
content and purpose, e.g. Object Recognition Dataset.
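As a small illustration, the Python sketch below validates dataset names
against this convention; the exact regular expression is our own assumption,
written to accommodate the minor separator variations ('-' versus '.') seen
across the dataset names listed below.

```python
import re

# Hedged sketch of a validator for the DS naming convention described above.
NAME_RE = re.compile(
    r"^DS(?P<id>\d+)\."         # (a) 'DS' prefix and unique dataset number
    r"(?P<partner>[A-Z_]+)"     # (b) partner acronym, e.g. CERTH, LUM_ACE
    r"[-.](?P<internal>\d{2})"  #     partner-internal numbering, e.g. -01
    r"[. ]\s*(?P<title>.+)$"    # (c) short descriptive title
)

for name in ("DS1.CERTH-01. Object Recognition Dataset",
             "DS8.CERTH.04. Virtual User Models Dataset"):
    m = NAME_RE.match(name)
    print(m.groupdict() if m else f"non-conforming name: {name}")
```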
**Summary of the RAMCIP datasets**
Within the RAMCIP project period all the foreseen datasets have been collected
and uploaded to the Data Management Portal covering a series of research
dimensions on the skills the RAMCIP robot had developed. A comprehensive
description of the uploaded datasets is also provided within the portal.
A summary of the developed datasets and those made publicly available through
the RAMCIP Data Management Portal is provided in the table below. All expected
outcomes have been established and the anticipated public parts of the RAMCIP
datasets have been uploaded at the RAMCIP Data Management Portal, according to
the updated data management plan presented in the second version of the
project’s DMP deliverable (D9.8; RAMCIP Data Management Plan – v2, M24).
#### Table 1\. Summary of datasets planned to be collected during the course
of the RAMCIP project and current status
<table>
<tr>
<th>
**No**
</th>
<th>
**Name**
</th>
<th>
**Description**
</th>
<th>
**Summary**
</th>
<th>
**Current Status**
</th> </tr>
<tr>
<td>
**DS1**
</td>
<td>
DS1.CERTH-01. Object Recognition Dataset
</td>
<td>
A large scale dataset of images and associated annotations will be collected
aiming at benchmarking object recognition and grasping algorithms in a
domestic environment.
</td>
<td>
Object 3D models and test cases have been made publicly available through the
RAMCIP Data Management Portal (DMPo).
</td>
<td>
**Uploaded Final**
</td> </tr> </table>
<table>
<tr>
<th>
**DS2**
</th>
<th>
DS2.CERTH-02. Domestic Space Modeling Dataset
</th>
<th>
A collection of RGB-D data with great spatial coherence using the Kinect2
sensor of multiple places concerning indoor scenarios both for large and small
scale circumstances.
</th>
<th>
Data regarding metric mapping along with hierarchical semantic information for
the objects/ robot parking positions is publicly available through the
RAMCIP DMPo
</th>
<th>
**Uploaded Final**
</th> </tr>
<tr>
<td>
**DS3**
</td>
<td>
DS3.ACCREA-01. Interactive Environmental Components Dataset
</td>
<td>
A collection of CAD data containing models of
usable/interactive elements of RAMCIP user's surroundings, like light
switches, water taps, cooker knobs, door handles etc.
</td>
<td>
Data with models of house elements from various environments along with full
house models are publicly available through the RAMCIP DMPo.
</td>
<td>
**Uploaded Final**
</td> </tr>
<tr>
<td>
**DS4**
</td>
<td>
DS4.CERTH-03. Human Tracking Dataset
</td>
<td>
Dataset for human identification, pose and gesture tracking, facial expression
monitoring and activity tracking, obtained with a Kinect2 sensor mounted on a
mobile robotic base (e.g. Turtlebot).
</td>
<td>
Dataset containing human skeleton tracking, with low level actions and high
level activities through data collection experiments at the premises of CERTH
and LUM is publicly available through the RAMCIP DMPo.
</td>
<td>
**Uploaded Final**
</td> </tr>
<tr>
<td>
**DS5**
</td>
<td>
DS5.SSSA-01. Human Motion for Fine Biomechanical Analysis Dataset
</td>
<td>
Dataset for the training and evaluation of the Fine-grained Body Motion
Tracking Task by SSSA.
</td>
<td>
Data collected through experiments at the premises of LUM from technical
partners of SSSA are publicly available through the RAMCIP DMPo.
</td>
<td>
**Uploaded Final**
</td> </tr>
<tr>
<td>
**DS6**
</td>
<td>
DS6.SSSA-02. Human Walking Dataset
</td>
<td>
Dataset for characterizing the walking behaviour of subjects and
identification of changes in the motion patterns, based on RGB-D cameras.
</td>
<td>
Data collected through experiments at the premises of LUM from technical
partners of SSSA are publicly available through the RAMCIP DMPo.
</td>
<td>
**Uploaded Final**
</td> </tr>
<tr>
<td>
**DS8**
</td>
<td>
DS8.CERTH.04. Virtual User Models Dataset
</td>
<td>
Virtual User Models (VUMs) of robot users (e.g. MCI patients), encoding their
cognitive and motor skills, behavioral aspects,
</td>
<td>
The VUMs dataset has been developed from users who participated in RAMCIP
preliminary trials, including both healthy and MCI users, and is publicly
</td>
<td>
**Uploaded Final**
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
as well as human-
robot interaction and communication preferences.
</td>
<td>
available through the RAMCIP DMPo.
</td>
<td>
</td> </tr>
<tr>
<td>
**DS9**
</td>
<td>
DS9.TUM.01. Lower-body Kinematic Dataset
</td>
<td>
Dataset consisting of kinematics of lower-body interaction by pairs of human
participants.
</td>
<td>
Data collected in a laboratory environment with ground-truth measurements are
publicly available through the RAMCIP DMPo.
</td>
<td>
**Uploaded Final**
</td> </tr>
<tr>
<td>
**DS10**
</td>
<td>
DS10.ACCREA.02. Manipulator Kinematics Chains Dataset
</td>
<td>
A set of Simulink/CAD/Gazebo models for simulation, optimization and
development purposes.
</td>
<td>
The second version of the robot’s urdf model has been developed and is
publicly available through the RAMCIP DMPo.
</td>
<td>
**Uploaded Final**
</td> </tr>
<tr>
<td>
**DS11**
</td>
<td>
DS11.LUM_ACE.01. User Requirements Dataset
</td>
<td>
Dataset with the pictures and videos taken during the workshops with
stakeholders in Lublin and Barcelona, as well as anonymized questionnaires,
which were filled in by the different stakeholder groups.
</td>
<td>
Dataset established, used for user requirements analysis. Analysis results
made publicly available in the RAMCIP deliverable 2.1, which has been uploaded
at the RAMCIP DMPo.
</td>
<td>
**Described in the deliverable D2.1**
</td> </tr>
<tr>
<td>
**DS12**
</td>
<td>
DS12.CERTH-01. 3D Force Slippage PB Dataset
</td>
<td>
Dataset including 3D force measurements from Optoforce 3-axis sensors during
experiments where slippage occurred, covering several surfaces.
</td>
<td>
Data collected with Optoforce sensors from SHADOW, involving multiple
experimental setups, are publicly available through the RAMCIP DMPo.
</td>
<td>
**Uploaded Final**
</td> </tr> </table>
In the following sections, a detailed description of each dataset, in
accordance with the H2020 DMP template, is provided.
## 3.1 Dataset “DS1.CERTH-01.ObjectRecognitionDataset”
#### General Description
A large scale dataset of images and associated annotations will be collected
aiming at benchmarking object recognition and grasping algorithms in a
domestic environment.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS1.CERTH-01. Object Recognition Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data** _
The dataset includes a collection of RGB-D images of household objects
captured from various viewpoints using a Kinect1 and/or Kinect2 sensor. Small
sized objects have been captured by placing them on a turntable. Fiducial
markers have been used to obtain an accurate estimation of the camera pose for
each viewpoint. Larger objects have been captured by moving the sensor around
the object and capturing as many views as possible. In cases where a large
object contains articulations, detailed models of all the articulated parts
are provided, accompanied with the corresponding annotated information
regarding joint types (revolute, prismatic), joint frame (position and
orientation), and joint limits.
_**Nature and scale of data** _
The data consist of 3D models of objects that have been created either with
CAD software, 3D scanner or by merging RGB-D point clouds as well as test
images depicting realistic scenarios for evaluation.
_Data Format:_ Training: PLY, OBJ for 3D models, Testing: PNG, JPG for images,
TXT for annotations
_**To whom could the dataset be useful** _
The dataset is valuable for benchmarking algorithms for object recognition,
robotics navigation and grasping.
_**Related scientific publication(s)** _
The dataset accompanied the research results in the field of object
recognition and grasping.
_**Indicative existing similar data sets** _
There are several public datasets containing RGB-D images of objects aimed at
object recognition.
The UW dataset ( _http://www.cs.washington.edu/rgbd-dataset/_ )
Berkeley's B3DO dataset ( _http://kinectdata.com/_ )
Berkeley's BigBird dataset ( _http://rll.berkeley.edu/bigbird/_ )
Berkeley's YCB dataset ( _http://rll.eecs.berkeley.edu/ycb/_ )
_Part of our dataset will be considered for integration in the B3DO dataset
that is designed to be extensible._
</td> </tr> </table>
<table>
<tr>
<th>
**3.**
</th>
<th>
**Standards and metadata**
</th> </tr>
<tr>
<td>
Indicative metadata include a) foreground-background masks for training
images, b) camera calibration information, c) camera pose matrix for each
viewpoint, d) object identifier and description category label, e) 3D object
model in CAD format. The metadata are in a format that can be easily parsed
with open source software.
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type** _ Widely open.
_**Access Procedures** _
A web page has been created on the RAMCIP data management portal (hosted at
the RAMCIP web site) that provides a description of the dataset and links to a
download section.
_**Embargo periods** _
Some datasets will be available only after the corresponding paper is accepted
and published.
_**Technical mechanisms for dissemination** _
A link to the dataset from the RAMCIP web site (RAMCIP data management
portal). The link is provided in all relevant RAMCIP publications. A technical
publication describing the dataset and acquisition procedure has been
published.
_**Necessary S/W and other tools for enabling re-use** _
The dataset is designed to allow easy reuse with commonly available tools and
software libraries.
_**Repository where data will be stored** _
The dataset is accommodated at the data management portal of the project
website.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
_**Data preservation period** _
The dataset will be preserved online for as long as there are regular
downloads. After that it would be made accessible by request.
_**Approximated end volume of data** _ The dataset is approximately 150MB.
_**Indicative associated costs for data archiving and preservation** _
Probably a dedicated hard disk drive will be allocated for the dataset. No
costs are currently foreseen regarding its preservation.
_**Indicative plan for covering the above costs** _ Small, one-time costs
covered by RAMCIP.
</td> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr> </table>
<table>
<tr>
<th>
_**Partner Owner / Data Collector** _
CERTH
_**Partner in charge of the data analysis** _
CERTH
_**Partner in charge of the data storage** _
CERTH
_**WPs and Tasks** _
The data have been collected within activities of WP3 in Task 3.1 and have
mainly been used for analysis in the scope of WP3, WP5 and WP6 tasks
</th> </tr> </table>
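As a usage illustration for this dataset, the sketch below loads a 3D model
and an associated camera-pose annotation with common open-source tools
(Open3D and NumPy). The file names and the pose convention are assumptions;
the actual layout is documented in the dataset's own metadata.

```python
import numpy as np
import open3d as o3d  # any open-source PLY/OBJ loader would work equally well

# Hypothetical file names following the formats listed above (PLY models,
# TXT annotations with one 4x4 camera pose matrix per viewpoint).
mesh = o3d.io.read_triangle_mesh("DS1/models/cup.ply")
pose = np.loadtxt("DS1/annotations/cup_view_000_pose.txt").reshape(4, 4)

# Assuming the annotation stores the camera pose in world coordinates,
# bring the model into that viewpoint's camera frame.
mesh.transform(np.linalg.inv(pose))
print(len(mesh.vertices), "vertices transformed into the camera frame")
```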
## 3.2 Dataset “DS2.CERTH02.DomesticSpaceModellingDataset”
#### General Description
The space modelling dataset comprises a collection of RGB-D data with great
spatial coherence, captured using the Kinect2 ToF (Time of Flight) sensor.
Multiple places have been recorded, concerning indoor scenarios both for large
and small scale circumstances. The collected dataset contains fully registered
color (RGB) images with their respective depth maps. The collection area
concerns a domestic real home or home-like environment.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS2.CERTH-02. Domestic Space Modeling Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data** _
During the acquisition procedure the sensor motion was kept as smooth as
possible, which, combined with the high frame rate of the sensor, ensured
great overlap among the captured scenes. Therefore, the collected data are
suitable for
mapping and navigation experimentation.
_**Nature and scale of data** _
The Domestic-Space-Modelling dataset is split into two parts:
In PART I the recordings in static environment have been obtained, providing
thus the required data to assess the developed solutions (mapping, navigation)
in their basis.
In PART II the recording has been carried out in a dynamic environment
including also human activity. Thus the acquired data have been used for the
assessment of the developed algorithms in their higher level (map recalling,
planning, and replanning) and their real performance in human inhabited
environments.
Moreover, the acquired dataset is accompanied with accurate ground-truth
measurements (of the robot location and pose, as well as of the modelled
space) for the evaluation of the mapping and localization algorithms. _Data
Format:_ PNG, JPG image format, PCD format for 3D models
_**To whom could the dataset be useful** _
The dataset is useful for the benchmarking of mapping and robotic navigation
solutions.
_**Related scientific publication(s)** _
The results of the developed algorithms along with the Domestic-Space-
Modelling dataset have been disseminated in International Conferences and
Journals of the robotics field.
_**Indicative existing similar datasets** _
</td> </tr> </table>
<table>
<tr>
<th>
Similar datasets have already been collected in the past such as:
( _http://vision.in.tum.de/data/datasets/rgbd-dataset_ ) provided by the
Technische Universität München.
( _http://robotics.pme.duth.gr/kostavelis/Dataset.html_ ) provided by
Laboratory of Robotics and Automation, DUTH.
Contrary to the aforementioned cases where the data have been collected with
the RGB-D sensor Kinect1, our Domestic-Space-Modelling dataset will be
captured with a Kinect2 sensor which is more accurate and retains greater
resolution.
Since the publicly available datasets are recorded with a Kinect1 sensor, a
direct integration with the Domestic-Space-Modelling dataset is problematic
mainly due to the fact that a) the data are collected in different
environments and b) the resolution is different between the acquired data.
</th> </tr>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The Domestic-Space-Modelling dataset is accompanied by accurate ground-truth data
ensuring the validity of the developed algorithms as well as their reuse in
future research activities.
The metadata that have been produced are summarized as follows:
* The point clouds (textured/pseudo-colored) of each instance transformed in real world coordinates (x, y, z).
* The produced 3D/2D map as a result of the developing procedure within the RAMCIP project, providing a benchmark
* The consecutive Visual Odometry (VO) transformations reproducing the trajectory of the robot, also for benchmarking.
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type** _
Only portions of the PART I of the dataset that contain a static environment
are publicly released. These portions concern data collected from the LUM
apartment, simulating a home-like environment, without human presence.
For the remaining parts of this dataset, including e.g. data collected from real
apartments and those dynamically updated through human activities, only
private access is given. The access is granted to RAMCIP partners whose
research and development activities have a direct dependency (e.g. map
recalling, planning and re-planning), on the basis that a respective informed
consent has been taken from the human subjects participated in the data
collection.
The latter parts of the dataset, including models of real apartments and in
some cases dynamically updated through human activity, cannot become publicly
available. Regardless of the informed consent for publication, such data could
lead to the recognition of a participant’s identity and details of his/her
home environment. Thus, such data raise significant privacy and ethical
concerns, and publication of such a dataset should be prevented, as further explained in the
publication of such a dataset should be prevented, as further explained in the
project’s ethics protocol (deliverable D2.4).
On the contrary, as the LUM home-like environment concerns a public space, the
respective home environment modelling and monitoring dataset, without human
presence, would not be subject to such privacy and ethical issues.
The dataset is accompanied with a specific technical report describing the
calibration, the acquisition procedure as well as technical details of the
architecture of the robot.
</td> </tr> </table>
<table>
<tr>
<th>
_**Access Procedures** _
For the public part of this dataset a web page has been created on the RAMCIP
data management portal (hosted at the RAMCIP web site) that provides a
description of the dataset and links to a download section.
The private part of this dataset is stored at a specifically designated
private space of CERTH, in dedicated hard disk drives, on which only members
of the CERTH research team whose work directly relates to these data will have
access. For the other RAMCIP partners to obtain access to these data, they
should provide a formal request to the CERTH’s primarily responsible for the
data storage, including a justification over the need to have access to these
data. Once deemed necessary, CERTH will provide the respective data portions
to the partner.
_**Embargo periods (if any)** _
None
_**Technical mechanisms for dissemination** _
A link to the public part of this dataset from the RAMCIP web site (data
management portal). The link has been provided in all relevant RAMCIP
publications. A technical publication describing the dataset and acquisition
procedure has been published.
_**Necessary S/W and other tools for enabling re-use** _
The dataset is designed to allow easy reuse with commonly available tools and
software libraries.
_**Repository where data will be stored** _
The public part of this dataset is accommodated at the data management portal
of RAMCIP.
</th> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
_**Data preservation period** _
The public part of the dataset is preserved online for as long as there are
regular downloads. After that, it would be made accessible by request.
The private part of the dataset will be preserved by CERTH at least until the
end of the project.
_**Approximated end volume of data** _
The dataset is approximately 5 Gigabytes.
_**Indicative associated costs for data archiving and preservation** _
Two dedicated hard disk drives will be allocated for the dataset; one
dedicated to the public part and one to the private. No costs are currently
foreseen regarding its preservation.
_**Indicative plan for covering the above costs** _ Small one-time costs
covered by RAMCIP.
</td> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr> </table>
_**Partner Owner / Data Collector** _
CERTH
_**Partner in charge of the data analysis** _
CERTH
_**Partner in charge of the data storage** _
##### CERTH
_**WPs and Tasks** _
The data have been collected within activities of WP3 in Task 3.1 and used in
the research efforts of the same task, as well as in the context of WP5
activities.
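To illustrate how the metadata listed above (point clouds plus consecutive VO
transformations) can be re-used, the sketch below chains the relative VO
transforms into global poses and registers one point cloud into the map frame.
The file names, frame count and frame convention are assumptions.

```python
import numpy as np
import open3d as o3d

# Assumed layout: one 4x4 VO transform per frame, relating frame i-1 to frame i.
poses = [np.eye(4)]
for i in range(1, 100):
    T = np.loadtxt(f"DS2/vo/{i:06d}.txt").reshape(4, 4)
    poses.append(poses[-1] @ T)  # compose relative motions into a trajectory

# Register one frame's point cloud into the global map frame.
cloud = o3d.io.read_point_cloud("DS2/pcd/000050.pcd")
cloud.transform(poses[50])
```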
## 3.3 Dataset “DS3.ACCREA01.InteractEnvComponentsDataset”
#### General Description
The interactive environmental components dataset comprises a collection of CAD
data. The prepared dataset contains models of usable/interactive elements of the
RAMCIP user's surroundings, such as light switches, water taps, cooker knobs,
door handles etc.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS3.ACCREA-01. Interactive Environmental Components Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data** _
CAD models have been created using a 3D-scanning device and regular
calliper/ruler methods.
_**Nature and scale of data** _
The dataset is available in the form of a SolidWorks library file package.
_Data Format:_ SolidWorks format
_**To whom could it be useful** _
The collected data have been used for simulations and development of the
RAMCIP manipulator, mobile platform, elevation mechanism and dexterous hand
kinematic chains. Models can be imported into the Gazebo environment
simulation, which have been used for testing components and system integration
by most of technical RAMCIP partners.
_**Related scientific publication(s)** _
N/A
_**Indicative existing similar data sets** _
Several websites provide free libraries of everyday objects, although not all
of them are applicable for RAMCIP purposes, as many serve artistic rather than
mechanical/simulation needs.
</td> </tr>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The metadata that have been produced can be summarized as follows:
* the parsing routines used to read and absorb the data for developing purposes,
* the 3D CAD maps of selected user environments with modelled objects placed on specified world coordinates
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type** _
A part of the dataset is publicly available. The public part is accessible
through the
</td> </tr> </table>
<table>
<tr>
<th>
data management portal of the RAMCIP project. The dataset is accompanied with
photographs and datasheets of chosen, more complex models.
_**Access Procedures** _
For the public part of this dataset a web page has been created on the RAMCIP
data management portal (hosted at the RAMCIP web site) that provides a
description of the dataset and links to a download section.
The private part of this dataset is stored at a specifically designated
private space of ACCREA and CERTH, in dedicated hard disk drives, on which
only members of the ACCREA/CERTH research team whose work directly relates to
these data will have access. For the other RAMCIP partners to obtain access to
these data, they should provide a formal request to the ACCREA’s primarily
responsible for the data storage, including a justification over the need to
have access to these data. Once deemed necessary, ACCREA will provide the
respective data portions to the partner.
_**Embargo periods** _
None
_**Technical mechanisms for dissemination** _
A link to the dataset from the Data management portal. The link is provided in
all relevant RAMCIP publications.
_**Necessary S/W and other tools for enabling re-use** _
The dataset is designed to allow easy reuse with commonly available tools and
software libraries.
_**Repository where data will be stored** _
The public part of this dataset is accommodated at the data management portal
of the project website.
</th> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
_**Data preservation period** _
The dataset will be preserved online for as long as there are regular
downloads. After that it would be made accessible by request.
_**Approximated end volume of data** _
The data are expected to be several hundred Megabytes.
_**Indicative associated costs for data archiving and preservation** _
A dedicated hard disk drive has been allocated for the dataset. No costs are
currently foreseen regarding its preservation.
_**Indicative plan for covering the above costs** _
The costs have been covered by the local hosting institute in the context of
RAMCIP.
</td> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
_**Partner Owner / Data Collector** _
</td> </tr> </table>
ACCREA and CERTH
_**Partner in charge of the data analysis** _
ACCREA, TUM, CERTH, SHADOW
_**Partner in charge of the data storage** _
##### ACCREA, CERTH
_**WPs and Tasks** _
The data have been collected within activities of WP5 in Task 5.1 and 5.4, to
serve the respective project tasks’ research efforts.
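Since the CAD models of this dataset feed the robot simulation models (see
also DS10, which provides the robot's URDF model), a minimal sketch of
inspecting such a URDF with the standard ROS `urdf_parser_py` package is shown
below; the file name is an assumption.

```python
from urdf_parser_py.urdf import URDF  # ROS package: urdf_parser_py

# Hypothetical file name for the robot model distributed with DS10.
robot = URDF.from_xml_file("ramcip_robot.urdf")
print(robot.name, "-", len(robot.links), "links,", len(robot.joints), "joints")

# List the revolute joints and their limits, e.g. for kinematic simulation.
for joint in robot.joints:
    if joint.type == "revolute":
        print(joint.name, joint.limit.lower, joint.limit.upper)
```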
## 3.4 Dataset “DS4.CERTH-03.HumanTrackingDataset”
#### General Description
Dataset for human identification, pose and gesture tracking experiments, along
with high-level activity monitoring (e.g. Activities of Daily Living – ADLs,
such as cooking or eating), obtained with a Kinect2, Kinect1 or ASUS Xtion
sensor mounted on a mobile robotic base (e.g. Turtlebot).
The dataset includes facial expressions monitoring and activity tracking
during different affective states, to be used for WP4 affect-related analyses.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS4.CERTH-03. Human Tracking Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data** _
The dataset has been collected using a Kinect2 sensor mounted on a mobile
robotic base (e.g. Turtlebot robotic platform). During the acquisition
procedure the robot motion was as smooth as possible.
_**Nature and scale of data** _
The collection experiment has been carried out in two phases, splitting the
dataset into three parts:
In PART I the recording focuses on the monitoring of low-level human
activities, such as pose, gestures and actions.
In PART II the recording deals with the monitoring of high-level domestic
activities, such as cooking and eating.
In PART III the recording focuses on facial expressions, biosignals and
activity monitoring during different affective states of the user.
_Data Format:_ PNG, JPG for images, XML or TXT for annotations
_**To whom could the dataset be useful** _
The collected data have been used for the development and evaluation of the
human activity monitoring and the affect recognition methods of the RAMCIP
project. The different parts of the dataset are useful in the benchmarking of
a series of human tracking methods, focusing either on human identification,
on pose and gesture analysis and tracking, on high-level activity recognition
and on affect-related human activity analysis.
_**Related scientific publication(s)** _
The dataset accompanies our research results in the field of human activity
monitoring and affect recognition.
_**Indicative existing similar datasets** _
HumanEva: Synchronized Video and Motion Capture Dataset and Baseline Algorithm
for Evaluation of Articulated Human Motion, IJCV 2010.
Cornell Activity Datasets: CAD-60 & CAD-120
(http://pr.cs.cornell.edu/humanactivities/data.php)
It should be noted that although several RGB-D datasets dealing with human
</td> </tr> </table>
<table>
<tr>
<th>
activity analysis are publicly available at present (e.g. the
MSRDailyActivity3D dataset - http://research.microsoft.com/en-
us/um/people/zliu/actionrecorsrc), to the best of our knowledge, no domestic
human activity tracking datasets, focusing on low-level actions, high-level
activities and affect, recorded through the Kinect2 sensor mounted on a mobile
robot base currently exist.
</th> </tr>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The dataset is accompanied with detailed documentation of its contents.
Indicative metadata include: (a) description of the experimental setup and
procedure that led to the generation of the dataset, (b) documentation of the
variables recorded in the dataset and (c) annotated pose, action, activity and
affective state of the monitored person per time interval.
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type** _
For ethical reasons, only the data captured at the LUM premises from normal
healthy control subjects are publicly available, while the rest remain
private, serving the RAMCIP R&D objectives.
Overall, the data that became publicly available respect the principle of
anonymity. Therefore, in principle, data that can expose the identity of
participants, including RGB recordings of subjects, have been excluded from
publication. The inclusion of RGB data that could expose the identity of
normal healthy control subjects in the public part of this dataset was further
investigated in the third project year; where such data were included, this
was done on the basis of appropriate informed consent to data publication (see
deliverable D2.4).
_**Access Procedures** _
For the portions of the dataset that are publicly available, a respective web
page has been created on the data management portal (hosted at the RAMCIP web
site) that provides a description of the dataset and links to a download
section.
The private part of this dataset is stored at a specifically designated
private space of CERTH, in dedicated hard disk drives, on which only members
of the CERTH research team whose work directly relates to these data have
access. For further RAMCIP partners to obtain access to these data, they
should provide a proper request to the CERTH primarily responsible, including
a justification over the need to have access to these data. Once deemed
necessary, CERTH will provide the respective data portions to the partner.
_**Embargo periods** _
None
_**Technical mechanisms for dissemination** _
For the public part of the dataset, a link has been provided from the Data
management portal. The link is provided in all relevant RAMCIP publications. A
technical publication describing the dataset and acquisition procedure has
been published.
_**Necessary S/W and other tools for enabling re-use** _
The dataset is designed to allow easy reuse with commonly available tools and
software libraries.
</td> </tr>
<tr>
<td>
_**Repository where data will be stored** _
The public part of this dataset is accommodated at the data management portal
of the project website.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
_**Data preservation period** _
The public part of this dataset will be preserved online for as long as there
are regular downloads. After that it would be made accessible by request.
The private part of the dataset will be preserved by CERTH at least until the
end of the project.
_**Approximated end volume of data** _
The dataset is approximately 500 MB.
_**Indicative associated costs for data archiving and preservation** _
Two dedicated hard disk drives have been allocated for the dataset; one for
the public part and one for the private. There are no costs associated with
its preservation.
_**Indicative plan for covering the above costs** _ Small one-time costs
covered by RAMCIP.
</td> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
_**Partner Owner / Data Collector** _
CERTH
_**Partner in charge of the data analysis** _
CERTH, SSSA, LUM
_**Partner in charge of the data storage** _
CERTH
_**WPs and Tasks** _
The data have been collected within activities of WP3 and WP4, to mainly serve
the research efforts of T3.2, T3.4 and T4.2.
</td> </tr> </table>
## 3.5 Dataset “DS5.SSSA-01.HumanMotionFineDataset”
#### General Description
This dataset has been created for the purpose of characterizing the walking
behavior of MCI subjects based on RGB-D cameras (T3.3), in activities not
covered by other RAMCIP datasets.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS5.SSSA-01. Human Motion for Fine Biomechanical Analysis Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data** _
The dataset has been collected by asking people to perform up to 2 minutes of
normal walking motion, then to perform the same motion after a fatiguing
protocol. The trials have been recorded with a Kinect1 RGB-D camera placed on
a fixed structure with the same point of view as the robot at short-range
distance, together with inertial sensors.
_**Nature and Scale of Data** _
The dataset consists of two sets of data: leg motions before and after the
onset of physical fatigue, captured from 20 people, each suffering from mild
cognitive impairment.
The size of the dataset is on the order of 200-300 GB.
_Data Format:_ ROS bag files, XML or TXT for annotations.
_**To whom could the dataset be useful** _
The dataset is helpful for research because it combines marker-less tracking
in specific short-range shots with inertial sensors. The biomechanical
measures provided by the sensors offer a means to assess differences in
walking patterns due to the onset of physical fatigue in MCI subjects.
_**Related scientific publication(s)** _
A scientific publication has been created, analyzing the data and proposing a
new mechanism to detect the onset of physical fatigue from gait patterns in
MCI subjects, using deep learning.
_**Indicative existing similar data sets (including possibilities for
integration and reuse)** _
_http://www.cbsr.ia.ac.cn/users/szheng/?page_id=71_
_http://www.cvc.uab.es/DGaitDB/Summary.html_ http://mocap.cs.cmu.edu/
</td> </tr>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The dataset is accompanied with detailed documentation of its contents.
Indicative metadata include: (a) description of the experimental setup and
</td> </tr> </table>
<table>
<tr>
<th>
procedure that led to the generation of the dataset, (b) documentation of the
variables recorded in the dataset and (c) statistics for every participant
with experimental notes.
</th> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type** _
Data from MCI subjects as well as from healthy people have been acquired, and
such data do not contain personal information. This allows the release of the
data, after anonymization, to the public. The collection of data from these
subjects required the adoption of a consent form that followed the guidelines
of deliverable D2.4 (Ethics Protocol).
_**Access Procedures** _
For the portions of the dataset that have been made publicly available, a
respective web page has been created on CERTH's RAMCIP portal that provides a
description of the dataset and links to a download section.
The private part of this dataset is stored at a specifically designated
private space of SSSA, in dedicated hard disk drives, on which only members of
the SSSA research team whose work directly relates to these data have access.
For further RAMCIP partners to obtain access to these data, they should
provide a proper request to the SSSA’s primarily responsible, including a
justification over the need to have access to these data. Once deemed
necessary, SSSA will provide the respective data portions to the partner.
_**Embargo periods** _
None
_**Technical mechanisms for dissemination** _
For the public part of the dataset, a link has been provided from the Data
management portal. The link is also provided in all relevant RAMCIP
publications. A technical publication describing the dataset and acquisition
procedure has been published.
_**Necessary S/W and other tools for enabling re-use** _
The data are published as ROS bags and in a form easily loadable by MATLAB.
The ROS solution works well with existing tools, but it is less suitable in
the long term due to the complexity of the representation and the associated
dependencies.
_**Repository where data will be stored** _
The dataset has also been made available over a dedicated website under the
domain of SSSA. The data management portal provides links to the dataset’s
download section.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
**Data preservation period**
The data are also available on the PERCRO SSSA website with an expected
lifetime of 10 years given the history of PERCRO and the backup procedures of
SSSA. The digital signature of the whole dataset, or the storage of the
dataset in a git repository provides support for the correct duplication and
preservation.
</td> </tr> </table>
<table>
<tr>
<th>
_**Approximated end volume of data** _ 200-300 GB.
_**Indicative associated costs for data archiving and preservation** _ None,
they are kept on SSSA server.
</th> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
_**Partner Owner / Data Collector** _
SSSA
_**Partner in charge of the data analysis** _
SSSA, CERTH
_**Partner in charge of the data storage** _
SSSA
_**WPs and Tasks** _
The data have been collected within activities of WP3 in Task 3.3
</td> </tr> </table>
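Because this dataset is distributed as ROS bag files, the short Python sketch
below shows one way to iterate over its messages with the ROS1 `rosbag` API.
The bag file name and topic are assumptions; `rosbag info <file>` lists the
actual topics contained in each recording.

```python
import rosbag  # ROS1 Python API

# File name and topic are assumptions, purely for illustration.
with rosbag.Bag("DS5_subject01_walking.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=["/camera/depth_registered/points"]):
        print(t.to_sec(), topic, type(msg).__name__)
        break  # inspect just the first message
```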
## 3.6 Dataset “DS6.SSSA-02.WalkingSkillsDataset”
#### General Description
This dataset has been created for the purpose of characterizing the walking
behavior of healthy subjects based on RGB-D cameras. This characterization is
part of the motor-based skill assessment of the subject, for the
identification of changes in the motion patterns.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS6.SSSA-02. Human Walking Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data (e.g. indicative collection procedure, devices used etc.)**
_
Actors have been asked to perform two different walking behaviors: before and
after the onset of physical fatigue. Physical fatigue has been induced with up
to 45 minutes of walking on a treadmill. This recording has been performed
with a Kinect 2 sensor, a Kinect 1 sensor, inertial sensors and a Vicon motion
capture setup as ground truth. The data have been labelled by the presence of
physical fatigue.
_**Nature and scale of data** _
For this dataset, 20 subjects have been recorded for 2 minutes for each type
of walking behavior.
The dataset is approximately 50 GB.
_Data Format: ROS bag files_ , XML or TXT for annotations
_**To whom could the dataset be useful** _
This dataset is very valuable for research due to the validation with Vicon
and the use of Kinect 2 and Kinect 1.
The collected data have been used for the development and evaluation of the
human tracking and walking assessment. The different parts of the dataset are
useful in understanding different walking behaviors in fatigued subjects.
_**Related scientific publication(s)** _
Such a dataset does not exist in the literature, and it can be used for
characterizing walking patterns and for low-cost solutions to assess gait
behaviors.
_**Indicative existing similar datasets** _
Various activity datasets do exist, but none deals with variability in walking
patterns. In addition, this dataset provides Vicon measures together with the
Kinect2, Kinect1 and inertial sensor recordings.
</td> </tr>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The dataset is accompanied with detailed documentation of its contents.
Indicative metadata include: (a) description of the experimental setup and
procedure that led to the generation of the dataset, (b) documentation of the
variables recorded in the dataset and (c) statistics of the participants,
experimental notes and biometrics statistics of the monitored person per time
</td> </tr> </table>
<table>
<tr>
<th>
interval.
</th> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type** _
Only data from normal healthy control subjects have been acquired, and such
data do not contain personal information. This allows the release of data,
after anonymization, to the public. The collection of data from these subjects
required the adoption of a consent form that followed the guidelines of
deliverable D2.4 (Ethics Protocol).
_**Access Procedures** _
For the portions of the dataset that are publicly available, a respective web
page has been created on the data management portal that provides a
description of the dataset and links to a download section.
The private part of this dataset has been stored at a specifically designated
private space of SSSA, in dedicated hard disk drives, on which only members of
the SSSA research team whose work directly relates to these data have access.
For further RAMCIP partners to obtain access to these data, they should
provide a proper request to the SSSA’s primarily responsible, including a
justification over the need to have access to these data. Once deemed
necessary, SSSA will provide the respective data portions to the partner.
_**Embargo periods** _
None
_**Technical mechanisms for dissemination** _
Publishing and RAMCIP project advertising. Eventually robotics mailing list
advertising.
_**Necessary S/W and other tools for enabling re-use** _
The data are published as ROS bags and in a form easily loadable by MATLAB.
The ROS solution works well with existing tools, but it is less suitable in
the long term due to the complexity of the representation and the associated
dependencies.
_**Repository where data will be stored** _
The public part of the dataset is available over a dedicated website under the
domain of SSSA. The RAMCIP data management portal provides links to the
respective dataset’s download section.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
**Data preservation period**
The data are available on the PERCRO SSSA website with an expected lifetime of
10 years given the history of PERCRO and the backup procedures of SSSA. The
digital signature of the whole dataset, or the storage of the dataset in a git
repository could provide support for the correct duplication and preservation.
_**Approximated end volume of data** _
200-300 GB
</td> </tr> </table>
<table>
<tr>
<th>
_**Indicative associated costs for data archiving and preservation** _ None if
kept on SSSA server.
</th> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
_**Partner Owner / Data Collector** _
SSSA
_**Partner in charge of the data analysis** _
SSSA, CERTH
_**Partner in charge of the data storage** _
SSSA
_**WPs and Tasks** _
The data have been collected within activities of WP3 in Task 3.3 and Task
3.5.
</td> </tr> </table>
## 3.7 Dataset “DS8.CERTH.04.VirtualUserModelsDataset”
#### General Description
This dataset concerns the RAMCIP VUMs; these are Virtual User Models (VUMs) of
robot users (e.g. MCI patients), encoding their cognitive and motor skills,
behavioral aspects, as well as human-robot interaction and communication
preferences. The models are XML-based specifications of parameters that are
taken into account in the context of the RAMCIP user modelling methodology.
The dataset includes for each indicative robot user case, a semantic
representation of a series of parameters related to the above.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS8.CERTH.04. Virtual User Models Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data** _
This dataset was derived by analyzing the datasets of Human Tracking (2.4),
Walking Skills (2.6) and Human Cognitive Skills (2.7) described above, toward
modelling behavioural aspects as well as cognitive and motor skills of the
participants of the respective data collection experiments into VUM
representations that hold statistical population values.
_**Nature and scale of data** _
The dataset is in the form of XML-based representations of the parameters
involved in the RAMCIP VUMs.
_Data Format:_ XML file format
_**To whom could the dataset be useful** _
This dataset has been used in the development of the RAMCIP user modelling
methodology of WP3. The dataset is also useful for researchers investigating
behavioral traits, as well as cognitive and motor skills correlates to MCI.
_**Related scientific publication(s)** _
The developed VUM dataset is disseminated in international conferences and
journals of the robotics and health (e.g. MCI-related) domains.
_**Indicative existing similar datasets** _
Virtual Human Models encoding anthropometric and kinematic parameters of the
human body, focusing on the elderly and disabled, were derived from the VERITAS
FP7 project. Knowledge derived from the VERITAS VUMs could be integrated into
the RAMCIP VUMs which, however, also focus on the cognitive and behavioural
traits of elderly people with MCI.
</td> </tr>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The dataset is accompanied by detailed documentation of its contents;
detailed documentation of the variables involved in the RAMCIP VUMs is also
provided.
Guidelines for Virtual Human Modelling derived from the VUMS cluster
(http://vums.iti.gr/index8091.html?page_id=64) are followed, along with
related XSD and XML specifications. The relevance of also following
usiXML-based paradigms to develop respective (e.g. Human-Robot
Communication-related) parts of the RAMCIP VUMs will be investigated.
</td> </tr> </table>
<table>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type** _
Anonymized versions of the RAMCIP VUMs constitute open models encoding
behavioural traits and human-robot communication preferences, as well as
cognitive and motor skills of MCI patients.
_**Access Procedures** _
A web page has been created on the RAMCIP data management portal that
provides a description of the dataset and links to a download section.
_**Embargo periods (if any)** _
None
_**Technical mechanisms for dissemination** _
A link to the anonymized dataset from the Data management portal. The link is
provided in all relevant RAMCIP publications. A technical publication
describing the dataset and acquisition procedure could be published.
_**Necessary S/W and other tools for enabling re-use** _
The dataset is designed to allow easy reuse with commonly available XML
editing tools and software libraries (a minimal parsing sketch is given at the
end of this subsection).
_**Repository where data will be stored** _
The public part of the dataset is accommodated at the data management portal
of RAMCIP.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
_**Data preservation time** _
The dataset will be preserved online for as long as there are regular
downloads. After that, it will be made accessible on request.
_**Approximated end volume of data** _
The dataset’s end volume is expected to be at the level of 2 Megabytes.
_**Indicative associated costs for data archiving and preservation** _
There are no costs associated with its preservation.
_**Indicative plan for covering the above costs** _
The cost will be covered at the local hosting institute as a part of the
standard network system maintenance.
</td> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
_**Partner Owner / Data Collector** _
CERTH
_**Partner in charge of the data analysis** _
CERTH, SSSA, FORTH, LUM, ACE
_**Partner in charge of the data storage** _
CERTH
_**WPs and Tasks** _
The data have been collected within activities of WP3 in Task 3.4, and served
the project’s research efforts within Task 3.4, Task 3.5 and Task 3.6.
</td> </tr> </table>
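As an indication of how the XML-based VUM files can be re-used, the following minimal sketch extracts parameters with Python's standard library; the file name and the `parameter` element with `name`/`value` attributes are hypothetical, since the actual VUM schema is not reproduced in this plan.

```python
# Minimal sketch; the file name and the element/attribute names are
# placeholders, not the actual RAMCIP VUM schema.
import xml.etree.ElementTree as ET

tree = ET.parse('vum_user_case_01.xml')
root = tree.getroot()

# Collect every <parameter name="..." value="..."/> element into a dict.
parameters = {p.get('name'): p.get('value') for p in root.iter('parameter')}
print(parameters)
```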
## 3.8 Dataset “DS9.TUM01.LowerBodyKinematicsDataset”
#### General Description
This dataset contains kinematics of lower-body interaction between pairs of
human participants, in which one participant assists a seated participant in
putting on a shoe, in line with a scenario description of the RAMCIP project.
The dataset has been used to train the predictive controller of the RAMCIP
system in R&D activities under T6.3. Furthermore, the dataset has been used to
provide ergonomic guidance for designing the control of the RAMCIP system.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS9.TUM.01. Lower-body kinematic Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data** _
The data have been collected from healthy volunteer participants using a
Qualisys passive-marker motion tracking system. Small, light-weight markers
have been placed on the foot, tibia, and femur of both legs to capture the
position and orientation of the lower-limb segments. Furthermore, the pose of
the torso has been captured with markers placed on the sternum. A separate set
of markers tracks the positions of the shoe and the hand of the assisting
person. The dataset has been obtained in accordance with the local ethics
requirements at TUM, Germany, for human subject testing.
_**Nature and scale of data** _
The raw data are images of the reflective markers taken by each motion
tracking camera at a pre-set frequency. The cameras use reflections of
infrared light on the special markers to visualize their positions, thus the
raw data do not record any personal information. The centroid of each marker
image is then triangulated from multiple cameras to estimate its position in
Cartesian coordinates. The position data are completely anonymous and will
be used as dissemination material.
The dataset consists of repetitions of the same motions from multiple pairs of
participants. Each pair performed approximately 10 repetitions of a given
movement scenario. Data have been collected from 10 – 20 pairs of
participants.
_Data Format:_ PNG, JPG for images, XML or TXT for annotations
_**To whom could the dataset be useful** _
Roboticists, biomechanists, ergonomic designers.
_**Related scientific publication(s)** _
_Not Available_
_**Indicative existing similar datasets** _
CMU Graphics Lab Motion Capture Database; Multisensor-Fusion for 3D Full-Body
Human Motion Capture.
</td> </tr> </table>
<table>
<tr>
<th>
**3.**
</th>
<th>
**Standards and metadata**
</th> </tr>
<tr>
<td>
The marker position data obtained from the recording of human participants
have been processed in MATLAB and then converted into the c3d file format
(www.c3d.org). The c3d format is a public-domain binary format supported by
most major motion capture systems and animation software (a minimal reading
sketch is given at the end of this subsection). The anonymized files are
available with general information about each file, including the
participant's gender, age group, and a short description of the movements
performed.
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type** _
In accordance with the ethical requirements regarding data obtained from human
participants, the anonymized dataset is available to a restricted group.
Personal information regarding the participants is kept strictly private.
The description of the data is publicly disseminated in the form of
publications. Published data, including articles, book chapters, and conference
proceedings, are available in print or electronically from publishers, subject
to subscription or printing charges. The source code is retained at the local
site, open to access by a restricted group (e.g. the consortium), subject to
the privacy, confidentiality, and intellectual property rights policy of the
developer(s) with respect to local national regulations.
_**Access Procedures** _
A request form for the raw data can be submitted to the principal
investigator of the developing site; upon approval, the data will be
electronically transferred.
Published materials may be accessed from the publishers, subject to
subscription or printing charges.
_**Embargo periods** _
None
_**Technical mechanisms for dissemination** _
A standard publication procedure is taken for dissemination.
_**Necessary S/W and other tools for enabling re-use** _
The dataset is stored as MATLAB, c3d, and QTM (Qualisys Tracking Manager)
files.
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
The dataset is accommodated at the data management portal of RAMCIP. The
network repository has also been used to host all relevant materials at the
local institutes where all data are periodically backed up.
The published materials are hosted by the publishers.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
_**Data preservation period** _
Datasets underpinning publications will be stored at the local site for up to
3 years following publication.
</td> </tr>
<tr>
<td>
_**Approximated end volume of data** _
The dataset’s end volume is approximately 500 megabytes
_**Indicative associated costs for data archiving and preservation** _
A dedicated hard drive has been used to preserve the dataset; its cost is
estimated to be around 100 euros.
_**Indicative plan for covering the above costs** _
The cost will be covered by the local hosting institute
</td> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
_**Partner Owner / Data Collector** _
TUM
_**Partner in charge of the data analysis** _
TUM, CERTH, LUM
_**Partner in charge of the data storage** _
TUM
_**WPs and Tasks** _
The data have been collected within activities of WP6 in Task 6.3
</td> </tr> </table>
## 3.9 Dataset “DS10.ACCREA-2.ManipKinematicsDataset”
#### General Description
Set of Simulink/CAD/Gazebo models for simulation, optimization and development
purposes. The prepared dataset contains models of selected manipulator
kinematics, which allowed RAMCIP partners to choose the best solution for user
requirements and dexterous manipulation tasks.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
</th> </tr>
<tr>
<td>
**DS10.ACCREA.02 Manipulator kinematics chains Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data** _
Anthropomorphic kinematics of manipulators previously developed by ACCREA and
of commercial manipulators have been considered, as well as new concepts. The
goal was to select solutions that meet user and safety requirements and are
capable of performing the RAMCIP manipulation tasks.
_**Nature and scale of data** _
The dataset is available as a package of Simulink/SolidWorks/Gazebo/URDF
files.
_Data Format:_ SolidWorks / URDF file format
_**To whom could it be useful** _
The collected data have been used for simulations and development of the
RAMCIP manipulator. Models can be imported into the Gazebo environment
simulation, which has been used for testing components and system integration
by most of technical RAMCIP partners.
_**Related scientific publication(s)** _
None
_**Indicative existing similar data sets** _
Several commercial manipulators' kinematics have been considered in the design
of the most suitable one for the RAMCIP project.
</td> </tr>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The metadata that have been produced are summarized as follows:
* the parsing routines used to read and absorb the data for developing purposes
* selection of different object grasping/manipulating scenarios based on RAMCIP requirements along with results of simulations.
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type** _
A part of the dataset is publicly available. It has been uploaded to the main
site of the RAMCIP project.
_**Access Procedures** _
A web page has been created on the project’s data management portal that
provides a description of the dataset and a link to a download section.
_**Embargo periods** _
None
_**Technical mechanisms for dissemination** _
A link to the dataset from the data management portal. The link is provided in
all relevant RAMCIP publications.
_**Necessary S/W and other tools for enabling re-use** _
The dataset is designed to allow easy reuse with commonly available tools and
software libraries (a minimal parsing sketch is given at the end of this
subsection).
_**Repository where data will be stored** _
The dataset is accommodated at the data management portal of RAMCIP, being
accessible through the RAMCIP website.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
_**Data preservation period** _
The dataset will be preserved online for as long as there are regular
downloads. After that, it will be made accessible on request.
_**Approximated end volume of data** _
The data is approximately 500 Megabytes.
_**Indicative associated costs for data archiving and preservation** _
A dedicated hard disk drive has been allocated for the dataset. There are no
costs associated with its preservation.
_**Indicative plan for covering the above costs** _
The cost will be covered at the local hosting institute in the context of
RAMCIP.
</td> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
_**Partner Owner / Data Collector** _
ACCREA
_**Partner in charge of the data analysis** _
ACCREA, SHADOW
_**Partner in charge of the data storage** _
ACCREA, SHADOW
_**WPs and Tasks** _
The data have been collected within activities of WP7.
</td> </tr> </table>
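As an indication of how the URDF portion of the dataset can be re-used, the following minimal sketch lists the joints of a manipulator model using the ROS `urdf_parser_py` package; the file name is a placeholder for one of the kinematics files in the dataset.

```python
# Minimal sketch, assuming a ROS environment providing `urdf_parser_py`;
# the file name is a placeholder.
from urdf_parser_py.urdf import URDF

robot = URDF.from_xml_file('ramcip_manipulator.urdf')
# Print the kinematic chain: each joint with its type and linked bodies.
for joint in robot.joints:
    print(joint.name, joint.type, joint.parent, '->', joint.child)
```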
## 3.10 Dataset “DS11.LUM_ACE01.UserRequirementsDataset”
#### General Description
The user requirements dataset comprises the pictures and videos taken during
the workshops with stakeholders in Lublin and Barcelona, as well as anonymized
questionnaires filled in by the different stakeholder groups. Since the raw
data were collected in the local languages, the videos and the summary of the
collected data have to be translated into English.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
_Identifier for the data set to be produced_
</th> </tr>
<tr>
<td>
**DS11.LUM_ACE.01 User Requirements Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data** _
Materials collected for and during workshops conducted at LUM and ACE with
medical personnel and caregivers. The surveys were conducted by LUM and ACE
teams.
_**Nature and scale of data** _
* Videos which were taken during the workshops with medical personnel and caregivers – in local languages.
* Pictures taken during the workshops
* Transcripts of videos and summary in English
* Completed questionnaires – paper versions and scans – in local languages
* Informed consents of the workshop participants in local languages – paper versions and scans.
* Excel sheets and summary of the survey results
_Data Format:_ MPG, AVI format for videos, JPG for images, DOC/PDF for
transcripts, questionnaires and papers, XLS for survey results.
_**To whom could it be useful** _
Raw data – videos and questionnaires in local languages – can be assessed and
summarized by the local LUM and ACE teams in the scope of user requirements
analysis and the definition of the RAMCIP use cases. These data should also be
available to the local Ethics Committees on request.
Some videos and pictures may be used for publications and presentations.
The transcripts, tables and summaries can be used by the entire RAMCIP
consortium for a preparation of the functional and technical specifications.
_**Related scientific publication(s)** _
The summaries of the dataset can be published as user requirements
analysis-related publications. Some pictures can also be part of scientific
publications.
_**Indicative existing similar data sets** _
No similar data sets are publicly available.
</td> </tr> </table>
<table>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The dataset is accompanied by metadata describing the demographics of the
samples from which the questionnaires were collected; the data collection
process is described analytically.
The results of the workshops have been described and categorized. The results
of the questionnaires are exhibited in Excel data sheets together with the
respective statistical analysis.
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type (widely open, restricted to specific groups, private)** _
Based on the ethical rules and legal requirements, data which contain personal
information, such as images of people and their opinions, cannot be made
publicly available. Summaries of the data can be published as public
deliverables and scientific publications.
_**Access Procedures** _
The datasets with personal data of the workshop participants (videos,
pictures, informed consents) have been stored in a dedicated locked cabinet
(paper) or on servers (videos, pictures and scans) at LUM and ACE, and only
the members of the RAMCIP team have access to them.
_**Embargo periods (if any)** _
None
_**Technical mechanisms for dissemination** _
The summaries of the data have been published as user requirements in the
appropriate deliverables and scientific publications. Some videos and pictures
are part of scientific publications and presentations; for the dissemination
of videos and pictures, the written confirmation of LUM or ACE (depending on
where the data were recorded) was acquired to ensure that the publication does
not violate the personal rights of the workshop participants.
_**Necessary S/W and other tools for enabling re-use** _
N/A
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
LUM’s and ACE’s internal servers for electronic data and locked cabinets at
LUM and ACE for paper documents.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**For how long should the data be preserved?** _
In Poland the videos and pictures will be kept for 5 years after the end of
the project and the paper documentations have to be kept for 20 years after
the end of the project as required by the local regulations.
In Spain there are no time limits for how long data should be kept. Therefore
all source data will be kept as long as possible.
_**Approximated end volume of data** _
Videos and pictures – 8 GB.
Informed consents – 18 pages
Questionnaires – 789 pages
_**Indicative associated costs for data archiving and preservation** _
No additional costs if kept on LUM and ACE servers and spaces.
</td> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
_**Partner Owner / Data Collector** _
LUM, ACE
_**Partner in charge of the data analysis** _
LUM, ACE
_**Partner in charge of the data storage** _
LUM, ACE
_**WPs and Tasks** _
The collection of this dataset and its analysis is part of WP2 activities,
concerning the research efforts of Task T2.1 in the scope of user requirements
analysis.
</td> </tr> </table>
## 3.11 Dataset “DS12.CERTH-01.3DForceSlippageDataset”
#### General Description
The 3D Force Slippage Dataset comprises 3D force measurements from Optoforce
3-axis sensors, recorded during experiments in which slippage occurred on
several different surfaces.
<table>
<tr>
<th>
**1.**
</th>
<th>
**Data set reference and name**
_Identifier for the data set to be produced_
</th> </tr>
<tr>
<td>
**DS12.CERTH-01 3D Force Slippage PB Dataset**
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**Data set description**
</td> </tr>
<tr>
<td>
_**Origin of Data** _
Measurements were collected with Optoforce 3D sensors (at 1000Hz without
filtering) during experimentation at CERTH with RAMCIP V2 Hand. The hand
established a 2-finger (pinch) grasp of a cylindric object, which was fixed on
a supporting surface. Data was collected while the arm’s end-effector moved
translationally upwards and downwards, resulting to the slippage of the
fingertips on the object’s surface. On total 6 different surfaces were
sampled, by 2 different fingers, for 3 different arm’s moving velocities, for
several different initial normal grasping forces per finger (in the range
between 1N and 2.5N), resulting in a dataset containing 72 samples in total.
_**Nature and scale of data** _
* Raw data from Optoforce sensors (dataset name: f)
* Labels provided for each sample as slip or stable (dataset name: l)
* Short description of data acquisition and origination of each different sample (dataset name: fd)
_Data Format:_ MAT format for the dataset and TXT for documentation
_**To whom could it be useful** _
Data could be useful to researchers trying to address slippage detection,
either as evaluation or as training dataset.
_**Related scientific publication(s)** _
I. Agriomallos, S. Doltsinis, I. Mitsioni, & Z. Doulgeri, (2018). Slippage
Detection Generalizing to Grasping of Unknown Objects using Machine Learning
with Novel
Features. IEEE Robotics and Automation Letters, 3(2), 942–
948, _https://doi.org/10.1109/LRA.2018.2793346_
_**Indicative existing similar data sets** _
No similar data sets that we know of are publicly available.
</td> </tr>
<tr>
<td>
**3.**
</td>
<td>
**Standards and metadata**
</td> </tr>
<tr>
<td>
The dataset will be accompanied by a short description of the collection process.
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**Data sharing**
</td> </tr>
<tr>
<td>
_**Access type (widely open, restricted to specific groups, private)** _
Widely open, as soon as a complete version of the data is collected.
_**Access Procedures** _
A web page will be created on the RAMCIP data management portal (hosted at the
RAMCIP web site) that should provide a description of the dataset and links to
a download section.
_**Embargo periods (if any)** _
None
_**Technical mechanisms for dissemination** _
A link to the dataset from the RAMCIP web site (RAMCIP data management
portal). The link will be provided in all relevant RAMCIP publications. A
technical publication describing the dataset and acquisition procedure will be
published.
_**Necessary S/W and other tools for enabling re-use** _
The dataset will be designed to allow easy reuse with commonly available tools
and software libraries. Since it is a .mat file, it can be loaded by MATLAB,
GNU Octave (free), or any framework that can read such files (e.g. Python’s
scipy.io library); a minimal loading sketch is given at the end of this
subsection.
_**Repository where data will be stored (institutional, etc., if already existing and identified)** _
The dataset will be accommodated at the data management portal of the project
website.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**For how long should the data be preserved?** _
The dataset will be preserved online for as long as there are regular
downloads. After that, it will be made accessible on request.
_**Approximated end volume of data** _
All Files ~ 50MB
_**Indicative associated costs for data archiving and preservation** _
Small, one-time costs covered by RAMCIP.
</td> </tr>
<tr>
<td>
**6.**
</td>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
_**Partner Owner / Data Collector** _
CERTH
_**Partner in charge of the data analysis** _
CERTH
_**Partner in charge of the data storage** _
CERTH
_**WPs and Tasks** _
The collection of this dataset and its analysis is part of WP5 activities and
Task T5.3 concerning grasp stability maintenance.
</td> </tr> </table>
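As an indication of how the .mat file can be loaded without MATLAB, the following minimal sketch uses Python's scipy.io; the file name is a placeholder, while the variable names f (raw forces), l (slip/stable labels) and fd (sample descriptions) follow the dataset description given above.

```python
# Minimal sketch; the file name is a placeholder, while the variable
# names f, l and fd are those given in the dataset description.
from scipy.io import loadmat

data = loadmat('slippage_dataset.mat')
forces, labels, descriptions = data['f'], data['l'], data['fd']
print(forces.shape, labels.shape)
```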
# 4\. The RAMCIP Data Management Portal
The present section provides an overview of the RAMCIP Data Management Portal
(DMPo). This is a web based portal, accessible at the URL:
_http://ramcipproject.eu/ramcip-data-mng/_ (Figure 1). The data management
portal can be accessed through the official website of the RAMCIP project, by
following the menu items “Results -> Data Management Portal”. The portal has
been developed with the purpose to enable project partners to manage and
distribute their public datasets generated in the course of the project,
through a common infrastructure.
**Figure 1: Welcome page of the RAMCIP Data Management Portal (
_http://ramcipproject.eu/ramcip-data-mng/_ ) **
Specifically, as defined in deliverable D9.3 of RAMCIP (D9.3 “RAMCIP Data
Management Plan – v1”), the Data Management Portal, whose first functional
version became operational in M20, constitutes a dedicated space of the RAMCIP
project website, which aggregates descriptions of all project public datasets
and provides links to the respective dataset download sections for the
interested public, as well as centralized data management functionalities for
project partners.
Based on the information detailed in the previous sections of the present
deliverable, the Data Management Portal establishes the above for a series of
datasets collected by different project partners throughout the project’s
duration. For the datasets that have eventually been collected and that
comprise (in part or as a whole) a portion that can be made publicly available,
the Data Management Portal offers the owner parties data management
functionalities, enabling them to have an aggregated space facilitating the
distribution of the public datasets.
## 4.1 Specification of the core functionalities of the Data Management
Portal
The RAMCIP Data Management Portal supports a series of functionalities in
order to facilitate the management and distribution of the public datasets
formulated during the RAMCIP project. More specifically:
* The Data Management Portal has been implemented through a **web based platform** which will enable its users to easily access and effectively manage the public datasets that have been created in the course of the project.
* Each dataset available through the DMP is accompanied by descriptive information, as well as a link to the dataset’s download section.
* Management functionalities (addition, editing) of datasets are provided to authorised members of the web platform, who have access to a corresponding private section of the DMP.
* Public access to the datasets registered in the DMP is provided through a “public space” of the portal, where information on the datasets is provided to the public, as well as links to the datasets download sections.
Regarding the **authentication** procedures of the DMP as well as the
respective permission and access rights, the following three categories of
users are specified:
#### Administrator (Admin)
The Admin has access to all of the datasets and the functionalities offered by
the DMP. The Admin is also able to grant permissions and access rights to the
registered members, as well as to determine and adjust the editing/access
rights of the members and the users (open access area). Finally, the Admin is
able to access and extract the analytics concerning the visitors of the
portal.
#### Members
After someone has been successfully registered and given access permission by
the Admin, s/he is considered a “registered Member”. All registered members
have access to all datasets and are able to manage the datasets that they own
(i.e. those that they have added to the portal). The “Members” role is
designated for the members of the RAMCIP Consortium, who are capable of adding
public datasets to the portal.
#### Users
Apart from the admin and the registered members, an open access area became
available for users, who do not need to register and who have access to the
public datasets. Users are capable of viewing the descriptive information of
the datasets provided through the DMP and can also select to “Download” a
dataset that is of interest to them, being in this way redirected to the
download section of the dataset.
## 4.2 Data Management Portal Architecture
Following the above specifications, the RAMCIP DMP comprises two sections, a
private space, accessible to Members, as well as a public space, accessible to
all user types.
**Figure 2: Conceptual Architecture of DMPo access roles and functionalities**
As shown in Figure 2 above, the public space allows users to see descriptive
information regarding the DMPo datasets, as well as to proceed with the
download process of a dataset that they are interested in.
On the other hand, the private space, in addition to the above
functionalities, allows users of the “Member” type to also register a new
dataset in the DMPo repository, thus becoming the corresponding dataset’s
owner, and to apply modifications to all the datasets that they own.
The functional architecture of the RAMCIP Data Management Portal is a
three-tier architecture composed of the Data, Application and Presentation
tiers, as depicted in Figure 3 below and further explained in the following
sub-sections.
**Figure 3: Functional Architecture of the Data Management Portal**
### 4.2.1. Data Tier
The Data Tier is responsible for storing both the data that is necessary for
the overall operation of the DMPo Backend (DMPo Application Tier) and the
RAMCIP public datasets which are accessible through the portal. More
specifically:
The DMPo Backend DB contains all the tables related to the registered users’
(members’) details, as well as the information (metadata) that has been
registered in the portal for the added datasets. Moreover, the Backend DB also
stores information on the dataset owners (i.e. those who have modification
rights on the registered datasets).
The Datasets DBs correspond to the databases which perform the actual storing
of the public datasets that are made accessible through the Data Management
Portal.
### 4.2.2. Application Tier (Backend)
The Application Tier corresponds in essence to the RAMCIP DMPo application
server backend, which holds and applies all the business logic of the portal.
In this respect, it is responsible to handle user requests (derived through
the Presentation tier described below) for the provision of information on
specific datasets, as well as for their download. In addition, it is
responsible to provide to the users of the “Members” type the access rights of
adding and modifying the registered datasets. The application tier contains in
this scope a series of interfaces which enable the efficient communication of
the application logic with the datasets of the Data Tier.
### 4.2.3. Presentation Tier (Frontend)
The Presentation Tier comprises the web-based interface of the Data Management
Portal. This is accessible from any PC, and provides each of the user roles
(i.e. admin, members, users) with all the user interface mechanisms necessary
to fulfil the functionalities described in the previous section. In the
following section, more details are provided on the Presentation Tier, along
with screenshots of the DMPo user interfaces. The core design principles
followed for the development of the DMPo frontend are as follows:
* The “Look & Feel” of the DMPo Web Interface should follow the one of the official RAMCIP website
* The DMPo Web Interface should be easy to use, enabling the effective establishment of the portal’s target functionalities
* Emphasis should be put on developing a user friendly interface, which will allow easy access of the interested public, to the public datasets
## 4.3 Overview of the DMPo Design and Functionalities
The home page of the developed Data Management Portal is shown in Figure 1
above. From that page, the user can be provided either with public access to
the datasets, or with private access, after providing her/his credentials so
as to be logged in as a “Member”. As soon as the user progresses, either by
selecting simple “Access” or by logging in as a member, s/he is navigated to
the DMPo’s introductory page (Figure 4).
At the “INTRODUCTION” page of the DMPo shown above, the user is presented with
some basic information on the RAMCIP project and the data that is provided
through the portal. In addition, links to H2020 guidelines on Data Management
are provided as well.
By clicking on the “DATA” tab, the user can then navigate to the core part of
the DMPo, which provides access and management capabilities (where
appropriate) on the RAMCIP DMPo datasets.
The DATA section comprises two sub-sections, one dedicated to public
“DOCUMENTS” of the RAMCIP project and one dedicated to “DATASETS”, which are
further described below.
### 4.3.1. Public Documents Section
The “DOCUMENTS” section of the “DATA” page of the DMPo (Figure 5) provides
access to public documents of the project. Specifically, it provides a single
access point to RAMCIP public deliverables and publications. A filter allows
the user to obtain a list including only project publications or public
deliverables (Figure 6). By selecting the “Get It” option for each document, a
direct download of that document starts.
While the above functionalities are available to all users of the RAMCIP DMPo,
either registered members or simple users, the corresponding page provided to
the portal administrator allows her/him to also register and upload a new
public document, by clicking on the “+” button (Figure 7). The administrator
can also delete a document entry, and obtain information on each document’s
downloads.
**Figure 7: Overview of available public documents; administrator view**
### 4.3.2. Public Datasets Section
The main page of the public datasets section, as seen by the general public,
is shown in the Figure 8 below. This page provides a list with all public
datasets that are accessible through the Data Management Portal. By clicking
on a dataset, the user can view more detailed information about it (Figure 9).
By clicking on the “Get it” button shown in Figure 8, the user can proceed to
the process of downloading the desired dataset.
The downloading of a dataset can then be done either directly from the DMPo,
or through the dataset owner’s corresponding web page, depending on which of
these two approaches was followed by the owner when adding the dataset to the
DMPo. More details on the addition and downloading of a dataset, by DMPo
members and all users respectively, are provided in the two corresponding
subsections that follow.
**Figure 9: Viewing the details of an available public dataset**
##### **4.3.2.1** Adding a dataset to the Data Management Portal
A DMPo Member and the administrator have the rights to add a new dataset to
the portal. This is achieved through the addition option (“+”) that is
provided in their view of the DMPo available datasets overview page (Figure
10).
The addition of a new dataset to the DMPo is performed through the UI shown in
Figure 11 below. Through that interface, the user specifies a title and
general description for the dataset, as well as further information which will
appear to the DMPo users. Notably, at this point the user must define the way
that the dataset will be made available to the public, by defining the
“storage location” details (Figure 11).
In this respect, the user has two options. The first option (“URL”, shown in
Figure 11) concerns the definition of an external link, from which the dataset
can be downloaded. The second option (“Upload file to server”, shown in Figure
12) concerns the uploading of a single file containing the dataset to the
portal. This file will subsequently be available for download, directly from
the DMPo server. This option can be applied in case the dataset owner wishes
to use the DMPo server for storing the downloadable version of the dataset;
however, it is applicable only to datasets of relatively limited size.
In order to proceed with concluding the addition of the dataset on the DMPo
server, the user should indicate that s/he agrees with the license agreement
of the RAMCIP DMPo, which is illustrated in Annex II of the deliverable and is
shown to the user by clicking on the corresponding “license agreement” link
(Figure 11).
Through a UI similar to that of Figure 11, shown in Figure 13 below, which
appears to a Member upon the selection of an owned dataset from the list of
Figure 10, the dataset owner can also edit the dataset after its initial
addition. The dataset owner also has the option to delete the dataset through
the corresponding option shown in the UI of Figure 10. In addition, the owner
can be provided with information on the number of downloads that the dataset
has received through the DMPo, either from public users or from registered
members; this information is provided by clicking the corresponding “i” button
in the last row of the datasets overview list (Figure 10).
**Figure 11: Adding a new dataset to the Data Management Portal**
**Figure 12: Uploading a dataset file to make it available for download through the DMPo**
**Figure 13: Editing the information of an existing dataset**
##### **4.3.2.2** Dataset download
In the current implementation of the DMPo, two different download procedures
are supported. As explained above, the dataset owner may have uploaded a
single (.zip) file containing the full dataset to the portal, along with a
disclaimer notice specifying the terms and conditions for the dataset
download. Alternatively, the owner may have specified that the dataset can be
downloaded from a specific URL of the owner. In accordance, the two
corresponding ways that a dataset can be downloaded once the user selects the
“Get It” option for it (Figure 8), are further explained below.
###### _Case 1: Direct download_
In this case, the user directly downloads the dataset, which has been stored
as a single .zip file on the DMPo server (a minimal sketch of this case is
given at the end of this subsection).
###### _Case 2: Download through the owner’s website_
In this case, the user is redirected to the web page of the URL that has been
specified by the dataset owner during the dataset’s addition (URL option), as
the one through which the dataset can be downloaded. The owner may provide
through that webpage additional information on the dataset, request specific
user details to be provided prior to the download, ask the user to consent to
specific terms and conditions related to the use of the dataset etc.
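For the direct-download case, retrieval reduces to a plain HTTP GET of the stored .zip archive. The following minimal sketch uses Python's requests library; the URL is a placeholder, as actual dataset links are listed on the Data Management Portal.

```python
# Minimal sketch; the URL is a placeholder, not an actual dataset link.
import requests

url = 'http://ramcipproject.eu/ramcip-data-mng/datasets/example.zip'
response = requests.get(url, stream=True, timeout=60)
response.raise_for_status()
with open('example.zip', 'wb') as fh:
    for chunk in response.iter_content(chunk_size=1 << 20):
        fh.write(chunk)
```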
# 5\. Discussion
One of the main objectives of the RAMCIP project is to make the datasets, or
the portions of them that can become public, easily discoverable and
accessible. In several cases, a published scientific paper introduced a new
dataset so that the community can learn about it, and later evaluate and refer
to it. In such cases, the dataset has also been associated with a DOI (upon
its public availability). Publishing a scientific paper along with a new
dataset not only helps in making the dataset known to a wider community, but
the peer review process also provides assurance of its reliability and the
quality of its content.
All the metadata and the various file formats used adhere to commonly used
practices as much as possible, including commonly used software, while the
descriptions ensure clarity and ease of use by third parties.
The datasets have been announced on the project’s website with extensive
descriptions and download links; the RAMCIP “data management portal” of the
project’s website provides this centralized repository. However, when a
dataset could not be made publicly available, it was accessible only to the
members of the consortium via internal servers.
As can be seen from the analysis above, the total space required is on the
order of several TB. In order to balance the period of dataset availability
against the preservation costs, the public datasets will be kept available on
dedicated servers (accessible through the Data Management Portal), for as long
as there is sufficient demand for them, under the specific licensing schemas
defined as the datasets were established. At a later time, they will be
distributed only on request.
Lastly, any personal data of healthy controls and patients involved in the
data acquisition will be protected from public disclosure, while anonymity
will be maintained in all cases.
# 6\. Conclusions
In the first version of the RAMCIP Data Management Plan, reported in
deliverable D9.3 on M6 of the project, a detailed preliminary analysis was
performed of the datasets that the partners of the RAMCIP project planned to
collect and use toward developing the various skills of the RAMCIP robot.
Those initial plans were revisited in the second version of the RAMCIP DMP,
resulting in an updated status of the RAMCIP Data Management Portal.
With the current version, all the activities regarding the collection of the
data foreseen in the first version have been concluded. The initially
identified datasets have been created and uploaded to the RAMCIP Data
Management Portal, and its current status is reflected in Section 3 of the
present deliverable, with a summary in Section 4.
The created datasets include captured models of objects and domestic
environments, human tracking and behavioural modelling data, as well as
questionnaires related to the analysis of the RAMCIP user requirements. Each
dataset was analysed separately, with emphasis on the nature of the data, its
accessibility and possible access type, as well as any ethical issues that may
arise from handling sensitive personal information.
The first version of the RAMCIP DMP (D9.3) served as a preliminary guide to
build the infrastructure for efficiently managing, storing and distributing
the data collected, especially concerning the portions of the RAMCIP
datasets that would be made publicly available. On that basis, the Data
Management Portal of the RAMCIP project was developed and became fully
operational in the second project year.
In the third year, the Data Management Plan has been further elaborated and
all the identified datasets have been uploaded and made publicly available.
Consequently, with this deliverable the DMP is considered final; however, all
the maintenance activities required to keep the Data Management Portal alive
and accessible to the scientific community will take place for as long as
there is a scientific need for the uploaded data.
# Introduction
During the submission phase, the partners opted to participate in the pilot
action on open access and research data, and included deliverable D6.2 into
the workplan with the aim of ensuring a strict open source policy. This
deliverable documents the first development stage of the UnCoVerCPS data
management plan. It has been written following the Guidelines on Open Access
to Scientific Publications and Research Data in Horizon 2020 and the
Guidelines on Data Management in Horizon 2020. The required information was
collected from all the partners following Annex 1, provided by the
European Commission in the Guidelines on Data Management in Horizon 2020. The
template covers the following points:
* Identification;
* Description;
* Standards and metadata;
* Sharing policy;
* Archiving and preservation.
The final aim of the consortium is to implement structures that ensure open-
access of scientific results, software tools, and benchmark examples.
# Elements of the UnCoVerCPS data management policy
During the kick-off meeting (Munich, April 27th-28th, 2015), both the Open
Data Research Pilot and the Data Management Plan were illustrated to all
consortium members. A session to discuss the specification of the project’s
policy on data management followed the presentation. Therefore, the tables
presented in the following pages report the practices currently envisioned by
the consortium for the data, models and tools that will be produced, improved
and used during the project runtime. Please note that the scale of each
element may not directly correspond to its end volume, as the latter depends
on the format of data collected.
## Technische Universit¨at Mu¨nchen
### Element No. 1
Reference _TUM MP_ 1
Name Annotated motion primitives
Origin Generated from MATLAB
Nature Data points and sets
Scale Medium
Interested users People performing motion planning
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Can be integrated in most motion planners
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use Not required
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 100 MB
Associated costs None
Costs coverage N/a
**Table 1:** _TUM MP_ 1
### Element No. 2
Reference _TUM MT_ 1
Name Manipulator trajectories
Origin Recorded from experiments with a robotic manipulator for safe
human-robot interaction
Nature Joint angles and velocities over time
Scale Medium
Interested users People researching in human-robot collaboration
Underpins scientific publications No
Existence of similar data Yes
Integration and/ or reuse Data can be compared, but not integrated
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use Not required
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 1 GB
Associated costs None
Costs coverage N/a
**Table 2:** _TUM MT_ 1
### Element No. 3
Reference _TUM CORA_ 1
Name CORA
Origin N/a (software tool)
Nature Software
Scale N/a (software tool)
Interested users People performing formal verification of CPSs
Underpins scientific publications Yes
Existence of similar data N/a (software tool)
Integration and/ or reuse Integrated in MATLAB
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination Website
Software/tools to enable re-use CORA is already a tool
Dissemination Level Open access
Repository Bitbucket
Storing time 12/01/2022
Approximated end volume 10 MB
Associated costs None
Costs coverage N/a
**Table 3:** _TUM CORA_ 1
## Universit´e Joseph Fourier Grenoble 1
### Element No. 1
Reference _UJF SX_ 1
Name SpaceEx
Origin N/a (software tool)
Nature Software
Scale N/a (software tool)
Interested users Academia, researchers
Underpins scientific publications Yes
Existence of similar data N/a (software tool)
Integration and/ or reuse N/a
Standards and Metadata Not existing
Access procedures Available at spaceex.imag.fr
Embargo period None
Dissemination method Website
Software/tools to enable re-use None
Dissemination Level Open access
Repository Institutional (forge.imag.fr)
Storing time 31/12/2020
Approximated end volume 50 MB
Associated costs None
Costs coverage N/a
**Table 4:** _UJF SX_ 1
## Universit¨at Kassel
### Element No. 1
Reference _UKS Mod_ 1
Name CPS Model
Origin Formal/Definition
Nature Model definition
Scale Scalable
Interested users Partners working on control and verification
Underpins scientific publications Yes
Existence of similar data Partially
Integration and/ or reuse Implementable in MATLAB
Standards and Metadata Not existing
Access procedures Download from website
Embargo period Available after publication
Dissemination method Website
Software/tools to enable re-use Not required
Dissemination Level Restricted to project partners until publication
Repository UnCoVerCPS website
Storing time 31.12.2020
Approximated end volume < 10 MB
Associated costs None
Costs coverage N/a
**Table 5:** _UKS Mod_ 1
### Element No. 2
Reference _UKS Con_ 1
Name Control Strategies
Origin Generated from MATLAB
Nature Algorithm
Scale Scalable
Interested users Partners using control algorithms (for verification)
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Integrated in MATLAB
Standards and Metadata Not existing
Access procedures Request from authors
Embargo period Available after publication
Dissemination method E-mail
Software/tools to enable re-use MATLAB
Dissemination Level Restricted to project partners until publication
Repository N/a
Storing time 31.12.2020
Approximated end volume < 10 MB
Associated costs None
Costs coverage N/a
**Table 6:** _UKS Con_ 1
### Element No. 3
Reference _UKS Scene_ 1
Name Control Scenario
Origin Generated from MATLAB
Nature Data points and sets
Scale Medium
Interested users Partners using control algorithms (for verification)
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Integrated in MATLAB
Standards and Metadata Not existing
Access procedures Download from website
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use MATLAB
Dissemination Level Restricted to project partners until publication
Repository UnCoVerCPS website
Storing time 31.12.2020
Approximated end volume < 10 MB
Associated costs None
Costs coverage N/a
**Table 7:** _UKS Scene_ 1
## Politecnico di Milano
### Element No. 1
Reference _PoliMi MG_ 1
Name Microgrid data
Origin Generated from MATLAB
Nature Data points
Scale Medium
Interested users Researchers working on microgrid energy management
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Can be integrated in larger microgrid units
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method UnCoVerCPS website
Software/tools to enable re-use Not required
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 1 GB
Associated costs None
Costs coverage N/a
**Table 8:** _PoliMi MG_ 1
## GE Global Research Europe
### Element No. 1
Reference _GEGR Model_ 1
Name MATLAB/Simulink model of wind turbine dynamics
Origin Designed in MATLAB/Simulink
Nature MATLAB/Simulink Model
Scale Small
Interested users All project partners working on verification
Underpins scientific publications Yes
Existence of similar data Yes, but existing models are typically more complex
Integration and/ or reuse Can be reused with verification tools accepting
MATLAB/Simulink models
Standards and Metadata N/a
Access procedures Made available to project partners upon request
Embargo period N/a
Dissemination method Limited to consortium partners
Software/tools to enable re-use MATLAB/Simulink
Dissemination Level Limited to consortium partners
Repository GE-internal repository
Storing time December 2019
Approximated end volume 1 MB
Associated costs N/a
Costs coverage N/a
**Table 9:** _GEGR Model_ 1
### Element No. 2
Reference _GEGR Data_ 1
Name Wind turbine load data
Origin Generated in MATLAB/Simulink
Nature Data on wind, turbine power, turbine speed, turbine
loads
Scale Medium
Interested users All project partners working on verification
Underpins scientific publications Yes
Existence of similar data Yes, but typically based on more complex models
Integration and/ or reuse Reuse in verification tools
Standards and Metadata N/a
Access procedures Made available to project partners upon request
Embargo period N/a
Dissemination method Limited to consortium partners
Software/tools to enable re-use MATLAB/Simulink
Dissemination Level Limited to consortium partners
Repository GE-internal repository
Storing time December 2019
Approximated end volume 100 MB
Associated costs N/a
Costs coverage N/a
**Table 10:** _GEGR Data_ 1
## Robert Bosch GmbH
### Element No. 1
Reference _BOSCH Model_ 1
Name Simulink Model of an Electro-Mechanical Brake
Origin Designed in Simulink
Nature Simulink Model
Scale Small
Interested users People working on (simulation-based) verification
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Can be used with verification tools accepting
Simulink models
Standards and Metadata Not existing
Access procedures Download from ARCH website
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use Mathworks Simulink
Dissemination Level Open access
Repository ARCH website (linked from UnCoVerCPS)
Storing time 12/01/2022
Approximated end volume 1 MB
Associated costs None
Costs coverage N/a
**Table 11:** _BOSCH Model_ 1
## Esterel Technologies
### Element No. 1
Reference _ET SCADE_
Name SCADE
Origin N/a (software tool)
Nature Software
Scale N/a (software tool)
Interested users People working on code generation
Underpins scientific publications Yes
Existence of similar data N/a (software tool)
Integration and/ or reuse API access to models
Standards and Metadata Scade
Access procedures Licensing, academic access
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use SCADE
Dissemination Level Commercial access or Academics programs
Repository Proprietary
Storing time > 20 years
Approximated end volume N/a
Associated costs N/a
Costs coverage N/a
**Table 12:** _ET SCADE_
## Deutsches Zentrum fu¨r Luft- und Raumfahrt
### Element No. 1
Reference _DLR MA_ 1
Name Maneuver Automata
Origin Generated from MATLAB
Nature Datapoints, sets and graph structures
Scale Big
Interested users People researching in motion planning
Underpins scientific publications Yes
Existence of similar data No
Integration and/ or reuse Low probability of reuse
Standards and Metadata Not existing
Access procedures Request from author
Embargo period N/a
Dissemination method Reduced version will be placed on UnCoVerCPS website
Software/tools to enable re-use MATLAB
Dissemination Level Open access
Repository UnCoVerCPS website, DLR SVN
Storing time 12/01/2022
Approximated end volume 10 GB
Associated costs None
Costs coverage N/a
**Table 13:** _DLR MA_ 1
### Element No. 2
Reference _DLR TEST_ 1
Name Vehicle Trajectories
Origin Recorded during testdrives with one or two vehicles
Nature Datapoints
Scale Medium
Interested users People researching in driver assistance systems, vehicle
automation, vehicle cooperation, Car2X
Underpins scientific publications Yes
Existence of similar data Yes
Integration and/ or reuse Data can be compared, but not integrated
Standards and Metadata Not existing
Access procedures Download from website or request from author
Embargo period N/a
Dissemination method UnCoVerCPS website
Software/tools to enable re-use MATLAB
Dissemination Level Open access
Repository UnCoVerCPS website, DLR SVN
Storing time 12/01/2022
Approximated end volume 5 GB
Associated costs None
Costs coverage N/a
**Table 14:** _DLR TEST_ 1
### Element No. 3
Reference _DLR TEST_ 2
Name Communication Messages
Origin Recorded during testdrives with one or two vehicles
Nature Sent and received messages of Car2Car-
Communication/Vehicle cooperation
Scale Medium
Interested users People researching in driver assistance systems, vehicle
automation, vehicle cooperation, Car2X
Underpins scientific publications Yes
Existence of similar data Yes
Integration and/ or reuse Data can be compared, but not integrated
Standards and Metadata Not existing
Access procedures Download from website or request from author
Embargo period N/a
Dissemination method UnCoVerCPS website
Software/tools to enable re-use MATLAB
Dissemination Level Open access
Repository UnCoVerCPS website, DLR SVN
Storing time 12/01/2022
Approximated end volume 1 GB
Associated costs None
Costs coverage N/a
**Table 15:** _DLR TEST_ 2
## Fundacion Tecnalia Research & Innovation
### Element No. 1
Reference _TCNL V D_ 1
Name TCNL Vehicle Data
Origin Recorded from experiments with TCNL’s automated vehicle
Nature Vehicle’s trajectory, accelerations (lateral, longitudinal), speed,
yaw, as well as control commands leading to these values. Normally recorded
from the vehicle’s CAN bus.
Scale Medium
Interested users People researching in automated vehicles
Underpins scientific publications No
Existence of similar data Yes
Integration and/ or reuse Data can be compared, but not integrated
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method UnCoVerCPS website
Software/tools to enable re-use Not required
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 100 GB
Associated costs None
Costs coverage N/a
**Table 16:** _TCNL V D_ 1
### Element No. 2
Reference _TCNL V CD_ 1
Name TCNL-DLR Vehicle collaborative Data
Origin Recorded from real experiments with TCNL’s and DLR’s automated vehicles, regarding communication between vehicles.
Nature Manoeuvres’ sets, in the form in which vehicles communicate to each other what trajectory will be executed. Recorded from the communications link (suitable Ethernet ports).
Scale Medium
Interested users People researching in V2V technology
Underpins scientific publications No
Existence of similar data Yes
Integration and/ or reuse Data can be compared, but not integrated
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method UnCoVerCPS website
Software/tools to enable re-use Not required
Dissemination Level Open access
Repository UnCoVerCPS website
Storing time 12/01/2022
Approximated end volume 100 GB
Associated costs None
Costs coverage N/a
**Table 17:** _TCNL V CD_ 1
## R.U. Robots Ltd
### Element No. 1
Reference _RUR SS_ 1
Name Safety System for Human-Robot Collaboration Test Bed
Origin N/a (software tool)
Nature Software
Scale N/a (software tool)
Interested users People performing formal verification of CPSs
Underpins scientific publications Yes
Existence of similar data N/a (software tool)
Integration and/ or reuse High possibility for reuse in other control systems
Standards and Metadata Not existing
Access procedures Download from website or request from authors
Embargo period N/a
Dissemination method Website
Software/tools to enable re-use Compiler for appropriate programming language
Dissemination Level Open access
Repository Not known at this stage
Storing time 12/01/2022
Approximated end volume 10 MB (estimated)
Associated costs None
Costs coverage N/a
**Table 18:** _RUR SS_ 1
# Conclusions and future developments
The tables above display the current practice proposed by the consortium
regarding the management of data sets, models and software tools. As
UnCoVerCPS will not collect huge amounts of data during its lifespan, partners
decided to include other elements apart from data sets in the data management
plan. The consortium will provide open access to the models and tools employed
to obtain and validate the project results. The data management
plan will be updated in case the consortium identifies new data sets and/or
uses/applications. Changes in the consortium policies, as well as external
factors, will also require an update of the plan. As not every detail may be
clear from the start, a new version of the plan will be created in month 24,
before the mid-term review meeting, to provide a more comprehensive
description of the included elements.
0985_SUPERCLOUD_643964.md
# Chapter 1 Introduction
The H2020 programme is implementing a pilot action on open access to research
data.
SUPERCLOUD as a participating project to this pilot action is required to
develop a Data
Management Plan (DMP). This DMP has been identified in the description of
action as
SUPERCLOUD deliverable D6.2. This document is drafted according to the
“guidelines on Data Management in H2020” (version 16 dated December 2013).
This is intended as a living document. It will be periodically revised to
reflect changes in the data that may be made available by the project, and to
provide additional information on the datasets as this information is
developed during the specifications of the experimental phases.
All partners have contributed to the document, particularly through the use of
a project wide questionnaire.
Since each partner will generate and manipulate data, the document is
organized with one section per partner. Each section is structured following
the 4-points structure described thereafter:
1. Dataset description contains a textual description of the dataset. It aims at explaining, in a short paragraph, what the dataset contains and what the goal is.
2. Standards and metadata focuses on explaining the internals of the dataset, namely how a user can find syntactical and semantic information.
3. Data sharing addresses the issues related to data access, namely if the dataset is going to be indexed, and how and to whom it will be made accessible.
4. Archiving and presentation covers the aspects related to data availability, during and beyond the project, as well as the actions taken and planned to support availability.
# Chapter 2 Methodology
In order to compile the data management plan, a questionnaire was first
elaborated covering the main questions that need to be answered in the
template provided by the European Commission.
In a second phase, each project partner responded to the questionnaire,
filling it with as much detail as possible at this stage of the project.
Completed questionnaires were stored for analysis and traceability in the
project’s SVN repository.
In a third phase, the Data Management Plan was created as a synthesis of the
questionnaire results, attempting to take advantage of commonalities between
responses in order to provide a simple view of data management procedures
within the consortium.
Further revisions of the document will be based on updates to partner
questionnaires. Therefore, the DMP will be updated at least by the mid-term
and final review to be able to fine-tune it to the data generated and the uses
identified by the consortium.
In addition, a confidential index of datasets will be created and maintained
in the project when the datasets are created. Since the DMP itself is a public
document, information about the datasets that may need to remain internal to
the project will be provided to the EU and the reviewers.
The SUPERCLOUD project will consider open licenses and open availability for
the datasets. The reasons for not offering open access will be documented in
the partner questionnaires and in the appendix describing the datasets.
# Chapter 3 Dataset TEC-Evaluation
## 3.1 Dataset description
For the purposes of evaluation and validation, in particular of the SUPERCLOUD
Architecture, Technikon will generate realistic mock-up data resembling
health-care records. The data will be generated applying realistic
distributions and dependencies. The current objective is to generate the
dataset independently of any pre-existing real dataset, although pre-existing
datasets such as census data will be analyzed.
The dataset will be owned and maintained by Technikon.
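As a rough illustration of how such records could be produced, the following Python sketch writes mock-up records to an RFC 4180-style CSV file. The field names, value ranges and the age-to-condition dependency are illustrative assumptions only, not a schema defined by the project.

```python
import csv
import random

# Illustrative values only -- SUPERCLOUD does not prescribe this schema.
FIRST_NAMES = ["Alice", "Bruno", "Carla", "David"]
CONDITIONS = ["hypertension", "diabetes", "asthma"]

def mock_record(patient_id: int) -> dict:
    # Realistic-looking age distribution (clamped Gaussian).
    age = max(0, min(100, int(random.gauss(52, 18))))
    # Crude dependency: older patients are more likely to have a condition.
    condition = random.choice(CONDITIONS) if random.random() < age / 120 else "none"
    return {"patient_id": patient_id,
            "name": random.choice(FIRST_NAMES),
            "age": age,
            "condition": condition}

with open("tec_mockup.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["patient_id", "name", "age", "condition"])
    writer.writeheader()  # the csv module emits CRLF line endings, as in RFC 4180
    for i in range(1000):
        writer.writerow(mock_record(i))
```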
## 3.2 Standards and metadata
The dataset will contain data, using the common Comma-Separated Values (CSV)
format (used for example in the database community). This has been specified
in RFC 4180 [1].
Technikon will be responsible for maintaining the metadata associated with the
dataset.
## 3.3 Data sharing
The data will not be discoverable (no indexing). However, it will be
accessible, intelligible, and interoperable, conforming to RFC 4180.
For the time being, no license is required. Hence, no restrictions on sharing
are foreseen to apply.
## 3.4 Archiving and presentation
The dataset will be disseminated to the consortium through the internal shared
SVN repository. It will be presented in SUPERCLOUD deliverables. When ready,
it may be disseminated to third parties through the SUPERCLOUD website.
The dataset is planned to remain available for three (3) years after the
project. Costs for the availability of the data after the end of the project
will be covered by internal funding.
# Chapter 4 Dataset Orange-Measurements
## 4.1 Dataset description
Data will result from experiments of measurements aiming to validate one or
several components of the SUPERCLOUD security management infrastructure.
One such component could be the UPSC (User-to-Provider Security Continuum)
component developed in WP2. Other components are foreseen as well in compute,
data, and network security management infrastructures. The data falls into two
main categories:
* Infrastructure data constitutes the primary source of information for the system. It provides all necessary indicators and metrics that allow managing optimal trade-offs between customer-controlled and provider-controlled security properties. A typical implementation of such 'infrastructure data' in the SUPERCLOUD project would include the following elements:
* Security management data: Events associated with intentional security breaches, including vulnerabilities, risks, infection alerts, unauthorized access, and intrusions.
* Reliability management data: Events associated with accidental faults and failures (both at the system and network layers), as well as the improper use of resources.
* Quality of service (QoS) indicators: Memory and CPU usage, network jitter and latency, disk space, and other optional performance metrics.
* User data could constitute a secondary source of information to the previous components. In SUPERCLOUD, we consider two types of user data:
  * Raw text data.
  * Multimedia content, including both photos and video images.
Several data formats can be foreseen such as Comma-Separated Values (CSV) or
simulated network traffic PCAP files. In any case, we do not expect the volume
of data to fall in the big-data category.
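Purely as an illustration of what one CSV row of infrastructure data could look like, the sketch below serialises a QoS indicator sample of the kind listed above; the field names and value distributions are assumptions, not a SUPERCLOUD-defined schema.

```python
import csv
import datetime
import random

FIELDS = ["timestamp", "node", "cpu_pct", "mem_pct",
          "latency_ms", "jitter_ms", "disk_free_gb"]

def qos_sample(node: str) -> dict:
    """One synthetic QoS indicator sample for the given node."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "node": node,
        "cpu_pct": round(random.uniform(0, 100), 1),
        "mem_pct": round(random.uniform(0, 100), 1),
        "latency_ms": round(random.expovariate(1 / 20), 2),
        "jitter_ms": round(random.expovariate(1 / 2), 2),
        "disk_free_gb": round(random.uniform(10, 500), 1),
    }

with open("qos_indicators.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    for node in ("vm-01", "vm-02"):  # placeholder node names
        writer.writerow(qos_sample(node))
```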
The main source of infrastructure data will be the monitoring tools and
measurement tools on execution of the components developed during the project.
User data will be randomly created where necessary. This data will be
exclusively fictitious data that does not relate in any case to “real-life”
user instances. Therefore, there should not be any link to pre-existing data
in either case.
The dataset will be owned and maintained by Orange.
## 4.2 Standards and metadata
Several data formats can be foreseen such as Comma-Separated Values (CSV) [1]
or simulated network traffic PCAP files. In any case, we do not expect the
volume of data to fall in the big-data category.
The main source of infrastructure data for the UPSC system will be the
monitoring tools (both at system and network layers) that will be implemented
in the demonstrators specifically developed as proofs-of-concept for the
SUPERCLOUD technology. No real users or customers will be involved during the
entire lifetime of the project.
User data will be randomly created where necessary. To illustrate the
technologies that will be developed during the SUPERCLOUD project, random user
data could be created according to a specific format. This data will be
exclusively fictitious mock-up data that does not relate in any case to “real-
life” user instances. No corresponding human participants will be involved
during the SUPERCLOUD project.
Target users are researchers for publication and validation purposes.
## 4.3 Data sharing
Information on sharing and availability will be decided and provided in a
later revision of the document, as licensing is under discussion. However, no
restrictions are currently foreseen.
Textual data will be interoperable with RFC 4180 and related formats. Network
data will be interoperable with PCAP format and similar formats.
## 4.4 Archiving and presentation
The dataset will be disseminated to the consortium through the internal shared
SVN repository. It will be presented in SUPERCLOUD deliverables. When ready,
it may be disseminated to third parties through the SUPERCLOUD website.
Costs for the availability of the data after the end of the project will be
covered by internal funding.
# Chapter 5 Dataset IBM-Measurements
## 5.1 Dataset description
IBM expects to produce a dataset containing the results of experiments of
measurement and testing of multi-cloud systems within the SUPERCLOUD
infrastructure.
The results will include statistics regarding performance (latency and
throughput) under a set of SUPERCLOUD deployments/configurations associated to
different security requirements.
The dataset will be owned and maintained by IBM.
## 5.2 Standards and metadata
Several data formats can be foreseen including Comma-Separated Values (CSV)
files [1]. The volume of data is expected not to fall in the big-data
category.
Textual data will be interoperable with RFC 4180 and related formats.
Target users are researchers for publication and validation purposes.
## 5.3 Data sharing
IBM expects that part of the data will be made available. Information on
sharing and availability will be decided and provided in a later revision of
the document, as licensing is under discussion.
## 5.4 Archiving and presentation
The dataset will be disseminated to the consortium through the internal shared
SVN repository. It will be presented in SUPERCLOUD deliverables. When decided,
it may be disseminated to third parties through the SUPERCLOUD website.
Costs for the availability of the data after the end of the project will be
covered by internal funding.
# Chapter 6 FFCUL-Measurements
## 6.1 Dataset description
FFCUL expects to produce a dataset containing results of experiments of
measurement and testing of multi-cloud systems within the SUPERCLOUD
infrastructure.
The results will include statistics regarding performance (latency and
throughput) under a set of SUPERCLOUD deployments/configurations associated to
different security requirements.
The dataset will be owned and maintained by FFCUL.
## 6.2 Standards and metadata
Several data formats can be foreseen including Comma-Separated Values (CSV)
files [1]. The volume of data is expected not to fall in the big-data
category.
Textual data will be interoperable with RFC 4180 and related formats. Other
raw data formats may be included if necessary.
Target users are researchers for publication and validation purposes.
## 6.3 Data sharing
FFCUL expects that the data will be made available. However, terms are still
under discussion and will be indicated in a later release of the DMP.
## 6.4 Archiving and presentation
The dataset will be disseminated to the consortium through the internal shared
SVN repository. It will be presented in SUPERCLOUD deliverables. When decided,
it should be disseminated to third parties through the SUPERCLOUD website and
internal FFCUL channels.
Costs for the availability of the data after the end of the project will be
covered by internal funding.
# Chapter 7 IMT-Measurements
## 7.1 Dataset description
IMT expects to produce a dataset containing measurement information extracted
from network experiments, e.g. bandwidth, latency, jitter, collected during
experiments related to the SUPERCLOUD project.
The measurements will be realized on the THD-Sec infrastructure at IMT,
running a local instance of the SuperCloud architecture.
The dataset will be owned and maintained by IMT.
## 7.2 Standards and metadata
The dataset will be constituted of Comma-Separated Values (CSV) files [1]. The
volume of data is expected not to fall in the big-data category.
Textual data will be interoperable with RFC 4180 and related formats.
Target users are researchers for publication and validation purposes.
## 7.3 Data sharing
IMT expects that the data will be made available. However, terms are still
under discussion and will be indicated in a later release of the DMP.
## 7.4 Archiving and presentation
The dataset will be disseminated to the consortium through the internal shared
SVN repository. It will be presented in SUPERCLOUD deliverables. It could be
disseminated to IMT partners through joint experimentations on the THD-Sec
platform.
Costs for sharing will be borne by the THD-Sec platform.
# Chapter 8 TUDA-Measurements
## 8.1 Dataset description
TUDA will generate a dataset containing measurements that constitute the
performance evaluation of the SUPERCLOUD architecture.
The data will be plain measurement data (e.g., timings of operations) in a
simple format like comma separated values (CSV) [1].
The volume will be rather small given today’s storage devices and will not
exceed what can be stored on consumer hardware.
The dataset will be owned and maintained by TUDA.
## 8.2 Standards and metadata
The data will be stored in plain text files or in data formats used by open
source software, e.g., an SQL database created with MySQL.
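For illustration, plain timing measurements of the kind described in Section 8.1 could be collected into such a CSV file as in the sketch below; the timed operation is a stand-in, not an actual SUPERCLOUD primitive.

```python
import csv
import time

def timed(fn, *args):
    """Run fn once and return (result, elapsed seconds)."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0

def operation(n: int) -> int:
    # Placeholder workload standing in for an operation under evaluation.
    return sum(i * i for i in range(n))

with open("timings.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["operation", "input_size", "seconds"])
    for n in (10**4, 10**5, 10**6):
        _, dt = timed(operation, n)
        writer.writerow(["sum_of_squares", n, f"{dt:.6f}"])
```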
## 8.3 Data sharing
TUDA expects that the dataset will be shared upon request, for academic
research purposes. Sharing will only occur after the relevant publications
have been accepted.
## 8.4 Archiving and presentation
The dataset will be disseminated to the consortium through the internal shared
SVN repository. It will be presented in SUPERCLOUD deliverables.
The dataset will be hosted on pre-existing storage infrastructures at TUDA, at
no additional cost.
# Chapter 9 Dataset PHC-Evaluation
## 9.1 Dataset description
In the SUPERCLOUD project Philips Healthcare focuses on the cloud
infrastructure (for compute, data management, and network) required for
medical applications. It will NOT focus on the actual clinical analytics and
algorithms using the infrastructure; therefore, all data used in the project
will be mock data. Because no actual patient data is used in the project, the
data cannot, by definition, be traced back to a real person, avoiding privacy
and ethical issues.
Usability of the dataset will be limited to interoperability testing of the
SUPERCLOUD architecture and prototypes.
The mock data will be available after the test-case definition is finalized;
it will therefore be further specified after M22 of the project.
The dataset will be owned and maintained by PH HC.
## 9.2 Standards and metadata
The dataset will follow the DICOM (Digital Imaging and Communications in
Medicine) standard [2]. DICOM includes metadata in its specification.
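As a minimal sketch of how DICOM carries its metadata with the data, the snippet below reads a mock DICOM file using the third-party pydicom library; the library choice and the file name are assumptions, not project requirements.

```python
import pydicom  # third-party library, assumed available

# Metadata is stored as tagged data elements alongside any pixel data.
ds = pydicom.dcmread("mock_study.dcm")  # placeholder file name
print(ds.PatientName, ds.Modality, ds.StudyDate)

# The same Modality element, addressed by its (group, element) tag:
print(ds[0x0008, 0x0060])
```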
## 9.3 Data sharing
The dataset will be disseminated to the consortium through the internal shared
SVN repository. It will be presented in SUPERCLOUD deliverables.
## 9.4 Archiving and presentation
Given the limited usability of the dataset, it will not be archived beyond the
project’s end.
# Chapter 10 Dataset PEN-Measurements
## 10.1 Dataset description
PEN is at this time unsure whether datasets will be generated. This will be
updated in a later revision of the document.
If generated, this data will probably take the form of proof-of-concept
software applications, any generated input and output of these
proofs-of-concept, and performance statistics. It is quite possible that data
which PEN does not consider useful right now will be included in the project
later.
The dataset will be owned and maintained by PEN.
## 10.2 Standards and metadata
Not applicable at this stage.
## 10.3 Data sharing
Dissemination of the dataset will be within the SUPERCLOUD consortium. Further
dissemination will be defined at a later stage.
## 10.4 Archiving and presentation
Not applicable at this stage.
# Chapter 11 Dataset Maxdata-Demonstration
## 11.1 Dataset description
Maxdata will demonstrate a healthcare laboratory information system (LIS)
running on top of the SUPERCLOUD infrastructure. All data used in this use
case will be artificially generated data mimicking, in a representative way,
data records from Maxdata applications (e.g., results of random blood tests
with random results will be associated to virtual patients with fictional
names such as “Patient 1”, “Patient 2”, etc., and random birth dates).
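A minimal sketch of generating such artificial records in Python; the test names and value ranges are illustrative assumptions, while the "Patient N" naming follows the description above.

```python
import csv
import datetime
import random

TESTS = ["glucose", "hemoglobin", "cholesterol"]  # illustrative test names

def random_birth_date() -> str:
    """Return a random ISO-formatted birth date between 1930 and ~2014."""
    start = datetime.date(1930, 1, 1)
    return (start + datetime.timedelta(days=random.randrange(31000))).isoformat()

with open("lis_mock.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["patient", "birth_date", "test", "result"])
    for i in range(1, 101):
        writer.writerow([f"Patient {i}", random_birth_date(),
                         random.choice(TESTS), round(random.uniform(3.0, 9.0), 2)])
```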
The definition of the dataset will be further refined in a later version of
this document, after the test-case definition is finalized; it will therefore
be specified after M22 of the project.
The dataset will be owned and maintained by Maxdata.
## 11.2 Standards and metadata
The data will be made available using Comma-Separated Values (CSV) [1] files
given that the data size will be small (less than 100 Mbytes).
## 11.3 Data sharing
The dataset will be made available under the Open Database License (ODbL) [3].
The dataset will be made available after the project end.
## 11.4 Archiving and presentation
The dataset will be made available through the Maxdata (maxdata.pt) web site,
for at least three (3) years after the end of the project, at no additional
cost.
# Chapter 12 Summary and Conclusion
The Data Management Plan of SUPERCLOUD describes the activity of the partners
related to datasets. It contains a summary of all the information available as
of July 1st, 2015. All partners intend to create data and make it
available within the consortium.
With respect to _dataset descriptions_ , most of the data manipulated by the
SUPERCLOUD project is related to measurements resulting from the test and
validation activities that will be conducted in experimental settings to
validate the SUPERCLOUD prototypes. Data collected will thus be related to
measurements of resource usage (use of compute, storage and/or network
wherever applicable).
Supporting mock-up data will also be generated as a filler to feed into the
SUPERCLOUD prototypes, allowing meaningful experimentation. Given the target
experimentation on e-health applications, this mock-up data will mimic
e-health records and information.
With respect to _standards and metadata_ , the most prevalent data format is
Comma-Separated Values (CSV) [1], a textual description of data that is
extremely common and widely used in the database community. This format is
very easy to manipulate, is particularly well suited to sharing over SVN (as
text files are easily versioned) and is understood by a wide range of tools,
including all database engines, easing sharing and understanding. Other
formats mentioned include the software-based PCAP (packet capture) de-facto
standard and the DICOM [2] (Digital Imaging and Communications in Medicine)
standard, since SUPERCLOUD use cases are focusing on the e-health domain.
With respect to _sharing_ , several partners intend to share, at least in the
academic community, the datasets for further research and publication.
Academic research is the main objective of the data managed in the SUPERCLOUD
project.
With respect to _archiving and presentation_ , partners plan to use internal
resources and have them available at the time of writing. A few datasets will
be made available for 3 years after the end of the project.
Since it is very early in the project, this document only presents preliminary
proposals in terms of sharing, volume and archiving. The project is aware of
these aspects and will tackle them by updating the present document during the
development of the specifications of the experimentations. Therefore,
information in this document is subject to change.
0986_POINT_643990.md
**2 Data Management Plan**
Data management activities aim at sharing the data and tools accompanying
POINT’s results with the ICN research community. These activities focus on the
data that underpin the scientific publications and project deliverables. In
this section, we give a brief description of such data and metadata and
present processes and tools to ensure the the long-term availability of the
research data.
# 2.1 Data Sets and Tools
POINT will augment Blackadder 1 with a number of components that will allow
existing applications to run on ICN without any changes. These new components
will be first evaluated on a distributed test-bed and then at an operator’s
(Primetel) network. Some concepts may also be evaluated by simulation.
The evaluation will produce raw data with some parts summarised in
deliverables and scientific publications. These raw data, underpinning the
published work, constitute the main research data sets that will be made
publicly available. In cases where release of complete raw data sets is
impossible due to, for example, privacy or personal data concerns (such as
packet traces involving networking usage of trial participants), we will
strive to find data sanitation and anonymisation approaches that enable
publishing as large parts of the data as possible. Any scripts used for post-
processing the raw data will also be shared.
When the data has been produced through customised simulation, the simulators
and the configuration files will be made available. When simulation is not
customised, configuration files for the well-known simulator will be shared.
Data will be shared under a Creative Commons Licence (CC-BY or CC0 tool) 2 .
# 2.2 Metadata
As mentioned, data will be shared only in relation to publications
(deliverables and papers). As such, the publication will serve as the main
piece of metadata for the shared data. When this is not seen as being adequate
for the comprehension of the raw data, a report will be shared along with the
data explaining their meaning and methods of acquisition.
# 2.3 Data Sharing
Data will be shared when the related deliverable or paper has been made
available at an open access repository. The normal expectation is that data
related to a publication will be openly shared. However, to allow the
exploitation of any opportunities arising from the raw data and tools, data
sharing will proceed only if all co-authors of the related publication agree.
The Lead Author is responsible for getting approvals and then sharing the data
and metadata on Zenodo 3 , a popular repository for research data. The Lead
Author will also create an entry on OpenAIRE 4 in order to link the
publication to the data. OpenAIRE is a service that has been built to offer
exactly this functionality and may be used to reference both the publication
and the data. A link to the OpenAIRE entry will then be submitted to the POINT
Website Administrator (Primetel) by the Lead Author.
1. Blackadder is the platform of FP7 PURSUIT, which is the precursor of POINT. See _http://www.fp7pursuit.eu/PursuitWeb/?page_id=338_
2. For more details on Creative Commons licenses see _http://creativecommons.org/licenses/_
3. Zenodo is available at _https://zenodo.org/_
4. OpenAIRE is available at _https://www.openaire.eu/_
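To illustrate the deposit step described above, the following Python sketch walks through Zenodo's REST deposit workflow (create a deposition, attach a file, set metadata). The access token, file name and metadata values are placeholders, and the endpoints should be checked against the current Zenodo API documentation before use.

```python
import requests

TOKEN = "YOUR-ZENODO-TOKEN"  # placeholder personal access token
BASE = "https://zenodo.org/api"

# 1. Create an empty deposition.
r = requests.post(f"{BASE}/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
dep_id = r.json()["id"]

# 2. Upload the raw data file to the deposition.
with open("raw_results.csv", "rb") as fh:  # placeholder file name
    requests.post(f"{BASE}/deposit/depositions/{dep_id}/files",
                  params={"access_token": TOKEN},
                  data={"name": "raw_results.csv"},
                  files={"file": fh}).raise_for_status()

# 3. Attach minimal metadata pointing at the related publication.
meta = {"metadata": {"title": "Raw data for <paper title>",
                     "upload_type": "dataset",
                     "description": "Raw data underpinning the publication.",
                     "creators": [{"name": "Lead Author"}]}}
requests.put(f"{BASE}/deposit/depositions/{dep_id}",
             params={"access_token": TOKEN}, json=meta).raise_for_status()

# Publishing (POST .../actions/publish) is deliberately left as a manual step.
```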
# 2.4 Archiving and Preservation
Both Zenodo and OpenAIRE are purpose-built services that aim to provide
archiving and preservation of long-tail research data. In addition, the POINT
website, linking back to OpenAIRE, is expected to be available for at least 2
years after the end of the project.
0989_M3TERA_644039.md
# Chapter 1 Introduction
The M3TERA Data Management Plan (further on referred as DMP) is required for
H2020 projects participating in the Open Research Data Pilot and describes the
data management life cycle for all data sets that will be generated,
collected, and processed by the research project M3TERA. Being more specific,
it outlines how research data will be handled, what methodology and standards
will be used, whether and how the data will be exploited or made accessible
for verification and re-use and how it will be curated and preserved during
and even after the M3TERA project is completed. The DMP can be considered as a
checklist for the future, as well as a reference for the resource and budget
allocations related to the data management.
However, to explain the **reason** why a DMP is elaborated during the
lifespan of a research project: the European Commission’s vision is that
information already paid for by the public purse should not be paid for again
each time it is accessed or used. Thus, other European companies should
benefit from this already performed research.
To be more specific, _“research data refers to information, in particular
facts or numbers, collected to be examined and considered as a basis for
reasoning, discussion, or calculation. In a research context, examples of data
include statistics, results of experiments, measurements, observations
resulting from fieldwork, survey results, interview recordings and images. The
focus is on research data that is available in digital form.”_ 1
The DMP is not a fixed document. It will evolve and gain more precision and
substance during the lifespan of the M3TERA project. The first version of the
DMP, including information from the first six months of the project, includes
the following:
* Data management o Data set description o Collection/Generation/Documentation of Data and Metadata o Intellectual Property Rights
* Accessibility o Data access and –sharing o Archiving and preservation
However, before this information from all partners gets depicted in a more
detailed manner (Chapter 3 and Chapter 4), first the used methodology (Chapter
2) gets shortly described in the following chapter.
# Chapter 2 Methodology
As mentioned in the introduction, a research instrument questionnaire was
selected as the best mechanism of collecting partner inputs related to the
data management within the M3TERA project. This had the dual aim of first
gathering a more detailed understanding of the operations planned during the
project and also to raise the awareness of the requirements outlined in the
Guidelines on Data Management in Horizon 2020.
The questionnaire has been divided into five main chapters, each consisting
of a series of questions (note: the questionnaire template is attached in the
Appendix of this document). In total, the questionnaire was designed to be
broad enough to include the information required by the European commission
and to cover the various roles the partners play within the M3TERA project. As
the project is by now within its first months, some information remains
undefined at the moment. Therefore, a more detailed and elaborated version of
the DMP will be delivered at later stages of the project.
Moreover, the DMP will be updated at least by the mid-term and final review to
be able to fine-tune it to the data generated and the uses identified by the
consortium.
# Chapter 3 Data management
The term ‘Data Management’ stands for a comprehensive strategy for making
data available to target groups through an organized and structured process
put into practice. Before making data available to the public, the data to
be published needs to be defined, collected, documented and addressed
properly. The following sections define this process within M3TERA and will be
led by the following questions:
* **3.1 Data Set** – Which type of data will be generated? Which formats will be chosen and can be reused? Which data volume will the data comprise?
* **3.2 Data Generation and Collection** – How can the data set be described? To whom might be the data useful? How can it be identified as research data?
* **3.3 Data Documentation & Metadata** – Does the project data comply with international research standards?
* **3.4 Intellectual Property Rights** – Will the public availability be restricted due to the adherence to Intellectual Property Rights?
## 3.1 Data Set Description
The project has been generating data during the lifespan of the M3TERA
project. The overall volume of the generated data is estimated to reach 5-10
GByte. Approximately one out of four beneficiaries will reuse existing data
for M3TERA, whereas the rest will start generating and collecting data from
scratch. The generated data will range from quality data, through
characterization data of microsystem components and subsystem performance
(provided as graphs and raw data), to design-flow data and mixed-signal
circuit design data (Virtuoso, Spectre, Avenue, Calibre).
Furthermore, geometrical designs (STEP), simulations, measurements (CSV), and
calculation data will be generated. Also Matlab will be used as data
generation instrument. Mostly data will be displayed in numbers and/or
pictures, geometrical- (*.sm3, *.jpg, *.dxf), Microsoft Excel and Microsoft
Word format, and through the use of the EM software (*.hfss).
Moreover, the consortium acknowledged that the chosen formats and used in-
house software will enable long-term access to the mentioned data.
## 3.2 Data Generation and Collection
Data generation and collection is concerned with the project data generated
or collected, including its origin, nature and scale, and to whom it might be
useful.
Data will mostly be generated by the M3TERA beneficiaries themselves or among
the consortium. Therefore, different methodologies come into operation. Almost
half of the partners will generate data via research. Others will do different
types of measurements and simulations (e.g. microwave, process/device, and
other components), or bottom-up/top-down design flow (behavioural, transistor
level, circuit synthesis, hand layout, layout synthesis, verification, etc).
However, for some partners the exact methodology is not yet known, although it
is considered important to use methodologies that are commonly used and known.
In case data is collected rather than generated in M3TERA, the data will
originally come from literature research, internal databases, company-internal
instrumentation, and through design (e.g. MMIC), simulations and measurements.
The consortium prospectively sees the possibility of integrating or reusing
the generated data. They further agree that the data will be useful for
universities, research organizations, SMEs and scientific publications.
Moreover, it might also be beneficial for IP providers and design companies.
Even though the data either already includes the information needed for its
use or is transparent enough not to require additional information to be read
and interpreted, half of the partners mentioned that dedicated software
packages as well as access to the PDK and the IFAT design flow and tools are
required.
## 3.3 Data Documentation & Metadata
Data documentation ensures that the given dataset or set of documents will be
understood, cited properly and interpreted correctly by everyone.
All partners will document their data in a different way, either logging
relevant data, or using dedicated software (EM/EDA), libraries and IP
management systems. Others prefer to document it after designing, simulating
and measuring the components using also MS office and MATLAB. Almost half of
the partners will not use metadata standards, the rest however will use EAD,
ISO/IEC, SAML, Cadence and .xml formats.
## 3.4 Intellectual Property Rights
Even though IPR issues mainly arise during the project lifetime or even after
project end due to the dissemination (scientific and non-scientific
publications, conferences etc.) and exploitation (licensing, spin-offs etc.)
of project results, the M3TERA consortium considered the handling of IPR right
from the very beginning, already during the project planning phase. Therefore
a Consortium Agreement (CA) clearly states the background, foreground,
sideground of each partner and defines rules regarding patents, copyrights,
(un-) registered designs and other similar or equivalent forms of statutory
protection.
Within the M3TERA project most data will be generated within internal
processes at partner level through measurement analysis. Close cooperation
within the consortium may lead to joint generation of data, which is clearly
handled in terms of IPR issues within the CA.
At this stage of the project, no licenses are required, as the commercial
value of the data itself might be low. The reuse of valuable data within
M3TERA is covered by the CA and will be depending on hardware and software
targets of the consortium.
Furthermore, no third party data is reused in the current project phase. In
case third-party data will be reused, confidentiality restrictions might apply
in specific cases, which will be analyzed per case in detail.
Project data will be published only after review or publication through
scientific publication institutes or after ensuring that data is uncritical in
terms of IPR issues. Further, data of commercial value for the project
partners might underlie restrictions or face a minor time lag before
publication.
In total, within M3TERA, all public data is well discoverable and accessible.
However, confidential data is only accessible via internal partner platforms
and the provided IT infrastructure solely for the M3TERA consortium as agreed
in the Consortium Agreement. As data is and will be provided in readable
text format, the consortium (except Chalmers) agrees that the data is
assessable and intelligible. Further, they confirm that, as a basis for future
scientific research activities, the data will be usable beyond its original
purpose. Regarding suitable standards, it can finally be said that only
ANTERAL stated that its data is interoperable to specific quality standards,
whereas the others cannot comment at the moment, do not know, or deny this
statement.
# Chapter 4 Accessibility
While Chapter 3 focuses on the internal project processes before publication
including the compliance with the project rules for IPR, Chapter 4 describes
how the generated data will become accessible for public (re-) use (Section
4.1) and how the availability will be ensured permanently, whether data needs
to be destroyed/retained for any contractual, legal or regulatory purpose as
well as how long the data should be preserved, what costs will occur and how
they will be covered. (Section 4.2).
## 4.1 Access and Sharing
Access to and sharing of data helps to advance science and to maximize the
research investment. A recent paper 2 reported that when data is shared
through an archive, research productivity and often the number of publications
increases. Protecting research participants and guarding against disclosure of
identities are essential norms in scientific research. Data producers should
take efforts to provide effective informed consent statements to respondents,
to de-identify data before deposit when necessary, and to communicate to the
archive any additional concerns about confidentiality. With respect to
timeliness of data deposit, archival experience has demonstrated that the
durability of the data increases and the cost of processing and preservation
decreases when data deposits are timely. It is important that data is
deposited while the producers are still familiar with the dataset and able to
fully transfer their knowledge to the archive.
In particular potential users can find out about generated and existing data
most likely through the project's dissemination activities (scientific
publications and papers), deliverables, presentations and technical events
(conferences, trade shows) etc. During the project lifetime these documents
and data will be published on our official project website ( _www.m3tera.eu_
) where a broad community has access to the project information. Besides the
M3TERA public websites also marketing flyers or the internal project SVN
repository will be used as a tool to provide and exchange the requested data.
In principle, the data will be shared within the M3TERA consortium according
to our Consortium Agreement (with respect to any IPR issues) via a secured SVN
repository as soon as the data is available. To the public community, data
will be shared according to the dissemination level of the data via the public
project website. Partner Ericsson stated that they will share their data to
the public under bilateral agreements but there are no conditions for "open"
data generated by them. Besides the SVN and the website, the consortium is
also willing to handle requests directly. Public deliverables will be made
available as soon as they have been approved by the European Commission.
In this early stage of the project (M06) the consortium does not intend to
obtain a persistent identifier for the data generated.
## 4.2 Archiving and Preservation
Generally, the consortium's opinion is that it will not be necessary to
destroy any data for contractual, legal, or regulatory purposes. However, as
described before, there will be the case that the confidential deliverables
will be restricted.
At the moment it cannot be determined if other data should be kept. Along with
the project progress, the M3TERA consortium will discuss this further.
However, the data generated will serve as basis for future scientific research
work and reports on device performance as well as for benchmarking. The M3TERA
consortium will use the data also for the development of SiGeBiCMOS circuits,
the use of mm-wave suited packages (eWLB), for chip/RF-MEMS interfaces and the
RF-MEMS design and modeling. Further foreseeable research will be mmW building
practice, future radio systems as well as antenna research. The consortium
will also develop a sensing prototype for M3TERA.
With regards to the retention and preservation of the data, M3TERA will retain
and/or preserve the produced data at least for three years after the project
end. Further, it will be stored in a commodity cloud with usage of internal
infrastructure and data bases from the partners or external platforms.
Costs for data storage and archiving will occur, in particular for server
provision (infrastructure) and maintenance (security updates). The
coordinator, Technikon, has foreseen appropriate costs in the project budget
for the active project time. At a later stage of the project it can be better
assessed, if further costs for data storage will occur. These costs will then
be covered by the partners with their own resources.
# Chapter 5 Summary and conclusion
This data management plan outlines the handling of data generated within the
M3TERA project, during and after the project lifetime. As this document will
be kept as a living document it will be regularly updated by the consortium.
The partners put their plans and guarded expectations regarding valuable and
publishable data into writing.
A questionnaire on data management issues supported the partners to create
awareness for data handling right at the project start. Within the M3TERA
consortium qualitative data, characterization data, design data etc. will be
generated in different designs like Matlab, Microsoft Excel, EM etc. These
data will be valuable for universities, research organizations, SMEs and
scientific publications.
The M3TERA consortium is aware of proper data documentation requirements and
will rely on each partners’ competence in appropriate citation etc. The
Consortium Agreement (CA) forms the legal basis in dealing with IPR issues and
covers clear rules for dissemination or exploitation of project data. Besides
the M3TERA public website, which targets a broad interest group, also
marketing flyers or the SVN repository will be used as a tool to provide data.
With regards to the retention and preservation of the data, M3TERA partners
will retain and/or preserve the produced data for several years, three years
after the project end at least.
The M3TERA consortium is convinced that this data management plan ensures that
project data will be provided for further use timely, available and in
adequate form, taking into account the IPR restrictions of the project.
0990_3D Tune-In_644051.md
Executive summary
This is public deliverable D7.9 of the H2020 project 3D Tune-In (3DTI \-
644051). This work was carried out as part of WP7 Project Management.
3DTI takes part in the Open Access Research Data Pilot which aims to improve
and maximise access to and re-use of research data generated by projects. D7.9
– Data Management Plan outlines the project’s approach towards making research
data available in the public domain.
# Section 1: Introduction
As outlined in Article 29.3 of the 3DTI Grant Agreement, beneficiaries must
deposit project data in a research data repository and take measures to make
it possible for third parties to access, mine, exploit, reproduce and
disseminate data free of charge.
Data includes associated metadata needed to validate the results presented in
scientific publications, and any other kind of data as specified in this Data
Management Plan (DMP). Moreover, beneficiaries must provide information (via
the repository) about tools and instruments necessary for validating the
results (and - where possible - provide the tools and instruments themselves).
This does not change the obligation to protect results, adhere to
confidentiality and ethics considerations, security obligations or the
obligations to protect personal data. As an exception, beneficiaries do not
have to ensure open access to specific parts of their research data if this
can compromise the achievement of the action's main objectives, as described
in Annex 1. In this case, the data management plan must contain the reasons
for not giving access.
This deliverable describes the DMP for 3DTI. The purpose of the DMP is to
provide an analysis of the main elements of the data management policy that
will be used by the beneficiaries. The Project’s approach towards data
management is outlined in close accordance with the EU’s Guidelines for Data
Management
( **_http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020hoa-data-mgt_en.pdf_** ).
This deliverable will be updated at regular intervals.
# Section 2: Data Types
3DTI will produce four types of data (D1-2-3-4) to be included in the Open
Access Research Data Pilot.
## 2.1: Software (D1)
The software production of 3DTI is divided in three separate stages. Firstly,
all partners will work towards the creation of a 3D Tune-In Toolkit, which
will comprise 3D audio and video engines, a haptic engine, hearing aid
emulators, evaluation tools, human-computer interfaces and game scenarios. The
Toolkit will then be used to create 5 separate applications - each application
will be linked with a specific commercial partner, and will involve all the
academic partners.
D1.1 The Toolkit will serve as a basis for building specific applications, and
will be shared as open source software. Once ready, the Toolkit, including
relevant documentation, will be made available to the public, as described in
Sections 3 and 4 of this report.
D1.2 In order to address the concerns of the commercial partners related to
sharing sensitive information about their products and services (e.g. GN
Hearing sharing sensitive information about their hearing aid devices) and
potential clashes in the market in terms of competitors having similar tools,
the 3DTI applications will not be open source, and will not be part of the
Open Access Research Data Pilot.
D1.3 During the project, several demonstration and testing platforms will be
created. These will include simple interfaces to use the Toolkit, testing
platforms to evaluate its various functionalities, and tools/interfaces for
demonstration purposes. These, including relevant documentation, will be made
available to the public, as described in Sections 3 and 4 of this report.
## 2.2: Subjects’ data (D2)
Within 3DTI three separate activities will be carried out in which individuals
will be involved for evaluation and testing purposes.
D2.1 Qualitative analysis for the participatory design stage (WP1).
D2.2 Quantitative analysis for the technical development stage (WP2).
D2.3 Quantitative and qualitative analysis for the evaluation stage (WP4).
Considering the sensitive nature of this data type, special attention will be
put in sharing it with the general public. In particular, data in which
individuals could be potentially recognised (e.g.
quantitative analysis for the participatory design and evaluation stages) will
not be included in the Open Access Research Data Pilot.
Advice from the Quality Manager, Ethics Coordinator and external Ethics
Advisor will be sought before making public any data within this category
(D2).
## 2.3: Scientific publications (D3)
All scientific publications produced within the 3DTI project will be included
in the Open Access Research Data Pilot where this does not contravene any
copyright issues and will be made publicly available.
## 2.4: Dissemination material (D4)
All dissemination material produced within the 3DTI project will be included
in the Open Access Research Data Pilot, and will be made publicly available.
# Section 3: Data repositories
3DTI will employ two separate data repositories in order to comply with the
Open Access Research Data Pilot.
Before the public release (schedule in Section 4), every partner will be
responsible for archiving the data they produced on local hard-drives, which
will be regularly backed up.
## 3.1: 3DTI Website (DR1)
The 3DTI website ( _http://www.3d-tune-in.eu_ ) has been live since July 2015, and
contains an _Open Access Research Data_ section, as well as a _Downloads_
section. The 3DTI website will be locked at the end of the project (May 2018),
and will be kept available at the same URL for 10 years after that date.
## 3.2: Zenodo (DR2)
_Zenodo (_ _http://zenodo.org/_ _)_ _is an open dependable home for the long
tail of science, enabling researchers to share and preserve any research
outputs in any size, any format and from any science_ .
An account in Zenodo will be created for 3DTI, and the repository will be used
for sharing 3DTI data.
# Section 4: Data Management Plan
Here follows the provisional timetable for the public release of the data
produced by the 3DTI project. The schedule is based on the three Open Access
Research Data pilot deliverables (D7.6D7.7-D7.8), which are due in M12-24-36.
Both DR1 and DR2 repositories will be used for sharing the data with the
public.
| **Project Task** | **Data set type and name** | **Notes** | **Publicly available from** |
| --- | --- | --- | --- |
| T1.3 - Specification of 3D-Tune-In Toolkit; T2.1 - Development of the audio rendering engine | D1.3 - Demonstration and testing platforms, with documentation | | M12, M24, M36 |
| T1.3 - Specification of 3D-Tune-In Toolkit; T2.1 - Development of the audio rendering engine | D2.2 - Quantitative analysis for the technical development stage | Only non-sensitive data where subjects are not identifiable will be shared. | M24 |
| WP2 - Development of the 3D Tune-In Toolkit (T2.1-T2.2-T2.3-T2.4) | D1.1 - 3D Tune-In Toolkit | | M24 |
| WP4 - Evaluation and validation (T4.2-T4.3) | D2.3 - Quantitative and qualitative analysis for the evaluation stage | Only non-sensitive data where subjects are not identifiable will be shared. | M36 |
| All WPs | D3 - Scientific publications | These will also be made available through public repositories of the various partner institutions. | M12, M24, M36 |
| All WPs | D4 - Dissemination materials | | M12, M24, M36 |
0991_HECTOR_644052.md
**Chapter 1 Introduction**
In research projects it is common that several partners work together and
produce a lot of data related to the project. Therefore, it is important to
specify in an early stage of the project what data will be generated, how it
will be shared between the project partners and if it will be publicly
available. A data management plan (DMP) is a tool which should assist in
managing the data created during the project.
In general, the DMP should specify what data will be generated, collected, and
processed during the project. It should also provide information whether and
how data will be exploited and open for public and re-use. The DMP should
include information on what standards and methodologies will be used and how
the data will be handled during and after the research project (how the data
will be curated and preserved).
The DMP should result in a checklist for the future; it should serve as a
reference for resource and budget allocation. Further, it should support and
describe the data management lifecycle. The DMP is a living document, the
first version is submitted in M06, and updated versions are planned for M18
and M36.
In particular, the data created by the HECTOR project will be in the form of:
* Bit streams generated by true random number generators (TRNGs)
* Hardware signature codes generated by physically unclonable functions (PUFs)
* Results of statistical testing methods for TRNGs using AIS 31, NIST SP 800-22 and NIST SP 800-90B methodologies (a minimal example of such a test is sketched after this list)
* Results of statistical testing methods for PUFs
* Measurement data of power consumption/electromagnetic emanation of the investigated devices observed during hardware side-channel attacks (leakage traces)
* Output data acquired during active attacks (e.g. fault attacks) targeting specific modules (e.g. TRNG, PUF)
* FPGA or ASIC specific HDL code describing modules for performing efficient cryptographic calculations or microcontroller-specific software for cryptographic computations
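As a minimal, illustrative example of the kind of statistical testing listed above, the following Python sketch implements the frequency (monobit) test from NIST SP 800-22 for a single bit stream; it is a sketch only, not the HECTOR evaluation harness.

```python
import math
import random

def monobit_frequency_test(bits):
    """NIST SP 800-22 frequency (monobit) test.

    Maps bits to +/-1, sums them, and derives a p-value from the
    complementary error function; p >= 0.01 is the conventional pass level.
    """
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)      # 0 -> -1, 1 -> +1
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

if __name__ == "__main__":
    stream = [random.getrandbits(1) for _ in range(10**6)]
    p = monobit_frequency_test(stream)
    print(f"monobit p-value: {p:.4f} ({'pass' if p >= 0.01 else 'fail'})")
```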
Parts of the created data will be made available for the public, e.g. the
research community. Cloud storage (Dropbox, Google drive, …) or an IT service
hosted by one of the project partners will be used for that purpose. To make
the data easily accessible, the HECTOR homepage will contain a direct link to
the service from which the data can be downloaded, together with a detailed
description of the data sets. Therefore, no specific data sharing
infrastructure is required at the partner sites. This approach allows access
to be provided to interested parties outside the project by, e.g., simply
sharing a URL.
It has to be considered that the size of some types of generated data (e.g.
leakage traces) might exceed several Gigabytes (GBs) making it impossible to
share using cloud storage or a comparable server-based solution. In such
cases, only a subset of the data will be shared to limit the storage
requirements. If external parties are interested in the whole dataset, an
appropriate sharing solution can be set up on demand. The information required
for this will be provided in the corresponding dataset description, which can
be found on the HECTOR homepage.
The data will typically be stored by the project partner generating it. For
example, the partner who performs side-channel measurements will store the
corresponding leakage traces locally. Sharing the data between project
partners will be done on demand. Depending on the size of the data to share,
different approaches will be used: SVN, cloud storage, server-based approach,
exchange USB sticks or hard drives.
For source code created during the project (e.g. HDL code of cryptographic
modules, microcontroller code), only parts which do not include protection
mechanisms against e.g., side-channel analysis attacks, will be made available
for the public. The developer of the code will benefit from sharing it, since
other interested researchers can reuse the code, and this reuse results in
citations for the author. The research community, in turn, benefits from the
publicly available code because implementing standard algorithms (e.g.
authenticated encryption algorithms submitted to the CAESAR competition [1])
from scratch becomes unnecessary.
Results of side-channel analysis (SCA) attacks based on leakage traces,
results of statistical tests for the TRNGs/PUFs, and implementation results of
the cryptographic building blocks (area numbers, runtime) will be published in
deliverables. Therefore these numbers are accessible for interested parties
outside the project. Also scientific publications will ensure that the results
are disseminated.
For publicly available data, an appropriate licensing scheme will be put in
place. Interested third parties should be allowed to use, modify, and build on
the provided data. One option to allow this is attaching a Creative Commons
License (see _http://creativecommons.org/licenses/?lang=en_ ) to the data.
Two examples are the CC0 license and the CC-BY license. While CC0 allows the
author of data to waive the copyright completely, CC-BY allows the reuse of
the data by a third party, but the original author has to be cited. Specific
use-cases might require using a more-restrictive license (e.g.
_http://www.apache.org/licenses/LICENSE-2.0_ ) . If such cases are identified
in the course of the project, decisions will be made on demand and the DMP
will be adapted accordingly.
Currently there are no plans to use existing data. This might change if VHDL
code for specific modules is already available from one project partner or if
code from a third party can be used without license restrictions. Such a
change would be a further adaptation of the DMP that might become necessary
during the lifecycle of the project.
# Chapter 2 Data generation
<table>
<tr>
<th>
**Data Nr.**
</th>
<th>
**Responsible Beneficiary**
</th>
<th>
**Data set reference and name**
</th>
<th>
</th>
<th>
**Data set description**
</th>
<th>
</th>
<th>
</th>
<th>
**Research data identification**
</th>
<th>
</th> </tr>
<tr>
<th>
**End user (e.g. university, research**
**organization, SME’s, scientific publication)**
</th>
<th>
**Existence of similar data (link, information)**
</th>
<th>
**Possibility for integration and reuse**
**(Y/N) + information**
</th>
<th>
**D 1 **
</th>
<th>
**A 2 **
</th>
<th>
**AI 3 **
</th>
<th>
**U 4 **
</th>
<th>
**I 5 **
</th> </tr>
<tr>
<td>
1
</td>
<td>
UJM
</td>
<td>
Huge random bit streams and random data streams generated by proposed TRNGs in
different technologies
</td>
<td>
University, research organisation, SMEs
</td>
<td>
No other similar data are available
</td>
<td>
Y; the data will be used within this project for the statistical evaluation
and may be reused in other projects
</td>
<td>
</td>
<td>
x
</td>
<td>
x
</td>
<td>
x
</td>
<td>
x
</td> </tr>
<tr>
<td>
2
</td>
<td>
TEC
</td>
<td>
Hardware signature codes generated by proposed PUFs in individual devices
</td>
<td>
University, research organisation, consortium
</td>
<td>
Data from PUFs that were developed in the course of the FP7 project UNIQUE,
_http://unique.technikon.com_
</td>
<td>
Y; the data will be used within this project for the statistical evaluation
and may be reused in other projects for advanced analysis
</td>
<td>
</td>
<td>
x
</td>
<td>
x
</td>
<td>
x
</td>
<td>
x
</td> </tr>
<tr>
<td>
3
</td>
<td>
BRT
</td>
<td>
Results of TRNG statistical testing using AIS31, NIST SP 800-22 and NIST SP
800-90B methodologies
</td>
<td>
University, research organisation, SMEs
</td>
<td>
No other similar data available
</td>
<td>
Y; the data will be used within this project for the statistical evaluation
and may be reused in
other projects
</td>
<td>
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td> </tr> </table>
1 Discoverable
2 Accessible
3 Assessable and intelligible
4 Usable beyond the original purpose for which it was collected
5 Interoperable to specific quality standards
<table>
<tr>
<th>
**Data Nr.**
</th>
<th>
**Responsible Beneficiary**
</th>
<th>
**Data set reference and name**
</th>
<th>
</th>
<th>
**Data set description**
</th>
<th>
</th>
<th>
</th>
<th>
**Research data identification**
</th>
<th>
</th> </tr>
<tr>
<th>
**End user (e.g. university, research**
**organization, SME’s, scientific publication)**
</th>
<th>
**Existence of similar data (link, information)**
</th>
<th>
**Possibility for integration and reuse**
**(Y/N) + information**
</th>
<th>
**D 1 **
</th>
<th>
**A 2 **
</th>
<th>
**AI 3 **
</th>
<th>
**U 4 **
</th>
<th>
**I 5 **
</th> </tr>
<tr>
<td>
4
</td>
<td>
TEC
</td>
<td>
Results of PUF statistical testing using new proposed methodology
</td>
<td>
University, research organisation, consortium
</td>
<td>
_http://unique.technikon.com_
</td>
<td>
Y; advanced analysis may be based on this data
</td>
<td>
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td> </tr>
<tr>
<td>
5
</td>
<td>
BRT
</td>
<td>
Leaked signal traces observed during hardware SCA
</td>
<td>
University, research organization
</td>
<td>
Power measurements for the DPA contest,
_http://www.dpacontest.org/v4/rsm_traces.php_
</td>
<td>
Y; Might be reused in other projects to evaluate e.g. novel attack methods
</td>
<td>
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
6
</td>
<td>
UJM
</td>
<td>
Test output data acquired during active attacks on proposed modules and
demonstrators
</td>
<td>
University, research organisation, SMEs
</td>
<td>
</td>
<td>
Y; may be used in other projects too
</td>
<td>
</td>
<td>
</td>
<td>
x
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
7
</td>
<td>
TUG
</td>
<td>
VHDL code of building blocks for demonstration and evaluation in WP4
</td>
<td>
University, research organization
</td>
<td>
ASCON hardware implementations at github,
_https://github.com/ascon/ascon_collection_
</td>
<td>
Y; The building blocks might be reused for other projects and scientific
research.
</td>
<td>
x
</td>
<td>
x
</td>
<td>
x
</td>
<td>
</td>
<td>
x
</td> </tr> </table>
Table 1: Data generation
**_Explanation of Table 1:_ **
**_Data set reference and name:_ **
Identifier for the data set to be produced
**_Data set description:_ **
Description of the data that will be generated or collected, its origin (in
case it is collected), its nature and scale, to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration or reuse.
**_Research Data Identification_ **
The boxes (D, A, AI, U and I) symbolize a set of questions that should be
clarified for all datasets produced in this project.
**Discoverable:**
Are the data and associated software produced and/or used in the project
discoverable (and readily located), identifiable by means of a standard
identification mechanism (e.g. Digital Object Identifier)?
**Accessible:**
Are the data and associated software produced and/or used in the project
accessible, and in what modalities, scope, and licenses (e.g. licensing
framework for research and education, embargo periods, commercial
exploitation, etc.)?
**Assessable and intelligible:**
Are the data and associated software produced and/or used in the project
assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. are the minimal datasets handled
together with scientific papers for the purpose of peer review, are data
provided in a way that judgements can be made about reliability and the
competence of those who created them)?
**Usable beyond the original purpose for which it was collected**
Are the data and associated software produced and/or used in the project
usable by third parties even a long time after the collection of the data
(e.g. is the data safely stored in certified repositories for long-term
preservation and curation; is it stored together with the minimum software,
metadata and documentation to make it useful; is the data useful for the wider
public needs and usable for the likely purposes of non-specialists)?
**Interoperable to specific quality standards**
Are the data and associated software produced and/or used in the project
interoperable allowing data exchange between researchers, institutions,
organisations, countries, etc. (e.g. adhering to standards for data
annotation, data exchange, compliant with available software applications, and
allowing re-combinations with different datasets from different origins)?
It is recommended to mark an “x” in each applicable box and to explain it in
more detail in the text afterwards.
# Chapter 3 Processing and explanation of generated data
The following sections provide some additional information to the listed data
introduced in Chapter 2. This information includes the entity which is
responsible for the data, how the data is collected, an identification of the
end-users of the data, and research data identification.
## 3.1 Huge random bit streams generated by proposed TRNGs in different
technologies
### 3.1.1 Responsible Beneficiary
Random data will be generated and recorded by the parties performing
evaluations of random number generators. This task will mainly be performed by
UJM, so they take the main responsibility for the data. It is probable that
similar data will also be produced by other parties, e.g. KUL, BRT, STM, TCS
or MIC, as these parties also have expertise in random number generation.
### 3.1.2 Gathering Process
Random data will be essentially generated using HECTOR evaluation boards and
demonstrator in various conditions including border and corner operating
conditions. Two types of data will be generated: the raw random data streams
and the post-processed random data streams. Random data can be bits, bytes,
16- or 32-bit words. Two data formats will be available: the binary stream and
the stream of random words (bytes, 16- or 32-bit words). Stream of random
words can be useful for example when the raw random data is the output of a
counter of random events.
The raw random bit stream files have extension *.rbs, the raw random data
stream files have extension *.r08, *.r16 or *.r32 for data streams with bytes,
16- and 32-bit words, respectively. The post-processed bit stream files have
extension *.pbs and the post-processed data stream files have extension *.p08,
*.p16 or *.p32.
Random bit stream files with extension *.rxx or *.pxx (raw bit streams or
post-processed bit streams) represent the most common file format, since this
format is required by most general-purpose statistical tests (e.g. AIS 31,
NIST SP 800-90B or NIST SP 800-22).
In the generation and evaluation of random numbers, the order of bits, bytes
and words is important, since changing it can alter an existing pattern (if
there is one). Random bytes are written into the files in the same order as
they arrive. Bits are placed into the bytes in the following manner: the first
arrived bit is placed in the least significant bit and the last arrived bit in
the most significant bit, i.e. byte=bit8|bit7|bit6|bit5|bit4|bit3|bit2|bit1.
The 16-bit words have the following format: word16=byte2|byte1, and the 32-bit
words are as follows: word32=byte4|byte3|byte2|byte1.
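To make the bit and word ordering unambiguous, the following minimal Python
sketch packs an arriving bit sequence into bytes (first arrived bit in the
least significant position) and assembles 16-bit words as described above.
Function and variable names are illustrative and not part of the HECTOR
tooling.

```python
# Minimal sketch of the bit/word packing convention described above.

def pack_bits(bits):
    """Pack arriving bits into bytes: first arrived bit -> least significant bit."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8]
        byte = 0
        for position, bit in enumerate(chunk):
            byte |= (bit & 1) << position  # bit 1 -> LSB, bit 8 -> MSB
        out.append(byte)
    return bytes(out)

def pack_words16(data):
    """Assemble 16-bit words as word16 = byte2|byte1 (byte2 is more significant)."""
    return [data[i] | (data[i + 1] << 8) for i in range(0, len(data) - 1, 2)]

bits = [1, 0, 0, 0, 0, 1, 0, 1]          # first arrived bit is 1
assert pack_bits(bits) == bytes([0xA1])  # 0b10100001: last arrived bit in the MSB
```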
### 3.1.3 End-User of the Data
The end-users of this type of data will mainly be the producers of data and
other partners of the HECTOR project. It can happen that the generated data
would need to be shared with another institution. The data file sizes of at
least 2 MB will be needed for applying the AIS31 tests, sizes of 1 MB for
applying the NIST SP 800-90B test suite and thousands of files of 125000 bytes
for applying the NIST SP 800-22 test suite. The technique to share the data
depends on the amount of data. A small amount (<100MB) can be shared using the
existing SVN. For medium amounts (<1GB) some cloud storage infrastructure
might be applied. Huge amounts (>10GB) might require to share USB sticks or
external hard disks.
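Preparing inputs for NIST SP 800-22, which expects many files of 125,000 bytes
each, is essentially a matter of splitting a long recorded stream. A minimal
sketch follows; file paths and naming are illustrative, with only the
125,000-byte chunk size taken from the text above.

```python
# Split a recorded raw bit stream into fixed-size chunks for NIST SP 800-22.
# Paths and the ".partNNNNN" naming are illustrative, not a HECTOR convention.

CHUNK_SIZE = 125_000  # bytes per file, as required by the NIST SP 800-22 suite

def split_stream(path, chunk_size=CHUNK_SIZE):
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(chunk_size)
            if len(chunk) < chunk_size:
                break  # drop the incomplete tail
            with open(f"{path}.part{index:05d}", "wb") as dst:
                dst.write(chunk)
            index += 1
    return index  # number of complete chunk files written

# split_stream("trng_run1.rbs")  # produces trng_run1.rbs.part00000, ...
```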
To allow parties outside the project to evaluate the generated data, the data
files will be made publicly available. Depending on the size of the
measurement data, only subsets of the data files might be publicly shared. All
information concerning data acquisition will be available at the same place
where the generated data can be downloaded. If one interested party requires
the full set of measurement data, a custom sharing method can be set up.
### 3.1.4 Research Data Identification
The TRNG output data will not be discoverable in public search engines or in a
global registry of research data repositories, but within the consortium
internally. It will be accessible by means of an existing project subversion
repository or if necessary, exchanged via data storage media. The quality and
reliability of the data can be evaluated by statistical evaluation. The data
may be useable in upcoming projects as well, but the purpose of the data will
not change from a present-day perspective. TRNG data are used within
frameworks that allow the interoperability between the existing components
based on the conformity to the same standards.
## 3.2 Hardware signature codes generated by proposed PUFs in individual
devices
### 3.2.1 Responsible Beneficiary
The data of Physically Unclonable Functions (PUFs) will be generated and
recorded by parties performing evaluations on the data, mainly TEC and KUL.
The driving partner will be UJM in this context. It will be decided at a later
stage which PUF type will be used.
### 3.2.2 Gathering Process
There are several different sources that may be used for PUF data generation.
At the current stage, responses can be derived from 65 nm PUF ASICs including
SRAM, Latch, D Flip-flop, Buskeeper, Arbiter and Ring Oscillator PUFs. These
PUFs were developed in the course of the FP7 project UNIQUE. Another possible
source is an FPGA structural and behavioural emulation of an SRAM-like PUF
implemented in VHDL by TEC (realized during the FP7 project HINT). Ring
oscillator PUF implementations are also in progress and may be used in this
context.
The raw PUF data have the extension *.bin for data streams in binary files
and deliver sequences of bytes in either hexadecimal or binary format. For
existing ASICs and for correlation analysis, the physical proximity of bits
plays an important role, since the byte "0xA1" may correspond to different
physical bit orderings, e.g. "10100001" or "01011000".
### 3.2.3 End-User of the Data
The end-user of this type of data will be mainly the partners within the
HECTOR project or organisations that perform analysis on PUF data. The
generated data can be shared with other institutions. The size of the *.bin
files are different for the PUF types since they show different response
length, but have a maximum size of 16kB per response.
When performing statistical tests on PUF data, a lot of data is required and
needs to be shared. This might lead to an exchange of the data via USB sticks
or external hard disks. If PUF data is only used for a low number of
reconstructions within a framework, a small number of responses can easily be
shared via the existing SVN or a cloud storage infrastructure.
### 3.2.4 Research Data Identification
The PUF data will not be discoverable in public search engines or in a global
registry of research data repositories, but it will be discoverable internally
within the consortium. It will be accessible by means of an existing project
subversion repository or, if necessary, exchanged via data storage media. The
quality and reliability of the data can be checked by statistical evaluation.
The data may be usable in upcoming projects as well, but from a present-day
perspective the purpose of the data will not change. PUF data are used within
frameworks that allow interoperability between existing components based on
conformity to the same standards.
## 3.3 Results of TRNG statistical testing using AIS 31, NIST SP 800-22 and
NIST SP 800-90B methodologies
### 3.3.1 Responsible Beneficiary
The results of the statistical testing are mainly produced by those who
perform evaluations on TRNG data (e.g. UJM, KUL, BRT, STM or MIC) described in
Section 3.1.
### 3.3.2 Gathering Process
Test outputs (test results) are produced from TRNG output data described in
Section 3.1. Results of statistical testing using AIS 31, NIST SP 800-90B and
NIST SP 800-22 methodology are generated by corresponding standard tests as
log (text) files. It is important to maintain the link between tested data and
test output using convenient file naming. The filename before extension must
therefore be the same for input and output data of each test.
Output of tests of the raw data will have file extension:
*.r31 – for the AIS31 test suite output,
*.r22 – for the NIST SP 800-22 test suite output,
*.r9i – for the NIST SP 800-90B test suite for iid data,
*.r9n – for the NIST SP 800-90B test suite for non-iid data.
Correspondingly, output of tests of the post-processed data will have file
extension:
*.p31 – for the AIS31 test suite output,
*.p22 – for the NIST SP 800-22 test suite output,
*.p9i – for the NIST SP 800-90B test suite for iid data,
*.p9n – for the NIST SP 800-90B test suite for non-iid data.
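This naming convention can be enforced mechanically. A minimal Python sketch
follows; the suite keys are taken from the extension lists above, while the
function name and dictionary are illustrative.

```python
# Derive the test-output filename from the tested stream's filename.
# Raw streams (*.r??) map to *.r31/*.r22/*.r9i/*.r9n,
# post-processed streams (*.p??) map to *.p31/*.p22/*.p9i/*.p9n.

SUITE_SUFFIX = {"ais31": "31", "nist22": "22",
                "nist90b_iid": "9i", "nist90b_noniid": "9n"}

def output_name(input_name, suite):
    base, ext = input_name.rsplit(".", 1)
    prefix = ext[0]  # 'r' for raw data, 'p' for post-processed data
    if prefix not in ("r", "p"):
        raise ValueError(f"unexpected extension: .{ext}")
    return f"{base}.{prefix}{SUITE_SUFFIX[suite]}"

assert output_name("trng_run1.rbs", "ais31") == "trng_run1.r31"
assert output_name("trng_run1.pbs", "nist90b_iid") == "trng_run1.p9i"
```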
The NIST SP 800-90B test suite needs a different input data format: one
random sample per output byte (or two-byte word) must be saved. A conversion
program that converts the formats described in Section 3.1 into this specific
format will therefore be needed.
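Such a conversion is straightforward for 1-bit samples. The following sketch
unpacks a packed bit stream (LSB-first, following the ordering of Section 3.1)
into one sample per output byte; it is an assumed helper, not the project's
actual conversion program.

```python
# Convert a packed bit stream (first arrived bit = LSB of each byte, Section 3.1)
# into one 1-bit sample per output byte, as NIST SP 800-90B expects.
# Illustrative helper, not the project's actual conversion tool.

def unpack_to_samples(packed):
    samples = bytearray()
    for byte in packed:
        for position in range(8):   # LSB first = arrival order
            samples.append((byte >> position) & 1)
    return bytes(samples)

assert unpack_to_samples(bytes([0xA1])) == bytes([1, 0, 0, 0, 0, 1, 0, 1])
```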
### 3.3.3 End-User of the Data
Most of the resulting data including detailed explanations will be
incorporated within (public) deliverables. So the actual main end-user of the
results will be project partners and/or universities or research organisations
that may use this data to build additional statistical analysis on the given
results, or use them for comparisons. External end-user may also build up new
analysis on already existing results or use the raw data for their own
evaluations.
### 3.3.4 Research Data Identification
The results of the statistical evaluations will not be discoverable in public
search engines or in a global registry of research data repositories, but they
will be discoverable internally within the consortium. Because of the small
size of the output data, the data can easily be made accessible by means of an
existing project subversion repository. The realization and the results of the
statistical tests will be published together with scientific papers and/or
deliverables within the project; the produced data can therefore be assessed.
No additional purpose is conceivable at present. Interoperability is provided
through the exchange of the statistical evaluations between researchers.
## 3.4 Results of PUF statistical testing using new proposed methodology
### 3.4.1 Responsible Beneficiary
The results of the statistical testing are mainly produced by those who
perform evaluations on PUF data (e.g. TEC, KUL) described in Section 3.2.
### 3.4.2 Gathering Process
Raw PUF data are *.bin files, which may be read in by a MATLAB script that
subsequently performs the statistical analysis. The sequence of bytes needs to
be converted from a hexadecimal or decimal form to binary bit strings. When
performing the statistical analysis, the output parameters will be stored
within a structure array that can be saved in a *.mat file. A *.mat file with
1440 different output parameters (the evaluation of 12 different chips)
amounts to about 16 kB.
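A Python equivalent of the described MATLAB workflow could look as follows;
numpy and scipy are assumed, the statistics and parameter names are
placeholders rather than the project's actual variable structure, and the bit
order used by `np.unpackbits` (MSB-first) would need to match the layout
discussed in Section 3.2.

```python
# Read raw PUF responses from a *.bin file, compute simple placeholder
# statistics, and save them in a MATLAB-compatible *.mat file.
import numpy as np
from scipy.io import savemat

def analyse_puf(bin_path, mat_path):
    raw = np.fromfile(bin_path, dtype=np.uint8)   # sequence of response bytes
    bits = np.unpackbits(raw)                     # bytes -> bit string (MSB-first)
    results = {
        "uniformity": float(bits.mean()),         # fraction of ones
        "response_len": int(bits.size),
    }
    savemat(mat_path, results)                    # struct-like *.mat output
    return results

# analyse_puf("chip01_response.bin", "chip01_stats.mat")
```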
### 3.4.3 End-User of the Data
The *.mat file with the resulting parameters of the statistical analysis
needs to be combined with a read-me file that describes the structure of the
stored variables. Most of the resulting data, including detailed explanations,
will be incorporated into (public) deliverables. The main end-users of the
results will therefore be project partners and/or universities or research
organisations that may build additional statistical analyses on the given
results, or use them for comparisons. External end-users may also build new
analyses on existing results or use the raw data for their own evaluations.
### 3.4.4 Research Data Identification
The results of the statistical evaluations will not be discoverable in public
search engines or in a global registry of research data repositories, but they
will be discoverable internally within the consortium. Because of the small
size of the output data, the data can easily be made accessible by means of an
existing project subversion repository. The realization and the results of the
statistical tests will be published together with scientific papers and/or
deliverables within the project; the produced data can therefore be assessed.
No additional purpose is conceivable at present. Interoperability is provided
through the exchange of the statistical evaluations between researchers.
## 3.5 Leaked signal traces observed during hardware side channel attacks
### 3.5.1 Responsible Beneficiary
Leakage signal traces will be recorded by the parties performing evaluations
of the side-channel resistance of specific cryptographic building blocks. This
task will mainly be performed by BRT, so they take the main responsibility for
the data. It is very likely that similar data will also be produced by other
parties, e.g. TUG or KUL, as these parties also have expertise in side-channel
measurements.
### 3.5.2 Gathering Process
Leaked signal traces are typically recorded using an oscilloscope,
independent of whether power or EM measurements are performed. Modern digital
oscilloscopes allow storing the captured traces in different file formats,
e.g. CSV (comma-separated values), MAT (MATLAB data file), or a proprietary
format. Because most of these formats can easily be converted into one
another, it is not necessary for the different parties to agree on a common
format. In the case of a proprietary format (BRT), a conversion tool will be
provided to the partners of the consortium.
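Converting between the common trace formats typically amounts to a few lines
of code. As a rough sketch, a CSV trace exported from an oscilloscope can be
repacked as a MATLAB file; this assumes a plain numeric CSV with one sample
per row, and the proprietary BRT format is not covered here.

```python
# Repack an oscilloscope CSV trace as a MATLAB *.mat file.
# Assumes a plain numeric CSV with one sample per row; column layout varies
# between oscilloscope vendors, so adapt skiprows/usecols as needed.
import numpy as np
from scipy.io import savemat

def csv_trace_to_mat(csv_path, mat_path):
    trace = np.loadtxt(csv_path, delimiter=",")
    savemat(mat_path, {"trace": trace})

# csv_trace_to_mat("trace_0001.csv", "trace_0001.mat")
```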
### 3.5.3 End-User of the Data
The end-users of this type of data will mainly be the producers themselves.
It is common that the institution measuring the side-channel information also
evaluates the amount of leakage that can be extracted from the measurements by
applying methods such as differential power analysis (DPA) and template
attacks (TA). Of course, cases might also arise where the measurement data has
to be shared with another institution that has more computing power for the
evaluations or wants to test and apply novel analysis methods. Here, the
technique used to share the data highly depends on the amount of data. A small
amount (<100 MB) can be shared using the existing SVN. For medium amounts
(<1 GB) some cloud storage infrastructure might be applied. Huge amounts
(>10 GB) will require exchanging USB sticks or external hard disks.
To allow parties outside the project to reproduce the side-channel analysis
results or to apply new methods, the leakage traces will be made publicly
available. Depending on the size of the measurement data, only subsets of the
measurements might be publicly shared. All information required to use the
measurements (e.g. corresponding plain text and cipher text to each leakage
trace, oscilloscope model which has been used for capturing the data,
measurement parameters) will be available at the same place where the
measurement data can be downloaded. If one interested party requires the full
set of measurement data, a custom sharing method can be set up. One existing
example of sharing measurement data is the power measurements for the DPA
contest, available at _http://www.dpacontest.org/v4/rsm_traces.php_ .
The results of the side-channel analyses will be reported in (public)
deliverables. Additional end-users of the results will therefore be project
partners and/or universities or research organisations that may perform
additional side-channel analyses with the given measurements, or use them for
comparisons.
### 3.5.4 Research Data Identification
The leaked signal traces will not be discoverable in public search engines or
in a global registry of research data repositories, but they will be
discoverable internally within the consortium. Because of the expected large
size of the data, sharing it via the existing project subversion repository
will not be feasible. Sharing options such as cloud storage or a sharing
infrastructure provided by the responsible project partner will be applied. If
the size exceeds several gigabytes, the exchange of physical data storage
devices such as USB sticks or hard disks can be arranged. Results of
side-channel analysis based on specific leakage traces will be published in
scientific papers and/or deliverables within the project. The results achieved
on the basis of the measurement data can therefore be assessed.
## 3.6 Test output data acquired during active attacks on proposed modules and demonstrators
Investigations of the influence of active attacks on the hardware signature
codes of PUFs and on the random bit streams generated by the TRNGs will be
performed in the course of the HECTOR project. The goal is to evaluate to what
extent the investigated PUF/TRNG modules are vulnerable to active attacks, in
order to include appropriate countermeasures. The format and gathering process
of the output data do not change when active attacks are applied, so for a
detailed description of the corresponding data formats we refer to Section 3.1
for the TRNG case and Section 3.2 for the PUF case, respectively.
The type and parameters of the active attacks are important information for
further analyses and also for countermeasure development. This additional
information will be incorporated into the dataset description and constitutes
the main difference from the data sets recorded without active attacks.
## 3.7 VHDL code of building blocks for demonstration and evaluation in WP4
### 3.7.1 Responsible Beneficiary
VHDL code for cryptographic building blocks will mainly be developed by KUL,
STI and TUG. Although the focus of TUG is more on evaluating countermeasures,
they will also contribute to the hardware design and act as the responsible
beneficiary for this type of data.
#### 3.7.2 Gathering Process
Hardware building blocks are typically modelled using a hardware description
language (HDL) such as VHDL or Verilog. For more complex building blocks, the
source code can be divided into several files which then form a project. Each
project will be accompanied by a short readme file explaining the file
structure and providing a quick overview of the project.
Some building blocks might also be developed as software modules running on
microcontrollers. Here the software is typically developed in C or a
comparable high-level programming language. Projects typically consist of
vendor-specific files including e.g. standard configuration routines of the
target microcontroller and user-specific files including the actual program
for the microcontroller.
#### 3.7.3 End-User of the Data
Several end-users can be identified for the cryptographic hardware building
blocks. First, the evaluators will use these building blocks in order to
evaluate their resistance against implementation attacks such as differential
power analysis (DPA) attacks or fault attacks. By evaluating designs without
and with countermeasures, evaluators can rate the efficiency of the integrated
countermeasures. Second, some of the building blocks will be integrated into
the demonstrator platform by MIC. Finally, some of the building blocks will
also be made publicly available. The decision whether the building blocks will
be publicly shared will be discussed on demand but at the moment it is planned
to apply the following rule:
If the implementation does not include countermeasures and is likely to be
reused by other parties for comparison purposes or as a foundation for
integrating improvements, it will be made publicly available. In order to
share the code and distribute it across the community, a web-based hosting
provider for software projects such as github ( _https://github.com/_ ) will
be used. A link to the github repository will be provided on the HECTOR
homepage. This approach is already used by TUG to make hardware and software
implementations of their CAESAR submission named ASCON publicly available
( _https://github.com/ascon/ascon_collection_ ).
Implementations including specific countermeasures against implementation
attacks will not be made publicly available; they can be shared using the
internal project SVN service.
#### 3.7.4 Research Data Identification
The publicly available source code will be discoverable by public search
engines using the name of the implemented algorithm. Links to the source code
repository will also be provided from the HECTOR homepage. Project-internal
source code will be shared via the project SVN, to which only the project
partners have access. Implementation results (area numbers, cycle count, …)
will be published in scientific publications and (public) deliverables and
will therefore be publicly accessible. Interoperability is provided through
the exchange of the implementation results between researchers.
# Chapter 4 Accessibility - Data sharing, archiving and preservation
Access to and sharing of data helps to advance science and to maximize the
return on research investment. A whitepaper 1 by the University of Michigan
reported that when data is shared through an archive, research productivity
and often the number of publications increase. Protecting research
participants and guarding against disclosure of identities are essential norms
in scientific research. Data producers should take efforts to provide
effective informed-consent statements to respondents, to de-identify data
before deposit when necessary, and to communicate to the archive any
additional concerns about confidentiality. With respect to the timeliness of
data deposits, archival experience has demonstrated that the durability of the
data increases and the cost of processing and preservation decreases when data
deposits are timely. It is important that data is deposited while the
producers are still familiar with the dataset and able to fully transfer their
knowledge to the archive.
Potential users can most likely find out about generated and existing data
through the project's dissemination activities (scientific publications and
papers), deliverables, presentations and technical events (conferences, trade
shows), etc. During the project lifetime these documents and data will be
published on our official project website ( _www.hector-project.eu_ ), where a
broad community has access to the project information. Besides the public
HECTOR website, marketing flyers and the internal project subversion
repository will also be used as tools to provide and exchange the requested
data.
In principle, the data will be shared within the HECTOR consortium according
to our Consortium Agreement (with respect to any IPR issues) via a secured
data repository as soon as the data is available. To the public community,
data will be shared according to the dissemination level of the data via the
public project website. Besides the data repository and the website, the
consortium is also willing to handle requests directly. Public deliverables
will be made available as soon as they have been approved by the European
Commission.
Generally, the consortium's opinion is that it will not be necessary to
destroy any data for contractual, legal, or regulatory purposes. However, as
described before, access to confidential deliverables will be restricted. The
generated data will serve as a basis for future scientific research work and
reports on device performance, as well as for benchmarking.
With regards to the retention and preservation of the data, HECTOR will retain
and/or preserve the produced data at least for three years after the project
end. Due to the broad range of data generated during the HECTOR project, there
will not be a single solution for data sharing. Small amounts of data (e.g.
source code of hardware modules or microcontroller code, example measurement
data) up to 100MB will be shared by applying the already existing project SVN
repository _https://hector.technikon.com_ . This allows easy synchronization
as well as data versioning. It has to be noted that only project partners have
access to the project SVN. Therefore, publicly available data needs to be
shared in another way. For publicly sharing software projects (source code),
the file-hosting service github ( _https://github.com/_ ) has become
established within the research community in recent years. The ASCON
designers at TUG use the file-hosting service github to promote their software
and hardware implementations of the ASCON authenticated encryption algorithm.
The github repository is accessible via a link on the ASCON homepage (
_http://ascon.iaik.tugraz.at/links.html_ ) . For software and hardware
implementations created within the HECTOR project, which will be publicly
shared, a similar approach is planned. The source code of cryptographic
hardware and software modules which are secured by means of countermeasures
will not be made publicly available. This is on the one hand due to the
protection of the intellectual property of the project partners and on the
other hand due to security-related considerations. Here, the internal SVN
will be used.
For larger amounts of data in the range of gigabytes that need to be shared,
the consortium foresees using commodity clouds, either with internal
infrastructure and databases from the partners or with external platforms.
Costs for data storage and archiving will arise, in particular for server
provision (infrastructure) and maintenance (security updates). The
coordinator, Technikon, has foreseen appropriate costs in the project budget
for the active project time. At a later stage of the project it can be better
assessed whether further costs for data storage will arise. These costs will
then be covered by the partners from their own resources.
Another potential solution for quickly sharing huge amounts of data is the
direct use of public cloud services. The cloud storage provider Dropbox turned
out to offer a well-fitting solution with Dropbox Pro
( _https://www.dropbox.com/upgrade_ ). It offers 1 TB of data storage and 30
days of file versioning; folders can be shared using links, and shared links
can be protected by passwords. Furthermore, the duration of the sharing can be
limited and file permissions for different users can be set (e.g. read,
modify, …). The cost of this service is 99 € per year. The cloud storage can
be used both for sharing data between project partners and for offering
publicly available data.
At the current stage of the project, no data which requires some kind of
embargo period has been identified. Of course this can change during the
lifecycle of the project and will then be reported in an updated version of
the DMP.
In order to allow third parties to access, mine, exploit, reproduce, and
disseminate the publicly available data, an adequate license scheme has to be
put in place. For publicly available data provided at the github repository or
via another sharing infrastructure from the HECTOR homepage we plan to attach
an appropriate _Creative Commons_ License
( _http://creativecommons.org/licenses/?lang=en_ ) . Different types of
licenses are provided by that service, differing in the restrictions. These
restrictions include the right for modification, commercial usage, naming the
original author, and passing on under the same conditions. The license with
the lowest restrictions is CC0, which allows authors to waive the copyright
protection on their work (“No Rights Reserved”). As a consequence, a third
party can freely build upon, enhance, and reuse CC0-licensed data. The
_Creative Commons_ website provides a tool which allows adding several of the
previously listed restrictions on top of the CC0 license.
# Chapter 5 Summary and conclusion
This data management plan outlines the handling of data generated within the
HECTOR project, during and after the project lifetime. As the deliverable will
be kept as a living document, it will be updated regularly by the consortium.
The partners have put into writing their plans and guarded expectations
regarding valuable and publishable data.
The generated data, such as leaked signal traces, will be of interest not
only to the project partners but also to the scientific community outside the
HECTOR project. These signal traces serve as a foundation for practically
verifying new methods, e.g. for security evaluations. The same is true for the
random bit streams generated by the TRNG designs applied in the HECTOR
project. Not all institutions have the facilities to generate this data on
their own; these institutions benefit from the data provided by the HECTOR
project. As another advantage, public data sharing enables comparisons of TRNG
designs beyond the HECTOR project's borders. This will further result in
citations of HECTOR project results in external scientific publications. The
scientific community will also benefit from the publicly available source code
created during the HECTOR project. It enables, e.g., comparisons of metrics
such as runtime or resource consumption between algorithms created in the
HECTOR project and algorithms created by external researchers. This will again
lead to citations of HECTOR-related results.
The HECTOR consortium is aware of proper data documentation requirements and
will rely on each partner's competence in appropriate citation etc. The
Consortium Agreement (CA) forms the legal basis for dealing with IPR issues
and contains clear rules for the dissemination and exploitation of project
data. Besides the public HECTOR website, which targets a broad interest group,
marketing flyers and the SVN repository will also be used as tools to provide
data. With regard to the retention and preservation of the data, the HECTOR
partners will retain and/or preserve the produced data for several years, and
for at least three years after the project end.
The HECTOR consortium is convinced that this data management plan ensures
that project data will be provided for further use in a timely manner, readily
available and in adequate form, taking into account the IPR restrictions of
the project.
0992_SAFURE_644080.md
# Chapter 1 Introduction
The SAFURE Data Management Plan (further on referred to as the DMP) is
required for H2020 projects participating in the Open Research Data Pilot and
describes the data management lifecycle for all data sets that will be
generated, collected, and processed by the research project SAFURE. More
specifically, it outlines how research data will be handled, what methodology
and standards will be used, whether and how the data will be exploited or made
accessible for verification and reuse, and how it will be curated and
preserved during and even after the completion of the SAFURE project. The DMP
can be considered a checklist for the future, as well as a reference for the
resource and budget allocations related to data management.
The **reason** why a DMP is elaborated during the lifespan of a research
project lies in the European Commission's vision that information already paid
for by the public purse should not be paid for again each time it is accessed
or used. Other European companies should thus be able to benefit from research
that has already been performed.
To be more specific, _“**research data** refers to information, in particular
facts or numbers, collected to be examined and considered and as a basis for
reasoning, discussion, or calculation. In a research context, examples of data
include statistics, results of experiments, measurements, observations
resulting from fieldwork, survey results, interview recordings and images. The
focus is on research data that is available in digital form.”_ 1
The DMP is not a fixed document. It will evolve and gain more precision and
substance during the lifespan of the SAFURE project. Figure 1 envisions the
Data Management Lifecycle in a graphical view.
Figure 1: Data Management Lifecycle
The first version of the DMP includes data management information from the
first six months of the project. Furthermore, it addresses how the consortium
plans to handle the following topics:
* Description of Data
* Data Collection
* Data Documentation and Metadata
* Intellectual Property Rights
* Access and Sharing
* Archiving and Preservation
The rest of this report is structured as follows. Chapter 2 introduces the
general methodology according to which this DMP has been derived. Chapters 3
and 4 describe in more detail how each of the above topics will be addressed
by each partner.
# Chapter 2 Methodology
In order to get a detailed view regarding the data management topics
identified in the introduction and to collect the requirements and constraints
of each partner regarding the DMP, a data management questionnaire has been
designed. The additional purpose of this questionnaire was to raise awareness
among the project partners regarding the guidelines on data management in
Horizon 2020 projects.
The questionnaire has been divided into five main chapters regarding data
description, management, identification, intellectual property rights, and
accessibility, each comprising a series of questions to help address the
topics identified in the introduction. A template of the questionnaire is
provided in the Appendix of this report.
As the project is currently within its first months, some information remains
undefined at the moment. Since this DMP is planned to be a living document, it
will be updated as soon as more details are available, and a more detailed and
elaborated version of the DMP will be delivered at later stages of the
project. Moreover, the DMP will be updated at least by the mid-term and final
review in order to fine-tune it to the data generated and the uses identified
by the consortium.
# Chapter 3 Data Management
The term ‘data management’ stands for an extensive strategy to make
project/research data available to interested target groups via a set of
well-defined policies. Before data is made available to the public, the
published data needs to be defined, collected, documented and addressed
properly. The following sections define this process within SAFURE, guided by
the following questions:
* **3.1 Description of data** – Which type of data will be generated? Which formats will be chosen and can it be reused?
* **3.2 Data generation & collection ** – How can the data set be described? To whom might the data be useful? How can it be identified as research data?
* **3.3 Data documentation & metadata ** – Does the project data comply with international research standards?
* **3.4 Intellectual Property Rights** – Will the public availability be restricted due to the adherence to Intellectual Property Rights?
## 3.1 Description of data
The consortium will generate data throughout the lifespan of the SAFURE
project. The generated data is expected to cover a large range of areas,
including performance measures (code size, loading time, execution
performance, temperature, power and clock frequency of MPSoCs, latency,
jitter, bitrate), measures obtained from worst-case and distribution analysis
(e.g. network and ECU load, frame and task latencies), as well as qualitative
data (platform requirements) and specifications (DOC/PDF). Furthermore,
various data types will be produced, such as source code (C language), object
code, software and hardware architecture models (ARXML, SYSML, SymTA/S XML),
network packets, and formats used by network analysis tools.
## 3.2 Data generation & collection
The data generation and collection phase is concerned with the project data
generated or collected, including its origin, nature and scale, and to whom it
might be useful.
Data will mostly be generated by the SAFURE beneficiaries themselves or
within the consortium; therefore, different methodologies come into operation.
Almost all partners will perform performance measurements, which leads to a
high amount of generated data. The consortium prospectively sees the
possibility of integrating or reusing the generated data and agrees that the
data will be useful for universities, research organizations, SMEs and
scientific publications. Moreover, it might also be beneficial for IP
providers and design companies. Restrictions on data availability depend on
the specific type of data.
Based on the questionnaire we developed, Table 1 gives a per-partner overview
of the data which is expected to be generated within the SAFURE project,
including its description and identification. For each partner, more details
can be found in their respective questionnaires.
<table>
<tr>
<th>
**Data Nr.**
</th>
<th>
**SAFURE partner**
</th>
<th>
**Data set reference and name**
**and used methodology**
</th>
<th>
**Data set description**
</th>
<th>
</th>
<th>
</th>
<th>
**Research data identification 2 **
</th>
<th>
</th> </tr>
<tr>
<th>
**End user (e.g. university, research organization,**
**SME’s, scientific publication)**
</th>
<th>
**Existence of similar data (link, information)**
</th>
<th>
**Possibility for integration and**
**reuse (Y/N) + information**
</th>
<th>
**D 3 **
</th>
<th>
**A 4 **
</th>
<th>
**AI 5 **
</th>
<th>
**U 6 **
</th>
<th>
**I 7 **
</th> </tr>
<tr>
<td>
1
</td>
<td>
TRT
</td>
<td>
Performance measures
</td>
<td>
Universities, research organizations, SMEs
</td>
<td>
None
</td>
<td>
Yes
</td>
<td>
D
</td>
<td>
D
</td>
<td>
N
</td>
<td>
D
</td>
<td>
N
</td> </tr>
<tr>
<td>
2
</td>
<td>
TTT
</td>
<td>
Performance measures
</td>
<td>
For internal use
</td>
<td>
None
</td>
<td>
Yes
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
N
</td>
<td>
N
</td> </tr>
<tr>
<td>
3
</td>
<td>
BSC
</td>
<td>
Performance measures
</td>
<td>
Academics, industry
</td>
<td>
None
</td>
<td>
Yes
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
N
</td> </tr>
<tr>
<td>
4
</td>
<td>
TEC
</td>
<td>
Performance measures
</td>
<td>
Research organization, SMEs
</td>
<td>
None
</td>
<td>
Yes
</td>
<td>
D
</td>
<td>
D
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
N
</td> </tr>
<tr>
<td>
5
</td>
<td>
ETHZ
</td>
<td>
Performance measures
</td>
<td>
Universities
</td>
<td>
None
</td>
<td>
No
</td>
<td>
N
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
N
</td>
<td>
N
</td> </tr>
<tr>
<td>
6
</td>
<td>
SYM
</td>
<td>
Performance measures obtained from worst-case and distribution analysis
</td>
<td>
Universities, research organizations, SME’s, etc.
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
D
</td>
<td>
D
</td>
<td>
Y
</td>
<td>
D
</td>
<td>
Y
</td> </tr>
<tr>
<td>
7
</td>
<td>
ESCR
</td>
<td>
Qualitative data and performance measures
</td>
<td>
Academics, SMEs
</td>
<td>
None
</td>
<td>
Yes
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
N
</td>
<td>
N
</td> </tr> </table>
2 N – No; Y – Yes; D – Depends
3 Discoverable
4 Accessible
5 Assessable and intelligible
6 Usable beyond the original purpose for which it was collected
7 Interoperable to specific quality standards
<table>
<tr>
<th>
**Data Nr.**
</th>
<th>
**SAFURE partner**
</th>
<th>
**Data set reference and name**
**and used methodology**
</th>
<th>
**Data set description**
</th>
<th>
</th>
<th>
</th>
<th>
**Research data identification 2 **
</th>
<th>
</th> </tr>
<tr>
<th>
**End user (e.g. university, research organization,**
**SME’s, scientific publication)**
</th>
<th>
**Existence of similar data (link, information)**
</th>
<th>
**Possibility for integration and**
**reuse (Y/N) + information**
</th>
<th>
**D 3 **
</th>
<th>
**A 4 **
</th>
<th>
**AI 5 **
</th>
<th>
**U 6 **
</th>
<th>
**I 7 **
</th> </tr>
<tr>
<td>
8
</td>
<td>
MAG
</td>
<td>
MAG Sw Code, SW
Specs, Sw Architecture
Models
</td>
<td>
All code products for internal use, other data to academics.
</td>
<td>
None
</td>
<td>
Yes
</td>
<td>
N
</td>
<td>
D
</td>
<td>
D
</td>
<td>
N
</td>
<td>
N
</td> </tr>
<tr>
<td>
9
</td>
<td>
SSSA
</td>
<td>
Sample models
</td>
<td>
Universities, industry, standardization bodies
</td>
<td>
TBD
</td>
<td>
Yes
</td>
<td>
D
</td>
<td>
D
</td>
<td>
D
</td>
<td>
D
</td>
<td>
D
</td> </tr>
<tr>
<td>
10
</td>
<td>
SYS
</td>
<td>
Performance measures
</td>
<td>
SMEs, academics
</td>
<td>
None
</td>
<td>
Yes
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
D
</td>
<td>
N
</td> </tr>
<tr>
<td>
11
</td>
<td>
TCS
</td>
<td>
Network packets of the Wireshark tool, performance and timing data
</td>
<td>
Internal use only
</td>
<td>
None
</td>
<td>
Not at present
</td>
<td>
N
</td>
<td>
D
</td>
<td>
D
</td>
<td>
N
</td>
<td>
N
</td> </tr>
<tr>
<td>
12
</td>
<td>
TUBS
</td>
<td>
Performance measures obtained from worst-case
analysis
</td>
<td>
Universities, research organizations, SME’s, scientific publishing
</td>
<td>
None
</td>
<td>
Yes
</td>
<td>
D
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
N
</td>
<td>
D
</td> </tr> </table>
Table 1: Data Overview
## 3.3 Data documentation & metadata
Data documentation ensures that a given dataset or set of documents can be
understood, cited properly and interpreted correctly by any interested party.
Wherever possible, we will use metadata standards to document the generated
data. At this point, the AUTOSAR (ARXML), SYSML and XMI standards as well as
ISO 26262 have been identified as suitable by some partners. Most partners
generate very specific data for which no suitable metadata standard has been
identified yet. These partners plan to store and organize their generated data
in a standardized format (e.g. CSV, XML, or Microsoft Excel) and to provide an
accompanying description for interpreting the data. This description and
metadata will be produced manually and might also include scientific
publications and technical reports. This process can also be automated by
scripts and specific (software architecture) modelling tools.
Some tools are nonetheless required to access the given data sets, e.g. XML
parsers, tools to read XMI/ARXML formats, and tools with CSV input support
(e.g. Microsoft Excel).
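For data without an established metadata standard, the planned combination of
a standardized container and a manually written description can be as simple
as a CSV file with a JSON sidecar. A minimal Python sketch follows; all file
and field names are illustrative, not a SAFURE convention.

```python
# Store performance measures as CSV with an accompanying JSON description,
# as planned for data lacking a suitable metadata standard. Names illustrative.
import csv
import json

measurements = [("load_time_ms", 12.4), ("exec_time_ms", 103.7)]

with open("perf_measures.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["metric", "value"])
    writer.writerows(measurements)

with open("perf_measures.meta.json", "w") as f:
    json.dump({
        "description": "Execution performance measures (illustrative example)",
        "units": {"load_time_ms": "milliseconds", "exec_time_ms": "milliseconds"},
        "producer": "SAFURE partner (placeholder)",
    }, f, indent=2)
```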
## 3.4 Intellectual Property Rights
Even though IPR issues mainly arise during the project lifetime or even after
project end due to the dissemination (scientific and non-scientific
publications, conferences etc.) and exploitation (licensing, spin-offs etc.)
of project results, the SAFURE consortium considered the handling of IPR right
from the beginning, i.e. during the project planning phase. Therefore the
Consortium Agreement (CA) clearly states the background, foreground, and
sideground of each partner and defines rules regarding patents, copyrights,
(un-)registered designs and other similar or equivalent forms of statutory
protection.
Within the SAFURE project most data will be generated within internal
processes at partner level through measurement and/or analysis. Close
cooperation within the consortium may lead to joint generation of data, which
is clearly handled in terms of IPR issues within the CA.
At this stage of the project, no licenses are required. Raw data and results
extracted from the performed studies will be public. The reuse of synthetic or
collected data is covered by the CA and will depend on the hardware and
software targets of the consortium.
No third-party data is reused in the current project phase. If third-party
data is reused later, confidentiality restrictions might apply in specific
cases, which will be analyzed in detail case by case.
Neither a time lag nor restrictions on the publication of results are planned.
Publishable data will be posted and published in due course.
# Chapter 4 Accessibility
While Chapter 3 focuses on the internal project processes before publication,
including compliance with the project rules for IPR, Chapter 4 describes how
the generated data will be made accessible for public (re-)use (Section 4.1)
and how its availability will be ensured permanently, whether data needs to be
destroyed or retained for any contractual, legal or regulatory purpose, how
long the data should be preserved, and what costs will occur and how they will
be covered (Section 4.2).
## 4.1 Access and Sharing
Access to and sharing of data helps to advance science and to maximize the
return on research investment. A recent paper 2 reported that when data is
shared through an archive, research productivity and often the number of
publications increase. Protecting research participants and guarding against
disclosure of identities are essential norms in scientific research. Data
producers should take efforts to provide effective informed-consent statements
to respondents, to de-identify data before deposit when necessary, and to
communicate to the archive any additional concerns about confidentiality. With
respect to the timeliness of the data deposit, archival experience has
demonstrated that the durability of the data increases and the cost of
processing and preservation decreases when data deposits are timely. It is
important that data is deposited while the producers are still familiar with
the dataset and able to fully transfer their knowledge to the archive.
In particular, potential users can find out about generated and existing data
most likely through scientific publications and deliverables. During the
project lifetime these documents will be published on the official project
website (www.safure.eu), where a broad community has access to the project
information. Our SME/industry partners will also conduct product marketing in
order to draw attention to the SAFURE results and data. The consortium will
also make data findable through search engines and is willing to provide
information upon request to interested users, potential customers, etc. Such
requests will be handled directly. Besides public websites, marketing flyers
and the SVN repository will also be used as tools to provide requested data.
The partners have indicated that they will provide the generated data after
the project end or upon request. Public deliverables will be made available as
soon as they have been approved by the European Commission. The consortium
itself will receive data as soon as it is available.
Once interested users have learned which data was generated and is available,
the dissemination level determines whether this data will be shared and made
available without restrictions. The consortium is willing to share the
produced data with researchers (from academia and industry) and potential
customers/business partners, provided that confidentiality restrictions are
met. Of course, the data will be shared within the SAFURE consortium without
any restrictions in order to obtain synthetic data.
At this early stage of the project, most partners do not plan to obtain a
persistent identifier for their data.
## 4.2 Archiving and Preservation
Generally, the partners believe that it will not be necessary to destroy any
data. However, it might be the case that some confidential data needs to be
restricted. This will be decided on a case-by-case basis. At this early stage,
some partners could not yet determine whether destroying data will be
necessary at all, as this also depends on the software and hardware targets
that still need to be decided.
Along with the project progress it will be agreed what data will be kept and
what data will be destroyed. This will be done according to the SAFURE project
rules, agreements and discussion within the consortium. So far, the partners
have already expressed that data that is relevant for scientific evaluation
and publication should certainly be kept.
The data generated will serve as basis for future scientific research work and
projects. For the consortium it is clear that foreseeable research uses for
the data can be, for instance, performance comparisons, in SAFURE particularly
with future systems and other hardware and software. Furthermore, the data may
even define the starting point for new standards and provide benchmarks for
research.
Regarding the retention and preservation of the data, SAFURE partners will
retain and/or preserve the produced data for several years, and for at least
three years.
As to the storage location, the SAFURE partners prefer to hold data in
internal repositories and/or on internal servers. Furthermore, data can be
held in marketing repositories. Another option indicated by the partners is
storage on public or institutional websites. It has also been suggested to
establish a commodity cloud by using internal cloud infrastructure or,
depending on the confidentiality, an external platform.
For SAFURE, costs for data storage and archiving will arise, in particular
for server provision (infrastructure) and maintenance. The coordinator,
Technikon, has already foreseen this in the project budget. The expected
amount at this stage is approximately € 2,000 for the servers. At a later
stage of the project it can be better assessed whether further costs for data
storage will arise. These costs will then be covered by the partners with
their own resources.
# Chapter 5 Conclusion
This data management plan outlines the handling of data generated within the
SAFURE project, during and after the project lifetime. As this document will
be kept as a living document, it will be updated regularly by the consortium.
This report defines the data management policy within the SAFURE project
addressing data description, collection, documentation and metadata,
intellectual property rights, access and sharing, and archiving and
preservation. A questionnaire has been developed to collect detailed
information from each partner regarding these topics (see Appendix for the
questionnaire template).
The main data collected within SAFURE will be various performance
measurements, ranging from ECU to network performance measures. Platform
requirements and specifications will also be derived. Additionally, this
report describes what data is collected by each partner. This data is
anticipated to be useful to universities, research organizations, SMEs and for
scientific publication. The partners have identified some metadata standards
(AUTOSAR (ARXML), SYSML, XMI) to help third parties understand the collected
data. Data for which no suitable metadata standard could be identified at
present will be described and documented manually. The consortium agreement
also specifies how intellectual property rights will be preserved and contains
clear rules for the dissemination and exploitation of project data.
Data availability will be mainly advertised via publications, deliverables,
and marketing. This data will be made available through the project’s SVN
repository. Furthermore, some partners plan to make data available through
their websites as well. Access to this data will mostly be handled upon
request, provided that confidentiality requirements are met. Also, if
confidentiality allows, data will be archived and preserved for at least three
years after the project ends.
The SAFURE consortium is convinced that this data management plan ensures that
project data will be made available for further use in a timely manner and in
adequate form, taking into account the IPR restrictions of the project.
1. **Introduction**
1.1. Purpose of the document
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used with regard to
all the datasets that have been generated by the project. A DMP details what
data the project has generated, whether and how it has been exploited and made
accessible for verification and re-use, and how it will be curated and
preserved.
1.2. Scope of the document
This document (Deliverable D9.5) describes the final version of the AEROARMS
DMP. The first version was described in Deliverable D9.4, which was submitted
in M6. The final AEROARMS DMP includes all the data that were predicted in
D9.4.
1.3. Structure of the document
The DMP describes datasets and reflects the current point of view of the
consortium about the data produced. The description of each dataset includes
the following:
* Dataset name
* Authors
* Data contact and responsible
* Dataset objective
* Dataset description
* Data description
* Dataset sharing
* Archiving and preservation
* Dataset size
* Zenodo link
It has been agreed by the AEROARMS consortium that all the datasets that will
be produced within the project and that are not affected by IPR (clause 8.0 of
the consortium agreement) will be shared between the partners. Moreover, all
the datasets with potential interest for the community and that are not
related to further exploitation activities will be shared with the whole
scientific community after their publication in conference proceedings and/or
international journals.
Besides the introduction and conclusions, this document is structured in
sections devoted to the datasets of WPs 3, 4, 5, 6 and 8, which have
generated the datasets with the highest potential interest.
2. **Control of aerial robots with multiple manipulation means**
This section is devoted to datasets that have been collected during the
activities in WP3. These datasets have been collected in preliminary
laboratory tests, indoor settings or outdoor experiments. They are grouped
depending on the tasks in WP3 they are involved in.
**2.1.** Modelling of aerial multirotors with two arms
Several datasets that include data for modeling of aerial multirotors with
arms have been published in AEROARMS. These datasets are the following:
1. **Dataset “AEROARMS Behavioural coordinated control”** , described in Section 2.4.
The provided data have been acquired during the experiment described in
Section 4 of Deliverable D3.5, which was conducted at the flight arena of
CATEC by using the dual arm manipulator developed by USE. The following data
for modelling of the aerial robot with two arms are included in the dataset:
* Reference and actual position and orientation of the UAV
* Reference and actual joint positions of the two arms
* Desired (planned), reference (output of the Inverse Kinematics) and actual position and orientation of the two manipulators end effectors
* Position and orientation tracking errors of the two end effectors
The full description and details of the dataset are included in Section 2.4.
2. **Dataset “Visual servoing with actively movable camera”** , described in Section 2.5.
The provided data have been acquired during the experiment described in
Section 5.1 of Deliverable D3.5, which was conducted at the flight arena of
CATEC by using the dual arm manipulator developed by USE. The following data
for modelling of the aerial robot with two arms are included in the dataset:
* Reference and actual position and orientation of the UAV
* Reference and actual joint positions of the two arms
* Desired (planned), reference (output of the Inverse Kinematics) and actual position and orientation of the end effector of the left manipulator
* Position and orientation tracking error of the end effector of the left manipulator

The full description and details of the dataset are included in Section 2.5.
3. **Dataset “Multirotor with two arms: multirotor/arms interaction”**, described in the following.
**Dataset name:** Multirotor with two arms: multirotor/arms interaction
**Authors:** A. Suarez, G. Heredia, A. Ollero
**Data contact and responsible:** USE, Guillermo Heredia
**Dataset objective:** Unlike fixed-base manipulators, in an aerial
manipulation robot the reaction wrenches caused by the motion of the arms or
by physical interactions arising in flight are supported by the aerial
platform, typically causing undesired oscillations in the attitude or
deviations in the position that may complicate grasping tasks or installation
operations. Since it is difficult to appreciate this effect in flight due to
the action of the autopilot and the noise generated by the propellers, the
goal of this dataset is to analyze the effect of the motion of a dual arm
manipulator on the attitude of a multirotor platform supported by wires,
emulating hovering conditions. In particular, it is interesting to evaluate
the partial reaction compensation capability of a dual arm manipulator,
generating coordinated symmetric trajectories to cancel the reactions in two
axes (roll and yaw). The data logs also reveal how the reaction oscillation in
the multirotor grows as the velocity/acceleration of the arms increases.
**Dataset description:** This dataset is obtained with the dual arm aerial
manipulator (DJI Matrice 600 hexarotor equipped with the USE dual arm) hanging
from four cables attached to the multirotor base, emulating hovering
conditions. Although the datasets presented here were obtained with a
particular platform and dual arm manipulator, they may be of interest for a
preliminary analysis of the dynamic coupling effect. The orientation data was
obtained with a STM32F3 Discovery board attached to the multirotor base,
providing the measurements from the accelerometer, gyroscope and magnetometer
sensors. The arms are built with Herkulex DRS-0402 and DRS-0602 servos and a
customized aluminium frame structure. The experiments consist of generating a
sequence of rotations around the shoulder pitch and shoulder yaw joints with
one arm (non-compensated reaction wrenches) and with both arms (partial
reaction compensation), considering different joint speeds. The gyroscope
measurements in the roll-pitch-yaw angles are evaluated to analyze the
amplitude of the reaction wrenches. Note that the rotational motion of the
multirotor is constrained by the cables.
Three experiments are conducted:
1. Symmetric trajectory with 1 second play time: the left and right arms generate a sequence of rotations around the shoulder pitch and elbow pitch joints which is symmetric with respect to the XZ plane, where the X-axis is the forward axis, the Y axis is parallel to the shoulder pitch rotation angle, and the Z-axis is the vertical axis parallel to the gravity vector. The play time indicates the desired time to reach the reference angular position. This parameter is sent to the servos in the motion commands.
2. Symmetric trajectory with 0.5 seconds play time: the same trajectory is executed with both arms, with a lower play time so the reaction wrenches in the pitch angle are more evident (higher inertias and centrifugal terms).
3. Asymmetric motion with 0.5 seconds play time: the same trajectory is executed only by the left arm while the right arm stays in a fixed position, causing a reaction wrench in the three roll, pitch and yaw angles.
A video file is also provided in the dataset file, showing the execution of
the three experiments.
**Data description** : Three groups of log files are provided in the
corresponding folders. The left and right arm data files contain the joint
position, velocity and PWM signal provided by the Herkulex servos, whereas the
STM32Board_Data file contains the accelerometer, gyroscope and magnetometer
data. A MATLAB .m script file is provided to load and plot the data, clearly
indicating each field. The three files share the same time stamp.
The content of the dataset is detailed in Table 2.
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name and format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Left arm data
</td>
<td>
Text file “Log_LeftArm.txt”:
* Col 1: time stamp in sec
* Cols 2-4: TCP position ref in meters
* Cols 5-7: TCP position in meters
* Cols 8-10: joint angular position in deg
* Cols 11-13: joint angular velocity in deg/s
* Cols 14-16: normalized PWM signal applied to the servos in the range [-1, 1]
</td>
<td>
The “Log_LeftArm.txt” data file contains different data of interest from the
left arm sampled at 50 Hz, including the reference Cartesian position of the
tool center point (TCP, not used), the current TCP position, the joint
position and velocity, and the normalized PWM signal applied by the servos.
These signals are obtained from the internal registers of the Herkulex servos,
applying the forward kinematic model to obtain the TCP position. The sequence
of rotations is indicated in the README file within each experiment folder.
This data file can be loaded and plotted easily with MATLAB
</td> </tr>
<tr>
<td>
Right arm data
</td>
<td>
Text file “Log_RightArm.txt” with the same format used for the left arm
</td>
<td>
The “Log_RightArm.txt” contains the data of interest from the right arm with
the same format as for the left arm
</td> </tr>
<tr>
<td>
IMU data
</td>
<td>
Text file “STM32_Board_DataFile.txt”:
* Col 1: time stamp in sec
* Col 2: packet ID
* Cols 3-4: internal time stamp of the board sec-ms
* Cols 5-7: acceleration in m/s^2
* Cols 8-10: angular velocity in deg/s
* Cols 11-13: magnetic field in Gauss
* Col 14: temperature of the sensor in degrees Celsius x100
* Cols 15-17: roll-pitch-yaw orientation estimated from the Madgwick algorithm
</td>
<td>
A STM32F3 Discovery board is used as IMU, logging the data from the
accelerometer, gyroscope and magnetometer at 100 Hz, sent to the main computer
board through a USART interface. The Madgwick algorithm is used to estimate
the orientation in the roll-pitch-yaw angles. The effect of the dynamic
coupling can be observed more clearly in the data from the gyroscope
</td> </tr> </table>
**Table 2: Content of the dataset named: “Multirotor with two arms:
multirotor/arms interaction”.**
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, USE will preserve a copy of the
dataset.
**Dataset size** : 20.1 MB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2657640_
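
Because the log files use a fixed, whitespace-separated column layout, they can also be loaded outside MATLAB. The following Python sketch is not part of the dataset; the column indices are taken from Table 2 above:

```python
# Minimal sketch: load the left-arm log using the column layout of Table 2.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("Log_LeftArm.txt")   # one row per 50 Hz sample
t = data[:, 0]                         # time stamp [s]
tcp_ref = data[:, 1:4]                 # TCP position reference [m]
tcp = data[:, 4:7]                     # measured TCP position [m]
q = data[:, 7:10]                      # joint angular positions [deg]
qd = data[:, 10:13]                    # joint angular velocities [deg/s]
pwm = data[:, 13:16]                   # normalized PWM in [-1, 1]

plt.plot(t, q)
plt.xlabel("time [s]")
plt.ylabel("joint angle [deg]")
plt.show()
```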
**2.2.** Integrated force and position control
**Dataset name:** Integrated force and position control dataset
**Authors:** A. Suarez, G. Heredia, A. Ollero
**Data contact and responsible:** USE, Guillermo Heredia
**Dataset objective:** The goal of this dataset is to evaluate the performance
of a contact force control task carried out in flight by a compliant joint arm
integrated in a hexarotor platform. The pushing force exerted by the arm is
estimated and controlled from the joint deflection of the spring-lever
transmission mechanism introduced between the servo shaft and the output link,
measuring the deflection angle with an encoder. The data obtained from the IMU
allows analysis of the effect of the physical interactions on the multirotor.
**Dataset description:** The dataset corresponds to two experiments carried
out outdoors with a Cartesian aerial manipulator consisting of a 2-DOF
Cartesian base (XY axes) and a compliant joint arm attached at its base (third
joint). A safety rope was used during the experiments. In each experiment, the
multirotor approaches the contact point with the arm stretched and its link
pointing downwards to ensure that the force is transmitted in the forward
direction (X-axis). Then, it exerts a sequence of two force references (1 N
and 1.5 N) while the aerial platform tries to stay in hover. The data from the
experiment include the position of the Cartesian base, the force reference and
estimation, the joint deflection signal and the control signals of the PI
controller, as well as the position, velocity, orientation and angular rate of
the aerial platform. The autopilot is based on the Raspberry Pi-NAVIO board,
with the PX4 estimator. A Leica laser tracking system was used to measure the
position of the platform.
**Data description** : The data of each experiment is stored in a single plain
text file with the format indicated below. This file can be loaded and plotted
with MATLAB; a script is provided for this purpose, clearly indicating all the
fields.
The content of the dataset is shown in Table 3.
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name and format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Contact force experiment data
</td>
<td>
File “Log_Contact_Force_ControlDATE-TIME.txt”:
* Col 1: time stamp
* Cols 2-3: Cartesian base XY axes position in mm
* Col 4: compliant joint servo position in deg
* Cols 5-6: Cartesian base XY axes reference (PWM or position, depending on the control mode)
* Col 7: compliant joint servo position reference in deg
* Col 8: force reference in N
* Col 9: force estimation in N
* Col 10: torque estimation in Nm/rad
* Col 11: torque reference in Nm/rad
* Cols 12-13: proportional and integral correction terms of the PI deflection-force controller in deg
* Cols 14-16: multirotor position in m
* Cols 17-19: multirotor velocity in m/s
* Cols 20-22: multirotor orientation in deg
* Cols 23-25: UAV angular rate in deg/s
</td>
<td>
The file contains all the data from the contact force control experiment,
including the joint position references and feedback, the control signals, and
the pose of the multirotor platform. The position of the Cartesian base is
estimated from the rotation angle and number of turns of the corresponding DC
motor that drives the linear guide. The servo position is provided by the
servo itself (Herkulex DRS-0101), measuring the deflection angle with a
magnetic encoder. The force and torque are estimated from the deflection
angle, knowing the stiffness of the springs in the spring-lever transmission
mechanism. The position of the multirotor is measured with a Leica laser
tracker, whereas the velocity, orientation and angular rate are obtained from
the PX4 estimator
</td> </tr> </table>
# Table 3: Content of the dataset named: “Integrated force and position
control dataset”.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, USE will preserve a copy of the
dataset.
**Dataset size** : 630 KB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2641222_
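
As a companion to the description above, the following Python sketch illustrates the deflection-based estimation principle: the torque follows from the spring stiffness times the measured deflection, and the pushing force from dividing by the link length. The stiffness and link length below are placeholder values, not the actual parameters of the compliant arm:

```python
# Illustrative sketch of the deflection-based force estimation described
# above. k_spring and link_length are assumed placeholder values.
import numpy as np

k_spring = 0.9       # [Nm/rad] equivalent torsional stiffness of the springs
link_length = 0.25   # [m] length of the output link

def estimate_contact_force(servo_deg, link_deg):
    """Torque and pushing force from the spring-lever deflection.

    servo_deg: servo shaft angle [deg]; link_deg: output link angle [deg].
    Their difference is the deflection measured by the encoder.
    """
    deflection = np.deg2rad(servo_deg - link_deg)  # [rad]
    torque = k_spring * deflection                 # [Nm]
    force = torque / link_length                   # [N] at the link tip
    return torque, force
```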
**2.3.** Control for novel fully-actuated aerial platforms
Two datasets have been made available regarding the control of novel fully-
actuated aerial platforms:
* Dataset "Towards a Flying Assistant Paradigm: the OTHex" contains experimental data relative to the validation of the controller for fully-actuated aerial robots.
* Dataset "The Tele-MAGMaS: An Aerial-Ground Comanipulator System" contains experimental data regarding the validation of the controller for fully-actuated aerial robots, while cooperatively manipulating a long object together with ground manipulators.
The descriptions of the two datasets are given in the following:
**Dataset name:** Towards a Flying Assistant Paradigm: the OTHex
**Authors:** N. Staub, D. Bicego, Q. Sablé, V. Arellano-Quintana, S. Mishra
and A. Franchi
**Data contact and responsible:** CNRS, Antonio Franchi
**Dataset objective:** This dataset contains the experimental data relative to
the validation of the controller for fully-actuated (more generally, multi-
directional thrust) aerial robots. The task consists of approaching a metallic
bar which is fixed to the ground with a revolute joint on one side and lies
horizontally, then grasping it from the free side and lifting it vertically.
This task has been chosen with the goal of showing the capability of the
aerial robot to act as a flying assistant, aiding human operators and/or
ground manipulators to move long bars for assembly and maintenance tasks.
**Dataset description:** The name of the dataset has been changed with respect
to what was mentioned in D9.4, to better link the dataset to the corresponding
paper. Thus, we preferred using the title of the paper, i.e., “Towards a
Flying Assistant Paradigm: the OTHex”. This dataset contains the data related
to the experiment used to validate the control of the OTHex, a multi-
directional thrust aerial robot tailored for physical interaction tasks, in
particular for cooperative transportation and manipulation of long beams
together with human operators and/or ground manipulators. In this experiment,
the pose control for the robot has been integrated with an admittance filter
control, which modifies the reference trajectory sent to the position control
based on the information of the external wrench, computed by a model-based
wrench estimator. This has been done with the goal of preserving stability
during the interaction task.
More in detail, the following quantities have been collected:
* Aerial robot desired and actual pose (position plus orientation)
* Aerial robot estimated angle of the passive joint
* Aerial robot desired and measured angle of the bar w.r.t. the ground plane
* Aerial robot desired value for the exerted body wrench (force and torque)
* Aerial robot estimated body external wrench (force and torque)
* Aerial robot desired and estimated rotor spinning velocities
**Data description** : The time history of the above-described variables is
provided in the _mat_ format for reading with MATLAB. Additionally, one MATLAB
script is included for plotting the main variables.
The content of the dataset is shown in Table 4.
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name and format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Collection of data
</td>
<td>
othex_record-2017.09.10-
10.29_rpy.mat
</td>
<td>
Includes all the measurements related to the aerial robot and the bar to be
manipulated
</td> </tr>
<tr>
<td>
MATLAB script
</td>
<td>
check_plotter.m
</td>
<td>
Prints all the main variables mentioned above
</td> </tr> </table>
# Table 4: Content of the dataset named: “Towards a Flying Assistant
Paradigm: the OTHex”.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, CNRS will preserve a copy of
the dataset.
**Dataset size** : 54.2 MB
**Zenodo link:** _https://doi.org/10.5281/zenodo.2640502_
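
For use outside MATLAB, the _mat_ file can be inspected with SciPy (for MAT v7.3 files, h5py would be needed instead). Since the variable names inside the file are not listed in this deliverable, the sketch below simply enumerates them:

```python
# Minimal sketch: inspect the OTHex experiment file with SciPy.
# Variable names inside the .mat file are not documented here, so the
# sketch only enumerates them; check_plotter.m plots them directly.
from scipy.io import loadmat

rec = loadmat("othex_record-2017.09.10-10.29_rpy.mat")
for name, value in rec.items():
    if not name.startswith("__"):            # skip MATLAB file metadata
        print(name, getattr(value, "shape", type(value)))
```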
**Dataset name:** The Tele-MAGMaS: An Aerial-Ground Comanipulator System
**Authors:** N. Staub, M. Mohammadi, D. Bicego, Q. Delamare, H. Yang, D.
Prattichizzo, P. Robuffo Giordano, D. Lee and A. Franchi
**Data contact and responsible:** CNRS, Antonio Franchi
**Dataset objective:** This dataset contains the experimental data relative to
the validation of the controller for fully-actuated (more generally, multi-
directional thrust) aerial robots, while cooperatively manipulating a long
object together with ground manipulators. The task consists of approaching a
metallic bar which initially lies horizontally on a support structure, then
grasping it with an aerial robot, i.e., the OTHex, and a ground manipulator,
i.e., a KUKA IIWA. Finally, the two robots cooperatively manipulate the bar
with a master-slave approach, with the robotic arm on the ground acting as the
leader and the aerial robot as the follower. This task has been chosen with
the goal of showing the capability of the aerial robot to act as a flying
assistant, aiding ground manipulators to move long bars for assembly and
maintenance tasks. This task represents an upgrade w.r.t. the one developed in
another work, i.e., “Towards a Flying Assistant Paradigm: the OTHex”. In that
work, the aerial robot was supposed to lift the bar alone, while in this
experiment it has to do it together with a ground robot. Therefore, this
dataset represents an additional validation of the controller (of both the
pose and the admittance loops) for multi-directional thrust aerial platforms.
**Dataset description:** The name of the dataset has been changed with respect
to what was mentioned in D9.4, to better link the dataset to the corresponding
paper. Thus, we preferred using the title of the paper, i.e., “The Tele-
MAGMaS: An Aerial-Ground Comanipulator System”. This dataset contains the data
related to the experiment used to validate the control of the OTHex, a multi-
directional thrust aerial robot tailored for physical interaction tasks.
Furthermore, this dataset contains the data related to the control of the
ground robot, which is a KUKA IIWA industrial manipulator. In this experiment,
the pose control for the aerial robot has been integrated with an admittance
filter control, which modifies the reference trajectory sent to the position
control based on the information of the external wrench, computed by a model-
based wrench estimator. This has been done with the goal of preserving
stability during the interaction task.
More in detail, the following quantities have been collected:
* Ground manipulator joint angles and commands
* Ground manipulator measured joint torque
* Ground manipulator estimated external joint torque
* Ground manipulator estimated external Cartesian wrench
* Aerial robot desired and actual pose (position plus orientation)
* Aerial robot desired value for the exerted body wrench (force and torque)
* Aerial robot estimated body external wrench (force and torque)
* Aerial robot desired and estimated rotor spinning velocities
**Data description** : The time history of the above-described variables is
provided in the _mat_ format for reading with MATLAB. Additionally, one MATLAB
script is included for plotting the main variables.
The content of the dataset is shown in Table 5:
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name and format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Collection of data
</td>
<td>
othex_record-2017.06.23-
22.18.mat
</td>
<td>
Includes all the measurements related to the aerial robot
</td> </tr>
<tr>
<td>
Collection of data
</td>
<td>
iiwa_record-2017.06.23-22.18.mat
</td>
<td>
Includes all the measurements related to the ground robot
</td> </tr>
<tr>
<td>
MATLAB script
</td>
<td>
check_plotter.m
</td>
<td>
Prints all the main variables mentioned above
</td> </tr> </table>
# Table 5: Content of the dataset named: “The Tele-MAGMaS: An Aerial-Ground
Comanipulator System”.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, CNRS will preserve a copy of
the dataset.
**Dataset size** : 72.8 MB
**Zenodo link:** _https://zenodo.org/record/2640461#.XNlGihVYe71_
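
To clarify the role of the admittance loop mentioned in both experiments above, the sketch below shows one step of a minimal discrete admittance filter that shifts the position reference in response to the estimated external force. The gains and time step are illustrative values only, not the tuning used in the experiments:

```python
# Illustrative sketch: one step of a discrete admittance filter of the kind
# described above. It shifts the position reference x in response to the
# estimated external force f_ext according to
#   M*a + D*v + K*(x - x_des) = f_ext.
# Gains and time step are placeholder values, not the project's tuning.

M, D, K, dt = 2.0, 8.0, 20.0, 0.01   # virtual mass, damping, stiffness, 100 Hz

def admittance_step(x, v, x_des, f_ext):
    """Advance the compliant reference state (x, v) by one time step."""
    a = (f_ext - D * v - K * (x - x_des)) / M
    v = v + a * dt
    x = x + v * dt
    return x, v   # x is the modified reference fed to the pose controller
```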
**2.4.** Behavioural coordinated control
**Dataset name:** AEROARMS Behavioural coordinated control
**Authors:** E. Cataldi, D. Di Vito, G. Antonelli, P.A. Di Lillo, F. Pierri,
F. Caccavale, A. Suarez, F. Real, G. Heredia, A. Ollero
**Data contact and responsible:** CREATE, Gianluca Antonelli
**Dataset objective:** This dataset contains the code and the experimental
data for the development, testing and validation of the devised behavioural
control techniques for dual arm aerial manipulators. The dataset can be used
for performing simulations and tests, and it could be of interest for
researchers working on the behavioural and kinematic control of robots, since
it is one of the first applications to aerial manipulation.
**Dataset description:** This dataset includes a library of elementary
behaviors, namely atomic tasks to be assigned to a dual arm aerial manipulator
in a priority order. On the basis of the theory described in Deliverable D3.3,
both equality and set-based behaviors have been considered. For each
elementary task the Jacobian matrix and the task function are provided. The
code of the kinematic control, developed in C++ under the ROS environment, and
the simulation model of the aerial dual arm manipulator are provided. The
simulation model has been developed by using the commercial software V-Rep,
available with a free educational license. More in detail, the provided code
includes the following equality tasks:
* Position and orientation trajectory tracking of both end-effectors
* Center of mass: this task is aimed at ensuring that the center of mass of the dual-arm system is, as much as possible, aligned with that of the UAV, in such a way as to avoid destabilizing the flight and to reduce the power consumption

As concerns the set-based tasks, the following are included:
* Joint limits: for each joint, upper and lower limits are set in order to avoid its mechanical limits
* Virtual wall between the two arms: to avoid collisions between the two arms, a virtual wall is implemented in order to delimit their working spaces
* Virtual wall between the arms and the vehicle: to avoid collisions between the arms and the vehicle, virtual walls are implemented in order to delimit their working spaces
* Manipulability, aimed at keeping the manipulators far enough from singular configurations, at which the structure loses mobility
The provided data have been acquired during the experiment described in
Section 4 of Deliverable D3.5 and conducted at the flight arena of CATEC by
using the anthropomorphic compliant and lightweight dual arm developed by USE,
integrated in a hexarotor platform. More in detail, the following quantities
have been collected:
* Reference and actual position and orientation of the UAV
* Reference and actual joint positions of the two arms
* Desired (planned), reference (output of the Inverse Kinematics) and actual position and orientation of the two manipulators end effectors
* Position and orientation tracking errors of the two end effectors
* Time histories of the task variables for each implemented task
**Data description** : The time history of the above-described variables is
provided both in the _mat_ format for reading with MATLAB and in the _bag_
format for ROS. Moreover, the _ASCII_ format is also provided so that the data
can be used by any software. A file with the description of the formats and
standards is included in the dataset in order to facilitate sharing and
re-usability.
The ROS bag (Exp_2_1_2018-05-30-16-25-08.bag) contains all measurements and
all task variables captured and logged with their corresponding ROS time
stamp.
The content of the dataset is shown in Table 6.
<table>
<tr>
<th>
**Data**
</th>
<th>
**Format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
UAV pose
</td>
<td>
ROS topic:
/IK/Quadricopter_base/pose
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Measurement of the pose of the aerial platform in terms of position and
orientation (quaternion), provided by VICON system (position) and the IMU
(orientation)
</td> </tr>
<tr>
<td>
UAV reference
pose
</td>
<td>
ROS topic:
/IK/Quadricopter_base/pose_des
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Reference values of the aerial platform pose computed by the inverse
kinematics
</td> </tr>
<tr>
<td>
Joint positions
</td>
<td>
ROS topic: /joint_states
Format “sensor_msgs/JointState”
</td>
<td>
Joint position measurements of both the arms
</td> </tr>
<tr>
<td>
Reference joint positions
</td>
<td>
ROS topic /IK/jointCommand
Format “sensor_msgs/JointState”
</td>
<td>
Reference values of the joint position of both the arms computed by the
inverse kinematics
</td> </tr>
<tr>
<td>
End-effector pose of the right arm
</td>
<td>
ROS topic /IK/Kinematic/EE_1
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Pose of the end-effector of the right arm, computed via the direct kinematics
on the basis of the measured pose of the UAV and the measured joint positions
</td> </tr>
<tr>
<td>
Planned end-
effector pose of the right arm
</td>
<td>
ROS topic /IK/Planner/EE_1
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Planned pose of the end-effector of the right arm, computed via an off-line
planner
</td> </tr>
<tr>
<td>
Reference endeffector pose of the right arm
</td>
<td>
ROS topic /IK/Kinematic/EE_1_des
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Reference pose of the end-effector of the right arm, computed via the direct
kinematics on the basis of the reference pose of the UAV and the reference
joint positions
</td> </tr>
<tr>
<td>
End-effector pose of the left arm
</td>
<td>
ROS topic /IK/Kinematic/EE_2
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Pose of the end-effector of the left arm, computed via the direct kinematics
on the basis of the measured pose of the UAV and the measured joint positions
</td> </tr>
<tr>
<td>
Planned end-
effector pose of the left arm
</td>
<td>
ROS topic /IK/Planner/EE_2
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Planned pose of the end-effector of the left arm, computed via an offline
planner
</td> </tr>
<tr>
<td>
Reference endeffector pose of the left arm
</td>
<td>
ROS topic /IK/Kinematic/EE_2_des
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Reference pose of the end-effector of the left arm, computed via the direct
kinematics on the basis of the reference pose of the UAV and the reference
joint positions
</td> </tr>
<tr>
<td>
Task errors
</td>
<td>
ROS topic /IK/ErrorTasks
</td>
<td>
End-effectors position and orientation errors, center of mass task error,
manipulability measure, task errors of the virtual wall tasks (between the two
arms and between the arms and the UAV)
</td> </tr> </table>
# Table 6: Content of the dataset named: “AEROARMS Behavioural coordinated
control”.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, both UNIBAS and UNICAS will
preserve a copy of the dataset.
**Dataset size** : 13.9 MB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2641131_
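
The bag can also be processed programmatically. A minimal Python sketch (requires a ROS 1 environment with the rosbag package; topic and file names from Table 6 and the text above) that extracts the measured joint positions is:

```python
# Minimal sketch: extract the measured joint positions from the ROS bag
# with the rosbag Python API. Topic name taken from Table 6.
import rosbag
import numpy as np

times, positions = [], []
with rosbag.Bag("Exp_2_1_2018-05-30-16-25-08.bag") as bag:
    for _, msg, t in bag.read_messages(topics=["/joint_states"]):
        times.append(t.to_sec())
        positions.append(msg.position)   # sensor_msgs/JointState

times = np.asarray(times)
positions = np.asarray(positions)        # one row per message
print("%d samples, %d joints" % positions.shape)
```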
**2.5.** Visual servoing with actively movable camera
**Dataset name:** AEROARMS Visual servoing with actively movable camera
**Authors:** E. Cataldi, G. Antonelli, D. Di Vito, P.A. Di Lillo, F. Pierri,
F. Caccavale, A. Suarez, F. Real, G. Heredia, A. Ollero
**Data contact and responsible:** CREATE, Gianluca Antonelli
**Dataset objective:** This dataset contains the code and the experimental
data for the development, testing and validation of the task-space approach to
hand-eye coordination. The dataset can be used for performing simulations and
tests.
**Dataset description:** This dataset includes the results obtained within
WP3 related to the task-space approach to hand-eye coordination, described in
Deliverable D3.4. The use case for this approach is represented by one arm
involved in specific operations while the other arm is used for moving a
camera. The goal is to keep the end-effector of the first arm, working close
to a pipe, always in the field of view of the second one. The code developed
in C++ under the ROS environment and the simulation model of the aerial dual
arm manipulator are provided. The simulation model has been developed by using
the commercial software V-Rep, available with a free educational license. More
in detail, by resorting to the prioritized NSB approach developed in Task
T3.3, the following equality tasks have been included for the two arms:
* Trajectory tracking of the end-effector of the left arm, both in terms of position and orientation
* Field of View of the end effector of the right arm, equipped with a micro-camera
* Center of mass, aimed at ensuring that the center of mass of the dual-arm system is, as much as possible, aligned with that of the UAV
The same set-based tasks described in Section 2.4 have been included.
The provided data have been acquired during the experiment described in
Section 5.1 of Deliverable D3.5 and conducted at the flight arena of CATEC by
using the anthropomorphic compliant and lightweight dual arm developed by USE,
integrated in a hexarotor platform. More in detail, the following quantities
have been collected:
* Reference and actual position and orientation of the UAV
* Reference and actual joint positions of the two arms
* Desired (planned), reference (output of the Inverse Kinematics) and actual position and orientation of the end effector of the left manipulator
* Position and orientation tracking error of the end effector of the left manipulator
* Time histories of the task variables for each implemented task, including the Field of View
**Data description** : The time history of the above-described variables is
provided both in the _mat_ format for reading with MATLAB and in the _bag_
format for ROS. Moreover, the _ASCII_ format is also provided so that the data
can be used by any software. A file with the description of the formats and
standards is included in the dataset in order to facilitate sharing and
re-usability.
The ROS bag (Exp_2_1_2018-05-30-14-19-21.bag) contains all measurements and
all task variables captured and logged with their corresponding ROS time
stamp.
The content of the dataset is shown in Table 7.
<table>
<tr>
<th>
**Data**
</th>
<th>
**Format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
UAV pose
</td>
<td>
ROS topic:
/IK/Quadricopter_base/pose
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Measurement of the pose of the aerial platform in terms of position and
orientation (quaternion), provided by VICON system (position) and the IMU
(orientation)
</td> </tr>
<tr>
<td>
UAV reference
pose
</td>
<td>
ROS topic:
/IK/Quadricopter_base/pose_des
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Reference values of the aerial platform pose output by the inverse kinematics
</td> </tr>
<tr>
<td>
Joint positions
</td>
<td>
ROS topic: /joint_states
Format “sensor_msgs/JointState”
</td>
<td>
Joint position measurements of both the arms
</td> </tr>
<tr>
<td>
Reference joint positions
</td>
<td>
ROS topic /IK/jointCommand
Format “sensor_msgs/JointState”
</td>
<td>
Reference values of the joint position of both the arms output by the inverse
kinematics
</td> </tr>
<tr>
<td>
End-effector pose of the right arm
</td>
<td>
ROS topic /IK/Kinematic/EE_1
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Pose of the end-effector of the right arm, computed via the direct kinematics
on the basis of the measured pose of the UAV and the measured joint positions
</td> </tr>
<tr>
<td>
End-effector pose of the left arm
</td>
<td>
ROS topic /IK/Kinematic/EE_2
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Pose of the end-effector of the left arm, computed via the direct kinematics
on the basis of the measured pose of the UAV and the measured joint positions
</td> </tr>
<tr>
<td>
Planned end-
effector pose of the left arm
</td>
<td>
ROS topic /IK/Planner/EE_2
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Planned pose of the end-effector of the left arm, computed via an offline
planner
</td> </tr>
<tr>
<td>
Reference endeffector pose of the left arm
</td>
<td>
ROS topic /IK/Kinematic/EE_2_des
Format
“geometry_msgs/PoseStamped”
</td>
<td>
Reference pose of the end-effector of the left arm, computed via the direct
kinematics on the basis of the reference pose of the UAV and the reference
joint positions
</td> </tr>
<tr>
<td>
Task errors
</td>
<td>
ROS topic /IK/ErrorTasks
</td>
<td>
Left end-effectors position and orientation errors, Field of View task error,
center of mass task error, manipulability measure, task variables of the
virtual wall task between the two arms and between the arms and the UAV
</td> </tr> </table>
# Table 7: Content of the dataset named: “AEROARMS Visual servoing with
actively movable camera”.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, both UNIBAS and UNICAS will
preserve a copy of the dataset.
**Dataset size** : 15.2 MB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2641129_
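
Since the bag stores both planned and measured end-effector poses, the position tracking error can be recomputed offline. A minimal sketch follows, assuming the two topics (e.g. /IK/Planner/EE_2 and /IK/Kinematic/EE_2) have already been extracted to NumPy arrays, for instance with the rosbag snippet shown in Section 2.4:

```python
# Illustrative sketch: recompute the left end-effector position tracking
# error by resampling the planned trajectory onto the measurement times.
import numpy as np

def position_tracking_error(t_plan, p_plan, t_meas, p_meas):
    """Euclidean position error at each measurement instant.

    t_*: (N,) times in seconds; p_*: (N, 3) positions in metres.
    """
    p_ref = np.column_stack(
        [np.interp(t_meas, t_plan, p_plan[:, k]) for k in range(3)]
    )
    return np.linalg.norm(p_meas - p_ref, axis=1)
```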
3. **Aerial tele-manipulation in inspection and maintenance**
This section is devoted to data that have been collected during the activities
performed in WP4, which addresses aerial telemanipulation in inspection and
maintenance. These datasets are grouped depending on the tasks in WP4 they are
involved in.
**3.1.** Aerial telemanipulation system
**Dataset name:** Aerial Telemanipulation System
**Authors:** R. Balachandran, M. De Stefano
**Data contact and responsible:** DLR, R. Balachandran
**Dataset objective:** This dataset contains the experimental data of the
development, testing and validation of the bilateral controller for aerial
telemanipulation. The dataset can be used for performing simulations and
tests.
**Dataset description:** The dataset was collected during the experiments
performed in WP4. The bilateral controller was used for the Cartesian-space
telemanipulation of the manipulator (slave) attached to the helicopter base
(simulated by the Aerial Telemanipulation Simulator). The slave device is
teleoperated using a lightweight-robot-based haptic device (master) with force
feedback.
**Data description** : The data uses the standard _.txt_ format, where all the
states of the master, the slave and the helicopter base are logged row-wise,
using the MATLAB double datatype for all the states. Additionally, a _.mat_
file (standard MATLAB file) has been made available for direct use in MATLAB.
A readme file with the description of the acquired data has been added to the
dataset folder in order to facilitate sharing and re-usability.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, DLR will preserve a copy of the
dataset.
**Dataset size** : 22.6 MB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2639690_
**3.2.** Local planning for constrained aerial telemanipulation
**Dataset name:** Cooperative Aerial Tele-Manipulation with Haptic Feedback
**Authors:** M. Mohammadi, A. Franchi, D. Barcelli and D. Prattichizzo
**Data contact and responsible:** CNRS, Antonio Franchi
**Dataset objective:** This dataset contains the experimental data relative to
the validation of the bilateral teleoperation scheme for cooperative aerial
manipulation in which a human operator drives a team of Vertical Take-Off and
Landing (VTOL) aerial vehicles. While the robots grasp and manipulate an
object, the human operator should receive force feedback depending on the
state of the system. This task has been chosen with the goal of showing the
capability of the framework to produce local planning for all the aerial
robots that solves the task of moving the load subject to the system
constraints.
**Dataset description:** The name of the dataset has been changed with respect
to what was mentioned in D9.4, to better link the dataset to the corresponding
paper. Thus, we preferred using the title of the paper, i.e., “Cooperative
Aerial Tele-Manipulation with Haptic Feedback”. This dataset contains the data
related to the experiment used to validate the local planning for constrained
aerial telemanipulation. In particular, the experiment refers to the case in
which a single quadrotor aerial vehicle is commanded to push an object by
means of a passive tool. The most relevant data associated with this dataset
are the force commands given by the human operator, the forces allocated by
the force allocator in order to keep the contact and satisfy the system
constraints, and the measured contact forces. Furthermore, the desired and
measured robot positions have been recorded.
**Data description** : The time history of the above-described variables is
provided in the _mat_ format for reading with MATLAB. Additionally, one MATLAB
script is included for plotting the main variables.
More in detail, the following quantities have been collected:
* Aerial robot desired and actual position
* Aerial robot commanded, feasible and measured forces

The content of the dataset is shown in Table 8.
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name and format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Collection of data
</td>
<td>
dataSet11.mat
</td>
<td>
Includes position and force measurements / estimations related to the aerial
robot
</td> </tr>
<tr>
<td>
MATLAB script
</td>
<td>
check_plotter.m
</td>
<td>
Prints all the main variables mentioned above
</td> </tr> </table>
# Table 8: Content of the dataset named: “Cooperative Aerial Tele-
Manipulation with Haptic Feedback”.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, CNRS will preserve a copy of
the dataset.
**Dataset size** : 3.2 MB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2640409_
4. **Perception for robotic manipulation in aerial operations**
This section is devoted to data that have been collected during the activities
performed in WP5, which addresses perception for robotic manipulation in
aerial operations. These datasets have been collected in preliminary tests in
the laboratory, indoor settings or outdoor experiments. These datasets are
grouped depending on the tasks in WP5 they are involved in.
**4.1.** Adaptive perception for robot operation
Several datasets for the detection and accurate localization of the crawler,
adopting different approaches and conditions, have been published in AEROARMS:
* Dataset “Crawler Direct Detection Image Dataset” contains data for the development of vision-based techniques for the direct detection of the crawler, i.e. the detection of the crawler itself.
* Dataset “Image Dataset for the Crawler Indirect Detection through its Cage” contains data for the development of vision-based techniques for the indirect detection of the crawler, i.e. the detection of the crawler through the crawler's cage.
* Dataset “Crawler RGB-D dataset for accurate localization” contains data to train and test algorithms in which the aerial robot gives support to the crawler by computing its relative position and orientation in different environments and lighting conditions.
In the first Data Management Plan in Deliverable D9.4, this was structured in
two datasets: "Adaptive vision for accurate grabbing" and "Perception for the
support of the aerial and ground robot operation". In the Final Data
Management Plan we provide three related datasets and prefer to present them
in the same section of this deliverable. The descriptions of these datasets
are in the following:
**Dataset name:** “Crawler Direct Detection Image Dataset”
**Authors:** Albert Pumarola; Juan Andrade; Alberto Sanfeliu
**Data contact and responsible:** UPC, Albert Pumarola
**Dataset objective:** This dataset contains the necessary data for the
development of perception tools for the direct detection of the crawler:
detection of the crawler itself. The dataset contains images and ground-truth
position of the crawler used in the AEROARMS project experiments.
**Dataset description:** The dataset is subdivided into positive (images
containing the crawler) and negative (images NOT containing the crawler)
samples. For the positive samples, the image coordinates (uv pixels) of the
crawler centroid are also included.
**Data description:** The dataset uses the standard _JPG_ format for images,
as well as _PKL_ and _XML_ for the image coordinates (uv pixels) of the
crawler centroid.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, UPC will preserve a copy of the
dataset.
**Dataset size** : 2.6 GB
**Zenodo link:** _https://doi.org/10.5281/zenodo.2636697_
**Dataset name:** “Image Dataset for the Crawler Indirect Detection through its Cage”
**Authors:** Javier Laplaza; Albert Pumarola; Juan Andrade; Alberto Sanfeliu
**Data contact and responsible:** UPC, Javier Laplaza
**Dataset objective:** In many complex computer vision problems it is more
convenient to detect objects indirectly rather than directly. This dataset
contains data for the development of perception tools to detect the crawler's
cage as a way of detecting the crawler itself. The dataset contains images and
the ground-truth position of the crawler’s cage used in the AEROARMS project
experiments.
**Dataset description:** The dataset is subdivided into images containing and
NOT containing the crawler’s cage. For the positive samples, the image
coordinates (uv pixels) of the cage handle are also included.
**Data description:** The dataset uses the standard _JPG_ format for images,
as well as _PKL_ and _XML_ for the image coordinates (uv pixels) of the cage
handle.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, UPC will preserve a copy of the
dataset.
**Dataset size** : 1.5 GB
**Zenodo link:** _https://doi.org/10.5281/zenodo.2636666_
**Dataset name:** “Crawler RGB-D dataset for accurate localization”
**Authors:** P. Ramon-Soria, B.C. Arrue
**Data contact and responsible:** USE, Pablo Ramón Soria
**Dataset objective:** The objective of this dataset is to provide data to
train and test algorithms in which the aerial robot gives support to the
crawler by computing its relative position and orientation in different
environments and lighting conditions.
**Data description:** The dataset contains five folders corresponding to
different indoor and outdoor environments and lighting conditions. Each folder
contains the RGB-D images obtained from an Intel RealSense D435 camera. A 3D
point cloud model of the crawler is also provided. Additionally, the
calibration file is provided in _XML_ format.
The content of the dataset is shown in Table 9.
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name and format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
PLY file
</td>
<td>
Crawler_model.ply
</td>
<td>
3D point cloud of the crawler
</td> </tr>
<tr>
<td>
Text file
</td>
<td>
CalibrationFile.XML
</td>
<td>
Text file containing the calibration of the camera
</td> </tr>
<tr>
<td>
PNG Image file
</td>
<td>
[1,2,3,4] _[workshop,engine, machine,grass]/left_%d.png
</td>
<td>
PNG image containing the color image from the camera
</td> </tr>
<tr>
<td>
PNG Image file
</td>
<td>
[1,2,3,4]_[workshop,engine, machine,grass]/depth_%d.png
</td>
<td>
PNG image containing the depth information from the camera. Coded in unsigned
int of 16bits
</td> </tr> </table>
# Table 9: Content of the dataset named: “Crawler RGB-D dataset for accurate localization”.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, USE will preserve a copy of the
dataset.
**Dataset size** : 76.3 MB
**Zenodo link** : _https://doi.org/10.5281/zenodo.3066232_
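
A minimal Python sketch for loading one color/depth pair with OpenCV follows. The folder and frame index are chosen for illustration from the naming pattern in Table 9, and the depth scale (millimetres per integer step, typical of RealSense exports) is an assumption to be verified against the dataset:

```python
# Minimal sketch: load one color/depth pair. The depth PNG is 16-bit as
# stated in Table 9; the mm-per-step scale is an assumption to verify.
import cv2
import numpy as np

color = cv2.imread("1_workshop/left_0.png", cv2.IMREAD_COLOR)
depth_raw = cv2.imread("1_workshop/depth_0.png", cv2.IMREAD_UNCHANGED)

assert depth_raw.dtype == np.uint16      # depth coded as unsigned 16-bit ints
depth_m = depth_raw.astype(np.float32) / 1000.0   # assumed mm -> m conversion
print(color.shape, float(depth_m.min()), float(depth_m.max()))
```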
**4.2.** Multi-sensor mapping and localization
**Dataset name:** Multi-sensor mapping and localization
**Authors:** J.R. Martínez-de Dios, M. Polvillo, J.L. Paneque, V. Vega, A.
Ollero
**Data contact and responsible:** USE, J. Ramiro Martinez-de Dios
**Dataset objective:** This dataset contains the necessary data for the
development, testing and validation of 3D mapping and 6DOF localization
techniques for aerial robots in GNSS-denied environments. These datasets can
be used for the configuration and setting of the methods as well as for
performing simulations and tests.
**Dataset description:** This dataset contains measurements from two multi-
sensor 6DOF aerial robot localization and mapping experiments performed in the
Karting AEROARMS outdoor scenario in September 2018. The first experiment took
3 minutes and 56 seconds.
The second, 3 minutes and 31 seconds. Both datasets contain the RTK GPS robot
localization as ground truth.
**Data description** : The dataset uses standard formats and metadata typical
of ROS and aerial robotics to represent sensor data, robot position and
orientation, among others. A file with the description of the formats and
standards is included in the dataset in order to facilitate sharing and re-
usability.
This dataset contains two ROS bags (aeroarms_us_2018-09-06-13-33-56 and
aeroarms_us_2018-09-06-15-04-39), one for each outdoor experiment. All
measurements were captured and logged with their corresponding ROS time stamp.
The content of each ROS bag is shown in Table 10.
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name and format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
UWB measurements
</td>
<td>
ROS topic: /ranges_uwb Format “range_msgs/P2Prange” which contains:
* ID of the UWB receiver
* ID of the UWB transmitter
(anchor in the scenario)
* range measurements in m
* timestamp
* frame ID
</td>
<td>
Ultra-Wide Band (UWB) range measurements obtained by the UWB receiver on the
aerial robot from UWB tags located at the scenario. The rate was of 20 Hz. The
location of the UWB tags can be found in a document within the dataset
</td> </tr>
<tr>
<td>
RTK GPS
</td>
<td>
ROS topic: /fix
Format "sensor_msgs/NavSatFix"
</td>
<td>
RTK GPS measurements provided by a Novatel FlexPak6D obtained in "narrow
float" accuracy level during the flight at a rate of 100 Hz
</td> </tr>
<tr>
<td>
RTK GPS Local
Localization
</td>
<td>
ROS topic:
/odometry_ground_truth
Format "nav_msgs/Odometry"
</td>
<td>
RTK GPS localization in the robot frame during the flight at a rate of 10 Hz
</td> </tr>
<tr>
<td>
Laser altimeter
</td>
<td>
ROS topic:
/use_robot/lidarlite_range
Format "std_msgs/Float32"
</td>
<td>
Altitude measurements provided by a LIDARLite v3 sensor pointing downwards at
a rate of 25 Hz
</td> </tr>
<tr>
<td>
IMU data
</td>
<td>
ROS topic: /imu/data
Format "sensor_msgs/Imu"
</td>
<td>
IMU measurements provided by a Mti-G IMU sensor at a rate of 100 Hz
</td> </tr>
<tr>
<td>
Velodyne data
</td>
<td>
ROS topic: /velodyne_points
Format
"sensor_msgs/PointCloud2"
</td>
<td>
Scans provided by a Velodyne HDL-32 lidar during the flight at a rate of 10 Hz
</td> </tr>
<tr>
<td>
Onboard camera
</td>
<td>
ROS topic:
/camera/left/image_rect_color
Format "sensor_msgs/Image"
</td>
<td>
Images from a visual camera during the flight at a rate of 10 Hz
</td> </tr>
<tr>
<td>
Camera internal calibration
</td>
<td>
ROS topic:
/camera/left/camera_info
Format
"sensor_msgs/CameraInfo"
</td>
<td>
Internal calibration of the camera
</td> </tr>
<tr>
<td>
External calibration transformations
</td>
<td>
ROS topic: /tf
Format "tf2_msgs/TFMessage"
</td>
<td>
Position of each sensor with respect to the robot frame. It is detailed by a
transformation tree with the relationships between the robot coordinate frame
(base_link) and:
* uwb_frame (UWB coordinate frame)
* gps_frame (GPS coordinate frame)
* lidarlite_link (altimeter coordinate frame)
* imu (IMU sensor coordinate frame)
* velodyne (velodyne coordinate frame)
* camera_frame (visual sensor coordinate frame)
* camera_optical_frame (optical coordinate frame of the visual sensor)
</td> </tr> </table>
# Table 10: Content of the dataset named: “Multi-sensor mapping and
localization”.
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in ZENODO and linked
with OpenAIRE. The ZENODO link has been made available at a Dataset section in
the AEROARMS website. Besides, for redundancy, USE will preserve a copy of the
dataset.
**Dataset size** : 6.3 GB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2648635_
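
To illustrate how the UWB ranges in this dataset can be exploited, the sketch below recovers a position from ranges to known anchors with a standard linear least-squares multilateration. This is an illustrative technique, not the project's localization algorithm, and the anchor coordinates are placeholders (the real ones are documented inside the dataset):

```python
# Illustrative sketch: estimate a 3D position from UWB ranges to known
# anchors by linearizing the range equations (standard least-squares
# multilateration). Anchor coordinates below are placeholders.
import numpy as np

anchors = np.array([[0.0, 0.0, 1.5],
                    [10.0, 0.0, 1.5],
                    [10.0, 8.0, 2.0],
                    [0.0, 8.0, 2.0]])   # hypothetical anchor positions [m]

def multilaterate(ranges):
    """Least-squares position p from ranges r_i = ||p - a_i||.

    Subtracting the first range equation from the others cancels the
    quadratic term ||p||^2 and leaves the linear system A p = b.
    """
    ranges = np.asarray(ranges, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```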
**4.3.** Robust perception fusion from deployed sensors
**Dataset name:** Robust perception fusion from deployed sensors.
**Authors:** J.R. Martínez-de Dios, M. Polvillo, J.L. Paneque, A. Sanfeliu, J.
Andrade-Cetto,
A. Santamaría, M. Oetiker, E. Zwicker, V. Vega, A. Ollero
**Data contact and responsible:** USE, J. Ramiro Martinez-de Dios
**Dataset objective:** This dataset contains the necessary data for the
development of perception tools to fuse the information from the sensors on a
crawler robot and the sensors on an aerial robot. The objective is to improve
the perception required for the inspection and maintenance operations. The
datasets will be used for configuration and setting of the methods and
algorithms as well as for performing simulations and tests.
**Dataset description:** This dataset contains measurements from two
collaborative aerial robot-crawler experiments performed in the Karting
AEROARMS outdoor scenario in September 2018. The aerial robot used was an
octorotor platform developed by USE, equipped with RTK GPS, a Velodyne
32-HDL, a laser altimeter, a Zed stereo camera and an IMU. The crawler,
developed by AIR, computed its odometry. The first experiment took 2 minutes
and 47 seconds; the second, 3 minutes and 31 seconds. Both datasets contain
the RTK GPS robot localization as ground truth.
**Data description** : The dataset uses standard formats and metadata typical
of ROS and aerial robotics to represent sensor measurements and configuration
and robot position and orientation, among others. A file with the description
of the formats and standards is included in the dataset in order to facilitate
sharing and re-usability.
This dataset contains two ROS bags (aeroarms_2018-09-06-13-34-17 and
aeroarms_2018-09-06-15-04-40), one for each outdoor experiment. All
measurements were captured and logged with their corresponding ROS time stamp.
The content of each ROS bag is shown in Table 11.
<table>
<tr>
<th>
**Data**
</th>
<th>
**Format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
RTK GPS
</td>
<td>
ROS topic: /fix
Format "sensor_msgs/NavSatFix"
</td>
<td>
RTK GPS measurements provided by a Novatel FlexPak6D onboard the aerial robot
obtained in "narrow float" accuracy level during the flight at a rate of 100
Hz
</td> </tr>
<tr>
<td>
RTK GPS Local
Localization
</td>
<td>
ROS topic:
/odometry_ground_truth
Format "nav_msgs/Odometry"
</td>
<td>
RTK GPS localization of the aerial robot in the robot frame during the flight
at a rate of 10 Hz
</td> </tr>
<tr>
<td>
Laser altimeter
</td>
<td>
ROS topic:
/use_robot/lidarlite_range
Format "std_msgs/Float32"
</td>
<td>
Altitude measurements provided by a LIDAR-Lite v3 sensor onboard the aerial
robot pointing downwards at a rate of 25 Hz
</td> </tr>
<tr>
<td>
IMU data
</td>
<td>
ROS topic: /imu/data
Format "sensor_msgs/Imu"
</td>
<td>
Aerial robot IMU measurements provided by an MTi-G IMU sensor at a rate of 100 Hz
</td> </tr>
<tr>
<td>
Velodyne data
</td>
<td>
ROS topic: /velodyne_points
Format "sensor_msgs/PointCloud2"
</td>
<td>
Scans provided by a Velodyne HDL-32 lidar onboard the aerial robot during the
flight at a rate of 10 Hz
</td> </tr>
<tr>
<td>
Image1
</td>
<td>
ROS topic: /iri/image_raw
Format "sensor_msgs/Image"
</td>
<td>
Images of a forward pointing monocular camera onboard the aerial robot at a
rate of 10 Hz
</td> </tr>
<tr>
<td>
Image2
</td>
<td>
ROS topic:
/iri/mvbluefox3_camera/cam1/image_raw
Format "sensor_msgs/Image"
</td>
<td>
Images of a monocular camera onboard the aerial robot pointing downwards at a
rate of 40 Hz
</td> </tr>
<tr>
<td>
Camera1 internal calibration
</td>
<td>
ROS topic: /iri/camera_info
Format "sensor_msgs/CameraInfo"
</td>
<td>
Internal calibration of the forward pointing monocular camera onboard the aerial robot
</td> </tr>
<tr>
<td>
Camera2 internal calibration
</td>
<td>
ROS topic:
/iri/mvbluefox3_camera/cam1/camera_info
Format "sensor_msgs/CameraInfo"
</td>
<td>
Internal calibration of the downwards pointing monocular camera onboard the aerial robot
</td> </tr>
<tr>
<td>
Crawler odometry
</td>
<td>
ROS topic:
/crawler/odom_feedback
Format "nav_msgs/Odometry"
</td>
<td>
Position estimation of the crawler
</td> </tr>
<tr>
<td>
**Data**
</td>
<td>
**Format**
</td>
<td>
**Description**
</td> </tr>
<tr>
<td>
External calibration transformations
</td>
<td>
ROS topic: /tf
Format "tf2_msgs/TFMessage"
</td>
<td>
Position of each sensor with respect to the robot frame at a rate of 100 Hz.
It is detailed by a transformation tree with the relationships between the
robot coordinate frame (base_link) and:
* gps_frame (GPS coordinate frame)
* lidarlite_link (altimeter coordinate frame)
* imu (IMU sensor coordinate frame)
* velodyne (velodyne coordinate frame)
* iri_uvc_camera_base (forward pointing monocular camera
coordinate frame)
* iri_uvc_camera_optical (optical frame of the forward pointing monocular camera)
* iri_mvbluefox_base (coordinate frame of the downwards pointing camera)
* iri_mvbluefox_optical (optical coordinate frame of the downwards pointing camera)
</td> </tr> </table>
# Table 11: Content of the dataset named: “Robust perception fusion from
deployed sensors".
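As a quick illustration of how these bags can be accessed, below is a minimal Python sketch using the standard ROS `rosbag` API. The `.bag` extension appended to the listed bag name is an assumption; the topic and message fields are those documented in Table 11.

```python
import rosbag

# Open the first outdoor experiment bag (name from the data description above).
bag = rosbag.Bag('aeroarms_2018-09-06-13-34-17.bag')
# Iterate over the RTK GPS fixes (topic /fix, type sensor_msgs/NavSatFix).
for topic, msg, t in bag.read_messages(topics=['/fix']):
    print(t.to_sec(), msg.latitude, msg.longitude, msg.altitude)
bag.close()
```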
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in Zenodo and linked with OpenAIRE. The Zenodo link has been made available in the Dataset section of the AEROARMS website. In addition, for redundancy, USE will preserve a copy of the dataset.
**Dataset size** : 7.5 GB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2649246_
5. **Planning for aerial manipulation in inspection and maintenance**
This section is devoted to data that have been collected during the activities performed in WP6. These datasets have been collected in preliminary tests in the laboratory and in real experiments, and are grouped depending on the WP6 tasks they are involved in.
**5.1.** Planning for aerial manipulation
**Dataset name:** A Truly Redundant Aerial Manipulator System with Application to Push-and-Slide Inspection in Industrial Plants
**Authors:** M. Tognon, H. Tello Chavez, E. Gasparin, Q. Sablé, D. Bicego, A.
Mallet, M. Lany,
G. Santi, B. Revaz, J. Cortés and A. Franchi
**Data contact and responsible:** CNRS, Antonio Franchi
**Dataset objective:** This dataset contains the experimental data related to the validation of the control-aware motion planner for task-constrained motions. The task consists of inspecting a real metallic pipe with an Eddy Current sensor, performing a raster scan. The planner is used to generate a collision-free robot trajectory that fulfills the task requirements (sensor in contact with and perpendicular to the surface) and the robot constraints related to its dynamics and inputs.
**Dataset description:** The name of the dataset has been changed with respect to the one mentioned in Deliverable D9.4 in order to better link the dataset to the corresponding paper; we therefore use the title of the paper. The dataset "A Truly-Redundant Aerial Manipulator System with Application to Push-and-Slide Inspection in Industrial Plants" contains the data from the full experiment integrating control, motion planning and Eddy Current sensing. This experiment is an example of contact-based inspection where the end-effector, equipped with an Eddy Current probe, needs to scan a pipe, sliding the sensor on its surface in order to localize a weld. Since the system integrates control, motion planning and sensing, the dataset contains data related not only to motion planning but also to control and sensing. Regarding the control, one can check its performance in terms of tracking error and task fulfillment: the sensor remains in contact with and perpendicular to the surface. Regarding the motion planner, the dataset contains the desired end-effector and state trajectories computed with our proposed 'control-aware motion planner'. The computed trajectory makes it possible to execute a raster scan that respects not only the task constraints but also those related to the dynamics and inputs of the system. Finally, the dataset contains the raw and post-processed measurements coming from the sensor. Based on those, one can conclude that a weld can be effectively located and that the contact-based inspection under examination is feasible with such an aerial manipulator.
In more detail, the following quantities have been collected:
* End-effector desired and actual pose (position plus orientation)
* Aerial platform desired and actual pose (position plus orientation)
* Joint desired and actual angles
* Raw and post-processed measurements of the EC sensor
**Data description:** The time history of the variables described above is provided in the _mat_ format for reading with MATLAB. Additionally, four MATLAB scripts are included for plotting the main variables. A readme file is also present with instructions for plotting.
The content of the dataset is shown in Table 12.
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name and format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Collection of data
</td>
<td>
18-09-01_12-14.mat
</td>
<td>
Includes the raw and post-processed data coming from the EC sensor
</td> </tr>
<tr>
<td>
Collection of data
</td>
<td>
MATLAB.mat
</td>
<td>
Includes the end-effector and state trajectories computed by the planner, and the actual end-effector and state trajectories
</td> </tr>
<tr>
<td>
MATLAB script
</td>
<td>
print_main_variables.m
</td>
<td>
Prints all the main variables: 1) lift-off, 2) weld signal, 3) end-effector position error, 4) end-effector attitude error, 5) position of the aerial platform, 6) attitude of the aerial platform, 7) joint one, 8) joint two
</td> </tr>
<tr>
<td>
MATLAB script
</td>
<td>
print_raw_signals.m
</td>
<td>
Prints the raw data coming from the EC sensor
</td> </tr>
<tr>
<td>
MATLAB script
</td>
<td>
print_traj_3d.m
</td>
<td>
Prints a 3D image of: the pipe; the desired and real trajectories of the end-effector, highlighting in blue and red the parts in contact with the pipe or not, respectively; the parts in which the sensor detects the presence of a weld; the estimated mapping of the weld on the pipe; and the real map of the weld on the pipe
</td> </tr> </table>
# Table 12: Content of the dataset named: “A Truly Redundant Aerial Manipulator System with Application to Push-and-Slide Inspection in Industrial Plants”.
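For users without MATLAB, the _mat_ files can also be read from Python. The sketch below assumes only the file names listed in Table 12; the variable names stored inside the files are not documented here, so they are inspected rather than assumed.

```python
from scipy.io import loadmat

# Load the planner and trajectory data (file name from Table 12).
data = loadmat('MATLAB.mat')
# List the stored variables, skipping MATLAB's internal header entries.
print([k for k in data if not k.startswith('__')])
```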
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in Zenodo and linked with OpenAIRE. The Zenodo link has been made available in the Dataset section of the AEROARMS website. In addition, for redundancy, CNRS will preserve a copy of the dataset.
**Dataset size** : 14.2 GB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2640361_
**5.2.** Control-based local optimization methods for planning
**Dataset name:** Control-based local optimization methods for planning
**Authors:** E. Cataldi, G. Antonelli, D. Di Vito, P.A. Di Lillo
**Data contact and responsible:** CREATE, Gianluca Antonelli
**Dataset objective:** This dataset contains the code and the experimental data for the development, testing and validation of a planner designed to obtain an agile system able to perform operations inside a dense industrial installation, taking into consideration several obstacles inside the workspace and online (re-)planning. In particular, the proposed approach is based on merging control-based local optimization methods into the planning algorithms (Task T6.1).
**Dataset description:** The dataset includes all data collected from the experiments performed on a mockup consisting of a fixed-base 7-DOF manipulator in two different scenarios. In the first case, a static environment has been considered. In the second one, the user places an obstacle on the manipulator’s path in real time; thus, re-planning is necessary to manage this change in the environment.
**Data description** : The time histories of the variables involved in the planner are provided in ASCII format, so that they can be used by any kind of software. The code of the planning algorithm, developed in C++ under the ROS environment, is provided as well.
The content of the dataset is shown in Table 13:
<table>
<tr>
<th>
**Data**
</th>
<th>
**Format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
dist_dynamic_obstacle.txt
</td>
<td>
ASCII format
</td>
<td>
end-effector, wrist and elbow distances from the obstacle introduced in real-
time in the workspace
</td> </tr>
<tr>
<td>
distance_from_joint2.txt
</td>
<td>
ASCII format
</td>
<td>
end-effector and wrist distances from 3D-Space joint2 position
</td> </tr>
<tr>
<td>
distance_from_joint3.txt
</td>
<td>
ASCII format
</td>
<td>
end-effector and wrist distances from 3D-Space joint3 position
</td> </tr>
<tr>
<td>
distance_from_obstacles.txt
</td>
<td>
ASCII format
</td>
<td>
end-effector, wrist and elbow distances from the three obstacles present in
the workspace
</td> </tr>
<tr>
<td>
joint_ik.txt
</td>
<td>
ASCII format
</td>
<td>
joint positions
</td> </tr>
<tr>
<td>
joint_velocity.txt
</td>
<td>
ASCII format
</td>
<td>
joint velocities
</td> </tr>
<tr>
<td>
jointLimit2.txt
</td>
<td>
ASCII format
</td>
<td>
second joint position limit
</td> </tr>
<tr>
<td>
jointLimit4.txt
</td>
<td>
ASCII format
</td>
<td>
fourth joint position limit
</td> </tr>
<tr>
<td>
jointLimit6.txt
</td>
<td>
ASCII format
</td>
<td>
sixth joint position limit
</td> </tr>
<tr>
<td>
virtualWallZ.txt
</td>
<td>
ASCII format
</td>
<td>
distance from the horizontal plane set at the base frame of the manipulator
</td> </tr>
<tr>
<td>
ee_velocity.txt
</td>
<td>
ASCII format
</td>
<td>
end-effector linear and angular velocities
</td> </tr> </table>
# Table 13: Content of the dataset named: “Control-based local optimization
methods for planning”.
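Since the time histories are plain ASCII, they can be loaded with virtually any tool. The following is a minimal sketch using NumPy; the whitespace-separated numeric layout is an assumption, as the DMP only states that the files are in ASCII format.

```python
import numpy as np

# Load the joint position time history (file name from Table 13).
# The delimiter and column ordering are assumptions: the DMP only says ASCII.
joints = np.loadtxt('joint_ik.txt')
print(joints.shape)
```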
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in Zenodo and linked with OpenAIRE. The Zenodo link has been made available in the Dataset section of the AEROARMS website. In addition, for redundancy, UNICAS will preserve a copy of the dataset.
**Dataset size** : 31.1 MB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2641158_
**5.3.** Reactivity for safe operation
**Dataset name:** Reactivity for safe operation.
**Authors:** A. Caballero, F. Real, A. Suárez, V. Vega, M. Béjar, A.
Rodríguez-Castaño, A. Ollero.
**Data contact and responsible:** USE, Álvaro Caballero.
**Dataset objective:** This dataset contains the necessary data for the development, testing and validation of reactive techniques for obstacle avoidance of aerial manipulators in industrial environments. This dataset can be used as a benchmark for other methods and algorithms, as well as for performing simulations and tests.
**Dataset description:** The provided dataset includes the planning results of the different local replanning algorithms described in Section 5.3 of Deliverable D6.2. This dataset is the result of applying such algorithms to the application scenarios presented in Sections 7.3 and 8.2 of Deliverable D6.2, for the Aerial Robotic System for Long-Reach Manipulation described in Section 2.2 of the same deliverable.
**Data description** : The data have been classified into a set of subfolders organized hierarchically according to the algorithm used, the application scenario and the origin of the data, i.e., simulation or real experiments. Each subfolder contains two MATLAB files (MotionPlan.mat and Execution.mat) with the main information on both the computed motion plan and its execution by the aerial manipulator.
The content of these files is explained in Table 14. Additionally, a README
file with a more detailed description has been included in the dataset.
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Motion Plan
</td>
<td>
MotionPlan.InitialState
</td>
<td>
Initial state
</td> </tr>
<tr>
<td>
MotionPlan.GoalState
</td>
<td>
Goal state
</td> </tr>
<tr>
<td>
MotionPlan.NumberOfIterations
</td>
<td>
Number of iterations to compute the plan
</td> </tr>
<tr>
<td>
MotionPlan.NumberOfNodes
</td>
<td>
Number of nodes within the plan
</td> </tr>
<tr>
<td>
MotionPlan.Nodes
</td>
<td>
State associated with each node
</td> </tr>
<tr>
<td>
MotionPlan.Time
</td>
<td>
Timestamp associated with each node
</td> </tr>
<tr>
<td>
MotionPlan.Cost
</td>
<td>
Cost associated with each node
</td> </tr>
<tr>
<td>
MotionPlan.Parent
</td>
<td>
Parent associated with each node
</td> </tr>
<tr>
<td>
MotionPlan.OptimalTrajectory
</td>
<td>
Nodes in the optimal trajectory
</td> </tr>
<tr>
<td>
Execution of the motion plan
</td>
<td>
t
</td>
<td>
= Execution(1,:)
</td>
<td>
Timestamp
</td> </tr>
<tr>
<td>
q1_ref
</td>
<td>
= Execution(2,:)
</td>
<td>
Reference for the longitudinal position of the aerial platform
</td> </tr>
<tr>
<td>
q3_ref
</td>
<td>
= Execution(3,:)
</td>
<td>
Reference for the vertical position of the aerial platform
</td> </tr>
<tr>
<td>
q7R_ref
</td>
<td>
= Execution(4,:)
</td>
<td>
Reference for the angular position of the right upper link of the dual arm
</td> </tr>
<tr>
<td>
q8R_ref
</td>
<td>
= Execution(5,:)
</td>
<td>
Reference for the angular position of the right lower link of the dual arm
</td> </tr>
<tr>
<td>
q7L_ref
</td>
<td>
= Execution(6,:)
</td>
<td>
Reference for the angular position of the left upper link of the dual arm
</td> </tr>
<tr>
<td>
q8L_ref
</td>
<td>
= Execution(7,:)
</td>
<td>
Reference for the angular position of the left lower link of the dual arm
</td> </tr>
<tr>
<td>
q1
</td>
<td>
= Execution(8,:)
</td>
<td>
Longitudinal position of the aerial platform
</td> </tr>
<tr>
<td>
q3
</td>
<td>
= Execution(9,:)
</td>
<td>
Vertical position of the aerial platform
</td> </tr>
<tr>
<td>
q7R
</td>
<td>
= Execution(10,:)
</td>
<td>
Angular position of the right upper link of the dual arm
</td> </tr>
<tr>
<td>
q8R
</td>
<td>
= Execution(11,:)
</td>
<td>
Angular position of the right lower link of the dual arm
</td> </tr>
<tr>
<td>
q7L
</td>
<td>
= Execution(12,:)
</td>
<td>
Angular position of the left upper link of the dual arm
</td> </tr>
<tr>
<td>
q8L
</td>
<td>
= Execution(13,:)
</td>
<td>
Angular position of the left lower link of the dual arm
</td> </tr> </table>
# Table 14: Content of the dataset named: “Reactivity for safe operation”.
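The row mapping documented in Table 14 can be used directly when reading Execution.mat from Python. The sketch below assumes the file stores a variable named `Execution` with the layout above; note the shift from MATLAB’s 1-based rows to Python’s 0-based indexing.

```python
from scipy.io import loadmat

# Load the execution log (file name and row layout from Table 14).
execution = loadmat('Execution.mat')['Execution']

t      = execution[0, :]  # Execution(1,:) in MATLAB: timestamp
q1_ref = execution[1, :]  # Execution(2,:): longitudinal position reference
q1     = execution[7, :]  # Execution(8,:): actual longitudinal position
print('max longitudinal tracking error:', abs(q1 - q1_ref).max())
```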
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in Zenodo and linked with OpenAIRE. The Zenodo link has been made available in the Dataset section of the AEROARMS website. In addition, for redundancy, USE will preserve a copy of the dataset.
**Dataset size** : 7.5 MB.
**Zenodo link** : _https://doi.org/10.5281/zenodo.2641949_
6. **Validation in the industrial scenario**
This section is devoted to data that have been collected during the activities
performed in WP8. These datasets have been collected in preliminary tests in
the laboratory and in real experiments. These datasets are grouped depending
on the tasks in WP8 they are involved in.
**6.1.** Installing an EC Sensor on a remote location
**Dataset name:** Installing an EC Sensor on a remote location
**Authors:** E. Gasparin, B. Revaz
**Data contact and responsible:** SENS, Bernard Revaz
**Dataset objective:** Validate the consistency and quality of the data
received by the EC (Eddy Current) sensor when manipulated remotely by a drone
or deployed for permanent monitoring.
**Dataset description:** The dataset provides the EC (Eddy Current) data obtained during a drone-operated inspection and after the release of the sensor at a specific location. The collected measurements make it possible to assess the capability of detecting relevant features in an EC inspection. This involves the possibility of recognizing the relevant signatures in the signal, such as, for example, when the probe crosses a weld or a crack on the inspected structure. The overall aim is to demonstrate the feasibility of the inspection process in the project scenario.
**Data description** : The dataset is organized into the following sections:
* EXP001: EC data collected during the validation experiments. The EC sensor was installed on the CATEC manipulator and deployed. The measurements refer to the validation experiments performed at the cement kiln in Seville, Spain.
* Software: the UPecView software necessary to open the *.sidata files.
* plots: preview plots of the experimental data.
The collected EC data are stored in two formats:
1. *.sidata files: These files are in a proprietary format that can be opened with the “UPecView” software supplied by Sensima Inspection (http://www.sensimainsp.com). This software provides an interface familiar to eddy-current inspectors. Each file includes all the relevant information that may be used for analysis: both the measurements and the instrument configuration (e.g., the excitation frequency of the probe) are contained in the file.
2. *.csv files: The csv files contain an export of the measurements only (without instrument settings); a comma separator is used.
The content of the dataset is shown in Table 15.
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name and format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
*.csv and *.sidata
</td>
<td>
EXP001/0001A
</td>
<td>
Manual calibration block scan with cracks, Seville, Spain.
</td> </tr>
<tr>
<td>
*.csv and *.sidata
</td>
<td>
EXP001/0001B
</td>
<td>
Manual reference weld pipe scan, Seville, Spain.
</td> </tr>
<tr>
<td>
*.csv and *.sidata
</td>
<td>
EXP001/0002A
</td>
<td>
CATEC Drone overall scan inspection and sensor deployment, Cement kiln,
Seville, Spain.
</td> </tr>
<tr>
<td>
*.csv and *.sidata
</td>
<td>
EXP001/0002B
</td>
<td>
Deployed sensor, Cement kiln, Seville, Spain.
</td> </tr>
<tr>
<td>
*.csv and *.sidata
</td>
<td>
EXP001/0002C
</td>
<td>
Deployed sensor, Cement kiln, Seville, Spain.
</td> </tr>
<tr>
<td>
*.csv and *.sidata
</td>
<td>
EXP001/0002D
</td>
<td>
Permanent sensor removal, Cement kiln, Seville, Spain.
</td> </tr>
<tr>
<td>
*.exe, program
installer
</td>
<td>
Software/
UPECView_1.7.1.3.rc_win-
64_cxf_Setup.exe
</td>
<td>
UPecView software installer.
</td> </tr>
<tr>
<td>
images
</td>
<td>
plots/~multiple files~
</td>
<td>
Preview plots of the data files contained in the section “data”; the filenames correspond directly.
</td> </tr> </table>
# Table 15: Content of the dataset named: “Installing an EC Sensor on a
remote location”.
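The comma-separated exports can be read with standard tooling. Here is a minimal sketch using only the Python standard library; the `.csv` extension appended to the listed name is an assumption, and the column meanings are not documented in this DMP.

```python
import csv

# Read one of the exported EC measurement files (name from Table 15).
with open('EXP001/0001A.csv', newline='') as f:
    rows = list(csv.reader(f))
print(len(rows), 'rows,', len(rows[0]), 'columns')
```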
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in Zenodo and linked with OpenAIRE. The Zenodo link has been made available in the Dataset section of the AEROARMS website. In addition, for redundancy, SENS will preserve a copy of the dataset.
**Dataset size** : 272.9 MB
**Zenodo link** : _https://doi.org/10.5281/zenodo.2652208_
**6.2.** Deploying, operation and maintenance of a mobile robot
**Dataset name:** TRIC Crawler Internal Localization on Pipe Segment
Verification by
External Ground Truth System
**Authors:** M. Oetiker
**Data contact and responsible:** AIR, Moritz Oetiker
**Dataset objective:** Validation of TRIC Crawler internal localization in an
industrial scenario (on a carbon steel pipe segment).
**Dataset description:** The TRIC magnetic inspection crawler, once it is deployed by the UAV, needs to move on the surface of an elevated pipe segment while localizing itself. The internal localization system is the main reference for inspection and only receives position updates from the UAV during periodic “maintenance” flights of the UAV. The dataset provides a comparison of the internal localization (odometry and IMU, matching the TRIC to the pipe surface) with an external ground-truth measurement recorded by the Optitrack camera system. The longitudinal coordinate (x-axis) is uncertain because it cannot be determined by the TRIC crawler in an absolute way. In post-processing of the experiment, a position update by the UAV was simulated to correct the x-axis drift.
**Data description** : The dataset contains photographs and a video of the experiment, plots of the results (including the simulated position updates), raw data, and Python code to access the raw data.
The content of the dataset is shown in Table 16.
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Data name and format**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Sqlite database containing a pickle file on each entry
</td>
<td>
11.4.2018 the path of moe 09_08.bag.sqlite
</td>
<td>
crawlerPoseMessage(DataMessage) defined in BasicMessages.py (= Crawler pos); MocapRigidBodyMessage(PoseMessage) defined in mocap_message.py (= Optitrack)
</td> </tr>
<tr>
<td>
Sqlite database containing a pickle file on each entry
</td>
<td>
11.4.2018 the path of moe 10_35.bag.sqlite
</td>
<td>
crawlerPoseMessage(DataMessage) defined in BasicMessages.py (= Crawler pos); MocapRigidBodyMessage(PoseMessage) defined in mocap_message.py (= Optitrack)
</td> </tr>
<tr>
<td>
Sqlite database containing a pickle file on each entry
</td>
<td>
11.4.2018 the path of moe 10_54.bag.sqlite
</td>
<td>
crawlerPoseMessage(DataMessage) defined in BasicMessages.py (= Crawler pos); MocapRigidBodyMessage(PoseMessage) defined in mocap_message.py (= Optitrack)
</td> </tr>
<tr>
<td>
Sqlite database containing a pickle file on each entry
</td>
<td>
11.4.2018 the path of moe 13_01.bag.sqlite
</td>
<td>
crawlerPoseMessage(DataMessage) defined in BasicMessages.py (= Crawler pos); MocapRigidBodyMessage(PoseMessage) defined in mocap_message.py (= Optitrack)
</td> </tr>
<tr>
<td>
Sqlite database containing a pickle file on each entry
</td>
<td>
9.4.2018 17_41.bag.sqlite
</td>
<td>
crawlerPoseMessage(DataMessage) defined in BasicMessages.py (= Crawler pos); MocapRigidBodyMessage(PoseMessage) defined in mocap_message.py (= Optitrack)
</td> </tr> </table>
# Table 16: Content of the dataset named: “TRIC Crawler Internal Localization
on Pipe Segment Verification by External Ground Truth System”.
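These logs can be opened with nothing beyond the Python standard library. The sketch below is a starting point only: the table and column names inside the SQLite files are not documented in this DMP, so they are discovered at runtime, and unpickling the stored entries additionally requires BasicMessages.py and mocap_message.py from the dataset to be importable.

```python
import sqlite3

# Open one of the logs (file name from Table 16).
con = sqlite3.connect('9.4.2018 17_41.bag.sqlite')

# The schema is not documented, so list the tables before reading anything.
tables = [row[0] for row in
          con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)

# Each entry holds a pickled crawlerPoseMessage or MocapRigidBodyMessage;
# calling pickle.loads() on those blobs needs the dataset's message modules
# (BasicMessages.py, mocap_message.py) on sys.path.
con.close()
```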
**Dataset sharing:** Open access (always in accordance with the GA and the CA
clauses).
**Archiving and preservation:** This dataset is archived in Zenodo and linked with OpenAIRE. The Zenodo link has been made available in the Dataset section of the AEROARMS website. In addition, for redundancy, USE will preserve a copy of the dataset.
**Dataset size** : 822.7 MB
**Zenodo link** : _http://doi.org/10.5281/zenodo.2643087_
7. **Conclusions**
This document presented the final Data Management Plan (DMP) of the AEROARMS
project. The first version was submitted in M6. The objective is to detail
what data the project has generated, whether and how it has been exploited and
made accessible for verification and re-use, and how it will be curated and
preserved.
These datasets are classified depending on the project tasks they are involved in. The document is structured in 5 Sections, each devoted to the datasets of one of WPs 3, 4, 5, 6 and 8. All datasets have been made available to the community in Zenodo and have been linked to OpenAIRE.
0995_AEROARMS_644271.md | Horizon 2020 | https://phaidra.univie.ac.at/o:1140797
# Introduction
## Purpose of the document
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used with regard to
all the datasets that will be generated by the project. A DMP details what
data the project will generate, whether and how it will be exploited or made
accessible for verification and re-use, and how it will be curated and
preserved.
## Scope of the document
This document (deliverable D9.4) describes the first version of the AEROARMS
DMP. The DMP is not a fixed document; it evolves and gains more precision and
substance during the lifespan of the project. The final version of the DMP
(deliverable D9.5) will be delivered at M48.
## Structure of the document
The DMP describes datasets and should reflect the current point of view of the consortium about the data that will be produced. The description of each dataset includes the following:
* Dataset name
* Data contact and responsible
* Dataset description
* Data collection
* Standards and metadata
* Dataset sharing
* Archiving and preservation (including storage and backup)
It has been agreed by the AEROARMS consortium that all the datasets that will
be produced within the project and that are not affected by IPR (clause 8.0 of
the consortium agreement) will be shared between the partners. Moreover, all
the datasets with potential interest for the community and that are not
related to further exploitation activities will be shared with the whole
scientific community after their publication in conference proceedings and/or
international journals.
Besides the introduction and conclusions, this document is structured in 5 Sections, devoted to the datasets of WPs 3, 4, 5, 6 and 8. It is expected that the datasets of highest interest in AEROARMS will be generated in these WPs.
# Control of aerial robots with multiple manipulation means
This section is devoted to datasets that can be collected during the
activities in WP3. These datasets will be collected in preliminary tests in
the laboratory, indoor settings or in outdoor experiments. These datasets are
grouped depending on the tasks in WP3 they are involved in.
## Dataset: Modelling of aerial multirotors with two arms
**Dataset name:** Modelling of aerial multirotors with two arms.
**Data contact and responsible:** USE, Guillermo Heredia.
**Dataset description:** This dataset contains the necessary data for the
accurate modelling of the aerial vehicle with two arms developed in the
project. The dataset will be used for the modelling and for the validation of
coupled aerial platform-arms control algorithms. They will also be used for
robustness tests of the control algorithms.
**Data collection:** This dataset includes data collected from the experiments performed, including partial experiments in the laboratory, indoor experiments and outdoor experiments.
**Standards and metadata:** The dataset will use standards and metadata usual
in multirotors modelling and control to represent position, attitude and
articular variables of the arms, among others. A file with the description of
the formats and standards used will be added to the dataset in order to
facilitate sharing and re-usability.
**Dataset sharing:** This dataset will be shared with the related partners of the AEROARMS consortium. The configuration of the aerial vehicles developed in AEROARMS is very specific, and it is expected that this dataset will be of limited interest to the community. For this reason, this dataset is not expected to be published.
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be archived on the AEROARMS server. In addition, for redundancy, USE will preserve a copy of the dataset.
## Dataset: Integrated force and position control
**Dataset name:** Integrated force and position control.
**Data contact and responsible:** USE, Guillermo Heredia.
**Dataset description:** This dataset contains the necessary data for the
development of integrated force and position control of the aerial vehicle
with manipulators. The dataset will be used for the modelling and for the
validation of integrated force and position control algorithms.
**Data collection:** This dataset includes data collected from the experiments performed, including partial experiments in the laboratory, indoor experiments and outdoor experiments.
**Standards and metadata:** The dataset will use standards and metadata usual
in integrated force and position control in order to represent position,
attitude, joint variables of the arms, forces and torques, among others. A
file with the description of the formats and standards used will be added to
the dataset in order to facilitate sharing and re-usability.
**Dataset sharing:** This dataset will be shared with the related partners of the AEROARMS consortium. Depending on the generality of the results, after publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be archived on the AEROARMS server. In addition, for redundancy, USE will preserve a copy of the dataset.
## Dataset: Control for novel fully-actuated aerial platforms
**Dataset name:** Control for novel fully-actuated aerial platforms.
**Data contact and responsible:** CNRS, Antonio Franchi.
**Dataset description:** This dataset contains data related to modelling and
control of the fully-actuated aerial vehicle with 6 tilted-propellers
developed in the Task 3.2 of the project. The dataset will contain the main
physical parameters of the platform, some sensor measurements, the desired
output and control inputs.
**Data collection:** This dataset includes data collected from at least one
meaningful sample experiment performed in the laboratory.
**Standards and metadata:** The code of a simplified simulator of the platform and of the algorithms developed for the project will also be provided to the partners upon request for their use within the project. The dataset will be provided in a standard space-separated text format, which can typically be used by any software. Papers reporting the results obtained from simulations and experiments will be submitted for publication in both conference proceedings and international scientific journals.
**Dataset sharing:** The previously mentioned data will be shared with the related partners. Moreover, after publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be stored on a server at LAAS and committed to the SVN project repository upon request of the partners.
## Dataset: Behavioural coordinated control
**Dataset name:** Behavioural coordinated control.
**Data contact and responsible:** CREATE, Prof. Gianluca Antonelli.
**Dataset description:** A library of elementary behaviours, namely atomic tasks to be assigned to the aerial manipulator, and compound behaviours, namely collections of elementary behaviours in priority order. For each behaviour, the Jacobian matrix and the task function will be provided. Such a library could be of interest to researchers working on the behavioural and kinematic control of robots, since it could be one of the first applications to aerial manipulation.
**Data collection:** For the experiments, the time history of the data generated by the kinematic control, such as the desired velocities and/or positions of the whole system (vehicle and multiple arms), will also be collected.
**Standards and metadata:** Code of the kinematic control, developed in C/C++ under the ROS environment and/or in MATLAB/Simulink. Moreover, simulation models developed using open-source software such as Gazebo or commercial software such as V-Rep (available with a free educational license) will also be provided.
The time history of the variables involved in the kinematic control will be provided in ASCII format, so that it can be used by any kind of software.
Papers reporting the results obtained from simulations and experiments will be submitted for publication in both conference proceedings and international scientific journals.
**Dataset sharing:** The code that will be generated and the results obtained from the simulations and experiments will be shared with the related partners. Moreover, after publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be archived on the AEROARMS server and, for redundancy, on the University of Cassino server.
## Dataset: Visual servoing with actively movable camera
**Dataset name:** Visual servoing with actively movable camera.
**Data contact and responsible:** CREATE, Prof. Gianluca Antonelli.
**Dataset description:** A collection of control laws based on visual (and
force if available) data allowing the coordination of the vehicle and robotic
arm movements to achieve manipulation tasks, like grasping and plugging of
objects into structures fixed to the ground.
**Data collection:** The time history of the data acquired during real experiments and simulations will also be collected.
**Standards and metadata:** Code of the kinematic control, developed in C/C++ under the ROS environment and/or in MATLAB/SIMULINK. Moreover, simulation models developed using open-source software such as Gazebo or commercial software such as V-Rep (available with a free educational license) will also be provided.
The time history of the variables involved in the visual control will be provided in ASCII format, so that it can be used by any kind of software.
Papers reporting the results obtained from simulations and experiments will be submitted for publication in both conference proceedings and international scientific journals.
**Dataset sharing:** The code that will be generated and the results obtained from the simulations and experiments will be shared with the related partners. Moreover, after publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The code that will be generated and the results obtained from the simulations and experiments, where of relevant interest, will be posted on the official website of the project.
# Aerial tele-manipulation in inspection and maintenance
This section is devoted to data that can be collected during the activities performed in WP4 on aerial telemanipulation in inspection and maintenance. These datasets are grouped depending on the WP4 tasks they are involved in.
## Dataset: Aerial telemanipulation system
**Dataset name:** Aerial telemanipulation system.
**Data contact and responsible:** DLR, Jordi Artigas.
**Dataset description:** This dataset contains the necessary data for the
aerial telemanipulation system with force feedback, stability and bilateral
control developed in the Task 4.2 of the project. The dataset will contain the
sensor measurements, the desired output, and the control inputs, among others.
**Data collection:** The time history of the data acquired during real experiments will also be collected.
**Standards and metadata:** The dataset will be provided in a standard text format, so that it can be used by any standard software. A file with the description of the formats and standards used will be added to the dataset in order to facilitate sharing and re-usability.
**Dataset sharing:** This dataset will be shared with the related partners of the AEROARMS consortium. After publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be archived on the AEROARMS server and, for redundancy, on a DLR server.
## Dataset: Local planning for constrained aerial telemanipulation
**Dataset name:** Local planning for constrained aerial telemanipulation.
**Data contact and responsible:** CNRS, Antonio Franchi.
**Dataset description:** This dataset contains a sample of typical problem
inputs and algorithm outputs of the algorithms that will be developed in the
Task 4.3 of the project. Other meaningful data recorded from either
simulations or real experiments may be stored as well, upon request of other
partners of the project.
**Data collection:** The time history of the aforementioned data will be
collected.
**Standards and metadata:** The program running the developed algorithms will also be provided to the partners. The dataset will be provided in a standard space-separated text format. Papers reporting the results obtained from simulations and experiments will be submitted for publication in both conference proceedings and international scientific journals.
**Dataset sharing:** The previously mentioned data will be shared with the related partners. Moreover, after publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be stored on a server at LAAS and committed to the SVN project repository upon request of the partners.
# Perception for robotic manipulation in aerial operations
This section is devoted to data that can be collected during the activities
performed in WP5 devoted to perception for robotic manipulation in aerial
operations. These datasets will be collected in preliminary tests in the
laboratory, indoor settings or in outdoor experiments. These datasets are
grouped depending on the tasks in WP5 they are involved in.
## Dataset: Adaptive vision for accurate grabbing
**Dataset name:** Adaptive vision for accurate grabbing.
**Data contact and responsible:** UPC, Alberto Sanfeliu.
**Dataset description:** This dataset contains the necessary data for the
development, testing and validation of adaptive vision techniques for accurate
grabbing with aerial robots. These datasets will be used for configuration and
setting of the methods and algorithms as well as for performing simulations
and tests.
**Data collection:** This dataset includes data collected from the experiments performed, including partial experiments in the laboratory, indoor experiments and outdoor experiments.
**Standards and metadata:** The dataset will use standards and metadata usual
in computer vision. A file with the description of the formats and standards
used will be added to the dataset in order to enable usability.
**Dataset sharing:** This dataset will be shared with the related partners of the AEROARMS consortium. After publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be archived on the AEROARMS server. In addition, for redundancy, UPC will preserve a copy of the dataset.
## Dataset: 3D mapping and localization for manipulation
**Dataset name:** 3D mapping and localization for manipulation.
**Data contact and responsible:** USE, Fernando Caballero.
**Dataset description:** This dataset contains the necessary data for the
development, testing and validation of 3D mapping and localization techniques
for manipulation with aerial robots and the crawler in industrial
environments. These datasets will be used for configuration and setting of the
methods and algorithms as well as for performing simulations and tests.
**Data collection:** This dataset includes data collected from the experiments, including partial experiments in the laboratory, indoor experiments and outdoor experiments.
**Standards and metadata:** The dataset will use standards and metadata usual
in 3D mapping and localization such as the measurements of the sensors onboard
the robot, ground truth (map and localization and orientation of the aerial
robot and the crawler), among others. A file with the description of the
formats and standards used will be added to the dataset in order to facilitate
sharing and re-usability.
**Dataset sharing:** This dataset will be shared with the related partners of the AEROARMS consortium. After publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be archived on the AEROARMS server. In addition, for redundancy, USE will preserve a copy of the dataset.
## Dataset: Perception for the support of the aerial and ground robot
operation
**Dataset name:** Perception for the support of the aerial and ground robot
operation.
**Data contact and responsible:** UPC, Antoni Grau.
**Dataset description:** This dataset contains the necessary data for the
development, testing and validation of perception tools for the support of the
aerial and ground robot operation that will be developed in task T5.3. These
datasets will be used for configuration and setting of the methods and
algorithms as well as for performing simulations and tests.
**Data collection:** This dataset includes data collected from the experiments performed, including partial experiments in the laboratory, indoor experiments and outdoor experiments.
**Standards and metadata:** The dataset will be provided in a standard text format, so that it can be used by any standard software. A file with the description of the formats and standards used will be added to the dataset in order to facilitate sharing and re-usability.
**Dataset sharing:** This dataset will be shared with the related partners of the AEROARMS consortium. After publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be archived on the AEROARMS server. In addition, for redundancy, UPC will preserve a copy of the dataset.
## Dataset: Robust perception fusion from deployed sensors
**Dataset name:** Robust perception fusion from deployed sensors.
**Data contact and responsible:** USE, Ramiro de Dios.
**Dataset description:** This dataset contains the necessary data for the
development of perception tools to fuse in real time the information from the
sensors on the crawler robot and the sensors on the aerial robots. The
objective is to improve the perception required for the inspection and
maintenance operations. These perception fusion techniques will be developed
in task T5.3. The datasets will be used for configuration and setting of the
methods and algorithms as well as for performing simulations and tests.
**Data collection:** This dataset includes data collected from the experiments performed, including partial experiments in the laboratory and real experiments.
**Standards and metadata:** The dataset will be provided in a standard text format, so that it can be used by any standard software. A file with the description of the formats and standards used will be added to the dataset in order to facilitate sharing and re-usability.
**Dataset sharing:** This dataset will be shared with the related partners of the AEROARMS consortium. After publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be archived on the AEROARMS server and, for redundancy, on a USE server.
# Planning for aerial manipulation in inspection and maintenance
This section is devoted to data that can be collected during the activities
performed in WP6. These datasets will be collected in preliminary tests in the
laboratory and in real experiments. These datasets are grouped depending on
the tasks in WP6 they are involved in.
## Dataset: Planning for aerial manipulation
**Dataset name:** Planning for aerial manipulation.
**Data contact and responsible:** CNRS, Antonio Franchi.
**Dataset description:** This dataset contains a sample of typical problem
inputs and algorithm outputs of the algorithms that will be developed in the
Task 6.1 of the project. Other meaningful data recorded from either
simulations or real experiments may be stored as well, upon request of other
partners of the project.
**Data collection:** The time history of the aforementioned data will be
collected.
**Standards and metadata:** The program running the developed algorithms will also be provided to the partners. The dataset will be provided in a standard space-separated text format. Papers reporting the results obtained from simulations and experiments will be submitted for publication in both conference proceedings and international scientific journals.
**Dataset sharing:** The previously mentioned data will be shared with the related partners. Moreover, after publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be stored on a server at LAAS and committed to the SVN project repository upon request of the partners.
## Dataset: Control-based local optimization methods for planning
**Dataset name:** Control-based local optimization methods for planning.
**Contact person and responsible:** CREATE, Prof. Gianluca Antonelli.
**Dataset description:** A planner will be created with the purpose of obtaining an agile system able to perform operations inside a dense industrial installation, taking into account the dynamics of the manipulator. This dataset will be used mainly for the development of the control-based local optimization methods inside the planning algorithms (task T6.1). For the experiments, the time history of the data generated by the planner, i.e., the positions and orientations of the system’s end-effector, will also be collected.
**Data collection:** This dataset includes data collected from the experiments performed, including partial experiments in the laboratory and real experiments.
**Standards and metadata:** Code of the provided planner, developed in C/C++ under the ROS environment and/or in MATLAB/Simulink. Moreover, simulation models developed using open-source software such as Gazebo or commercial software such as V-Rep (available with a free educational license) will also be provided. The time history of the variables involved in the planner will be provided in ASCII format, so that it can be used by any kind of software. Papers reporting the results obtained from simulations and experiments will be submitted for publication in both conference proceedings and international scientific journals.
**Dataset sharing:** The code that will be generated and the results obtained from the simulations and experiments will be shared with the related partners. Moreover, after publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The code that will be generated and the results obtained from the simulations and experiments, where of relevant interest, will be posted on the SVN project repository and on the University of Cassino server.
## Dataset: Reactivity for safe operation
**Dataset name:** Reactivity for safe operation.
**Data contact and responsible:** USE, Ivan Maza.
**Dataset description:** This dataset contains the necessary data for the
development, testing and validation of reactive techniques for obstacle
avoidance of aerial robots in industrial environments. These datasets will be
used for configuration and setting of methods and algorithms as well as for
performing simulations and tests.
**Data collection:** This dataset includes data collected from the experiments performed, including partial experiments in the laboratory, indoor experiments and outdoor experiments.
**Standards and metadata:** The dataset will use standards such as the
measurements of the sensors onboard the robot (3D cameras), localization and
orientation of the aerial robot, among others. A file with the description of
the formats and standards used will be added to the dataset in order to
facilitate sharing and re-usability.
**Dataset sharing:** This dataset will be shared with the related partners of the AEROARMS consortium. After publication in conference proceedings and/or international journals, the data may be shared with the whole scientific community (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be archived on the AEROARMS server. In addition, for redundancy, USE will preserve a copy of the dataset.
# Validation in the industrial scenario
This section is devoted to data that can be collected during the activities
performed in WP8. These datasets will be collected in preliminary tests in the
laboratory and in real experiments. These datasets are grouped depending on
the tasks in WP8 they are involved in.
## Dataset: Installing an EC Sensor on a remote location
**Dataset name:** Installing an EC Sensor on a remote location.
**Data contact and responsible:** SENS, Bernard Revaz.
**Dataset description:** This dataset contains the relevant data collected in
the experiments of the AEROARMS application "Installing an Eddy Current (EC)
sensor on a remote location". By relevant data, we mean data allowing a
professional to have a clear understanding of the inspection and maintenance
operation. The dataset will be also used for debugging and validating the
different functionalities and techniques involved in the application as well
as to validate their integration.
**Data collection:** This dataset includes data collected from the experiments
performed.
**Standards and metadata:** The dataset will be provided in text and other standard formats, ready to be used by any standard software. A file with the description of the formats and standards used will be added to the dataset in order to facilitate sharing and re-usability.
**Dataset sharing:** This dataset will be shared with the related partners of the AEROARMS consortium. The data not protected by IPR may be shared with the whole scientific community after publication in conference proceedings and/or international journals (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be committed to the SVN project repository and stored on a server at SENS.
## Dataset: Deploying, operation and maintenance of a mobile robot
**Dataset name:** Deploying, operation and maintenance of a mobile robot.
**Data contact and responsible:** AIR, Moritz Oetiker.
**Dataset description:** This dataset contains all the data collected in the
experiments of the AEROARMS application "Deploying, operation and maintenance
of a mobile robot". The dataset will contain the data necessary to re-play the
execution of the experiment.
The dataset will be used for debugging and validating the different
functionalities and techniques involved in the application as well as to
validate their integration. Moreover, the data will be used to demonstrate the
relevance of the robot deploying and maintenance task in an industrial
environment.
**Data collection:** This dataset includes data collected from the experiments
performed.
**Standards and metadata:** The dataset will be provided in text and other standard formats, ready to be used by any standard software. A file with the description of the formats and standards used will be added to the dataset in order to facilitate sharing and re-usability.
**Dataset sharing:** This dataset will be shared with the related partners of the AEROARMS consortium. The data not protected by IPR may be shared with the whole scientific community after publication in conference proceedings and/or international journals (always in accordance with the GA and the CA clauses).
**Archiving and preservation (including storage and backup):** The dataset with relevant interest will be committed to the SVN project repository and stored on a server at AIR.
# Conclusions
This document presented the first version of the Data Management Plan (DMP) of
the AEROARMS project. The objective is to detail what data the project will
generate, whether and how it will be exploited or made accessible for
verification and re-use, and how it will be curated and preserved.
These datasets are classified depending on the project tasks they are involved in. The document is structured in 5 Sections, each devoted to the datasets of one of WPs 3, 4, 5, 6 and 8.
The DMP is not a fixed document; it evolves and gains more precision and
substance during the lifespan of the project. The final version of the DMP
(deliverable D9.5) will be delivered at M48.
0996_NEAT_644334.md | Horizon 2020 | https://phaidra.univie.ac.at/o:1140797
# Data sharing
Github is the code repository chosen for all open-source software released by the project. In cases where we provide such software, we will link to the corresponding Github repository where relevant.
Scientific publications (and related public deliverables) will be shared on
Zenodo together with results files and snapshots of code to ensure
reproducibility. Decision procedures regarding the level of openness of the
data will follow NEAT’s publication approval process described in deliverable
D5.2 (“Dissemination Plan”).
We will use license options from Zenodo, and they will be decided on a case-
by-case basis. Whenever possible, we will prioritise use of the Creative
Commons licence.
All publications will contain pointers to the relevant data sets in the Zenodo archive. Whenever a new dataset or publication becomes available, the NEAT public website and any associated accounts on social-networking sites will post a news item that will provide the relevant pointers.
Even though Zenodo will be used as the main vehicle for data sharing, partners will also post open data on their own websites to maximise the spread of NEAT results.
# Archiving and preservation (including storage and backup)
Since NEAT will use Zenodo, archiving will be handled there according to
Zenodo’s terms of service.
This service is free of cost.
**Disclaimer**
The views expressed in this document are solely those of the author(s). The European Commission is not responsible for any use that may be made of the information it contains.
All information in this document is provided “as is”, and no guarantee or
warranty is given that the information is fit for any particular purpose. The
user thereof uses the information at its sole risk and liability.
1000_SMARTSET_644704.md | Horizon 2020 | https://phaidra.univie.ac.at/o:1140797
# 1.- INTRODUCTION
A DMP describes the data management life cycle for all data sets that will be
collected, processed or generated **under** the research project. It is a
document outlining how research data will be handled during **the initiative**
, and even after the **action** is completed, describing what data will be
collected, processed or generated and following what methodology and
standards, whether and how this data will be shared and/or made open, and how
it will be curated and preserved. The DMP is not a fixed document; it evolves
and gains more precision and substance during the lifespan of the project.
## 1.1 Project background and vision
Creative industry SMEs in the broadcast media sector, such as small-scale TV
stations and production companies, need Virtual Reality and Augmented Reality
technologies to remain competitive, bearing in mind their limited facilities
and resources. Expanding the use of advanced graphics technologies, currently
within reach only of large-scale TV networks, will be an important step
forward for creative industry SMEs and for the competitiveness of this
industry.
The vision of the SmartSet project is to develop a low-cost virtual studio
solution that, despite costing ten times less than comparable solutions on the
market, will have the same quality and functionality as the high-cost
solutions currently used by larger broadcast media companies, so that the
project will increase the competitiveness of the European creative industries,
particularly in the broadcast media sector.
The SmartSet initiative is a response to a demand from creative industry SMEs
in the broadcast media sector for an advanced, cost-effective virtual studio
solution which will increase their competitiveness in the market. The project
contributes to expanding a vibrant EU technological ecosystem for the creative
industries' needs and fosters exchanges between creative industry SMEs and
providers of innovative ICT solutions.
Having said that, it is essential to know whether the SmartSet Innovation
Action is in line with worldwide experts' expectations in the field of Virtual
Studio technology, and to be sure that SmartSet objectives are aligned with
the opinions of the main stakeholders and end users. The data generated within
the project will mainly relate to consultation and validation processes,
obtaining invaluable feedback from end users.
User consultation comprises activities related to defining user needs and
ensuring the SmartSet solution meets the expressed requirements of creative
industry SMEs, without forgetting that the SmartSet solution also has to be
cost effective. User consultation also has to provide critical input to the
development of the exploitation strategy and the SmartSet business planning
process.
## 1.2. Document description
The Data Management Plan intends to identify the datasets to be handled and to
define a general protocol to create, manage and guarantee free access to
results and data collected within the project lifespan. This document will be
updated periodically over the duration of the project.
Due to the project's nature, the data managed in the project cannot be
considered sensitive beyond some contact details and answers to
questionnaires. In SmartSet, the amount of information will be relatively
small, since interest groups are established and focused on media
professionals and data collection only addresses consultation matters.
More detailed versions of the DMP will be submitted whenever significant
changes occur, such as the generation of new datasets or changes in consortium
agreements.
# DATA COLLECTION
The main goal of this section is to define the nature and different types of
data that will be used in the project as well as the agents that will be
involved in the process.
## Data description
In the SmartSet project there are five different sorts of data that will be
gathered and produced during the project lifetime.
− **Personal Data:** contact details from stakeholders and project partners
who are taking part in the requirements definition, in some consultation
procedures, or as members of the On-line Community or CIAG.
− **Questionnaires:** forms created in order to collect feedback from industry
professionals about aspects of the project that the consortium wishes to
confirm and validate.
− **Interviews:** after answering questionnaires, it is expected that more
complex parts of the system will be studied in depth with the aim of obtaining
a clear idea of customers' expectations.
− **Graphic information:** pictures, videos, etc. that are shared among end-
users when implementing the technology in their own virtual studios.
− **Deliverables:** these documents were described in the Description of Work
and accepted by the EC. According to the Workplan, these reports will be
published on the project website to be accessible to the general public. Some
of the deliverables will contain aggregated data obtained by means of
questionnaires and interviews, summing up the gathered feedback without
revealing personal information from participants.
**Figure 1. Types of Data** (contact information, questionnaires, interviews,
graphic information, deliverables)
Most of the datasets will be part of the information generated under the
following tasks, since these work packages involve contacting and getting
feedback from stakeholders and final users. Information obtained in WP2 and
WP4 will mainly consist of the output resulting from questionnaires and
interviews distributed to stakeholders, whereas data within WP7 is generally
made up of personal contact details from potential end-users to whom
forthcoming results could be of interest.
<table>
<tr>
<th>
**WP/Task nr.**
</th>
<th>
**WP/ Task Description**
</th>
<th>
**Responsible**
</th>
<th>
**Output**
</th> </tr>
<tr>
<td>
WP2
</td>
<td>
User Consultations & Requirements Definitions
</td>
<td>
Lapland UAS
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
Task 2.1
</td>
<td>
User Consultation Process Protocol and Tools
</td>
<td>
</td>
<td>
Questionnaires/ Interviews
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
System Verification & Validation
</td>
<td>
Lapland UAS
</td>
<td>
</td> </tr>
<tr>
<td>
Task 4.3
</td>
<td>
Questionnaires and Templates for Data Collection
</td>
<td>
</td>
<td>
Questionnaires
</td> </tr>
<tr>
<td>
Task 4.4
</td>
<td>
Test Sessions and Data Collection
</td>
<td>
</td>
<td>
Interviews
</td> </tr>
<tr>
<td>
Task 4.5
</td>
<td>
Data analysis and feedback
</td>
<td>
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
WP7
</td>
<td>
Commercial Exploitation & Business Planning
</td>
<td>
UPV
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
Task 7.1
</td>
<td>
Establish and Manage CIA Group
</td>
<td>
</td>
<td>
Contact details
</td> </tr> </table>
**Table 1. Work Packages data outcomes**
## Participants
As explained in deliverable 2.1 User Consultation Protocol, participants in
the **Smartset** project comprise:
− _Developers_ of the **Smartset** software
− _End users_, who are also **Smartset** project partners together with the
developers
− _Commercial impact advisory group_ (CIAG), formed from the group of
_stakeholders_ to share a more general opinion among professionals in the
creative industry concerning the commercial potential of the **Smartset**
product. In this case, we include CIAG members in the group of stakeholders
in order to simplify the analysis.
**Figure 2. Different participants’ groups involved in the SmartSet project**
## Tools
### Questionnaires
One of the main tools for collecting user requirements and validation data is
a versatile questionnaire. These forms are created with Webropol by Lapland
UAS. The surveys will be published online and a link to each survey sent to
each target group (end users and the CIAG as representatives of the
stakeholders). After receiving the link, respondents have 1.5–2 weeks to
answer the survey; finally, the questionnaires will be printed out from
Webropol.
### Interviews
To complement the data from the questionnaires, there will also be a series of
online interviews and/or meetings (in Skype or a similar online tool)
organized by Lapland UAS with the help of Brainstorm. The interviews are based
on the data gained from the questionnaires.
All the online sessions will be recorded for research purposes and
transcribed.
### Production diaries and data collection
In phase 2 of the user consultation process, user experiences will be
collected in the form of demo descriptions. Data collection is based on actual
user experiences after the end users have used SmartSet for making demos. The
emphasis is on the practical experiences and actual demos.
All end users are committed to documenting their work when creating demo
material with SmartSet within the project. The production diaries and other
data are collected during January–April 2016.
The end users will be provided with a template in which they will document the
processes, materials, experiences, etc. in each of the demos they make. These
templates will act as diaries that will also show each end user's personal
development process as they gain more knowledge along the way.
It will be important that end users also share data in the form of photos,
videos and other visual material. The materials are intended to be submitted
via e-mail or, if necessary, some other transfer method. In addition to
writing diaries, the end users' experiences will also be collected in Skype
interviews. These interviews should be planned and organized based on each end
user's individual needs. All the material will be combined into a final
report.
**Figure 3. Tools used for data collection**
## Evaluation and analysis of the data
Apart from the feedback (questionnaires, interviews, etc.), there will also be
data in video format (real-time and non-real-time) in order to analyse the
quality of the productions, refine the technology components and advise users
on the proper use of the technology.
The conclusions obtained by means of questionnaires, interviews, etc., which
cannot be considered sensitive, will be made public. The gathered material
will be processed into both written and visual (charts, still photos from
demos, etc.) final reports for the further development of **SmartSet**.
End users will also deliver written reports, photos, etc. about the different
phases: first about expectations and demos, then about the realization of the
demos, and a concluding report about the final demo products (e.g. whether the
final product met expectations in quality, better or worse, and how/why).
# DOCUMENTATION AND METADATA
As explained in previous sections of the DMP, data produced in SmartSet will
mostly be the outcome of analysing questionnaires and interviews to better
know potential customers' expectations and perception of the SmartSet product.
The information handled within this project may not be particularly suitable
for reuse, since it has been specifically designed around SmartSet features.
Despite this, the conclusions resulting from the research will be openly
published and summarised in the approved deliverables, whose final versions
will be accessible on the project website.
In a first stage, information is foreseen to be saved and backed up on
personal computers, with file nomenclature according to personal criteria.
Regarding file versioning, the intention is to follow the project policies
detailed in D.1.1 Project Handbook.
In a second stage, the consortium has chosen the Google Drive platform to
upload and share information, making it accessible among project partners. The
server can thereby act at the same time as a backup copy.
Concerning personal contact details, which will have been previously approved
by informed consent, only some contact information from people participating
in the Online Community will be published on the project website and in
deliverables. CIAG members authorised the project consortium to publish their
contact information and photo in the corresponding section of the website.
Information collected via questionnaires and interviews will be published
collectively, without revealing any personal opinion.
At this stage of the project, the main formats of files containing information
are described in the following table. However, this information is subject to
future changes which will be duly updated in next versions of DMP:
<table>
<tr>
<th>
**Type of Data**
</th>
<th>
**File Format**
</th> </tr>
<tr>
<td>
Questionnaires
</td>
<td>
Microsoft Word, Pages, PDF
</td> </tr>
<tr>
<td>
Interviews
</td>
<td>
AVI, MP4
</td> </tr>
<tr>
<td>
Deliverables
</td>
<td>
Microsoft Word (compatible versions), Pages, PDF
</td> </tr>
<tr>
<td>
Webinars, Demo Sessions
</td>
<td>
AVI, FLT, MP4
</td> </tr>
<tr>
<td>
Contact Details
</td>
<td>
Microsoft Word
</td> </tr> </table>
**Table 2. File formats**
# ETHICS AND LEGAL COMPLIANCE
On the one hand, Lapland University of Applied Sciences, as responsible for
the User Consultation and Validation process deliverables, is in charge of
data security and legal compliance. As a public institution, the university
acts in accordance with its internal Information Security Policies and
complies with national legislation on this matter.
Brainstorm is a company certified under ISO 9001 and is committed to ensuring
the necessary measures to guarantee data protection.
In deliverables, answers from respondents will not be singled out
individually; it will therefore be impossible for external people to identify
individual respondents' answers. Data will be analysed as a whole; however,
the questionnaires were not anonymous, as every respondent gave their name and
contact information. This information will not be revealed at any time.
# STORAGE AND BACK UP
Initially, data have been stored on personal computers and backup copies are
made periodically. It has been established to save all new data once per week
for as long as new data are being created or added. Although the amount of
data collected does not require considerable storage capacity, an external
hard drive is expected to be used to ensure data storage. In addition, as
explained above, Google Drive is being used to back up the data and, at the
same time, as a repository among partners to facilitate data exchange.
Deliverables will be uploaded to the project website.
Responsibility for storing questionnaire and interview data will rest with
Lapland UAS, for practical reasons, since they lead questionnaire and
interview collection. Concerning demo session videos and webinars, Brainstorm
will assume responsibility for keeping the information safe. Last but not
least, personal information will be kept on a personal computer with private
access.
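As a minimal illustration of the weekly backup pass described above, the Python sketch below copies new or modified files from a working folder to an external drive. The paths are hypothetical placeholders, and a real setup would follow the project's own conventions; this is an illustrative sketch, not the consortium's tooling.

```python
# Sketch of a weekly backup pass: copy files that are new, or newer than the
# existing backup copy. Paths are hypothetical placeholders.
import shutil
from pathlib import Path

SOURCE = Path("C:/SmartSet/project_data")  # hypothetical working folder
BACKUP = Path("E:/SmartSet_backup")        # hypothetical external drive

def backup_new_files(source: Path, backup: Path) -> None:
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dest = backup / src.relative_to(source)
        # Copy only if the file is new or modified since the last backup.
        if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)

if __name__ == "__main__":
    backup_new_files(SOURCE, BACKUP)  # run e.g. once per week
```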
# DATA SHARING
All of the reports will be published online in _the publication series of
Lapland UAS_. As part of Publication series B: Reports, all the publications
will have official ISSN and ISBN numbers.
Furthermore, public deliverables will be uploaded in due course to the
Outcomes section of the project website.
Graphic material such as demonstrations, webinars and session videos will be
uploaded to the project's YouTube channel to be accessible to the general
public.
# SELECTION AND PRESERVATION
At this stage of the project, the intention is to preserve the data for at
least 5 years after the project ends.
# RESPONSIBILITIES AND RESOURCES
As a collaborative project, data management responsibility is divided among
different persons/organisations depending on the role they have adopted in the
project:
<table>
<tr>
<th>
**Type of Data**
</th>
<th>
**Resource**
</th>
<th>
**Responsible**
</th> </tr>
<tr>
<td>
Questionnaires/ Interviews
</td>
<td>
Personal Computers/ Google
Drive
</td>
<td>
Timo Puuko (Lapland Univ)
</td> </tr>
<tr>
<td>
Stakeholders contact details
</td>
<td>
Personal Computer
</td>
<td>
Francisco Ibañez (Brainstorm)
</td> </tr>
<tr>
<td>
Demonstrations,
Webinars, virtual set templates
</td>
<td>
Youtube channel
</td>
<td>
Javier Montesa (Brainstorm)
</td> </tr>
<tr>
<td>
Deliverables
</td>
<td>
Personal Computer/Google Drive/ Website
</td>
<td>
Francisco Ibáñez (Brainstorm)
</td> </tr> </table>
**Table 3. Storage resources**
Taking into consideration the nature of the data handled in the project, no
exceptional measures are foreseen to be needed to carry out the plan.
Moreover, no additional expertise will be required for data management.
Regarding data storage and backup, the project has agreed to appoint task
leaders to ensure that the plan is carried out.
<table>
<tr>
<th>
**Task name**
</th>
<th>
**Responsible person name**
</th> </tr>
<tr>
<td>
Data capture
</td>
<td>
Timo Puuko
</td> </tr>
<tr>
<td>
Metadata production
</td>
<td>
Timo Puuko
</td> </tr>
<tr>
<td>
Data storage & back up
</td>
<td>
Timo Puuko
</td> </tr>
<tr>
<td>
Data archiving & sharing
</td>
<td>
Francisco Ibáñez
</td> </tr> </table>
**Table 4. Task responsibilities**
---

**1002_PaaSword_644814.md**
**Executive Summary**
This deliverable is the first version of PaaSword's Data Management Plan
(DMP). It includes the main elements foreseen in the European Guidelines for
H2020 and the data management policy that will be used for all the datasets
generated by the project. PaaSword's DMP is driven by the project's pilots.
Specifically, this document describes the datasets related to the four (out of
five) PaaSword pilots: 1) Intergovernmental Secure Document and Personal Data
Exchange (led by Ubitech), 2) Secure Sensors Data Fusion and Analytics (led by
Siemens), 3) Protection of personal data in a multi-tenant CRM environment
(led by CAS) and 4) Protection of Sensible Enterprise Information in Multi-
tenant ERP Environments (led by SingularLogic). For each of these datasets,
the document presents its name, description, standards and metadata that will
be used, data sharing options along with archiving and preservation details.
1. **Introduction**
In this deliverable, we discuss PaaSword's Data Management Plan (DMP) based on
the European Commission Guidelines for Horizon 2020. The purpose of the DMP is
to analyse the main elements, and their details, of the data management policy
that will be used for each of the datasets generated by the project. Since the DMP
is expected to evolve and to mature during the project, updated versions of
the plan will be delivered periodically as the project progresses.
PaaSword's DMP is driven by the project's pilots. These have been selected to
cover a variety of business and public ecosystems with different
characteristics, thus, promoting the general applicability and validation of
the project results. The PaaSword use cases will evaluate the PaaSword
services in important real-life scenarios answering the crucial question of
the eventual benefits for users. Five types of PaaSword pilot applications are
envisaged during the project duration, covering important, real needs of user
communities and their respective success criteria, as shown below:
* Encrypted Persistency as PaaS/IaaS Service Pilot Implementation (led by SixSq)
* Intergovernmental Secure Document and Personal Data Exchange (led by Ubitech)
* Secure Sensors Data Fusion and Analytics (led by Siemens)
* Protection of personal data in a multi-tenant CRM environment (led by CAS)
* Protection of Sensible Enterprise Information in Multi-tenant ERP Environments (led by SingularLogic)
For all of the PaaSword pilots, with the exception of the first, details of
the datasets and the associated data management policy are discussed in
Sections 2-5. The pilot led by SixSq consists of the integration of the
PaaSword components within the SlipStream “App Store” allowing Cloud
Application Operators to deploy and manage applications secured with the
PaaSword software. Since the use case involves deployment of the project’s
software rather than a specific, external application, there is no specialized
data set associated with this use case. Deployment and testing of this use
case will be done either with a mocked application or another use case
application, using the data sets defined by the other use cases.
2. **Intergovernmental Secure Document and Personal Data Exchange**
1. **Data set reference and name**
Ubitech is using a relational database management system in order to store all
the essential information related to the data exchange between
governmental personnel. The data exchange entities are encrypted using digital
certificates that belong to registry offices among different countries. The
dataset of Ubitech is named and referenced as “Ubitech Cross Border Exchange
Data”.
2. **Data set description**
The “Ubitech Cross Border Exchange Data” involve many entities. Each of these
entities is related to specific tables in an RDBMS such as Countries, Clerks,
Municipalities, Certificate Data, Users, etc.
The “Ubitech Cross Border Exchange Data” are stored in a relational database.
Some of the main entities (data types) that are used are the following:
* Clerk: the physical person in a registry office who can issue a certificate request/response
* Country: the name of countries
* DivisionHierarchy: the definition of geographical structure of each Country
* Region: the relations between each division of each Country
* Task: a certificate request/response assigned to a Clerk
* UMDBOffices: contains all the registry offices that have been created under an admin user
* UMDBUsers: contains all the users that have been created under an admin user
* User: represents a physical person based on the Distinguished Names (DN) of its certificate who has access to the platform
An indicative scenario for a common use of the “Ubitech Cross Border Exchange
Data” platform is the following:
“A person born in Rome, Italy, dies in Brussels, Belgium. Therefore a
respective automatic notification is sent from Brussels to Rome.” In this
scenario a Clerk (registry officer) in Brussels creates a death report
(Convention 3 - Formula C) regarding the death of the person and digitally
signs the report. After the report is digitally signed, it is encrypted based
on the public key of the receiving Clerk (in Rome, Italy). After the
encryption of the report the Clerk forwards the report to the region where the
person was born, which is Rome, Italy. The Clerk in the registry office (RO)
of Rome can open the report and thus is notified about the death of the
person. Note that the report is decrypted at the exact moment the Clerk opens
it within the “Ubitech Cross Border Exchange Data” platform. Only the
specific receiving Clerk can open the encrypted report, because the public
key of his/her certificate was used to encrypt the death report.
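The sign-then-encrypt workflow in this scenario can be illustrated with a short, hedged Python sketch using the `cryptography` package. It is a simplified stand-in for the platform's certificate-based implementation: freshly generated RSA keys replace real registry-office certificates, and a hybrid scheme (a symmetric key wrapped with the recipient's public key) is assumed so that reports of any size can be handled.

```python
# Simplified sketch of the sign-then-encrypt workflow, not the platform's
# actual code: the sender signs the report, then encrypts it so that only
# the receiving Clerk's private key can recover it.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.fernet import Fernet

# Stand-ins for the Clerks' certificate key pairs.
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

report = b"Death report (Convention 3 - Formula C) ..."

# 1. The Brussels Clerk digitally signs the report.
signature = sender_key.sign(
    report,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# 2. Hybrid encryption: a fresh symmetric key encrypts the report, and the
#    receiving Clerk's public key encrypts that symmetric key.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(report)
wrapped_key = receiver_key.public_key().encrypt(
    sym_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# 3. On opening, the Rome Clerk unwraps the key, decrypts and verifies.
recovered_key = receiver_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
sender_key.public_key().verify(  # raises InvalidSignature if tampered
    signature, plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```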
Figure 1 contains a partial database schema of the “Ubitech Cross Border
Exchange Data” platform describing the above entities.
**Figure 1: Partial database schema of the “Ubitech Cross Border Exchange
Data” platform**
Altogether, the database contains 105569 data sets regarding the above
entities. In Table 1, the distribution of data sets for the particular data
types is displayed.
**Table 1: Scale of Ubitech cross Border exchange data**
<table>
<tr>
<th>
**Entity (Data Type)**
</th>
<th>
**Number of data sets**
</th> </tr>
<tr>
<td>
Clerk
</td>
<td>
34
</td> </tr>
<tr>
<td>
Country
</td>
<td>
194
</td> </tr>
<tr>
<td>
DivisionHierarchy
</td>
<td>
59
</td> </tr>
<tr>
<td>
Region
</td>
<td>
102796
</td> </tr>
<tr>
<td>
Task
</td>
<td>
1960
</td> </tr>
<tr>
<td>
UMDBOffices
</td>
<td>
277
</td> </tr>
<tr>
<td>
UMDBUsers
</td>
<td>
190
</td> </tr>
<tr>
<td>
User
</td>
<td>
59
</td> </tr>
<tr>
<td>
**Total**
</td>
<td>
**105569**
</td> </tr> </table>
**2.3 Standards and metadata**
Ubitech uses a set of conventions 1 for importing and exporting data in the
RDBMS. Reports generated by “Ubitech Cross Border Exchange Data” platform (in
PDF format) can be considered the main form of data export. One of the primary
conventions is that each generated report is digitally signed in order to
preserve the identity of the owner.
**2.4 Data sharing**
Ubitech will share a full database schema of the “Ubitech Cross Border
Exchange Data” platform within the PaaSword project with the project partners.
In addition, Ubitech will share a test data set (approximately 50000 tuples)
concerning four counties exchanging data between them through the “Ubitech
Cross Border Exchange Data” platform. This data set will be publicly released.
**2.5 Archiving and preservation**
The entire storage data set will probably not exceed a maximum of 2 GB.
Ubitech will archive the data set at least until the end of the project. A
full schema of the database is provided by the Ubitech RDBMS system. Ubitech
along with the rest of the consortium partners will further examine platform solutions
solutions (e.g. _https://joinup.ec.europa.eu/_ and _http://ckan.org/_ )
that will allow the sustainable archiving of all the PaaSword datasets after
the life span of the PaaSword project.
3. **Secure Sensors Data Fusion and Analytics**
1. **Data set reference and name**
Siemens builds up its experimental data sources based on current business and
research projects. Siemens builds its own simulation tools, including
simulated data, based on existing, known, real-life data sources. Such an
approach guarantees a replication of business cases while preserving the
privacy of potentially sensitive data. For convenience, the name to be used is
“SIEMENS Logistic Data”.
2. **Data set description**
Logistic problems refer to a range of directly measured, historical and
inferred data arriving from various data sources: ERP Systems, databases,
connected devices, mobile devices, and logging systems. Based on the
complexity of the subject, those data may be imported from a number of
different sources: secured connections, on premise, cloud or multi-cloud
environments.
In order to meet the experimental needs of PaaSword and also be representative
of the large volume of real life use cases, the data set will be inferred and
simulated taking into account a few relevant dimensions:
* Type: static and dynamic
* Format: text files, PDF files, SQL binary streams
* Location: on premise, public cloud, private cloud, mobile data
Since the Siemens team develops a number of logistics-oriented solutions that
refer to both sensor and IT systems data, a reduced-schema database, providing
common-format information with various frequencies of SCRUD operations, will
be delivered for project research and experimental use at the end of Month 7.
The provided schema will be deployed on a NoSQL-type database, which, in the
Big Data context of Siemens' relevant projects, provides a suitable level of
design simplicity and performance.
The business meaning of data that use and implement sensor data fusion for
logistic sub-processes is vast; nonetheless, it is possible to mention a few
key types:
* Tags
* Measurements
* Measurement precision
* Alarms
* Events
* Product
* Packaging
* Location
* Frequency of measurement
* Warehousing conditions
* Warehousing location/capability
* Transportation conditions
* Transportation and warehousing compatibilities
* Transportation meaning
* Transportation communication device
Table 2 estimates the size of data that could be used, considering the various
data types.
## Table 2: Scale of Siemens Sensors Data Fusion and Analytics data
<table>
<tr>
<th>
**Entity (Data Type)**
</th>
<th>
**Number of data sets**
</th> </tr>
<tr>
<td>
Tags
</td>
<td>
3000
</td> </tr>
<tr>
<td>
Measurements
</td>
<td>
800000
</td> </tr>
<tr>
<td>
Measurements precision
</td>
<td>
12
</td> </tr>
<tr>
<td>
Alarms
</td>
<td>
50000
</td> </tr>
<tr>
<td>
Events
</td>
<td>
70000
</td> </tr>
<tr>
<td>
Product
</td>
<td>
500
</td> </tr>
<tr>
<td>
Packaging
</td>
<td>
50
</td> </tr>
<tr>
<td>
Location
</td>
<td>
3000
</td> </tr>
<tr>
<td>
Frequency of measurements
</td>
<td>
10
</td> </tr>
<tr>
<td>
Warehousing conditions
</td>
<td>
20
</td> </tr>
<tr>
<td>
Warehousing location
</td>
<td>
50
</td> </tr>
<tr>
<td>
Transportation conditions
</td>
<td>
50
</td> </tr>
<tr>
<td>
Transportation and warehousing compatibilities
</td>
<td>
30
</td> </tr>
<tr>
<td>
Transportation meaning
</td>
<td>
10
</td> </tr>
<tr>
<td>
Transp. Communication device
</td>
<td>
50
</td> </tr>
<tr>
<td>
**Total**
</td>
<td>
**926782**
</td> </tr> </table>
Each of the listed types may have different privacy and security profiles
based on its specific use within a logistical process. Those profiles usually
specify when and to whom data is visible or may be manipulated.
A possible scenario involving the previous data types for the Siemens use case
may look like this:
“One company, specializing in various logistical aspects through the whole
value chain, is offering to its customers a set of multi-site warehousing
facilities served by various means of transportation.” This infrastructure
aims to support different, product-oriented companies that externalize
logistic details for cost reduction. The logistic company manages the
transportation conditions, packaging and grouping of products inside the
different transfer steps between customers’ facilities, providing adapted and
monitored warehousing and transportation conditions as well as active and
passive tagging of products and packaging. These aspects are achieved by
deploying sensors and communication capabilities attached both to
transportation and carried products. Since products may raise different
sensitivity issues, a middleware capable of generating different alarms and
events should run on top of the data infrastructure, requesting readings with
a variable frequency, and serving, in an isolated way, both the logistics
company and its customers, which can run their own analytics. Analytics
capabilities and middleware should provide configurability, traceability and
accountability of logistics services in close to real time.
Since the data to be provided will be based on simulated processes and will be
generated in laboratory, it will be made available to all project partners to
be used in scientific investigations. Depending on the different levels of
volume and complexity as well as the variations in throughput and precision
that will be considered, the total size of the dataset can range from 10 GB to
500 GB.
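To illustrate what laboratory-generated simulated data of this kind might look like, the following Python sketch produces mock measurement records for some of the data types listed above. The field names, value ranges and alarm rule are illustrative assumptions, not the actual SIEMENS Logistic Data schema.

```python
# Illustrative generator of mock logistics sensor measurements; field names
# and value ranges are assumptions, not the actual Siemens schema.
import json
import random
from datetime import datetime, timedelta

def generate_measurements(n_tags=5, n_readings=10):
    records = []
    start = datetime(2016, 1, 1)
    for tag_id in range(n_tags):
        for i in range(n_readings):
            temperature = round(random.uniform(-5.0, 30.0), 2)
            records.append({
                "tag": f"TAG-{tag_id:04d}",
                "timestamp": (start + timedelta(minutes=15 * i)).isoformat(),
                "measurement": temperature,   # e.g. container temperature
                "precision": 0.5,             # assumed sensor precision class
                "location": f"WAREHOUSE-{random.randint(1, 3)}",
                "alarm": temperature > 25.0,  # simple threshold alarm
            })
    return records

if __name__ == "__main__":
    print(json.dumps(generate_measurements(n_tags=1, n_readings=3), indent=2))
```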
**3.3 Standards and metadata**
Usually (as will be the case here), the metadata is described in an XML DTD
and/or using semantic annotations, and will follow standards such as SSN.
Still, since formats may vary due to the integration of various proprietary
systems, a common data description will be agreed with the project partners
for each type of source.
**3.4 Data sharing**
Siemens will share a relevant volume of data and associated metadata and
connectors. During the first year of the project, a set of agreed procedures
for sharing will be established, with the current assumption being that the
project's ownCloud repository will be sufficient for the metadata part. Since
the provided use case is extracted from real-life experiences, the measure of
confidentiality needed for public access will be evaluated. Based on this
evaluation, a set of metadata (especially metadata based on Open Data sources)
will be released as a public resource.
**3.5 Archiving and preservation**
A local Siemens data centre facility will be used for storage and backup.
Since we are dealing with experimental data, the volume of the data sets may
vary based on the experimental needs for reaching the project's objectives.
Siemens, along with the rest of the consortium partners, will further examine
platform solutions (e.g. _https://joinup.ec.europa.eu/_ and
_http://ckan.org/_ ) that will allow the sustainable archiving of all the
PaaSword datasets after the life span of the PaaSword project.
4. **Protection of personal data in a multi-tenant CRM environment**
1. **Data set reference and name**
CAS is using classical CRM data. In the PaaSword project, the data set of CAS
is named and referenced as “CAS CRM Data”.
2. **Data set description**
Because classical CRM data is composed of a mix of personal data and
confidential business data, CAS exclusively utilizes mock data for system
demonstrations, system development, system tests, and research. CAS CRM Data
is suitable for use with CAS Open, the pilot platform of CAS in PaaSword. In
order to allow meaningful system tests and demonstrations, data volume,
structure, coverage, and associations between the mocked data objects
contained in the CAS CRM Data are complete in the technical dimension and
reflect the typical data set of a customer using CAS Pia (i.e. the cloud-based
CRM solution of CAS Software AG built on top of CAS Open).
In addition to the mocked data objects, CAS CRM Data also includes sample
users, user profiles, and resources, realistic in terms of amount and type.
They are necessary for manual and automated permission system tests as well as
for interactive system demonstrations. System configurations and user settings
are part of the data set.
CAS Open is a multi-tenant system, following the one-schema-per-tenant
approach. Because of that, CAS CRM Data by default contains three full
tenants, which is typically sufficient for the purpose of testing tenant
isolation and version update operations. Additional tenants can be easily
created by cloning.
The CRM data is stored either in relational databases or are document files.
The following data types are used:
* Contacts
* Appointments
* E-mails
* Documents, e.g. office documents, text documents, etc.
* Campaigns
* Opportunities
* Tasks
* Phone calls
* Projects
* Products
A partial database schema describing the entities in “CAS CRM Data” is
displayed in Figure 2.
**Figure 2: Data Model CAS CRM Data** (Contacts, Phone calls, Opportunities,
Campaigns, E-mails, Appointments, Tasks, Documents, Products, Projects)
These data types are dynamic in the sense that the user can extend every data
type by adding new attributes. When adding personal attributes to formerly
non-personal data types, the extended data type will also become a personal
data type.
In order to manage permissions every named data type has a corresponding
permission model that includes the access management data for CAS Open’s
discretionary access control (DAC) mechanisms, including owner type (e.g.
user) and the role (e.g. participant).
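A hedged sketch of how such a per-record permission model might be represented is shown below; the class names, fields and access check are illustrative assumptions, not CAS Open's actual DAC implementation.

```python
# Illustrative model of a discretionary access control (DAC) record attached
# to a data type; names are assumptions, not CAS Open's implementation.
from dataclasses import dataclass, field

@dataclass
class PermissionEntry:
    owner_type: str          # e.g. "user"
    owner_id: str            # identifier of the owner
    role: str                # e.g. "participant"
    rights: set[str] = field(default_factory=lambda: {"read"})

@dataclass
class Appointment:
    subject: str
    permissions: list[PermissionEntry] = field(default_factory=list)

    def can(self, user_id: str, right: str) -> bool:
        """Check whether a user holds a given right on this record."""
        return any(p.owner_id == user_id and right in p.rights
                   for p in self.permissions)

# Example: Britta grants Robert full access to the appointment.
appt = Appointment(subject="Phone call about new offering")
appt.permissions.append(
    PermissionEntry("user", "robert", "participant", {"read", "write"}))
assert appt.can("robert", "write") and not appt.can("alice", "read")
```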
An indicative scenario for a common use of “CAS CRM Data” is the following:
“CRM systems focus on managing (i.e. planning, controlling and executing) all
interactive processes with the customer, like arranging phone calls, managing
opportunities or organizing meetings. Britta wants to organize a phone call
about a new offering with Robert. Therefore, she generates a new appointment
in their CRM system, CAS Pia, and includes Robert as a participant with full
access permissions. Britta wants to share a document with Robert containing
the offer, which is confidential content. Therefore, the document is encrypted
before Britta attaches it to the appointment in the CRM system. After Britta
recorded the appointment, CAS Pia notifies Robert about the new appointment
that was added to his calendar. Robert opens the calendar and has a look at
the appointment. He notices that Britta has attached an encrypted document.
The document is decrypted at the moment Robert opens it in his CAS Pia. Only
Robert can decrypt the file, because Britta used Robert's public key for the
encryption.”
The test data set can be used for scientific publications concerning the
integration of the PaaSword framework into the operation of a multi-tenant CRM
system.
Altogether, the database contains 2130 data sets per tenant. Table 3 displays
the distribution of data sets per data type.
## Table 3: Scale of CAS CRM Data
<table>
<tr>
<th>
**Data Type**
</th>
<th>
**Number of data sets**
</th> </tr>
<tr>
<td>
Contacts
</td>
<td>
404
</td> </tr>
<tr>
<td>
Appointments
</td>
<td>
1110
</td> </tr>
<tr>
<td>
E-mails
</td>
<td>
48
</td> </tr>
<tr>
<td>
Documents
</td>
<td>
31
</td> </tr>
<tr>
<td>
Campaigns
</td>
<td>
6
</td> </tr>
<tr>
<td>
Opportunities
</td>
<td>
21
</td> </tr>
<tr>
<td>
Tasks
</td>
<td>
485
</td> </tr>
<tr>
<td>
Phone calls
</td>
<td>
25
</td> </tr>
<tr>
<td>
Projects
</td>
<td>
0
</td> </tr>
<tr>
<td>
Products
</td>
<td>
0
</td> </tr>
<tr>
<td>
**Total**
</td>
<td>
**2130**
</td> </tr> </table>
**4.3 Standards and metadata**
CAS uses standards for importing and exporting data in the CRM system. For the
import/export of contacts, the vCard 3 format is used. The datatype-
independent import/export of data uses the CSV 4 format. Reports generated
by CAS Open (in PDF format) can be considered as another form of data export.
A database schema with the sole purpose of storing the metadata necessary for
the operation of CAS Open is included in the CAS CRM Data.
**4.4 Data sharing**
CAS will share the test data set with the PaaSword project with all partners
and make it publicly available. The cloud-based CRM solution CAS Pia can be
used through a standard browser. In order to grant access to the project
partners, CAS will install a demo client and configure a demo user for each
partner. The demo system will be based on the test database described in
Section 4.2. The data set can be reused by every project partner.
**4.5 Archiving and preservation**
The final volume of the data set will probably not exceed the maximum of 1GB.
CAS will archive the data set at least until the end of the project. A backup
of the database is provided by the CAS system. There will be no costs arising
from these activities. CAS, along with the rest of the consortium partners, will
further examine platform solutions (e.g. _https://joinup.ec.europa.eu/_ and
_http://ckan.org/_ ) that will allow the sustainable archiving of all the
PaaSword datasets after the life span of the PaaSword project.
5. **Protection of Sensible Enterprise Information in Multi-tenant ERP Environments**
1. **Data set reference and name**
SingularLogic is using a data set that is part of its Multi-tenant ERP system.
In the PaaSword project, the dataset offered by SingularLogic is named and
referenced as “SILO ERP Data”.
2. **Data set description**
Due to the private and confidential nature of the stored data of
SingularLogic’s ERP systems, the data provided for the PaaSword project will
be mocked. The produced, mocked data that constitute “SILO ERP Data” will,
however, be suitable for use with the specific ERP from SingularLogic’s
portfolio that will be used as pilot in PaaSword. The data volume, structure,
coverage, and associations between the mocked data objects contained in the
SILO ERP Data have been created in such way that they allow meaningful system
tests and demonstrations, in real-world usage scenarios.
Real SILO ERP data are stored in relational databases; the same approach will
be used for SILO ERP data used in PaaSword. The following data types are part
of SILO ERP Data.
* Contacts
* Calendar
* Projects
* People
* Invoices
* Payments
* Agreements
* Products
* Inventory
* Tenants
* Accounts (Billing and Financial Accounts, Credit Cards, Bank Accounts, Bonds)
* Customer Requests
* Documents
* User Profiles
Part of the database schema describing the most important tables is presented
in Figure 3.
**Figure 3: Partial database schema of the “SILO ERP” platform**
Multi-tenancy is supported in SILO ERP and it has the ability to run separate
data instances (tenants) from a single ERP installation. Each data instance is
kept in a separate database (one-schema-per-tenant) that is selected when a
user logs into the application. For this reason, SILO ERP Data includes four
tenants that can be used for proper testing of multi-tenancy scenarios.
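A minimal sketch of the one-schema-per-tenant selection step at login is given below. It uses SQLite files as stand-ins for the tenant databases, and all names are illustrative assumptions rather than SILO ERP code.

```python
# Illustrative one-schema-per-tenant lookup: each tenant's data lives in its
# own database, selected at login. Names are assumptions, not SILO ERP code.
import sqlite3

# Hypothetical mapping from tenant identifier to its dedicated database.
TENANT_DATABASES = {
    "tenant_a": "silo_tenant_a.db",
    "tenant_b": "silo_tenant_b.db",
}

def connect_for_user(tenant_id: str) -> sqlite3.Connection:
    """Open a connection to the database of the user's tenant only."""
    try:
        db_file = TENANT_DATABASES[tenant_id]
    except KeyError:
        raise ValueError(f"Unknown tenant: {tenant_id}")
    return sqlite3.connect(db_file)

# At login the appropriate tenant is selected; only its data is visible.
conn = connect_for_user("tenant_a")
conn.execute("CREATE TABLE IF NOT EXISTS payments (id INTEGER PRIMARY KEY,"
             " amount REAL, note TEXT)")
conn.execute("INSERT INTO payments (amount, note) VALUES (?, ?)",
             (120.50, "stored via the payments form"))
conn.commit()
```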
An indicative scenario for a common use of the “SILO ERP” platform is the
following:
“A user of SILO ERP made a payment and wants to store this information in
his/her account on SILO ERP”. In this scenario, the user accesses SILO ERP and
logs in. During login process the appropriate tenant is selected and the
user’s data is displayed. Critical data are encrypted in the database and
decrypted when needed. The user navigates through the menu to payments and
adds payment information in the appropriate form. The payment is then stored
in the corresponding database tables.
Altogether the database contains about 1166 data sets per tenant,
corresponding to the database entities presented above. Slight differences
occur between tenant databases as some changes have been introduced in order
to differentiate the tenants. The distribution of data sets per data type is
displayed in Table 4.
## Table 4: Scale of snapshot data in SILO ERP
<table>
<tr>
<th>
**Entity (Data Type)**
</th>
<th>
**Number of data sets**
</th> </tr>
<tr>
<td>
Contacts
</td>
<td>
64
</td> </tr>
<tr>
<td>
Calendar Items
</td>
<td>
268
</td> </tr>
<tr>
<td>
Projects
</td>
<td>
10
</td> </tr>
<tr>
<td>
People
</td>
<td>
70
</td> </tr>
<tr>
<td>
Invoices
</td>
<td>
160
</td> </tr>
<tr>
<td>
Payments
</td>
<td>
258
</td> </tr>
<tr>
<td>
Agreements
</td>
<td>
9
</td> </tr>
<tr>
<td>
Products
</td>
<td>
90
</td> </tr>
<tr>
<td>
Inventory Items
</td>
<td>
155
</td> </tr>
<tr>
<td>
Tenants
</td>
<td>
4
</td> </tr>
<tr>
<td>
Accounts
</td>
<td>
7
</td> </tr>
<tr>
<td>
Documents
</td>
<td>
60
</td> </tr>
<tr>
<td>
Customer Requests
</td>
<td>
4
</td> </tr>
<tr>
<td>
Users
</td>
<td>
7
</td> </tr>
<tr>
<td>
**Total**
</td>
<td>
**1166**
</td> </tr> </table>
**5.3 Standards and metadata**
SingularLogic's approach is to use industrial and open standards in its
products and projects. The specific ERP used for the purposes of PaaSword is
based on open-source solutions, and standards-based export and import
functions are offered through SILO ERP. Export in XML and MS Excel (“xls”)
formats is supported. The “xls” format support allows the transformation of
the exported data to other data formats supported by MS Excel, like CSV.
**5.4 Data sharing**
SingularLogic will share the test data set with the PaaSword project partners.
ERP accounts have been created for project use and test data have been
exported already. The dataset provided can be shared publicly.
**5.5 Archiving and preservation**
The data set provided by SingularLogic for the project will be archived at
least until the end of the project. The archiving will be part of the backup
strategy currently taking place for products that Singular already offers. The
data set’s final volume will not exceed the 1 GB boundary. The standard backup
strategy of Singular products will be used. No extra costs will arise for
archiving and preserving the SILO ERP data set for the project's duration.
SingularLogic, along with the rest of the consortium partners, will further examine
platform solutions (e.g. _https://joinup.ec.europa.eu/_ and
_http://ckan.org/_ ) that will allow the sustainable archiving of all the
PaaSword datasets after the life span of the PaaSword project.
**6 Conclusions**
The initial PaaSword DMP presented in this deliverable will be updated
accordingly throughout the lifetime of the project in D7.2 Dissemination
Activities Report (M12, M24 and M36). The following table summarizes the
datasets that were discussed in the previous sections and will be made
available by the PaaSword consortium.
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Description**
</th>
<th>
**Estimated size**
</th> </tr>
<tr>
<td>
CAS CRM
Data
</td>
<td>
Mock CRM data composed of a mix of personal data and confidential business
data.
* Sample users, user profiles, and resources, realistic in terms of amount and type
* Data types: Contacts, Appointments, E-mails, Documents, Campaigns,
Opportunities, Tasks, Phone calls, Projects, Products
</td>
<td>
< 1 GB
</td> </tr>
<tr>
<td>
SILO ERP
Data
</td>
<td>
Mock data suitable for use in multi-tenant ERP systems.
* Data volume, structure, coverage, and associations between the mocked data objects will allow for meaningful system demonstrations.
* Data types: Contacts, Calendar, Projects, People, Invoices, Payments,
Agreements, Products, Inventory, Tenants, Accounts, Customer Requests,
Documents, User Profiles
</td>
<td>
< 1 GB
</td> </tr> </table>
---

**1005_HUMANE_645043.md**
# Executive summary
This deliverable presents the first version of the Data Management Plan (DMP)
for the HUMANE project, and is a mandatory report for all projects
participating under the ICT31 Open Research Data pilot in Horizon 2020.
The deliverable first presents the key considerations made to ensure open
access to both research data and project publications. We next describe the
background for why and how HUMANE needs to be an open access project,
influencing the overall data management processes. The deliverable next
describes the data sets to be gathered, processed and analysed. These data set
descriptions follow the Horizon 2020 DMP template provided by the European
Commission. This template was circulated to the project-partners responsible
for the different studies to be conducted, and partners completed the data set
descriptions according to the current plans for gathering and analysis of data
as well as the methods and processes foreseen to be applied to ensure open
access and data sharing of the data. Where open access to research data
represents a risk for compromising the privacy of study participants, data
will not be shared or made accessible.
As a final activity in preparing the DMP, we have reviewed HUMANE-relevant
open access journals, focusing on gold open access without author processing
fees and green open access journals with a maximum of 12 months embargo period
for self-archiving in repositories. The review resulted in a long list of
potential publication-venues.
# Introduction
All projects under ICT 31 participate in the Open Research Data pilot. This
implies requirements for open access to research data and open access to
scientific publications. Open access is defined by the EC as "the practice of
providing online access to scientific information that is free of charge to
the end-user and that is re-usable" (European Commission 2013a, p. 2).
This Data management plan (DMP) describes the data management life cycle for
the data sets to be collected and processed by HUMANE. The DMP outlines the
handling of research data during the project, and how and what parts of
data sets will be made available after the project has been completed. This
includes an assessment of when and how data can be shared without disclosing
directly or indirectly identifiable information from study participants.
The DMP specifies deadlines for the availability of research data, describes
measures to ensure data are properly anonymized to ensure the privacy of
informants and respondents, and to ensure the open data strategy does not
violate the terms made with the interlinked R&I projects.
With regard to access to research data HUMANE will make the data available in
a research data repository to make it possible for third parties to access,
mine, exploit, reproduce and disseminate - free of charge - the data and
metadata. Research data was originally planned to be archived at the Norwegian
Social Science Data Services to ensure re-use in future research projects and
follow-up studies. Working with this deliverable, project partners decided to
use Zenodo as the project data and publication repository, and to link the
repository to a HUMANE project-site at OpenAIRE. This decision was made to
make sure the data and publications are as easily discoverable and accessible
as they should be, assessing Zenodo and OpenAIRE to be a better option than
the Norwegian Social Science Data Services.
With regard to open access to scientific publications, HUMANE aims to publish
in open access journals (gold open access), and to make publications behind
pay-walls available as final peer-reviewed manuscripts in an online repository
after publication (green open access). To ensure gold open access, the HUMANE
budget includes costs for Article Processing Charges (APC), yet a review of
HUMANE-relevant journals conducted for this deliverable indicates that a
better option is to choose gold open access journals without APC or journals
offering green option access. With regard to the latter, following the
recommendations of the data management plan ensures we only submit our work to
journals with a maximum of 12 months embargo period for self-archiving in
repositories.
This deliverable is structured as follows. In chapter 2, we will describe the
guiding principles for the overall data management of HUMANE. In chapter 3 we
will present the data sets to be gathered, processed and analysed, following
the H2020 DMP template (European Commission 2013b). For each data set, we
will: (i) provide an identifier for the data set to be produced; (ii) provide
the data set description; (iii) refer to standards and metadata; (iv) describe
how data will be shared; and (v) describe the procedures for archiving and
long-term preservation of the data. In chapter 4 we describe how HUMANE plans
to comply with the Horizon 2020 mandate on open access to publications.
# Guiding principles
The legal requirements for open research data in ICT topic 31 projects are
contained in Article 29.3 of the Grant Agreement, which states that:
_Regarding the digital research data generated in the action ('data'), the
beneficiaries must:_
1. _Deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate – free of charge for any users – the following:_
1. _the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible._
2. _other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan':_
2. _provide information – via the repository – about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and – where possible – provide the tools and instruments themselves)._
As can be interpreted from article 29.3 in the Grant Agreement, the objectives
of open access to data primarily concern two aspects: to have raw-data
available for post-validation of research results; and to permit re-use in
future research projects. Relatedly, as emphasized by the EC (2013a), open
research data can help to accelerate innovation; foster collaboration and
avoid duplication of efforts; build on previous research results; and increase
the transparency of the scientific process.
_**Open access to research data and to publications should, however, not
represent a risk of compromising the privacy of informants participating in
the different HUMANE case-studies by openly publishing datasets in which
persons, households or families are identified. This DMP assesses when and how
data can be shared within a sound research ethics framework, where directly or
indirectly identifiable information is not disclosed at any stage in the
research process.**_
In addition to open access to research data, HUMANE will comply with the
requirements for open access to scientific publications. We will return to
this in section 4. In section 3 below, we describe the data sets to be
gathered and processed in HUMANE, and the procedures followed to ensure open
access to these data sets without violating the privacy of informants taking
part in the HUMANE case-studies.
Figure 1 illustrates the main points for how open access to research data and
publications will be ensured in the project.
**Figure 1: HUMANE open access to data and publications.**
Finally, it is worth noting that open access to research data and publications
is important within the context of responsible research and innovation 1 .
Ensuring research data and publications can be openly and freely accessed,
means any relevant stakeholder can choose to cross-check and validate whether
research data are accurately and comprehensively reported and analysed, and
may also encourage re-use and re-mixing of data. A better exploitation of
research data has much to offer, also in terms of alleviating the efforts
required by study participants as well as researchers. Optimizing sharing of
research data could potentially imply less duplication of very similar studies
as previously collected data sets may be used at least as additional sources
of data in new projects. Again, we emphasize that open access to research data
must comply with sound research ethics, ensuring no directly or indirectly
identifiable information is revealed.
# Data sets to be gathered and processed in HUMANE
In this chapter we describe the different data sets that are planned to be
gathered and processed by the HUMANE-partners. These descriptions follow the
template provided by the EC for open research data projects in Horizon 2020
(European Commission 2013b). This template (see section 7: Appendix) was
circulated to be completed by the project-partners responsible for the
different studies to be conducted. The data sets follow many of the same
procedures, e.g. with regard to using Zenodo as
an open data repository. This means the same wording is often repeated in the
different data sets. As each data set description should give a comprehensive
overview of the gathering, processing and open access archiving of data, we
assessed it as necessary to repeat the procedures in the different data set
descriptions. The name for each data set includes a prefix "DS" for data set,
followed by a case-study identification number, the partner responsible for
collecting and processing the data, as well as a short title. The H2020 DMP
template requires that information about data set metadata is provided. We
have primarily based the outlining of how and what metadata will be created on
the guidelines provided by the European University Institute (2015).
Table 1 gives an overview of the data sets to be collected. The descriptions
of each data set, following the H2020 template, are provided in the following
sections. **Table 1: Overview of data sets**
<table>
<tr>
<th>
**No.**
</th>
<th>
**Identifier/name**
</th>
<th>
**Brief description**
</th> </tr>
<tr>
<td>
1
</td>
<td>
DS.C1.SINTEF. Open
Innovation data set
</td>
<td>
This data set will provide accounts on open innovation in terms of involved
personnel and customers' experiences of cross- and intraorganizational
collaboration, and motivation-mechanisms such as gamification.
</td> </tr>
<tr>
<td>
2
</td>
<td>
DS.C2.SINTEF. Redistribution markets data set
</td>
<td>
This data set will provide accounts on customer-experiences with
redistribution markets, including both the experiences of selling and buying
products.
</td> </tr>
<tr>
<td>
3
</td>
<td>
DS.C3.IT Innovation. eVACUATE data set
</td>
<td>
This data set will provide qualitative insights on using the HUMANE typology
and method for human machine networks for crisis management from use case end-
users and ICT/system architects.
</td> </tr>
<tr>
<td>
4
</td>
<td>
DS.C4.ATC. REVEAL data set
</td>
<td>
This data set will provide accounts on journalists’ experiences regarding the
use of REVEAL platform, which aims to analyse the credibility and
trustworthiness of diverse online sources.
</td> </tr>
<tr>
<td>
5
</td>
<td>
DS.C5.UOXF.
Wikipedia data set
</td>
<td>
This data set will provide the analysed large-scale raw data extracted from
selected channels for Wikipedia data, aiming to address transactions and
collaboration between active and contributing Wikipedia-users.
</td> </tr>
<tr>
<td>
6
</td>
<td>
DS.C6.UOXF.
Zooniverse data set
</td>
<td>
This data set will provide logs of contributors' classifications at the
citizen science portal Zooniverse.
</td> </tr>
<tr>
<td>
7
</td>
<td>
DS.C7.ATC. Roadmap data set
</td>
<td>
This data set will provide the raw-data from a survey conducted with relevant
practitioners to gather data on needs, expectations and experiences with
human-machine networks.
</td> </tr> </table>
## DS.C1.SINTEF. Open innovation data set
The DS.C1 data-set consists of: (1) Qualitative and anonymized interview-
transcripts with employees in an enterprise, which uses an online open
innovation solution for gathering suggestions and ideas for service
innovation; (2) Anonymized and primarily qualitative data from a survey with
customers who have contributed ideas and suggestions to the open innovation
solution, and (3) Anonymized interview-transcripts from focus-groups with
enterprise-employees.
Anonymous data are items of information that cannot in any way identify
individuals in the data material directly through names or personal ID
numbers, indirectly through background variables, or through a list of names /
connection key or encryption formula or code. The data set will not include
the name of the company we are studying. The combination of background
variables such as gender, age, employee role in the company and the company
name increases the risk of identifying individuals in the data material. At
this stage we assess that withholding the company name is sufficient to ensure
the privacy of the informants, but we will need to re-assess this
continuously.
In order to ensure confidentiality, the lists linking participant names to
reference numbers will be kept separate from the empirical data. These lists
will not be stored together with the main material, but on an isolated
computer belonging to the institution conducting the case study, and will be
accessible only to the person in charge of that case study.
### Data set description
**Origin of data:** The data set will provide accounts on open innovation in
terms of involved personnel's and customers' experiences of cross- and
intra-organizational collaboration, and motivation mechanisms such as
gamification.
The data in this data set will be collected by SINTEF.
**Nature and scale of data:** (1) Transcripts of interview data in the
language in which the interviews were conducted (English or Norwegian); (2)
completed surveys in Norwegian (note: the survey will primarily include
open-ended questions); (3) transcripts of focus-group interviews in Norwegian.
**To whom the data set could be useful:** Outside of the consortium, the data
in its anonymized form might be useful for other researchers interested in the
potentials and limitations of open innovation and crowdsourcing of ideas.
However, most of the transcripts will be in Norwegian, which clearly limits
the usefulness of the data outside the Scandinavian countries.
**Scientific publication:** It is our objective to use the data-set as a basis
for at least one scientific publication.
**Existence of similar data-sets?** To our knowledge, qualitative data sets on
the experiential aspects of online open innovation are not openly available.
### Standards and metadata
The following metadata (with indicative values) will be created:
* Author/compiler of data set: Marika Lüders, Dimitra Chasanidou, SINTEF (may be updated)
* Funded by: [HUMANE, H2020 – 645043]
* Format: [PDF/A]
* Content-data: open innovation, crowdsourcing, online open innovation, Norway
* Method of data accumulation: qualitative interviews, qualitative survey, focus-groups.
* Data collection period [from] – [to]: 15.10.2015-15.12.2016 (may be updated)
* Conditions of use of data: open access, free of charge.
* DOI: [assigned by Zenodo]
* Related publications [Bibliographic details of publications based on the data-set]
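As a hedged sketch of how the "DOI: [assigned by Zenodo]" step could be
handled programmatically, the snippet below registers an indicative metadata
record through the Zenodo REST API ( _https://developers.zenodo.org_ ); the
access token is a placeholder, and the field values merely mirror the
indicative list above.

```python
# Hedged sketch of registering this metadata when depositing the data
# set via the Zenodo REST API (https://developers.zenodo.org); the
# access token is a placeholder and field values are indicative only.
import requests

deposit = {
    "metadata": {
        "title": "DS.C1.SINTEF. Open innovation data set",
        "upload_type": "dataset",
        "description": "Anonymized interview, survey and focus-group data "
                       "on online open innovation (HUMANE, H2020 - 645043).",
        "creators": [
            {"name": "Lüders, Marika", "affiliation": "SINTEF"},
            {"name": "Chasanidou, Dimitra", "affiliation": "SINTEF"},
        ],
        "keywords": ["open innovation", "crowdsourcing", "Norway"],
        "access_right": "open",
    }
}

r = requests.post(
    "https://zenodo.org/api/deposit/depositions",
    params={"access_token": "REPLACE_WITH_TOKEN"},
    json=deposit,
)
r.raise_for_status()
# Zenodo assigns the DOI when the deposition is published; the new
# deposition id is returned immediately.
print(r.json()["id"])
```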
### Data sharing
**Access procedures:** The anonymized and transcribed data from the interviews
and the anonymized collation of survey responses will be made accessible and
available for re-use and secondary analysis by uploading the data to Zenodo.
For the transcribed interviews, the time, location and pseudonym for each
individual interview will be clearly stated.
**Document format and availability:** The data-set will be available as PDF/A
at _http://www.zenodo.org/collection/datasets_ . From there, the fully
anonymized data are openly accessible to anyone, free of charge.
The data will be uploaded to Zenodo in M24 of HUMANE's project period. Before
uploading datasets, we will first have to anonymize data. We plan to anonymize
the data in the final month of the project.
### Archiving and preservation (including storage and backup)
Archiving of the anonymized data-set at Zenodo guarantees a long-term and
secure preservation of the data at no additional cost for the project. Zenodo
informs that "in the highly unlikely event that Zenodo will have to close
operations, we guarantee that we will migrate all content to other suitable
repositories, and since all uploads have DOIs, all citations and links to
Zenodo resources (such as your data) will not be affected."
## DS.C2.SINTEF. Redistribution markets data set
The DS.C2 data-set consists of: (1) qualitative and anonymized interview
transcripts with adult end-users of the online redistribution service
Snapsale; (2) a database with raw data on the independent and dependent
variables in selling and buying experiments (quasi-experimental design); (3)
summaries of qualitative content analysis; and (4) interview transcripts from
focus groups with Snapsale managers and employees.
Anonymous data are items of information that cannot in any way identify
individuals in the data material directly through names or personal ID
numbers, indirectly through background variables, or through a list of names /
connection key or encryption formula or code. The data material from the
focus groups with Snapsale managers and employees will not be shared outside
the consortium. We consider it unviable to withhold the company name, as the
qualitative content analysis and the quasi-experiment require disclosing the
company. Once the company name is disclosed, the company employees and
managers cannot be properly anonymized, because the combination of company
name and background variables such as gender, age and company role increases
the risk of identifying individuals in the data material. The data from the
interviews with adult end-users of Snapsale can be properly anonymized and
will be shared as described below.
In order to ensure confidentiality, the lists linking participant names to
reference numbers will be kept separate from the empirical data. These lists
will not be stored together with the main material, but on an isolated
computer belonging to the institution conducting the case study, and will be
accessible only to the person in charge of that case study.
### Data set description
**Origin of data:** The data set will provide accounts on customer-experiences
with redistribution markets, including both the experiences of selling and
buying products. The data in this data set will be collected and analysed by
SINTEF.
**Nature and scale of data:** (1) Transcripts of interview data in Norwegian;
(2) a database on the selling/buying quasi-experiments in English; (3)
summaries of qualitative content analysis in English; (4) transcripts of
focus-group interviews in Norwegian (not shared outside the consortium).
**To whom the data set could be useful:** Outside of the consortium, the data
from the interviews with adult end-users of Snapsale in its anonymized form
might be useful for other researchers interested in user-experiences of
redistribution markets. The interview transcripts will be in Norwegian, which
limits the usefulness of the data outside of Scandinavian countries.
**Scientific publication:** It is our objective to use the data-set as a basis
for at least one scientific publication.
**Existence of similar data-sets?** To our knowledge, qualitative data sets on
the experiential aspects of online redistribution markets are not openly
available.
### Standards and metadata
The following metadata (with indicative values) will be created:
* Author/compiler of data set: Marika Lüders, Jan Håvard Skjetne, Aslak Eide, SINTEF (may be updated)
* Funded by: [HUMANE, H2020 – 645043]
* Format: [PDF/A; excel]
* Content-data: sharing-economy, redistribution markets, second hand consumption
* Method of data accumulation: qualitative interviews, focus groups, quasi-experimental design, qualitative content analysis.
* Data collection period [from] – [to]: 15.10.2015 – 15.12.2016 (may be updated)
* Conditions of use of data: open access, free of charge.
* DOI: [assigned by Zenodo]
* Related publications [Bibliographic details of publications based on the data-set]
### Data sharing
**Access procedures:** The anonymized and transcribed data from the interviews
with end-users, and the data from the quasi-experiment and content analysis
will be made accessible and available for reuse and secondary analysis by
uploading the data to Zenodo. For the transcribed interviews, the time,
location and pseudonym for each individual interview will be clearly stated.
**Document format and availability:** The data-set will be available as PDF/A
for the transcribed interviews and the summary of content analysis, and as an
Excel database for the quasi-experimental data, at
_http://www.zenodo.org/collection/datasets_ . From there, the fully anonymized
data are openly accessible to anyone, free of charge.
The data will be uploaded to Zenodo in M24 of HUMANE's project period. Before
uploading data-sets, we will first have to anonymize data. We plan to
anonymize the data in the final month of the project.
### Archiving and preservation (including storage and backup)
Archiving of the anonymized data-set at Zenodo guarantees a long-term and
secure preservation of the data at no additional cost for the project. Zenodo
informs that "in the highly unlikely event that Zenodo will have to close
operations, we guarantee that we will migrate all content to other suitable
repositories, and since all uploads have DOIs, all citations and links to
Zenodo resources (such as your data) will not be affected."
## DS.C3.IT Innovation. eVACUATE data set
The DS.C3 data set consists of: (1) qualitative and anonymised survey
responses from eVACUATE use case end-user participants from the different
eVACUATE use cases on whether the networks described in the HUMANE typology
adequately reflect the situation in the respective use case scenarios; (2)
qualitative and anonymised interview transcripts from telephone-based focus
group discussions with the same eVACUATE use case end-users around any issues
or concerns that may have been defined through the aforementioned survey; (3)
qualitative and anonymised interview transcripts from focus groups with
IT/system architects from eVACUATE using design patterns and applying the
HUMANE typology to the process of system design; (4) qualitative and
anonymised survey responses from eVACUATE IT/system architects, which will
establish a ranking of importance of factors arising from the aforementioned
focus groups; (5) summaries from the analysis of the four former sources of
data.
Anonymous data are items of information that cannot in any way identify
individuals in the data material directly through names or personal ID
numbers, indirectly through background variables, or through a list of names /
connection key or encryption formula or code. The data set will not include
the name of the specific eVACUATE use cases we are studying. The combination
of background variables such as gender, age, employee role in the use case and
the use case name increases the risk of identifying individuals in the data
material. Therefore, at this stage, we deem that withholding the
aforementioned information is sufficient to ensure the privacy of
participants; however, we will need to re-assess this continuously.
### Data set description
**Origin of data:** The data set is collected in the HUMANE project based on
responses from use case end-users and IT architects/designers from the
eVACUATE project. The data in this data set will be collected and analysed by
IT Innovation.
**Nature and scale of data:** All data is expected to be small-scale,
qualitative data from approximately 10-20 participants. There are four types
of data: (1) anonymised survey responses in English; (2) anonymised interview
transcripts in English; (3) anonymised focus group transcripts in English; (4)
anonymised summaries of analysis.
**To whom the data set could be useful:** Outside of the consortium, the data
in its anonymised form might be useful for other researchers interested in the
experience of system end-users and system designers of using the HUMANE
resources for designing human-machine networks. All data will be in English
and is thus widely accessible.
**Scientific publication:** It is our objective to use the data-set as a basis
for at least two scientific publications.
**Existence of similar data-sets?** To our knowledge, there are no similar
data-sets available, except for other data sets that will be generated in the
HUMANE project.
### Standards and metadata
The following metadata (with indicative values) will be created:
* Author/compiler of data set: Brian Pickering and Vegard Engen, University of Southampton IT Innovation Centre
* Funded by: [HUMANE, H2020 – 645043]
* Format: [PDF/A; excel]
* Content-data: evacuation, eVACUATE use cases, system design, HUMANE typology feedback and evaluation, technology-mediated collaboration, decision making for evacuation, dynamic HMN creation in crises.
* Method of data accumulation: qualitative surveys, qualitative interviews, focus groups, qualitative content analysis.
* Data collection period [from] – [to]: 01.11.2015 – 30.06.2016.
* Conditions of use of data: open access, free of charge
* DOI: [assigned by Zenodo]
* Related publications [Bibliographic details of publications based on the data-set]
### Data sharing
**Access procedures:** The anonymized and transcribed data from the interviews
and the anonymized collation of survey responses will be made accessible and
available for re-use and secondary analysis by uploading the data to Zenodo.
For the transcribed interviews, the time, location and pseudonym for each
individual interview will be clearly stated.
**Document format and availability:** The data-set will be available as PDF/A
at _http://www.zenodo.org/collection/datasets_ . From there, the fully
anonymized data are openly accessible to anyone, free of charge.
The data will be uploaded to Zenodo in M24 of HUMANE's project period. Before
uploading data-sets, we will first have to anonymize data. We plan to
anonymize the data in the final month of the project.
### Archiving and preservation (including storage and backup)
Archiving of the anonymized data-set at Zenodo guarantees a long-term and
secure preservation of the data at no additional cost for the project. Zenodo
informs that "in the highly unlikely event that Zenodo will have to close
operations, we guarantee that we will migrate all content to other suitable
repositories, and since all uploads have DOIs, all citations and links to
Zenodo resources (such as your data) will not be affected."
## DS.C4.ATC. REVEAL data set
The DS.C4 data-set consists of: (1) qualitative and anonymized interview
transcripts with adult end-users/journalists of the REVEAL platform; (2)
summaries of qualitative content analysis; (3) sets of data gathered from the
REVEAL human-machine network, regarding the dependencies between the network's
elements.
As in the data sets of the other use cases, the data will be anonymous,
meaning that they cannot in any way be used to identify individuals in the
data material, whether directly through names or personal ID numbers,
indirectly through background variables, or through a list of names /
connection key or encryption formula or code.
In order to ensure confidentiality, the lists linking participant names to
reference numbers will be kept separate from the empirical data. These lists
will not be stored together with the main material, but on an isolated
computer belonging to the institution conducting the case study, and will be
accessible only to the person in charge of that case study.
### Data set description
**Origin of data:** The data set will provide accounts on journalists'
experiences with the REVEAL platform, which is used to assess the credibility
of various online sources. The data in this data set will be collected and
analysed by ATC.
**Nature and scale of data:** (1) Transcripts of interview data in Greek; (2)
data related to the interaction between human and machine elements, in
English; (3) summaries of qualitative content analysis in English.
**To whom the data set could be useful:** Outside of the consortium, the data
in its anonymized form might be useful for other researchers interested in how
journalists assess the credibility of sources. The interview transcripts will
be in Greek, which limits the usefulness of the data outside Greece.
**Scientific publication:** It is our objective to use the data-set as a basis
for at least one scientific publication.
**Existence of similar data-sets?** To our knowledge, qualitative data sets on
journalists' assessment of source credibility are not openly available.
### Standards and metadata
The following metadata (with indicative values) will be created:
* Author/compiler of data set: George Bravos, Eva Jaho, ATC (may be updated)
* Funded by: [HUMANE, H2020 – 645043]
* Format: [PDF/A]
* Content-data: trustworthiness, online sources credibility, Greece
* Method of data accumulation: qualitative interviews, qualitative survey.
* Data collection period [from] – [to]: 01.11.2015 – 15.12.2015 (may be updated)
* Conditions of use of data: open access, free of charge.
* DOI: [assigned by Zenodo]
* Related publications [Bibliographic details of publications based on the data-set]
### Data sharing
**Access procedures:** The anonymized and transcribed data from the interviews
and the anonymized collation of survey responses will be made accessible and
available for re-use and secondary analysis by uploading the data to Zenodo.
For the transcribed interviews, the time, location and pseudonym for each
individual interview will be clearly stated.
**Document format and availability:** The data-set will be available as PDF/A
at _http://www.zenodo.org/collection/datasets_ . From there, the fully
anonymized data are openly accessible to anyone, free of charge.
The data will be uploaded to Zenodo in M24 of HUMANE's project period. Before
uploading datasets, we will first have to anonymize data. We plan to anonymize
the data in the final month of the project. At this point, the list with names
and reference-number to the participants will be deleted.
### Archiving and preservation (including storage and backup)
Archiving of the anonymized data-set at Zenodo guarantees a long-term and
secure preservation of the data at no additional cost for the project. Zenodo
informs that "in the highly unlikely event that Zenodo will have to close
operations, we guarantee that we will migrate all content to other suitable
repositories, and since all uploads have DOIs, all citations and links to
Zenodo resources (such as your data) will not be affected."
## DS.C5.UOXF. Wikipedia data set
Wikipedia is the focus of DS.C5. A notable feature of Wikipedia is that every
single action of its editors is tracked and recorded. This includes all edits
to articles, posts on talk pages, page deletions and creations, changes to
page titles, uploads of multimedia files, etc. Apart from the practical
advantages of this complete archiving, it is also extremely valuable from a
scientific point of view. There are three main channels for collecting
Wikipedia data:
* Live data: There are two convenient ways to access live Wikipedia data: (i) the Wikimedia Toolserver databases (http://toolserver.org/), which contain a replica of all Wikimedia wiki databases, and (ii) the MediaWiki web service API ( _https://www.mediawiki.org/wiki/API_ ); see the sketch after this list.
* Dumped data: Wikipedia also offers archived copies of its content ( _http://dumps.wikimedia.org_ ) in different formats (e.g., XML and HTML) and of different types (e.g., snapshots of the full history of articles, or a collection of the latest versions of all articles).
* Semantic Wikipedia: "Semantic Wikipedia", as a general concept, combines Semantic Web technologies and Wikipedia data to provide structured data sets through query services. There are various projects providing access to Semantic Wikipedia, for example "DBpedia" ( _http://dbpedia.org_ ), "Semantic MediaWiki" ( _http://semantic-mediawiki.org_ ), the "Wikipedia XML corpus" ( _http://www-connex.lip6.fr/~denoyer/wikipediaXML_ ), and most notably, Wikidata ( _https://www.wikidata.org_ ).
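As a small illustration of the live-data channel (see the first bullet above),
the following Python sketch queries the MediaWiki web service API for the
revision history of a single article; the article title is an arbitrary
example.

```python
# Illustrative sketch of the "live data" channel referenced above:
# querying the MediaWiki web service API for recent revisions of one
# article. The article title is an arbitrary example.
import requests

def fetch_revisions(title, limit=10):
    """Return (timestamp, user) pairs for the most recent edits."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user",
        "rvlimit": limit,
        "format": "json",
    }
    r = requests.get("https://en.wikipedia.org/w/api.php", params=params)
    r.raise_for_status()
    page = next(iter(r.json()["query"]["pages"].values()))
    return [(rev["timestamp"], rev.get("user", "?")) for rev in page["revisions"]]

for timestamp, user in fetch_revisions("Wikipedia"):
    print(timestamp, user)
```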
### Data set description
**Nature and scale of data:** Most of the data described above are either
numeric or textual (action logs and article content, respectively). The size
of the data is on the order of a few terabytes. It is therefore essential to
use live access to publicly available replicas (a few of which are named
above) rather than hosting the data locally. The analysed data sets, however,
can be hosted locally and shared with other interested parties.
**To whom the data set could be useful:** Outside of the consortium, other
researchers with interest in analysing Wikipedia activity data can use the
data packages produced in this case study.
### Standards and metadata
The following metadata (with indicative values) will be created:
* Author/compiler of data set: Milena Tsvetkova, Ruth Garcia, Taha Yasseri, UOXF
* Funded by: [HUMANE, H2020 – 645043]
* Format: [CSV]
* Content-data: Wikipedia, collective action, editorial activity and readership
* Method of data accumulation: large-scale statistical analysis.
* Date-range coverage of dataset: 01.01.2001 – 15.12.2016
* Conditions of use of data: open access, free of charge.
* DOI: [assigned by Zenodo]
* Related publications [Bibliographic details of publications based on the data-set]
### Data sharing
**Access procedures:** The linked data from different sources will be made
accessible and available for re-use and secondary analysis by uploading the
data to Zenodo.
**Document format and availability:** The data-set will be available as CSV at
_http://www.zenodo.org/collection/datasets_ . From there, the data are openly
accessible to anyone, free of charge.
The data will be uploaded to Zenodo in M24 of HUMANE's project period.
### Archiving and preservation (including storage and backup)
Archiving of the data-set at Zenodo guarantees a long-term and secure
preservation of the data at no additional cost for the project. Zenodo informs
that "in the highly unlikely event that Zenodo will have to close operations,
we guarantee that we will migrate all content to other suitable repositories,
and since all uploads have DOIs, all citations and links to Zenodo resources
(such as your data) will not be affected."
## DS.C6.UOXF. Zooniverse data set
DS.C6 will consider Zooniverse, the citizen science portal. The data sets used
in this case study consist of logs of contributors' classifications. The data
sets are produced in collaboration with the Zooniverse team and are not
originally publicly available. The Zooniverse User Agreement describes how
usage information (e.g. log-ins, page requests, classifications made) is
recorded and made available to collaborators of the Citizen Science Alliance
(Oxford being one of the collaborators) for research purposes (see D3.1). The
data sets will be anonymized, meaning that no directly or indirectly
identifiable information will be disclosed when sharing the data through
Zenodo.
### Data set description
**Nature and scale of data:** The data set under study consists of logs
covering 3.5 years and 35,000,000 contributions to 17 Zooniverse projects by
345,000 users from 198 countries.
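Purely as an illustration of the kind of aggregate analysis such
classification logs support, the sketch below assumes a hypothetical CSV
export with columns user_id, project, country and timestamp; this is not the
actual Zooniverse schema.

```python
# Purely illustrative sketch of the aggregate analyses such logs
# support; the file name and columns (user_id, project, country,
# timestamp) are assumptions, not the actual Zooniverse schema.
import pandas as pd

logs = pd.read_csv("zooniverse_classifications.csv",
                   parse_dates=["timestamp"])

# Classifications per project, and distinct contributors per country
per_project = logs.groupby("project").size()
per_country = logs.groupby("country")["user_id"].nunique()

print(per_project.sort_values(ascending=False).head())
print(per_country.sort_values(ascending=False).head())
```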
**To whom the data set could be useful:** This is a unique data set in the
area of citizen science studies. No other project has grown at this scale, and
no aggregate data of this size is publicly available.
### Data sharing
**Access procedures:** The anonymized data will be made accessible and
available for re-use and secondary analysis by uploading the data to Zenodo.
**Document format and availability:** The data-set will be available as CSV at
_http://www.zenodo.org/collection/datasets_ . From there, the fully anonymized
data are openly accessible to anyone, free of charge.
The data will be uploaded to Zenodo in M24 of HUMANE's project period.
### Standards and metadata
The following metadata (with indicative values) will be created:
* Author/compiler of data set: Taha Yasseri, UOXF.
* Funded by: [HUMANE, H2020 – 645043]
* Format: [CSV]
* Content-data: Citizen science, large-scale collaboration, crowdsourcing
* Method of data accumulation: large scale statistical analysis.
* Date-range coverage of dataset: 09.11.2009 – 01.06.2013.
* Conditions of use of data: open access, free of charge.
* DOI: [assigned by Zenodo]
* Related publications [Bibliographic details of publications based on the data-set]
### Archiving and preservation (including storage and backup)
Archiving of the anonymized data-set at Zenodo guarantees a long-term and
secure preservation of the data at no additional cost for the project. Zenodo
informs that "in the highly unlikely event that Zenodo will have to close
operations, we guarantee that we will migrate all content to other suitable
repositories, and since all uploads have DOIs, all citations and links to
Zenodo resources (such as your data) will not be affected."
## DS.C7.ATC. Roadmap data set
The DS.C7 data-set consists of the anonymized raw data from a survey of the
practitioners we foresee as the target groups for the roadmap. The survey will
be developed to systematize knowledge on stakeholder needs, expectations and
previous experiences with human-machine networks. The survey will likely
include a combination of closed and open survey questions.
The survey data set will be anonymized, meaning that it cannot in any way be
used to identify individuals in the data material, whether directly through
names or personal ID numbers, indirectly through background variables, or
through a list of names / connection key or encryption formula or code.
### Data set description
**Origin of data:** The data set is collected in the HUMANE project, and is
based on the planned survey to be conducted approximately in month 18 of the
project. The data in this data set will be collected and analysed by ATC and
IT Innovation.
**Nature and scale of data:** We aim to reach as many respondents as possible.
The data will be available as a CSV file and can hence be accessed and read
with, e.g., Excel and SPSS.
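For readers who prefer scripting over Excel or SPSS, a minimal sketch of
loading the survey export follows; the file name and the column "q1_needs" are
hypothetical placeholders.

```python
# Minimal sketch of reading the survey export outside Excel/SPSS; the
# file name and the column "q1_needs" are hypothetical placeholders.
import csv
from collections import Counter

with open("roadmap_survey.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Tally the answers to one hypothetical closed question
print(Counter(row["q1_needs"] for row in rows))
```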
**To whom the data set could be useful:** Outside of the consortium, the data
in its anonymized form might be useful for other researchers interested in the
design and implementation of roadmaps related to the operation of
human-machine networks.
**Scientific publication:** It is our objective to use the data-set as a basis
for at least one scientific publication.
**Existence of similar data-sets?** To our knowledge, qualitative data sets
related to the design of roadmaps for the implementation of human-machine
networks are not openly available.
### Standards and metadata
The following metadata will be created:
* Author/compiler of data set: Person with main responsibility compiling data set to be decided, ATC.
* Funded by: [HUMANE, H2020 – 645043]
* Format: [CSV]
* Content-data: human-machine network operation, roadmaps
* Method of data accumulation: Survey
* Data collection period [from] – [to]: 01.09.2016 – 01.11.2016.
* Conditions of use of data: open access, free of charge.
* DOI: [assigned by Zenodo]
* Related publications [Bibliographic details of publications based on the data-set]
### Data sharing
**Access procedures:** The data gathered from all case studies will be made
accessible and available for re-use and secondary analysis by uploading the
data to Zenodo.
**Document format and availability:** The data-set will be available as a CSV
file at _http://www.zenodo.org/collection/datasets_ . From there, the fully
anonymized data are openly accessible to anyone, free of charge.
The data will be uploaded to Zenodo in M24 of HUMANE's project period. Before
uploading data-sets, we will first have to anonymize data. We plan to
anonymize the data in the final month of the project.
### Archiving and preservation (including storage and backup)
Archiving of the anonymized data-set at Zenodo guarantees a long-term and
secure preservation of the data at no additional cost for the project. Zenodo
informs that "in the highly unlikely event that Zenodo will have to close
operations, we guarantee that we will migrate all content to other suitable
repositories, and since all uploads have DOIs, all citations and links to
Zenodo resources (such as your data) will not be affected."
# Open access to publications
Any publications from HUMANE must be available as open access. Open access to
publications can be ensured either by publishing in Gold open access journals
or Green open access journals.
Gold open access means the article is made openly available by the scientific
publisher. Some journals charge an author processing fee for publishing open
access.
Green open access, or self-archiving, means that the published article or the
final peer-reviewed manuscript is archived by the researcher in an online
repository (such as Zenodo), in most cases after its publication. Most
journals within the social sciences and humanities domains require authors to
delay self-archiving in repositories until 12 months after the article is
first published.
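As a trivial aid for tracking compliance, the following sketch computes the
earliest permitted self-archiving date under a whole-month embargo; the helper
is hypothetical, not part of any journal's tooling.

```python
# Small helper illustrating the embargo rule: the earliest permitted
# self-archiving date falls 12 months after first publication.
# Assumes whole-month embargo periods.
from datetime import date

def earliest_self_archive(published: date, embargo_months: int = 12) -> date:
    shift, month = divmod(published.month - 1 + embargo_months, 12)
    return published.replace(year=published.year + shift, month=month + 1)

print(earliest_self_archive(date(2016, 3, 15)))  # 2017-03-15
```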
In the HUMANE project, author publishing fees for gold open access journals
can be reimbursed within the project period and budget. There is, however, a
good selection of relevant gold and green open access journals that do not
charge author processing fees. Scholarly publication can take a long time, and
final acceptance of all submitted manuscripts may not occur before the end of
the HUMANE project. For these reasons, we will prioritize submitting our work
to gold open access journals without author processing fees, or to green open
access journals.
## Gold open access journals without author processing fees
Table 2 gives an overview of HUMANE-relevant gold open access journals without
author processing fees.
**Table 2: HUMANE-relevant Gold open access journals with no author processing
charges**
<table>
<tr>
<th>
**Journal**
</th>
<th>
**Link and description**
</th> </tr>
<tr>
<td>
Big Data & Society
</td>
<td>
_http://bds.sagepub.com/_
Big Data & Society is an open access peer-reviewed scholarly journal that
publishes interdisciplinary work principally in the social sciences,
humanities and computing and their intersections with the arts and natural
sciences about the implications of Big Data for societies.
This journal is of particular interest for publishing work from WP3.
For an introductory period any publication fee will be waived in order to
allow the journal to establish itself.
</td> </tr>
<tr>
<td>
Complex & Intelligent
Systems
</td>
<td>
_http://www.springer.com/engineering/journal/40747_
A SpringerOpen journal, which aims to provide a forum for presenting and
discussing novel approaches, tools and techniques meant for attaining a cross-
fertilization between the broad fields of complex systems, computational
simulation, and intelligent analytics and visualization. This journal is of
particular interest for publishing work in Task 3.4, for example.
</td> </tr>
<tr>
<td>
Computational Cognitive
Science
</td>
<td>
_http://www.computationalcognitivescience.com/_
A SpringerOpen journal, which focuses on cross-disciplinary research
pertaining to any aspects of computational modelling of cognitive theories and
implementation of intelligent systems. This journal may be of interest for
publishing work from WP2 on the HUMANE typology, for example.
</td> </tr>
<tr>
<td>
Digital Humanities
Quarterly (DHQ)
</td>
<td>
_http://www.digitalhumanities.org/dhq/_
Digital humanities is a diverse and still emerging field that encompasses the
practice of humanities research in and through information technology, and the
exploration of how the humanities may evolve through their engagement with
technology, media, and computational methods.
This journal is of particular interest for publishing work from WP3.
</td> </tr>
<tr>
<td>
European Journal of
Futures Research
</td>
<td>
_http://www.springer.com/philosophy/journal/40309_
A SpringerOpen journal with a very broad scope, aiming to strengthen
networking and community building among European scholars. The journal invites
papers pertaining to society, politics, economy, and science and technology.
This journal is of interest for publishing work from WP4 on the roadmap of
future human-machine networks, for example.
</td> </tr>
<tr>
<td>
Fibreculture Journal
</td>
<td>
_http://fibreculturejournal.org/_
Digital media + networks + transdisciplinary critique
The journal serves wider social formations across the international community,
working with those thinking critically about, and working with, contemporary
digital and networked media.
This journal is of particular interest for publishing work from WP3.
</td> </tr>
<tr>
<td>
First Monday
</td>
<td>
_http://journals.uic.edu/ojs/index.php/fm/index_
First Monday is one of the first openly accessible, peer–reviewed journals on
the Internet, solely devoted to the Internet.
This journal is of particular interest for publishing work from WP2, WP3 and
also WP4 considering the wide readership of the journal.
</td> </tr>
<tr>
<td>
Human-centric
Computing and
Information Sciences
</td>
<td>
_http://www.hcis-journal.com/_
A SpringerOpen journal, which publishes papers on human-centric
computing and information sciences, covering many aspects of work in the
HUMANE project, such as human-computer interaction, social computing and
social intelligence, and privacy, security and trust management. Therefore, a
journal well suited for publishing a range of research outputs from HUMANE.
</td> </tr>
<tr>
<td>
Human Technology
</td>
<td>
_http://www.humantechnology.jyu.fi/_
Human Technology is an interdisciplinary, multi-scientific journal focusing on
the human aspects of our modern technological world. The journal provides a
forum for innovative and original research on timely and relevant topics with
the goal of exploring current issues regarding the human dimension of evolving
technologies and providing new ideas and effective solutions for addressing
the challenges.
This journal is of particular interest for publishing work from WP2 and WP3.
</td> </tr>
<tr>
<td>
International Journal of
Communication
</td>
<td>
_http://ijoc.org/index.php/ijoc_
The International Journal of Communication is an interdisciplinary journal
that, while centred in communication, is open and welcoming to contributions
from the many disciplines and approaches that meet at the crossroads that is
communication study.
This journal is of particular interest for publishing work from WP2 and WP3.
</td> </tr>
<tr>
<td>
International Journal of
Internet Science
</td>
<td>
_http://www.ijis.net/_
The International Journal of Internet Science is an interdisciplinary, peer
reviewed journal for the publication of research articles about empirical
findings, methodology, and theory in the field of Internet Science. It
provides an outlet for articles on the Internet as a medium of research and
its implications for individuals, social groups, organizations, and society.
Typical articles report empirical results gathered to test and advance
theories in the social and behavioural sciences.
This journal is of particular interest for publishing work from WP2 and WP3.
</td> </tr>
<tr>
<td>
Journal of Community
Informatics
</td>
<td>
_http://ci-journal.net/index.php/ciej_
Community Informatics (CI) is the study and the practice of enabling
communities with Information and Communications Technologies (ICTs). CI seeks
to work with communities towards the effective use of
ICTs to improve their processes, achieve their objectives, overcome the
"digital divides" that exist both within and between communities, and
empower communities and citizens in the range of areas of ICT application
including for health, cultural production, civic management, and e-governance,
among others.
This journal is of particular interest for publishing work from WP3.
</td> </tr>
<tr>
<td>
Journal of Computer-
Mediated
Communication
</td>
<td>
_http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1083-6101_
The Journal of Computer-Mediated Communication (JCMC) is a web-based, peer-
reviewed scholarly journal. Its focus is social science research on
communicating with computer-based media technologies. Within that general
purview, the journal is broadly interdisciplinary, publishing work by scholars
in communication, business, education, political science, sociology,
psychology, media studies, information science, and other disciplines.
This journal is of particular interest for publishing work from WP2 and WP3,
also as continuing the advancements made in WP1.
</td> </tr>
<tr>
<td>
Journal of Media
Innovations
</td>
<td>
_https://www.journals.uio.no/index.php/TJMI_
The Journal of Media Innovations is an open access journal that explores
changes in media technologies, media policies, organizational structures,
media management, media production, journalism, media services, and usages.
This journal is of particular interest for publishing work from WP2.
</td> </tr>
<tr>
<td>
Journal of Virtual Worlds
Research
</td>
<td>
_http://jvwresearch.org/_
The Journal of Virtual Worlds Research is a transdisciplinary journal that
engages a wide spectrum of scholarship and welcomes contributions from the
many disciplines and approaches that intersect virtual worlds research.
This journal is of particular interest for publishing work from WP3.
</td> </tr>
<tr>
<td>
M/C Journal
</td>
<td>
_http://journal.media-culture.org.au/index.php/mcjournal/_
M/C Journal is a journal for public intellectualism analysing and critiquing
the meeting of media and culture. M/C Journal takes seriously the need to move
ideas outward, so that our cultural debates may have some resonance with wider
political and cultural interests. Each issue is organised around a one word
theme, and is edited by one or two guest editors with a particular interest in
that theme. Each issue has a feature article which engages with the theme in
some detail, followed by several shorter articles.
HUMANE will need to keep track of call for papers to see whether any
future special issues are of relevance for our work.
</td> </tr>
<tr>
<td>
MedieKultur
</td>
<td>
_http://ojs.statsbiblioteket.dk/index.php/mediekultur/index_
The aim of MedieKultur is to contribute to critical reflection and the
development of theories and methods within media and communication research.
MedieKultur publishes works of relevance to the community of researchers in
Denmark exploring media and communication in political, economic, cultural,
historic, aesthetic and social contexts. MedieKultur publishes theme issues
with the aim of bringing Danish and international media and communication
research into dialogue. Accordingly, MedieKultur is a publication forum for
Danish and international researchers.
This journal is of particular interest for publishing work from WP2 and WP3.
</td> </tr>
<tr>
<td>
Nordicom Review
</td>
<td>
_http://www.nordicom.gu.se/sv/publikationer/nordicom-review_
Nordicom Review, a refereed journal, provides a major forum for media and
communication researchers in the Nordic countries. This semiannual, blind
peer-reviewed journal (hard copy and open access) is addressed to the
international scholarly community, with articles published in English. It
publishes the best of media and communication research in the region, as well
as theoretical works in all its diversity; it seeks to reflect the great
variety of intellectual traditions in the field and to facilitate a dialogue
between them.
This journal is of particular interest for publishing work from WP3.
</td> </tr>
<tr>
<td>
Observatorio (OBS*)
</td>
<td>
_http://obs.obercom.pt/index.php/obs/index_
Observatorio (OBS*) is an interdisciplinary journal that welcomes
contributions coming from and speaking to the many disciplines and approaches
that meet at the crossroads that is Communication Studies, and is open to
several publishing languages such as Portuguese, Spanish, Catalan, Galician,
Italian, French, and English.
</td> </tr>
<tr>
<td>
Social Media + Society
</td>
<td>
_https://uk.sagepub.com/en-gb/eur/social-media-society/journal202332_
Social Media + Society is an open-access, peer-reviewed scholarly journal that
focuses on the socio-cultural, political, historic, economic, legal and policy
dimensions of social media in societies past, contemporary and future. It
publishes interdisciplinary work that draws from the social sciences,
humanities and computational social sciences, reaches out to the arts and
natural sciences, and endorses mixed methods and methodologies. The journal is
open to a diversity of
theoretic paradigms and methodologies.
This journal is of particular interest for publishing work from WP2 and WP3.
For an extended introductory period all APCs will be waived [no publication
fee], a policy that will be reviewed as the journal establishes itself.
</td> </tr> </table>
## Green open access journals
Journals increasingly allow authors to self-archive the final peer-reviewed
manuscript in repositories (green open access). Table 3 gives an overview of
only some of these journals, and only those with a maximum embargo period of
12 months. Before deciding where to submit a manuscript, all involved HUMANE
researchers will be required to ensure that the journal's green open access
policy complies with the open access requirements posed on H2020 projects:
authors must be allowed to self-archive the final peer-reviewed article at the
latest 12 months after publication.
Most of the journals listed in Table 3 also offer the opportunity to publish
open access with author processing fees; these journals are not repeated in
Table 4, which lists HUMANE-relevant gold open access journals with APCs.
**Table 3: HUMANE-relevant journals with green open access after embargo
period**
<table>
<tr>
<th>
**Journal**
</th>
<th>
**Link and brief description**
</th>
<th>
**Embargo period**
</th> </tr>
<tr>
<td>
CoDesign
</td>
<td>
_http://www.tandfonline.com/toc/ncdn20/current_
CoDesign is inclusive, encompassing collaborative, cooperative, concurrent,
human-centred, participatory, sociotechnical and community design among
others. Research in any design domain concerned specifically with the nature
of collaboration design is of relevance to the Journal.
This journal is of particular interest for publishing work from WP2.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Computer
Journal
</td>
<td>
_http://comjnl.oxfordjournals.org/_
The Computer Journal serves all branches of the academic computer science
community, and publishes in four sections: Computer Science Theory, Methods
and Tools; Computer and Communications Networks and Systems; Computational
Intelligence, Machine Learning and Data Analytics; Security in Computer
Systems and Networks.
This journal is of particular interest for publishing work from WP3.
Authors may upload their accepted manuscript PDF (version before being
copyedited) to an institutional and/or centrally organized repository,
provided that public availability is delayed until 12 months after first
online publication in the journal.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Computing
Surveys
</td>
<td>
_http://csur.acm.org/_
The primary purpose of the ACM Computing Surveys is to present new specialties
and help practitioners and researchers stay abreast of all areas in the
rapidly evolving field of computing. Computing Surveys focuses on integrating
and adding understanding to the existing literature. This is accomplished by
publishing surveys, tutorials, and symposia on special topics of interest to
the membership of ACM.
This journal is of particular interest for publishing work from WP1.
</td>
<td>
0 months
</td> </tr>
<tr>
<td>
Convergence
</td>
<td>
_http://con.sagepub.com/_
Convergence is a quarterly, peer-reviewed academic journal that publishes
leading research addressing the creative, social, political and pedagogical
issues raised by the advent of new media technologies. It provides an
international, interdisciplinary forum for research exploring the reception,
consumption and impact of new media technologies in domestic, public and
educational contexts.
This journal is of particular interest for publishing work from WP3.
Original submission to the journal with revisions after peer review can be
uploaded to a repository (such as Zenodo).
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Cyber-Physical systems
</td>
<td>
_http://www.tandfonline.com/toc/tcyb20/1/1_
Cyber-Physical Systems is an international interdisciplinary journal dedicated
to publishing the highest quality research in the rapidly-growing field of
cyber-physical systems / Internet-of-Things.
This journal is of particular interest for publishing work from
WP2.
Publications will cover theory, algorithms, simulations, architectures,
implementations, services and applications of state-of-the-art research in
this exciting field.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
European Journal of Work and
Organizational
Psychology
</td>
<td>
_http://www.tandfonline.com/toc/pewo20/current_
The mission of the European Journal of Work and Organizational Psychology is
to promote and support the development of Work and Organizational Psychology
by publishing high-quality scientific articles that improve our understanding
of phenomena occurring in work and organizational settings. The journal
publishes empirical, theoretical, methodological, and review articles that are
relevant to real-world situations.
This journal is of particular interest for publishing work from WP3.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Human-Computer
Interaction
</td>
<td>
_http://www.tandfonline.com/toc/hhci20/current_
Human-Computer Interaction (HCI) is a multidisciplinary journal defining and
reporting on fundamental research in human-computer interaction. The goal of
HCI is to be a journal of the highest-quality that combines the best research
and design work to extend our understanding of human-computer interaction. The
target audience is the research community with an interest in both the
scientific implications and practical relevance of how interactive computer
systems should be designed and how they are actually used. HCI is concerned
with the theoretical, empirical, and methodological issues of interaction
science and system design as it affects the user.
This journal is of particular interest for publishing work from WP2.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Human
Performance
</td>
<td>
_http://www.tandfonline.com/toc/hhup20/current_
Human Performance publishes research investigating the nature and role of
performance in the workplace and in organizational settings and offers a rich
variety of information going beyond the study of traditional job behavior.
Dedicated to presenting original research, theory, and measurement methods,
the journal investigates
individual, team, and firm level performance factors that influence work and
organizational effectiveness.
This journal is of particular interest for publishing work from WP3.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Information
Systems
Management
</td>
<td>
_http://www.tandfonline.com/toc/uism20/current_
Information Systems Management (ISM) provides an on-going exchange of academic
research, best practices, and insights based on managerial experience. The
journal’s goal is to advance the practice of information systems management
through this exchange.
The target readership includes both academics and practitioners. Hence,
submissions integrating research and practice, and providing implications for
both, are encouraged.
This journal is of particular interest for publishing work from WP2.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Information
Society
</td>
<td>
_http://www.indiana.edu/~tisj/_
The Information Society (TIS) journal, published since 1981, is a key critical
forum for leading edge analysis of the impacts, policies, system concepts, and
methodologies related to information technologies and changes in society and
culture. Some of the key information technologies include computers and
telecommunications; the sites of social change include homelife, workplaces,
schools, communities and diverse organizations, as well as new social forms in
cyberspace.
This journal is of particular interest for publishing work from WP3.
</td>
<td>
0 months
</td> </tr>
<tr>
<td>
Interacting with
Computers
</td>
<td>
_http://iwc.oxfordjournals.org/_
Interacting with Computers is the interdisciplinary journal of Human-Computer
Interaction. Topics covered include: HCI and design theory; new research
paradigms; interaction process and methodology; user interface, usability and
UX design; development tools and techniques; empirical evaluations and
assessment strategies; new and emerging technologies; ubiquitous, ambient and
mobile interaction; accessibility, user modelling and intelligent systems;
organisational and societal issues.
This journal is of particular interest for publishing work from WP2.
Authors may upload their accepted manuscript PDF (version before being
copyedited) to an institutional and/or centrally organized repository,
provided that public availability is delayed until 12 months after first
online publication in the journal.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Internet
Mathematics
</td>
<td>
_http://www.tandfonline.com/toc/uinm20/current_
Internet Mathematics publishes conceptual, algorithmic, and empirical papers
focused on large, real-world complex networks such as the web graph, the
Internet, online social networks, and biological networks. The journal accepts
papers of outstanding quality focusing on either theoretical or experimental
work, and encourages submissions which have a view toward real-life
applications.
This journal is of particular interest for publishing work from WP3.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Journal of
Complex
Networks
</td>
<td>
_http://comnet.oxfordjournals.org/_
The Journal of Complex Networks publishes original articles and reviews with a
significant contribution to the analysis and understanding of complex networks
and its applications in diverse fields. Complex networks are loosely defined
as networks with nontrivial topology and dynamics, which appear as the
skeletons of complex systems in the real world.
This journal is of particular interest for publishing work from WP2 and WP3,
also as continuing the advancements made in WP1.
Authors may upload their accepted manuscript PDF (version before being
copyedited) to an institutional and/or centrally organized repository,
provided that public availability is delayed until 12 months after first
online publication in the journal.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
International
Journal of Design
Creativity and
Innovation
</td>
<td>
_http://www.tandfonline.com/toc/tdci20/current_
The International Journal of Design Creativity and Innovation is an
international publication that provides a forum for
discussing the nature and potential of creativity and innovation in design
from both theoretical and practical perspectives.
Design creativity and innovation is an interdisciplinary academic research
field that will interest and stimulate researchers of engineering design,
industrial design, architecture, art, and similar areas. The journal aims to
not only promote existing research disciplines but also pioneer a new one that
lies in the intermediate area between the domains of systems engineering,
information technology, computer science, social science, artificial
intelligence, cognitive science, psychology, philosophy, linguistics, and
related fields.
This journal is of particular interest for publishing work from WP2.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
International
Journal of
Human-Computer
Interaction
</td>
<td>
_http://www.tandfonline.com/toc/hihc20/current_
The International Journal of Human-Computer Interaction addresses the
cognitive, creative, social, health, and ergonomic aspects of interactive
computing.
It emphasizes the human element in relation to the systems and contexts in
which humans perform, operate, network, and communicate, including mobile
apps, social media, online communities, and digital accessibility. The journal
publishes original articles including reviews and reappraisals of the
literature, empirical studies, and quantitative and qualitative contributions
to the theories and applications of HCI.
This journal is of particular interest for publishing work from WP2.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Journalism
</td>
<td>
_http://jou.sagepub.com/_
Journalism is a major international, peer-reviewed journal that provides a
dedicated forum for articles from the growing community of academic
researchers and critical practitioners with an interest in journalism. The
journal is interdisciplinary and publishes both theoretical and empirical work
and contributes to the social, economic, political, cultural and practical
understanding of journalism.
It includes contributions on current developments and
historical changes within journalism.
This journal is of particular interest for publishing work from WP3.
Original submission to the journal with revisions after peer review can be
uploaded to a repository (such as Zenodo).
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Journal of
Control and
Decision
</td>
<td>
_http://www.tandfonline.com/loi/tjcd20_
The primary aim of the Journal of Control and Decision (JCD) is to provide a
platform for scientists, engineers and practitioners throughout the world to
present the latest advancement in control, decision, automation, robotics and
emerging technologies.
JCD will cover both theory and application in all the areas of these
disciplines.
This journal is of particular interest for publishing work from WP2.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Journal of
Information
Privacy and
Security
</td>
<td>
_http://www.tandfonline.com/toc/uips20/11/1_
The Journal of Information Privacy and Security (JIPS) serves as a reliable
source on issues of information privacy and security for both academics and
practitioners. The journal is a refereed journal of high quality that seeks
support from academicians, industry experts and specific government agencies.
The journal focuses on publishing articles that address the paradoxical nature
of privacy versus security amidst current global conditions. It is
increasingly important that various constituents of information begin to
understand their role in finding the delicate balance of security and privacy.
This journal is of particular interest for publishing work from WP3.
</td>
<td>
0 months
</td> </tr>
<tr>
<td>
Journal of
Information Technology Case and Application
Research
</td>
<td>
_http://www.tandfonline.com/loi/utca20_
The Journal of Information Technology Case and Application Research (JITCAR)
publishes case-based research on the application of information technology and
information systems to the solution of organizational problems. Research
articles may focus on public, private, or governmental organizations of any
size, from start-up
through multinational. The research can focus on any type of application,
issue, problem or technology, including, for example, artificial intelligence,
business process reengineering, cross-cultural issues, cybernetics, decision
support systems, electronic commerce, enterprise systems, groupware, the human
side of IT, information architecture, joint application development, knowledge
based systems, local area networks, management information systems, office
automation, outsourcing, prototyping, robotics, security, social networking,
software as a service, supply chain management, systems analysis,
telemedicine, ubiquitous computing, video-conferencing, and Web 2.0.
This journal is of particular interest for publishing work from WP2 and WP3.
</td>
<td>
0 months
</td> </tr>
<tr>
<td>
Journal of
Responsible
Innovation
</td>
<td>
_http://www.tandfonline.com/loi/tjri20_
The Journal of Responsible Innovation (JRI) provides a forum for discussions
of the normative assessment and governance of knowledge-based innovation. JRI
offers humanists, social scientists, policy analysts and legal scholars, and
natural scientists and engineers an opportunity to articulate, strengthen, and
critique the relations among approaches to responsible innovation, thus giving
further shape to a newly emerging community of research and practice. These
approaches include ethics, technology assessment, governance, sustainability,
socio-technical integration, and others.
This journal is of particular interest for publishing work from WP4.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Media, Culture &
Society
</td>
<td>
_http://mcs.sagepub.com/_
Media, Culture & Society provides a major international, peer-reviewed forum
for the presentation of research and discussion concerning the media,
including the newer information and communication technologies, within their
political, economic, cultural and historical contexts. It regularly engages
with a wider range of issues in cultural and social analysis. Its focus is on
substantive topics and on critique and innovation in theory and method. An
interdisciplinary journal, it welcomes contributions in any relevant area and
from a worldwide authorship.
This journal is of particular interest for publishing work from WP3, as well
as work continuing the advancements made in WP1.
Original submission to the journal with revisions after peer review can be
uploaded to a repository (such as Zenodo).
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
New media & society
</td>
<td>
_http://nms.sagepub.com/_
New Media & Society is a top-ranked, peer-reviewed, international journal that
publishes key research from communication, media and cultural studies, as well
as sociology, geography, anthropology, economics, the political and
information sciences and the humanities.
This journal is of particular interest for publishing work from WP3, as well
as work continuing the advancements made in WP1.
Original submission to the journal with revisions after peer review can be
uploaded to a repository (such as Zenodo).
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
New Review of
Hypermedia and
Multimedia
</td>
<td>
The New Review of Hypermedia and Multimedia (NRHM) is a world-leading
interdisciplinary journal providing a focus for research covering practical
and theoretical developments in hypermedia, hypertext, and interactive
multimedia.
Topics include, but are not limited to, the conceptual basis of hypertext
systems, cognitive aspects, design strategies, intelligent and adaptive
hypermedia, user interfaces, physical hypermedia, and individual, social and
societal implications.
This journal is of particular interest for publishing work from WP2.
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Theory, Culture &
Society
</td>
<td>
_http://tcs.sagepub.com/_
Theory, Culture & Society is a highly ranked, high impact factor, rigorously
peer reviewed journal that publishes original research and review articles in
the social and cultural sciences. Launched to cater for the resurgence of
interest in culture within contemporary social science, it provides a forum
for articles which theorize the relationship between culture and society.
This journal is of particular interest for publishing work from WP3, as well
as work continuing the advancements made in WP1.
Original submission to the journal with revisions after peer review can be
uploaded to a repository (such as Zenodo).
</td>
<td>
12 months
</td> </tr>
<tr>
<td>
Transactions on
Computer-
Human
Interactions
</td>
<td>
_http://tochi.acm.org/_
TOCHI publishes archival research papers in the following major areas:
* Studying new hardware and software architectures for building human-computer interfaces
* Studying new interactive techniques, metaphors and evaluation
* Studying processes and techniques for designing human-computer interfaces
* Studying users and groups of users to understand their needs
This journal is of particular interest for publishing work from WP2.
</td>
<td>
0 months
</td> </tr>
<tr>
<td>
Transactions on
Interactive
Intelligent
Systems
</td>
<td>
_http://tiis.acm.org/_
The journal publishes articles on research concerning the design, realization,
or evaluation of interactive systems that incorporate some form of machine
intelligence.
TiiS articles come from a wide range of research areas and communities. An
article can take any of several complementary views of interactive intelligent
systems, focusing on:
* the intelligent technology,
* the interaction of users with the system, or
* both aspects at once.
This journal is of particular interest for publishing work from WP3.
</td>
<td>
0 months
</td> </tr> </table>
## Gold open access journals with article processing charges
As described in section 4.2, many green open access journals also offer gold
open access with an APC. These journals are not repeated in this section: if a
submission to one of these journals is accepted, we will opt for green open
access.
**Table 4: Gold open access journals with article processing charges (APCs)**
<table>
<tr>
<th>
**Journal**
</th>
<th>
**Link and brief description**
</th>
<th>
**APC**
</th> </tr>
<tr>
<td>
International
Journal of
Computer
Science Issues
</td>
<td>
_http://www.ijcsi.org/_
The International Journal of Computer Science Issues (IJCSI) is a venue for
publishing high quality research papers as recognised by various universities
and international professional bodies. IJCSI is a refereed open access
international journal for publishing scientific papers in all areas of
computer science research.
This journal is of particular interest for publishing work from WP3.
</td>
<td>
USD 190
</td> </tr>
<tr>
<td>
Journal of ICT
Research and
Applications
</td>
<td>
_http://journals.itb.ac.id/index.php/jictra_
Journal of ICT Research and Applications welcomes full research articles in
the area of Information and Communication Technology from the following
subject areas: Information Theory, Signal Processing, Electronics, Computer
Network, Telecommunication,
Wireless & Mobile Computing, Internet Technology, Multimedia, Software
Engineering, Computer Science, Information System and Knowledge Management.
This journal is of particular interest for publishing work from WP3.
</td>
<td>
USD 100
</td> </tr>
<tr>
<td>
Journal of
Communications
</td>
<td>
_http://www.jocm.us/_
JCM is a scholarly peer-reviewed international scientific journal published
monthly, focusing on theories, systems, methods, algorithms and applications
in communications.
This journal is of particular interest for publishing work from WP3.
</td>
<td>
USD 590
</td> </tr>
<tr>
<td>
Media and
Communication
</td>
<td>
_http://cogitatiopress.com/ojs/index.php/mediaandcommunication/_
Media and Communication is concerned with the social development and
contemporary transformation of media and communication and critically reflects
on their interdependence with global, individual, media, digital, economic and
visual processes of change and innovation. Contributions ponder the social,
ethical, and cultural conditions, meanings and consequences of media, the
public sphere and organizational as well as interpersonal communication and
their complex interrelationships.
This journal is of particular interest for publishing work from WP3.
</td>
<td>
EUR 800
</td> </tr> </table>
# Conclusions
In this DMP we have described the requirements imposed on HUMANE as a
participant in the Open Research Data pilot with regard to open access to
research data and open access to publications. The project partners have
decided to use Zenodo as the open project and publication repository, and to
link the repository to a HUMANE project site at OpenAIRE.
Chapter 3, which describes the data sets following the Horizon 2020 template,
is the most important part of the DMP. These descriptions will likely need to
be revised as the HUMANE project evolves. A second version of the DMP was not
initially planned, yet we believe one is required, as the DMP should be a
living document. Although we have attempted to take into consideration the
data management life cycle for the data sets to be collected and processed by
HUMANE, additions and changes are likely to be needed.
This DMP also includes the results of a review of HUMANE-relevant open access
journals, with an emphasis on gold open access journals without APCs and green
open access journals with an embargo period of at most 12 months. This review
has identified a considerable number of relevant journals, demonstrating the
wide variety of open access dissemination channels available for the HUMANE
activities. The list of publication venues is not exhaustive, and other
journals may be identified as the project progresses. For each planned
publication we will consider which journals are the most appropriate first
choices.
1006_AppHub_645096.md
# 1\. Introduction
This document provides a data management plan (DMP) for the AppHub project.
According to the guidelines of the European Commission [EC13a], “ _the purpose
of the DMP is to support the data management life cycle for all data that will
be collected, processed or generated by the project_ .”
The AppHub project is a Coordination and Support Action that aims to support
the development of open source software by European industry, SMEs, and in
particular projects funded by the European Commission, towards higher
professionalism in development and maintenance processes, higher software
quality, and – ultimately – increased market applicability. To achieve this
objective, AppHub provides a technical platform offering marketplace services
for open source software products (assets):
* **Directory** : Classification and analysis of open source software with regard to a software taxonomy based on an enterprise computing model. This taxonomy – the Open Interoperability Framework (OIF) – makes it possible to understand the purpose and functionalities of software in a unified way.
* **Factory** : A packaging and deployment feature that allows users to model templates, create a virtual run-time environment for software assets, and deploy them to a wide range of infrastructure service clouds.
* **Marketplace** : An exposition feature to expose packages created by the Factory as templates or ready-to-use Virtual Machines, which can be directly exploited by end users.
Hence, the data management plan relates to open source asset data and taxonomy
data.
# 2\. Data Management Objectives
The guideline [EC13a] provides a checklist of objectives to be taken into
account when defining data management principles. In the following, we relate
our approach to these objectives:
<table>
<tr>
<th>
Objective
</th>
<th>
Description
</th>
<th>
AppHub actions
</th> </tr>
<tr>
<td>
Discoverable
</td>
<td>
Are the data and associated software produced and/or used in the project
discoverable (and readily located), identifiable by means of a standard
identification mechanism (e.g. Digital Object Identifier)?
</td>
<td>
The AppHub Directory provides a centralized location for taxonomy data which
can be searched by standard mechanisms (search engines). The AppHub
Marketplace provides a centralized location for software. In addition, REST
APIs are available for automated discovery both for the
AppHub Directory and the Marketplace.
</td> </tr>
<tr>
<td>
Accessible
</td>
<td>
Are the data and associated software produced and/or used in the project
accessible and in what modalities, scope, licenses (e.g. licensing framework
for research and education, embargo periods, commercial exploitation, etc.)?
</td>
<td>
For taxonomy data, unconditional access is provided. For publications, embargo
periods may apply depending on the publication venue. For software in the
Marketplace, the respective open source license applies.
</td> </tr>
<tr>
<td>
Assessable and
intelligible
</td>
<td>
Are the data and associated software produced and/or used in the project
assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. are the minimal datasets handled
together with scientific papers for the purpose of peer review, is data
provided in a way that judgements can be made about their reliability and the
competence of those who created them)?
</td>
<td>
Data are collected collaboratively by incorporating projects that produce open
source assets. Classification data are accessible and can be validated with
regard to their correctness and appropriateness. Software in the AppHub
Marketplace can be executed, and a public review and rating mechanism is
available.
</td> </tr>
<tr>
<td>
Useable beyond the original purpose for which it was collected
</td>
<td>
Are the data and associated software produced and/or used in the project
useable by third parties even long time after the collection of the data (e.g.
is the data safely stored in certified repositories for long term preservation
and curation; is it stored together with the minimum software, metadata and
documentation to make it useful; is the data useful for the wider public needs
and usable for the likely purposes of nonspecialists)?
</td>
<td>
The AppHub project will not use repositories certified for long-term storage.
Data acquired during the project time frame will be available for a two-year
period after the end of the project.
AppHub is intended to be an integral part of the OW2 strategy. Hence, the
project aims to make the AppHub marketplace a sustainable platform for open
source in Europe. Whether data acquired during the project time frame will
stay available under the conditions outlined in this document once the
responsibility for the marketplace operation has been transferred to OW2 will
be decided in the course of the AppHub project.
</td> </tr>
<tr>
<td>
Interoperable to specific quality standards
</td>
<td>
Are the data and associated software produced and/or used in the project
interoperable allowing data exchange between researchers, institutions,
organisations, countries, etc (e.g. adhering to standards for data annotation,
data exchange, compliant with available software applications, and allowing
re-combinations with different datasets from different origins)?
</td>
<td>
Data can be accessed using web interfaces based on REST/JSON.
Hence, interoperability is provided on the level of data transfer and
exchange.
</td> </tr> </table>
# 3\. Project and Taxonomy Data Sets
Data on open source assets stored in the AppHub Directory provide a
comprehensive description of the capabilities of those assets in terms of (a)
the activities they support (asset usage), (b) functional characteristics, (c)
standards and technologies used or supported, and (d) cross-concerns that are
addressed by them. Hence, the Directory provides the information necessary to
evaluate the usefulness of a certain asset as part of the IT infrastructure of
a potential consumer and allows comparison with other assets.
AppHub is not another SourceForge. The AppHub platform does not store software
source code, documents, binaries, etc. as such, but provides on-top
information on open source assets that cannot be found elsewhere.
### 3.1. Structure
The general structure of the AppHub Directory data is based on the notion of a
project as top-level structural element. Projects subsume all types of
organisational units concerned with the production of open source assets: EC
funded projects, enterprises contributing to open source ecosystems,
universities, etc.
Projects are described by the following information:
* Title (short and full)
* Start and end date
* Logo
* Web page
* Contact email
* List of assets
Each project comprises a list of open source assets:
* Title
* Description
* Tags
* Asset type (software, knowledge, etc.)
* Open source license
* Resources (name, description)
* OIF classification
At the time of writing this document, the OIF is still under development.
Hence, details will be added to future updates of this document.
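To make the field lists above concrete, here is a hypothetical sketch (in Python) of how a Directory project record with one asset might be represented; the key names and values are illustrative assumptions, not the actual Directory schema.

```python
# Hypothetical Directory project record; keys and values are illustrative only.
project_record = {
    "title_short": "EXAMPLE",
    "title_full": "Example Open Source Project",
    "start_date": "2015-01-01",
    "end_date": "2017-12-31",
    "logo": "https://example.org/logo.png",
    "web_page": "https://example.org",
    "contact_email": "[email protected]",
    "assets": [{
        "title": "Example Asset",
        "description": "A reusable software component.",
        "tags": ["cloud", "deployment"],
        "asset_type": "software",
        "license": "Apache-2.0",
        "resources": [{"name": "source", "description": "Git repository"}],
        "oif_classification": None,  # OIF still under development
    }],
}
```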
### 3.2. Data Collection and Quality
Taxonomy data are collected as a collaborative effort between open source
projects and members of the AppHub project consortium. The AppHub Directory
provides a comprehensive dialogue for entering data on projects and assets.
The intention is that project managers themselves provide the OIF
classification of the assets produced by their projects as part of their
contribution to AppHub. The AppHub consortium is available to assist in this
activity if required.
Software data produced by the projects and made available in the AppHub
Marketplace will be subject to potential reviews by end users using the
Marketplace review function.
Hence, data quality is achieved by a continuous dialogue between the projects
and institutions contributing to the AppHub marketplace, and the AppHub
consortium (in particular Fraunhofer).
Moreover, the AppHub project also dedicates a whole work package, WP4 “Quality
and compliance”, to the quality of data provided by projects.
# 4\. Data Sharing
Data of the types described above provided by the AppHub Directory are
available for re-use under
The Directory does not contain material that is related to personal data.
Access statistics are collected using the usual mechanisms to monitor web page
access, for internal evaluation and project progress assessment.
Software data provided by the AppHub Marketplace are available according to
the respective project licenses, which are, by definition, open source, and
benefit from the qualities of open source software in terms of use. Each
software element retains its respective copyright.
Access to project and asset data is also provided via REST APIs. The following
table summarizes the main calls. Results are returned in JSON format (to be
documented in a future update of this document).
<table>
<tr>
<th>
_http://directory.apphub.eu.com_ _/api/action/organization_list_
List of all projects that are registered in the AppHub directory
</th> </tr>
<tr>
<td>
_http://directory.apphub.eu.com_ _/api/search/dataset?q=organization_
_:PROJECT_NAME_
Information on the project indicated by PROJECT_NAME including asset list.
Project names can be obtained by the call above.
</td> </tr>
<tr>
<td>
_http://directory.apphub.eu.com_ _/action/package_show?id=_ _ASSET_ID_
Information on the asset indicated by ASSET_ID. Asset identifiers can be
obtained by the call above.
</td> </tr> </table>
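As a brief illustration of the calls above, the following minimal Python sketch queries the three endpoints and decodes the JSON responses. The endpoint paths are taken from the table; the structure of the returned JSON is an assumption, since the format is only to be documented in a future update of this document.

```python
import json
import urllib.parse
import urllib.request

BASE = "http://directory.apphub.eu.com"

def get_json(path: str) -> dict:
    """Fetch one Directory API path and decode the JSON response."""
    with urllib.request.urlopen(BASE + path) as response:
        return json.loads(response.read().decode("utf-8"))

# 1. List all projects registered in the AppHub Directory.
projects = get_json("/api/action/organization_list")

# 2. Look up one project (name taken from the listing above).
name = urllib.parse.quote("PROJECT_NAME")  # placeholder project name
project = get_json("/api/search/dataset?q=organization:" + name)

# 3. Show information on a single asset by its identifier.
asset = get_json("/action/package_show?id=ASSET_ID")  # placeholder asset id
```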
# 5\. Archiving and preservation
During the project duration, copies of the data stored in the AppHub Directory
are taken every week (complete database dumps) and stored on file servers of
the Fraunhofer FOKUS IT infrastructure (which in turn is mirrored on secondary
storage media on a daily basis as part of operational procedures). Copies of
these data will be kept for at least two years after the project finalization.
Copies of the data stored in the AppHub Marketplace and Factory will be taken
automatically on a daily basis (snapshots) and stored on file servers of OVH
(the company hosting the AppHub Factory and Marketplace services).
After the project finalisation, the AppHub platform will be maintained by OW2
as part of their support infrastructure for open source projects, and access
to project and taxonomy data will be maintained as described above (with
possible changes regarding access mechanisms, APIs, protocols, etc.).
Procedures for archiving and preservation of data acquired after the AppHub
time frame are up to OW2.
# 6\. Publications and Deliverables
Publications produced in the AppHub project, and deliverables (with
dissemination level “public”) after approval by the European Commission, will
be published on the AppHub web site as soon as possible. Hence, AppHub will
provide (gold) open access publishing whenever possible [EC13b].
# 7\. Updates
Two major updates of this document are planned:
* An elaboration of the data structure for taxonomy data once the OIF taxonomy has been defined (month 12 in the AppHub project);
* Updated conditions for data access as part of the transfer of the responsibility of the AppHub platform operation from Fraunhofer/UShareSoft to OW2.
1007_HOLA CLOUD_645197.md
# EXECUTIVE SUMMARY
## PRESENTATION OF THE DOCUMENT
This document presents the plan for the management of the data generated and
collected by the HOLA CLOUD project; the main purpose of the Data Management
Plan (DMP) is to support and monitor the data management life cycle for all
data that will be collected, processed or generated by the project. To fulfil
this purpose, the DMP has to provide an analysis of the main elements of the
data management policy used during the project.
One of the main objectives during the life of the HOLA CLOUD project is to
increase the visibility of scientific outcomes from projects, researchers and
experts to other projects and people in the same area, as well as to other
stakeholders, by continuing the creation and implementation of the on-line
searchable joint knowledge repository. Data collection, sharing and storing
are therefore crucial to its achievement. This document makes it possible to
identify and monitor the main aspects of data management.
This document opens with a brief introduction to the deliverable, recalling
its main specific objectives. We then introduce, define and describe the main
data sets collected and generated during the lifetime of HOLA CLOUD,
identifying the target audiences to whom they will be useful. Next, we examine
in depth the standards used for the key aspect of the project, metadata.
Finally, we describe the nature of the data sharing and our strategy for data
storage.
## OBJECTIVES OF THE DELIVERABLE
A short description of the D5.5 (DMP), as stated in the Guidelines on Data
Management:
_“A DMP describes the data management life cycle for all data sets that will
be collected, processed or generated by the research project. It is a document
outlining how research data will be handled during a research project, and
even after the project is completed, describing what data will be collected,
processed or generated and following what methodology and standards, whether
and how this data will be shared and/or made open, and how it will be curated
and preserved. The DMP is not a fixed document; it evolves and gains more
precision and substance during the lifespan of the project”._ The scope of
this document is to define and describe:
* Data set that will be generated or collected, by identifying their origin, nature and scale, and its potential targets.
* Standards on which data are based.
* Data sharing, by identifying how data will be shared, in order to outline technical mechanisms for dissemination and define its access.
* Repository where data will be stored.
* Procedures that will be put in place for long-term preservation of the data, by indicating how long the data should be preserved.
# DATA SET DESCRIPTION
## COLLECTED DATA SET
In continuation with our past efforts to create a sustainable knowledge
repository from R&D or Innovation projects in the area in order to radically
increase the searchability of documents generated by EU projects and expert
search experience, we will collect and index metadata mainly from
* Scientific reports
* Conference proceedings
* Articles
* Newsletters
* Project deliverables
Furthermore, to support entity recognition we need an accurate affiliation
parser (university, department, lab) together with contact information, names
and locations. This is based on a proprietary algorithm, trained with
thousands of labelled datasets to achieve high precision for this task and
domain.
The purpose is to build an interface that provides technology insights and
shows this R&D data in a visual and intuitive way, by generating expert
profiles, hot-spot profiles (companies and academic institutions) and a visual
picture of the search (i.e. text clustering).
Collecting and re-using this metadata lets us offer unique opportunities in
terms of searchability: helping people find and understand each other's
research, share their knowledge and create the ground needed for effective
collaboration. We have also designed the repository for the permanent storage
of a particularly valuable knowledge source from projects: their public
deliverables.
### Stakeholder Database
HOLA CLOUD partners have access to multiple distribution lists of relevant
organisations and stakeholders who will be involved in contributing to the
dissemination of the project. In addition, several consortium members have
developed in the past databases of key ICT related stakeholders, which will be
used to attract support and recruit target stakeholders in order to increase
this database.
## GENERATED DATA SET
### Co-authored Roadmap
One of the main focuses of the Scientific Conferences, especially the second
edition (Cloud Forward 2016), will be to activate the process for the
co-production of a shared Roadmap, which should set mid/long-term R&D
priorities in the areas of software, services and cloud computing. This
Roadmap will be the result of the cooperation of the winners of the selection
of papers for each edition of the Conference run during the project lifetime.
This Roadmap will include:
* A vision document, depicting the scenarios of deployment and use of cloud infrastructures and services in the next years, with a vision towards 2030
* A position paper, expressing the opinion of the co-authors about the actual challenges that technology providers have to face in the coming years, the support that the EC and other public bodies can provide to the growth of the sector, the obstacles that should be overcome, etc.
This roadmap will thereby help guide the industry in its search for new trends
and products that it can develop based on today's state-of-the-art European
knowledge. This outcome will be a final result of the second year's
conference, with the first edition used to start compiling the baseline
state of the art and a first identification of long-term technological trends.
In this context, contribution requests, contribution types, processes and
outcomes will be set in place and optimized for the second conference.
LKN focuses on "fresh" research data to provide technology insights, as
opposed to other tools such as Google Scholar, Scopus (Elsevier),
Thomson-Reuters or Inno360, which focus on "old data": publications and
patents. The purpose is to bring the most value to users, for which LKN is
building the biggest "fresh research data" database, focusing on conferences,
grants (R&I projects in software, services and Cloud computing) and related
software, publications and metadata.
### Data Journals Development
In order to increase awareness of joint knowledge, we will develop data
journals from the collected metadata. The data journal annotation will be
based on models extending Unidata's Common Data Model (CDM) and the Dryad
Metadata Application Profile, including parameters such as topic,
area, dataset source type (real-world vs. synthetic), level of noise,
popularity (e.g., number of views, number of downloads), associated research
topics, citation source (i.e., which researchers have used this dataset),
among others. Based on this annotation, the data papers are semantically
linked with research papers and could potentially be searchable through a
variety of ways. Moreover, the research trends service feeds this one by
proposing candidate topics for the data journal.
### Data target audience
Data from the HOLA CLOUD project will benefit the following stakeholders:
* **Researchers** , interested in increasing the academic merits by participating in the Scientific Conference and publishing their work in high-quality proceedings with a renowned publisher
* **Industrial players,** in search for new trends and products for the future. Even though the roadmap resulting from International Conferences is generated based on research, experience shows that it will take three to five years to turn research prototypes into innovative products. Thus, this roadmap will thereby help guiding the industry in search for new trends and products for the future that they can develop with state of the art European knowledge of today.
* **Related EU funded projects** , interested in the discovery functionalities foreseen for the on-line platform and in contributing to the roadmapping also through joint-publications from different projects
The project data are also useful for:
* **Policy makers** , interested in the roadmapping co-authored exercise and its relation to public support to European R&D and Innovation in Cloud and services
* **Brokers** (intermediaries), interested in the discovery functionalities foreseen for the HOLA CLOUD on-line platform
* **Media, publication editors** , also interested in the proceedings from the Scientific Conference and the features from the advanced services within the on-line platform.
# STANDARDS USED
The key to success in providing the appropriate information to support the
objectives of HOLA CLOUD is metadata. The project uses as its basis CERIF (the
Common European Research Information Format), an EU Recommendation to Member
States. The EC has entrusted euroCRIS (www.eurocris.org) with maintaining and
developing CERIF since 2002. Although originally designed as a data model for research
information – where it is used in 43 countries – CERIF has found wide usage in
many domains since it is rather general. It covers objects such as project,
organisation, person, publication, product (dataset or software), patent,
facility, equipment, event and many more - in fact the set of entities
required for managing research information whether for reporting, management
decision-making or inquiry and analysis. Instances of entities are related by
linking relations which have role (e.g. author) and start and end date/time.
From CERIF it is possible to generate many of the well-known metadata
standards such as DC (Dublin Core) or CKAN (Comprehensive Knowledge Archive
Network) and in a geospatial environment INSPIRE, yet also point to more
detailed, specific, domain level datasets that are of specialised usage. CERIF
forms the lowest common level across many kinds of data. In this sense it is
trans-disciplinary.
CERIF is widely used. It has entities and attributes appropriate for recording
information on legalistic and economic aspects of research entities
particularly datasets, publications, patents and products. As such it is
positioned to assist in interoperation and homogeneous access over
heterogeneous sources of information. CERIF has been adopted by the EC Project
OpenAIREPlus concerning research publications, datasets and their evaluation
and is used for interoperation in ENGAGE and EPOS-PP. In this context it is
also being considered within RDA (Research Data Alliance) especially through
the three groups working on metadata which all have euroCRIS representation as
co-chairs. The potential further standardisation of CERIF is also being
discussed with W3C.
In the domain of HOLA CLOUD it is notable that key sources of information use
CERIF, namely OpenAIREplus (providing information on H2020 projects and their
outputs using OpenAIRE and DRIVER), the ERC management system and many systems
of funding agencies, universities and research institutes throughout Europe
(and wider). The advanced metadata is thus based on an existing standard.
HOLA CLOUD utilises a more advanced version of CERIF than that used by
OpenAIRE thus permitting more complex analysis. Unlike OpenAIRE which harvests
metadata from open access repositories (almost all with scholarly
publications; OpenAIREplus will also harvest datasets), HOLA CLOUD will
interoperate with a wider range of sources and wider range of metadata
standards. Thus HOLA CLOUD will have as its central information base a richer
metadata description of more sources.
However, depending on the requirements of HOLA CLOUD extensions to CERIF may
be proposed to accommodate those requirements. In particular one would expect
extensions in domain-specific vocabularies (CERIF manages its own ontology)
and additional entities and attributes may be required. CERIF provides a well-
defined extension mechanism.
Thus we take advantage of solid previous work and use it for this novel
purpose, but also allow for and expect further advances which will benefit not
only the particular objectives of the project but also more widely. euroCRIS
provides the expertise on the CERIF model and its usage including changes and
developments and the provision of any necessary convertors between metadata
formats.
CERIF has some novel concepts. In particular it distinguishes base entities –
such as person or project – from linking entities which relate instances of
base entities together. An example would be Person X is author of Publication
P or Person X collaborates with Person Y. Moreover these linking entities also
cover the role (e.g. is author of) and temporal validity. This provides a very
rich semantics for managing research information. Furthermore, CERIF separates
the syntax (structure) of the data used as metadata from the semantics
(meaning) by having a semantic layer. This functions as an ontology and is
interconvertible with OWL and SKOS (W3C recommendations). However, having the
ontology integral with the data and using relational or other database
technology provides advantages in performance over traditional ontologies.
CERIF is natively multilingual with all text attributes being repeatable in
different languages.
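As a rough illustration of the base-entity/linking-entity distinction (an informal Python sketch, not the official CERIF schema), the following fragment shows how a link carries both a role and a temporal validity:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BaseEntity:
    """A CERIF-style base entity, e.g. a person, project or publication."""
    kind: str
    name: str

@dataclass
class Link:
    """A CERIF-style linking entity: it carries a role and a validity period."""
    source: BaseEntity
    target: BaseEntity
    role: str                   # e.g. "is author of"
    start: date
    end: Optional[date] = None  # open-ended temporal validity

person = BaseEntity("Person", "Person X")
paper = BaseEntity("Publication", "Publication P")
authorship = Link(person, paper, "is author of", start=date(2015, 1, 1))
```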
# DATA SHARING
This project aims at introducing and piloting a holistic approach to the
publication, sharing, linking, review and evaluation of research results,
based on the open access to scientific information. Towards this direction, it
pilots a range of novel services that could alleviate the lack of structured
data journals and associated data models, the weaknesses of the review
process, the poor linking of scientific information, as well as the
limitations of current research evaluation metrics and indicators.
In order to improve the searchability of documents generated by EU projects
and the expert search experience in general, and to enable efficient data
sharing, we will focus on overcoming the major barrier for researchers:
entering data, particularly metadata. To achieve this, HOLA CLOUD will
automate as much of the processes and procedures as possible.
* By harvesting from pre-existing systems (that are already integrated into the workflow of users) one major obstacle is overcome.
* By using a canonical metadata format (CERIF) that is a superset of the other commonly-used formats, ingestion is automatic and, if required, we can generate those other formats for interoperation.
* CERIF is designed to be extensible and if HOLA CLOUD requirements indicate further entities or attributes are needed it can be extended transparently while preserving backward compatibility.
HOLA CLOUD will generate profiles for the researchers (and institutions:
research groups, labs and companies) so that they will only have to "claim"
them, not create them.
HOLA CLOUD will implement a periodical update of the experts’ profiles with
the "fresh" metadata that they generate from new reports, project results and
deliverables, conferences proceedings, etc. This up-to-date data is the most
important for other researchers and especially for SMEs and companies.
## SEARCH ENGINE
At the physical level, and given the constraints imposed by the agreement
objectives (a scalable, distributed architecture delivering high-performance
search), we will not use a traditional database. As specified in the proposal,
we will use a fault-tolerant, highly efficient search architecture on top of
ElasticSearch, a distributed, open source search and analytics engine designed
for horizontal scalability, reliability, and easy management.
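For illustration, the following minimal Python sketch shows how metadata records might be indexed and searched through ElasticSearch's REST/JSON API. The index name, fields and local endpoint are assumptions for the example, not the HOLA CLOUD production setup.

```python
import json
import urllib.request
from typing import Optional

ES = "http://localhost:9200"  # assumed local ElasticSearch node

def es_request(method: str, path: str, body: Optional[dict] = None) -> dict:
    """Send one JSON request to ElasticSearch and decode the JSON reply."""
    data = json.dumps(body).encode("utf-8") if body is not None else None
    req = urllib.request.Request(ES + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Index one deliverable's metadata record (index name is illustrative).
es_request("PUT", "/deliverables/_doc/1", {
    "title": "Example project deliverable",
    "project": "EXAMPLE-FP7",
    "topic": "cloud computing",
})

# Full-text search over the indexed metadata.
hits = es_request("POST", "/deliverables/_search", {
    "query": {"match": {"topic": "cloud"}},
})
```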
## CONFERENCE PROCEEDINGS
The contents of the CF2015 proceedings (i.e. the list of accepted papers) are
shown in deliverable D3.3. All accepted papers, including the preface and the
consolidated paper (expressing the positions of the authors of accepted
papers), are online as open access and available in the corresponding Elsevier
repository. For the CF2016 proceedings and the co-authored Roadmap, we will
follow the same open access strategy, although we are currently exploring
other publisher alternatives.
# DATA ARCHIVING AND PRESERVATION
One of HOLA CLOUD's value propositions is the storage of H2020 project
reports, deliverables and R&D results, as opposed to the fragmented and
temporary storage that exists today (each project has its own website). The
HOLA! Digital Library started storing and tagging public deliverables from
around 40 FP7 projects in the area (and even some FP6 projects), which proves
the need for a permanent repository for these documents: some of the websites
for those projects are no longer alive (data is lost 2-4 years after those
sites stop being maintained), and the Hola Portal is now the place to find
them. Continuing this effort, the searchable knowledge repository and all its
services will be integrated into the HOLA CLOUD on-line platform.
Storing documents requires computing power and the cleaning and structuring of
data, and therefore resources. Storing documents so that they can be accessed
in a search-engine fashion is even more demanding, since information retrieval
needs to be very fast: among other requirements, servers with large amounts of
RAM are needed and the database structure must be optimized. Requirements
could exceed the resources available to the project.
In order to keep this under control, Linknovate will monitor while
implementing, reassessing every month. Linknovate has previous experience in
building a very similar metadata database for its search engine
( _www.linknovate.com_ ).
One of the most successful technologies used nowadays for indexing, searching
and storage is Lucene. Lucene enables fast indexing and searching using
inverted and direct files, and makes the management of document fields
transparent to the user. However, our search engine needs features such as
scalability, distribution and entity relationships that are not possible with
Lucene alone.
For these reasons, we decided to work with ElasticSearch. ElasticSearch is a
distributed, open source search and analytics engine, designed for horizontal
scalability, reliability, and easy management. ElasticSearch relies on Lucene
as its final storage solution but enables a much larger range of
possibilities. Moreover, the use of this technology will allow entity types
and entity attributes to be added dynamically to our models, enabling the
future use of a more ample set of the CERIF standard when new kinds of
information arrive in the system.
During the HOLA CLOUD project, Linknovate will be in charge of the
preservation and storage of metadata. To achieve this, 50,000 € of the budget
will be allocated to hosting costs for the HOLA CLOUD on-line platform,
estimated using standard rates for platforms that function as a search engine
and store a significant volume of files and metadata.
1009_FLOBOT_645376.md
# Introduction
This document deals with the research data produced, collected and preserved
during the project. This data can either be made publicly available or not
according to the Grant Agreement and to the need of the partners to preserve
the intellectual property rights and related benefits derived from project
results and activities.
Essentially, the present document will answer the following main questions:
* What types of data will the project generate/collect?
* What data is to be shared for the benefit of the scientific community?
* What format will it have?
* How will this data be exploited and/or shared/made accessible for verification and re-use?
* What data cannot be made available? Why?
* How will this data be curated and preserved?
The data is made available as Open access research data; this refers to the
right to access and re-use digital research data under the terms and
conditions set out in the Grant Agreement. Openly accessible research data can
typically be accessed, mined, exploited, reproduced and disseminated free of
charge for the user.
The FLOBOT project abides by the European Commission's vision that information
already paid for by the public purse should not be paid for again each time it
is accessed or used, and that it should benefit European companies and
citizens to the full. This means making publicly-funded scientific information
available online, at no extra cost, to European researchers, innovative
industries and citizens, while ensuring long-term preservation.
The Data Management Plan (DMP) is not a fixed document, but evolves during the
lifespan of the project. The following are basic issues that will be dealt
with:
* **Data set reference and name**
The identifier for the data sets to be produced will have the following
format:
Flobot_[taskx.y]_[descriptive name]_[progressive version number]_[date of
production of the data]
(an illustrative example would be Flobot_task5.2_human_tracking_01_2017-06-30)
* **Data set description**
Description of the data that will be generated or collected will include:
* Its origin, nature, scale and to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse.
* Its format
* Tools needed to use the data (for example specialised software)
* Accessory information such as video registration of the experiment or other.
* **Standards and metadata**
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created.
* **Data sharing**
Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling re-use, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.). In case the
dataset cannot be shared, the reasons for this should be mentioned (e.g.
ethical, rules of personal data, intellectual property, commercial, privacy-
related, security-related).
* **Archiving and preservation (including storage and backup) and access modality**
Description of the procedures that will be put in place for long-term
preservation of the data. Indication of how long the data should be preserved,
what is its approximated end volume, what the associated costs are and how
these are planned to be covered.
The present Data Management Plan will answer the following questions:
Is the scientific research data easily:
1. **Discoverable.** Is the data and associated software produced and/or used in the project discoverable (and readily located), identifiable by means of a standard identification mechanism (e.g. Digital Object Identifier)?
2. **Accessible** . Is the data and associated software produced and/or used in the project accessible and in what modalities, scope, licenses (e.g. licencing framework for research and education, embargo periods, commercial exploitation, etc.)?
3. **Assessable and intelligible** . Is the data and associated software produced and/or used in the project assessable for and intelligible to third parties in contexts such as scientific scrutiny and peer review (e.g. are the minimal datasets handled together with scientific papers for the purpose of peer review, is data provided in a way that judgments can be made about their reliability and the competence of those who created them)?
4. **Useable beyond the original purpose for which it was collected** . Is the data and associated software produced and/or used in the project useable by third parties even long time after the collection of the data (e.g. is the data safely stored in certified repositories for long term preservation and curation; is it stored together with the minimum software, metadata and documentation to make it useful; is the data useful for the wider public needs and usable for the likely purposes of non-specialists)?
5. **Interoperable to specific quality standards** . Is the data and associated software produced and/or used in the project interoperable allowing data exchange between researchers, institutions, organisations, countries, etc. (e.g. adhering to standards for data annotation, data exchange, compliant with available software applications, and allowing re-combinations with different datasets from different origins?
# Data generated in FLOBOT
## Floor visual analysis, Floor cleaning quality control and Object
identification data
The data is collected during lab tests and during preliminary visits to the
end-user sites, demonstration sites and similar target locations, such as
various supermarkets. Data collected is related to the developments of T5.3
and T5.4. The data will be collected continuously during the first 30 months
of the project. The data will be available at project end. The format of the
data is the Rosbag file format (binary data container file), available in the
ROS framework. The format is ROS internal and allows reproducing the test
scenarios at any time. The disadvantage is that the amount of data is large,
as indicated below.
The database structure is a sequence of images from the sensors of the robot.
The current sensor setup includes: 2x RGB-D sensors, 2x stereo cameras, 1x
Laser Line Scanner. Sequences may also be recorded with fewer sensors. The
file size depends on the data rate: it is estimated at 800 MB/s (uncompressed)
and 270 MB/s (zipped).
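As an illustration of how such recordings can be re-used, the following minimal Python sketch (requiring a ROS 1 installation with the rosbag package) counts the messages per sensor topic in a recorded bag; the bag filename is a placeholder.

```python
import rosbag

bag = rosbag.Bag("flobot_lab_test.bag")  # placeholder bag file name
counts = {}
for topic, msg, t in bag.read_messages():
    # Count how many messages each sensor topic contributed.
    counts[topic] = counts.get(topic, 0) + 1
bag.close()

for topic, n in sorted(counts.items()):
    print(topic, n)
```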
The scientific data standards to be used are the ROS formats. We plan to
adhere to this widely used format. We will also create annotated metadata. The
metadata will contain masks for the potential output of the method. Masks will
be in binary format, as also used in ROS.
For the evaluations, we will also collect and make available ground truth
data. This will be achieved manually by annotation of the acquired Rosbag
camera images. This needs to be done per frame and is time consuming. There
exists also a similar framework for evaluation, created by Bormann et al 1 .
It may be possible to use this framework, even if the application scenario
differs. However, this possibility requires further investigation, since it
requires extensive knowledge of ROS and the framework code to run.
The possible use or re-use by the scientific community is provided by adhering
to ROS conventions.
_Information about tools and instruments_ : As outlined before, it is planned
to investigate and use the specific tools described earlier. If this is not
possible, we will attempt to provide different tools or produce data in a way
that the format given is adequate for easy re-use by the scientific community.
Regarding scientific papers to be used as key explication of the data meaning,
value, and use, we plan an ICRA 2 or IROS 3 submission with the Data and
first evaluations.
_Interoperability to specific quality standards_ : The data and associated
software produced and/or used are interoperable, allowing data exchange
between researchers, institutions, organisations, countries, etc. (e.g.
adhering to standards for data annotation, data exchange, compliant with
available software applications, and allowing re-combinations with different
datasets from different origins).
_Accessibility_ : The data and associated software produced and/or used will
be freely accessible.
## Human tracking for safety data
The data is collected for Task 5.2 during lab tests and preliminary tests at
the demonstration site. The data will be available at the end of the project.
The format of the data depends on the sensor used. In general, the Rosbag file
format (binary data container file) from the ROS framework will be used. The
format is ROS internal and allows reproducing the test scenarios at any time.
Alternatively, the ‘pcap’ format (packet capture) will be used for laser data
only.
The database structure is a sequence of images and laser scans from the robot
sensors. The current sensor setup includes: 1x RGB-D sensor, 1x 3D laser
(LIDAR). Sequences may also be recorded with each sensor individually. Where
available, information from the robot odometry sensors will also be recorded.
The expected size of the files is in the order of hundreds of MB per second,
depending on the sensor reading rate (i.e. 20–30 Hz).
As in the previous case, the scientific standards and metadata are the ROS
formats, since these are the most widely used in the robotics community. We
will also create annotated metadata containing masks (i.e. ROS binary format)
for the potential output of the human tracking system. Ground truth data will
be created by manually annotating the acquired Rosbag data for evaluation
purposes. Further details regarding tools for data annotation will become
available during the actual development stage of the project.
The possible use or re-use by the scientific community is made possible by
adhering to the ROS standards. Information about tools and instruments at the
disposal of the beneficiaries will be those provided by the ROS community and
those developed, where necessary, by the partners for internal and external
use.
Expected scientific papers to be used as key explication of the data meaning,
value, and use are planned for the IEEE ICRA or IEEE/RSJ IROS conferences, as
well as a high-impact robotics journal (e.g. IJRR 4 ).
Interoperability to specific quality standards will be achieved by adhering to
the aforementioned formats. The data will be freely available.
## Navigation and mapping data
_Mapping data_
The data is collected during task 5.1 at M9 and M10 on at least one
demonstration site, to enable unit testing during development. The final
mapping data is collected in two stages: one at the beginning of task 6.2 at
M19, and the other during the setup in task 7.3 at M28. The data will be
available at M32.
The local mapping will be done through several layers of the map:
* Localisation map: contains a bitmap of the land occupation, its resolution (a float) and its origin (two floats)
* Semantic map: contains a bitmap of metadata, splitting the surface into zones
* Points of interest: a list of points (each one represented by two float coordinates)
The size of the bitmap will depend on the working surface and the resolution
required. For example, a 10,000 m² square-shaped surface at 5 cm resolution
would require a 2000×2000 pixel bitmap (each pixel containing 1 byte of data).
This bitmap is expected to change at most once a day. With 75% compression (in
.png format), saving this data would require 1 MB of memory per day in this
example.
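The storage estimate can be reproduced with a short calculation; the following Python sketch simply restates the arithmetic above.

```python
# A square 10,000 m² surface at 5 cm resolution, one byte per pixel,
# with 75% compression.
side_m = 100.0                                # sqrt(10,000 m²)
resolution_m = 0.05                           # 5 cm per pixel
pixels_per_side = int(side_m / resolution_m)  # 2000
raw_bytes = pixels_per_side ** 2              # 4,000,000 bytes (~4 MB)
compressed_bytes = raw_bytes * 0.25           # ~1,000,000 bytes (~1 MB per day)
print(pixels_per_side, raw_bytes, compressed_bytes)
```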
The data (the bitmap produced by the mapping module) will be made available
under a licence agreement that makes it freely available for non-commercial
use, provided the prior approval of the FLOBOT consortium, Robosoft and the
demonstration site owner is obtained (e.g. for secure sites such as airports).
_Navigation and obstacle detection data_
The data is collected during task 7.3 at M34 at the demonstration site. The
data will be available at M35.
The path of the robot will be given as a series of segments and smooth turns
(Bézier curves). Each segment will be defined by two points (4 float
coordinates), while each Bézier curve requires 6 points (12 float
coordinates). The robot will receive its initial path and might have to modify
it (if an obstacle lies in its path, for instance). To record this, the robot
will have to memorise each order it followed (i.e. each segment or Bézier
curve it effectively tried to follow). Thus, if a part of the path was aborted
for some reason, it will not be saved.
The instruments are able to estimate the path of the robot, which will
slightly differ from its instructions. This will only be recorded as a series
of points, regularly updating the position of the robot. Saving the initially
instructed path, the followed path and the estimated path used by a robot
should not require more than 300 kB/h.
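As an illustration only (the class names are assumptions, not the actual FLOBOT message definitions), a path mixing straight segments and smooth turns could be represented as follows:

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

Point = Tuple[float, float]

@dataclass
class Segment:
    """A straight part of the path: two points, 4 float coordinates."""
    start: Point
    end: Point

@dataclass
class BezierTurn:
    """A smooth turn: six control points, 12 float coordinates."""
    control_points: List[Point]

Path = List[Union[Segment, BezierTurn]]

# A toy path: one straight segment followed by one smooth turn.
path: Path = [
    Segment((0.0, 0.0), (5.0, 0.0)),
    BezierTurn([(5.0, 0.0), (6.0, 0.0), (6.8, 0.2),
                (7.3, 0.7), (7.5, 1.4), (7.5, 2.4)]),
]
```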
The data (the path produced by the navigation module) will be made available
under a licence agreement that makes it freely available for non-commercial
use, provided the prior approval of the FLOBOT consortium, Robosoft and the
demonstration site owner is obtained (e.g. for secure sites such as airports).
The state of the robot includes the following data:
* battery level (a _float_ representing the percentage left)
* position (three _floats_ representing the measured coordinates x, y and orientation theta)
* speed (two _floats_ , one for the speed V and one for the angular speed omega)
* covariance (covariance of the position measure, three _floats_: covar(x), covar(y) and covar(theta))
* state of the cleaning system (TBD as the project progresses; it should at least describe whether the system is working or not)
* task of the robot (one integer coding the present task: cleaning, going home...)
* action of the robot (one integer coding the present action: following the mission, avoiding an obstacle...)
The robot will also be able to receive a direct instruction (two _floats_
V_instruction and omega_instruction). Memorising the state of the robot every
second should require at most 500 kB/h.
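For illustration, the state record described above could be sketched as the following Python structure; the field names are assumptions derived from the list, not the actual FLOBOT message definition.

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    battery_level: float      # percentage left
    x: float                  # measured position
    y: float
    theta: float              # measured orientation
    v: float                  # linear speed
    omega: float              # angular speed
    covar_x: float            # covariance of the position measure
    covar_y: float
    covar_theta: float
    cleaning_system_on: bool  # placeholder until the state is fully defined
    task: int                 # coded task: cleaning, going home, ...
    action: int               # coded action: following mission, avoiding obstacle, ...
```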
The 3D RGBD cameras will supply data under the ROS format sensor_msgs/Image.
They will supply a standard RGB image and a depth image. The LIDAR will
produce a 3D point cloud under the ROS format sensor_msgs/PointCloud. The
image, the depth image and the point cloud are not to be saved. They represent
huge amounts of data updated at a high frequency and cannot be memorised by
the robot.
The data (the state of the robot produced by the low-level and high-level
control modules developed by Robosoft) will be made available under a licence
agreement that makes it freely available for non-commercial use, provided the
prior approval of the FLOBOT consortium and Robosoft is obtained.
Regarding the API (protocol of communication and technical details exchanged
between Robosoft and other partners), any commercial use is strictly
prohibited without Robosoft’s prior written consent. Partners shall not
distribute, sell, lend or otherwise make available or transfer to a person
other than the Partners or an entity not party to this agreement and in this
project frame, the technical details, for any reason, without Robosoft’s prior
written agreement.
To protect the competitive advantage of the research and development
activities of the FLOBOT consortium, the software code, algorithms, protocols,
technical drawings and sketches produced and used by this module are not
publicly accessible. In this way, the FLOBOT consortium retains the option to
seek patent protection.
## Environment reasoning and learning data
The data is collected during task 7.3 at M34 at the demonstration site. The
data will be available at M35.
The learning and reasoning module will collect data while the FLOBOT moves.
This module will in particular check the presence of new obstacles. If a new
obstacle appears recurrently on the map, it will be added in the land
occupation map. Similarly, if an obstacle disappears, it will be removed from
the land occupation map. Both those functionalities will be achieved through
machine learning (reinforcement learning). If a point of interest cannot be
thoroughly cleaned, it will be added as non-cleanable in a corresponding layer
map. This layer map will be (very much like the maps of the navigation &
mapping modules) a bitmap.
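As a rough illustration of this add/remove behaviour (a simple counting sketch, not the actual FLOBOT learning module, which will use reinforcement learning), recurring observations could drive map updates as follows; the thresholds are assumed tuning parameters.

```python
import numpy as np

OCCUPIED, FREE = 255, 0
ADD_AFTER, REMOVE_AFTER = 5, 5  # assumed tuning parameters

occupation = np.zeros((2000, 2000), dtype=np.uint8)  # land occupation bitmap
seen = np.zeros(occupation.shape, dtype=np.int32)    # consecutive sightings
missed = np.zeros(occupation.shape, dtype=np.int32)  # consecutive misses

def update(observed: np.ndarray) -> None:
    """Update the map from one boolean grid of observed obstacles."""
    seen[observed] += 1
    missed[observed] = 0
    missed[~observed] += 1
    seen[~observed] = 0
    occupation[seen >= ADD_AFTER] = OCCUPIED    # recurring obstacle: add it
    occupation[missed >= REMOVE_AFTER] = FREE   # vanished obstacle: remove it
```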
The learning module will require data from the turbidity sensor. Thanks to
this data, the robot will be able to evaluate the dirtiness of its area while
cleaning. This data will be transferred to the database, which will in turn be
able to draw a bitmap of the frequently unclean areas (same format as the
other bitmap layers).
To protect the competitive advantage of the research and development
activities of the FLOBOT consortium, the software code, algorithms, protocols,
technical drawings and sketches produced and used by this module are not
publicly accessible. In this way, the FLOBOT consortium retains the option to
seek patent protection.
## Proactive safety module data
The proactive safety module is a safety feature offered by FLOBOT; its aim is
to warn people about FLOBOT's presence, as well as about its next movement.
The corresponding task is T5.9. Data is collected both during the development
phase (T5.9) and during the validation phase (T6.4 – Laboratory tests, T7.2 –
Pre-validation first testing, T7.3 – Pre-validation second testing, T7.4 –
Qualification review).
The first data will be available by month 20, while data will continue to be
collected up to the end of the project in month 36.
The proactive safety module relies on indications received from the FLOBOT
main controller, regarding the robot’s next move and the environment around
it, in order to project the necessary information. The data considered
interesting for the scientific community, and which will therefore be
collected, relates to: a) the performance of the module in various lighting
conditions and b) the adaptation of the projection distance, depending on the
floor inclination and/or the position of nearby obstacles.
Regarding the first type of data (various lighting conditions), datasets
collected will be formatted as follows:
<table>
<tr>
<th>Measurement index</th>
<th>Timestamp</th>
<th>Location</th>
<th>Surface type</th>
<th>Light intensity</th>
<th>Image of the projection</th>
</tr>
</table>
Light intensity will be measured using a standard light sensor. The output of
the module (the projected images) will be captured using a high-end camera and
will be used to compare the visibility of the projections under varying
lighting conditions.
Regarding the data collected for evaluating the module’s behaviour under
different floor inclinations and surrounding environment conditions, the
datasets will be structured as follows:
<table>
<tr>
<th>Measurement index</th>
<th>Timestamp</th>
<th>Location</th>
<th>Surface type</th>
<th>Surface inclination</th>
<th>Laser projection angle</th>
<th>Image of the surrounding obstacles</th>
<th>Image of the projection</th>
</tr>
</table>
Surface inclination will be measured using an inclinometer, while laser
projection angle will be measured on the robot (angle of turn of the servos).
High resolution images of the surroundings and of the projection will also be
included in the dataset.
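For illustration, one entry of this second dataset might look as follows; the
field names mirror the table above, and all values are invented:

```python
# Hypothetical example of one dataset entry for the inclination/obstacle
# experiments; values are invented for illustration.
entry = {
    "measurement_index": 42,
    "timestamp": "2017-06-01T10:15:00Z",
    "location": "warehouse A, aisle 3",
    "surface_type": "polished concrete",
    "surface_inclination_deg": 2.5,      # from the inclinometer
    "laser_projection_angle_deg": 17.0,  # servo turn angle on the robot
    "obstacle_image": "img/0042_obstacles.jpg",
    "projection_image": "img/0042_projection.jpg",
}
```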
The data will not be assigned a DOI, but it will be discoverable through major
search engines: the FLOBOT website will be promoted through search engine
optimization techniques, and the project results (including links to the
datasets) will also be disseminated through social media (Facebook, Twitter).
The data will be shared with the community free of charge for use in
non-commercial applications.
Descriptive metadata will also be produced, describing the camera used to take
the pictures and the corresponding project task.
The expected file size _per entry_ is estimated at about 2 MB in the first
case (various lighting conditions) and about 4 MB in the second.
No specialized software or tool is necessary for processing the datasets and
evaluating the module’s performance. The evaluation of the FLOBOT proactive
safety module, using the collected data, will be carried out by the project
partners during the final stages of the project. The results may be published
in scientific journals or at conferences, if deemed appropriate by the
management board.
Proactive safety module datasets will be made available through the project
website, in the same way as all publicly available FLOBOT research data.
Details are presented in the appropriate section of this document.
## Tools for psychological impact and user's evaluation
The tools will be made available under a license agreement that makes them
freely available for non-commercial use, provided the FLOBOT consortium and
the owner of the corresponding IPR (RBK) are duly acknowledged.
# Data sharing
Most literature, including scientific data, will be published and hosted, as
it becomes available, on the project’s public website, _www.flobot.eu_. The
only literature that will not be published is that pertaining to patent
protection, as defined in the previous sections.
The website has friendly, easy-to-use navigation. It will be modified in due
time to accommodate additional sections (categories) where all the necessary
literature will be stored; these will be:
1. Floor Visual Analysis Module and Data
2. Floor Cleaning Quality Control Module and Data
3. Object Identification Module and Data
4. Human Tracking for Safety Module and Data
5. Navigation and Mapping Module and Data
1. Mapping Data
2. Navigation and Obstacle Detection Data
6. Environment Reasoning and Learning Module and Data
7. Proactive Safety Module and Data
The data will be made available on the website through a Wiki module,
presented as a Knowledge Base facility. Each section will have its own page
and subpages. The wiki pages will cover the topics and descriptive project
information at an appropriate level for each set of information or dataset.
The data will be formatted as described for each module earlier in this
document and will be presented for access along with links to download the
appropriate software tools, where necessary. Each downloadable piece of
information will have its own wiki page, which will also host any necessary
additional downloadable tools.
The wiki pages will be publicly available, enriched with the necessary
metadata, and open to web crawlers for search engine listing, so they will be
reachable through standard web searches.
Although the wiki pages themselves are publicly available, the downloadable
data will be presented subject to the restrictions stated in each section
above. In practice, the following will apply on the website in order to gain
access to the information:
1. Terms and Conditions will apply and will have to be accepted prior to any download
2. Registration will be compulsory (free of charge) to maintain a data access record
3. For a certain limited number of datasets, a form will be available to request access to the data; such requests will be subject to approval by the consortium
Please note that all datasets will use the ROS framework standard or the PCAP
format, which means that the data does not need to be exposed through a
published API: the entire binary set can be downloaded and accessed with
publicly available tools (see the example after the list of formats below).
Additional downloadable formats will be:
1. PNG, BMP and JPEG file formats
2. ZIP and other public domain compressed archives
3. PDF formatted documents
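For example, a downloaded ROS bag can be inspected with the standard rosbag
tooling alone; the file name below is a placeholder (PCAP files can similarly
be read with common packet-capture tools):

```python
# Minimal sketch for inspecting a downloaded FLOBOT dataset with the
# publicly available rosbag library; the file name is a placeholder.
import rosbag

bag = rosbag.Bag('flobot_dataset.bag')  # assumed file name
for topic, msg, t in bag.read_messages():
    print(t, topic, type(msg).__name__)  # timestamp, topic, message type
bag.close()
```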
# Archiving, preservation and access modality
The data, in all its various formats, will be stored in standard storage. As
noted in the previous section, no specialised storage or access mechanism is
necessary, as all datasets will be downloadable in their entirety.
All the information, including the descriptive metadata, will be available
throughout the lifetime of the website, which is expected to remain online for
at least five (5) years after the completion of the project. Due to the
unusually large size of individual downloadable files, the storage will be
based on cloud services; based on current prices, the cost is estimated as in
the table below:
<table>
<tr>
<th>Geographically Redundant Storage (for Disaster/Recovery purposes)</th>
<th>Monthly Price per GB</th>
<th>Estimated Storage requirements</th>
<th>Estimated Price per month</th>
<th>Estimated Price over 5 years</th>
</tr>
<tr>
<td>First 1 TB/Per month</td>
<td>€0.0358 per GB</td>
<td>1 TB (1024 GB)</td>
<td>€36.66 per month</td>
<td>€2,199.60</td>
</tr>
<tr>
<td>Next 49 TB/Per month (1–50 TB)</td>
<td>€0.0352 per GB</td>
<td>1 TB (1024 GB)</td>
<td>€36.05 per month</td>
<td>€2,162.69</td>
</tr>
<tr>
<td colspan="4"><b>Total</b></td>
<td><b>€4,362.29</b></td>
</tr>
</table>
Please note that we estimate a maximum of 2 TB of data to be made publicly
available. The cost of storage will be covered by the consortium members.
Please note that the prices are based on current Microsoft Azure pricing
( _http://azure.microsoft.com/en-us/pricing/details/storage/_ ).
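The estimate can be reproduced from the quoted per-GB prices; the small
discrepancy with the table total comes from per-row rounding:

```python
# Reproduces the cost estimate above from the quoted Azure per-GB prices.
tiers = [(1024, 0.0358),   # first 1 TB at EUR 0.0358/GB/month
         (1024, 0.0352)]   # next TB (1-50 TB tier) at EUR 0.0352/GB/month

monthly = sum(gb * price for gb, price in tiers)
print("per month: EUR %.2f" % monthly)            # ~ 36.66 + 36.04
print("over 5 years: EUR %.2f" % (monthly * 60))  # ~ 4,362 (table: 4,362.29)
```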
Please note that some data may be stored under the facilities of the
consortium members that own them, but will still be referenced via the
website.
# Conclusions
The Data Management Plan presented here describes the research data that will
be produced by each of the software-related tasks of the project and the way
in which those data will be made available. Information regarding data
sharing, archiving and preservation is also included. Essentially, the DMP
answers the following main questions: What types of data will the project
generate/collect? What data is to be shared for the benefit of the scientific
community? What format will it have? How will this data be exploited and/or
shared/made accessible for verification and re-use? What data cannot be made
available, and why? How will this data be curated and preserved?
Finally, a preliminary costing analysis has been made. It has to be clarified
that this Data Management Plan (DMP) is not a fixed document, but evolves
during the lifespan of the project.
1014_PQCRYPTO_645622.md
# Introduction
The PQCRYPTO project’s main target is to study cryptosystems to establish
their security (or insecurity) against attacks, including attacks using a
quantum computer. Good candidate systems are implemented and analyzed for
their vulnerability to physical attacks, including software side-channel
attacks, and for their efficiency in terms of performance and size.
PQCRYPTO produces data in the form of scientific papers, software, and
benchmarking results.
# Software and benchmarking
Important deliverables of the PQCRYPTO project are software libraries
implementing the systems identified as good candidates. These implementations
will be made available for general use and included in the benchmarking
platform eBACS. Timing and other measurement results will be made available
in full detail; any interested party can reproduce the results of timing and
other measurements.
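Real eBACS/SUPERCOP measurements are cycle-accurate and written in C; purely
as an illustration of the median-of-many-measurements reporting style, a
rough Python analogue could look like this (the measured operation is a
placeholder):

```python
# Illustrative median-based benchmarking in the spirit of eBACS-style
# reporting; NOT the eBACS/SUPERCOP toolchain itself.
import time
import statistics

def benchmark(op, runs=101):
    """Return the median wall-clock time of op() over many runs."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# Placeholder operation standing in for e.g. a key-generation routine.
print("median: %.6f s" % benchmark(lambda: sum(i * i for i in range(10000))))
```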
eBACS does not cover the smallest devices considered in WP1. For those, the
code will be made available and an effort will be made to handle benchmarking
in the most transparent way. The somewhat dormant XBX benchmarking project may
be revived and extended to post-quantum cryptography; PQCRYPTO is
investigating different avenues.
# Scientific papers
Published papers and preprints are made available via the project website
https://pqcrypto.eu.org/papers.html, according to the Open Access
Requirements. This provides the PDF files; source files will not be provided.
For data related to experiments, see the previous section.
1015_EVE_645736.md
_Data not selected for publication._ This category covers raw data tagged by
the consortium as unpublished. However, these data will be screened for
quality and made available upon request to potential external users.
Three listed data categories are selected in DMP with the following
objectives:
* Ensure the public access both to the intermediate and final project results;
* Facilitate easy public search and access to publications, which are directly arising for the research funded by the European Community;
* Maximize the potential for creative reuse of research results to enhance value to all potential stakeholders;
* Avoid unnecessary duplication of research activities;
* Guarantee transparency of research process within the project framework.
# Standards and metadata
The DMP defines the data management cycle during the project lifetime,
detailing the character of the data generated in the EVE individual projects
and the linked metadata, as well as the exploitation, curation and
preservation of these data.
The DMP concerns: Generated and Collected Data; Data Management Standards;
Data Exploitation, Sharing and Access; and Data Curation and Preservation, in
compliance with the following standards: ISO/IEC JTC 1/SC 32 - Data management
and interchange; ISO 9001:2008 - Quality management systems; ISO 27001:2013 -
Information Security Management Systems.
Knowledge sharing outside of the consortium will be realized through the
following main instruments:
* The consortium will define a set of documents and reports with the analysis of the project results and assets that will be available for open access on the project website. Most of the project presentations delivered at professional events will also be published on the website for free download.
* The consortium will aim at granting free access to all scientific publications prepared during the project activities. The planned publications will be subject to the “green” and “gold” open access models. In addition, presentations of programme activities and related results will be published on the consortium website.
* The publishable and analyzed raw data can be reused upon request in exchange for authorship and/or the establishment of a formal collaboration.
# Quality assurance and control
The consortium has identified a number of measures related to quality
assurance and control in framework of the data management. These measures can
be summarized in three groups described below.
Measures for _quality assurance before data collection_ :
* Definition of the standard(s) for measurement and recording prior to data collection;
* Definition of the digital format for the data to be collected;
* Specification of units of measurement;
* Definition of required metadata;
* Assignment of responsibility for quality assurance to a person for each test series;
* Design of Experiments (DoE) for each test series;
* Design of a data storage system with sufficient performance;
* Design of a purpose-built database structure for data organization.
Measures for _quality assurance and control during data collection and entry_
:
* Calibration of sensors, measuring devices and other relevant instruments to check the precision, bias and scale of measurements;
* Taking multiple measurements and observations in accordance with the established DoE;
* Setting up validation rules and input masks in data entry software;
* Unambiguous labelling of variable and record names;
* Implementation of double entry rule – ensuring that two persons, performing the tests, can independently enter the data;
* Use reference mechanisms (a relational database) to minimize the number of times the data need to be entered.
Measures for _quality control during data checking_ (an illustrative sketch
follows the list):
* Document any modifications to the dataset to avoid duplicate error checking;
* Check the dataset for missing or irregular data entries to ensure data completeness;
* Perform statistical summaries, checking for outliers using graphical methods such as probability and regression plots, scatterplots, etc.;
* Verify random samples of the digital data against the original data;
* Peer review of the data against both scientific and technical criteria.
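By way of illustration, a minimal data-checking pass along these lines might
look as follows in Python with pandas; the file name and column layout are
assumptions:

```python
# Illustrative data-checking pass; file and columns are hypothetical.
import pandas as pd

df = pd.read_csv('test_series_01.csv')  # hypothetical test series

# Completeness: report missing or irregular entries per column.
print(df.isna().sum())

# Outlier screening: flag rows with any value more than 3 standard
# deviations from the column mean (a simple numeric stand-in for the
# graphical checks mentioned above).
numeric = df.select_dtypes('number')
z = (numeric - numeric.mean()) / numeric.std()
print(df[(z.abs() > 3).any(axis=1)])
```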
# DATA SHARING
The EVE data to be shared fall under three research and innovation Work
Packages of the project and cover the following content.
## Tyre and Ground Vehicle Modelling
* Tyre test results on smooth and rough terrains under various real-world operating conditions on surfaces with realistic friction properties. Selected sets of test results will be made available to the broader research community to stimulate further tyre model development and correlation;
* Tyre models for longitudinal, lateral and vertical forces on smooth and rough, as well as soft and hard terrains;
* Vehicle dynamics model incl. (i) a full vehicle, multi-body dynamics model of the test vehicle developed in MSC Adams and (ii) real-time versions of the model in DSPACE ASM software.
## Active Chassis Systems
* The vehicle model in MATLAB/Simulink software for covering tasks of real-time simulation;
* Active chassis subsystems models and validation results. The vehicle subsystems are specified in accordance to the global project tasks;
* Driving cycles and manoeuvres specification from the point of view of overall vehicle energy efficiency and safety;
* Vehicle dynamics control strategies for improvement of vehicle safety and stability. The developed control strategies will be further integrated and used for the purposes of optimal vehicle dynamics control;
* The optimal control strategy considering both energy efficiency and vehicle stability incl. controller specification.
## Cooperative Test Technologies
* Documentation to the test platform for integration of the brake, active suspension and tyre pressure control systems;
* Documentation to the test vehicle demonstrator;
* Results of the successive testing of vehicle models, controllers and integrated chassis control system on the integrated test platform developed;
* Results of vehicle tests to quantify the ride and handling of the test vehicle with different control strategies.
# ARCHIVING AND PRESERVATION
The EVE online database will provide wide access to shared electronic research
and technical material of the participating institutions and will, in general,
follow the concept of the Horizon 2020 pilot on Open Research Data. In
addition, the consortium has decided to include ZENODO (Figure 1) as the data
repository for the sharing and dissemination of the research assets. ZENODO
targets high-quality data in the engineering sciences and uses the DOI format
to sort and preserve the data. A detailed description of the chosen data
repository is given in the next section.
**_Figure 1 - Graphic interface of ZENODO web portal_ **
# DATA REPOSITORY
The EVE consortium has decided that the project results and assets will be
made available on the project web portal for a wide international audience, in
order to share the acquired knowledge outside of the consortium. As more and
more funders and journals adopt data policies that require researchers to
deposit research data in a data repository, the question of where to store
this data and how to choose a repository becomes more and more important.
ZENODO enables researchers, scientists, EU projects and institutions to
display and share multidisciplinary research results. In particular, ZENODO
provides: easy sharing of small research results in a wide variety of formats
including text, spreadsheets, audio, video, and images across all fields of
science; display of the research results and credits by making the research
results citable and by linking them to funding agencies like the European
Commission; and easy access and reuse of shared research results.
An example of data sharing through ZENODO, in accordance with the
dissemination requirements of the project, follows: the procedure for
uploading a publication from a ResearchGate personal web portal is described.
_**Figure 2 - Example of data sharing through ZENODO repository**_
The research data is here represented by the journal paper shown in Figure 2.
Once the document is selected, it can be uploaded via the ZENODO personal web
portal by going to the UPLOAD section (Figure 3). It is then possible to
specify a large amount of information for each uploaded item; a brief list of
the most important information required for fair data storage and
identification is reported here.
_**Figure 3 - ZENODO interface for data uploading. Focus on the compulsory
details for the fair data preservation** _
Individual members can share any type of data linked to the project, such as
articles and conference papers. First of all, the type of data to upload can
be specified in the “Type of File” field; in this example, the option
Publication – Journal Paper is selected. In addition, among the main
characteristics of the ZENODO repository is the possibility to preserve the
data by means of DOI codification. In the “Digital Object Identifier” field it
is possible either to assign a new DOI, automatically generated by ZENODO, or
to use the original code so as to allow other users to cite the uploaded file
easily and unambiguously. The latter choice also avoids needless duplication
of the same data: the file is generated once and then stored.
With reference to Figure 4, further important information is required, such
as: the title of the shared article or conference paper; the names of the
authors linked to the document; and a description of the uploaded data.
Moreover, to make consultation and searching of the shared data easy, the
submitted documents can be enriched with keywords. Thereafter, in the
“Community” section, the name of the EVE community within which the data is to
be shared must be specified. Finally, the grant financing the research
activity can also be included in the form.
_**Figure 4 - ZENODO interface for data uploading: further details for fair data preservation**_
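The same upload can also be scripted against the ZENODO REST API (see
https://developers.zenodo.org/); the sketch below mirrors the form fields
discussed above, with a placeholder token, file, community identifier and
grant reference:

```python
# Sketch of the upload via the ZENODO REST API rather than the web form;
# token, file name, community id and grant id are placeholders.
import requests

TOKEN = {'access_token': 'YOUR-ZENODO-TOKEN'}  # placeholder
base = 'https://zenodo.org/api/deposit/depositions'

# 1. Create an empty deposition.
dep = requests.post(base, params=TOKEN, json={}).json()

# 2. Attach the file.
with open('paper.pdf', 'rb') as fp:
    requests.post('%s/%d/files' % (base, dep['id']), params=TOKEN,
                  data={'name': 'paper.pdf'}, files={'file': fp})

# 3. Fill in the metadata fields discussed above (type, title, authors,
#    description, keywords, community, grant).
metadata = {'metadata': {
    'upload_type': 'publication',
    'publication_type': 'article',
    'title': 'Example journal paper',
    'creators': [{'name': 'Doe, Jane'}],
    'description': 'Uploaded as part of the EVE project.',
    'keywords': ['vehicle dynamics', 'EVE'],
    'communities': [{'identifier': 'eve'}],  # assumed community id
    'grants': [{'id': '645736'}],            # assumed grant reference
}}
requests.put('%s/%d' % (base, dep['id']), params=TOKEN, json=metadata)
```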
In conclusion, the consortium has decided to include ZENODO as the data
repository for the sharing and dissemination of the research assets, in
addition to the EVE web portal. Only the most important information required
for data sharing on the ZENODO repository has been illustrated above. Four
main reasons brought the consortium to this choice:
1. ZENODO targets high quality data, such as publications and associated metadata, and uses the DOI format to sort and preserve the data;
2. the database restriction level can be set to open, closed or restricted;
3. it covers the engineering sciences in general;
4. it has the European Commission Horizon 2020 among the participating institutions.
Hence, ZENODO complies with the requirements of the Horizon 2020 pilot on Open
Research Data, providing wide access to shared electronic research and
technical material of the participating institutions.
1016_DIGIWHIST_645852.md
information (company name, ID, incorporation date, address, company size,
etc.), financial data (annual turnover, profit rate, liabilities, etc.), and
ownership and manager information. However, in most European countries there
is no readily available and detailed company data. Although national company
registries exist, they are not always free to use and often contain only a
limited set of information (e.g. no ownership or financial data is available).
Furthermore, open data repositories on company characteristics (e.g.
opencorporates.com) do not contain enough data either, hence they can only be
used for cross-checking data quality. Therefore, under the terms of the
project proposal, the full company data set for all 34 countries covered was
purchased from a private data provider.
# Public sector data
In the same way as for procurement data, we use public sector data published
either on national portals or on NGO portals that mainly provide data in a
machine-readable format. There is a big difference in quality between
procurement and public sector data sources. While governments put effort into
publishing procurement data, the quality of public sector data such as budget
information or asset declarations is very poor. This data is mostly
unstructured, not easily machine-processable (scanned PDFs), and scattered
across the internet.
# Size of the data
Public procurement data is stored in several stages within a database:
## _1\. Raw data_
This comprises data as it is published on its original source. In this stage
we basically create a mirror of the original source so that we can access this
data without needing to request it again from its original location and
without any information loss. Raw data therefore contains a mixture of HTML,
XML, JSON or CSV data including all the unnecessary information that
accompanies the required information.
We’ve already collected raw data from almost all jurisdictions and therefore
don’t expect the raw data size to grow dramatically, although it’s highly
probable that after the first round of validation we will find that some
publications are missing and a crawler adjustment will be needed.
We will also collect data increments, so we expect the data size to increase
in the coming years. Since we have precise data for the month of May 2016,
we’ll be able to estimate the size of an increment in the near future.
<table>
<tr>
<th>Stage</th>
<th>Number of records *</th>
<th>Data size (GB) **</th>
<th>Estimated data size (GB) ***</th>
</tr>
<tr>
<td>Raw</td>
<td>9,142,602</td>
<td>318</td>
<td>350</td>
</tr>
</table>
* Number of records, August 2016
** Data size, August 2016
*** Estimated data size, September 2017
## _2\. Parsed data_
This database contains useful information extracted from the raw documents in
a structured text format. One raw document can be split into multiple parsed
documents, each describing one tender. We don’t parse all documents, only
selected forms that contain information relevant to the project’s goals;
therefore, the number of parsed documents may be lower than the number of raw
documents.
Only datasets from a few jurisdictions have been processed to the parsed stage
so far, comprising about 25% of all raw publications; our estimate of the
final data size is therefore based on the current size and our prediction that
the final size will be four times bigger.
<table>
<tr>
<th>Stage</th>
<th>Number of records *</th>
<th>Data size (GB) **</th>
<th>Estimated data size (GB) ***</th>
</tr>
<tr>
<td>Parsed</td>
<td>2,552,248</td>
<td>13</td>
<td>52</td>
</tr>
</table>
* Number of records, August 2016
** Data size, August 2016
*** Estimated data size, September 2017
## _3\. Clean data_
At this stage we convert the structured text information into proper data
types, e.g. numbers, dates, enumeration values (a toy sketch of this
conversion follows the table below). This stage contains the same number of
documents as the parsed stage but may contain a different number of fields
from the corresponding parsed document because:
* the system can fail while cleaning some fields, e.g. a number is not a number; or
* the system can create a new field, e.g. by mapping the national tender procedure type to an enumeration value and storing both of them.
<table>
<tr>
<th>Stage</th>
<th>Number of records *</th>
<th>Data size (GB) **</th>
<th>Estimated data size (GB) ***</th>
</tr>
<tr>
<td>Clean</td>
<td>1,866,845</td>
<td>11</td>
<td>44</td>
</tr>
</table>
* Number of records, August 2016
** Data size, August 2016
*** Estimated data size, September 2017
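As a toy illustration of the parsed-to-clean conversion described above (all
field names, formats and the procedure-type mapping are invented):

```python
# Toy illustration of the parsed -> clean stage: structured text fields
# are turned into typed values, and a national procedure type is mapped
# onto a shared enumeration. All names and values are invented.
from datetime import datetime

PROCEDURE_MAP = {'otevrene rizeni': 'OPEN'}  # national term -> enum

def clean(parsed):
    cleaned = dict(parsed)
    try:
        cleaned['price'] = float(parsed['price'])  # text -> number
    except (KeyError, ValueError):
        cleaned.pop('price', None)                 # cleaning failed
    if 'date' in parsed:
        cleaned['date'] = datetime.strptime(parsed['date'], '%d.%m.%Y').date()
    if parsed.get('procedure_type') in PROCEDURE_MAP:
        # New derived field; the original value is kept as well.
        cleaned['procedure_type_enum'] = PROCEDURE_MAP[parsed['procedure_type']]
    return cleaned

print(clean({'price': '125000.50', 'date': '12.05.2016',
             'procedure_type': 'otevrene rizeni'}))
```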
## _4\. Matched data_
Clean data contains one document per publication, without any relation between
publications describing the same tender. The matched stage connects such
publications into one group. It contains the same number of records as the
previous stage, in the same format, but adds information connecting the
documents.
## _5\. Master records_
The mastered stage is the last stage. In this phase of data processing we
aggregate data from all publications describing one tender and create one
master object that is the final image of a specific tender. This will be the
final dataset that the DIGIWHIST project will publish, together with some
related data discussed in Chapter 2.2.
Because we are at the very beginning of developing the matching algorithms and
creating master records, it’s difficult to estimate how many records there
will be in this stage. We can only use an expert estimate here, based on the
facts that:
* there are stricter publication rules for above-threshold procurements;
* above-threshold tenders will consist of a contract notice and a contract award;
* many publications will be form corrections;
* this leads to about half the number of records in comparison with the matched data collection.
## Company data
The company data database is an important dataset for:
* research activities;
* buyer/supplier matching algorithms;
It’s currently 350 GB and it comprises:
* company register – 51,288,900 records
* financial data – 67,828,500 records
* manager information – 43,954,700 records
* links – 1,489,220,000 records. This is not a final number because some data hasn’t been imported yet, so we can expect around 1,800,000,000
## Public sector data
In comparison to the other datasets, the public sector data database size is
negligible. Currently we have, for all categories together, a database of
7.5 GB. Although it is likely to grow, we don’t expect it to be larger than
15 GB for raw data.
# Data utility
Procurement data has a variety of potential users. The foremost goal of the
project is to create data which is usable for policy analyses and research,
therefore drawing users from public institutions such as the EC or national
governments, and from academia.
Various studies, such as PwC (2013) 1, identify the lack of reliable data
(especially in terms of unified structure and centralisation) as a major
drawback for such policy analyses. Additionally, data including various red
flags might be highly beneficial to anti-fraud agencies such as OLAF and to
various NGOs focusing on anti-corruption activities.
# 2\. FAIR data
## 2.1 Making data findable, including provisions for metadata
### Data discoverability
We are active members of the Open Contracting community, which is dedicated to
the publication of public procurement data. We plan to follow the standard
defined by the OCDS and, together with other OCDS publishers, our outputs will
be linked from _http://www.open-contracting.org/why-opencontracting/worldwide/_,
which is the central directory of similar datasets.
### Naming conventions
Although we designed our own data template for recording public procurement
data, our outputs will be published in the format of the Open Contracting Data
Standard (OCDS) ( _http://standard.opencontracting.org_ ), which is currently
the only widely used standard for publishing this type of data.
### Keywords
Individual structures, fields and enumeration values follow the OCDS which
makes our data easy searchable for everyone.
### Versioning
Each data release will be versioned in accordance with the OCDS for package
releases 2 .
### Metadata
To make our data more open and findable, we will publish metadata based on the
OCDS 3, such as URL, published date, publisher, etc. If we decide to publish
metadata that is not described in the OCDS, we will do so in such a way that
it only extends the standard and remains compatible with it.
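For orientation, a skeleton of an OCDS release package carrying the metadata
fields mentioned above (URL, published date, publisher) might look as follows;
all identifiers and values are invented, and http://standard.opencontracting.org
remains the authoritative reference:

```python
# Hypothetical skeleton of an OCDS release package; values are invented.
package = {
    "uri": "https://example.org/releases/2016-08-31.json",
    "version": "1.1",
    "publishedDate": "2016-08-31T00:00:00Z",
    "publisher": {"name": "DIGIWHIST"},
    "releases": [{
        "ocid": "ocds-xxxxxx-000001",   # placeholder ocid prefix
        "id": "000001-tender",
        "date": "2016-08-30T00:00:00Z",
        "tag": ["tender"],
        "initiationType": "tender",
    }],
}
```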
## 2.2. Making data openly accessible
### Processed and produced datasets
The main goal of WP2 is to publish a data collection that best reflects each
tender, based on the raw data detected and obtained by our software, together
with additional tender-related information such as the indicators that are
outputs of WP3. Some secondary data collected and/or processed by our software
won’t be published as separate datasets, either because it is protected by
contract or because publishing such datasets is not a goal of the DIGIWHIST
project; we use it only to make our final data better and more accurate.
#### Public datasets
* Public procurement data: DIGIWHIST will publish all tender information it detects in the described format together with a methodology of how the data was collected and created since it will be an aggregation of more public data sources.
* Indicators developed within the scope of WP3 as deliverable D3.6 ( _Indicators implemented in database_ ) will be a part of the public procurement data. They will be published as tender-related information.
* Public sector data collected within WP2 will be published, in compliance with the Grant Agreement, as tender/buyer related information or aggregate statistics.
#### Non-public datasets
* Company data is a dataset that the DIGIWHIST project bought in compliance with the Grant Agreement. Its usage is defined by a contract with the supplier (Bureau van Dijk). This prevents DIGIWHIST from making it public but it enables DIGIWHIST partners to use it for scientific research.
* Tender-related data of a speculative nature. Within the complex process of data cleaning and merging we obtain some variables with some informative value, yet with a high risk of being erroneous. These will be valuable for research purposes yet their publication might bring serious legal and misinterpretation risks (for example through publishing the wrong supplier). Even with rigorous disclaimer release of such data might in fact reduce understandability and usability of the data to journalists, researchers etc.
### Data access
All data published within WP2 will be accessible through an API designed to be
easily usable and machine readable. It will use SSL for authentication and for
encryption of the communication between the API and the end user, so that the
delivered data can’t be modified by anyone during transmission from the source
system to its destination, and the receiving party can be certain of the
source.
#### Access methods
The DIGIWHIST API will use the standard HTTP protocol, which means there is no
need for special software to access the data. All popular programming
languages implement functions or libraries that enable developers to
communicate via the HTTP protocol. On top of that, anyone can access the data
via a web browser such as Internet Explorer, Chrome or Firefox.
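A minimal example in Python, with an invented endpoint and parameters (the
real endpoints will be defined in the API documentation):

```python
# Illustrative API call; the URL and parameters are hypothetical.
import requests

resp = requests.get('https://api.digiwhist.eu/v1/tenders',  # assumed URL
                    params={'country': 'CZ', 'page': 1},
                    timeout=30)
resp.raise_for_status()  # TLS protects the transmission end to end
for tender in resp.json():
    print(tender.get('id'), tender.get('title'))
```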
#### Documentation
Alongside the data and the API itself, documentation of the API is under
development and will be released together with the data by the end of the
project. This documentation will describe all API endpoints and methods in
detail and will be the only document needed to successfully connect to the
described data source.
#### Restrictions
There are no explicit restrictions on re-use of the published data; therefore
there is also no need for a data access committee. The software as well as the
data documentation will be released as D2.8
( _Methods paper describing database content, data collection, cleaning, and
linking_ ).
### Software license
All software products developed within the framework of WP2 will be published
as open source under the MIT 4 licence. This licence grants permission, free
of charge, to any person obtaining a copy of the software and associated
documentation files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so.
### Data, documentation and code repository
#### Source codes and documentation
All source code and its documentation will be stored in a public GitHub
repository. This is a well-known repository in the open source community, and
sharing code this way is considered best practice because it creates a
centralised point from which third parties can obtain and re-use the code that
has been created.
The choice was thus motivated by current best practice, which is based on the
following GitHub features:
* security of a repository;
* connectivity reliability;
* data durability;
* graphical user interface;
* easy re-use enabled by major third-party software development tools.
#### Published data
Because the data size may grow and the data itself may change every day, we
decided that it is more appropriate to implement a custom solution for data
publication that allows users to access updated data on demand and in small
portions, instead of always having to download the whole dataset from a static
repository even when a significant part of the content hasn’t changed. This
solution is referred to as an API in this document and is based on standard
technologies such as the HTTP protocol and the JSON data format. A proper
backup and recovery plan needs to be implemented in the production phase to
avoid potential system failures or data loss.
## 2.3. Making data interoperable
It is a top priority goal for DIGIWHIST to make data from various sources
using various national code tables and enumerations interoperable and easily
understandable.
### Standard vocabularies
To make the data easily readable and processable, we follow Open Contracting
Data Standard structures and enumerations. This should make the data
completely clear for everyone who wants to use it. The importance of standard
vocabularies like the Common Procurement Vocabulary (CPV) or NUTS codes
becomes apparent when we take into consideration that we publish data in many
different languages. For users it is almost impossible to understand all of
those languages, but standard vocabularies help make basic information, such
as the subject of a tender or the location of works, understandable.
### Mapping
Where various national values are used for different fields (e.g. tender
procedure type), we put extensive effort into mapping the national values to
standard vocabularies. We do such mapping for fundamental data like:
* lot status;
* tender size;
* procedure type; and
* other fields that have enumeration values in OCDS.
## 2.4. Increase data re-use
### Data licence
The licensing of the data produced is still an open issue, given the legal
differences across the jurisdictions and the differences in the rights granted
by official data providers. Even though there are licences designed for open
data (e.g. ODbL 5), licensing can be complicated and we will have to proceed
country by country. For example, copyright law in the Czech Republic
explicitly excludes "data in public registries whose distribution is in the
public interest" from any possibility of licensing or protection.
### Data availability
D2.6 is due in month 31 of DIGIWHIST. This means that the final linked
database and related algorithms will be published by the end of September
2017. In compliance with the Grant Agreement, the data will be available for
at least three years after the end of the project.
### Data reusability
Data published by DIGIWHIST will be accompanied by the OCDS version with which
it is compatible. This is especially important because data standards evolve
all the time, and some changes in the OCDS are expected during the
implementation phase, which ends in month 31 of the project, or during the
following years. Stating the compatible OCDS version makes the implementation
of data processing software much easier.
### Quality assurance
There are several consortium members contributing to the quality assurance
process. This is led by Datlab, which validates the data at several levels.
1. **Consistency** \- examining the integrity of the data and its structural consistency with the designed model, and suggesting further changes to the model. Responsible organisation: Datlab
2. **Completeness** \- ensuring that all the relevant data (at the form level) has been obtained from the source. Responsible organisation: Datlab
3. **Correctness** \- ensuring that the raw data obtained is consistent with the source, i.e. containing the same values, codelists matching national legislation, etc. Responsible organisations: Datlab + UCAM domain experts
4. **Data availability** \- evaluating the quality of the processed data in terms of the availability of variables (in contrast to Correctness, this is no longer about looking for errors in our software, but about assessing the quality of the data, which may carry many imperfections from the source systems). Responsible organisation: Datlab.
The outputs of this process, most importantly the Data availability step, will
be described in detail together with validation results in D2.7 which will be
released together with the final database.
# 3\. Allocation of resources
## Costs for making data FAIR and its coverage
Making data FAIR is a significant part of the project. Almost the whole of WP2
entails re-creating data from original sources and making it FAIR. Thus, in
some sense, at least 36% of the overall project costs (the WP2 share of the
work) is dedicated to this. Since other work packages, such as WP1, also
contribute to that goal, we can conclude that considerable resources and time
are dedicated to publishing data in accordance with FAIR principles. The costs
of achieving this are built into the project budget. Some of the activities
(deliverables) which are crucial for this include:
* Legal and regulatory mapping (D 1.1)
* Implement data templates compatible with OCDS (D2.3)
* Raw (D2.4), Cleaned and structured databases (D2.5), Final linked database (D2.6)
* Data validation (D2.7)
* Methods paper describing database content, data collection, cleaning, and linking (D2.8)
## Responsibilities for data management
Until the end of the project, the UCAM team is responsible for making the data
public, documented and secure. After that, OKFN will take over for the
sustainability phase, ensuring the availability of the published resources for
at least five years after the project’s end.
The current distribution of labour requires several steps of complex data
gathering and processing which is designed and coordinated by the UCAM IT
team. Other consortium members take responsibility as part of that process for
particular actions:
1. Source annotation (UCAM domain experts)
2. Parsing and processing of the data from sources (UCAM IT)
3. Validation and bug reporting (Datlab, UCAM domain experts)
4. Data release and provision to other partners (UCAM IT)
The process further involves many decisions which will affect the final
quality and scope of the data. This includes, for example, the prioritization
of countries, sources (especially if multiple sources are available in given
countries) and individual variables in order to deliver the most comprehensive
dataset with the resources available. Such decisions are made following
discussion amongst consortium members to reflect both future usability of data
and the practical costs of gathering it.
## Resources for long-term preservation
As explained earlier, the current infrastructure is run and further developed
by the UCAM IT team. The choice of storage now, as well as in the future, is
primarily made by balancing the costs, ease of processing and potential re-
use.
Thus far we have designed (D2.1) and implemented the whole architecture using
the AWS IaaS (Infrastructure as a Service) provider, and we will run it until
September 2017. Thereafter, OKFN will be responsible for ensuring that the
data gathered during the implementation phase remains available until the end
of the sustainability phase. One of the key upcoming decisions is to agree
with OKFN what kind of architecture they will use to do this; alternatives
include using the existing architecture, running the database and software on
their own servers, or renting servers from a hosting company. The chosen
solution will reflect the above-mentioned principles in order to facilitate
one of the key project goals: making the data available for further re-use.
# 4\. Data security
## Access
We apply different security mechanisms on different levels to ensure the
security of the production infrastructure:
1. Access to the production infrastructure is granted to approved personnel only.
2. The production environment is secured by firewall.
3. There is only one entry point to the infrastructure. There is no direct access vector to the servers/services (PostgreSQL, RabbitMQ, etc.)
4. All communication with the production environment (API, server access) is possible only via channels with strong cryptography enabled (OpenVPN, SSH)
### OpenVPN
There is an OpenVPN server installed on the infrastructure entry point.
Clients first have to connect to the OpenVPN server with a proper certificate
(certificates are user specific). OpenVPN is configured so that connected
clients are not “visible” to each other.
Once the OpenVPN connection is successfully established, the user can continue
with SSH access to the rest of the infrastructure.
### SSH
Connection to servers is possible only via SSH. SSH is configured to disallow
password authentication; only public/private key authentication is possible.
Keys are not shared amongst users.
### Administrator access
Clients trying to access the infrastructure have to:
1. Connect to OpenVPN with a proper certificate
2. Connect to the entry point server via SSH
### Service access
To connect directly to one of the services (MongoDB, RabbitMQ) from a client (see the sketch after these steps):
1. Connect to OpenVPN with a proper certificate
2. Connect to the entry point server via SSH
3. Create an SSH tunnel to target service/port.
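Purely as an illustration of step 3, an SSH tunnel to a service can be opened
with the third-party sshtunnel package; all host names, credentials and ports
below are placeholders:

```python
# Illustrative SSH tunnel to a backend service; everything here is a
# placeholder (the real entry point is reachable only over the VPN).
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
        ('entrypoint.example.org', 22),            # entry point server
        ssh_username='deploy',
        ssh_pkey='/home/user/.ssh/id_rsa',         # key auth only
        remote_bind_address=('127.0.0.1', 27017),  # e.g. MongoDB
        local_bind_address=('127.0.0.1', 27017)) as tunnel:
    print('service reachable on localhost:%d' % tunnel.local_bind_port)
```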
## Backup
The data is backed up on a daily basis with a 30-day retention period. The
backups are stored as encrypted snapshots in the Amazon S3 infrastructure, in
geographically different locations.
## Availability and recovery strategy
The current state of the project does not require a 24/7 availability setup.
In case of service failure, we are able to restore the whole production
environment within a short period.
## Encryption
The data is stored on encrypted storage devices. Database storages as well as
backups, logs etc. are placed only on encrypted volumes.
## Software patches
Software patches are applied on a regular basis. Critical patch updates and
security bulletins are reviewed, and fixes are applied within hours when
necessary.
# 5\. Ethical aspects
The EC’s original Ethics Check and the RP1 Ethics Check both raised a number
of concerns around the impact of data sharing:
## Data Protection & Privacy
Detailed information was sought in relation to the procedures that will be
implemented for data collection, storage, protection, retention and
destruction, along with confirmation that they comply with national and EU
legislation. A lengthy description of data security was provided, which covers
the service provider for data storage, encryption, backup, secure access,
network configuration, user accounts, software patches, log audit, recovery
strategies, data destruction and the passing of data to third parties. The
full response, which has been accepted by the EC, can be found in our
Consortium Ethics Check Response RP1.
## Personal information
Detailed information was also sought on the type of personal information to be
collected from interviewees/informants, as well as the privacy/confidentiality
issues related to the personal data. We provided a detailed explanation of how
the data will be accessible through password login and will be kept in
encrypted files which are backed up daily. Data will only be kept for the
length of the project. All participants will be made aware of how their data
will be used and will sign consent forms. For the purposes of analysis,
informants’ personal data will be anonymised so they cannot be identified. We
will not publish any information which would allow the identification of
interviewees/informants. The full response, which has been accepted by the EC,
can be found in our Consortium Ethics Check Response RP1.
## Protection of whistleblowers
Whistle-blowers may face severe professional and physical reprisals if their
identities are wrongfully disclosed. Our national portals will not themselves
provide the whistleblowing function, but will link to a national partner’s
website that provides such a channel, so no personal data will be transmitted
through or stored on DIGIWHIST servers. All the national partners will be
experienced in running such portals and will be thoroughly vetted in advance.
Each will sign a Memorandum of Understanding requiring them to comply with EU
and national whistle-blowing and data protection legislation. We will only
enable the whistleblower function in countries where we can identify partners
that are capable of implementing the required national and international
standards.
## The management of the potential discovery of illegal activities, in particular corruption
We have agreed with the EC that we will develop a set of guidelines, including
for interviewers, on how to manage such situations based on the best practice
required by the University of Cambridge and with input from all consortium
partner institutions.
## The stigmatization of organizations and/or individuals because of false alarms caused by the developed indicators and systems
The possible stigmatization of individuals has been addressed satisfactorily
as we will not share any individual data at all – neither for private nor
public persons – so the issue will not arise.
The possible stigmatization of organisations has yet to be resolved and is
still being discussed by the Consortium with the EC (as at September 2016).
This is an ongoing “conversation”.
1017_YDS_645886.md
# Introduction
## Purpose and Scope
A Data Management Plan (DMP) is a formal project document which outlines the
handling of data sources at the different project stages. The H2020 guidelines
[22] provide an outline that has to be addressed. The DMP covers how data will
be handled within the project frame, during the research and development
phases, but also details the intentions for archiving and availability of the
data once the project has been completed [5,8]. As the project evolves, the
DMP will need to be updated to reflect changes in the data situation as the
understanding of the data sources becomes more concrete.
The YDS project aims to create a data ecosystem bringing together
state-of-the-art data processing technology with recent content about
governmental budgetary and economic transparency, in a platform that helps
European citizens, and in particular journalists, create stories based on
factual data.
The technological foundations of the data management platform being
established within YDS are intended to be multi-purpose and domain-agnostic.
Within YDS this generic data management platform will be piloted using three
closely related data domains: financial transparency in the Greek and Irish
governments, and governmental development aid. This core activity of
collecting and aggregating data from many different data sources (external to
the YDS project) means that metadata management of the used and produced
datasets is key. By applying the DCAT-AP [16] standard for dataset
descriptions and making these publicly available, the YDS DMP covers 4 out of
the 5 key aspects (dataset reference name, dataset description, standards and
metadata, data sharing) specified in [22] as integral parts of the platform.
## Approach for Work Package and Relation to other Work Packages and Deliverables
This deliverable is related to D3.1 “Data Source Assessment Methodology”,
since many of the questions identified here will need to be answered as part
of the data source assessment (prior to trying to harvest the data source).
D3.1 defines a continuous process and related activities that will ensure that
relevant data (for open access, to be made available publicly through existing
open access repositories and services) is identified and verified during the
course of the project and beyond. It is therefore crucial for completing the
individual DMP instances (i.e. per data source), which will be provided in
D2.8 Data Management Plan v2.0.
Moreover, the overall project approach to data processing will be provided in
D3.6 Data Harvesters v1.0 (and its later versions), and both practical and
technical considerations with respect to data storage and sharing will be
given in D3.9 Open Data Repository v1.0 (and its future updates).
## Methodology and Structure of the Deliverable
The initial version of the YDS DMP life-cycle is outlined in Section 2, which
elaborates the general conditions and data management methodology. As the YDS
pilots will mostly handle manually created content (tables, reports, analyses,
…), the tooling will often require manual intervention; hence the complete
data integration process, from source discovery to published aggregated data,
cannot be fully automated. An important aspect of the YDS DMP is therefore the
general methodology.
As the project progresses, the YDS DMP will be further detailed, taking into
account the experiences of the pilot cases. The remainder of this report is
structured as follows:
* _Data Management Plan Checklist_ \- Section 3 provides a description of the basic information required about the datasets that are going to be used in the YDS project.
* _Metadata Management_ \- Each data source and each resulting dataset of the YDS aggregation process will be described with metadata. This metadata can be used on the one hand for automating the YDS data ingestion process, and on the other hand by external users to better understand the published data. This is further described in Section 4.
* _Access, sharing and re-use policies_ \- An important challenge for the YDS platform is the ambition to combine data from datasets having different usage and access policies. Interlinking data that has payment requirements with data that is publicly and freely available impacts the technological and methodological approaches needed to implement the desired access policy. Section 5 outlines this further.
As the YDS pilots are still being defined, some questions relating to
(long-term) data management and storage are somewhat premature. Section 6
will, however, provide some direction on the sort of questions each data
source and pilot will need to answer.
# The YDS data lifecycle
The YDS platform is a Linked Data platform; therefore, the data ingested and
managed by the YDS platform will follow the Linked Data life cycle [4]. The
Linked Data life cycle describes the technical processing steps that make it
possible to create and manage a quality web of data. To smooth the process,
best practices are described to guide data contributors in their usage of the
YDS platform. This is further discussed in section 2.2 “The generic YDS data
value chain”, while the common best practices [12, 13] are quoted in section
2.3 Best Practices.
Prior to the linked-data approach to the use of data, data management was
perceived as an act done by a single person or unit. Responsibilities
(involving completeness, consistency, coverage, etc. of the data) were bound
to the organization’s duties. Today, with the use of the Internet and the
distribution of data sources, this has changed: data management is seen as a
living service within a larger ecosystem, with many stakeholders across
internal and external organization borders. For instance, Accenture Technology
Vision 2014 identified this as the third most important trend in 2014 [9].
## Stakeholders
For YDS, the key stakeholders that influence data management have been
identified. Their main interaction routes are depicted in Figure 1: DMP Role
Interactions.
**Figure 1: DMP Role Interactions**
_**Data end-user(s)** : _
The data end-users make use of the aggregated datasets to create their own
stories. Deliverable D2.1 identifies the main data end-user types for the YDS
platform: media & data journalists, auditors, web developers, suppliers of
business opportunities in public procurement, civil society and public
institutions. The data end-users are the main drivers of the YDS platform
content: their need for data is the key driver for the content of the YDS
platform.
**_Data source publisher/owner(s):_ **
Represent the organization(s) which will provide the data to be integrated
into the YDS platform. For many data sources, especially those published as
Open Data by public bodies, the interaction between YDS and the data source
publisher/owner will be limited to technical access to the data (a file
download, a registration to obtain an API key). To ensure a quality service
level to the data end-users, a closer collaboration with the key data sources
will be required; this is, however, expected to happen only once the YDS
platform matures.
_**Content business owner** : _
Is the person responsible for the content business objectives. The content
business owner makes sure that the necessary data sources are found in a
usable form and that the desired aggregations are defined, so as to realize
the aggregated, enriched content for the supported YDS stories. For each
content domain a business owner is required.
_**Data wrangler [10,11]** : _
This person acts as a facilitator: they interact with all stakeholders, but at
the level of the integration of the source data into the platform. The data
wrangler massages the data using the YDS platform to realize the desired
content. They must understand both the business terminology used in the source
data model(s) and the YDS target model, understand the end-user objectives,
and ensure that the mapping between the models is semantically correct. The
data wrangler is assisted by the YDS system administrator and the YDS platform
developers in tackling the technical challenges, but their central concern is
the mapping of the data.
_**System administrator and platform developer** : _
Are responsible for building and supporting the YDS platform in a
domain-agnostic way.
## The generic YDS data value chain
The complex process of a data value chain can be described using the following
stages:
**Figure 2: Data value chain stages**
* **Discover** : In today’s digitized world, there are many sources of data that help solve business problems that are both internal and external to organizations. Data sources need to be located and evaluated for cost, coverage, and quality. For YDS the evaluation of the data sources is part of the data source assessment methodology (See Deliverable D3.1). The description and management of the resulting dataset meta-data is one of the main best practices used in the Linked Data community.
* **Ingest machine processable data** : The ingest pipeline is fundamental to enabling the reliable operation of entire data platforms. There are diverse file formats and network connections to consider, as well as considerations around frequency and volume. In order to facilitate the value creation stage (Integrate, analyze & enrich) the data has to be provided in, or turned into a machine processable format. In the YDS case, the preferred format is RDF [1].
* **Persist** : Cost-effective distributed storage offers many options for persisting data. The choice of format or database technology is often influenced by the nature of other stages in the value chain, especially analysis.
* **Integrate, analyze & enrich** : Much of the value in data can be found from combining a variety of data sources to find new insights. Integration is a nontrivial step which requires domain knowledge and technical knowhow. Precisely by using a Linked Data approach with a shared ontology, the integration process is facilitated in YDS. Whereas the other stages have a high potential for automation, to a level where humans are no longer involved, this stage is driven by human interest in the data. New insights and better data interconnectivity are created and managed by a growing number of data analytical tools and platforms.
* **Expose** : Exposing the results of analytics, and the data itself, to the organization in a way that makes them useful for value creation is the final step in deriving value from data. The structure of the stages is based on the vision of the IBM Big Data & Analytics group on the data value chain [21].
When contributing a new data source to the YDS platform the stages are roughly
followed from left to right. In practice the activities are, however, more
distributed in order to keep the platform and the data it provides in the
desired state. Indeed, data that is not actively nursed quickly becomes
outdated. More and more imperfections will show up, to the point that data
end-users no longer consider the data valuable. Taking care of the YDS
platform content is, hence, a constant activity. From a technical perspective,
this work is supported by the tooling available during the Integrate, Analyze
and Enrich phase. It is similar to the work of creating newly added value, but
with the objective of improving the overall data quality (coherency,
completeness, etc.).
A further point to consider is that data-based applications also have a
tendency to generate new requirements based on insights gained when studying
the data (this forms a loop which will continue as understanding of the data
increases 1 , and this is shown in Figure 3: Linked Data ETL Process). This
will depend heavily on what the data is intended to allow or what it is
intended to be used for (search for understanding, support of a particular
story, tracking of an ongoing situation, etc.).
In the following sections, the above data value chain stages are made more
concrete.
### Discover
The **content business owners** are the main actors in this stage. Using the
data source assessment methodology, relevant data sources for their content
domain are being selected to be integrated.
An important outcome of the data source assessment is the creation of the
meta-data description of the selected datasets. In section 4, the meta-data
vocabulary that is going to be used is described (DCAT-AP). The expectation
raised in creating the meta-data is that the data sources will be well
described (what is the data, which are the usage conditions, what are the
access rights, etc.), but experience has shown that collecting this
information represents a non-trivial effort because it is often not directly
available.
### Ingest machine processable data
The selected datasets are prepared by a _data wrangler_ so that they can be
ingested in the YDS platform. The data wrangler will hook up the right data
input stream, for instance a static file, a data feed or an API, into the YDS
platform. During this work the data is prepared for machine processing. For
static files such as CSVs in particular, additional contextual information
often needs to be added in order to make the semantics explicit. Without this
preparation the conversion to RDF results in a technical reflection of the
input, yielding more complex transformation rules in the Integrate, analyze
and enrich stage.
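To make this concrete, the following is a minimal, hypothetical sketch of such an ingest step in Python, using the rdflib library. The CSV columns, the `yds:` namespace and the property names are illustrative assumptions, not the actual YDS data model.

```python
# A minimal ingest sketch: turning a (hypothetical) spending CSV into RDF,
# making the column semantics explicit via mapped properties.
import csv
import io

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

YDS = Namespace("http://example.org/yds/")  # assumed base namespace

csv_data = io.StringIO(
    "project_id,title,amount\n"
    "P-001,Road maintenance,125000.50\n"
)

g = Graph()
for row in csv.DictReader(csv_data):
    subject = YDS[f"project/{row['project_id']}"]
    # Each column is mapped to an explicit property, avoiding a merely
    # "technical reflection" of the input file.
    g.add((subject, RDF.type, YDS.Project))
    g.add((subject, YDS.title, Literal(row["title"])))
    g.add((subject, YDS.amount, Literal(row["amount"], datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```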
### Persist
Persistence of the data is de facto an activity that happens throughout the
whole data management process. However, when contributing a new data source to
the platform, the first moment data persistence is explicitly handled is when
the first steps have been taken to ingest data into the YDS platform.
Since the YDS platform is about integrating, analyzing and enriching data from
different sources _external_ to the YDS partners, persistence of the source
information is not only an internal activity. It requires interaction between
the content business owner and the data source publisher/owner to guarantee
that, during the lifetime of the applications built on top of the data, the
source data stays available. Only careful follow-up and continuous interaction
with the data source publishers/owners will create a trustworthy situation.
Technically, this is reflected in the management of the source data meta-data
activity.
Despite sufficient attention and follow-up, it will occur that data sources
become obsolete, are temporarily unavailable (e.g. due to maintenance) or
completely disappear (e.g. the organization dissolves). Many of these cases
are addressable to a certain extent by implementing data persistence
strategies such as the following (a minimal sketch of the first strategy is
given below):

* _Keeping local copies_ : the explicit activity of copying data from one location to another. The most frequent case is copying the data from the governmental data portal to the YDS platform.
* _Caching_ : a technical strategy whose main intention is to enhance data locality so that the processing is smoother. It may also act as a cushion to reduce the effects of temporary data unavailability.
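The sketch below illustrates the "keeping local copies" strategy under stated assumptions: the source URL is hypothetical, and the portal is assumed to support HTTP ETags. Where ETags are unavailable, a timestamp-based check (If-Modified-Since) would serve the same purpose.

```python
# A hedged sketch of keeping a local copy fresh with a conditional GET.
import pathlib

import requests

SOURCE_URL = "https://data.example.org/spending.csv"  # assumed portal URL
LOCAL_COPY = pathlib.Path("cache/spending.csv")
ETAG_FILE = pathlib.Path("cache/spending.etag")

LOCAL_COPY.parent.mkdir(parents=True, exist_ok=True)
headers = {}
if ETAG_FILE.exists():
    headers["If-None-Match"] = ETAG_FILE.read_text()

resp = requests.get(SOURCE_URL, headers=headers, timeout=30)
if resp.status_code == 304:
    print("Source unchanged; keeping the existing local copy.")
elif resp.ok:
    LOCAL_COPY.write_bytes(resp.content)  # refresh the local copy
    if "ETag" in resp.headers:
        ETAG_FILE.write_text(resp.headers["ETag"])
else:
    # Temporary unavailability: fall back to the cached copy, if present.
    print(f"Source unavailable ({resp.status_code}); using the cached copy.")
```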
From the perspective of the YDS data user, _archiving & highly available data
storage_ strategies are required to address the availability of the outcome
of the YDS platform. This usually goes hand in hand with a related, yet
orthogonal, activity, namely the application of a dataset versioning strategy.
Introducing dataset versioning provides clear boundaries along which data
archiving has to be applied.
### Integrate, analyze and enrich
In this stage, the actual value creation is done. The integration of data
sources, their analysis and the analysis of the aggregated data and the
overall content enrichment is realized by a wide variety of activities. In
[4], the Linked Data life cycle is described: a comprehensive overview of all
possible activities applicable to Linked Data. The Linked Data life cycle is
shown in Figure 3: Linked Data ETL Process. (Note: Some activities of the
Linked Data life cycle are also part of other phases like ingestion,
persistence and expose.)
**Figure 3: Linked Data ETL Process**
Start reading from the bottom-left stage, called “Extraction”, and proceed
clockwise.
As most data is not natively available as RDF, extraction tooling will provide
the necessary means to turn other formats into RDF. The resulting RDF is then
stored in an RDF storage system, available to be queried using SPARQL. Native
RDF authoring tools and Semantic Wikis then allow the data to be manually
updated to adjust to the desired situation. The interlinking and data fusion
tools are unique tools in the world of data management: Linked Data (or a data
format with similar capabilities as RDF) is the enabler of this process, in
which data elements are interlinked with each other without losing their own
identity. It is the interlinking, and the ability to use entities from other
public Linked Data sources, that creates the web of data. The web of data is a
distributed knowledge graph across organizations, in contrast to the setup of
a large data warehouse. The following three stages are about further improving
the data: when data is interlinked with other external sources, new knowledge
can be derived and thus new enrichments may appear. Data is, of course, not a
solid entity but evolves over time: therefore, quality control and evolution
are monitored. To conclude the tour, the data is published. RDF is primarily a
data publication format. This is indicated by the vast amount of tooling that
provides the search, browsing and exploration of Linked Data.
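As a small illustration of the interlinking step described above, the following sketch links a local entity to a public Linked Data resource with owl:sameAs; the local IRI and label are assumptions made for the example.

```python
# Interlinking sketch: a local entity is connected to an external resource
# without losing its own identity.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDFS

YDS = Namespace("http://example.org/yds/")  # assumed local namespace

g = Graph()
contractor = YDS["org/athens-municipality"]  # hypothetical local entity
g.add((contractor, RDFS.label, Literal("Municipality of Athens", lang="en")))
# owl:sameAs links the local entity to a public Linked Data resource; both
# IRIs remain dereferenceable in their own datasets.
g.add((contractor, OWL.sameAs, URIRef("http://dbpedia.org/resource/Athens")))

print(g.serialize(format="turtle"))
```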
### Expose
The last stage is about the interaction with the YDS data users. The YDS
platform is a Linked Data platform, and hence the outcome of the data
integration, analysis and enrichment will be made available according to the
common practices for Linked Open Data:
* A meta-data description about the exposed datasets
* A SPARQL endpoint containing the meta-data
* A SPARQL endpoint containing the resulting datasets
* A public Linked Data interface for those entities which are dereferenceable.
Additionally, the YDS platform supports dedicated public API interfaces to
support application development (such as visualizations). The specifications
of these are to be defined.
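As an illustration of how a data end-user might consume the exposed SPARQL endpoint, the sketch below uses the SPARQLWrapper library. The endpoint URL and the `yds:` vocabulary are assumptions; the actual platform interfaces are specified elsewhere.

```python
# A hedged sketch of querying a public SPARQL endpoint as a data end-user.
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed endpoint URL; the real YDS endpoint may differ.
endpoint = SPARQLWrapper("https://platform.yourdatastories.eu/sparql")
endpoint.setQuery("""
    PREFIX yds: <http://example.org/yds/>
    SELECT ?project ?title
    WHERE { ?project a yds:Project ; yds:title ?title . }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["project"]["value"], "-", binding["title"]["value"])
```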
## Best Practices
The YDS platform is a Linked Data platform and in this section, the relevant
best practices for publishing Linked Data are described [12, 13]. The 10 steps
described in [13] are an alternative formulation of these stages in the
context of publishing a standalone dataset. Nevertheless, these steps
formulate major actions in the creation of Linked Data content for the YDS
platform concisely (and that is why they are quoted here):
1. _STEP #1 PREPARE STAKEHOLDERS:_
_Prepare stakeholders by explaining the process of creating and maintaining
Linked Open Data._
2. _STEP #2 SELECT A DATASET:_
_Select a dataset that provides benefit to others for reuse._
3. _STEP #3 MODEL THE DATA:_
_Modeling Linked Data involves representing data objects and how they are
related in an application-independent way._
4. _STEP #4 SPECIFY AN APPROPRIATE LICENSE:_
_Specify an appropriate open data license. Data reuse is more likely to occur
when there is a clear statement about the origin, ownership and terms related
to the use of the published data._
5. _STEP #5 GOOD URIs FOR LINKED DATA:_
_The core of Linked Data is a well-considered URI naming strategy and
implementation plan, based on HTTP URIs. Consideration for naming objects,
multilingual support, data change over time and persistence strategy are the
building blocks for useful Linked Data._
6. _STEP #6 USE STANDARD VOCABULARIES:_
_Describe objects with previously defined vocabularies whenever possible.
Extend standard vocabularies where necessary, and create vocabularies (only
when required) that follow best practices whenever possible._
7. _STEP #7 CONVERT DATA:_
_Convert data to a Linked Data representation. This is typically done by
script or other automated processes._
8. _STEP #8 PROVIDE MACHINE ACCESS TO DATA:_
_Provide various ways for search engines and other automated processes to
access data using standard Web mechanisms._
9. _STEP #9 ANNOUNCE NEW DATA SETS:_
_Remember to announce new data sets on an authoritative domain. Importantly,
remember that as a Linked Open Data publisher, an implicit social contract is
in effect._
10. _STEP #10 RECOGNIZE THE SOCIAL CONTRACT:_
_Recognize your responsibility in maintaining data once it is published.
Ensure that the dataset(s) remain available where your organization says it
will be and is maintained over time._
# Data Management Plan Checklist
Each YDS pilot handles content within the Linked Open Economy domain. The
following information will need to be recorded by the _**content business
owner**_ of each pilot. These questions, similar to those found in [5], will
provide the starting point for using the data sources, the aim being to find
any data usage issues earlier rather than later 2 . This basic data
information, i.e. information about the data or meta-data, will require
managing and will be further discussed in section 4. The questions will also
serve as a checklist, similar to that provided by the UK’s Digital Curation
Centre 3 , and the answers will serve as direct input for the individual DMPs,
to be also provided in a machine-readable form as DCAT-AP descriptions
(section 4).
**Table 1: Data Management Plan Checklist**
<table>
<tr>
<th>
DMP aspect
</th>
<th>
Questions
</th> </tr>
<tr>
<td>
**Administrative Data**
</td>
<td>
* How will the dataset be identified? (a Linked Data resource URI)
* What is the title of the dataset?
* What is the dataset about?
* What is the origin of the data in the dataset?
* Who is the data publisher?
* Who is the contact point?
* When was the data last modified?
</td> </tr>
<tr>
<td>
**Data Source**
</td>
<td>
* Where will the data be acquired?
* What documentation is available for the data source models, attributes etc.?
* For how long will the data be available?
* What is the relationship between the data collected and existing data?
</td> </tr>
<tr>
<td>
**Data formats**
</td>
<td>
* Describe the file formats that will be used, and justify those formats,
* Describe the naming conventions used to identify the files (persistent, date based, etc.)
</td> </tr>
<tr>
<td>
**Data Harvesting and Collection**
</td>
<td>
* How will the data be acquired?
* How often will the data be acquired?
* What are the tools and/or software that will be used?
* How will the data collected be combined with existing data?
* How will the data collection procedures/harvesting be documented?
</td> </tr>
<tr>
<td>
**Post Collection Data Processing**
</td>
<td>
* How is the data to be processed?
* Basic information about software used,
* Are there any significant algorithms or data transformations used (or to be used)?
</td> </tr>
<tr>
<td>
**Data Quality Assurance**
</td>
<td>
* Identify the quality assurance & quality control measures that will be taken during sample collection, analysis, and processing 4 ,
* What will be the data validation requirements? Are there any already in place?
* Are there any community standards you can re-use?
</td> </tr>
<tr>
<td>
**Short-term Data Management**
</td>
<td>
How will the data be managed in the short-term? Consider the following:

* Version control for files,
* Backing up data,
* Security & protection of data and data products,
* Who will be responsible for management (Data ownership)?
</td> </tr>
<tr>
<td>
**Long-term Data Management**
</td>
<td>
See Section 6 for more details
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* How will the data be shared with the public?
* Are there any restrictions with respect to the dataset or parts of it to be shared?
</td> </tr>
<tr>
<td>
**Ethics and Legal Compliance**
</td>
<td>
* How will any ethical issues, should they arise, be managed?
* Have you gained consent for data preservation and sharing?
* How will you protect the identity of participants if required?
* How will sensitive data be handled to ensure it is stored and transferred securely?
* What are the licenses required to access and use the data?
* How will any copyright and Intellectual Property Rights (IPR) issues, should they arise, be managed?
</td> </tr> </table>
**Note:** An example with the full checklist and the possible answers is
provided in the Annex. The answers to some of the above questions, such as
Ethics and Legal Compliance (to be discussed in section 7), will be provided
in the sections below, and will serve as default input for the individual DMP
instances.
# Meta-data management
The data collected and aggregated in the YDS platform can also be distributed
to the public or be used in another aggregation process. A coherent set of
data is called a dataset. Distributing the dataset requires describing the
dataset using meta-data properties.
Within Europe, an application profile of the W3C standard DCAT [15], called
DCAT-AP [16], is being used to manage data catalogues. This standard, which is
also a **European Commission recommendation** , enables dataset descriptions
in Europe to be exchanged in a coherent and harmonized context. At the moment
of writing, i.e. June 2015, DCAT-AP is undergoing a revision to better fit
European needs.
In addition to this motivation, YDS has extensive in-house knowledge and
experience: the YDS partners NUIG and TenForce are organizations that played
key roles in the establishment and success of these standards. NUIG actively
supported the creation of DCAT as co-editor of the standardization process and
has continued sharing its expertise in the development of the DCAT application
profile. TenForce led, and was or is participating in, several projects that
contributed to the technological application of the DCAT standard and the
creation of DCAT-AP: LOD2, the European Open Data Portal, and Open Data
Support (in which TenForce established the first implementation of DCAT-AP).
Recently TenForce supported the revision of the DCAT-AP process and is
responsible for the first study on creating a variant for statistical data,
STAT DCAT-AP.
Building upon DCAT-AP will integrate the YDS platform into the European (Open)
Data Portal ecosystem. Data made available through the YDS platform is picked
up and distributed to the whole of Europe. On the other hand, the European
(Open) Data Portal ecosystem can provide access to data that has not yet been
identified as relevant. For instance, the Open Data Support project data
catalogue [17] offers access to more than 80,000 dataset descriptions from
more than 15 European Union member states.
The core entities are Dataset and Distribution. The Dataset describes the data
and its usage conditions. Each Dataset has one or more Distributions, the
actual physical forms of the Dataset. A collection of Datasets is managed by a
Data Catalogue. The details are shown in Figure 4: DCAT-AP Overview.
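A minimal sketch of such a description, built with Python and rdflib, is shown below. The IRIs, title and license are illustrative only; a conformant DCAT-AP record carries additional mandatory properties beyond this core.

```python
# A minimal DCAT dataset/distribution description, sketching the core
# entities described above.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

EX = Namespace("http://example.org/catalog/")  # assumed catalogue namespace

g = Graph()
dataset = EX["dataset/greek-spending"]
distribution = EX["distribution/greek-spending-csv"]

# The Dataset describes the data and its usage conditions.
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Greek governmental spending", lang="en")))
g.add((dataset, DCTERMS.publisher, URIRef("http://example.org/org/ministry")))
g.add((dataset, DCAT.distribution, distribution))

# The Distribution is one physical form of the Dataset.
g.add((distribution, RDF.type, DCAT.Distribution))
g.add((distribution, DCAT.downloadURL,
       URIRef("https://data.example.org/spending.csv")))
g.add((distribution, DCTERMS.license,
       URIRef("http://creativecommons.org/licenses/by/4.0/")))

print(g.serialize(format="turtle"))
```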
As the DCAT-AP vocabulary is a Linked Data vocabulary, it naturally fits the
technological choices of the YDS platform. It is expected that the DCAT-AP
vocabulary covers the majority of the YDS data cataloguing needs. In case of
gaps or more specific needs, the YDS platform will further enhance and detail
the DCAT-AP vocabulary to fit its needs. One such aspect that requires further
elaboration is the management of licensing, rights and payments. In the
ongoing revision of DCAT-AP some additional properties covering these aspects
are being added, but it has to be expected that those will not be sufficient
for YDS.
The adoption of DCAT-AP also makes tooling available: EDCAT [18], an API layer
to manage data catalogues; a web interface [19]; and the ODIP platform [17],
which harvests open data portals (based on an earlier version of UnifiedViews
[20], the central component of the YDS platform).
**Figure 4: DCAT-AP Overview**
# Access, sharing and re-use policies
For a data platform such as YDS, the access, usage and dissemination
conditions of the source data used determine the possible access, usage and
dissemination conditions of the newly created aggregated data. Despite the
sizeable amount of public open data that is available and that will be
imported, it is likely that there will be source data which is subject to
restrictions. When combining open data with restricted data, it cannot be
taken for granted that the resulting new data is open (or restricted). In such
mixed licensing situations, decisions will need to be made by the content
business owner and the data source owners concerning the accessibility of the
merged data. For example, it may be decided that some aggregated data is only
accessible to a selected audience (subscription based, registration based,
payment required or not, etc.).
This context poses not only a business challenge, but also a technological
challenge. Some common practices when moving data from one source to another
may no longer be acceptable. For example, suppose data source A describes the
overall spending of a government by project, and data source B describes the
governmental projects and their contractors. The aggregated data A+B thus
provides insight into how the budget was spent by the contractors. Merging the
data into one aggregation usually makes it impossible to determine where the
individual data elements came from. This is not problematic when the
aggregated data is subject to the same or more restrictive access, usage and
dissemination conditions as the source data itself.
More complex and problematic is the situation where the aggregations are
distributed through channels to audiences that do not satisfy the conditions
stipulated by one of the sources. To prevent incorrect usage, managing the
access, usage and dissemination conditions of the newly created aggregations
is important. That information will form the cornerstone of the correct
implementation of the required access, usage and dissemination policies.

As shown above, this aspect of data management is non-trivial work and is part
of ongoing discussions; see the outcomes of the LAPSI project [14]. Therefore,
YDS will apply the following strategy:
* The content business owner ensures that for each data source the access, sharing and reuse policy information is known.
* The content business owner decides whether the outcome of the integration & aggregation process is open (in all meanings = public, reusable, free of charge) or non-public (some restrictions apply).
* The data wranglers and system developers set up a data aggregation flow and data publication exposure according to the specification by the content business owner.
* The dataset meta-data of the created outcome is always public. The openness of the meta-data repository ensures transparency of the knowledge that is gathered within the YDS platform.
The openness of the meta-data repository may conflict (see [6]) with the
notion of “protection of sources” (see [7]), the right granted to journalists
to keep their sources anonymous. With a centralized approach this dilemma is
non-trivial. A distributed approach such as that depicted in Figure 5: Data
Accessibility shows, however, a possible resolution. The public open instance
of the YDS platform will publish the public data, while a local instance at
the journalist’s office will use the data from the public instance as one of
its data sources. The journalists can then augment the public data with
confidential data within their safe environment. The collected insights can
then be turned into a story, ready to be published.
**Figure 5: Data Accessibility**
The technological foundations of the YDS platform, i.e. Linked Data, ensure
that the above scenario is supported out of the box without any additional
work.
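A rough sketch of that distributed setup, under the assumption that the public instance exports its data as Turtle and that the local annotation vocabulary is defined in-house, could look as follows; all IRIs and file names are hypothetical.

```python
# A sketch of the distributed setup in Figure 5: a private local instance
# loads the public YDS data and augments it with confidential triples that
# never leave the journalist's environment.
from rdflib import Graph, Literal, Namespace

YDS = Namespace("http://example.org/yds/")          # assumed public namespace
LOCAL = Namespace("http://journalist.local/data/")  # assumed private namespace

local_graph = Graph()
# Public data pulled from the open YDS instance; here a local export file
# stands in for a live endpoint.
local_graph.parse("public_yds_export.ttl", format="turtle")

# Confidential annotations are added only in the journalist's local instance
# and are never pushed back to the public platform.
local_graph.add((YDS["org/contractor-42"], LOCAL.sourceNote,
                 Literal("Confirmed by a confidential source, 2016-05-12")))
```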
As the above situations already indicate, the situations that may occur can be
very complex. Therefore, YDS will start with a simpler, more uniform initial
setup of only open data that is free for reuse. Since YDS specifies that a
DCAT-AP entry be created for each dataset, the base usage conditions are
registered. This will make it possible to identify complex situations such as
those sketched above. The effects, and the decisions taken to resolve each
case, will be recorded and added as notes to the relevant DCAT-AP entries. In
doing this, the DCAT-AP record for a dataset becomes the key reference point
for decision making about that dataset.
## Data sharing
All collected data is to be shared via the YDS Open Data Repository as
**findable, accessible, interoperable and reusable (FAIR)** . The Open Data
Repository will provide machine-readable means for accessing all YDS data
through multiple channels, along with the accompanying DCAT-AP descriptions.
The DCAT-AP descriptions allow for easy discovery and automatic harvesting by
third parties, such as the European Data Portal 5 . Further technical and
practical considerations and the implementation of the data endpoints that
will be made accessible so as to disseminate/share the YDS data with the
public will be described in the D3.9 Open Data Repository v1.0 deliverable
(and its future updates).
# Long term data management and storage
The questions to be addressed concerning long-term storage are not new:
environmental datasets, medical testing datasets, and component test results
relating to safety all have to be stored for a long time (for some, “long-
term” is defined as part of a legal requirement; for others it is simply
expected, e.g. datasets relating to published academic results). These issues
are complicated when the data is made available over the internet, in that the
data can be merged with other data coming from other sources, so the
definition of a meaningful long-term horizon becomes problematic. So, each
content business owner needs to consider:
* What is the volume of the data to be maintained?
* What is considered long-term (2-3 years, 10 years, etc.)?
* Identification of archive for long-term preservation of YDS data.
* Which datasets will need to be preserved in the archive?
* What about relevant dependent datasets? Snapshots of external datasets?
* Preserved datasets will need to be updated and this means a data preservation policy and process will need to be defined (and operational).
A central consideration for any long-term DMP is the cost of preserving the
data, and what will happen after the completion of the project. Preservation
costs may be considerable depending on the exploitation of the project after
its finalization. Examples include:
* Personnel time for data preparation, management, documentation, and preservation,
* Hardware and/or software needed for data management, backing up, security, documentation, and preservation,
* Costs associated with submitting the data to an archive,
* Costs of maintaining the physical backup copies (disks age and need to be replaced).
# Risk management
In addition to all of the above discussed issues, a robust approach to data
storage and management needs to implement a range of practices to ensure data
is stored securely, particularly if it has been collected from human
participants. This means foreseeing the “worst-case scenario”, considering
potential problems that could occur and how to avoid these, or at least
minimize the likelihood that they will happen.
## Personal data protection
Even though the project will avoid collecting such data unless deemed
necessary, encountering it is inevitable, and measures must be foreseen to
avoid unauthorized leaks of personal information. Failing to address this
properly could translate into a breach of Data Protection legislation and
potentially result in reputation damage, financial repercussions, and legal
action. We foresee three potential sources of personal data in YDS.
### Platform users
The YourDataStories platform will provide the users with the possibility to
create their own accounts and data spaces. This means that even a minimum set
of essential user information might contain sensitive data (e.g. an e-mail
address).
### Social media
Any user data on the social web is by default deemed personal. For the
YourDataStories project to deliver on the social-to-semantic and semantic-to-
social promise, without endangering user privacy, any information obtained
from the social media must be handled with care.
### Evaluations with users
Even though it is undesirable, for some of the activities to be carried out by
the YDS project, such as platform evaluation via focus groups, it may be
necessary to collect basic personal data (e.g. full name, contact details,
background).
**Table 2: Personal data risk mitigation strategies**
<table>
<tr>
<th>
Risk source
</th>
<th>
Mitigation strategy
</th> </tr>
<tr>
<td>
**Platform users**
</td>
<td>
To ensure none of the sensitive data is released to third parties, the
platform will leverage access control policies on an isolated, secure server,
providing only authorized users (data owners) and the YDS administrator with
access to such data. Furthermore, the user access credentials (passwords) will
be encrypted.
</td> </tr>
<tr>
<td>
**Social media**
</td>
<td>
The YDS platform will never integrate any sensitive information collected from
the social networks in its data sets/streams permanently. Instead, the YDS
Data Layer will store and publish only anonymized information, or seek to
remove identifiable information at the earliest opportunity.
</td> </tr>
<tr>
<td>
**Evaluations with users**
</td>
<td>
Such data will be protected in compliance with the EU's Data Protection
Directive 95/46/EC 6 , aiming at protecting personal data. National
legislation applicable to the project will also be strictly followed, such as
laws 2472/1997 Protection of Individuals with regard to the Processing of
Personal Data 7 , and 3471/2006 Protection of personal data and privacy in the
electronic telecommunications sector (and amendment of law 2472/1997) 8 in
Greece.

Any data collection by the project partners will be done only after providing
the data subjects with all relevant information, and after obtaining signed
informed consent forms. All paper consent forms that contain personal
information will be stored in a secure, locked cabinet within the responsible
partner’s premises.
</td> </tr> </table>
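As one possible form the social media anonymization could take, the sketch below replaces account identifiers with salted, keyed hashes before any triple is stored; the salt and the truncation length are illustrative choices, not project specifications.

```python
# A hedged pseudonymization sketch for social media account identifiers.
import hashlib
import hmac

# The secret would be kept on the isolated, access-controlled server.
SECRET_SALT = b"replace-with-a-per-deployment-secret"

def pseudonymize(account_id: str) -> str:
    """Return a stable, non-reversible pseudonym for a social media account."""
    digest = hmac.new(SECRET_SALT, account_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# The same input always yields the same pseudonym, so aggregation still works,
# but the original identifier cannot be recovered without the secret.
print(pseudonymize("@some_account"))
```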
## Undesirable disclosure of information due to linking
Being a Linked Open Data project, YDS encodes all publishable information in
the form of an RDF graph. Although such an approach gives the platform a clear
edge over its potential competitors in the market, its very nature bears a
certain degree of risk when it comes to unwanted disclosure of information due
to linking. This applies both to personal information and to other private
information, whether due to its nature or to licensing limitations.
## Linking by reference
An important advantage of LOD as a data integration technology, even in
enterprise use cases, is that it does not require physical integration.
Instead, it employs the _linking by reference_ principle, where it relies on
the resource identifiers (URIs) to _point_ to the data entry that is to be
integrated. This means that a public dataset can point to a resource in a
private one without disclosing any accompanying information.
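The toy example below illustrates linking by reference: the public graph holds only the identifier of the private resource, not any of its accompanying triples. The namespaces are assumptions made for the sake of the example.

```python
# Linking by reference: a public triple points to a privately held resource.
from rdflib import Graph, Namespace
from rdflib.namespace import DCTERMS

PUBLIC = Namespace("http://example.org/public/")
PRIVATE = Namespace("http://example.org/private/")  # resolvable internally only

public_graph = Graph()
# The link discloses the identifier of the private resource, but none of the
# information held about it in the private dataset.
public_graph.add((PUBLIC["contract/123"], DCTERMS.references,
                  PRIVATE["dossier/987"]))

print(public_graph.serialize(format="turtle"))
```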
Nevertheless, in YDS, special attention is paid to what data is triplified in
the first place. The data harvesters will collect, transform and store only
information which is already publicly available, with the exception of social
media data which, as discussed above, will be anonymized so as to make
re-identification of individuals impossible.
**Note:** If there are concerns that certain data cannot be fully anonymized,
it will be made available only on condition that end users apply for access
and sign a Data Transfer Agreement indicating that they will not share the
data or attempt to re-identify individuals, assuming that no licenses are
broken by the YDS consortium in making such data available in the first place.
# Conclusions
Applying and setting up a data management platform requires not only the
selection of the right technological components but also the application of a
number of best practice data management guidelines [12, 13], as given in
Section 2.3. Those best practices guide users towards creating data that is
better prepared to become a sustainable data source in the web of data. Two of
these best practices have led to a concentration on two focal areas that
require initial attention for the YDS data stories. These initial focus points
are:

* Dataset meta-data management, both for the sources and the published datasets, and
* Data access considerations, sharing possibilities, and re-use policies and licenses.
In all this, the DCAT-AP dataset descriptions are a key requirement. Having
the dataset descriptions in machine-readable format creates the potential for
effective traceability, status monitoring and sharing with the YDS target
audiences. Each DCAT-AP entry will act as a machine-readable DMP instance for
the dataset it describes. Human-readable DMPs will be given in the form of
DMP checklists (an example is provided in the Annex) 9 .
The high-level principles of the YDS project DMP have been presented, from
data source discovery up to publishing of the aggregated content. The best
practices for publishing Linked Data – which are followed by YDS – describe a
data management plan for the publication and use of high-quality data
published by governments around the world using Linked Data. Via these best
practices, the experiences of the Linked Open Data community are taken into
account in the project.
The technological foundations of the YDS platform cleanly separate data
semantics, data representation and software development. Linked Data gives the
platform the flexibility to implement, at a later point in time, the
technological and data provenance support required by the pilots as basic
support. This ability is unique in the data management technology space. Here
and there throughout the report some tooling is mentioned, but it has to be
noted that the actual software is irrelevant for the discussion in this
report.
Given the current initial status of the YDS pilots, and the fact that the
concrete DMP will differ for each pilot (because of the data source types, the
access licenses, etc.), more detailed and precise guidelines will require
further analysis of the common situations as they are identified. This will be
ongoing work, initially on a case-by-case basis, which will be combined into a
YDS DMP best practices guide for the various pilots.
# Introduction
## Purpose and Scope
A Data Management Plan (DMP) is a formal project document which outlines the
handling of the data sources at the different project stages. The H2020
guidelines [22] provide an outline that must be addressed. The DMP covers how
data will be handled within the project frame, during the research and
development phase, but also details the intentions for the archiving and
availability of the data once the project has been completed [5,8]. As the
project evolves, the DMP needs to be updated to reflect changes in the data
situation as the understanding of the data sources becomes more concrete.
The YDS project aims to create a data ecosystem, bringing together state-of-
the-art data processing technology with recent content about governmental
budgetary and economic transparency, in a platform that helps European
citizens and, in particular, journalists create stories based on factual data.
The technological foundations of the data management platform being
established within YDS are such that it is intended to be multi-purpose and
domain-agnostic. Within YDS, this generic data management platform is piloted
using three closely related data domains: financial transparency in the Greek
and Irish governments, and governmental development aid. This core activity of
collecting and aggregating data from many different data sources (external to
the YDS project) means that metadata management of the used and produced
datasets is key. By applying the DCAT-AP [16] standard for dataset
descriptions and making these publicly available, the YDS DMP covers all four
key aspects of FAIR data management, as specified in [22], as integral parts
of the platform:
* Making data findable, including provisions for metadata;
* Making data openly accessible;
* Making data interoperable;
* Increasing data re-use (through clarifying licences).
It is important to note here that even though the YDS DMP strives to align
with the Open Research Data Pilot (ORD) guidance notes provided by the
European Commission 1 , the ORD pilot “applies primarily to the data needed
to validate the results presented in scientific publications”. As the
YourDataStories project neither outputs such data, nor is the original
publisher of data (but establishes an Open Data Repository), the consortium
must align with the associated licensing conditions assigned by the publisher.
This means that one DMP would not be enough to cover all YDS data sources.
Hence, this document establishes a shared framework for all data harvested and
published by the YDS consortium, while additional, data source specific
information is provided in the individual DMP instances, in the Annex of this
deliverable.
## Approach for Work Package and Relation to other Work Packages and
Deliverables
This deliverable provides the final link in the feedback loop between the DMP
and the Data Source
Assessment Methodology deliverables (D3.1 and D3.2). The second and final
version of the Data
Source Assessment methodology, D3.2, defines a continuous process and related
activities which ensure that relevant data (for open access and to be made
available publicly through existing open access repositories and services) is
identified and verified during the course of the project and beyond. It has,
therefore, proven crucial for completing the individual DMP instances (i.e.
per data source), which are provided in the Annex of this report.
Moreover, the overall project approach to data processing at the time of
writing is provided in D3.7 Data Harvesters v2.0, and both practical and
technical considerations with respect to data storage and sharing are given in
D3.10 Open Data Repository v2.0. Both deliverables are to be followed by their
final versions in month 32 of the project.
## Updates with respect to D2.7
In summary, this deliverable updates the previous version of the Data
Management Plan with regard to the following aspects:
* Long term data management and storage;
* Risk management with respect to stigmatization of companies and individuals;
* User-generated content;
* Destruction of data;
* Social media data management;
* Individual Data Management Plans, per data source;
* Individual (updated) DCAT-AP descriptions, per data source.
## Methodology and Structure of the Deliverable
The YDS DMP life-cycle, which elaborates the general conditions and the data
management methodology, is outlined in Section 2. As the YDS pilots frequently
handle manually created content (tables, reports, analyses …), as well as low-
quality data, the tooling often requires manual intervention and, hence, the
complete data integration process from source discovery to published
aggregated data cannot be completely automated. Therefore, an important aspect
of the YDS DMP is the general methodology. This final version of the YDS DMP
also takes into account the experiences of the pilot cases. The remainder of
this report is structured as follows:
* _Data Management Plan Checklist_ \- Section 3 provides a description of the basic information required about the datasets that are going to be used in the YDS project.
* _Metadata Management_ \- Each data source and each resulting dataset of the YDS aggregation process is described with meta-data. This meta-data can be used on the one hand for automating the YDS data ingestion process, but on the other hand also for external users to better understand the published data. This is further described in Section 4.
* _Access, sharing and re-use policies_ \- An important challenge in the YDS platform is the ambition to combine data from datasets having different usage and access policies. Interlinking data having payment requirements with data that is publicly and freely available impacts the technological and methodological approaches in order to implement the desired access policy. Section 0 outlines this further.
As the YDS pilots are now well defined, the questions relating to the Data
management and storage (long term) are now addressed in much more detail.
Section 6 provides answers to the questions which are considered common for
all pilots, whereas the pilot and data source specific questions are addressed
in the individual DCAT-AP descriptions and DMPs, in the Annex of this
deliverable.
# The YDS data lifecycle
The YDS platform is a Linked Data platform; therefore, the data ingested and
managed by the YDS platform follows the Linked Data life cycle [4]. The Linked
Data life cycle describes the technical processing steps which are possible to
create and manage a quality web of data. In order to smoothen the process best
practices are described to guide data contributors in their usage of the YDS
platform. This is further discussed in section 2.2 “The generic YDS data value
chain”, while the common best practices [12, 13] are quoted in section 2.3
Best Practices.
Prior to the Linked Data approach to the use of data, data management was
perceived as an act done by a single person or unit. Responsibilities
(involving completeness, consistency, coverage, etc. of the data) were bound
to the organization's duties. Today, with the use of the Internet and the
distribution of data sources, this has changed: data management is seen as a
living service within a larger ecosystem with many stakeholders across
internal and external organization borders. For instance, Accenture Technology
Vision 2014 indicated this as the third most important trend in 2014 [9].
## Stakeholders
For YDS, the key stakeholders have been identified which influence the data
management. Their main interaction routes are depicted in Figure 1: DMP Role
Interactions.
**Figure 1: DMP Role Interactions**
_**Data end-user(s)** : _
The data end-users make use of the aggregated datasets to create their own
story. In Deliverable D2.1 the main data end-user types for the YDS platform
are identified: media & data journalists, auditors, web developers, and
suppliers of business opportunities in public procurement, the civil society
and public institutions. The data end-users are the main drivers of the YDS
platform content: their need for data is the key driver for the content of the
YDS platform.
**_Data source publisher/owner(s):_ **
Represent the organization(s) which provide the data being integrated into the
YDS platform. For many data sources, especially those that are published as
Open Data by public bodies, the interaction between YDS and the data source
publisher/owner is limited to technical access to the data (a download of a
file, a registration to obtain an API key). As the YDS platform is now much
more mature, to ensure a quality service level to the data end-users, the
consortium has set up a more intense collaboration with some of the key data
sources, such as the International Aid Transparency Initiative.
_**Content business owner** : _
Is the person responsible for the content business objectives. The content
business owner makes sure that the necessary data sources are found in a
usable form and that the desired aggregations are being defined so as to
realize the aggregated enriched content for the supported YDS stories. For
each content domain / project pilot a business owner has been identified based
on the experience, familiarity and proximity to the content domain and related
data publishers. The content business owner is also the responsible party for
any questions related to a data source covered by a given pilot. For cross-
domain data, TF, as the work package leader, is the contact point. The
identified CBOs are listed below:
* Pilot 1: NCSR-D
* Pilot 2: TF
* Pilot 3: NUIG, NCSR-D
_**Data wrangler [10,11]** : _
This person acts as a facilitator in that they interact with all stakeholders,
but at the level of the integration of the source data into the platform. The
data wrangler ‘massages’ the data using the YDS platform to realize the
desired content. They must understand both the business terminology used in
the source data model(s) and the YDS target model, understand the end-user
objectives, and ensure that the mapping between the models is semantically
correct. The data wrangler is assisted by the YDS system administrator and YDS
platform developers to tackle the technical challenges, but their central
concern is the mapping of the data.
_**System administrator and platform developer** : _
Are responsible for the building and support of the YDS platform in a domain
agnostic way.
## The generic YDS data value chain
The complex process of a data value chain can be described using the following
stages:
**Figure 2: Data value chain stages**
* **Discover** : In today’s digitized world, there are many sources of data that help solve business problems that are both internal and external to organizations. Data sources need to be located and evaluated for cost, coverage, and quality. For YDS, the evaluation of the data sources is part of the data source assessment methodology (See Deliverable D3.2). The description and management of the resulting dataset meta-data is one of the main best practices used in the Linked Data community.
* **Ingest machine processable data** : The ingest pipeline is fundamental to enabling the reliable operation of entire data platforms. There are diverse file formats and network connections to consider, as well as considerations around frequency and volume. In order to facilitate the value creation stage (Integrate, analyze & enrich) the data has to be provided in, or turned into a machine processable format. In the YDS case, the preferred format is RDF [1].
* **Persist** : Cost-effective distributed storage offers many options for persisting data. The choice of format or database technology is often influenced by the nature of other stages in the value chain, especially analysis.
* **Integrate, analyze & enrich** : Much of the value in data can be found from combining a variety of data sources to find new insights. Integration is a nontrivial step which requires domain knowledge and technical knowhow. Precisely by using a Linked Data approach with a shared ontology, the integration process is facilitated in YDS. Whereas the other stages have a high potential for automation, to a level where humans are no longer involved, this stage is driven by human interest in the data. New insights and better data interconnectivity are created and managed by a growing number of data analytical tools and platforms.
* **Expose** : Exposing the results of analytics, and the data itself, to the organization in a way that makes them useful for value creation is the final step in deriving value from data. The structure of the stages is based on the vision of the IBM Big Data & Analytics group on the data value chain [21].
When contributing a new data source to the YDS platform the stages are roughly
followed from left to right. In practice the activities are, however, more
distributed in order to keep the platform and the data it provides in the
desired state. Indeed, data that is not actively nursed quickly becomes
outdated. More and more imperfections will show up, to the point that data
end-users no longer consider the data valuable. Taking care of the YDS
platform content is, hence, a constant activity. From a technical perspective,
this work is supported by the tooling available during the Integrate, Analyze
and Enrich phase. It is similar to the work of creating newly added value, but
with the objective of improving the overall data quality (coherency,
completeness, etc.).
A further point to consider is that data-based applications also have a
tendency to generate new requirements based on insights gained when studying
the data (this forms a loop which continues as understanding of the data
increases 2 , and this is shown in Figure 3: Linked Data ETL Process). This
depends heavily on what the data is intended to allow or what it is intended
to be used for (search for understanding, support of a particular story,
tracking of an ongoing situation, etc.).
In the following sections, the above data value chain stages are made more
concrete.
### Discover
The **content business owners** are the main actors in this stage. Using the
data source assessment methodology, relevant data sources for their content
domain are being selected to be integrated.
An important outcome of the data source assessment is the creation of the
meta-data description of the selected datasets. In section 4, the meta-data
vocabulary that is being used is described (DCAT-AP). The expectation raised
in creating the meta-data is that the data sources are well described (what is
the data, which are the usage conditions, what are the access rights, etc.),
but experience has shown that collecting this information represents a non-
trivial effort because it is often not directly available.
### Ingest machine processable data
The selected datasets are prepared by a _data wrangler_ so that they can be
ingested in the YDS platform. The data wrangler hooks up the right data input
stream, for instance a static file, a data feed or an API, into the YDS
platform. During this work the data is prepared for machine processing. For
static files such as CSVs in particular, additional contextual information
often needs to be added in order to make the semantics explicit. Without this
preparation the conversion to RDF results in a technical reflection of the
input, yielding more complex transformation rules in the Integrate, analyze
and enrich stage.
### Persist
Persistence of the data is de facto an activity that happens throughout the
whole data management process. However, when contributing a new data source to
the platform, the first moment data persistence is explicitly handled is when
the first steps have been taken to ingest data into the YDS platform.
Since the YDS platform is about integrating, analyzing and enriching data from
different sources _external_ to the YDS partners, persistence of the source
information is not only an internal activity. It requires interaction between
the content business owner and the data source publisher/owner to guarantee
that, during the lifetime of the applications built on top of the data, the
source data stays available. Only careful follow-up and continuous interaction
with the data source publishers/owners will create a trustworthy situation.
Technically, this is reflected in the management of the source (meta)data
activity.
Despite sufficient attention and follow-up, it will occur that data sources
become obsolete, are temporarily unavailable (e.g. due to maintenance) or
completely disappear (e.g. the organization dissolves). Many of these cases
are addressable to a certain extent by implementing data persistence
strategies such as:

* _Keeping local copies_ : the explicit activity of copying data from one location to another. The most frequent case is copying the data from the governmental data portal to the YDS platform.
* _Caching_ : a technical strategy whose main intention is to enhance data locality so that the processing is smoother. It may also act as a cushion to reduce the effects of temporary data unavailability.
From the perspective of the YDS data user, _archiving & highly available data
storage_ strategies are required to address the availability of the outcome
of the YDS platform. This usually goes hand in hand with a related, yet
orthogonal, activity, namely the application of a dataset versioning strategy.
Introducing dataset versioning provides clear boundaries along which data
archiving has to be applied (a minimal versioning sketch is given below).
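One simple way to realize such a versioning strategy is to keep each ingested version in its own named graph, as in the hedged rdflib sketch below; the version IRIs and the stand-in data are illustrative.

```python
# Dataset versioning sketch: one named graph per ingested version.
from rdflib import Dataset, Literal, Namespace

YDS = Namespace("http://example.org/yds/")  # assumed namespace

ds = Dataset()
for version in ("2016-06-01", "2016-07-01"):
    # Each version gets its own named graph, a clear unit for archiving.
    graph = ds.graph(YDS[f"spending/version/{version}"])
    graph.add((YDS["project/P-001"], YDS.note,
               Literal(f"snapshot of {version}")))  # stand-in data

# Named graphs can now be archived, compared or dropped independently.
for g in ds.graphs():
    print(g.identifier, len(g))
```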
### Integrate, analyze and enrich
In this stage, the actual value creation is done. The integration of data
sources, their analysis and the analysis of the aggregated data and the
overall content enrichment is realized by a wide variety of activities. In
[4], the Linked Data life cycle is described: a comprehensive overview of all
possible activities applicable to Linked Data. The Linked Data life cycle is
shown in Figure 3: Linked Data ETL Process. (Note: Some activities of the
Linked Data life cycle are also part of other phases like ingestion,
persistence and expose.)
**Figure 3: Linked Data ETL Process**
Start reading from the bottom-left stage, called “Extraction”, and proceed
clockwise.
As most data is not natively available as RDF, extraction tooling provides the
necessary means to turn other formats into RDF. The resulting RDF is then
stored in an RDF storage system, available to be queried using SPARQL. Native
RDF authoring tools and Semantic Wikis then allow the data to be manually
updated to adjust to the desired situation. The interlinking and data fusion
tools are unique tools in the world of data management: Linked Data (or a data
format with similar capabilities as RDF) is the enabler of this process, in
which data elements are interlinked with each other without losing their own
identity. It is the interlinking, and the ability to use entities from other
public Linked Data sources, that creates the web of data. The web of data is a
distributed knowledge graph across organizations, in contrast to the setup of
a large data warehouse. The following three stages are about further improving
the data: when data is interlinked with other external sources, new knowledge
can be derived and thus new enrichments may appear. Data is, of course, not a
solid entity but evolves over time: therefore, quality control and evolution
are monitored. To conclude the tour, the data is published. RDF is primarily a
**data publication format** . This is indicated by the vast amount of tooling
that provides the search, browsing and exploration of Linked Data.
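As a small, self-contained example of an enrichment activity, the sketch below uses a SPARQL CONSTRUCT query over an in-memory graph to flag large projects; the vocabulary, the flag property and the threshold are all assumptions.

```python
# Enrichment sketch: derive new triples from existing ones with CONSTRUCT.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

YDS = Namespace("http://example.org/yds/")

g = Graph()
g.add((YDS["project/P-001"], YDS.amount,
       Literal("125000.50", datatype=XSD.decimal)))

# Projects above an assumed threshold receive a (hypothetical)
# yds:largeProject marker.
enrichment = g.query("""
    PREFIX yds: <http://example.org/yds/>
    CONSTRUCT { ?p yds:largeProject true . }
    WHERE     { ?p yds:amount ?a . FILTER(?a > 100000) }
""")
for triple in enrichment:
    g.add(triple)

print(g.serialize(format="turtle"))
```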
### Expose
The last stage is about the interaction with the YDS data users. The YDS
platform is a Linked Data platform, and hence the outcome of the data
integration, analysis and enrichment is made available according to the
common practices for Linked Open Data:
* A meta-data description about the exposed datasets
* A SPARQL endpoint containing the meta-data and the resulting datasets
* A public Linked Data interface for those entities which are dereferenceable.
Additionally, the YDS platform supports dedicated public API interfaces to
support application development (such as visualizations), discussed in D3.10
Open Data Repository V2.0.
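For dereferenceable entities, a client can request RDF directly via HTTP content negotiation, as in the short sketch below; DBpedia is used here only as a well-known dereferenceable example.

```python
# Dereferencing a Linked Data entity: the same URI serves RDF to machines
# via content negotiation.
import requests

# Any dereferenceable Linked Data IRI works; DBpedia is a public example.
entity = "http://dbpedia.org/resource/Athens"
resp = requests.get(entity,
                    headers={"Accept": "text/turtle"},
                    timeout=30,
                    allow_redirects=True)
print(resp.status_code, resp.headers.get("Content-Type"))
print(resp.text[:300])  # the first lines of the Turtle description
```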
## Best Practices
The YDS platform is, first and foremost, a Linked Data platform, and in this
section, the relevant best practices for publishing Linked Data are described
[12, 13]. The 10 steps described in [13] are an alternative formulation of
these stages in the context of publishing a standalone dataset. Nevertheless,
these steps formulate major actions in the creation of Linked Data content for
the YDS platform concisely (and that is why they are quoted here):
1. _STEP #1 PREPARE STAKEHOLDERS:_
_Prepare stakeholders by explaining the process of creating and maintaining
Linked Open Data._
2. _STEP #2 SELECT A DATASET:_
_Select a dataset that provides benefit to others for reuse._
3. _STEP #3 MODEL THE DATA:_
_Modeling Linked Data involves representing data objects and how they are
related in an application-independent way._
4. _STEP #4 SPECIFY AN APPROPRIATE LICENSE:_
_Specify an appropriate open data license. Data reuse is more likely to occur
when there is a clear statement about the origin, ownership and terms related
to the use of the published data._
5. _STEP #5 GOOD URIs FOR LINKED DATA:_
_The core of Linked Data is a well-considered URI naming strategy and
implementation plan, based on HTTP URIs. Consideration for naming objects,
multilingual support, data change over time and persistence strategy are the
building blocks for useful Linked Data._
6. _STEP #6 USE STANDARD VOCABULARIES:_
_Describe objects with previously defined vocabularies whenever possible.
Extend standard vocabularies where necessary, and create vocabularies (only
when required) that follow best practices whenever possible._
7. _STEP #7 CONVERT DATA:_
_Convert data to a Linked Data representation. This is typically done by
script or other automated processes._
8. _STEP #8 PROVIDE MACHINE ACCESS TO DATA:_
_Provide various ways for search engines and other automated processes to
access data using standard Web mechanisms._
9. _STEP #9 ANNOUNCE NEW DATA SETS:_
_Remember to announce new data sets on an authoritative domain. Importantly,
remember that as a Linked Open Data publisher, an implicit social contract is
in effect._
10. _STEP #10 RECOGNIZE THE SOCIAL CONTRACT:_
_Recognize your responsibility in maintaining data once it is published.
Ensure that the dataset(s) remain available where your organization says it
will be and is maintained over time._
# Data Management Plan Checklist
Each YDS pilot handles content within the Linked Open Economy domain. The
following information is recorded by the _**content business owner** _ of each
pilot. These questions provide the starting point for using the data sources –
the aim being to find any data usage issues earlier, rather than later 3 .
This basic data information, information about the data or meta-data, require
managing and will be further discussed in section 4. The questions also serve
as a checklist, similar to that provided by the UK’s Digital Curation
Center[5] or the template provided by the Guidelines on FAIR Data Management
in Horizon 2020 [24], and the answers serve as direct input for the individual
DMPs, which are also provided in a machine-readable form as DCAT-AP
descriptions (section 4).
**Table 1: Data Management Plan Checklist**
<table>
<tr>
<th>
DMP aspect
</th>
<th>
Questions
</th> </tr>
<tr>
<td>
**Administrative Data**
</td>
<td>
* How will the dataset be identified? (A Linked Data resource URI)
* What is the title of the dataset?
* What is the dataset about?
* What is the origin of the data in the dataset?
* Who is the data publisher?
* Who is the contact point?
* When was the data last modified?
</td> </tr>
<tr>
<td>
**Data Source**
</td>
<td>
* Where will the data be acquired?
* What documentation is available for the data source models, attributes etc.?
* For how long will the data be available?
* What is the relationship between the data collected and existing data?
</td> </tr>
<tr>
<td>
**Data formats**
</td>
<td>
* Describe the file formats that will be used, justify those formats,
* Describe the naming conventions used to identify the files (persistent, date based, etc.)
</td> </tr>
<tr>
<td>
**Data Harvesting and Collection**
</td>
<td>
* How will the data be acquired?
* How often will the data be acquired?
* What are the tools and/or software that will be used?
* How will the data collected be combined with existing data?
* How will the data collection procedures/harvesting be documented?
</td> </tr>
<tr>
<td>
**Post Collection Data Processing**
</td>
<td>
* How is the data to be processed?
* Basic information about software used,
* Are there any significant algorithms or data transformations used (or to be used)?
</td> </tr>
<tr>
<td>
**Data Quality Assurance**
</td>
<td>
* Identify the quality assurance & quality control measures that will be taken during sample collection, analysis, and processing 4 ,
* What will be the data validation requirements? Are there any already in place?
* Are there any community standards you can re-use?
</td> </tr>
<tr>
<td>
**Short-term Data Management**
</td>
<td>
How will the data be managed in the short-term? Consider the following:
* Version control for files,
* Backing up data,
* Security & protection of data and data products,
* Who will be responsible for management (Data ownership)?
</td> </tr>
<tr>
<td>
**Long-term Data Management**
</td>
<td>
See Section 6 for more details
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* How will the data be shared with the public?
* Are there any restrictions with respect to the dataset or parts of it to be shared?
</td> </tr>
<tr>
<td>
**Ethics and Legal Compliance**
</td>
<td>
* How will any ethical issues, should they arise, be managed?
  * Have you gained consent for data preservation and sharing?
  * How will you protect the identity of participants if required?
* How will sensitive data be handled to ensure it is stored and transferred securely?
* What are the licenses required to access and use the data?
* How will any copyright and Intellectual Property Rights (IPR) issues, should they arise, be managed?
</td> </tr> </table>
**Note:** The full checklist and the related answers, per dataset, are
provided in the Annex. The answers to some of the above questions, such as
Ethics and Legal Compliance (to be discussed in section 7), will be provided
in the sections below, and will serve as default input for the individual DMP
instances.
# Meta-data management
The data collected and aggregated in the YDS platform is distributed to the
public or used in another aggregation process. A coherent set of data is
called a dataset. Distributing the dataset requires describing the dataset
using meta-data properties.
Within Europe, an application profile of the W3C standard DCAT [15] called
DCAT-AP [16] is being used to manage data catalogues. This standard,
which is also a **European Commission recommendation**, enables dataset
descriptions in Europe to be exchanged in a coherent and harmonized context.
Since D2.7 Data Management Plan V1.0, i.e. M6 of the project, DCAT-AP has
undergone a revision to better fit the European needs. At the time of writing,
the current version of DCAT-AP is 1.1.
In addition to this motivation, YDS has extensive in-house knowledge and
experience: the YDS partners NUIG and TenForce are organizations that played
key roles in establishing these standards and ensuring their success. NUIG
actively supported the creation of DCAT as co-editor of the standardization
process, and it has continued sharing its expertise in the development of the
DCAT application profile. TenForce led and participates in several
projects that contributed to the technological application of the standard
DCAT and the creation of DCAT-AP: LOD2, the European Open Data Portal, and Open
Data Support (in which TenForce established the first implementation of DCAT-
AP). Recently, TenForce supported the revision of the DCAT-AP process, and it
is responsible for the first study on creating a variant for statistical data,
STAT DCAT-AP.
Building upon DCAT-AP makes the YDS platform compliant with the European
(Open) Data Portal ecosystem. Data being made available through the YDS
platform can be picked up and distributed to the whole of Europe. On the other
hand, the European (Open) Data Portal ecosystem can provide access to data
that has not yet been identified as relevant. For instance, the Open Data
Support project, which finished in December 2015, was handed over to the
European Data Portal [17], which offers access to more than 640,000 dataset
descriptions from all over Europe.
The core entities are Dataset and Distribution. The Dataset describes the data
and its usage conditions. Each Dataset has one or more Distributions, the
actual physical forms of the Dataset. A collection of Datasets is managed by a
Data Catalogue. The details are shown in Figure 4: DCAT-AP Overview.
As the DCAT-AP vocabulary is a Linked Data vocabulary, it fits naturally with
the technological choices of the YDS platform. The vocabulary covers the
majority of the YDS data cataloguing needs. Any gaps or more specific needs
are covered by the individual DMP instances.
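To illustrate what such a DCAT-AP entry looks like in practice, the following minimal sketch builds a catalogue with one dataset and one distribution using rdflib. All URIs and literal values are placeholders, not an actual YDS dataset record:

```python
# Minimal sketch of a DCAT-AP style description: one Catalogue containing one
# Dataset with one Distribution. All URIs and literals are placeholders.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import DCAT, DCTERMS

EX = Namespace("http://example.org/catalog/")  # placeholder namespace

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

catalog = EX["catalogue"]
dataset = EX["dataset/budget-2016"]
dist = EX["distribution/budget-2016-csv"]

g.add((catalog, RDF.type, DCAT.Catalog))
g.add((catalog, DCAT.dataset, dataset))

g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Government budget 2016", lang="en")))
g.add((dataset, DCTERMS.publisher, URIRef("http://example.org/org/publisher")))
g.add((dataset, DCTERMS.license, URIRef("http://creativecommons.org/licenses/by/4.0/")))
g.add((dataset, DCAT.distribution, dist))

g.add((dist, RDF.type, DCAT.Distribution))
g.add((dist, DCAT.downloadURL, URIRef("http://example.org/files/budget-2016.csv")))
g.add((dist, DCTERMS.format, Literal("CSV")))

print(g.serialize(format="turtle"))
```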
The adoption of DCAT-AP also makes existing tooling available. There is
EDCAT [18], an API layer to manage data catalogues, which has evolved into the
JSON API compliant interface of the YDS Open Data Repository (first MuDCAT,
then mu-cl-resources), a web interface [19], a validator [23], and the ODIP
platform [17] that harvests open data portals (based on an earlier version of
UnifiedViews [20], the central component of the YDS platform).
**Figure 4: DCAT-AP Overview**
# Access, sharing and re-use policies
For a data platform such as YDS, the access, usage and dissemination
conditions of the used source data determine the possible access, usage and
dissemination conditions of the newly created aggregated data. Despite the
sizeable amount of public open data that is available and imported, it is
likely that some source data will be subject to restrictions. When combining
open data with restricted data, it cannot be taken for granted that the
resulting new data is open (or restricted). In such mixed licensing
situations, decisions will need to be made by the content business owner and
the data source owners concerning the accessibility of the merged data. For
example, it may be decided that some aggregated data is only accessible for a
selected audience (subscription based, registration based, payment required or
not, etc.).
This context poses not only a business challenge, but also a technological
challenge. Some common practices when moving data from one source to another
may not be acceptable anymore. For example, suppose one data source A
describes the overall spending of a government by project, and another data
source B describes the governmental projects and their contractors. The
aggregated data A+B thus provides insight into how the budget was spent by the
contractors. Merging the data into one aggregation usually makes it impossible
to determine where the individual data elements came from. This is not
problematic when the aggregated data is subject to the same or more
restrictive access, usage and dissemination conditions as the source data
themselves.
More complex and problematic is the situation where the aggregations are
distributed through channels to audiences that do not satisfy the conditions
stipulated by one of the sources. To prevent incorrect usage, managing the
access, usage and dissemination conditions of the newly created aggregations
is important. That information forms the cornerstone of the correct
implementation of the required access, usage and dissemination policies.
As shown above, this aspect of data management is non-trivial and remains the
subject of ongoing discussion; see the outcomes of the LAPSI project [14].
Therefore, YDS applies the following strategy:
* The content business owner ensures that for each data source the access, sharing and reuse policy information is known.
* The content business owner decides whether the outcome of the integration & aggregation process is open (in all meanings = public, reusable, free of charge) or non-public (some restrictions apply).
* The data wranglers and system developers set up a data aggregation flow and data publication exposure according to the specification by the content business owner.
* The dataset meta-data of the created outcome is always public. This ensures transparency of the knowledge that is gathered within the YDS platform. The openness of the meta-data repository yields transparency.
As the above situations already indicate, the situations that might occur can
be very complex. Our experiences gained in the first 24 months of the project
have shown that even open data sources can have conflicting licenses.
Therefore, our setup harvests and redistributes only open data that is free
for reuse, and we leave the licensing information intact (along with
appropriate provenance information), explicitly linked to each dataset. Since
each dataset is accompanied by a DCAT-AP entry, the base usage conditions are
registered. In doing this, the DCAT-AP record for a dataset becomes the key
reference point for decision making about the dataset.
## Data sharing
All collected data is shared via the YDS Open Data Repository (D3.10) as
**findable, accessible, interoperable and reusable (FAIR)** . The Open Data
Repository provides machine-readable means for accessing all YDS data through
multiple channels, along with the accompanying DCAT-AP descriptions. The DCAT-
AP descriptions allow for easy discovery and automatic harvesting by third
parties supporting the European application profile, such as the European Data
Portal 5 . The technical and practical considerations and the implementation
of the data endpoints used to disseminate and share the YDS data with the
public are described in the D3.10 Open Data Repository v2.0 deliverable (to be
followed by its final update in M32).
# Long term data management and storage
The questions to be addressed concerning long-term storage are not new:
environmental datasets, medical testing datasets, and component test results
relating to safety all have to be stored for a long time (for some, the
retention period is defined as part of a legal requirement; for others, such
as datasets underlying published academic results, long-term storage is simply
expected). These issues are complicated when the data is made available over
the internet, in that the data could be merged with other data coming from
other sources, so the definition of a meaningful long term becomes
problematic. So, each content business owner needs to consider:
* What is the volume of the data to be maintained?
* What is considered long-term (2-3 years, 10 years, etc.)?
* Identification of archive for long-term preservation of YDS data.
* Which datasets will need to be preserved in the archive?
* What about relevant dependent datasets? Snapshots of external datasets?
* Preserved datasets will need to be updated and this means a data preservation policy and process will need to be defined (and operational).
A central consideration for any long-term DMP is the cost of preserving the
data and what will happen after the completion of the project. Preservation
costs may be considerable, depending on the exploitation of the project after
its finalization. Examples include:
* Personnel time for data preparation, management, documentation, and preservation,
* Hardware and/or software needed for data management, backing up, security, documentation, and preservation,
* Costs associated with submitting the data to an archive,
* Costs of maintaining the physical backup copies (disks age and need to be replaced).
## Practical considerations
Below, we outline and address the practical considerations with respect to the
above questions and long-term data management and storage.
### Defining “long term”
From the perspective of a YDS content business owner, 4 years beyond the
lifespan of the project can be considered “long term”. From the perspective of
a server administrator, this is rather acceptable, and hence the associated
costs of storage boil down to consumed energy and repairs due to possible,
though unlikely, disk failure.
### Data to be preserved
All datasets made publicly available through the YDS Open Data Repository are
to be stored long-term (i.e. all datasets with DCAT-AP descriptions).
Additionally, supporting datasets, such as relevant SKOS taxonomies, will also
be preserved during this period. User feedback received during the course of
the project, e.g. user evaluation forms, will be analyzed and reported in the
respective deliverables. Therefore, there is no need to preserve the original
forms after the end of the project. Social media content fetched by YDS
components will be preserved long-term in the form of links to the original
piece of content, e.g. links to tweets. In this way, no original social media
content is stored either during the project or after its end.
## Technical considerations
Within WP5, a decision was taken to reorganize the YDS server in order to
support development activities, i.e. to accommodate a development Virtual
Machine (VM). The current YDS server setup is as follows:
**Table 2: YDS server setup**
<table>
<tr>
<th>
</th>
<th>
Development
</th>
<th>
Production
</th> </tr>
<tr>
<td>
**CPU cores**
</td>
<td>
7
</td>
<td>
24
</td> </tr>
<tr>
<td>
**Memory**
</td>
<td>
8 GB
</td>
<td>
37 GB
</td> </tr>
<tr>
<td>
**Disk space**
</td>
<td>
345 GB
</td>
<td>
2.5 TB
</td> </tr> </table>
The development VM was initially a clone of the production one, meaning both
machines host Unified Views and OpenLink Virtuoso 7 instances, but the actual
amount of data at any given point in time might vary. During the initial
harvesting and transformation procedures, all data is stored on the
development server. As the data matures, it is transferred to the production
server.
### Data volume
The triple store on either of the servers is not expected to consume more than
50 GB of disk space (backups included). Considering the graph nature of the
database, as the amount of data grows, so does its complexity, which is why
all additions to the Open Data Repository are considered with care and server
performance in mind.
### Backups
Even though the data on the development server is never made public, both
servers have failsafe mechanisms in place for both the data and the associated
harvesting processes. The data on both servers is backed up once a week to
local disk, i.e. every Tuesday at 4 AM (outside peak hours). Moreover,
the harvesters are backed up on GitHub, in a dedicated repository 6 ,
ensuring fast recovery even in the case of data loss.
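As a rough illustration of such a weekly local-disk backup (the actual YDS backup scripts are not shown in this deliverable; all paths below are placeholders), a scheduled backup job with a small retention window could look like this:

```python
# Rough sketch of a weekly backup job for a triple store's data directory.
# Paths are placeholders; in practice this would be triggered by cron
# (e.g. every Tuesday at 4 AM) and could call the store's own dump facility.
import tarfile
from datetime import datetime
from pathlib import Path

DATA_DIR = Path("/srv/triplestore/data")  # placeholder data directory
BACKUP_DIR = Path("/srv/backups")         # placeholder backup location
KEEP = 4                                  # retain roughly a month of weekly backups

def backup() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    archive = BACKUP_DIR / f"triplestore-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(DATA_DIR, arcname="data")
    # Drop the oldest archives beyond the retention window.
    archives = sorted(BACKUP_DIR.glob("triplestore-*.tar.gz"))
    for old in archives[:-KEEP]:
        old.unlink()
    return archive

if __name__ == "__main__":
    print("Backup written to", backup())
```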
# Risk management
In addition to all of the above discussed issues, a robust approach to data
storage and management needs to implement a range of practices to ensure data
is stored securely, particularly if it has been collected from human
participants. This means foreseeing the “worst-case scenario”, considering
potential problems that could occur and how to avoid these, or at least
minimize the likelihood that they will happen.
## Personal data protection
Even though the project will avoid collecting such data unless deemed
necessary, encountering it is inevitable, and the necessary measures must be
foreseen to avoid unauthorized leaks of personal information. Failing to
address this properly could translate to breaching Data Protection legislation
and potentially result in reputation damage, financial repercussions, and
legal action. We foresee three potential sources of personal data in YDS.
### Platform users
The YourDataStories platform will provide the users with the possibility to
create their own accounts and data spaces. This means that even a minimum set
of essential user information might contain sensitive data (e.g. an e-mail
address).
### Social media
Any user data on the social web is by default deemed personal. For the
YourDataStories project to deliver on the social-to-semantic and semantic-to-
social promise, without endangering user privacy, any information obtained
from the social media must be handled with care.
### Evaluations with users
Even though it is undesirable, for some of the activities to be carried out by
the YDS project, such as platform evaluation via focus groups, it may be
necessary to collect basic personal data (e.g. full name, contact details,
background).
**Table 3: Personal data risk mitigation strategies**
<table>
<tr>
<th>
Risk source
</th>
<th>
Mitigation strategy
</th> </tr>
<tr>
<td>
**Platform users**
</td>
<td>
To ensure none of the sensitive data is released to third parties, the
platform will leverage access control policies on an isolated, secure server,
providing only authorized users (data owners) and the YDS administrator with
access to such data. Furthermore, the user access credentials (passwords) will
be encrypted.
</td> </tr>
<tr>
<td>
**Social media**
</td>
<td>
The YDS platform will never integrate any sensitive information collected from
the social networks in its data sets/streams permanently. Instead, the YDS
Data Layer will store and publish only anonymized information, or seek to
remove identifiable information at the earliest opportunity.
</td> </tr>
<tr>
<td>
**Evaluations with users**
</td>
<td>
Such data will be protected in compliance with the EU's Directive 95/46/EC
(the Data Protection Directive) 7 , aiming at protecting personal data.
</td> </tr>
<tr>
<td>
</td>
<td>
National legislations applicable to the project will also be strictly
followed, such as laws 2472/1997 Protection of Individuals with regard to the
Processing of Personal Data 8 , and 3471/2006 Protection of personal data
and privacy in the electronic telecommunications sector (and amendment of law
2472/1997) 9 in Greece.
Any data collection by the project partners will be done only after providing
the data subjects with all relevant information, and after obtaining signed
informed consent forms. All paper consent forms that contain personal
information will be stored in a secure, locked cabinet within the responsible
partner’s premises.
</td> </tr> </table>
## Undesirable disclosure of information due to linking
Being a Linked Open Data project, YDS encodes all publishable information in
the form of an RDF graph. Although such an approach gives the platform a clear
edge over its potential competitors in the market, its very nature bears a
certain degree of risk when it comes to unwanted disclosure of information due
to linking. This applies both to personal information and to other private
information, whether due to its nature or to licensing limitations.
## Linking by reference
An important advantage of LOD as a data integration technology, even in
enterprise use cases, is that it does not require physical integration.
Instead, it employs the _linking by reference_ principle, where it relies on
the resource identifiers (URIs) to _point_ to the data entry that is to be
integrated. This means that a public dataset can point to a resource in a
private one without disclosing any accompanying information.
Nevertheless, in YDS, special attention is paid to what data is triplified in
the first place. The data harvesters will collect, transform and store only
information which is already publicly available, with the exception of
social media data which, as discussed above, will be anonymized so as to make
re-identification of individuals impossible.
**Note:** If there are concerns that certain data cannot be fully anonymized,
it will be made available only on condition that end users apply for access
and sign a Data Transfer Agreement indicating that they will not share the
data or attempt to re-identify individuals, assuming that no licenses are
broken by the YDS consortium in making such data available in the first place.
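As a simple illustration of the anonymization described above (a sketch only, with hypothetical field names; the actual YDS anonymization pipeline may differ), identifying fields can be dropped or replaced with salted one-way hashes before storage:

```python
# Minimal sketch: anonymizing a harvested social media item before storage.
# Field names are hypothetical; a real pipeline may also scrub mentions,
# names and other identifiers from the text itself.
import hashlib

SALT = b"project-secret-salt"  # placeholder; keep out of version control

def pseudonym(user_id: str) -> str:
    """Replace a user identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()[:16]

def anonymize(item: dict) -> dict:
    """Keep only non-identifying fields, pseudonymizing the author."""
    return {
        "author": pseudonym(item["user_id"]),
        "text": item["text"],          # assumes text itself is scrubbed separately
        "created_at": item["created_at"],
        "source_link": item["link"],   # link to the original piece of content
    }

raw = {"user_id": "12345", "text": "Example post",
       "created_at": "2016-05-01", "link": "https://example.org/post/1"}
print(anonymize(raw))
```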
## Risk of accidental stigmatization
All data with respect to organizations is published as-is, meaning the risk of
accidental stigmatization is, by default, inherited from the originally
published data (a provenance trail back to the original publisher is always
provided in the accompanying DCAT-AP description). However, during automatic,
supervised, and even manual data reconciliation, when interlinking with other
datasets, such as the OpenCorporates 10 database, there is a risk of false
positives. For this reason, all links between matching entities are expressed
via the skos:closeMatch property, which does not assert semantic equivalence.
Moreover, the front end provides a disclaimer clarifying the nature of a link
whenever such information is present.
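For illustration, the sketch below records such a non-committal link between a locally known organization and a candidate match in an external database, using rdflib and placeholder URIs (this is not the actual YDS reconciliation code):

```python
# Minimal sketch: linking a reconciled entity to an external candidate match
# with skos:closeMatch rather than owl:sameAs, so no semantic equivalence is
# asserted. URIs are placeholders.
from rdflib import Graph, URIRef
from rdflib.namespace import SKOS

g = Graph()
g.bind("skos", SKOS)

local_org = URIRef("http://example.org/org/acme-ltd")
candidate = URIRef("https://opencorporates.com/companies/gb/00000001")

# closeMatch: the two resources are similar enough to link, but we do not
# claim they denote exactly the same real-world entity.
g.add((local_org, SKOS.closeMatch, candidate))

print(g.serialize(format="turtle"))
```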
## User-generated stories
It should not be forgotten that the YDS platform allows users to create their
own stories about topics of their own choosing. Moreover, such stories may
contain data from external sources. These stories are stored on the YDS
platform and can be made public via YDS. However, it is worth noting that
the contents of such stories, as well as any data not originating from YDS,
are the sole responsibility of the user, in the same way as a social media
platform cannot take ultimate responsibility for what a member writes, save
to remove it when a valid objection is made.
## Destruction of data
As sensitive, non-encrypted, digital data is never stored on disk, and
physically collected personal data (if any, e.g. during evaluations with
users) is stored securely, as explained in Table 3, data destruction is not
foreseen until the expiry of the long term storage period, as defined in
Section 6. Upon expiry, all data collected by YDS will be destroyed.
# Conclusions
Applying and setting up a data management platform requires not only the
selection of the right technological components but also the application of a
number of best practice data management guidelines [12, 13], as given in
Section 2.3. Those best practices guide users towards the creation of
sustainable data sources in the web of data. Two of these best practices have
led to a concentration on two focal areas that required special attention:
* Dataset meta-data management, both for the sources and the published datasets, and
* Data access considerations, sharing possibilities and re-use policies and licenses.
In all this, the DCAT-AP dataset descriptions are a key requirement. Having
the dataset descriptions in a machine readable format creates the potential
for effective traceability, status monitoring and sharing with the YDS target
audiences. Each DCAT-AP entry acts as a **machine-readable DMP instance** for
the dataset it describes, whereas human-readable DMPs are given in the form
of DMP checklists (in the Annex of this report).
The core principles of the YDS project DMP have been presented from data
source discovery up to publishing of the aggregated content. The best
practices for publishing Linked Data – which is followed by YDS – describe a
data management plan for publication and use of high quality data published by
governments around the world using Linked Data. Via these best practices the
experiences of the Linked Open Data community are taken into account in the
project.
The technological foundations of the YDS platform cleanly separate data
semantics, data representation and software development. Linked Data has made
the platform flexible enough to implement the technological and data
provenance support required by the pilots as basic functionality. This ability
is unique in the data management technology space. Even though the deliverable
touches upon the topic of tooling, it must be noted that the actual software
is irrelevant for the discussion in this report.
This deliverable also extends the original DMP with respect to a number of
additional aspects. Now that the data, the model, and the platform are much
more mature, the DMP looks at the long term data management and storage
questions in more detail, and addresses them so as to provide a common
framework for all data collected and published by the project. In response to
new experiences and risks which arose in the second year of the project, we
also address additional risk management concerns, such as the management of
data collected from social media, the destruction of data, as well as the risk
of accidental stigmatization of organizations and individuals (as a
consequence of automatic, supervised or manual interlinking).
# Introduction
This document outlines the principles and processes for data collection,
annotation, analysis and distribution, as well as the storage, security and
final destruction of data within the Industrial Innovation in Transition (IIT)
project. The procedures will be adopted by all project partners and third
parties throughout the project in order to ensure that all project related
data is well-managed according to contractual obligations as well as
applicable legislation, both during and after the project.
As the IIT project has opted to participate in the Open Data Pilot, this
document also details the practices and solutions regarding the storage and
re-usability of the research data, which will be made accessible for other
researchers and the public for further use and analyses. The Grant Agreement
of the IIT project as an Open Data Pilot participant obligates the project to:
a) deposit [digital research data generated in the project] in a research data
repository and take measures to make it possible for third parties to access,
mine, exploit, reproduce and disseminate — free of charge for any user — the
following: (i) the data, including associated metadata, needed to validate the
results presented in scientific publications as soon as possible; (ii) other
data, including associated metadata, as specified and within the deadlines
laid down in the 'data management plan', i.e. this document. The Grant
Agreement contains an option to discard the obligation to deposit a part of
research data in the case where the achievement of the action's main
objective, described in Annex 1 of the Grant Agreement, would be jeopardised.
In such case, the Data Management Plan must contain the reasons for not giving
access.
As the obligation to deposit research data in a databank does not change the
obligation to protect results, take care of confidentiality and security
obligations, or the obligations to protect personal data, the Data Management
Plan addresses these topics. This document details how the seemingly
contradictory commitments to share and to protect are implemented within the
project.
The Data Management Plan has, on the other hand, also served as a tool for
agreeing on the data processing of the IIT project consortium. The production
of the Data Management Plan has helped the consortium to identify situations
where practices were thought to be agreed upon, and where a common
understanding was thought to have been achieved, but where in fact none
existed. For that reason, the process of producing a Data Management Plan can
be recommended for other projects as well.
Documents related to the Data Management Plan are the IIT project Grant
Agreement, the Consortium Agreement and the Project Handbook. Some of the
deliverables also contain information which links to the Data Management Plan.
The relationships are described below:
<table>
<tr>
<th>
Related document
</th>
<th>
Relationship to the Data Management Plan
</th> </tr>
<tr>
<td>
The Grant Agreement
</td>
<td>
* Article 27 details the obligation to protect results
* Article 36 details confidentiality obligations
* Article 37 details security obligations
* Article 39 details obligations to protect personal data
* Annex 1, Chapter 1.4 details the ethics requirements, which in the case of the IIT project link to the obligation to protect personal data, the
</td> </tr>
<tr>
<td>
</td>
<td>
obligation to get informed consent from persons participating in the research
and the obligation to get the ethical approvals for the collection of personal
data from relevant sources.
</td> </tr>
<tr>
<td>
Consortium Agreement
</td>
<td>
* Chapter 4.1 on the General principles: “ _Each Party undertakes to take part in the efficient implementation of the Project, and to cooperate, perform and fulfil, promptly and on time, all of its obligations under the Grant Agreement and this Consortium Agreement as may be reasonably required from it and in a manner of good faith as prescribed by Belgian law_ .”. This is a general declaration of the partners to abide by the rights and obligations set out in the Grant Agreement.
* Chapter 4.3 on the Involvement of third parties: “ _A Party that enters into a subcontract or otherwise involves third parties (including but not limited to Affiliated Entities) in the Project remains responsible for carrying out its relevant part of the Project and for such third party’s compliance with the provisions of this Consortium Agreement and of the Grant Agreement. It has to ensure that the involvement of third parties does not affect the rights and obligations of the other Parties under this Consortium Agreement and the Grant Agreement_ .”. In the context of the Data Management Plan this chapter explains that the Partner extends the rights and obligations of the Grant Agreement to the subcontractor who implements the company interviews of WP2.
</td> </tr>
<tr>
<td>
The Project Handbook
</td>
<td>
The Project Handbook defines the quality criteria for all work conducted in
IIT project.
</td> </tr>
<tr>
<td>
D2.2
</td>
<td>
D2.2 Interview guidelines also describes some aspects of the collection,
analysis, storage and reporting of IIT research data.
</td> </tr>
<tr>
<td>
D3.3
</td>
<td>
D3.3 describes the data processing related to the in-depth case studies. The
Description of the Scientific Research Data File will be an Annex to D3.3.
</td> </tr>
<tr>
<td>
D3.4
</td>
<td>
D3.4, which describes the validation web survey, will detail whether
personal data laws affect the practical solutions related to the survey. In
case personal data is collected, the Description of the Scientific Research
Data File will be an Annex to D3.4.
</td> </tr> </table>
# Data Types
## Data Types of the Project
In the IIT project there are four basic types of data (Figure 1): research
data, analysed research data, project data and reports and communication data.
**Research data** covers the data collected on the project subject matter,
namely industrial innovation practices and related innovation policies. The
data is mainly collected through company interviews and the data types are
e.g. audio records, transcriptions and possibly handwritten interviewer notes
from the interviews. Research data also includes web survey responses.
**Analysed research data** means the reports composed by the interviewee on
the main content of the interviews. Analysed data also refers to qualitative
and quantitative data analyses conducted on the data. Reviews of earlier
published data and records will be utilised to some degree. This data will be
considered as analysed research data for the purposes of this document.
Project related workshops and stakeholder engagement events are public events
and the workshop notes of project partners will be treated in the same way as
analysed research data (i.e. the notes will be shared within the consortium).
**Figure 1. Data types.**
**Project data** includes administrative and financial project data, including
contracts, partner information and periodic reports, as well as accumulated
data on project meetings, teleconferences and other internal materials. This
data is confidential to the project consortium and to the European Commission.
Project data includes mainly MS Office documents, in English, which ensures
ease of access and efficiency for project management and reporting. Most of
the project data is stored in the password protected Eduuni workspace,
administrated by Aalto University.
**Reports and other communication data** includes deliverables, presentations
and for example articles. This data type also refers to the contents of the
IIT project website.
Each data type is treated differently with regard to the level of
confidentiality (see Chapter 2.2). E.g. untreated research data, such as
audio recordings of company interviews, is treated as highly confidential,
whereas most project deliverables are actively disseminated. Some of the data
falls under the EU and national laws on data protection, and for this reason
the project is obliged to seek the necessary authorisations and to fulfil
notification requirements.
The data will partly be in native languages, but all summary documents will be
translated into English. The project will follow the principle of using
commonly used data formats for the sake of compatibility, efficiency and
access. The preferred data formats are MS Office compatible formats, where
applicable.
## Levels of Confidentiality and Flow of Data
Overall, there are three basic levels of confidentiality, namely Public,
Confidential to consortium (including Commission Services), and Confidential
to the Partner / Subcontractor.
**Figure 2. Data types displayed in three levels of confidentiality.**
Figure 2 displays how the previously mentioned data types are positioned in
the level of confidentiality context. Only one data type – (untreated or raw)
research data – is situated entirely in one level of confidentiality, which
means that it solely remains with the partner or third party responsible for
collecting it. The other three types contain data of two different
confidentiality levels. For this reason, Figure 3 displays the data at a finer
granularity.
**Figure 3. Data distributed into the three levels of confidentiality in more
detail.**
Figure 4 describes the data flows in time and also describes the transitions
of data – through processing – from one level of confidentiality to another.
The project team is aware of the effect of transitions, e.g. keeps in mind
that untreated data which has not been anonymized will not flow from the
Partner / Subcontractor level of confidentiality to the Consortium level of
confidentiality. The Consent Forms signed by the interviewees and the Non-
Disclosure Agreements (NDAs) signed by the interviewers have been drawn up to
this effect. In addition, the Description of the Scientific Research Data File
1 has also been composed according to the previously described data transfer
principle.
The data flows have been designed with the objective of maximizing personal
data protection: personal data remains within one partner or subcontractor and
within one country. This also makes the practical interpretation of data
protection laws more feasible.
**Figure 4. Data flows within and between the three levels of
confidentiality.**
It should be noted that workshop notes taken by the project partners will not
contain the names of persons, or any other information that would make it
possible to identify the origin of a comment or an opinion voiced in a
workshop. This being the practice, the data collected in the workshops is not
personal data. The workshop notes will be shared within the project consortium
and only aggregate reports and analyses will be made public.
## Personal Data under the Data Protection Directive
A part of the data gathered in the IIT falls under the definition of personal
data. To avoid any misunderstandings, the central concept definitions have
been included in the IIT Data Management Plan. The following definitions have
been derived from the unofficial translation of the Finnish Personal Data Act
(523/1999) 2 . The Finnish Personal Data Act has been derived from the
constitutional reform and the EU Data
Protection Directive (Directive 95/46/EC of the European Parliament and of the
Council of 24 October 1995 on the protection of individuals with regard to the
processing of personal data and on the free movement of such data), so the
principles of it are applicable for the other IIT partner countries as well.
**Personal data** means any information on a private individual and any
information on his/her personal characteristics or personal circumstances,
where these are identifiable as concerning him/her or the members of his/her
family or household.
**Processing of personal data** means the collection, recording, organisation,
use, transfer, disclosure, storage, manipulation, combination, protection,
deletion and erasure of personal data, as well as other measures directed at
personal data.
**Personal data file** means a set of personal data, connected by a common use
and processed fully or partially automatically or sorted into a card index,
directory or other manually accessible form so that the data pertaining to a
given person can be retrieved easily and at reasonable cost.
**Controller** means a person, corporation, institution or foundation, or a
number of them, for the use of whom a personal data file is set up and who is
entitled to determine the use of the file, or who has been designated as a
controller by an Act.
**Data subject** means the person to whom the personal data pertains.
**Third party** 3 means a person, corporation, institution or foundation
other than the data subject, the controller, the processor of personal data or
someone processing personal data on the behalf of the controller or the
processor.
**Consent** means any voluntary, detailed and conscious expression of will,
whereby the data subject approves the processing of his/her personal data.
The next chapters of this deliverable have been organised as follows: first,
the different data types and collection methods are described. The second
section details the data management systems, categorisation and organisation
of data in the systems. The third and last part of the document focuses on
data sharing and protection of the privacy of interviewees and survey
respondents.
# Collection, Storage and Use of Research Data and Analysed Research Data
The research data will be collected following jointly agreed guidelines and
principles, in order to guarantee sound research data and to make the reuse of
the research data possible. The project consortium
places a strong emphasis on data quality, and consequently there are dedicated
tasks for joint development of a conceptual framework (Task 2.1) and the
methodology for company interviews (Task 2.2). In addition, workshops on
interview principles and codification have been organised, in order to adopt
common practices. The developed guidelines and principles are discussed and
reviewed regularly in monthly project meetings, as well as in Work Package
meetings. Further tasks have been devoted to data collection and analysis,
both for company data and public policy data.
In the project, interviews and case studies are used for descriptive and
exploratory, theory building purposes, whereas a web survey will be conducted
for theory testing and generalisation purposes. Together, the three different
methods enable comparison and authentication of data collected from different
sources in order to increase the reliability and validity of the research
results. This is illustrated in Figure 2. The three methods are briefly
summarised below.
**Figure 2. IIT data collection methods** (data triangulation: interviews and case studies for theory building, web survey for theory testing).
Data collection methods, in summary, include:
* literature reviews on innovation practices and national innovation policies,
* semi-structured interviews with standardised interview guideline throughout the interviews of companies,
* in-depth case studies from five selected sectors,
* a web survey, and
* policy and innovation management best practices dissemination workshops and events on national and European level.
The interview questions and templates will be made available for the future
replication of the study in the project toolbox, which will be developed for
the benefit of future projects. The toolbox will include the methodology and
methods for data collection and analysis, the interview guidelines, and
descriptions of the case study procedures (Work Package 5). The toolbox
development is a key initiative to increase the impact of the IIT project.
## Literature Review
There has been a renewed interest in innovation research following the fiscal
crisis in Europe and a vast amount of research has been conducted on
industrial innovation and innovation management practices by the European
Commission and different European universities, companies and innovation
agencies. Due to its fluid nature, innovation can be approached through a
plethora of approaches and disciplines. The IIT project takes a multi-
disciplinary approach to innovation, yet approaches the phenomenon from the
perspective of industrial companies. The IIT project partners have a strong
command of the current ‘state of the art’ in the field, and have reviewed the
existing body of research and best practice cases available on the innovation
practices of companies. As the project progresses, it will also collect
relevant background information regarding the target companies using public
sources of information.
The project will also conduct literature and policy reviews at the European,
regional and national levels through desktop research. This will generate data
that will be further elaborated in policy workshops, and used for summary
policy reports. The literature review will be carried out using publicly
available research results, publications and policy documents. This state of
the art review will serve as background for constructing the workshop agendas
for validation and extending the assumptions.
## Company Interviews
The data collection for theory building purposes will begin with semi-
structured interviews, organised under Task 2.3 Data Gathering. The interviews
will be conducted among e.g. Chief Technology Officers, or managers in
equivalent positions, of the target companies. Also, to ensure the
trustworthiness of the data, the IIT target is to carry out 800 interviews.
The IIT covers and compares five industrial sectors, and the ideal is to
achieve a balanced sample size between sectors. The purpose of the overall
sample size is also to enable different kinds of comparisons (e.g. innovation
practices between countries, company sizes, companies operating in high/low
innovation performer countries, etc.) and to ensure fulfilling the data
richness demands of qualitative research as well as the saturation of
understanding in each comparison. Also, in order to ensure the inclusion of
all the relevant companies (i.e. to ensure that all relevant ‘voices’ are
heard), and in that way to ensure the saturation of understanding, this sample
size will lead to conducting one interview per company
(cf. ibid.).
Each partner will conduct 150 interviews, mainly in two target member states
per partner (AT, CZ, DE, EE, ES, FI, IE, IT, NL, PT, UK). As an exception,
ZABALA will conduct interviews in three countries (ES, PT and IT). In
combination with the planned number of interviews, this will allow for
comparisons between countries. In addition, to ensure that the best
innovation performers are reached, 10 additional interviews per partner are
assigned as a result of the collaboration with the European Round Table of
Industrialists (ERT): this set of companies will be selected on the basis of
proven innovation capability, assessed either by their level of profitability,
growth or a generally perceived innovative product and/or service portfolio.
Finally, and also to minimise possible over-/under-sampling biases, the target
countries represent different types of innovation performers: of the 11 EU
member states targeted in the study, two are innovation leaders, five are
innovation followers and four are moderate innovators.
All interviews will be conducted following the guidelines documented in the
research methodology, which has been developed in the beginning of the project
(Task 2.2 Methodology for Company Interviews). Methodological guidelines
define the selection of industries and sectors, the type and size of the
interviewed companies, as well as the profile of their representatives. Data
will be collected on interviewee perceptions regarding external trends and
innovation drivers and on company internal innovation processes and
initiatives. The interview questions used across all sectors and countries are
detailed in the interview guidelines.
The initial preparation for the interviews involves contacting companies by
letter or email (which may be followed up by a phone call). The interviewees
will be asked for a two-hour slot so that all of the main issues of the
interview guideline can be covered. The information given to the proposed
interviewee should contain:
1. A description of the project objectives and indications of the areas in which questions will be asked (this can be a summary of the main questions set out as issues).
2. The motivations and hence potential benefits of the study to companies will also be made explicit.
3. The ethical and data protection related issues are addressed. This includes explaining that the interviews will be recorded and transcribed (unless the interviewee does not wish for the interview to be recorded), but that the transcript will be used only for coding purposes, and is confidential only to the partner / third party organization conducting the interview. Companies will be informed that the results will be used only statistically, and that any attribution of answers to a company or person will require their explicit acceptance and clearance. Where necessary, the interviewees will also be advised to seek the consent from their organisation to present their views before the interviews are arranged. A Consent Form (Annex I) will be sent to the interviewee to be reviewed and signed. The Description of the Scientific Research Data File (Annex II), which is required by the legislation regarding personal data 4 , will be presented to the interviewee. If the interviewee/company requires a Non-Disclosure Agreement, a model NDA is offered to project partners as part of the Data Management Plan (Annex III).
The interviews will be conducted in line with the principles described in the
Data Management Plan and the related documents listed in Chapter 1 (e.g. Grant
Agreement, Consortium Agreement, Project Handbook), as well as the preceding
deliverables (D2.1 and D2.2) which guide the interview implementation work.
Most partner organizations have additional organization-specific principles
e.g. regarding research work, data protection and ethical code of conduct.
Each partner is responsible to observe the organisation specific guidance as
well as national legislation, in addition to the above mentioned documents. In
the cases where third parties have been contracted by project partners to
conduct interviews, the project partner responsible for contracting a third
party is responsible for contractually obligating these third parties to abide
by the same legal, ethical and project related documents and principles as
which direct the research work of the project partners themselves.
At the beginning of the interviews the purpose of the interview, the processes
for the management and use of data, and sharing of the results will be
discussed and explained to the interviewees. The Consent Form and the
Description of the Scientific Research Data File contain the key facts. The
signed Consent Forms will be collected from the interviewees prior to
conducting the interview.
The interviews will be conducted in the native language of the interviewees.
The interviews will be taped unless the interviewee requests to not be taped.
The taped interviews will be transcribed by third parties, which will be
contractually obligated to adopt the same principles as the consortium
partners with regard to personal data. Interview Summary Reports will be
composed based on the transcripts or written hand notes in the cases where the
interview has not been taped. The Interview Summary Reports will be sent to
the interviewees for review and approval prior to archiving, if the
interviewee requests this, or if the interviewer is unsure of whether the
report contains information which the company considers confidential and
harmful to be published even after aggregation and anonymization. The
interview data will also be used to produce an anonymised Codified Data
Catalogue.
All interview data will be confidential to the partner / third party
conducting the interview, and will not be disclosed even within the IIT
consortium. Access to raw data will be granted only to nominated persons in
the organisations which collected the data. The data will be stored in the
respective organisations’ secure databases. The Anonymized Interview Summary
Reports and Anonymized Codified Data Catalogues of each interview will be
shared within the consortium (M7 onwards). In addition, Anonymized Interviewer
Notes will be shared with the other consortium members on a need basis.
The Anonymized Interview Summary Reports and Anonymized Codified Data
Catalogues will be aggregated and made public at the end of the project (M24).
Depositing the research data into a publicly accessible database follows from
the participation of the project in the Open Research Data Pilot, which is a
part of the larger Open Access initiative 5 . Open access can be defined as
the practice of providing on-line access to scientific information free of
charge to the end-user. Open access promotes the re-use of data. Scientific
information in this context refers to peer-reviewed scientific research
articles (published in scholarly journals) or research data (data underlying
publications, and e.g. raw data). The underlying principle of the vision is
that “information already paid for by the public purse should not be paid for
again each time it is accessed or used, and that it should benefit European
companies and citizens to the full.” 6 .
Deliverable 2.5 Best practices of company innovation management, which is
derived from the interview data, will be a public document, published
according to the schedule detailed in the Grant Agreement.
## In-depth Case Studies
In order to deepen the understanding on the different innovation practices and
their alignment with innovation policies, 10-15 companies will be selected for
case studies. The purpose of the case studies is to enrich the findings and
strengthen the emerging understanding (and to fill the ‘gaps’ in
understanding) achieved with the interviews. The case studies will also
contribute to exploratory theory building. The IIT case study approach will be
developed in more detail based on the interviews described in the previous
section. However, two key principles are outlined here: Firstly, the case
studies will build on the interviews and prior theoretical understanding for
developing an understanding of the key variables and constructs examined in
the IIT project, and for outlining a rudimentary understanding of their
relationships.
Secondly, and in line with the _theoretical sampling_ principle, contrasting
cases will be selected in different sectors and countries. Therefore, the
selected cases will differ according to their innovation practices and the
extent to which these companies are supported or constrained by national
innovation policies. In order to increase the data richness and the depth of
understanding, additional interviews with key actors in the organisation, and
possibly at government level, will also be conducted. Also, data from public
annual reports, policy documents, and other written public sources may be
collected and used as additional sources of information.
The data will be collected in the form of interviews, company annual reports
and other public documentation, policy documents and other written sources.
The collected data will be confidential to partners Twente, Uniman and Aalto,
which are the partners conducting the case studies. The partners conducting
the case studies will compile Case Study Reports of the studies for which they
are responsible. The aim of the reports is to give additional insights and
e.g. quotes 7 regarding the innovation strategies, internal innovation
practices and e.g. collaborative arrangements of the companies. The Case Study
Reports as well as the Notes of interviewers will be anonymized and shared
within the consortium (i.e. the data is Confidential to the Consortium). These
documents will result in Deliverable 3.3 In-depth case study findings, which
is Confidential to the Consortium. The lessons learned will be utilized in
workshops and dissemination activities.
As the Case Study work will in all likelihood at some point result in research
work where the data protection law related obligations will need to be
followed, D3.3 will contain as an annex the Description of the Scientific
Research Data File regarding the case study work (a model of such a file can
be seen in Annex II of this document). Partner Twente, as the task leader, will
be responsible for fulfilling this notification requirement.
## Web Based Survey
The interviews and the case studies provide the basis for deductive,
hypotheses testing quantitative data collection. This will be carried out by
conducting a web survey. The primary motive of the web survey is to validate
the findings and hypotheses rising from the interviews and case studies,
therefore, contributing to developing _statistical generalisations_ . The web
survey will also further widen the respondent base to take into examination
the perspectives of key stakeholders. The survey will cover the same topics as
the interview guideline with the possibility of additions e.g. in the form of
tables usable in web based surveys.
The web surveys will be translated into the national languages of IIT in order
to cater to the needs of SMEs especially. Partner Joanneum will implement the
web survey and the data will be stored in a secure server of Partner Joanneum.
The methodology for the web survey has been developed within Task 2.2
Methodology for Company Interviews, and will be further reflected on as part
of Task 3.2 Data Analysis. The survey will complement the data collected
through interviews and case studies. The data from the survey is confidential
to the partner conducting the survey. The anonymised survey analyses will be
confidential to the consortium for the purpose of comparisons and further
analyses. The reports and the results of the analysis will be made public.
As the web survey implementation choices may at some point result in a
situation where the data protection law related obligations will need to be
followed, D3.4 will contain as an annex the Description of the Scientific
Research Data File regarding the web survey (if personal data is indeed
collected). A model of such a file can be seen in Annex II of this document.
Partner Joanneum, as the task leader, will be responsible for fulfilling any
notification requirement that may emerge.
## National and European Level Workshops and Focus Groups
For further analysis, a two level workshop concept will be developed. This
includes national level focus groups in 11 countries that have been selected
for the analysis, and European level workshops. The focus groups at national
level will include representatives from relevant ministries, funding
authorities and agencies, NCPs, industrial associations as well as other
policy intermediaries. The workshops will also include representatives from
all the five main sectors covered by the IIT project: 1) ICT and ICT services,
2) Manufacturing, 3) Biopharma, 4) Agro-food, and 5) Clean technologies. The
outcomes of the national focus groups will be public summary reports and
policy briefings based on the discussions. European level workshops will
discuss the specific role, appropriateness and coherence of national and
European instruments in order to support industrial transition. The results of
the workshops will be reported in a public deliverable D4.2 Briefing paper for
the European policy workshop.
Further to the focused policy workshops, the IIT project will organise public
workshops for dissemination purposes. These workshops provide an opportunity
for further feedback and thus build on the final reports and recommendations
by the project.
Project partners will take notes in the workshops. The workshop notes will
neither list the names of the participants nor record the expressed opinions
of the participants in connection with the names of the persons expressing
those opinions. Under this practice, the gathered data does not fall under the
category of personal data as defined in personal data protection legislation.
All data collected in the workshops will be public and made available through
e.g. the IIT website. The data will also be summarised in the IIT project
deliverables D6.2 National level workshops documentation and D6.3 Workshop
documentation and Output paper.
## Destruction of Data
The data of the IIT project will be destroyed in January 2022. The Grant
Agreement states that the project needs to be prepared for an audit within two
years after the payment of the balance. The obligation to provide
documentation for a possible investigation, however, is valid for five years
after the payment of the balance. As the exact date of the payment of the
balance is unknown at this stage, the consortium will adopt a security margin
of one year, starting from the first possible date of the payment of the
balance; the January 2022 date has been derived from this principle.
Each partner will be responsible for destroying the data in their possession.
The coordinator will be responsible for destroying data from Eduuni and any
other coordinator servers. Each partner is obliged to make arrangements for
this task that do not depend on the availability of current project personnel.
# Collection, Storage and Use of Project Data
The data accumulated in the IIT project will be analysed and stored according
to the principles detailed in the Project Handbook and Interview Guidelines.
Overall, the detailed data will be stored by the organisations which collected
it, and the anonymised Interview Summary Reports and Codified Data Catalogues
will be stored in a database provided by partner Uniman. Both the central
repository and the databases of the individual organisations will be secured
using the latest security protocols, and access to data will be granted only
to persons nominated by the project partners. Each partner will produce a
Description of the Scientific Research Data File to fulfil the obligations
arising from national data protection laws. The partners are offered a model
Description of the Scientific Research Data File (Annex II), but are advised
to check whether it fulfils their national requirements.
All project administrative data will be stored at a dedicated database for the
IIT project. The project uses the Eduuni workspace (https://www.eduuni.fi/),
which is a secure, password protected document workspace and archive system.
The Eduuni workspace consists of Microsoft SharePoint 2013 Workspace and
Office Web Apps functionalities. It further includes a wiki functionality.
Access to the database is managed by the coordinator and provided for project
consortium and other parties as deemed necessary by the project team. The
project data is stored on Aalto University servers, not in the cloud, for
added security. The data is organised in the database following the Work
Package, Task and Deliverable structure as defined in the project plan and
contract. This ensures ease of access and provides a logical structure for
the data. The following table details the project data management structure
and categories.
| Work Package | Tasks |
| --- | --- |
| WP1 Management | T1.1 Coordination actions within the consortium; T1.2 Data management; T1.3 Project handbook; T1.4 Advisory board facilitating and stakeholder liaising |
| WP2 Current company innovation practices | T2.1 Conceptual framework development; T2.2 Methodology for company interviews; T2.3 Data gathering; T2.4 Data analysis (national, industry specific, other); T2.5 Best practices of company innovation management |
| WP3 Innovation policy implications rising from current company innovation practices | T3.1 Review of national innovation policies; T3.2 Data analysis (comparison of company practices against national policies); T3.3 In-depth case studies to validate and understand findings; T3.4 Validation via web survey |
| WP4 Assessment of current innovation policies | T4.1 Methodology development; T4.2 Innovation policy assessment workshops |
| WP5 Toolkit for the replication of the study | T5.1 Toolkit development; T5.2 Toolkit introductory workshops |
| WP6 Dissemination | T6.1 Best innovation practices dissemination; T6.2 National market-to-policy workshops and related dissemination; T6.3 European innovation policy workshops and related dissemination; T6.4 Toolkit dissemination |
Table 1. Structure of the IIT Project Work Packages and Tasks
The IIT Project Handbook details the project internal management structure and
processes, as well as quality and reporting practices. Related to project data
management, best practices for data generation and sharing have been applied.
These include set rules for version control, whereby the partners are
encouraged to name documents uniformly by Task or Deliverable name, with a
corresponding version number (01, 02, 03, ...); a naming sketch follows below.
The documents are stored in the database and preferably shared within the
consortium via a link to the database rather than as e-mail attachments. All
deliverables have a unified look and feel, based on a common template, which
helps the reviewers in their project evaluation.
The project coordinator assumes the responsibility for timely documentation
and sharing of project management related documents and materials. Each Work
Package (WP) leader monitors the timely documentation of WP related
requirements within the consortium. Each task leader ensures the timely
production of the deliverable for which he/she is responsible. Since the Tasks
in the different Work Packages are strongly inter-related and intertwined, the
same previously described principles regarding, e.g., confidentiality levels
and data types will be applied.
# Data Sharing
All parties have signed/acceded to the project Grant Agreement and Consortium
Agreement, which detail the parties’ rights and obligations, including – but
not limited to – obligations regarding data security and the protection of
privacy. These obligations and the underlying legislation will guide all of
the data sharing actions of the project consortium.
The IIT project has opted to support and join the Open Research Data Pilot,
which is an expression of the larger Open Access initiative of the European
Commission. Participation in the pilot is manifested on two levels: a)
depositing research data in an open access research database or repository,
and b) choosing to provide open access to scientific publications which are
derived from the project research. At the same time, the consortium is
dedicated to protecting the privacy of the informants and companies.
**Depositing research data in an open access research database or
repository:** Following the principles of the European Commission Open Data
pilot, the applicable anonymised and aggregated data gathered in the project
will be made available to other researchers, in order to increase the
potential exploitation of the project work. The aggregated and anonymized
Interview Summary Reports and Codified Data Catalogues will be the key
contribution to the Open Access initiative, as they will be made available
according to the schedule detailed in the Grant Agreement. The IIT project
will further establish a toolbox which future users of the project methodology
can access and continue to use in their own respective countries, and in doing
so enrich the existing data with their corresponding national data. The
toolbox will be made available via the project website, _www.IIT-project.eu_
.
**Choosing to provide open access to scientific publications which are derived
from the project** : All peer-reviewed scientific publications relating to
results are published so that open access (free of charge, online access for
any user) is ensured. Publications will either be made immediately accessible
online by the publisher (Gold Open Access), or will be made available through
an open access repository after an embargo period, usually six to twelve
months (Green Open Access). Possible Gold Open Access journals include
Research Policy, Technovation, Technological Forecasting and Social Change,
and Industry and Corporate Change. For all other articles, the researchers aim
to publish them in a Green Open Access repository. The coordinator, Aalto
University, has a Green Open Access repository the IIT consortium can use, at
https://aaltodoc.aalto.fi/?locale-attribute=en.
A machine-readable electronic copy of the published version or of the final
peer-reviewed manuscript accepted for publication will be made available in a
repository for scientific publications. Electronic copies of publications will
have bibliographic metadata in a standard format, including "European Union
(EU)" and "Horizon 2020", the name of the action, its acronym and grant
number, the publication date, the length of the embargo period if applicable,
and a persistent identifier.
In addition to the previous sharing of the project results, the IIT project
will disseminate the best practice experiences among the participating
companies and broader European audiences. It will also evaluate existing
innovation policy portfolios at national and European levels, and analyse the
differences between innovation processes and management practices in different
industrial sectors. The best practices and other results will be disseminated
widely both to the European business community and governments in order to
improve Europe’s innovation potential.
The IIT project will publish summary reports of the project findings, as well
as other reports and recommendations on how to accelerate the deployment of
best innovation practices in Europe. These publications will be made publicly
available through the project website, as well as through the participating
organisations’ and their partners’ websites. The reports will be accessible
for everyone and can be freely quoted in subsequent research and publications.
The project will provide three key reports, namely an Innovation Policy
Report, a Report on Companies' Innovation Practices, and a Toolkit for the
Replication of the Study. The reports will be published at national and
regional levels and further disseminated through events and workshops for the
benefit of European companies and the research community. There is a dedicated
Work Package 6 for dissemination activities, and further dissemination will be
done on a national level. Details regarding dissemination channels and events
will be a part of the project Dissemination and Communication Plan.
# Living Document
This Data Management Plan is a living document which will be submitted at the
end of July 2015 (M6) but which will be updated and complemented as the
project evolves.
Examples of data related issues which remain to be decided at later stages of
the project:
* Type of metadata on the research data deposited to an open access research database or repository
* Aggregation method / technology applied on the anonymised research data prior to depositing the research data in an open access research database or repository
* Practical implementation of the Grant Agreement obligation to submit copies of ethical approvals for the collection of personal data by the competent University Data Protection Officer / National Data Protection Authority (Grant Agreement Annex 1, 1.4 Ethics Requirements)
# 1\. INTRODUCTION
This deliverable constitutes the final Data Management Plan Implementation of
the ENTROPY project. The main objective of this deliverable is to provide the
overall update of the data management policy with regard to the data sources
that the ENTROPY project collects, processes, generates and makes available.
It takes into account the new data and the changes in the consortium policies
derived from the innovation potential of the project, and it follows the
Horizon 2020 FAIR DMP approach.
Overall, ENTROPY follows the H2020 FAIR approach, presenting research data
that are findable, accessible, interoperable and re-usable, as described in
the following chapters.
In Chapter 1, Data Summary, the ENTROPY datasets are presented along with the
overall purpose of their collection, their alignment with the objectives, the
origin of the data and their expected size. Chapter 2 outlines the FAIR data
approach and the measures taken pertinent to data; in that chapter, the
datasets made open are described in detail, along with the means to access
them. Chapter 3 describes the allocation of resources, and Chapter 4 the
provisions taken during and beyond the project lifecycle pertinent to data
security. Chapter 5 addresses issues relevant to the open data, and Chapter 6
presents, in appendix format, all the relevant information pertinent to the
forms utilized in the project.
# 2\. DATA SUMMARY
This section describes the basic data collected and generated throughout the
lifetime of the ENTROPY Project in relation to the platform, the apps
developed and the pilots. Following the evolution of the ENTROPY project, the
data collected and generated reflect the three pilots where ENTROPY was
deployed and evaluated: (1) PILOT A: Navacchio Technology Park (POLO), (2)
PILOT B: University of Murcia Campus (UMU) and (3) PILOT C: Technopole in
Sierre (HES-SO). The data sets collected and generated by each pilot differ
from each other, since the pilots differ in terms of sensing devices, context
and users; some data types, however, are common across pilots. For each of
these three target groups/deployments, several parameters have been identified
by each pilot, and the data received from external sources fill in these
parameters.
The overall purpose of data collection / generation in ENTROPY was to enable
the identification of Energy Consumption, Building setup and Participant
actions relevant to Energy Efficiency, and of the potential to reduce overall
energy consumption under the current conditions at each place of application.
The consortium took the necessary measures to ensure that the necessary amount
and type of data was collected in order to meet the technological and
scientific objectives of ENTROPY.
The collected and processed data pertain to User Demographics, Building Data,
Sensing Data, Environmental data and Energy data; their types and formats are
presented in detail in the following sections, alongside the data of the
Personal App (PersoApp) and the Treasure Hunt Serious game (TH). Additionally,
data related to the energy performance characteristics of the considered areas
and subareas, as well as open data regarding past environmental conditions,
were re-used to assist in setting the baselines and calibrating the system.
Following the presentation of the data collected / generated in the lifecycle
of the ENTROPY project, a table of the datasets that will be made available is
presented, together with the relevant information, in the following sections.
## 2.1 Datasets
### 2.1.1 Users Data / Building Data / Sensing Data
The following tables present the data collected / processed in the main
ENTROPY platform during the course of the ENTROPY project. Part of this data
is collected only to create the behavioural profiles of the end users and is
anonymized upon completion of the relevant questionnaires.
<table>
<tr>
<th>
**Parameter**
</th>
<th>
**Type**
</th>
<th>
**Unit**
</th>
<th>
**Mandatory**
</th> </tr>
<tr>
<td>
**USERS DEMOGRAPHICS**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
User ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
N(NO)
</td> </tr>
<tr>
<td>
Age
</td>
<td>
Numeric
</td>
<td>
Years
</td>
<td>
N
</td> </tr>
<tr>
<td>
Gender
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
N
</td> </tr>
<tr>
<td>
Function / Role (ex. Manager, professor, student etc.)
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
N
</td> </tr>
<tr>
<td>
Educational level
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
N
</td> </tr>
<tr>
<td>
Hours at university/campus / Working hours
</td>
<td>
Numeric
</td>
<td>
hours
</td>
<td>
N
</td> </tr>
<tr>
<td>
Energy Awareness Level
</td>
<td>
Numeric
</td>
<td>
\-
</td>
<td>
N
</td> </tr>
<tr>
<td>
**BUILDING DATA**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Date
</td>
<td>
String
</td>
<td>
dd-mm-yyyy HH:mm:ss
</td>
<td>
</td> </tr>
<tr>
<td>
Building ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y(YES)
</td> </tr>
<tr>
<td>
Building type
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Building size
</td>
<td>
Numeric
</td>
<td>
m (meters)
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Building regulations
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Consumption baseline
</td>
<td>
Numeric
</td>
<td>
kWh
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Sensor ID (link with sensor data)
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Total number of sensors
</td>
<td>
Numeric
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Internal temperature
</td>
<td>
Numeric
</td>
<td>
°C
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Internal humidity level
</td>
<td>
Numeric
</td>
<td>
%
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Occupants per room/building
</td>
<td>
Numeric
</td>
<td>
</td>
<td>
N
</td> </tr>
<tr>
<td>
**SENSING DATA**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
ROOM SENSOR DATA
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**HVAC**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Sensor ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Location
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Automated system (Yes/No)
</td>
<td>
Boolean
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
State (ON/OFF)
</td>
<td>
Boolean
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Operation mode (heating/cooling)
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Fan speed
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Nominal power
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Energy efficiency label
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Energy (Electricity, Gas, Fuel oil)
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Energy Meter**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Date (timestamp)
</td>
<td>
Date
</td>
<td>
dd-mm-yyyy HH:mm:ss
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Meter ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr> </table>
<table>
<tr>
<th>
Energy consumption
</th>
<th>
Numeric
</th>
<th>
kWh
</th>
<th>
Y
</th> </tr>
<tr>
<td>
Energy from renewable sources
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Type of energy source
</td>
<td>
String
</td>
<td>
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Building/Room ID (link with building/room data)
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Indoor Lighting System Management/Luminosity Sensors**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Sensor ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Location
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Automated system (Yes/No)
</td>
<td>
Boolean
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Light status (ON/OFF)
</td>
<td>
Boolean
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Light regulation (0-100%)
</td>
<td>
Numeric
</td>
<td>
%
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Hours of lighting per day
</td>
<td>
Numeric
</td>
<td>
Hours
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Type of lighting (ex. CFL, LED etc.)
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Number of lights on
</td>
<td>
Numeric
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Luminous flux
</td>
<td>
Numeric
</td>
<td>
lm(lumen)
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Nominal power
</td>
<td>
Numeric
</td>
<td>
W
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Humidity Sensors**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Sensor ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Location
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Humidity level (internal)
</td>
<td>
Numeric
</td>
<td>
%
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Presence sensor**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Sensor ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Location
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Number of attendees
</td>
<td>
Numeric
</td>
<td>
\-
</td>
<td>
N
</td> </tr>
<tr>
<td>
User ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
N
</td> </tr>
<tr>
<td>
Enter timestamp
</td>
<td>
Date
</td>
<td>
\-
</td>
<td>
N
</td> </tr>
<tr>
<td>
Exit timestamp
</td>
<td>
Date
</td>
<td>
\-
</td>
<td>
N
</td> </tr>
<tr>
<td>
BUILDING SENSOR DATA
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Energy Meter**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Date (timestamp)
</td>
<td>
String
</td>
<td>
dd-mm-yyyy HH:mm:ss
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Meter ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Energy consumption
</td>
<td>
Numeric
</td>
<td>
kWh
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Electrical consumption (Active and reactive power)
</td>
<td>
Numeric
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Energy from renewable sources
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Type of energy source
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Water Meter**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Meter ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Water meter type (Mass/Volumetric)
</td>
<td>
Boolean
</td>
<td>
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Water consumption
</td>
<td>
Numeric
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Environmental conditions monitoring (Weather station)**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Weather station ID
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Temperature (external)
</td>
<td>
Numeric
</td>
<td>
°C
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Barometric pressure
</td>
<td>
Numeric
</td>
<td>
hPa
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Humidity (external)
</td>
<td>
Numeric
</td>
<td>
%
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Wind speed
</td>
<td>
Numeric
</td>
<td>
m/s
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Wind direction
</td>
<td>
Numeric
</td>
<td>
°
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Precipitation
</td>
<td>
String
</td>
<td>
mm
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Outside sun duration (luminosity)
</td>
<td>
Numeric
</td>
<td>
h/day (hours per day)
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Outside radiation
</td>
<td>
Numeric
</td>
<td>
W/m²/day (daily radiation average)
</td>
<td>
N
</td> </tr> </table>
### 2.1.2 ENTROPY Personal App Data
The following tables present the data collected / processed relevant to the
Personal App during the course of the ENTROPY project.
<table>
<tr>
<th>
**Parameter**
</th>
<th>
**Data**
</th>
<th>
**Type**
</th>
<th>
**Unit**
</th>
<th>
**Mandatory**
</th> </tr>
<tr>
<td>
PERSONAL APP DATA
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Authentication token from user sign-in**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
username
</td>
<td>
The user name of the participant
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
password
</td>
<td>
The password of the participant
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Consumption profile data of all the registered buildings**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
energyConsuptionPerSqMeter
</td>
<td>
Energy consumption per sqr meter
</td>
<td>
Integer
</td>
<td>
kWh
</td>
<td>
Y
</td> </tr>
<tr>
<td>
buildingSpace
</td>
<td>
Building surface in sqr meters
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
energyConsumptionPerOccupant
</td>
<td>
kWh per occupant in building room
</td>
<td>
Integer
</td>
<td>
kWh
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Consumption profile data of a building**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
energyConsuptionPerSqMeter
</td>
<td>
Energy consumption per sqr meter
</td>
<td>
Integer
</td>
<td>
kWh
</td>
<td>
Y
</td> </tr>
<tr>
<td>
buildingSpace
</td>
<td>
Building surface in sqr meters
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
EnergyConsumptionComparisonWeekly
</td>
<td>
Energy consumption
</td>
<td>
Integer
</td>
<td>
kWh
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Building subAreas**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
returnobject
</td>
<td>
Number of areas in buildings
</td>
<td>
Array
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**All building space areas**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
returnobject
</td>
<td>
Number of areas in buildings
</td>
<td>
Array
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Observation Values from a Sensor**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
sensor_data_stream_id
</td>
<td>
The unique URL id of a Sensor Data Stream
</td>
<td>
URL
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
dateFrom
</td>
<td>
Timestamp
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
dateTo
</td>
<td>
Timestamp
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**All Recommendations per end user**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Username
</td>
<td>
User name
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
recommendationType
</td>
<td>
Type of Recommendation
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
status
</td>
<td>
Recommendation status
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
App_name
</td>
<td>
App name
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Positive or Negative Feedback from a Recommendation**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
id
</td>
<td>
ID of recommendation
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
App_name
</td>
<td>
App name
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
feedback
</td>
<td>
Positive or Negative
</td>
<td>
Boolean
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
points
</td>
<td>
User points earned
</td>
<td>
Integer
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Status
</td>
<td>
Recommendation status
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Custom_attributes
</td>
<td>
Attributes
</td>
<td>
JSONObject
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**SensorDataStreams per building for a list of attributes**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Temperature
</td>
<td>
Room temperature
</td>
<td>
String
</td>
<td>
°C (Celsius)
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Luminosity
</td>
<td>
Room luminosity
</td>
<td>
String
</td>
<td>
lux
</td>
<td>
Y
</td> </tr>
<tr>
<td>
CO2
</td>
<td>
Room CO2
</td>
<td>
String
</td>
<td>
ppm
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Active power
</td>
<td>
Room active power
</td>
<td>
String
</td>
<td>
w/h
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**User profile per app**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
playerCharacter
</td>
<td>
Pic of the player
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
totalScore
</td>
<td>
Score of the player in total
</td>
<td>
Integer
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Ranking
</td>
<td>
Player ranking in leader board
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Badges
</td>
<td>
Player badges in leader board
</td>
<td>
Object
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**A new Action**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
App_name
</td>
<td>
Name of the app based on Pilot location
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Building_name
</td>
<td>
Building name where the action took place
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
validatable
</td>
<td>
Is action validatable?
</td>
<td>
Boolean
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Score
</td>
<td>
Points to gain for this action
</td>
<td>
Integer
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
badge
</td>
<td>
Name of the badge for this action
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr> </table>
### 2.1.3 ENTROPY Serious Games Data
The following tables present the data collected / processed relevant to the
Treasure Hunt during the course of the ENTROPY project.
<table>
<tr>
<th>
**Parameter**
</th>
<th>
**Data**
</th>
<th>
**Type**
</th>
<th>
**Unit**
</th>
<th>
**Mandatory**
</th> </tr>
<tr>
<td>
SERIOUS GAME DATA
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Authentication token from user sign-in**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
username
</td>
<td>
The user name of the participant
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
password
</td>
<td>
The password of the participant
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Observation Values from a Sensor**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Sensor_data_stream_id
</td>
<td>
The unique URL id of a Sensor Data Stream
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
dateFrom
</td>
<td>
Start date
</td>
<td>
String
</td>
<td>
date
</td>
<td>
Y
</td> </tr>
<tr>
<td>
dateTo
</td>
<td>
End date
</td>
<td>
String
</td>
<td>
date
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**All Recommendations per end user**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
username
</td>
<td>
The user name of the participant
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
status
</td>
<td>
Status of recommendation
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
app_name
</td>
<td>
Application name
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
recommendationType
</td>
<td>
Type of recommendation (e.g. task)
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**User Profile per Application**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
app_name
</td>
<td>
Name of the game
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
playerCharacter
</td>
<td>
Character of the player
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
lastscore
</td>
<td>
The last score of the player
</td>
<td>
Integer
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
lastRanking
</td>
<td>
The last rank of the player
</td>
<td>
Integer
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
buildingSpace
</td>
<td>
The building where the game is played
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
custom_attributes
</td>
<td>
The list of badges
</td>
<td>
Array of Strings
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**Positive or Negative Feedback from a Recommendation**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Id
</td>
<td>
ID of recommendation
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
app_name
</td>
<td>
Name of the game
(e.g. TH POLO, TH UMU, TH HESSO)
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Feedback
</td>
<td>
The feedback from the recommendation "POSITIVE" or "NEGATIVE"
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Points
</td>
<td>
Number of points won for the completed task/action
</td>
<td>
Integer
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**A new Action**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
app_name
</td>
<td>
The name of the app based on the Pilot location
(e.g. TH POLO, TH UMU, TH HESSO)
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
building_name
</td>
<td>
The building name where the action took place
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
validatable
</td>
<td>
Is action possible to validate
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
score
</td>
<td>
The points to gain for this action
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
badge
</td>
<td>
The name of the badge that can be won for this action
</td>
<td>
String
</td>
<td>
\-
</td>
<td>
Y
</td> </tr>
<tr>
<td>
**SensorDataStreams per building for a list of attributes**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Temperature
</td>
<td>
</td>
<td>
Integer
</td>
<td>
Celsius
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Luminosity
</td>
<td>
</td>
<td>
Integer
</td>
<td>
Lux
</td>
<td>
Y
</td> </tr>
<tr>
<td>
CO2
</td>
<td>
</td>
<td>
Integer
</td>
<td>
ppm
</td>
<td>
Y
</td> </tr>
<tr>
<td>
active power
</td>
<td>
</td>
<td>
Float
</td>
<td>
w/h
</td>
<td>
Y
</td> </tr> </table>
## 2.2 The ENTROPY datasets
The previous tables presented the basic data collected and processed in the
course of the ENTROPY Project. Out of the aforementioned collected data,
different datasets are produced as presented in the following table.
<table>
<tr>
<th>
**#**
</th>
<th>
**Data Type**
</th>
<th>
**Origin**
</th>
<th>
**WP**
</th>
<th>
**Format**
</th>
<th>
**Overall size**
</th> </tr>
<tr>
<td>
1
</td>
<td>
POLO Sensor Observation Data
</td>
<td>
Primary Data, Pilots
</td>
<td>
5
</td>
<td>
.jsonld
</td>
<td>
> 5GB
</td> </tr>
<tr>
<td>
2
</td>
<td>
POLO Recommendation Data
</td>
<td>
Primary Data, Pilots
</td>
<td>
5
</td>
<td>
.jsonld
</td>
<td>
> 50 MB
</td> </tr>
<tr>
<td>
3
</td>
<td>
UMU-Pleiades Sensor Observation Data
</td>
<td>
Primary Data, Pilots
</td>
<td>
5
</td>
<td>
.jsonld
</td>
<td>
> 8 GB
</td> </tr>
<tr>
<td>
4
</td>
<td>
UMU-Pleiades Recommendation
Data
</td>
<td>
Primary Data, Pilots
</td>
<td>
5
</td>
<td>
.jsonld
</td>
<td>
> 100 MB
</td> </tr>
<tr>
<td>
5
</td>
<td>
UMU-Lanave Sensor Observation
Data
</td>
<td>
Primary Data, Pilots
</td>
<td>
5
</td>
<td>
.jsonld
</td>
<td>
> 1 GB
</td> </tr> </table>
<table>
<tr>
<th>
6
</th>
<th>
UMU-Lanave Recommendation Data
</th>
<th>
Primary Data, Pilots
</th>
<th>
5
</th>
<th>
.jsonld
</th>
<th>
> 10 MB
</th> </tr>
<tr>
<td>
7
</td>
<td>
HESSO Sensor Observation Data
</td>
<td>
Primary Data, Pilots
</td>
<td>
5
</td>
<td>
.jsonld
</td>
<td>
> 5 GB
</td> </tr>
<tr>
<td>
8
</td>
<td>
HESSO Recommendation Data
</td>
<td>
Primary Data, Pilots
</td>
<td>
5
</td>
<td>
.jsonld
</td>
<td>
> 20 MB
</td> </tr>
<tr>
<td>
9
</td>
<td>
Serious Game Analytics Data
</td>
<td>
Primary Data, Pilots
</td>
<td>
5
</td>
<td>
.jsonld
</td>
<td>
400 kB
</td> </tr>
<tr>
<td>
10
</td>
<td>
Perso app data
</td>
<td>
Primary Data, Pilots
</td>
<td>
5
</td>
<td>
.jsonld
</td>
<td>
1 MB
</td> </tr>
<tr>
<td>
11
</td>
<td>
Campaign Users Interaction Data (data per campaign)
</td>
<td>
Primary Data
</td>
<td>
5
</td>
<td>
Raw Data in
MongoDb
</td>
<td>
> 50MB
</td> </tr> </table>
The following table describes the data sets and the purpose of the data
collection or generation in relation to the objectives of the project.
Additionally, it indicates the data utility, clarifying to whom the data might
be useful.
<table>
<tr>
<th>
**#**
</th>
<th>
**Data Type**
</th>
<th>
**Description and Purpose**
</th>
<th>
**Utility**
</th> </tr>
<tr>
<td>
1
</td>
<td>
POLO Sensor
Observation Data
</td>
<td>
**Description:** The POLO Sensor Observation Data contains the data from all
sensors distributed in the different rooms used for the campaigns.
**Purpose:** The POLO Sensor Observation Data are used as input for the
recommendation engine, producing recommendations when the defined rule
conditions are met. They are also used to inform participants about the
current status of variables such as indoor and outdoor temperature, CO2 levels
and humidity. Finally, all gathered sensor data are used to evaluate whether
campaigns have influenced energy consumption and comfort.
</td>
<td>
The POLO Sensor Observation Data can be used by other researchers working in
the field of energy savings in buildings.
</td> </tr>
<tr>
<td>
2
</td>
<td>
POLO
Recommendation
Data
</td>
<td>
**Description:** The POLO Recommendation Data contains the data from the
recommendation engine, the templates and rules definitions, and all the
statistics about them, i.e. how many times a rule has been triggered in a
given campaign and the responsiveness of the participants.
**Purpose:** The POLO Recommendation Data are used to analyse the results of
campaigns and the participants' behaviour during campaigns, and to evaluate
their responsiveness.
</td>
<td>
The POLO Recommendation Data can be used by other researchers working in the
field of behavioural change.
</td> </tr> </table>
<table>
<tr>
<th>
3
</th>
<th>
UMU-Pleiades
Sensor
Observation Data
</th>
<th>
**Description:** The UMU Sensor Observation Data contains the data from all
sensors distributed in the different rooms used for the campaigns.
**Purpose:** The UMU Sensor Observation Data are used as input for the
recommendation engine, producing recommendations when the defined rule
conditions are met. They are also used to inform participants about the
current status of variables such as indoor and outdoor temperature, CO2 levels
and humidity. Finally, all gathered sensor data are used to evaluate whether
campaigns have influenced energy consumption and comfort.
</th>
<th>
The UMU Sensor Observation Data can be used by other researchers working in
the field of energy savings in buildings.
</th> </tr>
<tr>
<td>
4
</td>
<td>
UMU-Pleiades
Recommendation
Data
</td>
<td>
**Description:** The UMU Recommendation Data contains the data from the
recommendation engine, the templates and rules definitions, and all the
statistics about them, i.e. how many times a rule has been triggered in a
given campaign and the responsiveness of the participants.
**Purpose:** The UMU Recommendation Data are used to analyse the results of
campaigns and the participants' behaviour during campaigns, and to evaluate
their responsiveness.
</td>
<td>
The UMU Recommendation Data can be used by other researchers working in the
field of behavioural change.
</td> </tr>
<tr>
<td>
5
</td>
<td>
UMU-LaNave
Sensor
Observation Data
</td>
<td>
**Description:** The LaNave Sensor Observation Data contains the data from all
sensors distributed in the different rooms used for the campaigns.
**Purpose:** The LaNave Sensor Observation Data are used as input for the
recommendation engine, producing recommendations when the defined rule
conditions are met. They are also used to inform participants about the
current status of variables such as indoor and outdoor temperature, CO2 levels
and humidity. Finally, all gathered sensor data are used to evaluate whether
campaigns have influenced energy consumption and comfort.
</td>
<td>
The LaNave Sensor Observation Data can be used by other researchers working in
the field of energy savings in buildings.
</td> </tr>
<tr>
<td>
6
</td>
<td>
UMU-LaNave
Recommendation
Data
</td>
<td>
**Description:** The LaNave Recommendation Data contains the data from the
recommendation engine, the templates and rules definitions, and all the
statistics about them, i.e. how many times a rule has been triggered in a
given campaign and the responsiveness of the participants.
**Purpose:** The LaNave Recommendation Data are used to analyse the results of
campaigns and the participants' behaviour during campaigns, and to evaluate
their responsiveness.
</td>
<td>
The LaNave Recommendation Data can be used by other researchers working in the
field of behavioural change.
</td> </tr>
<tr>
<td>
7
</td>
<td>
HESSO Sensor Observation Data
</td>
<td>
**Description:** The HESSO Sensor Observation Data contains the measurement
data collected from various sensor streams
**Purpose:** The HESSO Sensor Observation Data are used to calculate energy
baselines as well as to decide when recommendations would be fired and how
they would be validated.
</td>
<td>
The HESSO Sensor
Observation Data can be used by other researchers working in the field of
Energy in order to test their data analytics algorithms and conduct
literature review
</td> </tr>
<tr>
<td>
8
</td>
<td>
HESSO
Recommendation
Data
</td>
<td>
**Description:** The HESSO Recommendation Data contains the data of
recommendation templates created by campaign managers
**Purpose:** The HESSO Recommendation Data are used to intervene in users’
energy consumption behaviour in order to achieve energy savings.
</td>
<td>
The HESSO Recommendation Data can be used by other researchers working in the
field of Energy in order to have basis for what kind of data should be
represented in terms of behavioural interventions for energy efficiency.
</td> </tr>
<tr>
<td>
9
</td>
<td>
Serious Game Analytics Data
</td>
<td>
**Description:** The serious game analytics data contains relevant data for
player interaction with the game elements (e.g. number of logins, number of
actions read, number of actions completed, time spent doing an action, etc.).
More details are given in D5.4.
**Purpose:** The Serious Game Analytics Data was used for KPI calculation and
as a basis for game modifications in order to improve the KPIs.
</td>
<td>
The Serious Game Analytics data can be used by game designers to indicate how
the different game and gamification elements are used and interacted with, and
whether they motivate the players. The level of difficulty of the questions is
also estimated from the total number of correctly answered questions.
<tr>
<td>
10
</td>
<td>
Personal App data.
</td>
<td>
**Description:** The Personal App data contains all appropriate data sets and
streams required for a player to engage and interact with the mobile app. It
uses credential data, sensor streams and educational content interaction data
(click streams, quizzes taken, tips read, user actions, content interaction
results, user views, dashboard views, educational results, points taken and
leader-board, QR location scans, etc.).
**Purpose:** The purpose is to use these data sets to measure various digital
interaction KPIs, user engagement and user knowledge through the ENTROPY
platform applied on the pilot sites.
</td>
<td>
The Perso App data sets and the relevant KPIs that were created (engagement,
knowledge) can be used by researchers and digital marketers to evaluate,
analyse and optimize various digital marketing techniques, to research new
ways of customer engagement and KPIs to measure it, to evaluate digital
content, and to generate new forms of customer interaction over mobile apps.
<tr>
<td>
11
</td>
<td>
Campaign Users
Interaction Data
(data per campaign)
</td>
<td>
**Description:** The campaign users interaction data comprise the data
collected per campaign regarding the interaction of end users with the
applications, in terms of responsiveness to recommendations and the evolution
of their application profiles.
**Purpose:** Used to evaluate the behavioural change of end users and to adapt
the provided recommendations accordingly.
</td>
<td>
Such data may be used by other researchers, mainly for extended behavioural
analysis and comparison with similar interventions in other buildings. Such
data may be provided upon full anonymization.
</td> </tr> </table>
# 3\. ENTROPY FAIR DATA
## 3.1 Making data findable, including provisions for metadata
The ENTROPY project is related to different pillars, e.g., green energy,
environment, etc. This section presents the open datasets and the provisions
for making the data findable and presents the metadata form adopted in the
ENTROPY project (APPENDIX 1) filled in relation to the datasets. As a general
rule, the sensor observations are aggregated to four values per day; a minimal
aggregation sketch follows below. The recommendations will be opened only
after the user identifiers have been anonymized and all profile information
has been removed.
### 3.1.1 Dataset: POLO Sensor Measurements
<table>
<tr>
<th>
**Polo Tecnologico di Navacchio.ttl**
</th> </tr>
<tr>
<td>
Document version
</td>
<td>
v1
</td> </tr>
<tr>
<td>
Document format
</td>
<td>
.jsonld
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset contains sensor measurements from POLO building of Navacchio
Technology Park Pilot
</td> </tr>
<tr>
<td>
Date
</td>
<td>
2018-11-27
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
sensor, energy, infrastructure
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
This data is sourced from several sensors with different properties
</td> </tr>
<tr>
<td>
**Creator (NTP)**
</td> </tr>
<tr>
<td>
Sector of the provider
</td>
<td>
University
</td> </tr>
<tr>
<td>
Permissions
</td>
<td>
CC-BY-SA 4.0
</td> </tr>
<tr>
<td>
**Name of the Partner (NTP)**
</td> </tr>
<tr>
<td>
Responsible person
</td>
<td>
Giulia Gori (Campaign Manager)
</td> </tr>
<tr>
<td>
Pilot
</td>
<td>
NTP – POLO
</td> </tr>
<tr>
<td>
Scenario of data usage
</td>
<td>
Data collected through several sensor data streams
</td> </tr>
<tr>
<td>
**Description of the Data Source**
</td> </tr>
<tr>
<td>
File format
</td>
<td>
JSON-LD (.jsonld)
</td> </tr>
<tr>
<td>
File name/path
</td>
<td>
POLO-sensor.jsonld
</td> </tr>
<tr>
<td>
Storage location
</td>
<td>
https://entropy-opendata.inf.um.es
</td> </tr>
<tr>
<td>
Data type
</td>
<td>
JSON-LD, .jsonld
</td> </tr>
<tr>
<td>
Standard
</td>
<td>
RDF
</td> </tr>
<tr>
<td>
Data size
</td>
<td>
</td> </tr>
<tr>
<td>
Time references of data
</td>
<td>
2018/02/28
</td>
<td>
2018/11/16
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
2018/02/28
</td>
<td>
2018/11/16
</td> </tr>
<tr>
<td>
Data collection frequency
</td>
<td>
Aggregated to 4 times a day
</td> </tr>
<tr>
<td>
Data quality
</td>
<td>
Complete, available, right collection frequency
</td> </tr>
<tr>
<td>
**Observation Values**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
isObservedBy
</td>
<td>
Observed by Sensor
</td>
<td>
OCBSensor
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isProducedBy
</td>
<td>
Produced by Sensor Data Stream
</td>
<td>
SensorDataStream
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
inDateTime
</td>
<td>
Observation Date
</td>
<td>
Date
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasValue
</td>
<td>
Observation Value
</td>
<td>
Double
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isUsedFor
</td>
<td>
Property the Observation Value is used for
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isMeasuredIn
</td>
<td>
Unit of measure
</td>
<td>
UnitOfMeasure
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasSampleFrequency
</td>
<td>
Sampling Frequency
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**OCBSensor:**
</td> </tr>
<tr>
<td>
Variables
</td>
<td>
Name
</td>
<td>
Type
</td>
<td>
Mandatory
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Sensor URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
deviceCategory
</td>
<td>
Sensor Type
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
attributes
</td>
<td>
List of sensor attributes
</td>
<td>
List<String>
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isLocatedIn
</td>
<td>
Building Space URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**SensorDataStream:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
comesFrom
</td>
<td>
Sensor
</td>
<td>
OCBSensor
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasMonitoringType
</td>
<td>
Monitoring Type
</td>
<td>
String
</td>
<td>
Yes
</td> </tr> </table>
### 3.1.2 Dataset: POLO Recommendation Data
<table>
<tr>
<th>
**Polo Tecnologico di Navacchio.ttl**
</th> </tr>
<tr>
<td>
Document version
</td>
<td>
v1
</td> </tr>
<tr>
<td>
Document format
</td>
<td>
.jsonld
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset contains recommendation sent to users from POLO building of
Navacchio Technology Park Pilot
</td> </tr>
<tr>
<td>
Date
</td>
<td>
2018-11-27
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
sensor, energy, infrastructure
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
This data is generated by campaign managers and sent to users by rule firings
</td> </tr>
<tr>
<td>
**Creator (NTP)**
</td> </tr>
<tr>
<td>
Sector of the provider
</td>
<td>
University
</td> </tr>
<tr>
<td>
Permissions
</td>
<td>
CC-BY-SA 4.0
</td> </tr>
<tr>
<td>
**Name of the Partner (NTP)**
</td> </tr>
<tr>
<td>
Responsible person
</td>
<td>
Giulia Gori (Campaign Manager)
</td> </tr>
<tr>
<td>
Pilot
</td>
<td>
NTP – POLO
</td> </tr>
<tr>
<td>
Scenario of data usage
</td>
<td>
Data created by rule firings based on collected sensor data
</td> </tr>
<tr>
<td>
**Description of the Data Source**
</td> </tr>
<tr>
<td>
File format
</td>
<td>
JSON-LD (.jsonld)
</td> </tr>
<tr>
<td>
File name/path
</td>
<td>
POLO-recommendation.jsonld
</td> </tr>
<tr>
<td>
Storage location
</td>
<td>
https://entropy-opendata.inf.um.es
</td> </tr>
<tr>
<td>
Data type
</td>
<td>
JSON-LD, .jsonld
</td> </tr>
<tr>
<td>
Standard
</td>
<td>
RDF
</td> </tr>
<tr>
<td>
Data size
</td>
<td>
</td> </tr>
<tr>
<td>
Time references of data
</td>
<td>
2018/02/28
</td>
<td>
2018/11/16
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
2018/02/28
</td>
<td>
2018/11/16
</td> </tr>
<tr>
<td>
Data collection frequency
</td>
<td>
Per rule firing, per campaign
</td> </tr>
<tr>
<td>
Data quality
</td>
<td>
Complete, available, right collection frequency
</td> </tr>
<tr>
<td>
**Observation Values:**
</td> </tr>
<tr>
<td>
**Recommendation:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
recommendationRule
</td>
<td>
Rule URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Feedback
</td>
<td>
User’s feedback: positive, negative or null
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
inDateTime
</td>
<td>
Recommendation sending time
</td>
<td>
Datetime
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
triggeringAttributes
</td>
<td>
List of attributes involved in the rule
</td>
<td>
List<String>
</td>
<td>
No
</td> </tr>
<tr>
<td>
**RecommendationRule:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation Rule URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
conditionRule
</td>
<td>
Rule triggering condition
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
userRule
</td>
<td>
User selection rule
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
areaRule
</td>
<td>
The condition that selects the area
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isValidatedBy
</td>
<td>
Action that validates the recommendation
</td>
<td>
URI
</td>
<td>
No
</td> </tr>
<tr>
<td>
recommendationTemplate
</td>
<td>
The template the rule is based on
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**RecommendationTemplate:**
</td>
<td>
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation Template URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Name
</td>
<td>
Name of the recommendation template
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Descriptive Content of the Recommendation
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
difficultyLevel
</td>
<td>
Level of difficulty: Low, Medium or High
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
@type
</td>
<td>
Type of recommendation from the behavioural ontology
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
showOnCompletion
</td>
<td>
The message shown after completion
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**ActionValidation:**
</td>
<td>
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
duration
</td>
<td>
Maximum duration before validation in seconds
</td>
<td>
Integer
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
conditionRules
</td>
<td>
The condition that validates whether the recommended activity has been carried out
</td>
<td>
String
</td>
<td>
Yes
</td> </tr> </table>
### 3.1.3 Dataset: UMU-Pleiades Sensor Measurements
<table>
<tr>
<th>
**UmuPleiadesFinalSensorMeasurements.ttl**
</th> </tr>
<tr>
<td>
Document version
</td>
<td>
v1
</td> </tr>
<tr>
<td>
Document format
</td>
<td>
.jsonld
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset contains sensor measurements from Pleiades building of UMU Pilot
</td> </tr>
<tr>
<td>
Date
</td>
<td>
2018-11-27
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
sensor, energy, infrastructure
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
This data is sourced from several sensors with different properties
</td> </tr>
<tr>
<td>
**Creator (UMU)**
</td> </tr>
<tr>
<td>
Sector of the provider
</td>
<td>
University
</td> </tr>
<tr>
<td>
Permissions
</td>
<td>
CC-BY-SA 4.0
</td> </tr>
<tr>
<td>
**Name of the Partner (UMU)**
</td> </tr>
<tr>
<td>
Responsible person
</td>
<td>
Pedro J. Fernandez (Campaign manager of UMU and La Nave)
</td> </tr>
<tr>
<td>
Pilot
</td>
<td>
UMU – Pleiades
</td> </tr>
<tr>
<td>
Scenario of data usage
</td>
<td>
Data collected through several sensor data streams
</td> </tr>
<tr>
<td>
**Description of the Data Source**
</td> </tr>
<tr>
<td>
File format
</td>
<td>
JSON-LD (.jsonld)
</td> </tr>
<tr>
<td>
File name/path
</td>
<td>
Pleiades-sensor.jsonld
</td> </tr>
<tr>
<td>
Storage location
</td>
<td>
https://entropy-opendata.inf.um.es
</td> </tr>
<tr>
<td>
Data type
</td>
<td>
JSON-LD, .jsonld
</td> </tr>
<tr>
<td>
Standard
</td>
<td>
RDF
</td> </tr>
<tr>
<td>
Data size
</td>
<td>
</td> </tr>
<tr>
<td>
Time references of data
</td>
<td>
2017/04/12
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
2017/04/12
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Data collection frequency
</td>
<td>
Aggregated to 4 times a day
</td> </tr>
<tr>
<td>
Data quality
</td>
<td>
Complete, available, right collection frequency
</td> </tr>
<tr>
<td>
**ObservationValue:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
isObservedBy
</td>
<td>
Observed by Sensor
</td>
<td>
OCBSensor
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isProducedBy
</td>
<td>
Produced by Sensor Data Stream
</td>
<td>
SensorDataStream
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
inDateTime
</td>
<td>
Observation Date
</td>
<td>
Date
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasValue
</td>
<td>
Observation Value
</td>
<td>
Double
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isUsedFor
</td>
<td>
Property the Observation Value is used for
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isMeasuredIn
</td>
<td>
Unit of measure
</td>
<td>
UnitOfMeasure
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasSampleFrequency
</td>
<td>
Sampling Frequency
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**OCBSensor:**
</td> </tr>
<tr>
<td>
Variables
</td>
<td>
Name
</td>
<td>
Type
</td>
<td>
Mandatory
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Sensor URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
deviceCategory
</td>
<td>
Sensor Type
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
attributes
</td>
<td>
List of sensor attributes
</td>
<td>
List<String>
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isLocatedIn
</td>
<td>
Building Space URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**SensorDataStream:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
comesFrom
</td>
<td>
Sensor
</td>
<td>
OCBSensor
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasMonitoringType
</td>
<td>
Monitoring Type
</td>
<td>
String
</td>
<td>
Yes
</td> </tr> </table>
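To make the schema above concrete, the following Python sketch builds one hypothetical ObservationValue record following the variable tables; all URIs and values are invented for illustration and are not taken from the published dataset:

```python
import json

# Hypothetical ObservationValue record following the variable tables above.
# All URIs and values are illustrative, not taken from the published dataset.
observation = {
    "isObservedBy": "http://example.org/entropy/sensor/pleiades-co2-01",  # OCBSensor
    "isProducedBy": "http://example.org/entropy/stream/co2-stream-01",    # SensorDataStream
    "inDateTime": "2018-11-27T06:00:00Z",                                 # Observation Date
    "hasValue": 412.5,                                                    # Double
    "isUsedFor": "http://example.org/entropy/property/co2",               # URI of the property
    "isMeasuredIn": "http://example.org/entropy/unit/ppm",                # UnitOfMeasure
    "hasSampleFrequency": "4/day",                                        # String
}

print(json.dumps(observation, indent=2))
```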
### 3.1.4 Dataset: UMU-Pleiades Recommendation Data
<table>
<tr>
<th>
**UmuPleiadesFinalRecommendations.ttl**
</th> </tr>
<tr>
<td>
Document version
</td>
<td>
v1
</td> </tr>
<tr>
<td>
Document format
</td>
<td>
.jsonld
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset contains recommendations sent to users from the Pleiades building of the UMU pilot
</td> </tr>
<tr>
<td>
Date
</td>
<td>
2018-11-27
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
sensor, energy, infrastructure
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
This data is generated by campaign managers and sent to users by rule firings
</td> </tr>
<tr>
<td>
**Creator (UMU)**
</td> </tr>
<tr>
<td>
Sector of the provider
</td>
<td>
University
</td> </tr>
<tr>
<td>
Permissions
</td>
<td>
CC-BY-SA 4.0
</td> </tr>
<tr>
<td>
**Name of the Partner (UMU)**
</td> </tr>
<tr>
<td>
Responsible person
</td>
<td>
Pedro J. Fernandez (Campaign manager)
</td> </tr>
<tr>
<td>
Pilot
</td>
<td>
UMU – Pleiades
</td> </tr>
<tr>
<td>
Scenario of data usage
</td>
<td>
Data created by rule firings based on collected sensor data
</td> </tr>
<tr>
<td>
**Description of the Data Source**
</td> </tr>
<tr>
<td>
File format
</td>
<td>
JSON-LD (.jsonld)
</td> </tr>
<tr>
<td>
File name/path
</td>
<td>
Pleiades-recommendation.jsonld
</td> </tr>
<tr>
<td>
Storage location
</td>
<td>
_https://entropy-opendata.inf.um.es_
</td> </tr>
<tr>
<td>
Data type
</td>
<td>
JSON-LD, .jsonld
</td> </tr>
<tr>
<td>
Standard
</td>
<td>
RDF
</td> </tr>
<tr>
<td>
Data size
</td>
<td>
</td> </tr>
<tr>
<td>
Time references of data
</td>
<td>
2017/04/12
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
2017/04/12
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Data collection frequency
</td>
<td>
Per rule firing, per campaign
</td> </tr>
<tr>
<td>
Data quality
</td>
<td>
Complete, available, right collection frequency
</td> </tr>
<tr>
<td>
**Observation Values:**
</td> </tr>
<tr>
<td>
**Recommendation:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
recommendationRule
</td>
<td>
Rule URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Feedback
</td>
<td>
User’s feedback: positive, negative, or null
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
inDateTime
</td>
<td>
Recommendation sending time
</td>
<td>
Datetime
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
triggeringAttributes
</td>
<td>
List of attributes involved in the rule
</td>
<td>
List<String>
</td>
<td>
No
</td> </tr>
<tr>
<td>
**RecommendationRule:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation Rule URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
conditionRule
</td>
<td>
Rule triggering condition
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
userRule
</td>
<td>
User selection rule
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
areaRule
</td>
<td>
The condition that selects the area
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isValidatedBy
</td>
<td>
Action that validates the recommendation
</td>
<td>
URI
</td>
<td>
No
</td> </tr>
<tr>
<td>
recommendationTemplate
</td>
<td>
The template the rule is based on
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**RecommendationTemplate:**
</td>
<td>
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation Template URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Name
</td>
<td>
Name of the recommendation template
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Descriptive Content of the Recommendation
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
difficultyLevel
</td>
<td>
Level of difficulty: Low, Medium, or High
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
@type
</td>
<td>
Type of recommendation from the behavioural ontology
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
showOnCompletion
</td>
<td>
The message shown after completion
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**ActionValidation:**
</td>
<td>
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
duration
</td>
<td>
Maximum duration before validation in seconds
</td>
<td>
Integer
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
conditionRules
</td>
<td>
The condition that validates whether the recommended activity has been carried out
</td>
<td>
String
</td>
<td>
Yes
</td> </tr> </table>
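Analogously, a hypothetical Recommendation record and its associated rule, following the tables above, might be sketched as follows; the identifiers and condition expressions are invented for illustration only:

```python
import json

# Hypothetical Recommendation record and its rule, following the tables above.
# Identifiers and condition expressions are invented for illustration only.
recommendation = {
    "@id": "http://example.org/entropy/recommendation/7",
    "recommendationRule": "http://example.org/entropy/rule/3",
    "Feedback": "positive",                               # positive, negative or null
    "inDateTime": "2018-11-20T09:15:00Z",                 # sending time
    "triggeringAttributes": ["temperature", "presence"],  # optional
}

rule = {
    "@id": "http://example.org/entropy/rule/3",
    "conditionRule": "temperature > 24",                  # triggering condition
    "userRule": "role == 'employee'",                     # user selection rule
    "areaRule": "buildingSpace == 'office-2.13'",         # area selection
    "recommendationTemplate": "http://example.org/entropy/template/1",
}

print(json.dumps({"recommendation": recommendation, "rule": rule}, indent=2))
```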
### 3.1.5 Dataset: HESSO Sensor Measurement
<table>
<tr>
<th>
**HessoFinalSensorMeasurements.ttl**
</th> </tr>
<tr>
<td>
Document version
</td>
<td>
v1
</td> </tr>
<tr>
<td>
Document format
</td>
<td>
.jsonld
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset contains sensor measurements from the HES-SO building of the Technopole Sierre pilot
</td> </tr>
<tr>
<td>
Date
</td>
<td>
2018-11-27
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
sensor, energy, infrastructure
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
This data is sourced from several sensors with different properties
</td> </tr>
<tr>
<td>
**Creator (HES-SO)**
</td> </tr>
<tr>
<td>
Sector of the provider
</td>
<td>
University
</td> </tr>
<tr>
<td>
Permissions
</td>
<td>
CC-BY-SA 4.0
</td> </tr>
<tr>
<td>
**Name of the Partner (HES-SO)**
</td> </tr>
<tr>
<td>
Responsible person
</td>
<td>
Vincent Schülé (Campaign Manager)
</td> </tr>
<tr>
<td>
Pilot
</td>
<td>
HES-SO – Technopole Sierre
</td> </tr>
<tr>
<td>
Scenario of data usage
</td>
<td>
Data collected through several sensor data streams
</td> </tr>
<tr>
<td>
**Description of the Data Source**
</td> </tr>
<tr>
<td>
File format
</td>
<td>
JSON-LD (.jsonld)
</td> </tr>
<tr>
<td>
File name/path
</td>
<td>
Hesso-sensor.jsonld
</td> </tr>
<tr>
<td>
Storage location
</td>
<td>
_https://entropy-opendata.inf.um.es_
</td> </tr>
<tr>
<td>
Data type
</td>
<td>
JSON-LD, .jsonld
</td> </tr>
<tr>
<td>
Standard
</td>
<td>
RDF
</td> </tr>
<tr>
<td>
Data size
</td>
<td>
</td> </tr>
<tr>
<td>
Time references of data
</td>
<td>
2017/04/12
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
2017/04/12
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Data collection frequency
</td>
<td>
Aggregated to 4 times a day
</td> </tr>
<tr>
<td>
Data quality
</td>
<td>
Complete, available, right collection frequency
</td> </tr>
<tr>
<td>
**ObservationValue:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
isObservedBy
</td>
<td>
Observed by Sensor
</td>
<td>
OCBSensor
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isProducedBy
</td>
<td>
Produced by Sensor Data Stream
</td>
<td>
SensorDataStream
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
inDateTime
</td>
<td>
Observation Date
</td>
<td>
Date
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasValue
</td>
<td>
Observation Value
</td>
<td>
Double
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isUsedFor
</td>
<td>
Property the Observation Value used for
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isMeasuredIn
</td>
<td>
Unit of measure
</td>
<td>
UnitOfMeasure
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasSampleFrequency
</td>
<td>
Sampling Frequency
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**OCBSensor:**
</td> </tr>
<tr>
<td>
Variables
</td>
<td>
Name
</td>
<td>
Type
</td>
<td>
Mandatory
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Sensor URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
deviceCategory
</td>
<td>
Sensor Type
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
attributes
</td>
<td>
List of sensor attributes
</td>
<td>
List<String>
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isLocatedIn
</td>
<td>
Building Space URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**SensorDataStream:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
comesFrom
</td>
<td>
Sensor
</td>
<td>
OCBSensor
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasMonitoringType
</td>
<td>
Monitoring Type
</td>
<td>
String
</td>
<td>
Yes
</td> </tr> </table>
### 3.1.6 Dataset: HESSO Recommendation Data
<table>
<tr>
<th>
**HESSOFinalRecommendations.ttl**
</th> </tr>
<tr>
<td>
Document version
</td>
<td>
v1
</td> </tr>
<tr>
<td>
Document format
</td>
<td>
.jsonld
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset contains recommendations sent to users from the HES-SO building of the Technopole Sierre pilot
</td> </tr>
<tr>
<td>
Date
</td>
<td>
2018-11-27
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
sensor, energy, infrastructure
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
This data is generated by campaign managers and sent to users by rule firings
</td> </tr>
<tr>
<td>
**Creator (HES-SO)**
</td> </tr>
<tr>
<td>
Sector of the provider
</td>
<td>
University
</td> </tr>
<tr>
<td>
Permissions
</td>
<td>
CC-BY-SA 4.0
</td> </tr>
<tr>
<td>
**Name of the Partner (HES-SO)**
</td> </tr>
<tr>
<td>
Responsible person
</td>
<td>
Vincent Schülé (Campaign Manager)
</td> </tr>
<tr>
<td>
Pilot
</td>
<td>
HES-SO – Technopole Sierre
</td> </tr>
<tr>
<td>
Scenario of data usage
</td>
<td>
Data created by rule firings based on collected sensor data
</td> </tr>
<tr>
<td>
**Description of the Data Source**
</td> </tr>
<tr>
<td>
File format
</td>
<td>
JSON-LD (.jsonld)
</td> </tr>
<tr>
<td>
File name/path
</td>
<td>
Hesso-recommendation.jsonld
</td> </tr>
<tr>
<td>
Storage location
</td>
<td>
_https://entropy-opendata.inf.um.es_
</td> </tr>
<tr>
<td>
Data type
</td>
<td>
JSON-LD, .jsonld
</td> </tr>
<tr>
<td>
Standard
</td>
<td>
RDF
</td> </tr>
<tr>
<td>
Data size
</td>
<td>
</td> </tr>
<tr>
<td>
Time references of data
</td>
<td>
2017/04/12
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
2017/04/12
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Data collection frequency
</td>
<td>
Per rule firing, per campaign
</td> </tr>
<tr>
<td>
Data quality
</td>
<td>
Complete, available, right collection frequency
</td> </tr>
<tr>
<td>
**Observation Values**
</td> </tr>
<tr>
<td>
**Recommendation:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
recommendationRule
</td>
<td>
Rule URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Feedback
</td>
<td>
User’s feedback: positive, negative, or null
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
inDateTime
</td>
<td>
Recommendation sending time
</td>
<td>
Datetime
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
triggeringAttributes
</td>
<td>
List of attributes involved in the rule
</td>
<td>
List<String>
</td>
<td>
No
</td> </tr>
<tr>
<td>
**RecommendationRule:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation Rule URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
conditionRule
</td>
<td>
Rule triggering condition
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
userRule
</td>
<td>
User selection rule
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
areaRule
</td>
<td>
The condition that selects the area
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isValidatedBy
</td>
<td>
Action that validates the recommendation
</td>
<td>
URI
</td>
<td>
No
</td> </tr> </table>
<table>
<tr>
<th>
recommendationTemplate
</th>
<th>
The template the rule is based on
</th>
<th>
URI
</th>
<th>
Yes
</th> </tr>
<tr>
<td>
**RecommendationTemplate:**
</td>
<td>
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation Template URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Name
</td>
<td>
Name of the recommendation template
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Descriptive Content of the Recommendation
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
difficultyLevel
</td>
<td>
Level of difficulty: Low, Medium, or High
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
@type
</td>
<td>
Type of recommendation from the behavioural ontology
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
showOnCompletion
</td>
<td>
The message shown after completion
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**ActionValidation:**
</td>
<td>
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
duration
</td>
<td>
Maximum duration before validation in seconds
</td>
<td>
Integer
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
conditionRules
</td>
<td>
The condition that validates whether the recommended activity has been carried out
</td>
<td>
String
</td>
<td>
Yes
</td> </tr> </table>
### 3.1.7 Dataset: Lanave Sensor Measurements
<table>
<tr>
<th>
**LanaveFinalSensorMeasurements.ttl**
</th> </tr>
<tr>
<td>
Document version
</td>
<td>
v1
</td> </tr>
<tr>
<td>
Document format
</td>
<td>
.jsonld
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset contains sensor measurements from the La Nave building of the UMU pilot
</td> </tr>
<tr>
<td>
Date
</td>
<td>
2018-11-27
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
sensor, energy, infrastructure
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
This data is sourced from several sensors with different properties
</td> </tr>
<tr>
<td>
**Creator (UMU)**
</td> </tr>
<tr>
<td>
Sector of the provider
</td>
<td>
University
</td> </tr>
<tr>
<td>
Permissions
</td>
<td>
CC-BY-SA 4.0
</td> </tr>
<tr>
<td>
**Name of the Partner (UMU)**
</td> </tr>
<tr>
<td>
Responsible person
</td>
<td>
Pedro J. Fernandez (Campaign manager)
</td> </tr>
<tr>
<td>
Pilot
</td>
<td>
UMU – La Nave
</td> </tr>
<tr>
<td>
Scenario of data usage
</td>
<td>
Data collected through several sensor data streams
</td> </tr>
<tr>
<td>
**Description of the Data Source**
</td> </tr>
<tr>
<td>
File format
</td>
<td>
JSON-LD (.jsonld)
</td> </tr>
<tr>
<td>
File name/path
</td>
<td>
Lanave-sensor.jsonld
</td> </tr>
<tr>
<td>
Storage location
</td>
<td>
_https://entropy-opendata.inf.um.es_
</td> </tr>
<tr>
<td>
Data type
</td>
<td>
JSON-LD, .jsonld
</td> </tr>
<tr>
<td>
Standard
</td>
<td>
RDF
</td> </tr>
<tr>
<td>
Data size
</td>
<td>
</td> </tr>
<tr>
<td>
Time references of data
</td>
<td>
2018/10/08
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
2018/10/08
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Data collection frequency
</td>
<td>
Aggregated to 4 times a day
</td> </tr>
<tr>
<td>
Data quality
</td>
<td>
Complete, available, right collection frequency
</td> </tr>
<tr>
<td>
**ObservationValues:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
isObservedBy
</td>
<td>
Observed by Sensor
</td>
<td>
OCBSensor
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isProducedBy
</td>
<td>
Produced by Sensor Data Stream
</td>
<td>
SensorDataStream
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
inDateTime
</td>
<td>
Observation Date
</td>
<td>
Date
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasValue
</td>
<td>
Observation Value
</td>
<td>
Double
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isUsedFor
</td>
<td>
Property the Observation Value used for
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isMeasuredIn
</td>
<td>
Unit of measure
</td>
<td>
UnitOfMeasure
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasSampleFrequency
</td>
<td>
Sampling Frequency
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**OCBSensor:**
</td> </tr>
<tr>
<td>
Variables
</td>
<td>
Name
</td>
<td>
Type
</td>
<td>
Mandatory
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Sensor URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
deviceCategory
</td>
<td>
Sensor Type
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
attributes
</td>
<td>
List of sensor attributes
</td>
<td>
List<String>
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isLocatedIn
</td>
<td>
Building Space URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**SensorDataStream:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
comesFrom
</td>
<td>
Sensor
</td>
<td>
OCBSensor
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
hasMonitoringType
</td>
<td>
Monitoring Type
</td>
<td>
String
</td>
<td>
Yes
</td> </tr> </table>
### 3.1.8 Dataset: Lanave Recommendation Data
<table>
<tr>
<th>
**LanaveFinalRecommendations.ttl**
</th> </tr>
<tr>
<td>
Document version
</td>
<td>
v1
</td> </tr>
<tr>
<td>
Document format
</td>
<td>
.jsonld
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset contains recommendations sent to users from the La Nave building of the UMU pilot
</td> </tr>
<tr>
<td>
Date
</td>
<td>
2018-11-27
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
sensor, energy, infrastructure
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
This data is generated by campaign managers and sent to users by rule firings
</td> </tr>
<tr>
<td>
**Creator (UMU)**
</td> </tr>
<tr>
<td>
Sector of the provider
</td>
<td>
University
</td> </tr>
<tr>
<td>
Permissions
</td>
<td>
CC-BY-SA 4.0
</td> </tr>
<tr>
<td>
**Name of the Partner (UMU)**
</td> </tr>
<tr>
<td>
Responsible person
</td>
<td>
Pedro J. Fernandez (Campaign manager)
</td> </tr>
<tr>
<td>
Pilot
</td>
<td>
UMU – La Nave
</td> </tr>
<tr>
<td>
Scenario of data usage
</td>
<td>
Data created by rule firings based on collected sensor data
</td> </tr>
<tr>
<td>
**Description of the Data Source**
</td> </tr>
<tr>
<td>
File format
</td>
<td>
JSON-LD (.jsonld)
</td> </tr>
<tr>
<td>
File name/path
</td>
<td>
Lanave-recommendation.jsonld
</td> </tr>
<tr>
<td>
Storage location
</td>
<td>
_https://entropy-opendata.inf.um.es_
</td> </tr>
<tr>
<td>
Data type
</td>
<td>
JSON-LD, .jsonld
</td> </tr>
<tr>
<td>
Standard
</td>
<td>
RDF
</td> </tr>
<tr>
<td>
Data size
</td>
<td>
</td> </tr>
<tr>
<td>
Time references of data
</td>
<td>
2018/10/08
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
2018/10/08
</td>
<td>
2018/11/27
</td> </tr>
<tr>
<td>
Data collection frequency
</td>
<td>
Per rule firing, per campaign
</td> </tr>
<tr>
<td>
Data quality
</td>
<td>
Complete, available, right collection frequency
</td> </tr>
<tr>
<td>
**Observation Values**
</td> </tr>
<tr>
<td>
**Recommendation:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
recommendationRule
</td>
<td>
Rule URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Feedback
</td>
<td>
User’s feedback: positive, negative, or null
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
inDateTime
</td>
<td>
Recommendation sending time
</td>
<td>
Datetime
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
triggeringAttributes
</td>
<td>
List of attributes involved in the rule
</td>
<td>
List<String>
</td>
<td>
No
</td> </tr>
<tr>
<td>
**RecommendationRule:**
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation Rule URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
conditionRule
</td>
<td>
Rule triggering condition
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
userRule
</td>
<td>
User selection rule
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
areaRule
</td>
<td>
The condition that selects the area
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
isValidatedBy
</td>
<td>
Action that validates the recommendation
</td>
<td>
URI
</td>
<td>
No
</td> </tr> </table>
<table>
<tr>
<th>
recommendationTemplate
</th>
<th>
The template the rule is based on
</th>
<th>
URI
</th>
<th>
Yes
</th> </tr>
<tr>
<td>
**RecommendationTemplate:**
</td>
<td>
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
@id
</td>
<td>
Recommendation Template URI
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Name
</td>
<td>
Name of the recommendation template
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Descriptive Content of the Recommendation
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
difficultyLevel
</td>
<td>
Level of difficulty: Low, Medium, or High
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
@type
</td>
<td>
Type of recommendation from the behavioural ontology
</td>
<td>
URI
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
showOnCompletion
</td>
<td>
The message shown after completion
</td>
<td>
String
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**ActionValidation:**
</td>
<td>
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
duration
</td>
<td>
Maximum duration before validation in seconds
</td>
<td>
Integer
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
conditionRules
</td>
<td>
The condition that validates whether the recommended activity has been carried out
</td>
<td>
String
</td>
<td>
Yes
</td> </tr> </table>
### 3.1.9 Datasets: Perso App Dataset and Serious Game Dataset
The data utilized within the context of both the PersoApp and the TH Serious Game will not be open to the public, in order to ensure GDPR compliance regarding personal user-interaction data. In general, the data collected by the TH Serious Game and the PersoApp, which are based on user interaction with the gamified apps, enable data analytics and the assessment of KPIs, providing insight into how to improve the apps and increase user interaction in terms of promoting energy-efficient behaviour.
## 3.2 Making data openly accessible
The following table presents which of the datasets produced and used in the ENTROPY project will be made openly available. It also explains why several datasets cannot be shared.
<table>
<tr>
<th>
#
</th>
<th>
Dataset
</th>
<th>
Data Openly
Available (Y/N)
</th>
<th>
Justification
</th> </tr>
<tr>
<td>
1
</td>
<td>
POLO Sensor Observation Data
</td>
<td>
Y
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
2
</td>
<td>
POLO Recommendation Data
</td>
<td>
Y
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
3
</td>
<td>
UMU-Pleiades Sensor Observation Data
</td>
<td>
Y
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
4
</td>
<td>
UMU-Pleiades Recommendation Data
</td>
<td>
Y
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
5
</td>
<td>
UMU-LaNave Sensor Observation Data
</td>
<td>
Y
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
6
</td>
<td>
UMU-LaNave Recommendation Data
</td>
<td>
Y
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
7
</td>
<td>
HESSO Sensor Observation Data
</td>
<td>
Y
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
8
</td>
<td>
HESSO Recommendation Data
</td>
<td>
Y
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
9
</td>
<td>
Perso App Data
</td>
<td>
N
</td>
<td>
The Perso App data will not be open to the public, in order to ensure GDPR compliance regarding personal user-interaction data
</td> </tr> </table>
<table>
<tr>
<th>
10
</th>
<th>
TH serious game data
</th>
<th>
N
</th>
<th>
The Serious Game data will not be open to the public, in order to ensure GDPR compliance regarding personal user-interaction data
</th> </tr> </table>
During the course of the ENTROPY project, all original raw data files and the respective processing programs were versioned over time and maintained in a date-stamped file structure. Access to the datasets was given only upon request to the responsible person, and only during the design phases of the project. These datasets were automatically backed up on a nightly and monthly basis. Respectively, the data generated by the system during the pilots of the project were stored in the database of the ENTROPY platform, whose DB schema reflected the aforementioned pilot parameters. Back-ups of the DB were performed and stored on a monthly basis.
The ENTROPY project consortium is committed to making the high-quality final data generated by ENTROPY available for use by the research community, as well as by industry peers. Through this research, ENTROPY identified appropriate platform solutions that allow the sustainable archiving of all the ENTROPY datasets beyond the life span of the project. The ENTROPY project Open Data are hosted at _https://entropy-opendata.inf.um.es_ on a CKAN installation in JSON-LD format. The data will be available in the repository as separate datasets, plus a dataset for the metadata.
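Since CKAN exposes its standard Action API by default, the published datasets can be retrieved programmatically. The following Python sketch assumes the default CKAN Action API paths are enabled on the ENTROPY installation:

```python
import requests

# List the datasets published on the ENTROPY CKAN instance and fetch the
# metadata of the first one. Assumes the standard CKAN Action API paths
# (api/3/action/...), which are enabled by default on CKAN installations.
BASE = "https://entropy-opendata.inf.um.es"

resp = requests.get(f"{BASE}/api/3/action/package_list", timeout=30)
resp.raise_for_status()
dataset_names = resp.json()["result"]
print(dataset_names)

if dataset_names:
    meta = requests.get(
        f"{BASE}/api/3/action/package_show",
        params={"id": dataset_names[0]},
        timeout=30,
    ).json()["result"]
    print(meta["title"], [r["url"] for r in meta.get("resources", [])])
```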
## 3.3 Making data interoperable
From the beginning of the project, the ENTROPY platform aimed to make data interoperable. For this reason, we developed two ontologies, mainly by reusing existing and well-known ontologies. The detailed documentation of these ontologies is available on the ENTROPY project website. Additionally, the metadata regarding the datasets will be published annotated with the DCAT Vocabulary 1 .
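As an illustration, the sketch below shows what a DCAT-annotated metadata record for one of the datasets might look like, expressed as a JSON-LD structure built in Python. The exact property selection is an assumption for illustration; the published metadata may use a different profile:

```python
import json

# Minimal sketch of a DCAT-annotated metadata record for one dataset,
# mirroring the fields of the metadata template in Appendix 1. The exact
# property selection is illustrative; the published metadata may differ.
dcat_record = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "UMU-Pleiades Sensor Measurements",
    "dct:description": "Sensor measurements from the Pleiades building of the UMU pilot",
    "dcat:keyword": ["sensor", "energy", "infrastructure"],
    "dct:issued": "2018-11-27",
    "dct:license": "https://creativecommons.org/licenses/by-sa/4.0/",
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:mediaType": "application/ld+json",
        "dcat:downloadURL": "https://entropy-opendata.inf.um.es",  # repository root
    },
}

print(json.dumps(dcat_record, indent=2))
```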
## 3.4 Increase data re-use (through clarifying licenses)
The Open Data of ENTROPY are licensed under CC-BY-SA 4.0 ( _https://creativecommons.org/licenses/by-sa/4.0/_ ), and the data are intended to remain reusable for as long as the entropy-opendata.inf.um.es repository remains online.
## 4\. ALLOCATION OF RESOURCES
The ENTROPY consortium utilizes a CKAN repository installation at _entropy-opendata.inf.um.es_ , a content management system dedicated to storing and providing open data in a unified way. The datasets declared as open in the previous sections will remain available online for several years, ensuring that other researchers have the chance to work with this useful data. Additionally, the reports and deliverables are published on the ENTROPY website. The handling of the CKAN repository on behalf of the ENTROPY project, as well as all data management issues related to the project, falls under the responsibility of the project coordinator. As for publications, the ENTROPY consortium has extensively published in scientific journals that allow open access, with the costs related to open access claimed as part of the Horizon 2020 grant.
## 5\. DATA SECURITY AND ETHICAL ASPECTS
In terms of data security, measures were undertaken in the course of the ENTROPY project to ensure detailed data protection through the Data Protection Procedures that were developed and followed, which also take into consideration all ethical aspects of the ENTROPY-related data, as presented below.
### 5.1 Data Protection Procedures of ENTROPY
In order to be compliant with the European Union’s Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data, this chapter defines the policy to protect and pseudonymise the personal data collected from the participants taking part in the different pilot cases of the ENTROPY project. A short description of the overall data management processes is also provided, focusing on the end users’ personal data.
#### 5.1.1 Data Management and Protection Workflow
In order to achieve a robust data protection plan in the context of ENTROPY, a workflow comprising a set of steps has been designed, as depicted in Figure 1. This data-handling workflow covers the generation of data from the initial recruitment of the participants to the final operation of the ENTROPY platform in the pilots. During the lifetime of the ENTROPY project, this process was followed both for the initial recruitment of participants prior to the implementation of the platform and for the recruitment of participants based on the usage of the ENTROPY platform. Figure 1 shows the general workflow of this procedure, which is common to both cases (the only difference concerns whether the online questionnaires are completed via the ENTROPY platform or not).
The first step concerns the registration process, where an end user was able to create an account via the ENTROPY platform and agree to the overall platform usage terms and conditions. Next, the end user had to fill in an online questionnaire with a set of questions that aimed to build his/her personal profile. Upon completion of the questionnaire, an analysis took place categorizing the end user into specific profile categories. Upon completion of the analysis, personal profile data was encrypted and stored in the ENTROPY repository, while the remaining data was pseudonymised and also stored in the ENTROPY repository for further analysis purposes. Encrypted user profile data was made available, and could be updated, through secure communication channels via the ENTROPY mobile and web applications. Further data stored and made available in the repository (which did not include any personal data) concern the data collected from the set of activated sensor data streams, as well as crowd-sensing data collected from the end users. In the following sections, a detailed description of the aforementioned steps is provided.
It should be noted that one instantiation of the ENTROPY platform corresponds to the realization of one pilot case. Thus, three instantiations of the ENTROPY platform were realized in independent virtual machines, one for each pilot case (UMU, HESSO, and POLO). No access to data coming from the other cases was made available in any of the pilot cases. Furthermore, the collected personal data was used exclusively within the ENTROPY project and not made available to any party outside the project, as pointed out in the Ethical Requirement document.
Figure 1. ENTROPY data management and protection workflow
##### 5.1.1.1 Participants registration and questionnaire completion (based
on ENTROPY Ethics)
As an initial step, the target users who were invited to participate in the ENTROPY project carried out a registration process. This process involved the creation of an account and the completion of a personal questionnaire to collect certain relevant personal data. Creation of a new account was realized via the ENTROPY platform, while activation of the account was provided by the platform administrator. An activation e-mail was sent to the e-mail address denoted by the end user in order to validate that he/she had requested the creation of an account. Upon validation by the end user, he/she was able to log in to the platform.
The next step concerns the agreement with the terms and conditions of usage of the platform and the completion of the online questionnaire (the consent form and the questionnaires are available in the Annexes). In the first phase, given that the implementation of the platform was still in progress, the questionnaires were filled in based on the infrastructure of each pilot case. However, after the release of the platform, the completion of the questionnaires and the collection of the provided data from the end users were realised within the ENTROPY platform itself; in fact, this constituted the first step upon the registration of a new user. The questionnaire is composed of six parts, as designed by the ENTROPY consortium. The completion of all parts is mandatory. An indicative screenshot of the questionnaire, as integrated in the ENTROPY platform, is shown in Figure 2.
It should be noted that all the personal private profile data is collected in this step of the overall data collection and management process, and that no previous dataset exists with regard to such data. Data collection for this step is realized at the initiation of a campaign, for a fixed and predefined time period specified by the campaign administrators.
Figure 2. Screenshot from the online questionnaire.
It should also be noted that, in the first round, prior to the implementation of the platform, the questionnaires were created and shared among the participants by means of the pilots’ infrastructure. For example, in the case of UMU, a web-based platform for questionnaire generation and management of the University of Murcia (UMU) 2 was used. This tool was only accessible by using a proper UMU email account.
Among other features, this platform allows the easy creation of web-based
questionnaires. In that sense, we created five types of questionnaires. Each
one included an explicit link to the ENTROPY consent form in its first
introductory page so that all the participants could easily read it. The links
to all of these questionnaires are listed next:
* https://encuestas.um.es/entropy_spa_emp.cc
* https://encuestas.um.es/entropy_spa_stud.cc
* https://encuestas.um.es/entropy_ita_emp.cc
* https://encuestas.um.es/entropy_fr_emp.cc
* https://encuestas.um.es/entropy_de_emp.cc
Next, an email with the listed links was distributed among the project's
partners by using the regular email list of the project. Each partner was
responsible for distributing the appropriate links among their target staff.
The data collected through this step are denoted in the database collections
entitled “User”, “UserProfile” and “QuestionnaireResult”, as they are detailed
in the following sections in this chapter.
#### 5.1.2 Data Analysis, De-association and Encryption of sensitive
information
Upon the completion of the questionnaire, an automated analysis takes place classifying the user into specific behavioural types. This statistical analysis was defined by the ENTROPY partner Athens University of Economics and Business (AUEB) and is implemented in the ENTROPY platform. The results of the analysis, along with part of the personal information (e.g. gender, educational level) provided in the questionnaire, were encrypted and stored in the ENTROPY repository in the database collections “User” and “UserProfile”. Thus, all sensitive personal data is known only by the end user who provided it and cannot be revealed to other parties. All interaction with such data is realised upon encrypted data. The remaining information is pseudonymised (fully disjointed from the end user who provided it) and also stored in the ENTROPY repository, aiming at supporting any statistical analysis that may be considered helpful in the future.
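As an illustration of this de-association step, the following Python sketch separates the sensitive identifying fields from the research data and keys the latter only by a random pseudonym. The field names and the exact split are simplified assumptions for illustration, not the actual ENTROPY implementation:

```python
import uuid

# Illustrative de-association: sensitive identifying fields are split from the
# research data, which is keyed only by a random pseudonym. The field names
# and the split are simplified assumptions, not the exact ENTROPY code.
IDENTIFYING_FIELDS = ("name", "email")

def deassociate(record: dict) -> tuple:
    sensitive = {k: record[k] for k in IDENTIFYING_FIELDS if k in record}
    research = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    research["pseudonym"] = str(uuid.uuid4())  # random, non-reversible key
    # 'sensitive' would be AES-encrypted and stored separately (see below);
    # 'research' carries no direct identifier and supports statistical analysis.
    return sensitive, research

sensitive, research = deassociate({
    "name": "Jane Doe", "email": "[email protected]",
    "gender": "F", "educational_level": "MSc", "extraversion": 3.8,
})
print(research)
```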
At this point, the end user profile is successfully initialized. The end user profile includes data such as:
* name,
* educational level,
* gender,
* personality profile axes (Extraversion, Agreeableness, Conscientiousness, Emotional Stability, Openness to Experiences),
* work engagement (as a result of Vigor, Dedication, Absorption),
* energy conservation behaviours,
* game interaction type (Philanthropist, Socialiser, Free Spirit, Achiever, Disruptor, Player).
All the string-represented data is encrypted using a popular and widely adopted symmetric encryption algorithm, namely the Advanced Encryption Standard (AES). Some of the features of AES include:
* Symmetric-key symmetric block cipher,
* 128-bit data, 128/192/256-bit keys,
* Stronger and faster than Triple-DES,
* Full specification and design details are publicly available.
AES is widely adopted and supported in both hardware and software. To our knowledge, no practical cryptanalytic attack against AES has been discovered. Additionally, AES has built-in flexibility of key length, which allows a degree of ‘future-proofing’ against progress in the ability to perform exhaustive key searches. The password required by the algorithm to encrypt the data is provided by the project coordinator.
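As a minimal sketch of this step, the following Python snippet encrypts a string-represented field with AES, deriving the key from the coordinator-provided password. The use of PBKDF2 for key derivation and of CBC mode with PKCS7 padding is an illustrative assumption, since the deliverable specifies AES but not the exact mode of operation:

```python
import os
from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# Encrypt one string-represented profile field with AES-256. Key derivation
# via PBKDF2 and the CBC/PKCS7 mode are illustrative assumptions; the
# deliverable specifies AES but not the mode of operation.
def encrypt_field(password: str, plaintext: str) -> bytes:
    salt, iv = os.urandom(16), os.urandom(16)
    key = PBKDF2HMAC(
        algorithm=hashes.SHA256(), length=32, salt=salt, iterations=200_000
    ).derive(password.encode())
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext.encode()) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    # Salt and IV are not secret and are stored alongside the ciphertext.
    return salt + iv + encryptor.update(padded) + encryptor.finalize()

token = encrypt_field("coordinator-password", "educational level: MSc")
print(token.hex())
```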
After the encryption process, the end user data was made available in the ENTROPY repository and could be used for analysis and personalized recommendation purposes. Based on the usage of such services, the end user profile may be updated; however, such an update of the data is completely obscured from the end users as well as from the administrators of the platform.
It should be noted that the covered and encrypted part of the data will not be distributed to any project or third-party participants. This part of the data will only be stored and retained at the pilot sites (UMU, HESSO, and POLO) or at UBITECH’s infrastructure dedicated to the ENTROPY project.
In case an end user desired to opt out from the pilot, a relevant process was defined and supported. Through the platform, the end user was able to declare that he/she wanted to opt out, and then to select whether he/she desired his/her data to be removed from the ENTROPY repository or not. In the first case, a removal process took place deleting all the end-user-related data, while in the latter case no action was required.
The data collected through this step are denoted in the database collections
entitled “User”, “UserProfile” and “QuestionnaireResult”, as they are detailed
in the relevant section of the chapter.
#### 5.1.3 Data management and update from ENTROPY services and applications
Once the initial profile data has been created, the set of services provided through the ENTROPY platform, as well as the developed third-party mobile applications, have access to it. As already mentioned, all the sensitive personal data was encrypted; thus, access was provided to the encrypted data only and no further data exposure was realized.
The third-party applications have partial access only to the demographic data of the authenticated end user. Partial access to an end user’s personal data is considered secure, since the user only needs to enter his/her username and password once, and afterwards simply makes use of the token-based authentication mechanisms supported by the ENTROPY platform. The general concept behind a token-based authentication system is to allow users to enter their username and password in order to obtain a token which allows them to fetch a specific resource without using their username and password. Once the token has been obtained, the user can offer the token, which grants access to a specific resource for a time period, to the remote site. All communication between the third-party personalized applications and serious games and the ENTROPY platform was done through a secure SSL channel, which allows all sensitive information to be transmitted encrypted and secure.
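The following Python sketch illustrates this token-based flow: the credentials are sent once to obtain a time-limited token, which is then used for all subsequent requests over TLS. The endpoint paths and payloads are hypothetical, as the actual ENTROPY platform routes are not documented in this deliverable:

```python
import requests

# Token-based authentication flow as described above: credentials are sent
# once to obtain a time-limited token, which is then used for all subsequent
# requests over TLS. Endpoint paths and payloads are hypothetical.
BASE = "https://entropy-platform.example.org"  # hypothetical host

# Step 1: exchange username/password for a token (done once).
token = requests.post(
    f"{BASE}/auth/token",
    json={"username": "jdoe", "password": "secret"},
    timeout=30,
).json()["token"]

# Step 2: access a protected resource with the token instead of credentials.
profile = requests.get(
    f"{BASE}/api/users/me/profile",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
).json()
print(profile)
```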
In case of an update in the profile of an end-user (e.g. energy awareness
level, engagement indicator on games) based on his interaction with the
ENTROPY services and applications, the relevant information in the ENTROPY
repository may be updated, respecting the applied authorization, encryption
and secure communication mechanisms. In case of the collection of crowdsensing
data on behalf of the end users (e.g. indication of presence, reporting of
problems, answers to raised questions), such data was also stored in the
ENTROPY repository and made available to the relevant applications. Collection
of such data was based on the terms and conditions agreed with the end users
prior to the first execution of the application.
The data collected through this step are denoted in the database collections
entitled “User”, “UserProfile”, “AppProfile”, “Action”, “Action Validation”
and “Recommendation”, as they are detailed in the relevant section in the
chapter.
#### 5.1.4 Building and Sensor Infrastructure Data Management
In addition to the data collected from users’ feedback via mobile applications, information about the set of buildings considered in each pilot, as well as the set of sensors registered per building space, was provided. Such information was entered into the platform by the campaign administrator. For each building, the set of building spaces was declared, along with information regarding the surface, the capacity and the working hours of each building space. Furthermore, the sensor data streams that were activated during the operation of the pilot were declared. The templates of these Excel sheets are depicted in Figure 3 and Figure 4.
Figure 3. Building Space Information
Figure 4. Sensor Data Stream Information
Regarding the association of the presence or engagement of end users with specific building spaces, such information was collected only with their consent. Each end user may declare the building spaces in which he/she has activities, in order to get meaningful recommendations for these spaces. Furthermore, during his/her interaction with the third-party applications, he/she may also declare his/her presence in specific spaces (e.g. for earning points upon the realization of an action).
In addition to that, the infrastructure sensors deployed in the buildings could be associated with the presence and certain activities of the users. However, the ENTROPY consortium did not use this type of information, except where the user declared such an action on his/her own (e.g. as part of a serious game action). Here we describe the information that may be indirectly inferred given the set of sensors deployed in the three use cases:
* As for HVACs, when such devices are manually switched on or off, it would be possible to infer that a person, probably the person associated with the HVAC's building space, is located in this space. This applies mostly to cases where a very limited number of persons have access to this space. If this space can be easily linked to a certain activity, like activity in a kitchen or a personal office, then it would also be possible to infer the potential activity undertaken by the user. In a similar manner, the manual configuration of the regulated temperature also indicates the presence of a person in the HVAC's area of influence.
* Concerning CO2, luminosity, temperature and humidity sensors, readings showing remarkable fluctuations might also indicate the presence of one or more people in their associated building spaces.
* Regarding presence sensors, their readings can be used to know when a person moves around a building space. Using this information along with other external data, like the time or the category of the building space, it would also be possible to infer the activity of the user and his/her approximate location.
* The sensors installed in doors and windows, reporting when they are closed or opened, can also be used to infer the presence or not of people within a building space or room. Similarly, the correlation of this data with other sources of information, like the current time of the day and the category of the space (e.g. kitchen, research laboratory or personal office), might also give rise to a coarse-grained perception of the current activity performed within the space premises.
In that sense, the consent form also reported the possibility of inferring the aforementioned information, while pointing out that it would not be used in the context of the project.
The data collected through this step are denoted in most of the database collections, as detailed in the relevant section of the chapter.
#### 5.1.5 ENTROPY data structure
In this section, a short description of the main collections of the ENTROPY database structure is provided, giving information on the main data stored per database collection. It should be noted that the provided information is not final, since minor adaptations may take place based on the continuous feedback provided by the mobile application developers. All sensitive personal data is encrypted, while access to any type of data is provided to authenticated users over secure connections.
<table>
<tr>
<th>
**Collection**
</th>
<th>
**Fields**
</th>
<th>
**Encrypted**
</th>
<th>
**Hosted by**
</th>
<th>
**Lifetime**
</th> </tr>
<tr>
<td>
User
</td>
<td>
first name, last name, e-mail, educational level, gender, role
</td>
<td>
</td>
<td>
UMU or UBITECH
</td>
<td>
Project lifetime (or until a user decides to opt out)
</td> </tr>
<tr>
<td>
User
</td>
<td>
id, interests, energy awareness level
</td>
<td>
</td>
<td>
UMU or UBITECH
</td>
<td>
Project lifetime (or until a user decides to opt out)
</td> </tr>
<tr>
<td>
UserProfile
</td>
<td>
id, user id, behavioural indicators
</td>
<td>
</td>
<td>
UMU or UBITECH
</td>
<td>
Project lifetime (or until a user decides to opt out)
</td> </tr>
<tr>
<td>
BuildingSpace
</td>
<td>
id, name, type, surface, capacity, building objects, working hours, coordinates
</td>
<td>
</td>
<td>
UMU, POLO, HESSO or UBITECH
</td>
<td>
From pilots' deployment until project's end
</td> </tr>
<tr>
<td>
Recommendation
</td>
<td>
id, user id, description, triggering attributes, datetime, feedback, category
</td>
<td>
</td>
<td>
UMU, POLO, HESSO or UBITECH
</td>
<td>
From pilots' deployment until project's end
</td> </tr>
<tr>
<td>
AppProfile
</td>
<td>
id, name, user id, player character, total score, monthly score, last ranking, last score update, last building space
</td>
<td>
</td>
<td>
UMU, POLO, HESSO or UBITECH
</td>
<td>
From pilots' deployment until project's end
</td> </tr>
<tr>
<td>
SensorStream
</td>
<td>
id, attribute, sensor id, frequency, type, state
</td>
<td>
</td>
<td>
UMU, POLO, HESSO or UBITECH
</td>
<td>
From pilots' deployment until project's end
</td> </tr>
<tr>
<td>
ObservationValue
</td>
<td>
id, value, rate of change, datetime, sensor stream id, unit of measure, prediction
</td>
<td>
</td>
<td>
UMU, POLO, HESSO or UBITECH
</td>
<td>
From pilots' deployment until project's end
</td> </tr>
<tr>
<td>
Action
</td>
<td>
id, username, app profile, recommendation, building space, sensor stream, datetime, validated, awarded score, badge
</td>
<td>
</td>
<td>
UMU, POLO, HESSO or UBITECH
</td>
<td>
From pilots' deployment until project's end
</td> </tr>
<tr>
<td>
Sensor
</td>
<td>
id, state, attributes, location
</td>
<td>
</td>
<td>
UMU, POLO, HESSO or UBITECH
</td>
<td>
From pilots' deployment until project's end
</td> </tr> </table>
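For concreteness, the sketch below shows the approximate shape of a single “User” document as suggested by the table above, expressed as a Python dictionary. The field naming and the placeholder for the encrypted representation are assumptions, since the deliverable lists the fields but not the storage encoding:

```python
# Hypothetical shape of a "User" document; sensitive fields are stored as
# AES ciphertext (see Section 5.1.2), shown here as placeholders.
user_document = {
    "_id": "5bfd2c1e9a1b4c0012345678",   # illustrative MongoDB-style id
    "firstName": "<AES ciphertext>",      # sensitive, encrypted at rest
    "lastName": "<AES ciphertext>",
    "email": "<AES ciphertext>",
    "educationalLevel": "<AES ciphertext>",
    "gender": "<AES ciphertext>",
    "role": "employee",                   # illustrative field value
}
print(user_document)
```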
### 5.2 Data Protection of Datasets made open
The consortium chose to utilize the CKAN data repository, which ensures the
data protection of the ENTROPY datasets made open.
## 6\. OTHER ISSUES
The ENTROPY project does not have any other issues to declare.
# APPENDIXES
## APPENDIX 1: Dataset Metadata Template
<table>
<tr>
<th>
**Parameter**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Document version
</td>
<td>
The version of this document
</td> </tr>
<tr>
<td>
Document format
</td>
<td>
The format of this document
</td> </tr>
<tr>
<td>
Description
</td>
<td>
A description of the data included in the document
</td> </tr>
<tr>
<td>
Date
</td>
<td>
The date of the creation of the document (yyyy-mm-dd)
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
Some keywords describing the content
</td> </tr>
<tr>
<td>
Subject
</td>
<td>
Small description of the data source
</td> </tr>
<tr>
<td>
**Creator (Name of the creator of the data source)**
</td> </tr>
<tr>
<td>
Sector of the provider
</td>
<td>
Information on the sector that this provider belongs to
</td> </tr>
<tr>
<td>
Permissions
</td>
<td>
The permissions of this document; mandatory to be mentioned here
</td> </tr>
<tr>
<td>
**Name of the Partner (The name of the partner that collected the data and is
responsible for)**
</td> </tr>
<tr>
<td>
Responsible person
</td>
<td>
The name of the person within the partner, who is responsible for the data
</td> </tr>
<tr>
<td>
Pilot
</td>
<td>
For which pilot the data will be used
</td> </tr>
<tr>
<td>
Scenario of data usage
</td>
<td>
How the data are going to be used in this scenario
</td> </tr>
<tr>
<td>
**Description of the Data Source**
</td> </tr>
<tr>
<td>
File format
</td>
<td>
The format of the data source provided
</td> </tr>
<tr>
<td>
File name/path
</td>
<td>
The name of the file
</td> </tr>
<tr>
<td>
Storage location
</td>
<td>
In case a URI/URL exists for the data provider
</td> </tr>
<tr>
<td>
Data type
</td>
<td>
Data type and extension of the file; e.g. Excel Sheet, .xlsx; Standard if
possible
</td> </tr>
<tr>
<td>
Standard
</td>
<td>
Data standard, if existent
</td> </tr>
<tr>
<td>
Data size
</td>
<td>
Total data size, if possible
</td> </tr>
<tr>
<td>
Time references of data
</td>
<td>
Start date
</td>
<td>
End date
</td> </tr>
<tr>
<td>
Availability
</td>
<td>
Start date
</td>
<td>
End date
</td> </tr>
<tr>
<td>
Data collection frequency
</td>
<td>
The time frequency in which the data is collected; e.g. hourly, every 15
minutes, on demand, etc.
</td> </tr>
<tr>
<td>
Data quality
</td>
<td>
The quality of the data; is it complete, does it have the right collection
frequency, is it available, etc.
</td> </tr>
<tr>
<td>
**Raw data sample**
</td> </tr>
<tr>
<td>
Textual copy of data sample
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Number of Parameters included:**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Parameter #1:**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
</td>
<td>
…
</td>
<td>
…
</td>
<td>
…
</td> </tr>
<tr>
<td>
**Parameter #2:**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Variables**
</td>
<td>
**Name**
</td>
<td>
**Type**
</td>
<td>
**Mandatory**
</td> </tr>
<tr>
<td>
</td>
<td>
**…**
</td>
<td>
**…**
</td>
<td>
**…**
</td> </tr> </table>
## APPENDIX 2: Ethics related material (e.g. consent form, Terms and
Conditions)
**_Forms and information to be provided to user_ **
##### Title of project
ENTROPY: Design of an innovative energy-aware IT ecosystem for motivating behavioural changes towards the adoption of energy-efficient lifestyles.
##### Purpose of the project and its academic rationale
ENTROPY is a multidisciplinary project that aims to design and deploy an innovative IT ecosystem targeted at improving energy efficiency through consumer understanding, engagement and behavioural change. The ENTROPY consortium combines multidisciplinary competences and resources from academia, industry and the research community, focusing on the energy efficiency, micro-generation, sensor and smart metering networking, behavioural change and gamification domains.
We are going to monitor a series of sensors that are installed in each of the three pilots included in the project. The sensors will collect data on, among others, temperature, humidity, CO2 levels, the condition of air conditioners and electrical consumption. The system will use this information to detect energy-inefficient situations and notify a subset of the occupants.
We will collect information from the users of the pilots through questions and surveys. On the one hand, we will measure factors relevant to energy use such as habits, knowledge and attitudes. On the other hand, we will measure various factors related to player profiles. These measurements will be used to offer users an experience that is as personalized as possible, thus increasing the impact on their habits and attitudes.
We will deploy among the pilots' participants a series of personalized applications and serious games that will serve as a link between the users and the system. We will develop a system of recommendations that will inform users of possible tasks, news, or other information that can bring about a positive change in user behavior.
Further information on the academic rational of the whole ENTROPY project can
be found at **_http://www.entropy-project.eu/_ **
Note: This project is realized in collaboration between the three pilots where the data will be collected, although the personal data will be stored and treated within the repository of the University of Murcia, where the de-association is done. This note is relevant for the participants in the pilots of Italy and Switzerland.
#### _Brief description of the methods and measurements_
Methods:
In each of the three pilot buildings, a subset of the pre-existing sensors will be monitored. The optimal number and type of sensors needed to maximize the effect on the participants is currently being investigated.
The measurements collected from the sensors will be a variation of the following, depending on the result of the study mentioned in the previous paragraph:
* temperature
* temperature_2m
* average_temperature
* regulated_indoor_temperature
* active_energy_consumption
* active_power
* power_factor
* current
* voltage
* luminocity
* humidity
* radiation
* dew_point
* total_cloud_area
* snow_hour
* precipation_hour
* rain rate
* rain total
* co2
* wind_direction_10m
* wind_speed_10m
* sunshine_duration
* weather_conditions
* pressure
* active_persons
During your participation in ENTROPY, you will be asked to provide the following types of information about yourself, either actively (through questionnaires) or passively (through mobile applications and games):
* Demographics
* Personality
* Engagement at work
* Energy-conservation behavior
* Video-gaming personal preferences
* Energy-related actions at work
In addition to that, it might be possible to indirectly infer your presence and associated activity in certain areas of your building by making use of some infrastructure sensors deployed within your workplace. Nevertheless, the ENTROPY consortium commits to neither infer nor use such information at any moment.
During the registration process, potential participants will be asked to fill out a survey (included as Annex II). In addition, throughout the duration of the study, we will pose questions to users through the personalized applications and serious games. As a complement to this information, we will monitor data regarding the use of the custom applications and serious games. All this information will be used to compose energy and player profiles of the participants and to facilitate the identification of patterns of behavior.
From the data collected directly from the users, it is intended to extrapolate the following information:
* Levels of awareness of energy saving.
* Levels of knowledge on energy saving.
* Player Profiles.
* Acceptance levels of the application or serious game.
**It is understood by all members of the ENTROPY project that this initial
ethical approval application will cover only what is written here and any
additional work on this study will be subject to another ethical approval
application.**
#### _Participants; recruitment methods, number, age, gender,
exclusion/inclusion criteria_
##### Participants
Users of the three pilot buildings: Navacchio Technology Park (POLO),
University of Murcia Campus (UMU), Technopole in Sierre (HESSO).
##### Recruitment
The majority of participants will be recruited via advertisements, flyers, information sheets, notices, Internet postings and/or media, which will encourage potential participants to register on the platform through a web portal.
Other methods of recruitment may be used, such as direct recruitment (i.e. expositions, lectures with stakeholders), a referral system or a participant pool. The methods listed above do not constitute an exhaustive list of the methods that will be used, and their usage is not obligatory for the consortium.
All recruitment materials and strategies have been/will be reviewed by the EAB 3 for approval.
No direct compensation to the participants is expected or planned during the project. Based on the gamification mechanism developed for the platform, end users are expected to be rewarded with virtual points that may be redeemed for services, as the software teams have decided and promoted at the time their project launches. Nevertheless, the project may decide to promote the platform through a series of lotteries and competitions on other platforms, the terms of which will be announced when needed.
##### Number, age, gender
The exact number, age and gender of the individuals within the households will
not be known until the point of recruitment.
**Inclusion/exclusion criteria**:
Participants must be students or university employees in the case of UMU and HESSO, and employees and visitors of the technology park or residents of the social housing infrastructure in the POLO case.
#### _Consent and participant information arrangements, debriefing_
Specific attention will be given to the issue of _informed consent_ . For the carrying out of the experiments, it will be ensured that all volunteers are healthy adults legally capable of providing their informed consent. In order to make an informed decision, all volunteers will be provided with comprehensive information regarding the goals and duration of the project, its progress, the planned tests and procedures in which they will take part, as well as information on their rights, such as the right to withdraw their consent at any time. Users will have access to the explanation **sheet of the research project ENTROPY** provided within this document before entering the service and for the duration of their participation in it.
The consent form will be presented to potential participants from the web
portal during the registration process on the platform. **The consent form is
provided in this document.** Consent will be obtained once the participant clicks “accept” after marking a clearly identifiable check box.
The platform developers will be responsible for ensuring the consent form is
completed before access to the platform is granted. The platform will also provide participants with a mechanism to ask any questions they may have.
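As an illustration of this gating logic, the sketch below shows one way a registration handler could refuse access until the check box is marked. The function and field names are ours for illustration, not the actual ENTROPY platform code.

```python
from datetime import datetime, timezone


class ConsentNotGivenError(Exception):
    """Raised when registration is attempted without accepted consent."""


def register_participant(email: str, consent_box_ticked: bool) -> dict:
    """Grant platform access only after the consent check box is marked."""
    if not consent_box_ticked:
        raise ConsentNotGivenError("The consent form must be accepted first.")
    # Record when consent was obtained, so it can be audited later.
    return {
        "email": email,
        "consent_given_at": datetime.now(timezone.utc).isoformat(),
        "status": "registered",
    }
```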
Initially, subjects will be **contacted** to inform them about the goal of the project and provide relevant material and privacy information. They will be
informed about the process that will be followed and they will be asked to
subscribe to the project contact list.
At the end of each pilot study, a debrief letter will be sent via e-mail to
each participant, thanking them for participating in the study and
reminding them of the possibility of withdrawing their study data. **The
debrief letter is provided in this document.**
**A clear but concise statement of the ethical considerations raised by the
project and how you intend to deal with them.**
The study will collect invasive and potentially sensitive data: for example, from sensors we will have access to electricity consumption data, and from serious games we will know participants' occupancy patterns.
As such, the confidentiality and anonymity of the data are paramount, and a secure data protection system will be put in place accordingly.
##### Data storage
The information will be stored on the servers of the pilot buildings making
use of the FIWARE platform. FIWARE uses MongoDB, which offers rigorously tested security mechanisms. Only encrypted login data will be stored on users' devices.
##### Identifiable data
Personal data of users (e.g. demographic data and survey results) will be entered immediately into a database, where each set of results will be given an automatic number and the personal details omitted. During the pilot stages, the correspondence with the user list will be saved in a local database, which will be encrypted. The server will be kept in a locked server room, with electronic access limited to those members of the consortium analyzing the results of the ENTROPY project.
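A minimal sketch of the pseudonymisation step just described, with the two stores shown as plain dictionaries for brevity; in practice the identity store would be the encrypted local database mentioned above, and all names here are illustrative.

```python
import itertools

_next_code = itertools.count(1)
results_db = {}    # automatic number -> survey answers (no personal details)
identity_db = {}   # automatic number -> personal details (encrypted, local)


def store_survey_result(personal_details: dict, answers: dict) -> int:
    """Store answers under an automatic number; keep identity separately."""
    code = next(_next_code)
    results_db[code] = answers
    identity_db[code] = personal_details
    return code
```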
##### Transparency of the data being collected
Before participants sign the first consent form they will be told the exact
nature of the data that is being collected in simple and clear language. This
will be in the ENTROPY Research information sheet.
##### Third countries
The involvement of a non-EU partner (HESSO) deserves special attention.
Bearing in mind that the EU Data Protection Directive provides strong
limitations for the transfer of personal data beyond EU boundaries and only
legitimates such a transfer to “third countries” under well-defined conditions
(see esp. Art. 25, 26 of the Directive), **no such transfer will be planned
for in the initial phase** . Instead, the fundamental assumption will be a
model of three strictly separated “silos” with no personal data being
transferred from the Spanish and the Italian to the Swiss pilot and vice
versa. Independently from this strict separation, the ethical standards and
guidelines of Horizon 2020 will be rigorously applied to all project
activities, including those taking place outside of the EU (HESSO Confirmation
included).
##### Estimated start and duration of the project
We will begin collecting information from potential participants on the pilots from 02/2016 until the end of the project in 08/2018.
The data collected at the end of the project will be completely anonymized to
be used in future research studies without creating traceability to the
participants.
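A sketch of what this final anonymisation could look like under the storage model sketched earlier: the code-to-person mapping is destroyed and remaining quasi-identifiers are coarsened. The bracket boundaries below are illustrative, not the project's actual scheme.

```python
def anonymise_for_reuse(results_db: dict, identity_db: dict) -> list:
    """Destroy the identity mapping and coarsen quasi-identifiers."""
    identity_db.clear()  # no way back from record to participant
    anonymised = []
    for answers in results_db.values():
        record = dict(answers)
        record.pop("email", None)   # drop any direct identifier
        age = record.pop("age", None)
        if age is not None:         # generalise to a coarse bracket
            record["age_bracket"] = (
                "18-35" if age <= 35 else "36-55" if age <= 55 else ">55"
            )
        anonymised.append(record)
    return anonymised
```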
**Include copies of any information sheets, consent forms, debrief sheets and
questionnaire measures you intend to use**
### ENTROPY project: Research information sheet
1. _Invitation:_
You are being invited to take part in the research project entitled ENTROPY.
If you decide to take part, you will be asked to provide behavioral and
lifestyle data which will be aggregated automatically from the applications to
be developed within the context of the project. Before you make this decision,
it is important for you to understand why the research is being done and what
it will involve.
This document describes the project in order to help you make your decision. Please read the information provided carefully and discuss it with
others if you wish. Please take time to decide whether or not you wish to take
part. You must not feel obliged to participate in this research project. If
you do decide to participate, you can withdraw your consent at any time
without any disadvantages. Also, if you decide not to volunteer for the
project, it will not affect your treatment in any way.
Thank you for reading this.
2. _Purpose of the project:_
The vision of the ENTROPY project is to design and deploy an innovative IT ecosystem for motivating end-users’ behavioural changes towards the adoption of energy efficient lifestyles, building upon developments in the Internet of Things, Data Modeling and Analysis, and Recommendation and Gamification areas.
Internet of Things technologies are exploited for the proper and energy
efficient interconnection of a heterogeneous set of sensor nodes (e.g. smart
energy meters, sensors interacting with microgeneration infrastructure,
sensors in smart phones), the collection of data based on Mobile Crowd Sensing
Mechanisms exploiting the power of the collection of data from a critical mass
of interested people and the application of proper communication networking
schemes with regard to data collection. Advanced Data Modeling and Analysis techniques are applied for the modelling of the collected data – both from sensor networks and directly from end users – and the extraction of advanced knowledge by exploiting the power of Semantic Web techniques, Linked Data and Data Analytics. Focus is given to the development of personalised
mobile applications and games targeted at providing energy related information
to end users, triggering interaction with relevant users in social networks
(e.g. users in a specific area within a city), increasing their awareness with
regards to ways to achieve energy consumption savings in their daily
activities and adopt energy efficient lifestyles based on a set of
recommendations and motives targeted to their culture. The engagement and
direct inclusion of end users within the diverse components of the provided IT
ecosystem is going to be strongly supported.
3. _Why have you been chosen?_
You have been chosen because your data is of interest for the research
developed within the project.
4. _Do you have to take part?_
Your participation in this study is entirely voluntary. If you decide to take
part you will be asked to sign a consent form. By signing the consent form,
you will confirm that you were properly informed about this project and that
all your questions have been answered. A copy of the consent form will be
given to you to keep. If you decide to take part, you are free to withdraw
your consent at any time and leave the study without giving any reason.
5. _What will happen to you if you take part?_
If you have decided to take part, behavioral and lifestyle related information
will be collected automatically, stored in the ENTROPY platform and processed
in order to provide personalised recommendations with regards to energy
consumption savings in your daily activities and adoption of energy efficient
lifestyles.
6. _How is your data protected?_
Access to and use of the data is only allowed for registered users of the platform.
7. _Costs_
There will not be any additional costs for you if you decide to participate in
the project.
ENTROPY project
Individual and legal commitment to abide by ENTROPY Privacy and personal data
protection rules and guidelines
_Details of the contracting collaborator:_
Organization name:
First name:
Family name:
Email address:
I hereby confirm that I have read and fully understood the “Data Protection
Procedure” of the ENTROPY project. I personally and formally commit to respect, and to ensure respect of, those rules and guidelines, as well as the European
directive(s) on personal data protection.
I also commit to:
* Mitigate any identified risk that those rules may be breached;
* Ensure that access to any potentially stored personal data is reserved to those who have signed the present legal commitment;
* Inform my internal hierarchy and/or the personal data protection officer of ENTROPY in the case I would identify any breach in the privacy and personal data protection policy.
I understand that any voluntary breach of those rules may be considered as a
grave fault.
Place:
Date:
Signature:
**ENTROPY project**
ENTROPY Terms of Use and Privacy Statement – Preliminary Document
To be included in Web pages and apps
_Note: This is a preliminary document that will be finalised in collaboration with the legal departments of the partners and checked with the national data protection agencies as soon as the first release of the project is available._
_**1\. Terms of use** _
#### Basic Terms
* _You must be 18 years or older to use this application._
* _You may not post nude, partially nude, or sexually suggestive photos._
* _You are responsible for any activity that occurs under your screen name._
* _You are responsible for keeping your password secure._
* _You must not abuse, harass, threaten, impersonate or intimidate other users._
* _You may not use the service for any illegal or unauthorized purpose. International users agree to comply with all local laws regarding online conduct and acceptable content._
* _You are solely responsible for your conduct and any data, text, information, screen names, graphics, photos, profiles, audio and video clips, links ("Content") that you submit, post, and display on the ENTROPY platform._
* _You must not modify, adapt or hack ENTROPY or modify another website so as to falsely imply that it is associated with ENTROPY._
* _You must not crawl, scrape, or otherwise cache any content from ENTROPY including but not limited to user profiles and photos._
* _You must not create or submit unwanted email or comments to any ENTROPY members ("Spam")._
* _You must not use web URLs in your name without prior written consent from ENTROPY._
* _You must not transmit any worms or viruses or any code of a destructive nature._
* _You must not, in the use of ENTROPY, violate any laws in your jurisdiction (including but not limited to copyright laws)._
* _Violation of any of these agreements will result in the termination of your ENTROPY account. While ENTROPY prohibits such conduct and content on its site, you understand and agree that ENTROPY cannot be responsible for the Content posted on its web site and you nonetheless may be exposed to such materials and that you use the ENTROPY service at your own risk._
#### Proprietary Rights in Content on ENTROPY
1. _ENTROPY does NOT claim ANY ownership rights in the text, files, images, photos, video, sounds, musical works, works of authorship, applications, or any other materials (collectively, "Content") that you post on or through the ENTROPY Platform. By displaying or publishing ("posting") any Content on or through the ENTROPY Platform, you hereby grant to ENTROPY a non-exclusive, fully paid and royalty-free, worldwide, limited license to use, modify, delete from, add to, publicly perform, publicly display, reproduce and translate such Content, including without limitation distributing part or all of the Site in any media formats through any media channels, except Content not shared publicly ("private") will not be distributed outside the ENTROPY Platform. This IP License ends when you delete your IP content or your account._
2. _When you delete IP content, it is deleted in a manner similar to emptying the recycle bin on a computer. However, you understand that removed content may persist in backup copies for a reasonable period of time (but will not be available to others). All this content will be securely deleted at the end of the ENTROPY project, prior to notice from the consortium to all ENTROPY users for this operation, unless they explicitly grant permission to the consortium to retain this content in the ENTROPY platform, until the users delete it themselves in the future._
3. _You represent and warrant that: (i) you own the Content posted by you on or through the ENTROPY platform or otherwise have the right to grant the license set forth in this section, (ii) the posting and use of your Content on or through the ENTROPY platform does not violate the privacy rights, publicity rights, copyrights, contract rights, intellectual property rights or any other rights of any person, and (iii) the posting of your Content on the Site does not result in a breach of contract between you and a third party. You agree to pay for all royalties, fees, and any other monies owing any person by reason of Content you post on or through the ENTROPY platform._
4. _The ENTROPY platform contains Content of ENTROPY ("ENTROPY Content"). ENTROPY Content is protected by copyright, trademark, patent, trade secret and other laws, and ENTROPY owns and retains all rights in the ENTROPY Content and the ENTROPY platform. ENTROPY hereby grants you a limited, revocable, non-sublicensable license to reproduce and display the ENTROPY Content (excluding any software code) solely for your personal use in connection with viewing the Site and using the ENTROPY platform._
5. _The ENTROPY platform contains Content of Users and other ENTROPY licensors. Except as provided within this Agreement, you may not copy, modify, translate, publish, broadcast, transmit, distribute, perform, display, or sell any Content appearing on or through the ENTROPY platform._
6. _ENTROPY performs technical functions necessary to offer the ENTROPY platform, including but not limited to transcoding and/or reformatting Content to allow its use throughout the ENTROPY platform._
7. _Although the Site and the ENTROPY platform are normally available, there will be occasions when the Site or the ENTROPY platform will be interrupted for scheduled maintenance or upgrades, for emergency repairs, or due to failure of telecommunications links and equipment that are beyond the control of ENTROPY. Also, although ENTROPY will normally only delete Content that violates this Agreement, ENTROPY reserves the right to delete any Content for any reason, without prior notice. Deleted content may be stored by ENTROPY in order to comply with certain legal obligations and is not retrievable without a valid court order. Consequently, ENTROPY encourages you to maintain your own backup of your Content. In other words, ENTROPY is not a backup service. ENTROPY will not be liable to you for any modification, suspension, or discontinuation of the ENTROPY platform, or the loss of any Content._
8. _Your profile is only visible under an avatar and cannot be linked with your real profile unless you make information public._
9. _When you join a project (i.e. an application under development), your content and information is shared with the project owners, based on the profile data you have explicitly permitted to be public. ENTROPY requires projects to respect your privacy, and your agreement with that project will control how the project can use, store, and transfer that content and information and how IPRs are treated._
10. _You are in a position to opt-out of any project (“leave” from a project) that makes use of your information, at any time and for any reason. This action will result in ceasing the IPR agreement between yourself and the project, while the data you have shared with this project will be deleted and no new data will be provided from your side._
11. _You are allowed to re-join a project at any time, by accepting the agreement that is valid at the period of joining._
12. _A project may at any time ban certain users due to violation of terms of use, prior to notifying the user and the ENTROPY consortium for the intended action._
13. _A project may at any time request from users to share more data and/or modify the IPR agreement. In such a case, the user may a) decide either to accept these changes, b) reject them and continue his presence in the project without altering the agreement and data sharing permissions which he accepted at join (or during a previous request), or c) reject them and completely opt out of the project._
14. _We always appreciate your feedback or other suggestions about ENTROPY, but you understand that we may use them without any obligation to compensate you for them (just as you have no obligation to offer them)._
_**2\. Privacy** _
#### Gathering of Personally-Identifying Information
_Certain visitors to ENTROPY websites choose to interact with ENTROPY in ways
that require ENTROPY to gather personally-identifying information. The amount
and type of information that ENTROPY gathers depends on the nature of the
interaction. For example, we ask visitors who sign up for an account on
http://Entropy.com to provide a username and email address. In each case,
ENTROPY collects such information only insofar as is necessary or appropriate
to fulfill the purpose of the visitor's interaction with ENTROPY. ENTROPY does
not disclose personally-identifying information other than as described below.
Visitors can always refuse to supply personally-identifying information, with
the caveat that it may prevent them from engaging in certain website-related
activities._
#### Aggregated Statistics
_ENTROPY may collect statistics about the behavior of visitors to its
websites. ENTROPY may display this information publicly or provide it to
others. However, ENTROPY does not disclose personally-identifying information
other than as described below._
#### Protection of Certain Personally-Identifying Information
_ENTROPY discloses potentially personally-identifying and personally-
identifying information only to those of its employees, contractors and
affiliated organizations that (i) need to know that information in order to
process it on ENTROPY's behalf or to provide services available at ENTROPY's
websites, and (ii) that have agreed not to disclose it to others. Some of
those employees, contractors and affiliated organizations may be located
outside of your home country; by using ENTROPY's websites, you consent to the
transfer of such information to them. ENTROPY will not rent or sell
potentially personally-identifying and personally-identifying information to
anyone. Other than to its employees, contractors and affiliated organizations,
as described above, ENTROPY discloses potentially personally-identifying and
personally-identifying information only when required to do so by law, or when
ENTROPY believes in good faith that disclosure is reasonably necessary to
protect the property or rights of ENTROPY, third parties or the public at
large. If you are a registered user of an ENTROPY website and have supplied
your email address, ENTROPY may occasionally send you an email to tell you
about new features, solicit your feedback, or just keep you up to date with
what's going on with ENTROPY and our products. We primarily use our various
product blogs to communicate this type of information, so we expect to keep
this type of email to a minimum. If you send us a request (for example via a
support email or via one of our feedback mechanisms), we reserve the right to
publish it in order to help us clarify or respond to your request or to help
us support other users, without however disclosing any personal information
about you. ENTROPY takes all measures reasonably necessary to protect against
the unauthorized access, use, alteration or destruction of potentially
personally-identifying and personally-identifying information. ENTROPY may process your personal data to increase the accuracy of project recommendations.
The outcome of this process will not be available to any third-party entity._
#### Internet Protocol (IP) Address, WAN Data and Cookie processing
_No data of IP Addresses, WAN data and cookies will be processed by the
ENTROPY platform. Any such data unavoidably recognized for technical reasons
will be deleted as soon as possible._
#### Ads
_No Ads apply to the ENTROPY platform. Projects hosted on the platform may,
however, be promoted to potential contributors as “editors’ choice” or
“trending” without any fee during the project period._
#### Privacy Policy Changes
_Although most changes are likely to be minor, ENTROPY may change its Privacy
Policy from time to time, and in ENTROPY's sole discretion, after acceptance
by the consortium, the EAB and the responsible national agencies. ENTROPY will inform users of any such change, asking for their acceptance in order to continue using the platform. Users who have not accepted the revised policies will be put into a “frozen” state, meaning that their data will no longer be shared until they decide to accept or reject the new policies. Rejecting them means that the user’s data will be treated under the previous policy; users will also be able to choose to delete their account and remove their data from the platform._
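Read as pseudocode, this policy-change handling amounts to a small state machine. The sketch below is our reading of the text, with illustrative function and state names.

```python
def on_policy_change(user: dict) -> None:
    """New privacy policy published: freeze data sharing for this user."""
    user["state"] = "frozen"          # data no longer shared


def on_user_decision(user: dict, decision: str) -> None:
    """Resolve a frozen account according to the user's choice."""
    if decision == "accept":
        user.update(state="active", policy="new")
    elif decision == "reject":
        user.update(state="active", policy="previous")  # old policy applies
    elif decision == "delete":
        user.update(state="deleted")  # account and data removed
    else:
        raise ValueError(f"unknown decision: {decision!r}")
```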
#### Leaving the service
_Any time you decide to leave the service, you can delete your account and
remove all your related data from the platform. Note, however, that your data will then be lost forever and cannot be retrieved by any means, and it may be impossible to reclaim your previous username if you decide to return to the
platform._
## De-brief [Sent as e-mail]
Dear [Participant],
Thank you for participating in the ENTROPY project, we really appreciate your
involvement. The data that has been collected over the past year is currently
being analysed to find better ways to generate behavioural changes towards the
adoption of energy efficient lifestyles.
_Reminder: your data is secure, confidential and anonymous. You are free to
withdraw from the study at any point and have your data destroyed. Please
contact us quoting your username within ENTROPY if you wish to do so._
**APPENDIX 3: Registration Questionnaire**
## _A. Personality Test_
For each of the following statements, please state the degree of your
agreement, by selecting between 1 (Strongly Disagree) and 7 (Strongly Agree).
**I see myself as:**

1. Extraverted, enthusiastic.
2. Critical, quarrelsome.
3. Dependable, self-disciplined.
4. Anxious, easily upset.
5. Open to new experiences, complex.
6. Reserved, quiet.
7. Sympathetic, warm.
8. Disorganized, careless.
9. Calm, emotionally stable.
10. Conventional, uncreative.

(Response options: 1 Strongly disagree, 2 Disagree, 3 Somewhat disagree, 4 Neither agree nor disagree, 5 Somewhat agree, 6 Agree, 7 Strongly agree.)
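These ten items correspond to the Ten-Item Personality Inventory (TIPI). Purely as an illustration of how such responses are typically scored (this is not project code), each trait is the mean of one regular and one reverse-scored item:

```python
# answers maps item number (1-10) to the participant's 1-7 response.
# Standard TIPI key: the second item of each pair is reverse-scored (8 - x).
TIPI_KEY = {
    "Extraversion": (1, 6),
    "Agreeableness": (7, 2),
    "Conscientiousness": (3, 8),
    "Emotional stability": (9, 4),
    "Openness": (5, 10),
}


def tipi_scores(answers: dict) -> dict:
    return {
        trait: (answers[regular] + (8 - answers[reverse])) / 2
        for trait, (regular, reverse) in TIPI_KEY.items()
    }
```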
## _B. Engagement_
For each of the following statements, please state how often each applies to you, by selecting between 1 (Never) and 7 (Always).
(Response options: 1 Never, 2 Almost never (a few times a year or less), 3 Rarely (once a month or less), 4 Sometimes (a few times a month), 5 Often (once a week), 6 Very often (a few times a week), 7 Always (every day).)

1. At my work, I feel bursting with energy.
2. At my job, I feel strong and vigorous.
3. I am enthusiastic about my job.
4. My job inspires me.
5. When I get up in the morning, I feel like going to work.
6. I feel happy when I am working intensely.
7. I am proud of the work that I do.
8. I am immersed in my work.
9. I get carried away when I am working.
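These nine items correspond to the short Utrecht Work Engagement Scale (UWES-9). For illustration only, and assuming the standard UWES-9 subscale mapping onto the item numbers above:

```python
# Each subscale is the mean of its three items; item numbers refer to the
# list above. The mapping assumes the standard UWES-9 item order.
UWES9_SUBSCALES = {
    "Vigor": (1, 2, 5),
    "Dedication": (3, 4, 7),
    "Absorption": (6, 8, 9),
}


def uwes9_scores(answers: dict) -> dict:
    scores = {name: sum(answers[i] for i in items) / 3
              for name, items in UWES9_SUBSCALES.items()}
    scores["Total engagement"] = sum(answers[i] for i in range(1, 10)) / 9
    return scores
```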
## _C. EMPLOYEE ENERGY-CONSERVATION BEHAVIOURS AT WORK_
For each of the following statements regarding energy behaviours at work,
please state the degree of your agreement, by selecting between 1 (Strongly Disagree) and 7 (Strongly Agree).
**Self-reported behaviours**

1. When I am finished using my computer for the day, I turn it off.
2. When I leave a room that is unoccupied, I turn off the lights.
3. When I leave a bathroom that is unoccupied, I turn off the lights.
4. When I am not using my computer, I turn off the monitor.
5. When I leave my work area, I turn off the Air Conditioner(s).
6. When I leave my work area, I turn off the printer(s).
7. I often leave the windows open while the Air Conditioner is on.

**Behavioural intentions**

8. I would help the organization I work for conserve energy.
9. I would change my daily routine to conserve energy.

(Response options: 1 Strongly disagree, 2 Disagree, 3 Somewhat disagree, 4 Neither agree nor disagree, 5 Somewhat agree, 6 Agree, 7 Strongly agree.)
## _D. Game Interaction Design_
For each of the following statements, please state the degree of your
agreement, by selecting between 1 (Strongly Disagree) and 7 (Strongly Agree).
1. I like being part of a team.
2. It is important to me to follow my own path.
3. I enjoy group activities.
4. It is important to me to always carry out my tasks completely.
5. I like to question the status quo.
6. It is difficult for me to let go of a problem before I have found a solution.
7. I dislike following rules.
8. Interacting with others is important to me.
9. Rewards are a great way to motivate me.
10. It makes me happy if I am able to help others.
11. Return of investment is important to me.
12. I see myself as a rebel.
13. I like helping others to orient themselves in new situations.
14. The wellbeing of others is important to me.
15. I like mastering difficult tasks.
16. It is important to me to feel like I am part of a community.
17. Being independent is important to me.
18. I like to provoke.
19. I like overcoming obstacles.
20. If the reward is enough I will put in the effort.
21. I like sharing my knowledge.
22. I like to try new things.
23. I like competitions where a prize can be won.
24. I often let my curiosity guide me.
25* I prefer setting my own goals.
26* I like to take changing things into my own hands.
27* I would like to enhance my skills by training.
28* I like to play with others in a team.
29* I like comparing my performance with others.
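These items resemble the Gamification User Types Hexad scale. The sketch below shows only the aggregation pattern (a mean 1-7 score per player type); the item-to-type grouping is a placeholder for illustration and is not the validated scoring key of the instrument.

```python
# PLACEHOLDER grouping for illustration only; the validated key must be
# taken from the original instrument.
ITEM_MAP = {
    "Socialiser": [1, 3, 8, 16],
    "Free Spirit": [2, 17, 22, 24],
    "Achiever": [4, 6, 15, 19],
    "Disruptor": [5, 7, 12, 18],
    "Player": [9, 11, 20, 23],
    "Philanthropist": [10, 13, 14, 21],
}


def player_profile(answers: dict) -> dict:
    """Mean score per player type, given answers[item_number] in 1-7."""
    return {ptype: sum(answers[i] for i in items) / len(items)
            for ptype, items in ITEM_MAP.items()}
```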
## _E. GAME ELEMENT IMPORTANCE APPRAISAL QUESTIONNAIRE_
The following table includes and explains the functionality of **game
elements** that a game may include. Please state **how important it is for
you, that each one is utilized in a game aimed at reducing energy consumption
at the workplace,** by selecting between 1 (Not Important) and 7 (Very Important).
<table>
<tr>
<th>
**Term**
</th>
<th>
**Definition**
</th>
<th>
**Alternatives**
</th> </tr>
<tr>
<td>
**Points**
</td>
<td>
Numerical units indicating progress
</td>
<td>
Experience points; score
</td> </tr>
<tr>
<td>
**Badges**
</td>
<td>
Visual icons signifying achievements
</td>
<td>
Trophies
</td> </tr>
<tr>
<td>
**Leaderboards**
</td>
<td>
Display of ranks for comparison
</td>
<td>
Rankings, scoreboard
</td> </tr> </table>
Each game element is rated from 1 (Not Important) to 7 (Very Important).
## _F. Demographics_
### _Role in the organisation_
In the following you can state your role in the organization
* Administrative (1)
* Managerial (2)
* Technical (3)
* Security (4)
* Other (5)
### _Smartphone usage_
Do you use a smartphone?
* Yes (1)
* No (2)
(Asked only if “Yes” is selected for “Do you use a smartphone?”)
### _Phone OS_
What is the Operating System of your mobile?
* iOS / Apple (1)
* Android (2)
* Other/Don't know (3)
### _Age_
* 18-24 (1)
* 25-35 (2)
* 35-45 (3)
* 45-55 (4)
* 55-65 (5)
* >65 (6)
### _Gender_
Sex (M/F)
* Male (1)
* Female (2)
### _Children (Y/N)_
Do you have children?
* Yes (1)
* No (2)
_Contact details:_
Please enter your e-mail address, so that we can send you notifications
regarding the upcoming game. (We will not be using this information for any
other reason and it shall be kept private)
1025_DARWIN_653289.md
# Executive Summary
This deliverable presents the final version of the Data Management Plan (DMP)
and is an update of deliverable 7.3 _Initial data management plan_ . It
presents the data collected in the project and how the project will make this
data **f** indable, **a** ccessible, **i** nteroperable and **r** eusable, in
accordance with the concept of FAIR data management.
It follows the template for DMP provided by the European Commission in the
_Guidelines on FAIR Data_
_Management in Horizon 2020_ , version 3.0, July 2016 4 , and provides
technical details on the data collected, as well as purpose for data
collection, data utility and where and how data can be accessed and reused.
**About the project:** The DARWIN project aims to develop state of the art
resilience guidelines and innovative training modules for crisis management.
The guidelines, which will evolve to accommodate the changing nature of
crises, are developed for those with the responsibility of protecting
population or critical services from policy to practice.
The guidelines address the following resilience capabilities and key areas:
* Capability to anticipate
* Mapping possible interdependencies
* Build skills to notice patterns using visualisations
* Capability to monitor
* Identify resilience related indicators, addressing potential for cascade
* Establish indicators that are used and continuously updated
* Capability to respond and adapt (readiness to responds to the expected and the unexpected)
* Conduct a set of pilot studies
* Investigate successful strategies for resilient responses
* Capability to learn and evolve
* Explore how multiple actors and stakeholders operate in rapidly changing environments
* Enable cross-domain learning on complex events
* Key areas: social media and crisis communication; living and user-centred guidelines; continuous evaluation and serious gaming
# Introduction
## Purpose of the document
This deliverable constitutes the project's DMP and describes what data has
been collected and how it has been processed and managed in the project. It
further outlines how and what parts of the data will be available after the
project has been completed, and by what means these will be made available.
## Authorship and Intellectual Property Rights (IPR)
This deliverable has been prepared by SINTEF with input from the work package
(WP) leaders from WP1
(FOI), WP2 (SINTEF), WP4 (DBL) and WP5 (KMC). The WP leaders have mainly contributed to sections 3 and 4 with detailed descriptions of the data
collected in their WPs, and how these data have been managed and will be
preserved. ISS and FOI have contributed with feedback and input through their
role as reviewers of the document.
In this deliverable the DARWIN Wiki is described as a channel for making data
generated by the project available. The IPR principle that applies to this
tool is outlined in the table below. For information on IPR principles applied
to other DARWIN results, please see deliverable 6.8 Plan for Business and
Exploitation of Results (Final).
<table>
<tr>
<th>
**Key Results**
</th>
<th>
**Asset (IP)**
</th>
<th>
**IPR Principle**
</th>
<th>
**Primary**
**Exploitation**
**Partner(s)**
</th> </tr>
<tr>
<td>
Darwin Resilience
Management
Guidelines
</td>
<td>
DARWIN Wiki
</td>
<td>
Creative Commons CC-BY 4.0 license
</td>
<td>
SINTEF, ENAV,
ISS, DBL, FOI,
KMC, BGU
</td> </tr> </table>
## Intended readership
This deliverable is mainly intended for use internally in the project, to
provide guidance on data management to project partners and participants. In
addition, sections 3 and 4 can be used by external actors to gain knowledge of
what data has been generated and how to access such data after the project
ends.
## Structure of this document
Sections 3-5 follow the official template for DMP and FAIR data management, whereas section 2 gives an introduction to the guiding principles for data management applied in the project.
* Section 2 describes the guiding principles for the overall data management in DARWIN
* Section 3 provides details on the data collected and generated in the project
* Section 4 provides an overview of how the open data can be accessed and reused
* Section 5 addresses how DARWIN will relate to the concept of FAIR Data Management
* Section 6 describes how the project has handled issues related to secure storage of research data and data protection.
## Stakeholder involvement
The involvement of end-users and stakeholders is central to achieving the
development of the DARWIN Resilience Management Guidelines (DRMG), which is
the main objective and core result of the DARWIN project. Their involvement
will ensure transnational, cross-sector applicability and long-term relevance; to secure their input and involvement in the project, the _DARWIN Community of Practice_ (DCoP) has been established. The DCoP includes relevant
stakeholders and end-users representing different domains and critical
infrastructures (CIs) as well as resilience experts.
The DCoP has been an important source of data collected in the project. DCoP
members, in addition to other relevant stakeholders who participated in the
pilot exercises, provided input on end user needs, requirements and practices
relevant to the development of the DARWIN Resilience Management Guidelines
(DRMG) and associated innovative tools and training material, as well as
continuous feedback during the development phase. Such data was collected
through surveys, interviews, webinars, questionnaires and face-to-face
workshops.
## Relationship with other deliverables
The DMP presented in this document complements the following deliverables:
* D7.1 – Project Management Manual: D7.9 presents procedures for managing research data developed during the project and thus enables the management procedures presented in D7.1
* D7.3 – Initial data management plan: D7.9 presents an updated version of D7.3
* D7.4 – DARWIN Ethical approvals: The content of D7.4 provides input to D7.9 through the Ethical approvals.
# Guiding Principles
The DARWIN project is an "open" project with 23 of the 38 deliverables in the
project being public. Among the 15 that are confidential, 11 are related to
project management and reporting. The figure below is taken from D7.3 and
illustrates the main procedure used in the project to ensure open access to
research data and publications.
**Figure 1: DARWIN data sets and publications**
## General Data Protection Regulation (GDPR)
As of May 2018, the General Data Protection Regulation (GDPR) is applicable in
all Member States in the European Union, as well as in the countries in the
European Economic Area (EEA). GDPR updates and modernises existing laws on
data protection to strengthen citizens' fundamental rights and guarantee their
privacy in the digital age.
The DARWIN project has reviewed the data collected through the project and how
this has been processed and stored. We have received confirmation from the Norwegian Social Science Data Services (NSD), which is our main advisor in handling sensitive data as well as our main data archiving facility, that they
operate in accordance with the new GDPR rules. We have also consulted our
Ethics and Security Board comprised of project external experts to confirm
that our procedures are in line with GDPR and sound research ethics. In
addition, we have contacted all members of the DCoP to get updated permission
to store their contact data for involving them in project work and activities.
All data collection from stakeholders in the project has been done in accordance with applicable ethical standards and requirements in the respective countries of the data collection; the data has also been processed and handled securely, in line with applicable rules and regulations on privacy and data protection. Deliverable _7.4 Ethical approvals_ outlines how the project has
handled sensitive data, as well as presents the required ethics approvals from
the countries where data was gathered.
Before any of the data collected were published, it went through a process of
anonymisation, aggregation and analysis, so that none of the publicly
available data can be traced back to an individual participant or respondent.
# Data summary
This chapter describes the datasets that have been gathered and processed during the project and follows the template for DMP as presented in the _Guidelines on FAIR Data Management in Horizon 2020_, version 3.0 from July 2016.
Datasets in DARWIN are defined as _organised data_ and exclude _un-organised data_. An example of unorganised data is notes from interviews, workshops and exercises that are not directly included in the project deliverables but are only used in deliverables in aggregated or analysed form. Such data was used for guidance and analysis internally in the project only and was not structured in a way that makes it reusable after the end of the project. As you will see, not all datasets from the project will be openly available after the end of the project, and in the cases where a dataset is public, there might still be parts of the dataset that remain non-public. There are five main reasons for this:
1. Data collected from volunteers participating in interviews, workshops, pilot exercises, etc. contains personal data that is confidential. The project is subject to Ethical Requirements to protect this data and ensure the participants' privacy. Only aggregated, anonymised and analysed data from datasets are included in project deliverables and/or published in articles and papers. In the cases where datasets are not made public, the main reason is that the data has the potential to be traced back to the individual participants and must remain confidential to protect their privacy.
2. The data collected in this project is context specific, and the publicly available part of this data is at the highest level of detail that can be interpreted and understood by external readers. Including more data, as in the form of "raw data", could lead to misinterpretations of the data.
3. Some of the data collected in its "raw form" in the pilot exercises reveals details of critical infrastructure operations that are to be considered _security and organisational sensitive information_, and we do not have permission from the concerned organisations to make this data available. For more information, please see deliverable 4.3, section 9.4.
4. Most data from stakeholders and participants were collected in local languages, for example in the pilot studies in Sweden and Italy. This data was then aggregated and analysed, and only the analysis of this data is available in English. To translate all raw material from interviews, workshops etc. to English would require resources beyond the availability of the DARWIN project, and would again potentially lead to the identification of individual participants (or organisations).
5. Data collected from scientific publications is in most cases copyright-protected so that datasets with entries of text taken directly from scientific publications cannot be reproduced publicly, except for occasional quotes of very limited length.
Since all descriptions of datasets follow the same template, the same wording
might be repeated between the different descriptions. The name for each data
set includes a prefix "DS" for data set, followed by the work package identifier, the partner responsible for collecting and processing the data, as well as a short title; a small parsing sketch of this convention follows Table 3. Table 3 provides an overview of the datasets collected. Updated and more detailed descriptions of each set are provided in the following sub-sections.
**Table 3: Overview of data sets**
<table>
<tr>
<th>
**No.**
</th>
<th>
**Identifier/Name**
</th>
<th>
**Brief description**
</th>
<th>
**Public**
</th> </tr>
<tr>
<td>
1
</td>
<td>
DS.WP1.FOI.Practices
</td>
<td>
This data set provides the aggregated data from an interview series conducted
with relevant practitioners to gather data on practices, needs, expectations
and experiences with crisis management and resilience.
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
2
</td>
<td>
DS.WP1.FOI.Literature.Analysis
</td>
<td>
This data set provides the aggregated data from a worldwide literature survey
(conducted in WP1) addressing crisis management and resilience.
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
3 (new)
</td>
<td>
DS.WP1.FOI.Literature.Working.Material
</td>
<td>
This data set is the DARWIN-internal dataset that provides guidance for the
DRMG developers to extract relevant input from the Literature Analysis.
</td>
<td>
No
</td> </tr>
<tr>
<td>
4 (new)
</td>
<td>
DS.WP2.SINTEF.DRMG.Working.Material
</td>
<td>
This data set collects stakeholder input/feedback on the DRMG/CCs gathered during the development phase through DCoP surveys, interviews with outside experts, interviews with project-internal experts, and cycles of revisions of the guidelines.
</td>
<td>
No
</td> </tr>
<tr>
<td>
5
</td>
<td>
DS.WP4.DBL.Pilots
</td>
<td>
This data set provides feedback and qualitative insights on the use of DARWIN
resilience management guidelines (including practices and associated methods)
by end-users, in the context of the pilot cases conducted in healthcare and
ATM as well as other related domains.
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
6 (new)
</td>
<td>
DS.WP4.DBL.Questionnaires
</td>
<td>
This data set provides feedback on the potential impact of the DRMG in
improving resilience, as perceived by the practitioners that were involved in
the different evaluation events, including the Pilot Exercises, the
Interactive Sessions of the 3 rd DCOP Workshop and all the other smaller
scale evaluation events.
</td>
<td>
No
</td> </tr>
<tr>
<td>
7
</td>
<td>
DS.WP5.KMC.DCoP_Workshops.Feedback
</td>
<td>
This data set provides qualitative insights and inputs from the DARWIN Community of Practice giving feedback on the presented project work (e.g. DRMG, simulation tool and training materials).
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
8 (new)
</td>
<td>
DS.WP5.KMC.DCoP_Workshops.Evaluation
</td>
<td>
This data set provides feedback and qualitative insights on the DCoP Workshop
organization and execution during the DARWIN project.
</td>
<td>
No
(except quotes in
D5.2,
5.3,
5.5)
</td> </tr> </table>
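To make the naming convention described above concrete, here is a small hedged sketch (ours, not part of the DMP) that splits a dataset identifier into its components:

```python
def parse_dataset_id(identifier: str) -> dict:
    """Split e.g. 'DS.WP1.FOI.Literature.Analysis' into its parts."""
    prefix, wp, partner, *title = identifier.split(".")
    if prefix != "DS":
        raise ValueError("dataset identifiers start with the 'DS' prefix")
    return {"work_package": wp, "partner": partner, "title": ".".join(title)}


parse_dataset_id("DS.WP4.DBL.Pilots")
# -> {'work_package': 'WP4', 'partner': 'DBL', 'title': 'Pilots'}
```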
## DS.WP1.FOI.Practices
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Format**
</th>
<th>
**Size of data**
</th>
<th>
**Public/Non-public**
</th> </tr>
<tr>
<td>
Text file
(Deliverable 1.1, section 3.3)
</td>
<td>
PDF/A
</td>
<td>
13 pages (PDF)
</td>
<td>
Public
</td> </tr> </table>
**Purpose of the data collection/generation:** Identify resilience and
brittleness aspects from significant crisis and everyday practices of crisis
response organisations and the public, in order to provide content to and
requirements for the DRMG.
**Relation to the objectives of the project:** This data contributed to
achieving objective 5:
_To build on “lessons learned” in the area of resilience by:_
1. _Identifying criteria that provide indicators of what works well and what does not;_
2. _Applying these criteria in defining and evolving resilience guidelines._
**Re-use of existing data:** None.
**Origin of data:** Interviews with stakeholders and practitioners.
**Data utility:** This data can be useful for actors that are interested in
issues concerning crisis management, e.g. crisis response practitioners from
safety- and security-critical complex domains, the research communities
involved with the various aspects of resilience and crisis management research
and application, and the project partners of DARWIN.
## DS.WP1.FOI.Literature.Analysis
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Format**
</th>
<th>
**Size of data**
</th>
<th>
**Public/Non-public**
</th> </tr>
<tr>
<td>
Text files:
Literature analysis in D1.1, section 2
Reference list in D1.1, appendix A
</td>
<td>
PDF/A
PDF/A
</td>
<td>
Analysis: 74 pages
Reference list: 19 pages
</td>
<td>
Public
Public
</td> </tr> </table>
**Purpose of the data collection/generation:** To identify resilience
concepts, methods, definitions, practices, tools, in order to provide content
and requirements for the DRMG.
**Relation to the objectives of the project:** This data contributed to
achieving objective 5:
_To build on “lessons learned” in the area of resilience by:_
_1\. Identifying criteria that provide indicators of what works well and what
does not; 2. Applying these criteria in defining and evolving resilience
guidelines._
**Re-use of existing data:** Systematic Literature Review (SLR): We performed an aggregation and analysis of existing (published) journal articles.
**Origin of data:** Data collected from relevant scientific journals,
identified through searching the SCOPUS database and the DARWIN Description of
Action (DoA).
**Data utility:** This aggregated and structured data that is presented in the
catalogue that is D1.1 can be useful for actors that are interested in issues
concerning crisis management, e.g. crisis response practitioners from safety-
and security-critical complex domains, the research communities involved with
the various aspects of resilience and crisis management research and
application, and the project partners of DARWIN.
## DS.WP1.FOI.Literature.Working.Material
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Format**
</th>
<th>
**Size of data**
</th>
<th>
**Public/Non-public**
</th> </tr>
<tr>
<td>
Excel spreadsheet containing specific questions for the use of creating DRMG
content, interpreting the scope and gathering relevant input to the project.
</td>
<td>
.xlsx
</td>
<td>
138 k (DoA) 2089 k (articles)
</td>
<td>
Non-public
</td> </tr> </table>
**Purpose of the data collection/generation:** To assist project partners in
navigating data from the SLR and identify resilience concepts, methods,
definitions, practices and tools, in order to provide content and requirements
for the DRMG. This data is organised as a spreadsheet database in excel format
to be used by project-internal DRMG developers, searching for input to the
guidelines.
**Relation to the objectives of the project:** This data contributed to
achieving objective 5:
_To build on “lessons learned” in the area of resilience by:_
_1\. Identifying criteria that provide indicators of what works well and what
does not; 2. Applying these criteria in defining and evolving resilience
guidelines._
**Re-use of existing data:** SLR: We performed an aggregation and analysis of
existing (published) journal articles.
**Origin of data:** Data collected from relevant scientific journals,
identified through searching the SCOPUS database and the DARWIN Description of
Action (DoA).
**Data utility:** Project internal: used by DRMG developers searching for
input to the guidelines.
## DS.WP2.SINTEF.DRMG.Working.Material
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Format**
</th>
<th>
**Size of data**
</th>
<th>
**Public/Non-public**
</th> </tr>
<tr>
<td>
Text files – stakeholder analysis:
Deliverable 2.1
Deliverable 2.4
</td>
<td>
PDF/A
PDF/A
</td>
<td>
160 pages (14,32 MB)
Approx. 300 pages
</td>
<td>
Public
Public
</td> </tr>
<tr>
<td>
Text files – adaptation of the DRMG:
Deliverable 2.2
Deliverable 2.3
</td>
<td>
PDF/A
PDF/A
</td>
<td>
137 pages (3,71 MB)
140 pages (4 MB)
</td>
<td>
Public
Public
</td> </tr>
<tr>
<td>
Text files – adaptation of the DRMG:
Hand-written notes from interviews, workshops and exercises. Data collected in local languages.
</td>
<td>
Paper documents
</td>
<td>
</td>
<td>
Non-public
</td> </tr>
<tr>
<td>
Text files – cycles of revisions involving members of the DARWIN research team:
Electronic notes and comments provided in the wiki
Electronic documents including notes and feedback on the DRMG
</td>
<td>
.txt / .docx
</td>
<td>
</td>
<td>
Non-public
</td> </tr>
<tr>
<td>
DARWIN Resilience Management Guidelines:
Part of Deliverable 2.4
Wiki
Book format
</td>
<td>
PDF/A
Online Wiki
PDF/A
</td>
<td>
Approx. 300 pages
</td>
<td>
Public
Public
Public
</td> </tr> </table>
**Purpose of the data collection/generation:** To collect feedback on the development of the DRMG and Capability Cards (CCs) and their adaptability to different domains, focusing on ATM and healthcare, and to perform cycles of revisions of the DRMG to improve their relevance and usability for end-users. The non-public data contain personal data that can be traced back to individuals and are therefore subject to data protection and privacy measures and cannot be shared.
**Relation to the objectives of the project:** This data contributed mainly to
achieving objective 1 but also other objectives (see deliverable 2.4 for more
details):
_To make resilience guidelines available in a form that makes it easy for a
particular infrastructure operator to apply them in practice, by:_
1. _Surveying and cataloguing resilience concepts, approaches, practices, tactics and needs_
2. _Adapting/customising them to the needs of a domain or specific organisation;_
3. _Utilization of social media by emergency authorities, first responders and the public as part of resilience management;_
4. _Quickly locating and accessing the details relevant to a specific situation;_
5. _Integrating them within existing working processes within organisations;_
6. _Entering new information (e.g. based on practical experience) that updates the guidelines (to “learn and evolve”)._
**Re-use of existing data:** Input from WP1 and WP4 deliverables.
**Origin of data:** Surveys, interviews and workshops with both
project-internal experts and with practitioners, end-users and external
experts who are members of the DCoP.
**Data utility:**
Non-public data: Project internal - used by DRMG developers for input to
development of the DRMG.
Public data:
_Deliverable 2.1:_ Practitioners and researchers outside the project that are
involved in developing the resilience of critical infrastructures, and to
developers of guidelines: 1) the development process (including assessment and
revision activities) is described in detail in order to provide potential
methodological support; 2) the content, organisation and nature of the
guidelines can serve as a source of reference; and, 3) the development of the
DARWIN Wiki highlights the issues of knowledge management and access associated
with the evolving guidelines content, and implements various capabilities to
support such efforts.
_Deliverable 2.2:_ Useful for policy makers, healthcare crisis managers,
healthcare critical infrastructure managers, the healthcare community of
practice and other CIs as a source of inspiration when adapting resilience
guidelines for their domains.
_Deliverable 2.3:_ Useful for ATM stakeholders (i.e. policy makers, crisis
managers, critical infrastructure managers and the community of practice) and
other CIs as a source of inspiration when adapting resilience guidelines for
their domains.
_Deliverable 2.4:_ Primary users are managers and stakeholders responsible for
CIs who are interested in adapting and adopting resilience management
guidelines in their organisation, especially within the ATM and healthcare
domain, but also relevant to other CIs. Other groups this could be useful for
include: 1) members of the DCoP and of the DARWIN consortium who might be
involved in pursuing this work, expanding and improving the guidelines
described here; 2) practitioners and researchers outside the project who are
involved in enhancing the resilience of critical infrastructures; and, 3)
other developers of guidelines, who might find insight in the content and
process described.
## DS.WP4.DBL.Pilots
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Format**
</th>
<th>
**Size of data**
</th>
<th>
**Public/Non-public**
</th> </tr>
<tr>
<td>
Text files:
Deliverable 4.3
Deliverable 4.4
</td>
<td>
PDF/A
PDF/A
</td>
<td>
140 pages
180 pages
</td>
<td>
Public
Public
</td> </tr>
<tr>
<td>
Excel spreadsheets
(Overall Summative and Formative
Evaluation grid)
</td>
<td>
.xlsx
</td>
<td>
1 spreadsheet with 6
tabs
278KB
</td>
<td>
Non-public
</td> </tr>
<tr>
<td>
Audio recordings
</td>
<td>
.m4a
.mp3
</td>
<td>
560 MB
177 MB
</td>
<td>
Non-public
</td> </tr> </table>
**Purpose of the data collection/generation:** To provide accounts of the
experiences of involved personnel and end-users in using the DRMG, providing
feedback to the development and supporting the improvement of end results.
**Relation to the objectives of the project:** This data contributed to
achieving objective 6:
_To carry out two pilots that apply project results in two key areas - Health
care and Air Traffic Management (ATM) – and use the experience gained to
improve project results and demonstrate their practical benefits in these
domains, as well as add value to established risk management practices and
guidelines._
**Re-use of existing data:** None.
**Origin of data:** Focus groups, workshops, interviews with and observations
of participants at pilot exercises.
**Data utility:** This data can be useful
for practitioners and researchers that are interested in the result of the
assessment of the DRMG. The data can also be of interest to other
organisations that operate in the same crisis management domains tested in the
pilot exercises and would like to know more about the effects of adopting the
DRMG (with a focus on, but not limited to, healthcare and ATM).
## DS.WP4.DBL.Questionnaires
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Format**
</th>
<th>
**Size of data**
</th>
<th>
**Public/Non-public**
</th> </tr>
<tr>
<td>
Excel spreadsheets
</td>
<td>
.xlsx
</td>
<td>
11 Spreadsheets
(one tab each)
</td>
<td>
Non-public
</td> </tr>
<tr>
<td>
Google Forms Entries
(De-identified and accessible only to DBL)
</td>
<td>
PDF/A
</td>
<td>
182 Entries
</td>
<td>
Non-public
</td> </tr> </table>
**Purpose of the data collection/generation:** To collect additional feedback
on the DRMGs and CCs after each pilot exercise, as well as in smaller scale
evaluations including other domains, separate from and in between pilot
exercises. Data were collected via both online surveys and paper
questionnaires (one per CC, plus one for the DARWIN Wiki as a whole). The
structure and content of the questionnaire were the same in both formats. The
resulting data were aggregated into Excel spreadsheets, and an anonymised
analysis was included in the overall evaluation documents described in
section 3.5. The questionnaire data itself was only used internally in the
project, for reasons listed in the introduction to this section.
**Relation to the objectives of the project:** This data contributed to the
achievement of objective 6:
_To carry out two pilots that apply project results in two key areas - Health
care and Air Traffic Management (ATM) – and use the experience gained to
improve project results and demonstrate their practical benefits in these
domains, as well as add value to established risk management practices and
guidelines._
**Re-use of existing data:** None.
**Origin of data:** Questionnaires (both as online survey and paper format).
**Data utility:** Project-internal DRMG developers: This data was used to feed
the Summative and Formative Evaluation in combination with qualitative data
deriving from Pilot Exercises.
## DS.WP5.KMC.DCoP_Workshops_Input
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Format**
</th>
<th>
**Size of data**
</th>
<th>
**Public/Non-public**
</th> </tr>
<tr>
<td>
Text files:
Deliverable 5.2
Deliverable 5.3
Deliverable 5.5
</td>
<td>
PDF/A
PDF/A
PDF/A
</td>
<td>
27 pages (746,42 KB)
51 pages (1,88 MB)
71 pages (2,47 MB)
</td>
<td>
Public
Public
Public
</td> </tr>
<tr>
<td>
Text files (paper)
Unprocessed original paper questionnaires
</td>
<td>
Paper documents
</td>
<td>
370 A4 pages
</td>
<td>
Non-public
</td> </tr> </table>
**Purpose of the data collection/generation:** Data collected and generated in
this set had two main purposes: 1) to establish and manage a community of
crisis and resilience practitioners that would, in turn, 2) provide input from
end-users and practitioners to WP2, WP3 and WP4 to improve results and ensure
that the DRMG and associated tools are relevant, useful and adaptable across
different domains and critical infrastructures.
**Relation to the objectives of the project:** This data contributed to
achieving objectives 4 and 5.
_Objective 4: To establish a forum - the Community of Resilience and Crisis
Practitioners - with a lifetime that will extend beyond the end of the
project, that will:_
1. _Bring together infrastructure operators, policy makers and other relevant stakeholders;_
2. _Allow them to exchange views and experiences in a dynamic, interactive and fluent way enabled by social media;_
_Objective 5: To build on “lessons learned” in the area of resilience by:_
1. _Identifying criteria that provide indicators of what works well and what does not;_
2. _Applying these criteria in defining and evolving resilience guidelines._
**Re-use of existing data:** None.
**Origin of data:** Data collected at 3 face-to-face workshops held at KMC's
premises in Linköping, Sweden; 6 webinars using GoToMeeting; and, 1 DCoP
Questionnaire.
**Data utility:** The utility of this data was mainly internally in the
project, as input to develop and improve project results. However, the results
of the DCoP questionnaire will also be useful for the members of the DCoP who
will participate in the community beyond the end of the project, as well as
for other related research projects that are interested in establishing
similar communities or connecting with the DCoP.
## DS.WP5.KMC.DCoP_Workshops_Evaluation
<table>
<tr>
<th>
**Type of data**
</th>
<th>
**Format**
</th>
<th>
**Size of data**
</th>
<th>
**Public/Non-public**
</th> </tr>
<tr>
<td>
Excel spreadsheets
(Input to deliverables 5.2, 5.3, 5.5)
</td>
<td>
.xlsx
</td>
<td>
25 KB
</td>
<td>
Non-public
</td> </tr>
<tr>
<td>
Unprocessed original paper questionnaires, notes, and post-its
</td>
<td>
Paper documents
</td>
<td>
259 A4 pages
</td>
<td>
Non-public
</td> </tr> </table>
**Purpose of the data collection/generation:** Collect feedback from
participants at DCoP workshops and webinars to improve and tailor future
events to their wants and needs.
**Relation to the objectives of the project:** This data directly contributed
to achieving objective 4:
_To establish a forum - the Community of Resilience and Crisis Practitioners -
with a lifetime that will extend beyond the end of the project, that will:_
1. _Bring together infrastructure operators, policy makers and other relevant stakeholders;_
2. _Allow them to exchange views and experiences in a dynamic, interactive and fluent way enabled by social media._
**Re-use of existing data:** None.
**Origin of data:** Evaluation surveys.
**Data utility:** This data was used internally in the project, to improve
each DCoP workshop and raise the attractiveness of becoming a DCoP member and
participating at these events. Part of this data was transcribed and is
presented in Deliverables 5.2, 5.3, 5.5. Tentatively, the unprocessed data
could be analysed in future scientific publications.
# FAIR Data Management
## Making data findable
**Discoverability of data (metadata provision):** No unprocessed data
collected through pilot exercises, interviews, workshops, questionnaires and
surveys (such as interview notes) will be made available, for the reasons
explained at the beginning of section 3. Metadata in the form of descriptions
of the processes and methodologies used to collect the data, in addition to
the public part of all datasets, are included in the deliverables available in
PDF/A format on the DARWIN project website 7 .
**Identifiability of data:**
* No system for unique identifiers, such as Digital Object Identifiers, has been applied to the publicly available data in this project.
* For the internal organisation of confidential, anonymised metadata collected during the third DCoP workshop, a Google Forms questionnaire created by WP4 was used. A numbering system was used to preserve each participant's anonymity and privacy while at the same time enabling tracking of responses between sessions for comparison and analysis.
**Naming conventions used:** DARWIN deliverables, which contain all publicly
available data generated in the project, make use of the same persistent
system for identifiers: The identifier starts with the name of the project as
a prefix, followed by a "D" for deliverable, followed by the number of the WP,
followed by the number of the deliverable in that WP, and ending with the full
title of the document, such as: _DARWIN_Dx.y_Title of deliverable_ .
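As an illustration of this convention, the hedged Python sketch below composes and validates such identifiers; the regular expression and helper names are our own assumptions for illustration, not project tooling.

```python
import re

# Hedged sketch: composes and validates identifiers of the form
# "DARWIN_Dx.y_Title of deliverable" described above. The regex and
# helper names are illustrative, not official project tooling.
PATTERN = re.compile(r"^DARWIN_D(\d+)\.(\d+)_(.+)$")

def make_identifier(wp: int, number: int, title: str) -> str:
    """Compose an identifier from WP number, deliverable number and title."""
    return f"DARWIN_D{wp}.{number}_{title}"

def parse_identifier(identifier: str):
    """Return (wp, deliverable, title), or raise ValueError for non-conforming names."""
    match = PATTERN.match(identifier)
    if match is None:
        raise ValueError(f"Not a DARWIN deliverable identifier: {identifier!r}")
    wp, number, title = match.groups()
    return int(wp), int(number), title

name = make_identifier(2, 4, "DARWIN Resilience Management Guidelines")
print(parse_identifier(name))  # (2, 4, 'DARWIN Resilience Management Guidelines')
```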
**Approach to search keywords:** All DARWIN deliverables include search
keywords on the cover page.
**Approach for clear versioning:** All DARWIN deliverables include a table on
page 3 containing clear versioning and a description of document history.
**Standards for metadata creation:** No metadata will be made publicly
available. Internally in the project, Excel spreadsheets were used to
aggregate and organise metadata collected from different sources of evidence.
## Making data openly accessible
**Open data:** All open data in DARWIN is included in the deliverables and the
DARWIN wiki, all of which are available through the project website 8 . The
open data consist of analyses of aggregated data collected from different
sources
of evidence, as well as descriptions of processes and methodologies for data
collection and generation.
**Closed data:** All unprocessed data collected through pilot exercises,
interviews, workshops, questionnaires and surveys (such as interview notes)
will remain closed/confidential, for the reasons described at the beginning of
section 3. For WP1, the rationale for keeping the dataset
"DS.WP1.Literature.Working.Material" closed, and only available to partners
participating in the SLR, is that the data in this spreadsheet consist of
interpretations of which parts of the scientific content of the SLR journals
are relevant and useful to DARWIN. In addition, sharing data directly from
this spreadsheet, parts of which are copied verbatim from the scientific
journals themselves, would violate copyright laws.
**How and where data will be made available:** All publicly available data is
made available on the DARWIN project website – either in the form of PDF/A
documents (deliverables), or in the DARWIN Wiki 9 . The DARWIN Wiki also
includes an option to create and download a "book version" of its content in
PDF/A format. The open research data collected in the project is archived in
NSD's research data repository. NSD is one of the largest archives of its kind
and used by researchers and students in Norway and abroad. Using the NSD data
repository will ensure long-term and secure preservation of the data and
results from the project. In addition, all deliverables are included in
SINTEF's Open Research Data Repository 10 .
**Methods, software or tools needed to access the data:** No specific method,
software or tool, other than an internet connection and internet browser, will
be needed to access the publicly available data from DARWIN.
**Access restrictions:** There will be no access restrictions on any open data
from DARWIN. The only minor restriction is that the DARWIN Wiki and its
content are subject to a Creative Commons CC-BY 4.0 license, which requires
users to give credit to the DARWIN project and the European Commission as
funding agency when the content is reused.
### Open access to publications
The DARWIN project has followed the policy that any publications from the
project must be available as open access (as far as practically possible).
There are two main routes for providing open access publications: Green and
Gold (see Figure 2). Gold open access means the article is made available as
open access by the scientific publisher. Some journals require an
article-processing fee for publishing open access. Green open access, or
self-archiving, means that the published article or the final peer-reviewed
manuscript is archived by the researcher in an online repository (e.g. the
project website or the SINTEF Open research repository), in most cases after
its publication. Most journals within the social sciences require authors to
delay self-archiving in repositories until 12 months after the article is
first published.
**Figure 2: Open Access routes (source: European IPR Helpdesk)**
The project has published more than 5 peer-reviewed publications. The project
members strive to publish in journals where free open access is available
(gold open access), as far as possible. On some occasions, priority might be
given to high-impact journals or conferences where full open access might not
be available. Highly ranked journals are important for achieving impact in the
area of science and knowledge. Details on publications, journals, conferences
and updated KPIs are included in deliverable 6.7 Dissemination, exploitation
and external collaborations strategy.
## Making data interoperable
**Interoperability of data:** All publicly available data in DARWIN are made
available in text formats, namely PDF/A, or as text in the wiki. The
reference list in DS.WP1.Literature.Analysis uses the APA standard for
referencing. All context-specific metadata is summarised at a level that is
not relevant for data pooling.
## Increase data re-use
**Licenses:** The only data from the project subject to a license is the
DARWIN wiki and its content. This is covered by a Creative Commons CC-BY 4.0
license, which lets users use the wiki freely, but requires them to credit the
project and the European Commission as funding agency if any data from it is
referred to or reused externally.
under no restrictions for re-use. For more information on IPR principles
applied to other DARWIN results (e.g. simulation tool), please see deliverable
6.8 Plan for Business and Exploitation of Results (Final).
**Re-use:** All deliverables are available for download and re-use on the
DARWIN project website as soon as possible after being submitted to the
European Commission. All public/open deliverables include a description in
section 1 of the intended readership of each deliverable. This outlines who
the deliverable might be useful for outside the project consortium and
provides guidance to external readers on whether the content of the
deliverable is relevant and interesting for them to re-use. Non-public data
from the project will remain available only to the consortium partners after
the end of the project.
**Restrictions on re-use and data embargo periods:** No data embargo period
will be applied to the open deliverables from the DARWIN project. The DARWIN
wiki is currently closed in that a user account login is required to access
the data. All the members of the DCoP have access to the wiki through such
user accounts. On October 15th 2018 the user account restrictions will be
removed and the DARWIN Wiki will be openly available to anyone who visits the
project website. There is no time-limit on the availability of the open data
from the DARWIN project; it will be available on the project website, in the
NSD archives, and in the SINTEF data repository for an unlimited time-period.
No restrictions on re-use, apart from the license mentioned above, apply to
the open data from DARWIN.
# Data security
The coordinating organisation of the DARWIN project, SINTEF, is subject to the
laws and guidelines that are relevant for this project in Norway, which at the
beginning of the project were the Personal Data Act _LOV-2000-04-14 nr. 31_ and
the Ethical guidelines for Internet Research 14. As of June 20th 2018 this
law was replaced by _LOV-2018-06-15-38_ , which updates the Personal Data Act
to implement the EU's General Data Protection Regulation (GDPR) in Norway and makes it Norwegian
law. The Norwegian Data Inspectorate is an independent administrative body
that ensures the enforcement of the new Personal Data Act. The Norwegian
Social Science Data Services (NSD) is its partner for implementation of the
statutory data privacy requirements in the research community. At the
beginning of the project SINTEF reported all planned studies to NSD. This
means that specific efforts have been taken towards ensuring the privacy of
participants who take part in DARWIN studies, regardless of whether they live
in Norway or in any other partner‐country. Other partners have similarly been
bound by local 11 and EU-level legislation 12 as well as following their
own in-house ethical procedures in association with research projects (e.g.
BGU submits research conducted by the university personnel to an
Internal Review Board committee that has independent authority, and the
studies are conducted only after approval has been provided in writing).
As mentioned in section 2, the project has taken steps to ensure that the
handling and storing of data are in accordance with EU law, in particular the
GDPR. All personal data has been stored (if required in encrypted format) on
secure, password/ token‐protected servers.
During the project period, personal data has been de-identified; i.e. names
and other characteristics that could identify a person have been removed and
replaced by a number, which refers to a separate list of identifiable data.
Once the project has finished, data will be completely anonymized, meaning
links to lists of names and contact-information will be deleted and the
anonymisation will be irreversible. No personal data will be stored after the
end of the project period.
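The following minimal Python sketch illustrates this two-step scheme, pseudonymisation during the project followed by irreversible anonymisation; all field names, structures and storage choices are illustrative assumptions, not the project's actual tooling.

```python
from itertools import count

# Hedged sketch of the de-identification scheme described above: names are
# replaced by numbers, the number-to-identity list is kept separately during
# the project, and deleting that list makes the anonymisation irreversible.
# All field names and structures are illustrative assumptions.
_participant_ids = count(1)
key_list = {}  # participant number -> identifiable data; stored separately and securely

def de_identify(record: dict) -> dict:
    """Replace identifying fields with a participant number (de-identification)."""
    pid = next(_participant_ids)
    key_list[pid] = {"name": record["name"], "contact": record["contact"]}
    return {"participant": pid, "responses": record["responses"]}

def anonymise() -> None:
    """Delete the key list, irreversibly removing the link to identities."""
    key_list.clear()

data = de_identify({"name": "Jane Doe", "contact": "[email protected]",
                    "responses": [4, 5, 3]})
print(data)   # {'participant': 1, 'responses': [4, 5, 3]}
anonymise()   # after the project: no link to personal data remains
```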
All open research data from DARWIN will be documented and archived in the
NSD's research data repository 13 , and thus placed at the disposal of
colleagues who want to re-use or elaborate on its findings 17 .
We ensure that personal data is kept securely. Any publications, including
online publications, will neither directly nor indirectly lead to a breach of
agreed confidentiality and anonymity 18 .
The research outcomes are reported without contravening the right to privacy
and data protection (see Deliverable 7.4 _Ethical Approvals_ , section 2,
Requirement ER7, regarding FOI and KMC practices concerning personal data).
# Executive Summary
This deliverable describes the research datasets that will be produced within
CHEOPS and if and how they will be made available.
**Need for the Deliverable**
It is in the interest of society that best use is made from datasets obtained
with the help of public funding. To this end, consideration of data format and
post-project storage and curation should be made at an early stage.
**Objectives of the Deliverable**
With the help of this deliverable it shall be ensured that as much research
data as possible is ‘FAIR’:
* **F** indable
* **A** ccessible
* **I** nteroperable to specific quality standards
* **R** e-useable beyond the original purpose for which it was collected
## Outcomes
All WP leaders have defined and described the most important datasets
generated within their work package.
## Next steps
The DMP is not a fixed document, but rather represents the current status of
reflection within the consortium about the data that will be produced and the
DMP will evolve during the lifespan of the project.
# 1 Introduction
In Horizon 2020, the EC is implementing a pilot action on open access to
research data. Participation in the pilot is voluntary, but participating
projects are required to develop a Data Management Plan (DMP), in which they
specify which data will be open. The CHEOPS consortium has chosen not to
participate in the pilot action, but nevertheless has promised to deliver a
DMP as part of WP6.
## 1.1 Objectives
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the CHEOPS consortium with regard
to all the datasets that will be generated by the project. The DMP is not a
fixed document, but rather represents the current status of reflection within
the consortium about the data that will be produced and the DMP will evolve
during the lifespan of the project. According to the _EC Guidelines on Data
Management in Horizon_ _2020_ 1 , scientific research data should be ‘FAIR’:
* **_F_ indable ** : Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?
* **_A_ ccessible ** : Are the data and associated software produced and/or used in the project accessible and in what modalities, scope, licenses (e.g. licencing framework for research and education, embargo periods, commercial exploitation, etc.)?
* **_I_ nteroperable ** : Are the data produced and/or used in the project interoperable, that is allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available [open] software applications, and in particular facilitating re-combinations with different datasets from different origins)?
* **Re-usable** : Are the data produced and/or used in the project useable by third parties, in particular after the end of the project?
According to the EC guidelines the DMP should describe how the FAIR principles
will be implemented. In addition, the DMP should also address the allocation
of resources to data management, data security, ethical aspects and any
additional procedures for data management that are made use of.
## 1.2 Structure
The DMP is organised as follows: After a section with general considerations
that are shared for all datasets (Section 2), the 5 datasets are described in
Sections 3 to 7 and all questions listed in the DMP template of the _EC
Guidelines on Data Management in Horizon 2020_ 1 are addressed for each
dataset.
# 2 General considerations (applying to all datasets)
## 2.1 Data archiving and preservation
CHEOPS intends to deposit the data selected for sharing in one of the EU
supported open access data repositories such as OpenAIRE _www.openaire.eu_ or
ZENODO _https://zenodo.org_ . The ultimate decision on the repository and
the modalities will be taken at the time the datasets are available.
## 2.2 Metadata
It is planned to make the metadata available through the same repository the
data will be stored in. The metadata will use standard format as much as
possible. ZENODO for instance stores all metadata internally in the MARC
(MAchine-Readable Cataloging) format and allows export into several standard
formats such as MARCXML, Dublin Core and DataCite Metadata Schema according to
OpenAire Guidelines.
The metadata will include, among others (a sketch of such a record follows this list):
* The terms “European Union (EU)”, “Horizon 2020”, “perovskite”, “photovoltaics” and “solar cell”
* The project acronym “CHEOPS” as well as the grant number “653296”
* The publication date
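As an illustration, the hedged sketch below assembles such a metadata record in Python using the default keywords above; the field names loosely mirror a ZENODO-style deposit record and would need to be checked against the chosen repository's current API, and the title, description and date are placeholders.

```python
import json

# Hedged sketch of a dataset metadata record using the default keywords
# listed above. The dict loosely mirrors ZENODO-style deposit metadata;
# exact field names must be checked against the chosen repository's API,
# and the title, description and date below are placeholders.
metadata = {
    "title": "CHEOPS dataset on perovskite single junction PV devices",
    "upload_type": "dataset",
    "description": "Device structures, process conditions and measured "
                   "performances of perovskite single junction PV devices.",
    "keywords": [
        "European Union (EU)", "Horizon 2020",
        "perovskite", "photovoltaics", "solar cell",
        "CHEOPS", "653296",
    ],
    "publication_date": "2019-01-01",  # placeholder: set at actual release
}

print(json.dumps(metadata, indent=2))
```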
## 2.3 Making data findable
Typically, the organisations hosting a data repository (e.g. ZENODO) are
assigning a unique Digital Object Identifier.
The CHEOPS project website will be kept online for at least 3 years after the
end of the project and will contain references to the datasets in the open
access repositories.
## 2.4 Licensing and re-use of data
It is the intention of the CHEOPS consortium to allow the widest possible re-
use of data. Creative Commons licensing is considered for some datasets, but
currently no definitive decision has been taken. Apart from the restrictions
of the Creative Commons license chosen (if this is the case) no other
restrictions for re-use of data will apply.
## 2.5 Time of making datasets available
As a general rule, datasets will not be released before the publication date
of the scientific paper in which the data are reported the first time. It is
the intention of the CHEOPS consortium to make the datasets publicly available
as early as possible after the publication date, but potential restrictions or
embargo periods of the scientific journal will have to be respected. CHEOPS WP
leaders will jointly review the status of actual upload of the datasets at the
occasion of the 6-monthly meetings of the Executive Board and the Annual
Meetings.
## 2.6 Data security
Regular backup schemes are usually in place for the data repositories and
CHEOPS does not have to take care of this.
There is no sensitive data such as personal or health data collected or
processed in CHEOPS and therefore no specific requirements apply.
## 2.7 Ethical aspects
No ethical aspects need to be considered for the datasets concerned. This has
been confirmed by the comment of the EC Project Officer in the request for
revision of this deliverable.
## 2.8 Updates of the Data Management Plan
The DMP is not a fixed document, but rather represents the current status of
reflection within the consortium about the data that will be produced and the
DMP will evolve during the lifespan of the project. The next updated version
of the DMP will be produced in Month 18. A further update is scheduled for
Month 30, to allow sufficient time in the final 6 months of the project to
implement the plan and store the research data in the repository.
# 3 Dataset on perovskite single junction PV devices (from WP1)
## 3.1 Data summary
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**State the purpose of the data collection / generation**
</td>
<td>
In the framework of WP1, data will be produced on typical structures (for 2
polarities) of the perovskite single junction devices used, with typical
structure composition and thicknesses of the individual layers. Information
(non-confidential) on process conditions of the different layers will also be
documented. Finally, information on the achieved device performances will
also be measured, along with descriptions of the procedure that has been used
to measure the data.
The purpose of the data generation is to allow comparison between perovskite
PV devices with different structures to ultimately identify the best
combination of processing techniques and materials.
</td> </tr>
<tr>
<td>
**Explain the relation to the objectives of the project**
</td>
<td>
The data is produced as part of the process to reach the project’s technical
objective 1 (TO1 in the DoA) of upscaling the perovskite PV technology.
</td> </tr>
<tr>
<td>
**Specify the types and formats of data generated / collected**
</td>
<td>
The data on device structure and process conditions of the different layers
will be descriptive, while the measured device performances will each consist
of a combination of (see the sketch after this table):
1. Name of the parameter measured
2. Numeric value measured
3. Physical unit
</td> </tr>
<tr>
<td>
**Specify the origin of the data**
</td>
<td>
The data will be documented or measured by the CHEOPS partners in WP1.
</td> </tr>
<tr>
<td>
**State the expected size of the data (if known)**
</td>
<td>
Not known yet.
</td> </tr>
<tr>
<td>
**Outline the data utility: To whom it will be useful**
</td>
<td>
These data could be useful for people working in the field of perovskite based
PV and in general to the PV community.
Similar data already exist in several published works from different groups.
Our dataset could be compared to this already existing information.
</td> </tr> </table>
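As an illustration of the measured-performance format listed in the table above, the following hedged Python sketch records parameter/value/unit triplets; the parameter names and values are illustrative examples, not project data.

```python
from dataclasses import dataclass

# Hedged sketch of the measured-performance triplets described in the table
# above (parameter name, numeric value, physical unit). Parameter names and
# values below are illustrative examples, not project data.
@dataclass
class Measurement:
    parameter: str  # name of the parameter measured
    value: float    # numeric value measured
    unit: str       # physical unit

device_performance = [
    Measurement("open-circuit voltage", 1.08, "V"),
    Measurement("short-circuit current density", 22.5, "mA/cm^2"),
    Measurement("power conversion efficiency", 18.3, "%"),
]

for m in device_performance:
    print(f"{m.parameter}: {m.value} {m.unit}")
```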
## 3.2 FAIR Data
### 3.2.1 Making data findable, including provisions for metadata
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Outline the discoverability of data (metadata provision)**
</td>
<td>
_See general considerations in Sections 2.2 and 2.3 above._
</td> </tr>
<tr>
<td>
**Outline the identifiability of data and refer to standard identification
mechanisms. Do you make use of persistent unique identifiers such as Digital
Object Identifiers?**
</td>
<td>
_See general considerations in Sections 2.2 and 2.3 above._
</td> </tr>
<tr>
<td>
**Outline naming conventions used**
</td>
<td>
To be defined at a later stage.
</td> </tr>
<tr>
<td>
**Outline the approach towards search keyword**
</td>
<td>
_See general considerations and list of default keywords in Section 2.2
above._
</td> </tr>
<tr>
<td>
**Outline the approach for clear versioning**
</td>
<td>
To be defined at a later stage.
</td> </tr>
<tr>
<td>
**Specify standards for metadata creation (if any). If there are no standards
in your discipline describe what type of metadata will be created and how.**
</td>
<td>
This kind of dataset can typically be found in scientific publications, but
there exist no standards for these publications. Very recently, in October
2015, the “Nature Materials” journal made an attempt at harmonisation by
developing a checklist for photovoltaic research:
_http://www.nature.com/nmat/journal/v14/n11/full/nmat4473.html_
CHEOPS will consider this checklist and monitor its further development, as it
might help to create a metadata set by allowing proper comparison between the
different published results.
</td> </tr> </table>
### 3.2.2 Making data openly accessible
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Specify which data will be made openly available. If some data is kept
closed provide rationale for doing so**
</td>
<td>
In the framework of CHEOPS, the information on device performances, stability,
measurement protocols and device structures will be shared.
</td> </tr>
<tr>
<td>
**Specify how the data will be made available**
</td>
<td>
Currently data are shared via scientific communication, either in the form of
scientific papers or at conferences via oral or visual presentations. While
the peer-reviewed publications will be made available as open access, it is
the intention to also provide the underlying data itself by storing it in an
open repository ( _see Section 2.1 above_ ).
</td> </tr> </table>
<table>
<tr>
<th>
**Specify what methods or software tools are needed to access the data. Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?**
</th>
<th>
No special software is needed to access the data.
</th> </tr>
<tr>
<td>
**Specify where the data and associated metadata, documentation and code are
deposited.**
</td>
<td>
_See general considerations outlined in Section 2.1 above._
</td> </tr>
<tr>
<td>
**Specify how access will be provided in case there are any restrictions**
</td>
<td>
The data that will be made available will be available without restrictions
</td> </tr> </table>
### 3.2.3 Making data interoperable
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.**
</td>
<td>
_See last point under Section 3.2.1 above._
</td> </tr>
<tr>
<td>
**Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow interdisciplinary interoperability? If not,
will you provide mapping to more commonly used ontologies?**
</td>
<td>
No standard vocabulary currently available.
_Please also see the comments made under Section 3.2.1 above_ .
</td> </tr> </table>
### 3.2.4 Increase data re-use (through clarifying licenses)
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Specify how the data will be licensed to permit the widest re-use possible**
</td>
<td>
_See general considerations under Section 2.4 above._
</td> </tr>
<tr>
<td>
**Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed.**
</td>
<td>
_See general considerations under Section 2.5 above._
</td> </tr>
<tr>
<td>
**Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the reuse of
some data is restricted, explain why.**
</td>
<td>
_See general considerations under Section 2.4 above._
</td> </tr>
<tr>
<td>
**Describe the data quality assurance process**
</td>
<td>
For use within CHEOPS, a standardised sample size and geometry as well as
standard operating procedures (SOP) for sample shipment and sample measurement
has been agreed upon. Following these SOPs will be mandatory for all
measurements carried out during the project. The SOPs were made available to
the CHEOPS consortium in deliverable D6.3, the “Quality and Best Practice
Manual” and will be made publicly available.
</td> </tr>
<tr>
<td>
**Specify the length of time for which the data will remain re-useable.**
</td>
<td>
The data made available will remain re-useable for an unrestricted duration.
</td> </tr> </table>
## 3.3 Allocation of resources
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs**
</td>
<td>
The small amount of staff costs required for inserting the data into the
repository will be covered from the CHEOPS project budget.
The hosting of the data in the repository is typically free (e.g. ZENODO).
</td> </tr>
<tr>
<td>
**Clearly identify responsibilities for data management in your project**
</td>
<td>
General decisions (e.g. on licences or repositories) will be taken by the
Executive Board. For actual implementation of data management in each WP, the
WP leaders are responsible.
For this dataset, WP1 leader CSEM is in charge.
</td> </tr>
<tr>
<td>
**Describe costs and potential value of long term preservation**
</td>
<td>
There will be no long-term costs for CHEOPS partners for maintaining the data
repository.
</td> </tr> </table>
## 3.4 Data security
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Address data recovery as well as secure storage and transfer of sensitive
data**
</td>
<td>
_See general considerations in Section 2.6._
</td> </tr> </table>
## 3.5 Ethical aspects
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former**
</td>
<td>
Not applicable. _See also general consideration in Section 2.7_
</td> </tr> </table>
## 3.6 Other
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Refer to other national / funder / sectorial / departmental procedures for**
**data management that you are using (if any)**
</td>
<td>
Not applicable.
</td> </tr> </table>
# 4 Dataset on stability testing and encapsulation methods of Perovskite PV devices (from WP2)
## 4.1 Data summary
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**State the purpose of the data collection / generation**
</td>
<td>
The purpose of the data collection is to test the efficacy of the different
encapsulation methods used by the CHEOPS consortium members. Analysis of
encapsulation layers will be carried out via calcium tests. Additionally,
calcium tests will also assess the quality of the transportation devices used
during the exchange of samples among partners. The information produced is not
confidential.
</td> </tr>
<tr>
<td>
**Explain the relation to the objectives of the project**
</td>
<td>
The data is produced as part of the process to reach the project’s technical
objective 1 (TO1 in the DoA) of upscaling the perovskite PV technology.
</td> </tr>
<tr>
<td>
**Specify the types and formats of data generated / collected**
</td>
<td>
The measured data from the calcium test will each consist of a combination of (see the sketch after this table):
1. Description of the encapsulation process used
2. Name of the CHEOPS partner providing the sample
3. Date of the measurement
4. Humidity
5. Temperature
6. Transmission of light over time
</td> </tr>
<tr>
<td>
**Specify the origin of the data**
</td>
<td>
The data will be documented or measured by Fraunhofer from samples delivered
by partners in WP2.
</td> </tr>
<tr>
<td>
**State the expected size of the data (if known)**
</td>
<td>
A few KB per sample.
</td> </tr>
<tr>
<td>
**Outline the data utility: To whom it will be useful**
</td>
<td>
These data could be useful for people working in the field of perovskite based
PV and in general to the PV community.
Similar data already exist in several published works from different groups.
Our dataset could be compared to this already existing information.
</td> </tr> </table>
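The hedged sketch below shows how one such calcium-test record, with the six fields listed in the table above, could be serialised as a CSV row for aggregation; all field names and values are illustrative assumptions, not project data.

```python
import csv
import io

# Hedged sketch: one calcium-test record with the six fields listed in the
# table above, serialised as a CSV row so that records from different
# samples can be aggregated. All field names and values are illustrative.
FIELDS = ["encapsulation_process", "partner", "measurement_date",
          "humidity_percent", "temperature_celsius", "transmission_over_time"]

record = {
    "encapsulation_process": "glass-glass edge seal",  # description of process used
    "partner": "Example partner",                      # CHEOPS partner providing the sample
    "measurement_date": "2017-06-01",
    "humidity_percent": 85,
    "temperature_celsius": 65,
    "transmission_over_time": "0h:0.92;24h:0.91;48h:0.89",  # light transmission samples
}

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(record)
print(buffer.getvalue())
```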
## 4.2 FAIR Data
### 4.2.1 Making data findable, including provisions for metadata
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Outline the discoverability of data (metadata provision)**
</td>
<td>
_See general considerations in Sections 2.2 and 2.3 above._
</td> </tr>
<tr>
<td>
**Outline the identifiability of data and refer to standard identification
mechanisms.**
**Do you make use of persistent unique identifiers such as Digital Object
Identifiers?**
</td>
<td>
_See general considerations in Sections 2.2 and 2.3 above._
</td> </tr>
<tr>
<td>
**Outline naming conventions used**
</td>
<td>
_To be defined at a later stage._
</td> </tr>
<tr>
<td>
**Outline the approach towards search keyword**
</td>
<td>
_See general considerations and list of default keywords in Section_
_2.2 above._ Additional keywords specifically for this dataset will be
‘Calcium test’ and ‘transmission rate’.
</td> </tr>
<tr>
<td>
**Outline the approach for clear versioning**
</td>
<td>
_To be defined at a later stage._
</td> </tr>
<tr>
<td>
**Specify standards for metadata creation (if any). If there are no standards
in your discipline describe what type of metadata will be created and how.**
</td>
<td>
Stability measurement standards have been developed for different types of
photovoltaics. In this project, we are following standards developed for
organic photovoltaics. This information can be found in Reese _et al._
_DOI:10.1016/j.solmat.2011.01.036_ , where the ISOS-D-3 protocol, the standard
used by the consortium, is described in great detail. The calcium test, on the
other hand, is a popular method for testing the permeation of water vapour
through a membrane and is widely explained in the literature, e.g. _DOI:
10.1016/S0040-6090(02)00584-9_ .
</td> </tr> </table>
### 4.2.2 Making data openly accessible
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Specify which data will be made openly available. If some data is kept
closed provide rationale for doing so**
</td>
<td>
In the framework of CHEOPS, the information on device performances, stability,
measurement protocols and device structures will be shared.
</td> </tr>
<tr>
<td>
**Specify how the data will be made available**
</td>
<td>
Currently data are shared via scientific communication, either in the form of
scientific papers or at conferences via oral or visual presentations. While
the peer-reviewed publications will be made available as open access, it is
the intention to also provide the underlying data itself by storing it in an
open repository ( _see Section 2.1 above_ )
</td> </tr>
<tr>
<td>
**Specify what methods or software tools are needed to access the data. Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?**
</td>
<td>
No special software is needed to access the data.
</td> </tr>
<tr>
<td>
**Specify where the data and associated metadata, documentation and code are
deposited.**
</td>
<td>
_See general considerations outlined in Section 2.1 above._
</td> </tr>
<tr>
<td>
**Specify how access will be provided in case there are any restrictions**
</td>
<td>
The data will be made available without restrictions.
</td> </tr> </table>
### 4.2.3 Making data interoperable
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.**
</td>
<td>
_See last point under Section 4.2.1 above._
</td> </tr>
<tr>
<td>
**Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow interdisciplinary interoperability? If not,
will you provide mapping to more commonly used ontologies?**
</td>
<td>
No standard vocabulary currently available. _Please also see the comments made
under Section 4.2.1 above._
</td> </tr> </table>
### 4.2.4 Increase data re-use (through clarifying licenses)
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Specify how the data will be licensed to permit the widest re-use possible**
</td>
<td>
At present, no licensing is envisaged.
_Also see general considerations under Section 2.4 above._
</td> </tr>
<tr>
<td>
**Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed.**
</td>
<td>
At present, no embargo period is envisaged.
_Also see general considerations under Section 2.5 above._
</td> </tr>
<tr>
<td>
**Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the reuse of
some data is restricted, explain why.**
</td>
<td>
_See general considerations under Section 2.4 above._
</td> </tr>
<tr>
<td>
**Describe the data quality assurance process**
</td>
<td>
For use within CHEOPS, a standardised sample size and geometry as well as
standard operating procedures (SOP) for sample shipment and sample measurement
has been agreed upon. Following these SOPs will be mandatory for all
measurements carried out during the project. The SOPs were made available to
the CHEOPS consortium in deliverable D6.3, the “Quality and Best Practice
Manual” and will be made publicly available.
</td> </tr>
<tr>
<td>
**Specify the length of time for which the data will remain re-useable.**
</td>
<td>
The data made available will remain re-useable for an unrestricted duration.
</td> </tr> </table>
## 4.3 Allocation of resources
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs**
</td>
<td>
The small amount of staff costs required for inserting the data into the
repository will be covered from the CHEOPS project budget.
The hosting of the data in the repository is typically free (e.g. ZENODO).
</td> </tr>
<tr>
<td>
**Clearly identify responsibilities for data management in your project**
</td>
<td>
General decisions (e.g. on licences or repositories) will be taken by the
Executive Board. For actual implementation of data management in each WP, the
WP leaders are responsible.
For this dataset, WP2 leader Fraunhofer is in charge.
</td> </tr>
<tr>
<td>
**Describe costs and potential value of long term preservation**
</td>
<td>
There will be no long-term costs for CHEOPS partners for maintaining the data
repository.
</td> </tr> </table>
## 4.4 Data security
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Address data recovery as well as secure storage and transfer of sensitive
data**
</td>
<td>
_See general considerations in Section 2.6._
</td> </tr> </table>
## 4.5 Ethical aspects
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former**
</td>
<td>
Not applicable. _See also general consideration in Section 2.7_
</td> </tr> </table>
## 4.6 Other
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Refer to other national / funder / sectorial / departmental procedures for**
**data management that you are using (if any)**
</td>
<td>
Not applicable.
</td> </tr> </table>
# 5 Dataset on risk assessment and roadmap development (from WP3)
## 5.1 Data summary
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**State the purpose of the data collection / generation**
</td>
<td>
In the framework of WP3, data will be generated to characterise potential
emissions of different chemicals from CHEOPS devices all along the life
cycle, from production to end-of-life. This will include the results of tests
made in standard conditions, but also during or after damage (incineration,
leakage after breakage). Results could be extended to exposure assessment
(workers, users, …). To our knowledge, those results are likely to be novel,
and as such would be published in scientific journals.
Socio-economic analysis will also collect and generate data on the assessment
of potential impacts of a large-scale development of CHEOPS devices. Those
impacts will be assessed qualitatively, quantitatively, or in monetary terms.
Those results will be published in peer-reviewed journals.
</td> </tr>
<tr>
<td>
**Explain the relation to the objectives of the project**
</td>
<td>
The data is produced as part of the process to reach the project’s main
objective of identifying and addressing risks to the perovskite PV technology.
In particular, it addresses Market Objective 3 (MO3 in the DoA).
</td> </tr>
<tr>
<td>
**Specify the types and formats of data generated / collected**
</td>
<td>
Concerning emissions characterisation, the format of data has not yet been
defined.
Concerning socio-economic analysis, results will follow the guidelines
produced by international organisations and agencies like _OECD_ 2 and
_ECHA_ 3 .
</td> </tr>
<tr>
<td>
**Specify the origin of the data**
</td>
<td>
The data will mainly be documented or measured by the CHEOPS partners in WP3.
Data on emissions after damage will be obtained by INERIS.
</td> </tr>
<tr>
<td>
**State the expected size of the data (if known)**
</td>
<td>
Not known yet.
</td> </tr>
<tr>
<td>
**Outline the data utility: To whom it will be useful**
</td>
<td>
These data could be useful for people working in the field of perovskite based
PV, but also to public authorities and stakeholders of the solar energy
economy.
</td> </tr> </table>
## 5.2 FAIR Data
### 5.2.1 Making data findable, including provisions for metadata
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Outline the discoverability of data (metadata provision)**
</td>
<td>
_See general considerations in Sections 2.2 and 2.3 above._
</td> </tr>
<tr>
<td>
**Outline the identifiability of data and refer to standard identification
mechanisms.**
**Do you make use of persistent unique identifiers such as Digital Object
Identifiers?**
</td>
<td>
_See general considerations in Sections 2.2 and 2.3 above._
</td> </tr>
<tr>
<td>
**Outline naming conventions used**
</td>
<td>
To be defined at a later stage.
</td> </tr>
<tr>
<td>
**Outline the approach towards search keyword**
</td>
<td>
_See general considerations and list of default keywords in Section 2.2
above._
</td> </tr>
<tr>
<td>
**Outline the approach for clear versioning**
</td>
<td>
To be defined at a later stage.
</td> </tr>
<tr>
<td>
**Specify standards for metadata creation (if any). If there are no standards
in your discipline describe what type of metadata will be created and how.**
</td>
<td>
Concerning the socio-economic analysis, the results will follow the guidelines
produced by international organisations and agencies like _OECD_ 3 and
_ECHA_ 5 .
</td> </tr> </table>
### 5.2.2 Making data openly accessible
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Specify which data will be made openly available. If some data is kept
closed provide rationale for doing so**
</td>
<td>
In the framework of CHEOPS, the information on device performances, stability,
measurement protocols and device structures will be shared.
</td> </tr>
<tr>
<td>
**Specify how the data will be made available**
</td>
<td>
Currently data are shared via scientific communication, either in the form of
scientific papers or at conferences via oral or visual presentations. While
the peer-reviewed publications will be made available as open access, it is
the intention to also provide the underlying data itself by storing it in an
open repository ( _see Section 2.1 above_ ).
</td> </tr>
<tr>
<td>
**Specify what methods or software tools are needed to access the data. Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?**
</td>
<td>
No software is needed to access the data.
</td> </tr>
<tr>
<td>
**Specify where the data and associated metadata, documentation and code are
deposited.**
</td>
<td>
_See general considerations outlined in Section 2.1 above._
</td> </tr>
<tr>
<td>
**Specify how access will be provided in case there are any restrictions**
</td>
<td>
The data that will be made available will be available without restrictions.
</td> </tr> </table>
### 5.2.3 Making data interoperable
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.**
</td>
<td>
_See last point under Section 5.2.1 above._
</td> </tr>
<tr>
<td>
**Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow interdisciplinary interoperability? If not,
will you provide mapping to more commonly used ontologies?**
</td>
<td>
_No standard vocabulary currently available. Please also see the comments made
under Section 5.2.1 above._
</td> </tr> </table>
### 5.2.4 Increase data re-use (through clarifying licenses)
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Specify how the data will be licensed to permit the widest re-use possible**
</td>
<td>
_See general considerations under Section 2.4 above._
</td> </tr>
<tr>
<td>
**Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed.**
</td>
<td>
_See general considerations under Section 2.5 above._
</td> </tr>
<tr>
<td>
**Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the reuse of
some data is restricted, explain why.**
</td>
<td>
_See general considerations under Section 2.4 above._
</td> </tr>
<tr>
<td>
**Describe the data quality assurance process**
</td>
<td>
To be defined in detail at a later stage. But data production will follow the
most demanding standards of replicability required for scientific
publications.
</td> </tr>
<tr>
<td>
**Specify the length of time for which the data will remain re-useable.**
</td>
<td>
The data made available will remain re-useable for an unrestricted duration.
</td> </tr> </table>
## 5.3 Allocation of resources
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs**
</td>
<td>
The small amount of staff costs required for inserting the data into the
repository will be covered from the CHEOPS project budget.
The hosting of the data in the repository is typically free (e.g. ZENODO).
</td> </tr>
<tr>
<td>
**Clearly identify responsibilities for data management in your project**
</td>
<td>
General decisions (e.g. on licences or repositories) will be taken by the
Executive Board. For actual implementation of data management in each WP, the
WP leaders are responsible.
For this dataset, WP3 leader INERIS is in charge.
</td> </tr>
<tr>
<td>
**Describe costs and potential value of long term preservation**
</td>
<td>
There will be no long-term costs for CHEOPS partners for maintaining the data
repository.
</td> </tr> </table>
## 5.4 Data security
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Address data recovery as well as secure storage and transfer of sensitive
data**
</td>
<td>
_See general considerations in Section 2.6._
</td> </tr> </table>
## 5.5 Ethical aspects
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former**
</td>
<td>
Not applicable. _See also general consideration in Section 2.7_
</td> </tr> </table>
## 5.6 Other
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Refer to other national / funder / sectorial / departmental procedures for**
**data management that you are using (if any)**
</td>
<td>
Not applicable.
</td> </tr> </table>
# 6 Dataset on life cycle analysis (from WP3)
## 6.1 Data summary
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**State the purpose of the data collection / generation**
</td>
<td>
1. Data collected for the manufacturing of the CHEOPS PV modules are the inputs and outputs of materials and energy for the factory and the transport of materials to and waste from the factory.
2. Calculated environmental impact results
</td> </tr>
<tr>
<td>
**Explain the relation to the objectives of the project**
</td>
<td>
The data is produced as part of the process to reach the project’s main
objective of identifying and addressing risks to the perovskite PV technology.
In particular, it addresses Market Objective 3 (MO3 in the DoA).
</td> </tr>
<tr>
<td>
**Specify the types and formats of data generated / collected**
</td>
<td>
The life cycle analysis developed in the framework of WP3 will follow the ISO
standard 14040:2006. Since the ecoinvent 3 dataset will be used for
background data, the ecoinvent data structure will be used.
For the environmental impact assessment, the ILCD method available in the
SimaPro software will be used.
</td> </tr>
<tr>
<td>
**Specify the origin of the data**
</td>
<td>
The data will be collected and processed by the CHEOPS partner SMART in WP3.
Input data will be taken also from the ecoinvent 3 database.
</td> </tr>
<tr>
<td>
**State the expected size of the data (if known)**
</td>
<td>
Not known yet.
</td> </tr>
<tr>
<td>
**Outline the data utility: To whom it will be useful**
</td>
<td>
These data could be useful for people working in the field of perovskite based
PV, but also to public authorities and stakeholders of the solar energy
economy.
</td> </tr> </table>
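Although the calculations themselves will be carried out in SimaPro, the core of an ILCD-style impact assessment reduces to multiplying inventory flows by characterisation factors and summing. A minimal, illustrative sketch in Python (all flow names and factor values below are placeholders, not taken from ecoinvent or the ILCD method):

```python
# Minimal life cycle impact assessment sketch:
# impact score = sum over elementary flows of (inventory amount x factor).
# All names and numbers are illustrative placeholders only.

inventory = {            # life cycle inventory, kg per functional unit
    "CO2, fossil": 12.5,
    "CH4, fossil": 0.03,
    "N2O": 0.001,
}

gwp_factors = {          # hypothetical climate-change characterisation
    "CO2, fossil": 1.0,  # factors, kg CO2-eq per kg of flow
    "CH4, fossil": 28.0,
    "N2O": 265.0,
}

def impact_score(inventory: dict, factors: dict) -> float:
    """Characterised impact: sum(amount * factor) over matching flows."""
    return sum(amount * factors.get(flow, 0.0)
               for flow, amount in inventory.items())

print(f"Climate change score: {impact_score(inventory, gwp_factors):.3f} kg CO2-eq")
```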
## 6.2 FAIR Data
### 6.2.1 Making data findable, including provisions for metadata
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Outline the discoverability of data (metadata provision)**
</td>
<td>
_See general considerations in Sections 2.2 and 2.3 above._
</td> </tr>
<tr>
<td>
**Outline the identifiability of data and refer to standard identification
mechanisms. Do you make use of persistent unique identifiers such as Digital
Object Identifiers?**
</td>
<td>
_See general considerations in Sections 2.2 and 2.3 above._
</td> </tr>
<tr>
<td>
**Outline naming conventions used**
</td>
<td>
Ecoinvent 3 naming
</td> </tr>
<tr>
<td>
**Outline the approach towards search keyword**
</td>
<td>
_See general considerations and list of default keywords in Section 2.2
above._
Additional keywords specifically for this dataset will be ‘life cycle
assessment’
</td> </tr>
<tr>
<td>
**Outline the approach for clear versioning**
</td>
<td>
_To be defined at a later stage._
</td> </tr>
<tr>
<td>
**Specify standards for metadata creation (if any). If there are no standards
in your discipline describe what type of metadata will be created and how.**
</td>
<td>
The life cycle analysis developed in the framework of WP3 will follow the ISO
standard 14040:2006. Since the ecoinvent 3 dataset will be used for
background data, the ecoinvent data structure will be used.
</td> </tr> </table>
### 6.2.2 Making data openly accessible
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Specify which data will be made openly available. If some data is kept
closed provide rationale for doing so**
</td>
<td>
The environmental impacts calculated with the ILCD impact assessment method.
</td> </tr>
<tr>
<td>
**Specify how the data will be made available**
</td>
<td>
Currently data are shared via scientific communication, either in the form of
scientific papers or at conferences via oral or visual presentations. While
the peer-reviewed publications will be made available as open access, it is
the intention to also provide the underlying data itself by storing it in an
open repository ( _see Section 2.1 above_ ).
</td> </tr>
<tr>
<td>
**Specify what methods or software tools are needed to access the data. Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?**
</td>
<td>
No specific software is needed to access the data.
</td> </tr>
<tr>
<td>
**Specify where the data and associated metadata, documentation and code are
deposited.**
</td>
<td>
See general considerations outlined in Section 2.1 above.
</td> </tr>
<tr>
<td>
**Specify how access will be provided in case there are any restrictions**
</td>
<td>
The data that will be made available will be available without restrictions.
</td> </tr> </table>
### 6.2.3 Making data interoperable
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.**
</td>
<td>
_See last point under Section 6.2.1 above._
</td> </tr>
<tr>
<td>
**Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow interdisciplinary interoperability? If not,
will you provide mapping to more commonly used ontologies?**
</td>
<td>
No standard vocabulary currently available. Please also see the comments made
under _Section 6.2.1_ above.
</td> </tr> </table>
### 6.2.4 Increase data re-use (through clarifying licenses)
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Specify how the data will be licensed to permit the widest re-use possible**
</td>
<td>
No licensing. Free use.
_Also see general considerations under Section 2.4 above._
</td> </tr>
<tr>
<td>
**Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed.**
</td>
<td>
_See general considerations under Section 2.5 above._
</td> </tr>
<tr>
<td>
**Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the reuse of
some data is restricted, explain why.**
</td>
<td>
_See general considerations under Section 2.4 above._
</td> </tr>
<tr>
<td>
**Describe the data quality assurance process**
</td>
<td>
Review by CHEOPS partners.
</td> </tr>
<tr>
<td>
**Specify the length of time for which the data will remain re-useable.**
</td>
<td>
The data made available will remain re-useable for an unrestricted duration.
</td> </tr> </table>
## 6.3 Allocation of resources
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs**
</td>
<td>
The small amount of staff costs required for inserting the data into the
repository will be covered from the CHEOPS project budget.
The hosting of the data in the repository is typically free (e.g. ZENODO).
</td> </tr>
<tr>
<td>
**Clearly identify responsibilities for data management in your project**
</td>
<td>
General decisions (e.g. on licences or repositories) will be taken by the
Executive Board. For actual implementation of data management in each WP, the
WP leaders are responsible. For this dataset, SMART is in charge.
</td> </tr>
<tr>
<td>
**Describe costs and potential value of long term preservation**
</td>
<td>
There will be no long-term costs for CHEOPS partners for maintaining the data
repository.
</td> </tr> </table>
## 6.4 Data security
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Address data recovery as well as secure storage and transfer of sensitive
data**
</td>
<td>
_See general considerations in Section 2.6._
</td> </tr> </table>
## 6.5 Ethical aspects
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former**
</td>
<td>
_Not applicable. See also general consideration in Section 2.7_
</td> </tr> </table>
## 6.6 Other
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Refer to other national / funder / sectorial / departmental procedures for**
**data management that you are using (if any)**
</td>
<td>
_Not applicable._
</td> </tr> </table>
# 7 Dataset on perovskite/silicon tandem cell development (from WP4)
## 7.1 Data summary
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**State the purpose of the data collection / generation**
</td>
<td>
We generate data about device fabrication, material composition and device
performances. Data will be obtained for the individual cells as well as for
the tandem devices.
Similar data already exist in the form of publications from other groups. Our
dataset can be compared to existing results.
</td> </tr>
<tr>
<td>
**Explain the relation to the objectives of the project**
</td>
<td>
The data is produced as part of the process to reach the project’s technical
objective 2 (TO2 in the DoA) of manufacturing monolithic 2-terminal PK/c-Si
heterojunction tandem (demonstrator) cells.
</td> </tr>
<tr>
<td>
**Specify the types and formats of data generated / collected**
</td>
<td>
The data on device structure and process conditions of the different layers
will be descriptive, while the measured device performances will each consist
of a combination of:
1. Name of the parameter measured
2. Numeric value measured
3. Physical unit
</td> </tr>
<tr>
<td>
**Specify the origin of the data**
</td>
<td>
The data will be documented or measured by the CHEOPS partners in WP4.
</td> </tr>
<tr>
<td>
**State the expected size of the data (if known)**
</td>
<td>
Not known yet.
</td> </tr>
<tr>
<td>
**Outline the data utility: To whom it will be useful**
</td>
<td>
These data could be useful for other research groups working especially in the
field of perovskite-based photovoltaics, but also for other domains of
(academic and industrial) PV research and development.
</td> </tr> </table>
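Since each measured device performance will be stored as the combination of parameter name, numeric value and physical unit described above, a lightweight record type is enough to serialise such entries for deposition. A minimal sketch (the field names and example values are our own, not a project-defined schema):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PerformanceRecord:
    """One measured device performance: name, numeric value, physical unit."""
    parameter: str
    value: float
    unit: str

# Hypothetical example entries for a tandem cell measurement
records = [
    PerformanceRecord("open-circuit voltage", 1.75, "V"),
    PerformanceRecord("short-circuit current density", 19.2, "mA/cm2"),
    PerformanceRecord("power conversion efficiency", 24.1, "%"),
]

# Serialise to JSON so the records can be deposited alongside the raw data
print(json.dumps([asdict(r) for r in records], indent=2))
```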
## 7.2 FAIR Data
### 7.2.1 Making data findable, including provisions for metadata
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Outline the discoverability of data (metadata provision)**
</td>
<td>
_See general considerations in Sections 2.2 and 2.3 above._
</td> </tr>
<tr>
<td>
**Outline the identifiability of data and refer to standard identification
mechanisms. Do you make use of persistent unique identifiers such as Digital
Object Identifiers?**
</td>
<td>
_See general considerations in Sections 2.2 and 2.3 above._
</td> </tr>
<tr>
<td>
**Outline naming conventions used**
</td>
<td>
To be defined at a later stage.
</td> </tr>
<tr>
<td>
**Outline the approach towards search keyword**
</td>
<td>
_See general considerations and list of default keywords in Section 2.2
above._
Additional keywords specifically for this dataset will be ‘tandem’, ‘four-
terminal’, ‘monolithic’ and ‘two-terminal’.
</td> </tr>
<tr>
<td>
**Outline the approach for clear versioning**
</td>
<td>
To be defined at a later stage.
</td> </tr>
<tr>
<td>
**Specify standards for metadata creation (if any). If there are no standards
in your discipline describe what type of metadata will be created and how.**
</td>
<td>
Datasets of this kind can typically be found in scientific publications, but
there exist no standards for these publications. Very recently, in October
2015, the “Nature Materials” journal made an attempt at harmonisation by
developing a checklist for photovoltaic research:
_http://www.nature.com/nmat/journal/v14/n11/full/nmat4473.html_
CHEOPS will consider this checklist and monitor its further development, as it
might help to create a metadata set by allowing proper comparison between the
different published results.
</td> </tr> </table>
### 7.2.2 Making data openly accessible
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Specify which data will be made openly available. If some data is kept
closed provide rationale for doing so**
</td>
<td>
In the framework of CHEOPS, the information on device performances, stability,
measurement protocols and device structures will be shared.
</td> </tr>
<tr>
<td>
**Specify how the data will be made available**
</td>
<td>
Currently data are shared via scientific communication, either in the form of
scientific papers or at conferences via oral or visual presentations. While
the peer-reviewed publications will be made available as open access, it is
the intention to also provide the underlying data itself by storing it in an
open repository ( _see Section 2.1 above_ ).
</td> </tr>
<tr>
<td>
**Specify what methods or software tools are needed to access the data.**
</td>
<td>
No special software is needed to access the data.
</td> </tr>
<tr>
<td>
**Specify where the data and associated metadata, documentation and code are
deposited.**
</td>
<td>
_See general considerations outlined in Section 2.1 above._
</td> </tr>
<tr>
<td>
**Specify how access will be provided in case there are any restrictions**
</td>
<td>
The data that will be made available will be available without restrictions.
</td> </tr> </table>
### 7.2.3 Making data interoperable
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.**
</td>
<td>
_See last point under Section 7.2.1 above._
</td> </tr>
<tr>
<td>
**Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow interdisciplinary interoperability? If not,
will you provide mapping to more commonly used ontologies?**
</td>
<td>
No standard vocabulary currently available. _Please also see the comments made
under Section 7.2.1 above._
</td> </tr> </table>
### 7.2.4 Increase data re-use (through clarifying licenses)
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Specify how the data will be licensed to permit the widest re-use possible**
</td>
<td>
Creative Commons licenses such as CC-BY-NC or CC-BY-NC-SA could be an option.
We will seek a consortium decision on this issue.
_Also see general considerations under Section 2.4 above._
</td> </tr>
<tr>
<td>
**Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed.**
</td>
<td>
_See general considerations under Section 2.5 above._
</td> </tr>
<tr>
<td>
**Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project?**
</td>
<td>
_See general considerations under Section 2.4 above._
</td> </tr>
<tr>
<td>
**Describe the data quality assurance process**
</td>
<td>
For use within CHEOPS, a standardised sample size and geometry as well as
standard operating procedures (SOP) for sample shipment and sample measurement
have been agreed upon. Following these SOPs will be mandatory for all
measurements carried out during the project. The SOPs were made available to
the CHEOPS consortium in deliverable D6.3, the “Quality and Best Practice
Manual” and will be made publicly available.
</td> </tr>
<tr>
<td>
**Specify the length of time for which the data will remain re-useable.**
</td>
<td>
The data made available will remain re-useable for an unrestricted duration.
</td> </tr> </table>
## 7.3 Allocation of resources
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs**
</td>
<td>
The small amount of staff costs required for inserting the data into the
repository will be covered from the CHEOPS project budget.
The hosting of the data in the repository is typically free (e.g. ZENODO).
</td> </tr>
<tr>
<td>
**Clearly identify responsibilities for data management in your project**
</td>
<td>
General decisions (e.g. on licences or repositories) will be taken by the
Executive Board. For actual implementation of data management in each WP, the
WP leaders are responsible.
For this dataset, WP4 leader EPFL is in charge.
</td> </tr>
<tr>
<td>
**Describe costs and potential value of long term preservation**
</td>
<td>
There will be no long-term costs for CHEOPS partners for maintaining the data
repository.
</td> </tr> </table>
## 7.4 Data security
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Address data recovery as well as secure storage and transfer of sensitive
data**
</td>
<td>
_See general considerations in Section 2.6._
</td> </tr> </table>
## 7.5 Ethical aspects
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former**
</td>
<td>
_Not applicable. See also general consideration in Section 2.7_
</td> </tr> </table>
## 7.6 Other
<table>
<tr>
<th>
**DMP component**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Refer to other national / funder / sectorial / departmental procedures for**
**data management that you are using (if any)**
</td>
<td>
_Not applicable._
</td> </tr> </table>
Due to the high diversity of experiments and measurements inside CONQUER and
due to the high interdisciplinarity of the consortium we expect a vast amount
of automatically generated data from different instruments as well as
metadata. The nature of data will vary widely, including textual, numerical,
qualitative and quantitative data, mostly in digital form. The DMP will
outline how to handle, curate, regulate, access and publish data. Each PI
within the CONQUER consortium is responsible for compliance with these
guidelines at his/her research site.
The scope of this DMP includes all data acquired through the research carried
out in CONQUER. Any information necessary to support or validate observations,
findings or outputs should be recorded, regardless of the form or medium in
which it exists.
# Open data policy
## Relation to dissemination and IP management
It must be emphasised that the guidelines for data management must be
intimately intertwined with the rules for dissemination of the results and for
protection of intellectual property (IP) (treated in the separate deliverable
D1.1, IP management plan). Owing to their nature, dissemination of results,
the provision of open access data and the protection of IP are partially
mutually conflicting and cannot, thus, be treated separately.
There is no specific potential for conflict between open access data and open
access publications.
A partial conflict can arise in the context of IP according to article 27 of
the GA (protection of results — visibility of EU funding) which contains an
explicit obligation for examining results for the possibility to protect them:
_27.1 Obligation to protect the results_
_Each beneficiary must examine the possibility of protecting its results and
must adequately protect them — for an appropriate period and with appropriate
territorial coverage — if:_
1. _the results can reasonably be expected to be commercially or industrially exploited and_
2. _protecting them is possible, reasonable and justified (given the circumstances). When deciding on protection, the beneficiary must consider its own legitimate interests and the legitimate interests (especially commercial) of the other beneficiaries._
Though scientific data themselves cannot be protected (they do not by
themselves generate IP), they are the basis for all IPR generated within
CONQUER. Therefore data cannot be made open access before checking their
relevance for IP protection. Furthermore, data have to go through a certain
procedure for quality assurance (QA), because low-quality data are not only
useless but may even be counter-productive when other researchers rely on them.
Currently there is consensus in the consortium of CONQUER that the basic
mechanism for quality assurance is to publish the data and/or results related
to these data in peer-reviewed journals, thus demonstrating scientific
significance. No other data shall be considered for being made open access. It
may, however, turn out in the future that there are other means for quality
assurance which may also qualify the data for being made open access. The
identification of such alternative procedures will be a matter of discussion
in the consortium and may lead to future modifications of the data management
plan.
## Data life cycle for scientific data
Fig. 1 shows a general workflow for the treatment of research data and project
results in CONQUER which is in accordance with both articles 27 and 29 of the
GA.

[Figure: workflow diagram. Key elements: research data; IP relevance check (GA
§27); protection and priority date; publication relevance check (GA §29);
quality assurance procedure; annotation and metadata; optional embargo period;
open access database vs. internal database; expected delays of > 4 months; the
coordinator provides regular updates of information on IP.]

Fig. 1. Schematic of the workflow for the treatment of research data with
respect to IP, dissemination and open access.

In the first stage, research data are usually collected in one or more local
databases. In many cases these data will generate further project results
(inventions, software, designs, reports, ...). Each party generating new data
has to check whether they are relevant for results which can be protected.
If results are identified as relevant for protection, a decision shall be made
by the respective party/parties about the most appropriate method of
protection (e.g. patenting). No further dissemination of the results under
consideration can take place before the possibility of protection has been
clarified and, in case the protection is possible, a priority date has been
obtained (e. g. a patent has been filed). Fig. 1 illustrates the priorities.
Consequently the timing for publications and the public access of publications
is constrained by protection measures. It must be emphasised that, given the
usual times for preparing e.g. a patent submission, it may take at least 4 – 6
months before the results can be published. In the second step, an assessment
has to be made whether the results should be published. Publishing can be
restricted for a number of further reasons, not just those related to patenting
of technical inventions. For example, researchers may decide to keep results
confidential because they are not yet mature enough (further improvements are
being made), or for other commercial or legitimate reasons.
After the check for IP, the consortium members may or may not move on
continuously to exploit the results – whereby exploitation means "use" and is
not necessarily commercial. For example, results might be made available for
use by researchers (under appropriate terms), or educators, or industry.
According to the EC open access data policy, as many data as possible should be
made accessible to the public, as they are a valuable resource for stimulating
further innovation, also by users outside of the project consortium. In any
case, however, QA must precede any release of the data to the open access
database, as illustrated in Fig. 1. Following the current consensus of the
consortium, QA is done by scientific publication. Considering the usual times
for submission and reviewing, another delay of at least 4-6 months must be
expected. Once the main findings are published, the data supporting these
findings can be made publicly available, unless there are reasons not to make
them open access (see 2.3, Embargo period).
Following this workflow data which can be scientifically published without
affecting protection can be made publicly available immediately after
publication.
## Embargo period
In order to ensure that the researchers have enough time to analyse and publish
the generated data, an embargo period of up to 3 years can be defined. During
this period the PI in agreement with the coordinator can decide on prematurely
making the data open access. Independent of the embargo period the data must
not be made public if the data are relevant for intellectual property rights
(D1.1, IP management plan). In special cases the embargo period can be
extended to a maximum of 6 years.
## Data storage
All open access data will be stored making use of the EU-funded open access
project _openAire_ ( _https://www.openaire.eu/_ ) and its free open data
repository _ZENODO_ ( _https://zenodo.org/_ ), which were selected to upload
and share both publications and data. A new community called ‘ _CONQUER
FET-open project_ ’ has been created on the _Zenodo_ platform, which provides
free storage space on a CERN-based server system (unlimited, max. 2 GB per
file). This community will be used to collect all publications. On the CONQUER
webpage a link will be created to this database, which also implements all
functions for visualisation, filtering and download. Open access data will
then be stored under a license according to
_http://creativecommons.org/publicdomain/zero/1.0/_ ; however, there is
always also the possibility to upload data under closed access conditions.
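Deposits to ZENODO can be made through its web interface; for larger numbers of datasets the ZENODO REST API can be scripted. A minimal sketch (the access token, file name, metadata and the community identifier are placeholders; the current ZENODO API documentation should be consulted before use):

```python
import requests

TOKEN = "REPLACE-WITH-PERSONAL-ACCESS-TOKEN"   # placeholder
BASE = "https://zenodo.org/api/deposit/depositions"

# 1) Create an empty deposition
r = requests.post(BASE, params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2) Upload the data file into the deposition's file bucket
bucket = deposition["links"]["bucket"]
with open("nqr_spectra_set01.zip", "rb") as fp:        # hypothetical file
    requests.put(f"{bucket}/nqr_spectra_set01.zip",
                 data=fp,
                 params={"access_token": TOKEN}).raise_for_status()

# 3) Attach metadata (CC0 licence, as chosen for CONQUER open access data)
metadata = {"metadata": {
    "title": "CONQUER NQR spectra, dataset 01",        # illustrative title
    "upload_type": "dataset",
    "description": "Preprocessed NQR raw spectra with acquisition metadata.",
    "creators": [{"name": "Doe, Jane", "affiliation": "TU Graz"}],
    "license": "cc-zero",
    "communities": [{"identifier": "conquer"}],        # community id assumed
}}
requests.put(f"{BASE}/{deposition['id']}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()
```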
# Curation and preservation
After the generation of scientific data, all partners are responsible for
managing, annotating and securely storing the data at their site.
public accessibility and associated central data collection, each participant
is responsible to archive the data for at least 10 years, irrespective of
whether they exist in hardcopy or electronic form.
Management and annotation of research data must be such as to ensure the
following:
* Discoverability: metadata published with the research data must be sufficiently detailed so that public users can discover what research data exists. Before data will be made available to the public the project steering group (PSG) will decide which kind of identifier (Persistent identifier PI, UID) will be appropriate.
* Understandability: metadata must include a description of data acquisition, origin, processing and/or analysis.
Data and metadata shall be stored in standard formats if they exist (Detailed
specification of the data formats is found in sec. 6).
Following the recommendations of the Research Councils UK (RCUK)[2] data that
is used in publications will be accessible for at least 10 years through the
database on the website. All data published within CONQUER will be equipped
with a unique identifier.
A regular assessment has to be carried out in order to identify erroneous data
that can be removed from the database (in this case the meta data shall be
preserved and extended by a statement, why the data is not accessible any
more).
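One simple way to meet the discoverability and understandability requirements above is to ship every dataset with a machine-readable metadata sidecar file. A minimal sketch (the field names and identifier are our own illustrative convention, not a CONQUER-mandated schema):

```python
import json
import hashlib
from pathlib import Path

def write_sidecar(data_file: str, description: dict) -> None:
    """Write <data_file>.meta.json with descriptive metadata and a checksum."""
    payload = dict(description)
    payload["filename"] = Path(data_file).name
    # The checksum supports the regular assessment for erroneous/corrupted data
    payload["md5"] = hashlib.md5(Path(data_file).read_bytes()).hexdigest()
    Path(data_file + ".meta.json").write_text(json.dumps(payload, indent=2))

# Hypothetical example: a dummy NQR raw file plus its metadata sidecar
Path("nqr_run_042.txt").write_text("# frequency  re  im\n")
write_sidecar("nqr_run_042.txt", {
    "identifier": "CONQUER-TUG-NQR-042",   # placeholder persistent identifier
    "acquisition": "NQR peak detection, T1/T2 measurement",
    "temperature_K": 293.0,
    "contact": "local PI, TUG",
})
```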
## Data security
There are two types of data security to be considered: 1.) _Secure transfer_
and 2.) _secure storage_ of research data. When data is transferred between
the partners for the purpose of discussion or central collection, a secure
connection has to be established (e.g. ssh, https, sftp). Transmission via
email is discouraged. Secure storage of data at the local sites of the
partners has to be managed by the local PI. The local PI is responsible for
establishing secure data storage and back-up systems to prevent data loss and
unauthorised access. After upload of open access data to ZENODO, data security
is provided by the mechanisms of ZENODO.
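For scripted transfers between partner sites, an SFTP client library can enforce the secure-connection requirement; a minimal sketch using the third-party paramiko package (the host name, account and paths are placeholders, and key-based authentication is assumed):

```python
import paramiko

HOST = "data.partner-site.example"      # placeholder partner server
USER = "conquer_transfer"               # placeholder account

client = paramiko.SSHClient()
client.load_system_host_keys()
# Reject unknown hosts instead of silently trusting them
client.set_missing_host_key_policy(paramiko.RejectPolicy())
client.connect(HOST, username=USER)     # key-based authentication assumed

sftp = client.open_sftp()
try:
    # Encrypted upload of a results file to the central collection point
    sftp.put("results/relaxation_rates.csv", "/incoming/relaxation_rates.csv")
finally:
    sftp.close()
    client.close()
```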
# Data sharing policies (within the consortium)
In order to make use of synergies of recorded data, data sharing can and will
occur within the consortium. The following guidelines will be established to
specify the data sharing processes:
* Define data sharing policies: The data originator may define a policy for how to handle the data.
* Use of data for publication or exploitation has to be agreed with the originator of the data. If ambiguities or disputes arise, the coordinator may dictate further steps and has the final word.
# Data publication and third party accessibility
CONQUER is a scientific project and therefore all partners are intrinsically
motivated to publish all findings, results, successes and failures as early as
possible. All data generated within CONQUER that are made publicly accessible
are published under a suitable license which should be chosen individually for
each data set by the PI. Examples for open access licences can be found e.g.
on the Creative Commons website [3]. A guideline for choosing appropriate
licenses for different classes of data inside CONQUER will be elaborated by
the PSG.
Furthermore CONQUER pursues a multiple licencing strategy which applies if
third parties want to use the published data for commercial purposes. In this
case an appropriate license for commercial purposes must be defined.
As stated above, if the data is copied, modified and/or reused the source of
the data has to be properly cited. Properly cited means that the citation
comprises enough information to uniquely locate the version of the data being
cited (even if its location changes, → persistent identifier).
# Data description and data standards
This section contains a summary of all datasets which will be produced within
CONQUER. It should be emphasised that this list is based on the current state
of knowledge and is expected to be extended and/or corrected during the
project lifetime. Dataset sizes and their expected numbers are estimates and
serve for configuring the server for the centralised database.
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**NQR spectra /TUG**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Each synthesised NQR-CR compound will produce several files corresponding to
peak detection, T1 and T2 measurement, determination of the temperature
coefficient.
There is no standardised file format available for NQR spectra.
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
series of frequencies and complex numbers for the signals
</td>
<td>
ASCII
</td>
<td>
<100 MB
</td>
<td>
≤100
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
NQR sequence parameters, temperature, coil specifications, sample volume,
date, time, experimental setup (e.g. temperature sweeps)
</td>
<td>
ASCII, pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Type of open access data**
</td>
<td>
Preprocessed raw spectra
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
YES
</td> </tr> </table>
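The raw NQR files are plain ASCII series of frequencies and complex signal values, so they can be loaded with standard tools. A minimal sketch assuming three whitespace-separated columns (frequency, real part, imaginary part); the actual column layout must be taken from the accompanying metadata:

```python
import numpy as np

# Assumed layout: frequency [Hz], real part, imaginary part (whitespace-separated)
freq, re, im = np.loadtxt("nqr_spectrum.txt", unpack=True)   # placeholder file
signal = re + 1j * im

# Magnitude spectrum and the frequency of the strongest NQR line
magnitude = np.abs(signal)
peak_freq = freq[np.argmax(magnitude)]
print(f"Peak at {peak_freq / 1e6:.4f} MHz")
```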
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**MR images /TUG**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
The final tests for contrast generation in MR images are performed with a
Siemens Skyra 3T scanner which includes a B0 insert. The samples will be
phantoms and cell cultures.
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
Proprietary format
</td>
<td>
Siemens
</td>
<td>
\-
</td>
<td>
≤100
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
Reconstructed images, available in a standardised medical image file format
</td>
<td>
DICOM
</td>
<td>
100MB
</td>
<td>
≤100
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
Metadata:
experimental setup
</td>
<td>
DICOM, ASCII, pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Type of open access data**
</td>
<td>
DICOM images
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
YES
</td> </tr> </table>
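The processed MR images are stored as DICOM, a standardised medical image format for which open readers exist. A minimal sketch using the third-party pydicom package (the file name is a placeholder; reading the pixel data additionally requires numpy):

```python
import pydicom

ds = pydicom.dcmread("mr_phantom_001.dcm")   # placeholder file name

# Standard DICOM tags double as part of the experimental metadata
print(ds.Modality, ds.Rows, ds.Columns)

# Pixel data as a numpy array for further analysis
image = ds.pixel_array
print(image.shape, image.dtype)
```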
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**DLS and zeta potential /UM**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
DLS and Zeta-potential analysis is carried out for each synthesised
nanoparticle. By doing so the hydrodynamic diameter and the surface charge of
the particles are determined
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
Particle size and zeta potential values. Temperature, electrophoretic mobility
and conductivity values are generated in a proprietary format
</td>
<td>
.csv
</td>
<td>
≤ 1MB
</td>
<td>
≤100
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
Excel tables and graphs are generated
</td>
<td>
.xlsx
</td>
<td>
≤ 1MB
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
experimental setup
</td>
<td>
ASCII, pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Type of open access data**
</td>
<td>
Data should be presented in a compact form as a set of different measurements
in an Excel document. The single spectrum data contains less information.
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
YES
</td> </tr> </table>
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**TEM and SEM images /UM**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
All promising nanoparticles will be characterised by TEM and SEM. The shape
and size of the particles is determined in this case.
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
Greyscale map
</td>
<td>
.jpeg, .jpg, .tif
</td>
<td>
≤ 5MB
</td>
<td>
≤100
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
experimental setup
</td>
<td>
ASCII, pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Type of open access data**
</td>
<td>
raw images, metadata
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
YES
</td> </tr> </table>
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**ATR-FTIR /UM**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
All promising nanoparticles will be characterised by ATR-FTIR.
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
Absorption or transmittance spectra in a proprietary format.
</td>
<td>
.spv
</td>
<td>
≤ 1MB
</td>
<td>
≤100
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
The spectra are generated in Excel or OriginLab
</td>
<td>
.xlsx
.ojp
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
experimental setup
</td>
<td>
ASCII, pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Type of open access data**
</td>
<td>
Data should be presented in a compact form as a set of different measurements
in an Excel document. The single spectrum data contains less information.
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
YES
</td> </tr> </table>
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**QCM-D /UM**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
The real-time interaction of the particles in fluids with solid surfaces can
be observed by frequency changes of an oscillating quartz crystal.
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
Frequency changes of the oscillating crystal upon reaction with the particles
are recorded in a proprietary format.
</td>
<td>
.qsd,
.qtd
</td>
<td>
≤ 5MB
≤0.5MB
</td>
<td>
≤50
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
The data is processed in OriginLab or Excel
</td>
<td>
.xlsx
.ojp
</td>
<td>
≤ 5MB
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
experimental setup
</td>
<td>
ASCII, pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Type of open access data**
</td>
<td>
Raw data, processed data and metadata
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
YES
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**TGA DSC /UM**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Changes in physical and chemical properties of the particles in dependence of
temperature are determined.
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
The mass loss and the heat capacity in dependence of temperature and time are
recorded in a proprietary format.
</td>
<td>
ASCII
</td>
<td>
≤0.5MB
</td>
<td>
≤50
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
The processed data is generated in Excel or OriginLab.
</td>
<td>
.xlsx
.ojp
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
experimental setup
</td>
<td>
ASCII, pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Type of open access data**
</td>
<td>
Raw data, processed data and metadata
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
YES
</td> </tr> </table>
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**Other measurements, process yields etc /UM**
</th>
<th>
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Calculations regarding the nanoparticles synthesis yield will be produced
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
Simple calculations in Excel
</td>
<td>
.xlsx
</td>
<td>
≤0.5MB
</td>
<td>
≤100
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
No metadata
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Type of open access data**
</td>
<td>
Raw data
</td>
<td>
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
YES
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**Data from fluorescence, luminescence and absorbance readers /MUG**
</th>
<th>
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
</td>
<td>
.xls
</td>
<td>
<100kB
</td>
<td>
<100
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
experimental setup
</td>
<td>
ASCII, pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Type of open access data**
</td>
<td>
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
NO
</td> </tr> </table>
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**Flocel TEER /MUG**
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
</td>
<td>
.xls
</td>
<td>
<50kB
</td>
<td>
30
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
experimental setup
</td>
<td>
ASCII, pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
NO
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**Microscopic Images /MUG**
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
LSM meta data, other microscopical data
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
Proprietary format
</td>
<td>
</td>
<td>
</td>
<td>
<100
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
</td>
<td>
.jpg,
.tiff
</td>
<td>
1-5 MB
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
experimental setup
</td>
<td>
ASCII, pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
NO
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**Name/party**
</th>
<th>
**NMR relaxation data /UWM**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
1H spin-lattice relaxation dispersion profiles are recorded. The relaxation
rates are determined by analysing the time dependence of the 1H magnetization.
The data will be available in the form of *.org (Origin) files and *.pdf files
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**format**
</td>
<td>
**size per set**
</td>
<td>
**# sets**
</td> </tr>
<tr>
<td>
**Raw data**
</td>
<td>
Series of magnetization values versus time for different resonance frequencies
</td>
<td>
ASCII
</td>
<td>
≤ 1MB
</td>
<td>
≤300
</td> </tr>
<tr>
<td>
**Processed data**
</td>
<td>
1H magnetization versus time, 1H spin-lattice relaxation rates versus
frequency
</td>
<td>
ASCII pdf org
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
Experimental settings and sample specification
</td>
<td>
ASCII pdf
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Type of open access data**
</td>
<td>
Processed data
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
YES
</td> </tr> </table>
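The relaxation rates in this dataset are obtained by fitting the time dependence of the 1H magnetization. A minimal sketch of such a fit using scipy, assuming a saturation-recovery model M(t) = M0·(1 − exp(−R1·t)) and synthetic data in place of one raw ASCII file (the actual pulse sequence and fit model are defined by the experiment):

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, m0, r1):
    """Saturation-recovery model: M(t) = M0 * (1 - exp(-R1 * t))."""
    return m0 * (1.0 - np.exp(-r1 * t))

# Synthetic magnetization-vs-time data standing in for one raw data file
t = np.linspace(0.001, 5.0, 30)                    # s
rng = np.random.default_rng(0)
m = recovery(t, 1.0, 2.5) + rng.normal(0, 0.01, t.size)

(m0, r1), _ = curve_fit(recovery, t, m, p0=(1.0, 1.0))
print(f"R1 = {r1:.3f} 1/s  (T1 = {1.0 / r1:.3f} s)")
```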
# Executive summary
This deliverable is based on the _Guidelines on FAIR Data Management in
Horizon 2020_ including the Horizon 2020 FAIR Data Management Plan (DMP)
template. Version: 26 July 2016. This Data Management Plan is a key
deliverable in this program, and documents the approach taken by the REPROGRAM
consortium to render research data findable, accessible, interoperable and re-
usable, where possible, within the context of the project.
# Introduction
## Purpose of the document
This Data Management Plan (DMP) describes the data management life cycle for
the data sets to be collected and processed by the REPROGRAM project. The DMP
outlines the handling of research data during the project, and how and what
parts of the data sets will be made available after the project has been
completed. This includes an assessment of when and how data can be shared
without disclosing directly or indirectly identifiable information from, for
instance, study participants.
This initial DMP needs to be updated over the course of the REPROGRAM project
whenever significant changes arise, such as (but not limited to):
* new data;
* changes in consortium policies (e.g. new innovation potential, decision to file for a patent);
* changes in consortium composition and external factors (e.g. new consortium members joining or former members leaving).
The DMP will be updated as a minimum in time with the periodic and final
evaluation of the project. This DMP will have a clear version number and
include a timetable for updates (see page 2).
The DMP specifies in combination with the Compulsory Deliverables related to
Clinical Studies performed during the REPROGRAM project the availability of
research data, describes measures to ensure data are properly anonymized to
ensure the privacy of study participants, and to ensure the open data strategy
reflects the consortium agreement and remains consistent with the continuously
evolving exploitation roadmap.
With regard to open access to scientific publications, and conforming to the
recent ‘open access to publications obligations in Horizon 2020’ letter sent
by Robert-Jan Smits on March 27, 2017, the REPROGRAM consortium aims to
publish in open access journals (gold open access), and to establish links to
publications behind pay-walls by making the final peer-reviewed manuscripts
available in an online repository after publication (green open access). To
ensure gold open access, the REPROGRAM consortium places priority on selecting
relevant gold open access journals. With regard to the latter, following the
recommendations of the DMP ensures we will only submit our work to journals
that grant easy access to third parties. This is expected to contribute to the
current state-of-play of compliance with the Horizon 2020 open access
obligation. Currently, 68% of publications produced with Horizon 2020 funding
are subject to open access, the majority through the green route.
## Intended readership
This DMP deliverable is intended for use internally in the REPROGRAM project
only and provides guidance on data management to the consortium and involved
staff at the premises of each beneficiary responsible for data management
activities. It is particularly relevant for partners responsible for data
collection and processing. It is a snapshot of the DMP at the current stage;
however, the DMP will evolve throughout the project as new procedures etc. are
added or existing ones are changed.
## Structure of the document
The structure of this deliverable is based on the Horizon 2020 FAIR Data
Management Plan (DMP) template. Version: 26 July 2016:
* Section 2: Data summary
* Section 3: FAIR data
* Section 4: Allocation of resources
* Section 5: Data security and ethical aspects
* Section 6: Other issues
## Relationship with other deliverables
This DMP complements the:
* Consortium Agreement
* Quality Assurance File (D1.1)
* Roadmap for exploitation (D6.1)
* Exploitation report (D6.3)
* Dissemination report (D6.4)
# Data summary
The REPROGRAM project aims to establish the recently revealed memory feature
of innate immunity (trained immunity) as a common mechanism perpetuating
inflammation in cardiovascular disease, as well as its pathophysiological
relevance in other chronic inflammatory diseases, in particular rheumatoid
arthritis, which has a comparable societal impact. The disease-aetiology focus
of the REPROGRAM project will help support innovation in the development
of evidence-based treatments modulating highly relevant trained immunity
pathways. Hence, the REPROGRAM project bears direct clinical relevance for a
large number of individuals: subjects with cardiovascular disease risk
factors, patients with established cardiovascular disease, as well as patients
with chronic inflammatory diseases states. The integrated approach combining
molecule-to-man-to-mass studies is critical to succeed in understanding the
regulation, relevance and therapeutic modulation of trained immunity as a
common mechanism of disease, by which this project aims to deliver new safe
and effective treatment strategies attenuating the inflammatory state in
atherosclerosis as well as other chronic inflammatory disease.
The REPROGRAM project will generate both preclinical and clinical data
according to methodological GLP and GCP standards and standard operating
procedures defined by locally certified staff members. During the REPROGRAM
project all members will actively share data through operational standardized
databases that are developed, curated and preserved by qualified data managers
working in their institution.
**Grant agreement** . Research data which is created in the project is owned
by the (joint) partner(s) who generate(s) it (Grant Agreement Art. 26). Each
partner must disseminate its results as soon as possible unless there is
legitimate interest to protect the results. A partner that intends to
disseminate its results must give advance notice to the other partners (at
least
45 days) together with sufficient information on the results it will
disseminate (Grant Agreement Art. 29.1). In accordance with Grant Agreement
Art. 25, data must be made available to partners upon request, including in
the context of checks, reviews, audits or investigations. Data will be made
accessible and available for re-use and secondary analysis.
Types of data generated during the REPROGRAM project:
* Observational data (captured in situ, can’t be recaptured, recreated or replaced)
* Experimental data (data collected under controlled conditions, in situ, in vitro, in vivo and ex vivo, should be reproducible)
* Derived or compiled data (should be reproducible)
* Reference data (static collection [peer-reviewed] datasets, most probably published and/or curated)
Research data comes in many varied formats: text, numeric, multimedia, models,
software languages, discipline specific, and instrument specific.
The list of data formats generated in the REPROGRAM project is extensive, and
includes (but is not limited to):
* delimited text of given character set (.txt)
* widely-used proprietary formats, e.g. MS Word (.doc/.docx) - Rich Text Format (.rtf)
* SPSS portable format (.por) Comma-separated values (CSV) files (.csv)
* MS Excel files (.xls/.xlsx)
* IBM Statistics package (SPSS)
* MS Access (.mdb/.accdb)
* OpenDocument Spreadsheet (.ods)
* structured text or mark-up file containing metadata information, e.g. DDI XML file
* JPEG (.jpeg, .jpg)
* TIFF (other versions; .tif, .tiff)
* Adobe Portable Document Format (.PDF)
* Gene Transfer Format [.GTF]
* MPEG-4 High Profile (.mp4)
* PET image format (DICOM)
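Several of these formats are proprietary; to keep datasets re-usable, tabular files can be exported to an open delimited-text format on deposition. A minimal sketch using pandas (file names are placeholders; reading .xlsx additionally requires the openpyxl engine):

```python
import pandas as pd

# Convert a proprietary spreadsheet to an open, delimited-text format
df = pd.read_excel("dataset_01.xlsx")          # placeholder file name
df.to_csv("dataset_01.csv", index=False)

# Quick structural summary, useful as a starting point for the metadata record
print(df.shape)
print(df.dtypes)
```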
Within the REPROGRAM project approximately 57 separate datasets will be
created (see list in table below). They are listed under each of the work
package deliverables taken from the GA Annex 1 – Description of Action. The
datasets will have the same structure, in accordance with the guide of Horizon
2020 for the Data Management Plan.
_Table 1. Potential datasets_
<table>
<tr>
<th>
**Set no.**
</th>
<th>
**WP**
</th>
<th>
**Data type**
</th>
<th>
**Format**
</th>
<th>
**IPR owner**
</th> </tr>
<tr>
<td>
1
</td>
<td>
2
</td>
<td>
In vivo data on atherosclerosis-induced epigenetic changes in myeloid
(precursor) cells by feeding atherosclerotic LDLR-/- mice a high fat diet.
</td>
<td>
.xlsx,
PDF
</td>
<td>
LMU
</td> </tr>
<tr>
<td>
2
</td>
<td>
2
</td>
<td>
In vivo data on myocardial infarction-induced epigenetic changes in myeloid
(precursor) cells.
</td>
<td>
.xlsx,
PDF
</td>
<td>
LMU
</td> </tr>
<tr>
<td>
3
</td>
<td>
2
</td>
<td>
In vivo data on long-lasting changes in innate immune system activation by
competitive adoptive transfer of bone marrow harvested from the induced
atherosclerosis model.
</td>
<td>
.xlsx,
PDF
</td>
<td>
LMU
</td> </tr>
<tr>
<td>
4
</td>
<td>
2
</td>
<td>
In vivo data on long-lasting changes in training of hematopoietic stem cells
by competitive adoptive transfer of bone marrow harvested from the induced
atherosclerosis model.
</td>
<td>
.xlsx,
PDF
</td>
<td>
LMU
</td> </tr>
<tr>
<td>
5
</td>
<td>
2
</td>
<td>
In vivo data on long-lasting changes in innate immune system activation by
competitive adoptive transfer of bone marrow harvested from the induced
myocardial infarction model.
</td>
<td>
.xlsx,
PDF
</td>
<td>
LMU
</td> </tr>
<tr>
<td>
6
</td>
<td>
2
</td>
<td>
In vivo data on long-lasting changes in training of hematopoietic stem cells
by competitive adoptive transfer of bone marrow harvested from the induced
myocardial infarction model.
</td>
<td>
.xlsx,
PDF
</td>
<td>
LMU
</td> </tr>
<tr>
<td>
7
</td>
<td>
2
</td>
<td>
In vivo data on the impact of atherosclerosis induced histone modifications on
atherosclerotic burden (hematopoietic stem cells, plaque size and stage) in
REVERSA mice.
</td>
<td>
.xlsx,
PDF
</td>
<td>
LMU
</td> </tr>
<tr>
<td>
8
</td>
<td>
2
</td>
<td>
In vivo data on the impact of myocardial infarction induced histone
modifications on atherosclerotic burden (hematopoietic stem cells, plaque size
and stage) in REVERSA mice.
</td>
<td>
.xlsx,
PDF
</td>
<td>
LMU
</td> </tr>
<tr>
<td>
9
</td>
<td>
2
</td>
<td>
In vitro screening data of epigenetic modulators for their capacity to
reprogram histones and prevent atherogenic risk factor/myocardial infarction
-induced histone modifications in immune cells.
</td>
<td>
.xlsx,
PDF
</td>
<td>
LMU
</td> </tr> </table>
<table>
<tr>
<th>
10
</th>
<th>
2
</th>
<th>
In vivo data on in vitro identified epigenetic modulators with respect to their
effect on immune cell function and phenotype as well as histone modification
marks, in relation to their impact on atherosclerotic lesion burden and stage.
</th>
<th>
.xlsx,
PDF, jpeg
</th>
<th>
AMC
</th> </tr>
<tr>
<td>
11
</td>
<td>
3
</td>
<td>
In vitro data on phenotype characterization of risk factor induced pro-
atherogenesis in human innate immune cells and its progenitors from healthy
control subjects.
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
12
</td>
<td>
3
</td>
<td>
In vitro data on the major activating histone modifications upon risk factor
induced pro-atherogenesis in human innate immune cells and its progenitors
from healthy control subjects using chromatin immunoprecipitation (ChIP)
sequencing assays.
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
13
</td>
<td>
3
</td>
<td>
In vitro data on the transcriptome of human innate immune cells and its
progenitors from healthy control subjects upon risk factor induced pro-
atherogenesis using RNA sequencing.
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
14
</td>
<td>
3
</td>
<td>
In vitro metabolome analysis of human innate immune cells and its progenitors
from healthy control subjects upon risk factor induced pro-atherogenesis using
mass spectrometry.
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
15
</td>
<td>
3
</td>
<td>
In vitro data on selective compounds targeting epigenetics or cellular
metabolism are able to prevent the proatherogenic switch in the healthy donor
monocytes.
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
16
</td>
<td>
3
</td>
<td>
In vitro data on bone marrow precursor (hematopoietic stem cells) activation
from healthy control subjects assessing lineage differentiation, inflammatory
markers and proliferative capacity after exposure to pro-atherogenic
substances.
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC/AMC
</td> </tr>
<tr>
<td>
17
</td>
<td>
3
</td>
<td>
In vitro data on trained circulating monocytes isolated from patients with
familial hypercholesterolemia using flow cytometry, stimulation assays with
TLR ligands, transendothelial migration, and analysis of the epigenome,
transcriptome and metabolome
</td>
<td>
.xlsx,
PDF
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
18
</td>
<td>
3
</td>
<td>
In vitro data on trained circulating monocytes isolated from patients with
isolated elevated levels of lp(a) using flow cytometry, stimulation assays
with TLR ligands, transendothelial migration, and analysis of the epigenome,
transcriptome and metabolome
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC/AMC
</td> </tr>
<tr>
<td>
19
</td>
<td>
3
</td>
<td>
In vitro data on trained circulating monocytes isolated from patients with
isolated low HDL cholesterol levels using flow cytometry, stimulation assays
with TLR ligands, transendothelial migration, and analysis of the epigenome,
transcriptome
</td>
<td>
.xlsx,
PDF
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
20
</td>
<td>
3
</td>
<td>
In vitro data on trained circulating monocytes isolated from patients with
excessive smoking behaviour using flow cytometry, stimulation assays with TLR
ligands, transendothelial migration, and analysis of the epigenome,
transcriptome and metabolome*
_*This dataset has been removed and explained so in Deliverable 4.1._
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
21
</td>
<td>
3
</td>
<td>
In vitro data on trained circulating monocytes isolated from patients with
premature atherosclerosis using flow cytometry, stimulation assays with TLR
ligands,
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC
</td> </tr> </table>
<table>
<tr>
<th>
22
</th>
<th>
3
</th>
<th>
In vitro data on trained circulating monocytes isolated from patients after an
acute cardiovascular event using flow cytometry,
</th>
<th>
.xlsx,
PDF
</th>
<th>
AMC
</th> </tr>
<tr>
<td>
23
</td>
<td>
3
</td>
<td>
In vitro data on trained monocytes isolated from healthy subjects using flow
cytometry, stimulation assays with TLR ligands, trans-endothelial migration,
and analysis of the epigenome, transcriptome and metabolome
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
24
</td>
<td>
3
</td>
<td>
In vitro data on healthy donor monocytes that are exposed to pooled serum of
the selected patient groups in presence / absence of specific inhibitors.
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
25
</td>
<td>
3
</td>
<td>
In vitro data on the lineage differentiation, inflammatory markers and
proliferative capacity of hematopoietic stem cells from patients with
atherogenic risk factors or postmyocardial infarction.
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC/AMC
</td> </tr>
<tr>
<td>
26
</td>
<td>
3
</td>
<td>
In vitro data on the lineage differentiation, inflammatory markers and
proliferative capacity of hematopoietic stem cells from healthy controls.
</td>
<td>
.xlsx,
PDF
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
27
</td>
<td>
3
</td>
<td>
FDG-PET data for inflammatory activity of the arterial wall as well as bone
marrow and splenic activity in patients within 1 week and >3months after acute
coronary syndrome.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
28
</td>
<td>
3
</td>
<td>
FDG-PET data for inflammatory activity of the arterial wall as well as bone
marrow and splenic activity in patients with high LDL cholesterol levels.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
29
</td>
<td>
3
</td>
<td>
FDG-PET data for inflammatory activity of the arterial wall as well as bone
marrow and splenic activity in patients with elevated levels of lp(a).
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
RadboudUMC/AMC
</td> </tr>
<tr>
<td>
30
</td>
<td>
3
</td>
<td>
FDG-PET data for inflammatory activity of the arterial wall as well as bone
marrow and splenic activity in patients with low HDL cholesterol levels.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
31
</td>
<td>
3
</td>
<td>
FDG-PET data for inflammatory activity of the arterial wall as well as bone
marrow and splenic activity in patients with excessive smoking behavior.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
retracted
</td> </tr>
<tr>
<td>
32
</td>
<td>
3
</td>
<td>
FDG-PET data for inflammatory activity of the arterial wall as well as bone
marrow and splenic activity in healthy controls.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
RadboudUMC/AMC
</td> </tr>
<tr>
<td>
33
</td>
<td>
3
</td>
<td>
Integrated data set following a systems medicine approach to assess the
regulation of systemic and local immune cell production/activity in patients
within 1 week and >3 months after acute coronary syndrome.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
34
</td>
<td>
3
</td>
<td>
Integrated data set following a systems medicine approach to assess the
regulation of systemic and local immune cell production/activity in patients
with high LDL cholesterol levels.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
35
</td>
<td>
3
</td>
<td>
Integrated data set following a systems medicine approach to assess the
regulation of systemic and local immune cell production/activity in patients
with elevated levels of lp(a).
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
36
</td>
<td>
3
</td>
<td>
Integrated data set following a systems medicine approach to assess the
regulation of systemic and local immune cell production/activity in patients
with low HDL cholesterol levels.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
37
</td>
<td>
3
</td>
<td>
Integrated data set following a systems medicine approach to assess the
regulation of systemic and local immune cell production/activity in patients
with excessive smoking behavior.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
retracted
</td> </tr>
<tr>
<td>
38
</td>
<td>
3
</td>
<td>
Integrated data set following a systems medicine approach to assess the
regulation of systemic and local immune cell production/activity in healthy
controls.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
39
</td>
<td>
4
</td>
<td>
Gene score of prevalent SNPs in enzymes contributing to epigenetic modulation
in humans.
</td>
<td>
xlsx, PDF,
GTF
</td>
<td>
RadboudUMC
</td> </tr>
<tr>
<td>
40
</td>
<td>
4
</td>
<td>
Clinical assessment of predictive value of critical enzymes of epigenetic
reprogramming on cardiovascular risk in the general population.
</td>
<td>
xlsx, PDF,
GTF
</td>
<td>
REGIONH
</td> </tr>
<tr>
<td>
41
</td>
<td>
4
</td>
<td>
Clinical data on the reversibility of epigenetic remodeling by lowering LDL-c
in relation to inflammatory activation.
</td>
<td>
xlsx,
PDF
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
42
</td>
<td>
4
</td>
<td>
Ex vivo data on monocyte phenotyping combined with epigenome/transcriptome
analysis in patients with genetically elevated LDL cholesterol who underwent
statin-induced LDL cholesterol lowering treatment.
</td>
<td>
xlsx, PDF,
GTF
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
43
</td>
<td>
4
</td>
<td>
Multi-level PET/CT clinical data (spleen/bone marrow/arterial wall) on
inflammatory activation in patients with genetically elevated LDL cholesterol
who received oral dosing of the short-chain fatty acid butyrate.
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
44
</td>
<td>
4
</td>
<td>
Ex vivo data on monocyte phenotyping combined with epigenome/transcriptome
analysis in patients with genetically elevated LDL cholesterol who received
oral dosing of the short-chain fatty acid butyrate.
</td>
<td>
xlsx, PDF,
GTF
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
45
</td>
<td>
4
</td>
<td>
Clinical data from a proof-of-concept study in patients at increased
cardiovascular risk to evaluate whether epigenetic marks and inflammatory
activation can be reversed, using multi-level PET/CT
(spleen/bone marrow/arterial wall).
</td>
<td>
xlsx, PDF, jpeg,
DICOM
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
46
</td>
<td>
4
</td>
<td>
Ex vivo data on monocyte phenotyping combined with epigenome/transcriptome
analysis from a proof-of-concept
study in patients at increased cardiovascular risk to evaluate whether
epigenetic marks and inflammatory activation can be reversed.
</td>
<td>
xlsx, PDF,
GTF
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
47
</td>
<td>
4
</td>
<td>
Clinical data on nanoparticle delivery of promising treatment candidates by
liposomal packaging.
</td>
<td>
xlsx, PDF,
GTF,
DICOM
</td>
<td>
AMC
</td> </tr>
<tr>
<td>
48
</td>
<td>
5
</td>
<td>
In vitro data on the relation between chronic inflammatory disease-associated
DAMPs and histone modification in human innate immune cells.
</td>
<td>
.xlsx,
PDF
</td>
<td>
UZH
</td> </tr>
<tr>
<td>
49
</td>
<td>
5
</td>
<td>
In vitro data on genome-wide methylome analysis (DNA methylation and
hydroxymethylation) for the assessment of the epigenetic landscape at the
level of major activating histone modifications.
</td>
<td>
xlsx, PDF,
GTF
</td>
<td>
UZH
</td> </tr>
<tr>
<td>
50
</td>
<td>
5
</td>
<td>
In vitro data on chromatin immunoprecipitation (ChIP) sequencing assays for the
assessment of the epigenetic landscape at the level of major activating
histone modifications.
</td>
<td>
xlsx, PDF,
GTF
</td>
<td>
UZH
</td> </tr>
<tr>
<td>
51
</td>
<td>
5
</td>
<td>
Clinical data on phenotype, function and epigenome of monocytes harvested from
rheumatoid arthritis patients with active disease.
</td>
<td>
xlsx, PDF,
GTF
</td>
<td>
UZH
</td> </tr>
<tr>
<td>
52
</td>
<td>
5
</td>
<td>
Clinical data on phenotype, function and epigenome of monocytes harvested from
rheumatoid arthritis patients in stable remission.
</td>
<td>
xlsx, PDF,
GTF
</td>
<td>
UZH
</td> </tr>
<tr>
<td>
53
</td>
<td>
5
</td>
<td>
Clinical data on phenotype, function and epigenome of monocytes harvested from
healthy matched control subjects.
</td>
<td>
xlsx, PDF,
GTF
</td>
<td>
UZH
</td> </tr>
<tr>
<td>
54
</td>
<td>
5
</td>
<td>
Clinical FDG-PET data for the assessment of inflammatory activity of the
arterial wall and bone marrow to assess the
correlation between circulating monocytes, DNA hydroxymethylation, histone
modification marks and inflammatory activity of arterial wall in patients with
active rheumatoid arthritis.
</td>
<td>
xlsx, PDF,
GTF,
DICOM
</td>
<td>
UZH
</td> </tr>
<tr>
<td>
55
</td>
<td>
5
</td>
<td>
Clinical FDG-PET data for the assessment of inflammatory activity of the
arterial wall and bone marrow to assess the
correlation between circulating monocytes, DNA hydroxymethylation, histone
modification marks and inflammatory activity of arterial wall in patients with
remissive rheumatoid arthritis.
</td>
<td>
xlsx, PDF,
GTF,
DICOM
</td>
<td>
UZH
</td> </tr>
<tr>
<td>
56
</td>
<td>
5
</td>
<td>
Clinical FDG-PET data for the assessment of inflammatory activity of the
arterial wall and bone marrow to assess the
correlation between circulating monocytes, DNA hydroxymethylation, histone
modification marks and inflammatory activity of arterial wall in healthy
matched control subjects.
</td>
<td>
xlsx, PDF,
GTF,
DICOM
</td>
<td>
UZH
</td> </tr>
<tr>
<td>
57
</td>
<td>
5
</td>
<td>
Integrated imaging data on bone marrow/spleen/arterial wall and data on
circulating cells as well as epigenetic data to gain insight in the regulation
of systemic and local immune cell production/activity in rheumatoid arthritis.
</td>
<td>
xlsx, PDF,
GTF,
DICOM
</td>
<td>
UZH
</td> </tr> </table>
_Table 2. Work task leaders in accordance with the GA Annex 1 – Description of
Action_
<table>
<tr>
<th>
**Task**
</th>
<th>
**Task leader**
</th> </tr>
<tr>
<td>
2.1
</td>
<td>
Prof. Lutgens (LMU)
</td> </tr>
<tr>
<td>
2.2
</td>
<td>
Prof. Lutgens (LMU)
</td> </tr>
<tr>
<td>
2.3
</td>
<td>
Prof. Lutgens (LMU)
</td> </tr>
<tr>
<td>
2.4
</td>
<td>
Prof. Stroes (AMC)
</td> </tr>
<tr>
<td>
3.1
</td>
<td>
Prof. Riksen (RadboudUMC)
</td> </tr>
<tr>
<td>
3.2
</td>
<td>
Prof. Riksen (RadboudUMC)
</td> </tr>
<tr>
<td>
3.3
</td>
<td>
Prof. Riksen (RadboudUMC)
</td> </tr>
<tr>
<td>
4.1
</td>
<td>
Prof. Riksen (RadboudUMC)
</td> </tr>
<tr>
<td>
4.2
</td>
<td>
Prof. Nordestgaard (REGIONH)
</td> </tr>
<tr>
<td>
4.3
</td>
<td>
Prof. Stroes (AMC)
</td> </tr>
<tr>
<td>
4.4
</td>
<td>
Prof. Stroes (AMC)
</td> </tr>
<tr>
<td>
4.5
</td>
<td>
Prof. Stroes (AMC)
</td> </tr>
<tr>
<td>
5.1
</td>
<td>
Prof. Neidhart (UZH)
</td> </tr>
<tr>
<td>
5.2
</td>
<td>
Prof. Neidhart (UZH)
</td> </tr>
<tr>
<td>
5.3
</td>
<td>
Prof. Neidhart (UZH)
</td> </tr> </table>
The expected size of the data collected within the REPROGRAM project is
calculated as follows:
* Each in situ and/or in vitro experiment log will be approximately 5 megabytes.
* Each in vivo animal experiment log will be approximately 5 megabytes.
* Each imaging experiment will be approximately 50-150 megabytes.
* Each patient record will be approximately 1 megabyte.
* RNA sequencing and chromatin immunoprecipitation sequencing records will be approximately 1 gigabyte per subject (150 gigabytes in total for WP3 and 100 gigabytes for WP4).
In total, the REPROGRAM project will generate between 250 and 275 gigabytes of
data; a rough sketch of this estimate is given below.
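For illustration only, the arithmetic behind this estimate can be sketched as follows. The per-item sizes and the WP3/WP4 sequencing totals come from the list above; the experiment and record counts are hypothetical placeholders, since the plan does not fix them.

```python
# Hedged sketch of the REPROGRAM data-volume estimate. Per-item sizes and the
# WP3/WP4 sequencing totals come from the plan; the counts are hypothetical
# placeholders for illustration only, not project figures.
MB_PER_GB = 1024

per_item_mb = {
    "in_situ_or_in_vitro_log": 5,
    "in_vivo_animal_log": 5,
    "imaging_experiment": 150,  # upper bound of the stated 50-150 MB range
    "patient_record": 1,
}

hypothetical_counts = {
    "in_situ_or_in_vitro_log": 200,
    "in_vivo_animal_log": 100,
    "imaging_experiment": 100,
    "patient_record": 500,
}

sequencing_gb = 150 + 100  # WP3 + WP4 sequencing records, as stated above

other_gb = sum(per_item_mb[k] * hypothetical_counts[k] for k in per_item_mb) / MB_PER_GB
total_gb = sequencing_gb + other_gb
print(f"Estimated total volume: ~{total_gb:.0f} GB")  # falls in the stated 250-275 GB range
```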
During the REPROGRAM project existing data is being re-used in task 4.1 in
which functionality of identified SNPs in major enzymes involved in epigenetic
modulation will be assessed using published data as well as prediction
programs. Furthermore, the correlation between the gene score and the trained
immunity responses on the one hand, or epigenetic histone marks on the other
hand, will be assessed in the available data from the Human Functional
Genomics cohorts (_www.humanfunctionalgenomics.org_).
Secondary use of the data generated within the REPROGRAM project is
foreseeable as follows:
* Further use by original researchers;
* Combinations with other data;
* Re-analysis using novel methods;
* Meta-analysis;
* General reference.
Data generated within the REPROGRAM project will be useful to researchers
within and outside of the REPROGRAM consortium, as well as to external parties
such as pharmaceutical companies and health (policy) agencies.
# FAIR data
Wilkinson et al. (2016) set out the FAIR Guiding Principles for scientific
data management and stewardship. On this view, good data management is not a
goal in itself, but the key conduit to knowledge discovery and innovation, and
to subsequent data and knowledge integration and reuse by the community after
the data publication process. Unfortunately, the existing digital ecosystem
surrounding data publication prevents us from extracting maximum benefit from
our research investments. Partly in response to this, science funders,
publishers and governmental agencies have begun to require data management and
stewardship plans for data generated in publicly funded experiments. Beyond
proper collection, annotation and archival, data stewardship includes the
notion of ‘long-term care’ of valuable digital assets, with the goal that they
can be discovered and re-used in downstream investigations, either alone or in
combination with newly generated data. The outcomes of good data management
and stewardship are therefore high-quality digital publications that
facilitate and simplify this ongoing process of discovery, evaluation and
reuse in downstream studies. What constitutes ‘good data management’ is,
however, largely undefined and is generally left as a decision for the data or
repository owner. Bringing some clarity to the goals of good data management
and stewardship, and defining simple guideposts to inform those who publish
and/or preserve scholarly data, is therefore of great utility.
The article describes four foundational principles, **Findability,
Accessibility, Interoperability, and Reusability**, that serve to guide data
producers and publishers as they navigate these obstacles, thereby helping to
maximize the added value gained by contemporary, formal digital publishing.
Importantly, the principles apply not only to ‘data’ in the conventional
sense, but also to the algorithms, tools and workflows that led to those data.
## Making data findable, including provisions for metadata
Data generated in the REPROGRAM project will be documented and made
discoverable and accessible through a dedicated webpage on the project’s
website: _http://reprogram-horizon2020.eu_. Upon scientific publication, a
DOI will be assigned to each dataset for effective and persistent citation
when it is uploaded to a repository (e.g. the NCBI GEO database). This DOI can
be used in any relevant publications to direct readers to the underlying dataset.
Each dataset generated during the project will be allocated a dataset
identifier. This identifier, together with the dataset information, will be
included in a metadata file at the beginning of the documentation and updated
with each version.
The REPROGRAM dataset identifier will comprise the following (a sketch of the scheme is given after the list):
1. A unique chronological number for each dataset in the project, also recorded in the metadata file.
2. The prefix "REP", indicating a REPROGRAM dataset.
3. A unique identification number linking the dataset to its work package and task.
4. A version number, allocated to each new version of the dataset.
5. For example: 01_REP_WP2_T2.1_v0.1.xlsx
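As a minimal sketch (the helper functions below are hypothetical and not part of the plan), the identifier scheme could be generated and validated as follows:

```python
import re

# Hypothetical helper: builds a REPROGRAM dataset identifier from its parts,
# following the scheme "<seq>_REP_WP<wp>_T<task>_v<version>.<ext>".
def build_identifier(seq: int, wp: int, task: str, version: str, ext: str = "xlsx") -> str:
    return f"{seq:02d}_REP_WP{wp}_T{task}_v{version}.{ext}"

# Hypothetical helper: validates an identifier and extracts its components.
ID_PATTERN = re.compile(
    r"^(?P<seq>\d{2})_REP_WP(?P<wp>\d+)_T(?P<task>\d+\.\d+)"
    r"_v(?P<version>\d+\.\d+)\.(?P<ext>\w+)$"
)

def parse_identifier(identifier: str) -> dict:
    match = ID_PATTERN.match(identifier)
    if match is None:
        raise ValueError(f"Not a valid REPROGRAM dataset identifier: {identifier}")
    return match.groupdict()

# Example from the plan: the first dataset of WP2, task 2.1, version 0.1.
assert build_identifier(1, 2, "2.1", "0.1") == "01_REP_WP2_T2.1_v0.1.xlsx"
print(parse_identifier("01_REP_WP2_T2.1_v0.1.xlsx"))
```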
Search key words will be provided in the metadata file when the dataset is
deposited which will optimise possibilities for re-use. The specific metadata
contents, formats and volume are given in the table below and will be further
defined in future versions of the DMP.
_Table 3. Datasets fields (example)_
<table>
<tr>
<th>
Dataset Identifier
</th>
<th>
01_REP_WP2_T2.1_v0.1.xlsx
</th> </tr>
<tr>
<td>
Title of Dataset
</td>
<td>
In vivo data on atherosclerosis-induced epigenetic changes in myeloid
(precursor) cells by feeding atherosclerotic LDLR-/- mice a high fat diet.
</td> </tr>
<tr>
<td>
Lead Partners
</td>
<td>
LMU
</td> </tr>
<tr>
<td>
Work Package
</td>
<td>
2
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
Experimental results on epigenetic changes …
</td> </tr>
<tr>
<td>
Dissemination goals
</td>
<td>
Peer reviewed journal
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
.xlsx / .PDF
</td> </tr>
<tr>
<td>
Expected Size
</td>
<td>
5 megabytes
</td> </tr>
<tr>
<td>
Expected Repository
</td>
<td>
NCBI GEO database
</td> </tr>
<tr>
<td>
DOI (if known)
</td>
<td>
To be inserted once the dataset is deposited
</td> </tr>
<tr>
<td>
Date of Submission
</td>
<td>
31-12-2018 or before if used for publication
</td> </tr>
<tr>
<td>
Key words
</td>
<td>
Atherosclerosis, LDL receptor knock-out mice, epigenetics …
</td> </tr>
<tr>
<td>
Version Number
</td>
<td>
v0.1
</td> </tr>
<tr>
<td>
Link to metadata file
</td>
<td>
</td> </tr> </table>
## Making data openly accessible
**Consortium Agreement:** Any party whose tasks under the Work Packages
include the collection of data shall procure that it has obtained or will
obtain data in accordance with all applicable local laws, regulations and
codes of practice, and in accordance with a current ethics committee approval,
and from participants that have given their informed consent for their Data to
be used and stored for research purposes. The database (REPROGRAM data
repository) shall be considered a result jointly owned by all the
parties, without prejudice to the rights of any party owning or generating any
data contained in the Database. The database shall be made available to the
other parties for the purpose of the project and the exercise of Access
Rights. No party may make any use of such provided data for any purposes
outside of the implementation of the project (including publication) or other
than for the exercise of Access Rights without first securing agreement from
the project coordinator. Each party shall ensure that, to the best of its
knowledge, it can grant Access Rights with regard to its data contained in the
database and fulfil its obligations under this Consortium Agreement
notwithstanding any rights of its employees, or persons it engages to perform
collection of the data. Each party ensures that all data that is transferred
under this Consortium Agreement or held in the database will be de-identified
and will contain no identifiable health information.
**REPROGRAM repository.** Not all project partners have access to an
institutional repository. Therefore, data generated in the REPROGRAM project
will be documented and be made discoverable and accessible through a dedicated
webpage (REPROGRAM data repository) on the project’s website:
_http://reprogram-horizon2020.eu_ . The use of the REPROGRAM data repository
ensures that data management procedures are unified across the project. A
webpage functionality will be set up for easy upload of project datasets and
inclusion in the metadata file. Details of how to access the data will be
available on this webpage. A ‘data request form’ will be created to facilitate
this process.
A data access committee consisting of Prof. Stroes, Prof. Lutgens, and Prof.
Riksen will gather monthly to process all data requests. If a request is
granted, a Data Access Agreement form will be executed, after which transfer
of the data can be arranged.
Data objects will be deposited in the REPROGRAM repository under the following conditions:
* Open access to data files and metadata, provided over standard protocols.
* Use and reuse of data permitted.
* Privacy of its users protected.
Since the data is being deposited in a central repository, a dataset registry
record should also be created in local host institutions' repositories, e.g.
PURE for UST. The registry record should include relevant metadata explaining
what data exists, and a DOI linking to where the data is available in the
external repository. Any data which is deposited externally in a closed state,
i.e. it is not accessible, should also be deposited in a local institutional
repository, so that the partner is still able to access the data.
During embargo periods, information about the restricted data will be
published in the REPROGRAM data repository, and details of when the data will
become available will be included in the metadata. Where a restriction on open
access to data is necessary, attempts will be made to make data available
under controlled conditions to other individual researchers. All the public
data of the project will be made openly accessible in the repository. Non-
public data will be archived at the repository using the “closed access”
option.
For appropriate intellectual property management, there are several restricted
datasets. These will be shown in the REPROGRAM data repository metadata file.
These datasets are proprietary to the relevant partners and may only be used
in the restricted application of developing compounds to support the work of
this project. As these activities are enabling aspects of the project allowing
the development of new therapies, it is not felt that restrictions will impact
on eventual dissemination of the project outputs for the enhanced
understanding of common mechanisms of chronic inflammatory diseases and their
relevance in comorbidities.
## Making data interoperable
The REPROGRAM project aims to collect and document the data in a standardised
way to ensure that the datasets, alongside their accompanying metadata and
documentation, can be understood, interpreted and shared on their own.
Generated data will be preserved in the REPROGRAM data repository and on
institutional intranet platforms until the end of the project. A metadata file
will be created and linked within each dataset. It will include the following
information (an illustrative record is sketched after the field lists below):
**General Information**
* Title of the dataset
* Dataset Identifier
* Responsible Partner
* Author Information
* Date of data collection
* Geographic location of data collection
* The title of project and funding sources that supported the collection of the data
**Sharing/Access Information**
* Licenses/access restrictions placed on the data
* Link to data Repository
* Links to other publicly accessible locations of the data
* Links to publications that cite or use the data
* Was the data derived from another source?
**Dataset/File Overview**
* This dataset contains X sub-datasets, as listed below
* What is the status of the documented data? (“complete”, “in progress”, or “planned”)
* Are there plans to update the data?
**Methodological Information**
* Used materials
* Description of methods used for experimental design and data collection: <Include links or references to publications or other documentation containing experimental design or protocols used in data collection>
* Methods for processing the data: <describe how the submitted data were generated from the raw or collected data>
* Instruments and software used in data collection and processing-specific information needed to interpret the data
* Standards and calibration information, if appropriate
* Environmental/experimental conditions
* Describe any quality-assurance procedures performed on the data
* Dataset benefits
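As a sketch only, assuming JSON as the serialisation format (the plan does not prescribe one), a metadata record covering these fields might look as follows; all field names and values are illustrative placeholders:

```python
import json

# Hedged sketch: one possible shape for a REPROGRAM metadata record. Field
# names mirror the headings above; all values are illustrative placeholders.
metadata = {
    "general": {
        "title": "In vivo data on atherosclerosis-induced epigenetic changes ...",
        "dataset_identifier": "01_REP_WP2_T2.1_v0.1.xlsx",
        "responsible_partner": "LMU",
        "authors": ["..."],
        "date_of_collection": "...",
        "geographic_location": "...",
        "project_and_funding": "REPROGRAM (Horizon 2020)",
    },
    "sharing_access": {
        "licenses_or_restrictions": "...",
        "repository_link": "...",
        "other_public_locations": [],
        "citing_publications": [],
        "derived_from_another_source": False,
    },
    "dataset_overview": {
        "sub_datasets": [],
        "status": "in progress",  # one of: complete / in progress / planned
        "update_planned": True,
    },
    "methods": {
        "materials": "...",
        "collection_methods": "...",
        "processing_methods": "...",
        "instruments_and_software": "...",
        "standards_and_calibration": "...",
        "experimental_conditions": "...",
        "quality_assurance": "...",
        "dataset_benefits": "...",
    },
}

# Serialise the record so it can be deposited alongside the dataset.
print(json.dumps(metadata, indent=2))
```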
## Increase data re-use (through clarifying licences)
The datasets will be made available for re-use through data requests to the
REPROGRAM data repository and uploads to public repositories upon scientific
peer-reviewed publication. In principle, the data will be stored in the
REPROGRAM data repository after the conclusion of the project without
additional cost. All the research data will be of the highest quality, have
long-term validity and will be well documented in such a way that other
researchers are able to gain access and understand them after several years.
If datasets are updated, the partner that possesses the data has the
responsibility to manage the different versions and to make sure that the
latest version is available in the case of publicly available data. Quality
control of the data is the responsibility of the relevant responsible partner
generating the data.
# Allocation of resources
Minor immediate costs are anticipated to make the produced datasets FAIR.
First, the dedicated webpage needs to be developed, including its defined
functionalities (e.g. data request form, metadata file, etc.). The datasets
will be deposited in the REPROGRAM repository for at least 5 years after the
conclusion of the project. These costs will be covered by the local
institution of the Project Coordinator (AMC).
Prof. Stroes and qualified data managers based at the Academic Medical Center
(AMC) are responsible for data management within the REPROGRAM project,
specifically for D6.5 (creation of the initial data management plan), for
updating the data management plan, and for ensuring the datasets are recorded. The PI of
each partner will have overall responsibility for implementing the data
management plan.
Each REPROGRAM consortium partner should respect the policies set out in this
data management plan. Datasets have to be created, managed and stored
appropriately and in line with European Commission, national and local
legislation. Dataset validation and registration of metadata and backing up
data for sharing through repositories is the responsibility of the partner
that generates the data in the particular WP.
The datasets in the REPROGRAM repository will be preserved in line with the
European Commission Data Deposit Policy. The data will be preserved for a
minimum of 5 years, and indefinitely where possible; the costs anticipated for archiving data
in this repository will be covered by AMC.
Costs related to open access to research data in Horizon 2020 are eligible for
reimbursement for the duration of the project under the conditions defined in
the H2020 Grant Agreement, Art 6, as well as in other articles relevant to the
chosen cost category.
# Data security and ethical aspects
For the duration of the project, datasets will be stored on the responsible
partner’s centrally provided storage, detailed in the table below.
<table>
<tr>
<th>
AMC
</th>
<th>
All data is stored at the same time on internal servers of AMC (G-drive). The
AMC Medical Library’s centralised research data management system, maintained
by qualified data managers, offers the secure storage of research data. This
central data storage system provides a good place to archive processed
datasets, and to archive qualitative data, such as recordings of interviews.
Data will also be fully copied to cloud-based repositories, such as Surfdrive,
once these are provided by the central ICT system of AMC. Selected data is also
stored in cloud-based repositories (Dropbox, Google Drive) for easy sharing.
</th> </tr>
<tr>
<td>
RadboudUMC
</td>
<td>
RadboudUMC uses file folders on the university’s network drive, which enable
researchers to grant colleagues read and/or write rights within the ICT
infrastructure. An e-number (guest account) can be used to grant access to
external colleagues. Furthermore, Surfdrive is used as a legally secure
alternative to the US-based Dropbox service. Surfdrive is a personal cloud
storage service for Dutch higher education and research, hosted in the
Netherlands.
</td> </tr>
<tr>
<td>
LMU
</td>
<td>
tbd
</td> </tr>
<tr>
<td>
REGIONH
</td>
<td>
All code and resultant data are stored in data repositories that are fully
copied across numerous computers both on site at REGIONH and off site.
Selected data is also stored in cloud-based repositories (Dropbox, Google
Drive, etc.).
</td> </tr>
<tr>
<td>
UMF
</td>
<td>
Data security is provided by access controls defined at a user level. The data
will be stored on network drives. Sensitive data can be encrypted and then
stored in the “Home directory” of UMF. Non-sensitive data is stored in the
same directory. Data is securely stored and backed up daily, so a deleted file
can be restored within 24 hours.
</td> </tr>
<tr>
<td>
UNIMI
</td>
<td>
tbd
</td> </tr>
<tr>
<td>
UZH
</td>
<td>
In cooperation with the Service and Support for Science IT (S3IT) unit and the
Zentralbibliothek, the Main Library is pursuing the goal of developing
consultation opportunities in the fields of data management, long-term
archiving and data publishing, based on the accumulated experiences in the
field of Open Access. The ScienceCloud is a multipurpose compute and storage
infrastructure of the University of Zurich serving most computational and
storage needs of research. It is an Infrastructure-as-a-Service (IaaS)
solution specifically targeted to address large scale computational research;
it is based on the
OpenStack cloud management software and on Ceph for the underneath storage
infrastructure. UZH also uses Zenodo, an open access data repository at CERN
in cooperation with OpenAIRE2020.
</td> </tr>
<tr>
<td>
MGH
</td>
<td>
Project data is stored on internal intranet servers. Dedicated security
software is used to back up files on a daily basis. This is a cloud-based
solution and data is backed up to a data center in Boston. Data on this server
is also covered by a data recovery procedure, which is replicated in real time
to the same data center to enable remote access.
</td> </tr>
<tr>
<td>
Sensilab
</td>
<td>
Sensilab has an internal archive where all proprietary data are stored.
Moreover, all data are stored in secondary secure archives that are backed up
every night.
</td> </tr>
<tr>
<td>
Servier
</td>
<td>
The company has an IT group who have responsibility for IT infrastructure and
data security. Electronic data is stored locally on network drives and/or data
base systems. Data is backed up daily.
</td> </tr>
<tr>
<td>
Descin
</td>
<td>
Descin has access to a dedicated cloud-based data archive where all
proprietary data are securely stored.
</td> </tr> </table>
In the REPROGRAM project, genetic and other sensitive data and selected
preclinical data will be included to study common pathophysiological
mechanisms of chronic inflammatory diseases. For maximum safety of data, a
number of safety procedures will be implemented:
* Rule of data austerity: all databases will host only phenotype and biosample data which are absolutely essential for a clear definition of phenotypes and biosamples. “Reserve” data will be rejected and not hosted.
* Data transfer into the database will be performed only using highly secured transfer protocols (128-bit encryption).
* Access to the database will be granted only to a few selected and appropriately educated personnel. These persons will be provided with a personalized user name and password.
* Only pseudonymized patient data will be integrated in the database.
* All genetic and/or molecular data imply a potential risk of depseudonymization if they can be connected to phenotype data. To avoid any potential risk of depseudonymization, phenotype data and molecular/genetic data will be physically separated (a minimal illustration follows this list).
* The process of depseudonymization will be impossible at the central database.
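A minimal sketch of how pseudonymization and the physical separation of phenotype and molecular/genetic data could work, assuming a keyed hash (HMAC-SHA-256); the plan does not name an algorithm, and the key and record fields below are hypothetical:

```python
import hmac
import hashlib

# Hedged sketch: pseudonymize a patient identifier with a keyed hash
# (HMAC-SHA-256). The plan specifies pseudonymization but not an algorithm;
# the key is hypothetical and would be held outside the central database.
SECRET_KEY = b"replace-with-a-key-held-outside-the-central-database"

def pseudonym(patient_id: str) -> str:
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

# Phenotype data and molecular/genetic data are stored separately, linked only
# through the pseudonym, so the central database cannot re-identify subjects.
phenotype_db = {pseudonym("patient-0001"): {"ldl_mmol_l": 5.2, "smoker": False}}
genetic_db = {pseudonym("patient-0001"): {"snp_gene_score": 3}}

p = pseudonym("patient-0001")
print(phenotype_db[p], genetic_db[p])
```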
REPROGRAM partners involved in the use and storage of biological human samples
will explicitly follow Directive 2001/20/EC of the European Parliament and of
the Council of 4 April 2001 on the approximation of the laws, regulations and
administrative provisions of the Member States relating to the implementation
of good clinical practice in the conduct of clinical trials on medicinal
products for human use, as well as the guidelines suggested by the European
Science Foundation in its policy briefing of May 2001 on controlled clinical
trials.
From the intellectual property management point of view, all REPROGRAM
partners will adhere to Directive 98/44/EC of the European Parliament and of
the Council of 6 July 1998 on the legal protection of biotechnological
inventions.
# Ethical aspects
The REPROGRAM project is an international research and innovation project that
proposes that trained immunity is an important final common pathway
contributing to the maintenance of an activated state of innate immune cells;
hence, modulation of the molecular mechanisms mediating trained immunity
provides a promising strategy to safely and effectively reverse the chronic
inflammatory state in both atherosclerosis as well as other chronic
inflammatory diseases states. It is executed by a highly trans-disciplinary
and intersectoral consortium of leading experts in atherosclerosis, immunology
and epigenetics. Within the REPROGRAM project several animal studies will be
performed as well as work with clinical samples (blood-based) from cohorts and
five clinical trials in WP4. Specifically, the REPROGRAM project plans to
perform the following research activities:
* Animal experiments in mice.
* Use and long-term storage of biological samples from human individuals (e.g., blood).
* Collection and use of clinical data, including genetic data.
* Research in patients with (risk factors for) atherosclerosis or active and remissive RA as well as research with healthy human subjects from population-based cohorts.
* **Clinical study 1** : The inflammatory state in patients at risk for atherosclerosis (WP3). Multicentre (Netherlands and Italy) observational study in human subjects. Subjects will visit the centre twice: 1 visit to harvest monocytes and HSCs, and 1 visit to perform FDG-PET imaging.
* **Clinical study 2** : The effect of modulating risk factors on trained immunity (WP4). Double-blind, placebo-controlled, randomized clinical trial (Netherlands) to assess the effects of lowering LDL-c levels, to decrease the training effects of circulating LDL-c. Subjects will undergo PET imaging <7 days before start of intervention. After 3 months (+/- 3 days) of study medication subjects will undergo a repeat scan.
* **Clinical study 3** : Proof-of-concept study in patients at increased cardiovascular risk to evaluate whether epigenetic marks and inflammatory activation can be reversed (WP4). Double-blind, placebo-controlled, randomized intervention trial (Netherlands) to assess the effects of epigenetic modulators, to decrease the training effects. Subjects will undergo PET imaging <7 days before start of intervention. After 3 months (+/- 3 days) of study medication subjects will undergo a repeat scan.
* **Clinical study 4** : Effect of short-chain fatty acid Butyrate to prevent trained immunity (WP4). Double-blind, placebo-controlled, randomized intervention trial (Netherlands) to assess the effects of epigenetic modulators, to decrease the training effects. Subjects will undergo PET imaging <7 days before start of Butyrate. After 3 months (+/- 3 days) of study medication subjects will undergo a repeat scan.
* **Clinical study 5** : Inflammatory activity in patients with active and remissive RA (WP5). This is an observational study in human subjects. Subjects will visit the centre twice: 1 visit to harvest monocytes, and 1 visit to perform FDG-PET imaging.
More detailed, the REPROGRAM project plans to conduct the following type of
studies:
* Preclinical studies in existing and/or de novo in vitro, ex vivo and in vivo models.
* Prospective clinical studies involving patients with cardiovascular risk factors and atherosclerosis at different age stages as well as appropriate control individuals.
* Retrospective observational studies involving biological samples from human individuals.
All REPROGRAM consortium members state that the proposed research activities
do not involve:
* Human cloning for reproductive purposes.
* Research intended to modify the genetic heritage of human beings, which could make such changes heritable.
* Activities intended to create human embryos solely for the purpose of research or for the purpose of stem cell procurement, including by means of somatic cell nuclear transfer.
* Research involving use of human embryos or embryonic stem cells, with the exception of banked or isolated human embryonic stem cells in culture.
All REPROGRAM partners comply with the ethical principles as set out in
Article 34 of the Grant Agreement, which states that all activities must be
carried out in compliance with:
1. ethical principles (including the highest standards of research integrity — as set out, for instance, in the European Code of Conduct for Research Integrity (European Science Foundation, 2011) — and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct) and
2. applicable international, EU and national law.
# Confidentiality
The REPROGRAM partners must retain any data, documents or other material as
confidential during the implementation of the project. Further details on
confidentiality can be found in Art 36 of the Grant Agreement along with the
obligation to protect results in Art 27 and relevant articles defined in the
fully executed Consortium Agreement.
# Other issues
In addition to the European Commission policies on open data management, the
REPROGRAM consortium partners must also adhere to their national, local and/or
institutional policies and procedures for data management.
**AMC:** _https://www.amc.nl/web/AMC-website/Research-Code/1-Introduction.htm_
**RadboudUMC** : _https://www.radboudumc.nl/en/research/principles-of-
research/scientificintegrity/the-backbone-of-research_
**LMU** : _https://www.helmholtz-muenchen.de/fileadmin/HZM-Corporate-Website/Bilder/HZM/Forschung/pdf/Rules_for_Safeguarding_Good_Scientific_Practice_at_HMGU_c-l-s_eng__06.10.2015.pdf_
_http://www.hfsp.org/sites/www.hfsp.org/files/webfm/Communications/empfehlung_wiss_praxis_0198.pdf_
**REGIONH** : _http://www.science.ku.dk/english/research/good-scientific-
practice/_
**UMF** : tbd
**UNIMI** : tbd
**UZH** :
_https://rechtssammlung.sp.ethz.ch/_layouts/15/start.aspx#/default.aspx_
**MGH** : _https://ori.hhs.gov_
_https://www.nsf.gov/od/oise/intl-research-integrity.jsp_
**Sensilab** : Sensilab has its own set of internal policies and procedures on
data management.
**Servier** : Data management and information technology policies for the
company are set out in written policies which are subject to periodic review.
**Descin** : Descin has its own set of internal policies and procedures on
data management.